Deepseek 1.5b on an "old-ish" laptop
Okay, I admit I wasn't expecting much, and I was surprised. I already have Ollama running locally, so I just pulled the 1.5B model down over my 4G connection. The token output was pretty fast; I'd call it equivalent to getting it from a cloud service. It was as fast as I could read, and sometimes faster.

My one-minute experiment was this: I asked for some JavaScript code to be written to render a 3D world in a web browser, and for an explanation of what was being done. The first section of the output showed the "reasoning" logic and the "arguments" going on between the internal workings of the model.

Now, I recall doing this myself about two years ago when I first built E.L.VIS - long story, won't go into it - but I was asked to "stop" as I was getting too far ahead of everyone else, including myself. As it was me, I not only stopped, I walked away & focused on "other things". Now though, it lo...
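For anyone wanting to repeat the experiment, the whole thing is a couple of commands. This is a sketch: the exact model tag may differ from what I used, so check Ollama's model library before pulling.

```shell
# Download the 1.5B model over whatever connection you have
# (the tag below is one plausible name for it; verify against `ollama list` / the library)
ollama pull deepseek-r1:1.5b

# Run it with the same kind of prompt I used
ollama run deepseek-r1:1.5b "Write JavaScript that renders a 3D world in a web browser, and explain what you are doing."
```

The first chunk of output you see is the model's "reasoning" trace, followed by the actual answer.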