Posts


Stop pi$$ing around and just use this for all your LLM stuff

Glad I got your attention. Right, I'm assuming you know what an LLM is. I assume you know what RAG is. I assume you know what an AI Agent is. I'm also assuming you're fed up with being told that you need to code a solution by hand using Python, LangChain etc... to achieve this, when all you want to do is just use the technology and not have to hand-crank every piece of the solution. Well, fear no more. AnythingLLM is here to save the day. VISIT the OFFICIAL WEBSITE HERE btw - I have no affiliation with this app or its people, I just think it ticks a LOT of boxes: it is simple, efficient & effective - and had I been allowed to continue with the path that I was on 18 months ago, I would have made the same thing (about a year ago), but hey-ho, that is not my purpose - but it does mean I appreciate what they've done.

What's all the fuss about? Okay, you can install this locally on your Linux laptop if you must, your Mac, and if you are a real knucklehead, a Windows machine ...

Local, offline LLMs on CPU (&GPU if you're rich), laptops, RPis and potentially phones

In a previous post HERE I described my first foray into using ollama as an LLM engine on a Raspberry Pi 5 device. This time, I'm going to describe how to install and use that ollama engine with a front-end UI:

- Show options where you could use Docker containers to run the components
- Show how to use a native installation onto a laptop (an Ubuntu VM), as it's just simpler
- Then, how to modify that UI code, rebuild it & see the changes

"Why Tony? Why?" Y'know, if you have to ask "Why?" then you don't understand. Well, the reason being, I did some work about a year ago in relation to LLMs: very cutting-edge, very new & never done before, pretty ground-breaking stuff. However, it was so far advanced that no-one really understood it; quite a lot still don't. Now, a year later, people are saying "wow! amazing" about things that are loosely similar to what I did a year ago... sigh... the burden of being a geek. So, this time, I'm jus...
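As a rough sketch of the Docker route for the engine side (this assumes the official `ollama/ollama` image and its documented REST API; the front-end UI discussed in the post runs as a separate component and isn't shown here):

```shell
# Run the Ollama engine in a container, persisting downloaded models
# in a named volume and exposing the REST API on its default port.
docker run -d --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama

# Pull a small model into the running container.
docker exec -it ollama ollama pull llama3.2

# Smoke-test the REST API that a front-end UI would talk to.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Hello", "stream": false}'
```

A UI container would then simply be pointed at `http://localhost:11434` (or the `ollama` container's name on a shared Docker network).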

Raspberry Pi 5 & offline LLM (Ollama.ai)

Well, it's been about a year since I was writing some Python code (yes, I did type that & admit that I did that & to be fair it worked okay... as cardboard code). I was using Llama.cpp and LangChain, and I wrote RAG and CoT code. I was quantizing my own model on the laptop that I was using, fully exploiting the 64 CPUs, 16GB GPU & 128GB RAM, and pushing the boundaries of that spec when using the streaming Q&A: ingesting my own documents, storing them in a Chroma vector store and then questioning the content... I actually overloaded and crashed the Windows 10 OS on the laptop & it needed a fresh re-build afterwards... it never quite worked the same.

Anyway, my point being, I was chuffed that I could run a Llama LLM offline on a laptop & it worked pretty reasonably; I was using code that I'd written & it was doing okay. I attempted to explain it to other people & it turns out it was too complicated... Then a very early vers...
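The RAG pipeline described in that excerpt (ingest documents, embed them into a vector store, retrieve the closest chunk for a question, hand it to the LLM as context) can be sketched in miniature. This toy version uses a bag-of-words "embedding" and an in-memory store purely for illustration; a real setup would use an actual embedding model and a Chroma collection:

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words term-count vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    """In-memory stand-in for a vector store such as Chroma."""

    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def ingest(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def query(self, question: str, k: int = 1) -> list[str]:
        qv = embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = ToyVectorStore()
store.ingest("Ollama runs large language models locally on your own hardware.")
store.ingest("Chroma stores embeddings so documents can be searched by meaning.")

# The retrieval half of RAG: fetch the most relevant chunk for the question.
# In a full pipeline this would be prepended to the prompt sent to the LLM.
context = store.query("how can I run models locally?")[0]
```

The point is only the shape of the flow: embed at ingest time, embed the question at query time, rank by similarity, and feed the winner to the model as context.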