Stop pi$$ing around and just use this for all your LLM stuff

Glad I got your attention.


Right, I'm assuming you know what an LLM is. I assume you know what RAG is. I assume you know what an AI Agent is.

I'm also assuming you're fed up with being told that you need to hand-code a solution using Python, LangChain, etc. to achieve this, when all you want to do is just use the technology and not have to hand-crank every piece of the solution.


Well, fear no more.  AnythingLLM is here to save the day.  

VISIT the OFFICIAL WEBSITE HERE

btw - I have no affiliation with this app or its people, I just think it ticks a LOT of boxes: it is simple, efficient & effective. Had I been allowed to continue with the path I was on 18 months ago, I would have made the same thing (about a year ago), but hey-ho, that is not my purpose - it does mean I appreciate what they've done, though.



What's all the fuss about?

Okay, you can install this locally: on your Linux laptop, on your Mac if you must, and if you are a real knucklehead, on a Windows machine (do they still exist? if so, why?)

You can install / deploy it onto Cloud environments (if you must)

You can install it stand-alone, you can use it in single-user mode.

You can, however, install it as a Docker container and use it in multi-user mode.

You can use any LLM model you want, any inference engine you want, any embedding model you want, any vector store you want - it is PROPERLY modular (see the sketch a few lines down).

You can use the LLM as-is, you can use it to ask questions against your specific documents (RAG), you can even make Prompt Templates.

You can even create and use "AI Agents" - specialist knowledge tools to help with specific queries.

You can even ask it to create charts and graphs from your data.

You can even ask it to query SQL database tables and give you back the results.

Oh, it has TTS support too (Text to Speech), so it can talk back to you :-D
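To give a flavour of that modularity, here is roughly the sort of thing the .env behind a deployment can hold - the key names below are my assumptions from memory of the repo's .env.example, so treat this as a sketch and check the real file:

# which chat LLM to use and where to find it (assumed key names - check .env.example)
LLM_PROVIDER='ollama'
OLLAMA_BASE_PATH='http://127.0.0.1:11434'
OLLAMA_MODEL_PREF='llama3'

# which embedding engine to use
EMBEDDING_ENGINE='ollama'

# which vector store to use (LanceDB ships built-in; others can be swapped in)
VECTOR_DB='lancedb'

Swap any one of those out - different LLM provider, different embedder, different vector store - and the rest of the stack stays the same. That is the point.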

CHECK OUT THE DOCS:


....

ALL WITHOUT YOU WRITING A SINGLE LINE OF CODE. (it's already written in beautiful JavaScript - yay!)

(read that again)

....

Why are you still reading this? Go to the website or GitHub repo, follow the installation instructions that suit your needs and get on the LLM bandwagon - who knows, it might help you with your world / life / job / whatever.

If you like using AnythingLLM, just remember to let them know. I like it. Very much.



https://github.com/Mintplex-Labs/anything-llm



UPDATE:

Okay, so I thought I'd give this a go, locally on my Linux laptop.

I thought I'd go for the Docker image installation:

Following the installation instructions was simple, EXCEPT I wanted to use Ollama locally, as I already had it installed with models downloaded, so I had to tweak the docker command to add --net=host so that the container could reach the Ollama service on the host machine (with host networking the -p 3001:3001 mapping is effectively ignored, but it does no harm):

tony@tony-magicbook:~$ export STORAGE_LOCATION=$HOME/anythingllm

tony@tony-magicbook:~$ mkdir -p "$STORAGE_LOCATION" && touch "$STORAGE_LOCATION/.env"

tony@tony-magicbook:~$ sudo docker run -d -p 3001:3001 --net=host --cap-add SYS_ADMIN -v ${STORAGE_LOCATION}:/app/server/storage -v ${STORAGE_LOCATION}/.env:/app/server/.env  -e STORAGE_DIR="/app/server/storage" mintplexlabs/anythingllm
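Quick sanity check before wiring it up to Ollama (assuming Ollama is on its default port 11434) - and because of --net=host the container shares the host's network stack, so if this works from the host, the same address works from inside the container:

tony@tony-magicbook:~$ curl http://localhost:11434/api/tags    # lists the models Ollama has pulled locally

The Ollama base URL in the setup screens that follow should then just be http://localhost:11434.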

Then in Portainer (you are running Portainer, aren't you?!), it shows up as a running container and you can check the logs nicely:


and accessing localhost:3001 there is an intro setup wizard that you step through - it is really nice & clean & simple, and you can change the config settings afterwards.

Then you're all ready to go:


I'm not usually impressed this easily, but this time I have been - this is very well thought out and super simple to just get up & running and do what you need to do. Impressed.

I will now test out the RAG document querying as well as the AI Agent usage and the SQL database connectivity too - it's all just config changes from the admin screens.



They offer a developer API too:


So, for instance, if you have 10,000 documents to upload, you don't have to use the UI - you can make the API call from Python or JavaScript or even a bash script, it's just a basic API call.
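As a rough sketch of what that bash-script route could look like - the endpoint path, the Bearer-token header and the file form field below are my assumptions from the Swagger docs the app serves, so verify them on your own instance, and generate an API key from the admin screens first:

#!/bin/bash
# Bulk-upload every PDF in a folder via the AnythingLLM developer API.
# ASSUMED: the base URL, the /api/v1/document/upload path and the 'file' form field
# - check the API docs on your instance before relying on any of this.
API_KEY="paste-your-api-key-here"
BASE_URL="http://localhost:3001"

for doc in ./docs/*.pdf; do
  echo "Uploading: $doc"
  curl -s -X POST "$BASE_URL/api/v1/document/upload" \
       -H "Authorization: Bearer $API_KEY" \
       -F "file=@$doc"
  echo
done

Point it at a folder of 10,000 documents and walk away - no UI clicking required.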


Here's me testing an upload of an SDR+Manual PDF document:


and the response shown is:


As you can see, the text of the document has been extracted and converted - that is quite a long scroll bar!


I could probably also do this from the API - a rough sketch of what attaching an uploaded document to a workspace might look like:
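(Both the endpoint and the payload below are my assumptions from the Swagger docs the app serves, so verify them on your own instance; the document path would be the one returned by the upload response above.)

# ASSUMED endpoint & payload - attach an already-uploaded document to a workspace by its slug
curl -s -X POST "http://localhost:3001/api/v1/workspace/my-workspace/update-embeddings" \
     -H "Authorization: Bearer $API_KEY" \
     -H "Content-Type: application/json" \
     -d '{"adds": ["custom-documents/my-uploaded-doc.json"], "deletes": []}'

In the end though, I chose to do it from the app - add the document to the workspace: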



So, I asked the question just to the LLM first and then using the document - note the difference:


and now using the document (note I didn't have to pick it or select it):


Note the CITATION > at the end; if you click on it, you get to see the text that was identified as being "meaningful" (sometimes that is a little questionable):



Anyway, just testing it out atm - to see if it will be useful or not. Looks to be.

