Gemma 3: 270M
https://ollama.com/library/gemma3
ollama pull gemma3:270m
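Once it's pulled, you can poke at it from Python as well as the command line. Here's a minimal sketch, assuming the official ollama Python package (pip install ollama) and the Ollama server running locally:

import ollama

# Ask the locally served 270M model a question; assumes `ollama pull gemma3:270m`
# has already completed and the Ollama server is listening on its default port.
response = ollama.chat(
    model="gemma3:270m",
    messages=[{"role": "user", "content": "In one sentence, what is a Raspberry Pi?"}],
)
print(response["message"]["content"])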
Compare the size to the other versions in the library. Here's a news article that does the analysis for you, which saves me repeating it: https://www.theregister.com/2025/08/15/little_llm_on_the_ram/
270 million parameters, taking up only around 550MB of RAM. That is what makes this model special: on very low-compute devices, such as a Raspberry Pi, a Jetson, or a low-spec laptop, it can perform VERY well.
I hear you say, "yes, it might be small and performant, but it won't be able to do what I want it to do."
Well, maybe you should go take a look at this article: https://ai.google.dev/gemma/docs/core/huggingface_text_full_finetune
What is that? It's the instructions for fine-tuning the model so it becomes specifically knowledgeable about YOUR chosen subject, turning it into a subject-matter expert (SME).
Yup, you can gather all the specific knowledge you want the model trained on, and those instructions show you how to teach this tiny little pocket-rocket to be an SME in that subject area.
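To give a flavour of what that guide walks you through, here's a minimal sketch of a full fine-tune using Hugging Face's TRL library. It assumes transformers, trl and datasets are installed, that you've accepted the Gemma licence on Hugging Face, and that you have a chat-format JSONL file of your subject data. sme_corpus.jsonl is a placeholder name (there's a sketch of building it further down):

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: one {"messages": [...]} chat example per line.
dataset = load_dataset("json", data_files="sme_corpus.jsonl", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",  # the instruction-tuned 270M checkpoint
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="gemma3-270m-sme",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=5e-5,
    ),
)
trainer.train()
trainer.save_model()

At 270 million parameters, a full fine-tune (no LoRA tricks needed) should fit comfortably on a single consumer GPU, which is a big part of the appeal.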
If you want it to be THE awesome nerd, fully knowledgeable on all things NISSAN MICRA (other cars do exist), then gather every piece of information you can.
Hey, you can even use other AI agent tools to scrape that off the internet for you! Your modified model will then be the fount of all knowledge on your chosen specialist subject.
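As a sketch of that data-prep step, here's one way to turn a folder of scraped pages into the chat-format JSONL that the fine-tuning sketch above expects. The folder name and question template are placeholders; adapt them to whatever your scraper produces:

import json
from pathlib import Path

# Each scraped page becomes one user/assistant training example.
with open("sme_corpus.jsonl", "w", encoding="utf-8") as out:
    for page in Path("scraped_pages").glob("*.txt"):
        text = page.read_text(encoding="utf-8").strip()
        example = {
            "messages": [
                {"role": "user", "content": f"Tell me about {page.stem.replace('_', ' ')}."},
                {"role": "assistant", "content": text},
            ]
        }
        out.write(json.dumps(example, ensure_ascii=False) + "\n")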
Smart, huh? And all inside 550MB of RAM.
Now... think how useful that can be when you are resource-constrained ;-)