June 17, 2024 at 10:59
I’m a huge fan of large language models (LLMs) and their capabilities, but I also understand their limitations. Tech nerds will argue that “AI” is a bubble, and perhaps it is; however, LLMs have numerous benefits. I won’t discuss the technical details or applications of LLMs here; you can find countless conversations about that online. This post is simply meant to get you started with self-hosting an LLM locally, so you control your data and can use it independently of cloud services.
The open source Ollama project has made this process incredibly easy by allowing users to run a script and download various LLMs on the fly.
To get started, run the commands found below under “Quick Setup.” Now, before you get too excited, there are two main reasons I’d recommend against doing it this way.
Pi-hole and Ollama are similar when it comes to installation: both have you pipe a script from the internet straight into your shell. Each project has plenty of people reviewing its code, which makes the installers fairly reliable, but you’re still running code you haven’t read, and you should still consider whether that approach is a good fit for your needs.
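If you’d rather not pipe the install script directly into your shell, a more cautious route (sketched below, with a file name of my own choosing) is to download the script first, look it over, and only then run it:

# Save the install script locally instead of piping it to sh
$ curl -fsSL https://ollama.com/install.sh -o ollama-install.sh
# Read through it before executing anything
$ less ollama-install.sh
# Run it once you're comfortable with what it does
$ sh ollama-install.sh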
With that said, getting Ollama chatting with you is easy and requires little effort or technical knowledge. Once it’s installed, running the default model will automatically download and set up Llama 3 8B (4.7 GB). For a sense of performance, the larger Llama 3 70B is #12 in the rankings, sitting right behind Google’s Gemini Pro.
Quick Setup
# Download and run the official Ollama install script
$ curl -fsSL https://ollama.com/install.sh | sh
# Download Llama 3 8B on first run, then open an interactive chat
$ ollama run llama3
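Once the model is downloaded, you’re dropped into an interactive chat right in the terminal. Ollama also serves a local HTTP API (on port 11434 by default), so you can query the model from scripts as well; the prompt below is just an example of my own:

# Query the local model through Ollama's HTTP API instead of the chat prompt
$ curl http://localhost:11434/api/generate -d '{
    "model": "llama3",
    "prompt": "Explain what a self-hosted LLM is in one sentence.",
    "stream": false
  }'

You can also run “ollama list” to see which models you have locally, or try a larger variant such as llama3:70b if your hardware is up to it.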