Running Local AI Models
As AI models become more powerful, they are also becoming much more accessible. I have been interested in what happens when you trade a little model accuracy for the ability to run everything on your own machine, without relying on paid cloud infrastructure. For developers, researchers, and hobbyists, that shift is useful for more than just cost. It also changes how private, flexible, and portable these tools can be.

This week I spent some time trying two popular ways of running local AI models: Ollama and LM Studio. Both are free to use (Ollama is also open source), and both make it much easier to get started than I expected. What I wanted to understand was not just whether they worked, but how they felt to use in practice and where each one made more sense.

What I like about running models locally is that the benefits are immediate. There is the obvious cost saving, especially if you are experimenting often or working through lots of prompts, but privacy matters just as much. When the mode...
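To make "running a model locally" concrete, here is a minimal sketch of what talking to one of these tools from a script looks like. Both expose an HTTP server on localhost: Ollama listens on port 11434 by default, and LM Studio's server mode offers an OpenAI-compatible endpoint. The model name and prompt below are placeholder assumptions; the request is only constructed here, not sent, since actually sending it requires a local server to already be running.

```python
import json
import urllib.request

# Ollama's default generate endpoint (assumes `ollama serve` is running).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Construct (but do not send) a request to Ollama's /api/generate endpoint.

    "stream": False asks for one complete JSON response instead of a
    token-by-token stream.
    """
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# "llama3.2" is just an example tag; any model you have pulled works.
req = build_request("llama3.2", "In one sentence, why run models locally?")
print(req.full_url)
```

Sending it would be a single `urllib.request.urlopen(req)` call, assuming the model was pulled beforehand (e.g. `ollama pull llama3.2`). The nice part is that nothing here touches the network beyond your own machine.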