I've been testing local AI models on an M4 Pro with 48GB RAM for the past few weeks. Earlier in the year, small models that could run on my laptop felt like toy demos next to Claude or Codex. The newer Gemma 4 and Qwen 3.6 releases are the first that felt genuinely useful for everyday research, coding assistance, and personal knowledge work.
The post is a practical snapshot of what changed for me: frontier AI pricing is going up, quality has been less predictable, and small local models are finally good enough to test seriously without buying GPUs.
It's time to test what's "good enough" for personal use cases, so we can reduce reliance on the high-cost, low-privacy options we've all come to depend on.