
- buy a GPU (not Nvidia, so it's cheaper)
- fix the GPU (broken fan, replaced it)
- start working in Claude Code (amazing!)
- focus on consistent, daily time spent on AI work
- learn how to code with local AI (prompt coding)
- use LM Studio (local AI framework)
- LM Studio sort of works, sort of doesn't (tested three different PCs with different configurations; a quick server-check sketch is after this list)
- local AI on a small card is slow and dumb
- use Claude Code remotely to help set up a local AI GPU
- new Ubuntu release using Apple unified memory
- old CPU that LM Studio doesn't like
- use Claude to get local AI working without LM Studio
- use Ollama with the AMD GPU (since it's not Nvidia it takes more work to set up, but it's cheaper; see the Python sketch after this list)
- use Docker to simplify connecting the AI GPU to llama.cpp underneath
- use WebUI as the interface for web access (a wiring check is sketched after this list)
- use VS Code with an extension to edit files in a local PC directory
- ask it a simple question: how long does it take Saturn to revolve once around the Sun? (It actually had enough information to calculate it from Kepler's laws of planetary motion; it doesn't have access to the web yet.) It took 30 seconds to tell me 29.43 years; Google takes 3 seconds to pull NASA data: 29.5 years. (The Kepler math is sketched below.)
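
A minimal sketch of how I can sanity-check an LM Studio install from a script, assuming LM Studio's local server is running on its default port (1234) with a model already loaded; the model name here is a placeholder, not a specific recommendation:

```python
import requests

# LM Studio exposes an OpenAI-compatible local server, by default at
# http://localhost:1234/v1. Assumes the server has been started in LM Studio
# and a model is already loaded; "local-model" is a placeholder name.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": "Reply with one short sentence."}],
        "temperature": 0.2,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```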
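
Once Ollama is serving the AMD GPU, hitting it from Python is one HTTP call, assuming Ollama's default port (11434) and that some model has already been pulled; the llama3 name below is an assumption, not necessarily what I run:

```python
import requests

# Ollama's REST API listens on http://localhost:11434 by default.
# Assumes a model has already been pulled (e.g. `ollama pull llama3`);
# the llama3 model name is an assumption, not part of the original notes.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "How long does it take Saturn to revolve once around the Sun?",
        "stream": False,  # return a single JSON object instead of streamed chunks
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```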
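
To confirm the Dockerized pieces are wired together, here's a small health-check sketch; it assumes the web interface is Open WebUI published on host port 3000 and Ollama on its default 11434, both of which depend on the particular compose setup:

```python
import requests

# Quick health check for the Dockerized stack.
# Assumptions: Ollama is on its default port (11434), and the web interface
# is Open WebUI published on host port 3000; adjust if the compose file differs.
services = {
    "ollama": "http://localhost:11434/api/tags",  # lists the locally pulled models
    "open-webui": "http://localhost:3000/",       # should return the web UI page
}

for name, url in services.items():
    try:
        status = requests.get(url, timeout=5).status_code
        print(f"{name}: HTTP {status}")
    except requests.ConnectionError:
        print(f"{name}: not reachable at {url}")
```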
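
The Saturn answer is easy to check by hand with Kepler's third law: for anything orbiting the Sun, the period in years is the semi-major axis in AU raised to the 3/2 power. A quick sketch, using Saturn's semi-major axis of roughly 9.54 AU:

```python
# Kepler's third law for planets around the Sun: T^2 = a^3,
# with T in years and a in astronomical units (AU).
saturn_semimajor_axis_au = 9.54  # Saturn's mean distance from the Sun

orbital_period_years = saturn_semimajor_axis_au ** 1.5
print(f"Saturn's orbital period: {orbital_period_years:.2f} years")
# Prints about 29.47 years, right between the local model's 29.43 and NASA's 29.5.
```
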
It takes a lot of work to get this setup working the way I want it to! Every line item above has hours, if not days, of details behind it. Right now the AI companies (like OpenAI and Anthropic) are subsidizing AI development, but at some point in the near future heavy AI users are going to have to start paying for actual AI runtime usage, so I figure I'd better have something I can run locally for easier work tasks so I won't be paying hundreds of dollars per month for software development.






