Weekend project success!
✅ Run Qwen3.5 35B on a 3090 in an old machine via llama.cpp
✅ Use it from my laptop over the local network

Not only does it work, it's very capable, and it gives me over 100 tokens/s (2,000+ for prompt eval). It's just… awesome. Thanks for the inspiration from the podcast on sovereign AI.

Next steps: use it securely from anywhere, find more old "cheap" hardware and set up more models, use them from a Hermes agent… onward
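For anyone wanting to replicate this: a setup like the one above is typically served with llama.cpp's `llama-server`, which exposes an OpenAI-compatible HTTP API. A minimal sketch — the model path, port, and LAN IP are placeholders, and `-ngl 99` assumes the quantized model fits in the 3090's VRAM:

```shell
# On the GPU box: launch llama.cpp's server.
# -ngl 99 offloads all layers to the GPU; --host 0.0.0.0 exposes it on
# the local network (LAN only -- don't port-forward this without auth).
./llama-server -m ./models/model.gguf -ngl 99 --host 0.0.0.0 --port 8080

# From the laptop (replace 192.168.1.50 with the server's LAN IP):
curl http://192.168.1.50:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello!"}]}'
```

Any OpenAI-compatible client (chat UI, agent framework, etc.) can then point at that same base URL instead of api.openai.com.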