What model do you want to run locally to do "real work"? I can run qwen3-32B on my Mac with a decent TPS.
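For context, here's a rough sketch of what "running qwen3-32B locally and checking TPS" can look like, assuming a GGUF quant of the model and the llama-cpp-python bindings (the file name and parameters are hypothetical, not a specific setup the commenter described):

```python
import time
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quantized qwen3-32B GGUF; path and context size are illustrative.
llm = Llama(
    model_path="./qwen3-32b-q4_k_m.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to the GPU / Apple Silicon Metal backend
)

start = time.perf_counter()
out = llm("Summarize the tradeoffs of running a 32B model locally.", max_tokens=256)
elapsed = time.perf_counter() - start

completion_tokens = out["usage"]["completion_tokens"]
print(out["choices"][0]["text"])
print(f"{completion_tokens / elapsed:.1f} tokens/sec")
```

Actual throughput depends heavily on the quant level, unified memory, and whether the whole model fits in RAM.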
And no battery-powered device is going to last long running large AI models. How is that an OK thing to bash Apple about? Because they don't break the laws of physics?