I upgraded to a 4070 SUPER last year. I ran both cards at the same time for a little while, but it got really frustrating to keep the wrong card from being assigned to a particular task with llama. I really should’ve taken an R&D tax credit on my AI research, but I’m still able to expense it for the business.
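For what it's worth, one common way to keep a llama.cpp process off the wrong card is to hide the other GPU from it entirely. A sketch, assuming a stock llama.cpp build and that the newer card enumerates as CUDA device 1:

```shell
# Expose only GPU 1 to this process; CUDA never even sees the other card.
CUDA_VISIBLE_DEVICES=1 ./llama-server -m model.gguf

# Alternatively, leave both visible but route all layers to GPU 1:
# ./llama-server -m model.gguf --main-gpu 1 --tensor-split 0,1
```

The device index comes from CUDA's enumeration order (check with `nvidia-smi -L`), so it may not match what you expect.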
Right, this works with any model. To me, the most interesting part is that you can use a smaller model that you can run locally and get results comparable to SoTA models. Ultimately, I'd far prefer running locally, even if it's slower, for the simple reason of having sovereignty over my data.
Being reliant on a service means you have to share whatever you're working on with the service, and the provider decides what you can do and can change their terms of service on a whim.
If locally running models can get to the point where they can be used as a daily driver, that solves the problem.
And a 9800X3D is not even the fastest CPU out there, nor even the fastest CPU you could use with your specific motherboard. A 9950X3D is essentially two of the 9800X3Ds combined, and would be a drop-in replacement.
Wrong; see benchmarks. Many games and single-threaded workloads run faster on the 9800X3D.
There are various reasons for this, the major one being that the 9800X3D has more L3 cache per thread than the 9950X3D.
It's also wrong that a 9950X3D is two 9800X3Ds combined. A quick glance at the specs shows why: the 9950X3D has 128 MB of L3 cache shared between twice as many threads, while the 9800X3D has 96 MB for half the threads, so more L3 per thread.
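The per-thread arithmetic is easy to check (cache sizes and thread counts per AMD's published specs):

```shell
# L3 cache per hardware thread, from AMD's spec sheets:
#   9800X3D:  96 MB L3 shared by 16 threads
#   9950X3D: 128 MB L3 shared by 32 threads
awk 'BEGIN {
  printf "9800X3D: %.1f MB of L3 per thread\n",  96/16;
  printf "9950X3D: %.1f MB of L3 per thread\n", 128/32;
}'
# 9800X3D gets 6.0 MB per thread vs 4.0 MB for the 9950X3D.
```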
And most of the time, even when a 9800X3D loses to a 9950X3D in games, it's within a 1-4% margin.
It's a monster for games and some workloads.
It's funny that people who blindly buy a 9950X3D for gaming-plus-office workloads without checking benchmarks often end up with similar or slower performance.
Much smarter to put the price difference toward other hardware: faster NVMe drives, efficient silent cooling, a faster GPU, etc.
It's not just uber slow to compile; as a Rust dev I could take that. But it rejects correct programs without telling you why! The compiler will just time out and ask you to refactor the expression so it has a better shot. I understand that kind of pathological behavior is present in many compilers, but I hit it way too often in Swift on seemingly benign code.
Did that happen recently (the compiler just bailing out)?
Because they got much better at that, and it’s been a long while since it happened to me. Like “I don’t even remember when the last time was” long.
> Plus Swift is arguably too unnecessarily complex now.
I would argue the allegations of complexity against Swift are greatly exaggerated. I find the language very elegant and expressive in its syntax, highly readable, and fairly terse. Beyond that, Swift feels nearly identical to every other OOP language I have used.
I won't even ask for an example of otherwise, but feel free to provide a repo where a human did that.