I don't care what the official line is, something changed with ChatGPT. It might be the same model, but the way it answers questions and the content of those answers are much more dubious now. I think how it manages its conversation context was gimped in the web chat version; I find it routinely forgetting things in longer conversations.
The text generation speed is what got me suspicious. ChatGPT's GPT-4 suddenly got 2-3 times faster at generating text, while the API GPT-4 still generates text at the same pace it always has.
I was using it to experiment lazily with machine learning and MIDI generation via CNN-LSTMs and noticed it happen in the middle of a chat. The context wasn't that long and it was providing excellent insights, then it randomly started telling me how to make a music game in Python and lost all previous context. I tried starting a new chat and re-adding the background context (I had a good bit of code I was providing for context), and it could hardly keep track of the provided code over the course of a few simple questions/changes, when before it had no problem and would actually surprise me by drawing relevantly on long-past context. After whatever happened, it would randomly assume Karras was being used if it hadn't been given any PyTorch-specific code for a few messages.
The API doesn't have these issues, or didn't a week ago when I was using it often, but something happened with the web model. Which I guess is understandable, as the main reason I was using it vs the API was cost savings.
I was thinking about how I could optimise for longer conversations, and the most obvious thing seemed to be to somehow summarise the conversation every X messages and ask it to save what it thinks is important in them.
If they’re doing something like this that would probably explain it.
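A minimal sketch of that rolling-summary idea, assuming a fixed window of verbatim messages and a stub standing in for the real model call (every name here is hypothetical):

```rust
// Keep the last `window` messages verbatim and fold older ones into a
// running summary. `summarize` is a placeholder for an actual model call
// that would compress the dropped messages into the summary.
struct Conversation {
    summary: String,
    recent: Vec<String>,
    window: usize,
}

fn summarize(summary: &str, dropped: &[String]) -> String {
    // Stub: a real implementation would ask the model to merge `dropped`
    // into `summary`, keeping whatever it thinks is important.
    format!("{} [+{} older messages]", summary, dropped.len())
}

impl Conversation {
    fn new(window: usize) -> Self {
        Conversation { summary: String::new(), recent: Vec::new(), window }
    }

    fn push(&mut self, msg: String) {
        self.recent.push(msg);
        if self.recent.len() > self.window {
            // Everything before the window gets summarized away.
            let keep = self.recent.split_off(self.recent.len() - self.window);
            self.summary = summarize(&self.summary, &self.recent);
            self.recent = keep;
        }
    }

    // What you'd actually send to the model: summary first, then recent turns.
    fn context(&self) -> String {
        format!("{}\n{}", self.summary, self.recent.join("\n"))
    }
}
```

The trade-off is exactly the symptom described above: anything outside the window survives only as whatever the summary step chose to keep.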
I know everyone is high-fiving, but this, like most Euro tech laws, is so obscenely murky that it essentially opens the floodgates on anyone running servers over there.
I always liked cars as a kid. The faster the better. But Tesla kind of broke everything. A supercar for 150k, and it's only going to get cheaper and faster as time goes by. There is no longer any parity. EVs don't have roaring engines, outside of the cringe fake engine noises some are adding. There is nothing to be excited about when your future 30k 4-door sedan can do 200mph and 0-60 in 3 seconds.
I thought "should" was a product of silly TDD: my widget "should do this thing", and it doesn't yet because TDD, so the developer would then implement it.
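That convention looks something like this in practice: name the test after the behavior the thing "should" have, write it first, watch it fail, then implement. A toy sketch in Rust (the `Counter` here is made up for illustration):

```rust
// The unit under test. Under strict TDD this impl would be written only
// after the "should" test below had been seen to fail.
struct Counter {
    value: u32,
}

impl Counter {
    fn new() -> Self {
        Counter { value: 0 }
    }

    fn increment(&mut self) {
        self.value += 1;
    }
}

// The test name reads as a spec sentence: "counter should increment by one".
#[test]
fn counter_should_increment_by_one() {
    let mut c = Counter::new();
    c.increment();
    assert_eq!(c.value, 1);
}
```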
Hot take: I feel like people who complain that Rust is hard typically write bad code in other languages. Rust is just preventing you from making common mistakes or using patterns that make your code hard to reason about and debug later.
As a fellow Rust user, please stop saying this. Not everyone who doesn't code the way we do is a bad programmer. It's this kind of sentiment that gives us a bad name.
Rust may influence us into patterns that are better in some (likely even most) situations, but not always. Sometimes the situation calls for other approaches, and that's okay.
The OP is crapping on people who find Rust hard to learn by broad-brushing them into the "bad programmer" bucket. It's a hallmark of Rust zealots, and one reason why the Rust community has such a bad name. Which is a shame because the language is decent and useful.
As somebody who could possibly be in the category you described, once I learned about Arc+Mutex, I pretty much write code the same way I used to. That and maybe OnceCell/Lazy?
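For anyone unfamiliar, that pattern is roughly the following (a minimal sketch; `OnceLock` is std's stabilized take on what OnceCell/lazy_static provide):

```rust
use std::sync::{Arc, Mutex, OnceLock};
use std::thread;

// Shared mutable state "the way I used to write it": wrap the value in
// Arc<Mutex<T>> so several threads can mutate the same counter safely.
fn shared_counter() -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // The lock guarantees exclusive access; no data race possible.
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

// OnceLock: a global initialized exactly once, lazily, on first use.
static CONFIG: OnceLock<String> = OnceLock::new();

fn config() -> &'static str {
    CONFIG.get_or_init(|| "default".to_string())
}
```

`Arc` handles the shared ownership, `Mutex` handles the shared mutation; once that clicks, a lot of code from GC'd languages ports over almost mechanically.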
This, this, and this again. Lots of the patterns Rust enforces are sensible choices even with a garbage collector, because data races, for instance, are still a thing even if you have automatic memory management.
Real people have disagreements and work through them. Let's not be mad for the sake of being mad. It was completely within WOTC's rights to do whatever they wanted. People complained, and they decided not to do it and guaranteed they never will in the future. Be happy.
I understand what you’re saying, but attempting to revoke a widely-used open source license is not just something completely within their rights.
Their sudden claim to have the ability to "deauthorize" the 20+ year old OGL and destroy the partners and competitors who built businesses on the promise of a perpetual license - this was an egregious move that was almost certainly not within their rights.
Legal analysts were quite consistent: it's impossible to be certain, but this process likely wouldn't have withstood a solid legal test.
WotC, in trying to pull this, was attempting to leverage their expensive legal team to bully people into giving away their legal rights under the open license. That’s shady and deeply unethical, and shouldn’t be considered to be within their rights.
Now for the future content they make, they can release that under a closed license if they want and that is within their rights even if I don’t like that license.
Corporations are not "real people" and "they were acting in their rights" is not a defense against criticism for shitty behavior.
Companies that can only be coerced into doing the right things by threatening their bottom line are terrible companies and we SHOULD continue to take action as consumers to dismantle them once we have evidence that they are going to be driven like this.
There is a commercially available digital format/medium that has higher res than CD? Like, there are 96kHz discs being sold out there? And there are labels out there mastering and producing these discs? Sorry this is news to me! I thought 96kHz encodes were upsamples or homemade rips from analog formats.
Most hi-res audio is being sold as digital downloads. I don't think there's a currently-sold physical medium that contains digital data that's higher quality than a CD.
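For context, the raw PCM data rates work out like this (CD-DA is 16-bit/44.1 kHz stereo; hi-res downloads are commonly sold as 24-bit/96 kHz stereo):

```rust
// Uncompressed PCM data rate: samples per second * bits per sample * channels.
fn bits_per_second(sample_rate_hz: u32, bit_depth: u32, channels: u32) -> u32 {
    sample_rate_hz * bit_depth * channels
}

// CD:     bits_per_second(44_100, 16, 2) = 1,411,200 bit/s (~1.4 Mbit/s)
// Hi-res: bits_per_second(96_000, 24, 2) = 4,608,000 bit/s (~4.6 Mbit/s)
```

So a 24/96 download carries a bit over 3x the raw data of CD, which is part of why it's mostly sold as downloads rather than on a physical disc format.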
And for Alpine it's sort of fine. People have taken Alpine a lot further than the original intent, which is to add sprinkles of JS to what is otherwise a server rendered page.
For anything even slightly more complicated we have things like Mithril or Preact, and yeah they avoid this sort of thing for a reason.
I’m glad when tools like Alpine remain in their niche. Too many of them grow with their userbase, adding every requested feature, until they become as unwieldy as the framework they originally tried replacing. If you outgrow Alpine, migrate to another framework instead.
Then why not go with Next.js outright, so you can apply the exact amount of interactivity for every little detail on your website without changing your entire setup halfway through?
Next.js is a fantastic solution if you're building a SPA.
Imagine I'm a very early stage startup, so I start out with a simple app built on Laravel. All my engineers are relatively non-senior PHP folks, not great at JavaScript. So it's rendered server-side in PHP - easier to achieve enough performance, security, and maintainability by leveraging the framework. Then I want to add some tiny bit of client-side interactivity. Going full SPA with Next.js would involve a lot of extra cost and complexity. jQuery and Alpine fit well into that niche, and Alpine particularly helps with reacting to state in a way that's hard with jQuery.
I am nearing that age, and personally I find it's a factor of motivation more than anything. I got really excited about a side project last year and smashed out more code than I had in years. Refactoring an API for work doesn't sound super exciting. When I don't care, my brain doesn't care as much either.
If you do not believe that is a factor then examine your sleep and physical fitness. These things factor a lot more into brain function as we age.