I remember being so disappointed with Apple back when I had a Macbook and the Apple store people were like "nah, if you spilled stuff on it you just buy a new macbook"
Same here, I've always used em dashes and have been called out on them in unflattering comparisons – I didn't even know they were an LLM thing. Should I read more LLM output to know what phraseology to avoid, or will doing that nudge me towards sounding more LLM? :-(
Actually I opened up GitHub Sponsors just a few weeks ago. A few times I've received enquiries from users (professors) who wanted to contribute back; only now do I have a proper channel to redirect such requests.
> It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.
This regression towards the mean is still very much a feature of the newer models, in my experience. I don't see how a model that predicts the most likely word based on previous context + corpus data could possibly not have some bias towards non-novelty / banality.
> Let's follow one example: Nigeria is the most populous country in Africa. In Abstract Wikipedia, this might be stored as: Z27243(Q1033, Q138758272, Q6256, Q15, Z27243K5)
Haha that's like John Wilkins' "Real Character, and a Philosophical Language"
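For what it's worth, the Z27243(...) notation above can be imagined as an ordinary function call: a language-independent "claim" constructor plus arguments, with per-language renderers producing the surface text. This is only a toy sketch of the idea – the function name, template strings, and signature here are invented, not Wikifunctions' actual API:

```python
# Hypothetical illustration of an abstract claim being rendered per language.
def render_superlative_claim(entity, superlative, cls, region, lang="en"):
    """Render an abstract claim like 'entity is the most-X cls in region'."""
    templates = {
        "en": "{entity} is the {superlative} {cls} in {region}.",
        "de": "{entity} ist das {superlative} {cls} in {region}.",
    }
    return templates[lang].format(
        entity=entity, superlative=superlative, cls=cls, region=region
    )

print(render_superlative_claim("Nigeria", "most populous", "country", "Africa"))
# -> Nigeria is the most populous country in Africa.
```

The point being that the stored object is the abstract call, and natural-language text is derived from it on demand.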
It's not that different from how LLM tokens work, only in a tree structure as opposed to a plain sequence. Having a tree structure makes it easier to formally define rewrite rules (which is key for interpretability), as opposed to learning them from data as LLM do.
Also tokens don't represent meaning in themselves, but are assigned points in a multidimensional space, they can only represent meaning in the network as a whole when combined with other tokens in context and order.
And the abstract concepts of Abstract Wikipedia are human-defined, top-down ways of carving the world into distinct categories which make some kind of logical sense, whereas LLM's work bottom-up and create overlapping, non-hierarchical, probabilistic networks of connections with nearly no imposed structure except the principle that you shall know a token by the company it keeps.
But you can type them both out with keys on a keyboard so in that sense I guess they're not that different.
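To make the "points in a multidimensional space" point concrete: a token's meaning in an LLM is encoded by where its vector sits relative to other vectors, not by the symbol itself. A minimal sketch with made-up three-dimensional embeddings (real models use hundreds or thousands of learned dimensions):

```python
import math

# Toy, hand-picked vectors: "meaning" lives in geometric relations
# between tokens, not in the token symbols themselves.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "apple": [0.1, 0.5, 0.5],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# In this made-up space, "king" sits closer to "queen" than to "apple".
print(cosine(embeddings["king"], embeddings["queen"]))
print(cosine(embeddings["king"], embeddings["apple"]))
```

Contrast this with Abstract Wikipedia's Q-identifiers, where each symbol is a discrete, human-assigned category with no geometry at all.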
> “Any distributed system based on exchanging data will be replaced by a system based on exchanging programs.”
So distributed systems tend to converge towards being more and more mystifying? Cf. The Mythical Man-Month:
> Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.