internet_points's comments

I remember being so disappointed with Apple back when I had a Macbook and the Apple store people were like "nah, if you spilled stuff on it you just buy a new macbook"

Same here, I've always used em dashes and have been called out on negative comparisons – I didn't even know they were an LLM thing. Should I read more LLM to know what phraseology to avoid, or will doing that nudge me towards sounding more LLM? :-(

Oh, I thought it was in all of Philadelphia, but it's just inside courtrooms :(

All that on a single Github Sponsor[0].

[0] https://github.com/sponsors/Lakshmipathi


Actually, I only opened up GitHub Sponsors a few weeks ago. A few times I received enquiries from users (professors) who wanted to contribute back; only now do I have a proper channel to redirect such requests to.

Yes! That one's going in my $PATH. Such a useful use of cat!

> You have to go read it yourself afterwards

^ this is important.

Otherwise you may very well miss anything really surprising or novel.

See for example https://www.programmablemutter.com/p/after-software-eats-the... , an experience report of NotebookLM where

> It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.


On one hand 2024 in AI time was a decade ago.

On the other, Google might not have done much to upgrade the podcast feature since then.


This regression towards the mean is still very much a feature of the newer models, in my experience. I don't see how a model that predicts the most likely word based on previous context + corpus data could possibly not have some bias towards non-novelty / banality.
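
That bias toward non-novelty is easy to see in a toy model. This is just an illustrative sketch (a made-up frequency-based bigram "model", not a real LLM), but the mechanism is the same: greedy decoding picks the most frequent continuation, so a rare-but-interesting continuation never surfaces.

```python
from collections import Counter

# Hypothetical toy corpus: the banal continuation is common,
# the surprising one is rare.
corpus = [
    ("the result was", "expected"),
    ("the result was", "expected"),
    ("the result was", "expected"),
    ("the result was", "surprising"),
]

counts = Counter(corpus)

def most_likely_next(context):
    # Greedy decoding: pick the continuation seen most often after `context`.
    candidates = {nxt: n for (ctx, nxt), n in counts.items() if ctx == context}
    return max(candidates, key=candidates.get)

print(most_likely_next("the result was"))  # always "expected", never "surprising"
```

Sampling with temperature softens this, but the probability mass still sits on the banal side.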

It’s gotten somewhat better over time, though it’s clearly not their top priority.

> Let's follow one example: Nigeria is the most populous country in Africa. In Abstract Wikipedia, this might be stored as: Z27243(Q1033, Q138758272, Q6256, Q15, Z27243K5)

Haha that's like John Wilkins' "Real Character, and a Philosophical Language"

https://en.wikipedia.org/wiki/La_Ricerca_della_Lingua_Perfet... is a great intro to the weird and wonderful world of abstract/universal/ideal/a priori languages.


It's not that different from how LLM tokens work, only in a tree structure as opposed to a plain sequence. Having a tree structure makes it easier to formally define rewrite rules (which is key for interpretability), as opposed to learning them from data as LLMs do.
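
To make the tree-plus-rewrite-rules idea concrete, here's a hypothetical sketch of the Nigeria example quoted above. The constructor and field names are invented for illustration (only the Z/Q identifiers come from the quote); the point is that a rewrite rule over an explicit tree is just an inspectable function.

```python
from dataclasses import dataclass

@dataclass
class Superlative:       # stands in for constructor Z27243
    subject: str         # Q1033 -> "Nigeria"
    quality: str         # e.g. "most populous"
    klass: str           # Q6256 -> "country"
    scope: str           # Q15   -> "Africa"

# Human-curated labels for the items (illustrative subset)
LABELS = {"Q1033": "Nigeria", "Q6256": "country", "Q15": "Africa"}

def render_english(node: Superlative) -> str:
    # A rewrite rule is just a function over the tree: because the
    # structure is explicit, you can read the rule, unlike learned weights.
    return (f"{LABELS[node.subject]} is the {node.quality} "
            f"{LABELS[node.klass]} in {LABELS[node.scope]}.")

tree = Superlative("Q1033", "most populous", "Q6256", "Q15")
print(render_english(tree))
# Nigeria is the most populous country in Africa.
```

A rule for another language would be a second function over the same tree, which is the whole pitch of the abstract-content approach.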

Also, tokens don't represent meaning in themselves but are assigned points in a multidimensional space; they can only represent meaning in the network as a whole, when combined with other tokens in context and order.
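
The "points in a multidimensional space" part can be sketched in a few lines. These 3-d vectors are made up for illustration (real embeddings have hundreds of dimensions and are learned), but the mechanics are the same: a token's "meaning" is just its position relative to other tokens, typically compared by cosine similarity.

```python
import math

# Made-up toy embeddings: a token is just a point in space.
emb = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.85, 0.82, 0.15],
    "banana": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# "king" sits closer to "queen" than to "banana" in this space.
print(cosine(emb["king"], emb["queen"]) > cosine(emb["king"], emb["banana"]))
```

No single coordinate "means" anything; only the geometry of the whole set does.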

And the abstract concepts of Abstract Wikipedia are human-defined, top-down ways of carving the world into distinct categories which make some kind of logical sense, whereas LLMs work bottom-up and create overlapping, non-hierarchical, probabilistic networks of connections with nearly no imposed structure except the principle that you shall know a token by the company it keeps.

But you can type them both out with keys on a keyboard so in that sense I guess they're not that different.


> “Any distributed system based on exchanging data will be replaced by a system based on exchanging programs.”

So distributed systems tend to converge towards being more and more mystifying? Cf. Brooks' The Mythical Man-Month:

> Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.


If you read https://www-cs-faculty.stanford.edu/~knuth/papers/claude-cyc... you'll see it was more of a guided effort to write a program that finds examples, which helped move the proof along.

