Hacker News | EternalFury's comments

If somehow recovering the capex is not counted, and if somehow the cost of developing future models is not counted, then yes, inference costs of current leading models allow a profit.

But those things are tied together.

Even xAI, which now has a reasonably competitive model, is struggling to achieve PMF. Meta is in shambles because their models have underperformed for years now.


There are people who think knowledge discovery is just a matter of parroting past behavior and trying things at random until something sticks. I don’t.

Force is supreme until you use it, then everyone knows it has limits.

So…calculators are intelligent? How about accountants who failed arithmetic 101 in high school? Are they intelligent? Generally intelligent?

We are jagged, but we can smooth that jaggedness if we choose to do so. LLMs stay jagged.

There's no objective measure for comparing intelligence; we can only say an LLM is jagged compared to humans.

The real question is: Can it be generated using programs? If it can be, then LLMs will eventually monkey type these programs.

Yes. I doubt it can do that.

I am thinking there’s a large category of problems that can be solved by resampling existing proofs. It’s the kind of brute-force expedition a machine can attempt relentlessly where humans would go mad trying. It probably doesn’t really advance the field, but it can turn conjectures into theorems.

I wonder if teaching an LLM how to write Prolog and then letting it write it could be a great way to explore spaces like this in the future.

I only ever learned it in school, but if memory serves, Prolog is a whole "given these rules, find the truth" sort of language, which aligns well with these sorts of problem spaces. Mix and match enough, especially across disparate domains, and you might get some really interesting things derived that are low-hanging fruit just waiting to be picked.
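To make the "given these rules, find the truth" idea concrete, here is a minimal sketch of Prolog-style inference in Python: facts and Horn-clause rules, with forward chaining that derives everything the rules entail. The facts and the `grandparent` rule are hypothetical examples, and this toy deliberately omits most of real Prolog (backtracking goals over rules, negation, cut).

```python
# Facts are tuples: (predicate, arg1, arg2, ...). Hypothetical example data.
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

# Horn clauses: (head, body) — the head holds if every body atom holds.
# Variables are strings starting with an uppercase letter, as in Prolog.
rules = [
    (("grandparent", "X", "Z"), [("parent", "X", "Y"), ("parent", "Y", "Z")]),
]

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def unify(pattern, fact, env):
    """Match a pattern against a ground fact, extending the binding env."""
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    env = dict(env)
    for p, f in zip(pattern[1:], fact[1:]):
        if is_var(p):
            if p in env and env[p] != f:
                return None  # conflicting binding
            env[p] = f
        elif p != f:
            return None
    return env

def solve(goals, env, known):
    """Yield every binding env that satisfies all goals against known facts."""
    if not goals:
        yield env
        return
    for fact in known:
        new_env = unify(goals[0], fact, env)
        if new_env is not None:
            yield from solve(goals[1:], new_env, known)

def forward_chain(facts, rules):
    """Apply rules until no new facts are derivable (a fixed point)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            new_facts = set()
            for env in solve(body, {}, derived):
                new = (head[0],) + tuple(env.get(t, t) for t in head[1:])
                if new not in derived:
                    new_facts.add(new)
            if new_facts:
                derived |= new_facts
                changed = True
    return derived

print(("grandparent", "tom", "ann") in forward_chain(facts, rules))  # True
```

The appeal for LLM-driven exploration is that the model only has to emit declarative rules; the engine then exhaustively grinds out the consequences, which is exactly the kind of relentless search described above.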


Indeed. I can't find my old comment on the topic, but that's exactly the point: it's not how feasible it is to "find" a new proof, but how meaningful those proofs are. Are they yet another iteration of the same kind, fitting the current paradigm perfectly and thus bringing very little to the table, or are they radical and thus potentially (but not always) opening up the field?

With brute force, or slightly better than brute force, it's most likely the former; not totally pointless, but probably not very useful. In fact, it might not even be worth the tokens spent.


I'm of the opinion that everything we've discovered is via combinatorial synthesis. Standing on the shoulders of giants and all that. I'm not sure I've seen any convincing argument that we've discovered anything ex nihilo.


How do you think you can design a benchmark made of truly novel problems?

It does seem good, but it’s slow.


I was going to say something, then I realized my cynicism is already at maximum.

