Hacker News | matt3D's comments

I have a slightly similar frustration: Netflix, Disney et al. requiring me to figure out the name of the film by deciphering the poster. I don’t know how this passes any kind of accessibility testing.


> Author: please write an article about this topic with examples of content you've created, discussions about dead ends and things that didn't work, and technical details about your setup.

I think the article itself serves as enough evidence to prove you should ignore their advice.


Is there a term for what I had previously understood Effective Altruism to be? I don’t want to reference EA in a conversation and have the other person think I’m associated with these sorts of people.

I had assumed it was just simple mathematics and the belief that cash is the easiest way to transfer charitable effort. If I can readily earn 50 USD/hour, rather than doing a volunteering job that I could pay someone 25 USD/hour to do, I simply do my job and pay for two people to volunteer.
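The back-of-envelope maths can be written out explicitly (a toy sketch using the hypothetical rates above):

```python
# Toy earning-to-give comparison, using the hypothetical rates from the comment.
my_wage = 50.0         # USD/hour I can earn at my job
volunteer_wage = 25.0  # USD/hour to hire someone for the volunteer work

hours = 10  # hours of charitable effort I want to supply

# Option A: volunteer myself -> 10 hours of work delivered.
direct_hours = hours

# Option B: work my job for those hours and donate the earnings.
donation = hours * my_wage                 # 500 USD earned
funded_hours = donation / volunteer_wage   # 20 hours of volunteer work funded

print(funded_hours / direct_hours)  # 2.0: the donation funds twice the labour
```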


That's just called utilitarianism/consequentialism. It's a perfectly respectable ethical framework: not the most popular in academic philosophy, but prominent enough that you have to at least engage with it.

Effective altruism is a political movement, with all the baggage implicit in that.


Is there a term for looking at the impact of your donations, rather than process (like percentage spent on "overhead")? I like discussing that, but have the same problem as GP.


"Overhead" is part of the work. It's like saying you want to look at the impact of your coding, rather than the overhead spent on documentation.

An (effective) charity needs an accountant. It needs an HR team. It needs people to clean the office, order printer toner, and organise meetings.


Yes, that's why I prefer looking at actual outcomes, as professed by Effective Altruism. But I'd like to find a term to describe that that doesn't come with the baggage of EA.


> An (effective) charity needs an accountant. It needs an HR team. It needs people to clean the office, order printer toner, and organise meetings.

Define "needs". Some overheads are part of the costs of delivering the effective part, sure. But a lot of them are costs of fundraising, or entirely unnecessary costs.


> costs of fundraising

How does a charity spend money unless people give it money?

They need to fundraise. There's only so far you can get with volunteers shaking tins on streets.

If a TV advert costs £X but raises 2X, is that a sensible cost?

Here's a random UK charity which spent £15m on fundraising.

https://register-of-charities.charitycommission.gov.uk/en/ch...

That allowed them to raise 3X the amount they spent. Tell me if you think that was unnecessary.

Sure, buying the CEO a jet should start ringing alarm bells, but most charities have costs. If you want a charity to be well managed, it needs to pay for staff, audits, training, etc.


> If a TV adverts costs £X but raises 2X, is that a sensible cost?

Maybe, but quite possibly not, because that 2X didn't magically appear; it came out of other people's pockets, and you've got to properly account for that as a negative impact you're having.


That's what an organization like Charity Navigator is for. Like a BBB for charities. I'm sure their methodology is flawed in some way and that there is an EA critique. But if I recall, early EA advocates used Charity Navigator as one of their inputs.


Charity Navigator quantifies overhead. EA tries to quantify impact. To understand the difference, consider two hypothetical charities. Charity A has $1 million/year in administrative costs, while charity B’s costs are only $500,000/year.

Based on this, Charity Navigator ranks charity A lower than charity B.

Now imagine that charity A and B can each absorb up to $1 billion in additional funding to work on their respective missions. Charity A saves one life for every $1,000 it gets, while B saves one life for every $10,000 it gets.

Charity navigator wouldn’t even attempt to consider this difference in its evals. EA does.

These evals get complex, and the EA organizations focused on charity evals like this have sophisticated methods for trying to do this well.
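Using the figures above, the two ranking methods can be sketched side by side (a toy comparison, not either organisation's actual methodology):

```python
# Overhead-based vs impact-based ranking, using the comment's hypothetical figures.
charities = {
    "A": {"admin_cost": 1_000_000, "cost_per_life": 1_000},
    "B": {"admin_cost":   500_000, "cost_per_life": 10_000},
}

donation = 1_000_000  # a marginal donation to allocate

# Overhead view (Charity Navigator style): lower admin cost ranks higher.
by_overhead = min(charities, key=lambda c: charities[c]["admin_cost"])

# Impact view (EA style): more lives saved per marginal dollar ranks higher.
by_impact = max(charities, key=lambda c: donation / charities[c]["cost_per_life"])

lives_a = donation / charities["A"]["cost_per_life"]  # 1000 lives
lives_b = donation / charities["B"]["cost_per_life"]  # 100 lives
print(by_overhead, by_impact)  # B wins on overhead, A wins on impact
```

The same $1 million saves ten times as many lives at charity A, which is exactly the difference an overhead ratio can't see.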


The "Program Expense Ratio" is pretty prominent in Charity Navigator's reports, and that's almost exactly a measure of "overhead".


I think the argument can be made that Deepseek is a state-sponsored needle looking to pop another state's bubble.

If Deepseek is free, it undermines the value of LLMs, since the value of these US companies is mainly speculation/FOMO over AGI.


> the argument can be made that Deepseek is a state sponsored needle looking to pop another states bubble

Who says they don't make money? Same with open-source software that offers a hosted version.

> If Deepseek is free it undermines the value of LLMs, so the value of these US companies is mainly speculation/FOMO over AGI

Freemium, open source and other models all exist. Does it undermine the value of e.g. Salesforce?


This is a more extreme example of the general Hacker News groupthink about AI.

Geohot is easily a 99.999 percentile developer, and yet he can’t seem to reconcile that the other 99.999 percent are doing something much more basic than he can ever comprehend.

It’s some kind of expert paradox, if everyone was as smart and capable as the experts, then they wouldn’t be experts.

I have come across many developers that behave like the AI. Can’t explain codebases they’ve built, can’t maintain consistency.

It’s like an aerospace engineer refusing to believe that the person who designs the toys in a Kinder egg doesn’t know how fluid sims work.


> the general Hacker News groupthink about AI

I’m surprised to see this. From my perspective, reading comments and seeing which posts rise to the top, HN as a whole seems pretty bullish on the tech…


I think there might also just be a vocal minority and/or some astroturfing hyping AI coding around hackernews. I personally keep trying all the latest shit and always stop using it, because it actively slows me down.


It’s changing over time. When Copilot came out a few years ago, people were very against it due to it being trained on GitHub codebases. Now there’s more support around it.


We don't want to admit it, but HN has similar characteristics to many other platforms: echo chamber / groupthink. You see it over and over again.

HN participants (generally speaking) are against: AI, crypto, HFT. I've worked in two of these three industries, so I have first-hand experience. My basic summary is that the average commenter here has a lot of misinformation on these topics (speaking as an insider).


It also seems wildly inconsistent for someone who founded a self driving company. If models can't write code, which is a very constrained problem space, how are they supposed to drive?


I work in aerospace, trust me some of the aerospace engineers aren't any better.

But don't worry. The company puts them somewhere they can't do any damage. Most of them become managers.


I don’t know. He covers this pretty early in the post.

> The only reason it works for many common programming workflows is because they are common. The minute you try to do new things, you need to be as verbose as the underlying language.


[flagged]


You've seen plenty of people who hacked the ps3 and iphone as teenagers and created a low level system analysis tool for doing such system hacks? You've seen plenty of people writing self driving car software a decade ago? Why did you write this when you know nothing?


I actually have seen plenty of people that could have done something like this, but did not because they simply never tried. Being daring by itself is a skill, but we're talking raw technical ability here.

I've actually seen another developer that was probably in the same category write his own self-driving software. It kind of worked, but couldn't have ever been production ready, so it was just an exercise in flexing without any practical application.

So, what product that George built do you actually use?


So, if I understand correctly, you've seen plenty of people that didn't do what he did? This was not a compelling argument.


> you've seen plenty of people that didn't do what he did?

Yes, because I've seen them build software that was actually used. And I've seen a few that did just like him, impressive sounding projects that had no usage.

I understand it's something subjective. I get the same feeling when looking at Damien Hirst's monstrously expensive stuff that leaves me cold. Even after I get the concepts behind the works, my end feeling is one of "so what?".


Your calibration around skill gradients but also cause and effect is so far off that I'm wondering how you manage to navigate reality.


You have absolutely no idea about me or my life, but chose to insult me based on not being impressed by some github profile.

Why did you feel the need to say that?


For the comment above, the more relevant denominator is all humans vs. all developers. If you use all humans as the denominator, he's easily in the top 1%, if not the top 0.001% (I haven't followed his work closely, but you'd only have to be a good dev to be in the top 1% of the global population).


Thank you, perhaps I worded it harshly, but that was my general feeling. Being a good developer already is a high level. Being able to start impressive-sounding projects that never materialize into anything is a luxury for which most competent developers simply don't have the extra energy.


In the early days of Bitcoin, I was able to send transactions programmatically. Built my own js library using json-rpc to communicate with a node.

Geohot live streamed himself broadcasting a transaction from scratch.

For that I respect his level of knowledge. Plus he built comma, which is a product I use almost every day.
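For context, the JSON-RPC layer such a library wraps is tiny. A minimal sketch of building the request body a Bitcoin node expects (the hex string is a placeholder, and nothing is sent anywhere):

```python
import json

def make_rpc_request(method, params, request_id=1):
    """Build a JSON-RPC 1.0 request body in the shape bitcoind accepts."""
    return json.dumps({
        "jsonrpc": "1.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Broadcasting a (placeholder) raw transaction:
body = make_rpc_request("sendrawtransaction", ["0100...feedbeef"])
# In a real library this body would be POSTed to the node's RPC port
# with basic auth; the node replies with the transaction id or an error.
```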


>> Geohot is easily a 99.999 percentile developer

Really?

"The best model of a programming AI is a compiler... You give it a prompt, which is “the code”, and it outputs a compiled version of that code. Sometimes you’ll use it interactively, giving updates to the prompt after it has returned code, but you find that, like most IDEs, this doesn’t work all that well and you are often better off adjusting the original prompt and “recompiling”."

Really?


Programming languages, "the code", are languages specifically designed to concisely convey intent to a compiler.

If you do not understand his point, you need to read more about our field.


This.

I think his excellence in his own trade limits his vision for the 99% who just want to get by in the job. How many devs even deal with a compiler directly these days? They write some code, fix some red underlines, then push, pray, and wait for the pipeline to pass. LLMs will be gods in this process, and you can even beg another one if your current one doesn't work.


Watching my children learn how to talk, I have come to the conclusion that the current LLM concept is one part of a two part problem.

Kids learn to speak before they learn to think about what they're saying. A two- or three-year-old can start regurgitating sentences and forming new ones which sound an awful lot like real speech, but it seems like it's often just the child trying to fit in; they don't really understand what they're saying.

I used to joke that my kids' talking was sometimes just like typing a word on my phone and then hitting the next predictive word that shows up. Since then it's evolved in a way that seems similar to LLMs.

The actual process of thought seems slightly divorced from the ability to pattern-match words, but the pattern matching serves as a way to communicate it. I think we need a thinking machine to spit out vectors that the LLM can convert into language. So I don't think they are a dead end; I think they are just missing the other half of the puzzle.
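The predictive-text analogy can be made literal with a toy bigram model (a deliberately crude sketch, nothing like a real LLM):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which: a bigram table.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word(word):
    """Always pick the most common follower -- the 'predictive text' button."""
    return following[word].most_common(1)[0][0]

# Repeatedly hitting the top suggestion produces fluent-sounding output
# with no understanding behind it:
words = ["the"]
for _ in range(4):
    words.append(next_word(words[-1]))
print(" ".join(words))
```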


Another part is malleable memory. I imagine we as humans accumulate context daily and do reinforcement training while we sleep.


Pretty surprised BMAD-method wasn't mentioned.

For my money it's by far the best Claude Code complement.


The BMAD system seems similar to the AgentOS mentioned in the post.

This way of context engineering has definitely been the way to go for me, although I’ve just implemented it myself: using Claude to help generate commands and agents and tweaking them to my liking, and lately using JSON as well as markdown to share context between steps.


What is this? Just a system prompt? What makes it so good for you?

https://github.com/bmad-code-org/BMAD-METHOD


It manifests as a sort of extension for Claude Code.

When I'm in the terminal I can call on Agents who can create standardised documents so there is a memory of the product management side of things that extends beyond the context window of Claude.

It guides you through the specification process so that you have extremely tight tasks for Claude to churn through, with any context, documentation and acceptance criteria.

Perhaps there are others similar, but I have found it completely transformative.


This amuses me to no end: https://github.com/bmad-code-org/BMAD-METHOD/issues/546

An AI tool finding issues in a set of YAML and Markdown files generated by an AI tool, and humans puzzled by all of it.

> We should really have some code reviewer...

Gemini to the rescue!


It’s basically a set of commands and agents and a way to structure context.


It is Agile for interactive sessions with an LLM.


BMAD is mentioned in the QA part, FWIW.


Same with taskmaster, also not there.


I never found taskmaster that useful, something about how it forced you to work didn’t click with me…


Yeah, that's fair, it doesn't feel great. It does work if you have something very concrete you want to make and know how to do it, since that's pretty easy to scope out into tasks and subtasks; but for work where you generate it as you go and need to edit tasks, it's pretty bad.


I'm curious how these changes align with their accessibility commitments.

For those struggling with impairments it must be hard to continue to adapt to your phone shape shifting with each update.


They don't necessarily want to be the gatekeepers of information, they just want your next click to be another news story on their website.

External links are bad for user retention/addiction.

This also has the side effect that backlinking is no longer a measure of a 'good' website, so good-quality content from inconsistently trafficked sites gets buried in search results.


I use OpenAI's batch mode for about 80% of my AI work at the moment, and one of the upsides is it reduces the frantic side of my AI work. When the response is immediate I feel like I can't catch a break.

I think once the sheen of Microsoft Copilot and the like wear off and people realise LLMs are really good at creating deterministic tools but not very good at being one, not only will the volume of LLM usage decline, but the urgency will too.
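For anyone curious about the batch workflow: OpenAI's batch mode takes a JSONL file with one request per line, which you upload and then collect results from asynchronously. A minimal sketch of building that file (the model name and prompts are placeholder assumptions):

```python
import json

def batch_line(custom_id, prompt, model="gpt-4o-mini"):
    """One JSONL line in the shape the OpenAI Batch API expects."""
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    })

prompts = ["Summarise ticket 1", "Summarise ticket 2"]
jsonl = "\n".join(batch_line(f"req-{i}", p) for i, p in enumerate(prompts))
# Write jsonl to a file, upload it, create a batch job, and the results
# arrive within the batch window -- which is what removes the urgency.
```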


Yeah, these things take time to play out. So I always just say: the large populace will finally realise fantasy and reality have to converge at some point.

