Hacker News | new | past | comments | ask | show | jobs | submit | gibbitz's comments

https://www.independent.co.uk/news/world/middle-east/gaza-to...

There are details here including quotations from an unnamed doctor. If you feel you can't trust the media credentials of The Independent, you could contact them for the identity of the unnamed doctor (whom they are likely protecting given the nature of the conflict) and ask them directly.


The doctor is named in the article now, perhaps as part of a later edit.

Since people are questioning the objectivity of the other domain, we'll use this link you found for the merged thread. I'll put the original link in the top text.


[flagged]


> Naming the doctor adds nothing. It’s a doctor from Gaza with an Islamic name, and presumably at a hospital in an area controlled by Hamas

All of this requires substantiation. Without it, a named medical professional rendering a medical opinion is credible.

> how can such claims accepted without more scrutiny?

What does "accepted" mean in this context? I'm forming a personal opinion. Based on the preponderance of evidence (evidence you'll see, in this very thread, that I was earlier sceptical of), it looks like serious people are putting their names to the opinion that this toddler was tortured.


> Without it, a named medical professional rendering a medical opinion is credible.

That’s your opinion. I disagree. It’s not credible, because being a “professional” does not mean you are capable of ignoring your own biases, especially when they run deep as they do in this particular conflict. I’ll also point out that the medical opinion you’re referring to lacks any actual details. For example - if the injuries are consistent with a cigarette burn, what specifically makes it “consistent” and how does this medical professional differentiate this possibility from all the other ones? Why is anything substantial conveniently omitted from all these stories, which instead all use the vague phrasing of “consistent with”? Why are there no details on this doctor, where they practice, or their credentials anywhere?


> That’s your opinion. I disagree

That's fair. For what it's worth, we need more polls that have an ESH option for Israel and Palestine, because my patience with both sides in this has basically run empty.

> what specifically makes it “consistent” and how does this medical professional differentiate this possibility from all the other ones?

I'm not a medical professional. Another medical professional would need to disagree with the findings for this to merit attention again.

> Why are there no details on this doctor, where they practice, or their credentials anywhere?

They gave a name. Are you claiming they're a fake doctor?


And we should only trust Israel that never lies?

Did you investigate it? If someone posted that Claude Code created a new language that was typesafe, 50% more efficient for LLM coding, and 20% faster for a human to review, without any details about the language, would you not look it up?

No knock on you directly; just an observation about the attitude in our culture. If this is true, a child was tortured; if it's false, someone is lying and needs to be outed (with facts) so they are not trusted. Neither is good, but is no one looking into it?


> Did you investigate it?

Nope. Rejecting a source doesn’t mean I am obligated to investigate it. As I said, whether this is true or not doesn’t seem particularly politically relevant. It would be interesting to know. But purely for curiosity, not because I think it will have practical effects.


This is indicative of too much context. Remember, these systems don't "think"; they predict. If you think of the context as an insanely large map with shifting and duplicate keys and queries, the hallucinating and seeming loss of context make sense. Find ways to reduce the context for better results: reduce sample sizes, and exclude unrelated repositories and code. Remember that more context means more cost, and when the AI investment money dries up, that will be untenable for developers.
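The key/query analogy can be made concrete with a toy softmax attention lookup (a minimal sketch, not how any production model actually works): when the context is padded with near-duplicate keys, the weight that should land on the one exact match gets diluted across the duplicates.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, keys):
    # score each key by dot-product similarity to the query
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    return softmax(scores)

query = [1.0, 0.0]

# Small context: one exact match, one unrelated key.
small = [[1.0, 0.0], [0.0, 1.0]]
# Large context: same exact match plus three near-duplicate distractors.
large = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.9, 0.1], [0.9, 0.1]]

w_small = attend(query, small)
w_large = attend(query, large)

# The exact match gets a much smaller share of attention in the
# large, duplicate-heavy context, even though nothing else changed.
print(w_small[0], w_large[0])
```

Trimming the distractors out of the context is, in this toy picture, exactly what restores a sharp lookup.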

If you can't reduce context, it suggests the scope of your prompt is too large. The system doesn't "think" about the best solution to a prompt; it predicts what outputs you'll accept. So if you prompt for an online casino website with user accounts and logins, games, bank-card processing, analytics, advertising networks, etc., the agent will require far more context than a prompt for just the login page.
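One way to act on that, sketched below with invented file names and a crude token estimate: keep the task scoped and hand the agent only the files relevant to it, under an explicit token budget, rather than shipping the whole repository with a monolithic prompt.

```python
# Hypothetical context-scoping helper; file names, budgets, and the
# ~4-chars-per-token rule of thumb are all illustrative assumptions.

def estimate_tokens(text):
    # crude rule of thumb: roughly 4 characters per token
    return len(text) // 4

def build_prompt(task, files, budget_tokens=2000):
    # include files in priority order, stopping at the token budget
    picked, used = [], 0
    for name, body in files:
        cost = estimate_tokens(body)
        if used + cost > budget_tokens:
            break
        picked.append(name)
        used += cost
    return {"task": task, "files": picked, "tokens": used}

repo = [
    ("auth/login.py",   "x" * 4000),  # ~1000 tokens, relevant
    ("auth/session.py", "x" * 2000),  # ~500 tokens, relevant
    ("games/slots.py",  "x" * 8000),  # ~2000 tokens, unrelated to login
]

prompt = build_prompt("Build the login page", repo)
print(prompt["files"])  # only the auth files fit the budget
```

Prompting for the login page alone keeps the agent inside the budget; the casino-site-in-one-prompt version would blow past it immediately.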

So to answer the question, if my agent loses context, I feel like I've messed up.


This is the first project where I've really let AI do more than work on a single file at a time. The trouble is, there's no way for it to be useful without a fairly large context. When it runs out, it starts doing things that are actively destructive, yet subtle and easy to miss at the same time. Mainly, it forgets the architecture. A couple of days ago, it had a good handle on a database table I was writing side by side with an API that ran queries and did calculations on the data. I read the code it wrote for a particular API call and didn't notice that it had started flipping the sign of one of the columns in a query, because it had misinterpreted the column name. A few minutes before that, it had written another query correctly, but from that point on it kept flipping the sign on that column. I only noticed after having it write several other queries, when it oddly mentioned in its "thinking" that X was Y-Z. Reading the thinking has been the main clue as to when it loses track, but if I didn't know exactly why X was Y+Z, the code built on that API would have given subtly inconsistent results that would have been very hard to trace.
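A contrived reconstruction of that failure mode (column names, semantics, and numbers are all invented here): both versions run cleanly and look plausible in review; only the totals disagree, which is what makes the regression so hard to trace downstream.

```python
# Hypothetical rows: intended semantics are total = subtotal + tax (X = Y + Z).
rows = [
    {"subtotal": 100, "tax": 15},
    {"subtotal": 80,  "tax": 5},
]

def total_correct(r):
    # the intended calculation: X = Y + Z
    return r["subtotal"] + r["tax"]

def total_flipped(r):
    # what an agent that lost context might generate: it reinterprets
    # "tax" as a deduction and silently flips the sign (X = Y - Z)
    return r["subtotal"] - r["tax"]

correct = sum(total_correct(r) for r in rows)
flipped = sum(total_flipped(r) for r in rows)
print(correct, flipped)  # both queries "work"; only the numbers diverge
```

Nothing crashes and no type checker complains, so the only tell is a reviewer who already knows that X was Y+Z.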

Why does startup equal good? Maybe it's better if someone creates something out of the motivation to create a thing or provide it and not out of a desire to "win big". Otherwise it's just more junk or services nobody wants...

Also, life should not be a lottery…

Why is this never a "we can make more products" conversation with businesses? It's always about how many people we can avoid paying versus how many more things we can sell.

I think this is very telling of where the message is coming from and how the tools are being sold.

Typically a good thing would be creating more value for a company's consumers, not increasing unemployment. Are these tools to make our lives better or to increase profits for shareholders without taking risks?


100 x 5s is nearly 10 minutes. If it takes 10 minutes to write a PR there may be a "skill issue". The bottom end of this 1-2 minutes makes more sense.

How much productivity do we really need? Even at senior dev pay scale, 2 minutes is like a dollar. The tokens and calls involved in a 5s commit could close in on 10¢, depending on your contract, the model, etc., and that's at today's costs. Remember that my salary is on top of the rates for the LLM, so if the 5s response takes 10s for me to prompt, that's 15s (10 for me, 5 for the LLM) that the boss is paying for.
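The back-of-the-envelope math above, spelled out (the hourly rate and per-call cost are assumptions for illustration, not real billing figures):

```python
# Rough cost of one 5-second AI-assisted commit, under assumed rates.
dev_rate_per_hour = 30.0   # assumed rate where "2 minutes is like a dollar"
llm_cost_per_call = 0.10   # assumed tokens-and-calls cost of one completion

human_seconds = 10         # time spent writing the prompt
llm_seconds = 5            # time waiting on the response

# the boss pays for the full 15 seconds of dev time, plus the LLM call
human_cost = dev_rate_per_hour / 3600 * (human_seconds + llm_seconds)
total = human_cost + llm_cost_per_call
print(human_cost, total)
```

Under these assumptions the LLM call nearly doubles the cost of the commit, which is the point: the per-call fee sits on top of, not instead of, the salary.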

This starts to feel like a billionaire eating ramen noodles just so he can reach his second billion dollars.

Where I work our contract limits API calls, so doing this could result in not being able to use the model when I need it later for something more sophisticated (planning, debugging etc.) than using tooling I'm paid to already know.


I think using GPT et al. to create a bespoke tool to do what you need gives the average home user too much credit. What I see more of is just using the prompt in place of software to create an outcome: "Transcribe this recording", "Give me a synopsis of the Godfather films", "How can I wow my girlfriend?". The fraction of home users using this to create software is likely limited to people with no skills trying to make apps to sell, which is not a tool to help them with something else. Even the software devs I know are using tools made for them, not making their own Claude Code or Cursor.

Right now, the greenfield is in how you use these tools. Making a bespoke specialized tool for yourself, automating onboarding or CI/CD setups with simple commands, or building bridges between "gatekept" existing software and agents are all ripe for growth.

I get that we should see this as a good thing, but I see it as entering the last act of a play. Thousands of people are doing these things and coming up with uses for the tools around the clock. Novel uses for the technology will all be exhausted in the next couple of years and there will be less room for innovation than there was before LLMs.


This is the pattern. The labor is nearly worthless, so just have the bot reinvent the wheel every time.

I've been feeling the craft side of this for the last few years. My education is in Fine Art, and I am a self-taught UI developer. To me this was a craft of making the code do what the designer envisioned and working with creatives to create engaging and unique interfaces. Slowly but surely, "standardization" eroded this via Bootstrap and Material UI, and interfaces lost that spark of creativity. This was the beginning of thinking of sites as products, in my mind. LLMs are just the nail in this coffin. Since tools like Claude Code and Cursor entered the market, I don't do tech in my free time anymore. I don't enjoy it now. I just use the LLM at work as the business dictates (and monitors), then clock out promptly at 5:00.

Does the Cursor leaderboard count? Leadership is watching this where I am.

