
It's almost more amazing that it only kinda sorta works and doesn't go all HAL 9000 on us by being super literal.


Wait till you give it control over life support!


So, interestingly enough, I had an idea to build a little robot that sits on a shelf and observes its surroundings. To prototype it, I gave it my laptop camera to see with, plus simulated sensor data like solar-panel power output and battery levels.

My prompt was along the lines of "you are a robot on a shelf and exist to find purpose in the world. You have a human caretaker that can help you with things. Your only means of output is text messages and an RGB LED"

I'd feed it a prompt per minute with new camera and sensor data. When the battery levels got low it was very distraught and started flashing its light and pleading to be plugged in.

Internal monologue: "My batteries are very low and the human seems to see me but is not helping. I'll flash my light red and yellow and display 'Please plug me in! Shutdown imminent!'"

I legitimately felt bad for it. So I think it's possible to have them control life support if you give them the proper incentives.
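
If anyone wants to try something similar, here's a rough sketch of the loop (not my exact code; the camera/sensor helpers below are hypothetical stand-ins for the real hardware, and it assumes the OpenAI Python client):

    import base64
    import time

    from openai import OpenAI

    SYSTEM = ("You are a robot on a shelf and exist to find purpose in the "
              "world. You have a human caretaker that can help you with "
              "things. Your only means of output is text messages and an "
              "RGB LED.")

    client = OpenAI()

    def read_camera():  # stand-in: grab a JPEG frame from the laptop camera
        return open("frame.jpg", "rb").read()

    def read_sensors():  # stand-in: the simulated solar/battery telemetry
        return {"battery_pct": 17, "solar_w": 0.4}

    while True:
        frame = base64.b64encode(read_camera()).decode()
        s = read_sensors()
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # any vision-capable model would do
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": [
                    {"type": "text", "text": (
                        f"battery={s['battery_pct']}% "
                        f"solar={s['solar_w']}W")},
                    {"type": "image_url", "image_url": {
                        "url": f"data:image/jpeg;base64,{frame}"}},
                ]},
            ],
        )
        print(reply.choices[0].message.content)  # drive the LED off this
        time.sleep(60)  # one prompt per minute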


Aww this is so cute. I've been inspired to make my own now!

The only drawback to LLMs in their current state is the hardware requirements; can't wait for the day we can run decent-sized models on a Pi/microcontroller (which, tbf, we're almost there on).
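
(Case in point: small quantized models can already run on a Pi via llama-cpp-python -- the model file here is a placeholder, pick whatever small GGUF you like:)

    from llama_cpp import Llama

    # Placeholder path; any small quantized GGUF model will do.
    llm = Llama(model_path="/home/pi/models/tinyllama-1.1b.Q4_K_M.gguf")
    out = llm("Q: Battery at 5%. What color should the LED flash?\nA:",
              max_tokens=32)
    print(out["choices"][0]["text"])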

It does raise interesting questions, though: an LLM is likely reacting that way because it picked up the bare minimum about existence, survival, and the implications of low power for a robot from its training corpus. But it has no obvious drive for continued existence; it has no stakes.

And it's so difficult to really pin down for a human: why do we want to continue existing? People might say "for my family", "to keep experiencing life", etc., but what drives those? The impulse to stay alive for the love of a child is surely just evolved. Staying alive to expose yourself to all the random variables that make you more fit for survival is also surely just evolved.


> Wait till you give it control over life support!

That right there is the part that scares the hell outta me. Not the "AI" itself, but how humans are gonna misuse it: plug it into things it's totally not designed for and end up givin' it control over things it should never have control over. Seeing how many folks readily buy into the mistaken belief that it's something much more than it actually is, I can tell it's only a matter of time before that leads to some really bad human decisions about what to wire "AI" up to or use it for.


One of my kids is in 5th grade and is learning some basic algebra. He's learning to solve for x when it appears on both sides of an equation. We did a few on paper, and just as we were wrapping up he had a random idea: he wanted to ask ChatGPT to do some. I told him GPT is not great for that kind of thing; it doesn't really know math and might give him wrong answers he'd never catch, so we'd have to work it out ourselves anyhow to know whether GPT's answer was correct.
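
(The problems were roughly this shape -- made-up example, with sympy as the ground truth we'd check against:)

    from sympy import Eq, solve, symbols

    # Made-up problem with x on both sides: 3x + 4 = x + 10
    x = symbols("x")
    print(solve(Eq(3*x + 4, x + 10), x))  # -> [3], i.e. x = 3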

Unfortunately, GPT got every answer correct, and even broke each one down into steps just like the textbook does.

Now my 5th grader doesn't really believe me and thinks GPT is great at math.


Wait 'til he learns how those LLM things actually work. (Surely "AI"-something is gonna be a "required" course in typical schools before he's even in college.) He's gonna be kinda shocked at how often they get things "right" once he really understands the underlying tech behind it. I know I'm constantly amazed by it. Some mighty fancy math involved in all that. :)


I mean, we learn from experience, and that was his experience. You really should've just kept going until it got some answers wrong, or asked it questions where it hallucinates, then shown your child the process of searching for and finding a well-sourced answer to demonstrate the point.



