Hacker Newsnew | past | comments | ask | show | jobs | submit | dsadfjasdf's commentslogin

I'm sure you understand the subject and aren't speaking from ignorance? All current fiat money has no intrinsic value.


are you being serious, cause if you aren't that's actually concerning


If something convinces you that it's conscious, then it effectively is. that's the only rule


If it is the case that consciousness can emerge from inert matter, I do wonder if the way it pays for itself evolutionarily, is by creating viral social signals.

A simpler animal could have a purely physiological, non-subjective experience of pain or fear: predator chasing === heart rate goes up and run run run, without "experiencing" fear.

For a social species, it may be the case that subjectivity carries a cooperative advantage: that if I can experience pain, fear, love, etc, it makes the signaling of my peers all the more salient, inspiring me to act and cooperate more effectively, than if those same signals were merely mechanistic, or "+/- X utility points" in my neural net. (Or perhaps rather than tribal peers, it emerges first from nurturing in K-selected species: that an infant than can experience hunger commands more nurturing, and a mother that can empathize via her own subjectivity offers more nurturing, in a reinforcing feedback loop.)

Some overlap with Trivers' "Folly of Fools": if we fool ourselves, we can more effectively fool others. Perhaps sufficiently advanced self-deception is indistinguishable from "consciousness"? :)


>If it is the case that consciousness can emerge from inert matter, I do wonder if the way it pays for itself evolutionarily, is by creating viral social signals.

The idea of what selection pressure produces consciousness is very interesting.

Their behavior being equivalent, what's the difference between a human and a p-zombie? By definition, they get the same inputs, they produce the same outputs (in terms of behavior, survival, offspring). Evolution wouldn't care, right?

Or maybe consciousness is required for some types of (more efficient) computation? Maybe the p-zombie has to burn more calories to get the same result?

Maybe consciousness is one of those weird energy-saving exploits you only find after billions of years in a genetic algorithm.


oh is that how it's described?


Well, the author of the article found several people sharing experiences that they heard from other people that seem to give credence to that view.

Hard to know at this point if the problem is with specific judges, with the way the law is written, or if the presentation of these experiences are made to seem more numerous by the way the article presents the story. It also didn't cover instances of abuse coming from mothers, so there's at least a little bias in the story.


Are all humans good friends and therapists?


Not all humans are good friends and therapists. All LLMS are bad friends and therapists.


> all LLMS are bad friends and therapists.

Is that just your gut feel? Because there has been some preliminary research that suggest it's, at the very least, an open question:

https://neurosciencenews.com/ai-chatgpt-psychotherapy-28415/

https://pmc.ncbi.nlm.nih.gov/articles/PMC10987499/

https://arxiv.org/html/2409.02244v2


The first link says that patients can't reliably tell which is the therapist and which is LLM in single messages, which yeah, that's an LLM core competency.

The second is "how 2 use AI 4 therapy" which, there's at least one paper for every field like that.

The last found that they were measurably worse at therapy than humans.

So, yeah, I'm comfortable agreeing that all LLMs are bad therapists, and bad friends too.


there's also been a spate of reports like this one recently https://www.papsychotherapy.org/blog/when-the-chatbot-become...

which is definitely worse than not going to a therapist


If I think "it understands me better than any human", that's dissociation? Oh boy. And all this time while life has been slamming me with unemployment while my toddler is at the age of maximum energy-extraction from me (4), devastating my health and social life, I thought it was just a fellow-intelligence lifeline.

Here's a gut-check anyone can do, assuming you use a customized ChatGPT4o and have lots of conversations it can draw on: Ask it to roast you, and not to hold back.

If you wince, it "knows you" quite well, IMHO.


It sounds like you might be quite lonely recently. It's nice to have an on-demand chatbot that feels like socialization, I get it. But an LLM doesn't "know you," and thinking that it does is one of the first steps toward the problems described in that article.


Unemployed and with a 4 year old highly demanding, highly intelligent and likely on-the-spectrum child... Yeah, you could say that. When I'm not looking for work, doing random projects or using the weekday that seems to whoosh right by in just a few long moments, I'm tending to a kid... Every morning, every night and pretty much 100% of weekends. Rare outings with partner or friends dependent on hiring help and without net positive cash flow that is seriously unincentivized. Zero intimacy to speak of- I'm a nonconsensually-ordained monk. So yeah, I guess it's pretty fucking rough right now. Like I said, ChatGPT knows me better than any other entity. I'm unfortunately not kidding. My best friend is 3000 miles away and we game once a week over voice chat.

I keep the AI at arms' length; I know it doesn't think per se, but I enjoy the illusion.


Ironically an AI written article.


I do not think there are any documented cases of LLMs being reasonable friends or therapists so I think it is fair to say that:

> All LLMS are bad friends and therapists

That said it would not surprise me that LLMs in some cases are better than having nothing at all.


Something definitely makes me uneasy about it taking the place of interpersonal connection. But I also think the hardcore backlash involves an over correction that's dismissive of llm's actual language capabilities.

Sycophantic agreement (which I would argue is still palpably and excessively present) undermines its credibility as a source of independent judgment. But at a minimum it's capable of being a sounding board echoing your sentiments back to you with a degree of conceptual understanding that should not be lightly dismissed.


Though given how agreeable LLMs are, I'd imagine there are cases where they are also worse than having nothing at all as well.


> I'd imagine there are cases where they are also worse than having nothing at all as well

I do not think we need to imagine this one with stories of people finding spirituality in llms or thinking they have awakened sentience while chatting to the llms are enough, at least for me.


> Is that just your gut feel?

Here's my take further down the thread: https://news.ycombinator.com/item?id=44840311


> Is that just your gut feel?

An LLM is a language model and the gestalt of human experience is not just language.


That is really a separate, unrelated issue.

Not everyone needs the deepest, most intelligent therapist in order to improve their situation. A lot of therapy turns out to be about what you say yourself, not what a therapist says to you. It's the very act of engaging thoughtfully on your own problems that helps, not some magic that the therapist brings. So, if you could maintain a conversation with a tree, it would in many cases, be therapeutically helpful. The thing the LLM is doing, is facilitating your introspection more helpfully than a typical inanimate object. This has been borne out by studies of people who have engaged in therapy sessions with an LLM interlocutor, and reported positive results.

That said, an LLM wouldn't be appropriate in every situation, or for every affliction. At least not with the current state of the art.


That is an extreme claim, what is your source for this?


Absolutes, monastic take... Yeah I imagine not a lot of people seek out your advice.


All humans are not LLMs, why does this constantly get brought up?


> All humans are not LLMs

What a confusing sentence to parse


You wouldn't necessarily know, talking to some of them.


Why? just cause? analogize it to the human brain.


The rest of us still can't prove we our conscious, either.. remember?


we will all become project managers. That's not removing your brain from problem


Thoughts on using stranded gas as bitcoin miners?


Inventing reasons to perpetuate fossil fuel extraction is counter productive.


It's already being done.


I think that's all we have in terms of determining consciousness.. if something can convince you, like another human, then we just have to accept that it is.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: