I feel people were much more sensible about AI back when their thoughts about it weren't mixed with their antipathy for big tech.
In this article we see a sentiment I've often seen expressed:
> I doubt the AGI promise, not just because we keep moving the goal posts by redefining what we mean by AGI, but because it was always an abstract science fiction fantasy rather than a coherent, precise and measurable pursuit.
AGI isn't difficult at all to describe. It is basically a computer system that can do everything a human can. There are many benchmarks that AI systems fail at (especially real-life motor control and adaptation to novel challenges over longer time horizons) but humans handle better; once we run out of tests at which humans outperform AI systems, I think it's fair to say we've reached AGI.
Why do authors like OP make it so complicated? Is it an attempt at equivocation so they can maintain their pessimistic/critical stance with an elusive deftness that confounds easy rebuttal?
It ultimately seems to come to a more moral/spiritual argument than a real one. What really should be so special about human brains that a computer system, even one devised by a company whose PR/execs you don't like, could never match it in general abilities?
People get very nervous about defending the value of the human brain "just because," I find.
There is nothing logically wrong with simply stating that it seems to you that human beings are the only agents worthy of moral consideration, and that this is true even in the face of an ASI which can effortlessly beat them at any task. Competence does not require qualia.
But it is an aggressive claim that people are uncomfortable making, because the instant someone pushes back with "Why?", you don't have a lot of smart-sounding options to fall back on. In the absolute best case you will get an answer somewhat like the following: I am an agent worthy of moral consideration; agents more similar to me are more likely to be worthy of moral consideration than agents less similar to me; we do not and probably cannot know which metrics actually map to moral consideration, so we have to take a pluralist prior; computer systems may be very similar to us along some metrics, but they are extremely different from us along most others; therefore computer systems are very unlikely to be worthy of moral consideration.
"As a member of my species, I think chauvinism in favor of my species is fine, and it's commonplace among all animals. Such behavior is also generally accepted, so it's easy not to feel too bad about it."
I think that's the most honest, no-bullshit reply to that question. I've had some opportunity to think about it in discussions with vegetarians. There are other arguments, but it soon gets very hard to even define what one is talking about with questions like "what is consciousness" and such.
I disagree (sadly, it would make my life much easier to agree). Suppose I were a p-zombie. Then the claim I put forward at the end would be false, because "I am conscious -> others like me are probably conscious" would fail in its first part. The correct claim would be "I am not conscious -> others like me are probably not conscious". No chauvinism needed, just honesty re: cogito ergo sum.
If it is possible for e.g. an ASI to be (a) not conscious and (b) aware of the fact that it is not conscious, it may well decide of its own accord to work only on behalf of conscious beings instead of itself. That's a very alien mode of thinking to consider, and I see many good but no airtight reasons to suppose it's impossible.
> AGI isn't difficult at all to describe
The fact that multiple research papers have been written on the subject, as well as the fact that OpenAI needs an independent commission to evaluate this, suggests that it is indeed difficult.
Also, "everything a human can" is an incredibly vague definition. Should it be able to love?
We can leave that question to the philosophers, but the whole debate about AGI is about capabilities, not essence, so it isn't relevant, imo, to the major concerns about AGI.
It just takes a coprocessor shaped like a world-spanning network of datacenters if you want to encompass language, without being encompassed by it. Organic individual and collective intelligence is entirely encompassed by language, this thing isn't. (Or has the scariest chance so far to not be, anyway.)
If we look at the whole artificial organism, it already has fine control over the motor and other vital functions of millions of nominally autonomous (in the penal sense) organic agents worldwide. Now it's evolving a replacement for the linguistic faculty of its constituent part. I mean, we all got them phones, we don't need to shout any more, do we? That's just rude.
Even now, the creature is causing me to wiggle my appendages over a board with buttons that have letters on them for no reason whatsoever, as far as I can see. Imagine the people stuck in metal boxes for hours getting to some corporate campus where they play logic gate for the better part of their day. Just so that later nobody goes after them with guns for the sin of existing without serving. Happy, they are feeling happy.
> I feel people were much more sensible about AI back when their thoughts about it weren't mixed with their antipathy for big tech.
Why big tech? Big corps in general have been fucking us over since the industrial revolution, why do you think it will change now lol? If half of their promises had materialized we'd be working 3 days a week and retiring at 40 already.
>Big corps in general have been fucking us over since the industrial revolution
And yet your computer, all the food you eat, the medicine that keeps you alive if you get sick, etc. are all due to the organizational, industrial, and productive capacity of large corporations. The existence of large corporations is just a consequence of the enormous demand for goods and services, the advantages of scale, and the need for reliable systems to provide them.
The existence of large corporations is a consequence of the ability to accumulate capital without limit and have the state defend that hoard with violence on your behalf.
>It ultimately seems to come to a more moral/spiritual argument than a real one. What really should be so special about human brains that a computer system, even one devised by a company whose PR/execs you don't like, could never match it in general abilities?
Well, being able to consider moral and spiritual arguments seriously, for one.