BadThink6655321's comments | Hacker News

What about Gödel incompleteness? Computers aren't formal systems. Turing machines have no notion of truth; their programs may. So a program can have M > N axioms, in which case one of the axioms beyond the original N recognizes the truth of G ≡ ¬ Prov_S(⌜ G ⌝), because G was constructed to be true. Alternatively, construct a system that generates "truth" statements, subject to further verification. After all, some humans think that "Apollo never put men on the moon" is a true statement.
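For concreteness, here is a minimal LaTeX sketch of the standard construction this alludes to (the "extra axiom" framing follows the comment above; S and G are as in the formula there):

    % Diagonal lemma: for any suitable formal system S there is a sentence G with
    \[ S \vdash G \leftrightarrow \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner) \]
    % First incompleteness: if S is consistent, then S does not prove G.
    % But a program is free to adopt the enlarged system
    \[ S' = S \cup \{\, G \,\} \]
    % in which G is provable outright (it is now an axiom). Incompleteness
    % is not escaped: it re-applies to S', which has its own independent
    % sentence G'.

So taking the extra axiom on board doesn't defeat incompleteness; it relocates it to S', which is consistent with the point that the truth of G can be recognized from outside S.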

As for intentionality, programs have intentionality.


A ridiculous argument. Turing machines don't know anything about the program they are executing. In fact, Turing machines don't "know" anything. Turing machines don't know how to fly a plane, translate a language, or play chess. The program does. And Searle puts the man in the room in the place of the Turing machine.


So what, in the analogy, would be the program? Surely it's not the printed rules, so I think you're making the "systems reply" - that the program that knows Chinese is some sort of metaphysical "system" that arises from the man using the rules - which is the first thing Searle tries to rebut.

> let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.

In other words, even if you put the man in place of everything, there's still a gap between mechanically manipulating symbols and actual understanding.


People are doing things they personally do not understand, just by following the rules, all the time. One does not need to understand why celestial navigation works in order to do it, for example. Heck, most kids can learn arithmetic (and perform it in their heads) without being able to explain why it works, and many (including their teachers, sometimes) never achieve that understanding. Searle’s failure to recognize this very real possibility amounts to tacit question-begging.


Yes, it's a wrong-end-of-the-telescope kind of answer.

A human simulates a Turing machine to do... something. The human is acting mechanically. So what?

If there's any meaning, it exists outside the machine and the human simulating it.

You need another human to understand the results.

All Searle has done is distract everyone from whatever is going on inside that other human.


In that case you've basically just created a split-brain situation (I mean the actual phenomenon of someone who's had the main connection between the two hemispheres of the brain severed). There's one system, which is the man plus the rules he has internalized, and there's what the man himself consciously understands, and there's no reason the two are necessarily communicating in some deeper way, much as a split-brain patient may be able to point to something they see in one side of their vision when asked but be unable to say what it is.

(Also, IMO, the question of whether the program understands Chinese mainly depends on whether you would describe an unconscious person as understanding anything.)

I also can't help but think of this sketch when this topic comes up (even though, importantly, it is not quite the same thing): https://www.youtube.com/watch?v=6vgoEhsJORU


Only because "actual understanding" is ambiguously defined. Meaning is an association of A with B. Our brains have a large associative array in which the symbols for the sound "dog" are associated with the image of "dog", which is associated with the behavior of "dog", which is associated with the feel of "dog", ... We associate the symbols for the word "hamburger" with the symbols for the taste of "hamburger", with ... We understand something when our past associations match current inputs and can predict future inputs.
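Purely as a toy illustration of that associative-array picture (a Python sketch with made-up names; not a claim about how brains actually implement this):

    # A toy associative store: each concept links percepts across modalities.
    from collections import defaultdict

    associations = defaultdict(set)

    def associate(a, b):
        """Link two symbols/percepts bidirectionally."""
        associations[a].add(b)
        associations[b].add(a)

    def predict(symbol):
        """Given one percept, return the percepts it evokes."""
        return associations[symbol]

    def understands(inputs):
        """Crude 'understanding' test: do current inputs match stored
        associations, i.e. does every pair co-associate?"""
        return all(b in associations[a] for a in inputs for b in inputs if a != b)

    # The 'dog' example from the comment above.
    associate("sound:dog", "image:dog")
    associate("image:dog", "behavior:dog")
    associate("sound:dog", "behavior:dog")

    print(understands({"sound:dog", "image:dog", "behavior:dog"}))  # True
    print(understands({"sound:dog", "taste:hamburger"}))            # False

"Understanding" in this toy is just matching current inputs against stored associations and being able to predict the rest, which is the parent comment's definition.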


"Actual understanding" means you have a grounding for the word down to conscious experience and you have a sense of certainty about its associations. I don't understand "sweetness" because I competently use the word "sweet." I understand sweetness because I have a sense of it all the way down to the experience of sweetness AND the natural positive associations and feelings I have with it. There HAS to be some distinction between understanding all the way down to sensation and a competent or convincing deployment of that symbol without those sensations. If we think about how we "train" AI to "understand" sweetness, we're basically telling it when and when not to use that symbol in the context of other symbols (or visual inputs). We don't do this when we teach a child that word. The child has an inner experience he can associate with other tastes.


You mentioned experience, but it's not clear to me if you mean that it's a requirement for "actual understanding." Is this what you're saying? If so, does that mean a male gynecologist doesn't have an "actual understanding" of menstrual cycles and menopause?

I think about astronomers and the things they know about stars that are impossible to experience even from afar, like sizes and temperatures. No one has ever seen a black hole with their own eyes, but they read a lot about it, collected data, made calculations, and now they can have meaningful discussions with their peers and come to new conclusions from "processing and correlating" new data with all this information in their minds. That's "actual understanding" to me.

One could say they are experiencing this information exchange, but I'd argue we can say the same about the translator in the Chinese room. He does not have the same understanding of Chinese as we humans do, associating words with memories and feelings and other human experiences, but he does know that a given symbol evokes the use of other specific symbols. Some sequences require the use of lots of symbols, some are somewhat ambiguous, and some require him to fetch a symbol that he hasn't used in a long time and maybe doesn't even know where he stored it. To me this looks a lot like the processes that happen inside our minds, with the exception that his form of "understanding" and the experiences it evokes in him are completely alien to us. Just as an AGI's would possibly be.

I'm not comfortable looking at the translator's point of view as if he's analogous to a mind. To me he's the correlator, the process inside our minds that makes these associations. This is not us; it's not under our conscious control; from our perspective it just happens, and we know today it's a result of our neural networks. We emerge somehow from this process. Similarly, it seems to me that the experience of knowing Chinese belongs to the whole room, not the guy handling symbols. It's a weird conclusion, and I still don't know what to think of it.


When I say "experience," I mean a sufficient grounding of certainty about what a word means, which includes how it's used and how it relates to the world I'm experiencing, but also the mood or valence the word carries. I can't feel your pain; maybe you've been to a country that I haven't been to and you're conveying that experience to me. Maybe you've been to outer space. I'm not saying that to understand you I need to have had literally the exact experience you had, but I should be able to sufficiently relate to the words you are saying in order to understand what you are saying. If I can't sufficiently relate, I say I don't understand. You can see how this differs from what an AI is doing. The AI is drawing on relationships between symbols, but it doesn't really have a self, or experience, etc.

The process of fetching symbols, as you put it, doesn't feel at all like what I do when somebody asks me what it was like to listen to the Beatles for the first time and I form a description.


The irony here is that an LLM performs the very thing that Searle has the human operator do. If it is the sort of interaction that does not need intelligence, then no conclusion about the feasibility of AGI can be drawn from contemplating it. Searle's arguments have been overtaken by technology.


Can you expand on this? The thought experiment is just about showing that there is more to having a mind than having a program. It's not an argument about the capabilities of LLMs or AGI. Though it's worth noting that behavioral criteria continue to lead people to overestimate the capabilities and promise of AI.


LLMs are capable of performing the task specified for the Chinese room over a wide range of complex topics and for a considerable length of time. While it is true that their productions are wrong or ill-conceived more often than one would expect from a well-informed human, and sometimes look like the work of a rather stupid one, the burden now rests on Searle's successors to show that every such interaction is purely syntactic.


You and Searle both seem not to understand a simple, obvious fact about the world, which is that (inhomogeneous) things don't have the same thing inside. A chicken pie, for example, doesn't have any chicken pie inside. There's chicken inside, but that's not chicken pie. There's sauce, vegetables, and pastry, but those aren't chicken pie either. All these things together still may not make a chicken pie. The 'chickenpieness' of the pie is an additional fact, not derivable from any facts about its components.

As with pie, so with 'understanding'. A system which understands can be expected to not contain anything which understands. So if you find a system which contains nothing which understands, this tells you nothing about whether the system understands[0].

Somehow both you and Searle have managed to find this simple fact about pie to be 'the grip of an ideology' and 'metaphysical'. But it really isn't.

[0] And vice-versa, as in Searle's pointlessly overcomplicated example of a system which understands Chinese containing one which doesn't containing one which does.


I don't remember when I first started using BBEdit. Perhaps not long after the demise of MPW. I still use it for everything but Lisp development.


Does the testimony of the adults who were negatively affected by it, both as adults and as children, count?


Just recently a black child, who was born at home with jaundice, was taken from the parents to be given care in a hospital, because a doctor didn’t think the parents would provide effective treatment.

https://www.cbsnews.com/news/temecia-rodney-mila-jackson-ret...


It's unequivocal that the state has the right and obligation to intervene in cases of child neglect/abuse (of which refusal or inability to provide necessary medical care is an example); this is easily evidenced by the existence of agencies like Child Protective Services or the Children's Aid Society.

It is completely unnecessary to state the child's ethnicity to make this point which is entirely irrelevant to the argument you are attempting to make.


Earlier today I came across this on Scott Aaronson's site:

"Also, are the ultimate equations that govern the universe “real,” while tables and chairs are “unreal” (in the sense of being no more than fuzzy approximate descriptions of certain solutions to the equations)? Or are the tables and chairs “real,” while the equations are “unreal” (in the sense of being tools invented by humans to predict the behavior of tables and chairs and whatever else, while extraterrestrials might use other tools)? Which level of reality do you care about / want to load with positive affect, and which level do you want to denigrate?"

https://scottaaronson.blog/?p=3628


I think Scott Aaronson's statement is misguided. It is not about what is "better" or "worse". It is about the difference between concrete and abstract.

Abstract things are representations of many different concrete things; they are abstractions of concrete things. Abstractions are "real", but they exist on a different ontological level than concrete things. They only exist in our heads, and therefore many say they don't "really" exist, by which we commonly mean that they don't exist in the physical world.

There is clearly a big difference between what exists in the physical world, and what we SAY or write about it. The latter are descriptions, which are often abstractions. Like say "redness" is an abstraction of the common properties of all red objects.


Edward Feser published a review of Koons's book ([0], [1]). I was struck by the number of times Feser argued that a "common sense" view of QM is the one that is "basically correct." Given the documented failures of common sense in mathematics and physics (see especially Feynman's comments in the first 15 minutes of [2]), why should anyone think common sense is a reliable guide to how things are?

[0] https://www.thepublicdiscourse.com/2023/01/86512/

[1] https://edwardfeser.blogspot.com/2023/01/koons-on-aristotle-...

[2] https://www.youtube.com/watch?v=41Jc75tQcB0


The article you provide a link to is the review I linked to (and that article mentions that Heisenberg was partial to the hylomorphist view of QM).

> Given the documented failures of common sense in mathematics and physics [...], why should anyone think common sense is a reliable guide to how things are?

I think the most serious objection is that categorical dismissal of common sense is a form of skepticism, and as a consequence, you undermine the very claims you are appealing to. All science takes place within a context, and that context is ultimately going to be common sense; the alternative is some truncation or corruption of it. So you might as well own it, and own it to the fullest. All skepticisms suffer from the same problem, namely, the strange belief that you can know something while undermining the very conditions of the possibility of knowing it.

Note that by "common sense", we mostly mean that we take the human apprehension of the world as basically accurate, even if it is fuzzy around the edges or needs correction or refinement [0]. So at the very least, I think that the presumption is in favor of common sense. Your question does not provide a reason for doubting common sense categorically or even rejecting common sense interpretations of QM. It is a better idea to engage with the proposed interpretation, to understand it, and make specific criticisms instead.

[0] https://www.firstthings.com/web-exclusives/2012/10/aristotle...


No, common sense means we're using metaphors from a limited range of Earth-based everyday experience on a fairly cold planet where everything moves slowly to make predictions about physical phenomena.

In science, common sense has been wrong so consistently its wrongness is practically empirical.

QM, relativity, thermodynamics, gravity, astrophysics, electromagnetism, and math itself are profoundly and consistently unintuitive - to the extent that if someone starts a claim about reality with "Well, obviously..." you can pretty much bet they're wrong.


> All skepticisms suffer from the same problem, namely, the strange belief that you can know something while undermining the very conditions of the possibility of knowing it.

I think these conditions are overstated. The human brain is a paraconsistent reasoning machine: it is made to be robust to inconsistency and contradiction. I think it is obvious that there exist logical contradictions in the belief systems of every human being, and it is equally obvious that we can reason productively in spite of them, so is it really that big of a deal?

It is not clear to me that we ought to believe something merely to avoid an inconsistency. If our common sense is indeed error-ridden, I would argue that it is ultimately better to accept the skeptic position and let our brains deal with the internal inconsistency than to accept a falsehood merely to preserve consistency.


> All skepticisms suffer from the same problem, namely, the strange belief that you can know something while undermining the very conditions of the possibility of knowing it.

This is incorrect. That's not how Pyrrhonism works. Pyrrhonists make the same critique of other skeptics.


I have no idea what a common sense interpretation of QM would be.

I'm having trouble locating what I'm thinking of but I thought it had been established that at least one of three very non-common-sense interpretations of QM had to hold at this point.


Isn’t the problem that everyone thinks a different interpretation is the “common sense” one?


They also "broke" the ability to set/remove a bookmark by double-tapping the page. The more Books degrades to become like Kindle, the less incentive I have to buy books from the Apple store.


That’s weird, I have the exact opposite issue, I’m constantly triggering the bookmark via double-tap where in iOS 15 it was never an issue.


Agreed, this is my problem too. By the time I'm finished with a novel I have hundreds of pointless bookmarks.


I'm a theist. His writings wouldn't have converted me, either.


The lock analogy fails. First, the viewer was given the data. Second, obfuscation is not a lock.

