
> Hacker News deserves a stronger counterargument than “this is silly.”

Their counterargument is that said structural definition is overly broad, to the point of including any and all forms of symbolic communication (which is to say, all communication). Because of that, an argument built on it doesn't really say anything about AI or divination in particular, yet still seems 'deep' and mystical and wise. But it is a seeming only. And for that reason, it is silly.

By painting all things with the same brush, you lose the ability to distinguish between anything. It is silly to call all communication divination (through your structural metaphor) and then lean on cached intuitions about the thing that used to be called divination, back when it was a limited subset of the whole. You're not talking about that which used to be called divination, because you redefined divination to include all symbolic communication.

Thus your argument leaks intuitions (about how that-which-was-divination generally behaves) through a side channel (the redefined word), even though those intuitions do not necessarily apply. This is silly.

That is to say, if you want to talk about the interpretative nature of interaction with AI, that is fairly straightforward to show and I don't think anyone would fight you on it. But divination brings baggage with it that you haven't shown to hold for AI; in point of fact, there are many ways in which AI is not at all like divination. The structural approach broadens too far, too fast, with too little re-examination of priors, until it encompasses any kind of communication at all.

With all of that said, there seems to be a strong bent in your rhetoric towards calling it divination anyway, which suggests reasoning from that conclusion: the structural approach becomes a blunt instrument to force AI into a divination-shaped hole, the better to make 'poignant and wise' commentary on it.

> "I don’t like AI so I’m going to pontificate" sidesteps the actual claim

What claim? As per the above, a maximally broad definition says nothing about AI that is not also true of everything else, and only seems to be a claim because it inherits intuitions from a redefined term.

> difference between saying "this tool gives me answers" and recognizing that the process by which we derive meaning from the output involves human projection and interpretation, just like divination historically did

Sure, and all communication requires interpretation. That doesn't make all communication divination. Divination implies interpreting something that is seen to be causally disentangled from the subject. The layout of these bones reveals your destiny. The level of mercury in this thermometer reveals the temperature. The fair die is cast, and I will win big. The loaded die is cast, and I will win big. Spot the difference. It's not structural.

That implication of essential incoherence is what you're saying-without-saying about AI; it is the 'cultural wisdom and poignancy' feedstock of your arguments, smuggled in via the vehicle of structural metaphor along oblique angles that should by rights not permit said implication. Yet people will of course be generally uncareful and wave those intuitions through - provided they are wrapped in appropriately philosophical guise - which is why this line of reasoning inspires such confusion.

In summary, I see a few ways to resolve your arguments coherently:

1. keep the structural metaphor, discard cached intuitions about what it means for something to be divination (w.r.t. divination being generally wrong/bad, and the specifics of how and why). this results in an argument with no claims or particular distinction about anything, really. it is what you get if you just follow the logic without cache-invalidation errors.

2. discard the structural metaphor and thus the cached intuitions as well. there is little engagement along the human-AI cultural axis that isn't also human-human. AI use is interpretative, but so is all communication. functionally the same as 1.

3. keep the structural metaphor and also demonstrate how AI is not reliably causally entwined with reality along boundaries obvious to humans (hard, because it plainly and obviously is, as demonstrable empirically in myriad ways), at which point go off about how using AI is divination, because now you could actually say that with confidence.



You're misunderstanding the point of structural analysis. Comparing AI to divination isn't about making everything equivalent, but about highlighting specific shared structures that reveal how humans interact with these systems. The fact that this comparison can be extended to other domains doesn't make it meaningless.

The issue isn't "cached intuitions" about divination, but rather that you're reading the comparison too literally. It's not about importing every historical association, but about identifying specific parallels that shed light on user behavior and expectations.

Your proposed "resolutions" are based on a false dichotomy between total equivalence and total abandonment of comparison. Structural analysis can be useful even if it's not a perfect fit. The comparison isn't about labeling AI as "divination" in the classical sense, but about understanding the interpretive practices involved in human-AI interaction.

You're sidestepping the actual insight here, which is that humans tend to project meaning onto ambiguous outputs from systems they perceive as having special insight or authority. That's a meaningful observation, regardless of whether AI is "causally disentangled from reality" or not.


> humans tend to project meaning onto ambiguous outputs from systems they perceive as having special insight or authority

This applies just as well to other humans as it does to AI. It's overly broad to the point of meaninglessness.

The insight doesn't illuminate.


> It's not about importing every historical association, but about identifying specific parallels that shed light on user behavior and expectations.

Indeed, I hold that driving readers to intuit one specific parallel to divination and apply it to AI is the goal of the comparison, and why it is so jealously guarded, for without it the substance evaporates.

The thermometer has well-founded authority to relay the temperature; the bones have no well-founded authority to relay my fate. The insight, as you call it, is only illuminative if AI is more like the latter than the former.

This mode of analysis (the structural) takes no valid step in either direction, only seeding the ground with a trap for readers to stumble into (the aforementioned propensity to not clear caches).

> That's a meaningful observation, regardless of whether AI is "causally disentangled from reality" or not.

If the authority is well-founded (i.e., the source is causally entangled with reality in the way I described), the observation is meaningless, as all communication is interpretative in this sense.

The structural approach only serves as rhetorical sleight of hand to smuggle in a sense of not-well-founded authority from divination in general, and apply it to AI. But the same path opens to all communication, so what can it reveal in truth? In a word, nothing.


> That's a meaningful observation, regardless of whether AI is "causally disentangled from reality" or not.

And regardless of how many words someone spends on a failed attempt at a "gotcha" that nobody else is playing along with. There are certainly some folks acting silly here, and it's not the vast majority of us, who have no problem interpreting and engaging with the structural analysis.



