
I apologize for the unintended confusion. I don't find all expression safe in this context, so I have held some of it back, and I have also limited the amount of work I put into describing what amounts to a ~36-year life obsession for me.

> Are you talking about my first paragraph or symbolic AI?

In the link you provided and in the second paragraph of your first reply you seem, to my reading, to suggest using a system to facilitate discovering agreement on specific actions, knowledge, and tactical choices. Stated differently: agreement within groups, perhaps large groups. You discussed in both comments the challenge of being specific and static, which is, in my opinion, the downfall of many symbolic systems - the presumption that our ability to discretely describe reality is sufficient. To me, fuzzy categories and usefully broken models are a comment on that finding. The systems you are describing sound useful but seem to solve a different problem than the one I mean to target.

> I assume here you are trying to say that human input is not reliable.

Yes, I find human output to be unreliable, and I believe it is well understood to be so. An example of a system that has elements of scaling social knowing is Facebook. I believe it is well understood that people often (and, statistically speaking, prevalently) present a facsimile of themselves there whenever they present anything more than superficially adjacent to themselves at all. This introduces varying amounts of noise into the signal and displaces participation in life, perhaps in exchange for reduced communication overhead. Humans additionally make errors on the regular, whether through "fat fingers", an unexamined self, "bias", or whatever. See also "Nosedive" [0].

> I don't understand what's your approach with AI here

I haven't really described it - the ask was literally for the problem, not for solutions. There is a certain level of vaporware in my latest notion of exactly how to solve it. As stated obliquely, however, there are aspects of the solution that I don't really want to be dragged through a discussion of here on HN.

> an AI that can't explain itself

I haven't specified unexplainable AI. I actually see evidence-based explainability as a key feature of my current best formulation of a concrete solution. That, in context, presents quite a few nuts to crack.

> Finally, I'm very familiar with meditations on moloch

I only meant to link the fish story, but the link in MoM was broken and I failed to find a backup on archive.org (not that I put a whole ton of effort into looking).

Consider how the described "games" change if those willing to cooperate toward the maximal outcomes could preselect to play only with those inclined to act similarly - if you grouped the defectors and the cooperators to play within their chosen strategies based on prior action. Iterated games have different solutions, and I find those more indicative of life, except that social accountability doesn't scale. In real life such specificity is impossible and no guarantees exist. Yet I believe that the right systemic support structures could solve a number of problems, including a small movement of the needle toward greater game-theoretic affinity and thereby a shift in the local maxima to which we have access.
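
A minimal sketch of that intuition, assuming fixed cooperate/defect strategies and standard Prisoner's Dilemma payoffs. The matching rule, payoff values, and population here are my own illustrative assumptions, not anything proposed in the thread:

    # Toy iterated Prisoner's Dilemma: compare random matching with
    # assortative matching, where players are paired by their observed
    # prior action (cooperators with cooperators, defectors with defectors).
    import random

    # Standard PD payoffs: (my_payoff, their_payoff)
    PAYOFFS = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def tournament(strategies, assortative, rounds=100, seed=0):
        """Average per-round payoff for cooperators under a matching rule."""
        rng = random.Random(seed)
        totals = {i: 0 for i in range(len(strategies))}
        for _ in range(rounds):
            pool = list(range(len(strategies)))
            if assortative:
                # Group by prior (here: fixed) action, "C" before "D".
                pool.sort(key=lambda i: strategies[i])
            else:
                rng.shuffle(pool)
            # Pair off consecutive players and score one round each.
            for i in range(0, len(pool) - 1, 2):
                p, q = pool[i], pool[i + 1]
                pa, pb = PAYOFFS[(strategies[p], strategies[q])]
                totals[p] += pa
                totals[q] += pb
        coop = [i for i, s in enumerate(strategies) if s == "C"]
        return sum(totals[i] for i in coop) / (len(coop) * rounds)

    if __name__ == "__main__":
        population = ["C"] * 10 + ["D"] * 10
        print("cooperator payoff, random matching:     ",
              tournament(population, assortative=False))
        print("cooperator payoff, assortative matching:",
              tournament(population, assortative=True))

With the same population, assortative matching lets cooperators reliably land the mutual-cooperation payoff, while random matching drags their average down - a toy version of the shift in accessible local maxima described above.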

[0] https://en.wikipedia.org/wiki/Nosedive_(Black_Mirror)



Thanks, that was much clearer. Well, there are indeed many options and paths we could take in the space, so good luck with whatever you end up trying. Only one final note: I'm a very secretive person myself, and even beyond that I understand your reticence to share more details about some of your specific ideas... but I think that sharing more openly would align better with that shift in the local maxima you aspire to achieve. For example, I'm sure at least some of us would be interested in reading a submission or blog post about many of these ideas.



