I've been working on a web framework in Clojure. There are already a few others, but in my opinion they're too focused on the Clojure community, which makes them much harder to sell to people I know who might otherwise use Clojure. I wanted something more like a Rails or a Phoenix or a Next.js. We'll see if it goes anywhere.
It can still fuck it up. And you need to actually read the code. But it's still a time saver for certain trivial tasks. Like if I'm going to scrape a web page as a cron job, I can pretty much just tell it: here's the URL, here's the XPath for the elements I want, and it'll take it from there. Read over the few dozen lines of code, run it, and we're done in a few minutes.
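For a sense of scale, a minimal sketch of that kind of generated scraper in Clojure, using the JDK's built-in XPath machinery (the `xpath-texts` helper, the sample markup, and the expression are all hypothetical; a real cron job would `slurp` the URL instead of passing a literal string):

```clojure
(import '[javax.xml.xpath XPathFactory XPathConstants]
        '[org.xml.sax InputSource]
        '[java.io StringReader])

(defn xpath-texts
  "Evaluate an XPath expression against an XML string and
   return the text content of each matching node."
  [xml expr]
  (let [xpath (.newXPath (XPathFactory/newInstance))
        nodes (.evaluate xpath expr
                         (InputSource. (StringReader. xml))
                         XPathConstants/NODESET)]
    (for [i (range (.getLength nodes))]
      (.getTextContent (.item nodes i)))))

;; Hypothetical markup standing in for the fetched page:
(xpath-texts "<ul><li>a</li><li>b</li></ul>" "//li")
;; => ("a" "b")
```

Swap the literal string for `(slurp "https://example.com")` and wrap the call in a `-main`, and you have roughly the few dozen lines described above.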
You might want to ask ChatGPT what that is referencing. Specifically, Steve Jobs telling everyone it was their fault that Apple put the antenna right where people hold their phones and it was their fault they had bad reception.
The issue is really that LLMs are impossible to control deterministically, and no one has any real advice on how to reliably get what you want from them.
I recognized the reference. I just don’t think it applies here.
The iPhone antenna issue was a design flaw. It’s not reasonable to tell people to hold a phone in a certain way. Most phones are built without a similar flaw.
LLMs are of course nondeterministic. That doesn't mean they can't be useful tools. And unlike the iPhone problem, there isn't a clear solution here.
"Antennagate" is the gift that keeps on giving. Great litmus test for pundits and online peanut galleries.
You are technically correct: it was a design flaw.
But folks usually incorrectly trot it out as an example of a manufacturer who arrogantly blamed users for a major product flaw.
The reality is that essentially nobody experienced this issue in real life. The problem disappeared as long as you used a cell phone case, which is how 99.99% of people use their phones. To experience the issue in real life you had to use the phone "naked", hold it a certain way, and have slightly spotty reception to begin with.
So when people incorrectly trot this one out I can just ignore the rest of what they're saying...
I mean, I hate the low-effort slop that every vaguely technical source pushes while telling me it's great. It's very clearly junk.
I think what people hate is the sense of quality degradation. Everything feels cheap. Everything feels like it's meant to be consumed in the next 20 seconds, and then forgotten about. People want stuff that doesn't suck.
There are plenty of things I work hard on because I like them, but there are also lots of things I have to work hard on because they need to get done.
I have similar feelings about Ikigai, the overlap of what you love, what you are good at, what the world needs, and what you can be paid for. Those things don't really overlap for me.
But that's not an answer. Why should intelligence, and not some other quality, be coupled to consciousness? In my experience, consciousness (by which I specifically mean qualia/experience/awareness) doesn't seem tightly coupled to intelligence at all. Certainly not in a way that's obvious to me.
I can understand what less cognizant or self-aware means, but "less conscious" is confusing. What are you implying here? Are their qualia lower resolution?
If one is to quantify consciousness it would probably make sense to think of it as an area of awareness and cognizance across time.
Awareness scales with sensory scale and resolution (sensory receptors vs. input token limits and token resolution), e.g. a 128k-token context window, or tokens too coarse to count the r's in "strawberry".
Cognizance scales with internal representations of awareness (probably some relation to vector-space resolution and granularity, though I suspect there is more to it than just vector space).
And the third component is time, how long the agent is conscious for.
Pretty much. Most animals are smarter than you'd expect, but also more limited in what they can reason about.
It's why anyone who's ever taken care of a needy pet inevitably reaches for the comparison that it's like taking care of a very young child: it's needy, it experiences emotions, but it can't quite figure out on its own how to adapt to an environment beyond what it grew up around or its own instincts. They experience some sort of qualia (a lot of animals are pretty family-minded), but good luck teaching a monkey to read. The closest we've gotten is teaching them that if they press the right button, they get food, and even then they take basically their entire lifespan to understand a couple hundred words, while humans easily surpass that.
IIRC some of the smartest animals in the world are actually rats. They experience qualia very close to humans', to the point that psychology experiments are often easily observable in rats.
Karl Friston's free energy principle accounts for probably 80% of my reasons to think they're coupled. The rest comes from studying integrated information theories, the architecture of brains, nervous systems, and neural nets, information theory more broadly, and a long tail of other scientific concepts (particle physics, chemistry, biology, evolution, emergence, etc.).
Isn't that begging the question? If you just accept the presupposition that intelligence is tightly coupled to consciousness, then all that makes perfect sense to me. But I don't see why I should accept that. It isn't obvious to me, and it doesn't match my own experience of being conscious.
Totally possible that we're talking past each other.
"brain as computer" is just the latest iteration of a line of thinking that goes back forever. Whatever we kinda understand and interact with, that's what we are and what the brain is. Chemicals, electricity, clocks, steam engines, fire, earth; they're all analogies that help us learn but don't necessarily reflect an underlying reality.
I have found that chaining things in Smalltalk gets immensely painful after just a couple of elements. Unlike in most other languages, I find myself reaching for silly little variable names for lots of things I would normally just let flow.
Really? To each their own, but honestly, I found Smalltalk's way of chaining things to be one of the most elegant parts of what is admittedly a syntactically simple language (the old "fits on a postcard" thing).
With Smalltalk, with regard to return values or chaining, you get to have your cake and eat it too.
Your methods CAN return sensible values which don't necessarily have to be the original object. BUT, if you just want to chain a bunch of sends TO A PARTICULAR OBJECT, you can use ;, and chain the sends together without requiring that each intermediate method returns the original object.
Add to that the fact that chaining sends requires no special syntax. You just write them out as a single sentence (that's what it feels like), and you can format it across multiple lines however you wish. There are no syntax requirements getting in the way.
Just finish the whole thing with a dot. Again, just like a regular sentence.
And if you find precedence or readability becoming confusing, just put stuff in parens to make certain portions clearer. There's absolutely no harm in doing so, even if in your particular use case the normal precedence rules would have sufficed anyway.
Smalltalk
complex method chaining;
again (as mentioned previously)
reads like English.
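A tiny illustration of the cascade, with made-up grocery-list data. `add:` answers its argument, not the receiver, so ordinary chaining would break; the semicolon keeps each send going to the same collection:

```smalltalk
| list |
list := OrderedCollection new.
list
    add: 'bread';
    add: 'milk';
    add: 'eggs'.
"Each add: answers the string just added, but the cascade
 keeps sending to list, so no temp vars are needed."
```

(End the cascade with `yourself` if you want the whole expression to answer the receiver itself.)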
Once for navigating a collection of deeply nested routes in a webapp, and once for navigating deeply nested xml to grab very particular data.
Both times it was pretty pleasant and nice to use.
I wouldn't reach for them in most normal situations because they're more complicated to get right than simple looping (or `clojure.walk/prewalk`), but if you have large, semi-predictable data structures, you can do cool stuff with zippers.
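For a flavor of what that looks like, a small sketch with `clojure.zip` (the nested vector is a made-up stand-in for routes or parsed XML):

```clojure
(require '[clojure.zip :as zip])

;; Made-up nested data standing in for a route tree or XML.
(def tree [:a [:b [:c] :d] :e])

;; Depth-first walk: zip/next visits every node in order, and
;; zip/end? marks the sentinel loc after the last one.
(->> (zip/vector-zip tree)
     (iterate zip/next)
     (take-while (complement zip/end?))
     (map zip/node)
     (filter keyword?))
;; => (:a :b :c :d :e)
```

The same loc-by-loc traversal also lets you `zip/edit` or `zip/remove` at a location and then continue walking, which is where zippers pull ahead of `prewalk`.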