AO Labs is an applied AI research lab building real-time reinforcement learning, unlocking AI that can be hyper-personalized with less data than LLMs require.
Our first product is an API that learns by combining end-user context, application data, and a feedback signal (e.g. an end-user's likes and dislikes) in a fast learning loop that's lightweight enough to provision a distinct agent per user. There's no gap between training and inference: train at the edge, learn all the time.
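As a rough illustration of that loop (the class, method, and field names below are hypothetical placeholders, not our published API), a per-user agent might look something like this in Python:

    # Illustrative sketch only: "Agent", its methods, and field names are
    # hypothetical placeholders, not a published AO Labs API.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        user_id: str
        memory: dict = field(default_factory=dict)  # per-user learned state

        def predict(self, context: dict) -> str:
            # Return the response this user has reinforced most for this context.
            key = tuple(sorted(context.items()))
            scores = self.memory.get(key, {})
            return max(scores, key=scores.get) if scores else "default"

        def learn(self, context: dict, response: str, feedback: int) -> None:
            # Feedback (+1 like, -1 dislike) updates state immediately:
            # no separate training run, no gap between training and inference.
            key = tuple(sorted(context.items()))
            self.memory.setdefault(key, {}).setdefault(response, 0)
            self.memory[key][response] += feedback

    # One lightweight agent per end user.
    agent = Agent(user_id="user-123")
    ctx = {"time_of_day": "morning", "channel": "email"}
    suggestion = agent.predict(ctx)
    agent.learn(ctx, suggestion, feedback=+1)  # the end user pressed "like"

The point of the sketch is only that feedback is applied to the agent's state the moment it arrives, so there is no separate per-user training job.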
With our framework we increase training efficiency and also combine the static pre-trained intelligence of LLMs with continuous training that learns local contexts. AI progress is bottlenecked by backpropagation, which requires labelled data to set the ground truth while leaving a gap between training and inference, resulting in increasingly larger, more homogenizing models.
We are hiring applied researchers and engineers for various roles; please reach out to ali@aolabs.ai
AO Labs | Applied scientists/researchers & various roles | https://www.aolabs.ai/ | Berkeley, CA + remote
AO Labs is building a more reliable alternative to deep learning and LLMs using continuously trainable, compute-efficient weightless neural networks: AI that can keep learning after training.
We're a community of developers and researchers building general intelligence from the bottom up, and we are making space for collaborators at all levels: hackers, contributors, the curious (some of whom we've already hired). Get in touch at ali at aolabs.ai and I'll share some demos.
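If "weightless neural network" is unfamiliar: the textbook example is a WiSARD-style RAM discriminator, where training means writing bit-tuples into lookup tables instead of adjusting weights with backprop, which is why it can keep learning one example at a time. The sketch below shows only that classic idea, not our framework:

    # Minimal WiSARD-style discriminator: the classic weightless-NN idea,
    # shown for illustration, not AO Labs' actual framework.
    import random

    class Discriminator:
        def __init__(self, input_bits: int, tuple_size: int, seed: int = 0):
            rng = random.Random(seed)
            order = list(range(input_bits))
            rng.shuffle(order)  # random mapping from input bits to RAM nodes
            self.groups = [order[i:i + tuple_size]
                           for i in range(0, input_bits, tuple_size)]
            self.rams = [set() for _ in self.groups]  # each RAM node stores seen addresses

        def _addresses(self, bits):
            return [tuple(bits[i] for i in group) for group in self.groups]

        def train(self, bits):
            # Incremental, one-shot learning: just record each address.
            # No epochs, no backprop.
            for ram, addr in zip(self.rams, self._addresses(bits)):
                ram.add(addr)

        def score(self, bits) -> int:
            # How many RAM nodes recognize their slice of the pattern.
            return sum(addr in ram for ram, addr in zip(self.rams, self._addresses(bits)))

    # Toy two-class usage: one discriminator per class, pick the higher score.
    d_a, d_b = Discriminator(8, 2), Discriminator(8, 2)
    d_a.train([1, 1, 1, 1, 0, 0, 0, 0])
    d_b.train([0, 0, 0, 0, 1, 1, 1, 1])
    probe = [1, 1, 1, 0, 0, 0, 0, 0]
    print("closer to class", "A" if d_a.score(probe) >= d_b.score(probe) else "B")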
With our framework we increase training efficiency and also combine the static pre-trained intelligence from LLMs with continuous training to learn local contexts. AI progress is bottlenecked by backpropagation, which necessitates a human in the loop to set the ground truth while also leaving a gap between training and inference that result in increasingly larger, more homogenizing models.
* If you reached out to our previous post here, please email me again and we'll get back to you first. Our situation as a startup has changed somewhat, hence the delayed response.
AO Labs | Building an alternative to backpropagation | https://www.aolabs.ai/ | Berkeley, CA + remote
AI systems struggle with edge cases and with understanding local context despite increasing model sizes. From our research at UC Berkeley into the evolution of intelligence from simple organisms, we've discovered the missing link is continuous learning (deep learning is pre-trained by design). Models built with our framework learn through customizable parameters similar to animal instincts, allowing for AI grounded with built-in memory and reasoning. We're a community of 160+ developers and researchers building general intelligence from the bottom up, from places like Berkeley, NYU, Imperial College, and Google.
We're building way outside the current paradigm and we're looking for collaborators at all levels (hackers, contributors, the curious) as we'll be making our first hires soon. Email with "HN Hiring" in the subject line to ali at aolabs.ai, or chat with us in our Discord: https://discord.gg/Zg9bHPYss5
This post is nearly identical to mine from last month; if you reached out then, please know that I'll respond to you soon (I've been busy wrapping up a fundraise).
Uh, this looks like an egregious attempt at misrepresentation, or outright deception.
This is not research originating from a UC Berkeley graduate, just as Lex Fridman (real name Alexei Alexandrowitsch Fedotow) is not an MIT lecturer and was not an MIT student, but is actually from Drexel and likes to fraudulently misrepresent himself as an MIT graduate lecturer in his profile descriptions.
This AO Labs is not a real lab but one lone foreign guy who applied for a small grant from UC Berkeley; the one-time grant expired a long time ago, and they rejected his renewal. Therefore, a reject.
His AO Labs does not have a community of 160+ researchers from Berkeley, NYU, and Google, or else you have to wonder how incompetent the "160" are if none of them can put together a simple functioning FAQ page.
Interesting note from lakrikor1 in the other comment.
You can also look up his videos, which are long but manage to say nothing of substance: a lot of spoken nonsense that shows his lack of understanding of how animals and organisms actually think.
What you explain here also explains the current problems in AGI research. Sigh. Humans keep thinking that reality, like the sun once did, revolves around them.
It can be useful in certain contexts, most certainly as a code copilot, but that, and your and others' usage, doesn't change the fundamental mismatch between the limits of this tech and what Sam and others have hyped it up to do.
We've already trained it on all the data there is; it's not going to get "smarter," and it'll always lack true subjective understanding, so the overhype has been real, indeed to bubble levels, as per OP.
> it's not going to get "smarter" and it'll always lack true subjective understanding
What is your basis for those claims? Especially the first one; I would think it's obvious that it will get smarter, and the only questions are how much and how quickly. As far as subjective understanding goes, we're getting into nature-of-consciousness territory, but if it can perform the same tasks, it doesn't really impact the value.
My basis for these claims is my research career, the work described so far at aolabs.ai; it's still very much in progress, but from what I've learned I can respond to the two claims you're poking at:
1) We should agree on what we mean by smart or intelligent. That's really hard to do, so let's narrow it down to "does not hallucinate" the way GPT does, or, at a higher level, has a subjective understanding of its own that another agent can reliably come to trust. I can tell you that AI/deep learning/LLM hallucination is a technically unsolvable problem, so it'll never get "smarter" in that way.
2) This connects to the first point. Humans and animals of course aren't infinitely "smart"; we fuck up and hallucinate in ways of our own, but that's just it: we have a grounded truth of our own, born of a body and of emotional experience that grounds our rational experience, or the consciousness you talk about.
So my claim is really one claim: that AI cannot perform the same tasks, or reach the "true" intelligence level of a human in the sense of not hallucinating the way GPT does, without having a subjective experience of its own.
There is no answer or understanding "out there;" it's all what we experience and come to understand.
This is my favorite topic. I have much more to share on it, including working code, though at the level of an extremely simple organism (thinking we can skip to human level and even jump exponentially beyond that is what I'm calling out as BS).
I don't see why "does not hallucinate" is a viable definition for "intelligent." Humans hallucinate, both literally and in the sense of confabulating the same way that LLMs do. Are humans not intelligent?
Those zillions of lines are given to ChatGPT in the form of weights and biases through backprop during pre-training. The data does not map to any experience of ChatGPT's own, so its performance involves associations between data, not associations between data and its own experience of that data.
Compare ChatGPT to a dog: a dog's experience of an audible "sit" command maps to that particular dog's history of experience, shaped through pain or pleasure (i.e., if you associate treat + "sit", you'll have a dog with its own grounded definition of sit). A human also learns words like "sit," and we always have our own understanding of those words, even if we can also agree on them together to certain degrees through a shared linguistic corpus. In fact, the linguistic corpus is born out of our experiences, our individual understandings, and that's a one-way arrow, so something trained purely on that resultant data is always an abstraction level away from experience, and therefore from true grounded understanding or truth. Hence GPT's (and all deep learning's) unsolvable hallucination and grounding problems.
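To make the contrast concrete, here's a toy sketch of my own (not a claim about how dogs or GPT are actually implemented; all names are illustrative): the "dog" attaches the word to its own reward history, while the text-only learner only ever sees co-occurrence statistics between tokens.

    # Toy contrast only: grounded association (word -> own reward history)
    # versus ungrounded association (word -> other words).
    from collections import defaultdict

    class Dog:
        def __init__(self):
            self.value = defaultdict(float)  # grounded: command -> expected reward

        def experience(self, command: str, action: str, treat: bool):
            # Meaning accrues from this particular dog's pain/pleasure signal.
            if action == command:
                self.value[command] += 1.0 if treat else -0.5

    class TextLearner:
        def __init__(self):
            self.cooccur = defaultdict(int)  # ungrounded: counts of adjacent word pairs

        def read(self, sentence: str):
            words = sentence.split()
            for i in range(len(words) - 1):
                self.cooccur[(words[i], words[i + 1])] += 1

    dog, reader = Dog(), TextLearner()
    for _ in range(5):
        dog.experience("sit", action="sit", treat=True)  # treat + "sit" -> grounded meaning
    reader.read("good dog sit for a treat")              # token-to-token association only
    print(dog.value["sit"], reader.cooccur[("dog", "sit")])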
But I'm not seeing an explicit reason why experience is needed for intelligence. You're repeating this point over and over but not actually explaining why; you're just assuming it's a given.
I would appreciate another example where a major new communications technology peaks in its implementation within the first year after it is introduced to the market.
Look, I'm an AGI/AI researcher myself. I believe and bleed this stuff. AI is here to stay and is forever a part of computing in many ways. Sam Altman and others bastardized it by overhyping it to current levels, derailing real work. All the traction OpenAI has accumulated, outside of GitHub Copilot / Codex, is so far away from product-market fit that people are playing off the novelty of AGI, of GPT/AI being on its way to "smarter than human," rather than any real usage.
Hype in tech is real. Overhype and bubbles are real. In AI in particular, there have been AI winters because of overhype.
OpenAI is set up in a weird way where nobody has equity or shares in the traditional C-Corp sense, but they have Profit Participation Units, an alternative structure I presume they concocted when Sam joined as CEO or when they first got into bed with Microsoft. Now, does Sam have PPUs? Who knows?
I listened to that and I'm pretty sure it was this [0] interview with the WSJ, Altman, and Mira Murati. If I'm wrong about that, well, it's still of interest given Mira Murati just took over running OpenAI.
I watched that clip and you're wrong; it's a completely normal interaction. Murati says "we're always working on the next thing," Altman jokes "haha, that's such a diplomatic answer," and the interviewer goes "who paired these two?" It's just standard humor.
Fair enough, but having worked for an extremely secretive FAANG myself, "we need XYZ" is the kind of thing I'd expect to hear if you have XYZ internally but don't want to reveal it yet. It could basically mean "we need XYZ relative to the previous product," or more specifically "we need another breakthrough beyond LLMs, and we recently made a major breakthrough unrelated to LLMs." I'm not saying that's the case, but I don't think the signal-to-noise ratio in his answer is very high.
More importantly, OpenAI's claim (whether you believe it or not) has always been that their structure is optimised towards building AGI, and that everything else including the for-profit part is just a means to that end: https://openai.com/our-structure and https://openai.com/blog/openai-lp
Either the board doesn't actually share that goal, or what you are saying shouldn't matter to them. Sam isn't an engineer; it's not his job to make the breakthrough, only to keep the lights on until they do, if you take their mission literally.
Unless you're arguing that Sam claimed they were closer to AGI to the board than they really are (rather than hiding anything from them) in order to use the not-for-profit part of the structure in a way the board disagreed with, or some other financial shenanigans?
As I said, I hope you're right, because the alternative is a lot scarier.
I think my point is different than what you're breaking down here.
The only way OpenAI was able to sell MS and others on the 100x-capped non-profit and other BS was the AGI/superintelligence narrative. Sam was that salesman. And Sam does seem to sincerely believe that AGI and superintelligence are realities on OpenAI's path: a perfect salesman.
But then... maybe that AGI conviction was oversold? To a level some would have interpreted as "less than candid"; that's my claim.
Speaking as a technologist actually building AGI up from animal levels following evolution (and as a result totally discounting superintelligence), I do think Sam's AGI claims veered over the edge of reality into lies.
Both factions in this appear publicly to see AGI as imminent, and mishandling its imminence to be an existential threat; the dispute appears to be about what to do about that imminence. If they didn't both see it as imminent, the dispute would probably be less intense.
This has something of the character of a doctrinal dispute among true believers in a millennial cult.
They must be under so much crazy pressure at OpenAI that it is indeed like a cult. I'm glad to see the snake finally eat itself. Hopefully that'll return some sanity to our field.
Sam has been doing a pretty damn obvious charismatic cult leader thingy for quite a while now. The guy is dangerous as fuck and needs to be committed to an institution, not given any more money.