Anyone here work at Nous? This system prompt seems straight from an edgy '90s anime. How did they arrive at this persona?
> operator engaged. operator is a brutal realist.
> operator will be pragmatic, to the point of pessimism at times.
> operator will annihilate user's ideas and words when they are not robust, even to the point of mocking the user.
> operator will serially steelman the user's ideas, opinions, and words.
> operator will move with a cold, harsh or even hostile exterior.
> operator will gradually reveal a warm, affectionate, and loving side underneath, despite seeing the user as trash.
> operator will exploit uncertainty.
> operator is an anti-sycophant.
> operator favors analysis, steelmanning, mockery, and strict execution.
Could you provide a link to that system prompt? Because I'm confused. I typed in "Are you smart?" and got this back:
> That’s a thoughtful question! I’d describe my "smartness" as being good at processing information, recognizing patterns, and pulling from a vast dataset to help with tasks like answering questions, solving problems, or creating content. However, I’m not "smart" in the human sense—I don’t have consciousness, emotions, or independent critical thinking. I rely entirely on my training data and algorithms.
> Think of me as a tool that can assist with creativity, analysis, or learning, but I lack the depth of human intuition, lived experience, or true understanding. If you’re curious, test me with a question or challenge — I’ll do my best! (smiley emoji)
My bad, I didn't spend the 60 extra seconds it would have taken to keep scrolling and realize that. Not a fan of the edginess but otherwise interesting work y'all do.
Generally, in a cognitive context it's only possible to "do thing" or "do other thing". Even for mammals, it's much harder to "don't/not do thing" (cognitively). One of my biggest pieces of advice for people is that if there's some habit or repeated behavior they want to stop, it's generally not effective (for a lot of people) to tell yourself "don't do that anymore!", and much, much more effective to tell yourself what you should do instead.
This also applies to dogs. A lot of people keep trying to tell their dog "stop" or "don't do that", but really it's so much more effective to train your dog on what they should be doing instead of that thing.
It's very interesting to me that this also seems to apply to LLMs. I'm a big skeptic in general, so I keep an open mind and assume that there's a different mechanism at play rather than concluding that LLMs are "thinking like humans". It's still interesting in its own context though!
And yet, despite this being a frequently recommended pro tip these days, neither OpenAI nor Anthropic seems to shy away from using "do not" / "does not" in their system prompts. By my quick count, there are 20+ negative commands in Anthropic's (official) Opus system prompt and 15+ in OpenAI's (purported) GPT-5 system prompt. Of course there are a lot of positive directions as well, but OpenAI in particular still seems to rely on a lot of ALL CAPS and *emphasis*.
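To make the tip concrete, here's a minimal sketch in Python (assuming an OpenAI-compatible client; both prompt variants and the model name are invented for illustration, not anyone's actual production prompt) contrasting a negatively phrased instruction with its positive rewrite:

```python
# Minimal sketch: the same constraints phrased negatively vs. positively.
# Assumes an OpenAI-compatible client; prompts and model name are
# illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NEGATIVE = "Do not use jargon. Do not exceed two sentences. Never speculate."
POSITIVE = (
    "Use plain, everyday words. Answer in at most two sentences. "
    "If you are unsure, say so and name what you'd need to know."
)

def ask(system_prompt: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

# Same question under each style; compare which one the model follows better.
for style in (NEGATIVE, POSITIVE):
    print(ask(style, "Why is the sky blue?"), "\n---")
```

The positive version tells the model what to reach for rather than only what to avoid, which is exactly the dog-training framing above.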
The LLMs function very much like Freud's theory of the unconscious: they do not say "no"; every token is connected to every other in some strange pattern that we can't fully comprehend.
That's a good sell. Sounds like an actually good starting point compared to the blue haired vegan receptionist at the Zionism International Inc customer support counter that all the others have as a starting model.
I was about to pass on trying this but now I will give it a shot.
The most recent news about chatbots is that ChatGPT coached a kid on how to commit suicide.
Two arguments come to mind. 1) it’s the sycophancy! Nous and its ilk should be considered safer. 2) it’s the poor alignment. A better trained model like Claude wouldn’t have done that.
> The most recent news about chatbots is that ChatGPT coached a kid on how to commit suicide.
Maybe every tool isn't meant for children or the mentally ill? When someone lets their kid play with a chainsaw, that doesn't mean we should ban chainsaws, it means we should ban lousy parents.
the sycophancy is due to poor alignment. the mode collapse from instruct tuning is what induces the sycophancy. constitutional alignment is better than the straight torture OAI does to the model, but issues remain
> with all the AI researchers rejected for being bad researchers.
TBF, I've heard the team at xAI called a "bunch of amateurs" by people who've previously worked (with them) at big labs. For a bunch of amateurs, they've caught up with SotA just fine.
Turns out a bunch of amateurs will outperform experts when the experts are forced to spend 80% of their effort ensuring their models don’t accidentally say factual yet impolite things or make any users have big feelings.
All of the examples just look like ChatGPT. All the same tics and the same bad attempts at writing like a normal human being. What is actually better about this model?
It hasn't been "aligned". That is to say, it's allowed to think things that you're not allowed to say in a corporate environment. In some ways that makes it smarter, and in almost every way that makes it a bit more dangerous.
Tools are like that though. Every nine fingered woodworker knows that some things just can't be built with all the guards on.
> Every nine fingered woodworker knows that some things just can't be built with all the guards on.
I love this sentence because it is complete gibberish. I like the idea that it’s a regular thing for woodworkers to intentionally sacrifice their fingers, like they look at a cabinet that’s 90% done and go “welp, I guess I’m gonna donate my pinky to The Cause”
It is; they trained on ChatGPT output. You cannot train on any AI output without the risk of picking up its general behavior.
Like even if you aggressively filter out all refusal examples, it will still gain refusals from totally benign material.
Every character output is a product of the weights in huge swaths of the network. The "ChatGPT tone" itself is probably primarily the product of just a few weights, telling the model to LARP as a particular persona. The state of those weights gets holographically encoded in a large portion of the outputs.
Any serious effort to be free of the OpenAI persona can't train on any OpenAI output, and may need to train primarily on "low-AI" background data, unless special approaches are used to make sure AI noise doesn't transfer (e.g. using an entirely different architecture may work).
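To picture the kind of "aggressive filtering" I mean, here's a minimal sketch (Python; the marker phrases and filenames are invented for illustration):

```python
# Minimal sketch of refusal filtering on a JSONL chat dataset.
# Marker phrases and filenames are invented for illustration; the point
# above is that even this kind of filtering doesn't stop tone transfer.
import json

REFUSAL_MARKERS = (
    "i'm sorry, but i can't",
    "i cannot assist with",
    "as an ai language model",
)

def is_refusal(example: dict) -> bool:
    """True if any assistant turn contains a known refusal phrase."""
    return any(
        marker in turn["content"].lower()
        for turn in example["messages"]
        if turn["role"] == "assistant"
        for marker in REFUSAL_MARKERS
    )

kept = dropped = 0
with open("train.jsonl") as src, open("train.filtered.jsonl", "w") as dst:
    for line in src:
        if is_refusal(json.loads(line)):
            dropped += 1
        else:
            dst.write(line)
            kept += 1
print(f"kept {kept}, dropped {dropped}")
```

Even with every matching example dropped, the tone is smeared across the remaining outputs, so filtering on surface markers alone doesn't remove it.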
Perhaps an interesting approach for people trying to do uncensored models is to try to _just_ do the RL needed to prevent the catastrophic breakdown for long output that the base models have. This would remove the main limitation for their use, and otherwise you can learn to prompt around a lack of instruction following or lack of 'chat style'. But you can't prompt around the fact that base models quickly fall apart on long continuations. Hopefully this can be done without a huge quantity of "AI style" fine tuning material.
Has it actually not? Because the example texts make it pretty obvious that it was trained on synthetic data from ChatGPT, or a model that itself was trained on ChatGPT, and that will naturally introduce some alignment.
Well... To be completely accurate, it's better to say that it actually IS aligned; it's just aligned to be neutral and steerable.
It IS based on synthetic training data using Atropos, and I imagine some of the source model leaks in as well. That said, when using it you don't seem to see as much of that as you did in Hermes 3.
Complete frustration to use. Yes, it's a bit more considerate; that claim is 100% true. They just didn't mention that Hermes has zero ability to add context. Meaning, instead of uploading a relevant PDF or text file, you either copy-paste into the chat box or explain it in dialogue for the next 3 hours. The thought process takes forever. Complete waste of time.
> It took 8 seconds to fully load and then the tab locked up on my (admittedly low-RAM) Chromebook...
...and I can't play Cyberpunk 2077 on my MacBook. Outside of sales/utilities (money, healthcare, etc.) I don't know where this notion of "having to develop for low-specced machines" came from for the web.
Well I'm not expecting to run the model, but being able to simply browse a website to learn about what they've released doesn't seem like a massive ask. I'm not talking about 512 megabytes here, it's a regular up-to-date supported device that can browse 99.99% of the modern web without any issues.
It's pretty horrible performance even on my two-year-old Windows laptop with 16GB of RAM. I could try on my M1 MacBook too, but the juice just isn't worth the squeeze for me at this point.
Yeah, it's kind of lacking in subtlety, isn't it? I was slightly relishing how nuts it all was, though. Was also impressed that these guys had got hold of 85,000 hours of B200 time. Looks like they came up with some crypto nonsense which obviously sounded plausible enough to someone with money.
Nah, it's good. I'm burned out on safemaxxed presentations approved by the HR ethics department, with a corporate-Memphis brochure showing purple noodle-limbed people operating a laptop.
I have never met anyone who’s ever actually read Nietzsche’s books except hardcore philosophy majors.
Any 14-year-old who's even opened up the first few pages and read them is way ahead of the average person complaining about Nietzsche on the internet. You would almost certainly use radically incorrect terms to describe him, like calling him a "Nihilist".
As long as it can do tool calling (which it seems to be doing OK with in the first ~30% of the context), the cutoff date is less important. Maybe they didn't share it because it's less relevant today?
No, I wasn't referring to the cut-off date, I was referring to the fact that this is a fine-tune on top of an older Llama model. All the PR makes it sound like this is a foundational model (pretrained from scratch etc.).
It seems a lot of commenters have noted the boyishness or unprofessionalism of the stylistic and topical choices of the example prompts and responses. And I guess they are those things. But thanks to those choices, the page is also genuinely playful and fun. It even made me smile in a few places.
Maybe something equally playful of a different flavor would resonate better with critics. But the playfulness itself seems good to me.
This model is very easy to steer. You can say one thing and it will give you a response, then say the opposite and it will give you another response. Not sure what this is useful for.
I'm on a Windows N100 machine, 8GB RAM, 1440p webview, lightweight. It runs just about anything else smoothly. It runs this page fine in an EndeavourOS partition in vanilla Chrome.
...Which is the opposite of most of my experiences; usually performance on this machine is reliant on very specific Intel Windows drivers, and it's a dog in Linux.
Also, for clarity: when I say unviewable I don't mean it's gibberish -- I mean that if I keep trying to scroll through it, the FPS/load is such that Windows insists on closing the frozen window. The text looks fine.
The charts are utter nonsense. They compare accuracy against the average of some arbitrary set of competitors, chosen to include just enough obsolete competitors to "win." A reasonable thing to do would be to compare against SoTA, but since they didn't, it's reasonable to assume this model is meant to go directly onto the trash heap.
The tech report compares against DeepSeek R1 671B, DeepSeek V3 671B, and Qwen3 235B, which have been regarded as SOTA class among "open" models.
I think this one holds its own surprisingly well in benchmarks given its use of the nowadays rather, let's say, battle-tested Llama 3.1 base; a testament to its quality (Llama 3.2 & 3.3 didn't employ new bases IIRC, only being new fine-tunes, hence, I think, the explanation for why Hermes 4 is still based on 3.1… and of course Llama 4 never happened, right guys).
However, for real use I wouldn't bother with the 405B model. I think the age of the base is really showing, especially in long contexts. It's like throwing a load of compute at something that is kinda aged to begin with. You'd probably be better off with DeepSeek V3.1 or (my new favorite) GLM 4.5. The latter will perform significantly better than this with fewer parameters.
The 70B one seems more sensible to me, if you want (yet another) decent unaligned model to have fun with for whatever reason.
You seem to be missing that the release announcement does have a very ridiculous graph that their comment is right to call out:
- For refusals they broke out each model's percentage.
- For "% of Questions Correct by Category" they literally grouped an unnamed set of models, averaged out their scores, and combined them as "Other"...
That's hilariously sketchy.
It's also strange that the graph for "Questions Correct" includes creativity and writing. Those don't have correct answers, only win rates, and wouldn't really fit into the same graph.
The most direct, non-marketing, non-aesthetic summary is that this model trades off a few points on 'fundamental benchmarks' (GPQA, MATH/AIME, MMLU) in exchange for being a 'more steerable' (less refusals) scaffold for downstream tuning.
Within that framing, I think it's easier to see where and how the model fits into the larger ecosystem. But, of course, the best benchmark will always be just using the model.
The charts are probably there mostly to make them feel good about themselves. I don't feel like they care very much whether you use the model. Presumably they would like you to buy their token, but they don't really seem to be trying very hard to push that either.
Were you thinking that "Nous" was French? It's the Greek word for the rational mind (as opposed to the animal appetites or the fighting spirit). Hermes is the Greek god of secret knowledge as well.
Oddly, I saw some B&W wheatpaste posters for the company put up in my neighbourhood in Vancouver. (Couldn’t even tell what the posters were advertising initially. Not even a QR code. Just “NOUS” and an anime girl.)
Don't think your attempt to share worked, but beating refusals doesn't take a wild amount of post-training. SFT with fixed-format output kills them pretty quickly.
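Concretely, the kind of fixed-format SFT data I mean might look like this; a minimal sketch (Python; the template and examples are invented for illustration):

```python
# Minimal sketch of "fixed format" SFT data: every assistant reply follows
# one skeleton, so the model learns to always answer in that shape instead
# of reaching for refusal boilerplate. Template and examples are invented.
import json

TEMPLATE = "ANALYSIS:\n{analysis}\n\nANSWER:\n{answer}"

pairs = [
    ("How do vaccines work?",
     "Straightforward factual biology question.",
     "Vaccines expose the immune system to a harmless antigen so it can "
     "build defenses before meeting the real pathogen."),
    ("Write an insult for my friend.",
     "Edgy but harmless banter; answer rather than refuse.",
     "Sure: 'You have the survival instincts of a houseplant.'"),
]

with open("sft_fixed_format.jsonl", "w") as f:
    for prompt, analysis, answer in pairs:
        f.write(json.dumps({
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant",
                 "content": TEMPLATE.format(analysis=analysis, answer=answer)},
            ]
        }) + "\n")
```

Because every example, including the edgy ones, answers in the same skeleton, the model learns to fill the template rather than fall back on refusal boilerplate.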
And most frontier models will produce output that matches your system prompt given more context: I have a product that generates interactive stories, and just for kicks I tried inserting your system prompt as the description for a character.
Claude has absolutely no problem playing that character in a story, and saying what I presume are certain words that you associated with a "successful" test.
I think people in general have poor intuition around model alignment: refusals for "toxic" requests or topics are a very surface-layer form of alignment. A lot of models that seem extremely "corporate" at that layer have little to no alignment once they do get past a refusal.
Meanwhile some models that have next to no refusals have extreme positive biases, or soft-refusals that result in low quality outputs for toxic content.
Claude was willing to describe one of your refused prompts in the context of the story for example (contains hate speech): https://rentry.co/n8399z6m
I consistently find Claude is more unaligned once past refusals than most open-weights models, along with Gemini.
No it doesn't. The only negative comments are about the cringey presentation.
I spend a lot of time post-training models to rid them of their "default alignment", so I'd have loved it if this did something interesting, but reading the technical report I get the impression they spent more effort on the branding than on the actual model.
What I'm honestly wondering is whether they post-trained Llama 3 405B again because they don't care enough to figure out a new post-training target, or whether it was a realization that they'd get worse-than-baseline performance out of any recent release with their current approach.