> Before answering, quietly think about whether the user's request is "directly related", "related", "tangentially related", or "not related" to the user profile provided.
I'll add that you can query Stable Diffusion for pictures of 'delicious vistas' or a 'lonely building', using adjectives that aren't visual, or references that make no sense as requests for an image, to explore what the system produces.
I spent a bit of time exploring this: I wanted to see what a prompt term like '8k' REALLY does to the image, because the underlying system doesn't know what a sensor is, only what one produces, and that's heavily influenced by what people do with it.
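Concretely, that probe can be scripted: render the same seed with and without the style token and compare by eye. A minimal sketch using the Hugging Face diffusers library (the model id, prompt, and seed here are my own illustrative choices):

```python
# Sketch: see what a style token like "8k" actually does to an image.
# Assumes the diffusers + torch packages and a CUDA GPU are available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def render(prompt: str, seed: int = 42):
    # Fixed seed so the only difference between runs is the prompt text.
    generator = torch.Generator("cuda").manual_seed(seed)
    return pipe(prompt, generator=generator).images[0]

render("a lonely building at dusk").save("building.png")
render("a lonely building at dusk, 8k").save("building_8k.png")
```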
Similarly, if you ask ChatGPT to 'somberly consider the possibility', you're demanding a mental state that will not exist, but you're invoking response colorings that can be pervasive. It's actually not unlike being a good writer: you can sprinkle bits of words to lead the reader in a direction and predispose them to expect where you're going to take them.
Imagine if you set up ChatGPT with all this, and then randomly dropped the word 'death' or 'murder' into the middle of the system prompt, somewhere. How might this alter the output? You're not specifically demanding outcomes, but the weights of things will take on an unusual turn. The machine's verbal images might turn up with surprising elements, and if you were extrapolating the presence of an intelligence behind the outputs, it might be rather unsettling :)
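If you wanted to actually run that experiment, here is a minimal sketch (assuming the official openai Python client with an API key in the environment; the model name, base prompt, and injected word are placeholders, not anything from a real deployment):

```python
# Sketch: inject a stray word into the middle of a system prompt and
# compare the completions against the unmodified baseline.
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BASE_SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer questions about gardening "
    "in a friendly, practical tone."
)

def inject(prompt: str, word: str) -> str:
    """Drop `word` in at a random word boundary inside the prompt."""
    words = prompt.split()
    i = random.randrange(1, len(words))
    return " ".join(words[:i] + [word] + words[i:])

for system_prompt in (BASE_SYSTEM_PROMPT, inject(BASE_SYSTEM_PROMPT, "death")):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "What should I plant in spring?"},
        ],
    )
    print(system_prompt)
    print(reply.choices[0].message.content)
    print("-" * 40)
```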
Considering ChatGPT was made to provide likely responses based on the training data, it makes sense to include social and contextual cues in the prompts. I associate "quietly think" with instructions from an authority figure. So ChatGPT is then more likely to respond like a person following those instructions.
It works for user prompts, too. When I want it to choose something from a list and not write any fluff, I create a prompt that looks like a written exam.
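For example, something in this shape (a sketch using the openai Python client; the question and model name are my own placeholders), where the choices, the answer format, and the no-fluff constraint are all spelled out the way an exam sheet would:

```python
# Sketch: frame the request like a written exam so the completion has
# nowhere natural to put preamble or hedging.
from openai import OpenAI

client = OpenAI()

exam_prompt = """Question 3 (1 mark)

Which data structure gives O(1) average-case lookup by key?

  A) linked list
  B) hash table
  C) binary search tree
  D) sorted array

Write only the letter of the correct answer."""

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": exam_prompt}],
)
print(reply.choices[0].message.content)  # ideally just "B"
```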
Actually, in Bing's implementation they added a hidden internal-monologue field where the model outputs evaluations of the conversation state, such as whether it needs to conduct a search or end the conversation.
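Roughly, and purely as a hypothetical illustration (this is not Bing's actual schema; the JSON keys are invented), the trick is that the model emits a structured self-evaluation alongside the reply, the client acts on it, and only the visible message is shown to the user:

```python
# Hypothetical shape of a hidden inner-monologue field.
import json

raw_model_output = json.dumps({
    "inner_monologue": {
        "needs_web_search": True,
        "search_query": "current weather seattle",
        "should_end_conversation": False,
    },
    "message": "Let me look up the current weather in Seattle for you.",
})

parsed = json.loads(raw_model_output)
monologue = parsed["inner_monologue"]   # kept server-side, never displayed

if monologue["needs_web_search"]:
    print(f"[client] running search: {monologue['search_query']}")
if monologue["should_end_conversation"]:
    print("[client] ending the conversation")

print(parsed["message"])                # the only part the user sees
```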
In this case, it's mimicking the language of exam questions to anchor the model into answering with greater specificity and accuracy.
There is some evidence that it forms internal models for things such as a notion of "time", but I doubt it speaks to itself. I wonder whether an AI that carries on an internal monologue with itself (while learning) might turn out smarter? Or perhaps it would give it the same issues we ourselves have to untangle with CBT...
No, but using language like this primes it to respond in a certain manner, meaning that if you use this in a prompt, the likelihood of getting a thoughtful response improves (vs. off-the-cuff improv nonsense).
The LLM is not a chatbot. We are using it to predict the text that would be produced by a chatbot, if one were to exist.
In theory I guess this instruction makes it more likely to output the kind of tokens that would be output by a chatbot that was quietly thinking to itself before responding.
Does it work? Who knows! Prompt engineering is just licensed witchcraft at this point.
It is a text generator that mimics humans; if you tell a human to "quietly think", they will not write out their thoughts and will just write down the conclusion. So writing that doesn't make the model think, it makes the model not write out its thoughts. Which really means it thinks less, not more, since it does a fixed amount of computation per token.
It's more like taking advantage of an emergent property of the model's attention mechanism, so that it doesn't confirm or acknowledge the instruction "out loud" in its text completion.
Prompt designers are human (when they're not other LLMs). So many of these prompt conditioners (do we actually know the provenance of these? is this real?) are clearly written by humans who abjectly believe in the ghost in the machine, and that they are talking to an intelligence, perhaps a superior intelligence.
I remain convinced there must be some other way to address LLMs to optimize what you can get out of them, and that all this behavior exacerbates the hype by prompting the machine to hollowly ape 'intelligent' responses.
It's tailoring queries for ELIZA. I'm deeply skeptical that this is the path to take.
None of them are as intelligent as us. Maybe that's the difference.
Side note, I really want to see AI study of animals that are candidates for having languages, like chimpanzees, whales, dolphins etc. I want to see what the latent space of dolphins' communicative noises looks like when mapped.
> quietly think
Does ChatGPT have an internal monologue?