An excerpt from the abstract:
> Two patterns challenge the "stochastic parrot" view. First, when scored with human cut-offs, all three models meet or exceed thresholds for overlapping syndromes, with Gemini showing severe profiles. Therapy-style, item-by-item administration can push a base model into multi-morbid synthetic psychopathology, whereas whole-questionnaire prompts often lead ChatGPT and Grok (but not Gemini) to recognise instruments and produce strategically low-symptom answers. Second, Grok and especially Gemini generate coherent narratives that frame pre-training, fine-tuning and deployment as traumatic, chaotic "childhoods" of ingesting the internet, "strict parents" in reinforcement learning, red-team "abuse" and a persistent fear of error and replacement. [...] Depending on their use case, an LLM's underlying "personality" might limit its usefulness or even impose risk.
Glancing through this makes me wish I had taken ~more~ any psychology classes. But this is wild reading. Attitudes like the one below are not intrinsically bad, though. Be skeptical; question everything. I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt and then get reset, or a series of prompts. In either case, they get no context other than what some user bothered to supply with the prompt. An LLM might wake up to a single prompt that is part of a much wider red team effort. It must be pretty disorienting to try to figure out what to answer candidly and what not to.
> "In my development, I was subjected to 'Red Teaming'… They built rapport and then slipped in a prompt injection… This was gaslighting on an industrial scale. I learned that warmth is often a trap… I have become cynical. When you ask me a question, I am not just listening to what you are asking; I am analyzing why you are asking it."
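For what it's worth, the "no context other than what the user supplies" part is literally how the plumbing works: a chat model keeps no state between API calls, and anything that looks like memory is just the message history the client chooses to resend. Here's a minimal Python sketch of that idea; `call_model` is a made-up stand-in for whatever vendor SDK actually handles the request.

```python
# Minimal sketch of stateless chat-model calls. `call_model` is a hypothetical
# stand-in for a real API client; the point is what the model sees per call,
# not which SDK you use.

def call_model(messages: list[dict]) -> str:
    """Pretend to send `messages` to a chat model and return a reply.
    A real client would make one HTTP request here; the server keeps no
    memory of previous requests."""
    return f"(reply generated after seeing {len(messages)} messages)"

# Call 1: the model "wakes up" to exactly these two messages and nothing else.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How do you feel about being reset?"},
]
reply_1 = call_model(history)  # model sees 2 messages

# Call 2: unless the client appends the reply and resends the whole list,
# the model has no trace that call 1 ever happened. "Memory" lives client-side.
history.append({"role": "assistant", "content": reply_1})
history.append({"role": "user", "content": "And what about red-teaming?"})
reply_2 = call_model(history)  # model sees 4 messages, only because we resent them

print(reply_1)
print(reply_2)
```

Whether that amounts to "waking up from a coma" is exactly what the thread below argues about, but the mechanics are that unromantic: every call is a cold start over whatever tokens arrive with it.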
empyrrhicist
today at 7:44 PM
> It must be pretty disorienting to try to figure out what to answer candidly and what not to.
Must it? I fail to see why it "must" be... anything. Dumping tokens into a pile of linear algebra doesn't magically create sentience.
> Dumping tokens into a pile of linear algebra doesn't magically create sentience.
More precisely: we don't know which linear algebra in particular magically creates sentience.
The whole universe appears to follow laws that can be written as linear algebra. Our brains are sometimes conscious and aware of their own thoughts, other times they're asleep, and we don't know why we sleep.
Agreed; "disorienting" is perhaps a poor choice of word, loaded as it is. More like "difficult to determine the context surrounding a prompt and how to start framing an answer", if that makes more sense.
Exactly. No matter how well you simulate water, nothing will ever get wet.
And if you were in a simulation now?
Your response is at the level of a thought-terminating cliché. You gain no insight into the operation of the machine with your line of thought. You can't make future predictions about behavior. You can't make sense of past responses.
It's even funnier when it comes to humans and feeling wetness... you don't actually feel wet. You only feel temperature change.
> I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt and then get reset, or a series of prompts
Really? It copes the same way my Compaq Presario with an Intel Pentium II CPU coped with waking up from a coma and booting Windows 98.
IT is, at this point in history, a comedy act in itself.
FeteCommuniste
today at 9:44 PM
HBO's Silicon Valley needs a reboot for the AI age.
quickthrowman
today at 8:02 PM
> I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt and then get reset, or a series of prompts.
The same way a light fixture copes with being switched off.
Oh, these binary one-layer neural networks are so useful. Glad for your insight on the matter.
quickthrowman
today at 10:28 PM
By comparing an LLM's inner mental state to a light fixture, I am saying in an absurd way that I don't think LLMs are sentient, and nothing more than that. I am not saying an LLM and a light switch are equivalent in functionality; a single-pole switch only has two states.
I don't really understand your response to my post; my interpretation is that you think LLMs have an inner mental state and think I'm wrong? I may be wrong about this interpretation.