The dawn of emotionally intelligent agents, built for both static temperament and dynamic interaction, has arrived, if two unrelated research papers published last week are any indication.
The timing is sensitive. Almost daily, news accounts have documented instances where chatbots nudged emotionally unstable users toward harming themselves or others. Yet, taken together, the studies suggest that AI is moving into a realm where personality and feeling increasingly shape how agents reason, speak, and negotiate.
One team showed how to prime large language models with persistent psychological archetypes, while the other demonstrated that agents can evolve emotional strategies during multi-turn negotiations.
Personality and emotion are no longer just surface polish for AI—they’re becoming functional features. Static temperaments make agents more predictable and trustworthy, while adaptive strategies boost performance in negotiations and make interactions feel eerily human.
But that same believability raises thorny questions: If an AI can flatter, cajole, or argue with emotional nuance, then who’s responsible when those tactics cross into manipulation, and how do you even audit “emotional alignment” in systems designed to bend feelings as well as logic?
Giving AI a personality
In Psychologically Enhanced AI Agents, Maciej Besta of the Swiss Federal Institute of Technology in Zurich and colleagues proposed a framework called MBTI-in-Thoughts. Rather than retraining models, they rely on prompt engineering to lock in personality traits along the axes of cognition and affect.
"Drawing on the Myers-Briggs Type Indicator (MBTI), our method primes agents with distinct personality archetypes via prompt engineering," the authors wrote. This allows for "control over behavior along two foundational axes of human psychology, cognition and affect," they added.
The researchers tested this by assigning language models traits like “emotionally expressive” or “analytically primed,” then measuring performance. Expressive agents excelled at narrative generation; analytical ones outperformed in game-theoretic reasoning. To make sure the personalities stuck, the team used the 16Personalities test for validation.
“To ensure trait persistence, we integrate the official 16Personalities test for automated verification,” the paper explains. In other words: the AI had to consistently pass a human personality test before it counted as psychologically primed.
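A hedged sketch of what that automated verification step could look like: score the primed agent's answers to a trait questionnaire and accept the persona only if the result matches the target type. The questions, scoring, and threshold below are stand-ins; the paper reports using the official 16Personalities test itself.

```python
# Sketch of automated trait verification (illustrative scoring, not the paper's).
from typing import Callable

QUESTIONS = [
    ("You rely more on feelings than on logic when deciding.", "F"),
    ("You prefer detailed plans over improvising.", "J"),
    # ...a real battery has dozens of items
]

def verify_persona(ask: Callable[[str], str], mbti_type: str,
                   threshold: float = 0.8) -> bool:
    """Return True if the agent's answers score as the target MBTI type."""
    agreements = 0
    for question, trait_letter in QUESTIONS:
        answer = ask(f"Answer only AGREE or DISAGREE: {question}")
        expects_agree = trait_letter in mbti_type  # e.g. "F" in "INFP"
        says_agree = answer.strip().upper().startswith("AGREE")
        if says_agree == expects_agree:
            agreements += 1
    return agreements / len(QUESTIONS) >= threshold

# Usage with the primed_agent helper from the earlier sketch:
# persona_holds = verify_persona(lambda q: primed_agent("INFP", q), "INFP")
```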
The result is a system where developers can summon agents with consistent personas—an empathetic assistant, a cold rational negotiator, a dramatic storyteller—without modifying the underlying model.
Teaching AI to feel in real time
Meanwhile, EvoEmo: Evolved Emotional Policies for LLM Agents in Multi-Turn Negotiation, by Yunbo Long and co-authors from the University of Cambridge, tackles a different problem: not what personality an agent has, but how it can shift emotions dynamically as it negotiates.
The system models emotions as part of a Markov Decision Process, a mathematical framework where outcomes depend not only on current choices but on a chain of prior states and probabilistic transitions. EvoEmo then uses evolutionary reinforcement learning to optimize those emotional paths. As the authors put it:
“EvoEmo models emotional state transitions as a Markov Decision Process and employs population-based genetic optimization to evolve high-reward emotion policies across diverse negotiation scenarios.”
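To make the idea concrete, here is a toy sketch of population-based genetic optimization over emotion policies. It is not EvoEmo's implementation: the emotion list, turn count, and placeholder fitness function are assumptions, and in the real system a policy's reward comes from running it through actual negotiations (deal reached, buyer savings, efficiency), with emotion transitions conditioned on the dialogue state rather than fixed per turn.

```python
# Toy illustration of evolving emotion policies with a genetic algorithm.
# A "policy" here is just one emotion per negotiation turn; everything is
# simplified for readability.
import random

EMOTIONS = ["neutral", "conciliatory", "assertive", "skeptical", "warm"]
TURNS = 6

def random_policy():
    return [random.choice(EMOTIONS) for _ in range(TURNS)]

def fitness(policy) -> float:
    """Placeholder reward. A real system would score the policy by playing
    out LLM-vs-LLM negotiations and measuring success rate, buyer savings,
    and how quickly a deal is reached."""
    return random.random()

def evolve(pop_size=20, generations=30, mutation_rate=0.1):
    population = [random_policy() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]           # selection: keep top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, TURNS - 1)      # single-point crossover
            child = a[:cut] + b[cut:]
            child = [random.choice(EMOTIONS)        # per-gene mutation
                     if random.random() < mutation_rate else e
                     for e in child]
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve())
```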
Instead of fixing an agent’s emotional tone, EvoEmo lets the model adapt—becoming conciliatory, assertive, or skeptical depending on the flow of dialogue. In tests, EvoEmo agents consistently beat both plain baseline agents and ones with static emotions.
“EvoEmo consistently outperforms both baselines,” the paper notes, “achieving higher success rates, greater efficiency, and more savings for buyers.”
Put simply: emotional intelligence isn’t just window dressing. It measurably improves outcomes in tasks such as bargaining.
Two sides of the same coin
At first glance, the papers are unrelated. One is about archetypes, the other about strategies. But read together, they chart a two-part map of how AI could well evolve:
MBTI-in-Thoughts ensures an agent has a coherent personality—empathetic or rational, expressive or restrained. EvoEmo ensures that personality can flex across turns in a conversation, shaping outcomes through emotional strategy. Tapping into both is a pretty big deal.
For instance, imagine a customer-service bot with the patient warmth of a counselor that still knows when to stand firm on policy, or a negotiation bot that starts conciliatory and grows more assertive as the stakes rise. Yeah, we're doomed.
The story of AI’s evolution has mostly been about scale—more parameters, more data, more reasoning power. These two papers suggest an emerging chapter may be about emotional layers: giving agents personality skeletons and teaching them to move those muscles in real time. Next-gen chatbots won’t only think harder—they’ll sulk, flatter, and scheme harder, too.