Every AI has a personality — and why it matters
Within moments of meeting new people, we size up what they’re like. Are they reserved or outgoing? Gruff or friendly?
Psychologists call it “person perception,” and we do it automatically and mostly unconsciously.
When we encounter AI that has human-like characteristics, we go through a similar process of person perception. And we quickly make inferences about the AI’s personality.
As AI products become more sophisticated and widely used, their developers will need to think carefully about personality — with both business and ethical implications.
It doesn’t take much for AI to trigger our person perception processes.
In the mid-1960s, the MIT computer scientist Joseph Weizenbaum created a rudimentary, text-based chatbot named ELIZA. The underlying computer program was simple — light-years removed from today’s large language models.
But an ELIZA script called DOCTOR, which was inspired by the conversational style of the psychotherapist Carl Rogers, convinced many users that they were speaking with a wise, empathetic counselor.
Sixty years later, AI has become ever more human-like — particularly when we interact with it through voice-based interfaces.
What’s so special about voice? Far more than text, voice can convey a variety of information about the speaker, including psychological factors like emotion, mood, and intention, and demographic factors like race, ethnicity, and socioeconomic status. Together, these details greatly influence how we perceive the person — or AI — that we’re speaking with.
In the case of people we meet, personality matters because it affects how we interact with them and what happens as a result. For example, a teacher or a salesperson with an affable personality is more likely to succeed.
Personality matters for AI in the same way, because it shapes how people engage with it. An AI that is perceived as brusque may decrease engagement, leaving users less likely to return. An AI that is perceived as warm and friendly may be more persuasive because users feel understood.
The business implications are clear. Effectively matching AI personality to the use case is crucial for success.
As AI is deployed more widely, its personality will become an important part of a company’s brand. And just as brands vary widely, AI personalities will, too. For example, the personality of a Whole Foods AI will likely differ from that of a Walmart AI.
Personality engineering is a nascent field in AI. As yet, there are no best practices for shaping how AI is perceived. But there is a long list of potential factors to consider, including voice characteristics, word choice, and humor.
The ethics of such AI personality engineering have only begun to be explored.
Is it ethical to design an AI personality with recognizable ethnic or racial voice characteristics in order to engage or persuade members of a particular group?
Is it ethical to deploy an especially empathetic-seeming AI personality for certain purposes, e.g., encouraging healthier behaviors, but not for others, e.g., selling used cars?
There are sure to be legal and regulatory constraints applied in coming years, but it’s too early to make predictions — or recommendations.
Only one thing is certain: every AI will have a personality, because that’s the unavoidable result of our automatic person perception processes.
We will ascribe a personality to each AI that we encounter — whatever the preferences of the AI’s creator. So it’s incumbent on those who develop AI to figure out what that personality will be.■