Introduction
In recent years, the field of mental health has seen significant advances, and artificial intelligence (AI) has played a crucial role in driving this progress. One of the most prominent examples of AI-powered mental health technology is Eliza AI, which offers accessible and non-judgmental mental health assistance to its users. However, as tools like Eliza AI become increasingly popular, questions arise about their relationship with human mental health professionals. In this post, we explore the intricate dynamics between AI therapists like Eliza and their human counterparts.
What is ELIZA and what is its purpose?
ELIZA is one of the earliest examples of a computer program designed to simulate human-like conversation. It was created in the mid-1960s by Joseph Weizenbaum, a computer scientist at the Massachusetts Institute of Technology (MIT), primarily to explore the possibilities of natural language processing and human-computer interaction.
Eliza was designed to replicate the techniques used by a Rogerian psychotherapist, a method pioneered by psychologist Carl Rogers. This therapeutic approach involves actively listening to the client and prompting them to delve deeper into their thoughts and feelings. By engaging users in text-based conversations and responding with empathy and comprehension, Eliza aimed to emulate this approach.
How does ELIZA work?
Generally, you enter a sentence into ELIZA, and the program produces a new sentence in response.

This is somewhat akin to today’s generative AI: you enter a prompt into a generative AI app such as ChatGPT, and ChatGPT generates a response. A notable difference is that ELIZA is conventionally devised to take in only a single sentence at a time and to produce only a single sentence of output at a time.
With ChatGPT, by contrast, your prompt can be many sentences or even many paragraphs long, and the output can likewise run to many sentences or paragraphs.
Scripts had to be written by people and fed into ELIZA. Whatever you saw ELIZA doing was driven by that human-devised script; it was the human who brought to the table a script that made ELIZA appear to exhibit intelligence.
The better the script fed into ELIZA, the better it performs. The idea was that people might write ever more elaborate scripts to run in ELIZA, and ELIZA would therefore seem to keep getting better and better.
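To make the script idea concrete, here is a minimal Python sketch of a script-driven responder in the style of ELIZA. Weizenbaum's original implementation (and his DOCTOR script) was far richer, with keyword ranks and decomposition rules; everything below, including the rule patterns and responses, is illustrative rather than taken from the original.

```python
import random
import re

# A tiny, hypothetical "script": each rule pairs a keyword pattern with
# canned Rogerian-style response templates. The program itself knows
# nothing about therapy; all apparent intelligence lives in the script.
SCRIPT = [
    (r"\bI need (.*)", ["Why do you need {0}?",
                        "Would getting {0} really help you?"]),
    (r"\bI am (.*)", ["How long have you been {0}?",
                      "Why do you think you are {0}?"]),
    (r"\bmy (.*)", ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please, go on.", "How does that make you feel?"]

# Simple pronoun reflection, so "my job" is echoed back as "your job".
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    """Take one input sentence, return one output sentence."""
    for pattern, responses in SCRIPT:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            template = random.choice(responses)
            return template.format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free prompt.
    return random.choice(DEFAULTS)

print(respond("I need a vacation"))
```

Note how the one-sentence-in, one-sentence-out shape of the original program falls directly out of this design, and how improving the chatbot means enlarging the script, not changing the program.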
What is the ELIZA effect?
From a psychological perspective, the ELIZA effect is essentially a form of cognitive dissonance: a user’s awareness of a computer’s programming limitations does not jibe with their behavior toward, and perception of, that computer’s outputs. Because the machine mimics human intelligence, the person believes it is intelligent.
Our propensity to anthropomorphize does not begin and end at computers. Under certain circumstances, we humans attribute human characteristics to all kinds of things, from animals to plants to cars. It’s simply a way for us to relate to a particular thing, according to Colin Allen, a professor at the University of Pittsburgh who focuses on the cognitive abilities of both animals and machines. And a quick survey of the way many AI systems are designed and packaged today makes it clear how this tendency has spread to our relationship with technology.
“Rather than just relying on people’s tendency to do this, [technology] is being designed and presented in ways that encourage us,” he told Built In, adding that it’s “all part of keeping our attention” in the midst of everything else. “You want people to feel like they’re in some sort of interesting interaction with this thing.”
Think about it: Companies will design robots to be cute and childlike in an effort to make people more comfortable around them. Groundbreaking creations like the Tesla robot and Hanson Robotics’ Sophia are built to look like humans, while others are designed to act like humans. And the vast majority of AI voice assistants on the market today have human names like Alexa and Cortana. Watson, the supercomputer created by IBM that won a game of Jeopardy! in 2011, was named after the company’s founder Thomas J. Watson. Even ELIZA itself was named after Eliza Doolittle, the protagonist in George Bernard Shaw’s play Pygmalion.
Can it possibly become a threat for therapists?
AI therapy tools such as Eliza are not to be feared as competitors to human therapists. Rather, they serve as valuable resources that can complement and expand mental health care services. By utilizing the power of AI, we can extend the reach of therapy to more people, allowing for greater access to care and improved mental health outcomes.
Sources:
- https://www.forbes.com/sites/lanceeliot/2023/11/05/legendary-eliza-and-parry-go-head-to-head-with-chatgpt-in-a-revealing-battle-of-using-generative-ai-for-mental-health/?ss=ai&sh=77dea54c186b
- https://builtin.com/artificial-intelligence/eliza-effect
Additional info:
- https://abilitynet.org.uk/news-blogs/eliza-ellie-evolution-ai-therapist
- https://www.humanprotocol.org/blog/what-is-the-eliza-effect-or-the-art-of-falling-in-love-with-an-ai
- https://www.theswaddle.com/inadequate-mental-healthcare-has-given-rise-to-ai-therapy-whats-the-harm