
Chatbot Review
Chatbots (or "bots" for short) are computer programs designed to converse as much like people as their programming allows. In many ways, they are the embodiment of Turing's vision.

The First Chatbots
People felt very close to ELIZA
The first well-known chatbot was ELIZA, designed by computer scientist Joseph Weizenbaum and released in 1966. ELIZA simulated a therapist, repeating many of the user's statements back in a slightly different form and recognizing certain keywords. For example, typing the word "mother" would cause ELIZA to respond, "Tell me more about your family."
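The keyword-and-rephrasing technique described above can be sketched in a few lines. This is a minimal illustration, not Weizenbaum's original script: the patterns and canned responses here are invented for the example, but the mechanism (scan for a keyword, then fill a response template from the matched text) is the same.

```python
import re

# Illustrative ELIZA-style rules: each pattern maps to a response template.
# Captured groups from the user's input are substituted into the template.
RULES = [
    (r"\bmother\b|\bfather\b|\bfamily\b", "Tell me more about your family."),
    (r"\bi am (.*)", "Why do you say you are {0}?"),
    (r"\bi feel (.*)", "How long have you felt {0}?"),
]

def eliza_respond(text: str) -> str:
    lowered = text.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, lowered)
        if match:
            return template.format(*match.groups())
    # No keyword recognized: fall back to a neutral prompt,
    # much as ELIZA deflected unrecognized input.
    return "Please go on."

print(eliza_respond("I am worried about my mother"))
# -> Tell me more about your family.
```

Note how shallow the method is: the program never represents what "mother" means; it only pattern-matches surface text, which is why the illusion breaks down in longer conversations.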
Some users developed an emotional attachment to ELIZA, and some psychiatrists went so far as to suggest that such programs could replace psychotherapists altogether. ELIZA's reception suggested that modeling conversational capabilities within restricted domains could prove successful, and the program became a point of reference for later systems using similar techniques to provide a conversational interface.
PARRY was a paranoid chatbot
After ELIZA came PARRY, written by Kenneth Colby. PARRY was modeled on the paranoid mind, and many expert psychiatrists found it difficult to tell whether PARRY was human or not. The program was designed to emit linguistic responses based on internal (affective) states. To create this effect, three measures (fear, anger, and mistrust) were used, and their values changed depending on the flow of the conversation.
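The affect-tracking mechanism described above can be sketched as follows. This is a toy illustration only: Colby's actual model was far more elaborate, and the cue words, thresholds, and replies here are assumptions invented for the example. What it shows is the core idea: three state variables rise with conversational cues, and the reply is chosen from whichever emotion dominates.

```python
class ParryLikeBot:
    """Toy sketch of PARRY-style affect tracking (not Colby's program)."""

    def __init__(self) -> None:
        # Internal affective states, each capped at 10.
        self.fear = 2.0
        self.anger = 2.0
        self.mistrust = 3.0

    def _update(self, text: str) -> None:
        lowered = text.lower()
        # Hypothetical cue words nudge the affect variables.
        if any(w in lowered for w in ("police", "hospital", "doctor")):
            self.fear = min(10.0, self.fear + 2.0)
        if "?" in text:
            # Probing questions breed suspicion.
            self.mistrust = min(10.0, self.mistrust + 1.0)
        if any(w in lowered for w in ("crazy", "liar")):
            self.anger = min(10.0, self.anger + 3.0)

    def respond(self, text: str) -> str:
        self._update(text)
        # Pick the reply associated with the dominant emotion.
        states = {"fear": self.fear, "anger": self.anger,
                  "mistrust": self.mistrust}
        dominant = max(states, key=states.get)
        replies = {
            "fear": "I'd rather not talk about that.",
            "anger": "You have no right to say that.",
            "mistrust": "Why do you want to know?",
        }
        return replies[dominant]
```

Because the state persists across turns, the bot's hostility compounds over a conversation, mimicking the escalating paranoia that made PARRY convincing in short exchanges.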
ELIZA and PARRY try to trick users
Both ELIZA and PARRY are hard-wired programs that try to fool users into thinking they are talking to a (particular type of) human. The use of non sequitur (i.e., a statement that does not follow logically from what preceded it) in the PARRY program is similar to that of simulating human typing mistakes: in the long run, such tricks become very apparent to the user and betray the computer's inability with language.