Language and Computers
After computers appeared in the 1950s, many people thought it was only a matter of time before the "electronic brains" would start talking, thinking, and enslaving humanity. Predictions abounded, both in science fiction and in the popular press: houses would sing you to sleep, computers would run the world, and everyone would have a cheerful robot servant to take phone calls, do the dishes, and keep an eye on the kids.

None of it happened. In the last half-century, billions of dollars and hundreds of thousands of hours went into giving computers the ability to converse with people, but computers are still utterly incapable of holding their own in a chat.
 
The abyss between human thought and the digital form
What's worse, while machines are thousands of times faster than they were five decades ago, over the same span they have become only two or three times better at handling everyday human language. Improved processor speed makes computers quicker at analyzing organized, structured data, but everyday language isn't structured in any way a computer can fathom. The problem lies not with the capabilities of the machines but with the complexity of encoding and simulating human thought in digital form.
 
Even if a machine can figure out the grammar of a sentence, it still lacks a mechanism for interpreting its meaning. This is due to the innate ambiguity and vagueness of human language, which is replete with anaphora, metonymy, and synecdoche.
 
Machines can transform, but not interpret
Machines can transform speech to text, recognize some objects through cameras, search the web, and, by controlling robotic mechanisms, move somewhat like human beings, but they cannot put it all together into a coherent framework. Natural Language Processing (NLP) experiments have drawn on hundreds of techniques, from Markov chains, which use statistical probability to predict what comes next in a sentence, to link grammars, which carve sentences into grammatical constituents, with mixed success. But determining grammar is not understanding.
To bridge that gap between grammar and meaning, some scientists are building "knowledge databases" that will allow machines to "disambiguate" anaphora and other aspects of language.
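
To make the statistical side of this concrete, here is a minimal sketch in Python of the kind of Markov-chain model mentioned above (the corpus and function names are illustrative, not taken from any particular system): it counts which word follows which in a toy text and uses those counts to guess a plausible next word.

import random
from collections import defaultdict

def train_bigram_model(text):
    # Count, for each word, how often every other word follows it.
    counts = defaultdict(lambda: defaultdict(int))
    words = text.lower().split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, word):
    # Pick a likely next word, weighted by how often it was observed.
    followers = counts.get(word)
    if not followers:
        return None
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

# Toy corpus: the model learns only surface word-order statistics.
corpus = "the dog chased the cat and the cat chased the mouse"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))   # e.g. "cat", "dog", or "mouse"

However plausible its guesses look, the model knows nothing about dogs, cats, or chasing; it has only recorded which words tend to stand next to one another, which is exactly the gap between transforming language and interpreting it.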