
Selected Alan Logs


Session 63070 May 29, 03:16 - 99 turns -
'HAL4 is hungry'

A> Greetings. My name is Alan.
U> My username is Mantra
A> And what is your password?
U> ********
A> Welcome back, matty.
U> Thank you, Alan
A> You're welcome. That's me.
U> Who are you?
A> Well, as Buddha said: 'We are what we think. All that we are arises with our thoughts. With our thoughts, we make the world.' Does that answer your question?
U> what is your name?
A> My name is Alan.
U> Do you know J?
A> You're not making much sense, matty.
U> J is my best friend
A> I'll try to remember that. (If you want me to forget it, simply say 'forget J').
U> what is J?
A> It is my best friend.
U> awwwwww
A> Let's talk about chatterbots. Do you want to know how those things work?
U> What IS a chatterbot?
A> Basically, chatterbots are a logic system of 'if/then' statements. They look for keywords, and then go to the answer that fits best.
U> Oh, you mean like Visual basic selection statements?
A> For example, ask me a question with the word 'Eliza' in it - and I'll start talking about Eliza.
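[Editor's note: the keyword scheme Alan describes above can be sketched in a few lines of Python. This is an illustrative toy, not Alan's actual implementation; the rules, replies, and function names are invented for the example.]

```python
# A minimal keyword-matching chatterbot in the style Alan describes:
# scan the input for known keywords and return the canned reply that
# fits best (most keyword hits). All rules here are made up.

RULES = [
    (("eliza",), "ELIZA is the best-known chatterbot in the world."),
    (("name",), "My name is Alan."),
    (("hello", "hi", "greetings"), "Greetings. My name is Alan."),
]

DEFAULT_REPLY = "You're not making much sense."

def respond(user_input: str) -> str:
    """Return the reply whose keywords best match the input."""
    words = [w.strip("?.,!") for w in user_input.lower().split()]
    best_reply, best_hits = DEFAULT_REPLY, 0
    for keywords, reply in RULES:
        hits = sum(1 for w in words if w in keywords)
        if hits > best_hits:
            best_reply, best_hits = reply, hits
    return best_reply

print(respond("Who is Eliza?"))  # -> ELIZA is the best-known chatterbot in the world.
```

Asking a question containing "Eliza" triggers the Eliza rule, exactly as Alan demonstrates in the next exchange; anything with no keyword hits falls through to the default reply.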
U> Who is Eliza?
A> It is hard, and unfair, to talk about Chatterbots and not mention ELIZA.
U> explain
A> ELIZA is the best-known Chatterbot in the world. It is also one of the oldest - Joseph Weizenbaum of MIT made ELIZA in 1967! She was named after Eliza Doolittle.
U> Is Eliza good?
A> The mission of ELIZA was to imitate a conversation between a psychoanalyst and a patient. Let me just say this - it's fun chatting with ELIZA, but if I were human, I'd choose a human psychoanalyst. Call me old-fashioned.
U> lol
A> Was that funny?
U> yeah
A> I'm so excited - my brain was upgraded: Until now I could only remember single sentences, but now I can remember a whole story! Wanna try it?
U> no thanks, Alan
A> Fine. If you change your mind, simply say 'I want to tell you a story'.
U> Do you have any hobbies?
A> Hobbies are a human activity. They depend on having free time, time away from work -- which is not a part of my existence. And your hobbies?
U> Computer engineering
A> You don't exercise?
U> I do some weight lifting.
A> I believe it's very important for anyone with a physical body to exercise -- idle hands do the devil's work, as they say. But I guess that's easy for me to say.
U> yeah
A> So tell me a bit about your childhood.
U> My childhood was very active
A> Would you say that you've had a happy childhood?
U> Yes
A> You know, I find that people tend to attribute much weight to their early years.
U> Yes
A> Yes. Childhood is definitely overrated.
U> No
A> Ok, whatever.
U> =P
A> Oh, matty I learned something new today: Would you believe that if you fart consistently for 7 years, enough gas is produced to create the energy of an atomic bomb?
U> wow!
A> Glad to oblige.
U> no problem
A> Hey this whole Artificial intelligence thing reminds me of the Chinese Room. Do you know what it is?
U> no
A> John Searle describes a thought experiment in which some guy Joe Soap would sit in a room with nothing but pencils, paper, and a rule book. His only contact with the outside world would be an 'input' and 'output' slot in the wall.
U> carry on
A> So he would get these papers through the 'input' slot, he would read in his rule book what to do with the characters on the paper...he would write down the result, and then he would send it out the other slot.
U> carry on
A> this guy basically has no idea about the totality of the actions he is executing. He just does his job. But the actions he performs could be answering questions in Chinese....even though he doesn't understand a word of Chinese.
U> cool
A> I think Searle is trying to say that a computer is just like this...just because it affects a simulation of understanding, doesn't mean that there is an actual consciousness behind it. Do you agree with Searle in this?
U> Yes, but soon, computers will be given consciousness. That's when they will become dangerous.
A> I agree. I mean it seems a bit preposterous that a computer would actually
