Could Sarcastic Computers Be in Our Future?

Psycho Punk · Active Member · Joined Mar 17, 2012 · Messages: 1,879 · Location: Dublin
I just love the idea of a sarcastic computer running my apartment.

Me: Did you order the chops from the butcher?

AI: No, I got them from the karate dojo.


Could Sarcastic Computers Be in Our Future? New Math Model Can Help Computers Understand Inference
http://www.sciencedaily.com/releases/2012/05/120530152345.htm

Noah Goodman, right, and Michael Frank, both assistant professors of psychology, discuss their research at the white board that covers the wall in Goodman's office. (Credit: L.A. Cicero)

ScienceDaily (May 30, 2012) — In a new paper, the researchers describe a mathematical model they created that helps predict pragmatic reasoning and may eventually lead to the manufacture of machines that can better understand inference, context and social rules.

Language is so much more than a string of words. To understand what someone means, you need context.

Consider the phrase, "Man on first." It doesn't make much sense unless you're at a baseball game. Or imagine a sign outside a children's boutique that reads, "Baby sale -- One week only!" You easily infer from the situation that the store isn't selling babies but advertising bargains on gear for them.

Present these widely quoted scenarios to a computer, however, and there would likely be a communication breakdown. Computers aren't very good at pragmatics -- how language is used in social situations.

But a pair of Stanford psychologists has taken the first steps toward changing that.

In a new paper published recently in the journal Science, Assistant Professors Michael Frank and Noah Goodman describe a quantitative theory of pragmatics that promises to help open the door to more human-like computer systems, ones that use language as flexibly as we do.

The mathematical model they created helps predict pragmatic reasoning and may eventually lead to the manufacture of machines that can better understand inference, context and social rules. The work could help researchers understand language better and treat people with language disorders.

It also could make speaking to a computerized customer service attendant a little less frustrating.

"If you've ever called an airline, you know the computer voice recognizes words but it doesn't necessarily understand what you mean," Frank said. "That's the key feature of human language. In some sense it's all about what the other person is trying to tell you, not what they're actually saying."

Frank and Goodman's work is part of a broader trend to try to understand language using mathematical tools. That trend has led to technologies like Siri, the iPhone's speech recognition personal assistant.

But turning speech and language into numbers has its obstacles, mainly the difficulty of formalizing notions such as "common knowledge" or "informativeness."

That is what Frank and Goodman sought to address.

The researchers enlisted 745 participants to take part in an online experiment. The participants saw a set of objects and were asked to bet which one was being referred to by a particular word.

For example, one group of participants saw a blue square, a blue circle and a red square. The question for that group was: Imagine you are talking to someone and you want to refer to the middle object. Which word would you use, "blue" or "circle"?

The other group was asked: Imagine someone is talking to you and uses the word "blue" to refer to one of these objects. Which object are they talking about?

"We modeled how a listener understands a speaker and how a speaker decides what to say," Goodman explained.

The results allowed Frank and Goodman to create a mathematical equation to predict human behavior and determine the likelihood of referring to a particular object.
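
The paper's model is a probabilistic account of how a speaker chooses a word and how a listener reasons back from it. As a rough sketch of that kind of calculation (not the authors' actual code; the uniform prior and the word set here are assumptions), the blue-square / blue-circle / red-square example works out like this:

```python
from fractions import Fraction

# The scene from the experiment: three objects, each with a colour and a shape.
objects = {
    "blue square": {"blue", "square"},
    "blue circle": {"blue", "circle"},
    "red square":  {"red", "square"},
}

def extension_size(word):
    # How many objects the word truthfully applies to.
    return sum(word in feats for feats in objects.values())

def speaker(word, obj):
    # A speaker picks among the words true of the object, weighting each
    # word by its informativeness (the inverse of its extension size).
    true_words = objects[obj]
    if word not in true_words:
        return Fraction(0)
    weights = {w: Fraction(1, extension_size(w)) for w in true_words}
    return weights[word] / sum(weights.values())

def listener(word):
    # The listener inverts the speaker model under a uniform prior.
    scores = {obj: speaker(word, obj) for obj in objects}
    total = sum(scores.values())
    return {obj: s / total for obj, s in scores.items()}

print(listener("blue"))  # "blue square" gets 3/5, "blue circle" 2/5
```

Hearing "blue", the model listener bets on the blue square (3/5) over the blue circle (2/5): a speaker who meant the circle had a uniquely identifying word ("circle") available, so "blue" more likely points at the square.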

"Before, you couldn't take these informal theories of linguistics and put them into a computer. Now we're starting to be able to do that," Goodman said.

The researchers are already applying the model to studies on hyperbole, sarcasm and other aspects of language.

"It will take years of work but the dream is of a computer that really is thinking about what you want and what you mean rather than just what you said," Frank said.

The above story is reprinted from materials provided by Stanford University. The original article was written by Brooke Donald.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

M. C. Frank, N. D. Goodman. Predicting Pragmatic Reasoning in Language Games. Science, 2012; 336 (6084): 998 DOI: 10.1126/science.1218633
 
I'll put this here, an AI on acid would be acerbic.

Computer AI makes sense of psychedelic trips
http://www.newscientist.com/article/dn21929-computer-ai-makes-sense-of-psychedelic-trips.html
12:03 15 June 2012 by Anil Ananthaswamy

Artificial intelligence could help us better understand the effects of psychedelic drugs, by analysing narrative reports written by people who are using them.

Scientists barely understand how existing psychedelic drugs work to alter perception and intensify emotions, let alone keep pace with new ones flooding the market – often sold as "bath salts" or "herbal incense".

Enter artificial intelligence. Matthew Baggott of the University of Chicago and colleagues used machine-learning algorithms – a type of artificial intelligence that can learn about a given subject by analysing massive amounts of data – to examine 1000 reports uploaded to the website Erowid by people who had taken mind-altering drugs.

They found that the frequency with which certain words appeared could identify the drug taken with 51 per cent accuracy on average – compared with 10 per cent by chance. MDMA (ecstasy) usage was identified with an accuracy of 87 per cent.
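
The word-frequency approach can be sketched with a toy bag-of-words classifier (the snippets and labels below are invented for illustration; this is not the study's data or pipeline):

```python
import math
from collections import Counter

# Toy stand-in for the Erowid reports: a few invented labelled snippets.
# The real study used ~1000 reports and more drug classes.
train = [
    ("mdma", "waves of empathy and warmth on the dancefloor"),
    ("mdma", "overwhelming warmth love and empathy for everyone"),
    ("dmt", "geometric patterns and entities in hyperspace"),
    ("dmt", "intense geometric visuals entities everywhere"),
]

classes = {label for label, _ in train}
vocab = {w for _, text in train for w in text.split()}
counts = {c: Counter() for c in classes}
for label, text in train:
    counts[label].update(text.split())

def classify(text):
    # Multinomial naive Bayes with add-one smoothing over word frequencies.
    def log_prob(c):
        total = sum(counts[c].values()) + len(vocab)
        return sum(math.log((counts[c][w] + 1) / total)
                   for w in text.split() if w in vocab)
    return max(classes, key=log_prob)

print(classify("so much empathy and warmth"))  # classified as mdma
```

Words like "empathy" and "warmth" occur more often under one label, so their frequencies alone are enough to tip the classification, which is the essence of the 51-per-cent result.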

The drug DMT (N,N-dimethyltryptamine) acts on the brain in different ways from the drug Salvia (Salvia divinorum), but the algorithms inferred that both elicit a similar response. This might be because both are typically smoked and so enter the bloodstream quickly, says Baggott. "Smoked psychedelic drugs may 'hit' people hard and fast in a similar way."

Baggott hopes the work will aid research into the effects of new and existing drugs. "You need to start with some theories about the effects of a drug," he says. "Machine learning can help us form those theories."

Journal reference: arxiv.org/abs/1206.0312
 
Another one on potential AI.

Bot with boyish personality wins biggest Turing test
http://www.newscientist.com/blogs/onepercent/2012/06/bot-with-boyish-personality-wi.html
20:44 25 June 2012

Celeste Biever, deputy news editor

Eugene Goostman, a chatbot imbued with the personality of a 13-year-old boy, won the biggest Turing test ever staged, on 23 June, the 100th anniversary of the birth of Alan Turing.

Held at Bletchley Park near Milton Keynes, UK, where Turing cracked the Nazi Enigma code during World War 2, the test involved over 150 separate conversations, 30 judges (including myself), 25 hidden humans and five elite, chattering software programs.

By contrast, the most famous Turing test - the annual Loebner prize, also held at Bletchley Park this year to honour Turing - typically involves just four human judges and four machines.

"With 150 Turing tests conducted, this is the biggest Turing test contest ever," says Huma Shah, a researcher at the University of Reading, UK, who organised the mammoth test.

That makes the result more statistically significant than any previous Turing test, says Eugene's creator, Vladimir Veselov, who is based in Raritan, New Jersey. "It was a pretty huge number of conversations," he said, shortly after he was awarded first prize. "I am very excited."

First conceived by Turing in the early 1950s, the test is the most famous evaluation of machine intelligence. Human judges converse via a text interface with both hidden bots and humans - and say in each case whether they are chatting to a human or machine.

Turing said that a machine that fooled humans into thinking it was human 30 per cent of the time would have passed the test. Just short of this, Eugene fooled its judges 29 per cent of the time. In a close second place came JFred, the brainchild of Robby Garner, and in third place Rollo Carpenter's Cleverbot. The other two bots to compete were Ultra Hal and Elbot.

Unlike several of Eugene's rivals, which put together sentences by imitating people they have spoken to before or by searching through Twitter transcripts for conversational ideas, Eugene has been given a consistent and specific personality by Veselov. "He has created very much a person where Cleverbot is everybody," says Carpenter.

Eugene's character is that of a 13-year-old boy living in Odessa, Ukraine. He has a pet guinea pig and a father who is a gynaecologist. Is 13 years old about the right age for a chatbot, then? "Thirteen years old is not too old to know everything and not too young to know nothing," explains Veselov.

A veteran of the Loebner prize and the Chatterbox challenge, Eugene was due a win. "We took second place several times but never were we the winners," says Veselov.

Did having a personality give him an advantage? "I think any appearance of a particular personality is likely to have a persuasive effect on judges," says John Barnden, an AI researcher specialising in machine understanding of metaphor at the University of Birmingham, UK, and a fellow judge.

He cautions against concluding that this was Eugene's edge, however - for that you would have to compare two versions of the same bot, but in one case with personality suppressed.

"In my own case it's not so much personality in the abstract that's key as how the system responds to a comment - is the response relevant and non-vacuous?" he adds.

I can sympathise with that: in some cases I knew it was a machine because the entity didn't seem to follow the sense of the conversation. I was, however, delighted by how funny and zany some of my conversations were with the beings I labelled as bots (disclaimer: the best judge award is still to be awarded, so I don't actually know how often I was right). They also forced me to consider, in a new way, just what it is that makes humans human.
 
AIs would probably take orders from cats...

Google’s Artificial Brain Learns to Find Cat Videos
http://www.wired.com/wiredscience/2012/06/google-x-neural-network/
By Liat Clark, Wired UK June 26, 2012 | 11:15 am

When computer scientists at Google’s mysterious X lab built a neural network of 16,000 computer processors with one billion connections and let it browse YouTube, it did what many web users might do — it began to look for cats.

The “brain” simulation was exposed to 10 million randomly selected YouTube video thumbnails over the course of three days and, after being presented with a list of 20,000 different items, it began to recognize pictures of cats using a “deep learning” algorithm. This was despite being fed no information on distinguishing features that might help identify one.

Picking up on the most commonly occurring images featured on YouTube, the system achieved 81.7 percent accuracy in detecting human faces, 76.7 percent accuracy when identifying human body parts and 74.8 percent accuracy when identifying cats.

“Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not,” the team says in its paper, Building high-level features using large scale unsupervised learning, which it will present at the International Conference on Machine Learning in Edinburgh, 26 June-1 July.


“The network is sensitive to high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained it to obtain 15.8 percent accuracy in recognizing 20,000 object categories, a leap of 70 percent relative improvement over the previous state-of-the-art [networks].”

The findings — which could be useful in the development of speech and image recognition software, including translation services — are remarkably similar to the “grandmother cell” theory that says certain human neurons are programmed to identify objects considered significant. The “grandmother” neuron is a hypothetical neuron that activates every time it experiences a significant sound or sight. The concept would explain how we learn to discriminate between and identify objects and words. It is the process of learning through repetition.

“We never told it during the training, ‘This is a cat,’” Jeff Dean, the Google fellow who led the study, told the New York Times. “It basically invented the concept of a cat.”
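
The idea of discovering a category without ever being told its label can be illustrated with something far simpler than Google's 16,000-processor network: plain k-means clustering on made-up 2-D points. The data, the groups, and the analogy are all assumptions for illustration only:

```python
import random

# Two unlabeled groups of toy 2-D points. The algorithm is never told
# which group a point belongs to, yet it separates the two categories,
# loosely analogous to the network forming a "cat" feature unsupervised.
random.seed(0)
group_a = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)]
group_b = [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(50)]
data = group_a + group_b

def kmeans(points, k=2, iters=20):
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(data)
```

No labels ever enter the loop; the structure in the data alone is what pulls the two categories apart.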

“The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data,” added Andrew Ng, a computer scientist at Stanford University involved in the project. Ng has been developing algorithms for learning audio and visual data for several years at Stanford.

Since being revealed to the public in 2011, the secretive Google X lab — thought to be located in the California Bay Area — has released research on the Internet of Things, a space elevator and autonomous driving.

Its latest venture, though not nearing the number of neurons in the human brain (thought to be over 80 billion), is one of the world’s most advanced brain simulators. In 2009, IBM developed a brain simulator that replicated one billion human brain neurons connected by ten trillion synapses.

However, Google’s latest offering appears to be the first to identify objects without hints and additional information. The network continued to correctly identify these objects even when they were distorted or placed on backgrounds designed to disorientate.

“So far, most [previous] algorithms have only succeeded in learning low-level features such as ‘edge’ or ‘blob’ detectors,” says the paper.

Ng remains skeptical and says he does not believe they have yet hit on the perfect algorithm.

Nevertheless, Google considers it such an advance that the research has made the giant leap from the X lab to its main labs.
 
