
AI collection


Artificial Intelligence - threat or no threat?

Will there ever be an 'ethical robot'?

Spike Jonze's 'Her' and the Digital World's Intimacy

In 1966, Joseph Weizenbaum, a professor of computer science at M.I.T., wrote a computer program called Eliza, which was designed to engage in casual conversation with anybody who sat down to type with it. Eliza worked by latching on to keywords in the user’s dialogue and then, in a kind of automated Mad Libs, slotting them into open-ended responses, in the manner of a so-called non-directive therapist. (Weizenbaum wrote that Eliza’s script, which he called Doctor, was a parody of the method of the psychologist Carl Rogers.) “I’m depressed,” a user might type. “I’m sorry to hear you are depressed,” Eliza would respond.
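The keyword-and-template mechanism described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the idea, not Weizenbaum's original Doctor script; the rules, templates, and function names here are invented for the example.

```python
import random
import re

# Hypothetical Eliza-style rules: each pattern captures the words after a
# keyword, and the capture is slotted into an open-ended template.
RULES = [
    (re.compile(r"\bi'?m (.+)", re.IGNORECASE),
     ["I'm sorry to hear you are {0}.",
      "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?",
      "Tell me more about feeling {0}."]),
]

# Non-directive fallbacks when no keyword matches.
DEFAULT = ["Please go on.", "Can you elaborate on that?"]

def respond(user_input: str) -> str:
    """Return a reply by matching a keyword and slot-filling a template."""
    text = user_input.rstrip(".!?")
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1))
    return random.choice(DEFAULT)

print(respond("I'm depressed"))  # e.g. "I'm sorry to hear you are depressed."
```

The program has no model of meaning at all: it simply reflects the user's own words back inside a canned frame, which is exactly why the emotional attachment Weizenbaum observed was so striking.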

Eliza was a milestone in computer understanding of natural language. Yet Weizenbaum was more concerned with how users seemed to form an emotional relationship with the program, which consisted of nothing more than a few hundred lines of code. In 'Her', Theodore Twombly forms a similar relationship with Samantha, an artificially intelligent operating system; Twombly himself is estranged from his wife, and soon Samantha becomes less machine-like than Twombly.

So where does that leave us?

Is flat-pack furniture the ultimate test for robots?

Artificial-intelligence experts are devising a new way to test machine intelligence. Prof Gary Marcus, of the 'Beyond the Turing Test' workshop, told BBC Click's Spencer Kelly that the Turing Championships would host a series of events to test different parts of what defines intelligence. "We are trying to figure out a way of evaluating real progress towards artificial intelligence," said Prof Marcus, "not the kind of narrow progress where you build a computer programme that can do one thing." The tests could include requiring a machine to assemble flat-pack furniture from a diagram - and the required parts - or completing a comprehension challenge, Prof Marcus explained.

Last year, computer chat program Eugene Goostman was said to have passed the Turing test, but some artificial-intelligence experts disputed the victory. More at BBC.com/Click and @BBCClick.

Artificial intelligence 'will not end human race' | Technology

The head of Microsoft’s main research lab has dismissed fears that artificial intelligence could pose a threat to the survival of the human race. Eric Horvitz said he believed that humans would not “lose control of certain kinds of intelligences”, adding: “In the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.” Professor Stephen Hawking last month expressed his fears about the rise of AI.

Hawking believed that technology would eventually become self-aware and supersede humanity: “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.” Horvitz made his comments in a video interview after being awarded the Feigenbaum Prize by the AAAI for his contribution to artificial intelligence research.

He said last year that AI was the biggest existential threat to humans.

Eric Horvitz Receives AAAI Feigenbaum Prize; Shares Reflections On AI Research - Inside Microsoft Research

Posted by Eric Horvitz

Editor's note: Eric Horvitz, managing director of Microsoft Research's Redmond Lab, shares some reflections upon receiving the AAAI Feigenbaum Prize. Horvitz is being recognized by the AAAI for "sustained and high-impact contributions to the field of artificial intelligence through the development of computational models of perception, reflection and action, and their application in time-critical decision making, and intelligent information, traffic, and healthcare systems."

How do our minds work? How can our thinking, perceiving, and all of our experiences arise in networks of neurons?

I have wondered about answers to these questions for as long as I can remember. Until just a few decades ago, discussions on mind and brain generally occurred within philosophy and theology. Over the last century, research in psychology, biology, and computer science has brought into focus intriguing results and directions for approaching a science of intelligence.

Microsoft's Bill Gates insists AI is a threat

Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?' - Science - News

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

Such achievements will probably pale against what the coming decades will bring. The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks. (In pictures: Landmarks in AI development)

Bill Gates, Stephen Hawking Say Artificial Intelligence Represents Real Threat

I worry about a lot of things — my health, my kids, and the size of my retirement account. I never worry about an impending robot apocalypse ... but maybe I should. A handful of very smart people in the science and technology worlds are worried about that very thing. First it was Microsoft cofounder Bill Gates, who got the Internet all fired up when he answered questions in a Reddit "Ask Me Anything" thread. "I am in the camp that is concerned about super intelligence," Gates wrote in response to a question about the existential threat posed by artificial intelligence (AI). Physicist Stephen Hawking, who spends a lot of time thinking about the shape of the universe, is also worried.

Hawking, no technophobe, is bullish on the potential benefits of AI and machine learning, and he says AI could become "the biggest event in human history."