
Are the robots about to rise? Google's new director of engineering thinks so…

It's hard to know where to start with Ray Kurzweil. With the fact that he takes 150 pills a day and is intravenously injected on a weekly basis with a dizzying list of vitamins, dietary supplements, and substances that sound about as scientifically effective as face cream: coenzyme Q10, phosphatidylcholine, glutathione? With the fact that he believes he has a good chance of living forever? He just has to stay alive "long enough" to be around when the great life-extending technologies kick in (he's 66, and he believes that "some of the baby-boomers will make it through"). Or with the fact that he's predicted that in 15 years' time computers are going to trump people, that they will be smarter than we are. But then everyone's allowed their theories. It's what came next that puts this into context: Google has bought almost every machine-learning and robotics company it can find, or at least rates, and those are just the big deals. So far, so sci-fi. Well, yes.

http://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence

Related: Possible Ending Scenarios | Speculating about AI & its Progress

Why a superintelligent machine may be the last thing we ever invent
"...why would a smart computer be any more capable of recursive self-improvement than a smart human?" I think it mostly hinges on how artificial materials are more mutable than organic ones. We humans have already developed lots of ways to enhance our mental functions: libraries, movies, computers, crowdsourced R&D, etc.

Constraints On Our Universe As A Numerical Simulation
Is Our Universe a Numerical Simulation? Silas R. Beane, Zohreh Davoudi and Martin J. Savage

Intelligent Machines: The truth behind AI fiction
Artificial intelligence (AI) is the science of making smart machines, and it has come a long way since the term was coined in the 1950s. Nowadays, robots work alongside humans in hotels and factories, while driverless cars are being test-driven on the roads. Behind the scenes, AI engines in the form of smart algorithms "work" on stock exchanges, offer up suggestions for books and films on Amazon and Netflix, and even write the odd article.

Stephen Hawking Says A.I. Could Be Our 'Worst Mistake In History'
You have to understand that Stephen Hawking's mind is trapped in a body that has betrayed him. Sadly, the only thing he can do is think. The things he's been able to imagine and calculate using the power of his mind alone are mindboggling.

The Doomsday Invention
I. Omens
Last year, a curious nonfiction book became a Times best-seller: a dense meditation on artificial intelligence by the philosopher Nick Bostrom, who holds an appointment at Oxford. Titled “Superintelligence: Paths, Dangers, Strategies,” it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully humanity risks engineering its own extinction. Central to this concern is the prospect of an “intelligence explosion,” a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude. Such a system would effectively be a new kind of life, and Bostrom’s fears, in their simplest form, are evolutionary: that humanity will unexpectedly become outmatched by a smarter competitor.

10 Ways to Destroy an Arduino — Rugged Circuits
Use a sledgehammer, fire a bullet at it, throw it into a pool... that’s not what we’re talking about. We’re going to show you how to electrically destroy your Arduino, though many of you seem to already know how to do that through unfortunate experience. You know what we mean... that funny smell, the scorch mark on a component, or the dreaded “programmer not in sync” error message -- all signs that you’ve just learned a lesson the hard way. Why are we doing this?

Sure, Artificial Intelligence May End Our World, But That Is Not the Main Problem
The robots will rise, we’re told. The machines will assume control. For decades we have heard these warnings and fears about artificial intelligence taking over and ending humankind. Such scenarios are not only currency in Hollywood but increasingly find supporters in science and philosophy. For example, Ray Kurzweil wrote that the exponential growth of AI will lead to a technological singularity, a point when machine intelligence will overpower human intelligence.

Does the ‘Chinese room’ argument preclude a robot uprising?
In this blog series, Olle Häggström, author of Here Be Dragons, explores the risks and benefits of advances in biotechnology, nanotechnology, and machine intelligence. In this third and final post, Olle challenges John Searle’s ‘Chinese room’ argument about the future of robotics. There has been much recent talk about a possible robot apocalypse. One person who is highly skeptical about this possibility is philosopher John Searle. In a 2014 essay, he argues that “the prospect of superintelligent computers rising up and killing us, all by themselves, is not a real danger.”

Elon Musk Spooked Out by A.I.
Artificial intelligence really spooks out Tesla and SpaceX founder Elon Musk. He's afraid that, without proper regulation in place, it could be the "biggest existential threat" to humans. Musk was asked about AI at MIT's annual AeroAstro Centennial Symposium last week. He spooked himself out so badly answering the question that he was unable to concentrate for a few minutes afterward.

Can a robot be conscious?
In this blog series, Olle Häggström, author of Here Be Dragons, explores the risks and benefits of advances in biotechnology, nanotechnology, and machine intelligence. In this second post, Olle explores the computational theory of mind. Can a robot be conscious? I will try to discuss this without getting bogged down in the rather thorny issue of what consciousness really is.

AI Has Arrived, and That Really Worries the World's Brightest Minds
On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race. That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg. Musk and Hawking fret over an AI apocalypse, but there are more immediate threats.
