
Stephen Hawking Says A.I. Could Be Our 'Worst Mistake In History'
You have to understand that Stephen Hawking's mind is trapped in a body that has betrayed him; thinking is the one thing he can still do, and what he has been able to imagine and calculate using the power of his mind alone is mind-boggling. However, and this is a very important thing, he is still human. He is as much influenced by human bias as the next person, and he treats AI as he would a more advanced human civilization. Computers are exceptionally good at calculation, and the one thing they can do very well, far better than humans, is make decisions based on logic. Computers are cooperative engines, believe it or not. A superintelligent AI (SAI) won't have fear, not like that. Related: Possible Ending Scenarios, Transhumanism

Are the robots about to rise? Google's new director of engineering thinks so… | Technology | The Observer It's hard to know where to start with Ray Kurzweil. With the fact that he takes 150 pills a day and is intravenously injected on a weekly basis with a dizzying list of vitamins, dietary supplements, and substances that sound about as scientifically effective as face cream: coenzyme Q10, phosphatidylcholine, glutathione? With the fact that he believes he has a good chance of living forever? He just has to stay alive "long enough" to be around when the great life-extending technologies kick in (he's 66 and believes that "some of the baby-boomers will make it through"). But then everyone's allowed their theories. Google has bought almost every machine-learning and robotics company it can find, or at least rates, and those are just the big deals. Bill Gates calls him "the best person I know at predicting the future of artificial intelligence". So far, so sci-fi. Well, yes.

How Much Longer Before Our First AI Catastrophe? As I distinctly recall, some speculated that the stock market crash of 1987 was due to high-frequency trading by computers, and mindful of this possibility, I think regulators passed laws to prevent computers from trading in that specific pattern again. I remember something vague about "throttles" being installed in the trading software that kick in whenever they see a sudden, global spike in the area in which they are trading. These throttles were supposed to slow trading to a point where human operators could see what was happening and judge whether an unintentional feedback loop was under way. This was 1987; I don't know if regulators have mandated other changes to computer trading software in the various panics and spikes since then. But yes, I definitely agree this is a very good example where narrow AI got us into trouble. These are all examples of narrow machine intelligence. Obviously no system is going to be perfect. So it goes.
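The "throttle" mechanism the commenter describes can be sketched in a few lines. This is a hypothetical illustration only: the class name, window size, and percentage threshold are invented for the example and do not reflect any actual exchange regulation.

```python
from collections import deque

class TradingThrottle:
    """Halts automated trading when prices move too far, too fast,
    handing control back to human operators (a 'circuit breaker')."""

    def __init__(self, window_size=5, max_move_pct=7.0):
        self.window = deque(maxlen=window_size)  # most recent prices
        self.max_move_pct = max_move_pct         # allowed swing inside the window
        self.halted = False

    def observe(self, price):
        self.window.append(price)
        if len(self.window) < 2:
            return
        lo, hi = min(self.window), max(self.window)
        move_pct = (hi - lo) / lo * 100.0
        if move_pct > self.max_move_pct:
            self.halted = True  # sudden spike: stop and let humans look

    def may_trade(self):
        return not self.halted

# Gradual drift stays within the threshold; a sudden drop trips the halt.
throttle = TradingThrottle(window_size=3, max_move_pct=5.0)
for p in [100.0, 99.5, 101.0]:
    throttle.observe(p)
print(throttle.may_trade())  # True: moves stayed under 5%
throttle.observe(92.0)       # sudden ~9% drop inside the window
print(throttle.may_trade())  # False: trading is halted
```

The design point the commenter raises is the interesting one: the throttle does not try to decide whether the feedback loop is real, it only slows the system down enough for a human to make that judgment.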

Risk of robot uprising wiping out human race to be studied (26 November 2012, 13:28 ET) Cambridge researchers are to assess whether technology could end up destroying human civilisation. The Centre for the Study of Existential Risk (CSER) will study dangers posed by biotechnology, artificial life, nanotechnology and climate change. The scientists said that to dismiss concerns of a potential robot uprising would be "dangerous". Fears that machines may take over have been central to the plots of some of the most popular science-fiction films. Perhaps most famous is Skynet, a rogue computer system depicted in the Terminator films, which gained self-awareness and fought back after first being developed by the US military. But despite being the subject of far-fetched fantasy, researchers said the concept of machines outsmarting us demands mature attention: "What we're trying to do is to push it forward in the respectable scientific community."

Why a superintelligent machine may be the last thing we ever invent "...why would a smart computer be any more capable of recursive self-improvement than a smart human?" I think it mostly hinges on artificial materials being more mutable than organic ones. We humans have already developed many ways to enhance our mental functions: libraries, movies, computers, crowdsourced R&D, and so on. But most of this augmentation works by offloading labour onto tools and machinery external to the body; actually changing the brain itself has been very slow going for us. "...would a single entity really be capable of comprehending the entirety of its own mind?" It probably won't need to. This idea also applies to doctors. If genes and proteins can generate such complexity after billions of years of natural selection, it seems reasonable to conclude that minds and culture can reach still greater levels of complexity through guided engineering.

Sure, Artificial Intelligence May End Our World, But That Is Not the Main Problem The robots will rise, we’re told. The machines will assume control. For decades we have heard these warnings and fears about artificial intelligence taking over and ending humankind. Such scenarios are not only currency in Hollywood but increasingly find supporters in science and philosophy. On Tuesday, leading scientist Stephen Hawking joined the ranks of the singularity prophets, especially the darker ones, as he told the BBC that “the development of full artificial intelligence could spell the end of the human race.” The problem with such scenarios is not that they are necessarily false; who can predict the future? [Mark Coeckelbergh is Professor of Technology and Social Responsibility at De Montfort University in the UK, and the author of Human Being @ Risk and Money Machines.] These issues are perhaps far less sexy than superintelligence or the end of humankind.

Can we build an artificial superintelligence that won't kill us? At some point in our future, an artificial intelligence will emerge that's smarter, faster, and vastly more powerful than us. Once this happens, we'll no longer be in charge. But what will happen to humanity? And how can we prepare for this transition? Luke Muehlhauser is the Executive Director of the Machine Intelligence Research Institute (MIRI), a group dedicated to figuring out the various ways we might be able to build friendly smarter-than-human intelligence. io9: How did you come to be aware of the friendliness problem as it relates to artificial superintelligence (ASI)? Muehlhauser: Sometime in mid-2010 I stumbled across a 1965 paper by I.J. Good: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever." Spike Jonze's latest film, Her, has people buzzing about artificial intelligence. Her is a fantastic film, but its portrayal of AI is set up to tell a good story, not to be accurate.

Steve Wozniak: Computers Are Going to Take Over from Humans The Fermi Paradox: Back with a vengeance This article is partly adapted from my TransVision 2007 presentation, “Whither ET? What the failing search for extraterrestrial intelligence tells us about humanity's future.” The Fermi Paradox is alive and well. As our sciences mature, and as the search for extraterrestrial intelligence continues to fail, the Great Silence becomes louder than ever. Our isolation in the Universe has in no small way shaped and defined the human condition. To deal with the cognitive dissonance created by the Great Silence, we have resorted to good old-fashioned human arrogance, anthropocentrism, and, worse, an intergalactic inferiority complex. Under closer scrutiny, however, these excuses don’t hold. Indeed, one of the greatest philosophical and scientific challenges currently confronting humanity is the unsolved question of the existence of extraterrestrial intelligences (ETIs). We have yet to see any evidence for their existence. So, where is everybody?

Elon Musk Spooked Out by A.I. Artificial intelligence really spooks out Tesla and SpaceX founder Elon Musk. He's afraid that, without proper regulation in place, it could be the "biggest existential threat" to humans. Musk was asked about AI at MIT's AeroAstro Centennial Symposium last week. He spooked himself out so badly answering the question that he was unable to concentrate for a few minutes afterward. "Do you have any plans to enter the field of artificial intelligence?" an audience member asked. "I think we should be very careful about artificial intelligence," Musk replied. "You know those stories where there's the guy with the pentagram and the holy water, and he's like, yeah, he's sure he can control the demon?" According to Musk, the AI humans are capable of building would make 2001: A Space Odyssey's HAL 9000 look like "a puppy dog." The next question came from another audience member, who asked how SpaceX plans to utilize telecommunications, something totally unrelated to AI. But Musk was too distracted to listen.

Are You Living in a Simulation? Many works of science fiction, as well as some forecasts by serious technologists and futurologists, predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Apart from the interest this thesis may hold for those who are engaged in futuristic speculation, there are also more purely theoretical rewards. The structure of the paper is as follows. A common assumption in the philosophy of mind is that of substrate-independence. Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given. The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. We shall develop this idea into a rigorous argument.
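The "rigorous argument" the excerpt alludes to rests on a simple expected-fraction calculation. A sketch, using the notation commonly given for Bostrom's paper (symbol names here are as usually stated, not quoted from this excerpt): let $f_P$ be the fraction of human-level civilizations that reach a "posthuman" stage, and $\bar{N}$ the average number of ancestor-simulations such a civilization runs. The fraction of all observers with human-type experiences who live in simulations is then

```latex
f_{\mathrm{sim}} \;=\; \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1}
```

so if the product $f_P\,\bar{N}$ is large, $f_{\mathrm{sim}}$ is close to one. The argument concludes that at least one of three claims must hold: almost no civilizations reach the posthuman stage ($f_P \approx 0$), almost none of them run ancestor-simulations ($\bar{N} \approx 0$), or we are almost certainly living in a simulation.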
