
Artificial Intelligence


The AI Revolution: Our Immortality or Extinction

We Fact-Checked Stephen Hawking's AMA Answers

Potential risks from advanced artificial intelligence

Note: this is a shallow overview of a topic that we have not previously examined. Published: August 2015.

For shallow overviews, we typically work for a fixed amount of time rather than continuing until we have answered all possible questions to the best of our abilities. Accordingly, this overview is not researched and vetted to the same level as our standard recommendations; we use shallow overviews to help determine how to prioritize further research. If you have additional information on this cause that you feel we should consider, please get in touch.

Background and process: we have been engaging in informal discussions around this topic for several years, and have done a significant amount of reading on it. For readers highly interested in this topic, we would recommend the following as particularly useful for getting up to speed: getting a basic sense of what recent progress in AI has looked like, and what it might look like going forward.

Why Human Intelligence and Artificial Intelligence Will Evolve Together, by Stephen Hsu

When it comes to artificial intelligence, we may all be suffering from the fallacy of availability: thinking that creating intelligence is much easier than it is, because we see examples all around us.

In a recent poll, machine intelligence experts predicted that computers would gain human-level ability around the year 2050, and superhuman ability less than 30 years after.[1] But, like a tribe on a tropical island littered with World War II debris imagining that the manufacture of aluminum propellers or steel casings would be within their power, our confidence is probably inflated. AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information-processing capability in a complex environment (the Earth).
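
Hsu's framing of AI as a brute-force search over a space of programs can be made concrete with a toy sketch. Everything below is illustrative rather than taken from the essay: the program representation (tiny arithmetic expression trees), the target function, and the population sizes are all invented. The sketch randomly samples candidate programs and keeps the fittest, which is the brute-force flavor of search the excerpt describes:

```python
# Toy "search over programs": randomly sample small arithmetic programs and
# keep those that best approximate a target function f(x) = x*x + 1.
import random

OPS = ['+', '-', '*']

def random_program(depth=2):
    """Sample a random expression tree over the input x and small constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(-2, 2)])
    return (random.choice(OPS), random_program(depth - 1), random_program(depth - 1))

def run(prog, x):
    """Evaluate an expression tree at x."""
    if prog == 'x':
        return x
    if isinstance(prog, int):
        return prog
    op, left, right = prog
    a, b = run(left, x), run(right, x)
    return a + b if op == '+' else a - b if op == '-' else a * b

def fitness(prog):
    """Negative squared error against the target function."""
    return -sum((run(prog, x) - (x * x + 1)) ** 2 for x in range(-5, 6))

# Brute-force sampling with selection: keep the best 50 programs found so
# far, and refill the rest of the population with fresh random candidates.
population = [random_program() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    population = population[:50] + [random_program() for _ in range(150)]

best = max(population, key=fitness)
print(best, fitness(best))
```

Even at this toy scale the search is wasteful, which is the excerpt's point: nature's version of it ran as a huge computation over trillions of evolving agents.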

2015 MIRI Summer Fundraiser: How We Could Scale

The Machine Intelligence Research Institute (MIRI) is a research nonprofit that works on technical obstacles to designing beneficial smarter-than-human artificial intelligence (AI).

I'm MIRI's Executive Director, Nate Soares, and I'm here to announce our 2015 Summer Fundraiser!

Why We Really Should Ban Autonomous Weapons: A Response

This is a guest post.

The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE. We welcome Evan Ackerman’s contribution to the discussion on a proposed ban on offensive autonomous weapons. This is a complex issue, and there are interesting arguments on both sides that need to be weighed up carefully. This process is well under way, and several hundred position papers have been written in the last few years by think tanks, arms control experts, and nation states. His article, written as a response to an open letter signed by over 2,500 AI and robotics researchers, makes four main points:

Why the Argument Against a Ban on Autonomous Killer Robots Falls Flat

The automation myth: Robots aren't taking your jobs — and that's the problem

President Obama has warned that ATMs and airport check-in kiosks are contributing to high unemployment.

Sen. Marco Rubio said that the central challenge of our times is "to ensure that the rise of the machines is not the fall of the worker." A cover story in the Atlantic asked us to ponder the problems of a world without work. And in the New York Times, Barbara Ehrenreich warns that "the job-eating maw of technology now threatens even the nimblest and most expensively educated." The good news is that these concerns are wrong. The bad news is that these concerns are wrong.

In other words, don't worry that the robots will take your job.

Musk, Hawking warn of 'inevitable' killer robot arms race

A global robotic arms race "is virtually inevitable" unless a ban is imposed on autonomous weapons, Stephen Hawking, Elon Musk and 1,000 academics, researchers and public figures have warned.

In an open letter presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, the Future of Life Institute signatories caution that "starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control". Although the letter, first reported by the Guardian, notes that "we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so", it concludes that "this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow". But for the academics and figures who signed the letter, AI weapons are potentially more dangerous than nuclear bombs.

My take on recent debate over AI-risk

In my post on donating to animal rights orgs, I noted that organizations that claim to be working on risks from AI are a lot less cash-starved, now that Elon Musk has donated $10 million to the Future of Life Institute.

They’re also a lot less publicity-starved, with not only Musk but also Stephen Hawking and Bill Gates lending their names to the cause. The publicity has, predictably, generated a lot of pushback (see examples here and here). And while I think the issue of AI risk is worth thinking about, I’m sympathetic to many of the points made by critics, and disappointed by the rebuttals I’ve seen. For example, Andrew Ng, who’s Chief Scientist at Baidu Research and also known for his online course on machine learning, has said: The above link is to a blog post by former MIRI executive director Luke Muehlhauser, whose response to Ng focuses on AI timelines.

Look Out, Scientists! AI Solves 100-Year-Old Regeneration Puzzle

An artificial intelligence (AI) system has solved a puzzle that has eluded scientists for more than 100 years: how a tiny, freshwater flatworm regenerates its body parts.

The system was developed by researchers from Tufts University, in Massachusetts, to help mine the mountains of experimental data in developmental biology using a method inspired by the principles of evolution. To demonstrate the system, the researchers put it to work on data from experiments on planaria — tiny worms whose extraordinary ability to regrow complex body parts when chopped up has made them a popular subject in regenerative medicine.

Despite more than a century of attention from scientists, and increasing insight into the chemical pathways that control the stem cells responsible for the uncanny ability of these worms to regenerate, no one has been able to come up with a model that explains the process fully.
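
To make "a method inspired by the principles of evolution" concrete, here is a hypothetical sketch of evolutionary model search: candidate models are scored by how well they reproduce observed experimental outcomes, and the best are kept and mutated. The Tufts system evolved regulatory-network models against real planaria data; the model form, the data points, and all parameters below are invented for illustration.

```python
# Hypothetical evolution-inspired model search: evolve parameter vectors so
# that a candidate model reproduces observed (perturbation, outcome) data.
import random

# Invented stand-ins for experimental results.
DATA = [(0.0, 1.0), (0.5, 1.8), (1.0, 2.9), (1.5, 4.2)]

def predict(model, x):
    """Stand-in for simulating a candidate model on one experiment."""
    a, b, c = model
    return a * x * x + b * x + c

def score(model):
    """Negative squared error between predictions and observations."""
    return -sum((predict(model, x) - y) ** 2 for x, y in DATA)

def mutate(model):
    """Perturb each parameter slightly, in the spirit of mutation."""
    return tuple(p + random.gauss(0, 0.1) for p in model)

population = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(100)]
for _ in range(300):
    population.sort(key=score, reverse=True)
    parents = population[:20]                       # selection
    population = parents + [mutate(random.choice(parents)) for _ in range(80)]

print(max(population, key=score))  # best-fitting candidate model
```

The design choice doing the work here is the fitness function: the search needs no human hypothesis about mechanism, only a way to compare a candidate model's predictions with the experimental record.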

Is Stephen Hawking Right? Could AI Lead To The End Of Humankind?

The famous theoretical physicist, Stephen Hawking, has revived the debate on whether our search for improved artificial intelligence will one day lead to thinking machines that will take over from us.

The British scientist made the claim during a wide-ranging interview with the BBC. Hawking has the motor neurone disease amyotrophic lateral sclerosis (ALS), and the interview touched on the new technology he is using to help him communicate. It works by modelling his previous word usage to predict what words he will use next, similar to the predictive texting available on many smartphones. But Professor Hawking also mentioned his concern over the development of machines that might surpass us. “Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever increasing rate,” he reportedly told the BBC.
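
A minimal sketch of the word-prediction idea described here, assuming a simple bigram model: count which word follows which in past text, then suggest the most frequent successor. Real predictive-text systems, including the one Hawking used, are more sophisticated, and the corpus below is invented.

```python
# Bigram predictive text: learn successor counts from past text, then
# suggest the word most often seen after the current one.
from collections import Counter, defaultdict

corpus = "the universe is expanding and the universe is vast".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def suggest(word):
    """Return the most common word seen after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("universe"))  # -> 'is'
print(suggest("the"))       # -> 'universe'
```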

Bill Gates: Artificial intelligence could be dangerous

Worries about artificial intelligence possibly wiping out humanity aren’t just limited to science fiction writers — plenty of the world’s most brilliant people think developing AI could be a potentially catastrophic mistake as well.

In addition to famed physicist Stephen Hawking and SpaceX CEO Elon Musk, Microsoft cofounder and philanthropist Bill Gates is now warning about the dangers of AI. During a recent Reddit AMA, Gates was asked how he felt about the potential dangers of AI, and he responded that it’s definitely a cause for worry. “I am in the camp that is concerned about super intelligence,” Gates wrote. “First, the machines will do a lot of jobs for us and not be super intelligent.

Edge.org

"In Just-spring when the world is mud-luscious the little lame balloonman whistles far and wee and eddie and bill come running from marbles and piracies and it's spring when the world is puddle-wonderful." That "brillig thing of beauty electric" touches me deeply as I think about AI.

The youthful exuberance of luscious mud puddles, playing with marbles or pretending to be a pirate, running weee... all of which is totally beyond explanation to a hypothetical intelligent machine entity. You could add dozens of cameras and microphones, touch-sensors and voice output; would you seriously think it would ever go "weee", as in E. E. Cummings' poem?

Profile for Eliezer_Yudkowsky

Eliezer S. Yudkowsky.