Artificial Intelligence

The AI Revolution: Our Immortality or Extinction. Note: This is Part 2 of a two-part series on AI. Part 1 is here.

"We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends." — Nick Bostrom

Welcome to Part 2 of the “Wait how is this possibly what I’m reading I don’t get why everyone isn’t talking about this” series. Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it’s all around us in the world today.

This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that’s way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have as we thought about that.

We Fact-Checked Stephen Hawking's AMA Answers.

Potential risks from advanced artificial intelligence. Note: this is a shallow overview of a topic that we have not previously examined. For shallow overviews, we typically work for a fixed amount of time, rather than continuing until we answer all possible questions to the best of our abilities. Accordingly, this is not researched and vetted to the same level as our standard recommendations. If you have additional information on this cause that you feel we should consider, please feel free to get in touch. We use our shallow overviews to help determine how to prioritize further research. What is the problem?

It seems plausible that some time this century, people will develop algorithmic systems capable of efficiently performing many or even all of the cognitive tasks that humans perform. Published: August 2015. Background and process: We have been engaging in informal discussions around this topic for several years, and have done a significant amount of reading on it. For our part, our understanding of the matter is informed by the following: Why Human Intelligence and Artificial Intelligence Will Evolve Together, by Stephen Hsu.

When it comes to artificial intelligence, we may all be suffering from the fallacy of availability: thinking that creating intelligence is much easier than it is, because we see examples all around us. In a recent poll, machine intelligence experts predicted that computers would gain human-level ability around the year 2050, and superhuman ability less than 30 years after. But, like a tribe on a tropical island littered with World War II debris imagining that the manufacture of aluminum propellers or steel casings would be within their power, our confidence is probably inflated.

AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). But there is hope. The potential for improved human intelligence is enormous.
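The search framing above lends itself to a concrete miniature. Below is a minimal sketch (my illustration, not Hsu's; all names and the target value are invented for the example) of brute-force evolutionary search over a space of tiny programs: each candidate is a sequence of primitive operations, and mutation plus selection climbs toward a target behavior.

```python
# Toy illustration of "AI as search over program space" (all names and the
# target value are invented for this sketch): evolve sequences of primitive
# operations by mutation and selection until one computes a target number.
import random

OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "neg": lambda x: -x}

def run(program, x=0):
    """Apply each primitive operation in turn, starting from x."""
    for op in program:
        x = OPS[op](x)
    return x

def fitness(program, target=42):
    """Closer to the target output means higher (less negative) fitness."""
    return -abs(run(program) - target)

def mutate(program):
    """Copy the program and swap one random instruction."""
    child = list(program)
    child[random.randrange(len(child))] = random.choice(list(OPS))
    return child

def evolve(length=8, population=50, generations=200):
    pop = [[random.choice(list(OPS)) for _ in range(length)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: population // 2]                # selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]            # variation
    return max(pop, key=fitness)

best = evolve()
print(best, "->", run(best))
```

In this toy landscape the search succeeds in seconds; the essay's point is that the program space for intelligence is effectively infinite, and nature's run of the same loop involved trillions of agents over geological time.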

2015 MIRI Summer Fundraiser: How We Could Scale. The Machine Intelligence Research Institute (MIRI) is a research nonprofit that works on technical obstacles to designing beneficial smarter-than-human artificial intelligence (AI). I'm MIRI's Executive Director, Nate Soares, and I'm here to announce our 2015 Summer Fundraiser! This is a critical moment in the field of AI, and we think that AI is a critical field. Science and technology are responsible for the largest changes in human and animal welfare, both for the better and for the worse; and science and technology are both a product of human intelligence.

If AI technologies can match or exceed humans in intelligence, then human progress could be accelerated greatly — or cut short prematurely, if we use this new technology unwisely. We're currently scaling up our research efforts at MIRI and recruiting aggressively.

-Nate.

Why We Really Should Ban Autonomous Weapons: A Response. This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE. We welcome Evan Ackerman’s contribution to the discussion on a proposed ban on offensive autonomous weapons. This is a complex issue and there are interesting arguments on both sides that need to be weighed up carefully. This process is well under way, and several hundred position papers have been written in the last few years by think tanks, arms control experts, and nation states.

His article, written as a response to an open letter signed by over 2500 AI and robotics researchers, makes four main points: (1) Banning a weapons system is unlikely to succeed, so let’s not try. (2) Banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil. (3) The real question that we should be asking is this: Could autonomous armed robots perform more ethically than armed humans in combat?

Why the Argument Against a Ban on Autonomous Killer Robots Falls Flat.

The automation myth: Robots aren't taking your jobs — and that's the problem. President Obama has warned that ATMs and airport check-in kiosks are contributing to high unemployment. Sen. Marco Rubio said that the central challenge of our times is "to ensure that the rise of the machines is not the fall of the worker." A cover story in the Atlantic asked us to ponder the problems of a world without work. And in the New York Times, Barbara Ehrenreich warns that "the job-eating maw of technology now threatens even the nimblest and most expensively educated." The good news is that these concerns are wrong. None of the recent problems in the American economy are due to robots — or, to be more specific about it, due to an accelerating pace of automation. The bad news is that these concerns are wrong. In other words, don't worry that the robots will take your job. The past of automation: Machines have been replacing humans for hundreds of years.

But for society as a whole, these were huge leaps forward. The techno-pessimists often admit this. So what happened?

Musk, Hawking warn of 'inevitable' killer robot arms race. A global robotic arms race "is virtually inevitable" unless a ban is imposed on autonomous weapons, Stephen Hawking, Elon Musk and 1,000 academics, researchers and public figures have warned. In an open letter presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, the Future of Life Institute signatories caution that "starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control". Although the letter, first reported by the Guardian, notes that "we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so", it concludes that "this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow".

But for the academics and figures who signed the letter, AI weapons are potentially more dangerous than nuclear bombs.

My take on recent debate over AI-risk | topherhallquist. In my post on donating to animal rights orgs, I noted that organizations that claim to be working on risks from AI are a lot less cash-starved, now that Elon Musk has donated $10 million to the Future of Life Institute. They’re also a lot less publicity-starved, with not only Musk but also Stephen Hawking and Bill Gates lending their names to the cause.

The publicity has, predictably, generated a lot of pushback (see examples here and here). And while I think the issue of AI risk is worth thinking about, I’m sympathetic to many of the points made by critics, and disappointed by the rebuttals I’ve seen. For example, Andrew Ng, who’s Chief Scientist at Baidu Research and also known for his online course on machine learning, has weighed in; former MIRI executive director Luke Muehlhauser’s response to Ng focuses on AI timelines. (I mention this last one because it seems like a cheaper alternative to Elon Musk’s project of Mars colonization.)

Look Out, Scientists! AI Solves 100-Year-Old Regeneration Puzzle.

An artificial intelligence (AI) system has solved a puzzle that has eluded scientists for more than 100 years: how a tiny, freshwater flatworm regenerates its body parts. The system was developed by researchers from Tufts University, in Massachusetts, to help mine the mountains of experimental data in developmental biology using a method inspired by the principles of evolution. To demonstrate the system, the researchers put it to work on data from experiments on planaria — tiny worms whose extraordinary ability to regrow complex body parts when chopped up has made them a popular subject in regenerative medicine. Despite more than a century of attention from scientists, and increasing insight into the chemical pathways that control the stem cells responsible for the uncanny ability of these worms to regenerate, no one has been able to come up with a model that explains the process fully. That is, until now.
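The article gives no implementation details, but the evolution-inspired idea of mining experimental data can be sketched in miniature: propose candidate models, score them against observations, and keep mutating the best scorers. Everything below (the quadratic model form, the made-up data points, every name) is a hypothetical illustration, not the Tufts system.

```python
# Hypothetical illustration of evolution-inspired model search (not the
# Tufts system): mutate candidate coefficients (a, b) of a guessed model
# y = a*x + b*x**2 and keep the ones that best explain made-up data.
import random

DATA = [(1.0, 3.0), (2.0, 10.0), (3.0, 21.0)]  # invented (x, y) observations

def error(params):
    """Sum of squared residuals between model predictions and the data."""
    a, b = params
    return sum((y - (a * x + b * x ** 2)) ** 2 for x, y in DATA)

def mutate(params, scale=0.1):
    """Perturb each coefficient with small Gaussian noise."""
    return tuple(p + random.gauss(0, scale) for p in params)

pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(40)]
for _ in range(300):
    pop.sort(key=error)                               # selection
    pop = pop[:20] + [mutate(random.choice(pop[:20])) # variation
                      for _ in range(20)]

best = min(pop, key=error)
print("best fit a, b:", best, "error:", error(best))  # approaches a=1, b=2
```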

Is Stephen Hawking Right? Could AI Lead To The End Of Humankind? The famous theoretical physicist, Stephen Hawking, has revived the debate on whether our search for improved artificial intelligence will one day lead to thinking machines that will take over from us. The British scientist made the claim during a wide-ranging interview with the BBC. Hawking has the motor neurone disease amyotrophic lateral sclerosis (ALS), and the interview touched on new technology he is using to help him communicate. It works by modelling his previous word usage to predict what words he will use next, similar to predictive texting available on many smartphone devices.
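As a rough sketch of that kind of word prediction (assuming only the bigram-counting idea behind predictive texting, not anything about the actual system Hawking uses), one can count which words follow which in past text and suggest the most frequent successors:

```python
# Minimal sketch of predictive texting via bigram counts (an illustration
# under stated assumptions, not the system Hawking actually uses): learn
# which words tend to follow which, then suggest the likeliest successors.
from collections import Counter, defaultdict

def train(history):
    """Count, for every word, how often each other word follows it."""
    words = history.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, current, k=3):
    """Return the k words seen most often after `current`."""
    return [w for w, _ in model[current.lower()].most_common(k)]

model = train("the development of full artificial intelligence could "
              "spell the end of the human race the development of machines")
print(suggest(model, "the"))  # -> ['development', 'end', 'human']
```

Real predictive-text systems use longer contexts and smoothing, but the principle of modelling previous word usage is the same.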

But Professor Hawking also mentioned his concern over the development of machines that might surpass us. “Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever increasing rate,” he reportedly told the BBC. “The development of full artificial intelligence could spell the end of the human race.” Could thinking machines take over?

Bill Gates: Artificial intelligence could be dangerous. Worries about artificial intelligence possibly wiping out humanity aren’t just limited to science fiction writers — plenty of the world’s most brilliant people think developing AI could be a potentially catastrophic mistake as well. In addition to famed physicist Stephen Hawking and SpaceX CEO Elon Musk, Microsoft cofounder and philanthropist Bill Gates is now warning about the dangers of AI. During a recent Reddit AMA, Gates was asked how he felt about the potential dangers of AI and he responded that it’s definitely a cause for worry. “I am in the camp that is concerned about super intelligence,” Gates wrote.

“First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern.”

Edge.org. "In Just-spring when the world is mud-luscious / the little lame balloonman whistles far and wee / and eddie and bill come running from marbles and piracies / and it's spring when the world is puddle-wonderful." That "brillig thing of beauty electric" touches me deeply as I think about AI. The youthful exuberance of luscious mud puddles, playing with marbles or pretending to be a pirate, running weee... all of which is totally beyond explanation to a hypothetical intelligent machine entity. You could add dozens of cameras and microphones, touch sensors and voice output; would you seriously think it will ever go "weee", as in E. E. Cummings' (sadly abbreviated) 1916 poem? To me this is not the simplistic "machines lack a soul", but a "principle divide" between manipulating symbols versus actually grasping their true meaning.

Trouble is, we are still so often discussing AI with the terms and analogies of the early pioneers. We need a Three-Ring Test. What is real AI? But it is not all "iterative".

Profile for Eliezer_Yudkowsky. Eliezer S. Yudkowsky.