
Machine Intelligence Research Institute

The Machine Intelligence Research Institute (MIRI) is a non-profit organization founded in 2000 to research safety issues related to the development of strong AI. The organization advocates ideas initially put forth by I. J. Good and Vernor Vinge regarding an "intelligence explosion", or Singularity, which MIRI thinks may follow the creation of sufficiently advanced AI.[1] Research fellow Eliezer Yudkowsky coined the term Friendly AI to refer to a hypothetical super-intelligent AI that has a positive impact on humanity.[2] The organization has argued that to be "Friendly", a self-improving AI needs to be constructed in a transparent, robust, and stable way.[3] MIRI was formerly known as the Singularity Institute, and before that as the Singularity Institute for Artificial Intelligence. In 2003, the Institute appeared at the Foresight Senior Associates Gathering, where co-founder Eliezer Yudkowsky presented a talk titled "Foundations of Order".

Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'
Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring. The potential benefits are huge: everything that civilisation has to offer is a product of human intelligence. We cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

Most significant present-day AI developments

Clarke's three laws
Clarke's Three Laws are three "laws" of prediction formulated by the British science fiction writer Arthur C. Clarke. They are:
1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.
Clarke's First Law was proposed by Arthur C. Clarke in the 1962 essay "Hazards of Prophecy: The Failure of Imagination". The second law is offered as a simple observation in the same essay. The Third Law is the best known and most widely cited, and appears in Clarke's 1973 revision of "Hazards of Prophecy: The Failure of Imagination". A fourth law has been added to the canon, despite Sir Arthur Clarke's declared intention of not going one better than Sir Isaac Newton. There exist a number of snowclones and variations of the third law and its contrapositive: "Any technology distinguishable from magic is insufficiently advanced." The third law can also be reversed in fictional universes involving magic: "Any sufficiently analyzed magic is indistinguishable from science!"

VUB Artificial Intelligence Lab

Steve Steinberg on weak AI
Steve Steinberg, former Legion of Doom member and current Wall Street hacker, posted a rare update to his .CSV blog, and it's a doozy. He unpacks two big developments in "weak" artificial intelligence that manage to slip under the radar, mostly because they don't involve emotional robots or bring the Singularity a few days closer. Along the way, he shreds insurance companies that seek to correlate bad credit with bad driving, and pokes at Google's trust of "man over machine", a "cultural quirk", as Steve puts it, that's overlooked amid all the talk of algorithms and massive data sets. From .CSV: "While strong AI still lies safely beyond the Maes-Garreau horizon (a vanishing point, perpetually fifty years ahead), a host of important new developments in weak AI are poised to be commercialized in the next few years."

Darpa sets out to make computers that teach themselves The Pentagon's blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves -- while making it easier for ordinary schlubs like us to build them, too. When Darpa talks about artificial intelligence, it's not talking about modelling computers after the human brain. That path fell out of favour among computer scientists years ago as a means of creating artificial intelligence; we'd have to understand our own brains first before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms -- "probabilistic programming" -- to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better. But building such machines remains really, really hard: The agency calls it "Herculean". It's no surprise the mad scientists are interested. Image: Darpa
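The "probabilistic programming" idea Darpa describes can be illustrated without any special machinery: encode a hypothesis as a probability distribution and let the data update it. A minimal sketch in plain Python, using conjugate Beta-Bernoulli updating for a coin's bias (the function names and the coin-flip setting are my own illustration, not Darpa's system):

```python
# Minimal Bayesian updating: infer a coin's bias from observed flips.
# The model is a Beta(a, b) distribution over the bias; each observation
# tightens it -- the machine "teaches itself" from data rather than
# being hand-programmed with the answer.

def update(a, b, flips):
    """Conjugate Beta-Bernoulli update: heads increment a, tails increment b."""
    for flip in flips:
        if flip:
            a += 1
        else:
            b += 1
    return a, b

def posterior_mean(a, b):
    """Expected coin bias under the Beta(a, b) posterior."""
    return a / (a + b)

# Start from a uniform prior Beta(1, 1), then observe 8 heads and 2 tails.
a, b = update(1, 1, [True] * 8 + [False] * 2)
print(posterior_mean(a, b))  # 9/12 = 0.75
```

Real probabilistic-programming systems generalize exactly this pattern to models far too complex for closed-form updates, which is where the "Herculean" engineering lives.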

Big Data is the new Artificial Intelligence
This is the first of a couple of columns about a growing trend in Artificial Intelligence (AI) and how it is likely to be integrated into our culture. Computerworld ran an interesting overview article on the subject yesterday that got me thinking not only about where this technology is going but also about how it is likely to affect us, not just as a people but as individuals. How is AI likely to affect me? The answer is scary. Today we consider the general case and tomorrow the very specific. The first generation of Artificial Intelligence, or AI as it was called, didn't work: it absorbed hundreds of millions of Silicon Valley VC dollars before being declared a failure. You see, in today's version of Artificial Intelligence we don't need to teach our computers to perform human tasks: they teach themselves. Google Translate, for example, can be used online for free by anyone to translate text back and forth between more than 70 languages.

6 Mashups of Music and Artificial Intelligence | Epicenter  If there is one thing computers do well, it’s math. All of music’s raw components — key, mode, melody, harmony and rhythm — can be expressed mathematically. As a result, computers can help people make music, even if they don’t know their elbow from an F clef. The following apps for computer, web browser and smartphone put the power of artificially intelligent music creation in your hands or let you hear music that was created or manipulated by machines. Without further ado: uJam One of the most impressive demonstrations I’ve seen this year, uJam is the brainchild of longtime audio-software developers Peter Gorges and Axel Hensen and their celebrity partners Hans Zimmer (film composer for Dark Knight, Gladiator, Lion King) and Pharrell Williams (producer for Madonna, Shakira, Gwen Stefani). Following the freemium model, uJam will be free to use on a basic level, with add-ons available for purchase. Emily Howell You can’t use the artificially intelligent Emily Howell software yourself.
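The claim that music's raw components reduce to math is easiest to see for pitch: in twelve-tone equal temperament, each semitone step multiplies frequency by the twelfth root of two, anchored at A4 = 440 Hz. A small sketch in plain Python (the helper name is my own):

```python
# Equal-temperament pitch: each of the 12 semitones in an octave
# multiplies frequency by 2 ** (1/12), anchored at A4 = 440 Hz,
# which is MIDI note number 69.

def midi_to_hz(note: int) -> float:
    """Frequency in Hz of a MIDI note number under 12-tone equal temperament."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(midi_to_hz(69))  # A4 -> 440.0
print(midi_to_hz(81))  # A5, one octave higher -> 880.0
```

Melody, harmony and rhythm get similar numeric encodings (note sequences, interval ratios, onset grids), which is what lets software like the apps below generate or manipulate music algorithmically.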

Promises and Perils on the Road to Superintelligence
Global Brain / Image credit: mindcontrol.se
In the 21st century, we are walking an important road. Our species is alone on this road and it has one destination: super-intelligence. The most forward-thinking visionaries of our species were able to get a vague glimpse of this destination in the early 20th century. Paleontologist Pierre Teilhard de Chardin called this destination the Omega Point. Mathematician Stanislaw Ulam, recalling a discussion with John von Neumann, described "one conversation [that] centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." For thinkers like Chardin, this vision was spiritual and religious: God using evolution to pull our species closer to our destiny. Today the philosophical debates around this vision have become more varied, but also more focused on model building and scientific prediction. It's hard to make real sense of what this means. Promise #1: Omniscience
Cadell Last

Between Ape and Artilect
A compendium of interviews and dialogues originally appearing in H+ Magazine, Between Ape and Artilect has been released as a good old-fashioned paper book (or ebook) by Humanity+ Press, available for purchase via Amazon.com, or as a free PDF here. The book is edited by noted AI researcher and long-time Humanity+ Board member Ben Goertzel. During 2010-12, Dr. Goertzel conducted a series of textual interviews with researchers in various areas of cutting-edge science: artificial general intelligence, nanotechnology, life extension, neurotechnology, collective intelligence, mind uploading, body modification, neuro-spiritual transformation, and more. Between Ape and Artilect is a must-read if you want the real views, opinions, ideas, muses and arguments of the people creating our future. Itamar Arel: AGI via Deep Learning. Pei Wang: What Do You Mean by "AI"?
