Interview with Elon Musk, the man who wants to stop machines from taking power. By creating OpenAI, a non-profit research team, Musk and Y Combinator hope to limit the risk of artificial intelligence going off the rails.
As if the field of artificial intelligence (AI) were not already competitive enough, with giants like Google, Apple, Facebook and Microsoft, and even carmakers like Toyota, scrambling to hire researchers, a newcomer has now joined the fray, with one slight difference. It is a non-profit called OpenAI, which promises to make its results public and its patents royalty-free, to ensure that the frightening prospect of computers surpassing human intelligence need not be the dystopia some fear.

OpenAI: Creating the standard for Artificial Intelligence.

Panpsychism. In philosophy, panpsychism is the view that mind or soul (Greek: ψυχή) is a universal feature of all things, and the primordial feature from which all others are derived.
The panpsychist sees himself or herself as a mind in a world of minds. Panpsychism is one of the oldest philosophical theories and can be ascribed to philosophers such as Thales, Plato, Spinoza, Leibniz and William James. Panpsychism can also be seen in Eastern philosophies such as Vedanta and Mahayana Buddhism.
During the 19th century, panpsychism was the default theory in philosophy of mind, but it declined during the middle years of the 20th century with the rise of logical positivism. Recent interest in the hard problem of consciousness has once again made panpsychism a mainstream theory.

Etymology. The term "panpsychism" derives from the Greek pan, meaning "all" or "everywhere", and psyche, meaning "soul" as the unifying center of the mental life of humans and other living creatures.
How Much Power Does The Human Brain Require To Operate?

The Coming Technological Singularity.

====================================================================
The Coming Technological Singularity: How to Survive in the Post-Human Era

Vernor Vinge
Department of Mathematical Sciences
San Diego State University

(c) 1993 by Vernor Vinge (Verbatim copying/translation and distribution of this entire article is permitted in any medium, provided this notice is preserved.)
This article was written for the VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993. It is also retrievable from the NASA technical reports server as part of NASA CP-10129. A slightly changed version appeared in the Winter 1993 issue of _Whole Earth Review_.

Abstract. Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

The AI Revolution: Our Immortality or Extinction.

The AI Revolution: Road to Superintelligence - Wait But Why.
Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what's happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. We are on the edge of change comparable to the rise of human life on Earth. — Vernor Vinge. What does it feel like to stand here?

Collaborative practices in education - François Taddei.

ParisTech Review – Are our education systems still suited to a world that is changing ever faster, a world that is less and less vertical and hierarchical and more and more horizontal and collaborative?
François Taddei – Our education systems are built on solving classic problems. Typically, to get into a grande école, you must pass competitive entrance exams that consist essentially of solving standard problems.

Google's artificial intelligence mastermind responds to Elon Musk's fears. We can get along fine, at least for a few decades, according to Demis Hassabis.
Demis Hassabis is an impressive guy: a former child prodigy, a chess master at 13, and the founder of DeepMind Technologies, a British artificial intelligence company that Google acquired last year. Now 38, he is at the forefront of an emerging technology with unmatched potential for good and bad. Hassabis and his researchers published a landmark paper this week, describing an algorithm that learns in a human-like manner.
Elon Musk, a DeepMind investor — the better, he says, to keep an eye on them — has led the charge, calling artificial intelligence mankind's greatest threat. At a news conference Tuesday, Hassabis addressed Musk's concerns.

Artificial intelligence: beware, even Bill Gates is scared! - L'Express L'Expansion.

FLI - Future of Life Institute. Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so it has focused on the problems surrounding the construction of intelligent agents - systems that perceive and act in some environment. In this context, "intelligence" is related to statistical and economic notions of rationality - colloquially, the ability to make good decisions, plans, or inferences.
The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI.

Stephen Hawking Thinks These 3 Things Could Destroy Humanity. Stephen Hawking may be most famous for his work on black holes and gravitational singularities, but the world-renowned physicist has also become known for his outspoken ideas about things that could destroy human civilization.
Hawking suffers from a motor neuron disease similar to amyotrophic lateral sclerosis (ALS), which has left him paralyzed and unable to speak without a voice synthesizer. But that hasn't stopped the University of Cambridge professor from making pronouncements about the wide range of dangers humanity faces — including ourselves. Here are a few things Hawking has said could bring about the demise of human civilization.