
On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion.

AI Has Arrived, and That Really Worries the World's Brightest Minds

This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race. That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg. Musk and Hawking fret over an AI apocalypse, but there are more immediate threats: AI problems that seemed nearly unassailable just a few years ago are now being solved.

In January, the British-American computer scientist Stuart Russell drafted and became the first signatory of an open letter calling for researchers to look beyond the goal of merely making artificial intelligence more powerful.

Artificial Intelligence Aligned With Human Values

“We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial,” the letter states. “Our AI systems must do what we want them to do.” Thousands of people have since signed the letter, including leading artificial intelligence researchers at Google, Facebook, Microsoft, and other industry hubs, along with top computer scientists, physicists, and philosophers around the world. By the end of March, about 300 research groups had applied to pursue new research into “keeping artificial intelligence beneficial” with funds contributed by the letter’s 37th signatory, the inventor-entrepreneur Elon Musk.

Over the new year, I was fortunate enough to be invited to speak at an event on the future of Artificial Intelligence (AI) hosted by the Future of Life Institute.

Autonomous or 'Semi' Autonomous Weapons? A Distinction Without Difference

The purpose of the event was to think through the various aspects of the future of AI, from its economic impacts, to its technological abilities, to its legal implications. I was asked to present on autonomous weapons systems and what those systems portend for the future. The thinking was that an autonomous weapon is, after all, one run on some AI software platform, and if autonomous weapons systems continue along their current trajectory, we will see more complex software architectures and stronger AI. Thus the capabilities created in AI research will directly affect the capabilities of autonomous weapons, and vice versa.

While I was there to inform this impressive gathering about autonomous warfare, these bright minds left me with more questions about the future of AI and weapons. Elon Musk has put his money where his mouth is, donating $10m to an AI research institute earmarked for a global research programme aimed at keeping AI beneficial to humanity.

Elon Musk donates $10m to keep artificial intelligence good for humanity

The money will be used to support the goals laid out by more than 100 AI researchers in early January, when they signed an open letter put out by the Future of Life Institute (FLI) calling on scientists to avoid AI “pitfalls”. “Here are all these leading AI researchers saying that AI safety is important,” said Musk. “I agree with them, so I’m today committing $10m to support research aimed at keeping AI beneficial for humanity.” The money will be distributed through an open grants competition, which will begin taking applications on January 22. Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so it has focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment.

FLI - Future of Life Institute

In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. As capabilities in these and other areas cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investment in research.