A.I.

The Dark Secret at the Heart of AI - MIT Technology Review

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. This raises mind-boggling questions.
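
The pipeline the article describes, raw camera frames in and steering commands out, learned by imitating a recorded human driver, is often called end-to-end behavioral cloning. The sketch below, in PyTorch, is one minimal way such a system could look; the DrivingNet architecture, layer sizes, input resolution, and random stand-in data are all illustrative assumptions, not Nvidia's actual model.

    # A minimal sketch of end-to-end behavioral cloning: a convolutional
    # network maps a camera frame directly to a steering command, trained
    # to imitate recorded human driving. Everything here is illustrative.
    import torch
    import torch.nn as nn

    class DrivingNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(48 * 5 * 22, 1),   # sized for a 66x200 input frame
            )

        def forward(self, frames):           # frames: (batch, 3, 66, 200)
            return self.net(frames)          # one steering angle per frame

    model = DrivingNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One supervised step: regress toward the human driver's recorded angles.
    frames = torch.randn(8, 3, 66, 200)      # stand-in for camera frames
    human_angles = torch.randn(8, 1)         # stand-in for recorded steering
    loss = nn.functional.mse_loss(model(frames), human_angles)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The opacity the article worries about falls directly out of this setup: once trained, the mapping from pixels to steering lives in millions of learned weights rather than in rules any engineer wrote down.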

Silicon Valley, Home to the Transhumanist Nightmare - The David Icke Videocast Trailer

AI Has Arrived, and That Really Worries the World's Brightest Minds

On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.

That someone of Musk’s considerable public stature was addressing an AI ethics conference, long the domain of obscure academics, was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.

Artificial Intelligence Aligned With Human Values | Q&A With Stuart Russell

In January, the British-American computer scientist Stuart Russell drafted and became the first signatory of an open letter calling for researchers to look beyond the goal of merely making artificial intelligence more powerful.

“We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial,” the letter states. “Our AI systems must do what we want them to do.” Thousands of people have since signed the letter, including leading artificial intelligence researchers at Google, Facebook, Microsoft and other industry hubs along with top computer scientists, physicists and philosophers around the world. By the end of March, about 300 research groups had applied to pursue new research into “keeping artificial intelligence beneficial” with funds contributed by the letter’s 37th signatory, the inventor-entrepreneur Elon Musk.

Autonomous or 'Semi' Autonomous Weapons? A Distinction Without Difference

Over the new year, I was fortunate enough to be invited to speak at an event on the future of Artificial Intelligence (AI) hosted by the Future of Life Institute.

The purpose of the event was to think through the various aspects of the future of AI, from its economic impacts to its technological abilities to its legal implications. I was asked to present on autonomous weapons systems and what those systems portend for the future. The thinking was that an autonomous weapon is, after all, one run on some AI software platform, and if autonomous weapons systems continue on their current trajectory, we will see more complex software architectures and stronger AIs. Thus the capabilities created in AI will directly affect the capabilities of autonomous weapons, and vice versa. While I was there to inform this impressive gathering about autonomous warfare, these bright minds left me with more questions about the future of AI and weapons.

Elon Musk donates $10m to keep artificial intelligence good for humanity | Technology

Elon Musk has put his money where his mouth is, donating $10m to an AI research institute, earmarked for a global research programme aimed at keeping AI beneficial to humanity. The money will be used to support the goals laid out by the more than 100 AI researchers who, in early January, signed an open letter put out by the Future of Life Institute (FLI) calling on scientists to avoid AI “pitfalls”.

“Here are all these leading AI researchers saying that AI safety is important”, said Musk. “I agree with them, so I’m today committing $10m to support research aimed at keeping AI beneficial for humanity.” The money will be used to support research around the world, and distributed through an open grants competition, which will begin taking applications on January 22. “Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere”, said FLI co-founder Viktoriya Krakovna.

FLI - Future of Life Institute | AI Open Letter

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research.
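
The letter's working notion of intelligence, making good decisions under uncertainty, has a standard decision-theoretic reading: a rational agent chooses the action with the highest expected utility. A toy sketch in Python, with the actions, probabilities, and utilities invented purely for illustration:

    # A toy sketch of decision-theoretic rationality: score each action by
    # its expected utility (probability-weighted value of its outcomes) and
    # pick the best. All the names and numbers below are made up.
    def expected_utility(action, outcome_probs, utility):
        return sum(p * utility[outcome]
                   for outcome, p in outcome_probs[action].items())

    def rational_choice(actions, outcome_probs, utility):
        return max(actions,
                   key=lambda a: expected_utility(a, outcome_probs, utility))

    outcome_probs = {
        "brake":  {"safe": 0.9, "collision": 0.1},
        "swerve": {"safe": 0.7, "collision": 0.3},
    }
    utility = {"safe": 1.0, "collision": -100.0}
    print(rational_choice(["brake", "swerve"], outcome_probs, utility))  # brake

The letter's demand that "our AI systems must do what we want them to do" can be read in exactly these terms: an agent that competently maximizes the wrong utility function will do what it was told rather than what we meant.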