
Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?' - Science - News

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring. The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone's list. Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

Related: Robot, Artificial Intelligence, Pensées

J J Bryson: Ethics, Robots, Artificial Intelligence (AI), and Society. Last revised 22 January 2017 (just the media links). For my latest views, see also my blogposts on AI and on Ethics. Everyone should think about the ethics of the work they do, and the work they choose not to do. Artificial intelligence and robots often seem like fun science fiction, but in fact they already affect our daily lives. For example, services like Google and Amazon help us find what we want by using AI. They learn both from us and about us when we use them.

How Much Longer Before Our First AI Catastrophe? As I distinctly recall, some speculated that the stock market crash of 1987 was due to high-frequency trading by computers, and mindful of this possibility, I think regulators passed rules to prevent computers from trading in that specific pattern again. I remember something vague about "throttles" being installed in the trading software that kick in whenever they see a sudden, global spike in the area in which they are trading. These throttles were supposed to slow trading to a point where human operators could see what was happening and judge whether an unintentional feedback loop was underway. That was 1987. I don't know whether regulators have mandated other changes to computer trading software in the various panics and spikes since then. But yes, I definitely agree this is a very good example of where narrow AI got us into trouble.
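The "throttle" mechanism recalled above can be sketched as a toy circuit breaker: halt automated trading for human review whenever prices move too far, too fast. This is purely illustrative; the class name, window size and thresholds are hypothetical, not any exchange's actual rules.

```python
# Toy sketch of a trading "throttle" (circuit breaker), as described above.
# All names and thresholds are hypothetical illustrations.

from collections import deque

class Throttle:
    def __init__(self, window=5, max_move=0.05):
        self.prices = deque(maxlen=window)  # recent trade prices
        self.max_move = max_move            # allowed fractional move per window

    def record(self, price):
        """Record a trade price. Returns True if trading may continue,
        False if the throttle trips and humans should review."""
        self.prices.append(price)
        if len(self.prices) < 2:
            return True
        move = abs(self.prices[-1] - self.prices[0]) / self.prices[0]
        return move <= self.max_move

throttle = Throttle(window=5, max_move=0.05)
print(throttle.record(100.0))  # True
print(throttle.record(101.0))  # True  (1% move, within bounds)
print(throttle.record(94.0))   # False (6% drop trips the throttle)
```

The point of the design is the one the commenter remembers: the machine does not try to decide whether the spike is a feedback loop; it merely slows down so that humans can.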

For a Booming Economy, Bet on High-Growth Firms, Not Small Businesses - by Daniel Isenberg and Ross Brown | 8:00 AM February 3, 2014. "Small business is the backbone of our economy." - President Barack Obama, August 17, 2010. "New businesses are the lifeblood of a healthy… economy."

Big Data is the new Artificial Intelligence. This is the first of a couple of columns about a growing trend in Artificial Intelligence (AI) and how it is likely to be integrated into our culture. Computerworld ran an interesting overview article on the subject yesterday that got me thinking not only about where this technology is going but about how it is likely to affect us, not just as a people but as individuals. How is AI likely to affect me? The answer is scary.

Bill Gates, Stephen Hawking Say Artificial Intelligence Represents Real Threat. I worry about a lot of things — my health, my kids, and the size of my retirement account. I never worry about an impending robot apocalypse ... but maybe I should. A handful of very smart people in the science and technology worlds are worried about that very thing.

National Science Foundation and federal partners award $31.5M to advance co-robots in US. From disaster recovery to caring for the elderly in the home, scientists and engineers are developing robots that can handle critical tasks in close proximity to humans, safely and with greater resilience than previous generations of intelligent machines. Today the National Science Foundation (NSF), in partnership with the National Institutes of Health, the US Department of Agriculture and NASA, announced $31.5 million in new awards to spur the development and use of co-robots — robots that work cooperatively with people. The awards mark the third round of funding made through the National Robotics Initiative (NRI), a multi-agency program launched in September 2012 as part of the Advanced Manufacturing Partnership Initiative, with NSF as the lead federal agency. The 52 new research awards, ranging from $300,000 to $1.8 million over one to four years, advance fundamental understanding of robotic sensing, motion, computer vision, machine learning and human-computer interaction.

Ethical trap: robot paralysed by choice of who to save - 14 September 2014. Video: Ethical robots save humans. Can a robot learn right from wrong? Attempts to imbue robots, self-driving cars and military machines with a sense of ethics reveal just how hard this is. Can we teach a robot to be good? Fascinated by the idea, roboticist Alan Winfield of Bristol Robotics Laboratory in the UK built an ethical trap for a robot – and was stunned by the machine's response.

15 Things Highly Confident People Don't Do. Highly confident people believe in their ability to achieve. If you don't believe in yourself, why should anyone else put their faith in you? To walk with swagger and improve your self-confidence, watch out for these fifteen things highly confident people don't do. 1. They don't make excuses.
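Winfield's "ethical trap" above can be caricatured as a toy consequence engine: score each candidate action by the harm it fails to prevent, then pick the minimum. With one endangered human the choice is clear; with two, the scores tie, which is the seed of the paralysis the article reports. The function names and scoring are illustrative assumptions, not Winfield's actual implementation.

```python
# Toy "consequence engine": choose the action minimising predicted harm.
# Purely illustrative; not Alan Winfield's actual code.

def predicted_harm(action, humans_in_danger):
    # Each human the action fails to save contributes one unit of harm.
    saved = {action} if action in humans_in_danger else set()
    return len(humans_in_danger - saved)

def choose_action(humans_in_danger):
    actions = sorted(humans_in_danger) + ["do_nothing"]
    scores = {a: predicted_harm(a, humans_in_danger) for a in actions}
    best = min(scores.values())
    # With two endangered humans, "save A" and "save B" tie: the engine
    # has no principled tie-breaker, mirroring the observed dithering.
    return [a for a in actions if scores[a] == best]

print(choose_action({"A"}))       # ['A'] -> clear choice, human saved
print(choose_action({"A", "B"}))  # ['A', 'B'] -> tie: paralysed by choice
```

A real robot must act in continuous time, so a tie does not mean a clean coin flip; it means repeatedly re-evaluating as the situation changes, which is how dithering can cost both humans.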

Between Ape and Artilect. A compendium of interviews and dialogues originally appearing in H+ Magazine, Between Ape and Artilect has been released as a good old-fashioned paper book (or ebook) by Humanity+ Press, available for purchase or as a free PDF. The book is edited by noted AI researcher and long-time Humanity+ Board member Ben Goertzel. During 2010-12, Dr. Goertzel conducted a series of textual interviews with researchers in various areas of cutting-edge science — artificial general intelligence, nanotechnology, life extension, neurotechnology, collective intelligence, mind uploading, body modification, neuro-spiritual transformation, and more. These interviews were published online in H+ Magazine and are here gathered together in a single volume. Between Ape and Artilect is a must-read if you want the real views, opinions, ideas, musings and arguments of the people creating our future.

Eric Horvitz Receives AAAI Feigenbaum Prize; Shares Reflections on AI Research - Inside Microsoft Research. Posted by Eric Horvitz. Editor's note: Eric Horvitz, managing director of Microsoft Research's Redmond Lab, shares some reflections upon receiving the AAAI Feigenbaum Prize. Horvitz is being recognized by the AAAI for "sustained and high-impact contributions to the field of artificial intelligence through the development of computational models of perception, reflection and action, and their application in time-critical decision making, and intelligent information, traffic, and healthcare systems." How do our minds work? How can our thinking, perceiving, and all of our experiences arise in networks of neurons? I have wondered about answers to these questions for as long as I can remember.

The Problem With China's Giant Robot Ambitions BEIJING — The Chinese government wants the country to transition from a "manufacturing giant" to a "manufacturing power," and in no industry is that ambition clearer than robotics. China's Ministry of Industry has announced a five-year plan to promote the sector, including the formulation of a robot industry technological roadmap. The goal is ultimately to grasp a 45% share of the world’s high-end robot market by 2020. Earlier this year President Xi Jinping mentioned in a speech that the "Robot Revolution" is expected to become an entry point as well as an important growth vector of the "Third Industrial Revolution," with a vast impact on the global manufacturing landscape.

Scientists have created next-generation data storage devices that mimic the memory of the human brain. The new technology could revolutionise the way we store data, and take scientists a step closer to creating a bionic brain. Scientists from RMIT University in Australia have built a new nano-device that will act as the platform for next-generation nanoscale memory devices that are highly stable and reliable. There are two types of memory - volatile and non-volatile. Non-volatile memory retains stored data even when not powered, and at the moment the main non-volatile storage we use is flash memory. While this works well, the technology has reached its scaling limits and it's getting harder and harder to make devices smaller while storing more data. But the Australian scientists have now created the platform for revolutionary new nanoscale devices that will allow computers to store significantly more data by mimicking human memory.