'What Every Person Should Know About War'

What is a war? War is defined as an active conflict that has claimed more than 1,000 lives.

Has the world ever been at peace? Of the past 3,400 years, humans have been entirely at peace for 268 of them, or just 8 percent of recorded history.

How many people have died in war? At least 108 million people were killed in wars in the twentieth century.

How many people around the world serve in the military? The combined armed forces of the world have 21.3 million people.

How many wars are taking place right now? At the beginning of 2003 there were 30 wars going on around the world.
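The peace figure above is simple arithmetic on the source's own numbers, and a one-line check confirms the rounding:

```python
# Check the claim that 268 peaceful years out of 3,400 is "just 8 percent".
peaceful_years = 268
total_years = 3400
share = peaceful_years / total_years * 100
print(round(share, 1))  # 7.9 -- which rounds to the quoted 8 percent
```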
Is there a genetic reason why we fight? There is no single "war gene."

Is war essentially male? Worldwide, 97 percent of today's military personnel are male.

Can women fight as effectively as men do? Yes.

Why are civilians so attracted to war? War is often regarded by observers as honorable and noble.

Does the American public support war? How large is the American military?

We Live in a Jungle of Artificial Intelligence that will Spawn Sentience.

You don't have a flying car, jetpack, or ray gun, but this is still the future. How do I know? Because we're all surrounded by artificial intelligence. I love it when friends ask me when we'll develop smart computers... because they're usually holding one in their hands. Your phone calls are routed with artificial intelligence. Every time you use a search engine, you're taking advantage of data collected by 'smart' algorithms.
When you call the bank and talk to an automated voice, you are probably talking to an AI... just a very annoying one. How did we create the jungle of AI that surrounds us today? Back in the late 80s, scientists started rethinking the way they pursued AI. Here's Kurzweil's answer at the panel discussion: along with increased processing power, artificial intelligence really took off in the 90s. Now that list of tasks has expanded. Kurzweil and others predict the continued growth of processing power, which will help enable a human-like artificial intelligence.

The Myth of the Three Laws of Robotics - Why We Can't Control Intelligence.

Like many of you, I grew up reading science fiction, and to me Isaac Asimov was a god of the genre. From 1939 until the mid-90s, the author created many lasting tropes and philosophies that would define scifi for generations, but perhaps his most famous creation was the Three Laws of Robotics.
Conceived as a means of evolving robot stories from mere re-tellings of Frankenstein, the Three Laws were a fail-safe built into robots in Asimov's fiction. These laws, which robots had to obey, protected humans from being hurt and made robots obedient. This concept helped form the real-world belief among robotics engineers that they could create intelligent machines that would coexist peacefully with humanity. Let's get something out of the way.

[Image, clockwise: the Terminator, a Matrix 'squid', Megatron, a Cylon centurion, and HAL... aka Communism, Existentialism, Energy Crisis, Terrorism, and Xenophobia.]
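Asimov's Three Laws are, at bottom, a strict priority ordering: the First Law overrides the Second, which overrides the Third. As a purely illustrative sketch (the `Action` data model and `permitted` function are hypothetical, invented here, and not drawn from any real robotics system), that ordering could be encoded like this:

```python
# Illustrative only: Asimov's Three Laws as a strict priority ordering.
# The Action class and its fields are hypothetical, made up for this sketch.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would carrying this out injure a human?
    allows_human_harm: bool    # would doing this let a human come to harm?
    obeys_order: bool          # was this action ordered by a human?
    endangers_self: bool       # would it damage the robot itself?

def permitted(action: Action) -> bool:
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if action.harms_human or action.allows_human_harm:
        return False
    # Third Law yields to the Second: the robot may endanger itself
    # only when a human order requires it.
    if action.endangers_self and not action.obeys_order:
        return False
    return True

print(permitted(Action(True, False, True, False)))   # False: First Law wins
print(permitted(Action(False, False, True, True)))   # True: order overrides self-preservation
```

Even this toy encoding exposes the problem the article is driving at: deciding whether an action "allows a human to come to harm" is precisely the hard, unsolved judgment the Laws quietly assume a machine can make.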
Asimov's robots are where the concern really lies. Friendly AIs.
Moore’s Law Is Showing Its Age.

Ray Kurzweil

Raymond Kurzweil (KURZ-wyle; born February 12, 1948) is an American inventor and futurist. He is involved in fields such as optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments. He has written books on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism. Kurzweil is a public advocate for the futurist and transhumanist movements, and gives public talks to share his optimistic outlook on life extension technologies and the future of nanotechnology, robotics, and biotechnology. Kurzweil has been employed by Google since 2012, where he is a "director of engineering".

Life, inventions, and business career

Early life

Kurzweil grew up in the New York City borough of Queens. Kurzweil attended Martin Van Buren High School.

Mid-life

While in high school, Kurzweil had corresponded with Marvin Minsky and was invited to visit him at MIT, which he did.
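Moore's Law, named in the heading above and underlying the "continued growth of processing power" that Kurzweil's predictions depend on, is at bottom a simple exponential. A minimal sketch of that arithmetic (the two-year doubling period and the starting transistor count below are illustrative assumptions, not measured data):

```python
# Sketch of Moore's Law as an exponential:
#   count(t) = count0 * 2 ** (years / doubling_period)
# The starting count and the 2-year doubling period are illustrative assumptions.

def projected_transistors(count0: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward under a fixed doubling period."""
    return count0 * 2 ** (years / doubling_period)

# From a hypothetical 1 million transistors, a decade of 2-year
# doublings gives 2**5 = 32x growth.
print(projected_transistors(1e6, 10))  # 32000000.0
```

The "showing its age" point is that this fixed doubling period is exactly the assumption that has been slipping in recent years.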
FLI - Future of Life Institute.

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers with machines is good because it reduces casualties for the owner, but bad because it thereby lowers the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.

The 20,257 Open Letter Signatories Include:

FLI - Future of Life Institute.

Frequently Asked Questions about the Future of Artificial Intelligence

Q: Who conceived of and wrote FLI's open letter on robust and beneficial AI?
A: The open letter has been an initiative of the Future of Life Institute (especially the FLI founders and Berkeley AI researcher and FLI Advisory Board Member Stuart Russell) in collaboration with the AI research community (including a number of signatories).

Q: What sorts of AI systems is this letter addressing?
A: There is indeed a proliferation of meanings of the term "Artificial Intelligence", largely because the intelligence we humans enjoy is actually comprised of many different capabilities.
Q: What are the concerns behind FLI's open letter on autonomous weapons?
A: Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

Q: Why is the future of AI suddenly in the news?

Q: What is the general nature of the concern about AI safety?

FLI - Future of Life Institute.

Playing with Technological Dominoes

Advancing Research in an Era When Mistakes Can Be Catastrophic
by Sophie Hebden, April 7, 2015

The new Centre for the Study of Existential Risk at Cambridge University isn’t really there, at least not as a physical place—not yet. For now, it’s a meeting of minds, a network of people from diverse backgrounds who are worried about the same thing: how new technologies could cause huge fatalities and even threaten our future as a species.
But plans are coming together for a new phase for the centre to be in place by the summer: an on-the-ground research programme. We learn valuable information by creating powerful viruses in the lab, but risk a pandemic if an accident releases one. Ever since our ancestors discovered how to make sharp stones more than two and a half million years ago, our mastery of tools has driven our success as a species. "At its heart, CSER is about ethics and the value you put on the lives of future, unborn people." - Huw Price

FLI - Future of Life Institute.

Artificial Intelligence: The Danger of Good Intentions

Why well-intentioned AI could pose a greater threat to humanity than malevolent cyborgs.
by Nathan Collins, March 13, 2015

[Photo: Nate Soares (left) and Nisan Stiennon (right) of the Machine Intelligence Research Institute. Credit: Vivian Johnson]

The Terminator had Skynet, an intelligent computer system that turned against humanity, while the astronauts in 2001: A Space Odyssey were tormented by their spaceship’s sentient computer HAL 9000, which had gone rogue.
The idea that artificial systems could gain consciousness and try to destroy us has become such a cliché in science fiction that it now seems almost silly. But prominent experts in computer science, psychology, and economics warn that while the threat is probably more banal than those depicted in novels and movies, it is just as real—and unfortunately much more challenging to overcome. Back in 2000, Yudkowsky had somewhat different aims. "Humans have huge anthropomorphic blind spots." - Nate Soares
The story of artificial intelligence.

The Search For Artificial Intelligence

In the pale light of a laboratory, a white humanoid quietly contemplates a series of objects. A toy car is held up. "Toy car," the robot says, with barely a pause. For decades, the concept of a machine that could not only recognise objects, but could also be taught to learn the shapes and distinctive features of new ones, was the preserve of SF writers like Isaac Asimov. But in 2000, fiction became a reality when Japanese scientists created Asimo (an acronym which stands for Advanced Step in Innovative MObility), a plastic-shelled humanoid standing some four feet tall and capable of recognising objects, faces, hand gestures and speech.
While robots and benevolent computers have existed in literature and philosophy for centuries, it was only in the years following World War II that artificial intelligence began to move from the realms of science fiction to reality - and it all began in England, with a quiet game of chess. Alan Turing.

Artificial intelligence gets its name, plays lots of chess.

AI Gets Its Name

In 1965, scientist Herbert Simon optimistically declared that "machines will be capable, within twenty years, of doing any work a man can do," while in a 1970 article for Life magazine Marvin Minsky claimed that "in from three to eight years we will have a machine with the general intelligence of an average human being." Unfortunately, that initial burst of optimism felt between the late 50s and early 60s would soon dissipate.
In 1973, British mathematician Sir James Lighthill wrote a highly critical report on AI research, stating that "in no part of the field have discoveries made so far produced the major impact that was then promised." The subsequent withdrawal of vital funds in both the US and UK dealt a serious blow to AI research, leading to the first 'AI winter', which would last until the early 80s.
The 80s and Expert Systems

AI research briefly recovered from its winter in the early 80s.

[Image: Garry Kasparov, in the midst of one of his battles with IBM's Deep Blue.]

Artificial intelligence, perception, and achievements.

Perception And Mobility

The field of humanoid robotics is one offshoot of AI that attempts to meet the challenges of perception and mobility head on – a type of research not looked upon favourably by all members of the scientific community. Doctor Marvin Minsky, an important voice of optimism in the early days of AI, was highly critical of such projects.
Speaking to Wired magazine in 2003, he was particularly scornful of the field of robotics. "The worst fad has been these stupid little robots," said Minsky. "Graduate students are wasting three years of their lives soldering and repairing robots, instead of making them smart. It's really shocking." Such comments didn't deter projects like Asimo, Honda's attempt to create a humanoid robot.
Honda's Asimo

Despite its impressive abilities, Asimo is but a small shuffle forward on the road of AI research; a cutting-edge synthesis of already extant technologies and programs.

Artificial intelligence, emotion, singularity, and awakening.

The Heartless System

For professor Noel Sharkey, the greatest danger posed by AI is its lack of sentience, rather than the presence of it. As warfare, policing and healthcare become increasingly automated and computer-powered, their lack of emotion and empathy could create significant problems. "Eldercare robotics is being developed quite rapidly in Japan," Sharkey said.
"Robots could be greatly beneficial in keeping us out of care homes in our old age, performing many dull duties for us and aiding in tasks that failing memories make difficult. But it is a trade-off. My big concern is that once the robots have been tried and tested, it may be tempting to leave us entirely in their care. Like all humans, the elderly need love and human contact, and this often only comes from visiting carers." The GeckoSystems CareBot. This lack of empathy could become particularly problematic in the theatre of war.

Bill Gates is worried about artificial intelligence too.

Bill Gates has a warning for humanity: Beware of artificial intelligence in the coming decades, before it's too late. Microsoft's co-founder joins a list of science and industry notables, including famed physicist Stephen Hawking and Internet innovator Elon Musk, in calling out the potential threat from machines that can think for themselves.
Gates shared his thoughts on AI on Wednesday in a Reddit "AskMeAnything" thread, a Q&A session conducted live on the social news site that has also featured President Barack Obama and World Wide Web founder Tim Berners-Lee. "I am in the camp that is concerned about super intelligence," Gates said in response to a question about the existential threat posed by AI. "First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well." Gates, who is co-chair of the Bill & Melinda Gates Foundation, isn't the only one worried.
Fearing the worst

"It is safe for now!"

Are the robots about to rise? Google's new director of engineering thinks so… | Technology

It's hard to know where to start with Ray Kurzweil. With the fact that he takes 150 pills a day and is intravenously injected on a weekly basis with a dizzying list of vitamins, dietary supplements, and substances that sound about as scientifically effective as face cream: coenzyme Q10, phosphatidylcholine, glutathione? With the fact that he believes that he has a good chance of living for ever? He just has to stay alive "long enough" to be around for when the great life-extending technologies kick in (he's 66 and he believes that "some of the baby-boomers will make it through").
Or with the fact that he's predicted that in 15 years' time, computers are going to trump people. That they will be smarter than we are. Not just better at doing sums than us and knowing what the best route is to Basildon. They already do that. But then everyone's allowed their theories. So far, so sci-fi. Well, yes.