Google Will Soon Know You Better Than Your Spouse Does, Top Exec Says. Ray Kurzweil, the director of engineering at Google, believes that the tech behemoth will soon know you even better than your spouse does. Kurzweil, whom Bill Gates has reportedly called "the best person [he knows] at predicting the future of artificial intelligence," told the Observer in a recent interview that he is working with Google to create a computer system that will be able to intimately understand human beings. (Read Kurzweil's full interview with the Observer here.) "I have a one-sentence spec, which is to help bring natural language understanding to Google," the 66-year-old tech whiz told the news outlet of his job. "My project is ultimately to base search on really understanding what the language means." "When you write an article, you're not creating an interesting collection of words," he continued. In short, the Observer writes, Kurzweil believes that Google will soon "know the answer to your question before you have asked it."
Risk of robot uprising wiping out human race to be studied (26 November 2012, last updated at 13:28 ET). In The Terminator, the machines start to turn on the humans. Cambridge researchers are to assess whether technology could end up destroying human civilisation. The Centre for the Study of Existential Risk (CSER) will study dangers posed by biotechnology, artificial life, nanotechnology and climate change. The scientists said that to dismiss concerns of a potential robot uprising would be "dangerous". Fears that machines may take over have been central to the plot of some of the most popular science fiction films. Perhaps most famous is Skynet, a rogue computer system depicted in the Terminator films. Skynet gained self-awareness and fought back after first being developed by the US military. But despite being the subject of far-fetched fantasy, researchers said the concept of machines outsmarting us demanded mature attention. "What we're trying to do is to push it forward in the respectable scientific community."
Can We Make the Hardware Necessary for Artificial Intelligence? My point of view is hardware-driven: I do electronic design. I don't present myself as "an authority" on artificial intelligence, much less "an authority" on sentient artificial intelligence; until those are Real Things, there is no such thing as an authority in that field. That said, if the hardware doesn't exist to support sentient AI, it doesn't matter how wonderful the software is.
Are we already living in the technological singularity? The news has been turning into science fiction for a while now. TVs that watch the watcher, growing tiny kidneys, 3D printing, the car of tomorrow, Amazon's fleet of delivery drones – so many news stories now "sound like science fiction" that the term returns 1,290,000 search results on Google. The pace of technological innovation is accelerating so quickly that it's possible to perform this test in reverse. Google an imaginary idea from science fiction and you'll almost certainly find scientists researching the possibility. Warp drive? The Multiverse? The most radical prediction of science fiction is the technological singularity. Imagine a graph charting the growth in modern computing power: an exponential curve that eventually turns almost vertical. That spike is the singularity. Today, as director of engineering at Google, Kurzweil is developing concrete policy based on his predictions. The most successful exploration of the singularity to date remains Accelerando by Charles Stross, a linked series of nine stories, first collected in 2005.
Why a superintelligent machine may be the last thing we ever invent. "...why would a smart computer be any more capable of recursive self-improvement than a smart human?" I think it mostly hinges on how artificial materials are more mutable than organic ones. We humans have already developed lots of ways to enhance our mental functions: libraries, movies, computers, crowdsourced R&D, etc. But actually changing the brain itself has been very slow going for us. "...would a single entity really be capable of comprehending the entirety of its own mind?" It probably won't need to. This idea also applies to doctors: no single doctor comprehends the entire human body in full detail, yet medicine as a whole still functions and advances. Or think of it this way: genes and cells are hardly intelligent enough to "design" a human brain, yet over billions of years of blind selection, intelligence was embedded in higher levels of biological organization (tissues, organs, systems of organs, etc.) that eventually led to an organ capable of dealing with the more rapid changes of culture and learning.
Robots aren’t getting smarter — we’re getting dumber. Huge artificial intelligence news! Our robot overlords have arrived! A “supercomputer” has finally passed the Turing Test! Except, well, maybe not. Here’s what actually happened: for five whole minutes, a chatbot managed to convince one out of three judges that it was “Eugene Goostman” — a 13-year-old Ukrainian boy with limited English skills. Alan Turing would not be impressed. So, raspberries to the Guardian and the Independent for uncritically buying into the University of Reading’s press campaign. But the bogosity of Eugene Goostman’s artificial intelligence does not mean that we shouldn’t be on guard for marauding robots. Proof of this arrives in research conducted by a group of Argentinian computer scientists in the paper “Reverse Engineering Socialbot Infiltration Strategies in Twitter.” Of the 120 socialbots deployed, only 38 were suspended. More surprisingly, the socialbots that generated synthetic tweets (rather than just reposting) performed better too. Hey, guess what?
Possibility of cloning quantum information from the past. Popular television shows such as "Doctor Who" have brought the idea of time travel into the vernacular of popular culture. But the problem of time travel is even more complicated than one might think. LSU's Mark Wilde has shown that it would theoretically be possible for time travelers to copy quantum data from the past. It all started when David Deutsch, a pioneer of quantum computing and a physicist at Oxford, came up with a simplified model of time travel to deal with the paradoxes that would occur if one could travel back in time. "The question is, how would you have existed in the first place to go back in time and kill your grandfather?" Deutsch originally resolved the grandfather paradox with a slight change to quantum theory, proposing that you could change the past as long as you did so in a self-consistent manner. "Meaning that, if you kill your grandfather, you do it with only probability one-half," Wilde said. "We can always look at a paper, and then copy the words on it."
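The "probability one-half" result can be made concrete in a toy model. The sketch below (an illustrative assumption, not Deutsch's full formalism, which traces out an interacting system) treats the grandfather paradox as a bit-flip: if you exist, the trip back prevents your birth, and vice versa. Deutsch's consistency condition then demands a state unchanged by the round trip, and the only such state assigns probability one-half to each outcome.

```python
import numpy as np

# Toy grandfather paradox: the round trip through the time machine
# flips the "do you exist?" bit, modeled as a NOT gate.
X = np.array([[0., 1.], [1., 0.]])

def is_consistent(rho):
    # Deutsch's self-consistency condition for this toy closed
    # timelike curve: the density matrix entering the time machine
    # must equal the one that comes out, rho = X rho X.
    return np.allclose(rho, X @ rho @ X)

definite = np.diag([1.0, 0.0])   # "you definitely exist": paradoxical
mixed    = np.diag([0.5, 0.5])   # exist / not-exist, each with prob. 1/2

print(is_consistent(definite))   # False: no self-consistent story
print(is_consistent(mixed))      # True: the probability-one-half fix
```

Because the NOT gate swaps the two diagonal entries, consistency forces them to be equal, and with unit trace each must be one-half, matching Wilde's description of Deutsch's resolution.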
Stephen Hawking Says A.I. Could Be Our 'Worst Mistake In History'. You have to understand that Stephen Hawking's mind is literally trapped in a body that has betrayed him. Sadly, the only thing he can do is think. The things he's been able to imagine and calculate using the power of his mind alone are mind-boggling. He treats AI as he would a more advanced human civilization. Computers are exceptionally good at calculation. Now, the one thing that computers can do very well - far better than humans - is make decisions based on logic. Computers are cooperative engines, believe it or not. SAI won't have fear - not like that.
How Artificial Superintelligence Will Give Birth To Itself. "So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity," he says. "This way, as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us." "From there, the AGI would be interested in pursuing whatever goals it was programmed with — such as research, exploration, or finance." I think this is a mistake. There are also a lot of things that we know we are inclined to do instinctively (i.e. we essentially do have some programmed "terminal values"), but that doesn't stop some people from breaking from those instincts — see, for example, suicide or killing one's own family, cases of people going against their survival instincts. Keep in mind that we're not talking about a human-like mind with paleolithic tendencies.
Google Officially Enters the Robotics Business With Acquisition of Seven Startups. Last year, I visited a warehouse behind a typically fashionable San Francisco café where two startups, Bot & Dolly and Autofuss, were busy making the insanely immersive visuals for the film Gravity (among a host of other projects) using naught but assembly line robots, clever software (Bot & Dolly's Maya-based IRIS), and high-def cameras. A few months later, I found myself in another warehouse — this time some forty minutes south of the city — where robotic arms, built and programmed by Industrial Perception, used advanced computer vision to sort toys and throw around boxes. What do these companies have in common? According to the New York Times, they were just secretly acquired by Google — along with four other robotics firms over the last six months — to design and build a fleet of next-generation robots under the direction of Andy Rubin, the former chief of Google's mobile operating system, Android.
Sure, Artificial Intelligence May End Our World, But That Is Not the Main Problem. The robots will rise, we’re told. The machines will assume control. For decades we have heard these warnings and fears about artificial intelligence taking over and ending humankind. Such scenarios are not only common currency in Hollywood but also increasingly find supporters in science and philosophy. For example, Ray Kurzweil wrote that the exponential growth of AI will lead to a technological singularity, a point when machine intelligence will overpower human intelligence. On Tuesday, leading scientist Stephen Hawking joined the ranks of the singularity prophets, especially the darker ones, as he told the BBC that “the development of full artificial intelligence could spell the end of the human race.” The problem with such scenarios is not that they are necessarily false — who can predict the future? About the author: Mark Coeckelbergh is Professor of Technology and Social Responsibility at De Montfort University in the UK, and is the author of Human Being @ Risk and Money Machines.