
Moravec's paradox
Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility." Linguist and cognitive scientist Steven Pinker considers this the most significant discovery uncovered by AI researchers: the main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard. Marvin Minsky emphasizes that the most difficult human skills to reverse engineer are those that are unconscious.

Investing in the IT That Makes a Competitive Difference. The Idea in Brief: it's not just you; it really is getting harder to outpace the other guys. Since the mid-1990s, competition in the U.S. economy has accelerated to unprecedented levels. To gain, and keep, a competitive edge in this environment, McAfee and Brynjolfsson recommend a three-step strategy, beginning with deploying a consistent technology platform rather than stitching together a jumble of legacy systems. By taking these steps, elevator-systems maker Otis realized not only dramatically shorter sales-cycle times but higher revenues and operating profit. The Idea in Practice: the authors recommend these steps for staying ahead of rivals through IT-enabled process innovation. Deploy: before deploying a consistent platform, Cisco's various units had nine different tools for checking an order's status. Innovate: U.K. grocery chain Tesco has long used customer-rewards cards to collect detailed data on individual purchases, to categorize customers, and to tailor offers. Propagate.

Neuro Evolving Robotic Operatives. Neuro-Evolving Robotic Operatives, or NERO for short, is a unique computer game that lets you play with adapting intelligent agents hands-on. Evolve your own robot army by tuning their artificial brains for challenging tasks, then pit them against your friends' teams in online competitions! New features in NERO 2.0 include an interactive game mode called territory capture, as well as a new user interface and more extensive training tools. NERO is a result of an academic research project in artificial intelligence, based on the rtNEAT algorithm. It is also a platform for future research on intelligent agent technology. The NERO project is run by the Neural Networks Group of the Department of Computer Sciences at the University of Texas at Austin. Currently, we are developing an open source successor to NERO, OpenNERO, a game platform for AI research and education.

Grey goo. Grey goo (also spelled gray goo) is a hypothetical end-of-the-world scenario involving molecular nanotechnology in which out-of-control self-replicating robots consume all matter on Earth while building more of themselves,[1][2] a scenario that has been called ecophagy ("eating the environment").[3] The original idea assumed machines were designed to have this capability, while popularizations have assumed that machines might somehow gain this capability by accident. The term was first used by molecular nanotechnology pioneer Eric Drexler in his book Engines of Creation (1986). In Chapter 4, "Engines of Abundance," Drexler illustrates both exponential growth and inherent limits (not grey goo) by describing nanomachines that can function only if given special raw materials. He describes grey goo itself in Chapter 11, where early assembler-based replicators could outcompete the most advanced modern organisms.

Introducing: Flickr PARK or BIRD. tl;dr: check it out at parkorbird.flickr.com! We at Flickr are not ones to back down from a challenge. Especially when that challenge comes in webcomic form. And especially when that webcomic is xkcd. In fact, we already had the technology in place to do these things, so we put the pieces together, and thus was born parkorbird.flickr.com! Recognizing Stuff in Images with Deep Networks: the thing we're really excited to show off with PARK or BIRD is our image recognition technology. The model, a deep neural network, transforms an input image into a representation in which different objects and scenes are easily distinguishable by a simple binary classification algorithm, like an SVM. Each successive layer of the network, after training on millions of images, has learned to recognize higher- and higher-level features of images and the ways these features go together to form different objects and scenes. If this all sounds like a challenge you're interested in helping out with, you should join us!
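The pipeline described here, deep-network features followed by a simple linear decision, can be sketched with toy numbers. Everything below is illustrative: the 2-D "feature vectors" stand in for the network's output, and a bare-bones perceptron stands in for the SVM.

```python
# Hypothetical sketch of the two-stage pipeline: a deep network maps each
# image to a feature vector; a simple linear classifier then separates the
# two classes in that feature space. (Toy data, not Flickr's actual model.)

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a linear classifier on precomputed feature vectors."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is +1 or -1
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy 2-D "deep features": pretend dimension 0 separates the classes.
feats = [[2.0, 0.3], [1.5, -0.2], [-1.8, 0.1], [-2.2, 0.4]]
labels = [1, 1, -1, -1]  # +1 = bird, -1 = park (illustrative labels)
w, b = train_perceptron(feats, labels)
```

The point of the design is that the hard part, learning a representation where the classes are linearly separable, is done once by the deep network; the final classifier can then be as simple as this.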

Futurist Ray Kurzweil isn’t worried about climate change | Need to Know. Author, inventor and futurist Ray Kurzweil famously and accurately predicted that a computer would beat a human at chess by 1998, that technologies that help spread information would accelerate the collapse of the Soviet Union, and that a worldwide communications network would emerge in the mid-1990s (i.e., the Internet). Most of Kurzweil’s prognostications are derived from his law of accelerating returns: the idea that information technologies progress exponentially, in part because each iteration is used to help build the next, better, faster, cheaper one. In the case of computers, this is not just a theory but an observable trend: computer processing power has doubled every two years for nearly half a century. Kurzweil also believes this theory can be applied to solar energy. I caught up with Kurzweil when he was in New York promoting a new documentary about his life to ask him about his optimistic views on the usually gloomy subject of energy and climate change.
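The compounding behind that doubling trend is easy to understate. A quick back-of-the-envelope calculation (plain Python, illustrative numbers only) shows why such growth is called exponential:

```python
# Doubling every two years, sustained for half a century:
years = 50
doublings = years // 2          # one doubling per two-year period
factor = 2 ** doublings         # 25 doublings -> about a 33-million-fold increase
```

Each step is modest; it is the accumulation of doublings that produces the dramatic totals Kurzweil's argument rests on.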

Foxconn to replace workers with 1 million robots in 3 years SHENZHEN, July 29 (Xinhua) -- Taiwanese technology giant Foxconn will replace some of its workers with 1 million robots in three years to cut rising labor expenses and improve efficiency, said Terry Gou, founder and chairman of the company, late Friday. The robots will be used to do simple and routine work such as spraying, welding and assembling which are now mainly conducted by workers, said Gou at a workers' dance party Friday night. The company currently has 10,000 robots and the number will be increased to 300,000 next year and 1 million in three years, according to Gou. Foxconn, the world's largest maker of computer components which assembles products for Apple, Sony and Nokia, is in the spotlight after a string of suicides of workers at its massive Chinese plants, which some blamed on tough working conditions. The company currently employs 1.2 million people, with about 1 million of them based on the Chinese mainland.

OVERVIEW OF NEURAL NETWORKS. This installment addresses the subject of computer models of neural networks and the relevance of those models to the functioning brain. The computer field of Artificial Intelligence is a vast bottomless pit which would lead this series too far from biological reality, and too far into speculation, to be included. Neural network theory will be the singular exception, because the model is so persuasive and so important that it cannot be ignored. Neurobiology provides a great deal of information about the physiology of individual neurons as well as about the function of nuclei and other gross neuroanatomical structures. But understanding the behavior of networks of neurons is exceedingly challenging for neurophysiology, given current methods. Nonetheless, network behavior is important, especially in light of evidence for so-called "emergent properties", i.e., properties of networks that are not obvious from an understanding of neuron physiology.
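A classic, concrete illustration of such an emergent property is XOR: no single threshold neuron can compute it, yet a small two-layer network of the same units can. The sketch below uses hand-set weights on simple McCulloch-Pitts-style threshold units (illustrative values, not a biological model):

```python
def neuron(inputs, weights, bias):
    """McCulloch-Pitts threshold unit: fires iff the weighted sum exceeds 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_net(x1, x2):
    # Two hidden units, each computing something a single neuron CAN do...
    h_or = neuron([x1, x2], [1, 1], -0.5)      # fires if either input fires
    h_nand = neuron([x1, x2], [-1, -1], 1.5)   # fires unless both inputs fire
    # ...whose combination yields XOR, which no single threshold unit can compute.
    return neuron([h_or, h_nand], [1, 1], -1.5)  # AND of the two hidden units
```

The XOR behavior belongs to the network as a whole, not to any one unit, which is exactly the sense of "emergent property" used above.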

Why the future doesn't need us. While some critics have characterized Joy's stance as obscurantism or neo-Luddism, others share his concerns about the consequences of rapidly expanding technology.[1] Joy argues that developing technologies pose a much greater danger to humanity than any technology before them. In particular, he focuses on genetics, nanotechnology and robotics, and also voices concern about increasing computer power. In The Singularity Is Near, Ray Kurzweil questioned the regulation of potentially dangerous technology, asking, "Should we tell the millions of people afflicted with cancer and other devastating conditions that we are canceling the development of all bioengineered treatments because there is a risk that these same technologies may someday be used for malevolent purposes?" Reference: [1] Khushf, George (2004).

Google's AI Is Now Smart Enough to Play Atari Like the Pros. Last year Google shelled out an estimated $400 million for a little-known artificial intelligence company called DeepMind. Since then, the company has been pretty tight-lipped about what’s been going on behind DeepMind’s closed doors, but here’s one thing we know for sure: there’s a professional videogame tester who’s pitted himself against DeepMind’s AI software in a kind of digital battle royale. The battlefield was classic videogames. Google didn’t spend hundreds of millions of dollars because it’s expecting an Atari revival, but this new research does offer a hint as to what Google hopes to achieve with DeepMind. By merging these two techniques, deep learning and reinforcement learning, Google has built “a general-learning algorithm that should be applicable to many other tasks,” says Koray Kavukcuoglu, a Google researcher. But there are other interesting areas as well. Hassabis won’t tell us whether Google is running robot simulations too, but it’s clear that the Atari 2600 work is only the beginning.
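The reinforcement-learning half of that combination can be sketched in its simplest, tabular form; DeepMind's contribution was to replace the table with a deep network trained on raw screen pixels. Below is a toy, illustrative version on a hypothetical five-cell corridor task, not the Atari setup:

```python
import random

# Tabular Q-learning sketch. Task: a 5-cell corridor; moving right off the
# last cell pays reward 1 and resets the agent to cell 0.
random.seed(0)
N_STATES = 5
q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action], action 0=left, 1=right
alpha, gamma = 0.5, 0.9                    # learning rate, discount factor

def env_step(s, a):
    """Deterministic corridor dynamics: returns (next_state, reward)."""
    if s == N_STATES - 1 and a == 1:
        return 0, 1.0                      # goal reached: reward, reset to start
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, 0.0

s = 0
for _ in range(10000):
    a = random.randrange(2)                # uniform exploration (Q-learning is off-policy)
    s2, r = env_step(s, a)
    # Core update: move q[s][a] toward reward plus discounted best next value.
    q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
    s = s2

policy = [row.index(max(row)) for row in q]  # greedy policy learned from q
```

After training, the greedy policy moves right in every cell. The table works only because this toy problem has five states; for Atari's pixel inputs the table is intractable, which is where the deep network comes in.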

Information Technology, Workplace Organization and the Demand for Skilled Labor: Firm-Level Evidence, by Timothy Bresnahan, Erik Brynjolfsson and Lorin Hitt. Timothy Bresnahan: Stanford University, Department of Economics; Stanford Graduate School of Business; National Bureau of Economic Research (NBER). Erik Brynjolfsson: Massachusetts Institute of Technology (MIT), Sloan School of Management; NBER. Lorin M. Hitt: University of Pennsylvania, Operations & Information Management Department. May 1999. NBER Working Paper No. w7136. Abstract: Recently, the relative demand for skilled labor has increased dramatically. 47 pages, working paper series. Suggested citation: Bresnahan, Timothy, Brynjolfsson, Erik, and Hitt, Lorin M., "Information Technology, Workplace Organization and the Demand for Skilled Labor: Firm-Level Evidence" (May 1999).

Lotus Artificial Life - Hardware Artificial Life. This applet displays a cellular automata substrate capable of supporting evolving, self-reproducing organisms which are capable of universal computation. The applet is fully interactive, allowing you to apply selection based on organisms' visual characteristics using a variety of implements. Selection may also be applied automatically. Currently the built-in selection methods are for size and shape only. The cellular automaton uses a strict von Neumann neighbourhood and is based on an innovative, multi-layered design. The whole architecture is designed to be implemented on massively parallel hardware. Note: if you're playing with wiping out organisms manually you'll probably want to have the 'No selection at all' checkbox ticked - this causes all cells to be born pregnant and removes some constraints which abort malformed offspring.
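The strict von Neumann neighbourhood mentioned above is just the four orthogonal neighbours of a cell. A minimal illustrative update step (a simple parity rule on a wrapping grid, not Lotus's actual multi-layered rule) looks like this:

```python
# One step of a toy cellular automaton on a von Neumann neighbourhood.
# New state = XOR (sum mod 2) of the four orthogonal neighbours; edges wrap.

def step(grid):
    h, w = len(grid), len(grid[0])
    def cell(r, c):
        return (grid[(r - 1) % h][c] + grid[(r + 1) % h][c] +
                grid[r][(c - 1) % w] + grid[r][(c + 1) % w]) % 2
    return [[cell(r, c) for c in range(w)] for r in range(h)]

g = [[0, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]
g1 = step(g)  # the single live cell spawns its four orthogonal neighbours
```

Because each cell reads only its four neighbours, every cell can be updated simultaneously and independently, which is why designs like this map naturally onto the massively parallel hardware the blurb mentions.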

Infosphere. Infosphere is a neologism composed of "information" and "sphere". The word refers to an environment, like a biosphere, that is populated by informational entities called inforgs. While an example of the sphere of information is cyberspace, infospheres are not limited to purely online environments. The first documented use of the word "InfoSphere" was a 1971 Time Magazine book review by R.Z. The Toffler definition proved prophetic, as the use of "Infosphere" in the 1990s expanded beyond media to speculate about the common evolution of the Internet, society and culture. In his book Digital Dharma, Steven Vedro writes, "Emerging from what French philosopher-priest Pierre Teilhard de Chardin called the shared noosphere of collective human thought, invention and spiritual seeking, the Infosphere is sometimes used to conceptualize a field that engulfs our physical, mental and etheric bodies; it affects our dreaming and our cultural life."
