
Kurzweil Accelerating Intelligence

http://www.kurzweilai.net/


Technology singularity - News & Rumors: posts tagged «technology singularity»

Elon Musk warns us that human-level AI is 'potentially more dangerous than nukes' (August 4, 2014)

Elon Musk, the mastermind behind SpaceX and Tesla, believes that artificial intelligence is "potentially more dangerous than nukes," imploring all of humankind "to be super careful with AI," unless we want the ultimate fate of humanity to closely resemble Judgment Day from Terminator. Personally I think Musk is being a little hyperbolic; after all, we have survived more than 60 years under the threat of thermonuclear mutually assured destruction. Still, Musk's words are worth considering in greater detail.

What is transhumanism, or, what does it mean to be human? (April 1, 2013)

What does it mean to be human?

Technological Singularity

The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called the singularity.[1] Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable.[2] The first use of the term "singularity" in this context was by mathematician John von Neumann. Proponents of the singularity typically postulate an "intelligence explosion",[5][6] in which superintelligences design successive generations of increasingly powerful minds; this explosion might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.
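
To make the "intelligence explosion" idea concrete, here is a toy model of recursive self-improvement (my own sketch, not from the excerpt above; the growth rule and all numbers are illustrative assumptions): each generation designs a successor whose improvement is proportional to its own capability, so growth is nearly flat for many generations and then runs away.

```python
# Toy "intelligence explosion": better minds design better minds.
# Assumed feedback rule (illustrative only): I_{n+1} = I_n * (1 + k * I_n),
# so each generation's improvement scales with its own capability.

def intelligence_explosion(i0=1.0, k=0.02, human_level=100.0, max_gens=200):
    """Iterate the self-improvement loop until capability passes an
    (arbitrary) human baseline or the generation budget runs out."""
    capability = i0
    for gen in range(max_gens):
        capability *= 1.0 + k * capability   # runaway feedback step
        if capability > human_level:
            return gen + 1, capability
    return max_gens, capability

gens, final = intelligence_explosion()
print(f"passed the human baseline after {gens} generations (capability ~{final:.0f})")
```

The point of the sketch is the shape of the curve: near-flat for dozens of generations, then effectively vertical, which is why proponents argue the transition "might occur very quickly".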

Darpa sets out to make computers that teach themselves

The Pentagon's blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves -- while making it easier for ordinary schlubs like us to build them, too. When Darpa talks about artificial intelligence, it's not talking about modelling computers after the human brain. That path fell out of favour among computer scientists years ago as a means of creating artificial intelligence; we'd have to understand our own brains before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms -- "probabilistic programming" -- to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better. But building such machines remains really, really hard: the agency calls the task "Herculean".
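
Loosely, probabilistic programming means writing the data-generating process as an ordinary program and letting a general inference engine run it "backwards" to explain observed data. A minimal sketch of the idea (my own illustration, not Darpa's system): infer a coin's unknown bias from observed flips by rejection sampling.

```python
import random

# Probabilistic-programming flavor in miniature: describe how data could
# be generated (a coin with unknown bias), then invert the model by
# keeping only the simulated runs that reproduce the observed data.

def infer_bias(observed, samples=100_000):
    """Posterior samples of a coin's bias given observed flips (1 = heads),
    using rejection sampling under a uniform prior on the bias."""
    accepted = []
    for _ in range(samples):
        bias = random.random()                    # prior: bias ~ Uniform(0, 1)
        flips = [random.random() < bias for _ in observed]
        if flips == [bool(x) for x in observed]:  # keep matching runs only
            accepted.append(bias)
    return accepted

posterior = infer_bias([1, 1, 0, 1, 1])           # four heads, one tail
print(f"posterior mean bias ~ {sum(posterior) / len(posterior):.2f}")
```

Real engines replace the brute-force rejection step with far smarter inference; making that general and scalable is part of what makes the problem hard.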

The Future of Artificial Intelligence

Editor's Note: This article was originally printed in the 2008 Scientific American Special Report on Robots. It is being published on the Web as part of ScientificAmerican.com's In-Depth Report on Robots. In recent years the mushrooming power, functionality and ubiquity of computers and the Internet have outstripped early forecasts about technology's rate of advancement and usefulness in everyday life. Alert pundits now foresee a world saturated with powerful computer chips, which will increasingly insinuate themselves into our gadgets, dwellings, apparel and even our bodies. Yet a closely related goal has remained stubbornly elusive. In stark contrast to the largely unanticipated explosion of computers into the mainstream, the entire endeavor of robotics has failed rather completely to live up to the predictions of the 1950s.

Butterfly effect

In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change at one place in a deterministic nonlinear system can result in large differences in a later state. The name of the effect, coined by Edward Lorenz, is derived from the theoretical example of a hurricane's formation being contingent on whether or not a distant butterfly had flapped its wings several weeks earlier. Although the butterfly effect may appear to be an unlikely behavior, it is exhibited by very simple systems. For example, a ball placed at the crest of a hill may roll into any surrounding valley depending on, among other things, slight differences in its initial position.
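
The effect is easy to reproduce numerically. A minimal sketch (my own example, using the standard logistic map rather than Lorenz's weather equations): two trajectories that start one part in a billion apart become completely uncorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions, shown with the logistic map
# x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4.0).

def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2)           # one starting point...
b = trajectory(0.2 + 1e-9)    # ...and a neighbor a billionth away

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: |a - b| = {abs(a[n] - b[n]):.9f}")
```

The gap roughly doubles every iteration, so the billionth-sized difference saturates to order one after about thirty steps: the butterfly has done its work.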

When will Singularity happen – and will it turn Earth into heaven or hell?

Defined as the point where computers become more intelligent than humans and where human intelligence can be digitally stored, the Singularity hasn't happened yet. The idea was first theorised by mathematician John von Neumann in the 1950s. Ray Kurzweil, the self-described 'Singularitarian Immortalist' and a Director of Engineering at Google, thinks that by 2045 machine intelligence will be infinitely more powerful than all human intelligence combined, and that technological development will be taken over by the machines. "There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality," he writes in his book 'The Singularity Is Near'. But – 2045? Are we really that close?

Moore's law

Moore's law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper.[1][2][3] His prediction has proven to be accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.[4] The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras.[5] All of these are improving at roughly exponential rates as well. This exponential improvement has dramatically enhanced the impact of digital electronics in nearly every segment of the world economy.[6] Moore's law describes a driving force of technological and social change in the late 20th and early 21st centuries.[7][8]
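
The arithmetic behind the law is worth working through once: doubling every two years means multiplying by 2^(t/2) after t years, so four decades is twenty doublings, a factor of about a million. A quick sketch (the starting figure is the roughly 2,300 transistors of the 1971 Intel 4004; the projections are idealized doubling, not actual part counts):

```python
# Moore's law as arithmetic: count(t) = count(0) * 2 ** (years / doubling_period).

def projected_transistors(initial_count, years, doubling_period=2.0):
    return initial_count * 2 ** (years / doubling_period)

START = 2_300    # roughly the transistor count of the Intel 4004 (1971)
for years in (2, 10, 20, 40):
    count = projected_transistors(START, years)
    print(f"after {years:2d} years: ~{count:,.0f} transistors")
```

Forty years of ideal doubling turns a few thousand transistors into a few billion, which is the right order of magnitude for chips of the early 2010s.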

On the hunt for universal intelligence

How do you use a scientific method to measure the intelligence of a human being, an animal, a machine or an extra-terrestrial? So far this has not been possible, but a team of Spanish and Australian researchers has taken a first step by presenting, in the journal Artificial Intelligence, the foundations on which such a method could be built, along with a new intelligence test. "We have developed an 'anytime' intelligence test, in other words a test that can be interrupted at any time, but that gives a more accurate idea of the intelligence of the test subject if there is a longer time available in which to carry it out", José Hernández-Orallo, a researcher at the Polytechnic University of Valencia (UPV), tells SINC.
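
"Anytime" is a general algorithmic property: the procedure can be interrupted at any moment and still return its best answer so far, and the answer improves the longer it runs. A minimal sketch of the pattern (my own illustration of the property, not the researchers' actual test, which scores intelligence rather than estimating a number):

```python
import random
import time

# "Anytime" pattern: refine an estimate indefinitely while keeping a
# usable answer available at every moment. Here the estimated quantity
# is pi (Monte Carlo darts in a unit square); only the interruptible
# structure, not the task, mirrors the anytime intelligence test.

def anytime_pi(budget_seconds):
    inside = total = 0
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:   # stop whenever the time is up
        x, y = random.random(), random.random()
        inside += (x * x + y * y) <= 1.0
        total += 1
    return 4.0 * inside / total          # current best estimate

for budget in (0.01, 0.1, 1.0):          # more time, better estimate
    print(f"{budget:5.2f}s budget: pi ~ {anytime_pi(budget):.4f}")
```

Cutting the test short still yields an answer; granting it more time narrows the error, which is exactly the trade-off Hernández-Orallo describes.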
