Most significant present-day AI developments

Alcor Life Extension Foundation
This "bigfoot" Dewar is custom-designed to contain four whole-body patients and six neuropatients immersed in liquid nitrogen at −196 degrees Celsius. The Dewar is an insulated container that consumes no electric power; liquid nitrogen is added periodically to replace the small amount that evaporates. The Alcor Life Extension Foundation, most often referred to as Alcor, is a nonprofit organization based in Scottsdale, Arizona, USA that researches, advocates for and performs cryonics: the preservation of humans in liquid nitrogen after legal death, in the hope of restoring them to full health when new technology is developed in the future. As of February 28, 2014, Alcor had 973 members, 91 associate members and 121 patients in cryopreservation, many as neuropatients (79 of Alcor's patients were neuropatients or brain-preservation patients as of December 2013). Alcor also cryopreserves the pets of members; as of November 15, 2007, there were 33 pets in suspension.
Clarke's three laws
Clarke's Three Laws are three "laws" of prediction formulated by the British science fiction writer Arthur C. Clarke. They are:
1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.
Origins
Clarke's First Law was proposed by Arthur C. Clarke in the 1962 essay "Hazards of Prophecy: The Failure of Imagination", in Profiles of the Future. The second law is offered as a simple observation in the same essay. The Third Law is the best known and most widely cited, and appears in Clarke's 1973 revision of "Hazards of Prophecy: The Failure of Imagination". A fourth law has been added to the canon, despite Sir Arthur Clarke's declared intention of not going one better than Sir Isaac Newton.
Steve Steinberg on weak AI Steve Steinberg, former Legion of Doom member and current Wall Street hacker, posted a rare update to his .CSV blog, and it's a doozy. He unpacks two big developments in "weak" artificial intelligence that manage to slip under the radar, mostly because they don't involve emotional robots or bring The Singularity a few days closer. Along the way, he shreds insurance companies that seek to correlate bad credit with bad driving, and pokes at Google's trust of "man over machine," a "cultural quirk," as Steve puts it, that's overlooked amidst all the talk of algorithms and massive data sets. From .CSV: While strong AI still lies safely beyond the Maes-Garreau horizon (a vanishing point, perpetually fifty years ahead) a host of important new developments in weak AI are poised to be commercialized in the next few years. But because these developments are a paradoxical mix of intelligence and stupidity, they defy simple forecasts, they resist hype.
Longecity
LongeCity is the adopted "public name" of the Immortality Institute (sometimes abbreviated "ImmInst"), a nonprofit 501(c)(3) organisation founded in 2002 by Bruce J. Klein and others.
Aims: The organisation states as its mission "to conquer the blight of involuntary death". To that end it aims to provide: a repository of high-quality information; an open public forum for the free exchange of information and views; an infrastructure to support community projects and initiatives; and the facilities for supporting an international community of those with an interest in life extension.
Activities: The organisation maintains an online forum for information exchange.
Management: LongeCity is a membership-based organisation governed by a 'constitution'.
Names: As of January 2011, the Immortality Institute has formally adopted 'LongeCity' as a second 'trading' name.
Logo: The Immortality Institute's logo makes use of several symbols.
Darpa sets out to make computers that teach themselves
The Pentagon's blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves -- while making it easier for ordinary schlubs like us to build them, too. When Darpa talks about artificial intelligence, it's not talking about modelling computers after the human brain. That path fell out of favour among computer scientists years ago as a means of creating artificial intelligence; we'd have to understand our own brains before building a working artificial version of one. Instead, the agency thinks we can build machines that learn and evolve, using algorithms -- "probabilistic programming" -- to parse through vast amounts of data and select the best of it. But building such machines remains really, really hard: the agency calls it "Herculean". It's no surprise the mad scientists are interested. Another open question is how to make computer-learning machines more predictable.
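The piece never shows what "probabilistic programming" looks like in practice. Here is a minimal, hand-rolled sketch of the underlying idea -- specify a model, condition it on observed data, and let inference run -- not Darpa's actual tooling; the function name, the coin-flip example and the grid-approximation approach are all illustrative assumptions.

```python
# Toy probabilistic program: infer a coin's bias from observed flips
# using grid-approximate Bayesian inference with a uniform prior.
def coin_posterior(flips, grid_size=101):
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    heads = sum(flips)
    tails = len(flips) - heads
    # Likelihood of the observed data at each candidate bias p.
    unnorm = [p ** heads * (1 - p) ** tails for p in grid]
    total = sum(unnorm)
    return grid, [w / total for w in unnorm]

grid, post = coin_posterior([1, 1, 0, 1, 1, 1, 0, 1])  # 6 heads, 2 tails
map_bias = grid[post.index(max(post))]
print(map_bias)  # most probable bias: 0.75, i.e. 6/8
```

Real probabilistic-programming systems generalize exactly this pattern to far richer models and far more data, with the inference machinery supplied by the runtime rather than written by hand.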
6 Mashups of Music and Artificial Intelligence | Epicenter If there is one thing computers do well, it’s math. All of music’s raw components — key, mode, melody, harmony and rhythm — can be expressed mathematically. As a result, computers can help people make music, even if they don’t know their elbow from an F clef. The following apps for computer, web browser and smartphone put the power of artificially intelligent music creation in your hands or let you hear music that was created or manipulated by machines. uJam One of the most impressive demonstrations I’ve seen this year, uJam is the brainchild of longtime audio-software developers Peter Gorges and Axel Hensen and their celebrity partners Hans Zimmer (film composer for Dark Knight, Gladiator, Lion King) and Pharrell Williams (producer for Madonna, Shakira, Gwen Stefani). You can use this Flash-based software to record cover versions of popular songs, but the real magic lies in creating something from scratch, either with your voice or a musical instrument, in a multitude of styles.
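"Expressed mathematically" can be made concrete with the simplest case, pitch: in equal temperament every semitone step multiplies frequency by the twelfth root of two. A short sketch using the standard MIDI note convention (A4 = note 69 = 440 Hz); the function name is my own:

```python
# Equal-tempered pitch: each semitone multiplies frequency by 2**(1/12).
def midi_to_hz(note: int) -> float:
    """Frequency of a MIDI note number, tuned to A4 = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(round(midi_to_hz(69), 2))  # A4 -> 440.0
print(round(midi_to_hz(60), 2))  # C4 (middle C) -> 261.63
```

Harmony and rhythm reduce to similar arithmetic over these values, which is what gives software a foothold for analyzing a sung melody and generating plausible accompaniment around it.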
Democratic transhumanism
Philosophy: According to Hughes, the terms techno-progressivism and democratic transhumanism both refer to the same set of Enlightenment values and principles; however, the term technoprogressive has replaced the use of the phrase democratic transhumanism.
Trends: Hughes has identified 15 "left futurist" or "left techno-utopian" trends and projects that could be incorporated into democratic transhumanism.
List of democratic transhumanists: This section contains an alphabetically ordered list of notable individuals who have identified themselves, or been identified by Hughes, as advocates of democratic transhumanism.
Criticism: Critical theorist Dale Carrico defended democratic transhumanism from science writer Ronald Bailey's criticism. However, he would later criticize democratic transhumanism himself on technoprogressive grounds.
Promises and Perils on the Road to Superintelligence
Global Brain / Image credit: mindcontrol.se
In the 21st century, we are walking an important road. Our species is alone on this road and it has one destination: super-intelligence. The most forward-thinking visionaries of our species were able to get a vague glimpse of this destination in the early 20th century. As Stanislaw Ulam recalled of John von Neumann: "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." For thinkers like Teilhard de Chardin, this vision was spiritual and religious: God using evolution to pull our species closer to our destiny. At the time it was hard to make real sense of what such a vision meant. In contrast, today we can define specific mechanisms that could realize such a world, and the philosophical debates have become more varied but also more focused on model building and scientific prediction.
Promise #1: Omniscience
Watson, Turing, and extreme machine learning
One of the best presentations at IBM's recent Blogger Day was given by David Ferrucci, the leader of the Watson team, the group that developed the supercomputer that recently appeared as a contestant on Jeopardy. To many people, the Turing test is the gold standard of artificial intelligence. Put briefly, the idea is that if you can't tell whether you're interacting with a computer or a human, the computer has passed the test. But it's easy to forget how subtle this criterion is. Turing was thinking about exactly this subtlety: in his 1950 paper, he proposes question/answer pairs like this:
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34,957 to 70,764.
A: (Pause about 30 seconds and then give as answer) 105,621.
We'd never think of asking a computer the first question, though I'm sure there are sonnet-writing projects going on somewhere. Equally important, Watson is not always right.
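The arithmetic exchange rewards a closer look: the answer Turing puts in his respondent's mouth is not actually the sum. A quick check, assuming nothing beyond the quoted numbers:

```python
# Turing's question: "Add 34,957 to 70,764." The quoted answer, given
# after a roughly 30-second pause, is 105,621 -- but the true sum differs.
true_sum = 34957 + 70764
print(true_sum)           # 105721
print(true_sum - 105621)  # the quoted answer falls short by 100
```

The pause and the slip are part of the imitation: a convincingly human respondent is slow and occasionally wrong, which echoes the closing observation that Watson, too, is not always right.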