Paul Allen: The Singularity Isn't Near
Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they’ll possess a superhuman intelligence so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It’s heady stuff.

Ray Kurzweil does not understand the brain There he goes again, making up nonsense and making ridiculous claims that have no relationship to reality. Ray Kurzweil must be able to spin out a good line of bafflegab, because he seems to have the tech media convinced that he’s a genius, when he’s actually just another Deepak Chopra for the computer science cognoscenti. His latest claim is that we’ll be able to reverse-engineer the human brain within a decade. By reverse-engineer, he means that we’ll be able to write software that simulates all the functions of the human brain. Sejnowski says he agrees with Kurzweil’s assessment that about a million lines of code may be enough to simulate the human brain. I’m very disappointed in Terence Sejnowski for going along with that nonsense. Let me give you a few specific examples of just how wrong Kurzweil’s calculations are. First up is RHEB (Ras Homolog Enriched in Brain). Got that? And it’s not just that one gene. I’ll make a prediction, too.
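For reference, the calculation Myers is attacking runs roughly as follows. This is a back-of-envelope sketch of the genome-to-code argument as it is usually presented; the compressed-size and bytes-per-line figures below are illustrative assumptions, not numbers taken from either author.

```python
# Back-of-envelope version of the genome-to-code estimate under attack here.
# Every constant below is an illustrative assumption.
base_pairs = 3_000_000_000            # rough size of the human genome
raw_bytes = base_pairs * 2 // 8       # 2 bits per base (A, C, G, T): ~750 MB
compressed_bytes = 25_000_000         # assumed ~25 MB after compression
bytes_per_line = 25                   # assumed average length of a line of code
lines_of_code = compressed_bytes // bytes_per_line   # ~1,000,000 lines

print(f"raw: {raw_bytes / 1e6:.0f} MB, "
      f"compressed: {compressed_bytes / 1e6:.0f} MB, "
      f"roughly {lines_of_code / 1e6:.1f} million lines of code")
```

Myers's objection is not to the arithmetic but to the premise: examples like RHEB are meant to show that a gene's function depends on layers of cellular context that the sequence alone does not encode, so the compressed size of the genome says little about the complexity of simulating what brains do.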

The Manifest Destiny of Artificial Intelligence Will AI create mindlike machines, or will it show how much a mindless machine can do? Brian Hayes Artificial intelligence began with an ambitious research agenda: to endow machines with some of the traits we value most highly in ourselves, among them the faculty of reason, skill in solving problems, creativity, and the capacity to learn from experience. Early results were promising. Fifty years later, problem-solving machines are a familiar presence in daily life. In spite of these achievements, the status of artificial intelligence remains unsettled, and it is not only critics from outside the field who express such qualms about its direction. At the outset, research in artificial intelligence was the project of a very small community, but one big enough for schisms and factional strife. An even older and deeper rift divides the “symbolic” and the “connectionist” approaches to artificial intelligence. Through the 1970s, most AI projects were small, proof-of-concept studies.

CHARTS: Here's What The Wall Street Protesters Are So Angry About... The "Occupy Wall Street" protests are gaining momentum, having spread from a small park in New York to marches in cities across the country. So far, the protests seem fueled by a collective sense that things in our economy are not fair or right. But the protesters have not done a good job of focusing their complaints, and thus have been skewered as malcontents who don't know what they stand for or want. (An early list of "grievances" included some legitimate beefs, but was otherwise just a vague attack on "corporations.") So, what are the protesters so upset about, really? Do they have legitimate gripes? To answer the latter question first: yes, they have very legitimate gripes. And if America cannot figure out a way to address these gripes, the country will likely become increasingly "de-stabilized," as sociologists might say. In other words, in the never-ending tug-of-war between "labor" and "capital," there has rarely, if ever, been a time when "capital" was so clearly winning.

Turing's Enduring Importance When Alan Turing was born 100 years ago, on June 23, 1912, a computer was not a thing; it was a person. Computers, most of whom were women, were hired to perform repetitive calculations for hours on end. The practice dated back to the 1750s, when Alexis-Claude Clairaut recruited two fellow astronomers to help him plot the orbit of Halley’s comet. Clairaut’s approach was to slice time into segments and, using Newton’s laws, calculate the changes to the comet’s position as it passed Jupiter and Saturn. The team worked for five months, repeating the process again and again as they slowly plotted the course of the celestial bodies. Today we call this process dynamic simulation; Clairaut’s contemporaries called it an abomination. By the time Turing entered King’s College in 1931, human computers had been employed for a wide variety of purposes, often assisted by calculating machines. All these machines were fundamentally limited.
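Clairaut's method is essentially what a modern dynamic simulation does. A minimal sketch in Python, assuming a single body orbiting the Sun and simple Euler time steps; the step size, initial conditions, and units are illustrative, not Clairaut's actual values.

```python
# Minimal dynamic simulation in Clairaut's spirit: slice time into segments
# and apply Newton's law of gravitation at each step (Euler integration).
import math

GM = 4 * math.pi ** 2        # Sun's gravitational parameter in AU^3/yr^2
dt = 0.001                   # time step in years

# Initial state: position (AU) and velocity (AU/yr) of a comet-like body
x, y = 1.0, 0.0
vx, vy = 0.0, 2 * math.pi * 0.6   # slower than circular -> elliptical orbit

for step in range(int(5 / dt)):   # simulate five years
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3   # Newtonian acceleration
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(f"position after 5 years: ({x:.3f}, {y:.3f}) AU")
```

Clairaut and his two colleagues did this kind of bookkeeping by hand for five months; the loop above does the equivalent arithmetic in milliseconds.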

Transhumanism Criticisms - Future Criticisms of transhumanism take two main forms: those objecting to the likelihood of transhumanist goals being achieved (practical criticisms), and those objecting to the moral principles of transhumanism (ethical criticisms). However, these two strains sometimes converge and overlap, particularly when the ethics of changing human biology in the face of incomplete knowledge is considered. Critics or opponents of transhumanism often see transhumanists' goals as posing threats to human values. Some of the most widely known critiques of the transhumanist program refer to novels and fictional films. Futurehype argument (infeasibility) In his 1992 book Futurehype: The Tyranny of Prophecy, sociologist Max Dublin points out many past failed predictions of technological progress and argues that modern futurist predictions will prove similarly inaccurate. Transhumanists counter that major paradigm shifts seem to show that many technologies are evolving in an exponential pattern. Playing God arguments (hubris)

Ray Kurzweil's Slippery Futurism "It is now 2009. Individuals primarily use portable computers, which have become dramatically lighter and thinner than the notebook computers of ten years earlier. Personal computers are available in a wide range of sizes and shapes," he wrote, and if he had ended the sentence there, surely no one would disagree. But instead he continues: "—and are commonly embedded in clothing and jewelry such as wristwatches, rings, earrings, and other body ornaments." Is that all true? So far, I haven't seen Kurzweil straight-up admit that he was wrong. Or consider what Kurzweil wrote about education. He also seems to have had high hopes a decade ago for the antitumor compounds called angiogenesis inhibitors. It seems only fair to allow some latitude for interpretation on the dates. Kurzweil himself has no such difficulty, however. "I am in the process of writing a prediction-by-prediction analysis of these, which will be available soon and I will send it to you," he wrote.

AI robot: how machine intelligence is evolving | Technology | The Observer Marcus du Sautoy with one of Luc Steels's language-making robots. Photograph: Jodie Adams/BBC 'I propose to consider the question "Can machines think?"' Of course the body is a machine. And if the body is a machine, Turing wondered, is it possible to artificially create a contraption that could think as he did? Last year saw one of the major landmarks on the way to creating artificial intelligence. Watson is not IBM's first winner. Playing chess requires a deep logical analysis of the possible moves that can be made next in the game. The program at the heart of Watson's operating system is particularly sophisticated because it learns from its mistakes. Despite Watson's win, it did make some very telling mistakes. It's this strange answer that gives away that it is probably a machine rather than a person answering the question. The AI community is beginning to question whether we should be so obsessed with recreating human intelligence.
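The "deep logical analysis of possible moves" behind programs like IBM's earlier chess winner, Deep Blue, is game-tree search. A minimal sketch of minimax follows; to keep it self-contained it plays a toy take-away game (remove 1 or 2 tokens; taking the last one wins) rather than chess, and nothing here is IBM's actual code.

```python
# Minimal minimax game-tree search. The game is a toy Nim variant so the
# example runs as-is; chess programs apply the same structure at vastly
# greater depth, with evaluation functions and pruning on top.
def minimax(tokens, maximizing):
    if tokens == 0:
        # The player who just moved took the last token and won.
        return -1 if maximizing else +1
    scores = [minimax(tokens - take, not maximizing)
              for take in (1, 2) if take <= tokens]
    return max(scores) if maximizing else min(scores)

def best_move(tokens):
    # Pick the move with the best guaranteed outcome for the mover.
    return max((take for take in (1, 2) if take <= tokens),
               key=lambda take: minimax(tokens - take, False))

print(best_move(7))   # 1: leave a multiple of 3 for the opponent
```

Watson's Jeopardy problem resists this skeleton entirely: a natural-language clue offers no enumerable list of legal moves, which is why its designers leaned on statistical learning rather than exhaustive search.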

Google Engineer: “Google+ is a Prime Example of Our Complete Failure to Understand Platforms” Last night, high-profile Google engineer Steve Yegge posted to Google+ a long rant about working at Amazon and about Google’s own issues with creating platforms. Apparently, he only wanted to share it internally with everybody at Google, but mistakenly shared it publicly. For the most part, Yegge’s post focuses on the horrors of working at Amazon, a company that is notorious for its political infighting. The most interesting part to me, though, is Yegge’s blunt assessment of what he perceives to be Google’s inability to understand platforms and how this could endanger the company in the long run. The post itself has now been deleted, but given Google+’s reshare function, multiple copies exist on Google’s own social network and elsewhere on the web. Google+ Is a Knee-Jerk Reaction Here is the meat of his argument: “Google+ is a knee-jerk reaction, a study in short-term thinking, predicated on the incorrect notion that Facebook is successful because they built a great product. Ha, ha!

Nicholas Carr: Is Google Making Us Stupid? Bio Nicholas Carr A former executive editor of the Harvard Business Review, Nicholas Carr writes and speaks on technology, business, and culture. His intriguing 2003 Harvard Business Review article "IT Doesn't Matter" was an instant sensation, setting the stage for the global debate on the strategic value of information technology in business. His 2004 book, Does IT Matter? Information Technology and the Corrosion of Competitive Advantage, published by Harvard Business School Press, was a bestseller and kept the worldwide business community discussing the role of computers and IT in business. A prolific and nimble thought leader, Mr Carr has written more than a dozen articles and interviews for Harvard Business Review and writes regularly for the Financial Times, Strategy & Business and The Guardian. Mr Carr has served as a commentator on CNBC, CNN, and other networks and has been a featured speaker worldwide at industry, educational, and government forums.

Klaus-Gerd Giesen - Humanism and Transhumanism: Man in Question - 2004 Eric de Rus* Born in the United States in the 1980s, the transhumanist current of thought is spreading today. According to the official site of the World Transhumanist Association, "transhumanism is an interdisciplinary approach that leads us to understand and evaluate the avenues that will allow us to overcome our biological limits through technological progress" (1). In a few words, the transhumanist project aims principally to develop and make use of new technologies, in particular genetics and nanotechnology, to allow Man to "improve himself." Drawing on an article by Klaus-Gerd Giesen (Université d'Auvergne/Universität Leipzig) entitled Transhumanisme et génétique humaine (2), we propose here to extend the author's general perspective, while approaching transhumanism from a philosophical angle. Human essence and perfectibility can therefore coexist.

The Disgust Criterion - Adamantin September 13, 2010 The disgust criterion (famous in the English-speaking world as the Yuck Factor, or the Wisdom of Repugnance) is a key argument of bioconservatives, first formulated by Leon Kass in 1997. It designates the feeling of revulsion that stirs individuals at their deepest core when genetic enhancement, cloning, and the like are at issue. This commonly shared feeling is put forward by opponents of enhancement medicine to argue that it should be banned, even though, by their own admission, the reasons for this disgust are hard to identify and apparently escape reason. One might think that nobody attaches much importance to this argument. Yet many philosophers take it up and often devote a few pages to discussing it. As Leon Kass himself shows very well, the disgust criterion is the reaction of a humanity that feels endangered in its very essence[1].

Scott Draves: Making Computers Conscious – Art in Odd Places by John Critelli Can machines have souls? Scott Draves thinks they can. “Computers and robots started out as literally mechanical,” he says, “but as they develop, they are getting more subtle and more magical.” “Magical” certainly describes Draves’s project Electric Sheep. Electric Sheep is, at first glance, just a screen saver. But he needs help from thousands of fans. “Electric Sheep is not a self-contained system,” he says. Users create vibrant animations, called “sheep,” which are uploaded to the Electric Sheep server, where viewers vote on their favorites. Draves sees this as a combination of evolution and intelligent design. Eventually, he predicts, the program will understand what people like well enough to create beautiful sheep on its own – without the voting process. “That’s a milestone of the path from inanimate to alive,” Draves says. Computers with souls And he believes that once computers are alive, they can have souls. “For me it’s an issue of ‘when’ and ‘how’ more than ‘if,’” he says.
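The loop Draves describes (fans vote, popular sheep reproduce, the flock evolves) is a genetic algorithm with human votes as the fitness function. A minimal sketch under that reading; the float-vector genome and the operators below are illustrative assumptions, not the project's actual fractal-flame representation.

```python
# Sketch of vote-driven evolution in the style of Electric Sheep:
# user votes act as fitness; popular "sheep" reproduce with mutation.
# The float-vector genome is an illustrative stand-in for fractal-flame
# parameters, not the project's real representation.
import random

GENES = 8          # parameters per sheep (assumed)
POP = 20           # population size (assumed)

def random_sheep():
    return [random.uniform(-1, 1) for _ in range(GENES)]

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(population, votes):
    # Rank sheep by votes; the top half breeds the next generation.
    ranked = [s for s, _ in sorted(zip(population, votes),
                                   key=lambda p: p[1], reverse=True)]
    parents = ranked[:POP // 2]
    return [mutate(crossover(random.choice(parents),
                             random.choice(parents)))
            for _ in range(POP)]

flock = [random_sheep() for _ in range(POP)]
votes = [random.randint(0, 10) for _ in flock]   # stand-in for user voting
flock = evolve(flock, votes)
```

The milestone Draves predicts amounts to replacing the `votes` list with a learned model of user taste, so that the selection step no longer needs live voting.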

The Friday Podcast: Why Do We Tip? : Planet Money Paul Sancya/AP In the 16th century, coffee shops prominently displayed coin boxes with the phrase "to ensure prompt service" written on the side. If you wanted your coffee in a hurry, you dropped a little something extra in the box, and made sure the waitress saw you do it. This, according to at least one version of history, is where tipping began. But today, we tip after we get served, not before. And, according to one expert we talk to on today's podcast, the quality of service we perceive makes only a tiny difference in how much we tip. According to one theory, when you get down to it, we don't even tip for good service.
