Robot adapts to injury

Lindsay France/University Photography: Graduate student Viktor Zykov; former student Josh Bongard, now a professor at the University of Vermont; and Hod Lipson, Cornell assistant professor of mechanical and aerospace engineering, watch as a starfish-like robot pulls itself forward using a gait it developed for itself. The robot's ability to figure out how it is put together, and from that to learn to walk, enables it to adapt and find a new gait when it is damaged.

Nothing can possibly go wrong ... go wrong ... go wrong ... The truth behind the old joke is that most robots are programmed with a fairly rigid "model" of what they and the world around them are like. If a robot is damaged or its environment changes unexpectedly, it can't adapt. So Cornell researchers have built a robot that works out its own model of itself and can revise that model to adapt to injury. "Most robots have a fixed model laboriously designed by human engineers," Lipson explained.
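The core loop described above can be sketched in a few lines. The toy code below is a hypothetical illustration, not the Cornell team's actual algorithm: a hill-climber tunes gait parameters against the robot's current self-model, and when the model is revised after damage (here, a leg marked as lost), re-running the same search produces a new gait that leans on the remaining legs.

```python
import random

def modeled_distance(gait, working_legs):
    # Toy self-model: predicted travel is the summed stride amplitude of
    # the legs the model currently believes are working.
    return sum(amp for leg, amp in gait.items() if leg in working_legs)

def adapt_gait(working_legs, legs=(0, 1, 2, 3), iterations=300, seed=0):
    # Hill-climb stride amplitudes against the current self-model.
    rng = random.Random(seed)
    gait = {leg: 0.5 for leg in legs}
    best = modeled_distance(gait, working_legs)
    for _ in range(iterations):
        candidate = dict(gait)
        leg = rng.choice(legs)
        candidate[leg] = min(1.0, max(0.0, candidate[leg] + rng.uniform(-0.2, 0.2)))
        score = modeled_distance(candidate, working_legs)
        if score > best:
            gait, best = candidate, score
    return gait

healthy_gait = adapt_gait(working_legs={0, 1, 2, 3})  # all legs intact
injured_gait = adapt_gait(working_legs={0, 1, 3})     # self-model revised: leg 2 lost
```

Because mutating the lost leg never improves the modeled distance, the search after the model revision concentrates its effort on the legs that still work, which is the essence of finding "a new gait when it is damaged."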
Artificial Robotic Hand Transmits Feeling To Nerves

Astro Teller has an unusual way of starting a new project: he tries to kill it. Teller is the head of X, formerly called Google X, the advanced technology lab of Alphabet. At X's headquarters not far from the Googleplex in Mountain View, Calif., Teller leads a group of engineers, inventors, and designers devoted to futuristic "moonshot" projects like self-driving cars, delivery drones, and Internet-beaming balloons. To turn their wild ideas into reality, Teller and his team have developed a unique approach. It starts with trying to prove that whatever it is you're trying to do can't be done—in other words, trying to kill your own idea. As Teller explains, the first question is not "What's most fun to do about this?" or "What's easiest to do first?" but whether the idea can be killed at all. The ideas that survive get additional rounds of scrutiny, and only a tiny fraction eventually becomes official projects; the proposals found to have an Achilles' heel are discarded, and Xers quickly move on to their next idea.
Silicon Chips Wired With Nerve Cells Could Enable New Brain/Machine Interfaces

It's reminiscent of Cartman's runaway Trapper Keeper notebook in that long-ago episode of South Park, but researchers at the University of Wisconsin-Madison may be scratching the surface of a new kind of brain/machine interface by creating computer chips that are wired together with living nerve cells. A team there has found that mouse nerve cells will connect with each other across a network of tiny tubes threaded through a semiconductor material.

It's not exactly clear at this point how the nerve cells are functioning, but what is clear is that the cells seem to have an affinity for the tiny tubes, and that alone has some interesting implications. To create the nerve-chip hybrid, the researchers fabricated tubes of layered silicon and germanium that are large enough for the nerve cells' tendrils to navigate but too small for the cell body itself to pass through. What isn't clear is whether or not the cells are actually communicating with each other the way they would naturally.
Why are past, present, and future our only options?

But things get awkward if you have a friend. (Use your imagination if necessary.)

Low blow, Dr. Dave. Low blow... But seriously, I always figured that if there were more than one dimension of time, moving "left" or "right" would be the equivalent of moving to a parallel universe where things were slightly different. That is to say, maybe time really is two-dimensional, but for all the reasons you mention, we're normally only aware of one of those dimensions, and for the most part the same one that most of the people we meet are aware of. But take, say, a schizophrenic person: maybe they're tuned in differently, moving sideways through time instead of forward... or maybe moving through (and aware of) both simultaneously. They can't form coherent thoughts because they're constantly confronted with overlapping and shifting realities. I dunno... that's all just speculation, of course, but I find the thought fascinating.
Finding the Top Bot: High School Students (and Their Robots) Take the Prize at Tech Challenge [Slide Show]

NEW YORK—Despite the rain and cold this past weekend, dozens of robots took the field to compete in the New York City FIRST Tech Challenge (FTC) regional championship at the Javits Center in Manhattan. The tournament tested the skills and determination of 48 teams of high school students who had spent the past several months building, programming and otherwise preparing their bots to face off in a friendly game of HotShot! The objective of HotShot!

This being a tournament developed and hosted by Dean Kamen's FIRST (For Inspiration and Recognition of Science and Technology) organization, the competition involved more than the ball game itself. At the start of every round, each of the four robots in play relied on software and sensors to track the location of its goal, line up shots and, hopefully, score some points—without any intervention from the human players. FTC organizers emphasize that the program's goals extend well beyond even technology.
Artificial Intelligence - Volume 1: Chatbot NetLogo Model

Produced for the book series "Artificial Intelligence"; Author: W. J. Powered by NetLogo. View/download model file: Chatbot.nlogo

This model implements two basic chatbots, Liza and Harry. The model makes use of a NetLogo extension called "re" for regular expressions. First press the setup button in the Interface; this loads the rules for each chatbot.

The Interface buttons are defined as follows:
- setup: Loads the rules for each chatbot.
- chat: Starts or continues the conversation with the chatbot selected using the bot chooser.

The Interface chooser and switch are defined as follows:
- bot: Sets the chatbot to the Liza chatbot, the Harry chatbot or Both.
- debug-conversation: If this is set to On, debug information is also printed showing which rules matched.

Harry seems to do a bit better at being paranoid than Liza does at being a Rogerian psychotherapist. Try out the different chatbots by changing the bot chooser.
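The rule-matching mechanism the model relies on—regular-expression patterns paired with response templates, tried in order—can be sketched outside NetLogo as well. The rules below are hypothetical examples in the spirit of Liza's Rogerian style, not the actual rule set loaded by Chatbot.nlogo, and the sketch uses Python rather than NetLogo's "re" extension for brevity.

```python
import re

# Hypothetical Rogerian-style rules: (pattern, response template).
# Illustrative only; not the rules shipped with Chatbot.nlogo.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please go on."),  # catch-all fallback rule
]

def reply(utterance):
    # The first matching rule wins; captured groups fill the template.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())

print(reply("I need a holiday"))    # -> Why do you need a holiday?
print(reply("nice weather today"))  # -> Please go on.
```

A debug mode like the model's debug-conversation switch would simply print which pattern matched before returning the response.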
DIY Drones: The Future is Now | Think Tank

What's the Big Idea?

"Hacking the physical" is how Dale Dougherty described the burgeoning DIY drone movement to Peter Diamandis, as recounted in Diamandis's book Abundance: The Future is Better Than You Think. Dougherty, the founder and publisher of Make magazine, was describing the broad trend that has enabled individuals to construct homebrew versions of sophisticated machines at a fraction of the cost. In the case of unmanned aerial vehicles (UAVs), 90 percent of the functionality of a military drone has been achieved for just 1 percent of the military's price. This rapid cost reduction, or demonetization, has led to breathtaking innovation in the field, and UAV advocates expect the FAA to issue personal and commercial licenses by 2015.

What's the Latest Development?

Jerry LeMieux, a retired colonel with an engineering PhD and "10,000 hours of aviation experience," has just launched the so-called Unmanned Vehicle University. Here is a sample of LeMieux's courses, available on YouTube:
h+ Magazine | Covering technological, scientific, and cultural trends that are changing human beings in fundamental ways.

Michelle Ewens, March 24, 2011

The concept of utility fog – flying, intercommunicating nanomachines that dynamically shape themselves into assorted configurations to serve various roles and execute multifarious tasks – was introduced by nanotech pioneer J. Storrs Hall in 1993. Recently in H+ Magazine, Hall pointed out that swarm robots are the closest thing we have to utility fog, which brings the concept a little closer to reality. For instance, a few years ago Dr.

However, if a future foglet ever became conscious enough to dissent from its assigned task and spread new information to the hive mind, this might cause other constituent foglets to deviate from their assigned tasks as well. Eric Drexler, who coined "grey goo" in his seminal 1986 work on nanotechnology, "Engines of Creation," now resents the term's spread, since it is often used to conjure up fears of a nanotech-inspired apocalypse.

What Is It Like to Be a Foglet?
The Psychology of Groupthink
The Ethics of Military Foglets
Real Life Japanese Mech Robot Fires BBs With A Smile

The Kuratas mecha robot is an art/aspirational nerd project by Suidobashi Heavy Industry. This full-sized mech features a ride-in cockpit, "rocket" launchers, and a "smile-controlled" BB Gatling gun. That's right: when you smile, the thing unleashes thousands of tiny plastic BBs. Unveiled at Wonder Fest 2012 in Tokyo, the robot can be controlled with either a set of master-slave joysticks or a more fluid Kinect interface. You can "price out" your own mech here, but rest assured you won't be able to drive one of these off the lot any time soon. Some have suggested the footage is CG, but considering AFP/Getty picked up photos of it, it looks about as real as you can get.

via plasticpals
The Emergence of Collective Intelligence | Ledface Blog

When we observe large schools of fish swimming, we might wonder who is choreographing that complex and sophisticated dance, in which thousands of individuals move in harmony as if they knew exactly what to do to produce the collective spectacle.

So, what is "Emergence"?

A school of fish dancing is an example of "emergence," a process in which new properties, behaviors, or complex patterns result from relatively simple rules and interactions. One can see emergence as some magical phenomenon, or simply as a surprising result of our reductionist minds' current inability to grasp complex patterns.

Humans can do it too

We humans have even built artificial environments that allow collective intelligence to express itself. Each and every actor in the financial markets has no significant control over, or awareness of, its inputs.

Can we transpose it to other domains?

Nobody can single-handedly create "collective intelligence." Too remote a possibility?
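How a purely local rule can produce global order is easy to demonstrate in a few lines. The sketch below is a deliberately simplified, hypothetical model (a Vicsek-style alignment rule on a ring, not a real fish-school simulation): each agent only averages its heading with its two immediate neighbors, yet the spread of headings across the whole group shrinks step after step.

```python
import random

def align_step(headings):
    # Local rule: each agent adopts the average heading of itself and its
    # two ring neighbors. No agent ever sees the group as a whole.
    n = len(headings)
    return [
        (headings[i - 1] + headings[i] + headings[(i + 1) % n]) / 3.0
        for i in range(n)
    ]

rng = random.Random(42)
headings = [rng.uniform(-1.0, 1.0) for _ in range(50)]  # initially disordered

initial_spread = max(headings) - min(headings)
for _ in range(200):
    headings = align_step(headings)
final_spread = max(headings) - min(headings)

# final_spread < initial_spread: alignment "emerges" with no choreographer.
```

The interesting part is what is absent: there is no coordinator variable anywhere in the code, which mirrors the point about fish schools and financial markets above.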
First Bionic Eye Sees Light of Day in U.S.

After years of research, the first bionic eye has seen the light of day in the United States, giving hope to the blind around the world. Developed by Second Sight Medical Products, the Argus II Retinal Prosthesis System has helped more than 60 people recover partial sight, with some experiencing better results than others. Consisting of 60 electrodes implanted in the retina and glasses fitted with a special mini camera, Argus II has already won the approval of European regulators.

"It's the first bionic eye to go on the market in the world, the first in Europe and the first one in the U.S.," said Brian Mech, the California-based company's vice president of business development. Those who stand to benefit from Argus II are people with retinitis pigmentosa, a rare genetic disease affecting about 100,000 people in the U.S. that results in the degeneration of the retinal photoreceptors.