Universe Grows Like a Giant Brain The universe may grow like a giant brain, according to a new computer simulation. The results, published Nov. 16 in Nature's journal Scientific Reports, suggest that some undiscovered, fundamental laws may govern the growth of systems large and small, from the electrical firing between brain cells and the growth of social networks to the expansion of galaxies. "Natural growth dynamics are the same for different real networks, like the Internet or the brain or social networks," said study co-author Dmitri Krioukov, a physicist at the University of California San Diego. The new study suggests a single fundamental law of nature may govern these networks, said physicist Kevin Bassler of the University of Houston, who was not involved in the study.
Researchers Create Artificial Neural Network from DNA Scientists at the California Institute of Technology (Caltech) have successfully created an artificial neural network using DNA molecules that is capable of brain-like behavior. Hailing it as a “major step toward creating artificial intelligence,” the scientists report that, similar to a brain, the network can retrieve memories based on incomplete patterns. Potential applications of such artificially intelligent biochemical networks with decision-making skills include medicine and biological research.
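Recalling a stored memory from an incomplete cue is the classic behavior of an associative (Hopfield-style) network. A minimal in-silico sketch of that idea, with hand-made toy patterns (this is an illustration of the concept, not the Caltech DNA construction):

```python
import numpy as np

def train(patterns):
    """Hebbian learning: sum of outer products, self-connections zeroed."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, cue, sweeps=5):
    """Update neurons one at a time until the state stops changing."""
    s = cue.copy()
    for _ in range(sweeps):
        changed = False
        for i in range(len(s)):
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

# Two stored 8-bit patterns (+1/-1 coding).
memories = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
])
W = train(memories)

# Corrupt the first pattern in two positions; recall restores it.
cue = memories[0].copy()
cue[0], cue[5] = -1, 1
print(recall(W, cue))  # recovers memories[0]
```

The corrupted cue settles back onto the nearest stored pattern, which is the "retrieve memories from incomplete patterns" behavior described above.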
Thinkbase: Mapping the World's Brain If Freebase is an "open shared database of the world's knowledge," then Thinkbase (found via information aesthetics) is a mind map of the world's knowledge. The interesting and incredibly addictive Freebase visualization and search tool is the brainchild of master's degree student Christian Hirsch at the University of Auckland. Thinkbase is one of the cool proof-of-concept applications built on top of Freebase that we mentioned last week. As we've mentioned here on RWW, Freebase is best suited for complex inferencing queries -- the type that expose relationships between various entities to figure out an answer. Things like, "What's the name of the actor who was in both 'The Lord of the Rings' and 'From Hell'?"
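The kind of relational query described above amounts to intersecting two entity sets. A toy sketch with hand-made stand-in data (not the Freebase API or its schema):

```python
# Hypothetical, hand-written film -> cast data for illustration only.
cast = {
    "The Lord of the Rings": {"Elijah Wood", "Ian Holm", "Andy Serkis"},
    "From Hell": {"Johnny Depp", "Heather Graham", "Ian Holm"},
}

def actors_in_both(film_a, film_b):
    """Return actors credited in both films (set intersection)."""
    return cast[film_a] & cast[film_b]

print(actors_in_both("The Lord of the Rings", "From Hell"))
# -> {'Ian Holm'}
```

A real Freebase-style store would resolve the same question by traversing typed relationships between entities rather than a flat dictionary, but the underlying operation is the same intersection.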
Mapping Tools Radar Mind-Pad, version 2.6, is a simple and versatile visual brainstorming software package supporting a variety of visual templates and methods (incl. mind mapping). www.mind-pad.com The Headcase Mind Mapper is a reasonably priced mind mapping program that, according to some reviews, is still quite buggy. www.nobox.de
IBM Research creates new foundation to program SyNAPSE chips Scientists from IBM unveiled on Aug. 8 a breakthrough software ecosystem designed for programming silicon chips that have an architecture inspired by the function, low power, and compact volume of the brain. The technology could enable a new generation of intelligent sensor networks that mimic the brain’s abilities for perception, action, and cognition. Dramatically different from traditional software, IBM’s new programming model breaks the mold of sequential operation underlying today’s von Neumann architectures and computers. It is instead tailored for a new class of distributed, highly interconnected, asynchronous, parallel, large-scale cognitive computing architectures.
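The spike-based, event-driven style of computation this article contrasts with von Neumann sequencing can be illustrated with a minimal leaky integrate-and-fire neuron. The parameters below are illustrative assumptions, not IBM's SyNAPSE hardware or its actual programming model:

```python
# Leaky integrate-and-fire neuron: membrane potential leaks each tick,
# integrates input current, and emits a spike when it crosses threshold.
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return a 0/1 spike train, one entry per input tick."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current      # leaky integration
        if v >= threshold:
            spikes.append(1)        # fire ...
            v = 0.0                 # ... and reset
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input accumulates until the neuron fires,
# producing a regular spike train rather than a sequential result.
print(simulate_lif([0.4] * 10))
```

Computation here is carried by the timing of discrete events rather than by a stream of fetched instructions, which is the architectural contrast the article is drawing.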
Google scientist Jeff Dean on how neural networks are improving everything Google does Google's goal: a more powerful search that fully understands and answers commands like, "Book me a ticket to Washington DC." Jon Xavier, Web Producer, Silicon Valley Business Journal If you've ever been mystified by how Google knows what you're looking for before you even finish typing your query into the search box, or had voice search on Android recognize exactly what you said even though you're in a noisy subway, chances are you have Jeff Dean and the Systems Infrastructure Group to thank for it. As a Google Research Fellow, Dean has been working on ways to use machine learning and deep neural networks to solve some of the toughest problems Google has, such as natural language processing, speech recognition, and computer vision.
Birth of the global mind The best symbiosis of man and computer is one where a program learns from humans but notices things they would not. Global consciousness. We’ve heard that before. In the 1960s we were all going to be mystically connected; or it would come as a super-intelligent machine – Terminator’s Skynet – that is inimical to humanity. And yet, what if the reality is more mundane? Computer scientist Danny Hillis once remarked, “Global consciousness is that thing responsible for deciding that pots containing decaffeinated coffee should be orange.”
Mental model A mental model is an explanation of someone's thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts and a person's intuitive perception about his or her own acts and their consequences. Mental models can help shape behaviour and set an approach to solving problems (akin to a personal algorithm) and doing tasks. A mental model is a kind of internal symbol or representation of external reality, hypothesized to play a major role in cognition, reasoning and decision-making. Kenneth Craik suggested in 1943 that the mind constructs "small-scale models" of reality that it uses to anticipate events. Jay Wright Forrester defined general mental models as:
New Techniques from Google and Ray Kurzweil Are Taking Artificial Intelligence to Another Level When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own. It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.”
What is a neural network? - Definition from WhatIs.com In information technology, a neural network is a system of programs and data structures that approximates the operation of the human brain. A neural network usually involves a large number of processors operating in parallel, each with its own small sphere of knowledge and access to data in its local memory. Typically, a neural network is initially "trained" or fed large amounts of data and rules about data relationships (for example, "A grandfather is older than a person's father"). A program can then tell the network how to behave in response to an external stimulus (for example, to input from a computer user who is interacting with the network) or can initiate activity on its own (within the limits of its access to the external world).
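The train-then-respond loop in that definition can be made concrete with the smallest possible network: a single neuron (perceptron) that learns from labeled examples. The task (logical AND) and learning rate here are illustrative choices, not part of the definition above:

```python
# A single trainable neuron: weighted sum, threshold, and a simple
# error-correction rule applied over the training examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights w and bias b from (inputs, target) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1   # nudge each weight toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data: logical AND of two inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

After training, the network responds correctly to all four inputs; nothing in the code encodes AND directly, only the examples do, which is the "fed data and rules" behavior the definition describes.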
Superintelligence A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to the form or degree of intelligence possessed by such an agent. Technological forecasters and researchers disagree about when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations.
A Neuroscientist's Radical Theory of How Networks Become Conscious - Wired Science It’s a question that’s perplexed philosophers for centuries and scientists for decades: Where does consciousness come from? We know it exists, at least in ourselves. But how it arises from chemistry and electricity in our brains is an unsolved mystery. Neuroscientist Christof Koch, chief scientific officer at the Allen Institute for Brain Science, thinks he might know the answer.
Google and Neural Networks: Now Things Are Getting REALLY Interesting… Back in October 2002, I appeared as a guest speaker at the Chicago (Illinois) URISA conference. The topic I spoke about at that time was the commercial and governmental applicability of neural networks. Although well received (the audience actually clapped, some asked to have pictures taken with me, and nobody fell asleep), at the time it was regarded as, well, out there. After all, who the hell was talking about – much less knew anything about – neural networks?