Universe Grows Like a Giant Brain
The universe may grow like a giant brain, according to a new computer simulation. The results, published Nov. 16 in Nature's Scientific Reports, suggest that some undiscovered fundamental laws may govern the growth of systems large and small, from the electrical firing between brain cells and the growth of social networks to the expansion of galaxies. "Natural growth dynamics are the same for different real networks, like the Internet or the brain or social networks," said study co-author Dmitri Krioukov, a physicist at the University of California, San Diego. The new study suggests a single fundamental law of nature may govern these networks, said physicist Kevin Bassler of the University of Houston, who was not involved in the study. "At first blush they seem to be quite different systems. The question is, is there some kind of controlling law that can describe them?" By raising this question, "their work really makes a pretty important contribution," he said.
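The shared growth dynamics the study points to can be illustrated with a toy model. The sketch below is a generic preferential-attachment simulation (new nodes prefer to link to already well-connected nodes), which is an assumption for illustration, not the authors' actual simulation; it shows how such growth rules produce the heavy-tailed connectivity seen in brains, social networks, and the Internet alike.

```python
import random

def grow_network(n_nodes, m=2, seed=42):
    """Grow a network by preferential attachment: each new node links to
    m existing nodes chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Flat list of edge endpoints; a uniform pick from it is degree-weighted.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

edges = grow_network(2000)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
# Heavy tail: the best-connected hub far exceeds the typical (median) node.
print(max(degree.values()), sorted(degree.values())[len(degree) // 2])
```

Run on a few thousand nodes, the hub's degree dwarfs the median node's, the signature of the scale-free structure these otherwise very different real networks share.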
Thinkbase: Mapping the World's Brain
If Freebase is an "open shared database of the world's knowledge," then Thinkbase (found via information aesthetics) is a mind map of the world's knowledge. The interesting and incredibly addictive Freebase visualization and search tool is the brainchild of master's degree student Christian Hirsch at the University of Auckland. Thinkbase is one of the cool proof-of-concept applications built on top of Freebase that we mentioned last week. As we've mentioned here on RWW, Freebase is best suited for complex inferencing queries -- the type that expose relationships between various entities to figure out an answer. Things like, "What's the name of the actor who was in both The Lord of the Rings and From Hell?" Thinkbase doesn't necessarily answer those questions -- at least not directly -- but it does allow people to visually explore the relationships that Freebase can expose.
Researchers Create Artificial Neural Network from DNA
Scientists at the California Institute of Technology (Caltech) have successfully created an artificial neural network using DNA molecules that is capable of brain-like behavior. Hailing it as a "major step toward creating artificial intelligence," the scientists report that, like a brain, the network can retrieve memories from incomplete patterns. Potential applications of such artificially intelligent biochemical networks with decision-making skills include medicine and biological research. More details from Caltech: consisting of four artificial neurons made from 112 distinct DNA strands, the researchers' neural network plays a mind-reading game in which it tries to identify a mystery scientist. Full story: Caltech researchers create the first artificial neural network out of DNA…
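Retrieving a full memory from an incomplete pattern is the defining behavior of a Hopfield-style associative memory. The sketch below is a minimal software illustration of that general principle, not the Caltech group's DNA implementation: patterns are stored in Hebbian weights, and a half-known cue (with unknown units set to 0) settles onto the closest stored memory.

```python
def train_hopfield(patterns):
    """Hebbian weights for a Hopfield associative memory over ±1 patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=10):
    """Repeatedly update each unit toward its weighted input until the
    partial cue settles on a stored memory."""
    s = list(cue)
    for _ in range(steps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

memories = [
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
]
w = train_hopfield(memories)
cue = [1, 1, 1, 1, 0, 0, 0, 0]  # second half of the pattern unknown
print(recall(w, cue))           # -> [1, 1, 1, 1, -1, -1, -1, -1]
```

The incomplete cue matches only the first half of the first memory, yet the network reconstructs the whole pattern; the DNA network demonstrated the same kind of content-addressable recall in chemistry rather than code.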
IBM Research creates new foundation to program SyNAPSE chips
(Credit: IBM Research) Scientists from IBM unveiled on Aug. 8 a breakthrough software ecosystem designed for programming silicon chips whose architecture is inspired by the function, low power, and compact volume of the brain. The technology could enable a new generation of intelligent sensor networks that mimic the brain's abilities for perception, action, and cognition. Dramatically different from traditional software, IBM's new programming model breaks the mold of the sequential operation underlying today's von Neumann architectures and computers. It is instead tailored for a new class of distributed, highly interconnected, asynchronous, parallel, large-scale cognitive computing architectures. "Architectures and programs are closely intertwined, and a new architecture necessitates a new programming paradigm," said Dr. "We are working to create a FORTRAN [a pioneering computer language] for synaptic computing chips." Paving the Path to SyNAPSE: Take the human eye, for example.
Birth of the global mind
The best symbiosis of man and computer is where a program learns from humans but notices things they would not. Global consciousness. We've heard that before. In the 1960s we were all going to be mystically connected; or it would come as a super-intelligent machine, Terminator's Skynet, that is inimical to humanity. And yet, what if the reality is more mundane? Computer scientist Danny Hillis once remarked, "Global consciousness is that thing responsible for deciding that pots containing decaffeinated coffee should be orange." What is different today, though, is the speed with which knowledge propagates. One might say that this is the same underlying mechanism of human knowledge capture and retransmission that has always driven the advance of civilisation. The web is a perfect example of what engineer and early computer scientist Vannevar Bush called "intelligence augmentation" by computers, in his 1945 article "As We May Think" in The Atlantic.
IBM simulates 530 billion neurons, 100 trillion synapses on supercomputer
A network of neurosynaptic cores derived from long-distance wiring in the monkey brain: neurosynaptic cores are locally clustered into brain-inspired regions, and each core is represented as an individual point along the ring. Arcs are drawn from a source core to a destination core, with the edge color defined by the color assigned to the source core. (Credit: IBM) Announced in 2008, DARPA's SyNAPSE program calls for developing electronic neuromorphic (brain-simulation) machine technology that scales to biological levels, using a cognitive computing architecture with 10^10 neurons (10 billion) and 10^14 synapses (100 trillion, based on estimates of the number of synapses in the human brain). Simulating 10 billion neurons and 100 trillion synapses on the most powerful supercomputer. Neurosynaptic core (credit: IBM). Two billion neurosynaptic cores.
Google scientist Jeff Dean on how neural networks are improving everything Google does
Google's goal: a more powerful search that fully understands commands like, "Book me a ticket to Washington DC." Jon Xavier, Web Producer, Silicon Valley Business Journal. If you've ever been mystified by how Google knows what you're looking for before you even finish typing your query into the search box, or had voice search on Android recognize exactly what you said even though you're in a noisy subway, chances are you have Jeff Dean and the Systems Infrastructure Group to thank for it. As a Google Research Fellow, Dean has been working on ways to use machine learning and deep neural networks to solve some of the toughest problems Google has, such as natural language processing, speech recognition, and computer vision. Q: What does your group do at Google? A: We in our group are trying to do several things.
Superintelligence
A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. Technological forecasters and researchers disagree about when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Experts in AI and biotechnology do not expect any of these technologies to produce a superintelligence in the very near future. Definition: summarizing the views of intelligence researchers, Linda Gottfredson writes: "Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience."
What is a neural network? - Definition from WhatIs
In information technology, a neural network is a system of programs and data structures that approximates the operation of the human brain. A neural network usually involves a large number of processors operating in parallel, each with its own small sphere of knowledge and access to the data in its local memory. Typically, a neural network is initially "trained" by being fed large amounts of data and rules about data relationships (for example, "a grandfather is older than a person's father"). A program can then tell the network how to behave in response to an external stimulus (for example, to input from a computer user who is interacting with the network) or can initiate activity on its own (within the limits of its access to the external world). In making determinations, neural networks use several principles, including gradient-based training, fuzzy logic, genetic algorithms, and Bayesian methods. Contributor(s): Lee Giles. This was last updated in July 2006.
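The "trained by being fed examples" and "gradient-based training" ideas in the definition above can be made concrete with the smallest possible case: a single artificial neuron. This is a minimal sketch for illustration (the example task, logical AND, is an assumption, not from the definition); the neuron sees examples repeatedly and nudges its weights down the error gradient after each one.

```python
import math
import random

def train_neuron(samples, epochs=5000, lr=0.5, seed=0):
    """Train a single sigmoid neuron by gradient descent: feed each
    (inputs, target) example repeatedly and adjust weights to reduce error."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in samples[0][0]]
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            d = y - t  # gradient of the cross-entropy loss w.r.t. the pre-activation
            w = [wi - lr * d * xi for wi, xi in zip(w, x)]
            b -= lr * d
    return w, b

def predict(w, b, x):
    """Neuron output in (0, 1); round it to get a yes/no decision."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Teach the neuron logical AND purely from examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(data)
print([round(predict(w, b, x)) for x, _ in data])  # -> [0, 0, 0, 1]
```

Nothing in the code states the AND rule; the behavior emerges from repeated exposure to data, which is exactly the training process the definition describes, scaled down to one "processor".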
INAF: A weakly magnetic neutron star - Le Scienze
Press release - A team of researchers led by Rosario Iaria of the Department of Physics and Chemistry of the University of Palermo (and an INAF associate), with the participation of Melania Del Santo of INAF, has measured the lowest magnetic field ever obtained for a neutron star using direct techniques. Palermo, March 18, 2015 - They are small: their typical radius is on the order of 10 kilometers. But they are also super dense: packed inside them is about one and a half times the mass of our Sun. Their real 'specialty', however, is magnetism: the field they possess, owing in part to the extreme state of the matter that composes them, is enormous: up to a thousand billion times stronger than Earth's. We now know thousands of neutron stars and, thanks to ever more sophisticated instruments, such as space observatories dedicated to high-energy astrophysics, we are probing their characteristics better and better.
An Introduction to Neural Networks
Prof. Leslie Smith, Centre for Cognitive and Computational Neuroscience, Department of Computing and Mathematics, University of Stirling. email@example.com. Last major update: 25 October 1996; minor updates 22 April 1998 and 12 Sept 2001; links updated (they were out of date) 12 Sept 2001; fix to math font (thanks Sietse Brouwer) 2 April 2003. This document is a roughly HTML-ised version of a talk given at the NSYN meeting in Edinburgh, Scotland, on 28 February 1996, then updated a few times in response to comments received. Please email me comments, but remember that this was originally just the slides from an introductory talk! Topics: What is a neural network? Some algorithms and architectures. Where have they been applied? What new applications are likely? Some useful sources of information. Some comments added Sept 2001. NEW: questions and answers arising from this tutorial. Why would anyone want a 'new' sort of computer? What are (everyday) computer systems good at, and not so good at?
Noogenesis
Noogenesis (Ancient Greek: νοῦς = mind + γένεσις = becoming) is the emergence of intelligent forms of life. The term was first used by Pierre Teilhard de Chardin in regard to the evolution of humans. It is also used in astrobiology in regard to the emergence of forms of life capable of technology, and so of interstellar communication and travel. Teilhard: Noogenesis began with reflective thought, or with the first human beings. Teilhard imagined that noogenesis will eventually reach a critical point of consciousness, brought about by a maximum tension of human socialization. Astrobiology: In astrobiology, noogenesis concerns the origin of intelligent life and, more specifically, of technological civilizations capable of communicating with humans and/or traveling to Earth. The lack of evidence for the existence of such extraterrestrial life creates the Fermi paradox.