Universe Grows Like a Giant Brain The universe may grow like a giant brain, according to a new computer simulation. The results, published Nov. 16 in the journal Nature's Scientific Reports, suggest that some undiscovered, fundamental laws may govern the growth of systems large and small, from the electrical firing between brain cells and the growth of social networks to the expansion of galaxies. "Natural growth dynamics are the same for different real networks, like the Internet or the brain or social networks," said study co-author Dmitri Krioukov, a physicist at the University of California, San Diego. The new study suggests a single fundamental law of nature may govern these networks, said physicist Kevin Bassler of the University of Houston, who was not involved in the study. "At first blush they seem to be quite different systems; the question is, is there some kind of controlling law that can describe them?" By raising this question, "their work really makes a pretty important contribution," he said.
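The shared growth dynamic described here, where already well-connected nodes tend to attract new links, can be illustrated with a preferential-attachment sketch. This is a minimal toy model for illustration only, not the study's actual simulation (whose network geometry is more elaborate); the node counts and seed are invented.

```python
import random

def grow_network(n_nodes, seed=0):
    """Grow a network one node at a time, attaching each new node to an
    existing node with probability proportional to its degree."""
    rng = random.Random(seed)
    # 'stubs' lists each node once per link it has, so a uniform draw
    # from it is a degree-proportional ("rich get richer") draw.
    stubs = [0, 1]
    degree = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        target = rng.choice(stubs)
        stubs += [new, target]
        degree[new] = 1
        degree[target] += 1
    return degree

degree = grow_network(10_000)
# A few heavily connected hubs emerge while most nodes stay sparse,
# the heavy-tailed signature shared by the Internet, social networks,
# and other growing real-world networks.
print(max(degree.values()), sum(degree.values()) / len(degree))
```

Each new node adds exactly one edge, so the mean degree stays near 2 while the maximum degree grows far larger.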
Neural Correlates of Lyrical Improvisation: An fMRI Study of Freestyle Rap : Scientific Reports In this study, we used fMRI to investigate the neural correlates of spontaneous lyrical improvisation by comparing spontaneous freestyle rap to conventional rehearsed performance. Our results reveal characteristic patterns of activity associated with this novel form of lyrical improvisation, and may also provide more general insights into the creative process itself. It has been suggested that creative behavior may occur in two stages: an improvisatory phase characterized by the generation of novel material, and a phase in which this material is re-evaluated and revised [1]. A second salient feature of improvisation revealed by the GLM contrasts was a marked lateralization of task-related changes in the BOLD signal. Activation of left-hemisphere language areas (in the inferior frontal and posterior middle and superior temporal gyri) was predicted and is perhaps unsurprising given the nature of the genre.
Wireless Electricity Is Real and Can Change the World A revolution in the method of transmitting and receiving power is taking place, and the results, as they pertain to the everyday consumer, may not be far behind. In fact, some forms of the technology will be made available this year. Picture yourself never having to worry about recharging your phone, iPod, or laptop as long as you are inside a wireless energy zone. That zone could be located in your house, on the train, in the airport, or at your workplace. "Laptop batteries are always burning out and always need a charge." With major competition along many different technological avenues for bringing wireless power to the market, it's almost assured that some form or another will be used in the mainstream before too long. Companies such as Sunnyvale, Calif.-based PowerBeam showcased wireless lamps and picture frames, powered by technology that beams optical energy into photovoltaic cells using laser diodes, at last week's Consumer Electronics Show in Las Vegas.
Researchers Create Artificial Neural Network from DNA Scientists at the California Institute of Technology (Caltech) have successfully created an artificial neural network using DNA molecules that is capable of brain-like behavior. Hailing it as a "major step toward creating artificial intelligence," the scientists report that, similar to a brain, the network can retrieve memories based on incomplete patterns. Potential applications of such artificially intelligent biochemical networks with decision-making skills include medicine and biological research. More details from Caltech: consisting of four artificial neurons made from 112 distinct DNA strands, the researchers' neural network plays a mind-reading game in which it tries to identify a mystery scientist. Full story: Caltech researchers create the first artificial neural network out of DNA…
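Retrieving memories from incomplete patterns, as described above, is the behavior of a Hopfield-style associative memory, which the Caltech team realized in DNA strand-displacement chemistry. A minimal software sketch of the same idea, assuming invented toy patterns (the actual DNA network encoded answers about scientists, not these vectors):

```python
def train(patterns):
    """Hebbian learning: weight w[i][j] sums the correlation between
    bits i and j across all stored +/-1 patterns (diagonal left zero)."""
    n = len(patterns[0])
    w = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, cue, steps=5):
    """Repeatedly align each bit with the field from the others, so a
    corrupted cue settles onto the nearest stored pattern."""
    s = list(cue)
    n = len(s)
    for _ in range(steps):
        for i in range(n):
            field = sum(w[i][j] * s[j] for j in range(n))
            if field != 0:
                s[i] = 1 if field > 0 else -1
    return s

memories = [[1, 1, 1, -1, -1, -1], [1, -1, 1, -1, 1, -1]]
w = train(memories)
# Present the first memory with one bit flipped; the network
# completes the pattern.
print(recall(w, [1, 1, -1, -1, -1, -1]))  # → [1, 1, 1, -1, -1, -1]
```

The "incomplete pattern" in the news story plays the role of the corrupted cue here: partial answers in the guessing game pull the chemical network toward the stored identity that matches best.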
IBM Research creates new foundation to program SyNAPSE chips (Credit: IBM Research) Scientists from IBM unveiled on Aug. 8 a breakthrough software ecosystem designed for programming silicon chips that have an architecture inspired by the function, low power, and compact volume of the brain. The technology could enable a new generation of intelligent sensor networks that mimic the brain's abilities for perception, action, and cognition. Dramatically different from traditional software, IBM's new programming model breaks the mold of sequential operation underlying today's von Neumann architectures and computers. It is instead tailored for a new class of distributed, highly interconnected, asynchronous, parallel, large-scale cognitive computing architectures. "Architectures and programs are closely intertwined, and a new architecture necessitates a new programming paradigm," said Dr. "We are working to create a FORTRAN [a pioneering computer language] for synaptic computing chips." Paving the Path to SyNAPSE Take the human eye, for example.
The brain prefers sport to tobacco! News by Yan Di Meglio, written 28 November 2012. To protect the brain from premature aging, stimulation exercises and memory tests are regularly recommended. According to a Scottish study published in the journal Neurology, however, there seems to be something even more effective: sport! The scientists observed the regular sporting habits of 700 people over the age of 70. Taking the stairs rather than the escalator: it is the cardiovascular system, in better condition in people who exercise, that explains the brain's resistance to aging, because the brain is better oxygenated. Tests on nearly 9,000 people over 50 revealed deficits in the cognitive faculties of some of them. And quitting tobacco: we already knew that tobacco impairs the oxygenation of the brain.
The brick-road-laying Tiger Stone Laying down paving bricks is back-breaking, time-consuming work... or at least, it is if you do it the usual way. Henk van Kuijk, director of Dutch industrial company Vanku, evidently decided that squatting or kneeling and shoving the bricks into place on the ground was just a little too slow, so he invented the Tiger Stone paving machine. The road-wide device is fed loose bricks and lays them out onto the road as it slowly moves along. A quick going-over with a tamper, and you've got an instant brick road. One to three human operators stand on the platform of the Tiger Stone and move loose bricks by hand from its hopper to its sloping "pusher" slot; the bricks do have to be fed into the pusher in the desired finished pattern. The tread-tracked machine is electrically powered and has few moving parts, so noise and maintenance are kept to a minimum. Via Gizmodo.
IBM simulates 530 billion neurons, 100 trillion synapses on supercomputer A network of neurosynaptic cores derived from long-distance wiring in the monkey brain: neurosynaptic cores are locally clustered into brain-inspired regions, and each core is represented as an individual point along the ring. Arcs are drawn from a source core to a destination core, with the edge color defined by the color assigned to the source core. (Credit: IBM) Announced in 2008, DARPA's SyNAPSE program calls for developing electronic neuromorphic (brain-simulation) machine technology that scales to biological levels, using a cognitive computing architecture with 10^10 neurons (10 billion) and 10^14 synapses (100 trillion, based on estimates of the number of synapses in the human brain). Simulating 10 billion neurons and 100 trillion synapses on the most powerful supercomputer. Neurosynaptic core (credit: IBM). Two billion neurosynaptic cores.
Google scientist Jeff Dean on how neural networks are improving everything Google does (Photo: Simon Dawson) Google's goal: a more powerful search that fully understands answers to commands like, "Book me a ticket to Washington DC." Jon Xavier, Web Producer, Silicon Valley Business Journal If you've ever been mystified by how Google knows what you're looking for before you even finish typing your query into the search box, or had voice search on Android recognize exactly what you said even though you're in a noisy subway, chances are you have Jeff Dean and the Systems Infrastructure Group to thank for it. As a Google Research Fellow, Dean has been working on ways to use machine learning and deep neural networks to solve some of the toughest problems Google has, such as natural language processing, speech recognition, and computer vision. Q: What does your group do at Google? A: We in our group are trying to do several things.
Hyperstimulation of the second brain "The pathophysiology of irritable bowel syndrome is still poorly understood, but by approaching it through its symptoms, numerous phenomena occurring notably in the intestine have been identified," notes Michel Neunlist, director of the Inserm neuro-gastroenterology unit at the Institute of Digestive System Diseases (Imad) in Nantes. By studying pain, researchers have found that IBS patients often show increased sensitivity to painful stimuli, registered in the central nervous system (the brain). The digestive disorders themselves could be linked to a disturbance of intestinal motility tied to the enteric nervous system (ENS), sometimes regarded as a second brain. The intestine is indeed one of the most densely innervated areas of the body, and this nerve network autonomously controls certain digestive functions, such as the peristaltic movement that propels food through the intestine.
What is a neural network? - Definition from WhatIs In information technology, a neural network is a system of programs and data structures that approximates the operation of the human brain. A neural network usually involves a large number of processors operating in parallel, each with its own small sphere of knowledge and access to data in its local memory. Typically, a neural network is initially "trained" by being fed large amounts of data and rules about data relationships (for example, "A grandfather is older than a person's father"). A program can then tell the network how to behave in response to an external stimulus (for example, to input from a computer user who is interacting with the network) or can initiate activity on its own (within the limits of its access to the external world). In making determinations, neural networks use several principles, including gradient-based training, fuzzy logic, genetic algorithms, and Bayesian methods. Contributor(s): Lee Giles This was last updated in July 2006
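The "training" step in the definition can be sketched with the simplest possible case: a single artificial neuron (a perceptron) adjusting its weights from labeled examples until its output matches the desired response. The data, learning rate, and function names here are invented for illustration.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias for a single threshold neuron by
    nudging them toward each example's target output."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn logical OR from four labeled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

Real neural networks chain many such units into layers and use gradient-based training rather than this simple update rule, but the principle of learning weights from examples is the same one the definition describes.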
An Introduction to Neural Networks Prof. Leslie Smith, Centre for Cognitive and Computational Neuroscience, Department of Computing and Mathematics, University of Stirling. email@example.com Last major update: 25 October 1996; minor updates 22 April 1998 and 12 Sept 2001 (links updated, as they were out of date); fix to math font (thanks Sietse Brouwer) 2 April 2003. This document is a roughly HTML-ised version of a talk given at the NSYN meeting in Edinburgh, Scotland, on 28 February 1996, then updated a few times in response to comments received. Please email me comments, but remember that this was originally just the slides from an introductory talk! Topics: What is a neural network? Some algorithms and architectures. Where have they been applied? What new applications are likely? Some useful sources of information. Some comments added Sept 2001. NEW: questions and answers arising from this tutorial. Why would anyone want a 'new' sort of computer? What are (everyday) computer systems good at... and not so good at?
COMA: Facing trauma, the brain reorganizes itself News published 29-11-2012, Inserm and PNAS. A reorganization of brain networks, acting like a mechanism of resistance to trauma: that is what these Inserm researchers observed when they analyzed the brain networks of patients in a coma, a state in which the person is considered "unconscious." Inserm notes that a coma is a state marked by the abolition of awareness of oneself and the outside world, and that a coma has two phases, the so-called "acute" phase followed by the "chronic" phase. Their results show that global brain connectivity is preserved in coma patients compared with healthy volunteers, but that connectivity at the local level is reorganized: connectivity in certain brain regions that are highly connected in healthy subjects is weaker in coma patients. A new personalized mode of assessment?