
Neural Networks


OpenNN. Hasegawa Lab., Imaging Science and Engineering Laboratory. Tokyo Institute of Technology. On this page, we introduce our unsupervised online incremental learning method, the Self-Organizing Incremental Neural Network (SOINN).

Hasegawa Lab., Imaging Science and Engineering Laboratory. Tokyo Institute of Technology

We will release the 2nd-generation SOINN, which is designed based on Bayesian theory. What is SOINN? SOINN is an unsupervised online learning method, capable of incremental learning, based on Growing Neural Gas (GNG) and the Self-Organizing Map (SOM). For online data that is non-stationary and has a complex distribution, it can approximate the distribution of the input data and estimate the appropriate number of classes by forming a network in a self-organizing way. In addition, it has the following features: no need to predefine the network structure, high robustness to noise, and low computational cost. Published Papers. Videos: on the HasegawaLab channel on YouTube, many other videos of our research are available.
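As a rough illustration of this idea (a sketch of incremental, self-organizing node insertion, not the published SOINN algorithm), the following Python snippet grows a set of prototype nodes online: each input either adapts its nearest node or, if it lies farther than a similarity threshold from the two nearest nodes, is inserted as a new node. The function name, threshold, and learning rate are assumptions for illustration.

    import numpy as np

    def incremental_fit(stream, threshold=1.0, lr=0.1):
        """Toy SOINN-style incremental learner (illustrative sketch only)."""
        nodes = []                      # learned prototype vectors
        for x in stream:
            x = np.asarray(x, dtype=float)
            if len(nodes) < 2:
                nodes.append(x.copy())  # bootstrap with the first two inputs
                continue
            dists = [np.linalg.norm(x - n) for n in nodes]
            order = np.argsort(dists)
            winner, runner_up = order[0], order[1]
            if dists[winner] > threshold and dists[runner_up] > threshold:
                nodes.append(x.copy())  # novel region of input space: new node
            else:
                nodes[winner] += lr * (x - nodes[winner])  # adapt the winner
        return nodes

    # Example: two well-separated clusters end up represented by separate nodes.
    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(0, 0.1, (200, 2)), rng.normal(3, 0.1, (200, 2))])
    print(len(incremental_fit(rng.permutation(data))))

Unlike a fixed-topology SOM, nothing about the final number of nodes is specified in advance, which is the property the excerpt emphasizes.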

Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity. Results: In this section we define a simple model circuit and show that every spiking event of the circuit can be described as one independent sample of a discrete probability distribution, which itself evolves over time in response to the spiking input.

Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity

Within this network we analyze a variant of an STDP rule in which the strength of potentiation depends on the current weight value. This local learning rule, which is supported by experimental data and at intermediate spike frequencies closely resembles typical STDP rules from the literature, drives every synaptic weight to converge stochastically to the log of the probability that the presynaptic input neuron fired a spike within a short time window before the postsynaptic neuron spikes (a small simulation illustrating this convergence appears below). This understanding of spikes as samples of hidden causes leads to the central result of this paper. Finally, we discuss how our model can be implemented with biologically realistic mechanisms. Definition of the network model. What is RoboEarth?
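Returning to the STDP excerpt above: the following sketch simulates a weight-dependent plasticity rule of the kind described, here assumed (an illustrative choice, not necessarily the paper's exact rule) to potentiate by c·e^(−w) − 1 when the presynaptic neuron fired in the window before a postsynaptic spike and to depress by −1 otherwise, and checks numerically that the weight settles near the log of the presynaptic firing probability.

    import math, random

    def simulate_weight(p_pre=0.3, c=1.0, eta=0.01, trials=200000, seed=1):
        """Weight-dependent STDP sketch; all parameters are illustrative assumptions."""
        rng = random.Random(seed)
        w = 0.0
        for _ in range(trials):
            if rng.random() < p_pre:             # presynaptic spike in the window
                dw = c * math.exp(-w) - 1.0      # potentiation shrinks as w grows
            else:
                dw = -1.0                        # constant depression
            w += eta * dw
        return w

    w = simulate_weight()
    print(f"converged weight ~ {w:.3f}, log(p_pre) = {math.log(0.3):.3f}")

At the fixed point the expected update vanishes, p·c·e^(−w) − 1 = 0, so w = log(c·p); with c = 1 this is exactly the log of the presynaptic firing probability, matching the convergence statement above.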

Resurgence in Neural Networks - tjake.blog. If you’ve been paying attention, you’ll notice there has been a lot of news recently about neural networks and the brain.

Resurgence in Neural Networks - tjake.blog

A few years ago the idea of virtual brains seemed so far from reality, especially for me. But in the past few years there has been a breakthrough that has turned neural networks from nifty little toys into actually useful things that keep getting better at tasks computers are traditionally very bad at. In this post I'll cover some background on neural networks and my experience with them, then go over the recent discoveries I've learned about.

At the end of the post I'll share a sweet little GitHub project I wrote that implements this new neural network approach. Background: When I was in college I studied Cognitive Science, which is an interdisciplinary study of the mind and brain spanning philosophy, psychology, linguistics, neuroscience, and artificial intelligence. I ended up focusing on A.I. and eventually majored in Computer Science because of it. Generative Machines. Towards the evolution of an artificial homeostatic system. This paper presents an artificial homeostatic system (AHS) devoted to the autonomous navigation of mobile robots, with emphasis on neuro-endocrine interactions.

Towards the evolution of an artificial homeostatic system

The AHS is composed of two modules, each one associated with a particular reactive task and both implemented using an extended version of the GasNet neural model, denoted the spatially unconstrained GasNet model or simply non-spatial GasNet (NSGasNet). There is a coordination system, which is responsible for the specific role of each NSGasNet under a given operational condition. The switching among the NSGasNets is implemented as an artificial endocrine system (AES), which is based on a system of coupled nonlinear difference equations (a toy sketch of such a switching scheme appears after the reviews below). The NSGasNets are synthesized by means of an evolutionary algorithm. PyBrain. Programming Collective Intelligence. Subscriber Reviews, average rating based on 8 ratings:

"Useful book on machine learning, etc." - by Alex Ott on 20-JUL-2011. Reviewer rating: Very good introduction to machine learning, information retrieval, and data mining related questions. Could be used to get a high-level overview of the corresponding topics, especially by non-CS people.

"A 'Hands On' book on Artificial Intelligence" - by Tushar Goswami on 14-MAR-2011. Reviewer rating: After digesting some half a dozen books over the past few years on AI and NLP, I have had enough of 'theoretical' and 'over-detailed' descriptions of AI fundamentals and algorithms. But it's not the copy-pastable code alone: Chapter 12, which gives a black-and-white comparison of AI algorithms, comes in handy when you want to measure the shortcomings and strengths of the various algorithms (by the way, you may also want to compare efficiency between algorithms by using an AI toolkit, like the Weka UI).

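Returning to the artificial endocrine system described in the AHS excerpt above: the sketch below is a toy, made-up coordinator in which two hormone-like variables follow coupled nonlinear difference equations and control is handed to whichever NSGasNet-like module currently has the higher hormone level. The update rule, gains, and stimuli are all assumptions for illustration, not the paper's equations.

    def aes_switch(stimuli, a=2.5, k=1.2, steps_per_input=20):
        """Toy endocrine-style coordinator (illustrative assumptions throughout)."""
        h = [0.5, 0.5]                  # hormone levels, one per module
        selected = []
        for s0, s1 in stimuli:          # external stimuli favouring module 0 or 1
            for _ in range(steps_per_input):
                # logistic-style growth driven by the stimulus, suppressed by the
                # other hormone; mutual inhibition yields winner-take-all switching
                new0 = h[0] + 0.1 * (a * s0 * h[0] * (1 - h[0]) - k * h[1] * h[0])
                new1 = h[1] + 0.1 * (a * s1 * h[1] * (1 - h[1]) - k * h[0] * h[1])
                h = [min(max(new0, 0.01), 1.0), min(max(new1, 0.01), 1.0)]
            selected.append(0 if h[0] >= h[1] else 1)   # active module index
        return selected

    # Example: stimuli favour module 0 first, then module 1; the selection switches.
    print(aes_switch([(1.0, 0.1)] * 10 + [(0.1, 1.0)] * 10))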
Fast Artificial Neural Network Library. Temporal difference learning. Temporal difference (TD) learning is an approach to learning how to predict a quantity that depends on future values of a given signal.

Temporal difference learning

The name TD derives from its use of changes, or differences, in predictions over successive time steps to drive the learning process. The prediction at any given time step is updated to bring it closer to the prediction of the same quantity at the next time step. It is a supervised learning process in which the training signal for a prediction is a future prediction.

TD algorithms are often used in reinforcement learning to predict a measure of the total amount of reward expected over the future, but they can be used to predict other quantities as well. Continuous-time TD algorithms have also been developed. The Problem: Suppose a system receives as input a time sequence of vectors (x_t, y_t), t = 0, 1, 2, \dots, where each x_t is an arbitrary signal and y_t is a real number. The quantity to be predicted at time t is the discounted sum of future values, \sum_{k=1}^{\infty} \gamma^{k-1} y_{t+k}, where \gamma is a discount factor with 0 \le \gamma < 1. Eligibility Traces. Heuristic search project.
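The prediction problem just defined can be illustrated with a minimal tabular TD(0) sketch (a simplification chosen for illustration; the article goes on to the general TD(λ) family with eligibility traces). V(x) is updated toward the one-step target y_{t+1} + \gamma V(x_{t+1}), so over many updates it approaches the discounted sum above; the states, step size, and toy signal here are made up.

    from collections import defaultdict

    def td0_predict(episodes, gamma=0.9, alpha=0.1):
        """Tabular TD(0) prediction sketch (illustrative assumptions throughout)."""
        V = defaultdict(float)
        for episode in episodes:
            # consecutive pairs (x_t, y_t) -> (x_{t+1}, y_{t+1})
            for (x, _), (x_next, y_next) in zip(episode, episode[1:]):
                target = y_next + gamma * V[x_next]   # future prediction as training signal
                V[x] += alpha * (target - V[x])       # move prediction toward target
        return V

    # Toy episodes: from state "A" the next signal y is 1, and nothing follows after "B".
    episodes = [[("A", 0.0), ("B", 1.0), ("B", 0.0)] for _ in range(500)]
    V = td0_predict(episodes)
    print(V["A"])   # approaches 1.0 + gamma * V["B"], and V["B"] stays near 0 here

The key point, matching the sentence above about the training signal being a future prediction, is that the target for V(x_t) is built from the next prediction V(x_{t+1}) rather than from the full future sum.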