Artificial neural networks

A brief tutorial on cascade-correlation

Repository of Neural and Cognitive Models

Neural network gets an idea of number without counting - tech - 20 January 2012

An artificial brain has taught itself to estimate the number of objects in an image without actually counting them, emulating abilities displayed by some animals including lions and fish, as well as humans.

Because the model was not preprogrammed with numerical capabilities, the feat suggests that this skill emerges due to general learning processes rather than number-specific mechanisms. "It answers the question of how numerosity emerges without teaching anything about numbers in the first place," says Marco Zorzi at the University of Padua in Italy, who led the work.

The finding may also help us to understand dyscalculia, a condition in which people find it nearly impossible to acquire basic number and arithmetic skills, and enhance robotics and computer vision. The skill in question is known as approximate number sense (ANS). A simple test of ANS involves looking at two groups of dots on a page and intuitively knowing which has more dots, even though you have not counted them. A toy version of this task is sketched below.
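
To make the dot-comparison task concrete, here is a minimal Python/NumPy sketch of the kind of stimulus an ANS test uses. The single-pixel dots, the image size, and the name dot_display are illustrative assumptions; the study itself used richer dot images and an unsupervised deep network, neither of which is reproduced here.

    import numpy as np

    def dot_display(n_dots, size=32, seed=None):
        # Scatter n_dots single-pixel "dots" at distinct random
        # positions on a size x size binary image.
        rng = np.random.default_rng(seed)
        img = np.zeros((size, size))
        idx = rng.choice(size * size, size=n_dots, replace=False)
        img.flat[idx] = 1.0
        return img

    # The ANS task: given two displays, judge which holds more dots
    # without counting; a model must answer from the pixels alone.
    a, b = dot_display(12, seed=0), dot_display(18, seed=1)
    print("correct answer:", "b" if b.sum() > a.sum() else "a")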

VS265: Neural Computation - RedwoodCenter

This is the Fall 2012 VS 265 Neural Computation course webpage.

Course description: This course provides an introduction to the theory of neural computation. The goal is to familiarize students with the major theoretical frameworks and models used in neuroscience and psychology, and to provide hands-on experience in using these models. Topics include neural network models, supervised and unsupervised learning rules, associative memory models, probabilistic/graphical models, sensorimotor loops, and models of neural coding in the brain. Instructor: Bruno Olshausen. Office: 570 Evans. Office hours: immediately following lecture.

Data-driven Visual Similarity for Cross-domain Image Matching

Presented at SIGGRAPH Asia, 2011. A data-driven technique to find visual similarity that does not depend on any particular image domain or feature representation. Visit the webpage to see some cool results and applications. Abstract: The goal of this work is to find visually similar images even if they appear quite different at the raw pixel level.

Intro & basic

Tech papers

Neuroscience

Information transmission with spiking Bayesian neurons

New J. Phys. 10 (2008) 055019. doi:10.1088/1367-2630/10/5/055019

Neurdon

28th German Conference on Artificial Intelligence, 2005, Koblenz

Simulators and tools

Neocognitron neural network

Introduction: Artificial neural network architectures such as backpropagation tend to have general applicability.

We can use a single network type in many different applications by changing the network's size, parameters, and training sets. In contrast, the developers of the neocognitron set out to tailor an architecture for a specific application: recognition of handwritten characters. Such a system has a great deal of practical application, although, judging from the introductions to some of their papers, Fukushima and his coworkers appear to be more interested in developing a model of the brain. To that end, their design was based on the seminal work performed by Hubel and Wiesel elucidating some of the functional architecture of the visual cortex.
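
To make the Hubel-and-Wiesel-inspired layout concrete, here is a minimal Python/NumPy sketch of one S-layer (feature-extracting cells) followed by one C-layer (pooling cells). It is a simplification under assumed details: thresholded correlation stands in for Fukushima's shunting-inhibition S-cell, max pooling stands in for his averaging C-cell, and the function names are hypothetical.

    import numpy as np

    def s_layer(image, kernels, threshold=0.5):
        # S-cells: match each feature kernel at every image position
        # (valid cross-correlation); responses below the selectivity
        # threshold are cut to zero.
        h, w = image.shape
        kh, kw = kernels[0].shape
        out = np.zeros((len(kernels), h - kh + 1, w - kw + 1))
        for k, kern in enumerate(kernels):
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    r = np.sum(image[i:i + kh, j:j + kw] * kern)
                    out[k, i, j] = max(r - threshold, 0.0)
        return out

    def c_layer(s_maps, pool=2):
        # C-cells: pool S-cell responses over a small neighbourhood,
        # giving tolerance to small shifts of the detected feature.
        k, h, w = s_maps.shape
        out = np.zeros((k, h // pool, w // pool))
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = s_maps[:, i * pool:(i + 1) * pool,
                               j * pool:(j + 1) * pool]
                out[:, i, j] = patch.max(axis=(1, 2))
        return out

    # Toy usage: two 3x3 oriented-edge detectors on a random "image".
    rng = np.random.default_rng(0)
    image = rng.random((8, 8))
    kernels = [np.eye(3), np.fliplr(np.eye(3))]
    print(c_layer(s_layer(image, kernels)).shape)  # -> (2, 3, 3)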

Competitive learning

Steven E. Lamberson, Jr. Funwork #3 Page

ECE 595C Funwork #3. Steven E. Lamberson, Jr. October 9, 2004.

Artificial Neural Networks/Competitive Learning

Competitive learning is a rule based on the idea that only one neuron from a given iteration in a given layer will fire at a time.

Weights are adjusted such that only one neuron in a layer, for instance the output layer, fires. Competitive learning is useful for classification of input patterns into a discrete set of output classes. A minimal winner-take-all sketch follows below.
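
Here is a minimal winner-take-all sketch in Python/NumPy, under the simple "closest weight vector wins" rule: only the winning unit's weights move toward the input. The unit count, learning rate, and function name are illustrative assumptions, not the wikibook's code.

    import numpy as np

    def competitive_learning(X, n_units=3, lr=0.1, epochs=20, seed=0):
        # For each input, the unit whose weight vector is nearest
        # "fires" (wins), and only that unit's weights are updated.
        rng = np.random.default_rng(seed)
        W = rng.random((n_units, X.shape[1]))
        for _ in range(epochs):
            for x in X:
                winner = np.argmin(np.linalg.norm(W - x, axis=1))
                W[winner] += lr * (x - W[winner])  # move winner toward x
        return W

    # Toy usage: three 2-D clusters; each unit drifts to one cluster
    # centre, so the winning unit classifies the input.
    rng = np.random.default_rng(1)
    centres = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    X = np.concatenate([c + 0.05 * rng.standard_normal((30, 2))
                        for c in centres])
    print(np.round(competitive_learning(X), 2))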

Bio-plausible WTA architecture

Books for Neural Networks, Encog, and Artificial Intelligence

Hopfield Neural Network Example

Now that you have been shown some of the basic concepts of neural networks, we will examine an actual Java example of a neural network. The example program for this chapter implements a simple Hopfield neural network that you can use to experiment with Hopfield neural networks. The example given in this chapter implements the entire neural network; more complex neural network examples will often use JOONE. A compact sketch of the same idea follows below.
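
The book's example is in Java; for a language-neutral picture of what such a program does, here is a minimal Hopfield sketch in Python/NumPy using standard Hebbian outer-product storage and thresholded synchronous recall. It is a generic illustration, not the book's code.

    import numpy as np

    def hopfield_train(patterns):
        # Hebbian one-shot storage: sum of outer products of the
        # stored +/-1 patterns, with the diagonal zeroed.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)
        return W

    def hopfield_recall(W, x, steps=10):
        # Synchronous updates: repeatedly threshold W @ x until the
        # state stops changing (a fixed point of the network).
        for _ in range(steps):
            nxt = np.where(W @ x >= 0, 1, -1)
            if np.array_equal(nxt, x):
                break
            x = nxt
        return x

    # Toy usage: store one 4-bit pattern, recall it from a corrupted
    # copy (one flipped bit).
    stored = np.array([[1, -1, 1, -1]])
    W = hopfield_train(stored)
    print(hopfield_recall(W, np.array([1, 1, 1, -1])))  # -> [ 1 -1  1 -1]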

Long Term Memory: Matching versus Retrieval, Episodic versus Semantic

HTM_CorticalLearningAlgorithms

Bayesian FAQ

Also, if the evidence and cross-validation are strongly in disagreement, I would predict that cross-validation would be the better method for predicting generalisation error.

However, this is not always the case. If we take Takeuchi and MacKay's interpolation model and increase the number of hyperparameters, the predictive properties do change as the number of hyperparameters increases. My intuition is that the predictive properties of the model change only in a small and unimportant way, but all the same, the question you ask is a valid one. How should the prior on alpha be set? My answer would be that it is up to the user to define a prior that corresponds to the user's prior beliefs. [I note in passing that for practical purposes, it would not be unwise to always use H_2, because the extra hyperparameters of H_2 do not significantly increase its ability to overfit the data.] How should the prior on alpha_1, alpha_2 be set? Some people (including many Bayesians!)
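
As a numerical companion to the evidence-versus-cross-validation point above, here is a Python/NumPy sketch that scores a model both by its log evidence and by leave-one-out error across a grid of weight-decay hyperparameters alpha (the role alpha plays in MacKay's evidence framework). It uses plain Bayesian linear regression rather than Takeuchi and MacKay's interpolation model, and the data, basis, noise precision beta, and function names are all illustrative assumptions.

    import numpy as np

    def log_evidence(Phi, y, alpha, beta):
        # Log marginal likelihood of Bayesian linear regression with
        # Gaussian prior precision alpha and noise precision beta
        # (the standard closed form).
        n, k = Phi.shape
        A = beta * Phi.T @ Phi + alpha * np.eye(k)
        w = beta * np.linalg.solve(A, Phi.T @ y)      # posterior mean
        E = beta / 2 * np.sum((y - Phi @ w) ** 2) + alpha / 2 * w @ w
        return (k / 2 * np.log(alpha) + n / 2 * np.log(beta) - E
                - 0.5 * np.linalg.slogdet(A)[1] - n / 2 * np.log(2 * np.pi))

    def loo_cv_error(Phi, y, alpha, beta):
        # Leave-one-out squared error of the regularised fit at the
        # same alpha: refit without point i, predict point i.
        err = 0.0
        for i in range(len(y)):
            m = np.ones(len(y), dtype=bool)
            m[i] = False
            A = beta * Phi[m].T @ Phi[m] + alpha * np.eye(Phi.shape[1])
            w = beta * np.linalg.solve(A, Phi[m].T @ y[m])
            err += (y[i] - Phi[i] @ w) ** 2
        return err / len(y)

    # Toy comparison: both criteria scanned over a grid of alphas; the
    # two rankings usually agree, but nothing forces them to.
    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 30)
    Phi = np.vander(x, 6)                    # degree-5 polynomial basis
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(30)
    for alpha in [1e-4, 1e-2, 1.0, 100.0]:
        print(alpha,
              round(log_evidence(Phi, y, alpha, beta=100.0), 1),
              round(loo_cv_error(Phi, y, alpha, beta=100.0), 4))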