Teuvo Kohonen Teuvo Kohonen (born July 11, 1934) is a prominent Finnish academician (Dr. Eng.) and researcher. He is currently professor emeritus of the Academy of Finland. Prof. Kohonen has made many contributions to the field of artificial neural networks, including the Learning Vector Quantization algorithm, fundamental theories of distributed associative memory and optimal associative mappings, the learning subspace method and novel algorithms for symbol processing like redundant hash addressing. He spent most of his career at the Helsinki University of Technology. Prof. Kohonen was elected the First Vice President of the International Association for Pattern Recognition from 1982 to 1984, and acted as the first president of the European Neural Network Society from 1991 to 1992. For his scientific achievements he has received several awards, including the IEEE Neural Networks Council Pioneer Award (1991), the Technical Achievement Award of the IEEE Signal Processing Society (1995), and the Frank Rosenblatt Technical Field Award (2008).

Multilayer perceptron A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. MLP utilizes a supervised learning technique called backpropagation for training the network.[1][2] MLP is a modification of the standard linear perceptron and can distinguish data that are not linearly separable.[3] Theory Activation function If a multilayer perceptron has a linear activation function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then it is easily proved with linear algebra that any number of layers can be reduced to the standard two-layer input-output model (see perceptron). If instead a nonlinear activation is used, such as y(vᵢ) = tanh(vᵢ) or the logistic function y(vᵢ) = (1 + e^(−vᵢ))⁻¹, where yᵢ is the output of the ith node (neuron) and vᵢ is the weighted sum of its inputs, the network can model relationships that no single linear layer can. Layers
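The collapse of purely linear layers described above can be checked directly. A minimal NumPy sketch (illustrative weights and shapes, not from the article): two stacked linear layers equal one combined linear layer, while inserting a tanh nonlinearity breaks the equivalence.

```python
import numpy as np

# With linear activations, stacked layers collapse into one:
# W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)
x = rng.standard_normal(3)

two_layer = W2 @ (W1 @ x + b1) + b2
collapsed = (W2 @ W1) @ x + (W2 @ b1 + b2)
assert np.allclose(two_layer, collapsed)   # identical outputs

# A nonlinearity (here tanh) in the hidden layer breaks this equivalence,
# which is why MLPs use nonlinear activation functions.
nonlinear = W2 @ np.tanh(W1 @ x + b1) + b2
assert not np.allclose(two_layer, nonlinear)
```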

Restricted Boltzmann machine Diagram of a restricted Boltzmann machine with three visible units and four hidden units (no bias units). A restricted Boltzmann machine (RBM) is a generative stochastic neural network that can learn a probability distribution over its set of inputs. RBMs were initially invented under the name Harmonium by Paul Smolensky in 1986,[1] but only rose to prominence after Geoffrey Hinton and collaborators invented fast learning algorithms for them in the mid-2000s. RBMs have found applications in dimensionality reduction,[2] classification,[3] collaborative filtering, feature learning[4] and topic modelling.[5] They can be trained in either supervised or unsupervised ways, depending on the task. Restricted Boltzmann machines can also be used in deep learning networks. In particular, deep belief networks can be formed by "stacking" RBMs and optionally fine-tuning the resulting deep network with gradient descent and backpropagation.[7] Structure The standard RBM assigns an energy to every joint configuration of binary visible units v and hidden units h: E(v, h) = −Σᵢ aᵢvᵢ − Σⱼ bⱼhⱼ − Σᵢ Σⱼ vᵢwᵢⱼhⱼ, where aᵢ is the bias of visible unit i, bⱼ is the bias of hidden unit j, and wᵢⱼ is the weight between visible unit i and hidden unit j.
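Because the "restricted" bipartite structure has no connections within a layer, the hidden units are conditionally independent given the visible units (and vice versa), so each layer can be sampled in one step. A NumPy sketch of these conditional samples, with illustrative sizes matching the diagram above (three visible, four hidden); function names are assumptions, not from the article:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v, W, b_h, rng):
    """P(h_j = 1 | v) = sigmoid(b_j + sum_i v_i w_ij); the bipartite
    structure makes all hidden units conditionally independent."""
    p = sigmoid(v @ W + b_h)
    return (rng.random(p.shape) < p).astype(float), p

def sample_visible(h, W, b_v, rng):
    """Symmetric conditional for the visible layer given the hidden layer."""
    p = sigmoid(h @ W.T + b_v)
    return (rng.random(p.shape) < p).astype(float), p

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4)) * 0.1   # 3 visible x 4 hidden weights
b_v, b_h = np.zeros(3), np.zeros(4)     # biases a_i and b_j

v = np.array([1.0, 0.0, 1.0])           # a visible configuration
h, p_h = sample_hidden(v, W, b_h, rng)  # one Gibbs half-step up
v2, p_v = sample_visible(h, W, b_v, rng)  # and one half-step back down
```

Alternating these two samples is the Gibbs chain underlying the fast (contrastive-divergence-style) learning algorithms mentioned above.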

Scholarpedia Figure 1: The array of nodes in a two-dimensional SOM grid. The Self-Organizing Map (SOM), also commonly known as the Kohonen network (Kohonen 1982, Kohonen 2001), is a computational method for the visualization and analysis of high-dimensional data, especially experimentally acquired information. Introduction The Self-Organizing Map defines an ordered mapping, a kind of projection from a set of given data items onto a regular, usually two-dimensional grid. A model of the data is associated with each grid node; like a codebook vector in vector quantization, the model is then usually a certain weighted local average of the given data items in the data space. The SOM was originally developed for the visualization of distributions of metric vectors, such as ordered sets of measurement values or statistical attributes, but it can be shown that a SOM-type mapping can be defined for any data items for which mutual pairwise distances can be defined. History Figure 2: Left image: Models of acoustic spectra of Finnish phonemes, organized on an SOM.

Protein Secondary Structure Prediction with Neural Nets: Feed-Forward Networks Introduction to feed-forward nets Feed-forward nets are the most well-known and widely-used class of neural network. The popularity of feed-forward networks derives from the fact that they have been applied successfully to a wide range of information processing tasks in such diverse fields as speech recognition, financial prediction, image compression, medical diagnosis and protein structure prediction; new applications are being discovered all the time. (For a useful survey of practical applications for feed-forward networks, see [Lisboa, 1992].) In common with all neural networks, feed-forward networks are trained, rather than programmed, to carry out the chosen information processing tasks. The feed-forward architecture Feed-forward networks have a characteristic layered architecture, with each layer comprising one or more simple processing units called artificial neurons or nodes. Figure: diagram of a 2-layer perceptron. Training a feed-forward net
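As a concrete illustration of training a feed-forward net (the individual steps of the original list did not survive extraction), here is a minimal NumPy sketch of the standard recipe on the XOR problem: forward pass, output error, backpropagated weight updates. The network sizes, learning rate, and iteration count are illustrative choices, not values from the article.

```python
import numpy as np

# Train a 2-layer feed-forward net on XOR by backpropagation.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass through both layers
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # error at the output layer
    err = out - y
    # backpropagate the error and update the weights
    d2 = err * out * (1 - out)
    d1 = (d2 @ W2.T) * (1 - h ** 2)
    W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(axis=0)

print(np.round(out.ravel(), 2))
```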

Recurrent neural network A recurrent neural network (RNN) is a class of neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. This makes them applicable to tasks such as unsegmented connected handwriting recognition, where they have achieved the best known results.[1] Architectures Fully recurrent network This is the basic architecture developed in the 1980s: a network of neuron-like units, each with a directed connection to every other unit. For supervised learning in discrete time settings, training sequences of real-valued input vectors become sequences of activations of the input nodes, one input vector at a time. Hopfield network The Hopfield network is of historic interest although it is not a general RNN, as it is not designed to process sequences of patterns.
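The internal state described above can be made concrete with one recurrence step. A minimal NumPy sketch (illustrative names and sizes, not from the article): the hidden state at each discrete time step depends on the current input and, through the recurrent weight matrix, on the previous state, which is how the network carries memory across a sequence.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    """One time step: new state mixes the current input with the
    previous state fed back through the recurrent connections."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b)

rng = np.random.default_rng(0)
W_xh = rng.standard_normal((3, 5)) * 0.1   # input -> hidden weights
W_hh = rng.standard_normal((5, 5)) * 0.1   # hidden -> hidden (the directed cycle)
b = np.zeros(5)

h = np.zeros(5)                            # internal state starts empty
for x_t in rng.standard_normal((7, 3)):    # a sequence of 7 input vectors
    h = rnn_step(x_t, h, W_xh, W_hh, b)    # state accumulates sequence history
```

The same loop handles a sequence of any length, which is what makes RNNs applicable to arbitrary input sequences.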

Kohonen Networks Introduction In this tutorial you will learn about: Unsupervised Learning, Kohonen Networks, and Learning in Kohonen Networks. Unsupervised Learning In all the forms of learning we have met so far, the answer that the network is supposed to give for the training examples is known. In unsupervised learning, by contrast, no target answers are provided; the network must discover structure in the data on its own. Kohonen Networks The objective of a Kohonen network is to map input vectors (patterns) of arbitrary dimension N onto a discrete map with 1 or 2 dimensions. Learning in Kohonen Networks The learning process is roughly as follows:

initialise the weights for each output unit
loop until weight changes are negligible
    for each input pattern
        present the input pattern
        find the winning output unit
        find all units in the neighbourhood of the winner
        update the weight vectors for all those units
    reduce the size of neighbourhoods if required

The winning output unit is simply the unit with the weight vector that has the smallest Euclidean distance to the input pattern. Demonstration Exercises
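The loop above can be sketched directly in NumPy. This is a minimal illustrative implementation, with assumed parameter names and decay schedules (a linearly shrinking learning rate and Gaussian neighbourhood), not code from the tutorial:

```python
import numpy as np

def train_som(data, grid_w=10, grid_h=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Kohonen learning: find the winner by Euclidean distance, pull the
    winner's neighbourhood toward the input, shrink the neighbourhood."""
    rng = np.random.default_rng(seed)
    n_units, dim = grid_w * grid_h, data.shape[1]
    weights = rng.random((n_units, dim))           # initialise the weights
    # grid coordinates of each output unit, for neighbourhood distances
    coords = np.array([(i, j) for i in range(grid_h) for j in range(grid_w)], float)
    for t in range(epochs):                        # loop until changes are small
        lr = lr0 * (1 - t / epochs)                # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5    # shrinking neighbourhood size
        for x in rng.permutation(data):            # present each input pattern
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            grid_dist = np.linalg.norm(coords - coords[winner], axis=1)
            h = np.exp(-grid_dist**2 / (2 * sigma**2))  # neighbourhood of winner
            weights += lr * h[:, None] * (x - weights)  # update those units
    return weights

data = np.random.default_rng(1).random((200, 3))   # e.g. 3-D colour vectors
W = train_som(data)
```

After training, nearby units on the 2-D grid end up with similar weight vectors, which is the ordered mapping the tutorial describes.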

Feature learning Feature learning or representation learning[1] is a set of techniques in machine learning that learn a transformation of "raw" inputs to a representation that can be effectively exploited in a supervised learning task such as classification. Feature learning algorithms themselves may be either unsupervised or supervised, and include autoencoders,[2] dictionary learning, matrix factorization,[3] restricted Boltzmann machines[2] and various forms of clustering.[2][4][5] When feature learning can be performed in an unsupervised way, it enables a form of semisupervised learning where first, features are learned from an unlabeled dataset, which are then employed to improve performance in a supervised setting with labeled data.[6][7] Clustering as feature learning K-means clustering can be used for feature learning, by clustering an unlabeled set to produce k centroids, then using these centroids to produce k additional features for a subsequent supervised learning task. See also
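The k-means recipe above can be sketched in a few lines of NumPy. This is an illustrative version (function names and the distance-to-centroid feature choice are assumptions; other mappings, such as one-hot cluster membership, are also common):

```python
import numpy as np

def kmeans(X, k=4, iters=20, seed=0):
    """Plain k-means (Lloyd's algorithm), used here only to obtain centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest centroid, then recompute means
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def centroid_features(X, centroids):
    """k new features per sample: the distance to each learned centroid."""
    return np.linalg.norm(X[:, None] - centroids, axis=2)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))    # unlabeled "raw" inputs
C = kmeans(X, k=4)                   # unsupervised step: learn centroids
F = centroid_features(X, C)          # k extra features for a later classifier
```

The feature matrix F can then be concatenated with (or substituted for) the raw inputs in a subsequent supervised learner.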

Autoencoder An autoencoder, autoassociator or Diabolo network[1]:19 is an artificial neural network used for learning efficient codings.[2] The aim of an auto-encoder is to learn a compressed, distributed representation (encoding) for a set of data, typically for the purpose of dimensionality reduction. Overview Architecturally, the simplest form of the autoencoder is a feedforward, non-recurrent neural net that is very similar to the multilayer perceptron (MLP), with an input layer, an output layer and one or more hidden layers connecting them. The difference from the MLP is that in an autoencoder, the output layer has the same number of nodes as the input layer, and instead of being trained to predict some target value y given inputs x, an autoencoder is trained to reconstruct its own inputs x. That is, the training algorithm can be summarized as: for each input x, do a feed-forward pass to compute activations at all hidden layers and then at the output layer to obtain an output x̂; measure the deviation of x̂ from the input x; and backpropagate the error to update the weights. Training
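A minimal NumPy sketch of this training loop, assuming a single hidden layer, squared-error reconstruction loss, and batch gradient descent (sizes, learning rate, and iteration count are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 8))              # 50 samples, 8 input features

# Narrow hidden layer forces a compressed 3-dimensional encoding;
# the output layer has the same width (8) as the input layer.
W1 = rng.standard_normal((8, 3)) * 0.1; b1 = np.zeros(3)   # encoder
W2 = rng.standard_normal((3, 8)) * 0.1; b2 = np.zeros(8)   # decoder
lr = 0.05

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)         # feed-forward pass: hidden activations
    x_hat = h @ W2 + b2              # output x̂, the reconstruction
    err = x_hat - X                  # deviation of x̂ from the input x
    # backpropagate the reconstruction error and update the weights
    dW2 = h.T @ err;  db2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh;   db1 = dh.sum(axis=0)
    W2 -= lr * dW2 / len(X); b2 -= lr * db2 / len(X)
    W1 -= lr * dW1 / len(X); b1 -= lr * db1 / len(X)

mse = float(np.mean((x_hat - X) ** 2))
print(round(mse, 4))
```

The hidden activations h after training are the learned compressed encoding of each input.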

Related: Machine Learning