
Tech papers


WorldSciNet. N. R. Luque, Department of Computer Architecture and Technology, CITIC, University of Granada, Periodista Daniel Saucedo s/n, Granada, Spain; J. A. Garrido, Department of Computer Architecture and Technology, CITIC, University of Granada, Periodista Daniel Saucedo s/n, Granada, Spain; R. R. Carrillo, Department of Computer Architecture and Electronics, University of Almería, Ctra. This work evaluates the capability of a spiking cerebellar model embedded in different loop architectures (recurrent, forward, and forward&recurrent) to control a robotic arm (three degrees of freedom) using a biologically inspired approach.
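The abstract only names the loop architectures. As a rough orientation, the Python sketch below shows a schematic control cycle in which a cerebellar corrective signal is injected either after a feedback controller ("forward") or into its input ("recurrent"). The controller, gains, signatures, and toy dynamics are illustrative assumptions, not the architectures used in the paper.

```python
import numpy as np

def pd_controller(q_des, q, dq, kp=20.0, kd=2.0):
    """Crude feedback controller for a 3-DoF arm (illustrative gains)."""
    return kp * (q_des - q) - kd * dq

def control_step(correction, q_des, q, dq, architecture="forward"):
    """Compute the command torque for one control cycle.

    'forward'           : the cerebellar output is added to the controller torque.
    'recurrent'         : the cerebellar output adjusts the desired trajectory
                          fed to the controller.
    'forward&recurrent' : both at once.
    These wirings are schematic readings of the loop names in the abstract,
    not the paper's actual architectures.
    """
    if architecture == "forward":
        return pd_controller(q_des, q, dq) + correction
    if architecture == "recurrent":
        return pd_controller(q_des + correction, q, dq)
    return pd_controller(q_des + correction, q, dq) + correction

# Toy usage: 3 joints, a placeholder "learned" correction, Euler-integrated dynamics.
q, dq = np.zeros(3), np.zeros(3)
q_des = np.array([0.5, -0.3, 0.2])
dt = 0.01
for _ in range(200):
    correction = 0.1 * (q_des - q)        # stand-in for the cerebellar model output
    tau = control_step(correction, q_des, q, dq, architecture="forward")
    dq = dq + dt * tau                    # unit-inertia joints, purely illustrative
    q = q + dt * dq
print(np.round(q, 3))
```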

Keywords: Cerebellum; STDP; robot simulation; learning; biological control system; noise. Cited by (17): Xiuqing Wang, Zeng-Guang Hou, Feng Lv, Min Tan, Yongji Wang. (2014) Mobile robots' modular navigation controller using spiking neural networks. Silvia Tolu, Mauricio Vanegas, Niceto R.

Soft Winner-Take-All networks as models of cortical computation – Neuromorphic Cognitive Systems. Papers/shen_doctoralThesis.pdf.

Hasegawa Lab., Tokyo Institute of Technology. On this page, we introduce our unsupervised online incremental learning method, the Self-Organizing Incremental Neural Network (SOINN). We will release the 2nd-generation SOINN, which is designed based on Bayesian theory. What is SOINN? SOINN is an unsupervised online learning method, capable of incremental learning, based on the Growing Neural Gas (GNG) and the Self-Organizing Map (SOM).

For online data that is non-stationary and has a complex distribution, it can approximate the distribution of the input data and estimate the appropriate number of classes by forming a network in a self-organizing way. Published Papers. Videos: in the Hasegawa Lab channel on YouTube, many other videos of our research are available. SOINN Demo Applet. Go to top of page. Tokyo Institute of Technology, Hasegawa Lab.

Cnd.memphis.edu/ijcnn2009/tutorials/shen.pdf. Www.jatit.org/volumes/research-papers/Vol5No4/2Vol5No4.pdf. Sipi.usc.edu/~kosko/BAM.pdf. Pruning self-generating ensemble networks. Www.cse.cuhk.edu.hk/~king/PUB/01288245.pdf.
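The SOINN description above gives the core idea but no code: for each incoming sample, find the two nearest existing nodes; if the sample lies outside their similarity thresholds, insert it as a new node (a candidate new class), otherwise move the winner and its neighbors toward the sample and link the two winners. The snippet below is a minimal, simplified Python sketch of that scheme; the threshold rule, learning rates, and class structure are assumptions for illustration, not the published SOINN algorithm.

```python
import numpy as np

class IncrementalNet:
    """Minimal SOINN/GNG-style incremental learner (illustrative sketch only)."""

    def __init__(self, eps_winner=0.1, eps_neighbor=0.01):
        self.nodes = []        # list of prototype vectors
        self.edges = set()     # undirected edges as frozensets of node indices
        self.eps_winner = eps_winner
        self.eps_neighbor = eps_neighbor

    def _threshold(self, i):
        """Similarity threshold of node i: distance to its farthest neighbor,
        or to its nearest other node if it has no neighbors."""
        neigh = [j for e in self.edges if i in e for j in e if j != i]
        others = neigh if neigh else [j for j in range(len(self.nodes)) if j != i]
        if not others:
            return np.inf
        d = [np.linalg.norm(self.nodes[i] - self.nodes[j]) for j in others]
        return max(d) if neigh else min(d)

    def partial_fit(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.nodes) < 2:            # bootstrap with the first two samples
            self.nodes.append(x.copy())
            return
        dists = [np.linalg.norm(x - n) for n in self.nodes]
        w1, w2 = np.argsort(dists)[:2]     # two nearest nodes
        # Outside both similarity thresholds: insert x as a new node.
        if dists[w1] > self._threshold(w1) or dists[w2] > self._threshold(w2):
            self.nodes.append(x.copy())
            return
        # Otherwise connect the winners and adapt the winner and its neighborhood.
        self.edges.add(frozenset((int(w1), int(w2))))
        self.nodes[w1] += self.eps_winner * (x - self.nodes[w1])
        for e in self.edges:
            if w1 in e:
                j = next(k for k in e if k != w1)
                self.nodes[j] += self.eps_neighbor * (x - self.nodes[j])

net = IncrementalNet()
for x in np.random.randn(500, 2):          # stream of online samples
    net.partial_fit(x)
print(len(net.nodes), "nodes learned")
```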

Www.foretrade.com/Documents/kohonen.pdf. Www.cs.unb.ca/profs/ghorbani/ali/papers/leij-Intrusion-cnsr2004.pdf.

A cortical sparse distributed coding model linking mini- and macrocolumn-scale functionality | Frontiers in Neuroanatomy. Introduction: The columnar organization of neocortex at the minicolumnar (20–50 μm) and macrocolumnar (300–600 μm) scales has long been known (see Mountcastle, 1997; Horton and Adams, 2005 for reviews). Minicolumn-scale organization has been demonstrated on several anatomical bases (Lorente de No, 1938; DeFelipe et al., 1990; Peters and Sethares, 1996). There has been substantial debate as to whether this highly regular minicolumn-scale structure has some accompanying generic dynamics or functionality; see Horton and Adams (2005) for a review of the debate. However, thus far no such generic function for the minicolumn, i.e., one that would apply equally well to all cortical areas and species, has been determined.

A distributed representation of an item of information is one in which multiple units collectively represent that item, and, crucially, each of those units generally participates in the representations of other items as well.

Real-Time Recurrent Learning. In deriving a gradient-based update rule for recurrent networks, we now make network connectivity essentially unconstrained. We simply suppose that we have a set of input units, I = {x_k(t), 0 < k < m}, and a set of other units, U = {y_k(t), 0 < k < n}, which can be hidden or output units. To index an arbitrary unit in the network we can use the concatenated signal z, with z_k(t) = x_k(t) if k ∈ I and z_k(t) = y_k(t) if k ∈ U. Let W be the weight matrix with n rows and n + m columns, where w_{i,j} is the weight to unit i (which is in U) from unit j (which is in I or U).

Units compute their activations in the now familiar way, by first computing the weighted sum of their inputs, net_k(t) = Σ_{l ∈ U ∪ I} w_{k,l} z_l(t), where the only new element in the formula is the introduction of the temporal index t, and then applying an activation function, y_k(t+1) = f_k(net_k(t)). Usually, both hidden and output units will have non-linear activation functions. Some of the units in U are output units, for which a target d_k(t) is defined, giving the error e_k(t) = d_k(t) − y_k(t). We define our error function for a single time step as E(t) = ½ Σ_{k ∈ U} e_k(t)², and the error function we wish to minimize is the sum of this error over all past time steps of the network, E_total = Σ_t E(t).

Www.eie.polyu.edu.hk/~ensmall/pdf/PhysRevE66.pdf. Www.smartquant.com/references/NeuralNetworks/neural5.pdf.
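To make the recurrent network formulation above concrete, here is a minimal Python sketch of the forward pass and summed per-step error; the logistic activation, array shapes, and variable names are assumptions for illustration, and the full RTRL gradient recursion is not shown.

```python
import numpy as np

def step(W, y_prev, x_t, f=lambda a: 1.0 / (1.0 + np.exp(-a))):
    """One time step of the fully connected recurrent network.

    W      : (n, n + m) weight matrix, W[i, j] = weight to unit i from unit j
    y_prev : (n,) activations of the U units at time t
    x_t    : (m,) external inputs at time t
    Returns y(t+1) = f(net(t)), where net_k(t) = sum_l w_{k,l} z_l(t).
    """
    z_t = np.concatenate([y_prev, x_t])   # z stacks the U units and the input units
    return f(W @ z_t)

def run(W, xs, n, targets=None):
    """Run the network over an input sequence and accumulate
    E_total = sum_t 1/2 * sum_k e_k(t)^2 (only where targets are defined)."""
    y = np.zeros(n)
    total_error = 0.0
    for t, x_t in enumerate(xs):
        y = step(W, y, x_t)
        if targets is not None and targets[t] is not None:
            e = targets[t] - y
            total_error += 0.5 * float(e @ e)
    return y, total_error

# Tiny usage example: n = 3 recurrent/output units, m = 2 inputs, random weights.
rng = np.random.default_rng(0)
n, m = 3, 2
W = rng.normal(scale=0.5, size=(n, n + m))
xs = rng.normal(size=(10, m))
targets = [rng.normal(size=n) for _ in range(10)]
y_final, E_total = run(W, xs, n, targets)
print(y_final, E_total)
```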

CBCL-Paper-KBP-2007. A Tonotopic Artificial Neural Network Architecture. Nikko Ström (1997): "A Tonotopic Artificial Neural Network Architecture for Phoneme Probability Estimation," Proc. of the 1997 IEEE Workshop on Speech Recognition and Understanding, pp. 156-163, Santa Barbara, CA. Nikko Ström, Department of Speech, Music and Hearing, Centre for Speech Technology, KTH (Royal Institute of Technology), Stockholm, Sweden. Introduction: In the most widespread type of hybrid HMM/ANN ASR systems, an artificial neural network (ANN) is utilized to compute the observation likelihoods of a hidden Markov model (e.g., [1]). The choice to represent the input speech spectrum by a small set of features is an inheritance from the standard Continuous Density HMM (CDHMM). Although ANNs are very different from biological neural systems, human perception can be an important source of inspiration for innovations in ANN technology. Figure 1: Tonotopic sparse connection scheme. In a more complex connection scheme, the connectivity is a function of the two units to connect.
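The paper's own connection function is not reproduced here, but the general idea of a tonotopic (frequency-ordered) sparse connection scheme, where whether two units connect depends on the distance between their positions along the frequency axis, can be sketched as follows. The Gaussian fall-off, bandwidth parameter, and layer sizes are illustrative assumptions, not Ström's actual formulation.

```python
import numpy as np

def tonotopic_mask(n_in, n_out, bandwidth=0.1, rng=None):
    """Build a sparse 0/1 connection mask between two tonotopically ordered layers.

    Units are placed at evenly spaced positions in [0, 1] along the frequency axis,
    and the probability of a connection decays with the distance between the two
    units' positions (Gaussian fall-off; an illustrative choice).
    """
    rng = np.random.default_rng() if rng is None else rng
    pos_in = np.linspace(0.0, 1.0, n_in)
    pos_out = np.linspace(0.0, 1.0, n_out)
    dist = np.abs(pos_out[:, None] - pos_in[None, :])       # (n_out, n_in)
    p_connect = np.exp(-0.5 * (dist / bandwidth) ** 2)      # nearby units connect more often
    return (rng.random((n_out, n_in)) < p_connect).astype(float)

# Usage: mask a weight matrix so only tonotopically nearby units are connected.
mask = tonotopic_mask(n_in=128, n_out=64, bandwidth=0.05)
W = np.random.randn(64, 128) * mask
print(f"{mask.mean():.2%} of possible connections kept")
```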

Hierarchical Bayesian Inference. Research on an online self-organizing radial basis function neural network.