
Grid/Place/HD Cells



Theta Oscillations and Their Role in Creating Place and ... 10 Questions for György Buzsáki. György Buzsáki is Board of Governors Professor at the Center for Molecular and Behavioral Neuroscience at Rutgers University.

10 Questions for György Buzsáki

His recent book is a clear explication of the study of network-level dynamics in the nervous system. Rather than list his numerous awards and accomplishments, I direct you to the work that has placed him at the forefront of the rapidly progressing field of systems neuroscience. We collaborated to bring you the following ten questions.

1. Modeling necessarily requires simplification. In your view, what features of biological neural networks are so important that they must be captured in any accurate artificial neural network model? For instance, do you believe it is enough to supplement firing rate-coding units with a parameter for "phase," or do computational models need to simulate biological neural networks at a lower level?

Models can be useful in at least two different ways, inferential and inductive.

NMDA receptors, place cells and hippocampal spatial memory. Buzsaki05. Spatiotemporal coupling between hippocampal acetylcholine ... A hybrid oscillatory interference/continuous attractor network model ...

Intrinsic Circuit Organization and Theta–Gamma Oscillation ... Pascale Quilichini, Anton Sirota, and György Buzsáki. Correspondence should be addressed to György Buzsáki, Center for Molecular and Behavioral Neuroscience, Rutgers University, 197 University Avenue, Newark, NJ 07102. buzsaki@axon.rutgers.edu.

Intrinsic Circuit Organization and Theta–Gamma Oscillation ...

Continuous attractor network.

Continuous attractor network

Hc-place-cells.

Jacobian matrix and determinant

Or, component-wise: J_ij = ∂f_i/∂x_j. This matrix, whose entries are functions of x, is also denoted by Df, Jf, and ∂(f_1, ..., f_m)/∂(x_1, ..., x_n). (Note that some literature defines the Jacobian as the transpose of the matrix given above.)

Topological group. Formal definition: a topological group G is a group that is also a topological space, such that the group operation (x, y) ↦ xy and the inversion map x ↦ x⁻¹ are both continuous. Although not part of this definition, many authors[2] require that the topology on G be Hausdorff.

Topological group
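To make the two continuity requirements in the definition above concrete, here is a standard worked example (ours, not from the excerpt) written as a short LaTeX sketch: the additive real line is a topological group.

```latex
% Definition data for a topological group (G, \cdot):
%   multiplication \mu  : G \times G \to G, \quad (x, y) \mapsto xy
%   inversion      \iota : G \to G,          \quad x \mapsto x^{-1}
% both of which must be continuous.
%
% Example: G = \mathbb{R} under addition with the standard topology.
\[
  \mu(x, y) = x + y, \qquad \iota(x) = -x.
\]
% Both maps are continuous, so (\mathbb{R}, +) is a topological group;
% it is also Hausdorff, satisfying the stronger convention noted above.
```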

Flip-flop (electronics). An animated interactive SR latch (R1, R2 = 1 kΩ; R3, R4 = 10 kΩ).

Flip-flop (electronics)

An SR latch, constructed from a pair of cross-coupled NOR gates. In electronics, a flip-flop or latch is a circuit that has two stable states and can be used to store state information; a minimal simulation of such a latch appears after the FPGA excerpt below. A flip-flop is a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. Field-programmable gate array. A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence "field-programmable".

Field-programmable gate array

The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). (Circuit diagrams were previously used to specify the configuration, as they were for ASICs, but this is increasingly rare.) Technical design: contemporary FPGAs have large resources of logic gates and RAM blocks to implement complex digital computations. Because FPGA designs employ very fast I/Os and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time.
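As promised above, a minimal Python sketch of the cross-coupled NOR SR latch from the flip-flop excerpt; the two stable states show up as Q settling to 1 after a set pulse and to 0 after a reset pulse. All names and the settling-iteration count are our own choices.

```python
def nor(a: int, b: int) -> int:
    """NOR gate: output is 1 only when both inputs are 0."""
    return int(not (a or b))

def sr_latch_step(s, r, q, q_bar):
    """One settling step of the cross-coupled NOR pair:
    Q = NOR(R, Q'), Q' = NOR(S, Q)."""
    return nor(r, q_bar), nor(s, q)

# Drive the latch through set, hold, reset, hold; iterate to settle.
q, q_bar = 0, 1
for s, r, label in [(1, 0, "set"), (0, 0, "hold"), (0, 1, "reset"), (0, 0, "hold")]:
    for _ in range(4):  # a few steps let the feedback loop stabilize
        q, q_bar = sr_latch_step(s, r, q, q_bar)
    print(f"{label}: S={s} R={r} -> Q={q}")  # set -> Q=1, reset -> Q=0
```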

Crystal structure. The (3-D) crystal structure of H2O ice Ih (c) consists of bases of H2O ice molecules (b) located on lattice points within the (2-D) hexagonal space lattice (a).

Crystal structure

The values for the H—O—H angle and O—H distance have come from Physics of Ice[1] with uncertainties of ±1.5° and ±0.005 Å, respectively. Shearing and Coaxial/Non Coaxial Strain. Deformation is produced in response to stress and depends upon: the type of stress applied, rock properties (minerals, discontinuities, etc.), temperature, and depth.

Shearing and Coaxial/Non Coaxial Strain

Gradient. In the above two images, the values of the function are represented in black and white, black representing higher values, and its corresponding gradient is represented by blue arrows.

Gradient

The Jacobian is the generalization of the gradient for vector-valued functions of several variables and differentiable maps between Euclidean spaces or, more generally, manifolds. A further generalization for a function between Banach spaces is the Fréchet derivative. Monotonic function. Figure 1. A monotonically increasing function. It is strictly increasing on the left and right while just non-decreasing in the middle.
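Returning to the gradient and Jacobian excerpts above: a small finite-difference sketch (the functions and names here are our own illustrations) shows the single-row Jacobian of a scalar function coinciding with its gradient, and a full m × n Jacobian for a vector-valued map.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian: J[i, j] ~ d f_i / d x_j at point x."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        step = np.zeros_like(x)
        step[j] = eps
        J[:, j] = (np.asarray(f(x + step)) - f0) / eps
    return J

# Scalar function: the single Jacobian row is the gradient.
g = lambda x: np.array([x[0]**2 + 3.0 * x[1]])
print(numerical_jacobian(g, [1.0, 2.0]))  # ~ [[2., 3.]]

# Vector-valued map R^2 -> R^2: full 2 x 2 Jacobian matrix.
f = lambda x: np.array([x[0] * x[1], np.sin(x[0])])
print(numerical_jacobian(f, [1.0, 2.0]))  # ~ [[2., 1.], [0.54, 0.]]
```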

Figure 2. Shearing-induced asymmetry in entorhinal grid cells. Combining multiple periodic grids at different spatial scales can result in non-periodic place fields.

Colocalize.

Grid cell. Trajectory of a rat through a square environment is shown in black. Red dots indicate locations at which a particular entorhinal grid cell fired. A grid cell is a type of neuron in the brains of many species that allows them to understand their position in space.[1][2][3][4][5][6] Grid cells derive their name from the fact that connecting the centers of their firing fields gives a triangular grid.
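The triangular firing grid described above can be illustrated with a common idealized rate-map model, a sum of three cosine gratings whose wave vectors are 60° apart. This is a sketch under our own parameter choices (spacing, environment size), not the code behind the figure.

```python
import numpy as np

spacing = 0.5                        # grid spacing in meters (assumed)
angles = np.deg2rad([0, 60, 120])    # three grating orientations, 60 deg apart
k = (4 * np.pi / (np.sqrt(3) * spacing)) * np.stack(
    [np.cos(angles), np.sin(angles)], axis=1)   # wave vectors, shape (3, 2)

xs = np.linspace(0.0, 2.0, 200)      # a 2 m x 2 m environment
X, Y = np.meshgrid(xs, xs)
pos = np.stack([X, Y], axis=-1)      # (200, 200, 2) array of positions

# Firing rate: rectified sum of three plane waves; the peaks sit on a
# triangular lattice, like the red dots in the trajectory figure.
rate = sum(np.cos(pos @ ki) for ki in k)
rate = np.maximum(rate, 0.0)         # firing rates cannot be negative
print(rate.shape, rate.max())
```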

Hippocampal Place Fields: Relevance to Learning and Memory. Buzsaki2010Neuron.

Discretization of continuous features

Typically data are discretized into partitions of K equal lengths/widths (equal intervals) or K% of the total data (equal frequencies).[1] Mechanisms for discretizing continuous data include Fayyad & Irani's MDL method,[2] which uses mutual information to recursively define the best bins, as well as CAIM, CACC, Ameva, and many others.[3] Many machine learning algorithms are known to produce better models when continuous attributes are discretized;[4] a sketch of the two simple binning schemes appears below.
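As referenced above, a minimal Python sketch of equal-width and equal-frequency binning; K = 4 and the skewed sample are our choices, and the MDL-style methods are not shown.

```python
import numpy as np

def equal_width_bins(x, k):
    """Assign each value to one of k bins of equal length."""
    edges = np.linspace(x.min(), x.max(), k + 1)[1:-1]  # k - 1 inner edges
    return np.digitize(x, edges)

def equal_frequency_bins(x, k):
    """Assign each value to one of k bins holding ~len(x)/k values each."""
    edges = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])
    return np.digitize(x, edges)

rng = np.random.default_rng(0)
x = rng.exponential(size=1000)       # a skewed continuous feature
print(np.bincount(equal_width_bins(x, 4)))       # counts are very uneven
print(np.bincount(equal_frequency_bins(x, 4)))   # ~250 values per bin
```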

Temporal difference learning - Scholarpedia

Temporal difference (TD) learning is an approach to learning how to predict a quantity that depends on future values of a given signal. The name TD derives from its use of changes, or differences, in predictions over successive time steps to drive the learning process. The prediction at any given time step is updated to bring it closer to the prediction of the same quantity at the next time step. It is a supervised learning process in which the training signal for a prediction is a future prediction. TD algorithms are often used in reinforcement learning to predict a measure of the total amount of reward expected over the future, but they can be used to predict other quantities as well. Continuous-time TD algorithms have also been developed.

The Problem. Suppose a system receives as input a time sequence of vectors (x_t, y_t), t = 0, 1, 2, ..., where each x_t is an arbitrary signal and y_t is a real number.
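A minimal tabular TD(0) sketch of the idea above, with each prediction updated toward the reward plus the next prediction; the random-walk task and all constants are our own choices.

```python
import random

# Classic 5-state random walk: non-terminal states 1..5, terminal exits
# at 0 (reward 0) and 6 (reward 1). V(s) should converge to s / 6.
alpha, gamma = 0.1, 1.0
V = {s: 0.5 for s in range(1, 6)}    # initial value estimates

for _ in range(10_000):
    s = 3                            # every walk starts in the middle
    while True:
        s_next = s + random.choice([-1, 1])
        r = 1.0 if s_next == 6 else 0.0
        v_next = V.get(s_next, 0.0)  # terminal states are worth 0
        V[s] += alpha * (r + gamma * v_next - V[s])   # TD(0) update
        if s_next in (0, 6):
            break
        s = s_next

print({s: round(v, 2) for s, v in sorted(V.items())})  # ~ {1: 0.17, ..., 5: 0.83}
```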

Inhibitory synaptic plasticity: spike timing-dependence and putative network function.