Graded Patterns in Attractor Networks
Tristan Webb · Supervisor: Jianfeng Feng · Co-Supervisor: Edmund Rolls
  Complexity Science DTC, Computational Biology Research Group
  University of Warwick

   Summary
We demonstrate how noise can exist in a neural network as large as the brain. Graded firing patterns allow us to tune noise levels in the engineering of neural networks. The levels of noise in the brain may change with age and play a functional role in the retrieval of memory.

Attractor Neural Networks

Neural coding, and its relationship to behavior, is heavily researched in many areas of neuroscience. Attractor networks are a demonstration of how decisions, memories, and other cognitive representations can be encoded in a firing pattern (a set of active neurons) in a neural network.

An attractor network receives sensory information through connections known as synapses. The network is characterized by recurrent collateral synapses providing feedback to neurons. Recurrent synaptic activity causes the firing patterns in the network to persist even after the input is removed.

Learning occurs through the modification of synaptic strengths (w_{ij}, where i is the ith neuron and j is the jth synapse). An associative (Hebbian) learning rule can create the correct structure for the recall of information; this type of learning strengthens connections between neurons that are simultaneously active.
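To make the Hebbian rule concrete, here is a minimal sketch (illustrative only, not the poster's code; the binary patterns and the covariance form of the rule are assumptions) of building a weight matrix that strengthens connections between simultaneously active neurons:

```python
import numpy as np

def hebbian_weights(patterns):
    """Covariance-style Hebbian rule: connections between neurons that
    are simultaneously active (relative to the mean rate) are strengthened."""
    n_patterns, n_neurons = patterns.shape
    mean_rate = patterns.mean()                  # population mean rate
    w = np.zeros((n_neurons, n_neurons))
    for y in patterns:
        d = y - mean_rate
        w += np.outer(d, d)                      # strengthen co-active pairs
    np.fill_diagonal(w, 0.0)                     # no self-connections
    return w / n_patterns

# Two illustrative stored patterns over 8 neurons
patterns = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 1, 1, 1]], dtype=float)
W = hebbian_weights(patterns)
```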
The network dynamics can be thought of as gradient descent towards a local minimum in an energy landscape. When the network has reached this minimum, the learned pattern is recalled. The energy is defined as

E = -\frac{1}{2} \sum_{ij} w_{ij} (y_i - \langle y \rangle)(y_j - \langle y \rangle),

where y_i is the firing rate of the ith neuron and \langle y \rangle is the population's mean firing rate. Fixed points in attractor networks can correspond to a spontaneous state (where all neurons have a low firing rate), or to a persistent state in which a subset of neurons have a high firing rate.

[Figure: schematic of a single neuron in the network. External inputs arrive at the dendrites; recurrent firing y_j is fed back through recurrent collateral synapses with weights w_ij; the cell body produces the output firing y_i.]
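Continuing the sketch above (reusing W), the energy of a network state and a simple asynchronous recall loop might look as follows; the threshold update rule and step count are illustrative stand-ins for the spiking dynamics described later:

```python
import numpy as np

def energy(w, y):
    """Quadratic energy of state y under the formula above."""
    d = y - y.mean()
    return -0.5 * d @ w @ d

def recall(w, y, steps=200, rng=np.random.default_rng(0)):
    """Asynchronous threshold updates: flips tend to lower the energy,
    sliding the state into the nearest attractor (the recalled pattern)."""
    y = y.copy()
    for _ in range(steps):
        i = rng.integers(len(y))                 # update one random neuron
        y[i] = 1.0 if w[i] @ (y - y.mean()) > 0 else 0.0
    return y

cue = np.array([1, 1, 1, 0, 0, 0, 0, 1], dtype=float)  # corrupted pattern 1
print(energy(W, cue), energy(W, recall(W, cue)))        # energy decreases
```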
Network Dynamics

Neurons in the simulations use integrate-and-fire (IF) dynamics to describe their membrane potentials. We chose biologically realistic constants to obtain firing rates that are comparable to experimental measurements of neural activity. IF neurons integrate synaptic current into a membrane potential, and then fire when the membrane potential reaches a voltage threshold.
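A minimal sketch of such an integrate-and-fire neuron follows; the constants are illustrative placeholders, since the poster does not list its parameter values:

```python
import numpy as np

# Illustrative LIF constants (the poster's exact values are not given)
V_REST, V_THRESH, V_RESET = -70e-3, -50e-3, -55e-3   # volts
TAU_M, R_M, DT = 20e-3, 1e8, 1e-4                    # seconds, ohms, seconds

def simulate_lif(i_syn):
    """Integrate synaptic current into the membrane potential; fire and
    reset whenever the potential reaches the voltage threshold."""
    v, spike_times = V_REST, []
    for step, i_t in enumerate(i_syn):
        v += (-(v - V_REST) + R_M * i_t) * DT / TAU_M  # leaky integration
        if v >= V_THRESH:                              # threshold crossing
            spike_times.append(step * DT)
            v = V_RESET                                # reset after the spike
    return spike_times

spikes = simulate_lif(np.full(40000, 2.2e-10))         # constant 0.22 nA for 4 s
print(len(spikes) / 4.0, "Hz")                         # mean firing rate
```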
The synaptic current flowing into each neuron is described in terms of neurotransmitter components. The four families of receptors used are GABA, NMDA, AMPA_rec, and AMPA_ext. The neurotransmitters released from a presynaptic excitatory neuron act on the AMPA_rec and NMDA receptors, while inhibitory neurons transmit GABA currents. Each neuron also receives external input through a spike train modeled by a Poisson process with rate \lambda_i = 3.0 Hz.

Synaptic current flowing into a neuron is given by the following equation, where each term on the right-hand side is the current from one class of neurotransmitter:

I_{syn}(t) = I_{GABA}(t) + I_{NMDA}(t) + I_{AMPA,rec}(t) + I_{AMPA,ext}(t)
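A minimal sketch of the external Poisson drive, using a Bernoulli-per-bin approximation to a Poisson process (the bin width is an assumption of the sketch):

```python
import numpy as np

def poisson_spike_train(rate_hz=3.0, t_max=4.0, dt=1e-4,
                        rng=np.random.default_rng(1)):
    """External input: in each small bin a spike occurs with probability
    rate * dt, a standard discrete approximation to a Poisson process."""
    n_bins = int(t_max / dt)
    return rng.random(n_bins) < rate_hz * dt     # boolean spike array

train = poisson_spike_train()
print(train.sum() / 4.0, "spikes/s")             # should be close to 3.0 Hz
```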
Architecture

We structure the network by setting the strengths of the interactions between two decision pools, D1 and D2, to values that could occur through associative learning. Neurons in the same decision pool are connected to each other with a strong average weight w+, and are connected to the other excitatory pool with a weak average weight w-.

[Figure: network architecture, showing the inhibitory and excitatory neurons, with a blowup of the excitatory sub-populations: the decision pools D1 and D2, each recurrently connected with weight w+ and connected to the other with weight w-, alongside a non-specific pool.]
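A sketch of how such a block-structured recurrent weight matrix can be built. The value w+ = 2.1 appears on the poster; the pool size of 40 matches the neuron-number axis of the rate figures, and the w- value here is an assumption:

```python
import numpy as np

N_POOL, W_PLUS, W_MINUS = 40, 2.1, 0.9           # w- is an illustrative guess

def decision_pool_weights():
    """Recurrent weights for excitatory pools D1 and D2: strong (w+)
    within a pool, weak (w-) between the two pools."""
    n = 2 * N_POOL
    w = np.full((n, n), W_MINUS)                 # default: cross-pool weight
    w[:N_POOL, :N_POOL] = W_PLUS                 # within D1
    w[N_POOL:, N_POOL:] = W_PLUS                 # within D2
    np.fill_diagonal(w, 0.0)                     # no self-connections
    return w

W_pools = decision_pool_weights()
```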
Graded Patterns

The network was simulated numerically for a period of four seconds. We present the network with two successive periods of different external stimulus levels: first a base period, and later a cue period. During the base period the qualitative firing pattern in the network is sporadic and uneven. When cues are applied, the firing rate of the neurons in the winning decision pool is raised through positive feedback, while the other pool is suppressed through increased inhibition.

We imposed uniform and graded firing patterns on the network by selecting the distribution of the recurrent weights within each of the decision pools. To achieve a uniform firing pattern, all weights were set to the same value w+ = 2.1. Graded firing patterns were achieved by conforming the weights to a discrete exponential-like distribution with mean value w+ ≈ 2.1.

[Figure: mean neuron rates over the final second for the uniform (left) and graded (right) networks; firing rate (Hz, 0-60) against neuron number (0-40), for the winning and losing pools.]
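One way to realize such a graded weight distribution (the decay constant and exact shape are assumptions; the poster specifies only that the distribution is discrete, exponential-like, and has mean w+ ≈ 2.1):

```python
import numpy as np

def graded_pool_weights(n=40, mean_w=2.1, decay=0.1):
    """Assign each neuron in a pool a recurrent weight drawn from a
    discrete exponential-like ladder, rescaled to the target mean."""
    levels = np.exp(-decay * np.arange(n))       # exponentially decaying values
    return levels * (mean_w / levels.mean())     # enforce mean w+ ~ 2.1

w = graded_pool_weights()
print(w.mean(), w.max(), w.min())                # mean ~2.1 with a graded spread
```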
Results

Graded simulations were more likely to jump to a decision early. This could be caused by decreased stability of the spontaneous state. Changes in the reaction time distributions are statistically significant, and the decrease in reaction time is robust across different firing rates of the winning pool.

[Figure: reaction time (msec, 500-1100) against the winning pool's final-second firing rate (Hz, 26-34), for graded and uniform simulations.]

Variability in the system increases when graded patterns are introduced. Here we use the Fano factor to compute the trial-to-trial variability of membrane potentials across simulations. The Fano factor is calculated from the variance of the potential measured in a window of temporal length T, and is expressed as a function of time:

F(T) = \frac{\frac{1}{T_r} \sum_{n=1}^{T_r} \left[ V_{i,n}(T) - \langle V_i(T) \rangle \right]^2}{\langle V_i(T) \rangle},

where \langle V_i(T) \rangle is the average potential of neuron i in the time window and T_r is the number of trials.

[Figure: average Fano factor of membrane potential (0.000-0.005) against time (0.5-4.0 s), for graded and uniform simulations.]
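A sketch of that computation over window-averaged potentials (the array layout and the synthetic data are assumptions of the sketch):

```python
import numpy as np

def fano_factor(v):
    """Fano factor per time window. v[n, t] is the window-averaged
    potential of one neuron in trial n and window t; returns the
    across-trial variance divided by the across-trial mean."""
    mean_v = v.mean(axis=0)                      # <V_i(T)> across trials
    var_v = ((v - mean_v) ** 2).mean(axis=0)     # variance across T_r trials
    return var_v / mean_v

# Synthetic example: 20 trials, 8 windows of positive window-averaged potentials
rng = np.random.default_rng(3)
v = rng.normal(loc=15.0, scale=0.2, size=(20, 8))
print(fano_factor(v))                            # one Fano value per window
```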
Conclusion

• The transition time to an attractor state, or reaction time, is decreased when neurons fire in a more biologically realistic pattern.
• There is greater variability in the system's states over time when graded patterns are introduced.

We state that increased variance in the synaptic input to each neuron can be thought of as increased noise in the system. Conceptually, graded patterns are noisier because the recurrent synaptic input to neurons varies across the population.

As neural networks become larger, noise will invariably become lower. However, when we consider the situation in the brain, even though the network is large, there is still significant noise in the system. We present the hypothesis that this noise is due in part to graded firing patterns. Further work will explore this analytically.

Complexity DTC - University of Warwick - Coventry, UK | Mail: tristan.webb@warwick.ac.uk | WWW: http://warwick.ac.uk/go/tristanwebb
