
Are You Nervous?

These are my working notes on applied neuromorphology: engineering using technology from the advances in neurobiology. John J. Barton.

Quotations appear in italics.

Why? ... because problems are easy or hard according to how algorithms fit on hardware and according to the representation of information...[John J. Hopfield]

 

Promising characteristics

Near quantal threshold noise tolerance in neural systems
We have shown that ion channel noise may be the source of the stimulus-dependent reliability and timing jitter characteristics which were observed in real neurons, both in vivo and in vitro [110, 127, 13, 139, 39]. The microscopic ion channel noise can affect the macroscopic behavior of neurons, since the initiation of a spike is the result of opening of a critical number of ion channels. Because this number is relatively small, fluctuations in the number of open channels may have a significant effect on the membrane voltage, and thus on the timing and occurrence of a spike. [Elad Schneidman, Thesis] Or to run this the other way: what structure would a brain have if it did not need (for, e.g., energy reasons) to work close to the noise limits? But then Schneidman says: For some stimuli, we find that noise would actually improve the information encoding. Thus, noise and information do not always play opposing roles in terms of neuronal function. See also The Impact of Synaptic Unreliability on the Information Transmitted by Spiking Neurons, Anthony Zador.
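The quantitative point — that spike timing is jittery because only a small number of channels sets the threshold crossing — can be seen in a toy simulation. This is my own sketch, not Schneidman's model: each closed channel opens independently with a fixed per-step probability, and a "spike" occurs when a critical fraction is open. The jitter in crossing time shrinks as the channel count grows.

```python
import random

def crossing_time_jitter(n_channels, p_open=0.02, threshold_frac=0.3,
                         trials=200, seed=1):
    """Std dev of the time step at which the open-channel count first
    crosses threshold. Toy model: every closed channel opens
    independently with probability p_open per step."""
    rng = random.Random(seed)
    threshold = threshold_frac * n_channels
    times = []
    for _ in range(trials):
        n_open, t = 0, 0
        while n_open < threshold:
            t += 1
            n_open += sum(1 for _ in range(n_channels - n_open)
                          if rng.random() < p_open)
        times.append(t)
    mean = sum(times) / trials
    return (sum((t - mean) ** 2 for t in times) / trials) ** 0.5
```

With 10 channels the crossing time wanders by several steps; with 1000 it is nearly deterministic — the "relatively small number" in the quote is exactly what makes real neurons jittery.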
"on-time computing"
Often there is no time to wait until a computation has converged, the results are needed instantly... [On the Computational Power of Circuits of Spiking Neurons, Wolfgang Maass, Thomas Natschläger, Henry Markram]
the capacity of the slow-loading, structure-based memory reservoir outstrips that contained in synaptic weight values by orders of magnitude.
Upon more careful examination, however, four types of experimental evidence weaken the link between the abstract synaptic weights of connectionist theory and the physical substrate for long-term learning and memory in the brain. First, a spate of recent experiments indicates that the efficacy of synaptic transmission at cortical synapses can undergo substantial fluctuations up (facilitation) or down (depression), or both, during brief trains of synaptic stimulation,... Second... the finding that individual synaptic contacts may on long timescales be scarcely more than binary-valued connections creates further distance between abstract synaptic weights—the memory containers of artificial neural learning systems—and the physical synapses of the brain... Third... the very notion of a "connection strength" between two neurons is complicated by the fact that the efficacy of a given synaptic contact—that is, its weight—is likely to vary significantly depending on the ongoing activity of other synapses within the dendritic compartment.... Fourth... given nonlinear dendritic physiology, changes in the addressing of synaptic contacts onto existing dendritic subunits, or formation of entirely new dendritic subunits, could constitute forms of plasticity that cannot be expressed in terms of simple weight changes from one neuron to the next. [Impact of Active Dendrites and Structural Plasticity on the Memory Capacity of Neural Tissue, Panayiota Poirazi and Bartlett W. Mel]
Fast
[Early Cortical Orientation Selectivity: How Fast Shunting Inhibition Decodes the Order of Spike Latencies. Delorme, A. (2003) Journal of Computational Neuroscience, 15, 357-365.] Human vision takes about 150 ms.
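The decoding scheme fits in a few lines. This is an illustrative reconstruction of Thorpe/Delorme-style rank-order coding with an assumed modulation factor, not the paper's fitted model: each afferent's contribution is shunted by a factor that falls with its arrival rank, so a unit whose weights match the expected arrival order responds maximally — no rate estimation, one spike per afferent.

```python
def rank_order_response(arrival_order, weights, mod=0.8):
    """Activation under rank-order coding: afferent a arriving at rank r
    contributes weights[a] * mod**r, so later spikes count for less
    (progressive shunting)."""
    return sum(weights[a] * mod ** r for r, a in enumerate(arrival_order))

# A unit tuned to arrival order (2, 0, 1): weight each afferent by the
# shunting it will receive when spikes arrive in the preferred order.
tuned = {a: 0.8 ** r for r, a in enumerate([2, 0, 1])}
```

`rank_order_response([2, 0, 1], tuned)` exceeds the response to any scrambled order, which is how selectivity emerges from latency order alone.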
Power Efficient
"...the disparity between the efficiency of computation in the nervous system and that in a computer is primarily attributable not to the individual device requirements, but rather to the way the devices are used in the system." [Meed Neuromorphic Electronic Systems, Proc IEEE, 1990] Carver Mead’s analysis of the power-efficiency of analogue computation (e.g. Mead, 1989 Analog VLSI and Neural Systems.)
Wafer-scalable
Because neural systems are robust to single element failure and because they are low power, analog VLSI CMOS neural wafers are feasible. Edge mounted, air convection cooled, 10 billion transistors, 10 trillion ops/sec. Predicted by Carver Mead in 1990.
 

Motivating applications.

Camera to retina model
What happens when CMOS transistors get smaller than optical pixels? Space for pre-processing on camera chips. A. El Gamal, D. Yang, and B. Fowler, Pixel Level Processing -- Why, What and How?, In Proceedings of the SPIE Electronic Imaging '99 conference, Volume 3650, January 1999.
Smart Sensors
Smart-sensors are information sensors, not transducers and signal processing elements.... Spatio-temporal image processing involves an extra dimension of information in addition to spatial ones, i.e. temporal information. It is known that temporal information, usually addressed in the context of motion detection, can provide extra cues about the contents, structure, and other high or low level information present in a scene. This belief is strongly supported by experiments on species with relatively primitive visual systems, but very capable of performing visual tasks. These creatures are insects. Insects heavily rely on motion detection in avoiding obstacles, landing, tracking, estimating range, and so on. [Vision Chips or Seeing Silicon, Alireza Moini] Great review of vision chips, 1997; see also Foveon.
Aural
The fact that most acoustic array processing techniques treat each time sample identically without considering how the reliability of directional cues varies over time may be a fundamental reason why the neural system is relatively more robust than many machine algorithms in the face of room reverberation.[NEURAL REPRESENTATION OF SOURCE DIRECTION IN REVERBERANT SPACE Barbara Shinn-Cunningham and Kosuke Kawakyu]
Cocktail Party Chats
One Microphone Blind Dereverberation based on Quasi-Periodicity of Speech Signals, Tomohiro Nakatani, Masato Miyoshi, Keisuke Kinoshita [ps.gz][pdf]; A classification-based cocktail-party processor, Nicoleta Roman, DeLiang Wang, Guy J. Brown. It is not known exactly how human beings are able to separate the different sound sources. Independent component analysis is able to do it, if there are at least as many microphones or 'ears' in the room as there are different simultaneous sound sources.
Another von Neumann bottleneck: sensor integration and conditioning
This paper presents the Malleable Signal Processor (MSP), a reconfigurable computing module being developed to simplify integration of a wide variety of sensors and actuators into an on-board spacecraft processing system. Interfacing to sensors and actuators requires a host of control signals with complex timing relationships. In addition, sensor data usually needs some signal conditioning, such as calibration, format conversion and feature extraction, before it is useable by the host system. The concurrent processing demanded by these activities often exceeds the computational capacity of a microcontroller, necessitating custom interface circuitry. The MSP offers an alternative approach, employing programmable logic devices with on-board memory to generate control signals and to condition the data in a reconfigurable module. The MSP’s versatility will be demonstrated in a flight system, where it is used to interface to two very different kinds of sensors. The MSP represents a first step toward the more general application of configurable computing in Space.[Malleable Signal Processor: A General-purpose Module for Sensor Integration James C. Lyke, Gregory W. Donohoe AFRL/VSSE]
molecular electronics may provide an effective approach to implementing the above neuron model.
The concept of synaptic noise is of relatively recent interest in the field of neurobiology. According to [7], noise in the nervous system might have a number of roles. It might constrain the coding accuracy in neural structures [8]; enhance signal detection under some circumstances [9]; or affect the firing patterns of multimodal sensory cells [10]. Moreover, [11] shows how noise contributes to the contrast invariance of orientation tuning in the visual cortex of cats, because noise allows the averaged membrane potential of a neuron to be translated into spiking in a smooth and graded manner across a wide range of potentials. With respect to recurrent neural networks, applying synaptic noise to the learning process has been shown to improve convergence time and generalization performance to longer sequences [12]. Consequently, the use of synaptic noise in the learning rule might not only be beneficial for the learning process, it may also provide insights into the possible functions of biological noise. [A neural network for temporal sequential information, Adriaan G. Tijsseling & Luc Berthouze]
Quantum Cellular Automata Architectures - QCAA
...compact implementation of complex interconnection networks in a plane by using QCA wires, which has not been possible in VLSI
Nanotechnology
KnowmTech LLC is an intellectual property holding company for ideas relating to nanotechnology-based neural network systems and devices
Large sensor networks
Ou, S., Karuppiah, D. R., Fagg, A. H., Riseman, E., and Grupen, R. (2004), An Augmented Virtual Reality Interface for Assistive Monitoring of Smart Spaces to appear in the Proceedings of the IEEE International Conference on Pervasive Computing and Communications.
Non-linear "human-centric" signal processing
Filter speech into phonemes, then re-emit the phonemes as speech.
Non-linear system identification
Compensation of Loudspeaker Nonlinearities;Nonlinear Echo Cancelation; Image Processing; Equalization of Nonlinear Channels and Compensation of Nonlinearities in Communications Systems; Applications in Biomedical Engineering. [A BIBLIOGRAPHY ON NONLINEAR SYSTEM IDENTIFICATION Georgios B. Giannakis, Erchin Serpedin]
Control Systems
At its simplest, a control system is a device in which a sensed quantity is used to modify the behavior of a system through computation and actuation....A modern view of control sees feedback as a tool for uncertainty management....a common feature is that system-level requirements far exceed the achievable reliability of individual components. [Control in an information rich world. R. M. Murray, K. J. Astrom, S. P. Boyd, R. W. Brockett, and G. Stein]
Proteinomics
Neural networks have been applied to many pattern classification problems. Here, I review applications to the problem of predicting protein structure from protein sequence. [Neural networks predict protein structure: hype or hit?]
Gesture Interfaces
Given the size of the work space, the system's accuracy is about 5% for the worst case data and about 3% for most of the frames. If the depth component is ignored, the average accuracy is around 1%.... The current speed of about 3 fps is not great real time, but with some tuning and a dual 500Mhz Pentium III processor, the system could probably achieve 15 fps. [Robust Finger Tracking with Multiple Cameras Cullen Jennings]
Face Recognition
Face Recognition using Spiking Neurons [Delorme, A., Thorpe, S. (2001) Face processing using one spike per neuron: resistance to image degradation. Neural Networks, 14(6-7), 795-804]
Neuroimmunology
...the notion that a primary function of the immune system may be to serve as a sensory organ for stimuli such as bacteria, viruses, and tumor cells that are not recognized by central and peripheral nervous systems...[Blalock, J.E. and E.M. Smith. 1985. The immune system: Our mobile brain? Immunology Today 6:115-117.]
And of course robotics
Robots, After All Hans Moravec

Concepts

Consciousness is prediction.
Hypothesis: consciousness is (just) a consequence of an advanced prediction system. A frog tracks a fly and strikes out to catch it where it will be when the frog's tongue is extended: the frog's brain predicts the fly's position. Therefore it must have an internal model of fly-flight. The model may be trivial, but it is exactly the advantage the frog enjoys over this unfortunate fly. An owl strikes out for a pond. Why? Because its internal model tells it to expect frog dinners nearby. Are they conscious? In some tiny way, perhaps. Human behaviors are not so very different.
Sense/act systems. Sense/act programming.
Fifty years ago, when computers were invented, the critical problem to solve required rapid addition. Today addition is, for practical purposes, instantaneous and free. I/O has become both the performance and conceptual bottleneck.
Neo-neural networks
A general purpose computational device cannot rely on a mysterious internal process, or even on a process difficult for skilled human developers to grasp. The neural network path isn't good enough; the Echo State or Liquid State Machine approaches, however interesting, can only be a waypoint. A general purpose neo-network machine must have clear operating principles.
vs Biology; vs artificial neural networks
Sense/act systems are designed. They may resemble natural biological neural systems but not reproduce them; they may resemble artificial neural networks only superficially. The aim is to create a realizable general purpose device that can be "programmed" in some sense of that word.
User-interfaces
on-time signal conditioning, e.g. speech recognition or gesture UI
Models of cognitive development as models for programming networks
 

 

Spikes, Temporal Codes, Spike Trains, Spike Waves

Spiking Models
An Efficient Method for Computing Synaptic Conductances Based on a Kinetic Model of Receptor Binding [A. Destexhe, Z.F. Mainen and T.J. Sejnowski]
Reviews
A Taxonomy for Spatiotemporal Connectionist Networks Revisited: The Unsupervised Case [Guilherme de A. Barreto, Aluizio F. R. Araujo, Stefan C. Kremer]
Algorithms on spiking networks
We develop and extend algorithms that allow Asynchronous Spiking Neural Networks (ASNN's) to compute in ways traditionally associated with artificial neural networks, like pattern recognition and unsupervised clustering. Additionally, we investigate how spiking neurons could be used for solving the binding-problem: we propose a framework for dynamic feature binding based on the properties of distributed coding with populations of spiking neurons, and we investigate the most likely nature of the synchrony measure. [Spiking Neural Networks Sander M. Bohte.]
Could it be done with Spiking?
Nonlinear processing in LGN neurons [Vincent Bonin, Valerio Mante and Matteo Carandini]
Spike Train Analysis
The techniques we will explore here ... avoid problems with binning in time and allow the flexibility of imposing synaptic interpretations on the train. ... how spike trains might be compared to each other for similarity. [ Applications in Temporal Coding analysis Bruce Land's Cornell Journal Club]
Two modes for neurons: rate driven and synchrony
Thus, a neuron is capable of switching between computational modes, from the integration of firing rate input received from a large number of neurons, to the detection of coincident spike arrivals....The point we want to make here, is that neural information processing systems rely heavily, on the computational features of single units. In the computational neuroscience field the ideas outlined above, are well known. However, in the field of artificial intelligence and robotic applications, almost no attention is given to the properties of the neural model. For instance, a very succinct comparison between types of neural models existing reveals the following. A continuous rate-coding neuron, that represents the computational unit of the classical neural networks, can compute a temporal linear summation of inputs. A simplified model of the spiking neuron can in addition detect coincidence, can do multiplexing, and can compute in a temporal domain using delay codes (Maass, 1999). A compartmental model, which includes the dendritic tree, can perform spatial summation, nonlinear operations (division), can increase its discrimination and memory power up to thousand times that of the linear neuron, and can detect movement direction and binaural stimuli (Koch, 1999; Poirazi and Mel, 2000).[Motor control of direction: a biologically inspired neural network model]
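The mode switch falls out of the simplest leaky integrate-and-fire voltage trace (a generic textbook LIF sketch, not the cited paper's model; parameters are assumptions): the same five input spikes drive a much higher peak voltage when synchronous than when spread over time, because the leak erases temporally dispersed input.

```python
import math

def lif_peak(spike_times, w=1.0, tau=10.0, dt=0.1, t_end=60.0):
    """Peak membrane voltage of a leaky integrator: v decays with time
    constant tau (ms) and jumps by w at each input spike."""
    spikes = sorted(spike_times)
    v, peak, i = 0.0, 0.0, 0
    for step in range(int(t_end / dt)):
        t = step * dt
        v *= math.exp(-dt / tau)        # passive leak
        while i < len(spikes) and spikes[i] <= t:
            v += w                      # deliver input spike
            i += 1
        peak = max(peak, v)
    return peak
```

Five coincident spikes at t = 10 ms reach a peak of 5.0; the same five spikes 10 ms apart (one membrane time constant) never exceed about 1.6 — coincidence detection versus rate integration in one neuron.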
Other Time Codes
Thorpe, S., Delorme, A., VanRullen, R. (2001) Spike based strategies for rapid processing. Neural Networks, 14(6-7), 715-726.
 
Event-driven simulations
We focus in this paper on its application to self-organizing maps. Standard event-driven approaches to simulation can significantly reduce computational time, but only when network activity is relatively low. In this article, we propose several strategies to manage efficiently, large numbers of spiking events. The simulation scales well with the increase of the neural activity and is more biologically plausible than competing methods. [Efficient event-driven simulation of spiking neural networks IOANA MARIAN RONAN G. REILLY DANA MACKEY]
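The core idea — advance the simulation from spike to spike instead of in fixed time steps, so quiet neurons cost nothing — fits in a page. This is a minimal sketch of the general event-driven approach, not Marian et al.'s algorithm; the neuron model (no leak, fixed weight and delay) is an arbitrary assumption for illustration.

```python
import heapq

def simulate(initial_spikes, targets, weight=0.6, delay=1.0,
             threshold=1.0, t_max=20.0):
    """Event-driven spiking-network sketch: a priority queue of spike
    arrivals is the only clock. initial_spikes is a list of (time, neuron)
    external spikes; targets maps neuron -> downstream neurons.
    Returns the (time, neuron) spikes emitted."""
    potential = {}
    queue = list(initial_spikes)
    heapq.heapify(queue)
    fired = []
    while queue:
        t, n = heapq.heappop(queue)
        if t > t_max:
            break
        potential[n] = potential.get(n, 0.0) + weight
        if potential[n] >= threshold:
            potential[n] = 0.0          # reset after firing
            fired.append((t, n))
            for m in targets.get(n, []):
                heapq.heappush(queue, (t + delay, m))
    return fired
```

With two input neurons converging on a third, two near-coincident external spikes per input make both fire, and their delayed spikes in turn drive the downstream neuron over threshold; total work is proportional to the number of spikes, not to simulated time.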
Shunting of synaptic potentials by action potentials
By defining multiple spatial compartments and temporal windows for summation of synaptic inputs, shunting of synaptic potentials by APs represents a powerful computational mechanism influencing the way neurons integrate the many thousands of synaptic inputs they receive. [Differential shunting of EPSPs by action potentials. Häusser M, Major G, Stuart GJ.]
 
 
 
Advantages of analog spike-signals.
Many engineered systems use spike-based communication when direct analog communication is not appropriate given power constraints, pad limitations, and channel-sharing requirements. [SYNCHRONY DETECTION FOR SPIKE-MEDIATED COMPUTATION, Charles S. Wilson, Paul E. Hasler, and Stephen P. DeWeerth] See also for propagating spikes.
 

Learning

Memory vs Learning?
 
 
Three branches:

[Isotropic Sequence Order Learning in a Closed Loop Behavioural System, Bernd Porr, Florentin Wörgötter] Their time dependence looks like STDP; Rug Warrior robot example. Analytical solution of spike-timing dependent plasticity based on synaptic biophysics [Bernd Porr, Ausra Saudargiene and Florentin Wörgötter] These two connect the spiking version of Hebbian learning to control theory in some ways. Another crossover between sensor/actuator and computation: here they show actuation learning.

Hebbian Learning
spike timing-dependent synaptic plasticity [How synapses in the auditory system wax and wane: Theoretical perspectives, A. N. Burkitt and J. L. van Hemmen] Barn owl binaural.
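The standard exponential STDP window — the textbook form with assumed parameters, not the Burkitt & van Hemmen biophysical derivation — in code:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates, post-before-pre depresses,
    and both effects fade exponentially with the interval."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

The slight excess of depression over potentiation (a_minus > a_plus) is the usual choice to keep total weight from running away.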
Eligibility Traces
An essential feature of the model's learning rule is its use of synaptically-local eligibility traces for learning with delayed training information. Eligibility traces are key components of many reinforcement learning systems (e.g., Sutton & Barto, 1998) as well as models of classical conditioning (Sutton and Barto 1981, 1990; Klopf 1988), where they address the sensitivity of conditioning to the time interval between the conditioned and the unconditioned stimuli and the anticipatory nature of the conditioned response. Eligibility traces play the same role in this model, whose learning mechanism is much like classical conditioning, with corrective movements playing the role of unconditioned responses. [A Cerebellar Model of Timing and Prediction in the Control of Reaching, Andrew G. Barto, Andrew H. Fagg, Nathan Sitkoff, and James C. Houk] Lots of other interesting stuff in this paper.
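The mechanism reduces to a decaying per-synapse trace read out when the delayed reward arrives. This is an illustrative Sutton & Barto-style sketch with assumed decay and learning-rate values, not the cerebellar model's exact rule:

```python
def eligibility_updates(activity, reward_step, decay=0.9, alpha=0.1):
    """Weight changes when a delayed reward arrives.

    activity: list over time steps of per-synapse activations (0 or 1).
    Each synapse keeps a trace that decays every step and is bumped by
    its own activity; at the reward step the weight change is
    alpha * trace, so recently active synapses are credited most."""
    traces = [0.0] * len(activity[0])
    for step, acts in enumerate(activity):
        traces = [decay * e + a for e, a in zip(traces, acts)]
        if step == reward_step:
            break
    return [alpha * e for e in traces]
```

With synapse 0 active at step 0, synapse 1 at step 4, and reward at step 5, both receive credit, but the recently active synapse receives more — exactly the interval sensitivity of classical conditioning the quote describes.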
Sensor/motor coupling
We show that it is possible to make an algorithm that, by analyzing the law that links its motor outputs to its sensory inputs, discovers information about the structure of the world regardless of the devices constituting the body it is linked to.[Perception of the structure of the physical world using unknown multimodal sensors and effectors David Philipona, J. Kevin O'Regan, Jean-Pierre Nadal, Olivier J.-M. D. Coenen] See also Bernd Porr.
 

 

Neurobiology

Summary of progress in techniques for neurobiology.
Perhaps the deepest mysteries facing the natural sciences concern the higher functions of the central nervous system. Understanding how the brain gives rise to mental experiences looms as one of the central challenges for science in the new millennium. [Nichols, MJ and WT Newsome (1999). The neurobiology of cognition. Nature 402(SUPP):C35-C38]. Nice side-by-side of "Localization of mental functions", according to both phrenologists and neurobiologists! Claims that technology for experimental measurement is the key problem, not conceptual models (of course all experimentalists say that). Cool new optical methods mentioned. Frontiers are consciousness (they seem unconvinced of potential progress) and understanding decisions (more promising).
3 Levels of Biological Organization: Phylogeny(evolution),Ontogeny(cell differentiation), Epigenesis (learning)
If one considers life on Earth since its very beginning, three levels of organization can be distinguished [9, 13, 14]: Phylogeny: ... the temporal evolution of the genetic program, the hallmark of which is the evolution of species, or phylogeny... The phylogenetic mechanisms are fundamentally nondeterministic, with the mutation and recombination rate providing ... for the survival of living species, for their continuous adaptation to a changing environment, and for the appearance of new species.... Ontogeny is ... the developmental process of a multicellular organism. This process is essentially deterministic: an error in a single base within the genome can provoke an ontogenetic sequence which results in notable, possibly lethal, malformations. Epigenesis: The ontogenetic program is limited in the amount of information that can be stored, thereby rendering the complete specification of the organism impossible. A well-known example is that of the human brain with some 10^10 neurons and 10^14 connections, far too large a number to be completely specified in the four-character genome of length approximately 3x10^9. .... The epigenetic processes can be loosely grouped under the heading of learning systems. [BioSystems, 68(2-3):235-244, February-March, 2003. Bio-Inspired Computing Tissues: Towards Machines that Evolve, Grow, and Learn, C. Teuscher, D. Mange, A. Stauffer, and G. Tempesti]
Rate-coding can't work
... firing frequencies of 10 Hz and above are well within the range observed in vivo. Thus, it is worthwhile to consider some functional implications of the rather surprising independence of the average response amplitude on stimulation frequency. One consequence is that at such high frequencies, the average synaptic output no longer contains information about the input frequency. In this regime, synapses cannot simply be transmitting information about the firing rate. ...depression dominates the synaptic response. A particularly intriguing finding is that LTP may double the response to the first impulse in a rapid train, while leaving the response to subsequent impulses almost unaffected. [Dynamic Synapses in the Cortex (Minireview), Anthony M. Zador and Lynn E. Dobrunz]
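The rate-independence Zador & Dobrunz describe is exactly what a depressing-synapse model predicts. A minimal Tsodyks-Markram-style sketch (my parameter choices are arbitrary assumptions, not fitted values):

```python
import math

def per_spike_response(rate, use=0.5, tau_rec=0.5):
    """Steady-state per-spike response of a depressing synapse driven at
    `rate` Hz: each spike consumes fraction `use` of the available
    resources, which recover toward 1 with time constant tau_rec (s)
    between spikes."""
    r = 1.0                       # resources available just before a spike
    isi = 1.0 / rate
    for _ in range(500):          # iterate the per-spike map to its fixed point
        r = 1.0 + (r * (1.0 - use) - 1.0) * math.exp(-isi / tau_rec)
    return use * r

# Average output = rate * per-spike response; at high rates the per-spike
# response falls roughly as 1/rate, so the product saturates near
# 1/tau_rec and stops encoding the input frequency.
```

Driving the synapse eight times faster (10 Hz vs 80 Hz) raises the average output by well under 50%, which is the "surprising independence" in the quote.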
Synaptic Dynamics favors inhibition (stability)
The central finding of this study is that, during repeated stimulation, inhibitory synapses on to layer 2/3 pyramidal neurons exhibit significantly less synaptic depression than do excitatory synapses. This difference emerges rapidly, often within the first three to five stimuli in a train, and depends on frequency, becoming more prominent at higher frequencies. ...The direction of this modification favors stability; increased activity shifts the balance in favor of inhibition. If instead, increased activity had shifted the balance in favor of excitation, repeated stimulation would be expected to evoke larger and larger responses, eventually leading to epileptiform discharge.[Differential Depression at Excitatory and Inhibitory Synapses in Visual Cortex, Juan A. Varela, Sen Song, Gina G. Turrigiano, and Sacha B. Nelson]

 

Artificial Intelligence vs Natural Intelligence

Artificial Intelligence, a branch of computer science; Natural Intelligence, a talent you exercise in reading this. Is AI like NI? Is NI like AI? Computer scientists have come to think "no" on both scores; neurobiologists don't care. But they do ask "what is the computational model underlying natural intelligence?"

How work on neural networks was impacted by AI: The Catastrophe
In the interim between the recognition of this problem and its solution, a great disaster befell the development of "tolerant" computers. In the mid-sixties, the chief proponents of Artificial Intelligence, alert to the logical gap in the then current neural models, are supposed, successfully, to have made the case before government that further research in the area of neural networks was premature. In 1969, Marvin Minsky and his associate S. A. Papert published a monograph which proved that "elementary" (as they are now called) Perceptrons or Adalines could not perform two crucial logical operations [exclusive OR and not (exclusive OR)]. He conjectured then and maintains now (Johnson 1987, 52) that no multi-layering of McCulloch-Pitts neurons within the Perceptron or Adaline could solve the problem. For all practical purposes, funding in the United States came to a dead stop for twenty years, and research slowed to a crawl. [Neural Network Computing and Natural Language Processing, Frank L. Borchardt, Duke University] Well, this sounds a bit overstated from other accounts I have read. It seems like Minsky and Papert raised reasonable objections. More modern criticism of early neural networks--that they are not close to biological networks--makes an even more compelling case that work with Perceptrons is a dead end.
Renaming Artificial Intelligence to "Algoristics" to match its true role in computer science.
Whether the computing profession is ill-informed about natural intelligence or not, there are good arguments for dropping the term artificial intelligence ... algoristics, would make a highly suitable replacement...Placing this renamed field alongside statistics and logistics, as a branch of mathematics, would benefit the computing profession greatly. Given that algoristic techniques are highly mathematical and require a much greater degree of mathematical knowledge than ordinary programming, they should be taught and studied primarily by mathematicians. ["Artificial Intelligence: Arrogance or Ignorance?" Neville Holmes, University of Tasmania]
A Famous Neurobiologist on AI
However, Searle and his target authors seem to be unaware of a subtle circularity in their appeal to empirical evidence... About 50 years ago, with great developments in electronics and computer science, there began an invasion of researchers and ideas from the physical, engineering, and cognitive sciences, which grew to a flood that transformed neurobiology. Experimental designs and the interpretations of data were reformulated in terms of information, memory storage, analog comparators, networks, filters, integrators, logical gates, etc. In other words, to the extent that neurobiology is identified with computational neuroscience, it becomes indistinguishable from artificial intelligence. [Commentary on "The Mystery of Consciousness", Walter J Freeman and Hubert Dreyfus]
Super Intelligent Machines
However, a critical event in the progress of science is imminent. This is the physical explanation of consciousness and demonstration by building a conscious machine. We will know it is conscious based on our emotional connection with it. Shortly after that, we will build machines much more intelligent than humans, because intelligent machines will help with their own science and engineering. And the knowledge gap that has been shrinking over the centuries will start to grow. Not in the sense that scientific knowledge will shrink, but in the sense that people will have less understanding of their world because of their intimate relationship with a mind beyond their comprehension. We will understand the machine's mind about as much as our pets understand ours. We will fill this knowledge gap with religion, giving the intelligent machine the role of god. [Hibbard, W. Super-intelligent machines. Computer Graphics 35(1), 11-13. 2001.] Woof.
Frankenstein?
This simple fact has consequences on a completely different scale than any other event in human history, combining great danger with great opportunity. The danger is not, as commonly depicted in science fiction books and movies, that machines will take control away from humans. It is that machines will enable a small group of humans to take control away from democratic government. Despite our prejudices, humans all have about the same intelligence. The highest IQ in history is only twice the average, whereas the largest trucks, buildings, ships and computers are thousands of times their averages. When we start constructing artificial minds, the rough equality of intelligence will end. Unless we are very careful, the long term trend toward human social equality will end with it. Ensuring that intelligent machines serve general human interests rather than the interests of a few will be the great political struggle of the next century. [Goodbye Bill Hibbard University of Wisconsin - Madison] Hibbard sounds a bit wacky, but somehow sensible.

Models of Cognitive Development

Parallel Distributed Processing
A network where nodes represent cognitive elements ("isa", "robin", "bird") but the processing resembles neural networks. [The parallel distributed processing approach to semantic cognition.]
UCI Repository Of Machine Learning Databases
This is a repository of databases, domain theories and data generators that are used by the machine learning community for the empirical analysis of machine learning algorithms.
 
SPA
The Supervised Growing Neural Gas Algorithm
The SGNG algorithm constructively builds the hidden layer of a radial basis function (RBF) network. Such an RBF network is different from the more common backpropagation networks in that the hidden units do not have a sigmoid but a Gaussian, ‘bellshaped’ activation function (see figure 1). This allows each hidden unit to be active only for inputs within a certain range (as opposed to being active for all inputs above a certain threshold, as with sigmoidal units) and it can thus be viewed as a receptive field for a region of the input space. ...The constructivist network contradicts the view that connectionist learning implies a homogeneous architecture, which is often held for connectionist past tense models. Although learning was based, as in conventional fixed-architecture networks, on the complex interactions of many simple units and on the gradual adjustment of connection weights, the constructivist network developed a “pseudo-modular” architecture where more space was given to the harder, irregular cases, and where a memory in the form of hidden unit receptive fields developed in addition to the direct input-output connections.[A Constructivist Neural Network Learns the Past Tense of English Verbs Gert Westermann In Proceedings of GALA 1997]
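The receptive-field point is easy to see side by side. These are generic unit definitions (my illustration, not Westermann's network): the Gaussian unit is active only near its centre, while the sigmoidal unit stays saturated for every input above its threshold.

```python
import math

def gaussian_unit(x, center, width=1.0):
    """RBF hidden unit: responds only near its centre (a receptive field
    covering a region of input space)."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def sigmoid_unit(x, weight=1.0, bias=0.0):
    """Sigmoidal unit: responds to every input above its threshold."""
    return 1.0 / (1.0 + math.exp(-(weight * x + bias)))
```

`gaussian_unit` is near 1 at its centre and essentially 0 a few widths away, so a constructive algorithm like SGNG can add units where the data needs them without disturbing responses elsewhere; a sigmoid unit has no such locality.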

The Walter Freeman Wing: Chaos in my Brain.

As far as I have gathered so far, Freeman (UC Berkeley) analyzed olfaction (smell) organs in detail from neurons through cortex. From an incredibly sound base of work he has jumped ahead to try to show that chaos theory explains abstraction, generalization and similar higher brain functions.

The Neurodynamics of Intentionality in Animal Brains May Provide a Basis for Constructing Devices that are Capable of Intelligent Behavior
What is new is the development of nonlinear mesoscopic brain dynamics, by which to apply chaos theory in order to understand and simulate the construction of meaningful patterns of endogenous activity that implement the perceptual process of observation.... We believe that the basins of attraction in each of the sensory cortices are shaped by limbic input to sensitize them for receiving and processing the desired class of stimuli in every modality, whatever may be the goal at the moment of choice. [Walter J Freeman, Essay prepared for a "Workshop on Metrics for Intelligence" in a program for "Development of Criteria for Machine Intelligence" at the National Institute of Standards and Technology (NIST), Gaithersburg MD, 14-16 August 2000]. How intention affects perception. Promising bits; a bunch of differential equations in the rest.
Mesoscopic brain activity
Three levels of brain function are hypothesized to mediate the transition from sensation to perception. Microscopic activity expressed by action potentials is sensory. Macroscopic activity of the whole forebrain expressed by behavior is perceptual. Mesoscopic activity bridges the gap by the formation of wave packets. The Wave Packet: An Action Potential for the 21st Century by Walter J. Freeman, Journal of Integrative Neuroscience, 2(1), pp. 3-30. (103 refs!)
Walter Freeman
The model performs with outstanding efficacy the basic functions of sensory cortex: abstraction, generalization, and classification. "How Brains Make Up Their Minds." [W. J. Freeman (2001) New York: Columbia University Press]
Chaotic neurodynamics
http://www.msci.memphis.edu/~harterd/publications.html
Alternative to Freeman on his core work, olfaction.
Walter Freeman’s many seminal contributions to the development of a dynamical perspective on olfaction (Freeman 1978, 1992, 2000; Freeman & Skarda 1985) must be recognized here. Our approach and interpretations, however, differ from Freeman’s in at least three important ways. The first lies in the nature of the data. While we recognize the importance of macro- or mesoscopic signals (e.g. EEGs and field potentials) as experimental tools, we believe that they are not of the appropriate scale for analysis. The spatiotemporal phenomena that cause recognizable features in field potentials (e.g. local synchronization and nonstationary behavior) are indeed functionally relevant; but field potentials are only “shadows” of underlying distributed but precise neural-activity patterns, which need to be deciphered. The second difference lies in our theoretical model of population behavior. “Winnerless competition,” introduced later, depends to a significant extent on a neuron-resolution mechanistic understanding of odor signal processing. The third difference is that our experimental approach, using small olfactory systems (insects and fish), tries to separate stimulus-evoked activity from centrifugal “higher” influences providing contextual information. Our goal, illustrated here, is to understand the “unsupervised” sensory formatting of odor representations by early olfactory circuits first, although we agree that expectation influences odor-evoked neural activity (Pager 1983, Kay & Freeman 1998, Kay & Laurent 1999). [Laurent G, Stopfer M, Friedrich R, Rabinovich M, Volkovskii A and Abarbanel H, "Odor processing as an active dynamical process: experiments, computation and theory", Annu. Rev. Neurosci. 24:263-297 (2001)] [PDF] I think that Freeman brought in a "dynamic view" of sensing, one that includes context in perception, but he then added a bunch of chaos theory on top. Laurent has dynamics at a more neural level and doesn't seem to think the chaos bit is helpful.
Complexity or Just Complex-ness?
While brains do indeed perform something akin to information processing, they differ profoundly from any existing computer in the scale of their intrinsic structural and dynamic complexity. [Koch C and Laurent G, Complexity in Nervous Systems, Science 284:96-98 (1999)] This is a Viewpoint piece; while it appears to be driving towards applications of complexity theory to consciousness, it never gets there. Several interesting references and numbers, however.

Computation and Neurons

All modern computing machines are "von Neumann" machines; brains are manifestly not von Neumann. How then does the brain "compute", and how will the answer to that question alter how our machines work?

If brains are ultimately composed of logical elements similar to those of von Neumann computers, then we can build intelligent machines using that technology, and we can use that technology to understand neural systems. If brains are not similar to von Neumann machines, then neurobiology can teach us a new, powerful computing approach.

Introduction
Basic Issues in Neural Computation, Ron Meir, Department of Electrical Engineering, Technion, Israel. Siegelmann and Sontag (1991, 1995): A finite recurrent neural network with rational weights can compute, in real time, any function computable by a Turing machine. The Church-Turing Thesis: any function that can be effectively (algorithmically) computed can be computed by a Turing machine. Nice slide deck in PDF with an intro to the Theory of Computation view on neural networks. Are all activities of a sensor/actuator engine function computation?
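The flavor of the Siegelmann-Sontag result can be sketched with the trick behind such constructions: pack an unbounded binary stack into a single rational-valued state, so that push, pop, and top are just affine maps plus a threshold test, exactly the operations saturated-linear units supply. A toy sketch; the precise encoding in their proof differs:

```python
from fractions import Fraction

# Toy sketch of the Siegelmann-Sontag idea: a binary stack packed into
# one rational state in [0,1), manipulated by affine maps and a
# threshold test. A base-4 "Cantor" encoding keeps distinct stacks
# well separated, so the threshold can read the top bit reliably.

EMPTY = Fraction(0)

def push(s, bit):
    return Fraction(2 * bit + 1, 4) + s / 4   # affine map

def top(s):
    return 1 if s >= Fraction(1, 2) else 0    # threshold test

def pop(s):
    return 4 * s - (2 * top(s) + 1)           # affine map

s = push(push(EMPTY, 1), 0)   # stack: 0 on top of 1
print(top(s))                  # -> 0
s = pop(s)
print(top(s))                  # -> 1
```

Whether this says anything about how biological networks store state is, of course, exactly the question the notes below raise.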
von-Neumann Computer Architectures
One facet of this is the fundamental view of memory as a "word at a time" kind of device. A word is transferred from memory to the CPU or from the CPU to memory. All of the data, the names (locations) of the data, the operations to be performed on the data, must travel between memory and CPU a word at a time. Backus [1978] calls this the "von Neumann bottleneck." As he points out, this bottleneck is not only a physical limitation, but has served also as an "intellectual bottleneck" in limiting the way we think about computation and how to program it. ["The von Neumann Architecture of Computer Systems", H. Norton Riley, Computer Science Department, California State Polytechnic University, Pomona, California, September 1987] The reference is to Backus, J. 1978. Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. Communications of the ACM 21, 8, (August), 613-641.
Nematode model system
T. C. Ferrée, B. A. Marcotte and S. R. Lockery (1997). "Neural network models of chemotaxis in the nematode Caenorhabditis elegans." Advances in Neural Information Processing Systems 9: 55-61. MIT Press. (This PDF file is some kind of large image, not text.) Simulation of worm motion compared to experimental measurements. Good progress, but it seems evident that something isn't quite modeled yet. More importantly, this is more a biophysics model than a neural network as far as I can tell.
Ontogenetic (relating to the origin and development of individual organisms) Programming
Presentation of a new communication and control paradigm for multiagent systems, inspired by gene regulation. Object-Oriented Ontogenetic Programming: Breeding Computer Programs that Work Like Multicellular Creatures (2002), Peter Schmutter, University of Dortmund, Germany
Cellular Parallel System
The PIG Paradigm: The Design and Use of a Massively Parallel Fine Grained Self-Reconfigurable Infinitely Scalable Architecture From Proceedings of the First NASA/DoD Workshop on Evolvable Hardware Nicholas J. Macias.
Building a Neural Computer
In the work of [Siegelmann 95] it was shown that Artificial Recursive Neural Networks have the same computing power as Turing machines. A Turing machine can be programmed in a proper high-level language - the language of partial recursive functions. In this paper we present the implementation of a compiler that directly translates high-level Turing machine programs to Artificial Recursive Neural Networks. The application contains a simulator that can be used to test the resulting networks. We also argue that experiments like this compiler may give us clues on procedures for automatic synthesis of Artificial Recursive Neural Networks from high-level descriptions. [A compiler and simulator for partial recursive functions over neural networks, Paulo J. F. Carreira, Miguel A. Rosa, J. Pedro Neto, J. Félix Costa] I just cannot see how this can be the right path. Von Neumann machines were designed to evaluate formulae: their "proper" high-level language is naturally partial recursive functions. But biological networks were not designed to evaluate formulae, even abstractly.
 
Neural Computation: A Research Topic for Theoretical Computer Science? Some Thoughts and Pointers.
 
Shepherd's work on local logic circuits in the cortex.
Time
A DYNAMICAL APPROACH TO TEMPORAL PATTERN PROCESSING (1989)
Analog VLSI
NeuroPipe-Chip: A Digital Neuro-Processor for Spiking Neural Networks (2002) Tim Schoenauer, Sahin Atasoy, Nasser Mehrtash, Heinrich Klar
Circuits for bistable spike-timing-dependent plasticity neuromorphic VLSI synapses, Giacomo Indiveri
VLSI Implementations of Threshold Logic— A Comprehensive Survey Valeriu Beiu, Senior Member, IEEE, José M. Quintana, and María J. Avedillo
The Kerneltron, a massively parallel VLSI array processor for kernel-based pattern recognition, and the first Support Vector Machine in silicon (Genov and Cauwenberghs, 2001)
Computer construction.
Universal Turing Machine: The key idea is to think of the description of a Turing machine itself as data.

Von Neumann: To von Neumann, the key to building a general purpose device was in its ability to store not only its data and the intermediate results of computation, but also to store the instructions, or orders, that brought about the computation.
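The "machine description as data" idea fits in a dozen lines: the interpreter below is fixed, while the machine it runs is an ordinary dictionary that could itself sit on a tape. An illustrative toy (the sample machine and its encoding are my own, not any historical formulation):

```python
# A fixed interpreter executing a transition table that is just data.
# The sample machine inverts a binary string and halts at the first blank.

def run_tm(table, tape, state="start", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, write, move = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

INVERT = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm(INVERT, "1011"))  # -> 0100
```

Swapping in a different table changes the machine without touching the interpreter, which is the whole point of both the universal machine and the stored-program computer.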

 

Neurogenomics

Neurogenomics
Regional and strain-specific gene expression mapping in the adult mouse brain Rickard Sandberg,*† Rie Yasuda,‡† Daniel G. Pankratz,* Todd A. Carter,* Jo A. Del Rio,§ Lisa Wodicka,§ Mark Mayford,‡ David J. Lockhart,§ and Carrolee Barlow*¶
 
Molecular Analysis of Human Neurological Disorders
We are also taking advantage of the rapid increase in genetic sequence data available in the public domain. Using database searching, we identify new genes which we believe play a role in human brain disorders. We have identified several genes which are mammalian homologues of known Drosophila genes which, when mutated in Drosophila, give rise to specific neurodegenerative phenotypes. We are currently mapping the genes in human and mouse and using homologous recombination to delete these genes in mice.

This approach allows us to identify novel genes and previously cloned genes whose protein products are important for normal brain function. Our efforts are designed to use a multidisciplinary approach to contribute to the understanding of the molecular basis of human neurologic disease.

WebQTL
is a unique World Wide Web service that makes it possible for neurogeneticists to rapidly identify and map genes and quantitative trait loci (QTL), particularly those related to brain structure and behavior. WebQTL allows rapid QTL mapping of CNS traits for major recombinant inbred (RI) sets, for shared intercrosses and backcrosses, and particularly for a very large advanced intercross population (the G10). Neurogeneticists can feed the trait data they generate directly into WebQTL's biostatistics and gene mapping module through the web interface.
Neuroscience Meets Quantitative Genetics: Using Morphometric Data to Map Genes that Modulate CNS Architecture
?
Noise in Genetic Regulation may parallel noise in neural systems
Many molecules that control genetic regulatory circuits act at extremely low intracellular concentrations. Resultant fluctuations (noise) in reaction rates cause large random variation in rates of development, morphology and the instantaneous concentration of each molecular species in each cell. To achieve regulatory reliability in spite of this noise, cells use redundancy in genes as well as redundancy and extensive feedback in regulatory pathways. However, some regulatory mechanisms exploit this noise to randomize outcomes where variability is advantageous. [McAdams, H. H. and A. Arkin, "It's a noisy business! Genetic regulation at the nanomolar scale,” Trends in Genetics. 15:65-69 (1999)]
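The low-copy-number fluctuations McAdams and Arkin describe are conventionally simulated with Gillespie's stochastic simulation algorithm. A minimal one-species birth-death sketch (the rate constants are my toy choices, not from the paper):

```python
import random

def gillespie_birth_death(k_make=10.0, k_degrade=0.5, n0=0, t_end=50.0, seed=1):
    """Minimal Gillespie SSA for a one-species birth-death process:
    0 -> protein at rate k_make; protein -> 0 at rate k_degrade * n.
    Mean copy number is k_make/k_degrade = 20, so fluctuations of
    order sqrt(20) are a large fraction of the signal."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    trace = [(t, n)]
    while t < t_end:
        a_make, a_deg = k_make, k_degrade * n
        a_total = a_make + a_deg
        t += rng.expovariate(a_total)          # waiting time to next reaction
        if rng.random() * a_total < a_make:    # which reaction fired?
            n += 1
        else:
            n -= 1
        trace.append((t, n))
    return trace

trace = gillespie_birth_death()
counts = [n for _, n in trace[len(trace) // 2:]]   # discard initial transient
print(min(counts), max(counts))                     # wide spread around 20
```

The same exact-stochastic style of simulation is what makes the parallel to channel noise in neurons concrete: in both cases the discreteness of a small number of molecules drives macroscopic variability.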
Hybrid continuous and discrete gene control system may parallel neural systems
Modeling the cell cycle probably requires a top-down modeling approach and a hybrid control system modeling paradigm to treat its combined discrete and continuous characteristics. [McAdams, H. H. and L. Shapiro, "A bacterial cell-cycle regulatory network operating in time and space" Science 301(5641):1874-7 (2003) ]

 

Inbox

Variability gives complex systems their adaptive and homeostatic (working as it should, in balance) characteristics
The sandpile is an "open" and "self-organizing" system. Its form depends on a continuous flow of matter and energy through it, and emerges entirely from the interaction of large numbers of elements. Such systems in general display both homeostatic and adaptive properties. If the pile is disturbed by flattening it, it will, over time, return to its original form. If the dish is made larger, the sandpile will, over time, modify its shape and form to make use of the additional space....variability is not only not inconsistent with either homeostasis or adaptability but in fact reflects precisely those phenomena (large numbers of interacting elements in an open system) which gives complex systems their adaptive and homeostatic characteristics. ["Variability in Brain Function and Behavior", Paul Grobstein, Department of Biology, Bryn Mawr College, published in The Encyclopedia of Human Behavior, Volume 4 (V.S. Ramachandran, editor), Academic Press, 1994 (pp 447-458).] Excellent, readable, provocative. Consequences for open-systems design and the proactive-computing wing of ubicomp. "Why intentions cannot be observed," a biobehavioral uncertainty principle: understanding behavior may not result in predicting behavior.
 
 

Brain Facts and Figures

 
Nanotechnology
Single-Electron Devices and Their Applications Konstantin K. Likharev Neuromorphic Networks with Molecular Single-electron Devices
 
 
 
 
Conferences
http://www-2.cs.cmu.edu/Groups/NIPS/NIPS2002/NIPS2002preproceedings/
 
Small World Networks
The different connectivity topologies exhibit the following features: random topologies give rise to fast system response yet are unable to produce coherent oscillations in the average activity of the network; on the other hand, regular topologies give rise to coherent oscillations, but in a temporal scale that is not in accordance with fast signal processing. Finally, small-world topologies, which fall between random and regular ones, take advantage of the best features of both, giving rise to fast system response with coherent oscillations. [Fast Response and Temporal Coherent Oscillations in Small-World Networks Luis F. Lago-Fernández, Ramón Huerta, Fernando Corbacho, and Juan A. Sigüenza]
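The three topologies the paper compares sit on the Watts-Strogatz spectrum: start from a ring lattice (regular), rewire each edge with probability p, and p = 1 gives a random graph. A sketch of how little rewiring it takes to shorten paths (the parameters are my own, not the paper's network of neurons):

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each linked to k nearest neighbours per
    side; each lattice edge is rewired to a random target with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:
                b = rng.randrange(n)
                while b == a or b in adj[a]:   # no self-loops or duplicates
                    b = rng.randrange(n)
            adj[a].add(b)
            adj[b].add(a)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable pairs, via BFS from each node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

regular = avg_path_length(watts_strogatz(100, 3, 0.0))
rewired = avg_path_length(watts_strogatz(100, 3, 0.1))
print(f"regular: {regular:.2f}  small-world: {rewired:.2f}")
```

The rewired graph keeps most of the local clustering of the lattice while the few shortcuts collapse the average path length, which is the structural basis for the "fast response with coherent oscillations" claim.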
 
Working Memory
rather than requiring precise tuning, this form of integrator network exploits the variability of synaptic and intrinsic properties to perform robust integration....Because the NMDA conductance has a negative resistance region, the current-voltage curve of a neuron can be bistable if the NMDA conductance has an appropriate value relative to the leak conductance.[Model for a robust neural integrator Alexei A. Koulakov, Sridhar Raghavachari, Adam Kepecs and John E. Lisman Nature Neuroscience , August 2002 Volume 5 Number 8 pp 775-782]
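A cartoon of the bistability mechanism: here the N-shaped current-voltage relation that leak plus NMDA conductance produces is simply written down as a cubic, giving two stable voltages separated by an unstable threshold. This is an illustrative sketch with made-up numbers, not the actual conductance model of Koulakov et al.:

```python
# Two stable voltages (-70 and -40 mV) separated by an unstable
# threshold (-55 mV): a one-bit memory element, integrated by Euler steps.

V_DOWN, V_THRESH, V_UP = -70.0, -55.0, -40.0

def dV_dt(v, scale=0.005):
    # Cubic stand-in for the N-shaped leak + NMDA current-voltage curve.
    return -scale * (v - V_DOWN) * (v - V_THRESH) * (v - V_UP)

def settle(v, dt=0.01, steps=20000):
    for _ in range(steps):
        v += dt * dV_dt(v)
    return v

# A brief input that pushes V past -55 mV flips the unit into its
# high state, where it stays: working memory without fine tuning.
print(round(settle(-56.0)))  # -> -70 (below threshold: decays back down)
print(round(settle(-54.0)))  # -> -40 (above threshold: latches high)
```

A chain of such bistable units is one way to get a robust integrator: the stored level moves in discrete steps, so small parameter errors do not make it drift.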

Gerstner and Kistler Spiking Neuron Models. Single Neurons, Populations, Plasticity Cambridge University Press, 2002

Home pages

Books

Neurobiology by Gordon M. Shepherd
Text book that covers all levels of neurobiology. As a total novice to biology, I found some of the vocabulary a challenge, but this book is exceptionally well written. Shepherd mixes in history, discussions of why neural systems are the way they are, open questions, and experimental technique, and yet moves the subject forward.
The Synaptic Organization of the Brain, 5th ed. Ed. Gordon M. Shepherd
A collection of research-review-like chapters by different authors. Shepherd's introduction repeatedly makes the case that classical artificial neural networks are not biologically realistic.
Neuro-Vision Systems: Principles and Applications. Ed. M. M. Gupta and G. K. Knopf
Reprints, many classics, incl. "The Neuron" by C. Stevens (great figures, esp. of the action potential); "An Introduction to Neural Computing" by T. Kohonen (nice perspective on the role of neurotechnology); Hubel and Wiesel (a Scientific American version of their Nobel work); "Neuromorphic Electronic Systems" by Carver Mead (famous VLSI wizard, whose work in the '90s led to Foveon sensors; on the fundamentals of why to compute neurally).

Software

From bottom, up.

index of Software Tools

A Discrete-Event Neural Network Simulator for General Neuron Models (2002) Takaki Makino

Amygdala is open-source software for simulating spiking neural networks (SNNs). Includes Liquid State Machine simulation.

GENESIS (short for GEneral NEural SImulation System) is a general purpose simulation platform which was developed to support the simulation of neural systems ranging from complex models of single neurons to simulations of large networks made up of more abstract neuronal components.

ModelDB provides an accessible location for storing and efficiently retrieving compartmental neuron models. ModelDB is tightly coupled with NeuronDB. Models can be coded in any language for any environment, though ModelDB has been initially constructed for use with NEURON and GENESIS. Model code can be viewed before downloading and browsers can be set to auto-launch the models.

NEURON is a simulation environment for developing and exercising models of neurons and networks of neurons. It is particularly well-suited to problems where cable properties of cells play an important role (possibly including the extracellular potential close to the membrane), and where cell membrane properties are complex, involving many ion-specific channels, ion accumulation, and second messengers.

A fast 5 MB download creates an environment for models from ModelDB. The downloads are zip files that NEURON unzips, compiles into dlls, then runs, all on auto! Uses mingw.

http://snnap.uth.tmc.edu/overview.htm SNNAP was designed as a tool for the rapid development and simulation of realistic models of single neurons and small neural networks. The electrical properties of individual neurons are described with Hodgkin-Huxley type voltage- and time- dependent ionic currents. The connections among neurons can be made by either electrical or chemical synapses. The chemical synaptic connections are capable of expressing several forms of plasticity, such as homo- and heterosynaptic depression and facilitation.
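At the opposite end of the realism scale from the Hodgkin-Huxley currents these simulators solve, the smallest illustration of "simulating a spiking neuron" is a leaky integrate-and-fire unit. A sketch with my own toy parameters, far cruder than anything SNNAP or NEURON models:

```python
def lif_spike_times(i_input, tau=20.0, v_rest=-65.0, v_thresh=-50.0,
                    v_reset=-65.0, r_m=10.0, dt=0.1, t_end=200.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + r_m * I.
    Crossing v_thresh records a spike and resets V.
    Units: ms, mV, megohm, nA; forward-Euler integration."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_end:
        v += (dt / tau) * (-(v - v_rest) + r_m * i_input)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

# With I = 2 nA the steady-state voltage would be -65 + 20 = -45 mV,
# above threshold, so the cell fires regularly; with I = 1 nA it never
# reaches threshold and stays silent.
print(len(lif_spike_times(2.0)), "spikes in 200 ms")
```

Everything the packages above add (cable geometry, ion-specific channels, synaptic plasticity) is refinement of this basic integrate-threshold-reset loop.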

http://www.synod.uni-freiburg.de/billing.pdf

NEST: An Environment for Neural Systems Simulations Markus Diesmann Dept. of Nonlinear Dynamics, Max-Planck Inst. für Strömungsforschung, Göttingen Marc-Oliver Gewaltig Future Technology Research, Honda R&D Europe (Deutschland) GmbH, Offenbach Abstract NEST is a framework for simulating large, structured neuronal systems. It is designed to investigate the functional behavior of neuronal systems in the context of their anatomical, morphological, and electrophysiological properties. NEST aims at large networks, while maintaining an appropriate degree of biological detail. This is achieved by combining a broad range of abstraction levels in a single network simulation. Great biological detail is then maintained only at the points of interest, while the rest of the system can be modeled by more abstract components. Here, we describe the conception of NEST and illustrate its key features. We demonstrate that software design and organizational aspects were of equal importance for the success of the project.

http://topographica.org/ Topographica is a software package for computational modeling of cortical maps. It is being developed by the Neural Networks Research Group at the University of Texas at Austin, and is funded by the NIMH Human Brain Project under grant 1R01-MH66991. The goal is to help researchers understand brain function at the level of the topographic maps that make up sensory and motor systems.

Network graphing, Graphviz (AT&T Open source)

Grants

INNOVATIVE EXPLORATORY STUDIES AND TECHNOLOGY DEVELOPMENT IN NEUROINFORMATICS RESEARCH (R21)
It is anticipated that most applications will be submitted by investigators with ongoing research programs who wish to change the focus of their current research effort or move into a new area of research utilizing innovative electronic and digital neuroinformatics research capabilities (methodological strategies, databases, and tools), but need additional funds to complete initial pilot studies. This PA also encourages applications from investigators conducting research outside of the basic and clinical neuroscience research field, whose expertise in methodological or technological approaches to Informatics (information technology, computer sciences, mathematics, physics, engineering) would significantly advance progress and new knowledge in this field.

Links

COMPUTATIONAL NEUROSCIENCE Books
Nice list
serendip
Serendip is a gathering place for people who suspect that life's instructions are always ambiguous and incomplete. BRAIN AND BEHAVIOR
tidy-lib
tidy-lib as basis for html->latex tool

Test Data

Frogs

Tongue Biophysics
Frog species use three non-exclusive mechanisms to protract their tongues during feeding: (i) mechanical pulling, in which the tongue shortens as its muscles contract during protraction; (ii) inertial elongation, in which the tongue lengthens under inertial and muscular loading; and (iii) hydrostatic elongation, in which the tongue lengthens under constraints imposed by the constant volume of a muscular hydrostat. [Nishikawa, K.C. 1999. Neuromuscular control of prey capture in frogs. Philosophical Transactions of the Royal Society of London, Biological Sciences 354:941-954]
oriented texture
[NONLINEAR OPERATOR FOR BLOB TEXTURE SEGMENTATION P. Kruizinga and N. Petkov]