Using artificial neural networks to understand brain function: the analysis of neuroelectric information

Sabbatini, R.M.E.
Center for Biomedical Informatics, State University of Campinas, P.O. Box 6005, Campinas SP 13081-970, Brazil
Email: sabbatini@ccvax.unicamp.br

Introduction

One of the main goals of the neurosciences is to measure and to understand the complex flow of information that takes place in the nervous system. With the recent development of techniques for the simultaneous recording of large numbers of neurons, using implanted silicon electrode probes (e.g., Bower et al., 1988), silicon arrays in cell cultures, voltage-sensitive dyes, etc., the informational analysis of recorded spike-train data has become crucial to the study of spatial and temporal correlations between individual neurons, as well as of the nonlinear dynamics of biological neural networks. Artificial neural networks (ANNs), a relatively recent paradigm, are proving to be very useful as computational tools for this purpose, thus closing the circle of mutual influence between the Neurosciences and Connectionist Science (Sabbatini, 1992).

ANNs lend themselves very naturally to applications involving biological signal processing (Miller et al., 1992; Sabbatini, 1993). They can be used to implement several preprocessing functions, such as filtering and artifact detection, as well as time-series and spectral analysis and the classification and recognition of complex patterns. As a result, an increasing number of neuroscientists are beginning to use ANNs in neuroelectric signal processing. This paper briefly reviews the state of the art in ANN applications to cell-level neuroelectric signal processing, which seems to be the most promising area of progress.

Spike Train Analysis

One of the most frequent uses of ANNs in neuroelectric research is the analysis, separation and classification of complex single-unit and multiunit recordings in the nervous system. Published applications are still scarce, but they provide ample evidence of the power and robustness of ANN-based techniques, particularly in the presence of excessive noise or variability.

* Single-unit spike train analysis: multilayer feedforward networks and Hopfield networks can be trained to predict successfully the interspike intervals of particular kinds of firing cells (Shah et al., 1991) and to classify spikes according to predetermined types (Espinosa et al., 1991); a minimal illustrative sketch is given after this list.

* Temporal coding: a time-delayed ANN can be used to convert a serially coded spike train into a spatially distributed topographical map, thus being able to extract and represent information about its stochastic behavior, as well as to identify complex temporally coded patterns hidden in a firing sequence (Tam, 1990).

* Separation of multiple unit recordings: the difficult problem of classifying, identifying and separating the individual cell firings that contribute to composite waveforms can be dealt with effectively by ANN-based adaptive template matching (Wong et al., 1989) and pattern recognition techniques (Jansen, 1990; Iezzi & Micheli-Tzanakou, 1990; Xiao et al., 1990).

* Analysis and representation of temporal and spatial correlation in multiple spike trains: the set of connection weights of a multilayer network built to correlate neuroelectric inputs and outputs provides, after training has converged on experimental data from several recorded cells, a way to depict these correlations graphically (Tam et al., 1988a, b).

* Information content evaluation: training an ANN to classify spike train data according to the eliciting stimuli provides a means to calculate the information transmitted by these signals (Hertz et al., 1992).
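To give a concrete flavour of the first and last items above, the following minimal sketch (written in Python with NumPy, purely for illustration and not taken from any of the cited studies) trains a small feedforward network by error backpropagation to assign noisy synthetic spike waveforms to predetermined unit classes, and then estimates the transmitted information from the resulting confusion matrix. The waveform templates, network size and learning parameters are arbitrary assumptions.

    # Illustrative sketch only: a 32-8-3 feedforward network, trained by plain
    # error backpropagation on a squared-error cost, assigns noisy synthetic
    # spike waveforms to one of three predetermined unit classes.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: three spike "templates" of 32 samples each, plus Gaussian noise.
    t = np.linspace(0.0, 1.0, 32)
    templates = np.stack([
        np.exp(-((t - 0.30) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.45) / 0.10) ** 2),
        0.7 * np.exp(-((t - 0.35) / 0.08) ** 2) - 0.6 * np.exp(-((t - 0.55) / 0.08) ** 2),
        1.2 * np.exp(-((t - 0.25) / 0.04) ** 2) - 0.2 * np.exp(-((t - 0.50) / 0.15) ** 2),
    ])
    labels = rng.integers(0, 3, size=600)              # "true" unit of each event
    X = templates[labels] + 0.15 * rng.standard_normal((600, 32))
    Y = np.eye(3)[labels]                              # one-hot targets

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Small random initial weights for the 32-8-3 network.
    W1 = 0.1 * rng.standard_normal((32, 8)); b1 = np.zeros(8)
    W2 = 0.1 * rng.standard_normal((8, 3));  b2 = np.zeros(3)

    eta = 1.0
    for _ in range(3000):                              # batch gradient descent epochs
        H = sigmoid(X @ W1 + b1)                       # hidden layer
        O = sigmoid(H @ W2 + b2)                       # output layer
        dO = (O - Y) * O * (1.0 - O)                   # output deltas
        dH = (dO @ W2.T) * H * (1.0 - H)               # backpropagated hidden deltas
        W2 -= eta * H.T @ dO / len(X); b2 -= eta * dO.mean(axis=0)
        W1 -= eta * X.T @ dH / len(X); b1 -= eta * dH.mean(axis=0)

    pred = np.argmax(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), axis=1)
    print("training-set classification accuracy:", (pred == labels).mean())

    # Transmitted information estimated from the confusion matrix (cf. the last
    # item of the list above; here the "stimulus" is simply the unit identity).
    joint = np.zeros((3, 3))
    for s, r in zip(labels, pred):
        joint[s, r] += 1.0
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)              # p(stimulus)
    pr = joint.sum(axis=0, keepdims=True)              # p(response)
    nz = joint > 0
    info = np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz]))
    print("estimated transmitted information: %.2f bits" % info)

The cited studies of course work on real recordings, often on composite multiunit waveforms and with more careful preprocessing, but the training loop, the role of the hidden layer and the confusion-matrix estimate of transmitted information are essentially the ones sketched here.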
Although it is improbable that "invented" computational tools such as the multilayer perceptron (MLP) have any direct biological counterpart, they nonetheless provide a useful means to test hypotheses about biological neural networks. For example, Lockery et al. (1989) trained a feedforward MLP to reproduce the observed input/output relations in the leech's neural network responsible for the withdrawal reflex. A quantitative comparison of the trained ANN with the biological structure revealed a functional similarity between the interneurons of the biological circuit and the hidden nodes of the network. So to speak, the ANN "learned" the most probable configuration of interneuron connections needed to reproduce the observed behavior. This technique could be extended to other similar problems.

Discussion

Many investigations have provided evidence that ANN systems are the method of choice when an adaptive solution to ill-defined and complex problems of pattern classification and recognition is required (Sabbatini, 1993). The automatic interpretation of biological signals requires the definition of numerous diagnostic criteria and parameters. Rule-based expert systems often fail in this domain, either because the logical knowledge is hard to define and to acquire, or because it is simply not available; they are also too sensitive to noise and usually too slow for real-time applications.

Multilayer neural networks are particularly useful for complex neuroelectric signal processing tasks, for a number of reasons. First, it has been proved that they can approximate any continuous mapping function to any desired degree of accuracy. Thus, their nonlinear nature makes it possible to partition the signal space into arbitrarily complex decision regions, overcoming many of the limitations posed by conventional linear multivariate techniques (a toy illustration of this point is given below). Neural networks are easier to implement and to train using examples, rather than complex and unreliable heuristics. They do not require rigid analytical or causal assumptions, have higher reliability in the presence of noise and uncertainty, and are able to provide graded or fuzzy responses. Complex nonlinear as well as time-ordered phenomena are easily treated by ANNs, in contrast to other approaches. In addition, ANNs have the strong appeal that, once true massively parallel neurocomputers are available, they will provide unprecedented, blindingly fast devices for implementing intelligent applications in this field.

The interaction between the experimental Neurosciences and the field of ANNs is still in its infancy, but, theoretically, it offers great potential for interesting developments. There is a natural and mutually reinforcing bond between the two areas, since both deal with neural computation, natural or artificial. In practical terms, however, this bond remains largely unrealized.
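As a concrete, if trivial, illustration of the point about nonlinear decision regions made above, consider the classic XOR problem: no single linear discriminant can solve it, but a network with two hidden threshold units can. In the sketch below (Python/NumPy), the weights are hand-chosen for clarity rather than learned; this is an illustrative assumption, not a published model.

    # Toy illustration of nonlinear decision regions: XOR cannot be separated by
    # any single linear discriminant, but a 2-2-1 network of threshold units can.
    # The weights below are hand-chosen (not trained), purely for illustration.
    import numpy as np

    def step(z):
        return (z > 0).astype(float)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    target = np.array([0, 1, 1, 0], dtype=float)        # XOR

    # Hidden unit 1 computes a logical OR of the inputs, hidden unit 2 a logical
    # AND (same input weights, different thresholds); the output unit fires when
    # OR is true but AND is not, i.e. exactly for the XOR cases.
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])
    W2 = np.array([1.0, -2.0])
    b2 = -0.5

    H = step(X @ W1 + b1)
    y = step(H @ W2 + b2)
    print("network output:", y, " target:", target)      # matches exactly
    # No single hyperplane w1*x1 + w2*x2 + b = 0 separates {(0,1),(1,0)} from
    # {(0,0),(1,1)}; a purely linear discriminant is stuck at 75% on this task.

Real neuroelectric classification problems are, of course, far higher-dimensional, but the same principle, hidden units carving the signal space into non-convex regions, is what the backpropagation-trained networks cited above exploit.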
Undoubtedly, the greatest payoff will come from using complex ANNs and hyperparallel computing hardware to tackle the new challenges posed by methodological advances in the study of large-scale neuronal assemblies. In particular, the area of computational neuroscience, with its emphasis on the realistic computer-based modeling and simulation of biological neural networks, could act as a "bridge" between the two areas, either by giving the connectionist researcher biologically inspired ideas for new and more efficient ANN architectures, or by using powerful ANN inventions to aid the experimental investigation and modeling of the structure and function of the brain.

The use of connectionist techniques to map biological neural structures into artificial representations, or, conversely, the use of ANNs to decode the information flow in neural tissue, are striking examples of the fruitful collaboration possible between the two areas. The development of a standard method to represent neurobiological structure and function, either as a descriptive language or as a computer program, such as GENESIS (Wilson et al., 1989), would boost these ties enormously. Computer models of known biological computing structures, expressed in this way, could be directly tested, validated and translated into new ANN inventions (eventually by directly generating symbolic programming code in any chosen language), which could then be used to process further experimental data.

Another point is that, despite the overabundance of ANN paradigms and algorithms in the literature, very few of them have been explored for neuroelectric signal processing tasks. With few exceptions, many useful architectures for signal processing, such as avalanche, time-delay and recurrent networks, have not been applied or systematically compared with other, more common ANNs. The same applies to learning algorithms, which remain largely confined to three or four well-known paradigms, such as Hebbian, competitive and error backpropagation learning. Perhaps, as Tam (1990) has convincingly demonstrated, neuroscience research needs to invent its own ANN paradigms for special purposes. Many architectural and functional features of neurons and biological networks which surely have a precise computational value, such as nonhomogeneous dendritic fields, spines, transmission delays, synapse-synapse interactions, presynaptic inhibition, etc., are beginning to be exploited in new ANN paradigms, with encouraging results.

Still, there are many potential applications of ANNs in neuroelectric signal processing that have not been explored. For instance, the remarkable ability of ANNs to detect highly complex spatial and temporal patterns buried amid the electrical clutter characteristic of neural activity could be used to advantage to devise new sorts of experiments in which sensory, electrical or chemical stimulation, roving electrode movement, event or recording windowing, etc., could be automatically driven by ANN-based "recognizer demons" (the sketch below illustrates, in a very reduced form, how a delay-line input representation turns such temporal pattern detection into an ordinary feedforward computation). Online detection of motor preparatory potentials by ANNs could be used to control intelligent neuroprostheses and brain-computer interfaces (Pfurtscheller et al., 1993). Applications of ANNs to neurobehavioral experimentation also have great potential (Sabbatini & Cardoso, 1994).
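The following minimal sketch (Python/NumPy; the spike train, the target burst pattern and the hand-wired matched weights are all illustrative assumptions, not a reimplementation of any of the cited methods) shows the basic trick behind time-delay architectures: a tapped delay line converts a serially coded spike train into a spatial input vector, so that a single feedforward unit can signal the occurrence of a specific temporal firing pattern embedded in background activity.

    # Illustrative sketch: a tapped delay line turns a serially coded spike train
    # (1 ms bins) into a spatial input vector, and a single threshold unit with
    # hand-wired "matched" weights detects a burst of three spikes 2 ms apart
    # embedded in Poisson background firing.  In a trained time-delay network the
    # weights would, of course, be learned rather than set by hand.
    import numpy as np

    rng = np.random.default_rng(1)

    train = (rng.random(2000) < 0.01).astype(float)   # ~10 Hz Poisson background, 2 s
    pattern = np.zeros(9)
    pattern[[0, 2, 4]] = 1.0                          # three spikes, 2 ms apart
    train[500:509] = pattern                          # embed the burst at t = 500 ms

    # Tapped delay line: at every time step the unit "sees" the last 9 bins.
    windows = np.lib.stride_tricks.sliding_window_view(train, 9)

    # Matched weights: excitatory where a spike is expected, inhibitory elsewhere.
    w = np.where(pattern > 0, 1.0, -1.0)
    activation = windows @ w
    detections = np.flatnonzero(activation >= 3.0)    # all three spikes, no extras

    print("burst embedded at bin 500; detector fires at bins:", detections)

With learned instead of hand-wired weights, and several such units operating in parallel, this is essentially the input stage of the time-delay and topographically mapped architectures discussed above.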
For the near future, we can forecast that neuroscientists will use ultracomplex, biologically inspired ANN architectures, running on massively parallel supercomputers, as powerful auxiliary "brains" for their everyday research. Artificial visual brains, for example, organized according to known biological principles but benefiting from the enormous speedup provided by electronic or electro-optical devices, could be used to scan, make sense of and explain the massive and highly complex neuroelectric and neuroimaging data streaming out of sophisticated experimental setups. Given the complexity of even the simplest nervous systems, this trend could well be the only pathway to an understanding of the brain.

References

Bower, J.M.; Wilson, M.A.; Banik, J.; Nelson, M.; Rasnow, B. - Approaches to monitoring and interpreting the dynamics of real neural networks. Proc. 10th Ann. Int. Conf. IEEE Eng. Med. Biol. Soc. New York: IEEE, p. 1921-2, 1988.

Espinosa E., I.; Quiza T., J.; Ayhilon M., A. - Detection and classification of neuronal spikes using a DSP chip and a neural network. Proc. 13th Ann. Int. Conf. IEEE Eng. Med. Biol. Soc., p. 1448-9, 1991.

Hertz, J.A.; Kjaer, T.W.; Eskandar, E.N.; Richmond, B.J. - Measuring natural neural processing with artificial neural networks. Int. J. Neural Systems, 3 (Suppl.): 91-103, 1992.

Iezzi, R.; Micheli-Tzanakou, E. - Neural network analysis of neuronal spike-trains. Proc. 12th Ann. Int. Conf. IEEE Eng. Med. Biol. Soc., p. 1435-6, 1990.

Jansen, R.F. - The reconstruction of individual spike trains from extracellular multineuron recordings using a neural network emulation program. J. Neurosci. Methods, 35(3): 203-13, 1990.

Lockery, S.R.; Wittenberg, G.; Kristan, W.B. Jr.; Cottrell, G.W. - Function of identified interneurons in the leech elucidated using neural networks trained by back-propagation. Nature, 340(6233): 468-71, 1989.

Miller, A.S.; Blott, B.H.; Hames, T.K. - Review of neural network applications in medical imaging and signal processing. Med. Biol. Eng. Comput., 30: 449-464, 1992.

Pfurtscheller, G.; Flotzinger, D.; Kalcher, J. - Brain-computer interface - a new communication device for handicapped persons. J. Microcomp. Appl., 16: 293-9, 1993.

Sabbatini, R.M.E. - Applications of connectionist systems in Biomedicine. Proc. 7th World Congress on Medical Informatics (MEDINFO 92). Amsterdam: North-Holland, 1992.

Sabbatini, R.M.E. - Neural networks for classification and pattern recognition of biological signals. Proc. 15th Ann. Int. Conf. IEEE Eng. Med. Biol. Soc. New York: IEEE Press, 1993.

Sabbatini, R.M.E.; Cardoso, S.H. - Applications of artificial neural networks in the automatic classification of behavioral patterns and sequences. Proc. 3rd Computational and Neural Systems Conf. (CNS'94), 1994.

Shah, S.; Faller, W.E.; Luttges, M.W. - Neural network analyses of stochastic information: application to neurobiological data. Biomed. Sci. Instrum., 27: 231-8, 1991.

Tam, D.C. - Decoding of firing intervals in a temporal-coded spike train using a topographically-mapped neural network. Proc. IJCNN'90 - Int. Joint Conf. Neural Networks. New York: IEEE Press, 3: 627-632, 1990.

Tam, D.C.; Perkel, D.H.; Tucker, W.S. - Correlation of multiple neuronal spike trains using the backpropagation error correction algorithm. Neural Networks, 1 (Suppl. 1): 277, 1988a.

Tam, D.C.; Perkel, D.H.; Tucker, W.S. - Temporal correlation of multiple neuronal spike trains using the backpropagation error correction algorithm. Neural Networks, 1 (Suppl. 1): 278, 1988b.
Wilson, M.A.; Bhalla, U.S.; Uhley, J.D.; Bower, J.M. - GENESIS: a system for simulating neural networks. In: Touretzky, D. (Ed.) - Advances in Neural Information Processing Systems. San Mateo, CA: Morgan Kaufmann, p. 257 ff., 1989.

Wong, Y.; Banik, J.; Bower, J.M. - Neural networks for template matching: application to real-time classification of the action potentials of real neurons. In: Anderson, D. (Ed.) - Neural Information Processing Systems. AIP Press, 1989.

Xiao, L.T.; Micheli-Tzanakou, E.; Dasey, T.J. - Analysis of composite neuronal waveforms into their constituents. Proc. 12th Ann. Int. Conf. IEEE Eng. Med. Biol. Soc., p. 1433-4, 1990.

----------------
Submitted to the III Computational and Neural Systems Conf., Monterey, CA, USA - July 1994