The detection of image regions and their borders is one of the basic requirements for further (object-domain) image processing in a general-purpose technical pattern recognition system and, very likely, also in the visual system. It is a prerequisite for object separation (figure-ground discrimination, and separation of adjoining and intersecting objects), which in turn is necessary for the generation of invariances for object recognition (Reitboeck & Altmann, 1984).
Texture is a powerful feature for region definition. Objects and background usually have different textures; camouflage works by breaking this rule. For texture characterization, Fourier (power) spectra are frequently used in computer pattern recognition. Although the signal transfer properties of visual channels can be described in the spatial (and temporal) frequency domain, there is no conclusive evidence that pattern processing in the primary visual areas is carried out in terms of local Fourier spectra.
In the following we propose a model for texture characterization in the visual system, based on region labeling in the time domain via correlated neural events. The model is consistent with several basic operational principles of the visual system, and its texture separation capacity is in very good agreement with the pre-attentive texture separation of humans.
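As a purely illustrative sketch of the labeling idea (not the model itself), suppose units covering the same texture region receive a shared stochastic drive, so that their spike trains are temporally correlated; region borders then fall wherever the correlation between neighbouring units drops. The one-dimensional layout, drive probabilities, and correlation threshold below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D array of 20 units: units 0-9 cover texture region A, units
# 10-19 region B.  Units in the same region share a stochastic drive,
# so their spike trains are temporally correlated.
T, n = 2000, 20
drive_a = rng.random(T) < 0.2        # common drive for region A
drive_b = rng.random(T) < 0.2        # independent drive for region B
spikes = np.zeros((n, T))
for i in range(n):
    common = drive_a if i < 10 else drive_b
    noise = rng.random(T) < 0.05     # unit-specific noise spikes
    spikes[i] = (common | noise).astype(float)

def correlation(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Label regions: link neighbouring units whose spike trains are highly
# correlated; a new label starts wherever the correlation drops.
labels = np.zeros(n, dtype=int)
for i in range(1, n):
    same = correlation(spikes[i - 1], spikes[i]) > 0.5
    labels[i] = labels[i - 1] if same else labels[i - 1] + 1

print(labels.tolist())   # the border between the two regions is recovered
```

Within-region pairs correlate strongly through the shared drive, while the pair straddling the border is essentially uncorrelated, so the two regions receive distinct labels in time, without any spatial feature comparison.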
Texture region definition via temporal correlations
When we look at a scene, we can literally generate a ‘matched filter’ and use it to direct our attention to a specific object region.
Is there enough motivation for a solid-state physics approach to the brain?
One of the salient features of brain networks is that anatomical sections a few millimetres in width, taken from different parts of the cortex, look roughly similar in texture. This observation might motivate a theoretical approach in which principles of solid-state physics are applied to the analysis of the collective states of neural networks. Such a step, however, should be made with extreme care. I will first point out a few characteristics of the neural tissue that are similar to those of, or distinguish it from, non-living solids.
Similarities. There is a dense feedback connectivity between neighbouring neural units, which corresponds to interaction forces between atoms or molecules in solids.
Generation and propagation of brain waves seem to support a continuous-medium view of the tissue.
Differences. In addition to local, random feedback there exist plenty of directed connectivities, ‘projections’, between different neural areas. As a whole, the brain is a complex self-controlling system in which global feedback control actions often override the local ‘collective’ effects.
Although the geometric structure of the neural tissue looks uniform, there exist plenty of specific biochemical and physiological differences between cells and connections. For instance, there exist some 40 different types of neural connection, distinguished by the particular chemical transmitter substances involved in signal transmission, and these chemicals vary from one part of the brain to another.
It is probably fair to say that we have not, to this day, formed a clear picture of the learning process; neither have we been able to elicit from artificial intelligence machines a sort of behavior which could possibly compare in flexibility and performance with that exhibited by human or even animal subjects.
Leaving aside the issue of what actually happens in a learning brain, research on the question of how to generate ‘intelligent’ behavior has oscillated between two poles. The first, which today predominates in artificial intelligence circles (Nilsson, 1980), takes it for granted that solving a particular problem entails repeated application, to a data set representing the starting condition, of some operations chosen in a predefined set; the order of application may be either arbitrary or determined heuristically. The task is completed when the data set is found to be in a ‘goal’ state. This approach can be said to ascribe to the system, ‘from birth’, the capabilities required for a successful solution. The second approach, quite popular in its early version (Samuel, 1959), has been favored recently by physicists (Hopfield, 1982; Hogg & Huberman, 1985), and rests on the idea that ‘learning machines’ should be endowed, not with specific capabilities, but with some general architecture, and a set of rules, which are used to modify the machines' internal states in such a way that progressively better performance is obtained upon presentation of successive sample tasks.
Rhythmic oscillation is a fundamental component found in many kinds of nervous systems (Friesen & Stent, 1977; Thompson, 1982). In a neural network with a ring-structured set of synaptic connections, a set of oscillations with different phases can be generated (Morishita & Yajima, 1972; Stein et al., 1974), and the occurrence of such rhythmic oscillation has also been confirmed in different types of neural networks (Matsuoka, 1985). Since, however, various additional connections can cause disturbances that easily extinguish the rhythmic oscillation, some function for maintaining it should be expected to exist in the synapses if such signals play an important role in the nervous system.
A new synaptic modification algorithm is proposed which employs the average impulse density (AID) and the average membrane potential (AMP), and the effect of synaptic modification on rhythmic oscillation has been examined (Tsutsumi & Matsumoto, 1984a). Simulations demonstrated cases in which rhythmic oscillation reappears when the algorithm is applied to a disturbed ring neural network in which the oscillation had previously been extinguished.
If that is the case, how can such oscillation, generated by a neural network with feedback inhibition, be processed by a subsequent network with, for example, a feedforward system? Here we take as an instance the cerebellar circuitry, which includes both feedback and feedforward systems, and discuss the relationship between synaptic modification and rhythmic oscillation in the neural network.
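The basic phenomenon, rhythmic oscillation in a ring-structured network, can be sketched with a toy simulation in the spirit of the cyclic-inhibition networks cited above; this shows only the oscillation itself, not the AID/AMP modification algorithm, and all parameters are illustrative:

```python
import numpy as np

# Ring of N leaky rectifying units, each inhibited by its predecessor.
# For odd N and sufficiently strong inhibition the uniform fixed point
# is unstable, and a travelling wave of activity circulates: a set of
# oscillations with different phases, one per unit.
N, w, s, tau = 5, 2.5, 1.0, 0.1   # units, inhibitory weight, tonic drive, time constant
dt, steps = 0.01, 4000

u = np.zeros(N)
u[0] = 1.0                         # small asymmetry starts the wave
trace = np.empty(steps)
for t in range(steps):
    y = np.maximum(u, 0.0)         # rectified output
    inhib = w * np.roll(y, 1)      # each unit inhibited by its predecessor
    u += dt / tau * (s - u - inhib)
    trace[t] = max(u[0], 0.0)      # record unit 0's output

late = trace[steps // 2:]          # discard the initial transient
crossings = np.count_nonzero(np.diff((late > 0.5).astype(int)))
print(crossings)                   # repeated threshold crossings indicate oscillation
```

Adding strong extra connections to such a ring can stabilize a fixed point and extinguish the wave, which is the disturbance the synaptic modification algorithm is meant to counteract.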
Modeling the brain requires some a priori description of its functional computing elements. The sophistication of this description depends on the goal of the model, but it is clear that describing single neurons as simple ‘leaky-integrators’ with a non-linear threshold has limited utility in exploring the capability of the neural net for processing information. In fact, cortical pyramidal neurons are functionally quite complicated, both in terms of how their complex geometry determines their electrotonic structure, and in terms of the active currents that modulate the linear response of these cells.
Over the past several years investigators have uncovered a plethora of currents in one type of cortical pyramidal neuron, the hippocampal pyramidal cell. The currents, which presumably are mediated by distinct ion channels, include the classical fast sodium current (INa), a persistent sodium current (INaP) (French & Gage, 1985), a delayed-rectifier potassium current (IDR) (Segal & Barker, 1984), a calcium current (ICa) (Halliwell, 1983), a slow calcium current (ICaS) (Johnston et al., 1980), a fast transient calcium-mediated potassium current (IC) (Brown & Griffith, 1983), an after-hyperpolarization calcium-mediated potassium current (IAHP) (Lancaster & Adams, 1986), a muscarine-inhibited potassium current (IM) (Halliwell & Adams, 1982), a transient potassium current (IA) (Gustafsson et al., 1982), a chloride current (ICl(V)) (Madison et al., 1986), and a possibly mixed-carrier anomalous rectifier current (IQ) (Halliwell & Adams, 1982). Each current has a unique time- and voltage-dependence, and some currents are sensitive to constituents in the extra- or intra-cellular environment, for example free Ca++ or muscarinic agonists.
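These measurements feed into a current-balance equation, C dV/dt = I_inj − Σ I_i, with each current of the form g·(gating)·(V − E). The fragment below sketches that structure with only a leak and a single first-order potassium-like current; the kinetics and parameters are placeholders, not fitted hippocampal data:

```python
import numpy as np

# Current-balance sketch: C dV/dt = I_inj - I_L - I_K, with
# I_L = g_L (V - E_L) and I_K = g_K n (V - E_K).  Units: uF/cm^2,
# mS/cm^2, uA/cm^2, mV, ms.  Gating kinetics are illustrative only.
C = 1.0
g_L, E_L = 0.1, -65.0     # leak conductance and reversal potential
g_K, E_K = 2.0, -90.0     # one potassium-like current as a stand-in

def n_inf(V):             # steady-state activation (placeholder sigmoid)
    return 1.0 / (1.0 + np.exp(-(V + 30.0) / 10.0))

dt, T = 0.01, 200.0
V, n = -65.0, n_inf(-65.0)
I_inj = 2.0               # constant injected current
for _ in range(int(T / dt)):
    I_L = g_L * (V - E_L)
    I_K = g_K * n * (V - E_K)
    n += dt * (n_inf(V) - n) / 5.0   # 5 ms activation time constant
    V += dt * (I_inj - I_L - I_K) / C

print(round(V, 1))        # steady-state potential (mV)
```

Adding the remaining currents listed above is a matter of appending further I_i terms, each with its own conductance, reversal potential, and (possibly Ca++-dependent) gating variables.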
The function of the brain is a far cry from that of existing computer systems. Our knowledge of the information processing mechanism of the brain is rather limited, but we know that the brain consists of a large number of neurons and that the individual neuron can be regarded as a sort of threshold element. So, we can find some similarities between the neuron systems and the existing computer systems.
The similarity we are especially interested in is that both of them can be considered as aggregates of digital circuits. Any function of a digital circuit which is independent of the previous state can easily be implemented by a two-level AND-to-OR gate network. In addition, the threshold element which substitutes for a neuron performs the function of an AND gate or an OR gate according to its threshold value. These facts inspired our AND–OR analog of neuron networks. Our neuron network also has two levels. The first level of the network acts as an analyzer of the input patterns, and the second level acts as a generator of the output patterns. The operations assigned to the two levels correspond to those of the AND plane and the OR plane, respectively.
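That a single threshold element realizes AND or OR depending on its threshold, and that two such levels form a sum-of-products circuit, can be checked with a small sketch; XOR is our own worked example, and inverted inputs stand in for inhibitory connections:

```python
# A binary threshold element fires when its input sum reaches the
# threshold.  With unit weights and k inputs, threshold k realizes AND
# and threshold 1 realizes OR, so two threshold levels form an
# AND-to-OR (sum-of-products) circuit.
def threshold_unit(inputs, theta):
    return int(sum(inputs) >= theta)

def xor(a, b):
    # First level (AND plane): detect the input patterns 10 and 01.
    p1 = threshold_unit([a, 1 - b], theta=2)   # a AND NOT b
    p2 = threshold_unit([1 - a, b], theta=2)   # NOT a AND b
    # Second level (OR plane): fire if either pattern was detected.
    return threshold_unit([p1, p2], theta=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))   # prints the XOR truth table
```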
The function of a standard AND-to-OR gate network is determined by its fixed wiring. That of our neuron network, however, is formed by interaction with the given environment. The learning process we assume for the network is based on the biological hypothesis that creatures with neuron systems tend to avoid continuous, invariable stimuli; in other words, they favor moderate changes in their environment. Reflexes such as avoidance behavior in response to danger can be explained by this hypothesis.
It is quite evident that, while all sane people desire peace, all countries must practically provide for their self-defense. Thus, issues of procurement and implementation of combat systems must be objectively analyzed. When performing such analyses, it is useful and important to distinguish between specific combat components (‘anatomy’) and the Command, Control and Communications (C3) functions (‘physiology’) of these systems. If enormous financial and human resources must indeed be spent on a specific weapons system, then C3 models can offer procurement decision aids for spending these resources more efficiently. However, even before procuring specific weapon components, C3 models should be used to develop battle-management decision aids to help determine if it is feasible to consider building planned large-scale systems at all.
Even without agreement on just what C3 is, there is widespread criticism that we do not spend enough on C3 relative to what we spend on specific weapons systems (Blair, 1985). The Eastport Study Group (Eastport Study Group, 1985) has made this issue its primary concern with regard to the Strategic Defense Initiative (SDI) program. There is also an ever-present problem of weighing the political and military aspects between the hierarchical and distributed designs of C3, the former being politically desirable and appropriate for deterministic or modestly stochastic operations, and the latter being more appropriate for severely stochastic systems (Orr, 1983).
The quest for the ‘code’ or ‘codes’ involved in short-term memory and information processing in higher mammalian cortex is one of the most exciting and challenging problems in all of science. The recent experimental progress and results from recording in cortex using large arrays of microelectrodes (Krüger & Bach, 1981; Bach & Krüger, 1986) and using optical dye techniques (Blasdel & Salama, 1986; Grinvald et al., 1981) offer great opportunities in the search for the code. The cross-correlation analyses of these types of data are enormously difficult (Gerstein, Perkel & Dayhoff, 1985; Aertsen, Gerstein & Johannesma, 1986). Thus the close interplay of new theoretical models and experiment will be crucial.
The basis for the tremendous magnitudes of the brain's processing capabilities and memory storage capacities remains a mystery despite the substantial efforts and results in modeling neural networks; see, e.g., references in Amari & Arbib (1982), Ballard (1986) and Pisco (1984). We believe the Mountcastle (1978) columnar organizing principle for the functioning of neocortex will help provide a basis for these phenomena, and we have constructed the trion model (Shaw, Silverman & Pearson, 1985; Shaw, Silverman & Pearson, 1986; Silverman, Shaw & Pearson, 1986). Mountcastle proposed that the well-established cortical column (Goldman-Rakic, 1984), roughly 500 μm in diameter, is the basic network in the cortex and is composed of small irreducible processing sub-units. The sub-units are connected into columns, or networks, having the capability of complex spatial–temporal firing patterns.
Computer simulation of the activity of complex neural networks representing substantial portions of the brain is limited by a number of practical considerations, notably the capacity of existing computers and the finite human resources available for analysis of a proliferating output. Whatever the specific model chosen, whether operating in discrete or continuous time, and whether involving the firing states or the firing rates of neurons as the basic dynamical variables, there will arise the possibility that ‘edge effects’ seriously diminish the relevance of the simulation to the behavior of the actual biological system. Such effects may arise, principally, from the fact that the number of neuronal elements in the simulation is too small, or, secondarily, from the fact that the numbers of synaptic inputs to given elements are inappropriate.
In this contribution we shall make an attempt to quantify edge effects in terms of a simple conception of interneuronal distance, reasoning that the asymptotic autonomous behavior of neural models will hinge critically on the topological properties of the net. This will be especially true of the repertoire of cyclic modes (Clark, Rafelski & Winston, 1985) of an assembly of N binary threshold elements operating synchronously in discrete time. As a first approximation to a meaningful definition of the distance d_ki from neuron i to neuron k in such models, one may use simply the minimum number of synaptic junctions which information must traverse in going from i to k.
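Under this definition, computing d_ki amounts to a shortest-path search in the directed synaptic graph, which a breadth-first traversal gives directly; the toy network below is hypothetical:

```python
from collections import deque

def synaptic_distance(connections, i, k):
    """Minimum number of synaptic junctions from neuron i to neuron k.

    connections[j] lists the neurons receiving a synapse from j; the
    distance is the shortest directed path length, found by BFS.
    """
    if i == k:
        return 0
    seen, queue = {i}, deque([(i, 0)])
    while queue:
        j, d = queue.popleft()
        for m in connections.get(j, ()):
            if m == k:
                return d + 1
            if m not in seen:
                seen.add(m)
                queue.append((m, d + 1))
    return None   # k is unreachable from i

# Toy directed net: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
net = {0: [1, 2], 1: [2], 2: [0]}
print(synaptic_distance(net, 0, 2))   # 1 (direct synapse)
print(synaptic_distance(net, 1, 0))   # 2 (via neuron 2)
```

Note that the distance is directional: d_ki and d_ik generally differ, since synapses transmit one way.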
Cognition, according to Ulric Neisser, ‘refers to all processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation, as in images and hallucinations’ (Neisser, 1966). To discover some of the neural mechanisms operating in these processes remains one of the great challenges of neuroscience.
Significant progress has been made in analyzing the mapping and coding processes that occur in the first few stages of mammalian vision. These processes of so-called early vision involve neural mechanisms that sort out geometrical features, heighten contrast, emphasize contours, and help determine shape and motion of objects (Marr & Poggio, 1979; Marr & Hildreth, 1980; Marr, 1982).
Cognition, as Neisser defines it, must go beyond these early processes, utilizing not only innate circuitry, but bringing the full richness of stored experience to bear on the new sensory information. The time scale of such processes must extend from the order of a hundred milliseconds to seconds and beyond.
Cognition and reafference
These higher and later cognitive processes are usually relegated to cortical association areas. In the present model, however, we assume that significant changes in the afferent sensory information are brought about at relatively peripheral sensory levels – the lateral geniculate nucleus in the case of vision – and that these changes play an important role in the transformation, reduction, and elaboration of the sensory input.
The use of pharmacological agents in neuroendocrine studies has had a significant impact on our perception of the control mechanisms involved in prolactin secretion. In contrast to other anterior pituitary hormones, prolactin is thought to be regulated by the hypothalamus through a prolactin inhibiting factor (PIF), a peptide of MW < 5000 that is tonically released into the hypophysial portal vessels. The prolactin secretory cells themselves are assumed to be driven by a prolactin releasing factor (PRF), an as yet unidentified neurosecretory product. PRF neurosecretory cells are in turn thought to be driven by serotoninergic neurons located in the medial basal hypothalamus.
Of the major CNS neurotransmitters, dopamine is a potent inhibitor of prolactin release, although the exact nature of the interaction between PIF and dopamine is at best unclear at present. The original postulate that PIF secretion is stimulated by dopaminergic neurons located in the medial basal hypothalamus is challenged by the fact that dopamine receptors have been located on the prolactin secretory cells themselves (Clemens, 1976). The emerging synthetic view of the problem postulates that the PIF secreting cells act in parallel with, and are at the same time driven by, the dopaminergic neurons of the medial basal hypothalamus (Fig. 33.1).
Serotonin is the other major CNS neurotransmitter involved in the control of prolactin secretion. It is again as yet unclear whether serotonin stimulates prolactin secretion by inhibiting the PIF release or by stimulating PRF release.