Since the publication of the first edition 15 years ago, the subject of liquid crystals has grown enormously to become a fascinating interdisciplinary field of study. A variety of new thermotropic phases have been discovered, including over a dozen different smectic modifications, discotics, biaxial nematics, etc., which have opened up a veritable treasure-house for the theoretical condensed matter physicist. On the technological side, the advances have been no less spectacular: portable computers and hand-held TV sets using liquid crystal display devices are being sold in large numbers, and high-definition LCD-TV would seem to be just round the corner. The aim of the present edition is to bring the coverage up to date. The chapters dealing with the classical nematic, cholesteric and smectic types of liquid crystals have been revised substantially and a new chapter has been included on discotics. However, mainly for reasons of space, special topics like the applications of magnetic resonance techniques, non-linear optical properties, etc., have not been discussed here as these have been comprehensively reviewed elsewhere.
It is my privilege to express my thanks to my young colleagues who kept me alive to the subject: in particular to V. N. Raja, S. Krishna Prasad, D. S. Shankar Rao, S. M. Khened and Geetha Nair for their invaluable help throughout the preparation of this book, and to Sriram Ramaswamy and U. D. Kini for their advice on certain theoretical points. I am indebted to the Council of Scientific and Industrial Research, New Delhi, for a Bhatnagar Fellowship which made it easier for me to undertake this task.
The present classification of smectic liquid crystals is based largely on the optical and miscibility studies of Sackmann and Demus. The miscibility criterion relies on the postulate that two liquid crystalline modifications which are continuously miscible (without crossing a transition line) in the isobaric temperature–concentration diagram have the same symmetry and can therefore be designated by the same symbol. It is not clear whether this criterion is valid regardless of the differences in the molecular shapes and dimensions of the two components, but empirically Sackmann and Demus have found that in no case does a phase of a given symbol mix continuously with a phase of another symbol. The method is simple and has been used for the identification of a number of new phases, but, of course, it does not throw light on the precise nature of the molecular order in these phases. Systematic X-ray investigations have been carried out during the last decade at several laboratories, and, particularly with the availability of synchrotron X-ray sources, considerable progress has been made in elucidating the structures.
The notation of Sackmann and Demus follows the order of discovery of the different phases and bears no relation to the molecular packing. The broad structural features of these phases are summarized in table 5.1.1. A more detailed description of these structures may be found in the excellent reviews by Pershan and by Leadbetter.
The unique optical properties of the cholesteric phase were recognized by both Reinitzer and Lehmann at the time of their early investigations which culminated in the discovery of the liquid crystalline state. When white light is incident on a ‘planar’ sample (whose optic axis is perpendicular to the glass surfaces) selective reflexion takes place, the wavelengths of the reflected maxima varying with the angle of incidence in accordance with Bragg's law. At normal incidence, the reflected light is strongly circularly polarized; one circular component is almost totally reflected over a spectral range of some 100 Å, while the other passes through practically unchanged. Moreover, contrary to usual experience, the reflected wave has the same sense of circular polarization as that of the incident wave.
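For orientation, the standard relations for this selective reflexion at normal incidence (with n̄ the mean refractive index, Δn the birefringence and P the pitch of the helix; a textbook result quoted here for convenience rather than derived in this section) are

\[
\lambda_0 = \bar{n}\,P, \qquad \Delta\lambda \simeq P\,\Delta n ,
\]

the reflected peak moving to shorter wavelengths as the angle of incidence increases, in the Bragg-like manner referred to above.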
Along its optic axis, the medium possesses a very high rotatory power, usually of the order of several thousands of degrees per millimetre. In the neighbourhood of the region of reflexion, the rotatory dispersion is anomalous and the sign of the rotation is opposite on the two sides of the reflexion band. The behaviour is rather similar to that of an optically active molecule in the vicinity of an absorption band. Following the theoretical work of Mauguin, Oseen and de Vries, these remarkable properties can now be explained quite rigorously in terms of the spiral structure represented schematically in fig. 1.1.4.
Propagation along the optic axis for wavelengths ≪ pitch
Basic theory
We shall first consider the propagation of light along the optic axis for wavelengths much smaller than the pitch so that reflexion and interference effects may be neglected.
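In this regime the propagation is adiabatic: the two waves are very nearly linearly polarized along and perpendicular to the local director and simply follow the twist of the structure, provided the commonly quoted (Mauguin) condition

\[
\frac{P\,\Delta n}{2\lambda} \gg 1
\]

is satisfied, where Δn is the local birefringence and P the pitch. This is the standard statement of the adiabatic limit, given here only as a guide to the analysis that follows.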
In this chapter we shall discuss the continuum theory of nematic liquid crystals and some of its applications. Many of the most important physical phenomena exhibited by the nematic phase, such as its unusual flow properties or its response to electric and magnetic fields, can be studied by regarding the liquid crystal as a continuous medium. The foundations of the continuum model were laid in the late 1920s by Oseen and Zocher, who developed a static theory which proved to be quite successful. The subject lay dormant for nearly thirty years afterwards until Frank re-examined Oseen's treatment and presented it as a theory of curvature elasticity. Dynamical theories were put forward by Anzelius and Oseen, but the formulation of general conservation laws and constitutive equations describing the mechanical behaviour of the nematic state is due to Ericksen and Leslie. Other continuum theories have been proposed, but it turns out that the Ericksen–Leslie approach is the one that is most widely used in discussing the nematic state.
The nematic liquid crystal differs from a normal liquid in that it is composed of rod-like molecules with the long axes of neighbouring molecules aligned approximately parallel to one another. To allow for this anisotropic structure, we introduce a vector n to represent the direction of preferred orientation of the molecules in the neighbourhood of any point. This vector is called the director. Its orientation can change continuously and in a systematic manner from point to point in the medium (except at singularities).
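For reference, the curvature (Frank) elastic free-energy density that the continuum theory builds from this director field is conventionally written, in the standard splay–twist–bend form (quoted here for orientation rather than derived), as

\[
F = \tfrac{1}{2} K_1 (\nabla\cdot\mathbf{n})^2 + \tfrac{1}{2} K_2 (\mathbf{n}\cdot\nabla\times\mathbf{n})^2 + \tfrac{1}{2} K_3\,|\mathbf{n}\times(\nabla\times\mathbf{n})|^2 ,
\]

where K₁, K₂ and K₃ are the splay, twist and bend elastic constants respectively.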
Chapter 3 has been devoted to the modeling of the dynamics of neurons. The standard model we arrived at contains the main features which have been revealed by neuroelectrophysiology: the model considers neural nets as networks of probabilistic threshold binary automata. Real neural networks, however, are not mere automata networks. They display specific functions and the problem is to decide whether the standard model is able to show the same capabilities.
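As a minimal illustration of what such a probabilistic threshold binary automaton looks like, here is a sketch in Python with illustrative names and a sigmoidal noise law; the precise conventions of the standard model referred to above may differ in detail.

```python
import numpy as np

def update_neuron(weights, states, threshold=0.0, beta=5.0, rng=np.random.default_rng()):
    """One stochastic update of a threshold binary neuron (state 0 or 1).

    weights   -- synaptic efficacies feeding this neuron (illustrative values)
    states    -- current 0/1 states of the presynaptic neurons
    beta      -- inverse noise level; beta -> infinity recovers a deterministic threshold unit
    """
    potential = np.dot(weights, states) - threshold           # membrane potential
    p_fire = 1.0 / (1.0 + np.exp(-2.0 * beta * potential))    # sigmoidal firing probability
    return int(rng.random() < p_fire)
```

In the zero-noise limit the unit simply fires whenever its membrane potential exceeds the threshold, which is the deterministic automaton of the standard model.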
Memory is considered as one of the most prominent properties of real neural nets. Everyday experience shows that imprecise, truncated information is often sufficient to trigger the retrieval of full patterns. We correct misspelled names, we associate images or flavors with sounds and so on. It turns out that the formal nets display these memory properties if the synaptic efficacies are determined by the laws of classical conditioning which have been described in section 2.4. Bringing observations from neurophysiology and from experimental psychology into a single framework, to account for an emergent property of neuronal systems, is an achievement of the theory of neural networks.
The central idea behind the notion of conditioning is that of associativity. It has given rise to many theoretical developments, in particular to the building of simple models of associative memory which are called Hebbian models. The analysis of Hebbian models has been pushed rather far, and a number of analytical results relating to Hebbian networks are gathered in this chapter. More refined models are treated in the following chapters.
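To make the Hebbian prescription concrete, here is a deliberately small sketch (in Python, with illustrative sizes; a generic Hopfield-type construction rather than a transcription of the specific models analysed in the chapter) that stores a few ±1 patterns in the synaptic matrix and retrieves one of them from a corrupted cue.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                                    # neurons, stored patterns (illustrative sizes)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian efficacies: J_ij = (1/N) * sum_mu xi_i^mu * xi_j^mu, with no self-coupling
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

# Start from a corrupted version of pattern 0 (about 20% of the bits flipped)
state = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)

# Zero-noise asynchronous relaxation: each neuron aligns with its local field
for _ in range(10 * N):
    i = rng.integers(N)
    state[i] = 1 if J[i] @ state >= 0 else -1

overlap = state @ patterns[0] / N                # close to 1.0 when retrieval succeeds
print(f"overlap with the stored pattern: {overlap:.2f}")
```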
Something essential is missing from the description of memory we have introduced in previous chapters. A neural network, even isolated, is a continuously evolving system which never settles indefinitely in a steady state. We are able to retrieve not only single patterns but also ordered strings of patterns. For example, a few notes are enough for an entire song to be recalled, or, after training, one is able to go through the complete set of movements which are necessary for serving in tennis. Several schemes have been proposed to account for the production of memorized strings of patterns. Simulations show that they perform well, but this says nothing about the biological relevance of the mechanisms they involve. In actual fact no observation supporting one or other of the schemes has been reported so far.
Parallel dynamics
Up to now the dynamics has been built so as to make the memorized patterns its fixed points. Once the network settles in one pattern it stays there indefinitely, at least for low noise levels. We have seen that fixed points are the asymptotic behaviors of rather special neural networks, namely those which are symmetrically connected. In asymmetrically connected neural networks whose dynamics is deterministic and parallel (the Little dynamics at zero noise level), the existence of limit cycles is the rule. It is then tempting to imagine that the retrieval of temporal sequences of patterns occurs through limit cycles.
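One frequently quoted prescription along these lines (given here only as a sketch; ξ^µ denotes the µ-th memorized pattern and λ the relative weight of the sequential term) adds an asymmetric part to the Hebbian couplings,

\[
J_{ij} = \frac{1}{N}\sum_{\mu} \xi_i^{\mu}\xi_j^{\mu} \;+\; \frac{\lambda}{N}\sum_{\mu} \xi_i^{\mu+1}\xi_j^{\mu},
\]

so that the symmetric term stabilizes each pattern while the asymmetric term pushes the parallel dynamics from pattern µ towards pattern µ+1, and the sequence is run through as a limit cycle.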
The architectures of the neural networks we considered in Chapter 7 are made exclusively of visible units. During the learning stage, the states of all neurons are entirely determined by the set of patterns to be memorized. They are, so to speak, pinned, and the relaxation dynamics plays no role in the evolution of synaptic efficacies. How to deal with more general systems is not a simple problem. Endowing a neural network with hidden units amounts to adding many degrees of freedom to the system, which leaves room for ‘internal representations’ of the outside world. The building of learning algorithms that enable general neural networks to set up efficient internal representations is a challenge which has not yet been met in a fully satisfactory way. Pragmatic approaches have been pursued, however, mainly using the so-called back-propagation algorithm. We owe the current excitement about neural networks to the surprising successes that have been obtained so far by calling upon that technique: in some cases the neural networks seem to extract the unexpressed rules that are hidden in sets of raw data. But for the moment we really understand neither the reasons for this success nor those for the (generally unpublished) failures.
The back-propagation algorithm
A direct derivation
To solve the credit assignment problem is to devise means of building relevant internal representations; that is to say, to decide which state I^{µ, hid} of the hidden units is to be associated with a given pattern I^{µ, vis} of the visible units.
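The following is a deliberately minimal sketch of the algorithm (in Python; the XOR task, the network sizes and the learning rate are illustrative choices, not those of the text): the output error is propagated backwards through the layers, so that the hidden units receive the ‘credit’ for their share of the error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, the classic mapping that cannot be realized without hidden units
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros(4)   # input  -> hidden
W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
eta = 0.5                                                    # learning rate

for epoch in range(20000):
    # forward pass
    H = sigmoid(X @ W1 + b1)            # hidden states: the "internal representation"
    Y = sigmoid(H @ W2 + b2)            # output states

    # backward pass: propagate the output error to assign credit to the hidden units
    delta_out = (Y - T) * Y * (1 - Y)
    delta_hid = (delta_out @ W2.T) * H * (1 - H)

    W2 -= eta * H.T @ delta_out;  b2 -= eta * delta_out.sum(axis=0)
    W1 -= eta * X.T @ delta_hid;  b1 -= eta * delta_hid.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))   # should approach [0, 1, 1, 0]
```

On unlucky initializations the descent can stall in a poor minimum; a different seed or more iterations may then be needed, which already hints at the limitations discussed later.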
A neural network self-organizes if learning proceeds without evaluating the relevance of output states. Input states are the sole data to be given and during the learning session one does not pay attention to the performance of the network. How information is embedded into the system obviously depends on the learning algorithm, but it also depends on the structure of input data and on architectural constraints.
The latter point is of paramount importance. In the first chapter we saw that the central nervous system is highly structured, that the topologies of signals conveyed by the sensory tracts are somehow preserved in the primary areas of the cortex and that different parts of the cortex process well-defined types of information. A comprehensive theory of neural networks must account for the architecture of the networks. Up to now this has hardly been the case, since only two types of structures have been distinguished, the fully connected networks and the feedforward layered systems. In reality the structures themselves are the result of the interplay between a genetically determined gross architecture (the sprouting of neuronal contacts towards defined regions of the system, for example) and the modifications of this crude design by learning and experience (the pruning of the contacts). The topology of the networks, the functional significance of their structures and the form of learning rules are therefore closely intertwined entities. There is as yet no global theory explaining why the structure of the CNS is the one we observe and how its different parts cooperate to produce such an efficient system, but there have been some attempts to explain at least the simplest functional organizations, those of the primary sensory areas in particular.
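One well-known algorithm in this spirit, offered here only as an illustrative sketch (Kohonen's self-organizing map, which is not necessarily the specific model the text alludes to), lets units on a fixed one-dimensional array compete for inputs while dragging their array neighbours along, so that the topology of the input space is reproduced along the array.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 20                                   # units arranged along a one-dimensional array
weights = rng.random((n_units, 2))             # each unit has a weight vector in the 2-D input space

def train_som(weights, n_steps=5000, eta=0.1, sigma=2.0):
    for _ in range(n_steps):
        x = rng.random(2)                                         # random input point
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
        # Neighbours of the winner *on the array* (not in input space) are updated as well
        dist = np.abs(np.arange(len(weights)) - winner)
        h = np.exp(-dist**2 / (2 * sigma**2))
        weights += eta * h[:, None] * (x - weights)
    return weights

weights = train_som(weights)
# After training, units that are adjacent on the array map to neighbouring regions of the input space.
```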
This text started with a description of the organization of the human central nervous system and it ends with a description of the architecture of neurocomputers. An inattentive reader would conclude that the latter is an implementation of the former, which obviously cannot be true. The only claim is that a small but significant step towards the understanding of the processes of cognition has been taken in recent years. The most important point is probably that recent advances have made it increasingly clear that real neural networks can be treated as physical systems. Theories can be built and predictions can be compared with experimental observations. This methodology takes the neurosciences at large closer and closer to the classical ‘hard’ sciences such as physics or chemistry. The text strives to explain some of the progress in the domain, and we have seen how productive the imagination of theoreticians is.
For some biologists, however, the time of theorizing about neural nets has not come yet owing to our current lack of knowledge in the field. The question is: are the models we have introduced in the text really biologically relevant? This is the issue I would like to address in this last chapter. Many considerations are inspired by the remarks which G. Toulouse gathered in the concluding address he gave at the Bat-Sheva seminar held in Jerusalem in May 1988.
Clearly, any neuronal dynamics can always be implemented in classical computers and therefore we could wonder why it is interesting to build dedicated neuronal machines. The answer is two-fold:
Owing to the inherent parallelism of neuronal dynamics, the time gained by using dedicated machines rather than conventional ones can be considerable, making it possible to solve problems which are out of the reach of the most powerful serial computers.
It is perhaps even more important to become aware that dedicated machines compel one to think differently about the problems one has to solve. Programming a neurocomputer does not mean writing a linear series of instructions, step by step. Instead, one is forced to think more globally, in terms of phase space: to figure out an energy landscape and to determine an expression for this energy. Z. Pylyshyn made this point clear enough in the following statement (quoted by D. Waltz):
‘What is typically overlooked (when we use a computational system as a cognitive model) is the extent to which the class of algorithms that can even be considered is conditioned by the assumptions we make regarding what basic operations are possible, how they may interact, how operations are sequenced, what data structures are possible and so on. Such assumptions are an intrinsic part of our choice of descriptive formalism.’
Mind has always been a mystery and it is fair to say that it still is one. Religions settle this irritating question by assuming that mind is non-material: it is merely linked to the body for the duration of a life, a link that death breaks. It must be realized that this metaphysical attitude pervaded even the theorization of natural phenomena: to ‘explain’ why a stone falls and a balloon filled with hot air tends to rise, Aristotle, in the fourth century BC, assumed that stones house a principle (a sort of mind) which makes them fall and that balloons embody the opposite principle, which makes them rise. Similarly Kepler, at the turn of the seventeenth century, thought that the planets were maintained on their elliptical tracks by some immaterial spirits. To cite a last example, chemists were convinced for quite a while that organic molecules could never be synthesized, since their synthesis required the action of a vital principle. Archimedes, about a century after Aristotle, Newton, a century after Kepler, and Wöhler, who carried out the first synthesis of urea using only mineral materials, disproved these prejudices and, at least for positivists, there is no reason why mind should be kept outside the realm of experimental observation and logical reasoning.
We find in Descartes the first modern approach to mind.
Neural networks are at the crossroads of several disciplines and the putative range of their applications is immense. The exploration of the possibilities is just beginning. Some domains, such as pattern recognition, which seemed particularly suited to these systems, still resist analysis. On the other hand, neural networks have proved to be a convenient tool for tackling combinatorial optimization problems, a domain to which, at first sight, they seemed to have no application. This shows how difficult it is to foresee the main lines of developments yet to come. All that can be done now is to give a series of examples, which we will strive to arrange in a logical order, although the link between the various topics is sometimes tenuous. Most of the applications we shall present were put forward before the fall of 1988.
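To make the optimization remark concrete: in the Hopfield–Tank style of approach (sketched here only in generic form, since the mapping differs from problem to problem), the couplings J and thresholds θ are chosen so that the network energy

\[
E = -\tfrac{1}{2}\sum_{i\neq j} J_{ij} S_i S_j \;-\; \sum_i \theta_i S_i
\]

is low precisely for those configurations S that encode good solutions; the relaxation of the network towards low-energy states then carries out the search.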
Domains of applications of neural networks
Neural networks can be used in different contexts:
For the modeling of simple biological structures whose functions are known. The study of central pattern generators is an example.
For the modeling of higher functions of central nervous systems, in particular of those properties such as memory, attention, etc., which experimental psychology strives to quantify. Two strategies may be considered. The first consists in explaining the function of a given neural formation (as far as the function is well understood) by taking all available data on its actual structure into account. This strategy has been put forward by Marr in his theory of the cerebellum. The other strategy consists in looking for the minimal constraints that a neuronal architecture has to obey in order to account for some psychophysical property. The structure is now a consequence of the theory. If the search has been successful, it is tempting to identify the theoretical construction with biological structures which display the same organization.