Wordtrade.com


Review Essays of Academic, Professional & Technical Books in the Humanities & Sciences



Science and Sensibility: The Elegant Logic of the Universe by Keith J. Laidler (Prometheus Books) The vast information explosion that science has produced continues to barrage us daily with both the trivial and the profound. How should we cope with this massive influx of new ideas about science and related areas? Though the public seems eager, even infatuated, to acquire more and more information, few understand how to make use of it or how to integrate it to create a coherent worldview. Paradoxically, as the amount of information increases, knowledge-the result of our brains selecting and processing information to form an intelligible view of the world-has declined.

SCIENCE AND SENSIBILITY is exquisitely designed to provide a thorough grounding in the methods of science, stressing the importance of arriving at rational conclusions by carefully considering and evaluating the evidence available. Acclaimed science writer and chemistry professor Keith J. Laidler begins by reviewing the major contributions of the different branches of science-including biology, chemistry, physics, astronomy, and geology-and showing how, together, the research conducted in these areas leads to a unified conception of our place in the universe. He asserts that by dispelling the air of mystery that pervades the public's perception of science, we can more fully appreciate the beauty of the universe. Although much still remains to be discovered, Laidler stresses that evidence from every scientific field supports an elegantly logical and internally consistent picture of the formation and development of the universe and of life within it.

Dr. Laidler also explores the relationship between science and culture. Focusing on such topics as the nature-nurture debate, the importance of chance in everyday events, and the relationship between religion and science, he demonstrates how many of the important lessons learned from a study of science apply to many issues and situations faced by society. He points out that the scientific method of reaching the truth is used by judges in courts of law and by scholars in a wide variety of academic fields in the humanities and elsewhere, as well as by scientists. By learning to weigh relevant evidence in an unbiased fashion, we can objectively judge the enormous glut of information that surrounds us, integrate the useful portions into a scientific understanding, and still retain our sense of wonder.

This elegantly written and lucid explanation of science, both historically and in contemporary life, will not only spark interest in the wonders of many fascinating scientific disciplines but will stimulate readers to think more critically.

The One Culture?: A Conversation About Science by Jay A. Labinger and H. M. Collins (University of Chicago Press) In recent years, combatants from one or the other of what C. P. Snow famously called "the two cultures" (science versus the arts and humanities) have launched bitter attacks but have seldom engaged in constructive dialogue. In The One Culture? Jay A. Labinger and Harry Collins have gathered together some of the world's foremost scientists and sociologists of science to exchange opinions and ideas rather than insults. The contributors find surprising areas of broad agreement in a genuine conversation about science, its legitimacy and authority as a means of understanding the world, and whether science studies undermines the practice and findings of science and scientists.

Who has the right to speak about science? What is the proper role of scientific knowledge? How should scientists interact with the rest of society in decision making? Because science occupies a central position in the world today, such questions are vitally important. Although there are no simple solutions, The One Culture? shows the reader exactly what is at stake in the so-called Science Wars and provides a valuable framework for how to go about seeking the answers we so urgently need.

Fashionable Nonsense: Postmodern Intellectuals' Abuse of Science by Alan D. Sokal, Jean Bricmont (St. Martin's Press) PAPERBACK In 1996, physicist Alan Sokal published an essay in Social Text--an influential academic journal of cultural studies--touting the deep similarities between quantum gravitational theory and postmodern thinking. Soon thereafter, the essay was revealed to be a brilliant parody, a catalog of nonsense written in erudite but impenetrable lingo. The event sparked a furious debate in academic circles and across many disciplines--psychology, sociology, feminist studies, history, literature, mathematics, and the hard sciences--about the use and abuse of scientific theories in fields outside the scope of science.

Now Sokal and fellow physicist Jean Bricmont expand from where the hoax left off. In a witty and closely reasoned argument, the authors thoroughly document the misuse of scientific concepts in the writings of some of the most fashionable contemporary intellectual icons. From Jacques Lacan and Julia Kristeva to Luce Irigaray and Jean Baudrillard, the authors demonstrate the errors made by some postmodernists in their attempts to use science to illustrate and support their arguments. More generally, Sokal and Bricmont challenge the notion--held in some form by many thinkers in a range of academic fields--that scientific theories are mere "narratives" or social constructions.

At once provocative and measured, Fashionable Nonsense explores the crucial question of what science is and is not, and suggests both the abilities and the limits of science to describe the conditions of existence.

The Merely Personal: Observations on Science and Scientists by Jeremy Bernstein (Ivan R. Dee) A collection of essays representing the last ten years of the author's work. A compilation of observations on science and scientists, these essays include critical reviews of books on Einstein, descriptions of the author's encounters with influential scientists, and an attempt to explain quantum theory through the use of Tom Stoppard's play, Hapgood. Other essays portray J. Robert Oppenheimer, father of the atomic bomb and head of the prestigious Institute for Advanced Study at Princeton during Bernstein's time there; mathematician Kurt Gödel, who slowly descended into mental illness; and the taciturn Paul Dirac, one of the founders of quantum theory. In writing about scientists and others, like the poets W.H. Auden and Stephen Spender, Bernstein surveys the difference between "genius" and the "merely very good." In an attractive historical digression, he depicts how he investigated the circumstances of a portentous meeting between two contemporary geniuses, poet John Donne and astronomer Johannes Kepler, in 1619. He goes on to discuss science as a muse for writers, and then explains what Tom Stoppard--whom he admires immensely--got wrong about quantum physics in his play Hapgood. In another piece, he suggests that Isaac Newton was not in fact being humble when he said, "If I have seen farther, it is by standing on the shoulders of giants." For a former staff writer at the New Yorker, Bernstein is stylistically unexciting in many essays, although the writing perks up toward the end of the collection. Those in the know about scientific biographies probably will not find much that is novel in these character sketches, but they will enjoy the rest, and readers without much knowledge of modern science will learn from his carefully laid-out explications of relativity and quantum mechanics.

LIFE IS A MIRACLE: An Essay Against Modern Superstition  by Wendell Berry ($21.00, hardcover, 124 pages, Counterpoint; ISBN: 1582430586)

Berry takes on the assumptions contained in Wilson's CONSILIENCE, offering a counterview that celebrates a nature not reducible to our understanding of it or to its exploitation by our technology. For the thousands who feel that Wilson's views offer a balanced account of science, Berry's meditations will challenge and perhaps refine that viewpoint. LIFE IS A MIRACLE deserves a wide readership. Do not overlook it.

CONSILIENCE: The Unity of Knowledge by Edward O. Wilson, ($14.00, paperback, 367 pages, Random House; ISBN: 067976867X) HARDCOVER, THORNDIKE LARGE PRINT

"An original work of synthesis ... a program of unrivalled ambition: to unify all the major branches of knowledge, sociology, economics, the arts and religion, under the banner of science." The New York Times

One of our greatest living scientists presents us with a work of majestic learning and ambition whose central argument is at once path-clearing and as old as the Enlightenment. For biologist Edward O. Wilson believes that all knowledge is intrinsically unified, and that behind disciplines as diverse as physics and biology, anthropology and the arts, lies a small number of natural laws, whose interlocking he calls consilience.

Using the natural sciences as his model, Wilson forges dramatic links between fields. He explores the chemistry of the mind and the genetic bases of culture. He postulates the biological principles underlying works of art from cave drawings to Lolita. Ranging the spectrum of human knowledge and synthesizing it into a dazzling whole, CONSILIENCE is science in the grand visionary tradition of Newton, Einstein, and Feynman.

THE PEARLY GATES OF CYBERSPACE: A History of Space from Dante to the Internet by Margaret Wertheim ($24.95, hardcover, 336 pages, W.W. Norton & Company; ISBN: 039304694X)

The Internet may seem an unlikely gateway for the soul but, as Margaret Wertheim argues in this bold, imaginative book, cyberspace has in recent years become a repository for immense spiritual yearning. The perfect realm awaits, we are told, not behind the pearly gates but behind electronic gateways labeled ".com" and ".net."

Seeking to understand this mapping of spiritual desire onto digitized space, Wertheim takes us on an astonishing historical journey, tracing the evolution of our conception of space from the Middle Ages to today. Beginning with the cosmology of Dante, we see how the medievals saw themselves embedded in both physical space and spiritual space. With the rise of modern science, however, space came to be seen in purely physical terms, with spiritual space written out of the realm of reality. As Wertheim charts this seismic shift in the Western world picture, she follows the development of the modern scientific view of space from its first physicalist flickerings in the thirteenth century, through the discovery of astronomical space in the sixteenth and seventeenth, to the relativistic conception of space in the early twentieth century, and on to today with contemporary physicists' bizarrely beautiful notion of hyperspace.

Within this context, Wertheim suggests that cyberspace returns us to an almost medieval position: Once again we have a physical space for body and an immaterial space that many people hope will be a new space for soul. By linking the science of space to the wider cultural and religious milieu, Wertheim shows that the spiritualizing of cyberspace fits into a long history of imagined spaces. In particular, it may be seen as an attempt to realize a technological version of the Christian space of heaven.

"The digital reenchantment of the world, as Wertheim points out in this brilliant and troubling book, could as easily be the path to hell as the portal to paradise."

Margaret Wertheim is a science writer and commentator whose articles have been published in magazines and newspapers around the world, including the New York Times, The Sciences, and New Scientist. She was the writer/presenter of the PBS documentary "Faith and Reason" and is the author of Pythagoras' Trousers, a history of the relationship between physics and religion in Western culture. She lectures at universities and colleges around the country and currently lives in Los Angeles.

ROCK OF AGES: Science and Religion in the Fullness of Life by Stephen Jay Gould ($18.95, hardcover, 224 pages, Library of Contemporary Thought, Ballantine Books, ISBN: 0345430093) DOVE AUDIO VERSION

"People of good will wish to see science and religion at peace.... I do not see how science and religion could be unified, or even synthesized, under any common scheme of explanation or analysis; but I also do not understand why the two enterprises should experience any conflict." So states internationally renowned evolutionist and best-selling author Stephen Jay Gould in the simple yet profound thesis of his brilliant new book.

Writing with bracing intelligence and elegant clarity, Gould sheds new light on a dilemma that has plagued thinking people since the Renaissance. Instead of choosing between science and religion, Gould asks, why not opt for a golden mean that accords dignity and distinction to each realm?

At the heart of Gould's penetrating argument is a lucid, contemporary principle he calls NOMA (for "nonoverlapping magisteria" of the two domains of human concern), a "blessedly simple and entirely conventional resolution" that allows science and religion to coexist peacefully in a position of respectful noninterference. Science defines the natural world; religion, our moral world, in recognition of their separate spheres of influence.

In elaborating and exploring this thought-provoking concept, Gould delves into the history of science, sketching affecting portraits of scientists and moral leaders wrestling with matters of faith and reason. Stories of seminal figures such as Galileo, Darwin, and Thomas Henry Huxley make vivid his argument that individuals and cultures must cultivate both a life of the spirit and a life of rational inquiry in order to experience the fullness of being human.

In his best-selling books Wonderful Life, The Mismeasure of Man, and Questioning the Millennium, Gould has written on the abundance of marvels in human history and the natural world. In ROCK OF AGES, Gould's passionate humanism, ethical discernment, and erudition are fused to create a dazzling gem of contemporary cultural philosophy. As the world's preeminent Darwinian theorist writes, "I believe, with all my heart, in a respectful, even loving concordat between science and religion."

The author of more than fifteen books, Stephen Jay Gould is also author of the longest running contemporary series of scientific essays, which appears monthly in Natural History. He is the Alexander Agassiz Professor of Zoology and professor of geology at Harvard; is curator for invertebrate paleontology at the university's Museum of Comparative Zoology; and serves as the Vincent Astor Visiting Professor of Biology at New York University. He lives in Boston, Massachusetts, and New York City.

Embryos, Galaxies, and Sentient Beings: How the Universe Makes Life by Richard Grossinger, with a preface by Harold B. Dowse, and a foreword by John E. Upledger (North Atlantic Books) As an embryo is differentiating into a being, what recruits unruly molecules out of the vast anonymous entropy of nature into the extraordinary cyclonic motifs known as life? What kindles mind inside sheets of matter?

Scientists claim that probabilistic sets of equations underlie DNA, organizing and evolving through random selective processes and expressing themselves as genes and proteins. But what motivated atoms to make genes in the first place? Why is the universe conscious?

FOR THE TIME BEING by Annie Dillard ($22.00, hardcover,  205 pages, Knopf; ISBN: 0375403809) Dove Books Audio Cassette

This personal narrative surveys the panorama of our world, past and present. Here is a natural history of sand, a catalogue of clouds, a batch of newborns on an obstetrical ward, a family of Mongol horsemen. Here is the story of Jesuit paleontologist Teilhard de Chardin digging in the deserts of China. Here is the story of Hasidic thought rising in Eastern Europe. Here are defect and beauty together, miracle and tragedy, time and eternity. Dillard poses questions about God, natural evil, and individual existence. Personal experience, science, and religion bear on a welter of fact. How can an individual matter? How might one live?

Compassionate, informative, enthralling, always surprising, FOR THE TIME BEING shows one of our most original writers, her breadth of knowledge matched by keen powers of observation, all of it informing her relentless curiosity at the fullness of her writing powers.

NIGHT COMES TO THE CRETACEOUS: Dinosaur Extinction and the Transformation of Modern Geology  by James Lawrence Powell ($22.95, hardcover, 325 pages, W.H. Freeman & Co., ISBN: 0716731177)

It is widely touted today that the demise of the dinosaurs was caused by a cataclysmic collision of an asteroid and the Earth. Less than twenty years ago, however, this was considered a highly irregular, if not improbable, perhaps even preposterous, theory.

Where did this theory come from? What evidence do we have to support it? And how could scientists have been wrong, if indeed they are wrong, about such a major, significant "event" for so long?

In NIGHT COMES TO THE CRETACEOUS, President and Director of the Los Angeles County Museum of Natural History James Lawrence Powell explains the fascinating story, perhaps the best lesson this century, of how scientists challenged and overthrew orthodoxy, and demonstrates how science really works: not behind ivied walls or in mythical ivory towers, but down in the trenches. Drawing on information from many disciplines (vertebrate paleontology, micropaleontology, evolutionary biology, rare-metal chemistry, astronomy, magnetism, statistics, geological age dating, and the physics of nuclear explosions), Powell presents the rich story behind the once outrageous suggestion that the dinosaurs died as a result of earth's catastrophic collision with an extraterrestrial object, and the scientific melee that ensued. The cast of characters who took part in the bitter debates that followed clearly shows that scientists, too, are passionate, albeit sometimes flawed and stubborn, human beings.

NIGHT COMES TO THE CRETACEOUS is the first comprehensive and objective account of how this incredible theory changed minds and the course of science forever. Readers young and old will revel in this candid portrait of how one dinosaur extinction theory waned and another rose to take its place in the classroom.

About the Author: James L. Powell is President and Director of the Los Angeles County Museum of Natural History. He taught geology for twenty years at Oberlin College, where he also served as acting President.


Boston Studies in the Philosophy of Science (Kluwer Academic Publishers) $153.00, hardcover, 319 pages, notes, bibliography, indexes, color plates


This collection of essays ranges from phenomenological descriptions of the beautiful in science to analytical explorations of the philosophical conjunction of the aesthetic and the scientific. The book is organized around two central tenets. The first is that scientific experience is laden with an emotive content of the beautiful, which is manifest in the conceptualization of raw data, both in the particulars of presenting and experiencing the phenomenon under investigation, and in the broader theoretical formulation that binds the facts into unitary wholes. The second major theme acknowledges that there may be deeply shared philosophical foundations underlying science and aesthetics, but in the twentieth century such commonality has become increasingly difficult to discern. The problem accounts in large measure for the recurrent debate on how to link Science and Beauty, and for the latent tension inherent in the effort to explore what is oftentimes only their intuited synthesis.

The tension between art and science may be traced back to the Greeks. What became "natural philosophy" and later "science" has traditionally been posed as a fundamental alternative to poetry and art. It is a theme that has commanded central attention in Western thought, as it captures the ancient conflict of Apollo and Dionysus over what deserves to order our thought and serve as the aspiration of our cultural efforts. The modern schism between art and science was again clearly articulated in the Romantic period and seemingly grew to a crescendo fifty years ago as a result of the debate concerning atomic power. The discussion has not abated in the physical sciences, and in fact has dramatically expanded most prominently into the domains of ecology and medicine. Issues concerning the role of science in modern society, although heavily political, must be regarded at heart as deeply embedded in our cultural values. Although each generation addresses them anew, the philosophical problems which lay at the foundation of these fundamental concerns always appear fresh and difficult.

This anthology of original essays considers how science might have a greater commonality with art than was perhaps realized in a more positivist era. The contributors are concerned with how the aesthetic participates in science, both as a factor in constructing theory and influencing practice. The collection is thus no less than a spectrum of how Beauty and Science might be regarded through the same prism. Because of its eclectic nature, these essays will appeal to a wide audience troubled by the causes and consequences of our Two Cultures. Philosophers of science and aesthetics, as well as practicing artists and scientists, will hopefully find these essays useful.

This group of essays ranges from what I would call the phenomenological description of the beautiful in science, to analytical exploration of the conjunction of the aesthetic and the scientific. There is enormous diversity as to how the contributors to this volume regarded this task. Part of the eclecticism is reflected by the various disciplines represented: art history, biology, philosophy, physics, mathematics, history of science, and sociology. But I suspect that the issue draws upon much more variegated opinions of how to explore such a complex issue, reflections that override the particular academic perspective of the writer. This collection is no less than a spectrum of how art/beauty/aesthetics and science might be regarded through the same prism, and the refracted images are startling for their diversity. But there is some order to the project and we might broadly schematize the major themes.

The book is organized around two central tenets: The first is that scientific experience is laden with aesthetic content of the beautiful, which is manifest both in the particulars of presenting and experiencing the phenomenon under investigation, and in the broader theoretical formulation that binds the facts into unitary wholes. This orientation is what I refer to as the shared ethos of the project, but coupled to it is the more prominent sense of separation, a schism between the two domains. Thus the second major theme acknowledges that there may be deeply shared philosophical foundations grounding science and aesthetics, but in the twentieth century such commonality has become increasingly difficult to discern. This problem accounts in large measure for the recurrent attempts to address how science and aesthetics are linked, and the tension inherent in the effort to explore oftentimes only an intuited elusive synthesis. These essays therefore are diverse in the sense of approaching the topic from several points of view, and in their relative emphasis on either the synthetic or divisive character of the art-science relation.

David Kohn adroitly dissects the aesthetic influences on Charles Darwin's theory of evolution. Kohn carefully traces how two governing metaphors in On the Origin of Species, the "wedging" metaphor (the action of natural selection as a powerful force) and the "entangled bank" (expressing the interrelatedness of nature), operated within a particular aesthetic categorical framework to emerge in a profound scientific theory. Two themes are developed here. The first is that Darwin was subject to profound emotional reactions on his Beagle voyage, which provided the substantive foundation of Origin of Species, written more than twenty years later. For Darwin, the sublime and the beautiful not only were distinct emotions, but psychologically resided in tense balance, if not opposition: the peace of the former, the ecstasy of the latter. It was their tension that later framed the critical Darwinian theme, and their essential reconciliation was forged in the two striking metaphors of wedge and entangled bank. The second theme then shows how, in an aesthetic construction, these metaphors arose from Darwin's youthful and highly emotional experience on the Beagle. In tracing the origin of the wedge and the entangled bank, Kohn discerns how nature's balance of life and death in natural selection began for Darwin with the depiction of natural landscapes in terms of a Romantic aesthetic. The metaphors are shown to play important cognitive (and emotional) roles in the transition between Darwin's appreciation of natural phenomena and his logically structured scientific expression of that understanding. Kohn's persuasive and original thesis is that the long struggle to develop the theory of natural selection found its expression in large measure in the reconciliation of the sublime and the beautiful in the critical organizing force of these two striking metaphors; Kohn thereby offers a lucid and carefully crafted portrait of scientific creativity.

The fulcrum of creativity is used by Robert Root-Bernstein to attack the popular view of a two-cultures society. The distinction between science and art is based on an unacceptable distinction between thought and emotion, analysis and feeling. Yet, as many renowned scientists have argued, the work of science is both driven and sustained by an appreciation of beauty and a feeling of awe (e.g. Einstein, Dirac, Schrödinger). Analysis, emotion, and sensibility are integral components of both the scientific and the artistic process. The three levels of aesthetic experience - sensual, emotional/imaginative, and analytical - are common to the experience and process of science and art. The same applies to such elements as the play of tension and relief, the realization of expectations, and surprise upon encountering unexpected connections of meaning. These aesthetic elements can be found in a scientific discovery, just as they can be found in a good novel or a fine symphony. The understanding of an essential and deep affinity between (great) science and (great) art is supported by the claims of many scientists, who submit that an aesthetic drive underlies science. Root-Bernstein has assembled a large and diverse testament for that opinion. He cites some scientists who even insist that an aesthetic sensibility is a prerequisite for first-class scientific research.

He also adduces that the majority of scientists who were intellectual creators in their fields were also active in one or more of the arts. Moreover there are many examples of extremely fruitful interactions between artistic and scientific ways of thinking, so that he concludes that the claim of science and art embodying different approaches does not hold up to scrutiny.

An example of such a fusion is offered by Larry Holmes, who examines the classic Meselson-Stahl experiment, which has been characterized by many as "beautiful." By looking at this particular study, Holmes attempts to address the question of what informs the judgment of the beauty of an experiment. Does the judgment refer to a historically specific expression of an experiment or to a protocol? Is the beauty in the actual experiment or in its description? The Meselson-Stahl paper reported how DNA replicates, providing a decisive answer to an important problem in one stroke; as Meselson himself characterized it, it was "clean as a whistle," and others described the study as "beautiful," "elegant," and "wonderful." The cleanliness of the data, along with the striking simplicity and symmetry of the visual representation of the results (included in the original paper and found in standard biology textbooks), seems to have struck scientists as qualities of beauty. The pedagogical value of the experiment, too, is apparently connected with its aesthetic properties of simplicity and elegance. The features of simplicity and immediacy of the experimental results are present despite the fact that the knowledge presuppositions for carrying out and understanding the experiment are highly complex. The simplicity and symmetry of the findings are regarded as criteria of beauty, a theme that appears in several other papers.

Scientists obviously have described certain scientific insights and experiments as beautiful, but beyond such appraisals they might also consciously employ artistic design and license to depict their data. Michael Lynch and Samuel Edgerton visited astronomers who were constructing visual images from raw, mathematically represented stellar data, and found that the scientists deliberately attempted to aestheticize their presentation. This case study reverses the common notion of science playing a major role in defining the aesthetic of its culture (as discussed later in this book by Faxon), and shows how scientists, steeped in their cultural milieu, absorb an artistic temper, or orientation, and use its vocabularies and aesthetic judgments in composing images for analysis and publication. After briefly reviewing some historical connections between art and science, Lynch and Edgerton discuss the particular aesthetic factors invoked in digital image processing in astronomy. The technical site of image production - the image processing laboratory - has become a place where astronomers and their technical staff produce and reproduce images that are designed to appeal to various specialized and popular audiences. Choosing among an endless array of possibilities for turning "raw data" into processed images allows the information to be "read" and displayed in various ways, reflecting the scientists' "sense" of the visual depiction. Composed and recomposed to reveal "structure," images of a comet, for instance, might be highly varied and individualized. Thus the comet as a visual object is translated from raw electromagnetic data into multifarious visual images that in fact reflect an aestheticization process. Lynch and Edgerton offer some examples of how astronomers draw upon contemporary aesthetic sensibilities that were established decades ago by artists and later by mass media.
There is a self-conscious limit to the artistic foray, however, for the images addressed to a scientific audience are, in a sense, conservative; whereas color embellishes the dramatic effects used for popular audiences, journal articles and professional presentations largely eschew such bold images, and less dramatic monochromatic pictures are used. In any case, there are distinctive features of digital images that link them stylistically to "non-objective" paintings. Broadly, two basic areas of correspondence are identified: 1) a "play" between images, and a sensitivity to motion and energy rather than to surface and static form, and 2) a field of representation that is flattened and composed of color patches, which merge graphic, iconic, or semiotic features within the frame. This represents one pole, the postmodern aesthetic, whereas other, more "natural" styles are invoked by some astronomers, who edit out "artifact" and "humanize" their images. Irrespective of the artistic style, an aesthetic judgment is made in relation to the interpretation of the data, invoking an artistic translation to define a world far removed from direct visual perception. Art thus mediates science into human experience.

Aesthetic principles may also guide research programs, as discussed by Scott Gilbert and Marion Faber. They maintain that embryology is unique among the subfields of biology in that an aesthetic perspective has always been central to it. As holists, embryologists have conceptualized their research aesthetically. Harrison, for instance, looked for the order of development of different parts of the body and established rules of laterality and mirror-image duplications. By identifying rules of order and symmetry, he approached the parts as working harmoniously to form a coherent whole. Many embryologists of the early twentieth century chose to study embryos while recognizing that the field of genetics promised more successful careers. The aesthetics of embryology was central to their choice. Historically, there has been a tension between embryology and genetics. Gilbert and Faber suggest that a difference of aesthetic attitude seems to loom at the center of this tension. Geneticists have labeled embryologists "mystics" who believe developmental problems are too complex to be solved by science. On the other hand, embryologists have been repelled by the reductionist attitude of geneticists who would cast all embryonic development in terms of gene action. The holism of embryology is expressed in a philosophy of organicism. (Organicism views the whole as functionally prior to the parts.) It is an approach through which embryologists attempted to formulate an alternative ground between vitalism and reductionism. While genetics emphasized uniformity, reductionism, preformation, and simplicity, embryologists celebrated diversity, organicism, epigenesis, and complexity. Recently, genetics and embryology have been coming closer together, which Gilbert and Faber regard as a challenge both to the embryological aesthetic (the uniqueness of each species' development) and to the philosophy of holism.

Sahotra Sarkar has offered a provocative argument about how an aesthetic choice, namely formalism, has governed certain scientific disciplines in the twentieth century. He describes how, at the beginning of the twentieth century, European art discovered the power of formalism, already practiced widely in so-called primitive cultures. Formalism, the pursuit of forms for their own sake, takes on different meanings in various art forms. In painting, sculpture and photography, the form becomes the subject; in architecture, form dominates function. Forms are to be manipulated during the construction of a work of art; they are directly (i.e. sensually) appreciated, yet they may also serve as symbols. Abstraction thus must precede construction; however (and this is an important caveat), the search for "meaning" or "truth" is eschewed. The formalist's art is "non-representational because its subjects are the forms that are, in a sense, within-itself". After briefly tracing the significance of formalism in art and architecture, Sarkar turns to how formalism in both the physical and biological sciences similarly functions to confer a "fundamentalist" character on a theory, and how such "forms" are aesthetically chosen. He maintains that the choice of the physics of elementary particles, rather than of middle-sized objects, as fundamental is largely an aesthetic one. In particle physics, for instance, the usual defense of its fundamental importance is the argument that all other bodies in the universe are "composed of" these fundamental entities. But in the nether world of indistinguishable particles and transient resonances, the notion of "composed of" is highly problematic. To say that a proton is "composed of" undemonstrated quarks is quite different from saying that an organism is "composed of" certain organs. He suggests that the models which particle physicists construct are the result of a process akin to the method of analysis in formalist art.
With this and other examples from physics, Sarkar endeavors to show that aesthetic considerations, along with evidential ones, are important in the way scientists choose their priorities. He applies the same argument to biology, where he examines how an erroneous deciphering of the genetic code (the so-called comma-free code) involved a formalist approach. It was widely appealing until experimental test proved it wrong. He notes that it was the aesthetic qualities of this comma-free code, with its appeal to mathematical manipulation, that captured the fancy of early molecular biologists. More recently, and perhaps more importantly, Sarkar cites the sequencing of the human genome as another drive toward an ideal formalism. He has grave doubts about its brave promises, and believes its scientific appeal is based on its perceived aesthetic qualities.

Sarkar would draw our attention to an interesting postulate: the same pattern of choice apparent in the pursuit of the arts has also been manifest in the sciences. Formalism in the arts is mimicked by the pursuit of the smallest particles in physics, with the unproven hope that the principles found at that level will help explain phenomena at all other levels of organization. Similarly, a formal universalism of the genetic code is pursued at the expense of more complex biology. The similarities go even further. In the arts as in the sciences, the skills generally required of the formalist do not completely coincide with those required of those pursuing diversity and complexity. The skills of the formalist are often technical, and if abstraction is pursued for its own sake, attention to technique can become of paramount importance. Note that formalism is only one mode of artistic practice. Physics might pursue everyday objects and processes, and biology could focus on exploring the diversity and complexity of organic life. What is of cultural interest is that, instead, formalistic pursuits have caught our fancy.

The related issue of how aesthetic principles might govern scientific thinking in the broader venue of theory construction is pursued by Joseph Margolis and James McAllister, who each begin with a critique of Thomas Kuhn's assessment of aesthetic factors in the natural history of scientific theories. In assessing theories, scientists rely upon empirical criteria such as internal consistency, predictive accuracy and explanatory power. Besides empirical matters, however, aesthetic concerns are also operative; these cannot be defined in terms of a fixed set of properties, since what is considered attractive or beautiful has differed at different times and in different disciplines. In general, however, beauty in science (as in art) is identified with those features (whatever they may be) which convey an impression of aptness - they are appropriate, fitting or seemly. McAllister's paper contends that aesthetic criteria are as central to the scientist's acceptance of a theory as are empirical considerations. While a distinction can be drawn between empirical and aesthetic criteria, the latter are not merely "extra-scientific" (as they are sometimes judged) but an integral part of scientific development and change. The aesthetic canon is constructed from the aesthetic features of all past theories - an inductive mechanism which ensures that the aesthetic canon is conservative. What compels scientists to accept a new paradigm is that it performs better empirically; allegiance to the aesthetic canon must be suspended to accept a new theory. Indeed, for some the rupture is too deep, and they hang on to the established aesthetic paradigm, that is, to the conservative aesthetic criteria. McAllister illustrates his view with a historical change which (in contrast to the transition from the Ptolemaic to the Copernican system) was a genuine revolution: Kepler's theory that planetary orbits are elliptical.
This view violated a deeply rooted demand that the orbits be circular and uniform in motion. However, Kepler's theory was extremely powerful, effecting the conversion of scientists who were aesthetically repelled by it.

McAllister's paper argues that aesthetic factors are on the side of the conservative trend in the choice between theories, while empirical factors compel scientists toward innovation and radical breaks with established views. Joseph Margolis rejects the very basis of Kuhn's arguments regarding the role of aesthetics in scientific revolution as a "great muddle". Margolis is dissatisfied with Kuhn's attempt to examine the interface of scientific theory with aesthetics, since he maintains there are no useful definitions for such an exploration, nor can one establish an epistemic disjunction between "objective" and "subjective" as their respective grounding. Because there is no standard conceptual basis given in terms of "aesthetics" for pursuing any comparison between the sciences and the arts, the entire enterprise "has proved a complete shambles". Having thus summarily dismissed the very basis of our Elusive Synthesis, Margolis does, however, admit a certain nagging connection between science and art for consideration. Once he discards the need to secure scientific method, objectivity or rationality in a firm definition (in order to seek the influence of the aesthetic), and further rejects any settled distinction between aesthetic and nonaesthetic, he is prepared to offer another avenue for seeking conceptual linkages between science and art. He argues that there is in fact a common "reason" they both share: professional taste/reason in the sciences, as in the arts, is a function of historical practice. What counts as a "good" explanatory theory (or a good painting) is what accords with practice. Reason, in this view, is "an artifact of historical life", and the aesthetic is a convenient "catchall term for the informality with which the most formal criteria can be legitimated". In short, Margolis posits consensual practices broadly grounding scientific praxis and aesthetic taste in some common practical reason governing both.
In this scheme, there can be no meaningful distinction between "objective" and "subjective", but at the same time there is no principled difference between what counts as objectivity in the arts and the sciences. And in this commonality, Margolis discerns that science does not "borrow" from the aesthetic, but rather the aesthetic is "essential to what we mean by objectivity in the sciences".

The basis of shared experience between the "separate domains" of science and art is fruitfully explored in a less nihilistic sense by Leon Chernyak and David Kazhdan, who propose that mathematics is the true theoretical counterpart of poetry. They employ Kant's aesthetic-expressive understanding of mathematics to argue their case. The conception of aesthetic experience changed radically with Kant's philosophy. Prior to Kant, aesthetic experience was identified as encountering self-expressive, authentic being; aesthetics was captured in the mystery of that encounter. After Kant, Nature was no longer conceived as self-expressive. Rather the subordination of Nature to a text became Reason's accomplishment. In finding itself having to 'speak for' the Other, or for non-Reason, Reason encountered its own limits - what Kant called the "finitude of human Reason". Poetry and mathematics are alike in that both seek ways to transcend the radical finitude of Reason. The Romantic tradition -- continued in this century by Heidegger and Gadamer -- is unable to discern anything more than a fascination with "calculating reason" in Kant's veneration of mathematics. By according mathematics a special place, however, Chernyak and Kazhdan contend that Kant identifies aesthetic experience as a fundamental, constituent component of human rationality. In their interpretation, aesthetic experience underpins Reason's activities. Reason depends upon the aesthetic faculty of judgment to give articulate form to the nature of human expressiveness. In effect, they understand poetry as the leap across the radical Finitude of Reason: the connection between the Other and Reason is achieved in the power of language. Because mathematics accomplishes this same leap, it may be viewed as a kind of poetry. 
With this thesis, Chernyak and Kazhdan attempt to provide an epistemological alternative in which non-Reason (the Other, or Nature) is neither constructed by Reason (an erroneous interpretation of Kant, in their view) nor mirrored by Reason (the Enlightenment conception).

Kant also serves as the beginning of Catherine Chevalley's comparison of physics and art. Three lines of thought are interwoven: Kant and Cassirer on the notion of 'symbol' and the nature of human knowledge; Panofsky's analysis of the shift to linear perspective in art, and his understanding of symbolic forms in different fields as shaping specific "styles of art" in historical periods; and finally, the idea of physics-like-art in the context of quantum theory in the 1920s. She argues that Kant's view supported a deep division between science and art (schematic versus symbolic knowledge). This view would become troubled, however, if a language replete with analogies were used in science, or if scientific knowledge were obtained for objects not directly available to intuition. Both of these developments were heralded with quantum theory. In this case scientific knowledge would itself be symbolic. This was precisely Cassirer's claim. His position required a radical shift away from Kant's theory of knowledge, toward a unified view of all forms of knowing, including science and art. On another front, Panofsky's work in the 1920s raised the question of why linear perspective emerged when it did. He viewed it as an "interpretation" of space in art, rather than a "natural" representation. He showed that linear perspective emerged in connection with developments in the science of optics, analytic geometry and the coordinate-system conception of objects in space. In philosophy, linear perspective was connected to a conception of a separation between subject and object, with the knowing subject as objective spectator who represents the world. The striking affinities between developments in art, science and philosophy led Panofsky to formulate his idea of "styles of art" as constitutive of the entire Weltanschauung of a period. The connection between science and art was also accentuated by Panofsky in the idea that techniques of representation effected developments in both.

In German physics of the 1920s, these influences from philosophy and from Panofsky's work are seen in Bohr's and Heisenberg's explication of quantum mechanics. Their interpretation of quantum theory engaged a comparison between physics and art. Bohr's view was influenced by the "symbolic turn" in that he rejected all mechanical models of the movement of electrons in the atom. He pronounced "the failure of all spatio-temporal models" at this level and the need for recourse to symbolic analogies. Especially after 1924 he used the notion of symbolic representation regularly, by which he meant all elements of a physical theory with no correlate in intuition. A more sophisticated - i.e. symbolic - language was required. Heisenberg claimed that physical theories were like styles of art. He noted that the conceptual systems of physics (for instance, Newtonian and quantum) differ not only because their objects differ, but also because they create different groups of relations. As styles of art emerge through a set of formal rules, so do the symbolic idealizations underlying the conceptual systems of physics. Contemporary science, according to Heisenberg, is changing the entire view of classical physics and modern philosophy, introducing (like a style of art) new presuppositions about the nature of reality. Heisenberg underscored the cognate tendencies toward abstraction in physics, mathematics and non-objective painting in the twentieth century. Thus both Bohr and Heisenberg broke ties with a Kantian epistemology dividing science and art, and with a Cartesian distinction between subject and object.

This postmodern perspective is pursued by Alicia Faxon, who reviews these matters from the perspective of an art historian. She grapples with how the traditional intersection between science and painting is blurred in a postmodern aesthetic, and examines the notion of "aesthetics" in a postmodern world that has revolted against the rigid modernist view of any value as universal and ahistorical. If one regards modernism as resting on a narrow Western aesthetic masquerading as universal, the alternative postmodern aesthetic celebrates a multicultural vision, the availability of choices, and the effacement of boundaries between high culture and popular culture; in a word, postmodernism celebrates pluralism. It deconstructs such notions as originality and the work of art as an autonomous object. The dangers of the postmodern conception are a loss of criteria of aesthetic value, widespread mediocrity, and domination by consumerism and the commodification of art. According to Faxon, the intersection of art and science occurs especially in the creation of aesthetic standards by which to form a canon or rule for achieving correct proportions of beauty, symmetry and harmony. In certain eras this canon has been sought explicitly: in classical antiquity, mathematically determined proportions were applied to architecture and sculpture; in the Renaissance, newly discovered anatomic facts of the human body were applied, as illustrated by Leonardo da Vinci's Proportions of the Human Figure. How might a postmodern application of scientific measurement of proportion and mathematical formulas differ from past applications? There can be no one set of measurements for a "perfect" human figure in the postmodern aesthetic. What role the traditional intersection of art and science might play in color theory, space and even time remains highly problematic, as Chevalley so clearly illustrates in the preceding essay.
This leaves the current role of science in shaping aesthetics still to be defined and expressed. Whatever that function might be, according to Faxon, the possibilities must be mutable, non-hierarchic, and permeable in order to echo change, multiculturalism and an attitude of inclusiveness.

Perhaps one of the more interesting pots in which to mix science and art is the science museum. Hilde Hein explores the complex dynamics invoked in depicting science to the public, exposing the conceptual and social biases of such exhibits. Like Faxon, Hein is sensitive to postmodernism's effects beyond the nature of the subject material, extending to curatorial motives and the scientific education of the viewer. Science museums, like other museums, are essentially designed to engage and satisfy their audience, and thus aesthetic factors are instrumental in their design. Like museums devoted to other areas, science museums aestheticize their contents by decontextualizing and recontextualizing them, or as she says, "an object must die and be reborn in order to enter into an exhibition", and in this sense each exhibit is a work of art in being newly minted for aesthetic contemplation. In such presentation, particular orientations and messages converge and may convey hidden agendas. After surveying the types of science museums and describing how history and purpose have affected their exhibition strategies, Hein turns to the role aesthetic factors have played in formulating the curatorial message. Without attempting to summarize the rich historical examples offered in this essay, we may simply note that the modern science museum, although still offering dioramas and other viewer-separated exhibits, has moved increasingly toward participatory experience. New technologies seek to transform the passive spectator into an engaged, active one. Visitors have easy access to film and video displays, holography, computer simulations and manipulable objects of various sorts, but Hein questions whether genuine cognitive interaction is produced. Are the limits imposed in the design of choices restrictive of the learning experience, and does the viewer remain passive?
The theatricality that shifts the viewer from the "objective" depiction of an older style to a "phenomenological" veridicality may be only an aesthetic choice, although this change has purported advantages: the static mausoleum in which objects are torn from their natural context and coldly (viz. fully) analyzed may now be regarded in a more complete sensory setting, where sensuous interaction strengthens the viewing experience. Aesthetics are a crucial element in the effective lesson, but "the didacticism of such coercion is hidden by its aesthetic form". Hein reminds us that the problem of which reality to present remains unresolved. The isolated object has now been contextualized, imbued with meaning from complex interaction with other parts of the exhibit as well as from the active participation of the viewer. But beyond the perceptual manipulation, there are conceptual and social issues bestowing particular significance on the contextualization, the point of view and the construction of reality by perception. In response to this challenge, some recent exhibits have been designed to confront visitors with a profusion of data and invite them to create their own exhibition, while others have asked audiences to choose among alternative interpretations. The aesthetic dimension remains crucial to the success of an exhibit, and although there are various criteria, the invitation for the visitor to ask questions and ponder problems seems foundational to a gratifying experience: the museum must speak to its public, who then become participants in a dialogue. The insight that museum exhibits are fulfilling their historical role by reflecting a changing postmodern ethos suggests that they not only continue to serve as responsive public educational institutions, but also offer us insightful visions of ourselves.

Tauber's contribution views the elusive merger of science and the aesthetic as essentially a philosophical problem. He examines Goethe as an exemplary case of the scientist-artist/artist-scientist, whose scientific venture was guided by an aesthetic holism: a sense that all aspects of experience must be included to describe phenomena, and that sole reliance on reductive characterization was doomed to falsify our observation and strip nature of its full meaning. Goethe as a scientist employed abstraction in seeing primal essences and was a proto-positivist in vigorously maintaining a strict separation of the observing subject from his inquiry. But he included a third element, an aesthetic concern for the whole that would employ all the intellectual and intuitive faculties to place the "facts" in their broadest theory. He was highly sensitive to value-laden facts and the tyranny of scientific opinion, and sought to incorporate his own personal vision into the broadest conceptual framework, which included history and psychology as active agents on the scientific stage. For this eclecticism he was vilified. The nineteenth century witnessed the complete schism of science and the arts, and by examining the case of Nietzsche, Tauber shows how a radical aestheticization of experience was the extreme response to an objectified science totally divorced from the personal. He could just as pertinently have taken the other side and shown the philosophic practice of a severely positivist scientist. The case he argues is simply that science as aesthetic is not a generally acknowledged category of judgment, yet in large measure science assumes a personalized (viz. meaningful) dimension when a phenomenon or theory is appreciated aesthetically.

Tauber believes our true predicament is captured by Husserl's dismay that a universal rationality could not encompass both science and art. This anthology has been designed to highlight points at which an elusive synthesis might begin. Notwithstanding the protestations of each of these essays, we still complain of a Two Cultures society. It is this intuition that lies at the foundation of Faxon's puzzlement over what will constitute a postmodern aesthetic. It is the same sentiment that drives Chevalley, Chernyak, and Kazhdan to seek a philosophical foundation for both science and art. It is the same orientation from which Margolis, McAllister, and Sarkar seek to expose aesthetic principles underlying scientific theory, and finally it is the psychological unity of experience that propels the observations of Root-Bernstein, Gilbert, Faber, Kohn, Lynch and Edgerton, who cite aesthetic experience as underlying scientific insight, or in the case of Hein, education. These essays each claim an intersection, at some level, between science and art. Their respective syntheses would mend a fracture, a mistrust in a unifying knowledge. The drive toward objective contemplation, logical analysis, and scientific classification cuts us off from what existential phenomenologists refer to as being-in-the-world. Always to scrutinize is to divorce ourselves from personal meaning. The dissection of the world yields a kind of knowledge which must still be integrated meaningfully. The scientific object may reside seemingly separate - "out there" - as the focus of an inquiry into what it is in itself (ignoring the philosophical difficulties of that expectation), but the issue is to integrate that object with its observing subject in both rational and emotional domains. The search for this common ground is the elusive synthesis of our very selves in a world ever more objectified from us. No wonder the "problem" of aesthetics and science remains - a beguiling reminder of the lingering fault of our very being.

The Epigenome: Molecular Hide and Seek edited by Stephan Beck, Alexander Olek (Wiley-VCH) discusses the history, current facts, and future potential of the science arising from the human genome project. It describes the role of the epigenome (cytosine methylation) in the interplay between nature and nurture, focusing on the relationship between genetics and the environment.

Epigenomics is the study of how certain genes are "silenced" by methylation, leading to different temporal and spatial expression of genes. Believing that this field will lead to understanding of the interaction between genetics, disease, and the environment, Beck (The Sanger Centre, Wellcome Trust Genome Campus, UK) and Olek (Epigenomics AG, Germany) present nine essays, "written in popular science style," that cover the historical origins of the field and its role in the exploration of development, gene regulation, disease, diet, and aging. A basic understanding of genetic science seems to have been assumed.

This is the first book to describe the role of the epigenome (cytosine methylation) in the interplay between nature and nurture. It focuses attention on, and stimulates interest in, what will be one of the most exciting areas of post-sequencing genome science: the relationship between genetics and the environment.
Written by the most reputable authors in the field, this book is essential reading for researchers interested in the science arising from the human genome sequence and its implications for health care, industry and society.

The past 30 years have seen a tremendous surge of interest in the field of epigenomics. This work has led to exciting discoveries: new enzymes that can methylate mammalian DNA in both a maintenance and a de novo manner, the bisulfite modification technique for the direct detection of methylcytosine residues in genomic DNA, the role of abnormal methylation in disease and the aging process, and, more recently, the link between chromatin remodeling and methylation in the transcriptional repression of gene function. Over the next 30 years, epigenomics will be one of the boom fields of biotechnology, with pharmaceutical companies waking up to the fact that the presence or absence of the fifth base has major implications for human health care and the treatment of an ever-increasing aged population.
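The bisulfite technique mentioned above rests on a simple chemical asymmetry: unmethylated cytosines are converted to uracil (read as thymine after amplification), while methylated cytosines are protected and remain cytosine. A minimal sketch of the downstream comparison logic, in Python; the sequences and the function name are illustrative, not taken from the book:

```python
# Sketch: inferring methylated cytosine positions by comparing a genomic
# reference sequence with its bisulfite-converted read (same length).
# Assumption: unmethylated C reads as T after conversion; methylated C stays C.

def methylated_positions(reference: str, bisulfite_read: str) -> list[int]:
    """Return 0-based positions of cytosines that survived conversion."""
    positions = []
    for i, (ref_base, read_base) in enumerate(zip(reference, bisulfite_read)):
        if ref_base == "C" and read_base == "C":
            # Protected by methylation: still reads as C.
            positions.append(i)
        # ref_base == "C" and read_base == "T": unmethylated, converted.
    return positions

# Hypothetical example: only the cytosine at position 4 is methylated.
print(methylated_positions("ACGTCG", "ATGTCG"))  # -> [4]
```

Real bisulfite analysis must of course contend with incomplete conversion, sequencing errors, and strand asymmetry; this sketch shows only the core idea of reading methylation off a base-by-base comparison.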

One of the first major hurdles is the sequencing of the "methylome" or epigenome, which is unlikely to be achieved in the same manner as the sequencing of the human genome. This is because each individual tissue type in the human body has a unique methylation signature, and at least 180 cell types make up the human body. With advances in robotics and automation, however, this task does not seem quite as impossible as it would have been before the race to sequence the human genome began.

Sequencing the human epigenome aside, many questions have at best been only touched upon and many more are still unanswered. How are basic developmental pathways involving methylation established and later modified in a coordinated fashion? Can methylation directly cause gene inactivation or is the process a secondary phenomenon to stably repress gene function? How accurately is the original methylation pattern repaired in a region of DNA damaged by radiation, chemical reactions, or other environmental stimuli? In the next 30 years can the methylation machinery or methylation fingerprints be altered to therapeutically reverse the disease process or indeed the natural process of aging? These questions and many more need to be fully answered before we have true insight into the full impact that epigenomics has on basic human development and disease.

The potential of methylation science can be glimpsed if we recall that these few pages go through a list of many of the major sectors of the life sciences - diagnostics, pharmaceutical development, personalized medicine, and agriculture - in each of which we find highly attractive opportunities. At the same time, we discussed several completely different disease indications: disease management and diagnostics exclusively for oncology, and uses in pharmaceutical research and development mainly for metabolic diseases. The tremendous potential of methylation as an applied science becomes obvious if we believe that the diagnostic applications will expand into metabolic, cardiovascular, and other major human diseases. Likewise, pharmaceutical research and development can be supported with methylation information in as many disease indications. We believe that this is so, indeed attributing to methylation an importance comparable to that of the other "big" genome sciences. We have argued that methylation is the only parameter that truly changes genome function in aging, as well as being affected by environmental influences. Therefore, we expect methylation to be not only competitive as a tool for diagnostics and research, but in many cases simply irreplaceable. All in all, we expect methylation technologies to become a firm component of most, if not all, genomics-based research and development in the future.

Guide to Analysis of DNA Microarray Data, Second Edition by Steen Knudsen (Wiley-Liss) Written for biologists and medical researchers who don't have any special training in data analysis and statistics, Guide to Analysis of DNA Microarray Data, Second Edition begins where DNA array equipment leaves off: the image produced by the microarray. The text deals with the questions that arise starting at this point, providing an introduction to microarray technology, then moving on to image analysis, data analysis, cluster analysis, and beyond.

With all chapters rewritten, updated, and expanded to include the latest generation of technology and methods, Guide to Analysis of DNA Microarray Data, Second Edition offers practitioners reliable information using concrete examples and a clear, comprehensible style. This Second Edition features entirely new chapters on:

  • Image analysis
  • Experiment design
  • Automated analysis, integrated analysis, and systems biology
  • Interpretation of results

Intended for readers seeking practical applications, this text covers a broad spectrum of proven approaches in this rapidly growing technology. Additional features include further reading suggestions for each chapter, as well as a thorough review of available analysis software.
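The pipeline the book describes, from image quantification through data and cluster analysis, can be hinted at with one elementary step: computing log2 ratios of sample versus reference spot intensities and flagging genes by a fold-change cutoff. This is a generic toy illustration, not code from the book; the gene names, intensities, and the two-fold threshold are all illustrative:

```python
import math

# Toy step from a microarray analysis pipeline: per-gene log2 ratios of
# sample vs. reference intensities, then a simple fold-change filter.

def log2_ratios(sample: dict[str, float], reference: dict[str, float]) -> dict[str, float]:
    """log2(sample / reference) for each gene present in both dictionaries."""
    return {gene: math.log2(sample[gene] / reference[gene])
            for gene in sample if gene in reference}

def changed_genes(ratios: dict[str, float], cutoff: float = 1.0) -> list[str]:
    """Genes whose expression changed at least 2**cutoff-fold in either direction."""
    return sorted(gene for gene, r in ratios.items() if abs(r) >= cutoff)

# Hypothetical background-corrected intensities for three genes.
sample = {"geneA": 800.0, "geneB": 100.0, "geneC": 210.0}
reference = {"geneA": 200.0, "geneB": 100.0, "geneC": 100.0}
ratios = log2_ratios(sample, reference)
print(changed_genes(ratios))  # -> ['geneA', 'geneC']
```

In practice this step comes only after the image analysis, background correction, and normalization stages the book devotes whole chapters to; a raw ratio filter on unnormalized intensities is exactly the kind of shortcut such texts warn against.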

Microarray Quality Control by Wei Zhang, Ilya Shmulevich, Jaakko Astola (Wiley-Liss) covers the breadth of issues encompassed by quality control in microarray studies, including quality of biological samples, quality of DNA, hybridization protocols, scanning, data acquisition, image analysis, and data analysis. Its unique focus on quality control addresses a serious knowledge gap amongst the interdisciplinary group of workers—biologists, mathematicians, statisticians, engineers, physicians, and computational scientists—involved in microarray studies. This book provides the only systematic treatment of quality control beginning with experimental design through to interpreting results.

The increasing popularity of microarray analysis has been partially fueled by the frustration of biologists with the limited technological tools they have had at their disposal for gaining a comprehensive understanding of complex biological problems. The completion of the Human Genome Project and the understanding it heralded threw into even greater relief the problems with the current technology. Although still not finally known, the number of genes in the human genome is estimated to exceed 40,000. These genes and their protein products determine biological phenotypes, both normal and abnormal. Traditional molecular biology methods have only allowed biologists to peek at the activities of a small number of genes that are associated with certain biological processes. In addition, although the reductionist method has enabled researchers to delineate a number of signal transduction pathways, it cannot yield a comprehensive picture of the systems under study. By allowing simultaneous measurement of the expression of thousands of genes, microarray technologies have marked a defining moment in biological research. Indeed, many publications have now shown the power of gene expression profiling in the diagnosis and prognosis of disease and the identification of gene targets. Microarray analysis is rapidly becoming a standard research tool.

Since their advent in 1995, microarrays have gained popularity at a tremendous rate and microarray core facilities have become commonplace in academic and industrial settings. The cost of setting up microarray facilities and performing experiments has been dropping steadily, making them more accessible to researchers. With this increased growth come new challenges, in particular those related to maintaining good quality control over the entire microarray experiment. Each stage of a microarray experiment presents its own pitfalls, and the aim of this book is to provide recommendations for identifying and preventing them.

This book focuses mainly on in-house microarray facilities and does not attempt to cover all aspects of quality control for commercially available arrays, such as Affymetrix chips. Nonetheless, many of the quality control issues, such as those dealing with biological sample handling and preparation as well as data analysis, will still be relevant and useful to those working with these technologies. At the end of each chapter we have provided a number of references relevant to the topics discussed in that chapter, in particular to quality control issues. Not all of these references are cited in their respective chapters; we selected them because they represent useful further reading material and provide many additional details.

As the title suggests, this book is about quality control and is not intended to be a "how-to" or tutorial book on microarrays. As such, it does not attempt to go into great depth in describing all phases of microarray production and analysis. For example, algorithms, statistical models, and so on, are developed only to the extent that they are necessary for the discussion, so as to make the book as self-contained as possible. In each chapter, some level of familiarity with the topics discussed therein is assumed. The book also relies on many examples for illustrating the concepts and issues. A number of other books on microarray production, experimentation, and data and image analysis, in addition to the journal articles listed in chapter references, may be of use to those wishing to study the relevant issues beyond the scope of this book. We feel that this book will be useful both to experienced microarray researchers and to those in the process of setting up microarray facilities. Our ultimate goal is to provide guidance on quality control so that each link in the microarray chain can be as strong as possible and the benefits of this exciting technology can be maximized.

Quantitative MRI of the Brain: Measuring Changes Caused by Disease edited by Paul Tofts (John Wiley) is a practical guide to performing quantitative MR measurements in the brain. It contains both the methodology and the clinical applications, reflecting the increasing interest in quantitative MR in studying disease and its progression.

  • The editor is an MR scientist with an international reputation for high quality research
  • The contributions are written jointly by MR physicists and MR clinicians, producing a practical book for both the research and medical communities

The MRI machine is undergoing a change in how it is used. Used conventionally, it produces images that are viewed qualitatively by radiologists. Unusually dark, light, large, or small areas are noted, but subtle changes cannot be detected. In its new use, that of a scientific instrument making measurements in the human brain, a myriad of physical and biological quantities can be measured for each small voxel of brain, of dimensions typically between 1 and 3 mm.

The book opens with a section on concepts which explores the principles of good practice in quantification, including quality assurance, MR data collection, and analysis aspects. Current limits are discussed in detail, and solutions proposed. A chapter on each of the major physical quantities follows (including proton density, T1, T2, diffusion, magnetisation transfer, spectroscopy, functional MRI and arterial spin labelling). The physical principles behind each quantity are given, followed by its biological significance, practical techniques for measuring the quantity, imperfections that can arise in the measuring process, and an extensive survey of clinical applications of that quantity. The pathological correlations with the MR quantities are discussed.

This is an indispensable ‘how to’ manual of quantitative MR, essential for anyone who wants to use the gamut of modern quantitative methods to measure the effects of neurological disease, its progression, and its response to treatment. It will appeal to research-minded radiologists, neurologists and MRI physicists who are considering undertaking quantitative work, as well as those already in the field.

Professor Paul Tofts has worked on the physical aspects of quantitative brain imaging since the early days of clinical NMR. He was the first to measure in vivo concentrations of metabolites, and to use dynamic imaging to measure blood-brain barrier permeability and extracellular space in multiple sclerosis.

Describing the intersection of measurement science and MR imaging has been an international effort by the members of the MRI research community, with much communication by email, and rapid access to journal articles on-line, in a way that would not have been possible a few years ago; some co-authors have not even (yet) met each other. The Global Village has truly arrived. The overview boxes, and many of the footnotes, are generally my responsibility. The conventions regarding units and abbreviations follow those in the Style Guide of the journal Magnetic Resonance in Medicine, as much as possible.

In the past, physical science has been concerned with our view of the cosmos, and of atomic particles. Now we have the chance to see and measure inside our own living brain — to me this is equally profound. Which will history judge as being more important? A decade ago qMR techniques were almost nonexistent; in a decade's time they will be routine.

In spite of what science has achieved, many people are apparently able to resist disease, and heal themselves and others, in ways that are still mysterious to Western science, using approaches such as acupuncture, body work, homeopathy, reiki and shiatsu. The placebo effect is a phenomenon considered very powerful in medicine, and yet its mechanism of action is not fully understood. With qMR we may be in a position to objectively record responses to such treatments.  

Exotic Animal Medicine for the Veterinary Technician by Bonnie Ballard, D.V.M. (Iowa State University Press, Blackwell) Introduces exotic animal medicine for the veterinary technician. Provides descriptions of common procedures on exotics including venipuncture, bandaging and wound care, administration of drugs, tube feeding, catheter placement, and urine collection. Also covers basic anatomy and physiology.

This handbook is a straightforward introduction to exotic animal medicine for the veterinary technician. Exotic Animal Medicine for the Veterinary Technician introduces technicians to the exotic animals they're most likely to see in practice and provides clear-cut descriptions of common procedures on exotics.

Together with 12 contributing authors, Ballard and Cheek describe in detail common technical procedures performed on exotics including: venipuncture, bandaging and wound care, administration of drugs, tube feeding, catheter placement, and urine collection. The book's coverage also extends to basic anatomy, physiology, reproduction, husbandry, zoonotic diseases, restraint, radiology, surgery and anesthesia, parasitology, hematology, emergency and critical care, and nutrition.

Although intended primarily as a textbook for students in veterinary technician programs, the complete coverage of exotic animal clinical procedures makes this an invaluable reference for veterinary technicians and practicing veterinarians alike.

This book was written to provide the veterinary technician with important information about a variety of species commonly seen in exotic practice. This text would be beneficial to the technician who would like to work with these animals but may have graduated years ago before this area of medicine was popular. This text would also be helpful to the technician who works for a veterinarian who would like to add exotic species to his or her practice. While it was not written for veterinarians, they may find it beneficial as well.

With the help of this book, the technician will know what questions to ask to obtain an adequate history, be able to educate the client about husbandry and nutrition, be able to safely handle and restrain common species, and be able to perform necessary procedures when needed. Because the field of exotic animal medicine is a dynamic one, new knowledge is constantly emerging about many of the species kept as pets and new information can in some cases contradict what was thought to be true before. For many species, exotic animal medicine could be said to be in its infancy. For this reason it is essential that those interested in exotics keep up with the latest information. This is best accomplished by attending national conferences where the leaders in this field speak. We realize that for some of the species featured in this book, the information presented may need to be modified in the future.

While some of the contributors provided drug dosages and formularies, we do not take responsibility for what is provided. We also realize that while technicians do not make decisions about what drugs to use in any animal, they are required to be familiar with different pharmaceuticals, know where to find a dosage, and know how to calculate it.

This book was written with the assumption that the technician already is educated in anatomy, physiology, medical terminology, pathology, and pharmacology. Only what is unique to the species featured is presented.

Because what we know about exotic animal medicine is forever changing and much has not been scientifically proven, it is common to find contradictory information from one reputable source to the next. This can create frustration, but it also provides the challenge of working in a cutting-edge area of medicine.

Multicellular Animals. Order in Nature - System Made by Man. Volume III by Peter Ax (Springer Verlag) The system of the multicellular animals presented here is an alternative to the traditional classification, which still operates with the categories of Carl v. Linné, including typological divisions into artificial species groups. In a new approach to the phylogenetic order in nature, this book strives for an objective systematisation of the Metazoa. It seeks a new path in the field of research and academic teaching, based on the theory and methodology of phylogenetic systematics.
This third volume covers the Metazoa from the Nemathelminthes to the Mammalia.

The evolutionary order of organisms as a product of Nature and its representation in a phylogenetic system as a construct of Man are two different things.

The order in Nature consists of relationships between organisms. It is the result of an historic process that we call phylogenesis. When working with this order we must clearly differentiate between its identification and the subsequent description of what has been identified.

When identifying the order we are placing ourselves in the framework of "hypothetical realism"; it is in principle impossible to determine how well or how poorly the real world and our cognitive apparatus match. For phylogenetics as an historically directed discipline, however, the following limitation is more relevant. We cannot demonstrate experimentally whether or not that which we have interpreted from the products of phylogenesis corresponds to facts of (hypothetical) reality. Even so, we can separate ourselves from the fog of arbitrary opinions and speculation when our ideas on relationships are framed as hypotheses that can be confirmed and supplemented at any time as a consequence of empirical experience - but that can just as easily be weakened or rejected. The phylogenetic system of organisms presents its knowledge on the order in Nature in the form of testable hypotheses and hence purports to be a science.

Thus, we come to the description or representation of order in a conceptual system of Man. In the present textbook, higher-ranking, supraspecific taxa from the Porifera to the Mammalia are identified as equivalents of phylogenetic products of Nature on the basis of derived characteristics (autapomorphies). We also present the sister group of every single taxon with demonstration of the commonly derived agreements (synapomorphies). A well-developed methodology is available in phylogenetic systematics for the evaluation of characteristics in the comparison of organisms.

There may be different, competing paths for the form of presentation. Our principle of systematization has led to the abandonment of the categories of traditional classifications. Sister groups (adelphotaxa) of concomitant origin are to be ordered next to each other on one level (coordination). Their combination into a new taxon occurs at the next higher level (superordination), their division at the next lower level (subordination). The result can be presented both as a hierarchical tabulation and as a diagram of phylogenetic relationships.

Let us take the Chordata as an example from this volume. The Tunicata and Vertebrata are coordinated at level 2 as sister groups. The taxon Chordata is superordinated to them on level 1. Moreover, on level 3, the Acrania and Craniota are subordinated as adelphotaxa of the Vertebrata. A comparable subdivision of the Tunicata is not possible at present.
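The Chordata example can be sketched as a small nested data structure. The taxon names and levels come from the text above; the `Taxon` class and its `tabulate` method are only a hypothetical illustration of how a hierarchical tabulation of adelphotaxa might be represented, not anything proposed in the book itself.

```python
# Minimal sketch of a sister-group (adelphotaxon) hierarchy.
# The Taxon class is hypothetical; names and levels follow the Chordata example.

class Taxon:
    def __init__(self, name, adelphotaxa=None):
        self.name = name
        # Two sister groups (adelphotaxa), or None for an undivided taxon
        self.adelphotaxa = adelphotaxa

    def tabulate(self, level=1):
        """Return the hierarchical tabulation as (level, name) pairs."""
        rows = [(level, self.name)]
        if self.adelphotaxa:
            for taxon in self.adelphotaxa:
                rows += taxon.tabulate(level + 1)
        return rows

chordata = Taxon("Chordata", [
    Taxon("Tunicata"),                 # no subdivision possible at present
    Taxon("Vertebrata", [
        Taxon("Acrania"),
        Taxon("Craniota"),
    ]),
])

for level, name in chordata.tabulate():
    print("  " * (level - 1) + f"{level} {name}")
```

Run as-is, this prints the hierarchical tabulation described in the text: Chordata on level 1, its adelphotaxa Tunicata and Vertebrata on level 2, and Acrania and Craniota on level 3.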

Government Policy and Farmland Markets: The Maintenance of Farmer Wealth edited by Charles B. Moss, Andrew Schmitz (Iowa State University Press, Blackwell) Proceedings of a conference on Government Policy and Farmland Markets Implications For the New Economy, held in Washington, DC in May 2002. Includes references and index.

This timely and comprehensive look at farmland values has much to offer government policy makers, lenders, agricultural economists, and decision makers in agribusiness. Bringing together leading farmland authorities in the United States and Canada to examine the economic determinants of land value and the consequences of change in land values, this unique volume addresses the full range of economic issues surrounding farmland markets.  

With farmland accounting for an average of 70 percent of all agricultural assets, editors Moss and Schmitz provide expert analysis and review of this subject from the following perspectives:

  • A historical overview of the structure and performance of farmland markets in the United States;

  • The links between farmland values and agricultural policy in the United States;

  • The capital market dimension of farmland values;

  • The mechanics of farmland markets, especially the cost of buying and selling farmland;

  • Environmental concerns, including the potential impact of urban encroachment; and

  • The role of regulations against foreign ownership of farmland on farmland value in Canada.

It is not necessary to be a farmer, or even an economist, to figure out that land is the foundation of the farm enterprise. For most of this nation's history, the value of any particular piece of farmland was largely determined by how productive it was. Proximity to markets mattered, too, but less so as transportation, processing, and storage improved. In the 20th century, the initiation of the New Deal farm programs and acceleration in the growth of urban and suburban areas have presented two additional, important determinants of the price of farmland.

Understanding how government policies and urbanization affect farmland value matters because of the centrality of farmland to the requirements of the physical production of food and also to farm household wealth. Improvement in the well-being of American farm households, an important policy objective, is too often seen solely in terms of commodity price levels. But income and wealth, two important components of financial health, are together better indicators of well-being. Just as it is worth studying the impact of farm policy on farm prices, it is worth understanding the effects on farmland value. Pressure on land values created by competition for nonfarm land uses in urban and suburban areas can be strong in places, and more and more often the effects of urbanization are mediated by state, local, and Federal policies aimed at farmland preservation.

Land is the most important asset in the farm business and in the farm household's investment portfolio. Nationally, in 2001, real estate accounted for almost three-fourths of farm business assets. For the average farm household investment portfolio, worth about $660,000 in 2001, real estate comprised 60 percent of its value. For landowners who operate farms, the value of land as a business asset lies in its worth as collateral for operating and expansion loans. For landowners who do not operate farms, the revenue stream from renting farmland often represents an important component of income. Retired farmers appear to represent a sizable part of this class of nonoperator-owners. National USDA surveys show that about one-half of nonoperator-owners are age 65 or older (compared to about one-third of operator-owners) and that 85 percent live within 50 miles of the land they rent. For retired self-employed farmers without the benefits of standard pensions, the appreciation in farmland value that is realized on its sale, or the stream of income from its rental, may be significant to their financial security.

Productivity in agricultural uses, the level of farm-program benefits, and proximity to urban and suburban development represent major determinants of farmland value. In the West, access to water is also obviously important, and other regional constraints may matter, too. So, local farmland markets are conditioned by a variety of factors whose importance varies regionally and over time. Of special interest, though, is the role of Federal policy in determining land values because of its explicit aim to improve the lot of farmers. How well is this objective achieved, as measured by or reflected in the value of farmland?

Federal farm policy influences figure most in those areas where supported commodities are grown. In 2001, government payments went to about 40 percent of the nation's farms, mainly those growing the eight program crops (wheat, corn, soybeans, sorghum, cotton, rice, barley, and oats), which are geographically concentrated in the middle of the country. Economists have understood for some time that the value of Federal program benefits is capitalized in land values as these payments become a component of expected future returns. USDA's Economic Research Service estimates that, nationally, about one-quarter of the value of farmland is attributable to the expected continued benefits of farm programs tied to the farming operation. With the trend toward use of direct payments, as opposed to supply controls and price interventions, the translation of payments into benefits for landowners has become more transparent. For farmers who own the farm, the appreciated value of the land is realized when it is sold. For landowners who do not operate farms but rent out acreage, any payments made to tenants can be captured through adjustment in lease terms. Because 40 percent of farmers rent a part or all of the land they work, there may be a disconnect between the political desire to assist active farmers and the distribution of benefits to nonfarmer landlords. However, to the extent that landowners who do not farm are retired farmers, the outcome may not be inconsistent with aims to improve the well-being of farm households. In any case, Federal farm policy is a direct contributor to farmland value appreciation, and the distribution of these benefits is an important matter for study.
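The capitalization mechanism described here is the standard present-value identity; the symbols below (expected annual market return $R$, expected annual program payment $G$, constant discount rate $r$) are generic textbook notation, not drawn from the volume itself:

```latex
V \;=\; \sum_{t=1}^{\infty} \frac{R + G}{(1 + r)^{t}} \;=\; \frac{R + G}{r}
```

On this identity, a payment stream $G$ that buyers expect to continue indefinitely adds $G/r$ to the land's value, which is the sense in which program benefits are said to be capitalized into farmland prices.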

The chapters in this volume address the range of economic issues concerning the value of farmland. Beginning with a historical perspective, the focus progresses to the interaction between government policies and farmland values, then to the mechanics and behavior of the markets for land, the major farm business asset. These markets have special characteristics that make transaction costs a significant component of a land purchase, and these aspects are also considered. The more contemporary influences of urbanization and also environmental subsidies and regulations are examined next, with concluding chapters about special circumstances in regional farmland markets. Taken together, these chapters represent a most comprehensive and timely look at farmland values. The future prospects for trade liberalization and domestic policy reform, for ongoing farmland conversion, and for management of environmental quality mean that farmland values, with their undeniable importance to farm businesses and households, will continue to be of key interest.

International Humanitarian Law: Origins, Challenges, Prospects by John Carey, William V. Dunlap, R. John Pritchard (Transnational Publishers) In three distinct volumes the editors bring together a distinguished group of contributors whose essays chart the history, practice, and future of international humanitarian law. At a time when the war crimes of recent decades are being examined in the International Criminal Tribunals for Former Yugoslavia and Rwanda and a new International Criminal Court is being created as a permanent venue to try such crimes, the role of international humanitarian law is seminal to the functioning of such attempts to establish a just world order.

International Humanitarian Law: Origins by John Carey, William V. Dunlap, R. John Pritchard (Transnational Publishers)

A century after two separate international military tribunals were convened by Great Britain, Italy, France, and Russia to work alongside a British war crimes court following violence between Greek Christians and Turkish Muslims in Crete, Dr. R. John Pritchard discusses the background to these proceedings. He observes their linkages to an initiative by the Tsar which led in the following year to the Hague Convention of 1899 and to the first formal proposal by a Head of State for the establishment of an international criminal code backed up by a permanent International Court of Justice. Dr. Pritchard shows how this exercise, in what could be called "gunboat diplomacy," at the end of the 19th century anticipated more recent attempts by the international community to make individuals internationally accountable for war crimes, genocide, and crimes against humanity.

If the history of international humanitarian law (IHL) consisted of buried artifacts, one of the archeologists digging for clues to the past would be Jane Garwood-Cutler, with her chapter on the British war crimes trials of suspected Italian war criminals in 1945-1947. She describes how these little-known proceedings, "with some legal scholars even discounting their existence . . . contain valuable lessons for those involved in international tribunals of the present day." 

The longest-surviving member of the American defense team in the Tokyo major war crimes trial, Carrington Williams, has provided us with the best personal memoir of the Tokyo Trial written by anyone who took part in those proceedings. His chapter contains a number of analytical elements that are of very great significance, much that is new and nuanced about the past, written by one who was very conscious of the perils of present challenges and future prospects. Williams reminds us that the quality and value of what we do in no small measure requires outstanding defense attorneys who can keep prosecutors up to the mark. He also reminds us of what too many practitioners in IHL forget: when prosecutors or judges "get a result" by dishonest or inappropriate means, the whole endeavor is diminished in terms of its authority and enduring value. Finally, he serves as proof that there remains much to be learned by further research in these historical areas, that a good deal of recent scholarship on the historical events and legal issues that Tokyo and even Nuremberg were concerned about is woefully sub-standard and needs to be reexamined rather than regarded as settled.

Bohunka Goldstein provides an overview of the degree of implementation of IHL achieved by means of diplomacy, both official and non-governmental.  

Another historical study is that of Professor Emeritus Howard Levie of the U.S. Naval War College, who traces the development of limits on how land warfare is fought, from Old Testament times to the present. He describes "that portion of the law of war on land which states adopted not solely to regulate warfare on land, but to make it less horrendous for the average person, military or civilian." Levie describes steps seeking to control measures such as bacteriological and toxin weapons, anti-personnel land mines, fire, dum-dum bullets, poison gas, and chemical weapons.

Professor Alfred P. Rubin of the Fletcher School of Law and Diplomacy applies an acid test to "current legal theories resting on an asserted universal jurisdiction in the organs of the international community," which he calls "the product of good-hearted thinking [which] cannot work as expected in the world of affairs."  

Long-time ICRC official and scholar Michel Veuthey, after tracing the evolution of IHL from Solferino to Kosovo, concludes that the base of support for the 1949 Geneva Conventions must be broadened to include the media, artists, teachers, psychologists, and psychiatrists as well as philosophers and spiritual leaders.  

While William Schabas provides new insights into the dark subject of genocide, Alfred de Zayas writes on the scourge of "ethnic cleansing," describing the emerging "hard law" and "soft law" jurisprudence as well as implementable remedies. After examining the record of various international mechanisms for protecting individuals, retired U.S. Foreign Service Officer Patrick J. Flood concludes that, for now, ad hoc criminal tribunals have shown themselves to be effective.  

This volume does not include chapters on other origins or developments in IHL such as the trial of Philip von Hagenbach by an Imperial Court of the Holy Roman Empire, post-First World War trials, post-Second World War trials in Europe, or dozens of other situations in this and the last century raising IHL questions in various parts of the world. All of these, and numerous other subjects, certainly deserve attention, but we have focused on just a few areas knowing that many of these others have already received a great deal of attention and that still others among them remain neglected.

International Humanitarian Law: Challenges by John Carey, William V. Dunlap, R. John Pritchard (Transnational Publishers) The events of September 2001 and the worldwide threat of terrorist attacks bring into sharper focus questions about the ramifications of unconventional warfare and how prisoners taken in armed conflict short of declared war should be treated. Here again international humanitarian law can provide the guideposts needed to find a just course through difficult times. The intent of these volumes is to help to inform where humanitarian law had its origins, how it has been shaped by world events, and why it can be employed to serve the future.

Of the trends I mentioned in my Foreword to Volume I of this inestimable series of essays, the one with the most dire implications has been the most pronounced. I refer to the campaign of the civilian leaders of the current Bush Administration to shield themselves from all accountability for war crimes and crimes against humanity. Not only has the United States withdrawn the previous Clinton administration's signature from the Rome treaty establishing the International Criminal Court (ICC), but it has also been entering into bilateral treaties in which the treaty partner agrees not to turn over any Americans in its jurisdiction to the new ICC. This campaign of bilateral treaties is steadily undermining the new Court even before it has had its first case. Moreover, the Bush Administration has successfully intimidated Belgium into dropping its innovative universal jurisdiction law.

What are American political leaders afraid of? Radical anti-American critics accuse them of wishing to commit war crimes with impunity. This criticism is not based on any evidence. In the American military intervention in Iraq of 2003—begun and ended within the space of the six months since Volume 1 was published—we have witnessed the most controlled military invasion in history. Emblematic of the supremacy of international humanitarian law over the conduct of hostilities was the appearance for the first time of lawyers assigned to the military platoons, regiments, and armies. These lawyers had the final word whether military units could fire at certain targets, or in a certain direction, or at certain enemy combatants (or even at enemies disguised as civilians). The lawyers made these judgments on the basis of existing international humanitarian law. This kind of on-the-spot control over military initiatives in the field is totally unprecedented. This is not to say that there were no infringements of the laws of war during the Iraqi intervention, but one would be hard put to find any deliberate violations. On the basis of these "facts on the ground," as opposed to diplomatic initiatives emanating from Washington, D.C., one could almost credit the Bush administration with establishing a new standard for the legal conduct of military hostilities. These are not people who have a desire to violate the laws of war with impunity.  

But they also have no desire to be tried for the war crimes of others. They may send soldiers into battle accompanied by lawyers, they may personally approve of all missile targets to make sure that no target violates humanitarian law, they may send their most trusted and well-informed commanders into the field to fight in accordance with the letter and spirit of the Hague and Geneva Conventions, and they may insist upon military courts martial against American soldiers who violate those conventions. Nevertheless in any war, command communications may break down, regiments can run amok, and targets can be missed or even changed by the crew aboard an aircraft or on the deck of a vessel equipped with long-range missiles. A few inferences from recent engagements will illustrate the seriousness of the fear of war-crime prosecution on the part of American leaders. In the battle of Kosovo, where missiles were launched from aircraft at an altitude of 10,000 feet, a number of proscribed targets were hit in Kosovo and Serbia. Immediately there were calls for prosecution of the American leadership, who came within the jurisdiction of the International Criminal Tribunal for Former Yugoslavia (ICTY) because the alleged crimes were committed in Yugoslavia. What bothered the American leadership the most was that many of the alleged targeting violations came from NATO aircraft not under the immediate control of the United States. Even though the ICTY prosecutor eventually decided not to prosecute any of the American leaders for any of these alleged war crimes, the American leaders obviously resolved never again to put themselves in a position where they might be accused of responsibility for war crimes committed by commanders and officers of foreign countries. That is the main reason why the Afghanistan and Iraqi military interventions were dominated by American military forces. 
Even though the United States paid lip service to multilateral intervention, it is evident that the American position was "if we're going to be held to command responsibility, let's at least be sure that we are in command." (The offer by Great Britain to help in the Iraqi intervention, though earnestly solicited by the United States, was probably an embarrassment when it was actualized. The command-responsibility dilemma was neatly solved by assigning to the British troops the taking and occupation of the city of Basra, so that if any war crimes took place there, only the British could be charged.)

The political leaders of the United States are worried about command responsibility. They are not lawyers or legal scholars, and therefore the doctrine may appear far more dangerous and fuzzy to them than it really is. Even if lawyers for the Pentagon and the Department of State assure them that they bear zero risk of being convicted of war crimes under the doctrine of command responsibility, no lawyer can assure them that a foreign or international tribunal would never indict them. We can empathize with a public servant who wants to be remembered as a patriot and not as an indicted (even if never convicted) war criminal. Thus no American leader wants to take the personal risk of being charged as a war criminal; he or she would rather defeat the entire international enterprise of war crimes tribunals, exemplified by the new International Criminal Court and by Belgium's brief but brave attempt to open its courts to universal jurisdiction over war crimes and crimes against humanity committed anywhere.

To make matters significantly worse from the perspective of American leaders, their lawyers will point out to them that it was an American precedent that initially shaped and broadened the doctrine of command responsibility: In re Yamashita, 327 US 1 (1946). The United States Army pressed hard for General Tomoyuki Yamashita's conviction in 1945, and the Supreme Court affirmed it.

Thus it is hard for an American leader today to disavow the Yamashita case. Yet, if progress is to be made in redefining the concept of command responsibility so that it applies where it reasonably should apply, we must re-imagine and reinterpret what happened in that case.  

"Command responsibility" as reasonably defined should include the following five elements:

  1. The defendant was the de jure commander;

  2. The defendant was the de facto commander (in other words, he was not merely a commander in name while powerless in fact);

  3. The defendant either knew or had reason to know of the commission by his subordinates of war crimes or crimes against humanity;

  4. The defendant could have used the power of his office to prevent or mitigate the war crimes or crimes against humanity; and

  5. The defendant failed to so use the power of his office.


Human Rights Functions of United Nations Peacekeeping Operations by Mari Katayanagi (International Studies in Human Rights, 73: Martinus Nijhoff) United Nations peacekeeping has evolved as a practical measure for preserving international peace and security, particularly in response to the East-West polarisation of the Cold War. The evolution of peacekeeping since the end of the Cold War has two important features: the use of force which arguably exceeds self-defence on the one hand, and multifunctional operations on the other. The Security Council has begun to consider a wide range of factors, including serious human rights violations, as threats to international peace and security. The concept of state sovereignty is undergoing changes with regard to international peace and security: it no longer suffices for states committing human rights violations to claim sovereign immunity against action taken by the international community. Peacekeeping missions are often deployed to supervise the implementation of a peace agreement signed among conflicting parties within a state. Recognising the UN's principle of seeking peaceful settlement, which underlies the legality of peacekeeping, this research focuses on the human rights functions of multifunctional peacekeeping operations. Such functions have immense potential for enhancing conflict resolution through peaceful means. To illustrate these issues and the diverse practices of UN peacekeeping, the author presents four detailed case studies on El Salvador, Cambodia, Rwanda and the former Yugoslavia. The achievements, problems and defects experienced by different operations are analysed using the insights of the author's own experience in a peacekeeping operation. 
Further, it is argued in this book that for human rights functions to be effective in a peacekeeping operation, the mandate should be explicitly provided for in legal documents, and the functions need to cover not only the investigation and monitoring of human rights violations but also institution-building; public education on human rights, for instance, is an integral part of this process. Through comprehensive human rights functions linked with the maintenance of security by the presence of a military component, peacekeepers may be able to contribute to peace-building in states founded on the rule of law.

The purpose of this book is to analyse UN peacekeeping operations, focusing particularly on human rights functions, and to seek to develop a better design for such functions in future operations. The book starts by reviewing the UN mechanism for international peace and security in Chapter 1. The role of the Security Council and the General Assembly in the mechanism will be discussed, and the origins of United Nations peacekeeping will be studied by means of four case studies of observation missions and peacekeeping missions. Chapter 1 will also examine the constitutional basis of United Nations peacekeeping operations, the competence of the Security Council and of the General Assembly to create peacekeeping forces, and the role of the Secretary-General in the area of international peace and security. The last section of Chapter 1 will deal with peacekeeping operations by regional arrangements or agencies in the context of Chapter VIII of the UN Charter.

Chapter 2 commences with a historical overview of United Nations peacekeeping operations. An extensive review of the varied definitions and theories of peacekeeping will follow, illustrating the evolution of United Nations peacekeeping operations. The review demonstrates that two features can be found in peacekeeping operations after the end of the Cold War, as described above. To clarify the concept of peacekeeping, the legal distinction between peacekeeping and enforcement action will be discussed. Chapter 2 concludes with a focus on the non-military functions of peacekeeping, which in practice contain peacemaking and peace-building functions. Human rights are essential elements of such functions. The first two chapters thus explain how the Organisation reached the point of being largely involved in human rights work within the framework of peacekeeping.

The four chapters thereafter are devoted to case studies. These chapters contain a detailed examination of the actual human rights activities performed by UN peacekeeping missions or a UN human rights field operation in four different areas of the world: El Salvador, Cambodia, Rwanda and the former Yugoslavia. The operations in each area approached the human rights problems differently, which shows that there has been a consistent lack of effort by the Organisation to study and make use of the skills and experience gained in preceding operations. Several points of analysis will be set up for examination in each case study, for the purpose of enabling a comparison of similar functions in different operations. One important factor for future peacekeeping operations undertaking human rights functions will inevitably be institution-building, such as the reform of the judicial system or the restructuring and training of police forces. Peacekeeping, as a temporary measure to create the environment for sustainable peace, needs to assist in the establishment of domestic structures and mechanisms which protect and promote human rights. As is manifest in the case of the United Nations Mission in Bosnia and Herzegovina (UNMIBH), institution-building is becoming an integral part of peacekeeping.

Chapter 7 considers the possibility of the use of force in protecting human rights. Given that the presence of UN peacekeepers did not succeed in protecting civilian lives in either Rwanda or Srebrenica (Bosnia-Herzegovina), we will examine the recent practice of the Security Council of contracting out enforcement operations to volunteering states. We will also consider "humanitarian intervention": unilateral or multilateral intervention by one or more states without any authorisation from the UN Security Council, a practice that represents a deviation from collective security as envisaged in the UN Charter. Seeking ways in which peacekeeping forces can contribute to protecting human rights through non-military means, the potential role of peacekeepers as a "protecting power" will be studied, as will the potential of peacekeepers to carry out human rights functions.

Summaries of Leading Cases on the Constitution, 14th Edition by Joseph Francis Menez, John R. Vile (Rowman & Littlefield) This authoritative text and reference work is based upon landmark cases decided by the Supreme Court and still prevailing. Widely adopted and recommended for courses and research in American history, constitutional law, government, and political science, it offers clear, concise summaries of the most frequently cited cases since the establishment of the U.S. Supreme Court. Each summary gives the question at issue, the decision and the reasoning behind it, the votes of the justices, pertinent corollary cases, and notes offering further information on the subject. The book also includes a detailed explanation of the organization and functions of the Supreme Court; the complete text of the Constitution of the United States; a complete index of all cases cited; and listings of all the chief justices and associate justices, the dates of their service, the president who appointed them, their state of origin, and their birth and death dates.

The year 2004 marks the fiftieth anniversary of the first publication of this book. Fifty years and fourteen editions of continuous use have established the value of this volume. This value is tied to the gravity of the subject it treats. Constitutional law is as important and exciting as the dream of constitutional government. In addition to its intellectual appeal, constitutional law has practical consequences. Because the United States has a written constitution enforceable in courts, citizens find that guarantees of political participation and basic rights are realities rather than mere aspirations.

Like many important subjects, the study of constitutional law can be difficult. Undergraduates in political science and history often find constitutional law to be among the more demanding studies that they encounter within the social sciences. Similarly, law students generally find classes on constitutional law to be at least as challenging as classes on torts or contracts.

There are a variety of reasons for this. Although cases involving civil rights and liberties are often quite engaging and contemporary, cases involving judicial review and jurisdiction, separation of powers, federalism, congressional powers under the commerce and taxing clauses, and the like (typically the staple of first-semester courses on U.S. constitutional law) may seem quite arcane. Moreover, the Supreme Court has a long history, and important cases frequently originate from earlier centuries. Such cases often focus on technical questions about issues that are not exactly in today's headlines or that are not generally understood. Few students who begin constitutional law understand, for example, that most rights in the first ten amendments are applied to the states not directly, but via the due process clause of the Fourteenth Amendment (see chapter 8 on the Bill of Rights and its application to the states). In addition to changes that have occurred in political circumstances and ordinary terminology during such time periods, cases often come complete with their own language of "legalese," as recognized by the inclusion of a glossary of legal terms at the end of this volume.

Constitutional law is usually taught in classes in political science, history, and law. In classes in political science and law, students typically use a "casebook" that contains excerpts of key cases grouped according to topic. In law schools, such casebooks are often supplemented by "hornbooks," or commentaries that are often as massive as the volumes they purport to explain. In classes in constitutional law, professors generally expect students to read cases before coming to class, be prepared to discuss them, and leave with an understanding not only of what individual cases say but of how they relate to one another. History classes are more likely to take a secondary text on constitutional developments as a point of departure, but such texts will often be supplemented by readings from key cases.

Professors in all these disciplines are likely to encourage students to "brief" cases prior to coming to class and may even require the submission of such written briefs as part of the class grade. As the terminology suggests, a "brief" is designed to provide a skeletal outline of the key aspects of a case. Professors vary in the elements they want in a brief, but this book provides those elements that professors most typically request. At a minimum, professors will generally want the name of the case followed by identification of the justice or justices writing opinions, a discussion of the most important facts of the case, the central question(s) the case poses, the decision at which the Court arrived, its reasons for coming to this decision, and notes on major concurring and dissenting opinions. Identifying the central question in each case, the Court's answer to this question, and its reasons for that answer is especially important, with the reasoning generally forming the largest part of a brief.

The most persistent question that students pose in regard to a brief is: how long should a brief be? Briefs that are too long leave students preparing for an exam or paper with materials little shorter, and thus of little more help, than the cases reviewed. By contrast, briefs that are too short are likely to leave students struggling to remember the basic facts and issues in the cases. The length of briefs will thus typically vary with the length and complexity of individual cases.

With fifty years of use, this book has proven itself a useful tool for conscientious students of the U.S. Constitution and its history, but, as is the case with almost any tool, it can be abused. It has been said that the ultimate touchstone of constitutionality is not what the Court or any other institution has said about it, but the Constitution itself. So too, the ultimate source for any student of Supreme Court opinions should be the decisions themselves and not what this author or anyone else has to say about them. Students who use this book as a substitute for reading and briefing cases on their own, that is, as a substitute for grappling with the original language and reasoning in opinions, will probably find that they do better than those who read neither. But, especially in classes with essay examinations, students who rely solely on this book are likely to find themselves at a profound disadvantage when compared with those who conscientiously begin by reading and briefing cases, attend classes, participate in class discussions and study groups, and use this book and similar aids to check and further their understandings of such decisions.

This book provides a useful guide for how to brief cases and should generally point students in the right direction as to the meaning of key cases. However, wise students will quickly discover that cases often stand for more than one principle and that they might thus appear in some casebooks to illustrate issues other than the ones they illustrate here. Because it is arranged both topically and chronologically within chapters, this book will also help students to understand how the cases they read fit into larger historical contexts. There is no unanimously agreed-upon canon of the most important Supreme Court cases, and no two casebooks compiled by different authors will likely cover an identical list of cases. Thus, although students will undoubtedly find that many cases in their casebooks are not briefed here, they are likely to find that many cases are briefed here that are not in their books. They will thus have the opportunity to put their readings within larger contexts by reading summaries of other cases contemporary to the ones they are assigned.

In short, this book is a supplement to, not a substitute for, reading Supreme Court decisions and scholarly commentaries on them. It can point the way to understanding court decisions, but it cannot serve in place of close reading and intellectual grappling with such cases. The briefs contained here can provide skeletal outlines of the way that justices have thought, but students will need to read the cases closely to understand the Court's reasoning in depth.

The European Central Bank: The New European Leviathan? by David J. Howarth, Peter Loedel (Palgrave Macmillan) This book provides a theoretically inspired account of the creation, design and operation of the European Central Bank. Issues explored include theoretical approaches to the ECB, the antecedents of European monetary authority, the different national perspectives on central bank independence, the complex organisation of the Bank, the issues of accountability and the difficult first years of the ECB in operation.

The principal objective of this study is to provide a detailed analysis of the institutional structure and operation of the ECB, and through this analysis reasonably speculate on the Bank's future operation. Our study seeks to understand why the ECB was designed the way it was, how it fits into the overall EU policy-making system, and the debates about its institutional structure. The book explores the history of European central bank cooperation and co-ordination in the context of European monetary integration. The book also examines the preferences of key national actors (in particular French, German and British) that determined both the organization and independence of the ECB, as well as the ongoing debates about the Bank's design and operation. The complex issues of legitimacy, accountability and transparency – all within the larger construct of political independence – are explored in the context of member state attitudes and the present and future operation of the ECB. By bringing together in a systematic and comprehensive way the various issues of ECB power and independence, we seek to provide academics, students, analysts and the wider public with an accessible overview. With euro coins now firmly in the hands of nearly 300 million Europeans, it is all the more imperative to have a broadly encompassing explanation of the powerful ECB.

The book does not claim to make a definitive theoretical statement about the determinants of European integration or the creation of the European Central Bank. While we feel that the ECB should be considered a major step in the emergence of a unique confederal entity still rooted in its member states, we do not seek to explore the contribution of the ECB and EMU more generally in terms of the progress of European integration: this must be the subject of a future study. However, we do argue that the ECB's structure and operation, its successes and potential weaknesses have had and will continue to have an impact upon the shape of future efforts to create new supranational institutions and policies within the European Union. While some may fear the ECB 'Leviathan', the book argues that the ECB can be held accountable – both through existing structures and policies and possible future developments in European policy-making and institutional change.

The book is divided into seven chapters. Chapter 1 broadly outlines the theoretical and analytical approaches that can be applied to help explain the logic behind the creation of the ECB, its structure, independence and current operation. Chapter 2 draws out the historical development of European monetary authority in terms of its development as an 'epistemic community'. Starting in the postwar period, moving through the debates on EMU in the early 1970s and in the period from 1988 to 1991, and ending with the preparation for the launch of the single currency in January 1999, we provide a detailed analysis of the gradual construction of European monetary authority. In doing so, the authors provide historically informed insights into the ECB's structure and power. Chapter 3 explores the prevailing attitudes of the three leading member states of the European Union – Germany, France and Britain – towards European monetary authority and the EMU project more generally, in addition to the distinct national traditions of monetary policy-making that have largely shaped these attitudes.

Chapters 4 and 5 complement each other, focusing on the institutional structure of the ECB and an analysis of ECB independence. Chapter 4 describes the European System of Central Banks (ESCB) and the interaction of the ECB with other EU institutions – especially the Eurogroup, but also others such as the European Parliament. Chapter 5 then evaluates the institutional structure of the ECB through the analytical lens of political independence. To what extent is the ECB independent from the influence of key political actors and policy-makers? Does this independence make the ECB unaccountable and illegitimate in the eyes of the public and politicians? Given the European Union's ongoing struggle with questions of transparency and the democratic deficit, can the ECB operate independently without further sacrifice on these fronts? A great amount of literature has already explored this topic; we review it and bring it into the context of the overarching theme of the book.

Using the preceding chapters as a backdrop, Chapter 6 evaluates the ECB in action – its actual monetary policy during its first years in operation. Using the concept of credibility as a framework of analysis, this more journalistic chapter traces the monetary steps of the ECB from July 1998 through the launch of euro notes and coins in 2002. Here the authors focus in more detail on the interest rate debates, exchange rate politics and the role of ECOFIN, and evaluate the ECB's 'successes' and 'failures'. We suggest that the ECB – despite some problems in the area of policy credibility – is establishing itself and exerting its power more effectively, especially since the introduction of euro notes and coins on 1 January 2002. Finally, Chapter 7 concludes with a summary of the key arguments of our analysis and our expectations of the future behaviour of the ECB. We also draw out some institutional issues related to the future operation of the ECB as well as the European Union.

Methodologically, this book relies on a variety of approaches – including interviews with officials of the European Central Bank and other leading monetary and political officials from a number of member states, reviews of secondary sources and journalistic accounts, and the use of statistical data. Interviewees were given anonymity in order to encourage discussion and frankness. Although reliability remains a problem with any anonymous interview, the intent was not to pinpoint specific positions or catch an official slip-up, but rather to elicit open reflections on the role of the European Central Bank and of officials in the respective EU member states. The official positions of the ECB are also readily available from source material and published interviews in the press, and one can visit the Bank's website at www.ecb.int.

The Revival of Laissez-Faire in American Macroeconomic Theory: A Case Study of Its Pioneers by Sherryl Davis Kasper (Edward Elgar) Dr. Kasper covers the contributions of economists who opposed Keynesian thought and achieved fame and fortune by doing so. The analysis of modern economics is sophisticated, yet sufficiently transparent that the whole book contains only two equations.

The chapters generally trace the development of the Chicago school of laissez-faire economics from Frank Knight's studies of risk and uncertainty to Robert Lucas' rational expectations. Each chapter gives a brief introduction to one economist (Knight, Simons, von Hayek, Friedman, Buchanan or Lucas), followed by a summary of his thought, his methodology, and his conclusions. Dr. Kasper takes their intellectual development seriously while pointing out their limitations as well.

While Dr. Kasper is herself an institutionalist economist critical of mathematical legerdemain, she writes of these men sympathetically and with literary flair.

Though the hardback edition is too expensive to use as a textbook, its readability, for undergraduates through professionals, makes me hope that it will soon appear in paperback.

In the 1970s, the Keynesian orthodoxy in macroeconomics began to break down. In direct contrast to Keynesian recommendations of discretionary policy, models advocating laissez-faire came to the forefront of economic theory. Laissez-faire no longer stood as an exceptional policy endorsed for rare occurrences of market clearing; rather it became the policy standard. This book provides the definitive account of this watershed and traces the evolution of laissez-faire using the cases of its proponents, Frank Knight, Henry Simons, Friedrich von Hayek, Milton Friedman, James Buchanan and Robert Lucas.

By elucidating the pre-analytical framework of their writings, Sherryl Kasper accounts for the ideological influence of these pioneers on theoretical work, and illustrates that they played a primary role in founding the theoretical and philosophical use of rules as the basis of macroeconomic policy. A case study of the way in which interwar pluralism transcended to postwar neoclassicism is also featured.

The volume concludes that economists ultimately favored new classical economics due to the theoretical developments it incorporated, although at the same time, since Lucas uncritically adapted some of the ideas and tools of Friedman, an avenue for ideological influence remained.

Tracing the evolution of American macroeconomic theory from the 1930s to the 1980s, this book will appeal to those with an interest in macroeconomics and in the history of scholars associated with the Chicago School of economics.

Your Money or Your Life!: The Tyranny of Global Finance by Eric Toussaint (Pluto Press) By the time this book was released in English by Pluto Press in the spring of 1999, it had already appeared in six other languages: French, Dutch, Spanish, German, Turkish and Greek. For a book that does not hide its hostility to the neo-liberal project, this in itself is a sign of renewed interest in global alternatives to mainstream thinking. Meetings have been organised to launch the book in a number of countries in Latin America, Africa and Europe. The meetings have provided an opportunity to test the validity of the book's main arguments, and the results have been encouraging. As a result of the exchanges organised around the proposals advanced in chapter 17, these proposals will be reworked in line with the thoughtful criticisms and additions I have received.

A number of significant events have taken place since the book was completed in May 1998. They provide the raw material necessary for fine-tuning the book's main theses.


In a number of key countries around the world, we have seen either outright drops in production and consumption or significant drops in their rate of growth.

The term 'systemic crisis' is fitting in so far as the economic strategy of a number of big states, large private financial institutions and industrial multinationals has been unsettled – due to the growing number of sources of imbalance and uncertainty in the world economic situation.

From the very start, the capitalist system has gone through a large number of generalised crises. On occasion, its very survival was in doubt; but it has always managed to weather the storm. However, the human cost of these crises – and of the ways in which the capitalist system has emerged from them – is incalculable.

Capitalism may once again weather the storm. It is by no means sure that the oppressed will be up to the task of finding a non-capitalist solution to the crisis. Although victory is far from guaranteed, it is imperative that the oppressed reduce the human cost of the crisis and pursue a strategy of collective emancipation that offers real hope for all humankind.


Recent studies carried out by economists in government and UN circles have confirmed just how far buying power has dropped in various parts of the world. The Clinton administration's former Secretary of Labor, Robert Reich, for example, has said: 'Workers have less money to spend on goods and services [...] The crisis is upon us'. He adds: 'The sluggishness of American income levels is a highly sensitive matter, given the role played by household spending in overall economic performance. [Household debt] accounted for 60 per cent of available income at the beginning of the 1970s: it is now more than 90 per cent [...] We have hit the ceiling' (Robert Reich, 'Guerre a la spirale de la deflation', Le Monde, 21 November 1998).

The 1998 report of the United Nations Development Programme (UNDP) gives some idea of the levels of household debt. In response to the drop in real income, households have clearly opted to finance a greater and greater share of their spending with debt. 'Between 1983 and 1995, as a share of available income, debt has risen from 74 to 101 per cent in the USA; from 85 to 113 per cent in Japan; from 58 to 70 per cent in France.' In absolute terms, US household debt was 5.5 trillion (5,500 billion) dollars in 1997.

This phenomenon can also be found in the most 'advanced' countries of the Third World. For example, in Brazil in 1996, fully two thirds of all families earning less than 300 dollars per month were in debt – that is, one million of the 1.5 million families in this category. According to the UNDP, bad cheques are a common method for financing consumer spending in Brazil. Between 1994 and 1996, the number of bad cheques rose sixfold.

Robert Reich is quite right when he says that a ceiling has been reached. A recession in the North and an increase in interest rates in the South could lead to a huge drop in consumer spending in the North and across-the-board bankruptcy of households in countries of the periphery – in line with what we saw in the 1994-1995 Mexican crisis, and with what we have seen in the Southeast Asian crisis of 1997-1998 and the Russian crisis of 1998.

Three examples illustrate this fall in income for the majority of the world's population. First, the UNDP notes that in Africa, 'Consumer spending has on average dropped 20 per cent over the last 25 years'. Second, the UNDP notes that in Indonesia poverty could double as a result of the 1997 crisis. According to the World Bank, even before the crisis there were 60 million poor in Indonesia out of a total population of 203 million. Third, according to Robert Reich, real incomes continue to fall in much of Latin America. According to a World Bank report released at the end of 1998 (Agence France Presse, 3 December 1998), 21 countries experienced a fall in per capita income in 1997. The same report estimates that in 1998, some 36 countries – including Brazil, Russia and Indonesia – will register a drop in per capita income.

According to a 26 November 1998 press release issued by the Russian undersecretary of the economy, unemployment was expected to rise by 71 per cent between the end of 1998 and the beginning of 2001 – from 8.4 million to 14.4 million.


Up until early 1998, International Monetary Fund (IMF) director Michel Camdessus had played down the scale of the Mexican and Asian crises. By the time of the October 1998 joint World Bank-IMF summit, however, he had come around to saying that the crisis was indeed systemic. At that same gathering, Bill Clinton declared that the crisis was the most serious one the world had experienced in 50 years.


The severity of the crisis in a large part of the world economy has led a number of Establishment economists to subject IMF and G7-supervised policies to harsh criticism. Jeffrey Sachs was a leading exponent of shock-therapy policies in Latin America in the mid-1980s – the most brutal examples of which could be found in Bolivia – and in Eastern Europe at the beginning of the 1990s. By 1997, however, he was pillorying IMF and US-inspired policies in Southeast Asia. Unfortunately, this didn't stop him from overseeing the implementation in Ecuador of a ruthless austerity package in late 1998.

In the mid-1990s, Paul Krugman argued that increased free trade and global commerce would pave the way for growth in all those countries that joined in the globalisation process. As the crisis deepened and began to affect Brazil in 1998, Krugman suggested that the Brazilian president put in place coercive measures, for at least six months, to regulate capital flows. Robert Reich wondered aloud why the Clinton administration and other world leaders continued to defend tight-money and austerity policies at a time when such policies created a deflationary spiral. For one thing, he said, Third World countries should not be forced to make huge cuts in public spending and to increase interest rates before they are eligible for loans (Le Monde, 21 November 1998).

In the June 1998 edition of Transition, in a broadside against the Washington consensus, World Bank vice-president and chief economist Joseph Stiglitz denounces the IMF's short-sightedness. He argues that although there is indeed proof that high inflation can be dangerous, there is no such proof that very low inflation rates necessarily favour growth. Yet, for the moment, the IMF (and the World Bank, too, lest we forget) continues to promote the low-inflation dogma, even if this means destroying any possibility of economic recovery.

Nor have editorial writers at the Financial Times held back in their criticisms of the IMF: `The IMF's way of dealing with crises must also change. Its standard remedy was not appropriate for Asia, where the problem was mainly private-sector debt. Too much IMF money was used to bail out foreign creditors' ('How to change the world', Financial Times, 2 October 1998).

Making a major break with tradition, Stiglitz has even 'dared' to criticize the role of the sacrosanct markets in Latin America: 'The paradox is that the panicking market has, for reasons totally unrelated to the region, demanded that Latin American investments deliver unreasonably high interest and dividends to cover the perceived risks. By driving interest rates up and stock prices down, the markets risk doing severe damage to the Latin American economies' ('A Financial Taint South America Doesn't Deserve', International Herald Tribune, 19–20 September 1998).

Of course, the authors of these remarks have not exactly been won over to the cause of the oppressed. That being said, they do indeed reflect the unease Establishment economists feel over the patent inability of governments, financial markets and the international financial institutions to get the global economy back on a path towards growth.


The tendency towards concentration in the corporate sector has been given a huge boost as we approach the twenty-first century. There were more mega-mergers in 1998 than in any previous year – in banking, insurance, oil, chemicals, pharmaceuticals, automobiles and the media. This merger frenzy has amplified the power of a handful of companies over whole sectors of the global economy. The mergers have gone hand in hand with a renewed offensive on the employment front; they invariably mean dismissals and downsizing through 'voluntary' retirement.

At the same time, this striking increase in the concentration of capital has not necessarily meant greater stability for the companies that come out on top. Takeovers and mergers have proceeded with such reckless abandon that the new mega-firms are not likely to be any more resilient than other companies when confronted with abrupt shifts in the world economy.


In its 1997 and 1998 reports, the UNDP keeps tabs on how many of the world's wealthiest individuals one would have to assemble to come up with a total fortune of one trillion (one thousand billion) dollars – keeping in mind that this sum is equal to the annual income of nearly 50 per cent of the world population.

Using data from Forbes magazine's annual listing of the world's wealthiest individuals, the UNDP calculates that in 1996 it would have taken 348 of the world's mega-rich to put together one trillion dollars. By 1997, however, this figure had fallen to 225. At this rate, in a few years the richest 150 people might well own as much wealth as the total annual income of three billion people! The gap between holders of capital, on the one hand, and the majority of the population, on the other, is growing wider and wider.
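The arithmetic behind these figures is simple to check. Below is a minimal sketch: the counts (348, 225, 150) and the one-trillion total come from the text; the per-person averages are derived from them.

```python
# Arithmetic behind the UNDP's concentration figures (counts from the text).
TRILLION = 1_000e9  # one trillion dollars = one thousand billion

# How large was the average fortune needed to assemble one trillion dollars?
for year, n_rich in [(1996, 348), (1997, 225)]:
    avg = TRILLION / n_rich
    print(f"{year}: {n_rich} fortunes averaging {avg / 1e9:.2f} billion dollars")

# The text's projection: if only 150 people held one trillion dollars,
# the average fortune in the group would have to reach
print(f"Projection: 150 fortunes averaging {TRILLION / 150 / 1e9:.2f} billion dollars")
```

The average fortune in this group thus rose from roughly 2.87 to 4.44 billion dollars in a single year – an increase of about 55 per cent – which is what makes the 'richest 150 in a few years' projection plausible.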

The UNDP also makes a radical critique of Thatcherism without mentioning the Iron Lady by name: 'During the 1980s, the gap [between rich and poor] in the United Kingdom widened by a degree never before seen in an industrialised country.'


Neo-liberalism has been the dominant creed for some 20 years. One of the major arguments made by neo-liberal opinion-makers has been that the private sector is much more efficient than government in economic matters. Yet 1997 and 1998 have been replete with examples of private-sector inefficiency. The 1998 reports of the World Bank and the Bank for International Settlements (BIS) concede that it was the private companies of Southeast Asia that had amassed unsustainable debt levels, not government. The same reports say that the previous Third World debt crisis (from 1982 onwards) had resulted from excess public-sector debt. In other words, once the private sector was given free access to international financial markets, it (alongside the financial institutions of the North that provided the loans) proved to be just as short-sighted and reckless as government.

In the most industrialised countries, the 'hedge funds' that boosted their financial fortunes over the last 15 years have also been reeling of late. The best known example is that of Long Term Capital Management (LTCM), a misnamed company if ever there was one. By late September 1998, LTCM was on the verge of bankruptcy. It had 4.8 billion dollars in real assets, 200 billion dollars in leveraged funds in its portfolio, and a notional value of 1.25 trillion (1,250 billion) dollars in derivatives. It is worth noting that LTCM had been advised all along by the two recipients of the 1997 Nobel Prize in Economics, Myron Scholes and Robert Merton – two stalwarts of the 'science of financial risk', rewarded for their work on derivatives. As its bankruptcy loomed, even big international banks with conservative reputations admitted to having made imprudently large loans to LTCM. Had LTCM not been bailed out through the massive intervention of a number of big banks such as the Union des Banques Suisses (the biggest bank in the world before Deutsche Bank and Bankers Trust merged in late 1998), Deutsche Bank, Bankers Trust, Chase Bank, Barclays, Merrill Lynch, Societe Generale, Credit Agricole and Paribas, all these banks would have found themselves in a highly vulnerable position. Indeed, beyond reckless loans to LTCM, they have all increasingly become involved in speculative operations. In the second half of 1998, many of these big banks registered significant losses for the first time in years.

Finally, there is a long list of formerly state-owned companies that have in no way performed any better in private hands. Huge private industrial concerns have posted losses hand over fist as a result of strategic errors, particularly in the information technology sector.

Further proof of private-sector inefficiency has come from the monumental errors made by such private rating agencies as Moody's and Standard & Poor's. They had nothing but praise for countries now wallowing in crisis.


For the last 20 years, governments have said they would not come to the rescue of struggling companies and have privatised major state-owned concerns. Now, however, they have been rushing to bail out private-sector companies that threaten to go under. Funds for these rescue packages come from state coffers fed largely by taxes on working people and their families.

Here, too, the past two years have been telling. On 23 September 1998, the head of the US Federal Reserve convened a meeting of the world's top international bankers to put together a rescue package for LTCM ('Fed attacked over LTCM bail-out', Financial Times, 2 October 1998; Le Monde diplomatique, November 1998). Around the same time, the Japanese government was adopting a rescue plan for the country's private financial system, involving nationalisation of a part of private-sector debt – to the tune of 500 billion dollars to be shouldered by the state.

Thanks to IMF and World Bank intervention in the Southeast Asian crisis in 1997, some 100 billion dollars were pooled together to enable the region's private financial institutions to continue paying off their debts to private international lenders. Most of this money came from the state coffers of IMF and World Bank member-countries.

The October 1998 IMF package to keep Brazil afloat was also financed by public funds. The plan enabled Brazil to go on servicing its external and internal debts to the international and domestic private financial system. Private financial institutions categorically refused to contribute to this so-called rescue package. Instead, the IMF ensured that their debts would be paid off, and they cynically decided to hang back and refuse to make new loans to Brazil. They adopted exactly the same stance in the face of the 1982 crisis. The time has surely come to put an end to such publicly-funded bailout packages for private finance.


Right up until 1997, the IMF, the World Bank, the BIS and (more reluctantly) the United Nations Conference on Trade and Development (UNCTAD) sang the praises of financial liberalisation and deregulation. This, they declared, was the way forward for all countries seeking economic growth. Southeast Asia's high growth rates until 1997 were cited as living proof of the success to be had from pursuing such an approach. Once the region was plunged into crisis, the IMF, the World Bank and the BIS declared that the crisis was primarily due to the weakness of the region's private financial sector. This was the best argument they could find to obscure their own responsibility for what has happened.

Of course, the argument is wrong, and UNCTAD has been honest enough to say so. In the press release introducing its 1998 annual Report on Trade and Development, UNCTAD notes a weakening of Asia's private financial sector. This weakening, it says, is the result of the combination of three factors: first, the liberalisation of capital flows; second, high interest rates set by private financial institutions to attract foreign capital and discourage the flight of domestic capital; third, exchange rates fixing national currencies to the dollar.

Together, these factors produced a massive inflow of capital which thoroughly destabilised domestic financial markets. In other words: yes, the financial system was weak; but, no, this weakness was not a vestige of the pre-deregulation period, as the IMF, the World Bank and the BIS would have it. On the contrary, it was the policy of deregulation that weakened financial markets. Simply put, the huge inflow of short-term capital was not matched by a corresponding increase in productive activities – which require long-term investments. As a result, most short-term capital was invested in speculative activities, in strict accordance with criteria of capitalist profit.

Southeast Asia's financial system was no weaker than those of other so-called emerging markets. Instead, it was undermined by deregulation measures which gave free rein to supposedly high-profit short-term activities such as the quick buying and selling of (often vacant) real estate. According to Walden Bello, 50 per cent of Thai growth in 1996 stemmed from real-estate speculation. Although the IMF and the World Bank were supposed to be monitoring the economic reform process in these countries, their unflinching defence of neo-liberal precepts blinded them to the real problems at hand.


All but a handful of the countries of the periphery – which account for 85 per cent of the world's population – have now to endure yet another debt crisis. The immediate causes are: an increase in interest rates (which are actually falling in the countries of the North); a fall in all types of foreign capital inflows; and a huge drop in export earnings (caused by the fall in the prices of most of the South and the East's exports).

There has been a swift increase in the total debt owed by Asia, Eastern Europe (especially Russia) and Latin America. Short-term debt has increased, while new loans are harder to obtain and export earnings continue to fall. In relative terms, Africa has not been as hard hit by changes in the world situation: loans and investment by the North's private financial institutions have been so dismally low since 1980, things can hardly get any worse (except for South Africa).

With the 1997 Southeast Asian crisis spreading into Eastern Europe and Latin America, private financial institutions have been increasingly reluctant to make new loans to countries in the periphery (whether in the Third World or the former socialist bloc). Those countries which continue to have access to international financial markets – and continue to make government-bond issues in London and New York – have had to hike the guaranteed return paid on their issues in order to find buyers.

Argentina's October 1998 bond issue on the North's financial markets, for example, offered a 15 per cent rate of return – 2.5 times the average rate of the North's government bond issues. Yet this has not been enough to lure the North and the South's private lenders back from their preference for bonds from the North. As was the case in the early 1980s, when the last debt crisis hit, credit has become rare and dear for the periphery. Between 1993 and 1997, there was a steady increase in foreign direct investment (FDI) in Southeast Asia (including China) and the main economies of Latin America (drawn by the massive wave of privatisations). This tendency faltered in 1998 and could well do so again in 1999: FDI in Southeast Asia fell by more than 30 per cent between 1997 and 1998; and loans fell by 14 per cent between the first half of 1997 and the first half of 1998.

IMF-dictated measures in the countries of the periphery have led to recession, a loss of some of the key pillars of national sovereignty, and a calamitous fall in the standard of living. In some countries, these measures have merely worsened conditions that were already unbearable for much of the population.

While the incomes of domestic holders of capital in these countries continue to rise, there has been a disastrous fall in those of working-class households. This chasm is as wide as, or wider than, at any time in the twentieth century.

During the months of September and October 1998, for example, holders of Brazil's internal debt were receiving nearly 50 per cent in annual interest payments, with inflation hovering below 3 per cent. Brazilian capitalists and multinational companies, especially those based in Brazil, could borrow dollars at 6 per cent interest on Wall Street and loan them to the Brazilian government at between 20 and 49.75 per cent! All the while, these same capitalists continued to siphon most of their capital out of the country, to shelter themselves from abrupt changes in the country's economic fortunes.
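The scale of that arbitrage is easy to sketch. In the toy calculation below, the interest rates and inflation figure are those quoted above; the 100-million-dollar principal is a hypothetical amount chosen purely for illustration.

```python
# Back-of-the-envelope dollar carry trade into Brazilian government debt.
# Rates are those quoted in the text; the principal is hypothetical.
principal = 100e6         # borrow 100 million dollars on Wall Street (illustrative)
wall_street_rate = 0.06   # 6 per cent borrowing cost in New York
brazil_rate = 0.4975      # upper end of the 20-49.75 per cent paid by the government
inflation = 0.03          # Brazilian inflation hovering below 3 per cent

nominal_spread = principal * (brazil_rate - wall_street_rate)
# Deflate the Brazilian return to see the real yield to the lender:
real_yield = (1 + brazil_rate) / (1 + inflation) - 1

print(f"Nominal spread on the borrowed funds: {nominal_spread / 1e6:.2f} million dollars/year")
print(f"Real yield on the Brazilian leg: {real_yield:.1%}")
# This ignores exchange-rate risk -- which is precisely why, as the text notes,
# the same capitalists kept siphoning their capital out of the country.
```

At the top of the quoted range, 100 million borrowed dollars would throw off a nominal spread of 43.75 million dollars a year, a real return of roughly 45 per cent.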


Global public opinion began to shift in 1997 and 1998, in response to the failure of policies imposed by a combination of neo-liberal governments, domestic and foreign holders of capital and the multi-lateral financial institutions.

In the wake of the neo-liberal whirlwind, a large number of people in Southeast Asia, Russia, Brazil, Mexico, Venezuela, Argentina, Central America and Africa have seen a drop in their standard of living.

For the 400 million inhabitants of the former Asian `dragons' and `tigers', IMF has come to mean `I'M Fired'. Across the planet, including in Europe, a sizeable share of the population has begun to challenge neo-liberal policies. In some cases, this has taken on contradictory and confused forms. In most countries, the weakness of the radical Left and the slavish submission of the traditional Left to the dictates of the market (that is, of holders of capital) have created an opening for parties and movements that redirect the population's consciousness and will to act against a series of scapegoats, be they foreigners or followers of a different faith.

Successful resistance to the ongoing neo-liberal offensive is no easy matter; but those engaged in struggle have a number of points in their favour, including partial victories. The October 1998 decision by the French government of Lionel Jospin to withdraw from negotiations on the Multilateral Accord on Investments (MAI) came about in response to a broad campaign of opposition organised by an array of movements, trade unions and parties in France, the USA, Canada, the Third World and across Europe. To be sure, multinational corporations and the US government will again attempt to push through the MAI's objectives of total freedom for holders of capital. For the moment, though, they have suffered a major reversal. It is indeed possible to roll back such government and corporate initiatives through campaigns and mobilisation.

Another sign of the changing times was the UNCTAD statement of September 1998 in favour of the right of countries to declare a moratorium on foreign-debt payments. UNCTAD said: 'A country which is attacked can decide to declare a moratorium on debt-servicing payments in order to dissuade "predators" and have some "breathing room" within which to set out a debt restructuring plan. Article VIII of the IMF's Statutes could provide the necessary legal basis for declaring a moratorium on debt-servicing payments. The decision to declare such a moratorium can be taken unilaterally by a country in the face of an attack on its currency' (UNCTAD press release, 28 August 1998).

Of course, UNCTAD is a small player in comparison to the G7, the IMF, the World Bank and the World Trade Organisation (WTO). But this forthright defiance of the so-called inalienable rights of money-lenders reveals that governments in the periphery are finding it increasingly difficult to justify their support for the neo-liberal globalisation project.

The UNDP's 1998 report calculates that a 4 per cent tax on the assets of the world's 225 wealthiest people would bring in 40 billion dollars. This is the modest sum that would have to be invested annually in 'social spending' worldwide over a period of ten years in order to provide: universal access to clean water (1.3 billion people went without such access in 1997); universal access to basic education (one billion people are illiterate); universal access to basic health care (17 million children die annually of easily curable diseases); universal access to basic nutrition (two billion people suffer from anaemia); universal access to proper sewage and sanitation facilities; and universal access by women to basic gynecological and obstetric care.

Meeting these ambitious targets would cost only 40 billion dollars annually worldwide over a period of ten years. The UNDP report compares this figure to some other types of spending which humankind could easily do without: in 1997, 17 billion dollars were spent on pet food in the USA and Europe; 50 billion dollars were spent on cigarettes in Europe; 105 billion dollars were spent on alcoholic drinks in Europe; 400 billion dollars were spent on drugs worldwide; there was 780 billion dollars in military spending worldwide; and one trillion (1,000 billion) dollars were spent on advertising.
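These figures are internally consistent, as a few lines of arithmetic show (all numbers are taken from the text above):

```python
# Cross-checking the UNDP arithmetic, using only figures quoted in the text.
tax_rate = 0.04
annual_target = 40e9  # annual worldwide 'social spending' target, in dollars

# A 4 per cent tax yielding 40 billion implies combined assets of:
implied_assets = annual_target / tax_rate
print(f"Implied assets of the 225 wealthiest: {implied_assets / 1e12:.0f} trillion dollars")
# ...which matches the one trillion dollars attributed to them earlier.

# How many times over would each item of spending cover the target?
spending = {
    "pet food, USA and Europe": 17e9,
    "cigarettes, Europe": 50e9,
    "alcoholic drinks, Europe": 105e9,
    "drugs, worldwide": 400e9,
    "military spending, worldwide": 780e9,
    "advertising, worldwide": 1000e9,
}
for item, amount in spending.items():
    print(f"{item}: {amount / annual_target:.1f}x the annual target")
```

Note that a 4 per cent tax yielding 40 billion dollars implies total assets of exactly one trillion – the same figure the UNDP attributes to the 225 wealthiest individuals – and that worldwide advertising spending alone would cover the social-spending target 25 times over.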

1999 and 2000 are Jubilee years in the Judeo-Christian tradition which culturally dominates the select club of G7 countries. With yet another debt crisis upon us, the Jubilee tradition demands that we energetically call for the complete and total cancellation of the debts of the countries of the periphery.

A host of other measures must be implemented urgently, such as: a tax on international financial transactions (as called for by the ATTAC coalition); an inquiry into the overseas holdings of wealthy citizens of the countries of the periphery, leading to the expropriation and restitution of these holdings to the peoples of the countries in question when they are the result of theft and embezzlement; bold measures to restrict capital flows; an across-the-board reduction in the working week with corresponding hiring and no loss of wages; land reform providing universal access to land for small farmers and peasants; measures favouring equality between men and women.

Though incomplete and insufficient, these measures are a necessary first step towards satisfying basic human needs.

Nature of Money by Geoffrey K. Ingham (Polity Press) Mainstream economics fails to grasp the specific nature of money. It is seen either as a ‘neutral veil’ over the operation of the ‘real’ economy or alternatively as a ‘thing’ – a special commodity.
In this important new book, Geoffrey Ingham draws on neglected traditions in the social sciences to develop a theory of the 'social relation' of money. Money consists in socially and politically constructed 'promises to pay'. This approach is then applied to a range of important historical and analytical questions. The historical origins of money, the 'cashless' monetary systems of the ancient Near Eastern empires, the pre-capitalist coinage systems of Greece and Rome, and the emergence of capitalist credit-money are all given new interpretations. Capitalism's distinctiveness is to be found in the social structure – comprising complex linkages between firms, banks and states – by which private debts are routinely 'monetized'. Monetary disorders – inflation, deflation, collapse of currencies – are the result of disruptions of, or the inability to sustain, these credit-debt relations. Finally, this theory of money's 'nature' is used to clarify confusions in the recent debates on the emergence of new 'forms' and 'spaces' of money – global electronic money, local exchange trading schemes, the euro.

Excerpt: A theory of money should provide satisfactory answers to three closely related questions: What is money? Where does it come from, or how does it get into society? How does it get or lose its value? Part I examines the answers given by the main traditions in the social sciences. Chapter 1, 'Money as a Commodity and "Neutral" Symbol of Commodities', outlines the development of the analysis of money to be found in mainstream or orthodox economics. Here I elaborate my contention that this understanding of money is deficient, because it is quite unable to specify money – that is to say, how money differs from other commodities. It follows that if the question of what money is cannot be answered, then the other two – where it comes from and how it gets or loses value – are also likely to be unsatisfactory. Indeed, the question of how money gets into society has been dismissed as irrelevant by one branch of orthodoxy. As Milton Friedman famously remarked, economics might just as well assume that money is dropped by helicopter and then proceed with the analysis of the effects of different quantities on the price level. The quantity theory of money is deeply infused in both the academic and the common-sense answer to the third question of how money gets or loses its value. But I shall argue that there are good grounds for challenging the presumption of direct, linear causation between the quantity of money and the level of prices. Chapter 2, 'Abstract Value, Credit and the State', attempts to draw together the strands in the alternative conception of money which Schumpeter identified and which have occupied what Keynes referred to as the 'underworld' of monetary analysis. This account has two aims. The first is to make heterodoxy's commonalities more explicit and to trace their linkages. The second is to make more explicit what I take to be the inherently sociological nature of these nominalist, credit and state theories of money. Chapter 3, 'Money in Sociological Theory', is not a comprehensive survey. Rather, I wish, first, to illustrate deleterious effects of the narrowly economic conception of money (both neoclassical and Marxist) on modern sociology and, secondly, to rebalance the exegetical accounts of Simmel and Weber on money.
Their work on the actual process of the production of money, as opposed to its effects and consequences, was informed by the Historical School's analysis. I intend to restore and use it. These extended analytical critiques form the basis for chapter 4 in which the `Fundamentals of a Theory of Money' are sketched. This is organized in relation to the three basic questions referred to above. In effect, I shall attempt to reclaim the study of money for sociology. But it is not my intention simply to perpetuate the existing disciplinary divisions; nor do I advocate that a `sociological imperialism' replace economics' hegemony in these matters. Throughout this work, `sociological' is construed in what is today a rather old-fashioned Weberian manner in which, as Collins (1986) has persuasively argued, the social/cultural, economic and political `realms' of reality are each, at one and the same time, amenable to 'social/cultural', `economic' and, above all, `political' analysis.

Moreover, by a 'sociology of money' I intend more than the self-evident assertion that money is produced socially, is accepted by convention, is underpinned by trust, has definite social and cultural consequences and so on. Rather, I shall argue that money is itself a social relation; that is to say, money is a 'claim' or 'credit' that is constituted by social relations that exist independently of the production and exchange of commodities. Regardless of any form it might take, money is essentially a provisional 'promise' to pay, whose 'moneyness', as an 'institutional fact', is assigned by a description conferred by an abstract money of account. Money is a social relation of credit and debt denominated in a money of account. In the most basic sense, the possessor of money is owed goods. But money also represents a claim or credit against the issuer – monarch, state, bank and so on. Money has to be 'issued'. And something can only be issued as money if it is capable of cancelling any debt incurred by the issuer. As we shall see, orthodox economics works from different premisses, and typically argues that if an individual in a barter exchange does not have the pig to exchange for the two ducks, it would be possible to issue a document of indebtedness for one pig. This could be held by the co-trader and later handed back for cancellation on receipt of a real pig. Is the 'pig IOU' money? Contrary to orthodox economic theory, it will be argued that it is not, and, moreover, that such hypothetical barter could not produce money. Rather, I will argue that for money to be the most exchangeable commodity, it must first be constituted as transferable debt based on an abstract money of account. More concretely, a state issues money, as payment for goods and services, in the form of a promise to accept it in payment of taxes. A bank issues notes, or allows a cheque to be drawn against it as a claim, which it 'promises' to accept in payment by its debtor.
Money cannot be said to exist without the simultaneous existence of a debt that it can discharge. But note that this is not a particular debt, but rather any debt within a given monetary space. Money may appear to get its ability to buy commodities from its equivalence with them, as implied by the idea of the purchasing power of money as measured by a price index. But this misses out a crucial step: the origin of the power of money in the promise between the issuer and the user of money – that is, in the issuer's self-declared debt, as outlined above. The claim or credit must also be enforceable. Monetary societies are held together by networks of credit/debt relations that are underpinned and constituted by sovereignty (Aglietta and Orlean 1998). Money is a form of sovereignty, and as such it cannot be understood without reference to an authority.

This preliminary sketch of an alternative sociological analysis of money's properties and logical conditions of existence informs the historical and empirical analysis in Part II. Having rejected orthodox economics' conjectural explanations of money's historical origins, an alternative is presented in chapter 5, 'The Historical Origins of Money and its Pre-capitalist Forms'. First, the origin of money is not sought by looking for the early use of tradable commodities that might have developed into proto-currencies, but rather, following the great Cambridge numismatist Philip Grierson, by looking behind forms of money for the very idea of a measure of value (money of account). This again takes up and builds on the nineteenth-century Historical School's legacy, and adds from more recent scholarship. The second part of the chapter, which discusses early coinage and its development to its sophistication in the Roman Empire, has two aims. The first is to cast doubt on the almost universally accepted axiom in orthodox economic analysis that the quantity of precious metal in coins was directly related to the price of commodities – that is to say, for example, that debasing the coinage caused inflation. The second theme resurrects another contentious issue from the Methodenstreit – the question of whether the ancient world was 'capitalist'. At the time, the economic theorists argued that their explanatory models applied universally across time and space; 'economic man' and his practices were to be found throughout history. The Historical School, including Weber, argued otherwise, and I elaborate their case with a more monetary interpretation of pre-capitalist history. Chapter 6, 'The Development of Capitalist Credit-Money', pursues the theme by arguing that capitalism's distinctive structural character is to be found in the production of credit-money. Capitalism is founded on the social mechanism whereby private debts are 'monetized' in the banking system.
Here the act of lending creates deposits of money. This did not occur in the so-called banks of the ancient and classical worlds. Aside from its extended application of the theoretical framework, this chapter is also intended as a correction of the standard sociological account of the rise of capitalism. Here there is an overwhelming tendency to adhere to a loosely Marxist understanding in terms of the relations of production combined with a cultural element taken from The Protestant Ethic and the Spirit of Capitalism. One-sided emphasis on this book has led to a quite grotesque distortion of Weber's work (Ingham 2003). Chapter 7, 'The Production of Capitalist Credit-Money', involves a partial break with the historical narrative, to present a tentative 'ideal type' of the contemporary structure of social and political relations that produce and constitute capitalist credit-money. Again, I attempt to draw out the implicit sociology in some of the more heterodox economic accounts of the empirical 'stylized facts' involved in credit-money creation. As far as I am aware, no such analysis exists, and this chapter constitutes little more than a pointer to future research. Attention is drawn to the 'performative' role of orthodox economic theory in the social production of the 'fiction of an invariant standard' (Mirowski 1991). Case studies of three types of monetary disorder comprise chapter 8. The purpose is to illustrate the difference between the orthodox economic conception of money and the one being presented here. Orthodoxy has difficulty in accounting for monetary disorder, because of its commitment to the notions of money's neutrality and of long-run equilibrium as the normal state of affairs. If, however, money is seen as a social relation that expresses a balance of social and political forces, and there is no presumption that such a balance entails a normal equilibrium, then monetary disorder and instability are to be expected.
The rise and fall of the `great inflation' of the 1970s, the protracted Japanese deflation of the 1990s, and Argentina's chronic inability to produce viable money are examined. Chapter 9 again provides empirical examples to illustrate the approach. The first is a critique of the many recent conjectures that the impact of technical change (e-money, etc.) might bring about the `end of money'. These are the result of the fundamental and widespread category error whereby `moneyness' is identified with a particular `form' of money. The second looks at the claims that local barter schemes, using information technology, might significantly encroach on or even supersede formal money. Thirdly, the different analytical approaches to the eurozone single currency experiment are examined. A short Conclusion attempts to tie the argument and analysis together.
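The claim at the heart of Ingham's account of capitalist banking — that the act of lending itself creates deposits, rather than transferring pre-existing money — can be illustrated with a toy double-entry sketch. This is a hypothetical illustration of the standard bookkeeping mechanism, not Ingham's own notation; the class and method names are invented for the example.

```python
# Toy balance sheet showing how a bank loan creates a deposit.
# Hypothetical names; a sketch of the textbook double-entry mechanism only.

class Bank:
    def __init__(self):
        self.assets = {"loans": 0}          # claims the bank holds on borrowers
        self.liabilities = {"deposits": 0}  # money the bank owes its depositors

    def lend(self, amount):
        # The loan (an asset) and the borrower's new deposit (a liability)
        # are created simultaneously: no prior deposit is drawn down.
        self.assets["loans"] += amount
        self.liabilities["deposits"] += amount

    def money_supply(self):
        # Deposits are spendable money, so the loan has created new money.
        return self.liabilities["deposits"]

bank = Bank()
bank.lend(100)
print(bank.money_supply())  # prints 100: money that did not exist before the loan
```

The balance sheet stays balanced throughout, which is why the mechanism is invisible to anyone looking only at net positions: the `monetization' of a private debt shows up as two entries created in the same stroke.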

The Rise and Fall of the Soviet Economy: An Economic History of the USSR, 1945–1991 by Phillip Hanson (Longman) The economic dimension is at the very heart of the Russian story in the twentieth century. Economic issues were the cornerstone of Soviet ideology and the Soviet system, and economic issues brought the whole system crashing down in 1989-91. This book is a record of what happened, and it is also an analysis of the failure of Soviet economics as a concept. It examines why the Soviet economic system fell apart and explores whether the economy simply overreached itself through military spending. It seeks to discover whether the centrally planned character of Soviet socialism was at fault or whether a potentially viable mechanism came apart in Gorbachev's clumsy hands, and whether the failure means that true socialism is never economically viable. For those interested in Soviet or economic history.

The story of the post-war Soviet Union is a story that ends in failure. The Union of Soviet Socialist Republics, or USSR, disintegrated in 1991 into 15 separate states. The doctrine of Marxism-Leninism ceased to be the official basis of government programmes anywhere in the former Soviet territory. In none of the newly independent states of the former Soviet Union did a communist party have a constitutionally entrenched right to rule.

One measure of the completeness of the Soviet collapse is that now, after a decade of economic distress in most of those countries, nobody seriously expects a return to the communist order. In any history of the evolution of mankind's social arrangements - at any rate, in any history written early in the twenty-first century - the failure of Soviet communism can be treated as total.

What is less obvious now, and easily forgotten, is that as late as 1990 such a collapse seemed impossible. Almost all Soviet citizens and almost all foreign observers conceived of the Soviet Union as a fixture. It was widely understood to be in difficulties. Its political and economic arrangements were widely seen as both inhumane and ineffective. But hardly anyone expected the state to evaporate, let alone to do so any time soon. (One notable exception among Western specialists was Alexander Shtromas [see Shtromas 1988 and Chapter 7 below].)

What is even more easily forgotten now is that well into the 1970s the Soviet Union was seldom described as failing. Its economy tended, up to the early 1970s, to grow faster than that of the United States. For a genera­tion or more after the Second World War, the traditional Soviet aim of `catching up and overtaking' the West was not patent nonsense. In the early and middle 1970s the Soviet Union was judged to have achieved strategic parity with the US (or, more broadly, with NATO). Its nuclear arsenal and conventional forces, in other words, could match those of the West in possible conflicts, whether in Europe or globally. Its influence seemed if anything to be spreading in Africa and Asia.

In the course of thirty years, from the end of the Second World War, the Soviet Union had recovered from wartime devastation and massive loss of life. It had made remarkable strides in military technology. It had broken the US monopolies of, successively, the atomic bomb, the hydrogen bomb and inter-continental ballistic missiles. It also had - though this was not widely known at the time - formidable arsenals of biological and chemical weapons. And the lives of Soviet citizens had at the same time improved immensely. After the death of Joseph Stalin in 1953, higher priority had been given to agriculture, to housing and to manufactured consumer goods, and the new priorities made a difference.

The reign of terror, moreover, had ended. Soviet citizens no longer feared a visit from the secret police in the small hours: provided, that is, they had not engaged in public criticism of the authorities. There were still political prisoners; there was still a gulag - the chain of labour camps to which millions of people had been consigned under Stalin, almost at random. But it was possible to live securely if you were prepared to keep your head below the parapet - as almost everyone was.

The story of the post-war USSR is therefore a story of rise and fall. This is true of its political standing in the world. It is also true of its economy - to whose rise and fall between 1945 and 1991 this book is devoted.

This is a work of synthesis, not original research from primary sources. From the mid-1960s, however, I was following Soviet economic developments in real time. Some of the story from then on is based on my own interpretation of contemporary Soviet material - though with the considerable advantage of hindsight. For the late Stalin and Khrushchev periods I have relied heavily on the work of others - mainly but not exclusively Western.

The approach adopted in this book is that of an economist looking back at half a century or so of one country's history and trying to make sense, for non-economists, of that country's economic experience. Perhaps that makes it a work of economic history. If there is, in principle, a methodological difference between economic analysis of the past and economic history, it has never been clear to me what that difference should be.

In contemporary practice, as distinct from principle, the difference is clear. Economics is, as my colleague Somnath Sen has observed, `a broad church run by fundamentalists', and the fundamentalists see physics as the subject to be emulated: theoretical model, hypothesis, hypothesis-testing with empirical data. Economic history, on the other hand, is a broad church not run by fundamentalists. Cliometric modelling may be prestigious, but it is not mandatory. Discursive narratives and tentative interpretations are permissible. In that sense, at least, this is an exercise in economic history. It is also addressed to the general reader. I have tried to avoid the use of jargon. Where I have failed to avoid it, I have, I hope, explained it.

Inevitably, in an account of Soviet economic experience, numbers loom large - and are controversial. In this book I have made extensive use of recalculations of Soviet data by the US Central Intelligence Agency (CIA).

Some readers will find this provocative. Surely the CIA `got it wrong'? What about the new evidence from the Soviet archives?

The CIA, it can be said with hindsight, got a number of things wrong about the USSR during the Cold War. Its analysts overstated the size of the Soviet economy relative to that of the US, though they did not do so because of a conscious intent to deceive. The late 1970s forecast by some CIA analysts of an imminent fall in Soviet oil production was misleading about the timing of that fall. But the Agency's estimate of changes over time in Soviet total output has not been undermined by new information. This was for gross national product or GNP, for practical purposes equivalent, for the USSR, to gross domestic product, or GDP. Angus Maddison, the leading connoisseur of international growth statistics, has reviewed these figures and pronounced them healthy (Maddison 1998).

During the Cold War somebody had to estimate Soviet GNP if there was to be any chance of understanding what was going on in Soviet society and Soviet politics. The second purpose of such estimates - to inform Western threat assessments by comparing Soviet production potential and the economic burden of defence with those of the US - was less well served by CIA economic assessments. But the changes over time in GNP/GDP had also to be quantified, and here the Agency was more successful.

Recalculations were necessary because the Soviet official data for total output were in principle incapable of being compared with Western national income figures and in practice also seriously distorted. The reasons for this verdict on Soviet data are given in Chapter 1.

The CIA's recalculations used methods developed in academic studies by Abram Bergson. Their methodology and a large part of the primary information on which they were based were open to scrutiny at the time. The Agency's figures were indeed scrutinised, and they were criticised from a variety of viewpoints. The US Defense Intelligence Agency (DIA) routinely produced different GNP numbers and higher defence-spending numbers. Other analysts, including the present author, argued that in the 1970s and 1980s the CIA was overestimating the `real' (inflation-adjusted) growth of Soviet investment (see Chapter 5). Unofficial estimates made in the USSR itself, in semi-secrecy, by Grigorii Khanin made the trajectory of Soviet economic change in the post-war period look somewhat different in shape, but did not present a whole new story. In general the CIA estimates stood up fairly well to these critiques.

The Soviet archives and post-Soviet memoirs appear not to have changed the picture significantly. Soviet officials did not operate with a secret set of numbers that differed from those that were (from 1956 on) published. There were some secrets that were not published: the production of military equipment, non-ferrous metals output, the gold reserves. Otherwise, the officials shared their data with us. Like other members of the Soviet intelligentsia, Soviet officials put less trust than Western scholars did in the reported figures of output in physical terms (tons of steel and the like). Otherwise they were no better informed than we were.

An economist's approach to Soviet history differs from that of a general historian in one obvious way. The focus of this book is on the following questions. Why did the Soviet people produce and consume what they did when they did? How did the economy work? Did the way in which it worked change over time? What were the changes in economic policy? Why were they made, and did they achieve their aims? These matters are unavoidably present also in a general history; they are after all fundamental in any society. But they do not provide the main focus.

This difference of focus may contribute to some differences of judgement. Nikita Khrushchev's period of rule (1953-64) is treated rather favourably here, and not just for his liberalising de-Stalinisation. Yes, a great deal went wrong in the latter part of his time in office and, yes, he was ousted from the leadership. But he de-Stalinised the economy as well as the society: not by reforming the economic system but by drastically changing the priorities set for that system, in favour of the people. That was a profound change, and it endured.

Conversely, any economist finds it hard to be too respectful about Mikhail Gorbachev's achievements as a leader. To his leading Western biographer, Archie Brown, Gorbachev's approach to economic reform was `open-ended'. In Chapters 7 and 8 Gorbachev's record in economic policy is treated as riddled with avoidable inconsistencies from start to finish. It may have been open-ended, but it was full of holes in the middle, too.

In one respect this book resembles a certain kind of political history: the narrative is divided into chapters according to changes in leadership, albeit with two chapters each for Khrushchev, Brezhnev and Gorbachev. This disgracefully unfashionable approach, like that of a history of England organised around kings and queens, is hard to avoid. New leaders did, time and again, make a difference. None of them, it is true, could do much to alter the dipping trajectory of the Soviet economy. But the sequence of reforms and policy changes, however ineffective these ultimately were, is clearly linked to the changes in leadership.

The Soviet economy rose and fell between 1945 and 1991 above all in a relative sense. From growing faster than most of the capitalist world, the Soviet Union began in the 1970s to cease to gain ground. The absolute gap in conventional measures of economic performance, such as per capita GDP or GNP, began to increase.

The most recent retrospective measures of Soviet total GDP, in 1990 dollars, show absolute declines, before 1990, in only three years: 1959, 1963 and 1979 (Maddison 1995, Table C 16c). Earlier Western assessments, in 1970 roubles and in 1982 roubles, show only one or two years of absolute decline before the collapse of 1990-91. (Per capita output shows rather more declines, but still mainly an upwards movement.) But Soviet economic performance, despite the predominant absolute increase in output, was deteriorating in comparison with that of the capitalist world.

Relative deterioration was critical in three respects: it threatened the Soviet Union's ability to match the West militarily; it undermined the self-confidence of Soviet elites and their belief that their social system could deliver; and it weakened Soviet citizens' attachment to that system. In the chapters that follow, I have tried to chart this rise and fall, and also to suggest reasons for the transition from relative success to relative failure. The arrangement of the material is chronological.

In this narrative, two things seem reasonably clear. First, the long-run trajectory of the Soviet economy was little influenced by policy changes made deliberately to improve it. Like policymakers in Western countries, Soviet leaders announced all sorts of initiatives which seldom if ever affected long-term (ten years or more) rates of output growth. Identifying points of change in long-run economic trends is always problematic; insofar as we can identify them, they are seldom if ever explicable as the intended results of policy changes.

Second, Soviet leaders were the directors of a giant firm - what some American commentators called USSR Inc. (see Chapter 1 for an outline of the system). Some acted like chairmen of the board, others like chief executive officers, others again like both. One or two were, or became, more like part-time non-executive directors. The leadership team, at any rate, had formal powers to micro-manage everything, and ultimate responsibility for the economy, in ways that went far beyond the authority of any government in a developed market economy. The record will show, I trust, that each new leader came in with an economic agenda that for a time really did make a difference.

What they could do, and did do, was to change priorities in the allocation of resources and to tinker with methods of centralised economic administration. After Stalin, Nikita Khrushchev shifted priorities in favour of the consumer and in favour of agriculture. No subsequent leader, except Gorbachev briefly in the mid-1980s, tried to shift priorities back towards investment. After Stalin, all, I suggest, were frightened - perhaps unnecessarily - of trying the patience of the Soviet people too far. Leonid Brezhnev (1964-82) removed some of the excessive tinkering of Khrushchev's last years but reaffirmed the post-Stalin priorities. In the mid-1970s he presided over a historically remarkable further downgrading of the priority of investment. Mikhail Gorbachev (1985-91) tried briefly to reassert this, but gave up.

What none of them did was to alter the basic features of the Soviet economic system. Gorbachev might be blamed, or praised, for undermining the system, but he did not do so deliberately. Each of them ran into the limits on economic progress set by a system that took shape during industrialisation in the 1930s.

The view put forward in this book is that the origins of Soviet relative economic decline can be found in the liberalisation of the Soviet social order that followed the death of Stalin in 1953. This is a contentious view and not, I think, one that can be made into a testable hypothesis. Mark Harrison has modelled, using game theory, a story of the interplay between planners and planned, in which the decline of coercion produces an output collapse (Harrison 2001a). His version of events, however, fits, as it is intended to do, the output collapse at the end of the period, in 1990-91. His interpretation of the earlier trajectory is different from mine, and I shall return to these different interpretations in the final chapter.

My own view, in some ways more conventional than Harrison's, is put forward rather tentatively. This is that the erosion of discipline, with effects on economic performance, started earlier and was more gradual, and was one of several factors producing an earlier slowdown. Where we agree is in attributing to a decline in `plan discipline' a role in Soviet economic failure.

It may be worth adding that no interpretation that stresses the role of coercion entails a favourable view of Stalinism or a belief that economic development under a totalitarian regime of terror could have continued indefinitely. It amounts, rather, to saying that Soviet central planning was an authority-intensive economic system, and that (in my version of events) the erosion of authority made it, gradually, less effective. Liberalising economic reforms time and again introduced internal inconsistencies into the system. If there is a viable halfway-house system between authoritarian central planning with state ownership and capitalist free enterprise, neither the Soviet Union nor any of the East European communist states managed to locate it. (China's comparatively successful reforms may be another story. But they were probably not feasible in more developed economies with political regimes that were already softer by the 1970s than that of Beijing.)

The softening of the political and social regime, as distinct from a softening of plan discipline, also played its part. Not only did managers of production units no longer have reason to fear for their lives or their freedom if they failed to meet a key target. At the same time, as institutions and procedures became routine and - under Brezhnev - managers and planners stayed in post for long periods, networks developed in which a factory manager could and often did conspire with his `higher authorities', in the ministry supervising his industry, to keep his output targets down and his input allocations up. Then all concerned could report success, without having to exert themselves unduly.

If the undermining effects of reform are one paradox in Soviet economic history, they are not the only one. Another is that the Soviet collapse, when it came, was not the result of a revolt by citizens discontented with their economic lot. In the later chapters of the book it will, I hope, be made clear that the collapse had more to do with high politics than with low economics. The speed with which central planning was abandoned, and capitalism (of a sort) accepted, was indeed remarkable. The alacrity with which old practices were abandoned revealed, certainly, that little confidence in the established economic system remained either in elite circles or in the population at large. But there was no clamour from below for radical change. The notorious patience of Soviet citizens had by 1991 been sorely tried, but not exhausted.

The developments that account for these apparent paradoxes will, I hope, become clear in the later chapters. What is needed before the narrative begins is a brief account of the Soviet economic system that endured, with some modifications, throughout the period covered by this book. Its institutions and procedures had taken shape in the 1930s. A brisk outline of them at the outset will make it easier to follow the fortunes of the Soviet economy after the Second World War. This is provided in Chapter 1, before we embark on the story of post-war economic recovery.

Therapeutic Action: An Earnest Plea for Irony by Jonathan Lear (Other Press) How can a conversation fundamentally change the structure of the human psyche? Jonathan Lear’s new work is concerned not simply with a change of belief that emerges out of conversation, not even with massive changes of belief. Nor is he concerned only with changes in emotional life. Rather, he is concerned with basic changes in the ways the psyche functions. How could any conversation have such a result? There are three areas of inquiry which have addressed this question: religion, philosophy and psychoanalysis. Within psychoanalysis the form this question takes is: what is the therapeutic action of psychoanalysis? That is, what is it about the peculiar nature of psychoanalytic conversation that facilitates fundamental psychic change?

This book argues that, properly understood, irony plays a crucial role in therapeutic action. However, this insight has been difficult to grasp because the concept of irony has itself been distorted, covered over. It is regularly confused with sarcasm; it is often mistakenly assumed that if one is speaking ironically, one must mean the opposite of what one says, that one must be feigning ignorance, that irony and earnestness cannot go together. All of these assumptions are false. So, part of the therapeutic action of this book is conceptual therapy: we need to recover a vibrant sense of irony.

This book, then, is not merely about the therapeutic action of psychoanalysis, it is an enactment of conceptual therapy. It is thus written as an invitation to clinicians – psychologists, psychoanalysts, psychiatrists – to renew their own engagement with the fundamental concepts of their practice. To that end, the book investigates the concepts of subjectivity and objectivity that are appropriate for psychoanalysts, and the concepts of internalization and transference. There is also an extended discussion of the theories of Hans Loewald and Paul Gray – and how they do and do not fit together. The very idea that love, or Eros, could be a drive – as Freud postulated – is given a new interpretation.

Therapeutic Action will be of interest to anyone concerned with the central concepts of psychoanalysis. And, indeed, to anyone interested in how conversation can bring about fundamental psychic change.

"Jonathan Lear’s Therapeutic Action vindicates its Oscar Wildean subtitle—An Earnest Plea for Irony—by giving us a Kierkegaardian reading, not so much of Hans Loewald, but of the transferences between Loewald and Lear. Just as the surviving traces of Plato in Freud were to identify reality-testing with a cognition freed of its sexual past, even so Lear attempts his own version of Kierkegaard’s ‘The Case of the Contemporary Disciple’. Lear is Plato to Loewald’s Socrates, which is an audacious venture. Therapeutic Action has the high merit of helping me to rethink some of my own transferences."—Harold Bloom

"Jonathan Lear’s psychoanalytic and philosophical sophistication has enabled him to produce a lucid, incisive, and convincing argument about how psychoanalysis leads to a better life through a particular deployment of the capacity for love. Anyone who loves rigorous, creative argument will find this encounter with Lear and his thinking about why the analytic conversation is transforming an intellectual experience of the highest and most exciting order. This is one of those rare books that can actually let you change the way you think and the way you live." —Robert A. Paul, Ph.D.

"In this bold and intriguing book, Jonathan Lear asks: how do psychoanalysts communicate not with their patients but with each other? Do the forms of psychoanalytic writing continue to reflect a distorted notion of scientific rigor? Do analysts in writing about therapeutic action ignore a key insight: that what they say matters less than how they say it? Does the health of the psychoanalytic profession currently hinge on analysts becoming more aware of how the form of their communication affects their lives as analysts? With these provocative questions, Lear returns to his own point of departure as an analyst: his conversations with Hans Loewald and Loewald’s paper on the therapeutic action of psychoanalysis. Honoring the dying wish of his mentor, he seeks to discover in a field riddled with discipleship how not to become a disciple. And with this ultimately personal ‘how-not-to’ book, Lear engages the difficult question: how to write about the process of psychic change without betraying either love or science. Therapeutic Action will enliven the thinking of anyone involved in analyzing the psyche." —Carol Gilligan, author of In a Different Voice and The Birth of Pleasure  

Psychoanalytic Knowledge and the Nature of Mind edited by Man Cheung Chung (Editor), Colin Feltham (Palgrave Macmillan) presents cutting edge thinking on some fundamental ideas in psychoanalysis by international scholars in the field of the philosophy of psychoanalysis. It explores the nature of psychoanalytic knowledge in the light of contemporary philosophical views or critiques of a diversity of topics relevant to psychoanalysis: the philosophy of mind; the notion of changing oneself; religion; the notion of interdisciplinary links with psychoanalytic knowledge; post-Freudian psychoanalytic knowledge and challenges to psychoanalytic methodology.

This book is concerned with the nature of psychoanalytic knowledge. Why is a new book on this topic justified? The answer is partly reflected by Neu's remark:

Freud's influence continues to be enormous and pervasive. He gave us a new and powerful way to think about and investigate human thought, action and interaction. He made sense of ranges of experience generally neglected or misunderstood. And while one might wish to reject or argue with some of Freud's particular interpretations and theories, his writings and his insights are too compelling to simply turn away. There is still much to be learned from Freud.

Indeed, due to Freud's `enormous and pervasive influence' on many aspects of our cultures, our ways of thinking, writings and insights, it would be foolish to claim that there is nothing more we can learn from him. Hence, the present book.

To embark on a project which aims to explore the nature of psychoanalytic knowledge is a difficult task. One reason for such difficulty is the absence of any one psychoanalysis of the kind that Freud insisted on, when he was constructing psychoanalytic knowledge. That is, Freud believed that he was the only person or authority who could validate and justify contributions which claimed to be part of psychoanalysis. Over the past few decades, however, psychoanalytic knowledge has grown so substantially that, if Freud were alive, he probably would have been shocked by and possibly distraught at the increasingly diverse ways in which we now understand, discuss and debate psychoanalysis. Today, the view that there is only one psychoanalysis or one type of psychoanalytic knowledge is scarcely held, as Schwartz remarked:

I think we no longer have to treat Freud as a father but can treat him as a colleague, as the first theorist of the analytic hour. I think we are now mature enough to begin to look for ways to engage each other's metaphors and to handle all the difficult and threatening feelings that must inevitably accompany a search for deeper understanding of ourselves, of our place in the world and of our capacity for change.

One way to ease the difficulty of this task is to confine ourselves to a particular approach and to concentrate on selected topics which bear relevance to psychoanalytic knowledge. The approach we take is philosophical in nature. We align ourselves with the idea that psychoanalysis is intimately related to philosophy. As Cavell suggests, 'psychoanalytic theory might better be read as philosophy, illuminating as it does conceptual issues about the nature of mind and thought' (Cavell, 1988, p. 859). Primarily, we aim to explore the nature of psychoanalytic knowledge in the light of contemporary philosophical views or critiques of a diversity of topics relevant to it. These topics are the philosophy of mind, the notion of changing oneself, religion, interdisciplinary links with psychoanalytic knowledge, post-Freudian psychoanalytic knowledge, and challenges to psychoanalytic methodology. The diversity in the topics that we cover, of course, reveals the profound nature of psychoanalytic knowledge. We set the scene for our chosen path of philosophical exploration by our opening chapter which demonstrates one example of the influence of philosophy on the formation of psychoanalytic knowledge, followed by another chapter investigating the general relationship between philosophy and psychoanalysis.

Minds and Machines: Connectionism and Psychological Modeling by Michael Robert William Dawson (Blackwell Publishers) Synthetic psychology, part of the wider field of cognitive science, is based on the idea that building simple systems and placing them into environments will result in emergent phenomena that can then be broken down into relatively simple component parts, avoiding the unnecessary complexities of analytic psychology. Dawson (psychology, U. of Alberta, Canada) promotes the value of synthetic psychology, exploring psychological modeling and the building of "models that behave." He discusses the use of connectionist models that consist of three building blocks: storing associations in connection weights, using nonlinear activation functions, and creating sequences of decisions.

It is noon on a beautiful March day in Edmonton, Alberta. At King Edward Elementary School, a group of schoolchildren aren't playing outside in the sunshine during the lunchtime recess. Instead, they form a crowd in a dark hall that is only illuminated by a small, bright flashlight resting on the middle of the floor. They are acting like scientists, and have in hand pencils and index cards for recording their observations. The focus of their scientific interest is the behavior of two small Lego robots that are navigating through the hallway. Both robots look like tiny tractors from outer space. Motors turn two large wheels at the rear, and a smaller front wheel turns as a robot steers through the hallway. Two small red LEDs shine like headlights mounted on the front. A thin, flexible barrier surrounds each robot, looking like a flexible hoop or shell.

One of the robots wanders down the hall, away from the flashlight, bumping into dark baseboards. As it comes into contact with the wall, it stops and does a short gyrating dance. Sometimes this causes it to point towards the light, but soon it steers itself to point in another direction. On several occasions, the students have to scramble out of the way of an approaching machine. The second robot spends more time bumping into the flashlight. When it is steering, it slowly moves in and out of the pool of light that the flashlight provides.

These students have had some experience building and programming other Lego robots as part of a weekly science fair club. They understand the function of the components that they can see on the moving robots. However, they did not construct these two machines. Their task was to try to figure out why each robot behaved the way that it did. By inspecting their movement, could the students come up with a general story about the program stored in each robot, about how each robot sensed the world, or about why the robots seemed to be different?

When they observed the robots behaving independently, their previous experience was evident. Many of the kids wrote down observations like "One likes the light and the other one likes the dark." Nevertheless, some students came up with theories that far exceeded my programming abilities. One suggested that one of the robots "thinks when stops, figures out things, searches for dark." The complexity of their theories – or at least the complexity of the programming that their theories required – increased dramatically when they observed the two robots moving at the same time: "They want to get away from each other." "The black robot likes to hit things and the green robot likes people." "Together they attack things."

It is later that same week. University students in an undergraduate psychology course find themselves with pens and index cards in hand, facing the same task as the kids at King Edward Elementary School. The undergraduates have a strong technical background in the science of behavior, but have had no previous experience with Lego robots. Many of their observations of robot behavior lead to proposals of very sophisticated internal mechanisms, and of complex relationships between the two machines: "The turquoise robot seemed to be the smarter robot. It first began to move in a circular motion, but seems to be able to adjust its behavior according to the behavior of the black robot." "They must be sending sequence information to each other." "Turquoise keeps trying to attack Black." "Now Turquoise moves to where the flashlight used to be." "Turquoise seems to be more exploratory, directed." "The robots took turns moving. One moved, while the other remains stationary." "Turquoise hesitates until Black hits the light. Turquoise then follows Black off to the right."

The apparent complexity of the robots' behavior is, perhaps surprisingly, not evident in their internal mechanisms. The two robots are variations of one of the vehicles described in a classic text on synthetic psychology (Braitenberg, 1984). The two LEDs are part of a pair of light sensors that measure the brightness around them. Each light sensor is connected to a motor, and the motor's speed is determined by the light sensor's signal. In one robot, the light sensor on the right of the machine drives the right motor and the left sensor drives the left motor. In the other robot, the connections between light sensors and motors are crossed so that the light sensor on one side sends a signal to the motor on the other side. The barrier surrounding the robot, when pushed, depresses one of four touch sensors mounted on the body of the robot. When any one of these sensors is activated, the motors are stopped, and then a reflex is initiated in which the two motors run backwards and then forwards at randomly selected speeds for a short period of time. These mechanisms, and only these mechanisms, are responsible for one robot preferring the dark and the other preferring the light, as well as for any apparently sophisticated interactions between the two machines.
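The wiring just described is easy to express in code. The sketch below is illustrative only – the function names and the 0-to-1 brightness units are my own, not Dawson's or Braitenberg's actual programs – but it captures how little machinery is involved:

```python
import random

def motor_speeds(left_light, right_light, crossed):
    """Map the two light-sensor readings to motor speeds.

    Uncrossed wiring: each sensor drives the motor on its own side,
    so the brighter side spins faster and the vehicle veers away
    from the light. Crossed wiring: each sensor drives the motor on
    the opposite side, so the vehicle turns toward the light.
    """
    if crossed:
        return right_light, left_light   # (left_motor, right_motor)
    return left_light, right_light

def bump_reflex():
    """Touch-sensor reflex: back up, then go forward, at randomly
    selected speeds, as described for the barrier's touch sensors."""
    backward = (-random.random(), -random.random())
    forward = (random.random(), random.random())
    return backward, forward

# A light off to the vehicle's left: the left sensor reads brighter.
print(motor_speeds(0.9, 0.2, crossed=False))  # (0.9, 0.2): veers right, away from the light
print(motor_speeds(0.9, 0.2, crossed=True))   # (0.2, 0.9): veers left, toward the light
```

Everything the students read as preference, strategy, or social behavior emerges from this sensor-to-motor mapping, the reflex, and the shared environment.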

1.1 Synthetic vs. Analytic Traditions

When asked to describe and explain robot behavior, both sets of students were facing the situation that makes scientific psychology difficult. The only given is the external behavior of a complicated system. The internal processes that mediate this behavior cannot be directly observed. The challenge is to infer plausible and testable theories of these unknown internal processes purely on the basis of what can be seen directly. "How do we represent information mentally and how do we use that information to interact with the world in adaptive ways? The problem persists because it is extraordinarily difficult, perhaps the most difficult one in all of science" (Paivio, 1986, p. 3).

In spite of this difficulty, psychology has made many advances by carefully observing and analyzing behavioral regularities. For example, psychology has developed many detailed theories of the intricate processes involved in visual perception. These theories can predict minute aspects of behavior with astonishing accuracy, and are also consistent with discoveries about the underlying structure of the brain.

However, some researchers would argue that in spite of such success, psychology and cognitive science in general require alternative research strategies. There is a growing tendency in cognitive science to adopt a radically different – and nonanalytic – approach to understanding mental phenomena. This approach is evident in research associated with such labels as synthetic psychology, behavior-based robotics, or embodied cognitive science (e.g., Brooks, 1999; Pfeifer & Scheier, 1999). This research is based upon the general assumption that theory building in cognitive science would be better served by synthesis than analysis.

Practitioners of embodied cognitive science would not be surprised that the students came up with theories that overestimated the complexity of the two robots. They would also predict that these theories would become more and more complicated as the scene being observed became more complex (e.g., by containing two moving robots instead of just one). According to synthetic psychology's "law of uphill analysis and downhill synthesis," a theory created by analyzing a complicated situation is guaranteed to be more complicated than a theory created using the synthetic approach (Braitenberg, 1984).

One reason for this is that when we observe complex behavior, we have difficulty determining how much of the complexity is due to the mechanisms of the behaving agent and how much is due to the environment in which the agent behaves. For the kids in the hallway, the problem is to decide how much of the behavior is explicitly programmed, and how much is the result of both static and dynamic environmental variables. It seems that we have a tendency to attribute more intelligence to the behaving system than might be necessary.

Embodied cognitive science proposes that simpler and better theories will be produced if they are developed synthetically. In the most basic form, this is done as follows. First, a researcher decides on a set of basic building blocks, such as a library of primitive operations. For the two robots, the building blocks were the sensors, the motors, and a relatively simple programming language developed by Lego. Second, a system is constructed by organizing these building blocks in a particular way. For the two robots, this was done by physically situating the sensors and motors in a particular fashion, and by writing elementary code to convert sensor readings into motor speeds. Third, the system is situated in an environment, and its behavior is observed. For the designer, one of the interesting questions is whether the observed behavior is more surprising or interesting than might be expected given what is known about how the system was constructed.
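The three steps can be made concrete with a toy example of my own devising (not from the book): assuming a hypothetical one-dimensional world with a light at position 0, the building block is a brightness sense, the constructed system is a one-rule agent, and the final function situates it in the world and records its behavior:

```python
def brightness(pos):
    """Step 1, a primitive building block: sensed brightness of a
    light located at position 0, fading with distance."""
    return 1.0 / (1.0 + abs(pos))

def agent_step(pos):
    """Step 2, the constructed system: a single rule assembled from
    the primitive -- move one step toward the brighter side."""
    return pos - 1 if brightness(pos - 1) > brightness(pos + 1) else pos + 1

def situate(start, n_steps):
    """Step 3: place the agent in the environment and simply record
    what it does, so its behavior can be observed afterwards."""
    trajectory = [start]
    for _ in range(n_steps):
        trajectory.append(agent_step(trajectory[-1]))
    return trajectory

print(situate(5, 5))  # [5, 4, 3, 2, 1, 0] -- the agent homes in on the light
```

Nothing in the one-rule agent mentions "seeking" or "goals"; the light-seeking trajectory is a product of the rule and the environment together.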

If these three steps are followed, then there are two general expectations. First, because the system is constructed from known components, the researcher should have an excellent understanding of how it works. Second, when the system is situated in an environment, the interaction between the system and this environment should result in emergent phenomena. These emergent behaviors will be surprising in the sense that they were not directly intended or programmed into the system. The net result of all of this is (hopefully) a better and simpler theory of the complex behavior than would have been the case had the theory been created just by analyzing an existing system's behavior.

This book falls into two main parts. Chapters 1 through 8 explore what is meant by the term "psychological model." After reviewing the general rationale for modeling in psychology, we will discuss a wide variety of different approaches: models of data, mathematical models, and computer simulations. We will see that these can be viewed as falling at different locations on a continuum between modeling behavior and building models that behave. This will lead into a distinction between analytic and synthetic approaches to building psychological theories. This discussion ends with an example of connectionism as a medium that can provide models that are both synthetic and representational – but whose representations must be discovered by analysis.

Chapters 9 through 13 attempt to provide some basic concepts that will enable the reader to explore synthetic psychology using connectionist models. We consider what can be done with three general connectionist building blocks: storing associations in connection weights, using nonlinear activation functions, and creating sequences of decisions. An attempt is made to show that even some very old and simple connectionist architectures are relevant to modern psychology. Some techniques for analyzing the internal structure of a connectionist network are provided. A website of supplementary material for this book (www.bcp.psych.ualberta.ca/~mike/Book2/) provides free software that can be used to generate connectionist models.
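As a flavor of the first two building blocks, here is a minimal sketch (my own, not the book's software or the website's code): a single unit that stores an association in its connection weights via a Hebbian update, and responds through a nonlinear logistic activation function:

```python
import math

def logistic(net):
    """Nonlinear activation function: squash net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def respond(weights, pattern):
    """Unit output: logistic of the weighted sum of its inputs."""
    return logistic(sum(w * x for w, x in zip(weights, pattern)))

def hebbian_update(weights, pattern, output, rate=0.1):
    """Store an association in the connection weights: each weight
    grows by the product of its input and the unit's output."""
    return [w + rate * x * output for w, x in zip(weights, pattern)]

# Repeatedly pair an input pattern with an active output.
weights = [0.0, 0.0, 0.0]
stored = [1.0, 0.0, 1.0]
for _ in range(20):
    weights = hebbian_update(weights, stored, 1.0)

print(round(respond(weights, stored), 2))           # 0.98: strong response to the stored pattern
print(round(respond(weights, [0.0, 1.0, 0.0]), 2))  # 0.5: indifferent to a novel pattern
```

Even this toy unit shows the analytic burden the book takes up later: the "memory" lives in nothing but a list of weights, and recovering what a trained network represents requires inspecting that internal structure.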

Practical Ecocriticism: Literature, Biology, and the Environment by Glen A. Love (Under the Sign of Nature: Explorations in Ecocriticism: University of Virginia Press) is the first book to ground environmental literature firmly in the life sciences, particularly evolutionary biology, and to attempt to bridge the ever-widening gulf between the “Two Cultures.” Glen Love--himself one of the founders of ecocriticism--argues that literary studies has been diminished by a general lack of recognition for the vital role the biological foundation of human life plays in cultural imagination. Love presents with great clarity and directness an invaluable model for how to incorporate Darwinian ideas--the basis for all modern biology and ecology--into ecocritical thinking.

Beginning with an overview of the field of literature and environment and its claim to our attention, and arguing for a biologically informed theoretical base for literary studies, Love then aims the lens of this critical perspective on the pastoral genre and works by canonical writers such as Willa Cather, Ernest Hemingway, and William Dean Howells. A markedly interdisciplinary and refreshingly accessible work, Practical Ecocriticism will interest and challenge the entire ecocritical community, as well as humanists, social scientists, and others concerned with the current rediscovery of human nature.

Unnatural Disasters: Case Studies of Human-Induced Environmental Catastrophes by Angus M. Gunn (Greenwood) This reference resource describes both the scientific background and the economic and social issues arising from environmental disasters caused primarily by human activity. Categorized by the type of tragedy--including coal mine tragedies, dam failures, industrial explosions, and oil spills--this one-stop guide provides students with descriptions of some of the world's most tragic environmental disasters.

Awareness of disasters anywhere in the world is vital to the preservation of our global environment. No longer can we be indifferent to events in far-off countries. Many years ago the poet John Donne reminded us of this when he wrote, "No man is an island, entire of itself," and went on to show how all humanity and its environment form one interdependent whole. The tragedy of mercury poisoning in Iraq could have been prevented if those involved had been familiar with Japan's Minamata disease, which had been diagnosed and treated 15 years earlier. In light of this example, it is small wonder that George Santayana wrote, "Whoever forgets the past is doomed to relive it." This book will heighten our awareness of past human-induced—as distinct from natural—disasters so that we can care more responsibly for the environment of planet Earth.

There are natural disasters and there are human-induced ones. The former are familiar—earthquakes, hurricanes, floods, droughts—events over which we believe we have little control and in which we do our best to minimize damage to ourselves and our property. Human-induced disasters appear to be fundamentally different. They are regarded as the results of human error or malicious intent, and whatever happens when they occur leaves us feeling that we can prevent a recurrence. More and more, we find that human activity is affecting our natural environment to such an extent that we often have to reassess the causes of so-called natural disasters. Preventable human error might have contributed to some of the damage.

Take, for example, the great San Francisco earthquake of 1906. The firestorm that swept over the city immediately after the quake, causing far more damage than the direct impact of the earthquake, could have been minimized had alternatives to the city's water mains been in place. The story was similar when the Loma Prieta earthquake struck in 1989. The Marina District of San Francisco, which was known to be unstable and had been severely damaged in 1906, was subsequently developed and built up. When Loma Prieta struck, the district collapsed due to liquefaction. The shaking of the relatively loose soil changed it into a liquid, and buildings sank into it. These secondary effects of human action or inaction are increasingly important considerations in the study of disasters.

The human-induced disasters described in this book deal with unique events in which human activities were the principal causes of the tragedies. Sometimes the event was a result of ignorance, and at other times it was due to either error or poor judgment. There are also instances—fortunately, few—in which those responsible deliberately intended to cause destruction. The terrorists who bombed the World Trade Center are one example. The events, scattered throughout the twentieth century and the beginning of the twenty-first century, are drawn from many countries. The United States accounts for the largest number from any single country because it is an open society and its human-induced accidents are readily identified. Few other countries are as ready to publicize their mistakes, so fewer documented case studies are available from them.

I selected case studies that significantly impacted the environment both immediately and over time. Oil spills are included because they are a continuing threat to the physical environment. Coal-mining deaths were far too numerous in the past, and even today there are too many. The careless handling of poisonous chemical substances, as in Bhopal, is a major threat to health. There is no attempt to include all the human-induced disasters of the last hundred years. Rather, selected examples of different kinds of events on land or water or in the air illustrate the various types of disasters.

The unique nature of disasters creates its own rare responses. Emotions are triggered in new ways. There is a sense of isolation felt by both individuals and communities, and this can sometimes lead to passivity or even paralysis if the event is catastrophic. These people are in shock, as if they were recovering from major surgery and feeling that their bodies cannot absorb any more change. Some people react in other ways. There are individual acts of extraordinary bravery. There is also the opposite, looting and assault as people react to what they see as the total breakdown of the social order.

Each case study covers a single event, not a process occurring over time, and so the disasters of wars and epidemics are excluded. The nuclear bombing of Hiroshima, although a war event, is included because of the vast environmental consequences that followed it. Case studies are grouped in chapters by category. Within each chapter, the case studies are alphabetized by geographical location. Each case study opens with a summary overview. Causes, consequences, and cleanup efforts, or remedial actions to ameliorate damage or prevent recurrence, follow. At the end of each chapter, a list of selected readings is provided for further study.

Scale and Geographic Inquiry: Nature, Society, and Method by Eric Sheppard (Editor), Robert B. McMaster (Blackwell Publishers) compares and integrates the various ways geographers think about and use scale across the spectrum of the discipline, and includes contributions by human geographers, physical geographers, and GIS specialists. Includes index and references. Softcover, hardcover available from the publisher.

The papers collected in this volume were originally commissioned as a series of public lectures celebrating the 75th anniversary of the Geography Department at the University of Minnesota during the spring of 2000. The Geography Department at the University of Minnesota is the fifth oldest in the United States, founded in 1925 by Darrell Haug Davis who had recently moved from the University of Michigan. Richard Hartshorne joined in 1924, followed by Ralph Hall Brown and Samuel Dicken, who together established Minnesota's reputation as a center for scholarship in historical, philosophical, and human geography. Hartshorne's Nature of Geography (1938), and Brown's Mirror for Americans (1941) and Historical Geography of the United States (1948) became classics in their fields. Major changes occurred after World War II with the retirement of Davis, the departure of Hartshorne and Dicken, and the death of Brown. Renewal of the department occurred under Jan Broek, and the intellectual leadership of John Borchert and Fred Lukermann, during which time the department expanded its scholarly profile to incorporate physical geography. John Fraser Hart, E. Cotton Mather, Philip W. Porter, Joe Schwartzberg, and Yi Fu Tuan established the department's national reputation as a center for cultural geography while Richard Skaggs and Dwight Brown established the department's biophysical program. Later, during the 1960s and 1970s the department, partially because of its situation in a thriving metropolitan region, developed considerable depth in urban geography with John Borchert and John Adams and other faculty members at the University of Minnesota. The department, now with some 22 faculty members, provides undergraduate and graduate instruction that emphasizes a broad education in human, physical, environmental geography, and geographic information science/systems, stressing both a strong theoretical and a rigorous quantitative and qualitative empirical training in the discipline.
Current areas of strength include urban and economic geography, cultural geography, nature–society relations, geographic information science, GIS and society, climate and biogeography, and geographic education.

The topic of the lecture series, "Scale and Geographic Inquiry," was chosen to reflect the department's reputation as a broad-based community of geographers with an abiding interest in the nature of geographic inquiry. Geographic scale has received considerable scholarly attention across the discipline in recent years, making it an ideal focus for examining the range of geographic inquiry. We invited as speakers a mix of geographers representing the breadth of the field, each a leading researcher on questions of geographic scale over the last decade. Each author gave a lecture and an informal seminar with faculty and students, and was asked to provide a chapter for this volume. We also commissioned a chapter on scale in biogeography to balance other contributions in physical geography.

The result is a set of essays by leading researchers that demonstrate the depth and breadth of scholarship on geographic scale, which we hope provides a definitive assessment of the field and a benchmark for further work on geographic scale in and beyond geography. While we began with the idea of categorizing these essays as either human, biophysical, or methods, in fact many defy such categorization. For example, Walsh et al. (Chapter 2) embrace methods and biophysical geography, Goodchild (Chapter 7) discusses cartography and human geography, and Swyngedouw (Chapter 6) applies a human geographic approach to environmental geography. One of the distinguishing features of geography over the last 40 years, emphasized at Minnesota, has been its ability to embrace an exceptionally broad range of epistemologies, methodologies, and topics, eschewing a canonical approach to the discipline. As the chapters that follow demonstrate, this diversity can create tensions between what may appear to be fundamentally different approaches to geographic scale. Yet, as we seek to show in our introductory and concluding essays, the diversity hides considerable overlap. Geography's vitality depends on mutual respect and cross-fertilization between its different proponents, and certainly our understanding of geographic scale can only be enriched by engagement across, and not just within, the different approaches to the topic collected in this volume.

Ball RedBook, Volume 1: Greenhouses and Equipment, 17th edition by Chris Beytes (Ball Publishing) This professional horticulture reference, which has been in print continuously for 70 years, is fully revised and updated in this new edition. Based on real-life experiences from industry professionals including growers and equipment and greenhouse manufacturers, the presented information covers all aspects of greenhouse equipment--the structures themselves, benches, irrigation, curtains, environmental controls, mechanization, and the greenhouse as a retail facility. Discussed are the most recent developments in greenhouse evolution and the varieties of greenhouse structures available, from freestanding and gutter-connected to shade houses and open-roof greenhouses. There are many types of glazing available for each greenhouse, and the differences between glass, polyethylene film, and rigid plastic are explored. Also included is information on managing the greenhouse, marketing products, and operating a retail store from a greenhouse.

Ball RedBook, Volume 2: Crop Production: 17th edition by Debbie Hamrick (Ball Publishing) Offering detailed information on the production of 162 flower, herb, and vegetable crops, this essential resource for growers includes techniques and advice that work in real-life production, not just in the lab or trial greenhouse. Offered is information on how to decide what to grow, as well as tips about temperature, media, plant nutrition, irrigation, water quality, light, crop scheduling, and growth regulators. Details about propagation, growing, pest and disease control, troubleshooting, and post-harvest care are presented and arranged by genus name. Plants represented include annuals, perennials, pot foliage plants, flowering potted plants, herbs, and some vegetable bedding plants.

Most growers worldwide would agree that if they could only have one book on their office shelf, it would be the Ball RedBook.

The first edition of the Ball RedBook, published in 1932, sold for twenty cents, and was titled Ball Red Book, Miscellaneous Flower Seed Crops. George J. Ball penned the manuscript in longhand. At that time, almost all growers produced cut flowers, and most were also florists. That first edition featured cutting-edge crop culture on cut flowers such as asters (Callistephus), stock (Matthiola), snapdragons (Antirrhinum), larkspur (Consolida), calendula, sweet peas (Lathyrus), mignonette (Reseda odorata), zinnias, Clarkia (Godetia), centaurea, gerbera, Didiscus, and Scabiosa. The only bedding plants were petunias, candytuft (Iberis), marigolds (Tagetes), and lupine.

In today's floriculture industry, the bread-and-butter commodity cut flower production has moved offshore to Colombia and Ecuador, and most greenhouse producers focus on producing high-value, quick-turning bedding plants, perennials, foliage, and flowering pot plants. There are niche growers producing cut flowers, and, interestingly, many of the crops written about in the first Ball RedBook are viable, profitable cut flowers today. Incidentally, updated crop culture for most of these crops appears in this seventeenth edition.

As the industry has changed, so has the Ball RedBook. Vic Ball took over the RedBook editing duties from his father and improved it with each subsequent edition. Over the years, the size of the book has increased to accommodate an expanding list of crops grown from seed and cuttings.

The Ball RedBook also has always touched on the technology side of the industry, with Vic sharing the contents of his notebooks filled with comments from growers all over the United States, Canada, and Europe on new ideas such as hydroponics, Dutch trays, roll-out bedding plants, open-roof greenhouses, round-robin production systems, transplanters, and more. There was no innovation exciting growers in which Vic Ball wasn't interested. His passion for encouraging innovation among growers and sharing information with them was boundless. Vic was an inspiration to those of us who were fortunate to work with him and to every grower he encountered. Vic served as editor of the sixteenth edition of the RedBook and passed away in 1997, shortly after it was published.

When we sat down to discuss updating the Ball RedBook, we kept Vic's energy and spirit at the fore. In his honor we have once again expanded the book. This time, we've split the Ball RedBook into two volumes: Volume 1: Greenhouses and Equipment, edited by Chris Beytes, and Volume 2: Crop Production, edited by Debbie Hamrick. Each volume is complete in its own right as a stand-alone book. Together, however, the volumes include enough practical information to set anyone interested in becoming a greenhouse grower on the road to success. Existing growers who have relied on the Ball RedBook as their "first consulted" reference text will find the volumes to be an invaluable resource.

Volume 1: Greenhouses and Equipment isn't just an update from the sixteenth edition. It's a whole new book, with content that reflects the major changes that have taken place in the hardware side of the business, such as open-roof greenhouses, automatic transplanters, and vision grading technology. And because today's floriculture industry is as much about sales and marketing as it is about growing, we've included chapters titled The Office, Marketing Your Business and Your Products, and Retail Greenhouse Design. We worked with thirty-eight experts in the field to create this new volume.

Volume 2: Crop Production covers the basics of floricultural production in the greenhouse. Written in layman's terms, the book is divided into two parts. Part 1 presents the basics of growing—including broad topics such as water, media, nutrition, temperature, light, and postharvest, as well as applied subjects such as insect and disease control and growth regulators—all in grower-friendly text and graphics. Part 2 is a cultural encyclopedia of more than 160 greenhouse crops. Dozens of contributors lent their expertise to its pages. There, you'll find propagation, growing on, pest control, troubleshooting, and postharvest information presented in an easy-to-use format.

World Agriculture and the Environment: A Commodity-By-Commodity Guide to Impacts and Practices by Jason Clay (Island Press) presents a unique assessment of agricultural commodity production and the environmental problems it causes, along with prescriptions for increasing efficiency and reducing damage to natural systems. Drawing on his extensive travel and research in agricultural regions around the world, and employing statistics from a range of authoritative sources including the United Nations Food and Agriculture Organization, the author examines twenty of the world's major crops, including beef, coffee, corn, rice, rubber, shrimp, sorghum, tea, and tobacco. For each crop, he offers comparative information including:

  • a fast facts overview section that summarizes key data for the crop

  • main producing and consuming countries

  • main types of production

  • market trend information and market chain analyses

  • major environmental impacts

  • management strategies and best practices

  • key contacts and references

With maps of major commodity production areas worldwide, the book represents the first truly global portrait of agricultural production patterns and environmental impacts.

This book identifies and explores the main threats that key agricultural commodities pose to the environment as well as the overall global trends that shape those threats. It then identifies new practices as well as tried-and-true ones that can increase production while minimizing environmental costs. Many who analyze the environmental impacts of agriculture focus on trade policies that affect specific agricultural commodities traded internationally. There are two problems with this approach. First, most agricultural products are consumed in the producing country and not traded across borders, even in a processed form. Second, the main environmental impacts are on the ground; for example, they relate to production practices, not trade. Trade and trade policies are one way to approach the problem, but only if they can be focused in such a way as to reduce the production impacts of commodities that are not by and large traded internationally.

This book takes the position that working with farmers directly to identify or co-develop better management practices (BMPs) may be far more effective in the short term and may provide better information to inform subsequent trade and policy strategies. While some BMPs may be encouraged by government or even international trading partners, most probably will not be. In the end, the protection of endangered species and habitats with high conservation value is often essentially a local or regional issue that involves subsistence farmers or producers connected to local markets rather than international ones.

Another issue that receives considerable attention among those interested in agriculture, poverty, and the environment is who causes the most environmental damage. A common assumption is that large-scale, capital-intensive, high-input commercial farms have more negative impacts than small farmers who are trying to scrape together a living by producing food for their families and selling surplus locally. In fact, both are to blame. An increasing body of evidence suggests that smaller, more marginal producers may actually cause the bulk of environmental damage in both developing and developed countries. This damage can result from farming marginal land, not having efficient equipment (or the money to buy it), or not having good information about better practices.

This book does not attempt to answer the question of whether large-scale high-input, low-input, or subsistence agriculture causes the most environmental damage. Rather, the focus is to identify which practices are more environmentally destructive and whether better practices exist to reduce or avoid those impacts altogether for any of these systems of production. The focus is on primary production directly rather than on the processing of the primary products, except where processing occurs largely on the farm. Likewise, the focus is not on value-added processing through intensive feedlot systems such as those for cattle, chicken, or pigs. Such operations are more similar to factories than to farms and should be subject to the same pollution controls as other factories.

The twenty-one crops that are the focus of this volume include: bananas, beef, cashews, cassava, cocoa, coffee, corn (maize), cotton, oil palm, oranges, plantation-grown wood pulp, rice, rubber, salmon and shrimp from aquaculture, sorghum, soybeans, sugarcane, tea, tobacco, and wheat. These crops occupy most of the land used for agriculture in the world. In addition, they represent a mix of temperate and tropical crops, annual and perennial crops, food and nonfood crops, meat and vegetable crops, and crops that are primarily traded internationally as well as those that are consumed primarily in the country of origin.

A number of significant crops are not discussed in this book. In many cases, the excluded crops are those whose area of production is in decline, or ones that are not deemed as globally significant as another crop that is included. Some of the more obvious tradeoffs were the inclusion of wheat instead of barley, rye, or oats; sorghum instead of millet; cassava instead of sweet potato; soybeans and oil palm instead of peanuts (groundnuts), sunflowers, canola (rapeseed), olives, or coconuts; and sugarcane instead of sugar beets.

Some of the omitted crops are very important locally. This is the case with such crops as potatoes, grapes, apples, horticulture crops, cut flowers, or sugar beets. The assumption, however, is that the issues and lessons that are raised through the discussion of the crops that are included are transferable to most of the others. And, while no blueprints for sustainability are included, the larger purpose of this work is to help the reader understand how to think about agricultural production and the environment.

The discussion in each crop chapter follows the same outline. Each chapter begins with "Fast Facts" that summarize important comparable information for each crop, including maps of production areas. These facts include: the total area in production and volume of production, the average and total value of production, the main producing and consuming countries, the percent of production exported, the species name(s), and the main environmental impacts as well as the potential to reduce those impacts.

In addition, each chapter presents (to the extent possible) comparable information about each crop. The discussion starts with an introduction to and history of each crop as well as an overview of the main producing and consuming countries. The main systems of production are described for each crop, as well as any processing of the crop that occurs within the area of production. A section is included about the current substitutes for each crop and the impact of substitutes on markets. Market-chain analyses are included for each crop to the extent possible, but because this information is rarely in print, it is not complete. Market trends are also identified and analyzed, but these, too, should not be considered definitive; such forecasting is the stuff of crystal balls (and of fortunes to be gained from trading) and is, as a result, rather incomplete. Finally, the major environmental impacts are discussed and strategies for addressing them are identified.

Much of the production and trade data in this book is based on statistics from the Food and Agriculture Organization of the United Nations (FAO). This data is generally the best available but is considered by many to underestimate both total production and area in production. In addition, a wide range of figures from different sources are used to illustrate different issues raised in different chapters, and some of this data is contradictory. Every attempt has been made to reconcile these numbers, but it has not always been possible.

Price data, too, has been difficult to obtain and more difficult to verify and standardize. In general, prices have been indexed to 1990 U.S. dollar values. World producer prices for individual commodities were calculated by transforming FAO producer price data into world averages. Such data is reported as individual commodity prices per country. A group of countries that represent world production was chosen, taking into consideration the available data. The world producer price for each commodity is calculated as a simple arithmetic average of the chosen representative countries' producer prices.
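The two-step procedure just described (deflate each country's nominal price to 1990 dollars, then take an unweighted average across the chosen representative countries) can be sketched in a few lines of code. This is an illustration only; the country names and deflator values below are hypothetical and are not drawn from the book or from FAO data.

```python
def to_1990_dollars(nominal_price, year, deflator):
    """Convert a nominal producer price to 1990 U.S. dollar values.
    `deflator` maps a year to the ratio of that year's price level to
    the 1990 price level (hypothetical values, for illustration)."""
    return nominal_price / deflator[year]

def world_producer_price(prices_by_country):
    """Simple (unweighted) arithmetic average of producer prices
    across the chosen representative countries, as described above."""
    return sum(prices_by_country.values()) / len(prices_by_country)

# Hypothetical per-country nominal prices for one commodity in 1995,
# first indexed to 1990 dollars, then averaged.
deflator = {1995: 1.15}  # 1995 prices are 15% above the 1990 level
prices = {
    "Country A": to_1990_dollars(115.0, 1995, deflator),
    "Country B": to_1990_dollars(230.0, 1995, deflator),
    "Country C": to_1990_dollars(345.0, 1995, deflator),
}
world_price = world_producer_price(prices)
```

Note that a simple arithmetic average weights each representative country equally regardless of its share of world output; a production-weighted average would be an alternative design choice.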

Libraries are full of information about agriculture in general and about these crops in particular. Furthermore, the world is full of farmers who produce them and who can supply valuable information and strong opinions. In short, there is no dearth of information or opinions about these commodities and how they are produced. There is also considerable information (a vast quantity of publications, research, data, and analyses) focused specifically on describing or proposing how to reduce the environmental damage from producing each crop. This volume draws on all of these sources, including my own personal experience with large and small producers in both the developing and the developed world.

Though every attempt has been made to make the crop chapters complete, inevitably there are gaps. Some issues are harder to address for most of the commodities in question. For example, little work has been undertaken to assess the cumulative environmental impact of any single crop in a specific place over time, much less the comparative impacts of crops that produce products that are readily substituted for each other. Even less has been done to evaluate the global impacts of a specific crop or to identify the likely environmental impacts of global trends within an industry. This book offers insights of a different scale and focus and suggests how more comprehensive work on future trends could help those interested in the environment and agriculture better understand issues of economic, social, and environmental viability.

Synchronization of Mechanical Systems by Henk Nijmeijer, Alejandro Rodriguez-Angeles (World Scientific Series on Nonlinear Science, Vol. 46: World Scientific) The main goal of this book is to prove analytically and validate experimentally that synchronization in multi-composed mechanical systems can be achieved in the case of partial knowledge of the state vector of the systems, i.e., when only positions are measured. For this purpose, synchronization schemes based on interconnections between the systems, feedback controllers, and observers are proposed.

Because mechanical systems include a large variety of systems, and since it is impossible to address all of them, the book focuses on robot manipulators. Nonetheless the ideas developed here can be extended to other mechanical systems, such as mobile robots, motors and generators.
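The core idea (synchronize two mechanical systems using only position measurements, with observers supplying the missing velocities) can be illustrated on the simplest possible plant. The sketch below is a toy example, not the book's controller: two double-integrator "robots," a PD-type synchronizing coupling, and Luenberger-style observers driven only by measured positions. All gains, initial conditions, and the master trajectory are hypothetical, chosen for illustration.

```python
import math

def simulate(T=10.0, dt=1e-3, kp=100.0, kd=20.0, l1=20.0, l2=100.0):
    """Synchronize a slave double integrator to a master moving as
    x_m(t) = sin(t), using only position measurements. Velocities are
    reconstructed by observers; the gains l1, l2 (observer) and kp, kd
    (controller) are hypothetical. Returns final slave and master
    positions."""
    x_s, v_s = 0.5, 0.0      # slave true state (velocity not measured)
    xh_s, vh_s = x_s, 0.0    # slave observer estimates
    xh_m, vh_m = 0.0, 0.0    # master observer estimates
    t = 0.0
    for _ in range(int(T / dt)):
        x_m = math.sin(t)    # measured master position
        # PD-type synchronizing controller using estimated velocities
        u = -kp * (x_s - x_m) - kd * (vh_s - vh_m)
        # slave observer: model-based, since the slave knows its own input u
        e_s = x_s - xh_s
        xh_s += dt * (vh_s + l1 * e_s)
        vh_s += dt * (u + l2 * e_s)
        # master observer: input unknown, driven by position error only
        e_m = x_m - xh_m
        xh_m += dt * (vh_m + l1 * e_m)
        vh_m += dt * (l2 * e_m)
        # slave plant: double integrator, forward Euler step
        x_s += dt * v_s
        v_s += dt * u
        t += dt
    return x_s, math.sin(t)
```

With these gains the slave converges to within a few percent of the master's motion amplitude; a small residual error remains because no feedforward of the master's acceleration is available. Real robot dynamics are nonlinear and coupled, which is precisely why the stability proofs and gain-tuning procedures developed in the book are needed.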

Synchronization is everywhere! This is the feeling one may get once alerted to it. Everyone is familiar with the many kinds of biological rhythms ('biological clocks') that create some kind of conformity in time and in nature. These include, for instance, neural and brain activity, but also the cardiovascular system. Clearly, there are numerous other examples to be mentioned, some much more controversial, like the claimed synchronicity of the monthly periods of nuns in a cloister, and so on.

Synchronous motion was probably first reported by Huygens (1673), who described an experiment with two (marine) pendulum clocks hanging from a lightly weighted beam, which exhibited (anti-phase) frequency synchronization after a short period of time. Synchronized sound in nearby organ tubes was reported by Rayleigh in 1877, who observed similar effects for two electrically or mechanically connected tuning forks. In the last century synchronization received a great deal of attention in the Russian scientific community, since it was observed in balanced and unbalanced rotors and vibro-exciters. Perhaps an enlightening potential new application for coordinated motion is the use of hundreds of piezo-actuators to obtain a desired motion of a large or heavy mechanical set-up, such as an airplane or an MRI scanner, or the coordination of microactuators for manipulation at very small scales.

In astronomy, synchronization theory is used to explain the motion of celestial bodies, such as orbits and planetary resonances. In biology, biochemistry, and medicine many systems can be modelled as oscillatory or vibratory systems, and such systems show a tendency toward synchronous behavior. Among the evidence of synchronous behavior in the natural world, one can consider the chorusing of crickets, the synchronous flashing of a group of fireflies, and the metabolic synchronicity in yeast cell suspensions.

The subject of synchronization has received huge attention in recent decades, in particular from biologists and physicists. This attention probably centers on one of the fundamental drivers of science, namely curiosity: how does synchronous motion arise in a large ensemble of identical systems? Also, new avenues for the potential use of synchronicity are now being explored.

Synchronization has much in common with, and is in a sense equivalent to, coordination and cooperation. Already in ancient times it was understood that joint activity may make it possible to carry out tasks that are undoable for an individual.

The authors’ interest in the subject of synchronization is strongly influenced by a desire to understand the basic ingredients of coordinated motion in an engineering system. They therefore concentrate in this book on the synchronization, or coordination, of mechanical systems, such as robotic systems. This allows them to delve, on the one hand, into the theoretical foundations of synchronous motion and, on the other hand, to combine the theoretical findings with experimental verification in their research laboratory.

This book therefore concentrates on the controlled synchronization of mechanical systems that are used in industry. In particular, the book deals with robotic systems, which nowadays are common and important in production processes. However, the general ideas developed here can be extended to more general mechanical systems, such as mobile robots, ships, motors, microactuators, and balanced and unbalanced rotors and vibro-exciters.

The book is organized as follows:

Chapter 1 gives a general introduction about synchronization, its definition and the different types of synchronization.

Chapter 2 presents some basic material and results on which the book is based. In Section 2.1 some mathematical tools and stability concepts used throughout the book are presented. The dynamic models of rigid and flexible joint robots are introduced in Section 2.2, including their most important properties. The experimental set-up that will be used in later chapters is introduced in Section 2.3, where a brief description of the robots and their dynamic models is presented.

Chapter 3 addresses the problem of external synchronization of rigid joint robots. The synchronization scheme formed by a feedback controller and model based observers is presented and a stability proof is developed.

Simulation and experimental results on one degree of freedom systems are included to show the applicability and performance of the proposed controller. The main contribution of this chapter is a gain tuning procedure that ensures synchronization of the interconnected robot systems.

The case of external synchronization for flexible joint robots is addressed in Chapter 4. The chapter starts by explaining the differences between rigid and flexible joint robots and the effects on the design of the synchronization scheme. The synchronization scheme for flexible joint robots and stability analysis is presented. The chapter includes a gain tuning procedure that guarantees synchronization of the interconnected robot systems. Simulation results on one degree of freedom systems are included to show the viability of the controller.

The problem of internal (mutual) synchronization of rigid robots is treated in Chapter 5. This chapter presents a general synchronization scheme for the case of mutual synchronization of rigid robots. The chapter includes a general procedure to choose the interconnections between the robots to guarantee synchronization of the multi-composed robot system. Simulation and experimental results on one degree of freedom systems are included to show the properties of the controller.

Chapter 6 presents a simulation and experimental study using two rigid robot manipulators and shows the applicability and performance of the synchronization schemes for rigid joint robots. Particular attention is given to practical problems that can be encountered at the moment of implementing the proposed synchronization schemes. The robots in the experimental setup have four degrees of freedom, such that the complexity in the implementation is higher than in the simulations and experiments included in Chapters 3 and 5.

Further extensions of the synchronization schemes designed here are discussed in Chapter 7. Some conclusions related to synchronization in general and robot synchronization in particular are presented in Chapter 8.


Science without Myth: On Constructions, Reality, and Social Knowledge

by Sergio Sismondo

State University of New York Press (SUNY Press)

$14.95, paper, 199 pages, notes, bibliography, index



By looking at science as a social and political activity, researchers have created novel accounts of scientific practice and rationality, accounts that largely contradict the dominant ideologies of science. Science without Myth is a philosophical introduction to and discussion of these social and political studies of science—a discussion of the social construction of scientific knowledge as a product of communities and societies marked by the circumstances of its production. Sismondo deals with a very central and current problem in the philosophy and sociology of science, the problem of truth and representation, especially for theoretical entities and constructs. In this clear and concise exposition, the author succeeds in making a very difficult and often technical controversy very accessible.

The book argues that there are a number of important and interesting ways in which scientific knowledge can be a social construction but that it often is knowledge of the material world; therefore, this book is an essay on mediation or the mediatory roles of scientists between nature and knowledge. By identifying and separating different senses of the "construction" metaphor, this book displays senses in which scientists construct knowledge, phenomena, and even worlds. It shows science as made up of thoroughly social processes and that those processes create representations of a preexisting material world. SCIENCE WITHOUT MYTH’s argument provides a counterbalance to skeptical tendencies of constructivist studies of science and technology by showing that skepticism cannot cut so deeply as to deny the possibility of knowledge and representation.

Preface and Acknowledgments
1. Introduction
2. The Grounds for Truth in Science: An Empiricist/Realist Dialogue
3. Epistemology by Other Means
4. Exploring Metaphors of "Social Construction"
5. Neo-Kantian Constructions
6. The Structures Thirty Years Later
7. Creeping Realism: Bruno Latour's Heterogeneous Constructivism
8. Metaphors and Representation
9. Power and Knowledge
Works Cited

Sergio Sismondo is William Webster Postdoctoral Fellow in the Humanities and Assistant Professor of Philosophy at Queen’s University, Canada.

Word Trade is an independent review agency serving the public, scholars, libraries, and booksellers.

Last modified: January 01, 2018
