
New Directions for the Cognitive Study of Scientific and Technological Thinking by Michael E. Gorman, Ryan D. Tweney, David C. Gooding, Alexandra P. Kincannon (Lawrence Erlbaum Associates) At the turn of the 21st century, the most valuable commodity in society is knowledge, particularly new knowledge that may give a culture, a company, or a laboratory an adaptive advantage. Knowledge about the cognitive processes that lead to discovery and invention can enhance the probability of making valuable new discoveries and inventions. Such knowledge needs to be made widely available to ensure that no particular interest group 'corners the market' on techno-scientific creativity. It would also facilitate the development of business strategies and social policies based on a genuine understanding of the creative process. Further, through an understanding of the principles underlying the cognitive processes related to discovery, educators can use these principles to teach students effective problem-solving strategies as part of their education as future scientists. A special focus for this volume is to explore what fine-grained case studies can tell us about cognitive processes. The case study method is normally associated with the sociology and anthropology of science and technology; these disciplines have been skeptical about cognitive explanations. Cognitive psychologists, in turn, have traditionally lamented the lack of rigor and control in case studies. The authors in this volume advocate a multiple-method approach and provide examples of empirical studies and theoretical analyses. In order to achieve true scientific and technological progress, we need to understand the process by which our species is transforming our world. We hope this book takes an important step in that direction by leading to breakthroughs in our understanding of discovery and invention.

At the turn of the 21st century, the most valuable commodity in society is knowledge, particularly new knowledge that may give a culture, a company, or a laboratory an adaptive advantage (Christensen, 1997; Evans & Wurster, 2000; Nonaka & Takeuchi, 1995). Turning knowledge into a commodity poses two dangers. One is to increase the risk that one culture, company, or group can obtain an unfair advantage over others. The other is to impose a one-dimensional, goal-driven view of something that, as the chapters in this volume will show, is subtle, complex, and diverse as to its motivations and applications. To be sure, knowledge about the cognitive processes that lead to discovery and invention can enhance the probability of making valuable new discoveries and inventions. However, if made widely available, this knowledge could ensure that no particular interest group "corners the market" on techno-scientific creativity. It would also facilitate the development of business strategies and social policies based on a genuine understanding of the creative process. Furthermore, through an understanding of principles underlying the cognitive processes related to discovery, educators can use these principles to teach students effective problem-solving strategies as part of their education as future scientists.

A special focus of this volume is an exploration of what fine-grained case studies can tell one about cognitive processes. The case study method is normally associated with sociology and anthropology of science and technology; these disciplines have been skeptical about cognitive explanations. If there is a well-established eliminativism among neuroscientists (Churchland, 1989), there is also a social eliminativism that seeks to replace cognitive accounts with sociological accounts (Woolgar, 1987). In these socio-anthropological case studies, interactions, inscriptions, and actions are made salient. The private mental processes that underlie public behavior have to be inferred, and most anthropologists and sociologists of science are trained to regard these cognitive processes as epiphenomenal. By contrast, intellectual or internalist studies by historians of science go into reasoning processes in detail (Drake, 1978; Mayr, 1991; Westfall, 1980). Historians of science have produced a large number of studies that describe processes of experimentation, modeling, and theory construction. These studies can be informative about reasoning and inference in relation to declarative knowledge, experiential knowledge and experimental data, and the dynamics of consensus formation through negotiation and other forms of personal interactions. However, historians generally have no interest in identifying and theorizing about general as opposed to personal and culture-specific features of creative processes. Thus, very few historical studies have combined historical detail with an interest in general features of the creative process.

Despite the usefulness of fine-grained case studies, cognitive psychologists have traditionally lamented their lack of rigor and control. How can one identify general features, let alone develop general principles of scientific reasoning, from studies of a specific discovery, however detailed they may be? One answer is to develop the sorts of computational models preferred by cognitive scientists (Shrager & Langley, 1990). One classic example of this kind of modeling is, of course, Kulkarni and Simon's (1988) simulation of a historical account by Larry Holmes (1989) dealing with Hans Krebs's discovery of the ornithine cycle (see also Langley, Simon, Bradshaw, & Zytkow, 1987). These models can be abstracted from historical cases as well as current, ethnographic ones. However, it is important to remember that such models are typically designed to suit the representational capabilities of a particular computer language. Models derived from other domains—such as information theory, logic, mathematics, and computability theory—can become procrustean beds, forcing the territory to fit the researcher's preconceived map (Gorman, 1992; Tweney, 1990).

The tension among historical, sociological, and cognitive approaches to the study of science is given thorough treatment in chapter 2, by Nersessian. She distinguishes between good old-fashioned artificial intelligence, represented by computer programs whose programmers claim they discover scientific laws, and social studies of science and technology, represented by detailed or "thick" descriptions of scientific practice. Proponents of the former approach regard cognition as the manipulation of symbols abstracted from reality; proponents of the latter see science as constructed by social practices, not reducible to individual symbol systems. Nersessian describes a way for cognitive studies of science and technology to move beyond these positions, taking what she calls an environmental perspective that puts cognition in the world as well as in the brain. Ethnography can be informative about cognitive matters as well as sociological ones, as Nersessian's study of biomedical engineering laboratories shows, and historical research helps make the cognitive practices intelligible.

In order to ground cognitive theory in experimental data, psychologists have conducted reasoning experiments, mainly with college students but also with scientists. These experiments allow for controlled comparisons, in which all participants experience the same situation except for a variable of interest that is manipulated. A psychologist might compare the performance of scientists and students on several versions of a task. Consider, for example, a simple task developed by Peter Wason (1960). He showed participants in his study the number triplet "2, 4, 6" and asked them to propose additional triplets in an effort to guess a rule he had in mind. In its original form, the experimenter's rule was always "any three increasing numbers," which proved hard for most participants to find, given that the starting example suggests a much more specific rule (e.g., "three even numbers"). Each triplet can be viewed as an experiment, which participants used to generate and test hypotheses. For Wason, participants seemed to manifest a confirmation bias that made it hard to see the need to disconfirm their own hypotheses. Later research suggested a much different set of explanations (see Gorman, 1992, for a review). Tasks such as this one were designed to permit the isolation of variables, such as the effect of possible errors in the experimental results (Gorman, 1989); however, they are not representative of the complexities of actual scientific practice. The idealization of reasoning processes in scientific and technological innovation (Gorman, 1992; Tweney, 1989), and of scientific experiments in computer models (Gooding & Addis, 1999), is a critical limitation of this kind of research.
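The logic of the 2–4–6 task can be made concrete with a small simulation. The sketch below (Python, written for this review; the rule functions and triplets are illustrative assumptions, not Wason's actual materials) contrasts a participant whose overly specific hypothesis is only ever confirmed with a single disconfirming test that exposes the hidden rule.

```python
# Minimal sketch of the Wason (1960) 2-4-6 task; rule functions and triplets
# are illustrative assumptions, not Wason's actual materials.

def hidden_rule(triplet):
    """The experimenter's hidden rule: any three increasing numbers."""
    a, b, c = triplet
    return a < b < c

def participant_hypothesis(triplet):
    """An overly specific hypothesis suggested by the seed triplet 2, 4, 6."""
    return all(n % 2 == 0 for n in triplet) and list(triplet) == sorted(triplet)

# Confirmation-style tests: every triplet fits the participant's own hypothesis,
# so each one earns a "yes" and the hypothesis is never challenged.
confirming_trials = [(8, 10, 12), (20, 40, 60), (2, 6, 10)]

# A disconfirming test: the participant's hypothesis predicts "no", but the
# experimenter answers "yes", revealing that the hypothesis is too narrow.
disconfirming_trial = (1, 3, 5)

for t in confirming_trials + [disconfirming_trial]:
    print(t,
          "experimenter:", "yes" if hidden_rule(t) else "no",
          "| participant expects:", "yes" if participant_hypothesis(t) else "no")
```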

What makes models powerful and predictive is their selectivity. A good model simplifies the modeled world so as to make it amenable to one's preferred methods of analysis or problem solving. However, selectivity can make one's models limited in scope and, at worst, unrealistic. Insofar as this is a problem, it is often stated as a dichotomy between abstraction and realism, as for example by the science writer James Gleick: "The choice is always the same. You can make your model more complex and more faithful to reality, or you can make it simpler and easier to handle" (Gleick, 1987, p. 278). David Gooding illustrated this problem at a workshop with a joke that could become a central metaphor for science and technology studies:

A millionaire with a passion for horse racing offered a large prize—enough to buy a few Silicon Graphics machines—to anyone who could predict the outcome of any horse race. Three scientists took up the challenge: a physiologist, a geneticist, and a theoretical physicist. One year later the three scientists announced their results. Here's what each reported:

The Physiologist: "I have analysed oxygen uptake, power to weight ratios, dietary intake and metabolic rates, but there are just too many variables. I am unable to predict which horse will win."

The Geneticist: "I have examined blood lines, breeding programs and all the form books, but there are just too many uncertainties. I cannot predict who will win any race."

The Physicist: "I have developed a theoretical model of the dynamics of horse racing, and have used it to write a computer program that will predict the outcome of any horse race to 7 decimal places. I claim the prize. But—there is one proviso. The model is only valid for a perfectly spherical horse moving through a vacuum."

Experimental simulations of scientific reasoning using tasks like Wason's (1960) 2–4–6 task are abstractions just like the spherical horse: They achieve a high degree of rigor and control over participants' behavior but leave out many of the factors that play a major role in scientific practice. Gooding (in press) argues that in the history of any scientific field there is a searching back and forth between models and real world complexities, to achieve an appropriate level of abstraction—not overly simple, capturing enough to be representative or valid, yet not so complex as to defeat the problem-solving strategies of a domain. Gooding's chapter for this volume (chap. 9) shows how scientists use visualization in creating models that enable them to negotiate the tension between simplicity and solvability on the one hand and complexity and real world application on the other. He argues that, like any other scientific discipline, cognitive studies of science and technology must find appropriate abstractions with which to describe, investigate, model, and theorize about the phenomena they seek to explain.

To compare a wide range of experiments and case studies conducted in different problem domains, one needs a general framework that will establish a basis for comparison. As Chris Schunn noted in the workshop on cognitive studies of science and technology that inspired this volume (see Preface), models, taxonomies, and frameworks are like toothbrushes—no one wants to use anyone else's. In science and technology studies, this has been the equivalent of the "not invented here" syndrome. This usually reflects the methodological norms of a discipline, such as the sociological aversion to cognitive processes, which is reminiscent of behavioral psychology's rejection of mental processes as unobservables. One strategy for achieving de facto supremacy is to assume, even if one cannot demonstrate it, that one's own "toothbrush" is superior to any other (Gorman, 1992).

DISCOVERY 

Sociological and historical studies of the resolution of scientific controversies have shown that the supremacy of a particular theory, technology, or methodological approach involves negotiation. Because no method is epistemologically neutral, this negotiation often focuses on the validity of the method(s) of establishing facts and of making inferences from them (Galison, 1997). Therefore, rather than promoting the investigative potential of a single method, we advocate approaches that address both Gooding's problem of abstraction and Schunn's problem of shareable frameworks. Dunbar and Fugelsang develop one such approach in their contribution (chap. 3) to this volume. This approach combines experiments, modeling, and case studies in a complementary manner. They develop the distinction (first made by Bruner, Goodnow, & Austin, 1956) between in vitro studies of scientific thinking (which involve abstract tasks like Wason's [1960] 2–4–6 task) and in vivo studies (which involve observing and analyzing scientific practice). Dunbar (1999) used an in vitro task to study how participants reasoned about a genetic control mechanism and conducted in vivo studies of molecular biology laboratories. In chapter 3, Dunbar and Fugelsang label four more approaches in the same style:

  1. Ex vivo research, in which a scientist is taken out of his or her laboratory and investigated using in vitro methods, by presenting problems similar to those he or she would encounter in his or her own research.

  2. In magnetico research, using techniques such as magnetic resonance imaging to study brain patterns during problem solving, including potentially both in vitro and in vivo research.

  3. In silico research, involving computational simulation and modeling of the cognitive processes underlying scientific thinking, including the good old-fashioned artificial intelligence work cited by Nersessian and alternatives.

  4. Sub specie historiae research, focusing on detailed historical accounts of scientific and technological problem solving. These in historico studies can serve as data for in silico simulations.

 

Later chapters offer a variety of examples of sub specie historiae and in vivo studies, with references to the other types of research noted earlier.

In chapter 4, Klahr takes a framework that he was involved in developing and stretches it in a way that makes it useful for organizing and comparing results across chapters in this volume. His idea is that discovery involves searches in multiple problem spaces (Klahr & Dunbar, 1988; Simon, Langley, & Bradshaw, 1981). For example, a scientist may have a set of possible experiments she might conduct, a set of possible hypotheses that might explain the experimental results, and a set of possible sources of experimental error that might account for discrepancies between hypotheses and results. Klahr adds two other dimensions: (a) whether a space is general or domain specific and (b) whether the search is conducted by individuals, dyads, or teams. This framework leads to interesting questions, such as: Under what circumstances does a mismatch between current hypothesis and current experimental result lead to a search of a space of possible errors in the experiment, and when does it trigger a search for a new hypothesis? Empirical studies can provide specific answers to these questions.

We can now suggest one possible framework to support a comparative analysis of different kinds of study. We can combine Klahr's idea of multiple search spaces with Dunbar's distinction among six types of methodology to produce a multidimensional matrix that allows us to organize and compare the research studies reported in this volume. For example, the studies involving the introduction of error into the 2–4–6 task involve an in vitro methodology and three types of general problem spaces, and are conducted with individuals (Gorman, 1989). Tweney and his colleagues have used scientists as experimental participants in a selection task and in "artificial universe" studies. Similarly, specific computational simulations could be put in the in silico row, with the problem spaces they modeled as columns. Historical case studies could be placed in the sub specie historiae row, with their problem spaces across the top; these would either be domain specific or, in the case of individuals who move across domains in the course of their careers, person specific. Consider, for example, Herbert Simon, whose career included original contributions to economics, computer science, psychology, and cognitive studies of science. There will, of course, be cases where methods are mixed within the same study, as Dunbar and Fugelsang do when they combine in magnetico with in vitro techniques. At this point, gaps in the matrix draw our attention to ways of extending research. These gaps, together with studies we consider relevant but that do not fit the framework, suggest how the framework needs to be developed. They also help identify limitations in the framework itself, as we discuss in chapter 15. Similarly, new studies will add new problem spaces, especially because discovery and invention involve the creation of new problem spaces. Furthermore, as the "thick" historical and ethnographic studies show, much scientific and technological thinking is not conducted "over a defined problem space but across a domain consisting of, say, apparatus, external texts (books, articles, tables, etc.), internal memory (reflecting the scientist's domain knowledge), and a symbol system that may need to be altered or created, rather than merely rearranged" (Tweney, 2001, p. 154). We emphasize, therefore, that we use problem spaces as an organizational heuristic rather than as an attempt to prescribe an ontology of resources and procedures for discovery.
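One way to picture this matrix is as a two-way grid indexed by Dunbar and Fugelsang's methodological types and Klahr-style problem spaces. The following sketch (Python, written for this review; the study labels and the three general spaces are placeholders, not the editors' actual tabulation) shows how such a grid might be populated and how empty cells would flag gaps for future research.

```python
# Illustrative sketch of the proposed methodology-by-problem-space matrix.
# Rows: Dunbar and Fugelsang's methodological types; columns: Klahr-style
# problem spaces. Cell contents are placeholders, not actual citations.

methodologies = ["in vitro", "in vivo", "ex vivo",
                 "in magnetico", "in silico", "sub specie historiae"]
problem_spaces = ["experiment", "hypothesis", "error"]

# The matrix maps (methodology, problem space) -> list of studies.
matrix = {(m, s): [] for m in methodologies for s in problem_spaces}

def register(study, methodology, spaces):
    """Record a study under its methodology and each problem space it searches."""
    for space in spaces:
        matrix[(methodology, space)].append(study)

# Example placements (placeholder labels): an error-seeded 2-4-6 study uses an
# in vitro methodology and touches all three general problem spaces.
register("2-4-6 task with error", "in vitro",
         ["experiment", "hypothesis", "error"])
register("molecular biology lab study", "in vivo",
         ["experiment", "hypothesis"])

# Empty cells point to combinations of method and space not yet explored.
gaps = [cell for cell, studies in matrix.items() if not studies]
print(f"{len(gaps)} of {len(matrix)} cells are still empty")
```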

Trickett, Schunn, and Trafton report in chapter 5 a domain-specific in vivo study. They show that, to resolve anomalies, two astronomers and a physicist search both hypothesis and data spaces. The data consist of images of galaxies in the astronomy case and of various representations of the relation between a model and data in the physics case. In both cases, the researchers had to decide which kinds of visualizations worked best; therefore, they searched through problem-specific spaces of visualizations. In both cases, the scientists generated new visualizations and studied existing ones more closely. This study suggests the need to incorporate visualizations into our framework. As Gooding argues in chapter 9, visualizations are used across a wide range of contexts; in particular, they are used to generate phenomenological descriptions, proto-explanations, and dynamical models, and to communicate results. It follows that ways of dealing with different modes of representation should be included in each of the spaces in the Simon–Klahr–Dunbar scheme, in addition to considering that researchers might search a space of possible visualizations on certain kinds of problems.

Trickett et al.'s study (chap. 5) also uses a dyad as a unit of analysis. Two astronomers worked together on identifying and accounting for anomalies in their visualizations. Collaboration was not the focus of the study, however; the same analytic framework was used for the physicist working alone and the astronomers working together. Trickett et al. point out the need for future research to determine whether two or more scientists working together are more likely to notice anomalies than individuals working in relative isolation.

Some of the most successful scientists and inventors kept notebooks, which they used to enhance their problem-solving strategies. Shrager's chapter, 6, provides a contemporary example of how record keeping can support learning. Shrager describes how he gathered data on the process by which he became a molecular biologist. His is therefore a reflexive in vivo study. Shrager kept both a laboratory notebook and a protocol book; he remarked at the workshop (see Preface), "If you lose your lab notebook, you're hosed." So, another kind of potential search space is the notes a scientist or inventor keeps on experiments, hypotheses, new designs, and so on—notes that are incredibly valuable when it comes to conducting further research and establishing priority for an invention or discovery. Gorman and his colleagues have described the way in which Bell made a major improvement on his telephone by searching his own notebook for promising results and discovering one that led to an improved telephone design (Gorman, Mehalik, Carlson, & Oblon, 1993). Tweney and Gooding (1991) showed how Faraday used records of his work to monitor his research stratagems in order to evaluate and refine his research program. However, Shrager also kept a different kind of diary that included his observations on his own learning process. At the end of his chapter, he speculates that this kind of diary might be useful both for science and technology studies scholars and in the education of scientists and engineers.

Notebooks and diaries are more than an external memory aid that scientists and inventors can search; they also create a space of reflection and transformation. Bell, for example, sprinkled his notebook with reflections on his style of invention, gradually realizing that his strength was not in the hands-on component of inventing but in developing the theoretical principles underlying the transmission of speech. For Bell, this "theory" was not a formal hypothesis but a powerful mental model he described and deployed (Gorman, 1997). Similarly, Shrager's diary includes reflections on how the problem-solving style he has to develop for molecular biology differs from the style he uses in repairing cars.

Faraday's extensive laboratory investigations are recorded in his laboratory diary, and partial records also survive in the form of material artifacts. These include many instruments and objects with which Faraday "fixed" phenomena. Chapter 7, by Tweney, Mears, and Spitzmüller, is an in historico study of Faraday's investigation of the fine structure of matter. Faraday's studies of gold colloids and thin gold films have left a rich source of information about his methods and a way of recovering the phenomena that Faraday observed. Faraday designed experiments to test hypotheses, kept extensive notes, and struggled for appropriate ways to produce phenomena that would help him visualize the interaction of light and matter at the atomic level. In other words, he created artifacts in an effort to answer questions about phenomena. These artifacts constitute a space of negotiation that is closely related to the visual space explored by scientists in Trickett et al.'s and Gooding's chapters, because they also attempted to improve visualizations, just as Faraday struggled to find the right physical representations of his growing understandings. Tweney et al.'s account reveals the way in which Faraday's private speculations were eventually translated into public documents and artifacts intended to demonstrate phenomena, rather than simply explore them. Note also that Tweney et al.'s replications of Faraday's procedures constitute something like an attempt to merge the in historico method with other methods—a kind of real-world version of in silico investigations. It is important to emphasize that these replications add a necessarily very personal dimension to the analysis of Faraday's research. This personal dimension is also examined in Gooding's chapter, and it corresponds to the early, exploratory stages of the discovery process, as does Shrager's diary analysis.

Thagard's chapter, 8, provides a useful end-point to a series of chapters devoted to science. He took a list of successful habits for scientists generated by workshop members (see Preface) and compared them with the recommendations of three eminent scientists. In the workshop itself, Thagard recommended that the participants consider a space of questions a scientist might pursue. Finding an interesting problem that one can solve is not a trivial matter, not least because interest and importance do not necessarily go hand in hand with solvability. Answers to many of the questions that Darwin and Faraday articulated near the start of their careers emerged over several decades (Gruber, 1974; Tweney & Gooding, 1991); others eluded them entirely. Harder still is to find a problem that has the potential to make a breakthrough. As Thagard notes, Herb Simon recommended going off the beaten path to find such problems. He had a particular knack for cultivating collaborators as he charged into the unknown. Similarly, James Watson advocated taking risks while having very bright colleagues and mentors to fall back on.

Thagard's chapter highlights the importance of collaboration, an activity partly captured by having a dyadic category in one's evolving framework. However, within this dyadic category there ought to be an analysis of different kinds of collaborations. For example, some stay within a domain, and others stretch across disciplines. In some collaborations the work of each participant is easily distinguished from the other, and in others the whole is greater than the sum of the parts. Many collaborations involve entire teams of researchers, stretching beyond the dyadic category.

In chapter 10, Ippolito uses insights from the literature on scientific thinking to help readers understand Virginia Woolf's development as a writer, including her role in creating a new kind of stream-of-consciousness novel. Woolf kept a diary of reflections on her own process and generalizations about what it took to be a writer. She also fished wherever she went for interesting details about other human beings, and noted them—creating a space of "observations" where she conducted perceptual rehearsal, gaining "practiced familiarity with certain classes of problems" and from which she constructed representations she hoped to share with her readers. The process of creating these representations was rigorous and transformative. As Woolf (1926, p. 135) wrote, "Life is subjected to a thousand disciplines and exercises. It is curbed; it is killed. It is mixed with this, stiffened with that, brought into contrast with something else; so that when we get our scene ... a year later the surface signs by which we remembered it have disappeared." Ippolito compares Woolf's process to the way in which scientists such as Faraday, Newton, Maxwell, and Einstein developed sophisticated representations that led to discoveries.

INVENTION

Is the construction of the extended narrative of a novel more like an invention than a discovery? Chapters 11–14 are an attempt to understand the kind of thinking that goes into the development of new technologies.

In chapter 11, Bradshaw focuses on the Rocket Boys described by Homer Hickam in his book of the same name. In his earlier work, Bradshaw invoked a problem-space framework to explain why the Wright Brothers succeeded; while their competitors searched only through a design space, the Wrights considered both design and function spaces. Bradshaw's analysis of the Rocket Boys includes a larger array of more specific problem spaces, such as types of fuel, fins, and alternatives for the nozzle geometry. Bradshaw maintains that this decomposition of the design space into more specific problem spaces is one reason the boys succeeded, but they could not test all possible variations arising from the combinations of these multiple factors. Instead, the boys developed a shared mental model of rocket design. This model evolved as they studied sources and interacted with experts, such as their teachers. The boys also took good notes on their experiments and results and found a good division of labor within the team.

Hughes's chapter, 12, illustrates one of the key problems facing inventors: integrating components that have been shown to work into a larger system that also performs as specified. This is known as the problem of decomposition. In his earlier work, Hughes has shown how Edison was an inventor of systems (Hughes, 1977); in his more recent work, Hughes has shown how systems engineers create not just new artifacts but also the systems in which these artifacts will play a role (Hughes, 1998). Hughes's work suggests the importance of including in one's framework the extent to which scientists and inventors are focusing on redesigning systems.

Chapter 12 reminds readers that new technological systems evolve through the work of many different actors. These occupy a space in which methods, techniques, meanings, and even the very basis for communication are negotiated: a trading zone. Gorman lays out in chapter 13 a framework for studying and understanding the kinds of trading zones that emerge around technological systems. The effectiveness of the trading zone depends, in part, on the nature of the network linking participants. A network represents both the connectivity or patterns of communication between actors and the power or authority structure. What Gorman calls a State 1 is a structure dominated by one party or by the viewpoint of a single person or group. A State 2 is a zone where participants are relatively equal and communicate via a creole that is compatible with several interpretations and approaches to a problem. A State 3 occurs when participants develop a shared mental model. Therefore, Gorman suggests that a developed cognitive approach to science and technology should include trading zones, especially when considering research that depends on dyads, teams, and entire systems.

Because one of our goals in this volume is to establish that cognitive studies of science and technology have relevance outside of academia, a representative from industry who also has strong academic credentials has contributed a chapter. Brad Allenby is the creator of Earth Systems Engineering Management (ESEM), a new way of thinking about environmental issues that incorporates insights from the literature on science and technology studies. Human technological systems have been transforming nature for thousands of years, but the pace has accelerated. Allenby argues in chapter 14 that there is no point in trying to undo these effects; instead, scientists' responsibility is to manage these complex systems intelligently. To succeed, ESEM will require the formation of State 3 networks regarding environmental systems, because the complexity of these systems and their vulnerability to unexpected perturbations require a continuous dialogue among stakeholders. ESEM is one kind of activity that requires searches in multiple spaces at different levels and particular attention to how these spaces combine at the systems level.

The Industrial Information Technology Handbook by Richard Zurawski (CRC Press) focuses on existing and emerging industrial applications of IT, and on evolving trends that are driven by the needs of companies and by industry-led consortia and organizations. Emphasizing fast growing areas that have major impacts on industrial automation and enterprise integration, the Handbook covers topics such as industrial communication technology, sensors, and embedded systems.

The Handbook presents material in the form of tutorials, surveys, and technology overviews, combining fundamentals and advanced issues, with articles grouped into sections for a cohesive and comprehensive presentation. The text contains 112 contributed reports by industry experts from government, companies at the forefront of development, and some of the most renowned academic and research institutions worldwide. Several of the reports on recent developments, actual deployments, and trends cover subject matter presented to the public for the first time.

Features:

  • Introduces software and web fundamentals such as development platforms and frameworks, middleware, .NET, Java, and multidimensional database technology
  • Discusses Internet and IP-related issues including core protocols, QoS in IP networks, network security, and ad hoc networking
  • Analyzes industrial communication systems, focusing on field area networks, Ethernet, mobile networks, security, and more
  • Overviews Internet, web and IT technologies in industrial automation and design.
  • Explores real-time embedded systems and networked embedded systems
  • Describes integration issues including e-Manufacturing, XML applications, network-based integration technologies, agent-based technologies, and applications for energy and power systems

Excerpt: The purpose of The Industrial Information Technology Handbook is to provide a reference useful for a broad range of professionals and researchers from industry and academia involved or interested in the use of information technology (IT) in industrial applications ranging from industrial automation to industrial enterprise integration. This is the first publication to cover this field in a cohesive and comprehensive way. The focus of this book is on existing technologies used by the industry, as well as newly emerging technologies and trends, whose evolution has been driven by actual industry needs and by industry-led consortia and organizations.

Over the last decade, IT has had a profound impact on the evolution of industrial enterprises toward fully integrated entities. Nowadays, we witness major efforts led by some of the largest multinationals to harness IT to achieve integration of I/O control, device configuration, and data collection across multiple networks and plant units; seamless integration between automation and business logistic levels to exchange jobs and production data; transparent data interfaces for all stages of the plant life cycle; and Internet- and Web-enabled remote diagnostics and maintenance, as well as electronic orders and transactions. Some of the IT technologies used in office and enterprise operation have been adopted and/or transformed to suit and advance industrial controls and automation, and industrial enterprise integration. OPC, Real-Time Corba, Real-Time Linux, Windows CE, real-time specifications for Java (RTSJ, RTCE), and Real-Time UML are good examples. The Internet technologies extended the boundaries of the operation of industrial enterprises to their customers and suppliers, as well as the management and technical staff, through electronic commerce, B2B, remote data access, and monitoring and control. The industrial IT field spans a number of technological areas such as software and hardware technologies, as well as Web and networking technologies. Those technologies, when used in an integrative way, offer a potential for horizontal and vertical integration of functional layers of industrial units and enterprises, thus transforming traditional islands of automation and enterprise operations into more integrated enterprises better adjusted to cope with the demands of competitive markets.

The book contains 112 contributions on topics related to industrial IT, written by experts from industry and academia. One third of the contributions are from industry and industrial research establishments, including leading multinational corporations at the forefront of the development of IT and industrial automation technologies, such as ABB (Germany, Norway, Sweden, and Switzerland), Alcatel (Belgium), Cadence Systems, Microsoft Corporation, NEC Labs (U.S.), Rockwell Automation, Schneider Electric (France and Germany), Siemens (Austria and Germany), Volvo Truck Corp. (Sweden), and Yokogawa America. Most of the multinationals mentioned play a leading role in the formulation of long-term policies for technology development, and are key members of the industry–academe consortia implementing those policies.

The contributions from academia and governmental research organizations come from some of the most renowned and reputable institutions, such as Columbia University, Cornell University, Fraunhofer FOKUS (Germany), Georgia Institute of Technology, Monash University (Australia), National Institute of Standards and Technology (U.S.), Princeton University, Politecnico di Torino (Italy), UC Berkeley, UC Irvine, UC San Diego, University of Texas at Austin and Dallas, University of Tokyo, Stanford University, Technical University of Berlin, Vienna University of Technology, and many others.

The material is presented in the form of tutorials, surveys, and technology overviews combining fundamentals with advanced issues, making this publication relevant to beginners as well as seasoned professionals from industry and academia. Particular emphasis is on the industrial perspective, illustrated by actual implementations and technology deployments. The contributions are grouped into sections for cohesive and comprehensive presentation of the areas treated. Some of these sections can be used as reference material for study (including formal coursework) at the intermediate through advanced levels. The reports on recent technology developments, deployments, and trends frequently cover material released to the profession for the first time.

The handbook is designed to cover a very wide range of topics that comprise the field of industrial IT. The material covered in this volume will be of interest to a wide spectrum of professionals and researchers from industry and academia, as well as graduate students, from the fields of industrial and mechatronic engineering, production engineering, electrical and computer engineering, computer science, and IT.

The book is organized into two parts. Part 1, Fundamentals of Information Technology, presents material to cover new and fast evolving aspects of IT. Part 2, Industrial Information Technology, introduces cutting-edge and newly emerging areas of industrial IT.

The focus of the book is on fast evolving areas with a major impact on the evolution of industrial automation and industrial enterprise integration. Some of those areas have received limited coverage in other publications due to the fast evolution of the technologies involved, material confidentiality, or limited circulation in the case of industry-driven developments. The areas covered in the book include industrial communication technology, sensor technology, embedded systems, the Internet, Web and IT technologies in industrial automation, and industrial enterprise integration. To complement this material, it was felt appropriate to include background reading material on some of the fastest evolving areas of IT and IP networking. This has been done primarily to assist readers with little or no background in IT and networking technologies, both essential to understand most of the chapters in Part 2 of the book.

Part 1 has two sections: Computer Software and Web Technologies and The Internet and IP Networks. The section on Computer Software and Web Technologies introduces some of the recent software technologies, platforms, and solutions that have had a profound impact on industrial control and automation, and industrial enterprise integration. It covers material in subsections: Development Platforms and Frameworks, The Unified Modeling Language, Middleware, Web Technologies, Web Programming, and Multidimensional Databases. Specifically, this section introduces the J2EE platform and the .NET framework in the Development Platforms and Frameworks subsection. J2EE is used extensively at the enterprise business level. .NET is being studied and experimented with by industry for applications ranging from control to enterprise integration. An overview of UML and its extensions, variants, and implementations is provided in the subsection The Unified Modeling Language. The UML language has become a de facto standard for modeling real-time and embedded systems, industrial control and automation, as well as business systems. The contribution on UML also introduces selected applications of the language in control and automation. The Middleware subsection, on network connectivity software, gives a roundup of Microsoft's distributed component technologies and the Object Management Group's Corba standard for object-oriented distributed computing, both of central importance to industrial control and automation. Web servers, clients, and browsers, as well as languages used for Web and Internet programming, are introduced in the Web Technologies subsection. The client–server model is dominant in legacy automation systems. Web services, which allow for platform and language independence in the client–server model, are discussed in the Web Programming subsection. The Multidimensional Databases subsection gives an overview of multidimensional databases, the key technology for the analysis of data and essential for the effective operation of automated industrial enterprises.

The section The Internet and IP Networks provides a comprehensive overview of the Internet and IP network technologies. These technologies have had a major impact on the vertical integration of functional levels of industrial enterprises and the operation of the business level through a variety of e-technologies. This section provides handy reference material useful for the study of the remaining sections.

The topics covered are organized in the following subsections: Introduction to the Internet, Internet Core Protocols, Quality of Service in IP Networks, Internet Applications and Application Services, Management of IP Networks, Network Security, and Ad Hoc Networking. Over the last decade, the Internet and IP network technologies have become the basis and catalyst for the transformation of industrial enterprises from so-called "islands of automation" into relatively well-integrated, both horizontally and vertically, modern industrial entities. The section offers a comprehensive overview of the Internet technologies. The contributions introduce fundamentals, as well as present new developments, making the material attractive to both novices and advanced users of those technologies.

A general overview of the Internet is given in the Introduction to the Internet subsection. Core protocols and IP routing are covered in the Internet Core Protocols subsection. This subsection also includes contributions on multicast, congestion control and Quality of Service (QoS), mobile IP routing, and IP mobility for cellular and wireless systems. The subsection QoS in IP Networks gives an overview of QoS in IP networks and introduces the main features of the Multiprotocol Label Switching architecture, which integrates label-swapping forwarding with network layer routing. It also presents material on the Internet Integrated Services (IntServ) architecture and Reservation Protocol (RSVP), and Internet protocols for real-time applications (RTP, RTCP, RTSP). Mail transfer protocols (SMTP, POP, IMAP), the file transfer protocol (FTP), and the hypertext transfer protocol (HTTP) are covered in the Internet Applications and Application Services subsection. Selected aspects of IP network management are presented in the Management of IP Networks subsection. This includes the Simple Network Management Protocol (SNMP) and the Dynamic Host Configuration Protocol (DHCP). Network security and firewalls are introduced in the Network Security subsection. The contribution on ad hoc networking concludes the section on The Internet and IP Networks.

Part 2 of the book begins with a section on Industrial Communication Systems. Industrial data networks play a pivotal role in the automation and integration of the factory floor by collecting process data and distributing control decisions. This is a very fast evolving area of technology, with Ethernet becoming a de facto industry standard for communication in factories and plants at the fieldbus level. The random and native CSMA/CD arbitration mechanism is being replaced by other solutions allowing for the deterministic behavior required in real-time communication to support soft and hard real-time deadlines. Wireless and mobile communication technologies have already been recognized as a viable alternative for data communication on the factory floor, with the number of reported actual deployments on the rise. This necessitates the development of new solutions for the integration of wireline and wireless fieldbuses. The requirement for process data availability via remote access has prompted research into, and the creation of, new solutions for linking the factory floor with the Internet.

The material in this section is organized in the following subsections: Introduction to Data Communication Networks, Field Area Networks, Ethernet and Wireless/Mobile Network Technologies, Linking Factory Floor with the Internet and Wireless Fieldbuses, Security and Safety Technologies in Industrial Networks, and Automotive and Industrial Applications. The first subsection introduces data communication networks, covering the principles of lower-layer protocols for data communication, followed by material on wireless local area networks (WLAN) and wireless personal area networks (WPAN). The next subsection, Field Area Networks, contains a comprehensive technical overview of some of the most widely used fieldbuses in the industry. The contributions describe and evaluate Profibus, WorldFIP, Foundation Fieldbus, CAN, EIA-709, time-triggered communication networks, and IEEE 1394. The choice of networks and selected issues related to networked control systems are also covered. Ethernet is fast becoming a de facto industry standard for communication in factories and plants at the fieldbus level. The random and native CSMA/CD arbitration mechanism is being replaced by other solutions allowing for the deterministic behavior required in real-time communication to support soft and hard real-time deadlines. These issues and various solutions, including those proposed and already adopted by the industry, are discussed in the Ethernet and Wireless/Mobile Network Technologies subsection. The subsection also provides a comprehensive overview of the issues involved in using wireless communication on the factory floor, ranging from the reliability and timeliness of lower-layer wireless protocols to suggestions for new MAC protocols. Bluetooth is also covered. The need for remote access to factory floor data calls for connecting the factory floor with the Internet. The solutions are discussed in the Linking Factory Floor with the Internet and Wireless Fieldbuses subsection. The subsection also presents architectures and solutions for linking wireline and wireless fieldbuses, a new trend resulting from the presence on the factory floor of devices using wireless communications, such as wireless sensors. The security and safety issues in industrial communication networks are presented in the Security and Safety Technologies in Industrial Networks subsection. An implementation of safety technologies is presented in a case study describing the Profisafe technology used with Profibus. The Automotive and Industrial Applications subsection presents a case study introducing and evaluating the use of different networks in automotive applications, specifically in the range of Volvo commercial automotive products. This subsection also introduces and illustrates the use of the Manufacturing Message Specification (MMS) protocol, an OSI application layer messaging protocol designed for the remote control and monitoring of devices such as Remote Terminal Units (RTU), Programmable Logic Controllers (PLC), Numerical Controllers (NC), or Robot Controllers (RC).
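The determinism argument can be illustrated with a toy comparison. The sketch below (Python, written for this review; the collision probability, slot length, and node count are arbitrary assumptions, and it models no particular Ethernet or fieldbus protocol) contrasts the randomized backoff that follows a CSMA/CD-style collision with a fixed time-slot schedule whose worst-case latency is known in advance, which is what hard real-time deadlines require.

```python
import random

# Toy comparison of random CSMA/CD-style backoff with a fixed time-slot
# schedule. Slot length, node count, and collision probability are arbitrary
# assumptions; this models no particular Ethernet or fieldbus variant.

SLOT_TIME = 1.0   # arbitrary time unit per slot
NODES = 8         # stations sharing the medium

def csma_cd_wait(max_attempts=10, collision_prob=0.3):
    """Waiting time for one frame under binary exponential backoff."""
    wait = 0.0
    for attempt in range(1, max_attempts + 1):
        if random.random() > collision_prob:
            return wait                      # transmitted successfully
        wait += random.randint(0, 2 ** attempt - 1) * SLOT_TIME
    return wait                              # gave up after max_attempts

def tdma_wait(node_id):
    """Waiting time under a fixed slot schedule: bounded by one full cycle."""
    return node_id * SLOT_TIME

random.seed(0)
print("random backoff, worst of 1000 frames:",
      max(csma_cd_wait() for _ in range(1000)))
print("slotted schedule, worst case for the last node:", tdma_wait(NODES - 1))
```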

The use of the Internet, Web, and IT technologies in industrial control, automation, and design is presented in the section The Internet, Web, and IT Technologies in Industrial Automation and Design. This is a vast area, and reporting has had to be limited to selected topics where the impact of these technologies was most notably felt. This section also includes a contribution on IT security in automation systems.

The contributions are organized into the following subsections: Internet and Web-Based Technologies in Industrial Automation, Component Technologies in Industrial Automation, Java Technology in Industrial Automation, Standards for Programmable Logic Controllers and System Design, Virtual Reality in Design and Manufacturing, and Security for Automation Systems. The issues, solutions, and technologies involved in remote monitoring and control, as well as maintenance, a goal of industrial enterprises and urban automation, are presented in the Internet and Web-Based Technologies in Industrial Automation subsection. This subsection also covers material on telemanipulation over the Internet, and material on the use of IT technologies. OLE for Process Control (OPC), the standard interface for access to Microsoft's Windows-based applications in automation, is one of the most popular industrial standards among users and developers of Human Machine Interface (HMI), Supervisory Control and Data Acquisition (SCADA), and Distributed Control System (DCS) software for PC-based automation, as well as Soft PLCs. OPC is discussed in detail in the subsection on Component Technologies in Industrial Automation. (A comprehensive evaluation of Corba technology and implementations can be found in a contribution on OCEAN in the section on Integration Technologies.) An overview of Java technology, real-time extensions, and prospects for applications in controls and industrial automation is given in the Java Technology in Industrial Automation subsection. This contribution provides a roundup of two different real-time extensions for the Java language: the Real-Time Specification for Java (RTSJ), developed by the Real-Time for Java Expert Group under the auspices of Sun Microsystems and reference-implemented by TimeSys Corp., and the Real-Time Core Extensions, developed by the Real-Time Java Working Group operating within the J-Consortium. The contribution also introduces the Real-Time Data Access (RTDA) specification developed by the Real-Time Data Access Working Group (RTAWG), also operating within the J-Consortium. The RTDA specification focuses on an API for accessing I/O data in typical industrial and embedded applications. The subsection on Standards for Programmable Logic Controllers and System Design introduces three standards: IEC 60848 Ed. 2, IEC 61131-3, and IEC 61499. The IEC 61499 standard defines a reference architecture for open and distributed control systems, which provides the means for compatibility between automation systems of different vendors. Virtual reality has become an important tool for the design and prototyping of products and complex systems. The subsection Virtual Reality in Design and Manufacturing contains two contributions introducing selected aspects of haptic simulation in design and manufacturing, and the use of virtual reality techniques in the design and validation of factory communication systems. The topic of IT security in automation systems is thoroughly explored at the end of the section; IT security in automation systems poses a particular challenge given the requirement for remote control and manufacturing.

The current trend in the industry for flexible and distributed control and automation has accelerated the migration of intelligence and control functions to the field devices, particularly sensors and actuators. The increased processing capabilities of those devices were instrumental in the emergence of a trend for networking field devices around industrial data networks. The benefits are numerous, including increased flexibility, improved system performance, and ease of system installation, upgrade, and maintenance. The increased functional complexity of those devices, however, has pushed their development into the realm of formal design methods and tools characteristic of embedded systems. Another trend in the networking of field devices has emerged recently: wireless sensor networks. A number of deployments on the factory floor have been reported. In this context, the integration of wireline and wireless technologies becomes an important issue. Recent advances in the research on "intelligent space," which initially targeted an office environment, have helped enhance the navigation capabilities of industrial autonomous and mobile robots, which are an integral element of automated factories. Some of those issues and trends are presented in the Intelligent Sensors and Sensor Networks section.

The material is structured into the following subsections: Intelligent Sensor and Device Technology, Intelligent Sensors in Robotics, Sensor Systems and Networks, and Multisensor Data Fusion. The Intelligent Sensor and Device Technology subsection provides an in-depth overview of the technologies and standards behind intelligent sensors and actuators. It begins with material on the IEEE 1451 standards for connecting sensors and actuators to microprocessors, control and field networks, and instrumentation systems. The standards also define the Transducer Electronic Data Sheet (TEDS), which allows for the self-identification of sensors. The IEEE 1451 standards facilitate sensor networking, a new trend in industrial automation which, among other benefits, offers strong economic incentives. The networking issues and supporting integration technologies are discussed in the next contribution, which also outlines the evolution of the intelligent device (sensors and actuators) networking concepts. Intelligent sensors, in addition to performing measurements and sensory information processing, embed complex functionalities essential for their operation and communications. The use of semi-formal and formal design techniques and supporting tools, essential for a rapid and cost-effective design, is discussed in the subsection along with an introduction to the CAP modeling language, which allows for rapid prototyping of intelligent devices. Robotic technology has become an integral part of industrial automation. An overview of the fundamental sensor technologies in robotics is presented in the Intelligent Sensors in Robotics subsection. The material covers vision, tactile sensing, sense of smell, and ultrasonic sensors. The subsection Sensor Systems and Networks presents contributions on intelligent space, sensor networks, and development methodologies. The intelligent space concept provides solutions, among others, for the location of mobile robots, which nowadays frequently operate autonomously on the factory floor. Distributed wireless sensor networks are a relatively new and exciting proposition for collecting sensory data in the industrial environment. The design of this kind of network poses a particular challenge due to limited computational power and memory size, bandwidth restrictions, power consumption restrictions if battery powered, communication requirements, and unattended mode of operation in industrially hostile environments. The subsection provides a comprehensive discussion of the design issues related to, in particular, self-organizing networks, including Medium Access Control (MAC) layer protocols, routing protocols, security, location determination, and power management. It also discusses existing software solutions and systems supporting the development of dynamic wireless sensor networks. This presentation includes middleware, operating systems and execution environments, languages, and whole software frameworks. The section concludes with a subsection on Multisensor Data Fusion.
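The self-identification idea behind the TEDS can be pictured as a sensor that carries a machine-readable description of itself, which the host reads at connection time to configure the channel automatically. The sketch below (Python, written for this review) is a loose illustration only; the field names and the example device are invented and do not follow the actual IEEE 1451 TEDS formats.

```python
from dataclasses import dataclass

# Loose illustration of transducer self-identification in the spirit of a TEDS:
# the sensor stores a description of itself, and the host configures the channel
# from that description at plug-in time. Field names and the example device are
# invented and do not follow the actual IEEE 1451 TEDS formats.

@dataclass
class TransducerDescription:
    manufacturer: str
    model: str
    measurand: str        # e.g., temperature or pressure
    units: str
    min_value: float
    max_value: float
    calibration_date: str

def configure_channel(desc: TransducerDescription) -> None:
    """Host-side setup driven entirely by the sensor's own description."""
    print(f"Configuring {desc.measurand} channel: {desc.manufacturer} {desc.model}, "
          f"range {desc.min_value} to {desc.max_value} {desc.units}, "
          f"calibrated {desc.calibration_date}")

# A newly attached sensor announces itself; no manual driver setup is needed.
probe = TransducerDescription("ExampleCo", "T-100", "temperature", "degC",
                              -40.0, 125.0, "2004-06-01")
configure_channel(probe)
```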

Advances in embedded systems, particularly in design methodology and supporting tools, have allowed for the cost-effective migration of intelligence and control functions into field devices. This, coupled with developments in other areas, promoted a shift toward decentralized architectures of industrial automation systems, and whole enterprises. Embedded systems are one of the fastest growing areas of technology, and crucial for the realization of fully automated plants and factories. The Real-Time Embedded Systems section gives a comprehensive overview of the field: basics and emerging trends. It covers design aspects, languages and tools, security, real-time and embedded operating systems, and networked embedded systems.

The contributions are organized into the following subsections: Embedded Systems, Security in Embedded Systems, System-on-Chip and Network-on-Chip Design, and Networked Embedded Systems. The fundamental and advanced aspects of embedded systems and their design are presented in the Embedded Systems subsection. This subsection offers a comprehensive introduction to real-time systems (a toy sketch of the periodic-task model follows this overview), the design of embedded systems, models of computation, languages for embedded systems, including hardware-level design and verification languages, the design of software for embedded systems, real-time operating systems, and power-aware embedded computing. Security in embedded systems is treated in the Security in Embedded Systems subsection, which gives a roundup of the security issues and the solutions adopted by industry. The System-on-Chip and Network-on-Chip Design subsection provides a comprehensive overview of the issues involved in designing systems and networks on chip; the material covers system-on-chip and network-on-chip design, platform-based and derivative design, hardware/software interface design for systems-on-chip, and on-chip networks. Networked embedded systems are introduced in the Networked Embedded Systems subsection.
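As a small aside on what "real-time" means in this context, the sketch below runs a task at a fixed period and reports deadline overruns. It is only a minimal, soft real-time illustration of the periodic-task model written in plain Python; real embedded systems rely on an RTOS with preemptive scheduling, and the task, period, and iteration count used here are illustrative assumptions.

import time

def run_periodic(task, period_s, iterations):
    """Run `task` at a fixed period, reporting any deadline overruns.

    A toy illustration of the periodic-task model used in real-time
    systems; an actual RTOS would schedule such tasks preemptively.
    """
    next_release = time.monotonic()
    for _ in range(iterations):
        task()
        next_release += period_s
        slack = next_release - time.monotonic()
        if slack > 0:
            time.sleep(slack)          # idle until the next release point
        else:
            print(f"deadline miss: task overran by {-slack:.4f} s")

# Example: a 10 ms "control" task executed 100 times.
run_periodic(lambda: sum(range(1000)), period_s=0.010, iterations=100)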

The last decade has witnessed an accelerated evolution of industrial enterprises. This trend has been driven by financial factors and by the availability of proven and affordable technologies and solutions. Advances in embedded systems design and falling fabrication costs have allowed a new generation of smart and relatively inexpensive field devices with increased functionality to be deployed on the factory floor. The trend toward networking smart field devices over industrial data networks has transformed control architectures from centralized into distributed and decentralized ones. This, in turn, has allowed efficient integration of I/O control, device configuration, and data collection across multiple networks and plant units. In this context, the control network, to date largely used to integrate devices at the fieldbus level, has attained a new integrative role. Other equally profound changes of an integrative nature include seamless integration between the automation and business logistics levels to exchange jobs and production data; transparent data interfaces for all stages of the plant life cycle; and Internet- and Web-enabled remote diagnostics and maintenance, as well as electronic orders and transactions. This integration has been achieved through an interplay of different technologies and solutions, with IT technologies playing some of the most important roles. This section shows a small cross-section of the integration efforts and results, primarily arising from the activities of major control and automation vendors and multinational groupings. These efforts have the potential to yield lasting solutions and new technologies.

The material is organized into the following subsections: E-Technologies in Enterprise Integration, IT Technologies in Enterprise Integration, Network-Based Integration Technologies, Agent-Based Technologies in Industrial Automation, and Industrial Information Technology Solutions for Energy and Power Systems. An introduction to e-manufacturing is presented in the E-Technologies in Enterprise Integration subsection. This material gives an overview of e-manufacturing strategies, fundamental elements, and the requirements for meeting the changing needs of a manufacturing industry in transition to an e-business environment. It covers e-manufacturing, e-maintenance, e-factory, and e-business. The IT Technologies in Enterprise Integration subsection contains four contributions discussing the use of XML, Web Services, and other IT technologies in enterprise integration. The first contribution deals with replacing paper flow with electronic technologies and presents the use of XML as a means for information exchange during the design and development of automation systems. The next contribution introduces the World Batch Forum's (WBF) Business To Manufacturing Markup Language (B2MML), a set of XML schemas based upon the ISA-95 Enterprise-Control System Integration standards. The material demonstrates how B2MML can be used to exchange data between business/enterprise and manufacturing systems (a simplified sketch of such an exchange follows this overview); the contribution also gives a roundup of the ISA-95 standard. Web Services technology provides the means for implementing open and platform-independent integrated automation systems. The challenges of using Web Services in automated systems, along with solutions and future trends, are discussed in the third contribution. The final contribution in this subsection presents IT-based connectivity solutions for interfacing production and business systems. The presented concepts and architectures are the result of extensive study and prototyping efforts conducted by ABB in the search for cost-effective approaches leveraging existing mainstream technologies, such as enterprise application integration (EAI), Web Services, and XML, as well as emerging industry standards such as ISA-95 and CIM. The Network-Based Integration Technologies subsection presents novel industry approaches to the development of distributed and flexible automation systems centered around control networks. Prominent examples discussed in this subsection are the PROFInet standard of the PROFIBUS User Organization and the Interface for Distributed Automation (IDA) standard of the IDA Group e.V. Both approaches draw extensively on Ethernet TCP/IP, Web, and network security technologies and allow, among other features, for horizontal and vertical integration and for interoperability of devices from different vendors. Another notable development presented in this subsection is the Open Controller Enabled by an Advanced real-time Network (OCEAN) project, which aims to realize a real-time-capable platform for distributed control applications for numerical controls, based on standardized communication systems and delivered as open source. The development draws heavily on the results of the Open System Architecture for Controls within Automation (OSACA) Systems project, a precursor of OCEAN. The high degree of complexity of manufacturing systems, coupled with market-dictated requirements for agility, has led to the development of new manufacturing architectures and solutions based on distributed, autonomous, and cooperating units integrated in a plug-and-play fashion, called agents. The Agent-Based Technologies in Industrial Automation subsection offers a comprehensive treatment of the topic. It begins with an introduction to the agent concept and technology, cooperation and coordination models, interoperability, and applications to manufacturing systems. Agents dedicated to real-time manufacturing tasks are called holons; the use of holons and related concepts in manufacturing systems is discussed in the next contribution. The material presented there is largely based on the results of an international project, Holonic Manufacturing Systems, involving some of the leading multinational corporations. FactoryBroker™, a heterogeneous agent-oriented collaborative control system developed and implemented by Schneider Electric GmbH (Industrial Automation) in cooperation with DaimlerChrysler AG, Research and Technology, Berlin, Germany, is the focus of the next presentation. The last contribution in this subsection introduces PABIS (Plant Automation Based on Distributed Systems), an IMS project under reference IST-1999-60016, which aims at providing a flexible plant automation solution based on agent technologies. The use of IT in energy and power systems is presented in the concluding subsection. The material introduces the IEC 61850 standard, the JEVis service platform, and the use of the ABB Industrial IT platform in power network management. IEC 61850, which incorporates technologies such as Ethernet, TCP/IP, MMS (ISO 9506), and XML, aims at seamless information integration across the utility enterprise using off-the-shelf products; the primary target is electrical substation automation: switchyards and transformers in medium- and high-voltage transmission and distribution. The JEVis system is an Internet-enabled database connected to distributed networks; it collects measurement data and, after processing, presents it to customers via Web-based graphical front-ends. One of the JEVis application areas is energy management, or demand-side management. The use of ABB's Industrial IT Aspect Integrator Platform in power distribution and transmission applications is presented in the last contribution of the section. Some of the key customer benefits provided by Industrial IT include a component-based network control system architecture and company-wide migration opportunities with a low investment.
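The kind of XML-based business-to-manufacturing exchange that B2MML supports can be illustrated with a small Python sketch. The element names below are simplified placeholders loosely modeled on production-schedule data; they do not follow the actual B2MML/ISA-95 schemas, and the product identifier and quantity are invented for the example.

import xml.etree.ElementTree as ET

def build_production_request(product_id: str, quantity: int, unit: str) -> str:
    """Build a simplified, hypothetical production-request message.

    Illustrative only: real B2MML documents conform to the published
    WBF schemas, which define far richer structures than shown here.
    """
    root = ET.Element("ProductionSchedule")
    request = ET.SubElement(root, "ProductionRequest")
    ET.SubElement(request, "ProductID").text = product_id
    qty = ET.SubElement(request, "Quantity")
    ET.SubElement(qty, "Value").text = str(quantity)
    ET.SubElement(qty, "UnitOfMeasure").text = unit
    return ET.tostring(root, encoding="unicode")

# A business system might emit such a message for the manufacturing layer:
print(build_production_request("PUMP-7", 250, "piece"))

The point of the schema-based approach is that both the enterprise and the manufacturing sides agree on the document structure in advance, so messages like the one above can be validated and parsed automatically at either end.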

To assist readers in locating material, the book offers a three-tier table of contents: a complete table of contents at the front of the book, an individual table of contents preceding each of the seven sections, and a table of contents at the start of each chapter.

Two indexes are provided at the end of the book: an index of contributing authors, together with the titles of their contributions, and a detailed subject index.
