Holistic Universe
Articles
Antireductionism
Complementary holism
Complexity
Confirmation holism
Duhem–Quine thesis
Emergent evolution
The Evolution of Cooperation
Holarchy
Holism
Holism in ecological anthropology
Holism in science
Holon (philosophy)
Indeterminacy (philosophy)
Integral (spirituality)
Integral Theory
Integral ecology
Law of Complexity/Consciousness
Layered system
Levels of Organization (anatomy)
Logical holism
Meshico
Modular Function Deployment
Modular programming
Modular design
Modularity
Noosphere
Ontology modularization
Organicism
Philosophy of Organism
Powers of Ten
Willard Van Orman Quine
Semantic holism
Sphoṭa
Structured programming
Subroutine
Synergetics (Fuller)
Synergetics (Haken)
Synergism (theology)
Synergy
Systems thinking
Theory of Colours (book)
References
Article Sources and Contributors
Image Sources, Licenses and Contributors
Article Licenses
License
Antireductionism
Antireductionism is a reaction against reductionism that instead advocates holism.[1] Although "breaking complex phenomena into parts, is a key method in science,"[2] there are complex phenomena (e.g. in psychology, sociology, ecology) where some resistance to this approach arises, primarily because of the perceived shortcomings of the reductionist approach. In such situations, some people search for ideas that supply "an effective antidote against reductionism, scientism, and psychiatric hubris."[3] This in essence forms the philosophical basis for antireductionism. Such rebellions against reductionism also implicitly carry some critique of the scientific method itself, which engenders suspicion among scientists that antireductionism is inherently flawed.

Antireductionism often arises in academic fields such as history, economics, anthropology, medicine, and biology as dissatisfaction with attempts to explain complex phenomena by reducing them to simplistic, ill-fitting models, which do not provide much insight about the matter at hand.[4] An example in psychology is the "ontology of events to provide an anti-reductionist answer to the mind/matter debate [and]...the impossibility of intertranslating the two idioms by means of psychophysical laws blocks any analytically reductive relation between...the mental and the physical."[5]

As Alex Rosenberg and D. M. Kaplan point out, "physicalism and antireductionism are the ruling orthodoxy in the philosophy of biology...[yet] both reductionists and antireductionists accept that given our cognitive interests and limitations, non-molecular explanations may not be improved, corrected or grounded in molecular ones."[6] This is "one of the central problems in the philosophy of psychology...an updated version of the old mind-body problem: how levels of theories in the behavioral and brain sciences relate to one another. Many contemporary philosophers of mind believe that cognitive-psychological theories are not reducible to neurological theories...most nonreductive physicalists prefer the idea of a one-way dependence of the mental on the physical."[7]
See also
Alexander Rosenberg
E. F. Schumacher
A Guide for the Perplexed
Antiscience
Philosophy of mind
Nonreductive physicalism
Evolution
Systems theory
Systems science
Emergence
References
[1] Reductionism, Antireductionism, and Supervenience (http://www.drury.edu/ess/philsci/KleeCh5.html)
[2] Reductionism vs. obscurantism (http://www.geocities.com/lclane2/reductionism.html) by Les Lane. Archived (http://www.webcitation.org/5knIQz0X1) 2009-10-25.
[3] Jennifer Radden (ed.), The Philosophy of Psychiatry: A Companion (http://www.oup.com/uk/catalogue/?ci=9780195149531)
[4] Reductionism and Antireductionism (http://www.novartisfound.org.uk/catalog/213abs.htm) by Thomas Nagel
[5] Essays on Actions and Events (http://www.oxfordscholarship.com/oso/public/content/philosophy/0199246270/toc.html) by Donald Davidson
[6] How to Reconcile Physicalism and Antireductionism about Biology (http://www.journals.uchicago.edu/PHILSCI/journal/issues/v72n1/720114/brief/720114.abstract.html) by Alex Rosenberg and D. M. Kaplan
[7] Psychoneural Reduction: The New Wave (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=7434) by John Bickle
External links
John Bickle, Psychoneural Reduction: The New Wave (http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=7434), Bradford Books, March 1998, ISBN 0-262-02432-2.
Ingo Brigandt and Alan Love, "Reductionism in Biology" (http://plato.stanford.edu/entries/reduction-biology/), in: The Stanford Encyclopedia of Philosophy.
Donald Davidson, Essays on Actions and Events (http://www.oxfordscholarship.com/oso/public/content/philosophy/0199246270/toc.html), OUP, 2001, ISBN 0-19-924627-0.
Alex Rosenberg and D. M. Kaplan, "How to reconcile physicalism and antireductionism about biology" (http://www.journals.uchicago.edu/PHILSCI/journal/issues/v72n1/720114/brief/720114.abstract.html), Philosophy of Science, Volume 72.1, January 2005, pp. 43–68.
Manfred Laubichler and Gunter Wagner (2001), "How molecular is molecular developmental biology? A reply to Alex Rosenberg's Reductionism redux: computing the embryo" (http://www.springerlink.com/content/k8830102u196l161/), Biology and Philosophy 16: 53–68.
Bolender, John (1995), "Is multiple realizability compatible with antireductionism?" (http://cogprints.org/4042/), The Southern Journal of Philosophy XXXIII: pp. 129–142.
Complementary holism
Complementary holism is a social theory or conceptual framework, proposed by Michael Albert and Robin Hahnel, that sees all societies as consisting of a Human Center and Institutional Boundaries, and all social relations in the political, economic, community/cultural and kinship "spheres" as mutually interacting to define our social experiences. Complementary holism does not rest on an a priori assumption that a particular sphere is the base and all else is superstructure, as historical materialism does, but holds instead that we must take an empirical look at society's development and assess how it has been shaped by all social forces. Complementary holists agree with Marxists that economics is important to human and social development, just as they agree with anarchists in regard to the State (polity) and with feminists in regard to gender inequality; where they differ with Marxists is that they do not see economics, or class conflict, as the sole factor, nor do they feel it is possible, or productive, to say it is always the most important factor. In Liberating Theory, Michael Albert, Robin Hahnel et al. write that:

Just as Marx and Engels paid strict attention to "state of the art" science in their time, we should keep up with contemporary developments. Ironically, however, though most contemporary Marxists pride themselves on being "scientific," few bother to notice that "state of the art" science has changed dramatically in the last hundred years. While avoiding simplistic mimicry and misapplication of scientific principles, we should update our methods by seriously examining contemporary science for new ideas relevant to our theoretical efforts. Modern quantum physics, for example, teaches that reality is not a collection of separate entities but a vast and intricate "unbroken whole." Ilya Prigogine comments, "The new paradigms of science may be expected to develop into the new science of connectedness which means the recognition of unity in diversity."
When thinking about phenomena, we inevitably conceptually abstract parts from the whole in which they reside, but they then exist as separate entities only in our perceptions. There are no isolated electrons, for example, only fields of force continually ebbing and flowing in a seamless web of activity which manifests events that we choose to call electrons because it suits our analytic purposes. For the physicist, each electron, quark, or whatever is a "process" and a "network." As a process it has a developmental trajectory, extending through all time. As a network, it is part of an interactive pattern, stretching throughout all space. Every part embodies and is subsumed in a larger whole.
Spheres
Complementary holism carves out the Human Center and Institutional Boundary into four separate but interconnecting social spheres: the Economic Sphere, the Kinship Sphere, the Community Sphere and the Political Sphere.

1. The Economic Sphere is where the production, consumption, and allocation of the material means of life occur. The key institutions for the economy are workplaces, allocation mechanisms, property relations, and remuneration schemes.
2. The Kinship Sphere is where child rearing, nurturing future generations, socializing and caregiving occur. Key institutions are the family, with parental and child-rearing roles, where gender, sexuality, and other relations form for boys and girls, men and women, fathers and mothers, adults, children and the elderly.
3. The Political Sphere is where adjudication, policy regulation and law making occur, with courts, a legislature and police.
4. The Community Sphere is where identity, religion, and spirituality occur, with race, ethnicity, places of worship, beliefs about life and death, and the celebration of cultural traditions.

Like the human center and institutional boundary, these four spheres interconnect with one another:

People aren't economically-affected in one part of their lives and gender-affected in another; state-influenced one day of the week and community-influenced some other day. Instead, they simultaneously experience economic, kinship, governance, and community involvements, and this guarantees that spheres interact. At a particular time class may have more influence on molding a person's consciousness and behavior than gender, or vice-versa. But these influences must co-exist. (Liberating Theory)
Accommodation
Accommodation describes a particular way in which the four spheres interact with each other. The assigned roles that are a feature of one sphere may be accommodated in another. How roles are divided in the workplace may accommodate the sexist and racist divisions that are central features of the Kinship and Community Spheres, so that economic activity subjugates women and non-whites to lower pay and less powerful positions than men and whites.
Co-Definition
Another way the four spheres can interact is by co-defining each other. For example, workplace roles are often determined by more than "class divisions." There is nothing inherent in capitalist economic relations that requires the activity of coffee-making to be assigned to the role of secretary. There is no purely economic reason why, in the U.S., women are ghettoized into so-called "pink collar" jobs: clerical work, nursing, domestic work, restaurant and food service, retail sales, elementary school teaching, etc. There is nothing about economics that requires that, in addition to different levels of compensation, women's and men's activities must or even should involve different degrees of oversight and mobility. Purely economic dynamics cannot explain such profound gender differentiations. In this sense, then, not only do economic relations accommodate kinship hierarchies, by placing women in the lowest "economically defined" positions, but patriarchy "co-defines" basic economic relations. (Liberating Theory)
Social Change
Proponents of complementary holism feel the social theory is helpful not only in understanding society as it is but in understanding how society can be changed. By recognizing the interconnecting social forces (the human center, institutional boundary and four social spheres) and how they shape and are shaped by one another, complementary holists feel we are better equipped to transform society and overcome social oppressions. Complementary holism sees two ways in which societies can change:

1. Reproduction: a change recreates a past social feature. An example could be a revolution that overthrows one hierarchical political system and replaces it with another.
2. Transformation: social activity creates entirely new characteristics that differ from the past. For example, a social revolution may overthrow a market economy and replace it with a self-managed participatory economy.
References
Albert, Michael et al. Liberating Theory. South End Press. 1989.
Hahnel, Robin. The ABC's of Political Economy: A Modern Approach. Pluto Press. 2003: http://zinelibrary.info/abcs-political-economy-robin-hahnel
Spannos, Chris. Introduction to Totality and Complementary Holism. ZNet. 2008: http://www.zcommunications.org/introduction-to-totality-and-complementary-holism-by-chris-spannos-1
Complexity
In general usage, complexity tends to be used to characterize something with many parts in intricate arrangement. The study of these complex linkages is the main goal of network theory and network science. In science, there are at this time a number of approaches to characterizing complexity, many of which are reflected in this article. In a business context, complexity management is the methodology to minimize value-destroying complexity and efficiently control value-adding complexity in a cross-functional approach.

Definitions are often tied to the concept of a "system": a set of parts or elements that have relationships among them, differentiated from relationships with other elements outside the relational regime. Many definitions tend to postulate or assume that complexity expresses a condition of numerous elements in a system and numerous forms of relationships among the elements. At the same time, what is complex and what is simple is relative and changes with time.

Some definitions key on the question of the probability of encountering a given condition of a system once characteristics of the system are specified. Warren Weaver posited that the complexity of a particular system is the degree of difficulty in predicting the properties of the system if the properties of the system's parts are given. In Weaver's view, complexity comes in two forms: disorganized complexity and organized complexity.[1] Weaver's paper has influenced contemporary thinking about complexity.[2]

The approaches that embody concepts of systems, multiple elements, multiple relational regimes, and state spaces might be summarized as implying that complexity arises from the number of distinguishable relational regimes (and their associated state spaces) in a defined system. Some definitions relate to the algorithmic basis for the expression of a complex phenomenon or model or mathematical expression, as set out later herein.
[Figure: Map of Complexity Science.[3] The web version of this map provides internet links to many of the leading scholars and areas of research in complexity science.]
The organized aspect of this form of complexity vis-à-vis other systems than the subject system can be said to "emerge," without any "guiding hand." The number of parts does not have to be very large for a particular system to have emergent properties. A system of organized complexity may be understood in its properties (behavior among the properties) through modeling and simulation, particularly modeling and simulation with computers. An example of organized complexity is a city neighborhood as a living mechanism, with the neighborhood people among the system's parts.[4]
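The claim that organized behaviour can emerge "without any guiding hand" is easy to illustrate with a toy program. The sketch below (my own illustration; neither the rule number nor the layout comes from the article) runs Wolfram's Rule 30, a one-dimensional cellular automaton whose simple, purely local update rule produces an intricate global pattern from a single active cell.

```python
# Wolfram's Rule 30: a one-dimensional cellular automaton whose simple
# local rule generates an intricate global pattern from a single cell.
RULE = 30  # the update rule, encoded as an 8-bit lookup table

def step(cells):
    """Apply the rule to every cell; boundaries wrap around."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=31, generations=15):
    """Start from a single active cell and collect every generation."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = [cells]
    for _ in range(generations):
        cells = step(cells)
        rows.append(cells)
    return rows

for row in run():
    print("".join("#" if c else "." for c in row))
```

No cell "knows" anything beyond its two neighbours, yet the printed triangle shows large-scale structure: emergence in miniature.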
…developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).

In information processing, complexity is a measure of the total number of properties transmitted by an object and detected by an observer. Such a collection of properties is often referred to as a state.

In business, complexity describes the variances and their consequences in various fields such as product portfolio, technologies, markets and market segments, locations, manufacturing network, customer portfolio, IT systems, organization, and processes.

In physical systems, complexity is a measure of the probability of the state vector of the system. This should not be confused with entropy; it is a distinct mathematical measure, one in which two distinct states are never conflated and considered equal, as is done for the notion of entropy in statistical mechanics.

In mathematics, Krohn–Rhodes complexity is an important topic in the study of finite semigroups and automata.

In software engineering, programming complexity is a measure of the interactions of the various elements of the software. This differs from the computational complexity described above in that it is a measure of the design of the software.

There are different specific forms of complexity. In the sense of how complicated a problem is from the perspective of the person trying to solve it, limits of complexity are measured using a term from cognitive psychology, namely the hrair limit.
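As a concrete (and deliberately simplified) example of a programming-complexity measure, the sketch below estimates McCabe-style cyclomatic complexity of Python source by counting branch points with the standard ast module. The function name and the exact set of decision nodes are my own choices for illustration, not a standard API; dedicated tools refine this considerably.

```python
import ast

# Node types treated as decision points; this set is a simplification.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """McCabe-style estimate: 1 + number of decision points in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

# Straight-line code: no branches, minimal design complexity.
simple = "def f(x):\n    return x + 1\n"

# Nested branching: three decision points, higher design complexity.
branchy = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        for i in range(x):\n"
    "            if i % 2 == 0:\n"
    "                x += i\n"
    "    return x\n"
)
```

On these examples the straight-line function scores 1 and the nested-branch function scores 4, capturing the idea that the metric tracks the design of the code rather than the runtime cost of executing it.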
Complex adaptive system denotes systems that have some or all of the following attributes:[6]

The number of parts (and types of parts) in the system and the number of relations between the parts is non-trivial; however, there is no general rule to separate "trivial" from "non-trivial".
The system has memory or includes feedback.
The system can adapt itself according to its history or feedback.
The relations between the system and its environment are non-trivial or non-linear.
The system can be influenced by, or can adapt itself to, its environment.
The system is highly sensitive to initial conditions.
Study of complexity
Complexity has always been a part of our environment, and therefore many scientific fields have dealt with complex systems and phenomena. Indeed, some would say that only what is somehow complex (what displays variation without being random) is worthy of interest. The term complex is often confused with the term complicated. In today's systems, this is the difference between myriad connecting "stovepipes" and effective "integrated" solutions.[7] This means that complex is the opposite of independent, while complicated is the opposite of simple. While this has led some fields to come up with specific definitions of complexity, there is a more recent movement to regroup observations from different fields to study complexity in itself, whether it appears in anthills, human brains, or stock markets. One such interdisciplinary group of fields is relational order theories.
Complexity topics
Complex behaviour
The behaviour of a complex system is often said to be due to emergence and self-organization. Chaos theory has investigated the sensitivity of systems to variations in initial conditions as one cause of complex behaviour.
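Sensitivity to initial conditions can be demonstrated with the logistic map, a standard textbook example of chaos (my choice of illustration, not one the article names): two trajectories that start within 10^-10 of each other become completely unrelated within a few dozen iterations.

```python
def logistic(x, r=4.0):
    """One step of the logistic map; r = 4.0 lies in the chaotic regime."""
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    """Iterate the map from x0, returning all intermediate values."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)  # perturb only the tenth decimal place
gap = [abs(x - y) for x, y in zip(a, b)]
```

The gap between the trajectories starts at 10^-10 and is amplified roughly by a factor of two per iteration, so despite the deterministic rule the long-run behaviour is unpredictable from any finite-precision initial condition.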
Complex mechanisms
Recent developments around artificial life, evolutionary computation and genetic algorithms have led to an increasing emphasis on complexity and complex adaptive systems.
Complex simulations
In social science, the study of the emergence of macro-properties from micro-properties is known as the macro-micro view in sociology. The topic is commonly recognized as social complexity, which is often related to the use of computer simulation in social science, i.e. computational sociology.
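A minimal sketch of this macro-from-micro theme: in the toy "wealth exchange" model below (entirely my own illustration, in the spirit of simple computational-sociology simulations), every individual transfer is symmetric and fair, yet a markedly unequal wealth distribution emerges at the macro level.

```python
import random

def gini(wealth):
    """Gini coefficient: 0 means perfect equality; values near 1 mean
    extreme inequality."""
    w = sorted(wealth)
    n = len(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * sum(w)) - (n + 1) / n

rng = random.Random(42)
agents = [100] * 50  # fifty agents, all starting with equal wealth

for _ in range(20000):
    a, b = rng.randrange(50), rng.randrange(50)
    if agents[a] > 0:   # a gives one unit to b: a "fair" random transfer
        agents[a] -= 1
        agents[b] += 1
```

No agent seeks inequality, and total wealth is conserved, but the macro-level Gini coefficient rises well above its initial value of zero: a property of the population that is not a property of any individual rule.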
Complex systems
Systems theory has long been concerned with the study of complex systems (in recent times, complexity theory and complex systems have also been used as names of the field). These systems can be biological, economic, technological, etc. Recently, complexity has become a natural domain of interest in real-world socio-cognitive systems and emerging systemics research. Complex systems tend to be high-dimensional, non-linear, and hard to model. In specific circumstances they may exhibit low-dimensional behaviour.
Complexity in data
In information theory, algorithmic information theory is concerned with the complexity of strings of data. Complex strings are harder to compress. While intuition tells us that this may depend on the codec used to compress a string (a codec could theoretically be created in any arbitrary language, including one in which the very small command "X" could cause the computer to output a very complicated string like "18995316"), any two Turing-complete languages can be implemented in each other, meaning that the length of two encodings in different languages will vary by at most the length of the "translation" language, which will end up being negligible for sufficiently large data strings. These algorithmic measures of complexity tend to assign high values to random noise. However, those studying complex systems would not consider randomness as complexity. Information entropy is also sometimes used in information theory as indicative of complexity.
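The compression intuition above can be checked directly with a general-purpose compressor as a crude stand-in for algorithmic complexity (compressed size is only a computable upper bound on Kolmogorov complexity, and the size thresholds in the comments are empirical observations, not theory):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form: a crude, computable upper
    bound on the algorithmic complexity of the data."""
    return len(zlib.compress(data, 9))

repetitive = b"ab" * 5000       # highly regular: compresses to a few dozen bytes
random_ish = os.urandom(10000)  # random bytes: essentially incompressible
```

The regular string shrinks by orders of magnitude while the random one does not, which also illustrates the caveat in the text: the compressor assigns its highest values to pure noise, which students of complex systems would not call complex.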
Applications of complexity
Computational complexity theory is the study of the complexity of problems, that is, the difficulty of solving them. Problems can be classified by complexity class according to the time it takes for an algorithm (usually a computer program) to solve them as a function of the problem size. Some problems are difficult to solve, while others are easy. For example, some difficult problems need algorithms that take an exponential amount of time in terms of the size of the problem to solve. Take the travelling salesman problem, for example. It can be solved in time O(n^2·2^n) (where n is the size of the network to visit; let's say the number of cities the travelling salesman must visit exactly once). As the size of the network of cities grows, the time needed to find the route grows (more than) exponentially.

Even though a problem may be computationally solvable in principle, in actual practice it may not be that simple. These problems might require large amounts of time or an inordinate amount of space. Computational complexity may be approached from many different aspects. Computational complexity can be investigated on the basis of time, memory or other resources used to solve the problem. Time and space are two of the most important and popular considerations when problems of complexity are analyzed.

There exists a certain class of problems that, although they are solvable in principle, require so much time or space that it is not practical to attempt to solve them. These problems are called intractable.

There is another form of complexity called hierarchical complexity. It is orthogonal to the forms of complexity discussed so far, which are called horizontal complexity.
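To make the growth concrete, here is a brute-force travelling-salesman solver (a sketch; the city coordinates are made up for illustration). It examines all (n-1)! tours that start from a fixed city, so adding even a few cities multiplies the running time dramatically.

```python
from itertools import permutations
from math import dist, factorial

# Five made-up city coordinates (illustrative only).
cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]

def tour_length(order):
    """Total length of the closed tour visiting cities in this order."""
    return sum(
        dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def brute_force_tsp(n):
    """Fix city 0 as the start and try all (n-1)! orderings of the rest."""
    best = min(permutations(range(1, n)), key=lambda p: tour_length((0,) + p))
    return (0,) + best, tour_length((0,) + best)

tour, length = brute_force_tsp(len(cities))
```

With 5 cities the search tries 4! = 24 tours; with 15 it would already need 14! (about 8.7 x 10^10) tour evaluations, which is why exact solvers switch to dynamic programming, such as the O(n^2·2^n) Held–Karp algorithm mentioned in spirit above, or give up exactness for heuristics.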
See also
Chaos theory
Command and Control Research Program
Complexity theory (disambiguation page)
Cyclomatic complexity
Digital morphogenesis
Evolution of complexity
Game complexity
Holism in science
Interconnectedness
Model of Hierarchical Complexity
Names of large numbers
Network science
Network theory
Novelty theory
Occam's razor
Process architecture
Programming complexity
Sociology and complexity science
Systems theory
Variety (cybernetics)
Volatility, uncertainty, complexity and ambiguity
References
[1] Weaver, Warren (1948). "Science and Complexity" (http://www.ceptualinstitute.com/genre/weaver/weaver-1947b.htm). American Scientist 36 (4): 536. PMID 18882675. Retrieved 2007-11-21.
[2] Johnson, Steven (2001). Emergence: The Connected Lives of Ants, Brains, Cities, and Software. New York: Scribner. p. 46. ISBN 0-684-86875-X.
[3] http://www.art-sciencefactory.com/complexity-map_feb09_april.html
[4] Jacobs, Jane (1961). The Death and Life of Great American Cities. New York: Random House.
[5] Ulanowicz, Robert (1997). Ecology, the Ascendant Perspective. Columbia.
[6] Johnson, Neil F. (2007). Two's Company, Three is Complexity: A Simple Guide to the Science of All Sciences. Oxford: Oneworld. ISBN 978-1-85168-488-5.
[7] Lissack, Michael R.; Johan Roos (2000). The Next Common Sense: The e-Manager's Guide to Mastering Complexity. Intercultural Press. ISBN 9781857882353.
Further reading
Lewin, Roger (1992). Complexity: Life at the Edge of Chaos. New York: Macmillan Publishing Co. ISBN 9780025704855.
Waldrop, M. Mitchell (1992). Complexity: The Emerging Science at the Edge of Order and Chaos. New York: Simon & Schuster. ISBN 9780671767891.
Czerwinski, Tom; David Alberts (1997). Complexity, Global Politics, and National Security (http://www.dodccrp.org/files/Alberts_Complexity_Global.pdf). National Defense University. ISBN 9781579060466.
Czerwinski, Tom (1998). Coping with the Bounds: Speculations on Nonlinearity in Military Affairs (http://www.dodccrp.org/files/Czerwinski_Coping.pdf). CCRP. ISBN 9781414503158 (from Pavilion Press, 2004).
Lissack, Michael R.; Johan Roos (2000). The Next Common Sense: The e-Manager's Guide to Mastering Complexity. Intercultural Press. ISBN 9781857882353.
Solé, R. V.; B. C. Goodwin (2002). Signs of Life: How Complexity Pervades Biology. Basic Books. ISBN 9780465019281.
Moffat, James (2003). Complexity Theory and Network Centric Warfare (http://www.dodccrp.org/files/Moffat_Complexity.pdf). CCRP. ISBN 9781893723115.
Smith, Edward (2006). Complexity, Networking, and Effects Based Approaches to Operations (http://www.dodccrp.org/files/Smith_Complexity.pdf). CCRP. ISBN 9781893723184.
Heylighen, Francis (2008). "Complexity and Self-Organization" (http://pespmc1.vub.ac.be/Papers/ELIS-Complexity.pdf). In Bates, Marcia J.; Maack, Mary Niles. Encyclopedia of Library and Information Sciences. CRC. ISBN 9780849397127.
Greenlaw, N. and Hoover, H. J. (1998). Fundamentals of the Theory of Computation. Morgan Kaufmann Publishers, San Francisco.
Blum, M. (1967). On the Size of Machines. Information and Control, v. 11, pp. 257–265.
Burgin, M. (1982). Generalized Kolmogorov complexity and duality in theory of computations. Notices of the Russian Academy of Sciences, v. 25, No. 3, pp. 19–23.
Mark Burgin (2005). Super-recursive Algorithms. Monographs in Computer Science, Springer.
Burgin, M. and Debnath, N. (2003). Hardship of Program Utilization and User-Friendly Software. In Proceedings of the International Conference "Computer Applications in Industry and Engineering", Las Vegas, Nevada, pp. 314–317.
Debnath, N. C. and Burgin, M. (2003). Software Metrics from the Algorithmic Perspective. In Proceedings of the ISCA 18th International Conference "Computers and their Applications", Honolulu, Hawaii, pp. 279–282.
Meyers, R. A. (2009). Encyclopedia of Complexity and Systems Science. ISBN 978-0-387-75888-6.
Caterina Liberati, J. Andrew Howe, Hamparsum Bozdogan (2009). Data Adaptive Simultaneous Parameter and Kernel Selection in Kernel Discriminant Analysis Using Information Complexity (http://jprr.org/index.php/jprr/article/view/117). Journal of Pattern Recognition Research (http://www.jprr.org), Vol 4, No 1.
Gershenson, C. and F. Heylighen (2005). How can we think the complex? (http://uk.arxiv.org/abs/nlin.AO/0402023) In Richardson, Kurt (ed.), Managing Organizational Complexity: Philosophy, Theory and Application, Chapter 3. Information Age Publishing.
External links
Quantifying Complexity Theory (http://www.calresco.org/lucas/quantify.htm): classification of complex systems
Complexity Measures (http://cscs.umich.edu/~crshalizi/notebooks/complexity-measures.html): an article about the abundance of not-that-useful complexity measures
UC Four Campus Complexity Videoconferences (http://eclectic.ss.uci.edu/~drwhite/center/cac.html): Human Sciences and Complexity
Complexity Digest (http://comdig.unam.mx/): networking the complexity community
The Santa Fe Institute (http://www.santafe.edu/): engages in research in complexity-related topics
Exploring Complexity in Science and Technology (http://web.cecs.pdx.edu/~mm/ExploringComplexityFall2009/index.html): an introductory course about complex systems by Melanie Mitchell
Confirmation holism
Confirmation holism, also called epistemological holism, is the claim that a single scientific theory cannot be tested in isolation; a test of one theory always depends on other theories and hypotheses.

For example, in the first half of the 19th century, astronomers were observing the path of the planet Uranus to see if it conformed to the path predicted by Newton's law of gravitation; it did not. There were an indeterminate number of possible explanations: the telescopic observations were wrong because of some unknown factor; Newton's laws were in error; or God moves different planets in different ways. However, it was eventually accepted that an unknown planet was affecting the path of Uranus, and that the hypothesis that there are seven planets in our solar system was false. Le Verrier calculated the approximate position of the interfering planet, and its existence was confirmed in 1846. We now call the planet Neptune.

There are two aspects of confirmation holism. The first is that the interpretation of observation is dependent on theory (observations are sometimes called theory-laden). Before accepting the telescopic observations, one must look into the optics of the telescope, the way the mount is constructed to ensure that the telescope is pointing in the right direction, and that light travels through space in a straight line (which Einstein demonstrated is not so, except as a possible approximation). The second is that evidence alone is insufficient to determine which theory is correct. Each of the alternatives above might have been correct, but only one was in the end accepted.

That theories can only be tested as they relate to other theories implies that one can always claim that test results that seem to refute a favoured scientific theory have not refuted that theory at all.
Rather, one can claim that the test results conflict with predictions because some other theory is false or unrecognised (this is Einstein's basic objection when it comes to the uncertainty principle). Maybe the test equipment was out of alignment because the cleaning lady bumped into it the previous night. Or, maybe, there is dark matter in the universe that accounts for the strange motions of some galaxies. That one cannot unambiguously determine which theory is refuted by unexpected data means that scientists must use judgements about which theories to accept and which to reject. Logic alone does not guide such decisions.
Theory-dependence of observations
Suppose some theory T implies an observation O (observation meaning here the result of the observation, rather than the process of observation per se):

    T → O

So by modus tollens,

    ¬O → ¬T

All observations make use of prior assumptions, which can be symbolised as:

    O ↔ (A1 ∧ A2 ∧ ... ∧ An)

and therefore

    ¬O → ¬(A1 ∧ A2 ∧ ... ∧ An)

which is by De Morgan's law equivalent to

    ¬O → (¬A1 ∨ ¬A2 ∨ ... ∨ ¬An).

In other words, the failure to make some observation only implies the failure of at least one of the prior assumptions that went into making the observation. It may be possible to reject an apparently falsifying observation by claiming that only one of its underlying assumptions is false, and not the one intended to be tested by the observation; if there are an indeterminate number of such assumptions, any observation can potentially be made compatible with any theory.

Likewise, a theory consists of a conjunction of its underlying hypotheses:

    T ↔ (H1 ∧ H2 ∧ ... ∧ Hn)

and so

    ¬T → ¬(H1 ∧ H2 ∧ ... ∧ Hn)

which implies that

    ¬T → (¬H1 ∨ ¬H2 ∨ ... ∨ ¬Hn).

In words, the failure of some theory implies the failure of at least one of its underlying hypotheses. It may be possible to resurrect a falsified theory by claiming that only one of its underlying hypotheses that had been previously ignored is false; again, if there are an indeterminate number of such hypotheses underlying a particular theory, it can potentially be made compatible with any particular observation. Therefore it would be in principle impossible to determine if that theory is false by reference to evidence.
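The De Morgan step in this argument is easy to verify mechanically. The sketch below (plain propositional logic; nothing in it is specific to this article) checks the equivalence by brute force over all truth assignments, and counts how many assignments falsify a conjunction of four assumptions, showing how little a failed observation pins down.

```python
from itertools import product

def equivalent(n=4):
    """Check that ¬(A1 ∧ ... ∧ An) ≡ (¬A1 ∨ ... ∨ ¬An) over all 2^n
    truth assignments to the assumptions."""
    for values in product([False, True], repeat=n):
        lhs = not all(values)               # ¬(A1 ∧ ... ∧ An)
        rhs = any(not a for a in values)    # ¬A1 ∨ ¬A2 ∨ ... ∨ ¬An
        if lhs != rhs:
            return False
    return True

# Of the 2^4 = 16 assignments to four assumptions, 15 falsify the
# conjunction: the failed observation alone cannot say which
# assumption (or how many) went wrong.
culprits = [v for v in product([False, True], repeat=4) if not all(v)]
```

The count of falsifying assignments (2^n - 1 of the 2^n possibilities) is the formal face of the holist's point: logic narrows the blame only to "at least one assumption," never to a particular one.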
Conceptual schemes
The framework of a theory (formal conceptual scheme) is just as open to revision as the "content" of the theory. The aphorism Willard Quine uses is that theories face the tribunal of experience as a whole. This idea is problematic for the analytic-synthetic distinction because (in Quine's view) such a distinction supposes that some facts are true of language alone; but if the conceptual scheme is as open to revision as the synthetic content, then there can be no plausible distinction between framework and content, hence no distinction between the analytic and the synthetic. One upshot of confirmational holism is the underdetermination of theories: since all theories (and the propositions derived from them) about what exists are insufficiently determined by empirical data (sense-data, evidence), each theory with its interpretation of the evidence is equally justifiable or, alternatively, equally indeterminate. Thus, the Greeks' worldview of Homeric gods is as credible as the physicists' world of electromagnetic waves. Quine later argued for ontological relativity: that our ordinary talk of objects suffers from the same underdetermination and thus does not properly refer to objects. While underdetermination does not invalidate the principle of falsifiability first presented by Karl Popper, Popper himself acknowledged that continual ad hoc modification of a theory provides a means for a theory to avoid being falsified (cf. Lakatos). In this respect, the principle of parsimony, or Occam's Razor, plays a role: among multiple theories explaining the same phenomenon, the simplest theory (in this case, the one least dependent on continual ad hoc modification) is to be preferred.
Transcendental arguments
In recent philosophical literature, starting with Kant, and perhaps associated most popularly with Strawson, attempts have been made at transcendental arguments. This form of argument attempts to prove a proposition from the fact that said proposition is the precondition of some other well-established or accepted proposition(s). If one accepts the validity of this sort of argumentation, then these arguments may serve alongside the Razor, and may perhaps be more conclusive, as a heuristic for selecting between competing, underdetermined theories.
References
Curd, Martin; Cover, J.A. (eds.) (1998). Philosophy of Science, Section 3, "The Duhem-Quine Thesis and Underdetermination". W.W. Norton & Company.
Duhem, Pierre. The Aim and Structure of Physical Theory. Princeton, New Jersey: Princeton University Press, 1954.
W. V. Quine. "Two Dogmas of Empiricism." The Philosophical Review, 60 (1951), pp. 20–43. Online text [1]
W. V. Quine. Word and Object. Cambridge, Mass.: MIT Press, 1960.
W. V. Quine. "Ontological Relativity." In Ontological Relativity and Other Essays. New York: Columbia University Press, 1969, pp. 26–68.
D. Davidson. "On the Very Idea of a Conceptual Scheme." Proceedings of the American Philosophical Association, 17 (1973–74), pp. 5–20.
See also
Coherentism
Duhem–Quine thesis
No true Scotsman
Truth
Truth theory
Underdetermination
Theories of truth
Coherence theory of truth
Consensus theory of truth
Correspondence theory of truth
Deflationary theory of truth
Epistemic theories of truth
Pragmatic theory of truth
Redundancy theory of truth
Semantic theory of truth
Related topics
Belief
Epistemology
Information
Inquiry
Knowledge
Pragmatism
Pragmaticism
Pragmatic maxim
Reproducibility
Scientific method
Testability
Verificationism
References
[1] http://www.ditext.com/quine/quine.html
Duhem–Quine thesis
The Duhem–Quine thesis (also called the Duhem–Quine problem) is that it is impossible to test a scientific hypothesis in isolation, because an empirical test of the hypothesis requires one or more background assumptions (also called auxiliary assumptions or auxiliary hypotheses). The hypothesis in question is by itself incapable of making predictions; instead, the consequences of the hypothesis typically rest on background assumptions from which predictions are derived. This prevents a theory from being conclusively falsified through empirical means if the background assumptions are not proven (since background assumptions sometimes involve one or more scientific theories). For instance, to "disprove" the idea that the Earth is in motion, some people noted that birds did not get thrown off into the sky whenever they let go of a tree branch. That datum is no longer accepted as empirical evidence that the Earth is not moving, because we have adopted a different background system of physics that allows us to make different predictions. Although a bundle of theories (i.e. a theory and its background assumptions) as a whole can be tested against the empirical world and be falsified if it fails the test, the Duhem–Quine thesis says it is impossible to isolate a single hypothesis in the bundle. One solution to the dilemma thus facing scientists is that when we have rational reasons to accept the background assumptions as true (e.g. scientific theories via evidence), we will have rational (albeit nonconclusive) reasons for thinking that the theory tested is probably wrong if the empirical test fails.
Pierre Duhem
As popular as the Duhem–Quine thesis may be in philosophy of science, in reality Pierre Duhem and Willard Van Orman Quine stated very different theses. Duhem believed that only in the field of physics can a single individual hypothesis not be isolated for testing. He says in no uncertain terms that experimental theory in physics is not the same as in fields like physiology and certain branches of chemistry. Also, Duhem's conception of the "theoretical group" has its limits, since he states that not all concepts are connected to each other logically. He did not include a priori disciplines such as logic and mathematics within the theoretical groups in physics at all, since they cannot be tested.
References
Gillies, Donald [1]. "The Duhem Thesis and the Quine Thesis", in Martin Curd and J.A. Cover (eds.), Philosophy of Science: The Central Issues (New York: Norton, 1998), 302–319. This paper is extracted from Donald Gillies, Philosophy of Science in the Twentieth Century (Oxford: Blackwell Publishers, 1993). The third chapter of the Norton anthology also contains relevant excerpts from Duhem's work, The Aim and Structure of Physical Theory, and reprints Quine's "Two Dogmas of Empiricism", which are important works for Duhem's and Quine's thought on this topic.
References
[1] http://www.ucl.ac.uk/sts/gillies/index.htm
Emergent evolution
Emergent evolution is the hypothesis that, in the course of evolution, some entirely new properties, such as life and consciousness, appear at certain critical points, usually because of an unpredictable rearrangement of the already existing entities. The concept has influenced the development of systems theory and complexity theory.
Historical context
The word emergent was first used to describe the concept by George Lewes in volume two of his 1875 book Problems of Life and Mind (p. 412). Henri Bergson covered similar themes in the popular book Creative Evolution in 1907. It was further developed by Samuel Alexander in his Gifford Lectures at Glasgow during 1916–18 and published as Space, Time, and Deity (1920). The term emergent evolution was coined by C. Lloyd Morgan in his own Gifford Lectures of 1921–22 at St. Andrews and published as Emergent Evolution (1923). In an appendix to one lecture in his book, Morgan acknowledged the contributions of Roy Wood Sellars' Evolutionary Naturalism (1922).
See also
Evolutionary biology
References
George H. Lewes, Problems of Life and Mind, First Series: The Foundations of a Creed, vol. II (1875). University of Michigan Library: ISBN 1425555780
Henri Bergson, Creative Evolution (1911, English translation of L'Évolution créatrice). Dover Publications 1998: ISBN 0-486-40036-0
Samuel Alexander, Space, Time, and Deity (1920). Kessinger Publishing reprint: ISBN 0766187020; vol. 1 online version [1]
C. Lloyd Morgan, Emergent Evolution (1923). Henry Holt and Co., ISBN 0-404-60468-4; online version [2]
References
[1] http://www.giffordlectures.org/Browse.asp?PubID=TPSTAD&Cover=TRUE
[2] http://www.giffordlectures.org/Browse.asp?PubID=TPEMEV&Cover=TRUE
The Evolution of Cooperation
Operations Research
The idea that human behavior can be usefully analyzed mathematically gained great credibility following the application of operations research in World War II to improve military operations. One famous example involved how the Royal Air Force hunted submarines in the Bay of Biscay.[3] It had seemed to make sense to patrol the areas where submarines were most frequently seen. Then it was pointed out that "seeing the most submarines" was also a function of patrol density, i.e., of the number of eyes looking. Making an allowance for patrol density showed that patrols were more efficient (that is, found more submarines per patrol) in other areas. Making appropriate adjustments increased the overall effectiveness.
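The patrol-density correction amounts to comparing rates rather than raw counts. A small sketch with purely hypothetical numbers (not historical data):

```python
# Hypothetical sighting and patrol counts per area; the figures are
# illustrative, not drawn from the Bay of Biscay records.
areas = {
    "A": {"sightings": 40, "patrols": 100},
    "B": {"sightings": 15, "patrols": 20},
}

# Ranking by raw sightings favours the heavily patrolled area A...
by_raw = max(areas, key=lambda a: areas[a]["sightings"])

# ...but normalising by patrol density (sightings per patrol) shows
# that each patrol in area B is actually more productive.
by_rate = max(areas, key=lambda a: areas[a]["sightings"] / areas[a]["patrols"])

print(by_raw, by_rate)  # area A leads on raw counts, area B on rate
```

The same few lines of arithmetic capture the insight: 40/100 = 0.4 sightings per patrol in A versus 15/20 = 0.75 in B.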
Game Theory
Accounts of the success of operations research during the war, publication in 1944 of John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior (Von Neumann & Morgenstern 1944) on the use of game theory for developing and analyzing optimal strategies for military and other uses, and publication of John Williams's The Compleat Strategyst, a popular exposition of game theory,[4] led to a greater appreciation of mathematical analysis of human behavior.[5] But game theory had a nagging problem: it could not find a strategy for a simple game called the "Prisoner's Dilemma" (PD), in which two players have the option to cooperate for mutual gain, but each also runs the risk of being suckered.
Prisoner's Dilemma
The Prisoner's Dilemma game[6] (invented around 1950 by Merrill Flood and Melvin Dresher[7]) takes its name from the following scenario: you and a criminal associate have been busted. Fortunately for you, most of the evidence was shredded, so you are facing only a year in prison. But the prosecutor wants to nail someone, so he offers you a deal: if you squeal on your associate (which will result in his getting a five-year stretch) the prosecutor will see that six months is taken off of your sentence. Which sounds good, until you learn your associate is being offered the same deal, which would get you five years. So what do you do? The best that you and your associate can do together is to not squeal: that is, to cooperate (with each other, not the prosecutor!) in a mutual bond of silence, and do your year. But wait: if your associate cooperates (that sucker!), can you do better by squealing ("defecting") to get that six-month reduction? It's tempting, but then he's also tempted. And if you both squeal, oh no, it's four and a half years each. So perhaps you should cooperate. But wait, that's being a sucker yourself, as your associate will undoubtedly defect, and you won't even get the six months off. So what is the best strategy to minimize your incarceration (aside from going straight in the first place)?
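The sentences in the story can be tabulated to expose the dilemma's structure. The sketch below uses the numbers from the scenario above (one year base, six months off for squealing on a silent partner, five years for being squealed on, four and a half each for mutual squealing):

```python
# Sentence in years for (my_move, partner_move); "C" = stay silent
# (cooperate with your partner), "D" = squeal (defect).
years = {
    ("C", "C"): 1.0,   # mutual silence: one year each
    ("C", "D"): 5.0,   # you stay silent, partner squeals: five years
    ("D", "C"): 0.5,   # you squeal on a silent partner: six months
    ("D", "D"): 4.5,   # both squeal: four and a half years each
}

# Whatever the partner does, squealing strictly shortens your own sentence...
for partner in ("C", "D"):
    assert years[("D", partner)] < years[("C", partner)]

# ...yet mutual silence beats the "rational" outcome of mutual defection.
assert years[("C", "C")] < years[("D", "D")]
print("defection dominates, but mutual cooperation is better for both")
```

This is the dilemma in miniature: defection is the dominant move for each player individually, while mutual cooperation is better for the pair.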
To cooperate, or not cooperate? This simple question (and the implicit question of whether to trust, or not), expressed in an extremely simple game, is a crucial issue across a broad range of life. Why shouldn't a shark eat the little fish that has just cleaned it of parasites: in any given exchange, who would know? Fig wasps collectively limit the eggs they lay in fig trees (otherwise, the trees would suffer). But why shouldn't any one fig wasp cheat and leave a few more eggs than her rivals? At the level of human society, why shouldn't each of the villagers that share a common but finite resource try to exploit it more than the others?[8] At the core of these and myriad other examples is a conflict formally equivalent to the Prisoner's Dilemma. Yet sharks, fig wasps, and villagers all cooperate. It has been a vexatious problem in evolutionary studies to explain how such cooperation should evolve, let alone persist, in a world of self-maximizing egoists.
Darwinian context
Charles Darwin's theory of how evolution works ("By Means of Natural Selection"[9] ) is explicitly competitive ("survival of the fittest"), Malthusian ("struggle for existence"), even gladiatorial ("nature, red in tooth and claw"). Species are pitted against species for shared resources, similar species with similar needs and niches even more so, and individuals within species most of all.[10] All this comes down to one factor: out-competing all rivals and predators in producing progeny. Darwin's explanation of how preferential survival of the slightest benefits can lead to advanced forms is the most important explanatory principle in biology, and extremely powerful in many other fields. Such success has reinforced notions that life is in all respects a war of each against all, where every individual has to look out for himself, that your gain is my loss. In such a struggle for existence altruism (voluntarily yielding a benefit to a non-relative) and even cooperation (working with another for a mutual benefit) seem so antithetical to self-interest as to be the very kind of behavior that should be selected against. Yet cooperation and seemingly even altruism have evolved and persist, and naturalists have been hard pressed to explain why.
Social Darwinism
The popularity of the evolution of cooperation the reason it is not an obscure technical issue of interest to only a small number of specialists is in part because it mirrors a larger issue where the realms of political philosophy, ethics, and biology intersect: the ancient issue of individual interests versus group interests. On one hand, the so-called "Social Darwinians" (roughly, those who would use the "survival of the fittest" of Darwinian evolution to justify the cutthroat competitiveness of laissez-faire capitalism[11] ) declaim that the world is an inherently competitive "dog eat dog" jungle, where every individual has to look out for himself. The philosopher Ayn Rand damned "altruism" and declared selfishness a virtue.[12] The Social Darwinists' view is derived from Charles Darwin's interpretation of evolution by natural selection, which is explicitly competitive ("survival of the fittest"), Malthusian ("struggle for existence"), even gladiatorial ("red in tooth and claw"), and permeated by the Victorian laissez-faire ethos of Darwin and his disciples (such as T. H. Huxley and Herbert Spencer). What they read into the theory was then read out by Social Darwinians as scientific justification for their social and economic views (such as poverty being a natural condition and social reform an unnatural meddling).[13] Such views of evolution, competition, and the survival of the fittest are explicit in the ethos of modern capitalism, as epitomized by industrialist Andrew Carnegie in The Gospel of Wealth: [W]hile the law [of competition] may be sometimes hard for the individual, it is best for the race, because it ensures the survival of the fittest in every department. 
We accept and welcome, therefore, as conditions to which we must accommodate ourselves, great inequality of environment; the concentration of business, industrial and commercial, in the hands of the few; and the law of competition between these, as being not only beneficial, but essential to the future progress of the race. (Carnegie 1900)
While the validity of extrapolating moral and political views from science is questionable, the significance of such views in modern society is undoubtable.
Modern developments
Darwin's explanation of how evolution works is quite simple, but the implications of how it might explain complex phenomena are not at all obvious; it has taken over a century to elaborate (see modern synthesis).[17] Explaining how altruism (which by definition reduces personal fitness) can arise by natural selection is a particular problem, and the central theoretical problem of sociobiology.[18] A possible explanation of altruism is provided by the theory of group selection (first suggested by Darwin himself while grappling with the issue of social insects[19]), which argues that natural selection can act on groups: groups that are more successful (for any reason, including learned behaviors) will benefit the individuals of the group, even if they are not related. It has had a powerful appeal, but has not been fully persuasive, in part because of difficulties regarding cheaters that participate in the group without contributing.[20] Another explanation is provided by the genetic kinship theory of William D. Hamilton:[21] if a gene causes an individual to help other individuals that carry copies of that gene, then the gene has a net benefit even with the sacrifice of a few individuals. The classic example is the social insects, where the workers (which are sterile, and therefore incapable of passing on their genes) benefit the queen, who is essentially passing on copies of "their" genes. This is further elaborated in the "selfish gene" theory of Richard Dawkins, that the unit of evolution is not the individual organism, but the gene.[2] (As stated by Wilson: "the organism is only DNA's way of making more DNA."[18]) However, kin selection works only where the individuals involved are closely related; it fails to explain the presence of altruism and cooperation between unrelated individuals, particularly across species.
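Hamilton's kinship argument is usually formalized as Hamilton's rule, r·B > C: helping is favoured when relatedness r times the recipient's benefit B exceeds the helper's cost C. The rule comes from the cited Hamilton 1964 work; the numbers in this sketch are purely illustrative:

```python
# Hamilton's rule: an allele for helping spreads when r * B > C, where
# r is genetic relatedness, B the benefit to the recipient, and C the
# cost to the helper (both in units of reproductive fitness).
def helping_favoured(r: float, benefit: float, cost: float) -> bool:
    return r * benefit > cost

# Full siblings (r = 0.5): helping pays when the benefit exceeds twice the cost.
assert helping_favoured(0.5, benefit=3.0, cost=1.0)
assert not helping_favoured(0.5, benefit=2.0, cost=1.0)

# Unrelated individuals (r = 0): no benefit is ever large enough, which is
# why kin selection cannot explain cooperation between strangers or species.
assert not helping_favoured(0.0, benefit=100.0, cost=1.0)
print("kin selection favours help only among relatives")
```

The r = 0 case makes the limitation stated above concrete: some other mechanism, such as Trivers' reciprocal altruism, is needed for unrelated partners.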
In a 1971 paper[22] Robert Trivers demonstrated how reciprocal altruism can evolve between unrelated individuals, even between individuals of entirely different species. And the relationship of the individuals involved is exactly analogous to the situation in a certain form of the Prisoner's Dilemma.[23] The key is that in the iterated Prisoner's Dilemma, or IPD, both parties can benefit from the exchange of many seemingly altruistic acts. As Trivers says, it "take[s] the altruism out of altruism."[24] The Randian premise that self-interest is paramount is largely unchallenged, but turned on its head by recognition of a broader, more profound view of what constitutes self-interest. It does not matter why the individuals cooperate. The individuals may be prompted to the exchange of "altruistic" acts by entirely different genes, or no genes in particular, but both individuals (and their genomes) can benefit simply on the basis of a shared exchange. In particular, "the benefits of human altruism are to be seen as coming directly from reciprocity not indirectly through non-altruistic group benefits".[25] Trivers' theory is very powerful. Not only can it replace group selection, it also predicts various observed behaviors, including moralistic aggression,[26] gratitude and sympathy, guilt and reparative altruism,[27] and the development of abilities to detect and discriminate against subtle cheaters. The benefits of such reciprocal altruism were dramatically demonstrated by a pair of tournaments held by Robert Axelrod around 1980.
Axelrod's Tournaments
Axelrod initially solicited strategies from other game theorists to compete in the first tournament. Each strategy was paired with each other strategy for 200 iterations of a Prisoner's Dilemma game, and scored on the total points accumulated through the tournament. The winner was a very simple strategy submitted by Anatol Rapoport called "TIT FOR TAT" (TFT) that cooperates on the first move, and subsequently echoes (reciprocates) what the other player did on the previous move. The results of the first tournament were analyzed and published, and a second tournament held to see if anyone could find a better strategy. TIT FOR TAT won again. Axelrod analyzed the results, and made some interesting discoveries about the nature of cooperation, which he describes in his book.[28] In both actual tournaments and various replays the best-performing strategies were "nice":[29] that is, they were never the first to defect. Many of the competitors went to great lengths to gain an advantage over the "nice" (and usually simpler) strategies, but to no avail: tricky strategies fighting for a few points generally could not do as well as nice
strategies working together. TFT (and other "nice" strategies generally) "won, not by doing better than the other player, but by eliciting cooperation [and] by promoting the mutual interest rather than by exploiting the other's weakness."[30] Being "nice" can be beneficial, but it can also lead to being suckered. To obtain the benefit (or avoid exploitation) it is necessary to be provocable, both to retaliation and to forgiveness. When the other player defects, a nice strategy must immediately be provoked into retaliatory defection.[31] The same goes for forgiveness: return to cooperation as soon as the other player does. Overdoing the punishment risks escalation, and can lead to an "unending echo of alternating defections" that depresses the scores of both players.[32] Most of the games that game theory had heretofore investigated are "zero-sum"; that is, the total rewards are fixed, and a player does well only at the expense of other players. But real life is not zero-sum. Our best prospects are usually in cooperative efforts. In fact, TFT cannot score higher than its partner; at best it can only do "as good as". Yet it won the tournaments by consistently scoring a strong second place with a variety of partners.[33] Axelrod summarizes this as don't be envious;[34] in other words, don't strive for a payoff greater than the other player's.[35] In any IPD game there is a certain maximum score each player can get by always cooperating. But some strategies try to find ways of getting a little more with an occasional defection (exploitation). This can work against some strategies that are less provocable or more forgiving than TIT FOR TAT, but generally they do poorly.
"A common problem with these rules is that they used complex methods of making inferences about the other player ['s strategy], and these inferences were wrong."[36] Against TFT (and "nice" strategies generally) one can do no better than to simply cooperate.[37] Axelrod calls this clarity. Or: don't be too clever.[38] The success of any strategy depends on the nature of the particular strategies it encounters, which depends on the composition of the overall population. To better model the effects of reproductive success Axelrod also ran an "ecological" tournament, where the prevalence of each type of strategy in each round was determined by that strategy's success in the previous round. The competition in each round becomes stronger as weaker performers are reduced and eliminated. The results were amazing: a handful of strategies, all "nice", came to dominate the field.[39] In a sea of non-nice strategies the "nice" strategies (provided they were also provocable!) did well enough with each other to offset the occasional exploitation. As cooperation became general the non-provocable strategies were exploited and eventually eliminated, whereupon the exploitive (non-cooperating) strategies were out-performed by the cooperative strategies. In summary, success in an evolutionary "game" correlated with the following characteristics:
Be nice: cooperate, never be the first to defect.
Be provocable: return defection for defection, cooperation for cooperation.
Don't be envious: be fair with your partner.
Don't be too clever: or, don't try to be tricky.
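The round-robin mechanics described above are easy to reproduce in miniature. The sketch below uses the conventional payoff values T=5, R=3, P=1, S=0 (which satisfy the ordering given in note [6]) and 200 moves per pairing, including self-play; the four entrants are illustrative stand-ins, not the actual tournament submissions:

```python
# Payoffs for (my_move, other_move) as (my_points, other_points).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own, other):
    return "C" if not other else other[-1]   # cooperate first, then echo

def all_d(own, other):
    return "D"                               # always defect

def all_c(own, other):
    return "C"                               # always cooperate

def grudger(own, other):
    return "D" if "D" in other else "C"      # nice, but never forgives

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1); h2.append(m2)
    return score1, score2

strategies = {"TFT": tit_for_tat, "ALL D": all_d,
              "ALL C": all_c, "GRUDGER": grudger}
totals = {name: 0 for name in strategies}
names = list(strategies)
for i, a in enumerate(names):
    for b in names[i:]:
        sa, sb = play(strategies[a], strategies[b])
        totals[a] += sa
        if a != b:
            totals[b] += sb

print(sorted(totals.items(), key=lambda kv: -kv[1]))
```

In this tiny field the "nice" strategies all out-total ALL D, even though TFT never beats any single partner head-to-head (against ALL D it loses 199 to 204), which mirrors the tournament results described above.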
[...] it by continuing the conditions that maintain it. This implies two requirements for the players, aside from whatever strategy they may adopt. First, they must be able to recognize other players, to avoid exploitation by cheaters. Second, they must be able to track their previous history with any given player, in order to be responsive to that player's strategy.[42] Even when the discount parameter is high enough to permit reciprocal cooperation there is still a question of whether and how cooperation might start. One of Axelrod's findings is that when the existing population never offers cooperation nor reciprocates it (the case of ALL D) then no nice strategy can get established by isolated individuals; cooperation is strictly a sucker bet (the "futility of isolated revolt"[43]). But another finding of great significance is that clusters of nice strategies can get established. Even a small group of individuals with nice strategies, with infrequent interactions, can do well enough on those interactions to make up for the low level of exploitation from non-nice strategies.[44]
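The cluster finding can be illustrated numerically. With the conventional payoffs (R=3, P=1, S=0, T=5) over 200-move games, the per-pairing scores below follow directly; the cluster fraction p is a hypothetical parameter for the share of a newcomer's interactions that occur with fellow cluster members:

```python
# Per-game scores over 200 moves with payoffs R=3, P=1, S=0, T=5.
TFT_VS_TFT = 600    # mutual cooperation throughout: 3 * 200
TFT_VS_ALLD = 199   # suckered once, then mutual defection: 0 + 1 * 199
ALLD_VS_ALLD = 200  # mutual defection throughout: 1 * 200

def tft_average(p):
    """Average score of a clustered TIT FOR TAT player whose fraction p
    of interactions are with fellow cluster members, the rest with ALL D."""
    return p * TFT_VS_TFT + (1 - p) * TFT_VS_ALLD

# An isolated TFT (p = 0) scores below the ALL D natives: revolt is futile.
assert tft_average(0.0) < ALLD_VS_ALLD

# But a tiny cluster (here, 1% of interactions among themselves) already
# outscores the natives, so clusters of nice strategies can invade.
assert tft_average(0.01) > ALLD_VS_ALLD
print("isolated TFT fails; a 1% cluster succeeds")
```

Under these numbers the break-even cluster fraction is only 1/401, a concrete illustration of how little clustering nice strategies need in order to gain a foothold.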
Subsequent work
In 1984 Axelrod estimated that there were "hundreds of articles on the Prisoner's Dilemma cited in Psychological Abstracts".[45] Since then he has estimated that citations to The Evolution of Cooperation alone "are now growing at the rate of over 300 per year".[46] To fully review this literature is infeasible. What follows are therefore only a few selected highlights. Axelrod has a subsequent book, The Complexity of Cooperation,[47] which he considers a sequel to The Evolution of Cooperation. Other work on the evolution of cooperation has expanded to cover prosocial behavior generally,[48] and in religion,[49] the promotion of conformity,[50] other mechanisms for generating cooperation,[51] the IPD under different conditions and assumptions,[52] and the use of other games such as the Public Goods and Ultimatum games to explore deep-seated notions of fairness and fair play.[53] It has also been used to challenge the rational and self-regarding "economic man" model of economics,[54] and as a basis for replacing Darwinian sexual selection theory with a theory of social selection.[55] Nice strategies are better able to invade if they have social structures or other means of increasing their interactions. Axelrod discusses this in chapter 8; in a later paper he and Rick Riolo and Michael Cohen[56] use computer simulations to show cooperation rising among agents who have negligible chance of future encounters but can recognize similarity of an arbitrary characteristic (such as a green beard). When an IPD tournament introduces noise (errors or misunderstandings) TFT strategies can get trapped into a long string of retaliatory defections, thereby depressing their score. 
TFT also tolerates "ALL C" (always cooperate) strategies, which then give an opening to exploiters.[57] In 1992 Martin Nowak and Karl Sigmund demonstrated a strategy called Pavlov (or "win-stay, lose-shift") that does better in these circumstances.[58] Pavlov looks at its own prior move as well as the other player's move. If the payoff was R or P (see "Prisoner's Dilemma", above) it cooperates; if S or T it defects. In a 2006 paper Nowak listed five mechanisms by which natural selection can lead to cooperation.[51] In addition to kin selection and direct reciprocity, he shows that: Indirect reciprocity is based on knowing the other player's reputation, which is the player's history with other players. Cooperation depends on a reliable history being projected from past partners to future partners. Network reciprocity relies on geographical or social factors to increase the interactions with nearer neighbors; it is essentially a virtual group. Group selection[59] assumes that groups with cooperators (even altruists) will be more successful as a whole, and this will tend to benefit all members. The payoffs in the Prisoner's Dilemma game are fixed, but in real life defectors are often punished by cooperators. Where punishment is costly there is a second-order dilemma amongst cooperators between those who pay the cost of enforcement and those who do not.[60] Other work has shown that while individuals given a choice between joining a
group that punishes free-riders and one that does not will initially prefer the sanction-free group, after several rounds they will join the sanctioning group, seeing that sanctions secure a better payoff.[61] There is also the intriguing paper "The Coevolution of Parochial Altruism and War" by Jung-Kyoo Choi and Samuel Bowles. From their summary: Altruism (benefiting fellow group members at a cost to oneself) and parochialism (hostility towards individuals not of one's own ethnic, racial, or other group) are common human behaviors. The intersection of the two, which we term "parochial altruism", is puzzling from an evolutionary perspective because altruistic or parochial behavior reduces one's payoffs by comparison to what one would gain from eschewing these behaviors. But parochial altruism could have evolved if parochialism promoted intergroup hostilities and the combination of altruism and parochialism contributed to success in these conflicts.... [Neither] would have been viable singly, but by promoting group conflict they could have evolved jointly. (Choi & Bowles 2007) They do not claim that humans have actually evolved in this way, but that computer simulations show how war could be promoted by the interaction of these behaviors.
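The contrast between TFT and Pavlov under noise can be sketched directly from the descriptions above. The simulation below (an illustration, not Nowak and Sigmund's actual model) injects a single accidental defection into self-play:

```python
# Single-player payoffs for (my_move, other_move): R=3, S=0, T=5, P=1.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tft(my_hist, other_hist):
    return "C" if not other_hist else other_hist[-1]

def pavlov(my_hist, other_hist):
    # Win-stay, lose-shift as described above: cooperate after a payoff
    # of R or P, defect after S or T; cooperate on the first move.
    if not my_hist:
        return "C"
    last_payoff = PAYOFF[(my_hist[-1], other_hist[-1])]
    return "C" if last_payoff in (3, 1) else "D"

def run(strategy, rounds=10, error_round=2):
    """Self-play with one forced defection (the 'noise') at error_round."""
    h1, h2 = [], []
    for r in range(rounds):
        m1, m2 = strategy(h1, h2), strategy(h2, h1)
        if r == error_round:
            m1 = "D"   # one accidental defection by player 1
        h1.append(m1); h2.append(m2)
    return "".join(h1), "".join(h2)

print(run(tft))     # the error echoes forever: alternating defections
print(run(pavlov))  # two rough rounds, then mutual cooperation resumes
```

Running it shows TFT locked into the "unending echo of alternating defections", while Pavlov absorbs the error in two rounds and returns to cooperation.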
Conclusion
When Richard Dawkins set out to "examine the biology of selfishness and altruism" in The Selfish Gene, he reinterpreted the basis of evolution, and therefore of altruism. He was "not advocating a morality based on evolution",[62] and even felt that "we must teach our children altruism, for we cannot expect it to be part of their biological nature."[63] But John Maynard Smith[64] was showing that behavior could be subject to evolution, Robert Trivers had shown that reciprocal altruism is strongly favored by natural selection to lead to complex systems of altruistic behavior (supporting Kropotkin's argument that cooperation is as much a factor of evolution as competition[65]), and Axelrod's dramatic results showed that in a very simple game the conditions for survival (be "nice", be provocable, promote the mutual interest) seem to be the essence of morality. While this does not yet amount to a science of morality, the game-theoretic approach has clarified the conditions required for the evolution and persistence of cooperation, and shown how Darwinian natural selection can lead to complex behavior, including notions of morality, fairness, and justice. It has been shown that the nature of self-interest is more profound than previously considered, and that behavior that seems altruistic may, in a broader view, be individually beneficial. Extensions of this work to morality[66] and the social contract[67] may yet resolve the old issue of individual interests versus group interests.
Recommended Reading
Axelrod, Robert; Hamilton, William D. (27 March 1981), "The Evolution of Cooperation" [68], Science 211: 1390–96, doi:10.1126/science.7466396
Axelrod, Robert (1984), The Evolution of Cooperation, Basic Books, ISBN 0-465-02122-2
Axelrod, Robert (2006), The Evolution of Cooperation (Revised ed.), Perseus Books Group, ISBN 0-465-00564-0
Axelrod, Robert (1997), The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration, Princeton University Press, ISBN 0-691-01567-8
Dawkins, Richard ([1976] 1989), The Selfish Gene (2nd ed.), Oxford Univ. Press, ISBN 0-19-286092-5
Gould, Stephen Jay (June 1997), "Kropotkin was no crackpot" [69], Natural History 106: 12–21
Ridley, Matt (1996), The Origins of Virtue, Viking (Penguin Books), ISBN 0-670-86357-2
Sigmund, Karl; Fehr, Ernest; Nowak, Martin A. (January 2002), "The Economics of Fair Play" [70], Scientific American: 82–87
Trivers, Robert L. (March 1971), "The Evolution of Reciprocal Altruism", Quarterly Review of Biology 46: 35–57, doi:10.1086/406755
Vogel, Gretchen (20 Feb. 2004), "News Focus: The Evolution of the Golden Rule" [71], Science 303 (5661): 1128–31, doi:10.1126/science.303.5661.1128, PMID 14976292
Notes
[1] Axelrod's book was summarized in Douglas Hofstadter's May 1983 "Metamagical Themas" column in Scientific American (Hofstadter 1983) (reprinted in his book (Hofstadter 1985); see also Richard Dawkin's summary in the second edition of The Selfish Gene ((Dawkins 1989), ch. 12.) [2] Dawkins 1989. [3] Morse & Kimball1951, 1956 [4] Williams1954, 1966 [5] See Poundstone (1991) for a good overview of the development of game theory. [6] Technically, the Prisoner's Dilemma is any two-person "game" where the payoffs are ranked in a certain way. If the payoff ("reward") for mutual cooperation is R, for mutual defection is P, the sucker gets only S, and the temptation payoff (provided the other player is suckered into cooperating) is T, then the payoffs need to be ordered T > R > P > S, and satisfy R > (T+S)/2. (Axelrod & 1984 810, 206207). [7] Axelrod 1984, p. 216 n. 2; Poundstone 1992. [8] See Hardin (1968), "The Tragedy of the Commons". [9] "By Means of Natural Selection" being the subtitle of his work, On the Origin of Species. [10] Darwin 1859, pp 75, 76, 320. [11] Bowler 1984, pp.9499, 26970. [12] Rand 1961. [13] Bowler 1984, pp.9499 [14] See Gauthier 1970 for a lively debate on morality and self-interest. Aristotle's comment on the effectivness of philosophic argument: "For the many yield to complusion more than to argument." (Nichomachean Ethics, Book X, 1180a15, Irwin translation) [15] Kropotkin 1902, but originally published in the magazine Nineteenth Century starting in 1890. [16] Axelrod 1984, pp.90; Trivers 1971. [17] See Bowler (1984) generally. [18] Wilson 1975. [19] Darwin 1859, p.237. [20] Axelrod & Hamilton 1981; Trivers 1971, pp.44,48; Bowler 1984, p.312; Dawkins 1989, pp.710, 287, ch.7 generally. [21] Hamilton 1964. [22] Trivers 1971. [23] Trivers 1971, pp.3839. [24] Trivers 1971, p.35. [25] Trivers 1971, p.47. More pointedly, Trivers also said (p.48):"No concept of group advantage is necessary to explain the function of human altruistic behavior." 
[26] To deter cheaters from exploiting altruists. And "in extreme cases, perhaps, to select directly against the unreciprocating individual by injuring, killing, or exiling him." (Trivers)
[27] Analogous to the situation in the IPD where, having once defected, a player voluntarily elects to cooperate, even in anticipation of being suckered, in order to return to a state of mutual cooperation. As Trivers says (p. 50): "It seems plausible ... that the emotion of guilt has been selected for in humans partly in order to motivate the cheater to compensate his misdeed and to behave reciprocally in the future...."
[28] Axelrod 1984.
[29] Axelrod 1984, p. 113.
[30] Axelrod 1984, p. 130.
[31] Axelrod 1984, pp. 62, 211.
[32] Axelrod 1984, p. 186.
[33] Axelrod 1984, p. 112.
[34] Axelrod 1984, pp. 110-113.
[35] Axelrod 1984, p. 25.
[36] Axelrod 1984, p. 120.
[37] Axelrod 1984, pp. 47, 118.
[38] Axelrod 1984, pp. 120+.
[39] Axelrod 1984, pp. 48-53.
[40] Axelrod 1984, p. 13.
[41] Axelrod 1984, pp. 18, 174.
[42] Axelrod 1984, p. 174.
[43] Axelrod 1984, p. 150.
[44] Axelrod 1984, pp. 63-68, 99.
[45] Axelrod 1984, p. 28.
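The payoff condition stated in note [6] is easy to check mechanically. The following is a minimal sketch (the function name is ours, not Axelrod's); it verifies both the strict ordering T > R > P > S and the requirement R > (T+S)/2, which ensures mutual cooperation beats taking turns exploiting each other:

```python
def is_prisoners_dilemma(T, R, P, S):
    """Return True if the payoffs form a Prisoner's Dilemma as in note [6]:
    temptation > reward > punishment > sucker's payoff, and mutual
    cooperation must outscore alternating exploitation: R > (T + S) / 2."""
    return T > R > P > S and R > (T + S) / 2

# Axelrod's canonical tournament payoffs: T=5, R=3, P=1, S=0.
print(is_prisoners_dilemma(T=5, R=3, P=1, S=0))  # True, since 3 > (5+0)/2
```

With T raised to 6 the ordering still holds, but R > (T+S)/2 fails (3 is not greater than 3), so the game no longer qualifies.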
References
Most of these references are to the scientific literature, to establish the authority of various points in the article. A few references of lesser authority but greater accessibility are also included.
Axelrod, Robert (1984), The Evolution of Cooperation, Basic Books, ISBN 0-465-02122-2
Axelrod, Robert (1997), The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration, Princeton University Press
Axelrod, Robert (July 2000), "On Six Advances in Cooperation Theory" (http://www-personal.umich.edu/~axe/research_papers.html), Analyse & Kritik 22: 130-151
Axelrod, Robert (2006), The Evolution of Cooperation (Revised ed.), Basic Books
Axelrod, Robert; D'Ambrosio, Lisa (1996), Annotated Bibliography on the Evolution of Cooperation (http://www.cscs.umich.edu/research/Publications)
Axelrod, Robert; Dion, Douglas (9 Dec. 1988), "The Further Evolution of Cooperation" (http://www.sciencemag.org/cgi/reprint/242/4884/1385.pdf), Science 242 (4884): 1385-90, doi:10.1126/science.242.4884.1385, PMID 17802133
Axelrod, Robert; Hamilton, William D. (27 March 1981), "The Evolution of Cooperation" (http://www.sciencemag.org/cgi/reprint/211/4489/1390.pdf), Science 211: 1390-96, doi:10.1126/science.7466396
Binmore, Kenneth G. (1994), Game Theory and the Social Contract: Vol. 1, Playing Fair, MIT Press
Binmore, Kenneth G. (1998a), Game Theory and the Social Contract: Vol. 2, Just Playing, MIT Press
Binmore, Kenneth G. (1998b), Review of 'The Complexity of Cooperation' (http://jasss.soc.surrey.ac.uk/1/1/review1.html)
Binmore, Kenneth G. (2004), "Reciprocity and the social contract" (http://mydocs.strands.de/MyDocs/06037/06037.pdf), Politics, Philosophy & Economics 3: 56, doi:10.1177/1470594X04039981
Bowler, Peter J. (1984), Evolution: The History of an Idea, Univ. of California Press, ISBN 0-520-04880-3
Bowles, Samuel (8 Dec. 2006), "Group Competition, Reproductive Leveling, and the Evolution of Human Altruism" (http://www.santafe.edu/~bowles/GroupCompetition), Science 314 (5805): 1569-72, doi:10.1126/science.1134829, PMID 17158320
Bowles, Samuel; Choi, Jung-Koo; Hopfensitz, Astrid (2003), "The co-evolution of individual behaviors and social institutions" (http://www.santafe.edu/~jkchoi/jtb223_2.pdf), J. of Theoretical Biology 223: 135-147, doi:10.1016/S0022-5193(03)00060-2
Boyd, Robert (8 Dec. 2006), "The Puzzle of Human Sociality" (http://www.sciencemag.org/cgi/reprint/314/5805/1555.pdf), Science 314 (5805): 1555-56, doi:10.1126/science.1136841, PMID 17158313
Boyd, Robert; Lorberbaum, Jeffrey P. (7 May 1987), "No pure strategy is evolutionarily stable in the repeated Prisoner's Dilemma Game" (http://www.sscnet.ucla.edu/anthro/faculty/boyd/Publications.htm), Nature 327: 589, doi:10.1038/327058a0
Boyd, Robert; Matthew, Sarah (29 June 2007), "A Narrow Road to Cooperation" (http://www.sciencemag.org/cgi/reprint/316/5833/1858.pdf), Science 316: 1858-59, doi:10.1126/science.1144339
Brecht, Bertolt (1947), The Good Woman of Setzuan
Camerer, Colin F.; Fehr, Ernest (6 Jan. 2006), "When Does 'Economic Man' Dominate Social Behavior?" (http://www.sciencemag.org/cgi/reprint/311/5757/47.pdf), Science 311 (5757): 47-52, doi:10.1126/science.1110600, PMID 16400140
Carnegie, Andrew (1900), The Gospel of Wealth, and Other Timely Essays
Choi, Jung-Kyoo; Bowles, Samuel (26 Oct. 2007), "The Coevolution of Parochial Altruism and War" (http://www.sciencemag.org/cgi/reprint/318/5850/636.pdf), Science 318 (5850): 636-40, doi:10.1126/science.1144237, PMID 17962562
Darwin, Charles (1859), On the Origin of Species
Dawkins, Richard ([1976] 1989), The Selfish Gene (2nd ed.), Oxford Univ. Press, ISBN 0-19-286092-5
Gauthier, David P. (1986), Morals by agreement, Oxford Univ. Press
Gauthier, David P., ed. (1970), Morality and Rational Self-Interest, Prentice-Hall
Gould, Stephen Jay (June 1997), "Kropotkin was no crackpot" (http://www.marxists.org/subject/science/essays/kropotkin.htm), Natural History 106: 12-21
Gürerk, Özgür; Irlenbusch, Bernd; Rockenbach, Bettina (7 April 2006), "The Competitive Advantage of Sanctioning Institutions" (http://www.sciencemag.org/cgi/reprint/312/5770/108.pdf), Science 312 (5770): 108-11, doi:10.1126/science.1123633, PMID 16601192
Hamilton, William D. (1963), "The Evolution of Altruistic Behavior" (http://westgroup.biology.ed.ac.uk/teach/social/Hamilton_63.pdf), American Naturalist 97: 354-56, doi:10.1086/497114
Hamilton, William D. (1964), "The Genetical Evolution of Social Behavior", J. of Theoretical Biology 7: 1-16, 17-52, doi:10.1016/0022-5193(64)90038-4
Hardin, Garrett (13 December 1968), "The Tragedy of the Commons" (http://www.ldeo.columbia.edu/edu/dees/V1003/lectures/population/Tragedy of the Commons.pdf), Science 162 (3859): 1243-1248, doi:10.1126/science.162.3859.1243, PMID 17756331
Hauert, Christoph; Traulsen, Arne; Brandt, Hannelore; Nowak, Martin A.; Sigmund, Karl (29 June 2007), "Via Freedom to Coercion: The Emergence of Costly Punishment" (http://www.sciencemag.org/cgi/reprint/316/5833/1905.pdf), Science 316 (5833): 1905-07, doi:10.1126/science.1141588, PMID 17600218, PMC 2430058
Henrich, Joseph (7 April 2006), "Cooperation, Punishment, and the Evolution of Human Institutions" (http://www.sciencemag.org/cgi/reprint/312/5770/60.pdf), Science 312 (5770): 60-61, doi:10.1126/science.1126398, PMID 16601179
Henrich, Joseph; et al. (23 June 2007), "Costly Punishment Across Human Societies" (http://www.sciencemag.org/cgi/reprint/312/5781/1767.pdf), Science 312 (5781): 1767-70, doi:10.1126/science.1127333, PMID 16794075
Hobbes, Thomas ([1651] 1958), Leviathan, Bobbs-Merrill [and others]
Hofstadter, Douglas R. (May 1983), "Metamagical Themas: Computer Tournaments of the Prisoner's Dilemma Suggest How Cooperation Evolves", Scientific American 248: 16-26
Hofstadter, Douglas R. (1985), "The Prisoner's Dilemma Computer Tournaments and the Evolution of Cooperation", Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books, pp. 715-730, ISBN 0-465-04540-5
Kavka, Gregory S. (1986), Hobbesian moral and political theory, Princeton Univ. Press
Kropotkin, Petr (1902), Mutual Aid: A Factor in Evolution
Maynard Smith, John (1976), "Evolution and the Theory of Games", American Scientist 61: 41-45
Maynard Smith, John (September 1978), "The Evolution of Behavior", Scientific American 239: 176-92
Maynard Smith, John (1982), Evolution and the Theory of Games, Cambridge Univ. Press
Melville, Herman ([1851] 1977), Moby-Dick, Bobbs-Merrill [and others]
Milinski, Manfred (1 July 1993), "News and Views: Cooperation Wins and Stays", Nature 364 (6432): 12-13, doi:10.1038/364012a0, PMID 8316291
Morse, Phillip M.; Kimball, George E. (1951), Methods of Operations Research
Morse, Phillip M.; Kimball, George E. (1956), "How to Hunt a Submarine", in Newman, James R., The World of Mathematics, 4, Simon and Schuster, pp. 2160-79
Norenzayan, Ara; Shariff, Azim F. (3 Oct. 2008), "The Origin and Evolution of Religious Prosociality" (http://www.sciencemag.org/cgi/reprint/322/5898/58.pdf), Science 322 (5898): 58-62, doi:10.1126/science.1158757, PMID 18832637
Nowak, Martin A. (8 Dec. 2006), "Five Rules for the Evolution of Cooperation" (http://www.sciencemag.org/cgi/reprint/314/5805/1560.pdf), Science 314 (5805): 1560-63, doi:10.1126/science.1133755, PMID 17158317
Nowak, Martin A.; Page, Karen M.; Sigmund, Karl (8 Sept. 2000), "Fairness Versus Reason in the Ultimatum Game" (http://www.sciencemag.org/cgi/reprint/289/5485/1773.pdf), Science 289: 1773-75, doi:10.1126/science.289.5485.1773
Nowak, Martin A.; Sigmund, Karl (16 Jan. 1992), "Tit For Tat in Heterogenous Populations" (http://www.ped.fas.harvard.edu/people/faculty/all_publications.html), Nature 355: 250-253, doi:10.1038/315250a0
Nowak, Martin A.; Sigmund, Karl (1 July 1993), "A strategy of win-stay, lose-shift that outperforms tit for tat in Prisoner's Dilemma" (http://www.ped.fas.harvard.edu/people/faculty/publications_nowak/Nature93.pdf), Nature 364 (6432): 56-58, doi:10.1038/364056a0, PMID 8316296
Poundstone, William (1992), Prisoner's Dilemma: John von Neumann, Game Theory and the Puzzle of the Bomb, Anchor Books, ISBN 0-385-41580-X
Quervain, D. J.-F.; et al. (24 Aug. 2004), "The Neural Basis of Altruistic Punishment" (http://www.sciencemag.org/cgi/reprint/305/5688/1254.pdf), Science 305 (5688): 1254, doi:10.1126/science.1100735, PMID 15333831
Rand, Ayn (1961), The Virtue of Selfishness: A New Concept of Egoism, The New American Library
Rapoport, Anatol; Chammah, Albert M. (1965), Prisoner's Dilemma, Univ. of Michigan Press
Riolo, Rick L.; Cohen, Michael D.; Axelrod, Robert (23 Nov. 2001), "Evolution of cooperation without reciprocity" (http://www.nature.com/nature/journal/v414/n6862/full/414441a0.html), Nature 414 (6862): 441-43, doi:10.1038/35106555, PMID 11719803
Roughgarden, Joan; Oishi, Meeko; Akcay, Erol (17 Feb. 2006), "Reproductive Social Behavior: Cooperative Games to Replace Sexual Selection" (http://www.sciencemag.org/cgi/reprint/311/5763/965.pdf), Science 311 (5763): 965-69, doi:10.1126/science.1110105, PMID 16484485
Rousseau, Jean Jacques ([1762] 1950), The Social Contract, E. P. Dutton & Co. [and others]
Sanfey, Alan G. (26 Oct. 2007), "Social Decision-Making: Insights from Game Theory and Neuroscience", Science 318 (5850): 598-602, doi:10.1126/science.1142996, PMID 17962552
Sigmund, Karl; Fehr, Ernest; Nowak, Martin A. (January 2002), "The Economics of Fair Play" (http://www.ped.fas.harvard.edu/people/faculty/all_publications.html), Scientific American: 82-87
Stephens, D. W.; McLinn, C. M.; Stevens, J. R. (13 Dec. 2002), "Discounting and Reciprocity in an Iterated Prisoner's Dilemma" (http://www.sciencemag.org/cgi/reprint/298/5601/2216.pdf), Science 298 (5601): 2216-18, doi:10.1126/science.1078498, PMID 12481142
Trivers, Robert L. (March 1971), "The Evolution of Reciprocal Altruism", Quarterly Review of Biology 46: 35-57, doi:10.1086/406755
Vogel, Gretchen (20 Feb. 2004), "News Focus: The Evolution of the Golden Rule" (http://www.sciencemag.org/cgi/reprint/303/5661/1128.pdf), Science 303 (5661): 1128-31, doi:10.1126/science.303.5661.1128, PMID 14976292
Von Neumann, John; Morgenstern, Oskar (1944), Theory of Games and Economic Behavior, Princeton Univ. Press
Wade, Nicholas (20 Mar. 2007), "Scientist Finds the Beginnings of Morality in Primitive Behavior" (http://www.nytimes.com/2007/09/18/science/18mora.html?pagewanted=1), New York Times: D3
Williams, John D. (1954), The Compleat Strategyst, RAND Corp.
Williams, John D. (1966), The Compleat Strategyst: being a primer on the theory of games of strategy (2nd ed.), McGraw-Hill Book Co.
Wilson, Edward O. (1975), Sociobiology: The New Synthesis, Harvard Univ. Press
Holarchy
A holarchy, in the terminology of Arthur Koestler, is a hierarchy of holons where a holon is both a part and a whole. The term was coined in Koestler's 1967 book The Ghost in the Machine. The term, spelled holoarchy, is also used extensively by American philosopher and writer Ken Wilber.[1] The "nested" nature of holons, where one holon can be considered as part of another, is similar to the term Panarchy as used by Adaptive Management ecological theorists Lance Gunderson and C.S. Holling.[2] The universe as a whole is an example of a holarchical system, in which every holarchy is part of a larger holarchy.
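Koestler's part/whole structure maps naturally onto a tree in which every node both contains parts and belongs to a larger whole. The sketch below is purely illustrative (the `Holon` class and the example names are ours, not Koestler's); it shows how walking up from any holon visits each enclosing whole in turn:

```python
from dataclasses import dataclass, field

@dataclass
class Holon:
    """A node that is simultaneously a whole (it contains parts)
    and a part (it may belong to a larger holon)."""
    name: str
    parts: list = field(default_factory=list)
    whole: "Holon | None" = None  # None marks the top of the holarchy

    def add(self, part: "Holon") -> "Holon":
        """Nest `part` inside this holon and return it for chaining."""
        part.whole = self
        self.parts.append(part)
        return part

# Hypothetical nesting, for illustration only.
universe = Holon("universe")
galaxy = universe.add(Holon("galaxy"))
cell = galaxy.add(Holon("organism")).add(Holon("cell"))

# Ascending from any holon enumerates the wholes that contain it.
chain = []
node = cell
while node:
    chain.append(node.name)
    node = node.whole
print(" < ".join(chain))  # cell < organism < galaxy < universe
```

The same structure, read downward, enumerates a holon's parts, which is the sense in which each node is "both a part and a whole".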
Different meanings
David Spangler uses the term with a different meaning: "In a hierarchy, participants can be compared and evaluated on the basis of position, rank, relative power, seniority and the like. But in a holarchy each person's value comes from his or her individuality and uniqueness and the capacity to engage and interact with others to make the fruits of that uniqueness available."[3]
See also
Heterarchy Holism Holism in ecological anthropology Modular design Noosphere Ontology modularization Polytely Process philosophy
References
[1] Wilber, K.: The Essential Ken Wilber: An Introductory Reader, Shambhala, 1998.
[2] Gunderson, L. H. and Holling, C. S. (editors): Panarchy: Understanding Transformations in Human and Natural Systems, Island Press, 2002.
[3] "A Vision of Holarchy" (http://www.sevenpillarsreview.com/article/a_vision_of_holarchy1), Seven Pillars Review.
External links
Brief essay on holarchies (http://www.worldtrans.org/essay/holarchies.html)
Holism
Distinguish from the suffix -holism, naming addictions. Holism (from holos, a Greek word meaning all, whole, entire, total) is the idea that all the properties of a given system (physical, biological, chemical, social, economic, mental, linguistic, etc.) cannot be determined or explained by its component parts alone. Instead, the system as a whole determines in an important way how the parts behave. The general principle of holism was concisely summarized by Aristotle in the Metaphysics: "The whole is more than the sum of its parts" (1045a10). Reductionism is sometimes seen as the opposite of holism. Reductionism in science says that a complex system can be explained by reduction to its fundamental parts. For example, that the processes of biology can be reduced to chemistry and the laws of chemistry explained by physics.
History
The term holism was introduced by the South African statesman Jan Smuts in his 1926 book, Holism and Evolution.[1] Smuts defined holism as "The tendency in nature to form wholes that are greater than the sum of the parts through creative evolution."[2] The idea has ancient roots. Examples of holism can be found throughout human history and in the most diverse socio-cultural contexts, as has been confirmed by many ethnological studies. The French Protestant missionary Maurice Leenhardt coined the term cosmomorphism to indicate the state of perfect symbiosis with the surrounding environment which characterized the culture of the Melanesians of New Caledonia. For these people, an isolated individual is totally indeterminate, indistinct and featureless until he can find his position within the natural and social world in which he is inserted. The confines between the self and the world are annulled to the point that the material body itself is no guarantee of the sort of recognition of identity which is typical of our own culture. In the late 1990s the term holistic evolved into the term wholistic in order to clarify the concept even further. For example, schools refer to wholistic learning styles, and medicine refers to the wholistic model that considers mind, body and spirit in diagnosis and treatment.
In science
In the latter half of the 20th century, holism led to systems thinking and its derivatives, like the sciences of chaos and complexity. Systems in biology, psychology, or sociology are frequently so complex that their behavior is, or appears, "new" or "emergent": it cannot be deduced from the properties of the elements alone.[3] Holism has thus been used as a catchword. This contributed to the resistance encountered by the scientific interpretation of holism, which insists that there are ontological reasons that prevent reductive models in principle from providing efficient algorithms for prediction of system behavior in certain classes of systems. Further resistance to holism has come from the association of the concept with quantum mysticism. Recently, however, public understanding has grown over the realities of such concepts, and more scientists are beginning to
accept serious research into the concept, such as the cell biologist Bruce Lipton.[4] Scientific holism holds that the behavior of a system cannot be perfectly predicted, no matter how much data is available. Natural systems can produce surprisingly unexpected behavior, and it is suspected that behavior of such systems might be computationally irreducible, which means it would not be possible even to approximate the system state without a full simulation of all the events occurring in the system. Key properties of the higher-level behavior of certain classes of systems may be mediated by rare "surprises" in the behavior of their elements due to the principle of interconnectivity, thus evading predictions except by brute-force simulation. Stephen Wolfram has provided such examples with simple cellular automata, whose behavior is in most cases equally simple, but on rare occasions highly unpredictable.[5] Complexity theory (also called the "science of complexity") is a contemporary heir of systems thinking. It comprises both computational and holistic, relational approaches towards understanding complex adaptive systems and, especially in the latter, its methods can be seen as the polar opposite of reductive methods. General theories of complexity have been proposed, and numerous complexity institutes and departments have sprung up around the world. The Santa Fe Institute is arguably the most famous of them.
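Wolfram's point can be demonstrated in a few lines. The sketch below (our own minimal implementation, not Wolfram's code) runs a one-dimensional binary cellular automaton under Rule 30, one of his standard examples: each cell's next state depends only on its three-cell neighborhood, yet the pattern grown from a single live cell is famously irregular:

```python
def step(cells, rule=30):
    """Advance a 1-D binary cellular automaton one generation.
    Each cell's next state is bit p of the 8-bit rule number, where
    p = left*4 + center*2 + right over a wraparound neighborhood."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch irregular structure emerge.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Despite the update rule being a fixed 8-entry lookup table, no shortcut formula is known for the state of Rule 30 far in the future; in practice one simply runs the simulation, which is exactly the computational irreducibility the paragraph describes.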
In anthropology
There is an ongoing dispute as to whether anthropology is intrinsically holistic. Supporters of this concept consider anthropology holistic in two senses. First, it is concerned with all human beings across times and places, and with all dimensions of humanity (evolutionary, biophysical, sociopolitical, economic, cultural, psychological, etc.). Further, many academic programs following this approach take a "four-field" approach to anthropology that encompasses physical anthropology, archeology, linguistics, and cultural anthropology or social anthropology.[6] Some leading anthropologists disagree, and consider anthropological holism to be an artifact from 19th century social evolutionary thought that inappropriately imposes scientific positivism upon cultural anthropology.[7] The term "holism" is additionally used within social and cultural anthropology to refer to an analysis of a society as a whole which refuses to break society into component parts. One definition says: "as a methodological ideal, holism implies ... that one does not permit oneself to believe that our own established institutional boundaries (e.g. between politics, sexuality, religion, economics) necessarily may be found also in foreign societies."[8]
In business
A holistic brand (also holistic branding) considers the entire brand or image of the company: for example, a universal brand image across all countries, covering everything from advertising styles to the stationery the company has made to the company colours.
In ecology
Ecology is the leading and most important approach to holism, as it tries to include biological, chemical, physical and economic views in a given area. The complexity grows with the area, so it is necessary to limit the view in other ways, for example to a specific duration of time. John Muir, the Scots-born early conservationist,[9] wrote: "When we try to pick out anything by itself we find it hitched to everything else in the Universe." More information is to be found in the field of systems ecology, a cross-disciplinary field influenced by general systems theory. See also Holistic Community.
In economics
With roots in Schumpeter, the evolutionary approach might be considered the holist theory in economics. Evolutionary economists share certain language with the biological evolutionary approach, and they take into account how the innovation system evolves over time. Knowledge and know-how, know-who, know-what and know-why are part of the whole of business economics. Knowledge can also be tacit, as described by Michael Polanyi. These models are open, and consider that it is hard to predict exactly the impact of a policy measure. They are also less mathematical.
In philosophy
In philosophy, any doctrine that emphasizes the priority of a whole over its parts is holism. Some suggest that such a definition owes its origins to a non-holistic view of language and places it in the reductivist camp. Alternately, a 'holistic' definition of holism denies the necessity of a division between the function of separate parts and the workings of the 'whole'. It suggests that the key recognisable characteristic of a concept of holism is a sense of the fundamental truth of any particular experience. This exists in contradistinction to what is perceived as the reductivist reliance on inductive method as the key to verification of its concept of how the parts function within the whole. In the philosophy of language this becomes the claim, called semantic holism, that the meaning of an individual word or sentence can only be understood in terms of its relations to a larger body of language, even a whole theory or a whole language. In the philosophy of mind, a mental state may be identified only in terms of its relations with others. This is often referred to as content holism or holism of the mental. Epistemological and confirmation holism are mainstream ideas in contemporary philosophy. Ontological holism was espoused by David Bohm in his theory on The Implicate Order.
In sociology
Émile Durkheim developed a concept of holism which he set in opposition to the notion that a society is nothing more than a simple collection of individuals. In more recent times, Louis Dumont[10] has contrasted "holism" with "individualism" as two different forms of societies. According to him, modern humans live in an individualist society, whereas ancient Greek society, for example, could be qualified as "holistic", because the individual found identity in the whole society. Thus, the individual was ready to sacrifice himself or herself for his or her community, as his or her life without the polis had no sense whatsoever. Martin Luther King Jr. had a holistic view of social justice. In Letter from Birmingham Jail he famously said: "Injustice anywhere is a threat to justice everywhere". Scholars such as David Bohm[11] and M. I. Sanduk[12] consider society through plasma physics. From a physics point of view, the interaction of individuals within a group may lead to a continuous model. For Sanduk, the nature of fluidity of plasma (ionized gas) arises from the interaction of its free interactive charges, so a society may behave as a fluid owing to its free interactive individuals. This fluid model may explain many social phenomena, such as social instability, diffusion, flow and viscosity; the society thus behaves as a sort of intellectual fluid.
In psychology of perception
A major holist movement in the early twentieth century was gestalt psychology. The claim was that perception is not an aggregation of atomic sense data but a field, in which there is a figure and a ground. Background has holistic effects on the perceived figure. Gestalt psychologists included Wolfgang Koehler, Max Wertheimer, Kurt Koffka. Koehler claimed the perceptual fields corresponded to electrical fields in the brain. Karl Lashley did experiments with gold foil pieces inserted in monkey brains purporting to show that such fields did not exist. However, many of the perceptual illusions and visual phenomena exhibited by the gestaltists were taken over (often without credit) by later perceptual psychologists. Gestalt psychology had influence on Fritz Perls' gestalt therapy, although some old-line gestaltists opposed the association with counter-cultural and New Age trends later associated with gestalt therapy. Gestalt theory was also influential on phenomenology. Aron Gurwitsch wrote on the role of the field of
consciousness in gestalt theory in relation to phenomenology. Maurice Merleau-Ponty made much use of the work of holistic psychologists such as Kurt Goldstein in his Phenomenology of Perception.
In teleological psychology
Alfred Adler believed that the individual (an integrated whole expressed through a self-consistent unity of thinking, feeling, and action, moving toward an unconscious, fictional final goal), must be understood within the larger wholes of society, from the groups to which he belongs (starting with his face-to-face relationships), to the larger whole of mankind. The recognition of our social embeddedness and the need for developing an interest in the welfare of others, as well as a respect for nature, is at the heart of Adler's philosophy of living and principles of psychotherapy. Edgar Morin, the French philosopher and sociobiologist, can be considered a holist based on the transdisciplinary nature of his work. Mel Levine, M.D., author of A Mind at a Time,[13] and co-founder (with Charles R. Schwab) of the not-for-profit organization All Kinds of Minds, can be considered a holist based on his view of the 'whole child' as a product of many systems and his work supporting the educational needs of children through the management of a child's educational profile as a whole rather than isolated weaknesses in that profile.
In theological anthropology
In theological anthropology, which belongs to theology and not to anthropology, holism is the belief that the nature of humans consists of an ultimately indivisible union of components such as body, soul and spirit.
In theology
Holistic concepts are strongly represented within the thoughts expressed within Logos (per Heraclitus), Panentheism and Pantheism.
In neurology
A lively debate has run since the end of the 19th century regarding the functional organization of the brain. The holistic tradition (e.g., Pierre Marie) maintained that the brain was a homogeneous organ with no specific subparts, whereas the localizationists (e.g., Paul Broca) argued that the brain was organized in functionally distinct cortical areas, each specialized to process a given type of information or implement specific mental operations. The controversy was epitomized by the debate over the existence of a language area in the brain, nowadays known as Broca's area.[14] Although Broca's view has gained acceptance, the issue is not settled, insofar as the brain as a whole is a highly connected organ at every level, from the individual neuron to the hemispheres.
Applications
Architecture
Architecture is often argued by design academics and those practicing in design to be a holistic enterprise.[15] Used in this context, holism tends to imply an all-inclusive design perspective. This trait is considered exclusive to architecture, distinct from other professions involved in design projects.
Education reform
The Taxonomy of Educational Objectives identifies many levels of cognitive functioning, which can be used to create a more holistic education. In authentic assessment, rather than using computers to score multiple choice tests, a standards based assessment uses trained scorers to score open-response items using holistic scoring methods.[16] In projects such as the North Carolina Writing Project, scorers are instructed not to count errors, or count numbers of
points or supporting statements. The scorer is instead instructed to judge holistically whether, as a whole, the response is more a "2" or a "3". Critics question whether such a process can be as objective as computer scoring, and the degree to which such scoring methods can result in different scores from different scorers.
Medicine
Holism appears in psychosomatic medicine. In the 1970s the holistic approach was considered one possible way to conceptualize psychosomatic phenomena. Instead of charting one-way causal links from psyche to soma, or vice-versa, it aimed at a systemic model, where multiple biological, psychological and social factors were seen as interlinked. Other, alternative approaches at that time were psychosomatic and somatopsychic approaches, which concentrated on causal links only from psyche to soma, or from soma to psyche, respectively.[17] At present it is commonplace in psychosomatic medicine to state that psyche and soma cannot really be separated for practical or theoretical purposes. A disturbance on any level - somatic, psychic, or social - will radiate to all the other levels, too. In this sense, psychosomatic thinking is similar to the biopsychosocial model of medicine. Alternative medicine practitioners adopt a holistic approach to healing.
See also
Buckminster Fuller Christopher Alexander Confirmation holism David Bohm Emergence Emergentism Gaia hypothesis Gestalt psychology Gestalt therapy Gross National Happiness Holarchy Holistic health Holon (philosophy) Howard T. Odum Jan Smuts Janus Kurt Goldstein Logical holism Ontology Organicism Herbert Simon Polytely Panarchy Synergetics Synergy Systems theory Willard Van Orman Quine
Notes
[1] According to the Oxford English Dictionary.
[2] cf. Henri Bergson.
[3] Bertalanffy 1968, p. 54.
[4] "Finding My Religion" (http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2005/11/14/findrelig.DTL), San Francisco Chronicle, retrieved on March 2nd, 2010.
[5] S. Wolfram, "Cellular automata as models of complexity", Nature 311, 419-424 (1984).
[6] Shore, Bradd (1999) "Strange Fate of Holism". Anthropology News 40(9): 4-5.
[7] Segal, Daniel A.; Sylvia J. Yanagisako (eds.), James Clifford, Ian Hodder, Rena Lederman, Michael Silverstein (2005). Unwrapping the Sacred Bundle: Reflections on the Disciplining of Anthropology (http://www.dukeupress.edu/cgibin/forwardsql/search.cgi?template0=nomatch.htm&template2=books/book_detail_page.htm&user_id=11016434335&Bmain.Btitle_option=1&Bmain.Btitle=Unwrapping+the+Sacred+Bundle). Duke University Press.
[8] Anthrobase definition of holism (http://www.anthrobase.org/Dic/eng/def/holism.htm)
[9] Reconnecting with John Muir, by Terry Gifford, University of Georgia, 2006.
[10] Louis Dumont, 1984.
[11] Wilkins, M. (1986) Oral history interviews with David Bohm, 16 tapes, undated transcript (AIP and Birkbeck College Library, London), 253-254.
[12] M. I. Sanduk, Does Society Exhibit Same Behaviour of Plasma Fluid? http://philpapers.org/rec/DSE
[13] (Simon & Schuster, 2002)
[14] "'Does Broca's area exist?': Christofredo Jakob's 1906 response to Pierre Marie's holistic stance." Kyrana Tsapkini, Ana B. Vivas, Lazaros C. Triarhou. Brain and Language, Volume 105, Issue 3, June 2008, Pages 211-219, http://dx.doi.org/10.1016/j.bandl.2007.07.124
[15] Holm, Ivar (2006). Ideas and Beliefs in Architecture: How attitudes, orientations, and underlying assumptions shape the built environment. Oslo School of Architecture and Design. ISBN 82-547-0174-1.
[16] Rubrics (Authentic Assessment Toolbox): "So, when might you use a holistic rubric? Holistic rubrics tend to be used when a quick or gross judgment needs to be made" (http://jonathan.mueller.faculty.noctrl.edu/toolbox/rubrics.htm)
[17] Lipowski, 1977.
36
References
Ludwig von Bertalanffy, 1971. General System Theory: Foundations, Development, Applications. Allen Lane (1968)
Bohm, D. (1980) Wholeness and the Implicate Order. London: Routledge. ISBN 0-7100-0971-2
Leenhardt, M. 1947. Do Kamo. La personne et le mythe dans le monde mélanésien. Gallimard, Paris.
Lipowski, Z.J.: "Psychosomatic medicine in the seventies". Am. J. Psych. 134:3:233-244
Jan C. Smuts, 1926. Holism and Evolution, MacMillan. Compass/Viking Press 1961 reprint: ISBN 0-598-63750-8; Greenwood Press 1973 reprint: ISBN 0-8371-6556-3; Sierra Sunrise 1999 (mildly edited): ISBN 1-887263-14-4
Further reading
Dusek, Val. The Holistic Inspirations of Physics: An Underground History of Electromagnetic Theory. Rutgers University Press, Brunswick NJ, 1999.
Fodor, Jerry, and Ernest Lepore. Holism: A Shopper's Guide. Wiley, New York, 1992.
Hayek, F.A. von. The Counter-Revolution of Science: Studies on the Abuse of Reason. Free Press, New York, 1957.
Mandelbaum, M. "Societal Facts" in Gardner 1959.
Phillips, D.C. Holistic Thought in Social Science. Stanford University Press, Stanford, 1976.
Dreyfus, H.L. "Holism and Hermeneutics" in The Review of Metaphysics, 34, pp. 323.
James, S. The Content of Social Explanation. Cambridge University Press, Cambridge, 1984.
Harrington, A. Reenchanted Science: Holism in German Culture from Wilhelm II to Hitler. Princeton University Press, 1996.
Lopez, F. Il pensiero olistico di Ippocrate, vol. I-IIA. Ed. Pubblisfera, Cosenza, Italy, 2004-2008.
External links
Brief explanation of Koestler's derivation of "holon" (http://www.mech.kuleuven.be/pma/project/goa/hms-int/history.html)
Holism in nature (http://www.ecotao.com/holism/) and coevolution in ecosystems
Stanford Encyclopedia of Philosophy article: "Holism and Nonseparability in Physics" (http://plato.stanford.edu/entries/physics-holism/)
James Schombert of University of Oregon Physics Dept on quantum holism (http://abyss.uoregon.edu/~js/glossary/holism.html)
Theory of sociological holism (http://www.twow.net/ObjText/OtkCcCE.htm) from "World of Wholeness"
37
Holism in science
Holism in science, or Holistic science, is an approach to research that emphasizes the study of complex systems. This practice is in contrast to a purely analytic tradition (sometimes called reductionism) which purports to understand systems by dividing them into their smallest possible or discernible elements and understanding their elemental properties alone. The holism-reductionism dichotomy is often evident in conflicting interpretations of experimental findings and in setting priorities for future research.
Overview
Holism in science is an approach to research that emphasizes the study of complex systems. Two central aspects are:
1. the way of doing science, sometimes called "whole to parts," which focuses on observation of the specimen within its ecosystem first, before breaking down to study any part of the specimen;
2. the idea that the scientist is not a passive observer of an external universe; that there is no 'objective truth,' but that the individual is in a reciprocal, participatory relationship with nature, and that the observer's contribution to the process is valuable.
The term holistic science has been used as a category encompassing a number of scientific research fields (see some examples below). The term may not have a precise definition. Fields of scientific research considered potentially holistic do, however, have certain things in common. First, they are multidisciplinary. Second, they are concerned with the behavior of complex systems. Third, they recognize feedback within systems as a crucial element for understanding their behavior. The Santa Fe Institute, a center of holistic scientific research in the United States, expresses it like this: "The two dominant characteristics of the SFI research style are commitment to a multidisciplinary approach and an emphasis on the study of problems that involve complex interactions among their constituent parts." ("Santa Fe Institute's Research Topics" [1]. Archived from the original [2] on January 15, 2006. Retrieved January 22, 2006.)
Opposing views
Holistic science is controversial. One opposing view is that holistic science is "pseudoscience" because it does not rigorously follow the scientific method despite its use of scientific-sounding language. Bunge (1983) and Lilienfeld et al. (2003) state that proponents of pseudoscientific claims, especially in organic medicine, alternative medicine, naturopathy and mental health, often resort to the mantra of holism to explain negative findings or to immunise their claims against testing. Stenger (1999) states that "holistic healing is associated with the rejection of classical, Newtonian physics. Yet, holistic healing retains many ideas from eighteenth and nineteenth century physics. Its proponents are blissfully unaware that these ideas, especially superluminal holism, have been rejected by modern physics as well". Science journalist John Horgan has expressed this view in the book The End of Science (1996). He wrote that a certain pervasive model within holistic science, self-organized criticality, for example, "is not really a theory at all. Like punctuated equilibrium, self-organized criticality is merely a description, one of many, of the random fluctuations, the noise, permeating nature." By the theorists' own admissions, he said, such a model "can generate neither specific predictions about nature nor meaningful insights. What good is it, then?"
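The self-organized criticality model Horgan criticizes can at least be stated concretely. A minimal sketch in Python, using the canonical Bak-Tang-Wiesenfeld sandpile (the standard textbook example of self-organized criticality; the grid size and grain count are illustrative choices, not anything from this article): grains are dropped at random sites, any site holding four or more grains topples one grain onto each neighbour, and the size of each resulting avalanche is recorded.

```python
import random

def topple(grid, n):
    """Relax the grid: any cell with 4 or more grains sheds one grain to
    each of its four neighbours (grains falling off the edge are lost).
    Returns the number of toppling events (the avalanche size)."""
    avalanche = 0
    unstable = [(i, j) for i in range(n) for j in range(n) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:          # may have been relaxed already
            continue
        grid[i][j] -= 4
        avalanche += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return avalanche

def sandpile(n=20, grains=5000, seed=1):
    """Drop grains one at a time at random sites; record avalanche sizes."""
    rng = random.Random(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        grid[rng.randrange(n)][rng.randrange(n)] += 1
        sizes.append(topple(grid, n))
    return grid, sizes
```

Run long enough, the pile settles into a critical state producing avalanches at all scales, which illustrates Horgan's complaint in miniature: the model generates characteristic fluctuations, but no specific prediction about any particular system.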
Cognitive science
The field of cognitive science, or the study of mind and intelligence, offers some examples of holistic approaches. These include unified theories of cognition (Allen Newell; e.g., Soar and ACT-R as models) and many others, many of which rely on the concept of emergence, i.e. the interplay of many entities making up a functioning whole. Another example is psychological nativism, the study of the innate structure of the mind. Non-holistic, functionalist approaches within cognitive science include e.g. the modularity of mind paradigm. Cognitive science need not concern only human cognition. Biologist Marc Bekoff has done holistic, interdisciplinary scientific research in animal cognition and has published a book about it (see below).
Quantum physics
In the standard Copenhagen interpretation of quantum mechanics there is a holism of the measurement situation: apparatus and measured object form an inseparable whole. According to Niels Bohr, there is an "uncontrollable disturbance" of the measured object by the act of measurement, and it is impossible to separate the effect of the measuring apparatus from the object measured. The observer-measurement relation is an active area of research today: see quantum decoherence, the quantum Zeno effect and the measurement problem.
Engineering
In engineering, the holistic approach can be considered "natural", because one of the main engineering tasks is to design systems that do not yet exist. Conceptual design therefore begins from a general idea, which is successively specialized top-down. The process stops when the specified details correspond to components available on the market.
Biology
Holistic science sometimes asks different questions than a strictly analytic science, as is exemplified by Goethe in the following passage: "We conceive of the individual animal as a small world, existing for its own sake, by its own means. Every creature is its own reason to be. All its parts have a direct effect on one another, a relationship to one another, thereby constantly renewing the circle of life; thus we are justified in considering every animal physiologically perfect. Viewed from within, no part of the animal is a useless or arbitrary product of the formative impulse (as so often thought). Externally, some parts may seem useless because the inner coherence of the animal nature has given them this form without regard to outer circumstance. Thus, not the question, What are they for? but rather, Where do they come from?" (Goethe, Scientific Studies, Suhrkamp ed., vol. 12, p. 121; trans. Douglas Miller)
Other examples
Ecology, or ecological science: studying ecology at levels ranging from populations, communities, and ecosystems up to the biosphere as a whole.
The study of climate change in the wider context of Earth science (and Earth system science in particular) can be considered holistic science, as the climate (and the Earth itself) constitutes a complex system to which the scientific method cannot be applied using current technology. The first scientist to seriously propose this was James Lovelock. [3] (URL accessed on 28 November 2006)
Princeton University hosts a holistic science project entitled the "Global Consciousness Project", which uses a network of physical random number generators to register events of global significance, testing the hypothesis that there is a collective human consciousness at work in the world. [4]
Johann Wolfgang von Goethe's 1810 book Zur Farbenlehre (Theory of Colours) not only parted radically with the dominant Newtonian optical theories of his time, but also with the entire Enlightenment methodology of reductive science. Although the theory was not received well by scientists, Goethe, considered one of the most important intellectual figures in modern Europe, thought of his color theory as his greatest accomplishment. Holistic theorists and scientists such as Rupert Sheldrake still refer to Goethe's color theory as an inspiring example of holistic science. The introduction to the book lays out Goethe's unique philosophy of science.
In system dynamics modeling, a field that originated at MIT, a holistic controlling paradigm organizes scientific method, but uses the results of reductionist science to define static relationships between variables in a modeling procedure that permits simulation of the dynamics of the system under study. As mentioned above, feedback is a crucial tool for understanding system dynamics.
[5] Another example of how holistic and reductionist science can be mutually supportive and cooperative is free-choice profiling.
As an example of interdisciplinary holistic research, Joe L. Kincheloe, in his work in critical pedagogy, has employed complexity and holism in science to overcome reductionism.
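The system dynamics procedure described above (static relationships between variables, plus feedback, simulated over time) can be sketched in a few lines. A minimal sketch in Python; the model, names, and parameter values are illustrative choices, not drawn from this article: a single stock with a balancing (negative) feedback loop, i.e. the classic logistic model written in stock-and-flow style and integrated with Euler steps.

```python
def simulate(stock, inflow_rate, capacity, dt=0.25, steps=200):
    """Euler integration of one stock with a balancing feedback loop:
    the net inflow shrinks as the stock approaches its carrying capacity
    (the logistic model, written in stock-and-flow style)."""
    history = [stock]
    for _ in range(steps):
        feedback = 1 - stock / capacity     # negative feedback term
        net_flow = inflow_rate * stock * feedback
        stock += net_flow * dt
        history.append(stock)
    return history

# Starting small, the stock grows, then levels off as feedback approaches zero.
trajectory = simulate(stock=1.0, inflow_rate=0.5, capacity=100.0)
```

The reductionist input here is the single static relationship between stock and net flow; the holistic output is the S-shaped trajectory, which only appears once the feedback loop is simulated as a whole.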
See also
Articles related to holism: Cognitive science, Complexity theory, Holarchy, Holism, Holism in ecological anthropology, Holism in science, Holistic health, Holon (philosophy), Philosophy of biology, Scientific reductionism, Systems thinking, Antireductionism.
Articles related to classification of scientific endeavors: Cartesian anxiety, Demarcation problem, Hard science, Philosophy of science, Pseudoscience, Romanticism in science, Science wars.
Notes
[1] http://web.archive.org/web/20060115042406/http://www.santafe.edu/research/indexResearchAreas.php
[2] http://www.santafe.edu/research/indexResearchAreas.php
[3] http://today.reuters.com/news/articlenews.aspx?type=scienceNews&storyID=2006-11-28T151248Z_01_L28841108_RTRUKOC_0_US-EARTH-FEVER.xml&src=112806_1245_ARTICLE_PROMO_also_on_reuters
[4] http://noosphere.princeton.edu/
[5] http://www.albany.edu/cpr/sds/
[6] http://pages.slc.edu/~eraymond/bestfoot.html
[7] "The Theory of Knowledge Implicit in Goethe's World Conception" (http://wn.rsarchive.org/Books/GA002/English/AP1985/GA002_index.html). Retrieved 2008-08-28.
[8] "Goethe's World View" (http://wn.rsarchive.org/Books/GA006/English/MP1985/GA006_index.html). Retrieved 2008-08-28.
References
Paul Davies and John Gribbin. The Matter Myth: Dramatic Discoveries That Challenge Our Understanding of Physical Reality, 1992, Simon & Schuster, ISBN 0-671-72841-5
Henri Bortoft. The Wholeness of Nature: Goethe's Way Toward a Science of Participation in Nature, 1996, Lindisfarne Books, ISBN 0-940262-79-7
Joe L. Kincheloe. Critical Constructivism, 2005, NY, Peter Lang.
Joe L. Kincheloe. Teachers as Researchers: Qualitative Paths to Empowerment, 2nd ed., 2003, London, Falmer.
Joe L. Kincheloe and Kathleen Berry. Rigor and Complexity in Qualitative Research: Constructing the Bricolage, 2004, London, Open University Press.
Humberto R. Maturana and Francisco Varela. The Tree of Knowledge: The Biological Roots of Human Understanding, 1992, Shambhala, ISBN 0-87773-642-1
Ilya Prigogine and Isabelle Stengers. Order out of Chaos: Man's New Dialogue with Nature, 1984, Flamingo, ISBN 0-00-654115-1
Article "What is the Proper Relationship of Holistic and Reductionist Science?" by Karl North (http://www.geocities.com/northsheep/holiscience.html)
Article "The Fine Line: (W)holism and Science" by Annemarie Colbin, Ph.D. (http://www.foodandhealing.com/article-wholismscience.htm)
Article "A New Image of Cosmos & Anthropos: From Ancient Wisdom to a Philosophy of Wholeness" by Michael R. Meyer (http://www.khaldea.com/articles/ni2.shtml)
Excerpts from Holistic Science towards a Second Renaissance (http://www.eclipse.co.uk/moordent/holistic.htm) by R.J.C. Wilding (unpublished book in process)
Article "Concerning the Spiritual in Art and Science" (http://web.ukonline.co.uk/mr.king/writings/essays/essaysukc/csasukc0.html) by Mike King (available on-line)
Article "Patterns of Wholeness: Introducing Holistic Science" (http://www.resurgence.org/resurgence/issues/goodwin216.htm) by Brian Goodwin, from the journal Resurgence (http://www.resurgence.org/index.htm)
Article "From Control to Participation" (http://www.resurgence.org/resurgence/issues/goodwin201.htm) by Brian Goodwin, from the journal Resurgence (http://www.resurgence.org/index.htm)
System Dynamics Resource Page (http://www.public.asu.edu/~kirkwood/sysdyn/SDRes.htm) at Arizona State University, hosted by Craig W. Kirkwood
Introduction (http://www.arch.ksu.edu/seamon/book chapters/goethe_intro.htm) to Goethe's Way of Science: A Phenomenology of Nature, edited by David Seamon and Arthur Zajonc. State University of New York Press, 1998
Bunge, M. Demarcating Science from Pseudoscience. Fundamenta Scientiae, 1982, Vol. 3, No. 3/4, pp. 369-88
Lilienfeld, S.O. et al. (Eds.): Science and Pseudoscience in Clinical Psychology. New York / London, 2003
Olival Freire Jr. Science and Exile: David Bohm, the hot times of the Cold War, and his struggle for a new interpretation of quantum mechanics (http://arxiv.org/abs/physics/0508184) (online article)
Definition of System Dynamics and Systems Thinking (http://www.albany.edu/cpr/sds/index.html), on the System Dynamics Society homepage
Stenger, V.J. (1999). The Physics of 'Alternative Medicine'. The Scientific Review of Alternative Medicine, Spring/Summer 1999, Volume 3, Number 1
External links
The Nature Institute (http://natureinstitute.org/) Santa Fe Institute (http://www.santafe.edu) International Society for the System Sciences (http://www.isss.org) Schumacher College (http://www.schumachercollege.org) Center for the Study of Complex Systems (http://www.cscs.umich.edu/) at the University of Michigan Rice Cognitive Sciences Program (http://www.ruf.rice.edu/~cognsci/) The Scientific and Medical Network (http://www.scimednet.org) Princeton University Global Consciousness Project (http://noosphere.princeton.edu/) Centre for Postsecular Studies (http://www.jnani.org/postsecular/index.htm) at the London Metropolitan University The System Dynamics Society (http://www.albany.edu/cpr/sds/index.html) VERITAS Research Program (http://veritas.arizona.edu/) CAHRC Research In Quantum Number Theory (http://home.pacific.net.sg/~topchoice) Institute of Noetic Sciences website (http://www.noetic.org) Janus Head - Goethe's Delicate Empiricism (http://www.janushead.org/8-1/index.cfm)
Holon (philosophy)
A holon (Greek: ὅλον, holon, neuter form of ὅλος, holos, "whole") is something that is simultaneously a whole and a part. The word was coined by Arthur Koestler in his book The Ghost in the Machine (1967, p. 48). Koestler was compelled by two observations in proposing the notion of the holon. The first observation was influenced by Nobel Prize winner Herbert Simon's parable of the two watchmakers, wherein Simon concludes that complex systems will evolve from simple systems much more rapidly if there are stable intermediate forms present in that evolutionary process than if they are not present. The second observation was made by Koestler himself in his analysis of hierarchies and stable intermediate forms in both living organisms and social organizations. He concluded that, although it is easy to identify sub-wholes or parts, wholes and parts in an absolute sense do not exist anywhere. Koestler proposed the word holon to describe the hybrid nature of sub-wholes and parts within in vivo systems. From this perspective, holons exist simultaneously as self-contained wholes in relation to their sub-ordinate parts, and as dependent parts when considered from the inverse direction. Koestler also points out that holons are autonomous, self-reliant units that possess a degree of independence and handle contingencies without asking higher authorities for instructions. These holons are also simultaneously subject to control from one or more of these higher authorities. The first property ensures that holons are stable forms that are able to withstand disturbances, while the latter property signifies that they are intermediate forms, providing a context for the proper functionality of the larger whole. Finally, Koestler defines a holarchy as a hierarchy of self-regulating holons that function first as autonomous wholes in supra-ordination to their parts, secondly as dependent parts in sub-ordination to controls on higher levels, and thirdly in coordination with their local environment.
General definition
A holon is a system (or phenomenon) which is an evolving, self-organizing, dissipative structure, composed of other holons, whose structures exist at a balance point between chaos and order. It is maintained by the throughput of matter-energy and information-entropy connected to other holons, and is simultaneously a whole in and of itself while at the same time being nested within another holon, and so is a part of something much larger than itself. Holons range in size from the smallest subatomic particles and strings, all the way up to the multiverse, comprising many universes. Individual humans, their societies and their cultures are intermediate-level holons, created by the interaction of forces working upon us both top-down and bottom-up. On a non-physical level, words, ideas, sounds, emotions: everything that can be identified is simultaneously part of something, and can be viewed as having parts of its own, similar to the sign in semiotics. Since a holon is embedded in larger wholes, it is influenced by and influences these larger wholes. And since a holon also contains subsystems, or parts, it is similarly influenced by and influences these parts. Information flows bidirectionally between smaller and larger systems, as well as through rhizomatic contagion. When this bidirectionality of information flow and understanding of role is compromised, for whatever reason, the system begins to break down: wholes no longer recognize their dependence on their subsidiary parts, and parts no longer recognize the organizing authority of the wholes. Cancer may be understood as such a breakdown in the biological realm. A hierarchy of holons is called a holarchy. The holarchic model can be seen as an attempt to modify and modernise perceptions of natural hierarchy. Ken Wilber comments that the test of holon hierarchy (e.g. holarchy) is that if a type of holon is removed from existence, then all other holons of which it formed a part must necessarily cease to exist too.
Thus an atom is of a lower standing in the hierarchy than a molecule, because if you removed all molecules, atoms could still exist, whereas if you removed all atoms, molecules, in a strict sense would cease to exist. Wilber's concept is known as the doctrine of the fundamental and the significant. A hydrogen atom is more fundamental than an ant, but an ant is
more significant. The doctrine of the fundamental and the significant is contrasted with the radical, rhizome-oriented pragmatics of Deleuze and Guattari, and other continental philosophy.
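Wilber's removal test lends itself to a small computational sketch. A minimal sketch in Python; the four-level part-of table is an invented toy, not taken from Wilber: each holon kind lists the kinds it is composed of, and removing one kind propagates upward to every whole built from it.

```python
# Each holon kind lists the kinds of parts it is composed of.
PARTS = {
    "atom": [],
    "molecule": ["atom"],
    "cell": ["molecule"],
    "organism": ["cell"],
}

def ceases_with(removed, parts=PARTS):
    """Return every holon kind that would cease to exist if `removed`
    were removed from existence, per Wilber's test: a whole cannot
    survive the loss of the level it is built from."""
    gone = {removed}
    changed = True
    while changed:
        changed = False
        for kind, components in parts.items():
            if kind not in gone and any(c in gone for c in components):
                gone.add(kind)
                changed = True
    return gone
```

Removing atoms takes molecules, cells, and organisms with them, while removing molecules leaves atoms untouched: the atom is more fundamental in exactly Wilber's sense.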
Types of holons
Individual holon
An individual holon possesses a dominant monad; that is, it possesses a definable "I-ness". An individual holon is discrete, self-contained, and also demonstrates the quality of agency, or self-directed behavior. [3] The individual holon, although discrete and self-contained, is made up of parts; in the case of a human, examples of these parts would include the heart, lungs, liver, brain, spleen, etc. When a human exercises agency, taking a step to the left, for example, the entire holon, including the constituent parts, moves together as one unit.
Social holon
A social holon does not possess a dominant monad; it possesses only a definable "we-ness", as it is a collective made up of individual holons. [4] In addition, rather than possessing discrete agency, a social holon possesses what is defined as nexus agency. An illustration of nexus agency is best described by a flock of geese: each goose is an individual holon, while the flock makes up a social holon. Although the flock moves as one unit when flying, and is "directed" by the choices of the lead goose, the flock itself is not mandated to follow that lead goose. Another way to consider this would be collective activity that has the potential for independent internal activity at any given moment.
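Nexus agency, as the flock example describes it, can be caricatured in a few lines. A minimal sketch in Python; the update rule and numbers are illustrative inventions, not from the article: each goose blends its own heading with the flock average, no goose issues commands, and a coherent group heading emerges anyway.

```python
def step(headings, weight=0.5):
    """One update of a toy flock: each goose (an individual holon) nudges
    its heading toward the flock average. There is no central controller;
    the coherent group motion is the social holon's 'nexus agency'."""
    avg = sum(headings) / len(headings)
    return [h + weight * (avg - h) for h in headings]

headings = [0.0, 90.0, 180.0, 270.0]   # four geese, initially scattered
for _ in range(30):
    headings = step(headings)
# the headings converge on the shared average (135.0 here)
```

The converged heading belongs to no individual goose; it is a property of the collective, which is the point of calling the flock's agency a nexus rather than a monad.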
Artifacts
American philosopher Ken Wilber includes artifacts in his theory of holons. Artifacts are anything (e.g. a statue or a piece of music) that is created by either an individual holon or a social holon. While lacking any of the defining structural characteristics of the previous two holons (agency, self-maintenance, I-ness, self-transcendence), artifacts are useful to include in a comprehensive scheme due to their potential to replicate aspects of, and profoundly affect (via, say, interpretation), the previously described holons. It should also be noted that artifacts are made up of individual or social holons (e.g. a statue is made up of atoms). As an interesting aside, the development of artificial intelligence may force us to question where the line should be drawn between the individual holon and the artifact.
Heaps
Heaps are defined as random collections of holons that lack any sort of organisational significance. A pile of leaves would be an example of a heap. Note that one could question whether a pile of leaves could be an "artifact" of an ecosystem "social holon". This raises a problem of intentionality: in short, if social holons create artifacts but lack intentionality (the domain of individual holons), how can we distinguish between heaps and artifacts? Further, if an artist (individual holon) paints a picture (artifact) in a deliberately chaotic and unstructured way, does it become a heap?
Holon (philosophy)
See also
Bell's theorem, David Bohm, Herbert Simon, Heterarchy, Holarchy, Holism, Holism in ecological anthropology, Holism in science, Holomovement, Integral Theory, Janus, Ken Wilber, Metasystem transition, Philotics, Protocol stack, Quantum physics
External links
A brief history of the concept of holons [1]
An even briefer history of the term holon [2]
Arthur Koestler text on holon [3]
Ecosystems and Holarchies - a new way to look at hierarchies [4]
Holons, holarchy, and beyond [5]
Resources
Prigogine, I. & Stengers, I. 1984. Order out of Chaos. New York: Bantam Books
Koestler, Arthur. 1967. The Ghost in the Machine. London: Hutchinson. 1990 reprint edition, Penguin Group. ISBN 0-14-019192-5
References
[1] http://www.integralworld.net/edwards13x.html
[2] http://www.mech.kuleuven.ac.be/pma/project/goa/hms-int/history.html
[3] http://www.panarchy.org/koestler/holon.1969.html
[4] http://www.holon.se/folke/kurs/Bilder/holarchy2.shtml
[5] http://www.beyondwilber.ca/AQALmap/bookdwl/files/WAQALMB_1.pdf
Indeterminacy (philosophy)
Indeterminacy, in philosophy, can refer both to common scientific and mathematical concepts of uncertainty and their implications and to another kind of indeterminacy deriving from the nature of definition or meaning. It is related to deconstructionism and to Nietzsche's criticism of the Kantian noumenon.
Indeterminacy in philosophy
Introduction
Indeterminacy was discussed in one of Jacques Derrida's early works, "Plato's Pharmacy" (1969),[1] a reading of Plato's Phaedrus and Phaedo. Plato writes of a fictionalized conversation between Socrates and a student, in which Socrates tries to convince the student that writing is inferior to speech. Socrates uses the Egyptian myth of Thoth's creation of writing to illustrate his point. As the story goes, Thoth presents his invention to the god-king of Upper Egypt for judgment. Upon its presentation, Thoth offers script as a pharmakon for the Egyptian people. The Greek word pharmakon poses a quandary for translators: it is both a remedy and a poison. In the proffering of a pharmakon, Thoth presents it in its true meaning: both a harm and a benefit. The god-king, however, refuses the invention. Through various reasonings, he determines the pharmakon of writing to be a bad thing for the Egyptian people. The pharmakon, the undecidable, has been returned decided. The problem, as Derrida reasons, is this: since the word pharmakon, in the original Greek, means both a remedy and a poison, it cannot be determined as fully remedy or fully poison. Amon rejected writing as fully poison in Socrates' retelling of the tale, thus shutting out the other possibilities.
The problem of indeterminacy arises when one observes the eventual circularity of virtually every possible definition. It is easy to find loops of definition in any dictionary, because this seems to be the only way that certain concepts, and generally very important ones such as that of existence, can be defined in the English language. A definition is a collection of other words, and in any finite dictionary, if one continues to follow the trail of words in search of the precise meaning of any given term, one will inevitably encounter this linguistic indeterminacy.
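The circularity claim is mechanically checkable. A minimal sketch in Python, using a deliberately tiny invented dictionary (the words and definitions are illustrative, not from any real lexicon): treat each headword as a node, each word used in its definition as an edge, and search for a path that leads from a word back to itself.

```python
def definition_cycles(dictionary):
    """Find circular chains of definitions: paths that lead from a word
    back to itself by repeatedly looking up the words its definition uses."""
    cycles = []

    def visit(word, path):
        if word in path:                      # we have looped back
            cycles.append(path[path.index(word):] + [word])
            return
        for used in dictionary.get(word, []):
            visit(used, path + [word])

    for word in dictionary:
        visit(word, [])
    return cycles

# A toy dictionary exhibiting the loop the text describes for "existence".
toy = {
    "existence": ["state", "being"],
    "being": ["existence"],
    "state": [],
}
loops = definition_cycles(toy)
```

In any finite dictionary the definition graph must contain such loops (there are finitely many words, so chains of lookups cannot go on forever without repeating), which is the formal shape of the indeterminacy the paragraph describes.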
Philosophers and scientists generally try to eliminate indeterminate terms from their arguments, since any indeterminate thing is unquantifiable and untestable; similarly, any hypothesis which consists of a statement of the properties of something unquantifiable or indefinable cannot be falsified and thus cannot be said to be supported by evidence that does not falsify it. This is related to Popper's discussions of falsifiability in his works on the scientific method. The quantifiability of data collected during an experiment is central to the scientific method, since reliable conclusions can only be drawn from replicable experiments, and since in order to establish observer agreement scientists must be able to quantify experimental evidence. Immanuel Kant unwittingly proposed one answer to this question in his Critique of Pure Reason by stating that there must "exist" a "thing in itself", a thing which is the cause of phenomena, but not a phenomenon itself. But, so to speak, "approximations" of "things in themselves" crop up in many models of empirical phenomena: singularities in physics, such as gravitational singularities, certain aspects of which (e.g., their unquantifiability) can seem almost to mirror various "aspects" of the proposed "thing in itself", are generally eliminated (or attempts are made at eliminating them) in newer, more precise models of the universe; and definitions of various psychiatric disorders stem, according to philosophers who draw on the work of Michel Foucault, from a belief that something unobservable and indescribable is fundamentally "wrong" with the mind of whoever suffers from such a disorder. Proponents of Foucault's treatment of the concept of insanity would assert that one need only try to quantify various characteristics of such disorders as presented in today's Diagnostic and Statistical Manual (delusion, one of the diagnostic criteria which must be exhibited by a patient if he or she is to be considered schizophrenic, for example) in order to discover that the field of study known as abnormal psychology relies upon indeterminate concepts in defining virtually each "mental disorder" it describes. The quality that makes a belief a delusion is indeterminate to
the extent to which it is unquantifiable; arguments that delusion is determined by popular sentiment (i.e., "almost no-one believes that he or she is made of cheese, and thus that belief is a delusion") would lead to the conclusion that, for example, Alfred Wegener's assertion of continental drift was a delusion, since it was dismissed for decades after it was made.
Indeterminacy (philosophy) In examples as odd as this, the differences between two approximately-equal things may be very small indeed, and it is certainly true that they are quite irrelevant to most discussions. Acceptance of the reflexive property illustrated above has led to useful mathematical discoveries which have influenced the life of anyone reading this article on a computer. But in an examination of the possibility of the determinacy of any possible concept, differences like this are supremely relevant since that quality which could possibly make two separate things "equal" seems to be indeterminate.
Indeterminacy (philosophy) its members on an irrational basis. The less-precisely such states as "insanity" and "criminality" are defined in a society, the more likely that society is to fail to continue over time to describe the same behaviors as characteristic of those states (or, alternately, to characterize such states in terms of the same behaviors).
Current work
Richard Dawkins, who coined the term meme in the 1970s, described the concept of faith in his documentary Root of All Evil? as "the process of non-thinking". In the documentary, he used Bertrand Russell's analogy between a teapot orbiting the sun (something that cannot be observed because the brightness of the sun would obscure it even from the best telescope's view) and the object of one's faith (in this particular case, God) to explain that a highly indeterminate idea can self-replicate freely: "Everybody in the society had faith in the teapot. Stories of the teapot had been handed down for generations as part of the tradition of society. There are holy books about the teapot." [10] In Darwin's Dangerous Idea, Dennett argues against the existence of determinate meaning (in this case, of the subjective experience of vision for frogs) via an explanation of their indeterminacy in the chapter entitled "The Evolution of Meanings", in the section "The Quest for Real Meanings": "Unless there were 'meaningless' or 'indeterminate' variation in the triggering conditions of the various frogs' eyes, there could be no raw material [...] for selection for a new purpose to act upon. The indeterminacy that Fodor (and others) see as a flaw [...] is actually a prediction for such evolution [of "purpose"]. The idea that there must be something determinate that the frog's eye really means (some possibly unknowable proposition in froggish that expresses exactly what the frog's eye is telling the frog's brain) is just essentialism applied to meaning (or function). Meaning, like function on which it so directly depends, is not something determinate at its birth. [...]" Dennett argues, controversially,[11] [12] against qualia in Consciousness Explained. Qualia are attacked from several directions at once: he maintains they do not exist (or that they are too ill-defined to play any role in science, or that they are really something else, i.e. behavioral dispositions).
They cannot simultaneously have all the properties attributed to them by philosophers: incorrigible, ineffable, private, directly accessible, and so on. The multiple drafts theory is leveraged to show that facts about qualia are not definite. Critics object that one's own qualia are subjectively quite clear and distinct to oneself. The self-replicating nature of memes is a partial explanation of the recurrence of indeterminacies in language and thought. The wide influence of Platonism and Kantianism in Western philosophy can arguably be partially attributed to the indeterminacies of some of their most fundamental concepts (namely, the Idea and the Noumenon, respectively). For a given meme to exhibit replication and heritability (that is, for it to be able to make an imperfect copy of itself which is more likely to share any given trait with its "parent" meme than with some random member of the general "population" of memes), it must in some way be mutable, since memetic replication occurs by means of human conceptual imitation rather than via the discrete molecular processes that govern genetic replication. (If a statement were to generate copies of itself that didn't meaningfully differ from it, that process of copying would more accurately be described as "duplication" than as "replication", and it would be incorrect to term these statements "memes"; the same would be true if the "child" statements did not noticeably inherit a substantial proportion of their traits from their "parent" statements.) In other words, if a meme is defined roughly (and somewhat arbitrarily) as a statement (or as a collection of statements, like Foucault's "discursive formations") that inherits some, but not all, of
its properties (or elements of its definition) from its "parent" memes and which self-replicates, then indeterminacy of definition could be seen as advantageous to memetic replication, since an absolute rigidity of definition would preclude memetic adaptation. Indeterminacy in linguistics is arguably mitigated in part by the fact that languages are always changing; however, what the entire language and its collected changes continue to reflect is sometimes still considered to be indeterminate.
Criticism
Persons of faith argue that faith "is the basis of all knowledge". The Wikipedia article on faith states that "one must assume, believe, or have faith in the credibility of a person, place, thing, or idea in order to have a basis for knowledge." In this way the object of one's faith is similar to Kant's noumenon. This would seem to attempt to make direct use of the indeterminacy of the object of one's faith as evidential support of its existence: if the object of one's faith were to be proven to exist (i.e., if it were no longer of indeterminate definition, or if it were no longer unquantifiable, etc.), then faith in that object would no longer be necessary, and neither would arguments from authority such as those mentioned above; all that would be needed to prove its existence would be scientific evidence. Thus, if faith is to be considered a reliable basis for knowledge, persons of faith would seem, in effect, to assert that indeterminacy is not only necessary, but good (see Nassim Taleb).
Proponents of a deterministic universe have criticised various applications of the concept of indeterminacy in the sciences. For instance, Einstein once stated that "God does not play dice" in a succinct (but now unpopular) argument against the theory of quantum indeterminacy, which states that the actions of particles of extremely low mass or energy are unpredictable because an observer's interaction with them changes either their positions or momenta. (The "dice" in Einstein's metaphor refer to the probabilities that these particles will behave in particular ways, which is how quantum mechanics addressed the problem.) At first it might seem that a criticism could be made from a biological standpoint, in that an indeterminate idea would seem not to be beneficial to the species that holds it. A strong counterargument, however, is that not all traits exhibited by living organisms will be seen in the long term as evolutionarily advantageous, given that extinctions occur regularly and that phenotypic traits have often died out altogether; in other words, an indeterminate meme may in the long term demonstrate its evolutionary value to the species that produced it in either direction, and humans are, as yet, the only species known to make use of such concepts. It might also be argued that conceptual vagueness is an inevitability, given the limited capacity of the human nervous system: we simply do not have enough neurons to maintain separate concepts for "dog with 1,000,000 hairs", "dog with 1,000,001 hairs", and so on. But conceptual vagueness is not metaphysical indeterminacy.
See also
Stochastics
Notes
[1] Derrida, Plato's Pharmacy in Dissemination, 1972, Athlone Press, London, 1981 (http://social.chass.ncsu.edu/wyrick/debclass/pharma.htm)
[2] Nietzsche, F. On Truth and Lies (http://www.publicappeal.org/library/nietzsche/Nietzsche_various/on_truth_and_lies.htm)
[3] Nietzsche, F. Beyond Good and Evil (http://www.marxists.org/reference/archive/nietzsche/1886/beyond-good-evil/ch01.htm)
[4] Nietzsche quotes (http://www.wutsamada.com/alma/modern/nietzquo.htm)
[5] Nietzsche quote
[6] Thompson, Hunter S.
[7] Foucault, M. Madness and Civilisation (http://mchip00.nyu.edu/lit-med/lit-med-db/webdocs/webdescrips/foucault12432-des-.html)
[8] Foucault, M. The Archaeology of Knowledge
[9] Hoenisch, S. Interpretation and Indeterminacy in Discourse Analysis (http://www.criticism.com/da/da_indet.htm)
[10] Dawkins, World of Dawkins (http://www.simonyi.ox.ac.uk/dawkins/WorldOfDawkins-archive/Dawkins/Work/Articles/1999-10-04snakeoil.shtml)
[11] Lormand, E. Qualia! Now Showing at a Theatre near You (http://www-personal.umich.edu/~lormand/phil/cons/qualia.htm)
[12] De Leon, D. The Qualities of Qualia (http://www.lucs.lu.se/ftp/pub/LUCS_Studies/LUCS58.pdf)
[13] Weinberg, S. PBS interview (http://www.pbs.org/wgbh/nova/elegant/view-weinberg.html)
[14] Plank, William. The Implications of Quantum Non-Locality for the Archaeology of Consciousness. Provides an expert opinion on the relationship between Nietzsche's critique of Kant's "thing in itself" and quantum indeterminacy. (http://www.msubillings.edu/CASFaculty/Plank/THE IMPLICATIONS OF QUANTUM NON.htm)
[15] The Quantum Nietzsche, a site explaining the same ideas, also run by William Plank. (http://www.quantumnietzsche.com/)
See also
Anti-realism
Causality
Causal loop
Daniel Dennett
Definition
Deconstruction
Deterministic system (philosophy)
Empty set
Event (philosophy)
Faith
Hyle
Indeterminacy of translation
Kant
Meaning
Memetics
Nietzsche
Occam's razor
Philosophy of science
Qualia
Quantifiability
Quantum indeterminacy
Quintessence
Scientific method
Theory of everything
Thing in itself
Vagueness
Integral (spirituality)
This article is about "Integral" as a theme in spirituality. For the "Integral Theory" associated with Ken Wilber, see Integral Theory. See Integral (disambiguation) for other uses. Integral is a term applied to a wide-ranging set of developments in philosophy, psychology, religious thought, and other areas that seek interdisciplinary and comprehensive frameworks. The term is often combined with others, such as approach,[1][2] consciousness,[3] culture,[4] paradigm,[5][6] philosophy,[7][8] society,[9] theory,[10] and worldview.[3] Major themes of this range of philosophies and teachings include a synthesis of science and religion, evolutionary spirituality, and holistic programs of development for the body, mind, soul, and spirit. In some versions of integral spirituality, integration is seen to necessarily include the three domains of self, culture, and nature.[11] Integral thinkers draw inspiration from the work of Sri Aurobindo, Don Beck, Jean Gebser, Robert Kegan, Ken Wilber, and others. Some individuals affiliated with integral spirituality have claimed that there exists a loosely defined "Integral movement".[12] Others, however, have disagreed.[13] Whatever its status as a "movement", there are a variety of religious organizations, think tanks, conferences, workshops, and publications in the US and internationally that use the term integral.
The grandeur of Darwinian thought is not disputed, but it does not explain the integral evolution of man. So it is with all purely physical explanations, which do not recognise the spiritual essence of man's being.[21] [Italics added] The word integral was independently suggested by Jean Gebser (1905–1973), a Swiss phenomenologist and interdisciplinary scholar, in 1939 to describe his own intuition regarding the next state of human consciousness. Gebser was the author of The Ever-Present Origin, which describes human history as a series of mutations in consciousness. He only afterwards discovered the similarity between his own ideas and those of Sri Aurobindo and Teilhard de Chardin.[22] The idea of "Integral psychology" was first developed in the 1940s and 50s by Indra Sen (1903–1994), a psychologist, author, educator, and devotee of Sri Aurobindo and The Mother. He was the first to coin the term "Integral psychology" to describe the psychological observations he found in Sri Aurobindo's writings (which he contrasted with those of Western psychology), and he developed the themes of "Integral Culture" and "Integral Man".[23] Although these basic ideas were first articulated in the early twentieth century, the movement originates with the California Institute of Integral Studies, founded in 1968 by Haridas Chaudhuri (1913–1975), a Bengali philosopher and academic. Chaudhuri had been a correspondent of Sri Aurobindo, and developed his own perspective and philosophy. He established the California Institute of Integral Studies (originally the California Institute of Asian Studies) in 1968 in San Francisco (it became an independent organisation in 1974), and presented his own form of Integral psychology in the early 1970s.[24] Again independently, in Spiral Dynamics, Don Beck and Chris Cowan use the term integral for a developmental stage which sequentially follows the pluralistic stage.
The essential characteristic of this stage is that it continues the inclusive nature of the pluralistic mentality, yet extends this inclusiveness to those outside of the pluralistic mentality. In doing so, it accepts the ideas of development and hierarchy, which the pluralistic mentality finds difficult. Other ideas of Beck and Cowan include the "first tier" and "second tier", which refer to major periods of human development. In the late 1990s and 2000s Ken Wilber, who was influenced by both Aurobindo and Gebser, among many others, adopted the term Integral to refer to the latest revision of his own integral philosophy, which he called Integral Theory.[25] He also established the Integral Institute as a think-tank for further development of these ideas. In his book Integral Psychology, Wilber lists a number of pioneers of the integral approach, post hoc. These include Goethe, Schelling, Hegel, Gustav Fechner, William James, Rudolf Steiner, Alfred North Whitehead, James Mark Baldwin, Jürgen Habermas, Sri Aurobindo, and Abraham Maslow.[26] The adjective Integral has also been applied to Spiral Dynamics, chiefly the version taught by Don Beck, who for a while collaborated with Wilber.[27] In the Wilber movement, "Integral" when capitalized is given a further definition, being made synonymous with Wilber's AQAL Integral theory,[28] whereas "Integral Studies" refers to the broader field, including the range of integral thinkers such as Jean Gebser, Sri Aurobindo, Ken Wilber, and Ervin Laszlo.[29]
Contemporary figures
A variety of intellectuals, academics, writers, and other specialists have advanced the fields of integral thought in recent decades. Because the field is still ambiguously defined, definitions of Integral psychology and philosophy, and lists of Integral philosophers and visionaries, differ, although there are some common themes. While Wilber was the first to nominate Integral philosophers, thinkers, and visionaries, similar lists have later been proposed by others. According to John Bothwell and David Geier, among the top thinkers in the integral movement are Stanislav Grof, Fred Kofman, George Leonard, Michael Murphy, Jenny Wade, Roger Walsh, Ken Wilber, and Michael Zimmerman.[30] Australian academic Alex Burns mentions among integral theorists Jean Gebser, Clare W. Graves, Jane Loevinger
and Ken Wilber.[31] In 2007, Steve McIntosh mentioned Henri Bergson and Teilhard de Chardin along with many of the names mentioned by Wilber.[32] In the same year, the editors of What Is Enlightenment? listed as contemporary Integralists Don Beck, Allan Combs, Robert Godwin, Sally Goerner, George Leonard, Michael Murphy, William Irwin Thompson, and Wilber.[33] Gary Hampson suggested that there are six intertwined genealogical branches of Integral, based on those who first used the term: those aligned with Aurobindo, Gebser, Wilber, Gangadean, László, and Steiner (noting that the Steiner branch is via the conduit of Gidley).[34] Integral thought is claimed to provide "a new understanding of how evolution affects the development of consciousness and culture."[3] It includes areas such as business, education, medicine, spirituality, sports,[35] psychology, and psychotherapy.[36] The idea of the evolution of consciousness has also become a central theme in much of integral theory.[37] According to the Integral Transformative Practice website, integral means "dealing with the body, mind, heart, and soul."[38]
Integral psychology
Integral psychology is psychology that presents an all-encompassing, holistic approach rather than an exclusivist or reductive one. It includes lower, ordinary, and spiritual or transcendent states of consciousness. It is originally based on the Yoga psychology of Sri Aurobindo. Other important writers in the field of Integral psychology are Indra Sen,[39] Haridas Chaudhuri,[40] Ken Wilber,[41] and Brant Cortright.[42]
Integral practice
Integral practice is primarily an outgrowth of different integral theories and philosophies as they intersect with various spiritual practices, holistic health modalities, and transformative regimens associated with the New Paradigm and human potential movement. Some ways to describe integral practice are the experiential application of integral theory,[43] the "holistic disciplines we consciously employ to nurture ourselves and others, and most specifically those practices that both inspire and sustain growth in many dimensions at once,"[44] and to "address and support each aspect of life with the goal of fully realizing all levels of human potential...."[45] These self-care practices target different areas of personal development, such as physical, emotional, creative, and psychosocial, in a combined, synergistic fashion. They may have different emphases depending on the theory that supports each approach, but most include a spiritual, introspective or meditative component as a major feature. The objectives of integral practice could be loosely defined as well-being and wholeness, with, in most cases, an underlying imperative of personal and even societal transformation and evolution.[46] [47] There is also the question of how to provide necessary customization and individualization of practice, while avoiding a "cafeteria model" that encourages practitioners to choose components according to their own strengths, rather than what is necessary for integral growth and development.[48] The following can be considered examples of different modalities of integral practice, listed in approximate order of inception: Sri Aurobindo's Integral Yoga; Integral Transformative Practice (ITP), created by George Leonard and Michael Murphy;[49] Holistic Integration, created by Ramon Albarada and Marina Romero;[50] Integral Lifework, created by T. Collins Logan;[51] and Integral Life Practice (ILP), based on Ken Wilber's AQAL framework.
See also
Cultural creatives
Integral psychology
Integral Theory
Integral yoga
Integrative learning
Post-postmodernism
Quantum mysticism
Relationship between religion and science
Remodernism
Transmodernity
Integral humanism
References
[1] An Essential Introduction to the Integral Approach (http://integrallife.com/learn/overview/essential-introduction-integral-approach), IntegralLife.com
[2] Josh Floyd, Alex Burns, and Jose Ramos, A Challenging Conversation on Integral Futures: Embodied Foresight & Trialogues (http://foresightinternational.com.au/catalogue/resources/Integral_Futures.pdf), Journal of Futures Studies, November 2008, 13(2): 69-86; p. 69
[3] Steve McIntosh, Integral Consciousness and the Future of Evolution, p. 2
[4] Integral Culture: A Guide to the Emerging Integral Culture (http://www.integralculture.org/)
[5] Vincent Jeffries, "The integral paradigm: The truth of faith and the social sciences", The American Sociologist, Volume 30, Number 4, December 1999, pp. 36-55
[6] Integral Paradigm 101 (http://www.audettesophia.com/Integral.html)
[7] Haridas Chaudhuri, Being, Evolution, and Immortality; an Outline of Integral Philosophy, Theosophical Publishing House, 1974
[8] Steve McIntosh, Integral Consciousness and the Future of Evolution, Paragon House, St Paul, Minnesota, 2007, ISBN 978-1-55778-867-2, pp. 2-3 and ch. 7, "The Founders of Integral Philosophy"
[9] Goerner, Sally J., After the Clockwork Universe: The Emerging Science and Culture of Integral Society, Floris, Edinburgh, 1999
[10] 1st Biennial Integral Theory Conference (http://www.integraltheoryconference.org/)
[11] Wilber, Ken. "Announcing the Formation of Integral Institute" (http://wilber.shambhala.com/html/books/formation_int_inst.cfm), Ken Wilber Online. Retrieved via Wilber.Shambhala.com on Feb. 5, 2010.
[12] Patten, Terry. "Integral Heart Newsletter #1: Exploring Big Questions in the Integral World" (http://www.integralheart.com/node/150), Integral Heart Newsletter. Retrieved via IntegralHeart.com on Jan. 13, 2010.
[13] Kazlev, Alan. "Redefining Integral" (http://www.integralworld.net/kazlev13.html), Integral World. Retrieved via IntegralWorld.net on Jan. 13, 2010.
[14] The Synthesis of Yoga, see Biographical Notes to the 3rd Pondicherry edition
[15] Ram Shankar Misra, The Integral Advaitism of Sri Aurobindo, Banaras: Banaras Hindu University, 1957
[16] Haridas Chaudhuri, Frederic Spiegelberg, The Integral Philosophy of Sri Aurobindo: A Commemorative Symposium, Allen & Unwin, 1960
[17] Brant Cortright, Integral Psychology: Yoga, Growth, and Opening the Heart, SUNY, 2007, ISBN 0791470717, pp. 5-6
[18] Introduction, pp. 38f., in Pitirim Aleksandrovich Sorokin, On the Practice of Sociology (edited by Barry V. Johnston), University of Chicago Press, 1998, ISBN 0226768287, ISBN 9780226768281
[19] Steve McIntosh, Integral Consciousness and the Future of Evolution, p. 180
[20] Molz, M., & Gidley, J. (2008). "A transversal dialogue on integral education and planetary consciousness: Markus Molz speaks with Jennifer Gidley". Integral Review: A Transdisciplinary and Transcultural Journal for New Thought, Research and Praxis, 6, p. 51.
[21] Steiner, R. (1928/1978). An Esoteric Cosmology (GA 94), (E. Schure, Trans.) [Eighteen lectures delivered in Paris, France, May 25 to June 14, 1906] [Electronic version]. Original work published in French in 1928.
[22] Ever-Present Origin, p. 102, note 4
[23] Aster Patel, "The Presence of Dr Indra Senji", SABDA - Recent Publications, November 2003 (http://sabda.sriaurobindoashram.org/pdf/news/nov2003.pdf)
[24] Haridas Chaudhuri, "Psychology: Humanistic and Transpersonal", Journal of Humanistic Psychology, and The Evolution of Integral Consciousness; Bahman Shirazi, "Integral psychology, metaphors and processes of personal integration" in Cornelissen (ed.), Consciousness and Its Transformation, online version (http://www.saccs.org.in/TEXTS/IP2/IP2-1.2-.htm)
[25] Daryl S. Paulson, "Wilber's Integral Philosophy: A Summary and Critique", Journal of Humanistic Psychology, 2008; 48: 364-388
[26] Ken Wilber, Integral Psychology, Shambhala, 2000, p. 78
[27] Christopher Cooke and Ben Levi, Spiral Dynamics Integral (http://www.dialogue.org/Documents/SD_Integral.pdf)
[28] Matt Rentschler, AQAL Glossary (http://aqaljournal.integralinstitute.org/public/Pdf/AQAL_Glossary_01-27-07.pdf), p. 15
[29] Sean Esbjörn-Hargens, An Overview of Integral Theory - An All-Inclusive Framework for the 21st Century, p. 22, note 4, Integral Institute Resource Paper No. 1, 2009
[30] John Bothwell and David Geier, Score! Power Up Your Game, Business and Life by Harnessing the Power of Emotional Intelligence, p. 144
[31] Josh Floyd, Alex Burns, and Jose Ramos, A Challenging Conversation on Integral Futures: Embodied Foresight & Trialogues (http://foresightinternational.com.au/catalogue/resources/Integral_Futures.pdf), Journal of Futures Studies, November 2008, 13(2): 69-86; p. 71
[32] Steve McIntosh, Integral Consciousness and the Future of Evolution, ch. 7
[33] "The Real Evolution Debate", What Is Enlightenment?, no. 35, January-March 2007, p. 100
[34] Gary Hampson, "Integral Re-views Postmodernism: The Way Out Is Through", Integral Review 4, 2007, pp. 13-14, http://www.integral-review.org
[35] John Bothwell and David Geier, Score! Power Up Your Game, Business and Life by Harnessing the Power of Emotional Intelligence, Morgan James Publishing, 2006, ISBN 1933596627, p. 144
[36] Arthur Freeman, Cognition and Psychotherapy, Springer, 2004, ISBN 0826122256, p. 22
[37] Jennifer Gidley, The Evolution of Consciousness as a Planetary Imperative: An Integration of Integral Views (http://integral-review.org/current_issue/documents/Gidley, Evolution of Consciousness as Planetary Imperative 5, 2007.pdf), Integral Review, no. 5, 2007, p. 15
[38] ITP International Welcome! (http://www.itp-life.com/)
[39] Indra Sen, Integral Psychology: The Psychological System of Sri Aurobindo, Pondicherry, India: Sri Aurobindo Ashram Trust, 1986
[40] Chaudhuri, Haridas (1975). "Psychology: Humanistic and transpersonal". Journal of Humanistic Psychology, 15(1), 7-15.
[41] Ken Wilber, Integral Psychology: Consciousness, Spirit, Psychology, Therapy, Shambhala, ISBN 1-57062-554-9
[42] Brant Cortright, Integral Psychology: Yoga, Growth, and Opening the Heart, SUNY, 2007, ISBN 0791470717
[43] Ken Wilber, Terry Patten, Adam Leonard & Marco Morelli, Integral Life Practice, ISBN 9781590304679, p. 6
[44] T. Collins Logan, True Love: Integral Lifework Theory & Practice, ISBN 9780977033638, p. 3
[45] Elliott Dacher, Integral Health: The Path to Human Flourishing, ISBN 9781591201908, p. 118
[46] George Leonard and Michael Murphy, The Life We Are Given, ISBN 0874777925, p. 16
[47] Sri Aurobindo, The Integral Yoga, ISBN 9780941524766, p. 10
[48] Jorge Ferrar, "Integral Transformative Practice, A Participatory Perspective", Journal of Transpersonal Psychology, 2003, Vol. 35, No. 1
[49] http://www.itp-life.com
[50] http://www.estel.es/eng/
[51] http://www.integrallifework.com
External links
Academic programs
California Institute of Integral Studies (http://www.ciis.edu/), offers programs in integral studies.
Fielding Graduate University (http://www.fielding.edu/), offers programs in integral studies.
John F. Kennedy University, MA in Integral Theory (http://www.jfku.edu/integraltheory/), an accredited online Master of Arts degree in Integral Theory.
Conferences
Integral Theory Conference (http://www.integraltheoryconference.org/), the official site for the biennial Integral Theory Conference held at JFK University.
Integral Leadership in Action (http://www.integralleadershipinaction.com/), the official site for the 4th annual conference on integral conscious leadership.
Organizations
Integral Institute (http://www.integralinstitute.com/), a non-profit academic think tank.
Integral Research Center (http://www.integralresearchcenter.org/), a grant-giving mixed-methods research center based on Integral Methodological Pluralism.
Publications
Conscious Evolution (http://www.cejournal.org/), essays and articles about the multidisciplinary, integral study of consciousness and the Kosmos.
Integral Leadership Review (http://www.integralleadershipreview.com), the site of the online publications Integral Leadership Review and Leading Digest.
Integral Life (http://www.integrallife.com/), online community website that is the sponsoring organization of Integral Institute, a non-profit academic think tank.
Integral Review Journal (http://www.integral-review.org/), an online peer-reviewed journal.
Integral World (http://www.integralworld.net/), website and online resource maintained by Frank Visser.
Journal of Integral Theory and Practice (http://www.integraljournals.org/), a peer-reviewed academic journal founded in 2003 with its first issue appearing in 2006.
Kosmos Journal (http://www.kosmosjournal.org/), founded in 2001, a leading international journal for planetary citizens committed to the birth and emergence of a new planetary culture and civilization.
World Futures: Journal of General Evolution (http://www.tandf.co.uk/journals/titles/02604027.html), an academic journal devoted to promoting evolutionary models, theories, and approaches within and among the natural and the social sciences.
Integral Theory
This article is about "Integral Theory" as an emerging area of discourse. For "integral" as a term in spirituality, see Integral (spirituality). See Integral (disambiguation) for other uses. Integral Theory is an area of discourse emerging from the theoretical psychology and philosophy of Ken Wilber, a body of work that has evolved in phases from a transpersonal psychology[1] synthesizing Western and non-Western understandings of consciousness with notions of cosmic, biological, human, and divine evolution[2] into an emerging field of scholarly research focused on the complex interactions of ontology, epistemology, and methodology.[3] It has been claimed to offer a "Theory of Everything",[4] described as a "post-metaphysical"[5] worldview and a "trans-path path"[6] for holistic development; however, the discourse has received limited acceptance in mainstream academia[7] and has been sharply criticized by some for insularity and lack of rigor.[8] Integral Theory (or integral approach,[9][10] consciousness,[11] paradigm,[12] philosophy,[11] society,[13] or worldview[11]) has been applied in a variety of different domains: Integral Art, Integral Ecology, Integral Economics, Integral Politics, Integral Psychology, Integral Spirituality, and others. The first interdisciplinary academic conference on Integral Theory took place in 2008.[14] Integral Theory is said to be situated within Integral studies, described as an emerging interdisciplinary field of discourse.[3] Researchers have also developed applications in areas such as leadership, coaching, and organization development.[15] The Integral Institute was co-founded as a non-profit "think-and-practice tank"[16] by Ken Wilber and others in 2001,[17] to promote the theory and its practice.
While there is no single organization defining the nature of Integral Theory, some have claimed that a loosely defined "Integral movement" has appeared, expressed in a variety of conferences, workshops, publications, and blogs focused on themes in integral thought, such as spiritual evolution, and in academic developmental studies programs.[18] Others, however, have denied the existence of a single Integral movement, arguing that such claims conflate radically different phenomena.[19] The project of "The Integral University in Paris" was launched on 28 February 2008. So far, the Integral University (Université Intégrale in French) in Paris refers to a cycle of conferences organized by the French chapter of the Club of Budapest, based on an idea put forward by Michel Saloff Coste. It is not an institute as such, as it is still in its developing stages.[20]
History
Although the first use of the term integral in a spiritual context was in the nineteenth century, Integral Theory's most recent antecedents include the California Institute of Integral Studies, founded in 1968 by Haridas Chaudhuri (1913–1975), a Bengali philosopher and academic. Chaudhuri had been a correspondent of Sri Aurobindo, and developed his own perspective and philosophy. He established the California Institute of Integral Studies (originally the California Institute of Asian Studies) in 1968 in San Francisco (it became an independent organisation in 1974), and presented his own form of Integral psychology in the early 1970s.[21] Don Beck and Chris Cowan use the term integral for a developmental stage which sequentially follows the pluralistic stage. The essential characteristic of this stage is that it continues the inclusive nature of the pluralistic mentality, yet extends this inclusiveness to those outside of the pluralistic mentality. In doing so, it accepts the ideas of development and hierarchy, which the pluralistic mentality finds difficult. Other ideas of Beck and Cowan include the "first tier" and "second tier", which refer to major periods of human development. In the late 1990s and 2000s Ken Wilber, who was influenced by both Aurobindo and Gebser, among many others, adopted the term Integral to refer to the latest revision of his own integral philosophy, which he called Integral Theory.[22] He also established the Integral Institute as a think-tank for further development of these ideas. In his book Integral Psychology, Wilber lists a number of pioneers of the integral approach, post hoc. These include Goethe, Schelling, Hegel, Gustav Fechner, William James, Rudolf Steiner, Alfred North Whitehead, James Mark Baldwin, Jürgen Habermas, Sri Aurobindo, and Abraham Maslow.[23] The adjective Integral has also been applied to Spiral Dynamics, chiefly the version taught by Don Beck, who for a while collaborated with Wilber.[24]
In the movement associated with Wilber, "Integral" when capitalized is given a further definition, being made synonymous with Wilber's AQAL Integral theory,[25] whereas "Integral Studies" refers to the broader field, including the range of integral thinkers such as Jean Gebser, Sri Aurobindo, Ken Wilber, Rudolf Steiner, Edgar Morin, and Ervin Laszlo.[26][27]
Methodologies
AQAL, pronounced "ah-qwul", is a widely used framework in Integral Theory. It is also called the Integral Operating System (IOS), among other synonyms. The term stands for "all quadrants, all levels, all lines, all states, and all types." It is conceived by some integral theorists to be one of the most comprehensive approaches to reality, a metatheory that attempts to explain how academic disciplines and every form of knowledge and experience fit together coherently.[28] In addition to AQAL, scholars have proposed other methodologies for integral studies. Bonnitta Roy has introduced a "Process Model" of integral theory, combining Western process philosophy, Dzogchen ideas, and Wilberian theory. She distinguishes between Wilber's concept of perspective and the Dzogchen concept of view, arguing that Wilber's view is situated within a framework or structural enfoldment which constrains it, in contrast to the Dzogchen intention of being mindful of view.[29] Wendelin Küpers, Ph.D., a German scholar specializing in phenomenological research, has proposed that an "integral pheno-practice" based on aspects of the work of Maurice Merleau-Ponty can provide the basis of an "adequate phenomenology" useful in integral research. His proposed approach claims to offer a more inclusive and coherent approach than classical phenomenology, including procedures and techniques called epoché, bracketing, reduction, and free variation.[30]
Contemporary figures
A variety of intellectuals, academics, writers, and other specialists have advanced integral theory in recent decades.
Themes
Integral art
In the context of Integral Theory, Integral art can be defined as art that reaches across multiple quadrants and levels. It may also refer to art that was created by someone who thinks or acts in an integral way.
Integral ecology
Integral ecology is a multi-disciplinary approach pioneered by Michael E. Zimmerman and Sean Esbjörn-Hargens. It applies Wilber's integral theory (especially the eight methodological perspectives) to the field of environmental studies and ecological research.[31][32][33][34]
Integral economics
Integral economics is a paradigmatic methodology emanating from integral thought and theory as applied to economics. This 'new' praxis offers a structural framework for addressing and resolving problems that the Integral Institute, in its mission statement,[35] has associated with evolutionary forms of capitalism and with the culture wars in political, religious, and scientific domains. These efforts thus afford "theorists and developmental psychologists a needed and useful early look at the formal, dynamic process by which the evolution of higher-order development proceeds" in relation to an integral model.[36]
Integral leadership
As the term is often used, Integral leadership is a style of leadership that attempts to integrate other major styles of leadership. In "style" terms, integral leadership is an approach to influence that involves understanding 'where people are' (their mindsets, values, goals, capabilities and situational dynamics) and then interacting with them in a way that is appropriate and helpful given 'where they are'.[37]
Integral politics
Integral politics is an endeavor to develop a balanced and comprehensive politics around the principles of integral studies. Theorists including Don Beck, Lawrence Chickering, Jack Crittenden, David Sprecher, and Ken Wilber have applied concepts such as the AQAL methodology of Integral Theory to issues in political philosophy and applications in government.[38]
Integral psychology
Integral psychology is originally based on the Yoga psychology of Sri Aurobindo.[39] In the context of Integral Theory, it applies Wilber's AQAL and related themes to the field of psychology.[40] For Wilber, Integral psychology is psychology that is inclusive or holistic rather than exclusivist or reductive, and values and integrates multiple explanations and methodologies.[41][42]
universities as an indication of the field's emergence.[7] Jennifer Gidley, a Research Fellow at RMIT University, Melbourne, points to the need in the 21st century to create conceptual bridges between integral theory, philosophy and pedagogy and other related philosophical, theoretical and pedagogical approaches. She undertook a comparative study of key evolution-of-consciousness thinkers, focusing particularly on the integral theoretic narratives of Rudolf Steiner, Jean Gebser, and Ken Wilber (but also with due reference to the seminal writings of Sri Aurobindo and those of contemporary European integral theorists such as Ervin Laszlo and Edgar Morin). She noted the conceptual breadth of Wilber's integral evolutionary narrative in transcending both scientism and epistemological isolationism. She also drew attention to some limitations of Wilber's integral project, notably his undervaluing of Gebser's actual text, and the substantial omission of the pioneering contribution of Steiner, who, as early as 1904, wrote extensively about the evolution of consciousness, including the imminent emergence of a new stage.[55] As a contribution to the knowledge base of integral education, Gidley has also undertaken a hermeneutic comparative analysis of Rudolf Steiner's educational approach and Wilber's Integral Operating System.[56]
See also
Integral (spirituality) Ken Wilber Post-postmodernism Systems science
References
[1] Grof, Stanislav. "A Brief History of Transpersonal Psychology" (http:/ / www. stanislavgrof. com/ pdf/ A Brief History of Transpersonal Psychology-Grof. pdf), StanislavGrof.com, p. 11. Retrieved via StanislavGrof.com on Jan. 13, 2010.
[2] Zimmerman, Michael E. (2005). "Ken Wilber (1949-)" (http:/ / www. colorado. edu/ ArtsSciences/ CHA/ profiles/ zimmpdf/ Ken_Wilber_Rel_and_Nat. pdf), The Encyclopedia of Religion and Nature, p. 1743. London: Continuum.
[3] Esbjörn-Hargens, Sean (2006). "Editor's Inaugural Welcome," (http:/ / aqaljournal. integralinstitute. org/ ) AQAL: Journal of Integral Theory and Practice, p. v. Retrieved Jan. 7, 2010.
[4] Macdonald, Copthorne. "(Review of) A Theory of Everything: An Integral Vision for Business, Politics, Science, and Spirituality by Ken Wilber," (http:/ / www. wisdompage. com/ toerevw. html) Integralis: Journal of Integral Consciousness, Culture, and Science, Vol. 1, No. 0. Retrieved via WisdomPage.com on Jan. 7, 2010.
[5] Editors. "God's Playing a New Game: The Guru & The Pandit: Andrew Cohen & Ken Wilber in dialogue," (http:/ / www. andrewcohen. org/ andrew/ post-metaphysics. asp) What Is Enlightenment?, Issue 33. Retrieved via AndrewCohen.com on Jan. 7, 2010.
[6] Integral Institute. "Integral Spiritual Center: A Trans-Path Path to Tomorrow," (http:/ / isc. integralinstitute. org/ Public/ static/ about. aspx). Retrieved via IntegralInstitute.org on Jan. 13, 2010.
[7] Forman, Mark D. and Esbjörn-Hargens, Sean. "The Academic Emergence of Integral Theory," (http:/ / www. integralworld. net/ forman-hargens. html) Integral World. Retrieved via IntegralWorld.net on Jan. 7, 2010.
[8] Visser, Frank. "Assessing Integral Theory: Opportunities and Impediments," (http:/ / www. integralworld. net/ visser26. html) Integral World. Retrieved via IntegralWorld.net on Jan. 7, 2010.
[9] Fuhs, Clint. "An Essential Introduction to the Integral Approach" (http:/ / integrallife. com/ learn/ overview/ essential-introduction-integral-approach) Integral Life. Retrieved via IntegralLife.com on Jan. 13, 2010.
[10] Floyd, Josh, Burns, Alex, and Ramos, Jose (2008). A Challenging Conversation on Integral Futures: Embodied Foresight & Trialogues (http:/ / foresightinternational. com. au/ catalogue/ resources/ Integral_Futures. pdf), Journal of Futures Studies, November 2008, Vol. 13, No. 2, p. 69. Retrieved via ForesightInternational.com.au on January 10, 2010.
[11] McIntosh, Steve (2007). Integral Consciousness and the Future of Evolution, St. Paul, Minn.: Paragon House, pp. 2-3 and Chapter 7. ISBN 978-1-55778-867-2.
[12] Ross, Sara, Fuhr, Reinhard, et al. (2005). "Integral Review and its Editors," (http:/ / integral-review. org/ ) Integral Review, Issue 1, 2005. Retrieved Jan. 7, 2010.
[13] Goerner, Sally J. (2007). After the Clockwork Universe: The Emerging Science and Culture of Integral Society, Chapel Hill, NC: Triangle Center for Complex Systems.
[14] JFK University and Integral Institute. "Integral Theory in Action: Serving Self, Other & Kosmos," (http:/ / www. integraltheoryconference. org/ sites/ default/ files/ ITC2008Brochure. pdf/ ) Retrieved via IntegralTheoryConference.com on Jan. 7, 2010.
[15] Editors. "About Integral Leadership Review (ILR)," (http:/ / www. integralleadershipreview. com/ history-and-mission. php). Retrieved via IntegralLeadershipReview.com on Jan. 7, 2010.
[16] JFK University and Integral Institute. "IntegralTheoryConference.com," (http:/ / www. integraltheoryconference. org/ conference-hosts) IntegralTheoryConference.com. Retrieved via IntegralTheoryConference.com on Jan. 13, 2010.
[17] Asian Foresight Institute. "Ken Wilber & Integral Thinking," (http:/ / www. asianforesightinstitute. org/ index. php/ eng/ Links/ Ken-Wilber-Integral-Thinking) AsianForesightInstitute.org. Retrieved Jan. 13, 2010.
[18] Patten, Terry. "Integral Heart Newsletter #1: Exploring Big Questions in the Integral World," (http:/ / www. integralheart. com/ node/ 150) Integral Heart Newsletter. Retrieved via IntegralHeart.com on Jan. 13, 2010.
[19] Kazlev, Alan. "Redefining Integral," (http:/ / www. integralworld. net/ kazlev13. html) Integral World. Retrieved via IntegralWorld.net on Jan. 13, 2010.
[20] (http:/ / www. integralleadershipreview. com/ archives/ 2010-01/ 2010-01notes_The_Integral-University_in_Paris. php)
[21] Haridas Chaudhuri, "Psychology: Humanistic and Transpersonal", Journal of Humanistic Psychology, and The Evolution of Integral Consciousness; Bahman Shirazi, "Integral psychology, metaphors and processes of personal integration" in Cornelissen (ed.) Consciousness and Its Transformation, online version (http:/ / www. saccs. org. in/ TEXTS/ IP2/ IP2-1. 2-. htm)
[22] Daryl S. Paulson, "Wilber's Integral Philosophy: A Summary and Critique", Journal of Humanistic Psychology 2008; 48: 364-388.
[23] Ken Wilber, Integral Psychology, Shambhala, 2000, p. 78.
[24] Christopher Cooke and Ben Levi, Spiral Dynamics Integral (http:/ / www. dialogue. org/ Documents/ SD_Integral. pdf)
[25] Matt Rentschler, AQAL Glossary (http:/ / aqaljournal. integralinstitute. org/ public/ Pdf/ AQAL_Glossary_01-27-07. pdf), p. 15.
[26] Sean Esbjörn-Hargens, An Overview of Integral Theory - An All-Inclusive Framework for the 21st Century, p. 22 note 4, Integral Institute Resource Paper No. 1, 2009.
[27] Gidley, J. An Other View of Integral Futures: De/reconstructing the IF Brand (http:/ / rmit. academia. edu/ JenniferGidley/ Papers) Futures: The journal of policy, planning and futures studies, 2010, Volume 42, Issue 4: 125-133.
[28] Wilber, Ken. "AQAL Glossary," (http:/ / aqaljournal. integralinstitute. org/ public/ Pdf/ AQAL_Glossary_01-27-07. pdf) "Introduction to Integral Theory and Practice: IOS Basic and the AQAL Map," Vol. 1, No. 3. Retrieved on Jan. 7, 2010.
[29] Roy, Bonnitta (2006). "A Process Model of Integral Theory," (http:/ / integral-review. org/ documents/ Kupers, Phenomenology Vol. 5 No. 1. pdf) Integral Review, 3, 2006. Retrieved on Jan. 10, 2010.
[30] Küpers, Wendelin. "The Status and Relevance of Phenomenology for Integral Research: Or Why Phenomenology is More and Different than an 'Upper Left' or 'Zone #1' Affair," (http:/ / integral-review. org/ documents/ Kupers, Phenomenology Vol. 5 No. 1. pdf) Integral Review, June 2009, Vol. 5, No. 1. Retrieved on Jan. 10, 2010.
[31] Zimmerman, M. (2005). Integral Ecology: A Perspectival, Developmental, and Coordinating Approach to Environmental Problems. World Futures: The Journal of General Evolution 61, nos. 1-2: 50-62.
[32] Esbjörn-Hargens, S. (2008). Integral Ecological Research: Using IMP to Examine Animals and Sustainability, in Journal of Integral Theory and Practice Vol 3, No. 1.
[33] Esbjörn-Hargens, S. & Zimmerman, M. E. (2008). Integral Ecology, in Callicott, J. B. & Frodeman, R. (Eds.) Encyclopedia of Environmental Ethics and Philosophy. New York: Macmillan Library Reference.
[34] Sean Esbjörn-Hargens and Michael E. Zimmerman, Integral Ecology: Uniting Multiple Perspectives on the Natural World, Integral Books (2009). ISBN 1590304667.
[35] http:/ / www. integralinstitute. org/ ?q=node/ 1
[36] Kevin J. Bowman, Integral Neoclassical Economic Growth (http:/ / web. augsburg. edu/ ~bowmank/ SolowAQAL. pdf), as submitted to AQAL: Journal of Integral Theory and Practice, June 27, 2008.
[37] Küpers, W. & Volckmann, R. (2009). "A Dialogue on Integral Leadership" (http:/ / www. integralleadershipreview. com/ archives-2009/ 2009-08/ 2009-08-dialogue-kupers-volckmann. php). Integral Leadership Review, Volume IX, No. 4 - August 2009. Retrieved on October 23, 2010.
[38] Ken Wilber (2000). A Theory of Everything: An Integral Vision for Business, Politics, Science and Spirituality, p. 153. Boston: Shambhala Publications. ISBN 1570628556.
[39] Indra Sen, Integral Psychology: The Psychological System of Sri Aurobindo, Pondicherry, India: Sri Aurobindo Ashram Trust, 1986.
[40] Ken Wilber, Integral Psychology: Consciousness, Spirit, Psychology, Therapy, Shambhala. ISBN 1-57062-554-9.
[41] Wilber, K., 1997, An integral theory of consciousness (http:/ / www. imprint. co. uk/ Wilber. htm); Journal of Consciousness Studies, 4 (1), pp. 71-92.
[42] Esbjörn-Hargens, S., & Wilber, K. (2008). Integral Psychology, in The Corsini Encyclopedia of Psychology. 4th Edition. New York: John Wiley and Sons.
[43] An Other View of Integral Futures: De/reconstructing the IF Brand (http:/ / rmit. academia. edu/ JenniferGidley/ Papers) Futures: The journal of policy, planning and futures studies, 2010, Volume 42, Issue 4: 125-133.
[44] Integral Research Center. "References of M.A. Theses & Ph.D. Dissertations Using Integral Theory," (http:/ / www. integralresearchcenter. org/ sites/ default/ files/ Dissertations-Theses. pdf) IntegralResearchCenter.org (2009-5-28). Retrieved on Jan. 7, 2010.
[45] See, for example: John J. Gibbs, et al. "Criminology and the Eye of Spirit: An Introduction and Application to the Thoughts of Ken Wilber" (https:/ / www. iup. edu/ assets/ 0/ 347/ 361/ 1471/ 1499/ de92671ab3f9438689408972aa9e3dfc. pdf), Journal of Contemporary Criminal Justice. 2000. 16; 99.
[46] Ron Cacioppe, et al. "Adjusting blurred visions: A typology of integral approaches to organisations", Journal of Organizational Change Management. 2005. Vol. 18, No. 3, p. 230-246.
[47] Daryl S. Paulson, PhD. "Wilber's Integral Philosophy: A Summary and Critique", Journal of Humanistic Psychology, 2008. Vol. 48, No. 3, 364-388.
[48] Olen Gunnlaugson. "Toward Integrally Informed Theories of Transformative Learning", Journal of Transformative Education, Vol. 3, No. 4, 331-353.
[49] Chris C. Stewart. "Humanicide: From Myth to Risk", Journal of Futures Studies, May 2005, 9(4): 15-28.
[50] Gary P. Hampson. "Integral Re-views Postmodernism: The Way Out Is Through" (http:/ / integral-review. org/ documents/ Hampson, Integral Re-views Postmodernism 4, 2007. pdf), Integral Review, Vol. 4, p. 108-173. Retrieved on 2010-1-8.
[51] Editors of KenWilber.com. "Meta-Genius: A Celebration of Ken's Writings (Part 1)," (http:/ / www. kenwilber. com/ Writings/ PDF/ Metagenius-part1. pdf) KenWilber.com, accessed 2010-1-10.
[52] Visser, Frank. "Critics on Ken Wilber," (http:/ / www. integralworld. net/ criticism. html) IntegralWorld.net. Retrieved on Jan. 10, 2010.
[53] Frank Visser. "A Spectrum of Wilber Critics," (http:/ / www. integralworld. net/ visser11. html) IntegralWorld.net, accessed 2010-1-10.
[54] Smith, Andrew P. "Contextualizing Ken," (http:/ / www. integralworld. net/ smith20. html) IntegralWorld.net. Retrieved on Jan. 7, 2010.
[55] Gidley, J. The Evolution of Consciousness as a Planetary Imperative: An Integration of Integral Views (http:/ / integral-review. org/ documents/ Gidley, Evolution of Consciousness as Planetary Imperative 5, 2007. pdf), Integral Review: A Transdisciplinary and Transcultural Journal for New Thought, Research and Praxis, 2007, Issue 5, p. 4-226.
[56] Gidley, J. Educational Imperatives of the Evolution of Consciousness: The Integral Visions of Rudolf Steiner and Ken Wilber (http:/ / rmit. academia. edu/ JenniferGidley/ Papers), The International Journal of Children's Spirituality. 12 (2): 170-135.
External links
Academic programs
California Institute of Integral Studies (http://www.ciis.edu/), offers programs in integral studies.
Fielding Graduate University (http://www.fielding.edu/), offers programs in integral studies.
John F. Kennedy University, MA in Integral Theory (http://www.jfku.edu/integraltheory/), an accredited online Master of Arts degree in Integral Theory.
Conferences
Integral Theory Conference (http://www.integraltheoryconference/), the official site for the biennial Integral Theory Conference held at JFK University.
Integral Leadership in Action (http://www.integralleadershipinaction.com/), the official site for the 4th annual conference on integral conscious leadership.
Organizations
Integral Institute (http://www.integralinstitute.com/), a non-profit academic think tank.
Integral Research Center (http://www.integralresearchcenter.org/), a grant-giving mixed-methods research center based on Integral Methodological Pluralism.
Publications
Conscious Evolution (http://www.cejournal.org/), essays and articles about the multidisciplinary, integral study of consciousness and the Kosmos.
Integral Leadership Review (http://www.integralleadershipreview.com), the site of the online publications Integral Leadership Review and Leading Digest.
Integral Life (http://www.integrallife.com/), online community website that is the sponsoring organization of Integral Institute, a non-profit academic think tank.
Integral Review Journal (http://www.integral-review.org/), an online peer-reviewed journal.
Integral World (http://www.integralworld.net/), website and online resource maintained by Frank Visser.
Journal of Integral Theory and Practice (http://www.integraljournals.org/), a peer-reviewed academic journal founded in 2003 with its first issue appearing in 2006.
Kosmos Journal (http://www.kosmosjournal.org/), founded in 2001, a leading international journal for planetary citizens committed to the birth and emergence of a new planetary culture and civilization.
World Futures: Journal of General Evolution (http://www.tandf.co.uk/journals/titles/02604027.html), an academic journal devoted to promoting evolutionary models, theories and approaches within and among the natural and the social sciences.
Integral ecology
Integral ecology is an emerging field that applies Ken Wilber's integral theory to environmental studies and ecological research. The field was pioneered in the late 1990s by integral theorist Sean Esbjörn-Hargens and environmental philosopher Michael E. Zimmerman.
Teachings
Integral ecology integrates over 80 schools of ecology and 70 schools of environmental thought. It integrates these approaches by recognizing that environmental phenomena are the result of an observer using a particular method of observation to observe some aspect of nature. This postmetaphysical formula is summarized as Who (the observer) x How (the method of observation) x What (that which is observed). Integral ecology uses a framework of eight ecological worldviews (e.g., eco-manager, eco-holist, eco-radical, eco-sage), eight ecological modes of research (e.g., phenomenology, ethnomethodology, empiricism, systems theory), and four terrains (i.e., experience, behaviors, cultures, and systems). See the table below for an overview of a few of the schools of ecology that integral ecology weaves together:
Terrain of Experiences: Feminist Ecology, Psychoanalytic Ecology, Deep Ecology, Ecopsychology, Romantic Ecology
Terrain of Cultures: Ethno-Ecology, Linguistic Ecology, Process Ecology, Information Ecology, Spiritual Ecology
Terrain of Behaviors: Chemical Ecology, Cognitive Ecology, Behavioral Ecology, Mathematical Ecology, Acoustic Ecology
Terrain of Systems: Paleo-Ecology, Historical Ecology, Political Ecology, Industrial Ecology, Social Ecology
Integral ecology is defined as the mixed-methods study of the subjective and objective aspects of organisms in relationship to their intersubjective and interobjective environments. As a result, integral ecology does not require a new definition of ecology so much as it provides an integral interpretation of the standard definition of ecology, in which organisms and their environments are recognized as having interiority. Integral ecology also examines developmental stages in both nature and humankind, including how nature shows up to people operating from differing worldviews. Key integrative figures drawn on in integral ecology include Thomas Berry, Edgar Morin, Aldo Leopold, and Stan Salthe.
Publications
Articles
Zimmerman, M. (1994). Contesting Earth's Future: Radical Ecology and Postmodernity. Berkeley: University of California Press.
Zimmerman, M. (1996). A Transpersonal Diagnosis of the Ecological Crisis. ReVision: A Journal of Consciousness and Transformation 18, no. 4: 38-48.
Zimmerman, M. (2000). Possible Political Problems of Earth-Based Religiosity. In Beneath the Surface: Critical Essays in the Philosophy of Deep Ecology, edited by E. Katz, A. Light, and D. Rothenberg, 169-94. Cambridge, MA: MIT Press.
Zimmerman, M. (2001). Ken Wilber's Critique of Ecological Spirituality. In Deep Ecology and World Religions, edited by D. Barnhill and R. Gottlieb, 243-69. Albany, NY: SUNY Press.
Hargens, S. (2002). Integral development: Taking the Middle Path Towards Gross National Happiness [1], in Journal of Bhutan Studies Vol 6, Summer, pp. 24-87.
Zimmerman, M. (2004). Humanity's Relation to Gaia: Part of the Whole, or Member of the Community? The Trumpeter 20, no. 1.
Zimmerman, M. (2005). Integral Ecology: A Perspectival, Developmental, and Coordinating Approach to Environmental Problems. World Futures: The Journal of General Evolution 61, nos. 1-2: 50-62.
Esbjörn-Hargens, S. (2005). Integral Ecology: The What, Who, and How of Environmental Phenomena, in World Futures Vol 61, No. 1-2, pp. 5-49.
Hochachka, G. (2005). Developing Sustainability, Developing the Self. Victoria, BC: POLIS Project on Ecological Governance.
Esbjörn-Hargens, S. (2006). Integral Research: A multi-method approach to investigating phenomena, in Constructivism and the Human Sciences 11 (1-2), pp. 79-107.
Esbjörn-Hargens, S. (2006). Integral Ecology: A Postmetaphysical Approach to Environmental Phenomena, in AQAL: Journal of Integral Theory and Practice Vol 1, No. 1.
Esbjörn-Hargens, S., & Wilber, K. (2006). Towards a comprehensive integration of science and religion: A post-metaphysical approach, in The Oxford Handbook of Science and Religion. Oxford: Oxford University Press, pp. 523-546.
Esbjörn-Hargens, S. & Zimmerman, M. E. (2008). Integral Ecological Research: Using IMP to Examine Animals and Sustainability, in AQAL: Journal of Integral Theory and Practice Vol 3, No. 1.
Esbjörn-Hargens, S. & Zimmerman, M. E. (2008). Integral Ecology, in Callicott, J. B. & Frodeman, R. (Eds.) Encyclopedia of Environmental Ethics and Philosophy. New York: Macmillan Library Reference.
Special Issues
Esbjörn-Hargens, S. (Ed.) (2005). Integral Ecology, special issue of World Futures Vol 61, No. 1-2. 163 pages.
Books
Esbjörn-Hargens, S. & Zimmerman, M. E. Integral Ecology: Uniting Multiple Perspectives on the Natural World (Random House/Integral Books, 2008), ISBN 1-59030-466-7
See also
Constructivist epistemology
External links
[2] Integral Ecology Blog
[3] Next Step Integral's Integral Ecology Seminar homepage
[4] Michael Zimmerman's homepage
[5] Sean Esbjörn-Hargens' website
References
[1] http:/ / himalaya. socanth. cam. ac. uk/ collections/ journals/ jbs/ pdf/ JBS_06_03. pdf
[2] http:/ / integralecology-michaelz. blogspot. com/
[3] http:/ / www. i-edu. org/
[4] http:/ / www. colorado. edu/ ArtsSciences/ CHA/ profiles/ zimmerman. html
[5] http:/ / www. rhizomedesigns. org
Law of Complexity/Consciousness
The Law of Complexity/Consciousness is the tendency in matter to become more complex over time and at the same time to become more conscious. The law was first formulated by the Jesuit priest and paleontologist Pierre Teilhard de Chardin. Teilhard holds that at all times and everywhere, matter is endeavoring to complexify upon itself, as observed in the evolutionary history of the earth. Matter complexified from inanimate matter, to plant life, to animal life, to human life. Or, from the geosphere, to the biosphere, to the noosphere (the last of which humans represent, because they possess a consciousness which reflects upon itself). As evolution rises through the geosphere, biosphere, and noosphere, matter continues in a continual increase of both complexity and consciousness. For Teilhard, the Law of Complexity/Consciousness continues to run today in the form of the socialization of mankind. The closed and circular surface of the earth contributes to the increased compression (socialization) of mankind. As human beings come into closer contact with one another, their methods of interaction continue to complexify in the form of better-organized social networks, which contributes to an overall increase in consciousness, or the noosphere. Teilhard imagines a critical threshold, the Omega Point, at which mankind will have reached its highest point of complexification (socialization) and thus its highest point of consciousness. At this point consciousness will rupture through time and space and assert itself on a higher plane of existence from which it cannot come back. Interestingly, for Teilhard, because the Law of Complexity/Consciousness runs everywhere and at all times, and because of the immensity of both time and space in outer space, and the immensity of the chances for matter to find the right conditions to complexify upon itself, it is highly probable that life exists, has existed, and will exist in the
See also
Autopoiesis Fifth force Involution (esoterism) Moore's law Technological singularity
Quotes
"The more complex a being is, so our Scale of Complexity tells us, the more it is centered upon itself and therefore the more aware does it become. In other words, the higher the degree of complexity in a living creature, the higher its consciousness; and vice versa. The two properties vary in parallel and simultaneously. If we depict them in diagrammatic form, they are equivalent and interchangeable." --Pierre Teilhard de Chardin, The Future of Man, p.111 "For its reflective and inventive forward spring it is in some sort necessary that Life, duplicating its evolutionary motive center, should henceforth be sustained by two centers of action, separate and conjoined, one of consciousness and the other of complexity.... In hominised evolution the Physical and the Psychic, the Without and the Within, Matter and Consciousness, are all found to be functionally linked in one tangible process." --Pierre Teilhard de Chardin, The Future of Man, p.209
Layered system
In telecommunication, a layered system is a system in which components are grouped, i.e., layered, in a hierarchical arrangement, such that lower layers provide functions and services that support the functions and services of higher layers. Note: Systems of ever-increasing complexity and capability can be built by adding or changing the layers to improve overall system capability while using the components that are still in place. This article incorporates public domain material from websites or documents of the General Services Administration.
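The arrangement described above, in which lower layers provide services that higher layers build on, can be sketched in a few lines of code. The following is a minimal illustration, not taken from any standard; the layer names (PhysicalLayer, TransportLayer, ApplicationLayer) are hypothetical stand-ins chosen for the example.

```python
# A minimal sketch of a layered system: each layer uses only the
# services of the layer directly beneath it.

class PhysicalLayer:
    """Lowest layer: moves raw bytes (stand-in for a wire)."""
    def send(self, data: bytes) -> bytes:
        return data

class TransportLayer:
    """Middle layer: frames payloads, relying on the physical layer."""
    def __init__(self, lower: PhysicalLayer):
        self.lower = lower

    def send(self, payload: bytes) -> bytes:
        # Prefix the payload with a 2-byte big-endian length field.
        framed = len(payload).to_bytes(2, "big") + payload
        return self.lower.send(framed)

class ApplicationLayer:
    """Top layer: encodes messages, relying on the transport layer."""
    def __init__(self, lower: TransportLayer):
        self.lower = lower

    def send(self, message: str) -> bytes:
        return self.lower.send(message.encode("utf-8"))

# Capability can be improved by adding or swapping layers while the
# components beneath stay in place.
stack = ApplicationLayer(TransportLayer(PhysicalLayer()))
wire = stack.send("hi")  # b'\x00\x02hi'
```

The note in the definition holds here: replacing TransportLayer with a more capable framing layer changes overall system capability without touching the layers above or below it.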
Level I: Cells
Cells are the basic building blocks of life and perform vital functions in an organism. The cell is the most basic unit of a living thing, and each cell in your body has a specific job.
Integumentary system: the skin, hair, and nails
Lymphatic system: the leukocytes, tonsils, adenoids, thymus, and spleen
Muscular system: the muscles
Nervous system: the nerves, brain, spinal cord, and peripheral nerves
Reproductive system: the ovaries, fallopian tubes, uterus, vagina, mammary glands, testes, vas deferens, seminal vesicles, prostate, and penis
Respiratory system: the pharynx, larynx, trachea, bronchi, lungs, and diaphragm
Skeletal system: the bones, cartilage, ligaments, and tendons
Urinary system: the kidneys, ureters, bladder, and urethra
Organ systems may be closely intertwined and called things like the musculoskeletal system or neuroendocrine system.
Level V: Organisms
An organism is made up of cells and is either unicellular or multicellular. The fifth level of organization refers to multicellular organisms. The organism is also the largest level of organization known. It is highly likely that superorganism will be accepted as the sixth level of organization. An organism is a type of living thing that is made of cells and is close enough, genetically, to others of its kind to be considered a species. Millions of organisms are currently known, from Paramecium to ourselves. All generally known or accepted organisms fit into six kingdoms, or regnums: Animalia (animals), Plantae (plants), Protista (protists), Fungi (fungi), Archaea (or Archaebacteria), and Bacteria. Viruses and subviral agents are not generally considered organisms. Scientists are currently looking into the creation of artificial life and cybernetics, and may end up looking towards the levels of organization for guidance and research.
See also
Holarchy Holism
References
Harcourt Science Textbook, 2007 Edition
Logical holism
Logical holism is the belief that the world operates in such a way that no part can be known without the whole being known first.
See also
The doctrine of internal relations
Holography:
1. In optics: holography
2. In metaphysics: holonomic brain theory, the holographic paradigm, and The Holographic Universe (Michael Talbot's book). Proponents: Michael Talbot, David Bohm, Karl H. Pribram
3. In quantum mechanics: the holographic principle (the conjecture that all of the information about the realities in a volume of space is present on the surface of that volume). Proponents: Gerard 't Hooft, Leonard Susskind, John A. Wheeler
Meshico
Meshico is a term which began to be employed in the middle of the 20th century by a group of Mexican intellectuals connected to the influential magazine Meshico Grande in order to define a philosophical and sociological stance based on an authentic ontology of the Mexican person, one that would serve, as well, as a means of confronting the dependency of the official intelligentsia on ways of thinking perceived as being too foreign to permit a true understanding of Mexican reality. The group decided on the unusual spelling in order to differentiate itself from the official Europeanizing intelligentsia; they believed that the spelling "meshico" was historically more accurate, as it reflected the original Nahuatl pronunciation of the word, and, for this reason, would be an appropriate name for a group dedicated to a professedly authentic understanding of Mexican identity (Mexicanidad). Among the many notable members of the group were Rosario María Gutiérrez Eskildsen, Manuel Sánchez Mármol, Francisco Javier Santamaría, and José Vasconcelos. The Tabascan Dr. Ricardo Alfonso Sarabia y Zorrilla was one of the group's more assiduous promoters and served for a time as director of Meshico Grande. He also contributed to a wider awareness of the group's goals and positions by writing, among other works, a paper entitled "Filosofía de la acción y reseña del pensamiento filosófico de Meshico" [Philosophy of Action and Review of the Philosophical Thinking of Meshico], which was originally published in the Proceedings of the 11th International Congress of Philosophy, Brussels, 1953. Sarabia y Zorrilla cites, in the aforementioned work, the definition of Truth given by the obscure and retiring Mexican philosopher and mathematician Edmundo Cetina Velázquez: "[it is knowledge] which corresponds to the integral unity of being; the world of thought will always reveal to us an external world, sensible or abstract; but always unilateral.
'Truth,' which is reality, is the patrimony of the totality of being." Sarabia y Zorrilla saw in such a statement a correct testimony of faith in an epistemological holism functioning in opposition to the notion of knowledge as a goal obtainable only through a scientific vision which is essentially and necessarily ontic and a posteriori (compare the avowal made by Sarabia himself in the same text: "everything is related, coordinated, linked
harmoniously and amorously"). José Gómez Robleda, a teacher, psychiatrist, and Subsecretary of Public Education during the Adolfo Ruiz Cortines administration, published in 1947 a book titled Imagen del Mexicano (Image of the Mexican). Sarabia y Zorrilla describes the study as "the work which for the first time studies the way of being of the Mexican man," especially in his ethnopsychological dimensions, and praises it for a vision of the human which, owing to its philosophical balance, succeeds in avoiding the worst rationalistic tendencies of Comtian Positivism without falling into the no less problematic cosmic mysticism of José Vasconcelos (whom Professor Sarabia once described, with no intended harshness, as a "philosopher-artist").
See also
Científico; The Labyrinth of Solitude; La Raza Cósmica; Samuel Ramos; Leopoldo Zea Aguilar
External links
El Pensamiento Filosófico de Meshico [1]
José Vasconcelos and His World [2]
Perspectivas docentes [3]
Où vont les philosophies [4]
Meshico grande in WorldCat [5]
References
[1] http://books.google.com/books?id=qOAMAAAAIAAJ&q=meshico&dq=meshico&ei=XHSJR4O7H4XqiwGd-_j5DQ&pgis=1
[2] http://books.google.com/books?id=T4E_AAAAIAAJ&q=%22sarabia+y+zorrilla%22&dq=%22sarabia+y+zorrilla%22&lr=&pgis=1
[3] http://books.google.com/books?id=eY0QAAAAYAAJ&q=%22algunos+aspectos+de+la+relatividad%22&dq=%22algunos+aspectos+de+la+relatividad%22&lr=&pgis=1
[4] http://books.google.com/books?id=p9cMAAAAIAAJ&dq=%22ou+vont+les+philosophies%22&q=%22ou+vont+les%22&pgis=1#search
[5] http://www.worldcat.org/wcpa/ow/6741439
References
Gunnar Erixon: "Modular Function Deployment - A Method for Product Modularisation", Ph.D. Thesis [1], The Royal Institute of Technology, Stockholm, 1998. TRITA-MSM R-98-1, ISSN 1104-2141, ISRN KTH/MSM/R-98/1-SE.
Gilles Clemen / Rotarex Automotive S.A., Lintgen, Luxembourg: Application of the Modular Function Deployment Tool on a pressure regulator [2]
References
[1] http://users.du.se/~gex/paper/drabs.htm
[2] http://www.fh-kl.de/~albert.meij/Zusammenfassung%20Gilles%20Clemen.pdf
Modular programming
Modular programming is a software design technique that increases the extent to which software is composed of separate, interchangeable components, called modules. Conceptually, modules represent a separation of concerns, and improve maintainability by enforcing logical boundaries between components. Modules are typically incorporated into the program through interfaces. A module interface expresses the elements that are provided and required by the module. The elements defined in the interface are detectable by other modules. The implementation contains the working code that corresponds to the elements declared in the interface.
Language support
Languages that formally support the module concept include IBM/360 Assembler, COBOL, RPG and PL/I, Ada, D, F, Fortran, Haskell, BlitzMax, OCaml, Pascal, ML, Modula-2, Oberon, Morpho, Component Pascal, Zonnon, Erlang, Perl, Python and Ruby. The IBM System i also uses modules in RPG, COBOL and CL when programming in the ILE environment. Modular programming can be performed even where the programming language lacks explicit syntactic features to support named modules: software tools can create modular code units from groups of components, and libraries of components built from separately compiled modules can be combined into a whole by using a linker.
Key Aspects
With modular programming, concerns are separated such that few (ideally no) modules depend upon other modules of the system; having as few dependencies as possible is the goal. When creating a modular system, instead of building a monolithic application (where the smallest component is the whole application), several smaller modules are built (and usually compiled) separately and, when composed together, construct the executable application program. A just-in-time compiler may perform some of this construction "on the fly" at run time. This makes modularly designed systems, if built correctly, far more reusable than a traditional monolithic design, since all (or many) of these modules may then be reused (without change) in other projects. It also facilitates the "breaking down" of projects (through "divide and conquer") into several smaller projects. Theoretically, a modularized software project will be more easily assembled by large teams, since no team member is creating the whole system or even needs to know about the system as a whole; each can focus on the assigned smaller task (this, it is claimed, counters the key assumption of The Mythical Man-Month, making it actually possible to add more developers to a late software project without making it later still).
Implementation
Message passing has more recently gained ground over the earlier, more conventional "call" interfaces, becoming the dominant linkage between separate modules, in part as an attempt to solve the "versioning problem" sometimes experienced when fixed interfaces are used for communication between modules.
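As an illustration only (the module, message tags, and prices are invented, not any particular system's protocol), dispatching on tagged messages lets a module accept new message versions while old callers keep working unchanged:

```python
# A hedged sketch of message passing easing the versioning problem.
# Instead of calling a fixed function signature, clients send a tagged
# message; the receiving module dispatches on the tag, so new message
# versions can be added without breaking existing callers.

def pricing_module(message):
    """Handle any message this module understands; reject unknown ones."""
    if message["type"] == "quote.v1":
        return {"type": "quote.reply", "price": message["qty"] * 10}
    if message["type"] == "quote.v2":            # a later version coexists
        rate = message.get("rate", 10)
        return {"type": "quote.reply", "price": message["qty"] * rate}
    return {"type": "error", "reason": "unknown message"}

# Old and new clients interoperate with the same module:
old = pricing_module({"type": "quote.v1", "qty": 3})
new = pricing_module({"type": "quote.v2", "qty": 3, "rate": 12})
print(old["price"], new["price"])  # 30 36
```

With a call interface, changing the function's signature would break every existing caller; here the old message shape simply remains one of the tags the module understands.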
History
Traditional programming languages have been used to support modular programming since at least the 1960s. Modular programming is a loosely defined concept with no official definition; it is, in essence, simply a programming technique. Exactly where modular programming ends and dynamically linked libraries or object-oriented programming begin is subjective: modular programming might be defined as the natural predecessor of OOP, or as an evolutionary step beyond it, depending on viewpoint.
See also
Architecture description language
Cohesion
Constructionist design methodology, a methodology for creating modular, broad Artificial Intelligence systems
Component-based software engineering
Coupling
David Parnas
Information hiding (encapsulation)
Library (computing)
List of System Quality Attributes
Snippet (programming)
Structured programming
References
O2 Software Process (http://legosoftwareprocess.org/)
Modular design
In systems engineering, modular design, or "modularity in design", is an approach that subdivides a system into smaller parts (modules) that can be independently created and then used in different systems to drive multiple functionalities. A modular system can be characterized by the following: "(1) Functional partitioning into discrete scalable, reusable modules consisting of isolated, self-contained functional elements; (2) Rigorous use of well-defined modular interfaces, including object-oriented descriptions of module functionality; (3) Ease of change to achieve technology transparency and, to the extent possible, make use of industry standards for key interfaces."[1] Besides reduction in cost (due to less customization and less learning time) and flexibility in design, modularity offers other benefits such as augmentation (adding a new solution by merely plugging in a new module) and exclusion. Examples of modular systems are cars, computers, and high-rise buildings. Earlier examples include looms, railroad signaling systems, telephone exchanges, pipe organs, and electric power distribution systems. Computers use modularity to overcome changing customer demands and to make the manufacturing process more adaptive to change (see modular programming).[2] Modular design is an attempt to combine the advantages of standardization (high volume normally equals low manufacturing costs) with those of customization. A downside to modularity (depending on its extent) is that modular systems are not optimized for performance, usually because of the cost of the interfaces between modules.
See also
Holism
Holarchy
Modular Function Deployment
Modular programming
Separation of concerns
Modular building
References
[1] "Glossary (Modular Design)" (http://nesipublic.spawar.navy.mil/part5/releases/1.3.0/WebHelp/glossary/m.htm). Net-Centric Enterprise Solutions for Interoperability (US Government). Retrieved September 2007.
[2] Baldwin and Clark, 2000.
[3] "Modular home definition" (http://architecture.about.com/cs/buildyourhouse/g/modular.htm). Retrieved 2010-08-19.
Further reading
Erixon, G. and Ericsson, A., "Controlling Design Variants". USA: Society of Manufacturing Engineers, 1999 (http://www.sme.org/cgi-bin/get-item.pl?BK99PUB16&2&SME). ISBN 0-87263-514-7 (http://users.du.se/~gex/index.htm)
Baldwin, C.Y. and Clark, K.B., "Design Rules. Vol. 1: The Power of Modularity". Cambridge, Massachusetts: MIT Press, 2000. ISBN 0-262-02466-7
Baldwin, C.Y. and Clark, K.B., "The Option Value of Modularity in Design". Harvard Business School, 2002 (http://www.people.hbs.edu/cbaldwin/DR2/DR1Option.pdf)
"Modularity in Design Formal Modeling & Automated Analysis" (http://www.cs.drexel.edu/~yfcai/Presentations/Modularity in Design_CMU.ppt)
"Modularity: upgrading to the next generation design architecture" (http://www.connected.org/media/modular.html), an interview
Modularity
Modularity is a general systems concept, typically defined as a continuum describing the degree to which a system's components may be separated and recombined.[1] It refers both to the tightness of coupling between components and to the degree to which the rules of the system architecture enable (or prohibit) the mixing and matching of components. Its use, however, can vary somewhat by context:
In biology, modularity refers to the concept that organisms or metabolic pathways are composed of modules.
In nature, modularity refers to the construction of a cellular organism by joining together standardized units to form larger compositions, as for example the hexagonal cells in a honeycomb.
In the Five Principles of New Media as defined by Lev Manovich, modularity covers the principle that new media are composed of modules or self-sufficient parts of the overall media object.
In the study of networks, modularity (networks) is a benefit function that measures the quality of a division of a network into groups or communities.
In ecology, modularity is considered a key factor, along with diversity and feedback, in supporting resilience.
In mathematics, the modularity theorem (formerly the Taniyama-Shimura conjecture) establishes a connection between elliptic curves and modular forms.
In mathematics, modular lattices are partially ordered sets satisfying certain axioms.
In cognitive science, the modularity of mind refers to the idea that the mind is composed of independent, closed, domain-specific processing modules. See visual modularity for the various putative visual modules, and language module for the putative language module.
In industrial design, modularity refers to an engineering technique that builds larger systems by combining smaller subsystems.
In manufacturing, modularity refers to the use of exchangeable parts or options in the fabrication of an object.
In modular programming, modularity refers to the compartmentalization and interrelation of the parts of a software package.
In contemporary art and architecture, modularity can refer to the construction of an object by joining together standardized units to form larger compositions, and/or to the use of a module as a standardized unit of measurement and proportion.
In ModulArt, a branch of modular art, modularity refers to the ability to alter the work by reconfiguring, adding to, and/or removing its parts.
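The network-theoretic sense mentioned above can be made concrete. The following is a minimal sketch of Newman's modularity Q for a given division of a network (the example graph is invented): Q sums, over all pairs of nodes in the same community, the difference between the actual adjacency A_ij and the expected edge weight k_i*k_j/2m in a random graph with the same degrees.

```python
# Minimal sketch of the modularity benefit function Q for a partition.
def modularity(adj, communities):
    """Q = (1/2m) * sum over same-community pairs of (A_ij - k_i*k_j/(2m))."""
    n = len(adj)
    degree = [sum(row) for row in adj]
    two_m = sum(degree)                 # 2m: twice the number of edges
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - degree[i] * degree[j] / two_m
    return q / two_m

# Two triangles joined by a single bridge edge: a clearly modular graph.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
print(modularity(adj, [0, 0, 0, 1, 1, 1]))  # 5/14 ~ 0.357, well above 0
```

Placing each triangle in its own community scores Q = 5/14, while lumping all nodes into one community scores 0, which is what "measuring the quality of a division" means here.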
A single image can be composed of many layers, each of which can be treated as an entirely independent and separate entity. Websites can be described as modular: their structure allows their contents to be changed, removed, or edited while still retaining the structure of the website, because the content operates separately from, and does not define, the structure of the site. The entire Web, Manovich notes, has a modular structure, composed of independent sites and pages, and each webpage itself is composed of elements and code that can be independently modified.[5] Organizational systems are said to become increasingly modular when they begin to substitute loosely coupled forms for tightly integrated, hierarchical structures.[6] For instance, when a firm uses contract manufacturing rather than in-house manufacturing, it is using an organizational component that is more independent than building such capabilities in-house: the firm can switch between contract manufacturers that perform different functions, and the contract manufacturer can similarly work for different firms. As firms in a given industry begin to substitute loose coupling with organizational components that lie outside firm boundaries for activities that were once conducted in-house, the entire production system (which may encompass many firms) becomes increasingly modular, and the firms themselves become more specialized components. Using loosely coupled structures enables firms to achieve greater scope flexibility and scale flexibility:[6] the firm can switch easily between different providers of these activities (e.g., between different contract manufacturers or alliance partners), compared to building the capabilities for all activities in house, and thus respond to different market needs more quickly. However, these flexibility gains come with a price. The organization must therefore weigh the flexibility gains achievable against any accompanying loss of performance for each of these forms.
Modularity in Psychology
Probably the most noted work on modularity in psychology is Jerry Fodor's book The Modularity of Mind (1996/1983).[7] In the book he proposes a 'modified' modularity theory of cognitive processes. His theory builds on the premise of faculty psychology: that there are certain faculties innate in the mind, and mental "organs" that are biologically predisposed to perform certain types of computational processes. Fodor does not argue that the entire mind is modular; rather, he proposes that the central cognitive system responsible for complex cognitive activities (such as analogical reasoning) is not modular, but that input systems (which interpret the neural signals from physical stimuli, and are responsible for basic cognitive activities such as language and vision) are likely to be modular.[8] Input systems, or "domain specific computational mechanisms" (such as the ability to perceive spoken language), are termed vertical faculties, and according to Fodor they are modular in that they possess a number of characteristics he argues constitute modularity. Fodor's list of features characterizing modules includes the following:
1. Domain specific (modules respond only to inputs of a specific class, and are thus a "species of vertical faculty" (Fodor, 1996/1983:37))
2. Innately specified (the structure is inherent and is not formed by a learning process)
3. Not assembled (modules are not put together from a stock of more elementary subprocesses; rather, their virtual architecture maps directly onto their neural implementation)
4. Neurologically hardwired (modules are associated with specific, localized, and elaborately structured neural systems rather than fungible neural mechanisms)
5. Autonomous (modules are independent of other modules)
Fodor does not argue that this is a formal definition or an all-inclusive list of features necessary for modularity. He argues only that cognitive systems characterized by some of the features above are likely to be characterized by them all, and that such systems can be considered modular. He also notes that the characteristics are not an all-or-nothing proposition; rather, each characteristic may be manifest to some degree, and modularity itself is not a dichotomous construct: something may be more or less modular. "One would thus expect--what anyhow seems to be desirable--that the notion of modularity ought to admit of degrees" (Fodor, 1996/1983:37).
Notably, Fodor's "not assembled" feature contrasts sharply with the use of modularity in other fields, in which modular systems are seen to be hierarchically nested (that is, modules are themselves composed of modules, which in turn are composed of modules, etc.). However, Coltheart notes that Fodor's commitment to the non-assembled feature appears weak,[8] and other scholars (e.g., Block[9]) have proposed that Fodor's modules could be decomposed into finer modules. For instance, while Fodor distinguishes between separate modules for spoken and written language, Block might further decompose the spoken language module into modules for phonetic analysis and lexical forms:[8] "Decomposition stops when all the components are primitive processors--because the operation of a primitive processor cannot be further decomposed into suboperations."[9] Though Fodor's work on modularity is one of the most extensive, there is other work in psychology on modularity worth noting for its symmetry with modularity in other disciplines. For instance, while Fodor focused on cognitive input systems as modules, Coltheart proposes that there may be many different kinds of cognitive modules, distinguishing between, for example, knowledge modules and processing modules: the former is a body of knowledge that is independent of other bodies of knowledge, while the latter is a mental information-processing system independent of other such systems.
Modularity in Biology
As in some of the other disciplines, the term modularity may be used in multiple ways in biology. For example, it may refer to organisms that have an indeterminate structure in which modules of various complexity (e.g., leaves, twigs) may be assembled without strict limits on their number or placement. Many plants and sessile benthic invertebrates demonstrate this type of modularity (by contrast, many other organisms have a determinate structure that is predefined in embryogenesis).[10] The term has also been used in a broader sense in biology to refer to the reuse of homologous structures across individuals and species. Even within this latter category, there may be differences in how a module is perceived. For instance, evolutionary biologists may focus on the module as a morphological component (subunit) of a whole organism, while developmental biologists may use the term to refer to some combination of lower-level components (e.g., genes) that are able to act in a unified way to perform a function.[11] In the former, the module is perceived as a basic component; in the latter, the emphasis is on the module as a collective. Biology scholars have provided lists of features that should characterize a module (much as Fodor did in The Modularity of Mind[7]). For instance, Raff[12] provides the following list of characteristics that developmental modules should possess:
1. discrete genetic specification
2. hierarchical organization
3. interactions with other modules
4. a particular physical location within a developing organism
5. the ability to undergo transformations on both developmental and evolutionary time scales
To Raff's mind, developmental modules are "dynamic entities representing localized processes (as in morphogenetic fields) rather than simply incipient structures ... (... such as organ rudiments)."[13] Bolker, however, attempts to construct a definitional list of characteristics that is more abstract, and thus more suited to multiple levels of study in biology. She argues that:
1. A module is a biological entity (a structure, a process, or a pathway) characterized by more internal than external integration.
2. Modules are biological individuals[14] [15] that can be delineated from their surroundings or context, and whose behavior or function reflects the integration of their parts, not simply the arithmetical sum. That is, as a whole, the module can perform tasks that its constituent parts could not perform if dissociated.
3. In addition to their internal integration, modules have external connectivity, yet they can also be delineated from the other entities with which they interact in some way.
Another stream of research on modularity in biology that should be of particular interest to scholars in other disciplines is that of Gunter Wagner. Wagner's work[16] [17] explores how natural selection may have resulted in modular organisms, and the roles modularity plays in evolution. Wagner's work suggests that modularity is both the result of evolution and a facilitator of evolution, an idea that shares a marked resemblance to work on modularity in technological and organizational domains.
Blair defines a modular system as one that gives more importance to parts than to wholes: "Parts are conceived as equivalent and hence, in one or more senses, interchangeable and/or cumulative and/or recombinable" (pg. 125). Blair describes the emergence of modular structures in education (the college curriculum), industry (modular product assembly), architecture (skyscrapers), music (blues and jazz), and more. In his concluding chapter, Blair does not commit to a firm view of what causes Americans to pursue more modular structures in the diverse domains in which modularity has appeared, but he does suggest that it may in some way be related to the American ideology of liberal individualism and a preference for anti-hierarchical organization.
Consistent Themes
Comparing the use of modularity across these disciplines reveals several themes. One theme that shows up in psychology and biology is innate specification: the purpose or structure of the module is predetermined by some biological mandate. Domain specificity, the idea that modules respond only to inputs of a specific class (or perform only functions of a specific class), is a theme that clearly spans psychology and biology, and it can be argued that it also spans technological and organizational systems, where it would be seen as specialization of function. Hierarchical nesting is a theme that recurs in most disciplines. Though originally disavowed by Fodor, other psychologists have embraced it, and it is readily apparent in the use of modularity in biology (e.g., each module of an organism can be decomposed into finer modules), social processes and artifacts (e.g., we can think of a skyscraper in terms of blocks of floors, a single floor, elements of a floor, etc.), mathematics (e.g., the modulus 6 may be further divided into the moduli 1, 2 and 3), and technological and organizational systems (e.g., an organization may be composed of divisions, which are composed of teams, which are composed of individuals).[23] Greater internal than external integration is a theme that shows up in every discipline but mathematics. Often referred to as autonomy, this theme acknowledges that there may be interaction or integration between modules, but that the greater interaction and integration occurs within the module. It is very closely related to information encapsulation, which shows up explicitly in both the psychology and technology research. Near decomposability (as termed by Simon, 1962) shows up in all of the disciplines, but is manifest to different degrees.
For instance, in psychology and biology it may refer merely to the ability to delineate one module from another (recognizing the boundaries of the module). In several of the social artifacts, mathematics, and technological or organizational systems, however, it refers to the ability to actually separate components from one another. In several of the disciplines this decomposability also enables the complexity of a system (or process) to be reduced. This is aptly captured in a quote from Marr[24] about psychological processes, where he notes that "any large computation should be split up into a collection of small, nearly independent, specialized subprocesses." Reducing complexity is also the express purpose of casting out nines in mathematics. Substitutability and recombinability are closely related constructs. The former refers to the ability to substitute one component for another, as in Blair's systemic equivalence, while the latter may refer both to the indeterminate form of the system and to the indeterminate use of the component. In college curricula, for example, each course is designed with a credit system that ensures a uniform number of contact hours and approximately uniform educational content, yielding substitutability. By virtue of this substitutability, each student may create their own curriculum (recombinability of the curriculum as a system), and each course may be said to be recombinable with a variety of students' curricula (recombinability of the component within multiple systems). Both substitutability and recombinability are immediately recognizable in Blair's social processes and artifacts, and are also well captured in Garud and Kumaraswamy's[25] discussion of economies of substitution in technological systems. Blair's systemic equivalence also demonstrates the relationship between substitutability and the module as a homologue: systemic equivalence refers to the ability of multiple modules to perform approximately the same function within a system, while in biology a module as a homologue refers to different modules sharing approximately the same form or function in different organisms. The extreme of the module as homologue is found in mathematics, where (in the simplest case) the modules refer to the reuse of a particular number and thus each module is exactly alike. In all but mathematics, there has been an emphasis that modules may be different in kind. In Fodor's discussion of modular cognitive systems, each module performs a unique task. In biology, even modules that are considered homologous may be somewhat different in form and function (e.g., a whale's fin versus a human's hand). In his book, Blair points out that while jazz music may be composed of structural units that conform to the same underlying rules, those components vary significantly. Similarly, in studies of technology and organization, modular systems may be composed of modules that are very similar (as in shelving units that may be piled one atop the other), very different (as in a stereo system where each component performs unique functions), or any combination in between.
Table 1: The use of modularity by discipline[26]
[The table's rows are the concepts domain specific; innately specified; hierarchically nested; more internal integration than external integration (localized processes and autonomy); informationally encapsulated; near decomposability; recombinability; expandability; and module as homologue. Its columns are the disciplines technology and organizations, psychology, biology, American studies, and mathematics. The cell markings indicating which concept applies in which discipline are not recoverable from this copy.]
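The two arithmetic illustrations used in this section, nested moduli and casting out nines, can be sketched concretely (a small illustrative example; the specific numbers are arbitrary):

```python
# Illustrative only: arbitrary numbers demonstrating the nested-moduli
# and casting-out-nines ideas discussed above.

def digit_root(n):
    """Repeatedly sum decimal digits; for n > 0 this equals n mod 9,
    with 9 standing in for multiples of 9."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

# Nested moduli: numbers congruent mod 6 are also congruent mod each
# divisor of 6, i.e. the moduli 1, 2 and 3 mentioned above.
a, b = 38, 20                      # 38 - 20 = 18, a multiple of 6
assert (a - b) % 6 == 0
assert all((a - b) % m == 0 for m in (1, 2, 3))

# Casting out nines: check an addition by reducing each term to a digit
# root, replacing a many-digit computation with single-digit arithmetic.
x, y = 3489, 2154
check = digit_root(digit_root(x) + digit_root(y))
print(check == digit_root(x + y))  # True
```

The digit-root check reduces the complexity of verifying the sum, which is exactly the sense in which casting out nines is cited above as complexity reduction.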
See also
Holism
Object-oriented programming
Notes
[1] Schilling, M.A. 2000. Towards a general modular systems theory and its application to inter-firm product modularity. Academy of Management Review, 25:312-334.
[2] Baldwin, C. Y. & Clark, K. B. 2000. Design rules, Volume 1: The power of modularity. Cambridge, MA: MIT Press.
[3] Orton, J. & Weick, K. 1990. Loosely coupled systems: A reconceptualization. Academy of Management Review, 15:203-223.
[4] Manovich, L. 2001. The Language of New Media.
[5] Bradley Dilger, Review of The Language of New Media (Kairos: http://english.ttu.edu/kairos/7.1/reviews/dilger/).
[6] Schilling, M.A. & Steensma, K. 2001. The use of modular organizational forms: An industry level analysis. Academy of Management Journal, 44:1149-1169.
[7] Fodor, J. 1983. The modularity of mind. Cambridge, MA: MIT Press.
[8] Coltheart, M. 1999. Modularity and cognition. Trends in Cognitive Sciences, 3(3):115-120.
[9] Block, N. 1995. The mind as the software of the brain. In Smith, E. and Osherson, D. (Eds), Thinking: An invitation to cognitive science. Cambridge, MA: MIT Press.
[10] Andrews, J. 1998. Bacteria as modular organisms. Annual Review of Microbiology, 52:105-126.
[11] Bolker, J. A. 2000. Modularity in development and why it matters to Evo-Devo. American Zoologist, 40:770-776.
[12] Raff, R. A. 1996. The shape of life. Chicago: University of Chicago Press.
[13] Raff, R. A. 1996. The shape of life. Chicago: University of Chicago Press, pg. 326.
[14] Hull, D. L. 1980. Individuality and selection. Annual Review of Ecological Systems, 11:311-332.
[15] Roth, V. L. 1991. Homology and hierarchies: Problems solved and unresolved. Journal of Evolutionary Biology, 4:167-194.
[16] Wagner, G. 1996. Homologues, natural kinds and the evolution of modularity. American Zoologist, 36:36-43.
[17] Wagner, G. and Altenberg, L. 1996a. Perspective: complex adaptations and the evolution of evolvability. Evolution, 50:967-976.
[18] Notably, cubic sculptures by Mitzi Cunliffe in the 1950s and 60s, and prints by the sculptor Norman Carlberg from the 1970s and after.
[19] See "Modulartists and Their Works" in ModulArt.
[20] Blair, J.G. 1988. Modular America: Cross-cultural perspectives on the emergence of an American way. New York: Greenwood Press.
[21] Blair, J.G. 1988. Modular America: Cross-cultural perspectives on the emergence of an American way. New York: Greenwood Press, pg. 2.
[22] Blair, J.G. 1988. Modular America: Cross-cultural perspectives on the emergence of an American way. New York: Greenwood Press, pg. 3.
[23] Schilling, M.A. 2002. Modularity in multiple disciplines. In Garud, R., Langlois, R., & Kumaraswamy, A. (eds), Managing in the Modular Age: Architectures, Networks and Organizations. Oxford, England: Blackwell Publishers, pg. 203-214.
[24] Marr, D. 1982. Vision. W.H. Freeman, pg. 325.
[25] Garud, R. and Kumaraswamy, A. 1995. Technological and organizational designs to achieve economies of substitution. Strategic Management Journal, 16:93-110.
[26] Adapted with permission from Schilling, M.A. 2002. Modularity in multiple disciplines. In Garud, R., Langlois, R., & Kumaraswamy, A. (eds), Managing in the Modular Age: Architectures, Networks and Organizations. Oxford, England: Blackwell Publishers, pg. 203-214.
Noosphere
Noosphere (pronounced /ˈnoʊ.əsfɪər/; sometimes noösphere), according to the thought of Vladimir Vernadsky and Teilhard de Chardin, denotes the "sphere of human thought". The word is derived from the Greek νοῦς (nous, "mind") + σφαῖρα (sphaira, "sphere"), in lexical analogy to "atmosphere" and "biosphere". In the original theory of Vernadsky, the noosphere is the third in a succession of phases of development of the Earth, after the geosphere (inanimate matter) and the biosphere (biological life). Just as the emergence of life fundamentally transformed the geosphere, the emergence of human cognition fundamentally transforms the biosphere. In contrast to the conceptions of the Gaia theorists, or the promoters of cyberspace, Vernadsky's noosphere emerges at the point where humankind, through the mastery of nuclear processes, begins to create resources through the transmutation of elements. It is also currently being researched as part of the Princeton Global Consciousness Project.[1]
History of concept
For Teilhard, the noosphere emerges through and is constituted by the interaction of human minds. The noosphere has grown in step with the organization of the human mass in relation to itself as it populates the earth: as mankind organizes itself in more complex social networks, the noosphere grows in awareness. This is an extension of Teilhard's Law of Complexity/Consciousness, the law describing the nature of evolution in the universe. Teilhard argued that the noosphere is growing towards an ever greater integration and unification, culminating in the Omega Point, which he saw as the goal of history. The goal of history, then, is an apex of thought/consciousness. One of the original aspects of the noosphere concept deals with evolution. Henri Bergson, with his L'évolution créatrice (1907), was one of the first to propose that evolution is 'creative' and cannot necessarily be explained solely by Darwinian natural selection. L'évolution créatrice is upheld, according to Bergson, by a constant vital force that animates life and fundamentally connects mind and body, an idea opposing the dualism of René Descartes. In 1923, C. Lloyd Morgan took this work further, elaborating on an 'emergent evolution' that could explain increasing complexity (including the evolution of mind). Morgan found that many of the most interesting changes in living things have been largely discontinuous with past evolution, and therefore did not necessarily take place through a gradual process of natural selection. Rather, evolution experiences jumps in complexity (such as the emergence of a self-reflective universe, or noosphere). Finally, the complexification of human cultures, particularly language, facilitated a quickening of evolution in which cultural evolution occurs more rapidly than biological evolution.
Recent understanding of human ecosystems and of human impact on the biosphere has led to a linking of the notion of sustainability with the "co-evolution" [Norgaard, 1994] and harmonization of cultural and biological evolution. The resulting political system has been referred to as a noocracy.
American integral theorist Ken Wilber deals with this third evolution of the noosphere. In his work Sex, Ecology, Spirituality (1995), he builds many of his arguments on the emergence of the noosphere and the continued emergence of further evolutionary structures. The term Noöcene epoch refers to "how we manage and adapt to the immense amount of knowledge we've created."[2] The noosphere concept of 'unification' was elaborated in popular science fiction by Julian May in the Galactic Milieu Series. It is also the reason Teilhard is often called the patron saint of the Internet.[3]
See also
Akashic records
References
[1] http://noosphere.princeton.edu/
[2] http://www.theatlantic.com/doc/200907/intelligence
[3] However, the Vatican's position is that Isidore of Seville is the patron saint of internauts, because of his pioneering work on indexing; see fr:Classement alphabétique#Historique
Paul R. Samson and David Pitt (eds.) (1999), The Biosphere and Noosphere Reader: Global Environment, Society and Change. ISBN 0-415-16644-6
"The Quest for a Unified Theory of Information" (http://fis.iguw.tuwien.ac.at/fis96/programme.html), World Futures, Volumes 49 (3-4) & 50 (1-4), 1997, Special Issue
Raymond, Eric (2000), "Homesteading the Noosphere", available online (http://www.catb.org/~esr/writings/cathedral-bazaar/homesteading/)
Norgaard, R. B. (1994). Development Betrayed: The End of Progress and a Coevolutionary Revisioning of the Future. London; New York: Routledge. ISBN 0-415-06862-2
External links
"Evidence for the Akashic Field from Modern Consciousness Research" (http://www.stanislavgrof.com/pdf/Akashic Field Evidence.PDF) by consciousness researcher Dr. Stanislav Grof, M.D.
http://www.lawoftime.org/GRI/GRI.html# The Place of the Noosphere in Cosmic Evolution (pdf)
http://noosphere.princeton.edu/ Global Consciousness Project at Princeton
Fortaleciendo la Inteligencia Sincrónica (http://www.noosfera.cl)
Unidad de Ciencias Noosféricas de la Universidad del Mar en Chile (http://www.noosfera.udelmar.cl)
http://transhumanism.org/index.php/WTA/declaration/
http://www.odeo.com/channel/105280 "Just Say Yes to the Noosphere", a podcast from Stanford Law School
Omega Point Institute (http://omegapoint.org) Noosphere, Global Thought, Future Studies
Noosphere and Homo Noeticus (http://www.homonoeticus.info)
Semandeks (http://semandeks.com) A web application that tries to imitate the noosphere
Synaptic Web (http://synapticweb.org) States that the Web is the substrate for the "sphere of human thought"
Ontology modularization
The notion of ontology modularization refers to a methodological principle in ontology engineering. The idea is that an ontology is built in a modular manner, i.e. developed as a set of small modules and later composed to form, and be used as, one modular ontology. One of the major research meetings on ontology modularization is the International Workshop on Modular Ontologies series.
See also
Ontology double articulation principle.
References
Modular Ontologies, Concepts, Theories and Techniques for Knowledge Modularization [1] Stuckenschmidt, Heiner; Parent, Christine; Spaccapietra, Stefano (Eds.) In Lecture Notes in Computer Science (LNCS) Vol. 5445. 2009. Springer. ISBN 978-3-642-01906-7
References
[1] http://www.springer.com/computer/database+management+&+information+retrieval/book/978-3-642-01906-7
[2] http://www.informatik.uni-bremen.de/~okutz/womo4/
[3] http://dkm.fbk.eu/worm08/
[4] http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-348/
[5] http://webrum.uni-mannheim.de/math/lski/WoMO07/
[6] http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-315/
[7] http://www.cild.iastate.edu/events/womo.html
[8] http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-232/
Organicism
Organicism is a philosophical orientation that asserts that reality is best understood as an organic whole. By definition it is close to holism. Plato, Hobbes or Constantin Brunner are examples of such philosophical thought. Organicism is also a biological doctrine that stresses the organization, rather than the composition, of organisms. William Emerson Ritter coined the term in 1919. Organicism became well-accepted in the 20th century. Examples of 20th century biologists who were organicists are Ross Harrison, Paul Weiss, and Joseph Needham. Donna Haraway discusses them in her first book. John Scott Haldane (father of J. B. S. Haldane), R. S. Lillie, W. E. Agar, and Ludwig von Bertalanffy are other early twentieth century organicists.
[Figure caption] Is it material composition, or organization of parts, that creates the mutual symbiosis between Amphiprion clownfish and tropical sea anemones?
Organicism as a doctrine rejects mechanism and reductionism (doctrines that claim that the smallest parts by themselves explain the behavior of larger organized systems of which they are a part). However, organicism also rejects vitalism, the doctrine that there is a vital force different from physical forces that accounts for living things. A number of biologists in the early to mid-twentieth century embraced organicism. They wished to reject earlier vitalisms but to stress that whole-organism biology was not fully explainable by atomic mechanism. The larger organization of an organic system has features that must be taken into account to explain its behavior. Gilbert and Sarkar distinguish organicism from holism to avoid what they see as the vitalistic or spiritualistic connotations of holism. Dusek notes that holism contains a continuum of degrees of the top-down control of organization, ranging from monism (the doctrine that the only complete object is the whole universe, or that there is only one entity, the universe) to organicism, which allows relatively more independence of the parts from the whole, despite the whole being more than the sum of the parts, and/or the whole exerting some control on the behavior of the parts. Still more independence is present in relational holism. This doctrine does not assert top-down control of the whole over its parts, but does claim that the relations of the parts are essential to explaining the behavior of the system. Aristotle and early modern philosophers and scientists tended to describe reality as made of substances and their qualities, and to neglect relations. Gottfried Wilhelm Leibniz showed the bizarre conclusions to which a doctrine of the non-existence of relations led. Twentieth-century philosophy has been characterized by the introduction of and emphasis on the importance of relations, whether in symbolic logic, in phenomenology, or in metaphysics.
William Wimsatt has suggested that the number of terms in the relations considered distinguishes reductionism from holism. Reductionistic explanations claim that two- or at most three-term relations are sufficient to account for the system's behavior; at the other extreme, the system could be considered as a single relation with, for instance, 10^26 terms. Organicism has some intellectually and politically controversial or suspect associations. "Holism," the doctrine that the whole is more than the sum of its parts, often used synonymously with organicism or as a broader category under which organicism falls, has been co-opted in recent decades by "holistic medicine" and by New Age thought. German Nazism appealed to organicist and holistic doctrines, discrediting, for many in retrospect, the original organicist doctrines (see Anne Harrington). Soviet dialectical materialism also made appeals to a holistic and organicist approach stemming from Hegel via Karl Marx's co-worker Friedrich Engels, again giving a controversial political association to organicism.
'Organicism' has also been used to characterize notions put forth by various late 19th-century social scientists who considered human society to be analogous to an organism, and individual humans to be analogous to the cells of an organism. This sort of organicist sociology was articulated by Alfred Espinas, Paul von Lilienfeld, Jacques Novicow, Albert Schäffle, Herbert Spencer, and René Worms, among others (Barberis 2003: 54).
References
Barberis, D. S. (2003). In search of an object: Organicist sociology and the reality of society in fin-de-siècle France. History of the Human Sciences, vol. 16, no. 3, pp. 51–72.
Beckner, Morton (1967). Organismic Biology, in Encyclopedia of Philosophy, ed. Paul Edwards. MacMillan Publishing Co., Inc. & The Free Press.
Dusek, Val (1999). The Holistic Inspirations of Physics. Rutgers University Press.
Haraway, Donna (1976). Crystals, Fabrics, and Fields. Johns Hopkins University Press.
Harrington, Anne (1996). Reenchanted Science. Harvard University Press.
Mayr, E. (1997). The organicists, in "What is the meaning of life?", This Is Biology. Belknap Press of Harvard University Press.
Gilbert, Scott F. and Sahotra Sarkar (2000). Embracing complexity: Organicism for the 21st century. Developmental Dynamics 219(1): 1–9. (abstract of the paper: [1])
Wimsatt, William (2007). Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality. Harvard University Press.
See also
Organismic theory Organic unity Philosophy of Organism
External links
Orsini, G. N. G. - "Organicism" [2], in Dictionary of the History of Ideas (1973) Dictionary definition [3]
References
[1] http://www3.interscience.wiley.com/cgi-bin/abstract/72513248/ABSTRACT?CRETRY=1&SRETRY=0
[2] http://xtf.lib.virginia.edu/xtf/view?docId=DicHist/uvaBook/tei/DicHist3.xml;chunk.id=dv3-52
[3] http://www.thefreedictionary.com/organicism
Philosophy of Organism
Philosophy of Organism or Organic Realism is how Alfred North Whitehead described his metaphysics. It is now known as process philosophy. Central to this school is the idea of concrescence. Concrescence means growing together (com-/con- from the Latin for "together", -crescence from the Latin crescere/cret-, "to grow"): the present is given by a consensus of subjective forms. We are multiple individuals, but there are also multiple individual agents of consciousness operant in the construction of the given. Marvin Minsky calls this the "society of mind" in his book Society of Mind. Whitehead's "subjective forms" complement "eternal objects" in his metaphysical system, eternal objects being entities not unlike Plato's archetypal Forms. In Process and Reality, Whitehead proposes that his 'organic realism' be used in place of classical materialism.
References
Agar, W. E. (1936). 'Whitehead's Philosophy of Organism: an Introduction for Biologists'. The Quarterly Review of Biology, Vol. 11, No. 1: 16–34.
Whitehead, Alfred North (1997). Science and the Modern World. Free Press.
Whitehead, Alfred North (1979, 2nd ed.). Process and Reality (Gifford Lectures Delivered in the University of Edinburgh During the Session 1927–28). Free Press.
See also
Organicism
Powers of Ten
Powers of Ten
Directed by: Charles and Ray Eames
Starring: Philip Morrison
Music by: Elmer Bernstein
Distributed by: IBM
Release date(s): 1968, 1977
Running time: 9 minutes
Country: United States
Powers of Ten is a 1968 American documentary short film written and directed by Ray Eames and her husband, Charles Eames, rereleased in 1977.[1] The film depicts the relative scale of the Universe in factors of ten (see also logarithmic scale and order of magnitude). The film is an adaptation of the 1957 book Cosmic View by Kees Boeke, and more recently is the basis of a new book version. Both adaptations, film and book, follow the form of the Boeke original, adding color and photography to the black and white drawings employed by Boeke in his seminal work. In 1998, "Powers of Ten" was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant".
Narrative summary
The film begins with a view of a man and woman picnicking next to Lake Shore Drive in Chicago, in between Soldier Field and Burnham Harbor. The view settles on a one-meter-square overhead image of the man reclining on a blanket. The viewpoint, accompanied by expository voiceover by Philip Morrison, then slowly zooms out to a view ten meters across (or 10^1 m in scientific notation). The zoom-out continues (at a rate of one power of ten per 10 seconds), to a view of 100 meters (10^2 m), then 1 kilometer (10^3 m), and so on, increasing the perspective (the picnic is revealed to be taking place in Burnham Park, near Soldier Field on Chicago's lakefront) and continuing to zoom out to a field of view of 10^24 meters, or the size of the observable universe. The camera then zooms back in at a rate of a power of ten per 2 seconds to the picnic, and then slows back down to its original rate into the man's hand, to views of negative powers of ten: 10^-1 m (10 centimeters), and so forth, until the camera comes to quarks in a proton of a carbon atom at 10^-16 meters.
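The zoom schedule described above is simply exponential growth of the field of view with time. A small Python sketch makes the arithmetic concrete (the helper name `width_at` is our own illustration, not from the film):

```python
def width_at(seconds, secs_per_decade=10.0, start_exp=0):
    """Field-of-view width in metres after `seconds` of zooming out,
    widening by one power of ten every `secs_per_decade` seconds."""
    return 10.0 ** (start_exp + seconds / secs_per_decade)

# At the outbound rate of 10 s per power of ten, reaching the
# ~10^24 m observable universe from 1 m takes 24 * 10 = 240 s.
```

At the faster inbound rate of 2 seconds per power of ten, the same 24 decades take only 48 seconds, which is why the return to the picnic feels so sudden.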
Related works
There is also a 1982 book (revised 1994) of the same title, by Philip Morrison and Phylis Morrison (Philip narrated the film). It contains a sequence of pictures starting with the Universe and moving in powers of ten down to subatomic sizes. Cosmic Zoom (1968) by Eva Szasz and the National Film Board of Canada is also based on Boeke's Cosmic View.
Cosmic Voyage (1996),[2] an IMAX film. (Credited as based on Boeke's Cosmic View without mention of the Eames' film.)
See also
Orders of magnitude (length)
Earth's location in the universe
J. T. Fraser, author of The Power of Time, the book on which the man at the picnic rests his left hand
References
[1] New York Times obituary (http://www.nytimes.com/2005/04/26/science/26morrison.html); see correction at the end. Retrieved on 2009-08-06.
[2] http://www.imdb.com/title/tt0115952/
External links
Official website (http://www.powersof10.com)
Exhibit at the California Academy of Sciences (http://www.calacademy.org/exhibits/powers_of_ten/)
A website with a tutorial very similar to The Powers of Ten (http://micro.magnet.fsu.edu/primer/java/scienceopticsu/powersof10/) Note: Requires Java (http://www.java.com/en/index.jsp)
A photograph version online (http://microcosm.web.cern.ch/microcosm/P10/english/P0.html)
Similar photographs (http://www.wordwizz.com/pages/10exp0.htm)
Powers of Ten (http://www.imdb.com/title/tt0078106/) at the Internet Movie Database
Powers of 10 (http://www.theuniversesolved.com/powersof10.asp?r=1&p=1) at The Universe-Solved! website
A similar Flash animation, also using the title "Cosmic Zoom" (http://sunshine.chpc.utah.edu/labs/cosmic_zoom/cosmic_zoom2.swf)
Geographical coordinates: 41°51′53.93″N 87°36′48.21″W
Willard Van Orman Quine
Full name: Willard Van Orman Quine
Born: June 25, 1908
Died: December 25, 2000 (aged 92)
Era: 20th-century philosophy
Region: Western Philosophy
School: Analytic
Main interests: Logic, Ontology, Epistemology, Philosophy of language, Philosophy of mathematics, Philosophy of science, Set theory
Notable ideas: New Foundations, Indeterminacy of translation, Naturalized epistemology, Ontological relativity, Quine's paradox, Duhem–Quine thesis, Radical translation, Confirmation holism, Quine–McCluskey algorithm
Willard Van Orman Quine (June 25, 1908 – December 25, 2000) (known to intimates as "Van") was an American philosopher and logician in the analytic tradition. From 1930 until his death 70 years later, Quine was continuously affiliated with Harvard University in one way or another, first as a student, then as a professor of philosophy and a teacher of mathematics, and finally as a professor emeritus who published or revised several books in retirement. He filled the Edgar Pierce Chair of Philosophy at Harvard from 1956 to 1978. A recent poll conducted among philosophers named Quine one of the five most important philosophers of the past two centuries.[1] He won the first Schock Prize in Logic and Philosophy in 1993, for "his systematical and penetrating discussions of how learning of language and communication are based on socially available evidence and of the consequences of this for theories on knowledge and linguistic meaning."[2] Quine falls squarely into the analytic philosophy tradition while also being the main proponent of the view that philosophy is not merely conceptual analysis. His major writings include "Two Dogmas of Empiricism" (1951), which attacked the distinction between analytic and synthetic propositions and advocated a form of semantic holism, and Word and Object (1960), which further developed these positions and introduced the notorious indeterminacy of translation thesis. He also developed an influential naturalized epistemology that tried to provide "an improved scientific explanation of how we have developed elaborate scientific theories on the basis of meager sensory input."[3] He is also important in philosophy of science for his "systematic attempt to understand science from within the resources of science itself"[3] and for his conception of philosophy as continuous with science. This led to his famous quip that "philosophy of science is philosophy enough."[4]
Biography
According to his autobiography, The Time of My Life (1986), Quine grew up in Akron, Ohio. His father was a manufacturing entrepreneur and his mother was a schoolteacher. He received his B.A. in mathematics and philosophy from Oberlin College in 1930 and his Ph.D. in philosophy from Harvard University in 1932. His thesis supervisor was Alfred North Whitehead. He was then appointed a Harvard Junior Fellow, which excused him from having to teach for four years. During the academic year 1932–33, he travelled in Europe thanks to a Sheldon fellowship, meeting Polish logicians (including Alfred Tarski) and members of the Vienna Circle (including Rudolf Carnap). It was through Quine's good offices that Alfred Tarski was invited to attend the September 1939 Unity of Science Congress in Cambridge. To attend that Congress, Tarski sailed for the USA on the last ship to leave Gdańsk before the Third Reich invaded Poland. Tarski survived the war and worked another 44 years in the USA. During World War II, Quine lectured on logic in Brazil, in Portuguese, and served in the United States Navy in a military intelligence role, reaching the rank of Lieutenant Commander. At Harvard, Quine helped supervise the Harvard theses of, among others, Donald Davidson, David Lewis, Daniel Dennett, Gilbert Harman, Dagfinn Føllesdal, Hao Wang, Hugues LeBlanc and Henry Hiż. For the academic year 1964–1965, Quine was a Fellow on the faculty in the Center for Advanced Studies at Wesleyan University.[5] Quine had four children by two marriages. Guitarist Robert Quine was his nephew.
Political beliefs
Quine was politically conservative, but the bulk of his writing was in technical areas of philosophy removed from direct political issues.[6] He did, however, argue at points for several conservative positions: a defense of moral censorship; an argument in favor of limitations on democratic civil rights; a general defense of the status quo against efforts to remodel society by 'underprivileged groups';[7] and an argument against publicly funded education.[8][9] Quine, like many philosophers in the Anglo-American "analytic" tradition, was critical of Jacques Derrida; in 1992, Quine led an unsuccessful petition to stop Cambridge University from granting Derrida an honorary degree. Such criticism was, according to Derrida, directed at Derrida "no doubt because [Derrida's methods, called] 'deconstructions', query or put into question a good many divisions and distinctions, for example the distinction between the pretended neutrality of philosophical discourse, on the one hand, and existential passions and drives on the other, between what is public and what is private, and so on."[10] Quine regarded Derrida's work as pseudophilosophy or sophistry.[11]
Work
Quine's Ph.D. thesis and early publications were on formal logic and set theory. Only after WWII did he, by virtue of seminal papers on ontology, epistemology and language, emerge as a major philosopher. By the 1960s, he had worked out his "naturalized epistemology", whose aim was to answer all substantive questions of knowledge and meaning using the methods and tools of the natural sciences. Quine roundly rejected the notion that there should be a "first philosophy", a theoretical standpoint somehow prior to natural science and capable of justifying it. These views are intrinsic to his naturalism. Quine often wrote superbly crafted and witty English prose. He had a gift for languages and could lecture in French, Spanish, Portuguese and German. But like the logical positivists, he evinced little interest in the philosophical canon: only once did he teach a course in the history of philosophy, on Hume. Quine has an Erdős number of 3.[12]
Academic genealogy
Notable teachers: Rudolf Carnap, Clarence Irving Lewis, Alfred North Whitehead
Notable students: Hilary Putnam, Donald Davidson, Daniel Dennett, Dagfinn Føllesdal, Gilbert Harman, David K. Lewis, Hao Wang, Theodore Kaczynski, Wolfgang Stegmüller, Tom Lehrer, Michael Silverstein
Quine concluded his "Two Dogmas of Empiricism" as follows: As an empiricist I continue to think of the conceptual scheme of science as a tool, ultimately, for predicting future experience in the light of past experience. Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer . . . For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits. Quine's ontological relativism (evident in the passage above) led him to agree with Pierre Duhem that for any collection of empirical evidence, there would always be many theories able to account for it. However, Duhem's holism is much more restricted and limited than Quine's. For Duhem, underdetermination applies only to physics or possibly to natural science, while for Quine it applies to all of human knowledge. Thus, while it is possible to verify or falsify whole theories, it is not possible to verify or falsify individual statements. Almost any particular statement can be saved, given sufficiently radical modifications of the containing theory. For Quine, scientific thought forms a coherent web in which any part could be altered in the light of empirical evidence, and in which no empirical evidence could force the revision of a given part. Quine's writings have led to the wide acceptance of instrumentalism in the philosophy of science.
Logic
Over the course of his career, Quine published numerous technical and expository papers on formal logic, some of which are reprinted in his Selected Logic Papers and in The Ways of Paradox. Quine confined logic to classical bivalent first-order logic, hence to truth and falsity under any (nonempty) universe of discourse. Hence the following were not logic for Quine:
Higher-order logic and set theory. He famously referred to higher-order logic as "set theory in disguise"; much of what Principia Mathematica included in logic was not logic for Quine.
Formal systems involving intensional notions, especially modality. Quine was especially hostile to modal logic with quantification, a battle he largely lost when Saul Kripke's relational semantics became canonical for modal logics.
Quine wrote three undergraduate texts on logic:
Elementary Logic. While teaching an introductory course in 1940, Quine discovered that extant texts for philosophy students did not do justice to quantification theory or first-order predicate logic. Quine wrote this book in six weeks as an ad hoc solution to his teaching needs.
Methods of Logic. The four editions of this book resulted from a more advanced undergraduate course in logic Quine taught from the end of WWII until his 1978 retirement.
Philosophy of Logic. A concise and witty undergraduate treatment of a number of Quinian themes, such as the prevalence of use-mention confusions, the dubiousness of quantified modal logic, and the non-logical character of higher-order logic.
Mathematical Logic is based on Quine's graduate teaching during the 1930s and 40s. It shows that much of what Principia Mathematica took more than 1000 pages to say can be said in 250 pages. The proofs are concise, even cryptic. The last chapter, on Gödel's incompleteness theorem and Tarski's indefinability theorem, along with the article Quine (1946), became a launching point for Raymond Smullyan's later lucid exposition of these and related results.
Quine's work in logic gradually became dated in some respects. Techniques he did not teach and discuss include analytic tableaux, recursive functions, and model theory. His treatment of metalogic left something to be desired. For example, Mathematical Logic does not include any proofs of soundness and completeness. Early in his career, the notation of his writings on logic was often idiosyncratic. His later writings nearly always employed the now-dated notation of Principia Mathematica. Set against all this are the simplicity of his preferred method (as exposited in his Methods of Logic) for determining the satisfiability of quantified formulas, the richness of his philosophical and linguistic insights, and the fine prose in which he expressed them. Most of Quine's original work in formal logic from 1960 onwards was on variants of his predicate functor logic, one of several ways that have been proposed for doing logic without quantifiers. For a comprehensive treatment of predicate functor logic and its history, see Quine (1976). For an introduction, see chpt. 45 of his Methods of Logic. Quine was very warm to the possibility that formal logic would eventually be applied outside of philosophy and mathematics. He wrote several papers on the sort of Boolean algebra employed in electrical engineering, and with Edward J. McCluskey, devised the Quine–McCluskey algorithm for reducing Boolean equations to a minimum covering sum of prime implicants.
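The Quine–McCluskey procedure mentioned above tabulates minterms as bit strings and repeatedly merges pairs that differ in exactly one bit. A minimal Python sketch of this prime-implicant step might look as follows (our own illustration; it omits the final covering-table step that selects the minimum cover):

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants (strings over '0', '1', '-') that differ
    in exactly one non-dash position; return None otherwise."""
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diffs) == 1 and a[diffs[0]] != '-' and b[diffs[0]] != '-':
        i = diffs[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    """Repeatedly merge implicants; those never merged are prime."""
    terms = {format(m, '0{}b'.format(nbits)) for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used  # implicants that could not be merged
        terms = merged
    return primes
```

For the two-variable function true on minterms 0, 1, and 2, the prime implicants come out as {'0-', '-0'}, i.e. not-A or not-B.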
Set theory
While his contributions to logic include elegant expositions and a number of technical results, it is in set theory that Quine was most innovative. He always maintained that mathematics required set theory and that set theory was quite distinct from logic. He flirted with Nelson Goodman's nominalism for a while, but backed away when he failed to find a nominalist grounding of mathematics. Over the course of his career, Quine proposed three variants of axiomatic set theory, each including the axiom of extensionality:
New Foundations, NF, creates and manipulates sets using a single axiom schema for set admissibility, namely an axiom schema of stratified comprehension, whereby all individuals satisfying a stratified formula compose a set. A stratified formula is one that type theory would allow, were the ontology to include types. However, Quine's set theory does not feature types. The metamathematics of NF are curious. NF allows many "large" sets the now-canonical ZFC set theory does not allow, even sets for which the axiom of choice does not hold. Since the axiom of choice holds for all finite sets, the failure of this axiom in NF proves that NF includes infinite sets. The (relative) consistency of NF is an open question. A modification of NF, NFU, due to R. B. Jensen and admitting urelements (entities that can be members of sets but that lack elements), turns out to be consistent relative to Peano arithmetic, thus vindicating the intuition behind NF. NF and NFU are the only Quinian set theories with a following. For a derivation of foundational mathematics in NF, see Rosser (1953).
The set theory of Mathematical Logic is NF augmented by the proper classes of Von Neumann–Bernays–Gödel set theory, except axiomatized in a much simpler way.
The set theory of Set Theory and Its Logic does away with stratification and is almost entirely derived from a single axiom schema. Quine derived the foundations of mathematics once again.
This book includes the definitive exposition of Quine's theory of virtual sets and relations, and surveys axiomatic set theory as it stood circa 1960. However, Fraenkel, Bar-Hillel and Levy (1973) do a better job of surveying set theory as it stood at mid-century. All three set theories admit a universal class, but since they are free of any hierarchy of types, they have no need for a distinct universal class at each type level. Quine's set theory and its background logic were driven by a desire to minimize posits; each innovation is pushed as far as it can be pushed before further innovations are introduced. For Quine, there is but one connective, the Sheffer stroke, and one quantifier, the universal quantifier. All polyadic predicates can be reduced to one dyadic predicate, interpretable as set membership. His rules of proof were limited to modus ponens and substitution. He preferred conjunction to either disjunction or the conditional, because conjunction has the least semantic ambiguity. He was delighted to discover early in his career that all of first-order logic and set theory could be grounded in a mere two primitive notions: set abstraction and inclusion. For an elegant introduction to the parsimony of Quine's approach to logic, see his "New Foundations for Mathematical Logic," ch. 5 in his From a Logical Point of View.
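The claim that the Sheffer stroke alone suffices as a connective is easy to check mechanically. The sketch below (our illustration, not Quine's notation) defines negation, conjunction, and disjunction using nothing but NAND:

```python
def nand(p, q):
    """Sheffer stroke p | q: false only when both inputs are true."""
    return not (p and q)

def neg(p):
    return nand(p, p)            # ~p  ==  p | p

def conj(p, q):
    return neg(nand(p, q))       # p & q  ==  ~(p | q)

def disj(p, q):
    return nand(neg(p), neg(q))  # p v q  ==  ~p | ~q
```

Checking all truth-value assignments confirms that each derived connective agrees with its truth table, which is the sense in which one connective suffices.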
Quine's epistemology
Just as he challenged the dominant analytic-synthetic distinction, Quine also took aim at traditional normative epistemology. According to Quine, normative epistemology is the trend that assigns ought claims to conditions of knowledge. This approach, he argued, has failed to give us any real understanding of the necessary and sufficient conditions for knowledge. Quine recommended that, as an alternative, we look to natural sciences like psychology for a full explanation of knowledge. Thus, we must totally replace our entire epistemological paradigm. Quine's proposal is extremely controversial among contemporary philosophers and has several important critics, with Jaegwon Kim the most prominent among them.[16]
In popular culture
A computer program whose output is its source code is named a "quine" after W. V. Quine. The Mexican short story "Valenta, Marek" features a chess player who studied the writings of Quine and blurred the distinction between reality and chess.
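Such a self-reproducing program is short to exhibit; here is a minimal Python quine, one standard form among many:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running the two lines prints exactly those two lines: the `%r` conversion re-inserts the string's own repr, quotes and escapes included, which is what lets the output reproduce the source.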
Bibliography
Selected books
1951 (1940). Mathematical Logic. Harvard Univ. Press. ISBN 0-674-55451-5.
1966. Selected Logic Papers. New York: Random House.
1970 (2nd ed., 1978). With J. S. Ullian. The Web of Belief. New York: Random House.
1980 (1941). Elementary Logic. Harvard Univ. Press. ISBN 0-674-24451-6.
1982 (1950). Methods of Logic. Harvard Univ. Press.
1980 (1953). From a Logical Point of View. Harvard Univ. Press. ISBN 0-674-32351-3. Contains "Two Dogmas of Empiricism."[1]
1960. Word and Object. MIT Press. ISBN 0-262-67001-1. The closest thing Quine wrote to a philosophical treatise. Ch. 2 sets out the indeterminacy of translation thesis.
1976 (1966). The Ways of Paradox. Harvard Univ. Press.
1969. Ontological Relativity and Other Essays. Columbia Univ. Press. ISBN 0-231-08357-2. Contains chapters on ontological relativity, naturalized epistemology and natural kinds.
1969 (1963). Set Theory and Its Logic. Harvard Univ. Press.
1985. The Time of My Life: An Autobiography. Cambridge: MIT Press. ISBN 0-262-17003-5. 1986: Harvard Univ. Press.
1986 (1970). The Philosophy of Logic. Harvard Univ. Press.
1987. Quiddities: An Intermittently Philosophical Dictionary. Harvard Univ. Press. ISBN 0-14-012522-1. A work of essays, many subtly humorous, for lay readers, very revealing of the breadth of his interests.
1992 (1990). Pursuit of Truth. Harvard Univ. Press. ISBN 0-674-73951-5. A short, lively synthesis of his thought for advanced students and general readers not fooled by its simplicity.
Important articles
1946, "Concatenation as a basis for arithmetic." Reprinted in his Selected Logic Papers. Harvard Univ. Press.
1948, "On What There Is," Review of Metaphysics. Reprinted in his 1953 From a Logical Point of View. Harvard University Press.
1951, "Two Dogmas of Empiricism," The Philosophical Review 60: 20–43. Reprinted in his 1953 From a Logical Point of View. Harvard University Press.
1956, "Quantifiers and Propositional Attitudes," Journal of Philosophy 53. Reprinted in his 1976 The Ways of Paradox. Harvard Univ. Press: 185–96.
1969, "Epistemology Naturalized," in Ontological Relativity and Other Essays. New York: Columbia University Press: 69–90.
See also
Douglas Hofstadter
Hold come what may
Hold more stubbornly at least
Schock Prize
List of American philosophers
Notes
[1] "So who *is* the most important philosopher of the past 200 years?" (http://leiterreports.typepad.com/blog/2009/03/so-who-is-the-most-important-philosopher-of-the-past-200-years.html) Leiter Reports. 11 March 2009. Accessed 8 March 2010.
[2] "Prize winner page" (http://www.kva.se/en/Prizes/Prize-winner-page/?laureateId=368). The Royal Swedish Academy of Sciences. Retrieved 29 August 2010.
[3] "Quine's Philosophy of Science" (http://www.iep.utm.edu/quine-sc/). Internet Encyclopedia of Philosophy. 27 July 2009. Accessed 8 March 2010.
[4] "Mr Strawson on Logical Theory" (http://www.jstor.org/stable/2251091?cookieSet=1). W. V. Quine. Mind Vol. 62, No. 248. Oct. 1953.
[5] "Guide to the Center for Advanced Studies Records, 1958-1969" (http://www.wesleyan.edu/libr/schome/FAs/ce1000-137.html). Wesleyan University. Accessed 8 March 2010.
[6] Wall Street Journal obituary for W. V. Quine (http://www.wvquine.org/wvq-obit3.html), 4 January 2001.
[7] Quiddities: An Intermittently Philosophical Dictionary, entries for Tolerance (pp. 206-8) and Freedom (p. 69).
[8] "Paradoxes of Plenty," in Theories and Things, p. 197.
[9] The Time of My Life: An Autobiography, pp. 352-3.
[10] The 'Derrida Affair' at Cambridge University (http://prelectur.stanford.edu/lecturers/derrida/interviews.html#cambridge), from "Honoris Causa," pp. 409-413.
[11] J. E. D'Ulisse, "Derrida (1930-2004)" (http://www.newpartisan.com/home/derrida-1930-2004.html), New Partisan, 24 December 2004.
[12] "MR: Collaboration Distance" (http://www.ams.org/mathscinet/collaborationDistance.html). American Mathematical Society. Retrieved 29 August 2010.
[13] Prawitz, Dag. "Quine and Verificationism." In Inquiry, Stockholm, 1994, pp. 487-494.
[14] W. V. O. Quine, "On What There Is," The Review of Metaphysics, New Haven 1948, 2, 21.
[15] Czeslaw Lejewski, "Logic and Existence," British Journal for the Philosophy of Science Vol. 5 (1954-5), pp. 104-119.
[16] "Naturalized Epistemology" (http://plato.stanford.edu/entries/epistemology-naturalized/). Stanford Encyclopedia of Philosophy. 5 July 2001. Accessed 8 March 2010.
Further reading
Gibson, Roger F., 1982/86. The Philosophy of W.V. Quine: An Expository Essay. Tampa: University of South Florida.
Gibson, Roger F., 1988. Enlightened Empiricism: An Examination of W. V. Quine's Theory of Knowledge. Tampa: University of South Florida.
Gibson, Roger F., ed., 2004. The Cambridge Companion to Quine. Cambridge University Press.
Gibson, Roger F., 2004. Quintessence: Basic Readings from the Philosophy of W. V. Quine. Harvard Univ. Press.
Gibson, Roger F., and Barrett, R., eds., 1990. Perspectives on Quine. Oxford: Blackwell.
Gochet, Paul, 1978. Quine en perspective. Paris: Flammarion.
Godfrey-Smith, Peter, 2003. Theory and Reality: An Introduction to the Philosophy of Science.
Grattan-Guinness, Ivor, 2000. The Search for Mathematical Roots 1870–1940. Princeton University Press.
Grice, Paul, and Peter Strawson, 1956. "In Defense of a Dogma." The Philosophical Review 65.
Hahn, L. E., and Schilpp, P. A., eds., 1986. The Philosophy of W. V. O. Quine (The Library of Living Philosophers). Open Court.
Köhler, Dieter, 1999/2003. Sinnesreize, Sprache und Erfahrung: eine Studie zur Quineschen Erkenntnistheorie (http://www.ub.uni-heidelberg.de/archiv/3548). Ph.D. thesis, Univ. of Heidelberg.
Orenstein, Alex, 2002. W.V. Quine. Princeton University Press.
Putnam, Hilary. "The Greatest Logical Positivist." Reprinted in Realism with a Human Face, ed. James Conant. Cambridge, MA: Harvard University Press, 1990.
Rosser, John Barkley, 1953.
Valore, Paolo, 2001. Questioni di ontologia quineana. Milano: Cusi.
External links
Willard Van Orman Quine: Philosopher and Mathematician (http://www.wvquine.org/)
Willard Van Orman Quine (http://plato.stanford.edu/entries/quine/) at the Stanford Encyclopedia of Philosophy
Quine's Philosophy of Science (http://www.iep.utm.edu/quine-sc/) at the Internet Encyclopedia of Philosophy
Quine's New Foundations (http://plato.stanford.edu/entries/quine-nf/) at the Stanford Encyclopedia of Philosophy
Willard Van Orman Quine (http://genealogy.math.ndsu.nodak.edu/id.php?id=73831) at the Mathematics Genealogy Project
Obituary from The Guardian (http://www.guardian.co.uk/obituaries/story/0,,416245,00.html)
"Two Dogmas of Empiricism" (http://www.ditext.com/quine/quine.html)
"On Simple Theories Of A Complex World" (http://sveinbjorn.org/simple_theories_of_a_complex_world)
Semantic holism
Semantic holism is a doctrine in the philosophy of language to the effect that a certain part of language, be it a term or a complete sentence, can only be understood through its relations to a (previously understood) larger segment of language. There is substantial controversy, however, as to exactly what the larger segment of language in question consists of. In recent years, the debate surrounding semantic holism, which is one among the many forms of holism that are debated and discussed in modern philosophy, has tended to centre around the view that the "whole" in question consists of an entire language.
Background
Since the use of a linguistic expression is only possible if the speaker who uses it understands its meaning, one of the central problems for analytic philosophers has always been the question of meaning. What is it? Where does it come from? How is it communicated? And, among these questions, what is the smallest unit of meaning, the smallest fragment of language with which it is possible to communicate something? At the end of the 19th and beginning of the 20th century, Gottlob Frege and his followers abandoned the view, common at the time, that a word gets its meaning in isolation, independently from all the rest of the words in a language. Frege, as an alternative, formulated his famous context principle, according to which it is only within the context of an entire sentence that a word acquires its meaning. In the 1950s, the agreement that seemed to have been reached regarding the primacy of sentences in semantic questions began to unravel with the collapse of the movement of logical positivism and the powerful influence exercised by the philosophical investigations of the later Wittgenstein. Wittgenstein wrote in the Philosophical Investigations, in fact, that "comprehending a proposition means comprehending a language." About the same time or shortly after, W. V. O. Quine wrote that "the unit of measure of empirical meaning is all of science in its globality"; and Donald Davidson, in 1967, put it even more sharply by saying that "a sentence (and therefore a word) has meaning only in the context of a (whole) language."
For the logical positivists, the meaning of a sentence was its method of verification: a sentence possessed a meaning only if it was possible to refer to a set of experiences that could, at least potentially, verify it and to another set that could potentially falsify it. Underlying all this there is an implicit and powerful connection between epistemological and semantic questions. This connection carries over into the work of Quine in "Two Dogmas of Empiricism." Quine's holistic argument against the neo-positivists set out to demolish the assumption that every sentence of a language is bound univocally to its own set of potential verifiers and falsifiers; the result was that the epistemological value of every sentence must depend on the entire language. Since the epistemological value of every sentence, for Quine just as for the positivists, was the meaning of that sentence, the meaning of every sentence must depend on every other. As Quine states it: All of our so-called knowledge or convictions, from questions of geography and history to the most profound laws of atomic physics or even mathematics and logic, are an edifice made by man that touches experience only at the margins. Or, to change images, science in its globality is like a force field whose limit points are experiences; a particular experience is never tied to any proposition inside the field except indirectly, for the needs of equilibrium which affect the field in its globality. For Quine, then (although Fodor and Lepore have maintained the contrary), and for many of his followers, confirmation holism and semantic holism are inextricably linked. Since confirmation holism is widely accepted among philosophers, a serious question for them has been to determine whether and how the two holisms can be distinguished, or how the undesirable consequences of "unbuttoned holism," as Michael Dummett has called it, can be limited.
Moderate holism
Numerous philosophers of language have taken the latter avenue, abandoning the early Quinean holism in favour of what Michael Dummett has labelled semantic molecularism. These philosophers generally deny that the meaning of an expression E depends on the meanings of the words of the entire language L of which it is part and maintain, instead, that the meaning of E depends on some subset of L. These positions, notwithstanding the fact that many of their proponents continue to call themselves holists, are actually intermediate between holism and atomism. Dummett, for example, after rejecting Quinean holism (holism tout court in his sense), takes precisely this approach. But those who would opt for some version of moderate holism need to make the distinction between the parts of a language that are "constitutive" of the meaning of an expression E and those that are not without falling into the extraordinarily problematic analytic/synthetic distinction. Fodor and Lepore (1992) present several arguments to demonstrate that this is impossible.
Carlo Penco criticizes this argument by pointing out that there is an intermediate reading that Fodor and Lepore have left out of account: two people cannot believe the same proposition p unless they also both believe some further proposition different from p. This helps to some extent, but there is still a problem in identifying how the different propositions shared by the two speakers are specifically related to each other. Dummett's proposal is based on an analogy from logic. To understand a logically complex sentence it is necessary to understand one that is logically less complex. In this manner, the distinction between logically less complex sentences that are constitutive of the meaning of a logical constant and logically more complex sentences that are not takes on the role of the old analytic/synthetic distinction: "The comprehension of a sentence in which the logical constant does not figure as a principal operator depends on the comprehension of the constant, but does not contribute to its constitution." For example, one can explain the use of the conditional by stating that a conditional sentence is false if the part before the arrow is true and the part after it is false. But to understand the conditional one must already know the meaning of "not" and "or," which is, in turn, explained by giving the rules of introduction for simpler schemes. To comprehend a sentence is to comprehend all and only the sentences of less logical complexity than the sentence that one is trying to comprehend. However, there is still a problem with extending this approach to natural languages. If I understand the word "hot" because I have understood the phrase "this stove is hot," it seems that I am defining the term by reference to a set of stereotypical objects with the property of being hot. If I don't know what it means for these objects to be "hot," such a set or listing of objects is not helpful.
The first relation means that L applies between α, β, and γ just in case β is a part of α and F accepts the inference between α and γ. The relation R applies between α, β, and γ just in case β is a part of α and F accepts the inference from γ to α. The Global Role, G(α), of a simple expression α can then be defined as a pair of sets, each composed of pairs of expressions: if F accepts the inference from γ to α and β is a part of α, then the couple ⟨β, γ⟩ is an element of the set which is an element of the right side of the Global Role of α. This makes Global Roles for simple expressions sensitive to changes in the acceptance of inferences by F. The Global Role of a complex expression is the n-tuple of the global roles of its constituent parts. The next problem is to develop a function that assigns meanings to Global Roles. This function is generally called a homomorphism and says that for every syntactic function G that assigns to simple expressions α1...αn some complex expression β, there exists a corresponding function F from meanings to meanings.
This function is one-to-one in that it assigns exactly one meaning to every Global Role. According to Fodor and Lepore, holistic inferential role semantics leads to the absurd conclusion that part of the meaning of "brown cow" is constituted by the inference "brown cow implies dangerous." This is true if the function from meanings to Global Roles is one-to-one: in that case the meanings of "brown," "cow," and "dangerous" all contain the inference "brown cows are dangerous," and "brown" would not have the meaning it has unless it had the global role that it has. But if we change the relation so that it is many-to-one (h*), many global roles can share the same meaning. So suppose that the meaning of "brown" is given by M(brown). It does not follow from this that L(brown, brown cow, dangerous) is true unless all of the global roles that h* assigns to M(brown) contain (brown cow, dangerous), and this is not necessary for holism. In fact, with this many-to-one relation from Global Roles to meanings, it is possible to change opinion with respect to an inference consistently. Suppose that B and C initially accept all of the same inferences, speak the same language, and both accept that brown cows imply dangerous. Suddenly, B changes his mind and rejects the inference. If the function from meanings to Global Roles is one-to-one, then many of B's Global Roles have changed, and therefore so have their meanings. But if there is no one-to-one assignment, then B's change in belief about the inference concerning brown cows does not necessarily imply a difference in the meaning of the terms he uses. Therefore, it is not intrinsic to holism that communication or change of opinion is impossible.
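The contrast between a one-to-one and a many-to-one assignment of meanings to global roles can be sketched in a toy Python model. Everything here is invented for illustration (the "core" set, the function names, the representation of a role as a set of accepted inferences); it mirrors only the structure of the argument, not any actual formalism from the literature:

```python
# A "global role" is modelled crudely as the set of inferences a theory
# accepts involving a term; meanings are assigned to roles by a function.

role_before = frozenset({("brown cow", "animal"), ("brown cow", "dangerous")})
role_after = frozenset({("brown cow", "animal")})  # B rejects one inference

def h_one_to_one(role):
    # One-to-one assignment: every distinct role is its own meaning,
    # so rejecting any inference changes the meaning.
    return role

CORE = {("brown cow", "animal")}  # a stipulated "privileged" subset

def h_star(role):
    # Many-to-one assignment: roles that agree on the core share a
    # meaning, so dropping a peripheral inference preserves meaning.
    return frozenset(i for i in role if i in CORE)

# Under the one-to-one map, B's change of mind changes the meaning;
# under the many-to-one map h*, it does not.
assert h_one_to_one(role_before) != h_one_to_one(role_after)
assert h_star(role_before) == h_star(role_after)
```

Note that stipulating a privileged core is exactly the move the moderate holist must defend, since it risks reintroducing the analytic/synthetic distinction discussed above.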
Tyler Burge, in "Individualism and the Mental," describes a different thought experiment that led to the notion of the social externalism of mental contents. In Burge's experiment, a person named Jeffray believes that he has arthritis in his thighs, and we can correctly attribute to him the (mistaken) belief that he has arthritis in his thighs because he is ignorant of the fact that arthritis is a disease of the joints. In another society, there is an individual named Goodfrey who also believes that he has arthritis in his thighs. But in the case of Goodfrey the belief is correct, because in the counterfactual society in which he lives "arthritis" is defined as a disease that can include the thighs. The question then arises of the possibility of reconciling externalism with holism. The one seems to be saying that meanings are determined by external relations (with society or the world), while the other suggests that meaning is determined by the relations of words (or beliefs) to all other words (or beliefs). Fredrik Stjernberg identifies at least three possible ways to reconcile them and then points out some objections. The first approach is to insist that there is no conflict because holists do not mean the phrase "determine beliefs" in the sense of individuation but rather of attribution. The problem with this is that if one is not a "realist" about mental states, then all we are left with is the attributions themselves, and if these are holistic, then we really have a form of hidden constitutive holism rather than a genuine attributive holism. But if one is a "realist" about mental states, then why not say that we can actually individuate them, and therefore that instrumentalist attributions are just a short-term strategy? Another approach is to say that externalism is valid only for certain beliefs and that holism only suggests that beliefs are determined only in part by their relations with other beliefs.
In this way, it is possible to say that externalism applies only to those beliefs not determined by their relations with other beliefs (or for the part of a belief that is not determined by its relations with other parts of other beliefs), and holism is valid to the extent that beliefs (or parts of beliefs) are not determined externally. The problem here is that the whole scheme is based on the idea that certain relations are constitutive (i.e. necessary) for the determination of the beliefs and others are not. Thus, we have reintroduced the idea of an analytic/synthetic distinction with all of the problems that that carries with it. A third possibility is to insist that there are two distinct types of belief: those determined holistically and those determined externally. Perhaps the external beliefs are those that are determined by their relations with the external world through observation and the holistic ones are the theoretical statements. But this implies the abandonment of a central pillar of holism: the idea that there can be no one to one correspondence between behavior and beliefs. There will be cases in which the beliefs that are determined externally correspond one to one with perceptual states of the subject. One last proposal is to carefully distinguish between so-called narrow content states and broad content states. The first would be determined in a holistic manner and the second non-holistically and externalistically. But how to distinguish between the two notions of content while providing a justification of the possibility of formulating an idea of narrow content that does not depend on a prior notion of broad content? These are some of the problems and questions that have still to be resolved by those who would adopt a position of "holist externalism" or "externalist holism".
See also
Confirmation holism
Inferential role semantics
Donald Davidson
W. V. Quine
Michael Dummett
References
Burge, Tyler (1979). "Individualism and the Mental." In Midwest Studies in Philosophy, 4, pp. 73-121.
Davidson, Donald (1984). Inquiries into Truth and Interpretation. Oxford: Clarendon Press.
Dummett, Michael (1991). The Logical Basis of Metaphysics. Cambridge, MA: Harvard University Press.
Fodor, J., and Lepore, E. (1992). Holism: A Shopper's Guide. Oxford: Blackwell.
Pagin, Peter (2002). "Are Compositionality and Holism Compatible?" In Olismo, Massimo dell'Utri (ed.). Macerata: Quodlibet.
Penco, Carlo (2002). "Olismo e Molecularismo." In Olismo, Massimo dell'Utri (ed.). Macerata: Quodlibet.
Putnam, Hilary (1975). "The Meaning of 'Meaning'." In Mind, Language and Reality. Cambridge: Cambridge University Press.
Putnam, Hilary (2002). "The Mind is Not Only Computation." In Olismo, Massimo dell'Utri (ed.). Macerata: Quodlibet.
Quine, W. V. (1953). From a Logical Point of View. Cambridge, MA: Harvard University Press.
Stjernberg, Fredrik (2002). "On the Combination of Holism and Externalism." In Olismo, Massimo dell'Utri (ed.). Macerata: Quodlibet.
Wittgenstein, Ludwig (1967). Philosophical Investigations. Oxford: Basil Blackwell.
Sphoṭa
Sphoṭa (Devanagari स्फोट, Sanskrit for "bursting, opening") is an important concept in the Indian grammatical tradition, relating to the problem of speech production: how the mind orders linguistic units into coherent discourse and meaning. The theory of sphoṭa is associated with Bhartṛhari (c. 5th century), an early figure in Indic linguistic theory, mentioned in the 670s by the Chinese traveller Yi-Jing. Bhartṛhari is the author of the Vākyapadīya ("[treatise] on words and sentences"). The work is divided into three books: the Brahma-kāṇḍa (or Āgama-samuccaya, "aggregation of traditions"), the Vākya-kāṇḍa, and the Pada-kāṇḍa (or Prakīrṇaka, "miscellaneous"). He theorized the act of speech as being made up of three stages: 1. Conceptualization by the speaker (Paśyantī, "idea") 2. Performance of speaking (Madhyamā, "medium") 3. Comprehension by the interpreter (Vaikharī, "complete utterance"). Bhartṛhari belongs to the śabda-advaita ("speech monistic") school, which identifies language and cognition. According to George Cardona, "Vākyapadīya is considered to be the major Indian work of its time on grammar, semantics and philosophy."
Vākyapadīya
The account of the Chinese traveller Yi-Jing places a firm terminus ante quem of AD 670 on Bhartṛhari. Scholarly opinion had formerly tended to place him in the 6th or 7th century; current consensus places him in the 5th century. By some traditional accounts, he is the same as the poet Bhartṛhari who wrote the Śatakatraya. In the Vākyapadīya, the term sphoṭa takes on a finer nuance, but there is some dissension among scholars as to what Bhartṛhari intended to say. Sphoṭa retains its invariant attribute, but now its indivisibility is emphasized and it operates at several levels. Bhartṛhari develops this doctrine in a metaphysical setting, where he views sphoṭa as the language capability of man, revealing his consciousness.[4] Indeed, the ultimate reality is also expressible in language: the śabda-brahman, or the
Eternal Verbum. Early indologists such as A. B. Keith felt that Bhartṛhari's sphoṭa was a mystical notion, owing to the metaphysical underpinning of the Vākyapadīya, where it is discussed; the notion of a "flash or insight" or "revelation" central to the concept also lent itself to this viewpoint. The modern view, however, is that it is perhaps a more psychological distinction. Bhartṛhari expands on the notion of sphoṭa in Patañjali, and discusses three levels: 1. varṇa-sphoṭa, at the syllable level. George Cardona feels that this remains an abstraction of sound, a further refinement on Patañjali's concept of the phoneme: it now stands for units of sound. 2. pada-sphoṭa, at the word level, and 3. vākya-sphoṭa, at the sentence level. In verse I.93, Bhartṛhari states that the sphoṭa is the universal or linguistic type (sentence-type or word-type), as opposed to their tokens (sounds).[1] He makes a distinction between sphoṭa, which is whole and indivisible, and nāda, the sound, which is sequenced and therefore divisible. The sphoṭa is the causal root, the intention, behind an utterance; in this sense it is similar to the notion of the lemma in most psycholinguistic theories of speech production. However, sphoṭa arises also in the listener, which differs from the lemma position. Uttering the nāda induces the same mental state or sphoṭa in the listener: it comes as a whole, in a flash of recognition or intuition (pratibhā, "shining forth"). This is particularly true for vākya-sphoṭa or sentence-vibration, where the entire sentence is thought of (by the speaker), and grasped (by the listener), as a whole. On the other hand, the modern Sanskritist S. D. Joshi feels that Bhartṛhari may not have been talking about meanings at all, but about a class of sounds. Bimal K. Matilal has tried to unify these views: he feels that for Bhartṛhari the very process of thinking involves vibrations, so that thought has some sound-like properties. Thought operates by śabdana, "speaking," so that the mechanisms of thought are the same as those of language. Indeed, Bhartṛhari seems to be saying that thought is not possible without language. This leads to a somewhat Whorfian position on the relationship between language and thought. The sphoṭa then is the carrier of this thought, as a primordial vibration. Sometimes the nāda-sphoṭa distinction is posited in terms of the signifier-signified mapping, but this is a misconception. In traditional Sanskrit linguistic discourse (e.g. in Kātyāyana), vācaka refers to the signifier, and vācya to the signified. The vācaka-vācya relation is eternal for Kātyāyana and the Mīmāṃsakas, but conventional among the Nyāya. In Bhartṛhari, however, this duality is given up in favour of a more holistic view: for him, there is no independent meaning or signified; the meaning is inherent in the word, in the sphoṭa itself.
Beyond Bhartṛhari
Sphoṭa theory remained widely influential in Indian philosophy of language and was the focus of much debate over several centuries. It was adopted by most scholars of Vyākaraṇa (grammar), but both the Mīmāṃsā and Nyāya schools rejected it, primarily on the grounds of compositionality. Adherents of the sphoṭa doctrine were holistic or non-compositional (a-khaṇḍa-pakṣa), suggesting that many larger units of language are understood as a whole, whereas the Mīmāṃsakas in particular proposed compositionality (khaṇḍa-pakṣa). According to the former, word meanings, if any, are arrived at only after analyzing the sentences in which they occur. This debate had many of the features animating present-day debates in the philosophy of language, over semantic holism for example. The Mīmāṃsakas felt that the sound-units or letters alone make up the word. The sound-units are uttered in sequence, but each leaves behind an impression, and the meaning is grasped only when the last unit is uttered. The position was most ably stated by Kumārila Bhaṭṭa (7th c.), who argued that the sphoṭas at the word and sentence level are after all composed of the smaller units and cannot be different from their combination.[5] In the end, however, the utterance is cognized as a whole, and this leads to the misperception of the sphoṭa as a single indivisible unit. Each sound unit in the utterance is eternal, and the actual sounds differ owing to differences in manifestation.
The Nyāya view is enunciated among others by Jayanta (9th c.), who argues against the Mīmāṃsā position by saying that the sound units as uttered are different; e.g. for the sound [g], we infer its "g-hood" based on its similarity to other such sounds, and not because of any underlying eternal. Also, the vācaka-vācya linkage is viewed as arbitrary and conventional, and not eternal. However, he agrees with Kumārila in terms of the compositionality of an utterance. Throughout the second millennium, a number of treatises discussed the sphoṭa doctrine. Particularly notable is Nāgeśabhaṭṭa's Sphoṭavāda (18th c.). Nāgeśa clearly defines sphoṭa as a carrier of meaning and identifies eight levels, some of which are divisible. In modern times, scholars of Bhartṛhari have included Ferdinand de Saussure, who did his doctoral work on the genitive in Sanskrit and lectured on Sanskrit and Indo-European languages in Paris and at the University of Geneva for nearly three decades. It is thought that he might have been influenced by some ideas of Bhartṛhari, particularly the sphoṭa debate. In particular, his description of the sign, as composed of the signifier and the signified, where these entities are not separable (the whole mapping from sound to denotation constitutes the sign), seems to have some colourings of sphoṭa in it. Many other prominent European scholars around 1900, including linguists such as Leonard Bloomfield and Roman Jakobson, may have been influenced by Bhartṛhari.[6]
See also
Śabda
Vāc
Nyāya
References
[1] Bimal Krishna Matilal (1990). The Word and the World: India's Contribution to the Study of Language. Oxford.
[2] Brough, J. (1952). "Audumbarayana's Theory of Language." Bulletin of the School of Oriental and African Studies, University of London, 14 (1): 73–77.
[3] Dominik Wujastyk (1993). Metarules of Pāṇinian Grammar. Forsten. Wujastyk notes, however, that there is no early evidence linking someone called Vyāḍi with a text called Saṃgraha that is said to be about language philosophy, and that the connection between the two has grown up through early misreadings of the Mahābhāṣya. Furthermore, the Saṃgraha is mainly referred to for having an opinion about the connection between a word and its meaning (śabdārthasaṃbandha).
[4] Coward, Harold G. (1997). The Sphota Theory of Language: A Philosophical Analysis. Motilal Banarsidass. ISBN 8120801814. The first part of this text is a good review of the metaphysical underpinnings in Bhartṛhari.
[5] Gaurinath Sastri, A Study in the Dialectics of Sphota, Motilal Banarsidass (1981).
[6] Frits Staal, "The science of language," Chapter 16 in Gavin D. Flood, ed., The Blackwell Companion to Hinduism. Blackwell Publishing, 2003. ISBN 0631215352. pp. 357-358.
[7] http://openlibrary.org/b/OL1378779M
Alessandro Graheli, Teoria dello Sphoṭa nel sesto Āhnika della Nyāyamañjarī di Jayantabhaṭṭa, University La Sapienza thesis, Rome (2003).
Clear, E. H., "Hindu philosophy," in E. Craig (ed.), Routledge Encyclopedia of Philosophy, London: Routledge (1998) (http://www.rep.routledge.com/article/F002SECT4)
Saroja Bhate, Johannes Bronkhorst (eds.), Bhartṛhari - Philosopher and Grammarian: Proceedings of the First International Conference on Bhartṛhari, University of Poona, January 6–8, 1992, Motilal Banarsidass Publishers, 1997, ISBN 81-208-1198-4
K. Raghavan Pillai (trans.), Bhartrihari. The Vākyapadīya, Critical Texts of Cantos I and II with English Translation. Delhi: Motilal Banarsidass, 1971.
Coward, Harold G., The Sphota Theory of Language: A Philosophical Analysis, Delhi: Motilal Banarsidass, 1980.
Herzberger, Radhika, Bhartrihari and the Buddhists, Dordrecht: D. Reidel/Kluwer Academic Publishers, 1986.
Houben, Jan E.M., The Sambandha-samuddeśa and Bhartrihari's Philosophy of Language, Groningen: Egbert Forsten, 1995.
Iyer, Subramania, K.A., Bhartrihari. A Study of Vākyapadīya in the Light of Ancient Commentaries, Poona: Deccan College Postgraduate Research Institute, 1969, reprint 1997.
Shah, K.J., "Bhartrihari and Wittgenstein," in Perspectives on the Philosophy of Meaning (Vol. I, No. 1. New Delhi.) 1/1 (1990): 80-95.
Patnaik, Tandra, Śabda: A Study of Bhartrhari's Philosophy of Language, New Delhi: DK Printworld, 1994, ISBN 81-246-0028-7.
Maria Piera Candotti, Interprétations du discours métalinguistique : la fortune du sūtra A 1 1 68 chez Patañjali et Bhartṛhari, Kykion studi e testi. 1, Scienze delle religioni, Firenze University Press, 2006, Diss. Univ. Lausanne, 2004, ISBN 978-88-8453-452-1
External links
the doctrine of sphota (http://www.languageinindia.com/june2004/anirbansphota1.html) by Anirban Dash Bhartrihari (http://www.iep.utm.edu/b/bhartrihari.htm) by S. Theodorou
Structured programming
To many people, Dijkstra's letter to the editor of Communications of the ACM, published in March 1968, marks the true beginning of structured programming. Structured programming can be seen as a subset or subdiscipline of imperative programming, one of the major programming paradigms. It is most famous for removing or reducing reliance on the GOTO statement. Dijkstra's letter, published under the heading "Go To Statement Considered Harmful", was instrumental in the trend towards structured programming; its description of the inverse relationship between a programmer's ability and the density of goto statements in his programs is often repeated.[1] Historically, several different structuring techniques or methodologies have been developed for writing structured programs. The most common are:
1. Edsger Dijkstra's structured programming, where the logic of a program is a structure composed of similar sub-structures in a limited number of ways. This reduces understanding a program to understanding each structure on its own, and in relation to that containing it, a useful separation of concerns.
2. A view derived from Dijkstra's which also advocates splitting programs into sub-sections with a single point of entry, but is strongly opposed to the concept of a single point of exit.
3. Data Structured Programming or Jackson Structured Programming, which is based on aligning data structures with program structures. This approach applied the fundamental structures proposed by Dijkstra, but as constructs that used the high-level structure of a program to be modeled on the underlying data structures being processed. There are at least three major approaches to data structured program design, proposed by Jean-Dominique Warnier, Michael A. Jackson, and Ken Orr.
The two latter meanings for the term "structured programming" are more common, and that is what this article will discuss.
Years after Dijkstra (1969), object-oriented programming (OOP) was developed to handle very large or complex programs (see below: Object-oriented comparison).
Design
Structured programming is often (but not always) associated with a "top-down" approach to design.
History
Theoretical foundation
The structured program theorem provides the theoretical basis of structured programming. It states that three ways of combining programs (sequencing, selection, and iteration) are sufficient to express any computable function. This observation did not originate with the structured programming movement; these structures are sufficient to describe the instruction cycle of a central processing unit, as well as the operation of a Turing machine. Therefore a processor is always executing a "structured program" in this sense, even if the instructions it reads from memory are not part of a structured program. However, authors usually credit the result to a 1966 paper by Böhm and Jacopini, possibly because Dijkstra cited this paper himself. The structured program theorem does not address how to write and analyze a usefully structured program. These issues were addressed during the late 1960s and early 1970s, with major contributions by Dijkstra, Robert W. Floyd, Tony Hoare, and David Gries.
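As a minimal illustration, Euclid's algorithm can be written in C using only the three structures named by the theorem; the function name is illustrative:

```c
#include <assert.h>

/* Greatest common divisor of two positive integers, expressed with
   only the three structures of the structured program theorem:
   sequencing (statement after statement), selection (if/else),
   and iteration (while) -- no goto is required. */
int gcd(int a, int b)
{
    while (a != b) {        /* iteration */
        if (a > b)          /* selection */
            a = a - b;      /* sequencing inside each branch */
        else
            b = b - a;
    }
    return a;
}
```

Any flowchart built from jumps can, by the theorem, be rewritten in this style, although possibly at the cost of extra variables or duplicated code.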
Debate
P. J. Plauger, an early adopter of structured programming, described his reaction to the structured program theorem: Us [sic] converts waved this interesting bit of news under the noses of the unreconstructed assembly-language programmers who kept trotting forth twisty bits of logic and saying, 'I betcha can't structure this.' Neither the proof by Böhm and Jacopini nor our repeated successes at writing structured code brought them around one day sooner than they were ready to convince themselves. In 1968 a letter from Dijkstra appeared in Communications of the ACM with the heading "Go to statement considered harmful." The letter, which cited the Böhm and Jacopini proof, called for the abolishment of unconstrained GOTO from high-level languages in the interest of improving code quality. This letter is usually cited as the beginning of the structured programming debate. Although, as Plauger mentioned, many programmers unfamiliar with the theorem doubted its claims, the more significant dispute in the ensuing years was whether structured programming could actually improve software's clarity, quality, and development time enough to justify training programmers in it. Dijkstra claimed that limiting the number of structures would help to focus a programmer's thinking, and would simplify the task of ensuring the program's correctness by dividing analysis into manageable steps. In his 1969 Notes on Structured Programming, Dijkstra wrote: When we now take the position that it is not only the programmer's task to produce a correct program but also to demonstrate its correctness in a convincing manner, then the above remarks have a profound influence on the programmer's activity: the object he has to produce must be usefully structured. [...] In what follows it will become apparent that program correctness is not my only concern, program adaptability or manageability will be another [...]
Donald Knuth accepted the principle that programs must be written with provability in mind, but he disagreed (and still disagrees) with abolishing the GOTO statement. In his 1974 paper, "Structured Programming with Goto Statements", he gave examples where he believed that a direct jump leads to clearer and more efficient code without sacrificing provability. Knuth proposed a looser structural constraint: It should be possible to draw a program's flow chart with all forward branches on the left, all backward branches on the right, and no branches crossing each other. Many of those knowledgeable in compilers and graph theory have advocated allowing only reducible flow graphs. Structured programming theorists gained a major ally in the 1970s after IBM researcher Harlan Mills applied his interpretation of structured programming theory to the development of an indexing system for the New York Times research file. The project was a great engineering success, and managers at other companies cited it in support of adopting structured programming, although Dijkstra criticized the ways that Mills's interpretation differed from the published work. As late as 1987 it was still possible to raise the question of structured programming in a computer science journal. Frank Rubin did so in that year with a letter, "'GOTO considered harmful' considered harmful." Numerous objections followed, including a response from Dijkstra that sharply criticized both Rubin and the concessions other writers made when responding to him.
Outcome
By the end of the 20th century nearly all computer scientists were convinced that it is useful to learn and apply the concepts of structured programming. High-level programming languages that originally lacked programming structures, such as FORTRAN, COBOL, and BASIC, now have them.
Common deviations
Exception handling
Although there is almost never a reason to have multiple points of entry to a subprogram, multiple exits are often used to reflect that a subprogram may have no more work to do, or may have encountered circumstances that prevent it from continuing. A typical example of a simple procedure would be reading data from a file and processing it:

    open file;
    while (reading not finished) {
        read some data;
        if (error) {
            stop the subprogram and inform rest of the program about the error;
        }
    }
    process read data;
    finish the subprogram;

The "stop and inform" may be achieved by throwing an exception, a second return from the procedure, a labelled loop break, or even a goto. As the procedure has two exit points, it breaks the rules of Dijkstra's structured programming. Coding it in accordance with the single point of exit rule would be very cumbersome, and if there were more possible error conditions, with different cleanup rules, a single-exit procedure would be extremely hard to read and understand, very likely even more so than an unstructured one with control handled by goto statements. Most languages have adopted the multiple-points-of-exit form of structured programming: C allows multiple paths to a structure's exit (such as "continue", "break", and "return"), and newer languages also have "labelled breaks" (similar to break, but allowing breaking out of more than just the innermost loop) and exceptions.
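A C sketch of the same shape, with the file replaced by an array of readings so the example is self-contained (a negative reading stands in for a read error; all names are illustrative):

```c
#include <assert.h>

/* Sums a sequence of readings, with an early (second) exit point that
   informs the caller when an "error" reading is encountered.
   Returns 0 on success, -1 on error. */
int process_readings(const int *data, int count, int *sum_out)
{
    int sum = 0;
    for (int i = 0; i < count; i++) {   /* "reading not finished" */
        if (data[i] < 0)                /* "if (error)" */
            return -1;                  /* early exit: inform caller */
        sum += data[i];                 /* "read some data" */
    }
    *sum_out = sum;                     /* "process read data" */
    return 0;                           /* normal exit */
}
```

The early return is the second exit point that "stops and informs"; a single-exit version would need a flag variable tested on every iteration.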
State machines
Some programs, particularly parsers and communications protocols, have a number of states that follow each other in a way that is not easily reduced to the basic structures. It is possible to structure these systems by making each state-change a separate subprogram and using a variable to indicate the active state (see trampoline). However, some programmers (including Knuth) prefer to implement the state-changes with a jump to the new state.
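A minimal sketch of the state-variable approach in C (the grammar and names are invented for illustration): a parser for unsigned decimal integers in which each state is one case of a switch, and a variable rather than a jump records the active state:

```c
#include <assert.h>

enum state { START, IN_NUMBER, DONE, ERROR };

/* Parses an unsigned decimal integer from s into *out.
   Returns 0 on success, -1 on a malformed input. */
int parse_uint(const char *s, int *out)
{
    enum state st = START;
    int value = 0;
    while (st != DONE && st != ERROR) {
        char c = *s;
        switch (st) {
        case START:                       /* expecting the first digit */
            if (c >= '0' && c <= '9') { value = c - '0'; st = IN_NUMBER; s++; }
            else st = ERROR;
            break;
        case IN_NUMBER:                   /* digits until end of string */
            if (c >= '0' && c <= '9') { value = value * 10 + (c - '0'); s++; }
            else if (c == '\0') st = DONE;
            else st = ERROR;
            break;
        default:
            break;
        }
    }
    if (st == ERROR)
        return -1;
    *out = value;
    return 0;
}
```

Programmers who prefer the jump-based style would instead make each state a label and implement a state-change as a goto to the new state's label.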
Object-oriented comparison
In the 1960s, language design was often based on textbook examples of programs, which were generally small (due to the size of a textbook); however, when programs became very large, the focus changed. In small programs, the most common statement is generally the assignment statement; however, in large programs (over 10,000 lines), the most common statement is typically the procedure-call to a subprogram. Ensuring parameters are correctly passed to the correct subprogram becomes a major issue.

Many small programs can be handled by coding a hierarchy of structures; however, in large programs, the organization is more a network of structures, and insistence on hierarchical structuring for data and procedures can produce cumbersome code with large amounts of "tramp data". For example, a text-display program that allows dynamically changing the font-size of the entire screen would be very cumbersome if coded by passing font-size data through a hierarchy. Instead, a subsystem could be used to control the font data through a set of accessor functions that set or retrieve data from a common area controlled by that font-data subsystem. Databases are a common way around tramping.

The FORTRAN language has used labelled COMMON-blocks to separate global program data into subsystems (no longer global) to allow program-wide, network-style access to data, such as font-size, but only by specifying the particular COMMON-block name. Confusion could occur in FORTRAN by coding alias names and changing data-types when referencing the same labelled COMMON-block yet mapping alternate variables to overlay the same area of memory. Regardless, the labelled-COMMON concept was very valuable in organizing massive software systems and led to the use of object-oriented programming to define subsystems of centralized data controlled by accessor functions. Changing data into other data-types was performed by explicitly converting, or casting, data from the original variables.
Global subprogram names were recognized as just as dangerous as (or even more dangerous than) global variables or blank COMMON, and subsystems were limited to isolated groups of subprogram names, such as naming with unique prefixes or using Java package names. Although structuring a program into a hierarchy might help to clarify some types of software, even for some special types of large programs, a small change, such as requesting a user-chosen new option (text font-color) could cause a massive ripple-effect with changing multiple subprograms to propagate the new data into the program's hierarchy. The object-oriented approach is more flexible, by separating a program into a network of subsystems, with each controlling their own data, algorithms, or devices across the entire program, but only accessible by first specifying named access to the subsystem object-class, not just by accidentally coding a similar global variable name. Rather than relying on a structured-programming hierarchy chart, object-oriented programming needs a call-reference index to trace which subsystems or classes are accessed from other locations. Modern structured systems have tended away from deep hierarchies found in the 1970s and tend toward "event driven" architectures, where various procedural events are designed as relatively independent tasks. Structured programming, as a forerunner to object-oriented programming, noted some crucial issues, such as emphasizing the need for a single exit-point in some types of applications, as in a long-running program with a procedure that allocates memory and should deallocate that memory before exiting and returning to the calling procedure. Memory leaks that cause a program to consume vast amounts of memory could be traced to a failure to observe a single exit-point in a subprogram needing memory deallocation.
Similarly, structured programming, in warning of the rampant use of goto-statements, led to a recognition of top-down discipline in branching, typified by Ada's GOTO that cannot branch to a statement-label inside another code block. However, "GOTO WrapUp" became a balanced approach to handling a severe anomaly without losing control of the major exit-point to ensure wrap-up (for deallocating memory, deleting temporary files, and such), when a severe issue interrupts complex, multi-level processing and wrap-up code must be performed before exiting. The various concepts behind structured programming can help to understand the many facets of object-oriented programming.
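The "GOTO WrapUp" pattern survives in everyday C practice; a sketch with invented names, in which every severe anomaly branches forward to one wrap-up block so that control always leaves through the single major exit point:

```c
#include <stdlib.h>
#include <string.h>

/* Makes a heap copy of src (via an intermediate copy, to give the
   example two failure points).  On any severe anomaly, control jumps
   forward to the wrap-up block, which deallocates and returns.
   Returns 0 and sets *out on success, -1 on failure. */
int duplicate_twice(const char *src, char **out)
{
    int status = -1;
    char *first = NULL, *second = NULL;

    first = malloc(strlen(src) + 1);
    if (first == NULL)
        goto wrapup;                 /* severe anomaly: bail out */
    strcpy(first, src);

    second = malloc(strlen(first) + 1);
    if (second == NULL)
        goto wrapup;
    strcpy(second, first);

    *out = second;
    second = NULL;                   /* ownership passed to caller */
    status = 0;

wrapup:                              /* single exit: free and return */
    free(first);                     /* free(NULL) is a safe no-op */
    free(second);
    return status;
}
```

The wrap-up label is the single controlled exit; adding a new failure point requires only another forward goto, not restructuring the nesting.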
See also
Control flow (more detail of high-level control structures)
Minimal evaluation
Programming paradigms
Structured exception handling
Structure chart
Switch statement, a case of multiple GOTOs
References
1. Edsger Dijkstra, Notes on Structured Programming [2], pg. 6
2. Böhm, C. and Jacopini, G.: Flow diagrams, Turing machines and languages with only two formation rules, CACM 9(5), 1966.
3. Michael A. Jackson, Principles of Program Design, Academic Press, London, 1975.
4. O.-J. Dahl, E. W. Dijkstra, C. A. R. Hoare, Structured Programming, Academic Press, London, 1972, ISBN 0-12-200550-3. This volume includes an expanded version of the Notes on Structured Programming, above, including an extended example of using the structured approach to develop a backtracking algorithm to solve the 8 Queens problem. A PDF version is in the ACM Classic Books Series [3]. Note that the third chapter of this book, by Dahl, describes an approach which is easily recognized as object-oriented programming. It can be seen as another way to "usefully structure" a program to aid in showing that it is correct.
[1] Edsger Wybe Dijkstra, Go to statement considered harmful
[2] http://www.cs.utexas.edu/users/EWD/ewd02xx/EWD249.PDF
[3] http://portal.acm.org/citation.cfm?id=1243380&jmp=cit&coll=portal&dl=GUIDE&CFID=://www.acm.org/publications/&CFTOKEN=www.acm.org/publications/#CIT
External links
Notes on Structured Programming and Variation Analysis (Programming Wisdom Center) (http://www.geocities.com/tablizer/struc.htm)
Subroutine
In computer science, a subroutine (also called procedure, method, function, or routine) is a portion of code within a larger program that performs a specific task and is relatively independent of the remaining code. As the name "subprogram" suggests, a subroutine behaves in much the same way as a computer program that is used as one step in a larger program or another subprogram. A subroutine is often coded so that it can be started ("called") several times and/or from several places during a single execution of the program, including from other subroutines, and then branch back (return) to the next instruction after the "call" once the subroutine's task is done. Subroutines are a powerful programming tool,[1] and the syntax of many programming languages includes support for writing and using them. Judicious use of subroutines (for example, through the structured programming approach) will often substantially reduce the cost of developing and maintaining a large program, while increasing its quality and reliability.[2] Subroutines, often collected into libraries, are an important mechanism for sharing and trading software. The discipline of object-oriented programming is based on objects and methods (which are subroutines attached to these objects or object classes). In the compilation technique called threaded code, the executable program is basically a sequence of subroutine calls. Maurice Wilkes, David Wheeler, and Stanley Gill are credited with the invention of this concept, which they referred to as closed subroutine.[3]
Main concepts
The content of a subroutine is its body, the piece of program code that is executed when the subroutine is called or invoked. A subroutine may be written so that it expects to obtain one or more data values from the calling program (its parameters or arguments). It may also return a computed value to its caller (its return value), or provide various result values or out(put) parameters. Indeed, a common use of subroutines is to implement mathematical functions, in which the purpose of the subroutine is purely to compute one or more results whose values are entirely determined by the parameters passed to the subroutine. (Examples might include computing the logarithm of a number or the determinant of a matrix.) However, a subroutine call may also have side effects, such as modifying data structures in the computer's memory, reading from or writing to a peripheral device, creating a file, halting the program or the machine, or even delaying the program's execution for a specified time. A subprogram with side effects may return different results each time it is called, even if it is called with the same arguments. An example is a random number function, available in many languages, that returns a different random-looking number each time it is called. The widespread use of subroutines with side effects is a characteristic of imperative programming languages. A subroutine can be coded so that it may call itself recursively, at one or more places, in order to perform its task. This technique allows direct implementation of functions defined by mathematical induction and recursive divide and conquer algorithms. A subroutine whose purpose is to compute a single boolean-valued function (that is, to answer a yes/no question) is called a predicate. In logic programming languages, often all subroutines are called "predicates", since they primarily determine success or failure.
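The contrast between a pure mathematical function and a subprogram with side effects can be sketched in C (the names are illustrative):

```c
#include <stdlib.h>

/* A subroutine used as a mathematical function: the result depends
   only on the argument, with no side effects, so equal arguments
   always give equal results. */
int square(int x)
{
    return x * x;
}

/* A subroutine with a side effect: each call advances the hidden
   state of the random generator, so two calls with the same
   (empty) argument list may return different values. */
int roll_die(void)
{
    return rand() % 6 + 1;
}
```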
Language support
High-level programming languages usually include specific constructs for:
delimiting the part of the program (body) that comprises the subroutine
assigning a name to the subroutine
specifying the names and/or types of its parameters and/or return values
providing a private naming scope for its temporary variables
identifying variables outside the subroutine that are accessible within it
calling the subroutine
providing values to its parameters
specifying the return values from within its body
returning to the calling program
disposing of the values returned by a call
handling any exceptional conditions encountered during the call
packaging subroutines into a module, library, object, class, etc.
Some programming languages, such as Visual Basic .NET, Pascal, Fortran, and Ada, distinguish between "functions" or "function subprograms", which provide an explicit return value to the calling program, and "subroutines" or "procedures", which do not. In those languages, function calls are normally embedded in expressions (e.g., a sqrt function may be called as y = z + sqrt(x)); whereas procedure calls behave syntactically as statements (e.g., a print procedure may be called as if x > 0 then print(x)). Other languages, such as C and Lisp, do not make this distinction, and treat those terms as synonymous. In strictly functional programming languages such as Haskell, subprograms can have no side effects, and will always return the same result if repeatedly called with the same arguments. Such languages typically only support functions, since subroutines that do not return a value have no use unless they can cause a side effect. A language's compiler will usually translate procedure calls and returns into machine instructions according to a well-defined calling convention, so that subroutines can be compiled separately from the programs that call them. The instruction sequences corresponding to call and return statements are called the procedure's prologue and epilogue.
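In C, one of the languages that does not make the distinction, both kinds are declared the same way; a void return type simply plays the role of a "procedure" (example names are illustrative):

```c
#include <stdio.h>

/* Behaves like a "function": it returns a value, so a call such as
   y = z + average(a, b); can be embedded in an expression. */
double average(double a, double b)
{
    return (a + b) / 2.0;
}

/* Behaves like a "procedure": the void return type means a call such
   as print_if_positive(x); stands alone as a statement. */
void print_if_positive(double x)
{
    if (x > 0)
        printf("%f\n", x);
}
```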
Advantages
The advantages of breaking a program into subroutines include:
decomposition of a complex programming task into simpler steps: this is one of the two main tools of structured programming, along with data structures
reducing the duplication of code within a program
enabling the reuse of code across multiple programs
dividing a large programming task among various programmers, or various stages of a project
hiding implementation details from users of the subroutine
Disadvantages
The invocation of a subroutine (rather than using in-line code) imposes some computational overhead in the call mechanism itself. The subroutine typically requires standard housekeeping code at both entry to and exit from the function (the function prologue and epilogue), usually saving general-purpose registers and the return address as a minimum.
History
Language support
In the (very) early assemblers, subroutine support was limited. Subroutines were not explicitly separated from each other or from the main program, and indeed the source code of a subroutine could be interspersed with that of other subprograms. Some assemblers would offer predefined macros to generate the call and return sequences. Later assemblers (1960s) had much more sophisticated support for both in-line and separately assembled subroutines that could be linked together.
Self-modifying code
The first use of subprograms was on early computers that were programmed in machine code or assembly language, and did not have a specific call instruction. On those computers, each subroutine call had to be implemented as a sequence of lower-level machine instructions that relied on self-modifying code: typically, the call sequence replaced the operand of a branch instruction at the end of the procedure's body so that it would return to the proper location in the calling program (the return address, usually just after the instruction that jumped into the subroutine).
Subroutine libraries
Even with this cumbersome approach, subroutines proved very useful. For one thing they allowed the same code to be used in many different programs. Moreover, memory was a very scarce resource on early computers, and subroutines allowed significant savings in program size. In many early computers, the program instructions were entered into memory from a punched paper tape. Each subroutine could then be provided by a separate piece of tape, loaded or spliced before or after the main program; and the same subroutine tape could then be used by many different programs. A similar approach was used in computers whose main input was through punched cards. The name "subroutine library" originally meant a library, in the literal sense, which kept indexed collections of such tapes or card decks for collective use.
Jump to subroutine
Another advance was the "jump to subroutine" instruction, which combined the saving of the return address with the calling jump, thereby minimizing overhead significantly. In the IBM System/360, for example, the branch instructions BAL or BALR, designed for procedure calling, would save the return address in a processor register specified in the instruction. To return, the subroutine had only to execute an indirect branch instruction (BR) through that register. If the subroutine needed that register for some other purpose (such as calling another subroutine), it would save the register's contents to a private memory location or a register stack. In the HP 2100, the JSB instruction would perform a similar task, except that the return address was stored in the memory location that was the target of the branch. Execution of the procedure would actually begin at the next memory location. In the HP 2100 assembly language, one would write, for example

           ...
           JSB MYSUB
    BB     ...

to call a subroutine called MYSUB from the main program. The subroutine would be coded as

    MYSUB  NOP          (Storage for MYSUB's return address.)
    AA     ...          (Start of MYSUB's body.)
           ...
           JMP MYSUB,I  (Returns to the calling program.)
The JSB instruction placed the address of the NEXT instruction (namely, BB) into the location specified as its operand (namely, MYSUB), and then branched to the NEXT location after that (namely, AA = MYSUB + 1). The subroutine could then return to the main program by executing the indirect jump JMP MYSUB,I which branched to the location stored at location MYSUB. Compilers for Fortran and other languages could easily make use of these instructions when available. This approach supported multiple levels of calls; however, since the return address, parameters, and return values of a subroutine were assigned fixed memory locations, it did not allow for recursive calls. Incidentally, a similar technique was used by Lotus 1-2-3, in the early 1980s, to discover the recalculation dependencies in a spreadsheet. Namely, a location was reserved in each cell to store the "return" address. Since circular references are not allowed for natural recalculation order, this allows a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC.
Call stack
Most modern implementations use a call stack, a special case of the stack data structure, to implement subroutine calls and returns. Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address. The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in RISC and VLIW architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose. The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter.
Some designs, notably some Forth implementations, used two separate stacks, one mainly for control information (like return addresses and loop counters) and the other for data. The former was, or worked like, a call stack and was only indirectly accessible to the programmer through other language constructs while the latter was more directly accessible. When stack-based procedure calls were first introduced, an important motivation was to save precious memory. With this scheme, the compiler does not have to reserve separate space in memory for the private data (parameters, return address, and local variables) of each procedure. At any moment, the stack contains only the private data of the calls that are currently active (namely, which have been called but haven't returned yet). Because of the ways in which programs were usually assembled from libraries, it was (and still is) not uncommon to find programs that include thousands of subroutines, of which only a handful are active at any given moment. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management. However, another advantage of the call stack method is that it allows recursive subroutine calls, since each nested call to the same procedure gets a separate instance of its private data.
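The recursion property can be seen in a short C example: each nested call pushes a fresh stack frame and therefore receives its own private copy of the parameter n:

```c
#include <assert.h>

/* Each recursive call gets its own stack frame, so this frame's n is
   preserved while the nested call runs -- the property of the call
   stack that makes recursion possible. */
unsigned long factorial(unsigned int n)
{
    if (n <= 1)
        return 1;                    /* innermost frame returns first */
    return n * factorial(n - 1);
}
```

With the earlier fixed-memory-location schemes, the inner call would have overwritten the outer call's n and return address, so such a definition could not work.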
Delayed stacking
One disadvantage of the call stack mechanism is the increased cost of a procedure call and its matching return. The extra cost includes incrementing and decrementing the stack pointer (and, in some architectures, checking for stack overflow), and accessing the local variables and parameters by frame-relative addresses, instead of absolute addresses. The cost may be realized in increased execution time, or increased processor complexity, or both. This overhead is most obvious and objectionable in leaf procedures, which return without making any procedure calls themselves. To reduce that overhead, many modern compilers try to delay the use of a call stack until it is really needed. For example, the call of a procedure P may store the return address and parameters of the called procedure in certain processor registers, and transfer control to the procedure's body by a simple jump. If procedure P returns without making any other call, the call stack is not used at all. If P needs to call another procedure Q, it will then use the call stack to save the contents of any registers (such as the return address) that will be needed after Q returns.
char function3(int number)
{
    return "MTWTFSS"[number];
}

This function converts a number between 0 and 6 into the initial letter of the corresponding day of the week, namely 0 to 'M', 1 to 'T', ..., 6 to 'S'. The result of calling it might be assigned to a variable, e.g., num_day = function3(number);.

void function4(int *pointer_to_var)
{
    (*pointer_to_var)++;
}

This function does not return a value but modifies the variable whose address is passed as the parameter; it would be called with "function4(&variable_to_increment);".
By value [ByVal]
A way of passing the value of an argument to a procedure instead of passing the address. This allows the procedure to access a copy of the variable. As a result, the variable's actual value can't be changed by the procedure to which it is passed.
By reference [ByRef]
A way of passing the address of an argument to a procedure instead of passing the value. This allows the procedure to access the actual variable. As a result, the variable's actual value can be changed by the procedure to which it is passed. Unless otherwise specified, arguments are passed by reference.
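The two conventions map directly onto C, which passes every argument by value but can simulate by-reference passing with a pointer (the names below are illustrative):

```c
#include <assert.h>

/* "By value": the procedure receives a copy of the argument, so the
   caller's variable cannot be changed. */
void increment_by_value(int x)
{
    x = x + 1;        /* modifies only the local copy */
}

/* "By reference" (simulated with a pointer): the procedure receives
   the variable's address, so it can change the actual variable. */
void increment_by_reference(int *x)
{
    *x = *x + 1;
}
```

After int n = 5;, calling increment_by_value(n) leaves n unchanged, while increment_by_reference(&n) changes n to 6.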
Public (optional)
Indicates that the Function procedure is accessible to all other procedures in all modules. If used in a module that contains an Option Private statement, the procedure is not available outside the project.
Private (optional)
Indicates that the Function procedure is accessible only to other procedures in the module where it is declared.
Friend (optional)
Used only in a class module. Indicates that the Function procedure is visible throughout the project, but not visible to a controller of an instance of an object.

    Private Function Function1()
        ' Some Code Here
    End Function

The function does not return a value and has to be called as a stand-alone function, e.g., Function1
    Private Function Function2() As Integer
        Function2 = 5
    End Function

This function returns a result (the number 5), and the call can be part of an expression, e.g., x + Function2()

    Private Function Function3(ByVal intValue As Integer) As String
        Dim strArray As Variant
        strArray = Array("M", "T", "W", "T", "F", "S", "S")
        Function3 = strArray(intValue)
    End Function

This function converts a number between 0 and 6 into the initial letter of the corresponding day of the week, namely 0 to 'M', 1 to 'T', ..., 6 to 'S'. The result of calling it might be assigned to a variable, e.g., num_day = Function3(number).

    Private Function Function4(ByRef intValue As Integer)
        intValue = intValue + 1
    End Function

This function does not return a value but modifies the variable whose address is passed as the parameter; it would be called with "Function4(variable_to_increment)".
If a subprogram can function properly even when called while another execution of it is already in progress, that subprogram is said to be re-entrant. A recursive subprogram must be re-entrant. Re-entrant subprograms are also useful in multi-threaded situations, since multiple threads can call the same subprogram without fear of interfering with each other. In the IBM CICS transaction processing system, "quasi-reentrant" was a slightly less restrictive, but similar, requirement for application programs that were shared by many threads. In a multi-threaded environment, there is generally more than one stack. An environment that fully supports coroutines or lazy evaluation may use data structures other than stacks to store its activation records.
Overloading
In strongly typed languages, it is sometimes desirable to have a number of functions with the same name, but operating on different types of data or with different parameter profiles. For example, a square root function might be defined to operate on reals, complex values or matrices. The algorithm to be used in each case is different, and the return result may be different. By writing three separate functions with the same name, the programmer has the convenience of not having to remember different names for each type of data. Further, if a subtype can be defined for the reals, to separate positive and negative reals, two functions can be written for the reals: one to return a real when the parameter is positive, and another to return a complex value when the parameter is negative. In object-oriented programming, when a series of functions with the same name can accept different parameter profiles or parameters of different types, each of the functions is said to be overloaded.
As another example, a subroutine might construct an object that will accept directions and trace its path to these points on screen. There is a plethora of parameters that could be passed in to the constructor (color of the trace, starting x and y co-ordinates, trace speed). If the programmer wanted the constructor to be able to accept only the color parameter, then he could write another constructor that accepts only color, which in turn calls the full constructor, passing in a set of default values for all the other parameters (x and y would generally be centered on screen or placed at the origin, and the speed would be set to another value of the coder's choosing).
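The square-root example above can be approximated in Python, which resolves calls at run time rather than at compile time; functools.singledispatch gives a comparable "one name, many types" effect. The function name and the positive/negative routing are illustrative, mirroring the subtype split described above:

```python
from functools import singledispatch
import math
import cmath

@singledispatch
def square_root(value):
    # Fallback for unregistered types.
    raise TypeError(f"unsupported type: {type(value).__name__}")

@square_root.register
def _(value: float):
    # Negative reals are routed to the complex result, mirroring the
    # positive/negative subtype split described in the text.
    return math.sqrt(value) if value >= 0 else complex(0, math.sqrt(-value))

@square_root.register
def _(value: complex):
    return cmath.sqrt(value)

assert square_root(9.0) == 3.0
assert square_root(-4.0) == 2j
```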
Closures
A closure is a subprogram together with the values of some of its variables captured from the environment in which it was created. Closures were a notable feature of the Lisp programming language, which was introduced by John McCarthy.
Conventions
A number of conventions for the coding of subprograms have been developed. It is commonly considered preferable that the name of a subprogram be a verb when it performs a task, an adjective when it makes an inquiry, and a noun when it is used as a substitute for a variable. Experienced programmers recommend that a subprogram perform only one task; if a subprogram performs more than one task, it should be split into separate subprograms. They argue that subprograms are key components in maintaining code, and that their roles in the program must be distinct. Some advocate that each subprogram should have minimal dependency on other pieces of code. For example, they see the use of global variables as unwise, because it adds tight coupling between subprograms and global variables; if such coupling is not necessary, they advise refactoring subprograms to take parameters instead. This practice is controversial because it tends to increase the number of parameters passed to subprograms.
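The global-variable concern can be shown with a minimal before/after sketch (the names and the tax example are hypothetical):

```python
# Before: tight coupling through a global variable.
TAX_RATE = 0.2

def net_price_coupled(gross):
    return gross * (1 - TAX_RATE)  # silently depends on module-level state

# After: the dependency is an explicit parameter -- at the cost of a
# longer parameter list, which is the trade-off noted above.
def net_price(gross, tax_rate):
    return gross * (1 - tax_rate)

assert net_price(100.0, 0.2) == net_price_coupled(100.0)
```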
Return codes
Besides its "main" or "normal" effect, a subroutine may need to inform the calling program about "exceptional" conditions that may have occurred during its execution. In some languages and programming standards, this is done through a "return code," an integer value placed by the subroutine in some standard location, which encodes the normal and exceptional conditions. On the IBM S/360, where a return code was expected from the subroutine, the return value was often designed to be a multiple of 4, so that it could be used as a direct index into a branch table, often located immediately after the call instruction, avoiding extra conditional tests and further improving efficiency. In System/360 assembly language, one would write, for example:
          BAL   14,SUBRTN01     go to subroutine, using reg 14 as save register
                                (sets reg 15 to 0, 4, or 8 as return value)
          B     TABLE(15)       use returned value in reg 15 to index the branch
                                table, branching to the appropriate branch instr.
   TABLE  B     OK              return code = 00   GOOD                  }
          B     BAD             return code = 04   Invalid input         } Branch table
          B     ERROR           return code = 08   Unexpected condition  }
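The branch-table idiom can be mimicked in a higher-level language with a dispatch table keyed by return code; the codes and handlers below are hypothetical, chosen to mirror the 0/4/8 convention above:

```python
# Return codes mirroring the S/360-style convention of 0, 4, 8.
OK, BAD_INPUT, UNEXPECTED = 0, 4, 8

def validate(value):
    # Illustrative subroutine: reports its outcome as a return code.
    if not isinstance(value, int):
        return UNEXPECTED
    return OK if value >= 0 else BAD_INPUT

# The dictionary plays the role of the branch table indexed by the code.
handlers = {
    OK: lambda: "good",
    BAD_INPUT: lambda: "invalid input",
    UNEXPECTED: lambda: "unexpected condition",
}

assert handlers[validate(-3)]() == "invalid input"
```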
Inlining
A technique used to eliminate the overhead of calling a subroutine is inline expansion, or inlining, of the subprogram's body at each call site (rather than branching to the subroutine and back). Not only does this avoid the call overhead, but it also allows the compiler to optimize the procedure's body more effectively by taking into account the context and arguments at that call. Inlining, however, will usually increase the code size, unless the program contains only a single call to the subroutine, or the subroutine body is less code than the call overhead.
See also
Function (mathematics)
Method (computer science)
Module (programming)
Transclusion
Operator overloading
Functional programming
Command-Query Separation
Coroutines, subprograms that call each other as if both were the main programs.
Event handler, a subprogram that is called in response to an input event or interrupt.
References
[1] Donald E. Knuth. The Art of Computer Programming, Volume I: Fundamental Algorithms. Addison-Wesley. ISBN 0-201-89683-4.
[2] O.-J. Dahl; E. W. Dijkstra; C. A. R. Hoare (1972). Structured Programming. Academic Press. ISBN 0-12-200550-3.
[3] Wilkes, M. V.; Wheeler, D. J.; Gill, S. (1951). Preparation of Programs for an Electronic Digital Computer. Addison-Wesley.
[4] U.S. Election Assistance Commission (2007). "Definitions of Words with Special Meanings" (http://www.eac.gov/vvsg/g). Voluntary Voting System Guidelines. Retrieved 2008-02-02.
Synergetics (Fuller)
Synergetics is the empirical study of systems in transformation, with an emphasis on total system behavior unpredicted by the behavior of any isolated components, including humanity's role as both participant and observer. Since systems are identifiable at every scale from the quantum level to the cosmic, and humanity both articulates the behavior of these systems and is composed of these systems, synergetics is a very broad discipline, embracing a range of scientific and philosophical studies including tetrahedral and close-packed-sphere geometries, thermodynamics, chemistry, psychology, biochemistry, economics, philosophy and theology. Despite a few mainstream endorsements, such as articles by Arthur Loeb and the naming of the molecule buckminsterfullerene, synergetics remains an iconoclastic subject ignored by most traditional curricula and academic departments. Buckminster Fuller (1895-1983) coined the term and attempted to define its scope in his two-volume work Synergetics.[1] [2] [3] His oeuvre inspired many researchers to tackle branches of synergetics. Three examples: Hermann Haken explored self-organizing structures of open systems far from thermodynamic equilibrium, Amy Edmondson explored tetrahedral and icosahedral geometry, and Stafford Beer tackled geodesics in the context of social dynamics. Many other researchers toil today on aspects of synergetics, though many deliberately distance themselves from Fuller's broad, all-encompassing definition, given its problematic attempt to differentiate and relate all aspects of reality, including the ideal and the physically realized, the container and the contained, the one and the many, the observer and the observed, the human microcosm and the universal macrocosm.
Definition
Synergetics is defined by R. Buckminster Fuller (1895-1983) in his two books Synergetics: Explorations in the Geometry of Thinking and Synergetics 2: Explorations in the Geometry of Thinking as:
"A system of mensuration employing 60-degree vectorial coordination comprehensive to both physics and chemistry, and to both arithmetic and geometry, in rational whole numbers... Synergetics explains much that has not been previously illuminated... Synergetics follows the cosmic logic of the structural mathematics strategies of nature, which employ the paired sets of the six angular degrees of freedom, frequencies, and vectorially economical actions and their multi-alternative, equi-economical action options... Synergetics discloses the excruciating awkwardness characterizing present-day mathematical treatment of the interrelationships of the independent scientific disciplines as originally occasioned by their mutual and separate lacks of awareness of the existence of a comprehensive, rational, coordinating system inherent in nature."[4]
Other passages in Synergetics that outline the subject are its introduction (The Wellspring of Reality [5]) and the section on Nature's Coordination (410.01) [6]. The chapter on Operational Mathematics (801.00-842.07) [7] provides an easy-to-follow, easy-to-build introduction to some of Fuller's geometrical modeling techniques, and so can help a new reader become familiar with Fuller's approach, style and geometry. One of Fuller's clearest expositions on "the geometry of thinking" occurs in the two-part essay "Omnidirectional Halo," which appears in his book No More Secondhand God.[8]
Amy Edmondson describes synergetics "in the broadest terms, as the study of spatial complexity, and as such is an inherently comprehensive discipline."
[9] In her PhD study, Cheryl Clark synthesizes the scope of synergetics as "the study of how nature works, of the patterns inherent in nature, the geometry of environmental forces that impact on humanity."[10]
Here is an abridged list of some of the discoveries Fuller claims for Synergetics (see Controversies below), quoting directly:
The rational volumetric quantation or constant proportionality of the octahedron, the cube, the rhombic triacontahedron, and the rhombic dodecahedron when referenced to the tetrahedron as volumetric unity.
The trigonometric identification of the great-circle trajectories of the seven axes of symmetry with the 120 basic disequilibrium LCD triangles of the spherical icosahedron. (See Sec. 1043.00.)
The rational identification of number with the hierarchy of all the geometries.
The A and B Quanta Modules.
The volumetric hierarchy of Platonic and other symmetrical geometricals based on the tetrahedron and the A and B Quanta Modules as unity of coordinate mensuration.
The identification of the nucleus with the vector equilibrium.
Omnirationality: the identification of triangling and tetrahedroning with second- and third-powering factors.
Omni-60-degree coordination versus 90-degree coordination.
The integration of geometry and philosophy in a single conceptual system providing a common language and accounting for both the physical and metaphysical.[11]
Significance
Several authors have tried to characterize the importance of Synergetics. Amy Edmondson asserts that "Experience with synergetics encourages a new way of approaching and solving problems. Its emphasis on visual and spatial phenomena combined with Fuller's holistic approach fosters the kind of lateral thinking which so often leads to creative breakthroughs."[12] Cheryl Clark points out that "In his thousands of lectures, Fuller urged his audiences to study synergetics, saying 'I am confident that humanity's survival depends on all of our willingness to comprehend feelingly the way nature works.'"[13]
Tetrahedral Accounting
A chief hallmark of this system of mensuration was its unit of volume: a tetrahedron defined by four closest-packed unit-radius spheres. This tetrahedron anchored a set of concentrically arranged polyhedra proportioned in a canonical manner and inter-connected by a twisting-contracting, inside-outing dynamic named the Jitterbug Transformation.
Shape                      Properties
A, B, T modules            tetrahedral voxels
MITE                       space-filler, 2 As, 1 B
Tetrahedron                self dual
Coupler                    space filler
Cuboctahedron              cb.h = 1/2, cb.v = 1/8 of 20
Duo-Tet Cube               24 MITEs
Octahedron                 dual of cube
Rhombic Triacontahedron    radius rt.h < 1, rt.v = 2/3 of 7.5
Rhombic Dodecahedron       space-filler, dual to cuboctahedron
Rhombic Triacontahedron    rt.h = phi/sqrt(2)
Icosahedron                edges 1 = tetrahedron's edges
Cuboctahedron              edges 1, cb.h = 1
2F Cube                    2-frequency, 8 x 3 volume
Shape                      A    B    T
A module                   1    0    0
B module                   0    1    0
T module                   0    0    1
MITE                       2    1    0
Tetrahedron                24   0    0
Coupler                    16   8    0
Duo-Tet Cube               48   24   0
Octahedron                 48   48   0
Rhombic Triacontahedron    0    0    120
Rhombic Dodecahedron       96   48   0
A & B modules
Corresponding to Fuller's use of a regular tetrahedron as his unit of volume was his replacement of the cube with the tetrahedron as his model of 3rd powering. (Fig. 990.01 [14]) The relative size of a shape was indexed by its "frequency," a term he deliberately chose for its resonance with scientific meanings. "Size and time are synonymous. Frequency and size are the same phenomenon." (528.00 [15]) Shapes not having any size, because purely conceptual in the Platonic sense, were "prefrequency" or "subfrequency" in contrast.
Prime means sizeless, timeless, subfrequency. Prime is prehierarchical. Prime is prefrequency. Prime is generalized, a metaphysical conceptualization experience, not a special case.... (1071.10 [16])
Generalized principles (scientific laws), although communicated energetically, did not inhere in the "special case" episodes, and were considered "metaphysical" in that sense.
An energy event is always special case. Whenever we have experienced energy, we have special case. The physicist's first definition of physical is that it is an experience that is extracorporeally, remotely, instrumentally apprehensible. Metaphysical includes all the experiences that are excluded by the definition of physical. Metaphysical is always generalized principle. (1075.11 [16])
Tetrahedral mensuration also involved substituting what Fuller called the "isotropic vector matrix" (IVM) for the standard XYZ coordinate system, as his principal conceptual backdrop for special case physicality:
The synergetics coordinate system -- in contradistinction to the XYZ coordinate system -- is linearly referenced to the unit-vector-length edges of the regular tetrahedron, each of whose six unit vector edges occur in the isotropic vector matrix as the diagonals of the cube's six faces. (986.203 [17])
The IVM scaffolding or skeletal framework was defined by cubic closest packed spheres (CCP), alternatively known as the FCC or face-centered cubic lattice, or as the octet truss in architecture (on which Fuller held a patent). The space-filling complementary tetrahedra and octahedra characterizing this matrix had prefrequency volumes 1 and 4 respectively (see above).
A third consequence of switching to tetrahedral mensuration was Fuller's review of the standard "dimension" concept. Whereas "height, width and depth" have been promulgated as three distinct dimensions within the Euclidean context, each with its own independence, Fuller considered the tetrahedron a minimal starting point for spatial cognition. His use of "4D" was in many passages close to synonymous with the ordinary meaning of "3D," with the dimensions of physicality (time, mass) considered additional dimensions.
Geometers and "schooled" people speak of length, breadth, and height as constituting a hierarchy of three independent dimensional states -- "one-dimensional," "two-dimensional," and "three-dimensional" -- which can be conjoined like building blocks. But length, breadth, and height simply do not exist independently of one another nor independently of all the inherent characteristics of all systems and of all systems' inherent complex of interrelationships with Scenario Universe.... All conceptual consideration is inherently four-dimensional. Thus the primitive is a priori four-dimensional, always based on the four planes of reference of the tetrahedron. There can never be less than four primitive dimensions. Any one of the stars or point-to-able "points" is a system -- ultratunable, tunable, or infratunable -- but inherently four-dimensional. (527.702 [18], 527.712 [19])
Synergetics did not aim to replace or invalidate pre-existing geometry or mathematics; it was designed to carve out a namespace and serve as a glue language providing a new source of insights.
An Intuitive Geometry
Fuller took an intuitive approach to his studies, often going into exhaustive empirical detail while at the same time seeking to cast his findings in their most general philosophical context.
For example, his sphere packing studies led him to generalize a formula for polyhedral numbers: 2PF² + 2, where F stands for "frequency" (the number of intervals between balls along an edge) and P for a product of low order primes (some integer). He then related the "multiplicative 2" and "additive 2" in this formula to the convex versus concave aspects of shapes, and to their polar spinnability respectively.
These same polyhedra, developed through sphere packing and related by tetrahedral mensuration, he then spun around their various poles to form great circle networks and corresponding triangular tiles on the surface of a sphere. He exhaustively cataloged the central and surface angles of these spherical triangles and their related chord factors.
Fuller was continually on the lookout for ways to connect the dots, often purely speculatively. As an example of "dot connecting," he sought to relate the 120 basic disequilibrium LCD triangles of the spherical icosahedron to the plane net of his A module. (915.11 [27], Fig. 913.01 [28], Table 905.65 [29])
The Jitterbug Transformation provided a unifying dynamic in this work, with much significance attached to the doubling and quadrupling of edges that occurred when a cuboctahedron is collapsed through icosahedral, octahedral and tetrahedral stages, then inside-outed and re-expanded in a complementary fashion. The JT formed a bridge between the 3- and 4-fold rotationally symmetric shapes and the 5-fold family, such as the rhombic triacontahedron, which latter he analyzed in terms of the T module, another tetrahedral wedge with the same volume as his A and B modules.
He modeled energy transfer between systems by means of the double-edged octahedron and its ability to turn into a spiral (tetrahelix).
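Fuller's polyhedral-number formula 2PF² + 2 can be checked numerically. The sketch below assumes P = 5 for the cuboctahedron (vector equilibrium), which reproduces the familiar closest-packing shell counts 12, 42, 92, ...:

```python
def shell_count(P, F):
    # Number of spheres in the outer shell of a closest-packed
    # polyhedral cluster, per Fuller's generalization: 2 * P * F**2 + 2.
    return 2 * P * F**2 + 2

# Cuboctahedron (vector equilibrium): P = 5 gives shells 12, 42, 92, ...
assert [shell_count(5, F) for F in (1, 2, 3)] == [12, 42, 92]
```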
Energy lost to one system always reappeared somewhere else in his Universe. He modeled a threshold between associative and disassociative energy patterns with his T-to-E module transformation ("E" for "Einstein"). (Fig. 986.411A [30])
Synergetics is in some ways a library of potential "science cartoons" (scenarios) described in prose and not heavily dependent upon mathematical notations. His demystification of a gyroscope's behavior in terms of a hammer thrower, pea shooter, and garden hose is a good example of his commitment to using accessible metaphors. (Fig. 826.02A [31])
His modular dissection of a space-filling tetrahedron or MITE (minimum tetrahedron) into 2 A and 1 B module served as a basis for more speculations about energy, the former being more energy conservative, the latter more dissipative in his analysis. (986.422 [32], 921.20 [33], 921.30 [34]) His focus was reminiscent of later cellular automaton studies in that tessellating modules would affect their neighbors over successive time intervals.
Social Commentary
Synergetics informed Fuller's social analysis of the human condition. He identified "ephemeralization" as the trend towards accomplishing more with less physical resources, as a result of increasing comprehension of such "generalized principles" as E = mc². He remained concerned that humanity's conditioned reflexes were not keeping pace with its engineering potential, emphasizing the "touch and go" nature of our current predicament. Fuller hoped the streamlining effects of a more 60-degree-based approach within natural philosophy would help bridge the gap between C.P. Snow's "two cultures" and result in a greater level of scientific literacy in the general population. (935.24 [35])
Controversies
Fuller hoped to gain traction for his ideas and nomenclature by dedicating Synergetics to H.S.M. Coxeter (with permission) and by citing page 71 of the latter's Regular Polytopes to suggest where his A & B modules (depicted above) might enter the literature (see Fig. 950.12 [36]). Dr. Arthur Loeb provided a prologue and an appendix to Synergetics discussing its overlap with crystallography, chemistry and virology. However, few if any academic departments, outside of literature, have much tolerance for such an intuitive and/or exploratory approach, even one with a track record of inventions and successes attached. Synergetics is difficult to pigeonhole and is not in the style of any currently practiced discipline. E.J. Applewhite, Fuller's chief collaborator on Synergetics, related it to Edgar Allan Poe's Eureka: A Prose Poem, in terms of its being a metaphysical work. Fuller might have had more of an audience back in some Renaissance period, when natural philosophers still had an appetite for Neo-Platonism.
Errata
A two-volume work of this size is bound to contain some outright mistakes. A major error Fuller himself caught involves a misapplication of his Synergetics Constant in Synergetics 1, which led him to believe he had discovered a radius 1 sphere of 5 tetravolumes. He provides a correction in Synergetics 2 in the form of his T&E module thread. (986.206 - 986.212 [37])
About Synergy
Synergetics refers to synergy: either the concept of a system producing output not foreseen by the simple sum of the outputs of its parts, or, less commonly, simply another term for negative entropy (negentropy).
See also
Cloud Nine
Dymaxion House
Geodesic Dome
Integral Theory
Octet Truss
Synergetics coordinates
Tensegrity
Quadray coordinates
Notes
[1] Synergetics, http://www.rwgrayprojects.com/synergetics/synergetics.html
[2] Fuller, R. Buckminster (1963). No More Secondhand God. Carbondale and Edwardsville. pp. 118-163.
[3] CJ Fearnley, Presentation to the American Mathematical Society (AMS) 2008 Spring Eastern Meeting (http://www.cjfearnley.com/folding.great.circles.2008.pdf), p. 6. Retrieved on 2010-01-26.
[4] Synergetics, Sec. 200.01-203.07 (http://www.rwgrayprojects.com/synergetics/s02/p0000.html)
[5] http://www.rwgrayprojects.com/synergetics/intro/well.html
[6] http://www.rwgrayprojects.com/synergetics/s04/p1000.html#410.01
[7] http://www.rwgrayprojects.com/synergetics/s08/p0000.html
[8] Fuller, R. Buckminster (1963). No More Secondhand God. Carbondale and Edwardsville. pp. 118-163.
[9] Edmondson, Amy C. (1987). A Fuller Explanation: The Synergetic Geometry of R. Buckminster Fuller. Boston: Birkhauser. p. ix. ISBN 0-8176-3338-3.
[10] Cheryl Clark, 12 Degrees of Freedom, Ph.D. Thesis, p. xiv (http://www.doinglife.com/12FreedomPDFs/Ib_AbstractLitReview.pdf)
[11] Synergetics, Sec. 251.50 (http://www.rwgrayprojects.com/synergetics/s02/p0000.html)
[12] Edmondson 1987, pp. ix-x
[13] Clark, p. xiv
[14] http://www.rwgrayprojects.com/synergetics/s09/figs/f9001.html
[15] http://www.rwgrayprojects.com/synergetics/s05/p2800.html#528.00
[16] http://www.rwgrayprojects.com/synergetics/s10/p7000.html#1075.11
[17] http://www.rwgrayprojects.com/synergetics/s09/p86200.html#986.203
[18] http://www.rwgrayprojects.com/synergetics/s05/p2700.html#527.702
[19] http://www.rwgrayprojects.com/synergetics/s05/p2700.html#527.712
[20] http://www.rwgrayprojects.com/synergetics/s03/p0000.html#307.04
[21] http://www.rwgrayprojects.com/synergetics/s01/p6000.html#162.00
[22] http://www.rwgrayprojects.com/synergetics/s01/p3000.html
[23] http://www.rwgrayprojects.com/synergetics/s05/p3100.html#533.00
[24] http://www.rwgrayprojects.com/synergetics/s10/p0810.html#1009.00
[25] http://www.rwgrayprojects.com/synergetics/s03/p2600.html#326.10
[26] http://www.rwgrayprojects.com/synergetics/s10/p0810.html#1009.92
[27] http://www.rwgrayprojects.com/synergetics/s09/p0570.html#915.11
[28] http://www.rwgrayprojects.com/synergetics/s09/figs/f1301.html
[29] http://www.rwgrayprojects.com/synergetics/s09/figs/tb0565.html
[30] http://www.rwgrayprojects.com/synergetics/s09/figs/f86411a.html
[31] http://www.rwgrayprojects.com/synergetics/s08/figs/f2602a.html
[32] http://www.rwgrayprojects.com/synergetics/s09/p86420.html#986.422
[33] http://www.rwgrayprojects.com/synergetics/s09/p2000.html#921.20
[34] http://www.rwgrayprojects.com/synergetics/s09/p2000.html#921.30
[35] http://www.rwgrayprojects.com/synergetics/s09/p3400.html#935.24
[36] http://www.rwgrayprojects.com/synergetics/s09/figs/f5012.html
[37] http://www.rwgrayprojects.com/synergetics/s09/p86200.html
References
R. Buckminster Fuller (in collaboration with E.J. Applewhite), Synergetics: Explorations in the Geometry of Thinking (http://www.rwgrayprojects.com/synergetics/synergetics.html), online edition hosted by R. W. Gray with permission (http://www.rwgrayprojects.com/), originally published by Macmillan (http://www.macmillanmh.com/), Vol. 1 in 1975 (with a preface and contribution by Arthur L. Loeb; ISBN 002541870X), and Vol. 2 in 1979 (ISBN 0025418807), as two hard-bound volumes; re-editions in paperback.
Amy Edmondson, A Fuller Explanation (http://books.google.com/books?id=G8zttcNdKBAC), EmergentWorld LLC, 2007.
External links
Complete On-Line Edition of Fuller's Synergetics (http://www.rwgrayprojects.com/synergetics/synergetics.html)
WNET: Synergetics by E.J. Applewhite (http://www.wnet.org/bucky/syner.html)
Synergetics 101 (http://www.youtube.com/watch?v=y-mpwMPeCm8), video of Joe Clinton at RISD 2007.
What is Synergetics? (http://bfi.org/our_programs/who_is_buckminster_fuller/synergetics/what_is_synergetics) at Buckminster Fuller Institute (http://www.bfi.org)
A Fuller Explanation: The Synergetic Geometry of R. Buckminster Fuller (http://books.google.com/books?id=G8zttcNdKBAC)
Cheryl Clark's PhD thesis "12 Degrees of Freedom" (http://www.doinglife.com/12FreedomPDFs/Ib_AbstractLitReview.pdf)
Synergetics section of the Buckminster Fuller FAQ (http://www.cjfearnley.com/fuller-faq-2.html)
Synergetics on the Web (http://www.grunch.net/)
CJ Fearnley, Reading Synergetics: Some Tips (http://www.cjfearnley.com/synergetics.essay.html)
Synergetics Collaborative (http://www.SynergeticsCollaborative.org)
Synergetics (Haken)
Synergetics is an interdisciplinary science explaining the formation and self-organization of patterns and structures in open systems far from thermodynamic equilibrium. It was founded by Hermann Haken, inspired by laser theory. Self-organization requires a 'macroscopic' system, consisting of many nonlinearly interacting subsystems. Depending on the external control parameters (environment, energy fluxes), self-organization takes place.
Order-parameter concept
Essential in synergetics is the order-parameter concept, which was originally introduced in the Ginzburg-Landau theory in order to describe phase transitions in thermodynamics. The order parameter concept is generalized by Haken to the "enslaving principle," which says that the dynamics of fast-relaxing (stable) modes is completely determined by the 'slow' dynamics of, as a rule, only a few 'order parameters' (unstable modes). The order parameters can be interpreted as the amplitudes of the unstable modes determining the macroscopic pattern. As a consequence, self-organization means an enormous reduction of the degrees of freedom (entropy) of the system, which macroscopically reveals an increase of 'order' (pattern formation). This far-reaching macroscopic order is independent of the details of the microscopic interactions of the subsystems, which supposedly explains the self-organization of patterns in so many different systems in physics, chemistry, biology and even social systems.
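The enslaving principle can be illustrated with a toy two-mode system (this model is an illustrative construction, not taken from Haken's text): a fast-relaxing mode s is driven toward a function of a slow order parameter u, and for a short relaxation time it simply tracks that function, collapsing the dynamics onto the slow manifold:

```python
def simulate(eps=0.01, dt=0.001, steps=5000):
    # u: slow order parameter; s: fast-relaxing (stable) mode.
    # For small eps, s is "enslaved": it adiabatically tracks u**2.
    u, s = 1.0, 0.0
    for _ in range(steps):
        u += dt * (-0.1 * u)            # slow dynamics of the order parameter
        s += dt * (-(s - u * u) / eps)  # fast mode relaxes toward u**2
    return u, s

u, s = simulate()
# The fast mode has collapsed onto the slow manifold s = u**2,
# so the single order parameter u determines the whole state.
assert abs(s - u * u) < 1e-2
```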
In social systems
In management science, synergetics was first applied to deliberative structures by Stafford Beer, whose syntegration method is based so specifically on geodesic dome design that only fixed numbers of persons, determined by geodesic chord factors, can take part in the process at each deliberation stage. Beer's earlier work was briefly applied by the government of Salvador Allende in Chile in the early 1970s. This was Project Cybersyn, a portmanteau of "cybernetic synergy." The approach is applied today as a series of related management methods. All of these seek some macroscopic order of priorities by taking some path of integrating diverse positions or attitudes to some problem, making the synergetic assumption that priorities will converge under the constraint of viability.
There are similar themes in the work especially of Jay Forrester and Donella Meadows, who sought leverage on social and management problems by seeking out an emerging macroscopic order. Under synergetic assumptions, this could often be reliably found by determining the points of greatest resistance to change by an older or inertial macroscopic order. The twelve leverage points of Meadows apply the order parameter concept, but without making the assumption of "enslaving" lower-leverage points to the higher-leverage ones. A similar view is expressed in the deep framing theory of linguist George Lakoff, in which basic conceptual metaphors partly, but do not completely, determine the actions of their users. As in all social sciences, conscious goals, choices, free will, self-interest and self-awareness prevent any control groups or strictly predictive models from applying to human problems as they do in natural sciences. In Meadows' leverage model the leverage of self-organization is explicitly below that of goal-setting, and much below that of mindsets and the ability to change them.
The synergetic assumptions apply mostly to the lower leverage factors, while the higher leverage factors follow principles more like Lakoff's. However, the basic relationship remains: fast-relaxing (stable) modes are at least partly determined or strongly biased by the 'slow' dynamics of only a few parameters. Lakoff argued in his Moral Politics that there could be as few as one basic metaphor (state as parent) determining a vast range of political choices and policy making patterns.
Literature
H. Haken: Synergetics, an Introduction: Nonequilibrium Phase Transitions and Self-Organization in Physics, Chemistry, and Biology, 3rd rev. enl. ed. New York: Springer-Verlag, 1983.
H. Haken: Advanced Synergetics: Instability Hierarchies of Self-Organizing Systems and Devices. New York: Springer-Verlag, 1993.
H. Haken: Synergetik. Springer-Verlag, Berlin Heidelberg New York, 1982. ISBN 3-8017-1686-4
R. Graham, A. Wunderlin (eds.): Lasers and Synergetics. Springer-Verlag, Berlin Heidelberg New York, 1987. ISBN 3-540-17940-2
Korotayev A., Malkov A., Khaltourina D.: Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS, 2006. ISBN 5-484-00414-4 [1]
See also
J. Willard Gibbs
Phase Rule
Fokker-Planck equation
Ginzburg-Landau theory
Alexander Bogdanov
External links
Homepage of the former Institute for Theoretical Physics and Synergetics (IFTPUS) [2]
Center for Synergetics homepage [3]
References
[1] http://urss.ru/cgi-bin/db.pl?cp=&lang=en&blang=en&list=14&page=Book&id=34250
[2] http://itp1.uni-stuttgart.de/en/
[3] http://www.center-for-synergetics.de/
Synergism (theology)
In theology, synergism is the position of those who hold that salvation involves some form of cooperation between divine grace and human freedom. It is opposed to monergism.
Protestant views
Calvinists use the term "synergism" to describe the Arminian doctrine of salvation, although many Arminians would disagree with the characterisation. According to Calvinists, synergism is the view that God and man work together, each contributing their part to accomplish regeneration in and for the individual. John Hendryx, a Calvinist thinker, has stated it this way: synergism is "...the doctrine that there are two efficient agents in regeneration, namely the human will and the divine Spirit, which, in the strict sense of the term, cooperate. This theory accordingly holds that the soul has not lost in the fall all inclination toward holiness, nor all power to seek for it under the influence of ordinary motives." [1] On this view, regeneration occurs only when the unregenerate will cooperates with God's Spirit to effectuate redemption; to the monergist, faith does not proceed from our unregenerate human nature.

Arminians, especially of the Wesleyan tradition, might respond with the criticism that Hendryx has merely described semi-Pelagianism, for they hold that grace precedes any cooperation of the human soul with the saving power of God. In other words, God has offered salvation, and man must receive it. This is opposed to the monergistic view held by Reformed or Calvinistic groups, in which the objects of God's election participate in, but do not contribute to, the salvific or regenerative processes. Classical Arminians and most Wesleyans would consider Hendryx's description a straw man, as they have historically affirmed the Reformed doctrine of total depravity.

To this, Hendryx replies by asking the following question: "If two persons receive prevenient grace and only one believes the gospel, why does one believe in Christ and not the other? What makes the two persons to differ? Jesus Christ or something else?" And that "something else" is why Calvinists believe Arminians and other non-Augustinian groups to be synergists.
If faith precedes regeneration, as it does in Arminianism, then the unregenerate person must exercise faith in order to be regenerated. However, it ought to be recognized that this debate, over whether an unregenerate will can cooperate with God's Spirit, is a Calvinist framing superimposed on Wesleyan theology. To answer such objections, one needs to see what the doctrine of prevenient grace actually teaches, for Wesleyans agree with the monergist that, strictly speaking, at no time is it being argued that faith proceeds from an unregenerate (that is, a totally natural or graceless) human nature. John Wesley expressed this himself, saying, "The will of man is by nature free only to evil. Yet... every man has a measure of free-will restored to him by grace." [2] "Natural free-will in the present state of mankind, I do not understand: I only assert, that there is a measure of free-will supernaturally restored to every man, together with that supernatural light which 'enlightens every man that comes into the world.'" [3] "This is not a statement about natural ability, or about nature as such working of itself, but about grace working through nature." [4]

The Arminian position may therefore be summarized in the following way. A human being cannot, on their own, turn to God. God grants to all people "prevenient grace" (prevenient meaning "coming before"). With this prevenient grace (or with its effects on the fallen human), the person is now able to choose faith or reject it. If the person accepts it, then God continues to give further grace to heal them.
In answer to Hendryx's question about the two individuals receiving prevenient grace and only one being saved, the Arminian would reply that the one who was saved freely chose faith, but only had the power to choose faith because of the prevenient grace, whereas the one who was not saved had the same assistance from prevenient grace and thus the same ability to choose, but chose to reject faith. Whether this is characterized as synergy will depend upon one's definition. It differs, however, from semi-Pelagianism, which maintains that a human being can begin to have faith without the need for grace.[5] In addition, it may be said by the Arminian that the choice of the person is not the cause of their salvation or loss, but rather that their free response to prevenient grace forms the grounds for God's free decision: the person's decision does not constrain God, but God takes it into consideration when He decides whether to complete the person's
salvation or not. An analogy may be seen in that when person A offers to pay off a loan for person B, person A offers freely. If person B says no, person A could still choose to pay off the loan (in theory), and if person B says yes, person A is still not constrained to pay it off. Rather, the response of person B forms part of the data which person A considers in deciding whether to pay off the loan. In like manner, God takes the person's response to prevenient grace into account as part of the data when freely choosing whether or not to save that person. Therefore, the person's choice does not work alongside God. For this reason, Arminians do not see the term synergism as an accurate description of their doctrine.

Another analogy sometimes used is based upon Revelation 3, in which Christ states that He stands at the door and knocks, and if anyone opens He will enter in. Arminians assert that Christ comes to each person with prevenient grace, and if they are willing for Him to enter, He enters into them. Therefore, no one does any of the actual work of saving themselves, because Christ does the work of coming to them in the first place, and if they are willing to follow Him, He does the work of entering in; but whether He does so is dependent upon the will of the person (no one, however, could will for Him to enter if He did not first knock).

This is similar to the position taken in the Conferences of St. John Cassian.[6] In this work, the matter of grace and faith is taken as analogous to that of the invalids whom Christ healed. The fact that Christ came to where an invalid was is likened to prevenient grace, because unless Christ came there, the invalid would have no opportunity to ask him for help. Likewise, without prevenient grace, a person would not be able to ask God for help. The actual asking for help comes from the free choice of the invalid or person in question.
It is made possible by Christ's presence (by prevenient grace), but there is no necessary outcome: Christ's presence leaves a person able to ask for help, but also able to refuse to ask. Asking, however, does not do anything to actually heal the person; Christ's response to their request is what heals them, not their own choice. Likewise, God saves those who ask Him. However, they are only able to ask because He first comes to them with prevenient grace. Nonetheless, they are free to refuse to ask for His help, just as the invalids were free not to ask Christ for healing. Thus it is concluded, "it belongs to divine grace to give us opportunities of salvation... it is ours to follow up the blessings which God gives us with earnestness or indifference." God is then free to decide how to respond to our earnestness or indifference, which makes up part of the data which He considers in His free decision. We know, however, that in love He will respond by completing the salvation of those who respond earnestly, while leaving those who respond with indifference to their own devices. In the 13th Conference, Cassian also uses the analogy of a farmer. Although the farmer must choose to work the farm, the growth of his crops is entirely due to God. God provides the growth, but He does so only for those who are willing to have that growth and who actualize this willingness through their effort.
See also
Arminianism
Decision theology
Eastern Orthodoxy
Monergism
Regeneration (theology)
Semi-Pelagianism
Soteriology
References
[1] What is Monergism? (http://www.monergism.com/what_is_monergism.php)
[2] "Some Remarks on Mr. Hill's Review" by John Wesley
[3] Predestination Calmly Considered by John Wesley (http://docs.google.com/gview?a=v&q=cache:8FHZ-rIUTw0J:evangelicalarminians.org/files/Wesley.%20PREDESTINATION%20CALMLY%20CONSIDERED.pdf+wesley+predestination+calmly+considered&hl=en&gl=us)
[4] John Wesley's Scriptural Christianity: A Plain Exposition of His Teaching on Christian Doctrine (1994) by Thomas Oden, chapter 8: "On Grace and Predestination", pp. 243-252 (ISBN 031075321X)
[5] Semipelagianism, Catholic Encyclopedia (http://www.newadvent.org/cathen/13703a.htm)
[6] Conferences, John Cassian, 3rd Conference, 19th Chapter (http://www.documentacatholicaomnia.eu/04z/z_0360-0435__Cassianus__The_Conferences_Of_John_Cassian__EN.pdf.html)
[7] Catechism of the Catholic Church, Reader's Guide to Themes (Burns & Oates 1999, ISBN 0-860-12366-9), p. 766 (http://books.google.com/books?id=cymM4xEM76wC&pg=PA766&dq=Catechism+Catholic+Church+synergy&hl=en&ei=cu4dTLPRE8qHOLbolPEL&sa=X&oi=book_result&ct=result&resnum=1&ved=0CDAQ6AEwAA#v=onepage&q&f=false)
[8] Catechism of the Catholic Church, 405 (http://www.vatican.va/archive/catechism/p1s2c1p7.htm)
[9] Catechism of the Catholic Church, 1742 (http://www.vatican.va/archive/ENG0015/__P5O.HTM)
[10] Catechism of the Catholic Church, 2001 (http://www.vatican.va/archive/ccc_css/archive/catechism/p3s1c3a2.htm)
[11] Pan-Orthodox Synod of Jerusalem (1672), Decree 14 (http://www.crivoice.org/creeddositheus.html)
[12] Council of Trent, quoted in Catechism of the Catholic Church, 1993 (http://www.vatican.va/archive/ccc_css/archive/catechism/p3s1c3a2.htm)
External links
Universal prevenient grace (http://www.theopedia.com/Universal_prevenient_grace) Prevenient Grace (http://www.eternalsecurity.us/prevenient_grace.htm) by Jeff Paton
Synergy
Synergy, in general, may be defined as two or more agents working together to produce a result not obtainable by any of the agents independently.
Etymology
The term synergy comes from the ancient Greek word synergos (συνεργός), meaning 'working together'.[1]
Synergies can be depended upon when they are designed, not when they emerge by chance[7].

The idea of a systemic approach is endorsed by the UK Health and Safety Executive: successful health and safety management depends upon analyzing the causes of incidents and accidents and learning the correct lessons from them. The idea is that all events (not just those causing injuries) represent failures in control, and present an opportunity for learning and improvement[8]. The HSE's book describes the principles and management practices which provide the basis of effective health and safety management. It sets out the issues which need to be addressed, and can be used for developing improvement programs, self-audit or self-assessment. Its message is that organizations need to manage health and safety with the same degree of expertise, and to the same standards, as other core business activities if they are to effectively control risks and prevent harm to people.

The term synergy was refined by R. Buckminster Fuller, who analyzed some of its implications more fully[9] and coined the term synergetics.[10] Synergy can be understood as the opposite of the concept of entropy; in this sense it was perhaps more of a 'discovery', etymologically speaking. Related definitions include:
A dynamic state in which combined action is favored over the difference of individual component actions.
Behavior of whole systems unpredicted by the behavior of their parts taken separately, known as emergent behavior.
The cooperative action of two or more stimuli (or drugs), resulting in a different or greater response than that of the individual stimuli.
Drug synergy
Drug synergism occurs when drugs interact in ways that enhance or magnify one or more effects, or side effects, of those drugs. This is sometimes exploited in combination preparations, such as codeine mixed with acetaminophen or ibuprofen to enhance the action of codeine as a pain reliever. It is also often seen with recreational drugs, where 5-HTP, a serotonin precursor often used as an antidepressant, is sometimes taken prior to, during, and shortly after recreational use of MDMA as it allegedly increases the "high" and decreases the "comedown" stages of MDMA use (although most anecdotal evidence has pointed to 5-HTP moderately muting the effect of MDMA). Other examples include the use of cannabis with LSD, where the active chemicals in cannabis enhance the hallucinatory experience of LSD use. Negative synergy is a form of contraindication: for instance, combining more than one depressant drug that affects the central nervous system (CNS), such as alcohol and Valium, can cause a greater reaction than simply the sum of the individual effects of each drug used separately. In this particular case, the most serious consequence of drug synergy is exaggerated respiratory depression, which can be fatal if left untreated.
Biological Sciences
Synergism is also the name of a hypothesis on how complex systems operate, advanced by Peter Corning.[11] Environmental systems may react in a non-linear way to perturbations, such as climate change, so that the outcome may be greater than the sum of the individual component alterations. Synergistic responses are a complicating factor in environmental modeling.[12]
Pest synergy
Pest synergy can occur in a biological host organism population where, for example, the introduction of parasite A may cause 10% fatalities and parasite B may also cause a 10% loss. When both parasites are present, the combined losses would normally be expected to total slightly less than 20%, yet in some cases losses are significantly greater. In such cases, the parasites in combination are said to have a synergistic effect.
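The "slightly less than 20%" expectation follows from treating the two parasites as acting independently, so that survival probabilities multiply. A small sketch (my illustration; the observed value is hypothetical):

```python
# Expected combined loss under independent action: survival probabilities
# multiply, so combined mortality is 1 - (1 - m_a) * (1 - m_b).
def expected_combined_mortality(m_a, m_b):
    return 1 - (1 - m_a) * (1 - m_b)

expected = expected_combined_mortality(0.10, 0.10)
print(round(expected, 2))   # 0.19, i.e. just under 20%

observed = 0.35             # hypothetical field observation
print(observed > expected)  # True: the combination acts synergistically
```

An observed loss well above the independent-action expectation is what the section calls a synergistic effect.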
Toxicological synergy
Toxicological synergy is of concern to the public and to regulatory agencies because chemicals individually considered safe might pose unacceptable health or ecological risks in combination. Articles in scientific and lay journals include many definitions of chemical or toxicological synergy, often vague or in conflict with each other. Because toxic interactions are defined relative to the expectation under "no interaction," a determination of synergy (or antagonism) depends on what is meant by "no interaction." The United States Environmental Protection Agency has one of the more detailed and precise definitions of toxic interaction, designed to facilitate risk assessment. In its guidance documents, the no-interaction default assumption is dose addition, so synergy means a mixture response that exceeds that predicted from dose addition. The EPA emphasizes that synergy does not always make a mixture dangerous, nor does antagonism always make a mixture safe; each depends on the predicted risk under dose addition.

For example, a consequence of pesticide use is the risk of health effects. During the registration of pesticides in the US, exhaustive tests are performed to discern health effects on humans at various exposure levels, and a regulatory upper limit on residues in food is then placed on each pesticide. As long as residues in the food stay below this regulatory level, health effects are deemed highly unlikely and the food is considered safe to consume. However, in normal agricultural practice it is rare to use only a single pesticide; several different materials may be used during the production of a crop, each with its own regulatory level at which it would be considered individually safe. In many cases, a commercial pesticide is itself a combination of several chemical agents, and thus the safe levels actually represent levels of the mixture.
In contrast, combinations created by the end user, such as a farmer, are rarely tested as that combination. The potential for synergy is then unknown, or estimated from data on similar combinations. This lack of information also applies to many of the chemical combinations to which humans are exposed, including residues in food, indoor air contaminants, and occupational exposures to chemicals. Some groups think that rising rates of cancer, asthma and other health problems may be caused by these combination exposures; others have alternative explanations. This question will likely be answered only after years of exposure by the population in general and of research on chemical toxicity, usually performed on animals. Examples of pesticide synergists include piperonyl butoxide and MGK 264.[13]
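The dose-addition baseline mentioned above is commonly screened with "toxic units": each component's dose is divided by a reference dose producing a standard effect, and the ratios are summed. This is a sketch of that bookkeeping, not EPA code, and the dose and EC50 values are made up:

```python
# Dose-addition screen via toxic units: sum of dose/EC50 ratios.
# A sum of 1.0 corresponds to the dose-additive reference-effect threshold.
def toxic_units(doses, ec50s):
    return sum(d / e for d, e in zip(doses, ec50s))

tu = toxic_units(doses=[2.0, 5.0], ec50s=[10.0, 20.0])  # hypothetical values
print(tu)  # 0.45: dose addition predicts the mixture stays below the
           # half-maximal effect; a measured effect exceeding this
           # prediction is what would be called synergy.
```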
Human synergy
Human synergy relates to humans. For example, say person A alone is too short to reach an apple on a tree and person B is too short as well. Once person B sits on the shoulders of person A, they are more than tall enough to reach the apple. In this example, the product of their synergy would be one apple. Another case would be two politicians: if each is able to gather one million votes on their own, but together they can appeal to 2.5 million voters, their synergy has produced 500,000 more votes than had they each worked independently. A song is also a good example of human synergy: it takes more than one musical part and puts them together to create a song that has a much more dramatic effect than each of the parts played individually. A third form of human synergy is when one person is able to complete two separate tasks by doing one action; for example, if a person were asked by a teacher and by his boss at work to write an essay on how he could improve his work, that would be considered synergy. A more visual example is a drummer using four separate rhythms to create one drum beat. Synergy usually arises when two persons with different complementary skills cooperate; a fundamental example is the cooperation of men and women in a couple. In business, cooperation of people with organizational and technical skills happens very often. In general, the most common reason why people cooperate is that it brings synergy; conversely, people tend to specialize precisely so that they can form groups with high synergy (see also division of labor and teamwork).
Example: two systems administration teams working together, combining technical and organizational skills to improve the client experience, thereby creating synergy.
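The section's quantitative examples share one pattern: synergy is the value of the whole minus the sum of the parts. A minimal sketch using the two-politician figures from the text:

```python
# Synergy as the whole minus the sum of the parts.
def synergy(whole, parts):
    return whole - sum(parts)

# Two politicians: 1M votes each alone, 2.5M together.
print(synergy(2_500_000, [1_000_000, 1_000_000]))  # 500000 extra votes
```

A negative result under the same formula corresponds to the "negative synergy" mentioned later in the management section.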
Corporate synergy
Corporate synergy occurs when corporations interact congruently. A corporate synergy refers to a financial benefit that a corporation expects to realize when it merges with or acquires another corporation. This type of synergy is a nearly ubiquitous feature of a corporate acquisition and is a negotiating point between the buyer and seller that impacts the final price both parties agree to. There are distinct types of corporate synergies:
Revenue
A revenue synergy refers to the opportunity of a combined corporate entity to generate more revenue than its two predecessor standalone companies would be able to generate. For example, if company A sells product X through its sales force, company B sells product Y, and company A decides to buy company B, then the new company can use each salesperson to sell products X and Y, thereby increasing the revenue that each salesperson generates for the company. In media, synergy is the promotion and sale of a product throughout the various subsidiaries of a media conglomerate, e.g. films, soundtracks or video games.
Management
Synergy in terms of management, and in relation to team working, refers to the combined effort of individuals as participants of the team, and it can be positive or negative. Positive synergy is the condition that exists when the organization's parts interact to produce a joint effect that is greater than the sum of the parts acting alone.
In Terms of Leverage
Synergy in terms of leverage is a term that was used in the announcement of Software AG's acquisition of webMethods in 2007. Analysts and developers all over the world have attempted to decode the meaning of this phrase. Currently, the best guess is that it is nonsensical corporate rhetoric used to confuse the listening audience.
Cost
A cost synergy refers to the opportunity of a combined corporate entity to reduce or eliminate expenses associated with running a business. Cost synergies are realized by eliminating positions that are viewed as duplicated within the merged entity. Examples include the headquarters of one of the predecessor companies, certain executives, the human resources department, or other employees of the predecessor companies. This is related to the economic concept of economies of scale.
Computers
Synergy can also be defined as the combination of human strengths and computer strengths, such as advanced chess. Computers can process data much more quickly than humans, but lack the ability to respond meaningfully to arbitrary stimuli.
products and ads, and continued to market Disney media through licensing arrangements. These products can help advertise the film itself and thus help to increase the film's sales. For example, the Spider-Man films had toys of webshooters and figures of the characters made, as well as posters and games.[15]
Computer Games
Another obvious example of synergy is in video games. RPGs such as World of Warcraft and Dragon Age, and other party-based games such as Team Fortress 2, rely heavily on the idea of synergy. An individual playing alone will have difficulty with many of the quest requirements; a team most often consists of a tank, healer, DPS dealer and crowd-control mage to complete quests effectively. In the case of Team Fortress 2, a heavy, spy, engineer, pyro, medic and so on all work together to win a round. The mixture of offensive and defensive classes can be used to maximize a team's chances of victory. An example of the word synergy in a sentence: Sirius and XM Radio created over $400M of synergy when they merged.[16]
See also
Synergetics Synergism Holism Emergence Perfect storm Systems theory Behavioral Cusp
References
[1] The Strategy Reader, edited by Susan Segal-Horn, The Open University, Great Britain, 1998
[2] David Buchanan & Andrzej Huczynski: Organizational Behaviour: An Introductory Text. Prentice Hall, p. 276, 3rd ed. 1997, ISBN 0-13-207259-9
[3] Benjamin Blanchard, System Engineering Management, p. 8, John Wiley 2004, ISBN 0-471-29176-5
[4] David Buchanan & Andrzej Huczynski: Organizational Behaviour: An Introductory Text. Prentice Hall, p. 275, 3rd ed. 1997, ISBN 0-13-207259-9
[5] David Buchanan & Andrzej Huczynski: Organizational Behaviour: An Introductory Text. Prentice Hall, p. 280, 3rd ed. 1997, ISBN 0-13-207259-9
[6] David Buchanan & Andrzej Huczynski: Organizational Behaviour: An Introductory Text. Prentice Hall, p. 283, 3rd ed. 1997, ISBN 0-13-207259-9
[7] Dr Chris Elliot, System Safety and Law, Proceedings of the First International Conference on System Safety, Institution of Engineering and Technology, London, pp. 344-351 (2006)
[8] UK Health and Safety Executive, Successful Health and Safety Management, ISBN 978-0-7176-1276-5 (1997)
[9] Synergetics: Explorations in the Geometry of Thinking by R. Buckminster Fuller (online version) (http://www.rwgrayprojects.com/synergetics/s01/p0100.html)
[10] Fuller, R. B. (1975), Synergetics: Explorations in the Geometry of Thinking, in collaboration with E.J. Applewhite. Introduction and contribution by Arthur L. Loeb. Macmillan Publishing Company, Inc., New York.
[11] Synergy and self-organization in the evolution of complex systems (http://www.complexsystems.org/publications/pdf/synselforg.pdf)
[12] Myers, N., Environmental Unknowns (1995)
[13] Pyrethroids and Pyrethrins (http://www.epa.gov/oppsrrd1/reevaluation/pyrethroids-pyrethrins.html), U.S. Environmental Protection Agency, epa.gov
[14] Campbell, Richard, Christopher R. Martin, and Bettina Fabos. Media & Culture 5: An Introduction to Mass Communication. Fifth Edition, 2007 update. Boston: Bedford/St. Martin's, 2007. p. 606.
[15] Media Synergy: see Linden Dalecki's article in Northwestern's Journal of Integrated Marketing Communications (2008) (http://jimc.medill.northwestern.edu/JIMCWebsite/2008/HollywoodMediaSynergy.pdf)
[16] http://seekingalpha.com/article/76496-citi-issues-report-on-sirius-xm-merger-synergies
External links
Linden Dalecki on Media Synergy (2008) (http://jimc.medill.northwestern.edu/JIMCWebsite/2008/HollywoodMediaSynergy.pdf)
Synergism Hypothesis (http://www.complexsystems.org/publications/synhypo.html)
Buckminster Fuller's definition of Synergy (http://www.rwgrayprojects.com/synergetics/s01/p0100.html)
EPA Supplementary Guidance for Conducting Health Risk Assessment of Chemical Mixtures (http://cfpub.epa.gov/ncea/raf/recordisplay.cfm?deid=20533)
Synergy and Dysergy in Mereologic Geometries (http://www.wikinfo.org/index.php/Synergy_and_Dysergy_in_Mereologic_Geometries)
Systems thinking
Systems thinking is the process of understanding how things influence one another within a whole. In nature, examples of systems thinking include ecosystems, in which various elements such as air, water, movement, plants, and animals work together to survive or perish. In organizations, systems consist of people, structures, and processes that work together to make an organization healthy or unhealthy.

Systems thinking has been defined as an approach to problem solving that views "problems" as parts of an overall system, rather than reacting to specific parts, outcomes or events, a reaction that risks contributing to the further development of unintended consequences. Systems thinking is not one thing but a set of habits or practices [1] within a framework based on the belief that the component parts of a system can best be understood in the context of their relationships with each other and with other systems, rather than in isolation. Systems thinking focuses on cyclical rather than linear cause and effect.

In systems science, it is argued that the only way to fully understand why a problem or element occurs and persists is to understand the parts in relation to the whole.[2] Standing in contrast to Descartes's scientific reductionism and philosophical analysis, it proposes to view systems in a holistic manner. Consistent with systems philosophy, systems thinking concerns an understanding of a system by examining the linkages and interactions between the elements that compose the entirety of the system. Systems thinking attempts to illustrate how events separated by distance and time can be connected, and how small catalytic events can cause large changes in complex systems. Acknowledging that an improvement in one area of a system can adversely affect another area of the system, it promotes organizational communication at all levels in order to avoid the silo effect.
Systems thinking techniques may be used to study any kind of system: natural, scientific, engineered, human, or conceptual.

The concept of a system
Systems science thinkers consider that:
a system is a dynamic and complex whole, interacting as a structured functional unit;
energy, material and information flow among the different elements that compose the system;
a system is a community situated within an environment;
energy, material and information flow from and to the surrounding environment via semi-permeable membranes or boundaries;
systems are often composed of entities seeking equilibrium, but can exhibit oscillating, chaotic, or exponential behavior.
A holistic system is any set (group) of interdependent or temporally interacting parts. Parts are generally systems themselves and are composed of other parts, just as systems are generally parts or holons of other systems.
Systems science, and the application of systems science thinking, has been grouped into three categories based on the techniques used to tackle a system:

Hard systems: involving simulations, often using computers and the techniques of operations research. Useful for problems that can justifiably be quantified; however, this approach cannot easily take into account unquantifiable variables (opinions, culture, politics, etc.), and may treat people as being passive rather than as having complex motivations.

Soft systems: for systems that cannot easily be quantified, especially those involving people holding multiple and conflicting frames of reference. Useful for understanding motivations, viewpoints, and interactions, and for addressing qualitative as well as quantitative dimensions of problem situations. Soft systems is a field that utilizes foundational methodological work developed by Peter Checkland, Brian Wilson and their colleagues at Lancaster University. Morphological analysis is a complementary method for structuring and analysing non-quantifiable problem complexes.

Evolutionary systems: Béla H. Bánáthy developed a methodology that is applicable to the design of complex social systems. This technique integrates critical systems inquiry with soft systems methodologies. Evolutionary systems, similar to dynamic systems, are understood as open, complex systems, but with the capacity to evolve over time. Bánáthy uniquely integrated the interdisciplinary perspectives of systems research (including chaos, complexity, cybernetics), cultural anthropology, evolutionary theory, and others.
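A "hard systems" simulation, in the sense just described, can be very small. The sketch below (my illustration, with hypothetical parameters) models one stock governed by two coupled feedback loops, a reinforcing growth loop and a balancing capacity loop, the kind of cyclical cause and effect systems thinking emphasizes:

```python
# Hard-systems sketch: one stock with a reinforcing growth loop and a
# balancing capacity loop (discrete-time logistic dynamics).
def simulate_stock(steps=200, dt=0.1, r=0.5, capacity=1000.0, stock=10.0):
    for _ in range(steps):
        inflow = r * stock                        # reinforcing loop
        outflow = r * stock * (stock / capacity)  # balancing loop
        stock += dt * (inflow - outflow)
    return stock

print(simulate_stock())  # the stock approaches the capacity set by the
                         # balancing loop, behavior not visible from
                         # either loop in isolation
```

The point of the example is structural: the system's long-run behavior emerges from the interaction of the loops, not from any single part.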
Some examples:
Rather than trying to improve the braking system on a car by looking in great detail at the material composition of the brake pads (reductionist), the boundary of the braking system may be extended to include the interactions between the:
brake disks or drums
brake pedal sensors
hydraulics
driver reaction time
tires
road conditions
weather conditions
time of day
Using the tenet of "multifinality", a supermarket could be considered to be:
a "profit making system" from the perspective of management and owners
a "distribution system" from the perspective of the suppliers
an "employment system" from the perspective of employees
a "materials supply system" from the perspective of customers
an "entertainment system" from the perspective of loiterers
a "social system" from the perspective of local residents
a "dating system" from the perspective of single customers
As a result of such thinking, new insights may be gained into how the supermarket works, why it has problems, how it can be improved or how changes made to one component of the system may impact the other components.
Applications
Systems thinking is increasingly being used to tackle a wide variety of subjects in fields such as computing, engineering, epidemiology, information science, health, manufacture, management, and the environment. Some examples:
Organizational architecture
Job design
Team population and work unit design
Linear and complex process design
Supply chain design
Business continuity planning with FMEA protocol
Critical infrastructure protection via FBI InfraGard
Delphi method, developed by RAND for USAF
Futures studies
Thought leadership mentoring
The public sector, including examples at The Systems Thinking Review [4]
Leadership development
Oceanography: forecasting complex systems behavior
Permaculture
Quality function deployment (QFD)
Quality management: Hoshin planning [5] methods
Quality storyboard: StoryTech framework (LeapfrogU-EE)
Software quality
Program management
Project management
MECE (the McKinsey Way)
Sociocracy
Linear thinking
See also
Boundary critique
Crossdisciplinarity
Holistic management
Information Flow Diagram
Interdisciplinary
Multidisciplinary
Negative feedback
Soft systems methodology
Synergetics (Fuller)
System dynamics
Systematics - study of multi-term systems
Systemics
Systems engineering
Systems intelligence
Systems philosophy
Systems theory
Systems science
Systemography
Transdisciplinary
Terms used in systems theory
Bibliography
Russell L. Ackoff (1999) Ackoff's Best: His Classic Writings on Management. (Wiley) ISBN 0-471-31634-2
Russell L. Ackoff (2010) Systems Thinking for Curious Managers [6]. (Triarchy Press) ISBN 978-0-9562631-5-5
Béla H. Bánáthy (1996) Designing Social Systems in a Changing World (Contemporary Systems Thinking). (Springer) ISBN 0-306-45251-0
Béla H. Bánáthy (2000) Guided Evolution of Society: A Systems View (Contemporary Systems Thinking). (Springer) ISBN 0-306-46382-2
Ludwig von Bertalanffy (1976, revised) General System Theory: Foundations, Development, Applications. (George Braziller) ISBN 0-807-60453-4
Fritjof Capra (1997) The Web of Life. (HarperCollins) ISBN 0-00-654751-6
Peter Checkland (1981) Systems Thinking, Systems Practice. (Wiley) ISBN 0-471-27911-0
Peter Checkland, Jim Scholes (1990) Soft Systems Methodology in Action. (Wiley) ISBN 0-471-92768-6
Peter Checkland, Sue Holwell (1998) Information, Systems and Information Systems. (Wiley) ISBN 0-471-95820-4
Peter Checkland, John Poulter (2006) Learning for Action. (Wiley) ISBN 0-470-02554-9
C. West Churchman (1984, revised) The Systems Approach. (Delacorte Press) ISBN 0-440-38407-9
John Gall (2003) The Systems Bible: The Beginner's Guide to Systems Large and Small. (General Systemantics Pr/Liberty) ISBN 0-961-82517-0
Jamshid Gharajedaghi (2005) Systems Thinking: Managing Chaos and Complexity - A Platform for Designing Business Architecture. (Butterworth-Heinemann) ISBN 0-750-67973-5
Charles François (ed.) (1997) International Encyclopedia of Systems and Cybernetics. München: K. G. Saur
Charles L. Hutchins (1996) Systemic Thinking: Solving Complex Problems. CO: PDS ISBN 1-888017-51-1
Bradford Keeney (2002, revised) Aesthetics of Change. (Guilford Press) ISBN 1-572-30830-3
Donella Meadows (2008) Thinking in Systems - A Primer. (Earthscan) ISBN 978-1-84407-726-7
John Seddon (2008) Systems Thinking in the Public Sector [7]. (Triarchy Press) ISBN 978-0-9550081-8-4
Peter M. Senge (1990) The Fifth Discipline - The Art & Practice of The Learning Organization. (Currency Doubleday) ISBN 0-385-26095-4
Lars Skyttner (2006) General Systems Theory: Problems, Perspective, Practice. (World Scientific Publishing Company) ISBN 9-812-56467-5
Frederic Vester (2007) The Art of Interconnected Thinking: Ideas and Tools for Tackling Complexity. (MCB) ISBN 3-939-31405-6
Gerald M. Weinberg (2001, revised) An Introduction to General Systems Thinking. (Dorset House) ISBN 0-932-63349-8
Brian Wilson (1990) Systems: Concepts, Methodologies and Applications, 2nd ed. (Wiley) ISBN 0-471-92716-3
Brian Wilson (2001) Soft Systems Methodology: Conceptual Model Building and its Contribution. (Wiley) ISBN 0-471-89489-3
Ludwig von Bertalanffy (1969) General System Theory. (George Braziller) ISBN 0-8076-0453-4
References
[1] http://www.watersfoundation.org/index.cfm?fuseaction=materials.main
[2] Capra, F. (1996) The web of life: a new scientific understanding of living systems (1st Anchor Books ed). New York: Anchor Books. p. 30
[3] Skyttner, Lars (2006). General Systems Theory: Problems, Perspective, Practice. World Scientific Publishing Company. ISBN 9-812-56467-5.
[4] http://www.thesystemsthinkingreview.co.uk/
[5] http://www.qualitydigest.com/may97/html/hoshin.html
[6] http://triarchypress.com/pages/Systems_Thinking_for_Curious_Managers.htm
[7] http://www.triarchypress.co.uk/pages/book5.htm
External links
The Systems Thinker newsletter glossary (http://www.thesystemsthinker.com/systemsthinkinglearn.html)
Dancing With Systems (http://www.projectworldview.org/wvtheme13.htm) from Project Worldview
Systems-thinking.de (http://www.systems-thinking.de/): systems thinking links displayed as a network
Systems Thinking Laboratory (http://www.systhink.org/)
Systems Thinking (http://www.thinking.net/Systems_Thinking/systems_thinking.html)
[Image: Light spectrum, from Theory of Colours. Goethe observed that colour arises at the edges, and the spectrum occurs where these coloured edges overlap.]
Author: Johann Wolfgang von Goethe
Original title: Zur Farbenlehre
Translator: Charles Eastlake [1]
Language: German
Publisher: John Murray
Publication date: 1810
Published in English: 1840
ISBN: 0-262-57021-1
OCLC Number: 318274261
Theory of Colours (original German title, Zur Farbenlehre) is a book by Johann Wolfgang von Goethe published in 1810. It contains some of the earliest published descriptions of phenomena such as coloured shadows, refraction, and chromatic aberration. Its influence extends primarily to the art world, especially among the Pre-Raphaelites. J. M. W. Turner studied it comprehensively, and referenced it in the titles of several paintings (Bockemühl, 1991[3]). Wassily Kandinsky considered Goethe's theory "one of the most important works."[4] Although Goethe's work was never well received by physicists, a number of philosophers and physicists are known to have concerned themselves with it, including Arthur Schopenhauer, Kurt Gödel, Werner Heisenberg, Ludwig Wittgenstein, and Hermann von Helmholtz. Mitchell Feigenbaum even said, 'Goethe had been right about colour!' (Ribe & Steinle, 2002[5]). In his book, Goethe provides a general exposition of how colour is perceived in a variety of circumstances, and considers Isaac Newton's observations to be special cases.[6] Goethe's concern was not so much with the analytic measurement of colour phenomena as with the qualities of how phenomena are perceived. Science has come to understand the distinction between the optical spectrum, as observed by Newton, and the phenomenon of human colour perception as presented by Goethe - a subject analyzed at length by Wittgenstein in his exegesis of Goethe in Remarks on Colour.
Goethe's theory
It is hard to present Goethe's "theory", since he refrains from setting up any actual theory; "its intention is to portray rather than explain" (Scientific Studies[7]). For Goethe, "the highest is to understand that all fact is really theory. The blue of the sky reveals to us the basic law of color. Search nothing beyond the phenomena, they themselves are the theory."[8]
[Goethe] delivered in full measure what was promised by the title of his excellent work: Data for a Theory of Color. They are important, complete, and significant data, rich material for a future theory of color. He has not, however, undertaken to furnish the theory itself; hence, as he himself remarks and admits on page xxxix of the introduction, he has not furnished us with a real explanation of the essential nature of color, but really postulates it as a phenomenon, and merely tells us how it originates, not what it is. The physiological colors he represents as a phenomenon, complete and existing by itself, without even attempting to show their relation to the physical colors, his principal theme. It is really a systematic presentation of facts, but it stops short at this. (Schopenhauer, On Vision and Colors, Introduction)
The crux of his color theory is its experiential source: rather than impose theoretical statements, Goethe sought to "allow light and color to be displayed in an ordered series of experiments that readers could experience for themselves" (Seamon, 1998[9]). As such, he would reject both the wave and particle theories because they are conceptually inferred and not directly perceived by the human senses. According to Goethe, "Newton's error... was trusting math over the sensations of his eye." (Jonah Lehrer, Goethe and Color, December 7, 2006 [10])
Goethe's theory of the origin of the spectrum isn't a theory of its origin that has proved unsatisfactory; it is really not a theory at all. Nothing can be predicted by means of it. It is, rather, a vague schematic outline, of the sort we find in James's psychology. There is no experimentum crucis for Goethe's theory of colour. (Ludwig Wittgenstein, Remarks on Colour)
Goethe outlines his method in the essay The experiment as mediator between subject and object (1772).[11] It underscores his experiential standpoint: "The human being himself, to the extent that he makes sound use of his senses, is the most exact physical apparatus that can exist." (Goethe, Scientific Studies[7])
Historical background
In 1740, Louis Bertrand Castel published a criticism of Newton's spectral description of prismatic colour,[12] in which he observed that the colours of white light split by a prism depended on the distance from the prism, and that Newton was looking at a special case; an argument which Goethe later developed.[13] It was in the 1780s that Goethe was asked to return a prism which had been on loan from the Privy Councillor Buettner in Jena. As he did so, he paused to take a look through the prism, and what he saw led him to a comprehensive study of light phenomena, culminating in The Theory of Colours.[14]
"Along with the rest of the world I was convinced that all the colours are contained in the light; no one had ever told me anything different, and I had never found the least cause to doubt it, because I had no further interest in the subject." (Goethe)
At the time, it was already known that the prismatic phenomenon is a process of splitting up the colourless (white) light into colours. Newton's theory stated that colourless light already contains the seven colours within itself, and when we direct this light through a prism, the prism separates what is already there in the light - the seven colours into which it is analyzed.
Castel's 1740 comparison of Newton's spectral colour description with his explanation in terms of the interaction of light and dark, which Goethe later developed into his Theory of Colours
Goethe's reasoning
Goethe reasoned: in such a way the phenomena are interpreted, but this is not the primal or complete phenomenon. A look through the prism shows that we do not see white areas split evenly into seven colours. Rather, we see colours at some edge or border-line. If we let light pass through the space of the room, we get a white circle on the screen... Put a prism in the way of the body of light that is going through; there the cylinder of light is diverted (Figure I), but what appears in the first place is not the series of seven colours at all, only a reddish colour at the lower edge, passing over into yellow, and at the upper edge a blue passing over into greenish shades. In the middle it stays white.
Figure I. Reddish-yellow edges overlap blue-violet edges to form green.
The colours therefore, to begin with, make their appearance purely and simply as phenomena at the border between light and dark. This is the original, the primary phenomenon. We are no longer seeing the original phenomenon when by reducing the circle in size we get a continuous sequence of colours. The latter phenomenon only arises when we take so small a circle that the colours extend inward from the edges to the middle. They then overlap in the middle and form what we call a continuous spectrum, while with the larger circle the colours formed at the edges stay as they are. This is the primal phenomenon. Colours arise at the borders, where light and dark flow together. (Steiner, 1919[15])
Goethe therefore concluded that the spectrum is a compound phenomenon. Colour arises at light-dark boundaries, and where the yellow-red and blue-violet edges overlap, you get green.
Yellow is a light which has been dampened by darkness; blue is a darkness weakened by the light.
Boundary conditions
When viewed through a prism, the orientation of a light-dark boundary with respect to the prism's axis is significant. With white above a dark boundary, we observe the light extending a blue-violet edge into the dark area; whereas dark above a light boundary results in a red-yellow edge extending into the light area. Goethe was intrigued by this difference. He felt that this arising of colour at light-dark boundaries was fundamental to the creation of the spectrum (which he considered to be a compound phenomenon).
When looked at through a prism, the colours seen at a light-dark boundary depend upon the orientation of this light-dark boundary.
Varying the experimental conditions by using different shades of grey shows that the intensity of coloured edges increases with boundary contrast.
Light and dark spectra: when the coloured edges overlap in a light spectrum, green results; when they overlap in a dark spectrum, magenta results.
With a light spectrum, coming out of the prism, one sees a shaft of light surrounded by dark. We find yellow-red colours along the top edge, and blue-violet colours along the bottom edge. The spectrum with green in the middle arises only where the blue-violet edges overlap the yellow-red edges. With a dark spectrum (i.e. a shadow surrounded by light), we find violet-blue along the top edge, and red-yellow along the bottom edge; where these edges overlap, we find magenta.
"For Newton, only spectral colors could count as fundamental. By contrast, Goethe's more empirical approach led him to recognize the essential role of (nonspectral) magenta in a complete color circle, a role that it still has in all modern color systems." (Ribe & Steinle, 2002[5]
Whereas Newton narrowed the beam of light in order to isolate the phenomenon, Goethe observed that with a wider aperture, there was no spectrum. He saw only reddish-yellow edges and blue-cyan edges with white between them, and the spectrum arose only where these edges came close enough to overlap. For him, the spectrum could be explained by the simpler phenomena of colour arising from the interaction of light and dark edges. Newton explains "the fact that all the colors appear only when the prism is at a certain distance from the screen, whereas the middle otherwise is white... [by saying] the more strongly diverted lights from the upper part of the image and the more weakly diverted ones from the lower part fall together in the middle and mix into white. The colors appear only at the edges because there none of the more strongly diverted parts of the light from above can fall into the most weakly diverted parts of the light, and none of the more weakly diverted ones from below can fall into the most strongly diverted ones." (Steiner, 1897[16])
Table of differences
Qualities of light | Newton (1704) | Goethe (1810)
Homogeneity | White light is composed of coloured elements (heterogeneous). | Light is the simplest, most undivided, most homogeneous thing (homogeneous).
Darkness | Darkness is the absence of light. | Darkness is polar to, and interacts with, light.
Spectrum | Colours are fanned out of light according to their refrangibility (primary phenomenon). | Coloured edges which arise at light-dark borders overlap to form a spectrum (compound phenomenon).
Prism | The prism is immaterial to the existence of colour. | As a turbid medium, the prism plays a role in the arising of colour.
Role of refraction | Light becomes decomposed through refraction, inflection, and reflection. | Refraction, inflection, and reflection can exist without the appearance of colour.
Analysis | White light decomposes into seven pure colours. | There are only two pure colours, blue and yellow; the rest are degrees of these.
Synthesis | Just as white light can be decomposed, it can be put back together. | Colours recombine to shades of grey.
Particle or wave? | Particle | Neither, since they are inferences and not observed with the senses.
Colour wheel | Asymmetric, 7 colours | Symmetric, 6 colours
As a catalogue of observations, Goethe's experiments are useful data for understanding the complexities of human colour perception. Whereas Newton sought to develop a mathematical model for the behaviour of light, Goethe focused on exploring how colour is perceived in a wide array of conditions. Goethe's reification of darkness has caused almost all of modern physics to reject Goethe's theory. Both Newton and Huygens defined darkness as an absence of light. Young and Fresnel combined Newton's particle theory with Huygens's wave theory to show that colour is the visible manifestation of light's wavelength. Physicists today attribute both a corpuscular and undulatory character to light, which is the content of the so-called wave-particle duality. Curiously, since the crux of Goethe's theory is tied to what is experiential, he would reject both the wave and particle theories, since they are conceptually inferred and not directly perceived by the human senses.
Reception by Scientists
Although the accuracy of Goethe's observations does not admit a great deal of criticism, his theory's failure to demonstrate significant predictive validity eventually rendered it scientifically irrelevant.
Goethe's colour theory has in many ways borne fruit in art, physiology and aesthetics. But victory, and hence influence on the research of the following century, has been Newton's. (Werner Heisenberg, 1952)
Much controversy stems from two different ways of investigating light and colour. Goethe was not interested in Newton's analytic treatment of colour, but he presented an excellent rational description of the phenomenon of human colour perception.
It is as such a collection of colour observations that we must view this book. Most of Goethe's explanations of color have been thoroughly demolished, but no criticism has been leveled at his reports of the facts to be observed; nor should any be. This book can lead the reader through a demonstration course not only in subjectively produced colors (after images, light and dark adaptation, irradiation, colored shadows, and pressure phosphenes), but also in physical phenomena detectable qualitatively by observation of color (absorption, scattering, refraction, diffraction, polarization, and interference). A reader who attempts to follow the logic of Goethe's explanations and who attempts to compare them with the currently accepted views might, even with the advantage of 1970 sophistication, become convinced that Goethe's theory, or at least a part of it, has been dismissed too quickly. (Judd, 1970[25])
As Feigenbaum understood them, Goethe's ideas had true science in them. They were hard and empirical. Over and over again, Goethe emphasized the repeatability of his experiments. It was the perception of colour, to Goethe, that was universal and objective. What scientific evidence was there for a definable real-world quality of redness independent of our perception? (James Gleick, Chaos[26])
Current Status
Developments in understanding how the brain interprets colours, such as colour constancy and Edwin H. Land's retinex theory bear striking similarities to Goethe's theory (Ribe & Steinle, 2002[5] ). A modern treatment of the book is given by Dennis L. Sepper in the book, Goethe contra Newton: Polemics and the Project for a New Science of Color (Cambridge University Press, 2003).[27]
Quotations
As to what I have done as a poet... I take no pride in it... but that in my century I am the only person who knows the truth in the difficult science of colours - of that, I say, I am not a little proud, and here I have a consciousness of a superiority to many. (Johann Eckermann, Conversations of Goethe, tr. John Oxenford, London, 1930, p. 302)
[Goethe] delivered in full measure what was promised by the title of his excellent work: data toward a theory of colour. They are important, complete, and significant data, rich material for a future theory of colour. He has not, however, undertaken to furnish the theory itself; hence, as he himself remarks and admits on page xxxix of the introduction, he has not furnished us with a real explanation of the essential nature of colour, but really postulates it as a phenomenon, and merely tells us how it originates, not what it is. (Schopenhauer, On Vision and Colors)
Goethe's theory of the origin of the spectrum isn't a theory of its origin that has proved unsatisfactory; it is really not a theory at all. Nothing can be predicted by means of it. It is, rather, a vague schematic outline, of the sort we find in James's psychology. There is no experimentum crucis for Goethe's theory of colour. (Wittgenstein, Remarks on Colour)
Can you lend me the Theory of Colours for a few weeks? It is an important work. His last things are insipid. (Ludwig van Beethoven, Conversation-book, 1820)
Should your glance on mornings lovely
Lift to drink the heaven's blue
Or when sun, veiled by sirocco,
Royal red sinks out of view
Give to Nature praise and honor.
Blithe of heart and sound of eye,
Knowing for the world of colour
Where its broad foundations lie.
(Goethe)
Bibliography
Goethe, Theory of Colours, trans. Charles Lock Eastlake, Cambridge, Massachusetts: The M.I.T. Press, 1982. ISBN 0-262-57021-1
Bockemühl, M. (1991). Turner. Köln: Taschen. ISBN 3-8228-6325-4
Duck, Michael, "Newton and Goethe on colour: Physical and physiological considerations", Annals of Science, Volume 45, Number 5, September 1988, pp. 507-519. Taylor and Francis Ltd. (http://www.ingentaconnect.com/content/tandf/tasc/1988/00000045/00000005/art00004?crawler=true)
Gleick, James, Chaos, pp. 165-7; William Heinemann Publishers, London, 1988
Proskauer, The Rediscovery of Color, Steiner Books, 1986
Ribe, Neil; Steinle, Friedrich, "Exploratory Experimentation: Goethe, Land, and Color Theory", Physics Today, Volume 55, Issue 7, July 2002 (http://scitation.aip.org/journals/doc/PHTOAD-ft/vol_55/iss_7/43_1.shtml)
Schopenhauer, On Vision and Colors, Providence: Berg, 1994. ISBN 0-85496-988-8
Sepper, Dennis L., Goethe contra Newton: Polemics and the Project for a New Science of Color, Cambridge University Press, 2007. ISBN 0521531322
Steiner, Rudolf, First Scientific Lecture-Course, Third Lecture, Stuttgart, 25 December 1919; GA320 (http://wn.rsarchive.org/Lectures/LightCrse/19191225p01.html;mark=156,43,56#WN_mark)
Steiner, Rudolf, Goethe's World View, Chapter III, "The Phenomena of the World of Colors", 1897 (http://wn.rsarchive.org/Books/GA006/English/MP1985/GA006_c03.html)
Wittgenstein, Remarks on Colour, Berkeley and Los Angeles: University of California Press, 1978. ISBN 0-520-03727-8
See also
Color
Color theory
Theory of painting
Prism (optics)
Same color illusion
Visible spectrum
On Vision and Colors
External links
Complete book content in German language (http://www.farben-welten.de/farben-welten/goethes-farbenlehre.html)
Scanned copy of English translation as a Google book (http://books.google.com/books?id=qDIHAAAAQAAJ&printsec=toc&source=gbs_summary_r&cad=0#PPP1,M1)
Physics Today: Exploratory Experimentation: Goethe, Land, and Colour Theory, 2002 (http://scitation.aip.org/journals/doc/PHTOAD-ft/vol_55/iss_7/43_1.shtml?bypassSSO=1)
Goethe's Prismatic Experiments; photos by Sakae Tajima (http://www.scielo.br/img/revistas/ea/v7n19/encarte19.pdf)
Light, Darkness and Colour, a film by Henrik Boëtius (1998) (http://magichourfilms.webhotel.net/index.php?specLan=eng&specPage=filmbasen_showMAX&specMovie=ok&id=36)
Connections That Have a Quality of Necessity: Goethe's Way of Science As a Phenomenology of Nature (http://www.arch.ksu.edu/seamon/articles/goethe_essay.htm)
Colour Mixing and Goethe's Triangle (Java Applet) (http://www.cs.brown.edu/courses/cs092/VA10/HTML/GoethesTriangleExplanation.html)
Texts on Wikisource: John Tyndall, "Goethe's Farbenlehre (Theory of Colors) I", in Popular Science Monthly, Vol. 17, June 1880; John Tyndall, "Goethe's Farbenlehre (Theory of Colors) II", in Popular Science Monthly, Vol. 17, July 1880
Critical review of Goethe's Theory of Colours (http://www.handprint.com/HP/WCL/book3.html#goethe)
A list of links relating to Goethe's investigation of colour (http://alpha.lasalle.edu/~didio/courses/hon462/goethe_chaos.htm)
Essay discussing color psychology and Goethe's theory (http://midwest-facilitators.net/downloads/mfn_19991025_frank_vodvarka.pdf)
Google Scholar: Works citing "Theory of Colours" (http://scholar.google.com/scholar?hl=en&lr=&q=link:hlUS5CYSUSEJ:scholar.google.com/)
License
Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/
Contents
Articles
Chaos theory
Complexity theory
Holism in science
Interconnectedness
Model of hierarchical complexity
Network theory
Programming Complexity
Sociology and complexity science
Systems theory
Variety (cybernetics)
Volatility, uncertainty, complexity and ambiguity
Holism
Confirmation holism
Holism in ecological anthropology
Holon (philosophy)
Indeterminacy (philosophy)
Integral (spirituality)
Integral Theory
Integral ecology
Logical holism
Organicism
Synergetics (Fuller)
Synergetics (Haken)
Synergism
Synergy
Systems thinking
Integral art
Integral education
Integral psychology
Integral yoga
World Union
References
Article Sources and Contributors
Article Licenses
License
Chaos theory
Chaos theory is a field of study in mathematics, physics, and philosophy studying the behavior of dynamical systems that are highly sensitive to initial conditions. This sensitivity is popularly referred to as the butterfly effect. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible in general.[1] This happens even though these systems are deterministic, meaning that their future behaviour is fully determined by their initial conditions, with no random elements involved.[2] In other words, the deterministic nature of these systems does not make them predictable.[3] This behavior is known as deterministic chaos, or simply chaos.
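The divergence described above is easy to see numerically. The sketch below is an illustration of the general idea (my example, not from the article), using the one-dimensional logistic map x → 4x(1-x), a standard textbook chaotic system, as a stand-in: two runs whose starting values differ by one part in ten billion end up bearing no resemblance to each other.

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started
# a distance of 1e-10 apart. The separation grows roughly exponentially
# until the two runs are completely decorrelated.

def logistic(x):
    """One step of the logistic map with parameter r = 4 (chaotic)."""
    return 4.0 * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.3, 60)
b = trajectory(0.3 + 1e-10, 60)

separations = [abs(x - y) for x, y in zip(a, b)]
for n in (0, 10, 20, 30, 40):
    print(n, separations[n])
```

Both runs are fully deterministic, yet the microscopic difference in starting conditions eventually dominates the outcome: the butterfly effect in miniature.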
Chaotic behavior can be observed in many natural systems, such as the weather.[4] Explanation of such behavior may be sought through analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps.
Applications
Chaos theory is applied in many scientific disciplines: mathematics, programming, microbiology, biology, computer science, economics,[5] [6] [7] engineering,[8] finance,[9] [10] philosophy, physics, politics, population dynamics, psychology, and robotics.[11] Chaotic behavior has been observed in the laboratory in a variety of systems including electrical circuits, lasers, oscillating chemical reactions, fluid dynamics, and mechanical and magneto-mechanical devices, as well as computer models of chaotic processes. Observations of chaotic behavior in nature include changes in weather,[4] the dynamics of satellites in the solar system, the time evolution of the magnetic field of celestial bodies, population growth in ecology, the dynamics of the action potentials in neurons, and molecular vibrations. There is some controversy over the existence of chaotic dynamics in plate tectonics and in economics.[12] [13] [14] One of the most successful applications of chaos theory has been in ecology, where dynamical systems such as the Ricker model have been used to show how population growth under density dependence can lead to chaotic dynamics. Chaos theory is also currently being applied to medical studies of epilepsy, specifically to the prediction of seemingly random seizures by observing initial conditions.[15] A related field of physics called quantum chaos theory investigates the relationship between chaos and quantum mechanics. The correspondence principle states that classical mechanics is a special case of quantum mechanics, the classical limit. If quantum mechanics does not demonstrate an exponential sensitivity to initial conditions, it is unclear how exponential sensitivity to initial conditions can arise in practice in classical chaos.[16] Recently, another
Chaos theory field, called relativistic chaos,[17] has emerged to describe systems that follow the laws of general relativity. The initial conditions of three or more bodies interacting through gravitational attraction (see the n-body problem) can be arranged to produce chaotic motion.
Chaotic dynamics
In common usage, "chaos" means "a state of disorder",[18] but the adjective "chaotic" is defined more precisely in chaos theory. Although there is no universally accepted mathematical definition of chaos, a commonly used definition says that, for a dynamical system to be classified as chaotic, it must have the following properties:[19]
1. it must be sensitive to initial conditions,
2. it must be topologically mixing, and
3. its periodic orbits must be dense.
In mathematical terms, two trajectories in phase space with initial separation δZ0 diverge at a rate given by |δZ(t)| ≈ e^(λt) |δZ0|, where λ is the Lyapunov exponent. The rate of separation can be different for different orientations of the initial separation vector. Thus, there is a whole spectrum of Lyapunov exponents; the number of them is equal to the number of dimensions of the phase space. It is common to just refer to the largest one, i.e. to the Maximal Lyapunov exponent (MLE), because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.
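The MLE can be estimated numerically for a concrete system. As a hedged sketch (my choice of example, not from the article), consider the one-dimensional logistic map x → 4x(1-x), whose exponent is known exactly to be ln 2 ≈ 0.693: the exponent is the long-run average of ln|f'(x)| along an orbit.

```python
import math

# Estimate the maximal Lyapunov exponent of the logistic map
# x -> 4x(1-x) by averaging ln|f'(x)| = ln|4 - 8x| along an orbit.
# The exact value for this map is ln 2, about 0.6931.

def lyapunov_logistic(x0=0.3, n=100_000, transient=1_000):
    x = x0
    for _ in range(transient):          # let the orbit settle first
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(4.0 - 8.0 * x))   # ln|f'(x)|
        x = 4.0 * x * (1.0 - x)
    return total / n

mle = lyapunov_logistic()
print(round(mle, 3))   # close to 0.693; positive, hence chaotic
```

A positive estimate like this is exactly the "positive MLE" criterion above: nearby orbits separate at an average rate of e^λ per iteration.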
Topological mixing
Topological mixing (or topological transitivity) means that the system will evolve over time so that any given region or open set of its phase space will eventually overlap with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system. Topological mixing is often omitted from popular accounts of chaos, which equate chaos with sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points will eventually become widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behaviour: all points except 0 tend to infinity.
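The doubling counterexample in the paragraph above can be sketched directly (my illustration of the point, not code from the article): the gap between two nearby starting values doubles on every step, yet both orbits simply run off to infinity.

```python
# Repeated doubling: sensitive dependence without chaos. The separation
# between two nearby starting points doubles each step (exponential
# divergence), but the dynamics are trivial: every nonzero point tends
# to infinity, and there is no topological mixing.

def double_orbit(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(2.0 * xs[-1])
    return xs

a = double_orbit(1.0, 20)
b = double_orbit(1.0001, 20)

sep = [abs(x - y) for x, y in zip(a, b)]
print(sep[0], sep[10], sep[20])   # each printed value is 2**10 times the previous
```

The Lyapunov exponent here is ln 2 > 0, yet nothing about the motion is disordered, which is why sensitivity alone does not define chaos.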
The map defined by x → 4x(1-x) and y → x + y if x + y < 1 (x + y - 1 otherwise) also displays topological mixing. Here the blue region is transformed by the dynamics first to the purple region, then to the pink and red regions, and eventually to a cloud of points scattered across the space.
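The two-dimensional map in the caption can also be simulated. In this sketch (mine, for illustration), a tight cluster of 200 points is pushed through 30 iterations and ends up smeared across the whole unit square, the behaviour the figure depicts.

```python
# The 2-D map: x -> 4x(1-x), y -> x + y (mod 1). A cluster of points
# confined to a 0.001-wide box is spread across the entire unit
# square -- a numerical picture of topological mixing.

def step(x, y):
    return 4.0 * x * (1.0 - x), (x + y) % 1.0

# 200 points packed into a tiny box near (0.2, 0.3)
points = [(0.2 + i * 1e-6, 0.3 + i * 1e-6) for i in range(200)]

for _ in range(30):
    points = [step(x, y) for x, y in points]

xs = [p[0] for p in points]
ys = [p[1] for p in points]
print(max(xs) - min(xs), max(ys) - min(ys))   # both spreads approach 1
```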
Strange Attractors
Some dynamical systems, like the one-dimensional logistic map defined by x → 4x(1-x), are chaotic everywhere, but in many cases chaotic behaviour is found only in a subset of phase space. The cases of most interest arise when the chaotic behaviour takes place on an attractor, since then a large set of initial conditions will lead to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system.
[Figure: The Lorenz attractor displays chaotic behavior. These two plots demonstrate sensitive dependence on initial conditions within the region of phase space occupied by the attractor.]
The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it was not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern which looks like the wings of a butterfly. Unlike fixed-point attractors and limit cycles, the attractors which arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points; Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and a fractal dimension can be calculated for them.
History
The first discoverer of chaos was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits which are nonperiodic, and yet not forever increasing nor approaching a fixed point.[32] [33] In 1898 Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature.[34] In the system studied, "Hadamard's billiards," Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.

[Figure: Barnsley fern created using the chaos game. Natural forms (ferns, clouds, mountains, etc.) may be recreated through an Iterated function system (IFS).]

Much of the earlier theory was developed almost entirely by mathematicians, under the name of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by G.D. Birkhoff,[35] A. N. Kolmogorov,[36] [37] [38] M.L. Cartwright and J.E. Littlewood,[39] and Stephen Smale.[40] Except for Smale, these studies were all directly inspired by physics: the three-body problem in the case of Birkhoff, turbulence and astronomical problems in the case of Kolmogorov, and radio engineering in the case of Cartwright and Littlewood. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing. Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behaviour of certain experiments like that of the logistic map.
What had previously been excluded as measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems. The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems.

[Figure: Turbulence in the tip vortex from an airplane wing. Studies of the critical point beyond which a system creates turbulence were important for chaos theory, analyzed for example by the Soviet physicist Lev Landau, who developed the Landau-Hopf theory of turbulence. David Ruelle and Floris Takens later predicted, against Landau, that fluid turbulence could develop through a strange attractor, a main concept of chaos theory.]

An early pioneer of the theory was Edward Lorenz, whose interest in chaos came about accidentally through his work on weather prediction in 1961.[41] Lorenz was using a simple digital computer, a Royal McBee LGP-30, to run his weather simulation. He wanted to see a sequence of data again, and to save time he started the simulation in the middle of its course. He was able to do this by entering a printout of the data corresponding to conditions in the middle of his simulation, which he had calculated the previous time. To his surprise, the weather that the machine began to predict was completely different from the weather calculated before. Lorenz tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 was printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have had practically no effect. However, Lorenz had discovered that small changes in initial conditions produced large changes in the
long-term outcome.[42] Lorenz's discovery, which gave its name to Lorenz attractors, showed that meteorology cannot reasonably predict weather beyond a period of about a week at most.
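Lorenz's rounding accident is easy to replay in miniature. Here the logistic map stands in for his weather model (an illustrative substitution, not what Lorenz actually ran): restarting from the printed 3-digit value 0.506 instead of 0.506127 soon produces a completely different trajectory.

```python
def step(x):
    """One iteration of the logistic map, standing in for a weather model."""
    return 4 * x * (1 - x)

full, rounded = 0.506127, 0.506   # the values from Lorenz's printout
max_gap = 0.0
for _ in range(60):
    full, rounded = step(full), step(rounded)
    max_gap = max(max_gap, abs(full - rounded))

# The tiny initial difference (about 1e-4) is roughly doubled at every
# step, until the two trajectories bear no resemblance to one another.
```

After a few dozen steps the gap between the two runs is of order one, just as Lorenz's two weather runs diverged.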
The year before, Benoît Mandelbrot found recurring patterns at every scale in data on cotton prices.[43] Beforehand, he had studied information theory and concluded noise was patterned like a Cantor set: on any scale the proportion of noise-containing periods to error-free periods was a constant, thus errors were inevitable and must be planned for by incorporating redundancy.[44] Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur, e.g., in a stock's prices after bad news, thus challenging normal distribution theory in statistics, a.k.a. the bell curve) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards).[45] [46]

In 1967, he published "How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension," showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device.[47] Observing that a ball of twine appears to be a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal: for example, the Koch curve or "snowflake," which is infinitely long yet encloses a finite space and has a fractal dimension of about 1.2619, the Menger sponge, and the Sierpiński gasket. In 1975 Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory. Biological systems such as the branching of the circulatory and bronchial systems proved to fit a fractal model.

Chaos was observed by a number of experimenters before it was recognized; e.g., in 1927 by van der Pol[48] and in 1958 by R.L.
Ives.[49] [50] However, as a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers (that is, vacuum tubes) and noticed, on Nov. 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.[51] [52]

In December 1977 the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw (a physicist, part of the Eudaemons group with J. Doyne Farmer and Norman Packard, who tried to find a mathematical method to beat roulette, and then created with them the Dynamical Systems Collective in Santa Cruz, California), and the meteorologist Edward Lorenz. The following year, Mitchell Feigenbaum published the noted article "Quantitative Universality for a Class of Nonlinear Transformations", where he described logistic maps.[53] Feigenbaum had applied fractal geometry to the study of natural forms such as coastlines. Feigenbaum notably discovered the universality in chaos, permitting an application of chaos theory to many different phenomena.

In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in convective Rayleigh-Bénard systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum "for his brilliant experimental demonstration of the transition to turbulence and chaos in dynamical systems".[54] Then in 1986 the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine.
There, Bernardo Huberman presented a mathematical model of the eye tracking disorder among schizophrenics.[55] This led to a renewal of physiology in the 1980s through the application of chaos theory, for example in the study of pathological cardiac cycles.

In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters[56] describing for the first time self-organized criticality (SOC), considered to be one of the mechanisms by which complexity arises in nature. Alongside largely lab-based approaches such as the Bak-Tang-Wiesenfeld sandpile, many other investigations have centered around large-scale natural or social systems that are known (or suspected) to display scale-invariant behaviour. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including: earthquakes (which, long before SOC was discovered, were known as a source of
scale-invariant behaviour such as the Gutenberg-Richter law describing the statistical distribution of earthquake sizes, and the Omori law[57] describing the frequency of aftershocks); solar flares; fluctuations in economic systems such as financial markets (references to SOC are common in econophysics); landscape formation; forest fires; landslides; epidemics; and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Worryingly, given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These "applied" investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.

The same year, James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. At first the domain of work of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift set out in The Structure of Scientific Revolutions (1962), many "chaologists" (as some called themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick. The availability of cheaper, more powerful computers broadened the applicability of chaos theory.
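The Bak-Tang-Wiesenfeld sandpile is simple enough to simulate directly. In this sketch (grid size, grain count and random seed are arbitrary choices), grains are dropped on a grid; any cell holding 4 or more grains topples, shedding one grain to each neighbour (grains fall off the edges), and the resulting avalanche sizes span many scales.

```python
import random

def topple(grid, n):
    """Relax the grid; return the number of topplings (avalanche size)."""
    avalanche = 0
    unstable = [(i, j) for i in range(n) for j in range(n) if grid[i][j] >= 4]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:       # already relaxed by an earlier toppling
            continue
        grid[i][j] -= 4
        avalanche += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:   # edge cells shed grains off-grid
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return avalanche

random.seed(0)
n = 10
grid = [[0] * n for _ in range(n)]
sizes = []
for _ in range(2000):                         # drop grains one at a time
    i, j = random.randrange(n), random.randrange(n)
    grid[i][j] += 1
    sizes.append(topple(grid, n))
```

Once the pile reaches its critical state, the distribution of values in `sizes` is heavy-tailed: most drops cause nothing, while occasional avalanches sweep much of the grid.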
Currently, chaos theory continues to be a very active area of research, involving many different disciplines (mathematics, topology, physics, population biology, biology, meteorology, astrophysics, information theory, etc.).
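The fractional dimensions that recur in this history follow from a simple similarity rule: an exactly self-similar set made of N copies, each scaled down by a factor s, has dimension D = log N / log s. For the Koch curve (4 copies at one-third scale) this yields the value of about 1.2619 quoted above.

```python
import math

def similarity_dimension(copies, scale):
    """D = log N / log s for an exactly self-similar set."""
    return math.log(copies) / math.log(scale)

koch = similarity_dimension(4, 3)        # Koch curve, ~1.2619
sierpinski = similarity_dimension(3, 2)  # Sierpinski gasket, ~1.585

# The Koch curve's length grows without bound as the measuring stick
# shrinks: at refinement step k the length is (4/3)**k.
lengths = [(4 / 3) ** k for k in range(6)]
```

The unbounded growth of `lengths` is the coastline paradox in miniature: a finer measuring device always reports a longer coastline.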
When a nonlinear deterministic system is subject to external fluctuations, its trajectories undergo serious and permanent distortions. Furthermore, the noise is amplified by the inherent nonlinearity and reveals totally new dynamical properties. Statistical tests attempting to separate noise from the deterministic skeleton, or, inversely, to isolate the deterministic part, risk failure. Things become worse when the deterministic component is a nonlinear feedback system.[62] In the presence of interactions between nonlinear deterministic components and noise, the resulting nonlinear series can display dynamics that traditional tests for nonlinearity are sometimes not able to capture.[63]
Cultural references
Chaos theory has been mentioned in numerous movies and works of literature. Examples include the film Jurassic Park and Tom Stoppard's play Arcadia.
See also
Examples of chaotic systems: Arnold's cat map, Bouncing Ball Simulation System, Chua's circuit, Double pendulum, Dynamical billiards, Economic bubble, Hénon map, Horseshoe map, Logistic map, Rössler attractor, Standard map, Swinging Atwood's machine, Tilt-A-Whirl, Coupled map lattice, List of chaotic maps
Other related topics: Anosov diffeomorphism, Bifurcation theory, Butterfly effect, Chaos theory in organizational development, Complexity, Control of chaos, Edge of chaos, Fractal, Julia set, Mandelbrot set, Predictability, Quantum chaos, Santa Fe Institute, Synchronization of chaos, Unintended consequence
People: Mitchell Feigenbaum, Martin Gutzwiller, Michael Berry, Brosl Hasslacher, Michel Hénon, Edward Lorenz, Aleksandr Lyapunov, Benoît Mandelbrot, Henri Poincaré, Otto Rössler, David Ruelle, Oleksandr Mikolaiovich Sharkovsky, Floris Takens, James A. Yorke
Scientific literature
Articles
A.N. Sharkovskii, "Co-existence of cycles of a continuous mapping of the line into itself", Ukrainian Math. J., 16:61-71 (1964)
Li, T. Y. and Yorke, J. A. "Period Three Implies Chaos." American Mathematical Monthly 82, 985-992, 1975.
Kolyada, S. F. "Li-Yorke sensitivity and other concepts of chaos [64]", Ukrainian Math. J. 56 (2004), 1242-1257.
Textbooks
Alligood, K. T., Sauer, T., and Yorke, J. A. (1997). Chaos: An Introduction to Dynamical Systems. Springer-Verlag New York, LLC. ISBN 0-387-94677-2.
Baker, G. L. (1996). Chaos, Scattering and Statistical Mechanics. Cambridge University Press. ISBN 0-521-39511-9.
Badii, R.; Politi, A. (1997). Complexity: Hierarchical Structures and Scaling in Physics [65]. Cambridge University Press. ISBN 0-521-66385-7.
Devaney, Robert L. (2003). An Introduction to Chaotic Dynamical Systems, 2nd ed. Westview Press. ISBN 0-8133-4085-3.
Gollub, J. P.; Baker, G. L. (1996). Chaotic Dynamics. Cambridge University Press. ISBN 0-521-47685-2.
Complexity theory
Complexity theory may refer to:
The study of complex systems.
Computational complexity theory, a field in theoretical computer science and mathematics dealing with the resources required during computation to solve a given problem.
The theoretical treatment of the Kolmogorov complexity of a string, studied in algorithmic information theory by identifying the length of the shortest binary program which can output that string.
Complexity theory and organizations or complexity theory and strategy, which have been influential in strategic management and organizational studies and incorporate the study of complex adaptive systems.
Complexity economics, the application of complexity theory to economics.
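Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude upper bound, which makes the idea concrete (the two strings below are arbitrary examples, not from the text):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Upper-bound proxy for Kolmogorov complexity: zlib output length."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 500   # has a short description: "ab, repeated 500 times"
random.seed(1)
noisy = bytes(random.randrange(256) for _ in range(1000))  # no short description

# The regular string compresses to a few bytes; the random-looking one
# cannot be compressed below roughly its own length.
```

The gap between `compressed_size(regular)` and `compressed_size(noisy)` illustrates why a string's shortest program, not its raw length, measures its complexity.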
See also
Systems theory (or systemics or general systems theory), an interdisciplinary field including engineering, biology, and philosophy that studies large systems
Complexity
Holism in science
Holism in science, or Holistic science, is an approach to research that emphasizes the study of complex systems. This practice is in contrast to a purely analytic tradition (sometimes called reductionism) which purports to understand systems by dividing them into their smallest possible or discernible elements and understanding their elemental properties alone. The holism-reductionism dichotomy is often evident in conflicting interpretations of experimental findings and in setting priorities for future research.
Overview
Holism in science is an approach to research that emphasizes the study of complex systems. Two central aspects are: 1. the way of doing science, sometimes called "whole to parts," which focuses on observation of the specimen within its ecosystem first before breaking down to study any part of the specimen. 2. the idea that the scientist is not a passive observer of an external universe; that there is no 'objective truth,' but that the individual is in a reciprocal, participatory relationship with nature, and that the observer's contribution to the process is valuable. The term holistic science has been used as a category encompassing a number of scientific research fields (see some examples below). The term may not have a precise definition. Fields of scientific research considered potentially holistic do however have certain things in common. First, they are multidisciplinary. Second, they are concerned with the behavior of complex systems. Third, they recognize feedback within systems as a crucial element for understanding their behavior. The Santa Fe Institute, a center of holistic scientific research in the United States, expresses it like this: The two dominant characteristics of the SFI research style are commitment to a multidisciplinary approach and an emphasis on the study of problems that involve complex interactions among their constituent parts. "Santa Fe Institute's Research Topics" [1]. Retrieved January 22, 2006.
Opposing views
Holistic science is controversial. One opposing view is that holistic science is "pseudoscience" because it does not rigorously follow the scientific method despite its use of scientific-sounding language. Bunge (1983) and Lilienfeld et al. (2003) state that proponents of pseudoscientific claims, especially in organic medicine, alternative medicine, naturopathy and mental health, often resort to the mantra of holism to explain negative findings or to immunise their claims against testing. Stenger (1999) states that "holistic healing is associated with the rejection of classical, Newtonian physics. Yet, holistic healing retains many ideas from eighteenth and nineteenth century physics. Its proponents are blissfully unaware that these ideas, especially superluminal holism, have been rejected by modern physics as well". Science journalist John Horgan has expressed this view in the book The End of Science (1996). He wrote that a certain pervasive model within holistic science, self-organized criticality, for example, "is not really a theory at all. Like punctuated equilibrium, self-organized criticality is merely a description, one of many, of the random fluctuations, the noise, permeating nature." By the theorists' own admissions, he said, such a model "can generate neither specific predictions about nature nor meaningful insights. What good is it, then?"
Cognitive science
The field of cognitive science, or the study of mind and intelligence, offers some examples of holistic approaches. These include the Unified Theory of Cognition (Allen Newell; e.g. Soar, ACT-R as models) and many others, many of which rely on the concept of emergence, i.e. the idea that the interplay of many entities makes up a functioning whole. Another example is psychological nativism, the study of the innate structure of the mind. Non-holistic, functionalist approaches within cognitive science include e.g. the modularity of mind paradigm. Cognitive science need not concern only human cognition. Biologist Marc Bekoff has done holistic, interdisciplinary scientific research in animal cognition and has published a book about it (see below).
Quantum physics
In the standard Copenhagen interpretation of quantum mechanics there is a holism of the measurement situation: apparatus and measured object form an inseparable whole. According to Niels Bohr, there is an "uncontrollable disturbance" of the measured object by the act of measurement, and it is impossible to separate the effect of the measuring apparatus from the object measured. The observer-measurement relation is an active area of research today: see Quantum decoherence, Quantum Zeno effect and Measurement problem.
Engineering
In engineering, the holistic approach can be considered "natural," because one of the main engineering tasks is to design systems that do not yet exist. Conceptual design therefore begins from a general idea, which is successively specialized top-down. The process stops when the specified details correspond to components available on the market.
Biology
Holistic science sometimes asks different questions than a strictly analytic science, as is exemplified by Goethe in the following passage: We conceive of the individual animal as a small world, existing for its own sake, by its own means. Every creature is its own reason to be. All its parts have a direct effect on one another, a relationship to one another, thereby constantly renewing the circle of life; thus we are justified in considering every animal physiologically perfect. Viewed from within, no part of the animal is a useless or arbitrary product of the formative impulse (as so often thought). Externally, some parts may seem useless because the inner coherence of the animal nature has given them this form without regard to outer circumstance. Thus [we ask] not the question, What are they for? but rather, Where do they come from? (Goethe, Scientific Studies, Suhrkamp ed., vol. 12, p. 121; trans. Douglas Miller)
Other examples
Ecology, or ecological science: studying ecology at levels ranging from populations, communities, and ecosystems up to the biosphere as a whole.
The study of climate change in the wider context of Earth science (and Earth system science in particular) can be considered holistic science, as the climate (and the Earth itself) constitutes a complex system to which the scientific method cannot be applied using current technology. The first scientist to seriously propose this was James Lovelock. [2] (URL accessed on 28 November 2006)
Princeton University hosts a holistic science project entitled the "Global Consciousness Project" that uses a network of physical random number generators to register events of global significance, testing the hypothesis that there is a collective human consciousness at work in the world. [3]
Johann Wolfgang von Goethe's 1810 book Zur Farbenlehre (Theory of Colours) parted radically not only with the dominant Newtonian optical theories of his time, but also with the entire Enlightenment methodology of reductive science. Although the theory was not received well by scientists, Goethe, considered one of the most important intellectual figures in modern Europe, thought of his colour theory as his greatest accomplishment. Holistic theorists and scientists such as Rupert Sheldrake still refer to Goethe's colour theory as an inspiring example of holistic science. The introduction to the book lays out Goethe's unique philosophy of science.
In system dynamics modeling, a field that originated at MIT, a holistic controlling paradigm organizes scientific method, but uses the results of reductionist science to define static relationships between variables in a modeling procedure that permits simulation of the dynamics of the system under study. As mentioned above, feedback is a crucial tool for understanding system dynamics.
[4] Another example of how holistic and reductionist science can be mutually supportive and cooperative is free-choice profiling.
As an example of interdisciplinary holistic research, Joe L. Kincheloe, in his work in critical pedagogy, has employed complexity and holism in science to overcome reductionism.
See also
Articles related to holism: Cognitive science, Complexity theory, Holism, Holistic health, Philosophy of biology, Scientific reductionism, Systems thinking, Antireductionism
Articles related to classification of scientific endeavors: Cartesian anxiety, Demarcation problem, Hard science, Philosophy of science, Pseudoscience, Romanticism in science, Science wars
References
Paul Davies and John Gribbin. The Matter Myth: Dramatic Discoveries That Challenge Our Understanding of Physical Reality. Simon & Schuster, 1992. ISBN 0-671-72841-5
Henri Bortoft. The Wholeness of Nature: Goethe's Way Toward a Science of Participation in Nature. Lindisfarne Books, 1996. ISBN 0-940262-79-7
Joe L. Kincheloe. Critical Constructivism. New York: Peter Lang, 2005.
Joe L. Kincheloe. Teachers as Researchers: Qualitative Paths to Empowerment. 2nd ed. London: Falmer, 2003.
Joe L. Kincheloe and Kathleen Berry. Rigor and Complexity in Qualitative Research: Constructing the Bricolage. London: Open University Press, 2004.
Humberto R. Maturana and Francisco Varela. The Tree of Knowledge: The Biological Roots of Human Understanding. Shambhala, 1992. ISBN 0-87773-642-1
Ilya Prigogine and Isabelle Stengers. Order out of Chaos: Man's New Dialogue with Nature. Flamingo, 1984. ISBN 0-00-654115-1
Karl North. "What is the Proper Relationship of Holistic and Reductionist Science?" [8]
Annemarie Colbin, Ph.D. "The Fine Line: (W)holism and Science" [9]
Michael R. Meyer. "A New Image of Cosmos & Anthropos: From Ancient Wisdom to a Philosophy of Wholeness" [10]
R.J.C. Wilding. Excerpts from Holistic Science towards a Second Renaissance [11] (unpublished book in process)
Mike King. "Concerning the Spiritual in Art and Science" [12] (available online)
Brian Goodwin. "Patterns of Wholeness: Introducing Holistic Science" [13], from the journal Resurgence [14]
Brian Goodwin. "From Control to Participation" [15], from the journal Resurgence [14]
System Dynamics Resource Page [16] at Arizona State University, hosted by Craig W. Kirkwood
David Seamon and Arthur Zajonc (eds.). Introduction [17] to Goethe's Way of Science: A Phenomenology of Nature. State University of New York Press, 1998
M. Bunge. "Demarcating Science from Pseudoscience." Fundamenta Scientiae 3(3/4), 1982, pp. 369-388
S. O. Lilienfeld et al. (eds.). Science and Pseudoscience in Clinical Psychology. New York/London, 2003
Olival Freire Jr. "Science and exile: David Bohm, the hot times of the Cold War, and his struggle for a new interpretation of quantum mechanics" [18] (online article)
Definition of System Dynamics and Systems Thinking [19], on the System Dynamics Society homepage
V. J. Stenger. "The Physics of 'Alternative Medicine'." The Scientific Review of Alternative Medicine 3(1), Spring/Summer 1999
Interconnectedness
Oneness may refer to:
Divine oneness, the belief that God is without parts
Oneness Pentecostalism, a particular belief about the Godhead
Oneness of God, the belief that only one deity exists
Oneness (mathematics), a mathematical concept
Oneness (Carlos Santana album), a 1979 rock album
GodWeenSatan: The Oneness (album)
Oneness (Jack DeJohnette album)
The Oneness Movement led by Kalki Bhagavan
See also
Henosis (Greek - unity, oneness)
Model of hierarchical complexity

Overview
The Model of Hierarchical Complexity (MHC), which has been presented as a formal theory,[1] is a framework for scoring how complex a behavior is. Developed by Michael Commons,[2] it quantifies the order of hierarchical complexity of a task based on mathematical principles of how the information is organized,[3] and of information science.[4] Its forerunner was the General Stage Model.[5] It is a model in mathematical psychology. Behaviors that may be scored include those of individual humans or their social groupings (e.g., organizations, governments, societies), animals, or machines. It enables scoring the complexity of human reasoning in any domain, and it is cross-culturally valid: the scoring is based on the mathematical complexity of the hierarchical organization of information, and depends not upon the content of the information (e.g., what is done, said, written, or analyzed) but upon how the information is organized.

The MHC is a non-mentalistic model of developmental stages. It specifies 14 orders of hierarchical complexity and their corresponding stages, and it differs from previous proposals about developmental stages applied to humans.[6] Instead of attributing behavioral changes across a person's age to the development of mental structures or schemata, this model posits that task sequences form hierarchies that become increasingly complex. Because less complex tasks must be completed and practiced before more complex tasks can be acquired, this accounts for the developmental changes seen, for example, in individual persons' performance of tasks. (For example, a person cannot perform arithmetic until the numeral representations of numbers are learned; a person cannot multiply numbers until addition is learned.) Furthermore, previous theories of stage have confounded the stimulus and response in assessing stage, by simply scoring responses and ignoring the task or stimulus.
The Model of Hierarchical Complexity separates the task or stimulus from the performance. The participant's performance on a task of a given complexity represents the stage of developmental complexity.
Horizontal complexity
Classical information describes the number of yes-no questions it takes to do a task. For example, if one asked a person across the room whether a penny came up heads when they flipped it, their saying "heads" would transmit 1 bit of horizontal information. If there were 2 pennies, one would have to ask at least two questions, one about each penny; each additional 1-bit question adds another bit. Or suppose someone had a four-faced top with the faces numbered 1, 2, 3, and 4. Instead of spinning it, they tossed it against a backboard, as one does with dice in a game of craps. One could ask whether the face that came up had an even number; if it did, one would then ask whether it was the 2. Again, there would be 2 bits. Horizontal complexity, then, is the sum of bits required by just such tasks as these.
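The penny-and-top examples amount to taking a base-2 logarithm of the number of equally likely outcomes, which can be sketched directly:

```python
import math

def horizontal_bits(outcomes):
    """Yes-no questions needed to identify one of `outcomes` equally
    likely results."""
    return math.log2(outcomes)

coin = horizontal_bits(2)                               # one penny: 1 bit
top = horizontal_bits(4)                                # four-faced top: 2 bits
two_pennies = horizontal_bits(2) + horizontal_bits(2)   # bits add up: 2 bits
```

Bits for independent questions simply add, which is why two pennies and the four-faced top both cost 2 bits.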
Vertical complexity
Hierarchical complexity refers to the number of recursions that the coordinating actions must perform on a set of primary elements. Actions at a higher order of hierarchical complexity: (a) are defined in terms of actions at the next lower order of hierarchical complexity; (b) organize and transform the lower-order actions (see Figure 2); (c) produce organizations of lower-order actions that are new and not arbitrary, and cannot be accomplished by those lower-order actions alone. Once these conditions have been met, we say the higher-order action coordinates the actions of the next lower order. To illustrate how lower actions get organized into more hierarchically complex actions, let us turn to a simple example. Completing the entire operation 3 x (4 + 1) constitutes a task requiring the distributive act. That act non-arbitrarily orders adding and multiplying to coordinate them. The distributive act is therefore one order more hierarchically complex than the acts of adding and multiplying alone; it indicates the singular proper sequence of the simpler actions. Although simply adding results in the same answer, people who can do both display a greater freedom of mental functioning. Thus, the order of complexity of the task is determined through analyzing the demands of each task by breaking it down into its constituent parts. The hierarchical complexity of a task refers to the number of concatenation operations it contains, that is, the number of recursions that the coordinating actions must perform. An order-three task has three concatenation operations. A
task of order three operates on a task of order two, and a task of order two operates on a task of order one (a simple task).
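One way to make the counting of recursions concrete is to encode a task as a nested tuple and score it recursively (this encoding is a hypothetical illustration, not part of the model's literature):

```python
def order(task):
    """Order of hierarchical complexity of a task tree: primary elements
    are order 0; a coordinating action is one order above the highest
    action it organizes."""
    if not isinstance(task, tuple):   # a bare number is a primary element
        return 0
    _op, *parts = task
    return 1 + max(order(p) for p in parts)

adding = ("add", 4, 1)                      # coordinates primary elements
multiplying = ("mul", 3, 4)                 # same order as adding
distributing = ("mul", 3, ("add", 4, 1))    # coordinates an addition
```

Adding and multiplying alone are both order 1; the distributive act 3 x (4 + 1), which coordinates them, scores one order higher, matching the example in the text.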
Stages of development
The notion of stages is fundamental in the description of human, organismic, and machine evolution.[7] Previously it has been defined in some ad hoc ways. Here, it is described formally in terms of the Model of Hierarchical Complexity (MHC).
1 - sensory or motor: Discriminate in a rote fashion; stimulus generalization; move limbs, lips, toes, eyes, elbows, head; view objects or move.
2 - circular sensory-motor: Form open-ended proper classes; respond to stimuli in a class successfully. Examples: open-ended proper classes, phonemes, archiphonemes.
3 - sensory-motor: Form concepts. Examples: morphemes, concepts.
4 - nominal: Find relations among concepts; use names. Examples: single words: ejaculatives and exclamations, verbs, nouns, number names, letter names.
5 - sentential: Imitate and acquire sequences; follow short sequential acts. Examples: various forms of pronouns: subject (I), object (me), possessive adjective (my), possessive pronoun (mine), and reflexive (myself) for various persons (I, you, he, she, it, we, y'all, they).
6 - preoperational: Make simple deductions; follow lists of sequential acts; tell stories; count events and objects; connect the dots; combine numbers and simple propositions. Examples: connectives: as, when, then, why, before; products of simple operations.
7 - primary: Simple logical deduction and empirical rules involving time sequence; simple arithmetic; adds, subtracts, multiplies, divides, counts, proves, does series of tasks on own. Examples: times, places, counts of acts, actors, arithmetic outcomes, sequences from calculation.
8 - concrete: Does long division and short division; follows complex social rules and ignores simple social rules; takes and coordinates the perspective of other and self. Examples: interrelations, social events, what happened among others, reasonable deals, history, geography.
9 - abstract: Discriminate variables such as stereotypes; logical quantification; form variables out of finite classes; make and quantify propositions. Examples: variable time, place, act, actor, state, type; quantifiers (all, none, some); categorical assertions (e.g., "We all die").
10 - formal: Argue using empirical or logical evidence; logic is linear, one-dimensional; solve problems with one unknown using algebra, logic and empiricism. Examples: relationships (for example, causality) are formed out of variables; words: linear, logical, one-dimensional, if...then, thus, therefore, because; correct scientific solutions.
11 - systematic: Construct systems out of relations; events and concepts situated in a multivariate context. Examples: systems are formed out of relations; systems: legal, societal, corporate, economic, national.
12 - metasystematic: Create metasystems out of systems; compare systems and perspectives; name properties of systems: e.g. homomorphic, isomorphic, complete, consistent (such as tested by consistency proofs), commensurable. Examples: metasystems and supersystems are formed out of systems of relationships.
13 - paradigmatic: Synthesize metasystems; fit metasystems together to form new paradigms. Examples: paradigms are formed out of multiple metasystems.
14 - cross-paradigmatic: Fit paradigms together to form new fields; form new fields by crossing paradigms. Examples: new fields are formed out of multiple paradigms.
1. All tasks have an order of hierarchical complexity.
2. There is only one sequence of orders of hierarchical complexity.
3. Hence, there is structure of the whole for ideal tasks and actions.
4. There are gaps between the orders of hierarchical complexity.
5. Stage is defined as the most hierarchically complex task solved.
6. There are gaps in Rasch-scaled stage of performance.
7. Performance stage is different from task area to task area.
8. There is no structure of the whole (horizontal décalage) for performance.
9. It is not inconsistency in thinking within a developmental stage; décalage is the normal modal state of affairs.
Perception researchers; history of science historians; educators; therapists; anthropologists.
The following list shows the large range of domains to which the Model has been applied. In one representative study, Commons, Goodheart, and Dawson (1997) found, using Rasch (1980) analysis, that the hierarchical complexity of a given task predicts the stage of performance on it, with a correlation of r = .92. Correlations of similar magnitude have been found in a number of studies.
List of examples
List of examples of tasks studied using the Model of Hierarchical Complexity or Fischer's Skill Theory (1980):
Algebra (Commons, in preparation)
Animal stages (Commons & Miller, 2004)
Atheism (Commons-Miller, 2005)
Attachment and Loss (Commons, 1991; Miller & Lee, 2000)
Balance beam and pendulum (Commons, Goodheart, & Bresette, 1995; Commons, Pekker, et al., 2007)
Contingencies of reinforcement (Commons, in preparation)
Counselor stages (Lovell, 2004)
Empathy of Hominids (Commons & Wolfsont, 2002)
Epistemology (Kitchener & King, 1990; Kitchener & Fischer, 1990)
Evaluative reasoning (Dawson, 2000)
Four Story problem (Commons, Richards & Kuhn, 1982; Kallio & Helkama, 1991)
Good Education (Dawson-Tunik, 2004)
Good Interpersonal (Armon, 1989)
Good Work (Armon, 1993)
Honesty and Kindness (Lamborn, Fischer & Pipp, 1994)
Informed consent (Commons & Rodriguez, 1990, 1993; Commons, Goodheart, Rodriguez, & Gutheil, 2006; Commons, Rodriguez, Adams, Goodheart, Gutheil, & Cyr, 2007)
Language stages (Commons, et al., 2007)
Leadership before and after crises (Oliver, 2004)
Loevinger's Sentence Completion task (Cook-Greuter, 1990)
Moral Judgment (Armon & Dawson, 1997; Dawson, 2000)
Music (Beethoven) (Funk, 1989)
Orienteering (Commons, in preparation)
Physics tasks (Inhelder & Piaget, 1958)
Political development (Sonnert & Commons, 1994)
Relationships (Armon, 1984a, 1984b)
Report patient's prior crimes (Commons, Lee, Gutheil, et al., 1995)
Social perspective-taking (Commons & Rodriguez, 1990; 1993)
Spirituality (Miller & Cook-Greuter, 2000)
Tool Making of Hominids (Commons & Miller 2002)
Views of the "good life" (Armon, 1984c; Danaher, 1993; Dawson, 2000; Lam, 1995)
Workplace culture (Commons, Krause, Fayer, & Meaney, 1993)
Workplace organization (Bowman, 1996a, 1996b)
Writing (Commons & DeVos, 1985)
External links
Dare Association, Inc. (http://dareassociation.org)
Behavioral Development Bulletin (http://www.behavioraldevelopmentbulletin.com)
Society for Research in Adult Development (http://adultdevelopment.org)
Network theory
For the sociological theory, see Social network.
Network theory is an area of computer science and network science and part of graph theory. It has applications in many disciplines including particle physics, computer science, biology, economics, operations research, and sociology. Network theory concerns itself with the study of graphs as a representation of either symmetric relations or, more generally, of asymmetric relations between discrete objects. Applications of network theory include logistical networks, the World Wide Web, gene regulatory networks, metabolic networks, social networks, and epistemological networks, among others. See the list of network theory topics for more examples.
Network optimization
Network problems that involve finding an optimal way of doing something are studied under the name of combinatorial optimization. Examples include network flow, the shortest path problem, the transport problem, the transshipment problem, the location problem, the matching problem, the assignment problem, the packing problem, the routing problem, Critical Path Analysis, and PERT (Program Evaluation and Review Technique).
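As a concrete instance of one of these problems, the shortest path problem can be solved with Dijkstra's algorithm. The sketch below uses an invented toy road network; it is illustrative, not tied to any example in the text:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted directed graph.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {"A": [("B", 4), ("C", 2)], "C": [("B", 1)], "B": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 2}
```

Note that the indirect route A → C → B (cost 3) beats the direct edge A → B (cost 4), which is exactly the kind of non-obvious optimum combinatorial optimization is after.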
Network analysis
Social network analysis
Social network analysis maps relationships between individuals in social networks.[1] Such individuals are often persons, but may be groups (including cliques and cohesive blocks [2]), organizations, nation states, web sites, or citations between scholarly publications (scientometrics). Network analysis, and its close cousin traffic analysis, has significant use in intelligence. By monitoring the communication patterns between network nodes, the network's structure can be established. This can be used for uncovering insurgent networks of both hierarchical and leaderless nature.
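Recovering structure from observed communications, as in traffic analysis, amounts to counting who talks to whom. A minimal sketch, with an invented message log:

```python
from collections import Counter

# Hedged sketch: infer a network's structure from observed communication
# events. The log of (sender, receiver) pairs below is invented.
log = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b"), ("d", "a")]

# Treat communication as undirected: each frozenset is one edge,
# counted as many times as the pair communicated.
edges = Counter(frozenset(pair) for pair in log)

degree = Counter()
for pair in edges:
    for node in pair:
        degree[node] += 1

print(degree.most_common(1))  # [('a', 3)] -- the best-connected node
```

Even this crude degree count already singles out the node most likely to be central to the organization being monitored.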
Link analysis
Link analysis is a subset of network analysis that explores associations between objects. An example may be examining the addresses of suspects and victims, the telephone numbers they have dialed, the financial transactions they have taken part in during a given timeframe, and the familial relationships between these subjects, as part of a police investigation. Link analysis here provides the crucial relationships and associations between very many objects of different types that are not apparent from isolated pieces of information. Computer-assisted or fully automatic
computer-based link analysis is increasingly employed by banks and insurance agencies in fraud detection, by telecommunication operators in telecommunication network analysis, by the medical sector in epidemiology and pharmacology, in law enforcement investigations, by search engines for relevance rating (and conversely by spammers for spamdexing and by business owners for search engine optimization), and everywhere else where relationships between many objects have to be analyzed.
Web link analysis
Several Web search ranking algorithms use link-based centrality metrics, including (in order of appearance) Marchiori's Hyper Search, Google's PageRank, Kleinberg's HITS algorithm, and the TrustRank algorithm. Link analysis is also conducted in information science and communication science in order to understand and extract information from the structure of collections of web pages. For example, the analysis might be of the interlinking between politicians' web sites or blogs.
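The core move in link analysis is connecting otherwise unrelated records through shared attributes. A minimal sketch, with invented persons and phone numbers:

```python
from collections import defaultdict

# Illustrative link-analysis sketch: records (all invented) map people to
# the phone numbers associated with them.
records = {
    "suspect_1": {"555-0100", "555-0101"},
    "suspect_2": {"555-0101", "555-0102"},
    "victim_1":  {"555-0103"},
}

# Invert the mapping: which people share each number?
by_number = defaultdict(set)
for person, numbers in records.items():
    for number in numbers:
        by_number[number].add(person)

# Any attribute shared by more than one person links those people.
links = {frozenset(people) for people in by_number.values() if len(people) > 1}
for pair in links:
    print(sorted(pair))  # ['suspect_1', 'suspect_2']
```

The inverted index is the key design choice: it surfaces the shared number 555-0101 in one pass, without comparing every record against every other.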
Centrality measures
Information about the relative importance of nodes and edges in a graph can be obtained through centrality measures, widely used in disciplines like sociology. For example, eigenvector centrality uses the principal eigenvector of the adjacency matrix to identify nodes that a process moving through the network tends to visit frequently.
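Eigenvector centrality can be computed by power iteration. The sketch below iterates on A + I rather than A alone, a standard shift that prevents oscillation on bipartite graphs while leaving the eigenvectors unchanged; the three-node graph is invented:

```python
# Illustrative sketch: eigenvector centrality via power iteration.
def eigenvector_centrality(adj, iterations=100):
    n = len(adj)
    x = [1.0] * n
    for _ in range(iterations):
        # Multiply by (A + I): the x[i] term is the identity shift.
        y = [x[i] + sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = max(y)
        x = [v / m for v in y]  # normalize so the largest score is 1
    return x

# Path graph 0-1-2: the middle node is the most central.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
print([round(v, 3) for v in eigenvector_centrality(adj)])  # [0.707, 1.0, 0.707]
```

The result matches the analytic principal eigenvector of the path graph, (1, √2, 1) up to normalization.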
See also
Complex network Network science Network topology Small-world networks Social circles Scale-free networks Sequential dynamical systems
Programming Complexity
Programming complexity, often also referred to as software complexity, is a term that encompasses numerous properties of a piece of software, all of which affect internal interactions. According to several commentators, including Johnnie Moore [1], there is a distinction between the terms complex and complicated: complicated implies being difficult to understand, but with time and effort ultimately knowable; complex, on the other hand, describes the interactions between a number of entities. As the number of entities increases, the number of possible interactions between them grows combinatorially, and a point is reached where it becomes impossible to know and understand all of them. Similarly, higher levels of complexity in software increase the risk of unintentionally interfering with interactions, and so increase the chance of introducing defects when making changes. In more extreme cases, they can make modifying the software virtually impossible. The idea of linking software complexity to the maintainability of the software has been explored extensively by Professor Manny Lehman, who developed his Laws of Software Evolution from his research. He and his co-author Les Belady explored numerous possible software metrics in their oft-cited book [2] that could be used to measure the state of software, eventually reaching the conclusion that the only practical solution would be to use deterministic complexity models. Many measures of software complexity have been proposed; although many of these give a good representation of complexity, they do not lend themselves to easy measurement. Some of the more commonly used metrics are:
McCabe's cyclomatic complexity metric
Halstead's software science metrics
Henry and Kafura's Software Structure Metrics Based on Information Flow (1981) [3], which measures complexity as a function of fan-in and fan-out.
They define fan-in of a procedure as the number of local flows into that procedure plus the number of data structures from which that procedure retrieves information. Fan-out is defined as the number of local flows out of that procedure plus the number of data structures that the procedure updates. Local flows relate to data passed to and from procedures that call, or are called by, the procedure in question. Henry and Kafura's complexity value is the procedure length multiplied by the square of fan-in times fan-out.
A Metrics Suite for Object Oriented Design [4] was introduced by Chidamber and Kemerer in 1994, focusing, as the title suggests, on metrics specifically for object-oriented code. They introduce six OO complexity metrics: weighted methods per class, coupling between object classes, response for a class, number of children, depth of inheritance tree, and lack of cohesion of methods.
There are several other metrics that can be used to measure programming complexity:
data complexity (Chapin metric)
data flow complexity (Elshof metric)
data access complexity (Card metric)
decisional complexity (McClure metric)
branching complexity (Sneed metric)
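With fan-in and fan-out defined as above, the Henry-Kafura value is straightforward to compute. The procedure sizes in this sketch are invented for illustration:

```python
# Henry-Kafura information-flow complexity for a single procedure:
# complexity = length * (fan_in * fan_out) ** 2
def henry_kafura(length, fan_in, fan_out):
    return length * (fan_in * fan_out) ** 2

# A hypothetical 20-line procedure with fan-in 3 and fan-out 2:
print(henry_kafura(20, 3, 2))  # 720
```

Because the flow term is squared, a procedure that doubles its coupling quadruples its score even at constant length, which is why the metric flags heavily-connected procedures rather than merely long ones.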
See also
Software crisis (and subsequent programming paradigm solutions) Software metrics - quantitative measure of some property of a program.
References
[1] http://www.astroprojects.com/morespace/johnnie/
[2] M. M. Lehman, L. A. Belady; Program Evolution: Processes of Software Change, 1985
[3] Henry, S.; Kafura, D. IEEE Transactions on Software Engineering, Volume SE-7, Issue 5, Sept. 1981, pp. 510-518
[4] Chidamber, S.R.; Kemerer, C.F. IEEE Transactions on Software Engineering, Volume 20, Issue 6, Jun 1994, pp. 476-493
Historical background
An argument can be made that western sociology (including its various smaller, national sociologies) has been and continues to be a profession of complexity.[8] [9] The primary basis for this challenge is western society itself. To study society is, by definition, to study complexity. Starting with the industrial and industrious revolutions of the middle 1700s to early 1900s, western society transitioned (teleology not implied) into a type of complexity [7] that, in many ways, did not previously exist.
[Figure: A perspective on, and partial map of, Complexity Science. The web version lists links to academics and research.]
Furthermore, as industrialization evolved into its later stages (i.e., Taylorism, Fordism, post-Fordism, etc.), the complexity of western society evolved as well (see Arnold J. Toynbee). The latest developments in this complexity are post-industrialism and, most recently, across societies throughout the world, globalization.
Classical era
Of the numerous scholars writing during the middle 1800s to early 1900s, perhaps the best known systems thinkers were Auguste Comte, Herbert Spencer, Karl Marx, Max Weber, Émile Durkheim and Vilfredo Pareto. While not all of these scholars were sociologists, their systems thinking had a tremendous impact on organized sociology. Three characteristics identify these scholars as systems thinkers: (1) they conceptualized their work as a direct response to the increasing complexity of western society; (2) they conceptualized the changes taking place in western society in systems terms; that is, they treated western society (and its various substantive issues) as a system; and (3) their failures and successes show scholars today how best to think about social complexity in systems terms. Their failures include treating social systems in strictly biological terms (homeostasis, etc.). Their successes include Pareto's 80/20 rule and Durkheim's notion of system differentiation.
Computational Sociology
The second area is computational sociology, involving such scholars as Nigel Gilbert, Klaus Troitzsch, Scott Page, Joshua Epstein and Jürgen Klüver (see Map 2 for information on these scholars). Researchers in this field focus on two subclusters within computational sociology: social simulation and data-mining. Social simulation uses the computer to create an artificial laboratory for the study of complex social systems, and data-mining uses machine intelligence to search for non-trivial patterns of relations in large, complex,
real-world databases. A variant of computational sociology is socionics. [19] [20]
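The "artificial laboratory" style of social simulation can be sketched in a few lines. The model below is a minimal one-dimensional Schelling-style neighborhood (agents relocate when too few neighbors share their type); all parameters are illustrative inventions, not drawn from the studies cited in the text:

```python
import random

# Minimal agent-based social simulation: a ring of cells holding agents of
# type "A" or "B" (or None for a vacancy). An unhappy agent (fewer than
# `threshold` of its neighbors share its type) moves to a random vacancy.
def step(grid, threshold=0.5):
    n = len(grid)
    moved = 0
    for i in range(n):
        agent = grid[i]
        if agent is None:
            continue
        neighbors = [grid[(i + d) % n] for d in (-1, 1)]
        neighbors = [a for a in neighbors if a is not None]
        if neighbors and sum(a == agent for a in neighbors) / len(neighbors) < threshold:
            empties = [j for j, a in enumerate(grid) if a is None]
            if empties:
                j = random.choice(empties)
                grid[j], grid[i] = agent, None
                moved += 1
    return moved

random.seed(1)
grid = [random.choice(["A", "B", None]) for _ in range(30)]
counts = (grid.count("A"), grid.count("B"))
for _ in range(50):           # run until no agent wants to move
    if step(grid) == 0:
        break
print(counts)
```

Even this toy exhibits the hallmark of such simulations: local, individually reasonable moves produce global clustering that no agent intended.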
Sociocybernetics
The fourth major area of research is sociocybernetics. The main goal of this field is to integrate sociology with second-order cybernetics and the work of Niklas Luhmann, along with the latest advances in complexity science. In terms of scholarly work, the focus of sociocybernetics has been primarily conceptual and only slightly methodological or empirical.[24]
See also
Complex adaptive system Complexity Complexity economics Computational sociology Generative sciences Multi-agent system Social network analysis Sociocybernetics Systems theory
Systems theory
Systems theory is a transdisciplinary approach that abstracts and considers a system as a set of interdependent and interacting parts. The main goal is to study general principles of system functioning that can be applied to all types of systems in all fields of research. As a technical and general academic area of study it predominantly refers to the science of systems that resulted from Bertalanffy's General System Theory (GST), among others, in initiating what became a project of systems research and practice. Systems theoretical approaches were later appropriated in other fields, such as in the structural functionalist sociology of Talcott Parsons and Niklas Luhmann.
Overview
Contemporary ideas from systems theory have grown with diversified areas, exemplified by the work of Béla H. Bánáthy, ecological systems with Howard T. Odum, Eugene Odum and Fritjof Capra, organizational theory and management with individuals such as Peter Senge, interdisciplinary study with areas like Human Resource Development from the work of Richard A. Swanson, and insights from educators such as Debora Hammond and Alfonso Montuori. As a transdisciplinary, interdisciplinary and multiperspectival domain, the area brings together principles and concepts from ontology, philosophy of science, physics, computer science, biology, and engineering, as well as geography, sociology, political science, psychotherapy (within family systems therapy) and economics, among others.
[Image caption: Margaret Mead was an influential figure in systems theory.]
Systems theory thus serves as a bridge for interdisciplinary dialogue between autonomous areas of study as well as within the area of systems science itself. In this respect, with the possibility of misinterpretations, von Bertalanffy [1] believed a general theory of systems "should be an important regulative device in science," to guard against superficial analogies that "are useless in science and harmful in their practical consequences." Others remain closer to the direct systems concepts developed by the original theorists. For example, Ilya Prigogine, of the Center for Complex Quantum Systems at the University of Texas, Austin, has studied emergent properties, suggesting that they offer analogues for living systems. The theories of autopoiesis of Francisco Varela and Humberto Maturana are a further development in this field. Important names in contemporary systems science include Russell Ackoff, Béla H. Bánáthy, Anthony Stafford Beer, Peter Checkland, Robert L. Flood, Fritjof Capra, Michael C. Jackson, Edgar Morin and Werner Ulrich, among others.
With the modern foundations for a general theory of systems following the World Wars, Ervin Laszlo, in the preface to Bertalanffy's book Perspectives on General System Theory, maintains that the translation of "general system theory" from German into English has "wrought a certain amount of havoc" [2]. The preface explains that the original concept of a general system theory was "Allgemeine Systemtheorie (or Lehre)", pointing out the fact that "Theorie" (or "Lehre"), just as "Wissenschaft" (translated as scholarship), "has a much broader meaning in German than the closest English words 'theory' and 'science'" [2]. With these ideas referring to an organized body of knowledge and "any systematically presented set of concepts, whether they are empirical, axiomatic, or philosophical", "Lehre"
is associated with theory and science in the etymology of general systems, but also does not translate from the German very well; "teaching" is the "closest equivalent", but "sounds dogmatic and off the mark" [2]. While many of the root meanings for the idea of a "general systems theory" might have been lost in the translation, and many were led to believe that the systems theorists had articulated nothing but a pseudoscience, systems theory became a nomenclature that early investigators used to describe the interdependence of relationships in organization by defining a new way of thinking about science and scientific paradigms. A system from this frame of reference is composed of regularly interacting or interrelating groups of activities. For example, in noting the influence in organizational psychology as the field evolved from "an individually oriented industrial psychology to a systems and developmentally oriented organizational psychology," it was recognized that organizations are complex social systems; reducing the parts from the whole reduces the overall effectiveness of organizations [3]. This differs from conventional models that center on individuals, structures, departments and units, separate in part from the whole, instead of recognizing the interdependence between groups of individuals, structures and processes that enable an organization to function. Laszlo [4] explains that the new systems view of organized complexity went "one step beyond the Newtonian view of organized simplicity", which reduced the parts from the whole, or understood the whole without relation to the parts. The relationship between organizations and their environments became recognized as the foremost source of complexity and interdependence. In most cases the whole has properties that cannot be known from analysis of the constituent elements in isolation. Béla H.
Bánáthy, who argued, along with the founders of the systems society, that "the benefit of humankind" is the purpose of science, has made significant and far-reaching contributions to the area of systems theory. For the Primer Group at ISSS, Bánáthy defines a perspective that iterates this view: The systems view is a world-view that is based on the discipline of SYSTEM INQUIRY. Central to systems inquiry is the concept of SYSTEM. In the most general sense, system means a configuration of parts connected and joined together by a web of relationships. The Primer Group defines system as a family of relationships among the members acting as a whole. Von Bertalanffy defined system as "elements in standing relationship." [5] Similar ideas are found in learning theories that developed from the same fundamental concepts, emphasizing that understanding results from knowing concepts both in part and as a whole. In fact, Bertalanffy's organismic psychology paralleled the learning theory of Jean Piaget.[6] Interdisciplinary perspectives are critical in breaking away from industrial-age models and thinking, in which history is history and math is math, segregated from the arts, and music separate from the sciences and never the twain shall meet [7]. The influential contemporary work of Peter Senge [8] provides detailed discussion of the commonplace critique of educational systems grounded in conventional assumptions about learning, including the problems with fragmented knowledge and lack of holistic learning from the "machine-age thinking" that became a "model of school separated from daily life." It is in this way that systems theorists attempted to provide alternatives and an evolved ideation from orthodox theories, with individuals such as Max Weber and Émile Durkheim in sociology and Frederick Winslow Taylor in scientific management, which were grounded in classical assumptions [9].
The theorists sought holistic methods by developing systems concepts that could be integrated with different areas. The contradiction of reductionism in conventional theory (which has as its subject a single part) is simply an example of changing assumptions. The emphasis with systems theory shifts from parts to the organization of parts, recognizing that interactions of the parts are not "static" and constant but "dynamic" processes. Conventional closed systems were questioned with the development of open systems perspectives. The shift was from absolute and universal authoritative principles and knowledge to relative and general conceptual and perceptual knowledge [10], still in the tradition of theorists that sought to provide means of organizing human life. That is, the history of ideas that preceded was rethought, not lost. Mechanistic thinking was particularly critiqued, especially the industrial-age mechanistic metaphor of the mind from interpretations of Newtonian mechanics by Enlightenment philosophers and
later psychologists that laid the foundations of modern organizational theory and management by the late 19th century [11]. Classical science had not been overthrown, but questions arose over core assumptions that historically influenced organized systems, within both social and technical sciences.
History
TIMELINE
Precursors
Karl Marx (1818-1883), Herbert Spencer (1820-1903), Vilfredo Pareto (1848-1923), Émile Durkheim (1858-1917), Alexander Bogdanov (1873-1928), Nicolai Hartmann (1882-1950), Stafford Beer (1926-2002), Robert Maynard Hutchins (1929-1951), among others
Pioneers
1946-1953 Macy conferences
1948 Norbert Wiener publishes Cybernetics or Control and Communication in the Animal and the Machine
1954 Ludwig von Bertalanffy, Anatol Rapoport, Ralph W. Gerard and Kenneth Boulding establish the Society for the Advancement of General Systems Theory, renamed in 1956 the Society for General Systems Research
1955 W. Ross Ashby publishes Introduction to Cybernetics
1968 Ludwig von Bertalanffy publishes General System Theory: Foundations, Development, Applications
Developments
1970-1980s Second-order cybernetics developed by Heinz von Foerster, Gregory Bateson, Humberto Maturana and others
1971-1973 Cybersyn, a rudimentary internet and cybernetic system for democratic economic planning, developed in Chile under the Allende government by Stafford Beer
1970s Catastrophe theory (René Thom, E. C. Zeeman); dynamical systems in mathematics
1980s Chaos theory: David Ruelle, Edward Lorenz, Mitchell Feigenbaum, Steve Smale, James A. Yorke
1986 Context theory: Anthony Wilden
1988 International Society for Systems Science
1990 Complex adaptive systems (CAS): John H. Holland, Murray Gell-Mann, W. Brian Arthur
Whether considering the first systems of written communication, from Sumerian cuneiform to Mayan numerals, or the feats of engineering of the Egyptian pyramids, systems thinking in essence dates back to antiquity. Differentiated from Western rationalist traditions of philosophy, C. West Churchman often identified with the I Ching as a systems approach sharing a frame of reference similar to pre-Socratic philosophy and Heraclitus [12]. Von Bertalanffy traced systems concepts to the philosophy of G. W. von Leibniz and Nicholas of Cusa's coincidentia oppositorum.
While modern systems are considerably more complicated, today's systems are embedded in history. Systems theory as an area of study specifically developed following the World Wars from the work of Ludwig von Bertalanffy, Anatol Rapoport, Kenneth E. Boulding, William Ross Ashby, Margaret Mead, Gregory Bateson, C. West Churchman and others in the 1950s, specifically catalyzed by the cooperation in the Society for General Systems Research. Cognizant of advances in science that questioned classical assumptions in the organizational sciences, Bertalanffy's idea to develop a theory of systems began as early as the interwar period, publishing "An Outline for General Systems Theory" in the British Journal for the Philosophy of Science, Vol 1, No. 2, by 1950. Where assumptions in Western science from Greek thought with Plato and Aristotle to Newton's Principia have historically influenced all areas from the hard to social sciences (see David Easton's seminal development of the "political system" as an analytical construct), the original theorists explored the implications of twentieth century advances in terms of systems. Subjects like complexity, self-organization, connectionism and adaptive systems had already been studied in the 1940s and 1950s. In fields like cybernetics, researchers like Norbert Wiener, William Ross Ashby, John von
Neumann and Heinz von Foerster examined complex systems using mathematics. John von Neumann discovered cellular automata and self-reproducing systems with only pencil and paper. Aleksandr Lyapunov and Jules Henri Poincaré worked on the foundations of chaos theory without any computer at all. At the same time Howard T. Odum, the radiation ecologist, recognised that the study of general systems required a language that could depict energetics and kinetics at any system scale. Odum developed a general systems, or universal, language based on the circuit language of electronics to fulfill this role, known as the Energy Systems Language. Between 1929 and 1951, Robert Maynard Hutchins at the University of Chicago had undertaken efforts to encourage innovation and interdisciplinary research in the social sciences, aided by the Ford Foundation, with the interdisciplinary Division of the Social Sciences established in 1931 [13]. Numerous scholars had been actively engaged in these ideas before (the Tectology of Alexander Bogdanov, published in 1912-1917, is a remarkable example), but in 1937 von Bertalanffy presented the general theory of systems at a conference at the University of Chicago. The systems view was based on several fundamental ideas. First, all phenomena can be viewed as a web of relationships among elements, or a system. Second, all systems, whether electrical, biological, or social, have common patterns, behaviors, and properties that can be understood and used to develop greater insight into the behavior of complex phenomena and to move closer toward a unity of science. System philosophy, methodology and application are complementary to this science [2]. By 1956 the Society for General Systems Research was established, renamed the International Society for Systems Science in 1988. The Cold War affected the research project for systems theory in ways that sorely disappointed many of the seminal theorists.
Some began to recognize that theories defined in association with systems theory had deviated from the initial General Systems Theory (GST) view [14]. The economist Kenneth Boulding, an early researcher in systems theory, had concerns over the manipulation of systems concepts. Boulding concluded from the effects of the Cold War that abuses of power always prove consequential and that systems theory might address such issues [15]. Since the end of the Cold War, there has been a renewed interest in systems theory with efforts to strengthen an ethical view.
Ludwig von Bertalanffy outlines systems inquiry into three major domains: philosophy, science, and technology. In his work with the Primer Group, Béla H. Bánáthy generalized the domains into four integratable domains of systemic inquiry:
Domain | Description
Philosophy | the ontology, epistemology, and axiology of systems
Theory | a set of interrelated concepts and principles applying to all systems
Methodology | the set of models, strategies, methods, and tools that instrumentalize systems theory and philosophy
Application | the application and interaction of the domains
These operate in a recursive relationship, he explained. Integrating Philosophy and Theory as Knowledge, and Method and Application as action, Systems Inquiry then is knowledgeable action.[20]
Cybernetics
The term cybernetics derives from a Greek word meaning steersman, which is also the origin of English words such as "govern". Cybernetics is the study of feedback and derived concepts such as communication and control in living organisms, machines and organisations. Its focus is how anything (digital, mechanical or biological) processes information, reacts to information, and changes or can be changed to better accomplish the first two tasks. The terms "systems theory" and "cybernetics" have been widely used as synonyms. Some authors use the term cybernetic systems to denote a proper subset of the class of general systems, namely those systems that include feedback loops. However, Gordon Pask's differences of eternal interacting actor loops (that produce finite products) make general systems a proper subset of cybernetics. According to Jackson (2000), von Bertalanffy promoted an embryonic form of general system theory (GST) as early as the 1920s and 1930s, but it was not until the early 1950s that it became more widely known in scientific circles. Threads of cybernetics began in the late 1800s and led toward the publishing of seminal works (e.g., Wiener's Cybernetics in 1948 and von Bertalanffy's General Systems Theory in 1968). Cybernetics arose more from engineering fields and GST from biology. If anything, it appears that although the two probably mutually influenced each other, cybernetics had the greater influence. Von Bertalanffy (1969) specifically makes the point of distinguishing between the areas in noting the influence of cybernetics: "Systems theory is frequently identified with cybernetics and control theory. This again is incorrect.
Cybernetics as the theory of control mechanisms in technology and nature is founded on the concepts of information and feedback, but as part of a general theory of systems;" then reiterates: "the model is of wide application but should not be identified with 'systems theory' in general", and that "warning is necessary against its incautious expansion to fields for which its concepts are not made" (pp. 17-23). Jackson (2000) also claims von Bertalanffy was informed by Alexander Bogdanov's three-volume Tectology, which was published in Russia between 1912 and 1917 and translated into German in 1928. He also states it is clear to Gorelik (1975) that the "conceptual part" of general system theory (GST) had first been put in place by Bogdanov. A similar position is held by Mattessich (1978) and Capra (1996). Ludwig von Bertalanffy never even mentioned Bogdanov in his works, which Capra (1996) finds "surprising". Cybernetics, catastrophe theory, chaos theory and complexity theory have the common goal of explaining complex systems that consist of a large number of mutually interacting and interrelated parts in terms of those interactions. Cellular automata (CA), neural networks (NN), artificial intelligence (AI), and artificial life (ALife) are related fields, but they do not try to describe general (universal) complex (singular) systems. The best context in which to compare the different "C"-theories about complex systems is historical, emphasizing different tools and methodologies, from pure mathematics in the beginning to pure computer science now. Since the beginning of chaos theory, when Edward Lorenz accidentally discovered a strange attractor with his computer, computers have become an indispensable source of information. One could not imagine the study of complex systems without the use of computers today.
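The feedback concept at the core of cybernetics can be made concrete with a minimal sketch. The scenario, function name and numbers below are invented for illustration, not drawn from the sources above: a proportional controller repeatedly senses the gap between a set point and the current state and feeds back a correcting action.

```python
# Minimal negative-feedback loop in the cybernetic sense: sense the error,
# feed back a proportional correction, repeat. A hypothetical thermostat.

def simulate(setpoint=21.0, temp=15.0, gain=0.5, steps=20):
    """Return the temperature trajectory under proportional feedback."""
    history = []
    for _ in range(steps):
        error = setpoint - temp      # sense the deviation
        temp += gain * error         # act against the deviation (negative feedback)
        history.append(temp)
    return history

history = simulate()
# The error shrinks by the factor (1 - gain) each step, so the system
# settles at the set point instead of drifting.
```

With a gain between 0 and 2 this loop is stable; outside that range the same structure oscillates or diverges, which is exactly the kind of behaviour control theory studies.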
Organizational theory
The systems framework is also fundamental to organizational theory as organizations are complex dynamic goal-oriented processes. One of the early thinkers in the field was Alexander Bogdanov, who developed his Tectology, a theory widely considered a precursor of von Bertalanffy's GST, aiming to model and design human organizations (see Mattessich 1978, Capra 1996). Kurt Lewin was particularly influential in developing the systems perspective within organizational theory and coined the term "systems of ideology", from his frustration with behavioral psychologies that became an obstacle to sustainable work in psychology[21]. Lewin attended the Macy conferences and is commonly identified as the founder of the movement to study groups scientifically. Jay Forrester, with his work in dynamics and management, alongside numerous theorists including Edgar Schein who followed in their tradition since the Civil Rights Era, has also been influential. The systems approach to organizations relies heavily upon achieving negative entropy through openness and feedback. A systemic view on organizations is transdisciplinary and integrative. In other words, it transcends the perspectives of the individual disciplines, integrating them on the basis of a common "code", or more exactly, on the basis of the formal apparatus provided by systems theory. The systems approach gives primacy to the interrelationships, not to the elements of the system. It is from these dynamic interrelationships that new properties of the system emerge. In recent years, systems thinking has been developed to provide techniques for studying systems in holistic ways to supplement traditional reductionistic methods. In this more recent tradition, systems theory in organizational studies is considered by some as a humanistic extension of the natural sciences.
In sociology, members of Research Committee 51 of the International Sociological Association (which focuses on sociocybernetics) have sought to identify the sociocybernetic feedback loops which, it is argued, primarily control the operation of society. On the basis of research largely conducted in the area of education, Raven (1995) has, for example, argued that it is these sociocybernetic processes which consistently undermine well-intentioned public action and are currently heading our species, at an exponentially increasing rate, toward extinction (see sustainability). He suggests that an understanding of these systems processes will allow us to generate the kind of (non "common-sense") targeted interventions that are required for things to be otherwise, i.e. to halt the destruction of the planet.
System dynamics
System dynamics was founded in the late 1950s by Jay W. Forrester of the MIT Sloan School of Management with the establishment of the MIT System Dynamics Group. At that time, he began applying what he had learned about systems during his work in electrical engineering to everyday kinds of systems. Determining the exact date of the founding of the field of system dynamics is difficult and involves a certain degree of arbitrariness. Jay W. Forrester joined the faculty of the Sloan School at MIT in 1956, where he then developed what is now system dynamics. His first published article, "Industrial Dynamics", appeared in the Harvard Business Review in 1958. The members of the System Dynamics Society have chosen 1957 to mark the occasion, as it is the year in which the work leading to that article, which described the dynamics of a manufacturing supply chain, was done. As an aspect of systems theory, system dynamics is a method for understanding the dynamic behavior of complex systems. The basis of the method is the recognition that the structure of any system (the many circular, interlocking, sometimes time-delayed relationships among its components) is often just as important in determining its behavior as the individual components themselves. Examples are chaos theory and social dynamics. It is also claimed that, because there are often properties-of-the-whole which cannot be found among the properties-of-the-elements, in some cases the behavior of the whole cannot be explained in terms of the behavior of the parts. An example is the letters of a word, which considered together can give rise to a meaning that does not exist in the letters by themselves. This further explains the integration of tools, like language, as a more parsimonious process in the human application of easiest-path adaptability through interconnected systems.
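The role of time-delayed feedback described above can be illustrated with a toy stock-and-flow model. The scenario, parameters and function name below are invented for the sketch: an inventory (the stock) is steered toward a target by orders (the flow) that arrive only after a delivery delay, and it is the delay, not any individual component, that produces the overshoot and oscillation.

```python
# Sketch of a system-dynamics stock-and-flow model: an inventory (stock)
# adjusted by orders (flow) that arrive after a delivery delay. The delayed
# feedback loop, rather than any single part, generates the oscillation.

def simulate_inventory(desired=100.0, initial=50.0, delay=3,
                       adjust_rate=0.2, steps=60):
    inventory = initial
    pipeline = [0.0] * delay                 # orders in transit (the time delay)
    history = []
    for _ in range(steps):
        inventory += pipeline.pop(0)         # oldest order finally arrives
        orders = adjust_rate * (desired - inventory)   # feedback decision rule
        pipeline.append(orders)
        history.append(inventory)
    return history

history = simulate_inventory()
# The inventory overshoots the target of 100 before damping back toward it.
```

Setting delay=0 in the same model removes the overshoot entirely, while raising adjust_rate enough makes the oscillation grow without bound: the behavior is a property of the loop structure.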
Systems engineering
Systems Engineering is an interdisciplinary approach and means for enabling the realization and deployment of successful systems. It can be viewed as the application of engineering techniques to the engineering of systems, as well as the application of a systems approach to engineering efforts.[22] Systems Engineering integrates other disciplines and specialty groups into a team effort, forming a structured development process that proceeds from concept to production to operation and disposal. Systems Engineering considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user needs.[23]
Systems psychology
Systems psychology is a branch of psychology that studies human behaviour and experience in complex systems. It is inspired by systems theory and systems thinking, and based on the theoretical work of Roger Barker, Gregory Bateson, Humberto Maturana and others. It is an approach in psychology in which groups and individuals are considered as systems in homeostasis. Systems psychology "includes the domain of engineering psychology, but in addition is more concerned with societal systems and with the study of motivational, affective, cognitive and group behavior than is engineering psychology."[24] In systems psychology, "characteristics of organizational behaviour, for example individual needs, rewards, expectations, and attributes of the people interacting with the systems, are considered in the process in order to create an effective system".[25] The homeostasis assumed by systems psychology is partly an idealization, since most living systems are in a continuous disequilibrium of various degrees.
See also
List of types of systems theory Clicks principle Cybernetics Emergence Glossary of systems theory Holism Meta-systems Open and Closed Systems in Social Science Social rule system theory Sociology and complexity science Systemantics System engineering Systems psychology Systemics Systems theory in archaeology Systems theory in anthropology Systems theory in political science Systems thinking World-systems theory Systematics - study of multi-term systems
Further reading
Ackoff, R. (1978). The Art of Problem Solving. New York: Wiley. Ash, M.G. (1992). "Cultural Contexts and Scientific Change in Psychology: Kurt Lewin in Iowa." American Psychologist, Vol. 47, No. 2, pp. 198-207. Bailey, K.D. (1994). Sociology and the New Systems Theory: Toward a Theoretical Synthesis. New York: State of New York Press. Banathy, B. (1996). Designing Social Systems in a Changing World. New York: Plenum. Banathy, B. (1991). Systems Design of Education. Englewood Cliffs: Educational Technology Publications. Banathy, B. (1992). A Systems View of Education. Englewood Cliffs: Educational Technology Publications. ISBN 0-87778-245-8. Banathy, B.H. (1997). "A Taste of Systemics" [26], The Primer Project. Retrieved May 14, 2007. Bateson, G. (1979). Mind and Nature: A Necessary Unity. New York: Ballantine. Bausch, Kenneth C. (2001). The Emerging Consensus in Social Systems Theory. New York: Kluwer Academic. ISBN 0-306-46539-6. Ludwig von Bertalanffy (1968). General System Theory: Foundations, Development, Applications. New York: George Braziller. Bertalanffy, L. von (1950). "An Outline of General System Theory." British Journal for the Philosophy of Science, Vol. 1, No. 2. Bertalanffy, L. von (1955). "An Essay on the Relativity of Categories." Philosophy of Science, Vol. 22, No. 4, pp. 243-263. Bertalanffy, Ludwig von (1968). Organismic Psychology and Systems Theory. Worcester: Clark University Press. Bertalanffy, Ludwig von (1974). Perspectives on General System Theory. Edited by Edgar Taschdjian. New York: George Braziller. Buckley, W. (1967). Sociology and Modern Systems Theory. Englewood Cliffs, NJ: Prentice-Hall. Mario Bunge (1979). Treatise on Basic Philosophy, Volume 4. Ontology II: A World of Systems. Dordrecht, Netherlands: D. Reidel. Capra, F. (1997). The Web of Life: A New Scientific Understanding of Living Systems. Anchor. ISBN 978-0385476768. Checkland, P. (1981). Systems Thinking, Systems Practice. New York: Wiley. Checkland, P. (1997). Systems Thinking, Systems Practice. 
Chichester: John Wiley & Sons, Ltd. Churchman, C.W. (1968). The Systems Approach. New York: Laurel. Churchman, C.W. (1971). The Design of Inquiring Systems. New York: Basic Books. Corning, P. (1983). The Synergism Hypothesis: A Theory of Progressive Evolution. New York: McGraw Hill.
Variety (cybernetics)
In cybernetics the term variety denotes the total number of distinct states of a system.
Overview
The term variety was introduced by W. Ross Ashby to denote the count of the total number of distinct states of a system. The condition for dynamic stability under perturbation (or input) was described by his Law of Requisite Variety. Ashby says[1]: "Thus, if the order of occurrence is ignored, the set[2] c, b, c, a, c, c, a, b, c, b, b, a, which contains twelve elements, contains only three distinct elements: a, b, c. Such a set will be said to have a variety of three elements." He adds: "The observer and his powers of discrimination may have to be specified if the variety is to be well defined."[3] Variety can be stated as an integer, as above, or as the logarithm to the base 2 of the number, i.e. in bits.[4]
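Ashby's example can be reproduced directly; the helper names below are ours, not Ashby's:

```python
import math

def variety(states):
    """Variety in Ashby's sense: the number of distinct states observed."""
    return len(set(states))

def variety_bits(states):
    """The same measure expressed as a base-2 logarithm, i.e. in bits."""
    return math.log2(variety(states))

observed = ["c", "b", "c", "a", "c", "c", "a", "b", "c", "b", "b", "a"]  # Ashby's set
variety(observed)        # 3 distinct elements: a, b, c
variety_bits(observed)   # log2(3), roughly 1.585 bits
```

Note that the function counts whatever distinctions its input encodes; as Ashby's caveat says, the observer's powers of discrimination decide what counts as a distinct state before any counting begins.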
Applications
In general, a description of the required inputs and outputs is established and then encoded with the minimum variety necessary. The mapping of input bits to output bits can then produce an estimate of the minimum hardware or software components necessary to produce the desired control behaviour, e.g. in a piece of computer software or computer hardware. The cybernetician Frank George discussed the variety of teams competing in games like football or rugby to produce goals or tries. A winning chess player might be said to have more variety than his losing opponent. Here a simple ordering is implied. The attenuation and amplification of variety were major themes in Stafford Beer's work in management[7] (the profession of control, as he called it). The number of staff needed to answer telephones, control crowds or tend to patients are clear examples. The application of natural and analogue signals to variety analysis requires an estimate of Ashby's "powers of discrimination" (see above quote). Given the butterfly effect of dynamical systems, care must be taken before quantitative measures can be produced. Small quantities, which might be overlooked, can have big effects. In his Designing Freedom, Stafford Beer discusses the patient in a hospital with a temperature denoting fever[9]. Action must be taken immediately to isolate the patient. Here no amount of recording of patients' average temperature would detect this small signal, which might have a big effect. Monitoring is required on individuals, thus amplifying variety (see Algedonic alerts in the Viable System Model or VSM). Beer's work in management cybernetics and VSM is largely based on variety engineering. Further applications involving Ashby's view of state counting include the analysis of digital bandwidth requirements, redundancy and software bloat, the bit representation of data types and indexes, analogue-to-digital conversion, the bounds on finite state machines and data compression. 
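Beer's "variety engineering" trades in exactly this kind of state counting. The following is a back-of-envelope sketch of Ashby's Law of Requisite Variety; the scenario, numbers and function name are invented for illustration: only variety in the regulator can absorb variety in the disturbances.

```python
import math

# Ashby's Law of Requisite Variety as state counting: a regulator with R
# distinct responses facing D distinct disturbances cannot, even in the best
# case, reduce the outcome variety below D / R (log2 D - log2 R bits survive).

def surviving_outcome_variety(disturbance_variety, regulator_variety):
    """Best-case number of distinct outcomes the regulator cannot absorb."""
    return math.ceil(disturbance_variety / regulator_variety)

# A hypothetical call centre: 1000 distinct caller situations met by only
# 25 scripted responses leaves at least 40 outcome classes unabsorbed.
surviving_outcome_variety(1000, 25)     # 40
# Matching variety (1000 responses) absorbs the disturbances fully.
surviving_outcome_variety(1000, 1000)   # 1
```

The staffing examples above are the same calculation in practice: the number of telephone operators is raised until the regulator's variety matches that of the incoming calls.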
See also State (physics), State (computer science), State pattern, State (controls) and Cellular automaton. Requisite variety can be seen in Chaitin's algorithmic information theory, where a longer, higher-variety program or finite state machine produces incompressible output with more variety or information content. Recently[10] James Lovelock suggested burning and burying carbonized agricultural waste to sequester carbon. A variety calculation requires estimates of global annual agricultural waste production, and of burial and pyrolysis efficiency, to estimate the mass of carbon thus sequestered from the atmosphere.
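The link to algorithmic information theory mentioned above can be illustrated with an off-the-shelf compressor: a low-variety, regular sequence compresses far below its raw length, while a pseudorandom, high-variety sequence of the same length does not. This is only an illustration of the idea, not Chaitin's formal construction.

```python
# Variety and compressibility: a deterministic, low-variety byte sequence
# compresses to a small fraction of its raw length, while a (pseudo)random,
# high-variety sequence of the same length stays near full size.

import random
import zlib

low_variety = b"ab" * 500                      # 1000 bytes, 2 symbols, regular
rng = random.Random(0)                         # fixed seed so runs are repeatable
high_variety = bytes(rng.randrange(256) for _ in range(1000))

compressed_low = zlib.compress(low_variety, 9)
compressed_high = zlib.compress(high_variety, 9)
# len(compressed_low) is a few dozen bytes; len(compressed_high) stays
# close to (or slightly above) the original 1000 bytes.
```

The same comparison underlies the bandwidth and software-bloat applications listed above: the compressed length is a practical estimate of how much variety a signal actually carries.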
See also
Cardinality Complexity Degrees of freedom Power set
Further reading
Ashby, W.R. 1956, Introduction to Cybernetics, Chapman & Hall, 1956, ISBN 0-416-68300-2 (also available in electronic form as a PDF from Principia Cybernetica [11]) Ashby, W.R. 1958, Requisite Variety and its implications for the control of complex systems [12], Cybernetica (Namur) Vol 1, No 2, 1958. Beer, S. 1974, Designing Freedom, CBC Learning Systems, Toronto, 1974; and John Wiley, London and New York, 1975. Translated into Spanish and Japanese. Beer, S. 1975, Platform for Change, John Wiley, London and New York. Reprinted with corrections 1978. Beer, S. 1979, The Heart of Enterprise, John Wiley, London and New York. Reprinted with corrections 1988. Beer, S. 1981, Brain of the Firm; Second Edition (much extended), John Wiley, London and New York. Reprinted 1986, 1988. Translated into Russian.
For most contemporary organizations (business, the military, education, government and others), VUCA is a practical code for awareness and readiness. Beyond the simple acronym is a body of knowledge that deals with learning models for VUCA preparedness, anticipation, evolution and intervention.[5] The capacity of individuals and organizations to deal with VUCA can be measured with a number of engagement themes:
1. Knowledge Management and Sense-Making
2. Planning and Readiness Considerations
3. Process Management and Resource Systems
4. Functional Responsiveness and Impact Models
5. Recovery Systems and Forward Practices
At some level, the capacity for VUCA management and leadership hinges on enterprise value systems, assumptions and natural goals. A "prepared and resolved" enterprise[2] is engaged with a strategic agenda that is aware of and empowered by VUCA forces. The capacity for VUCA leadership in strategic and operating terms depends on a well-developed mindset for gauging the technical, social, political, market and economic realities of the environment in which people work. Working with deeper smarts about the elements of VUCA may be a driver for survival and sustainability in an otherwise complicated world.[6]
References
[1] Stiehm, Judith Hicks and Nicholas W. Townsend (2002). The U.S. Army War College: Military Education in a Democracy (http://books.google.com/books?id=sEkp6GlK19cC&pg=PA34&dq=VUCA#PPA6,M1). Temple University Press. p. 6. ISBN 1566399602. [2] Wolf, Daniel (2007). Prepared and Resolved: The Strategic Agenda for Growth, Performance and Change. dsb Publishing. p. 115. ISBN 097913000X. [3] "Fingertip Knowledge" (http://media.centerdigitaled.com/Converge_Mag/pdfs/issues/CON_June07_lorz_PDF.pdf). Converge Magazine: 34. June 2007. Retrieved 2009-10-18. [4] Johansen, Bob (2007). Get There Early: Sensing the Future to Compete in the Present. San Francisco, CA: Berrett-Koehler Publishers, Inc. pp. 51-53. ISBN 9781576754405. [5] Satish, Usha and Siegfried Streufert (June 2006). "Strategic Management Simulations to Prepare for VUCAD Terrorism" (http://www.homelandsecurity.org/journal/Default.aspx?oid=145&ocat=1&AspxAutoDetectCookieSupport=1). Journal of Homeland Security. Retrieved 2008-10-29. [6] Johansen, Bob (2007). Get There Early: Sensing the Future to Compete in the Present. San Francisco, CA: Berrett-Koehler Publishers, Inc. p. 68. ISBN 9781576754405.
Holism
Distinguish from the suffix -holism, naming addictions. Holism (from holos, a Greek word meaning all, whole, entire, total) is the idea that all the properties of a given system (physical, biological, chemical, social, economic, mental, linguistic, etc.) cannot be determined or explained by its component parts alone. Instead, the system as a whole determines in an important way how the parts behave. The general principle of holism was concisely summarized by Aristotle in the Metaphysics: "The whole is more than the sum of its parts" (1045a10). Reductionism is sometimes seen as the opposite of holism. Reductionism in science says that a complex system can be explained by reduction to its fundamental parts. For example, the processes of biology are reducible to chemistry and the laws of chemistry are explained by physics.
History
The term holism was introduced by the South African statesman Jan Smuts in his 1926 book, Holism and Evolution.[1] Smuts defined holism as "The tendency in nature to form wholes that are greater than the sum of the parts through creative evolution."[2] The idea has ancient roots. Examples of holism can be found throughout human history and in the most diverse socio-cultural contexts, as has been confirmed by many ethnological studies. The French Protestant missionary Maurice Leenhardt coined the term cosmomorphism to indicate the state of perfect symbiosis with the surrounding environment which characterized the culture of the Melanesians of New Caledonia. For these people, an isolated individual is totally indeterminate, indistinct and featureless until he can find his position within the natural and social world in which he is inserted. The confines between the self and the world are annulled to the point that the material body itself is no guarantee of the sort of recognition of identity which is typical of our own culture.
In science
In the latter half of the 20th century, holism led to systems thinking and its derivatives, like the sciences of chaos and complexity. Systems in biology, psychology, or sociology are frequently so complex that their behavior is, or appears, "new" or "emergent": it cannot be deduced from the properties of the elements alone.[3] Holism has thus been used as a catchword. This contributed to the resistance encountered by the scientific interpretation of holism, which insists that there are ontological reasons that prevent reductive models in principle from providing efficient algorithms for prediction of system behavior in certain classes of systems. Further resistance to holism has come from the association of the concept with quantum mysticism. More recently, however, public understanding of such concepts has grown, and some scientists, such as cell biologist Bruce Lipton, have begun to pursue serious research into them[4]. Scientific holism holds that the behavior of a system cannot be perfectly predicted, no matter how much data is available. Natural systems can produce surprisingly unexpected behavior, and it is suspected that the behavior of such systems might be computationally irreducible, which means it would not be possible to even approximate the system state without a full simulation of all the events occurring in the system. Key properties of the higher-level behavior of certain classes of systems may be mediated by rare "surprises" in the behavior of their elements due to the principle of interconnectivity, thus evading predictions except by brute-force simulation. Stephen Wolfram has provided such examples with simple cellular automata, whose behavior is in most cases equally simple, but on rare occasions highly unpredictable.[5] Complexity theory (also called the "science of complexity") is a contemporary heir of systems thinking. 
It comprises both computational and holistic, relational approaches towards understanding complex adaptive systems and, especially in the latter, its methods can be seen as the polar opposite to reductive methods. General theories of complexity have been proposed, and numerous complexity institutes and departments have sprung up around the world. The Santa Fe Institute is arguably the most famous of them.
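Wolfram's point about simple rules with hard-to-predict behaviour is easy to reproduce. Below is a minimal elementary cellular automaton (rule 30, with grid sizes chosen arbitrarily for the sketch): the local update rule is trivial, yet the centre column it generates is famously irregular.

```python
# Elementary cellular automaton in the spirit of Wolfram's examples. Rule 30
# updates each cell from its 3-cell neighbourhood by looking up one bit of
# the rule number, yet the global pattern resists simple prediction.

def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, generations = 101, 50
row = [0] * width
row[width // 2] = 1                    # single live cell in the middle
centre_column = []
for _ in range(generations):
    centre_column.append(row[width // 2])
    row = step(row)
# centre_column begins 1, 1, 0, ... and never settles into an obvious cycle.
```

Holists would read this as a small instance of the claim above: the only general way to know the automaton's state at generation n is to run all n generations, i.e. brute-force simulation.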
In anthropology
There is an ongoing dispute as to whether anthropology is intrinsically holistic. Supporters of this concept consider anthropology holistic in two senses. First, it is concerned with all human beings across times and places, and with all dimensions of humanity (evolutionary, biophysical, sociopolitical, economic, cultural, psychological, etc.). Further, many academic programs following this approach take a "four-field" approach to anthropology that encompasses physical anthropology, archeology, linguistics, and cultural anthropology or social anthropology.[6] Some leading anthropologists disagree, and consider anthropological holism to be an artifact from 19th century social evolutionary thought that inappropriately imposes scientific positivism upon cultural anthropology.[7] The term "holism" is additionally used within social and cultural anthropology to refer to an analysis of a society as a whole which refuses to break society into component parts. One definition says: "as a methodological ideal, holism implies ... that one does not permit oneself to believe that our own established institutional boundaries (e.g. between politics, sexuality, religion, economics) necessarily may be found also in foreign societies."[8]
In ecology
Ecology is the leading and most important approach to holism, as it tries to include biological, chemical, physical and economic views in a given area. The complexity grows with the area, so it is necessary to reduce the scope of the view in other ways, for example to a specific duration of time. John Muir, the Scots-born early conservationist[9], wrote: "When we try to pick out anything by itself we find it hitched to everything else in the Universe." More information is to be found in the field of systems ecology, a cross-disciplinary field influenced by general systems theory. See also Holistic community.
In economics
With roots in Schumpeter, the evolutionary approach might be considered the holist theory in economics. Evolutionary economists share certain language with the biological evolutionary approach and take into account how the innovation system evolves over time. Knowledge and know-how, know-who, know-what and know-why are part of the whole business economics. Knowledge can also be tacit, as described by Michael Polanyi. These models are open, and consider that it is hard to predict exactly the impact of a policy measure. They are also less mathematical.
In philosophy
In philosophy, any doctrine that emphasizes the priority of a whole over its parts is holism. Some suggest that such a definition owes its origins to a non-holistic view of language and places it in the reductivist camp. Alternately, a 'holistic' definition of holism denies the necessity of a division between the function of separate parts and the workings of the 'whole'. It suggests that the key recognisable characteristic of a concept of holism is a sense of the fundamental truth of any particular experience. This exists in contradistinction to what is perceived as the reductivist reliance on inductive method as the key to verification of its concept of how the parts function within the whole. In the philosophy of language this becomes the claim, called semantic holism, that the meaning of an individual word or sentence can only be understood in terms of its relations to a larger body of language, even a whole theory or a whole language. In the philosophy of mind, a mental state may be identified only in terms of its relations with others. This is often referred to as content holism or holism of the mental. Epistemological and confirmation holism are mainstream ideas in contemporary philosophy. Ontological holism was espoused by David Bohm in his theory on The Implicate Order.
In sociology
Émile Durkheim developed a concept of holism which he set as opposite to the notion that a society is nothing more than a simple collection of individuals. In more recent times, Louis Dumont [10] has contrasted "holism" to "individualism" as two different forms of societies. According to him, modern humans live in an individualist society, whereas ancient Greek society, for example, could be qualified as "holistic", because the individual found identity in the whole society. Thus, the individual was ready to sacrifice himself or herself for his or her community, as his or her life without the polis had no sense whatsoever. Martin Luther King Jr. had a holistic view of social justice. In Letter from Birmingham Jail he famously said: "Injustice anywhere is a threat to justice everywhere". Scholars such as David Bohm [11] and M. I. Sanduk [12] consider society through the lens of plasma physics. From a physics point of view, the interaction of individuals within a group may lead to a continuous model. For Sanduk, the fluid nature of plasma (ionized gas) arises from the interaction of its free interactive charges, so a society may behave as a fluid owing to its free interactive individuals. This fluid model may explain many social phenomena, like social instability, diffusion, flow and viscosity; in this sense the society behaves as a sort of intellectual fluid.
In psychology of perception
A major holist movement in the early twentieth century was gestalt psychology. The claim was that perception is not an aggregation of atomic sense data but a field, in which there is a figure and a ground. Background has holistic effects on the perceived figure. Gestalt psychologists included Wolfgang Koehler, Max Wertheimer and Kurt Koffka. Koehler claimed the perceptual fields corresponded to electrical fields in the brain. Karl Lashley did experiments with gold foil pieces inserted in monkey brains purporting to show that such fields did not exist. However, many of the perceptual illusions and visual phenomena exhibited by the gestaltists were taken over (often without credit) by later perceptual psychologists. Gestalt psychology had influence on Fritz Perls' gestalt therapy, although some old-line gestaltists opposed the association with counter-cultural and New Age trends later associated with gestalt therapy. Gestalt theory was also influential on phenomenology. Aron Gurwitsch wrote on the role of the field of consciousness in gestalt theory in relation to phenomenology. Maurice Merleau-Ponty made much use of the work of holistic psychologists such as Kurt Goldstein in his Phenomenology of Perception.
In teleological psychology
Alfred Adler believed that the individual (an integrated whole expressed through a self-consistent unity of thinking, feeling, and action, moving toward an unconscious, fictional final goal), must be understood within the larger wholes of society, from the groups to which he belongs (starting with his face-to-face relationships), to the larger whole of mankind. The recognition of our social embeddedness and the need for developing an interest in the welfare of others, as well as a respect for nature, is at the heart of Adler's philosophy of living and principles of psychotherapy. Edgar Morin, the French philosopher and sociobiologist, can be considered a holist based on the transdisciplinary nature of his work. Mel Levine, M.D., author of A Mind at a Time,[13] and co-founder (with Charles R. Schwab) of the not-for-profit organization All Kinds of Minds, can be considered a holist based on his view of the 'whole child' as a product of many systems and his work supporting the educational needs of children through the management of a child's educational profile as a whole rather than isolated weaknesses in that profile.
In theological anthropology
In theological anthropology, which belongs to theology and not to anthropology, holism is the belief that the nature of humans consists of an ultimately indivisible union of components such as body, soul and spirit.
In theology
Holistic concepts are strongly represented within the thoughts expressed within Logos (per Heraclitus), Panentheism and Pantheism.
In neurology
A lively debate has run since the end of the 19th century regarding the functional organization of the brain. The holistic tradition (e.g., Pierre Marie) maintained that the brain was a homogeneous organ with no specific subparts whereas the localizationists (e.g., Paul Broca) argued that the brain was organized in functionally distinct cortical areas which were each specialized to process a given type of information or implement specific mental operations. The controversy was epitomized with the existence of a language area in the brain, nowadays known as the Broca's area.[14] Although Broca's view has gained acceptance, the issue isn't settled insofar as the brain as a whole is a highly connected organ at every level from the individual neuron to the hemispheres.
Applications
Architecture
Architecture is often argued by design academics and those practicing in design to be a holistic enterprise.[15] Used in this context, holism tends to imply an all-inclusive design perspective. This trait is considered exclusive to architecture, distinct from other professions involved in design projects.
Education reform
The Taxonomy of Educational Objectives identifies many levels of cognitive functioning, which can be used to create a more holistic education. In authentic assessment, rather than using computers to score multiple choice tests, a standards-based assessment uses trained scorers to score open-response items using holistic scoring methods.[16] In projects such as the North Carolina Writing Project, scorers are instructed not to count errors, or count numbers of points or supporting statements. The scorer is instead instructed to judge holistically whether, "as a whole," the response is more a "2" or a "3". Critics question whether such a process can be as objective as computer scoring, and the degree to which such scoring methods can result in different scores from different scorers.
Medicine
Holism appears in psychosomatic medicine. In the 1970s the holistic approach was considered one possible way to conceptualize psychosomatic phenomena. Instead of charting one-way causal links from psyche to soma, or vice-versa, it aimed at a systemic model, where multiple biological, psychological and social factors were seen as interlinked. Other, alternative approaches at that time were psychosomatic and somatopsychic approaches, which concentrated on causal links only from psyche to soma, or from soma to psyche, respectively.[17] At present it is commonplace in psychosomatic medicine to state that psyche and soma cannot really be separated for practical or theoretical purposes. A disturbance on any level - somatic, psychic, or social - will radiate to all the other levels, too. In this sense, psychosomatic thinking is similar to the biopsychosocial model of medicine. Alternative medicine practitioners adopt a holistic approach to healing.
See also
Buckminster Fuller Christopher Alexander Confirmation holism David Bohm Emergence Emergentism Gaia hypothesis Gestalt psychology Gestalt therapy Holistic health Holon Howard T. Odum Jan Smuts Janus Kurt Goldstein Logical holism Organicism Polytely Panarchy Synergetics Synergy Systems theory Willard Van Orman Quine
Holism
[14] 'Does Broca's area exist?': Christofredo Jakob's 1906 response to Pierre Marie's holistic stance. Kyrana Tsapkini, Ana B. Vivas, Lazaros C. Triarhou. Brain and Language, Volume 105, Issue 3, June 2008, Pages 211-219, http://dx.doi.org/10.1016/j.bandl.2007.07.124 [15] Holm, Ivar (2006). Ideas and Beliefs in Architecture: How attitudes, orientations, and underlying assumptions shape the built environment. Oslo School of Architecture and Design. ISBN 8254701741. [16] Rubrics (Authentic Assessment Toolbox) "So, when might you use a holistic rubric? Holistic rubrics tend to be used when a quick or gross judgment needs to be made" (http://jonathan.mueller.faculty.noctrl.edu/toolbox/rubrics.htm) [17] Lipowski, 1977. [18] http://www.mech.kuleuven.be/pma/project/goa/hms-int/history.html [19] http://www.ecotao.com/holism/ [20] http://plato.stanford.edu/entries/physics-holism/ [21] http://abyss.uoregon.edu/~js/glossary/holism.html [22] http://www.twow.net/ObjText/OtkCcCE.htm
Confirmation holism
Confirmation holism, also called epistemological holism, is the claim that a single scientific theory cannot be tested in isolation; a test of one theory always depends on other theories and hypotheses. For example, in the first half of the 19th century, astronomers were observing the path of the planet Uranus to see if it conformed to the path predicted by Newton's law of gravitation; it didn't. There was an indeterminate number of possible explanations, such as that the telescopic observations were wrong because of some unknown factor; or that Newton's laws were in error; or that God moves different planets in different ways. However, it was eventually accepted that an unknown planet was affecting the path of Uranus, and that the hypothesis that there are seven planets in our solar system was false. Le Verrier calculated the approximate position of the interfering planet and its existence was confirmed in 1846. We now call the planet Neptune. There are two aspects of confirmation holism. The first is that the interpretation of observation is dependent on theory (observations are sometimes said to be theory-laden). Before accepting the telescopic observations one must look into the optics of the telescope, the way the mount is constructed to ensure that the telescope is pointing in the right direction, and that light travels through space in a straight line (which Einstein demonstrated is not so, except as a possible approximation). The second is that evidence alone is insufficient to determine which theory is correct. Each of the alternatives above might have been correct, but only one was in the end accepted. That theories can only be tested as they relate to other theories implies that one can always claim that test results that seem to refute a favoured scientific theory have not refuted that theory at all.
Rather, one can claim that the test results conflict with predictions because some other theory is false or unrecognised (this is Einstein's basic objection when it comes to the uncertainty principle). Maybe the test equipment was out of alignment because the cleaning lady bumped into it the previous night. Or, maybe, there is dark matter in the universe that accounts for the strange motions of some galaxies. That one cannot unambiguously determine which theory is refuted by unexpected data means that scientists must use judgements about which theories to accept and which to reject. In practice, these judgements are made according to the outcomes of statistical hypothesis tests. As these have their foundation in mathematical statistics, the methods scientists use to accept or reject theories may be regarded as rigorous.
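The statistical hypothesis tests mentioned above can be illustrated with a minimal sketch. The example below is hypothetical (a fair-coin null hypothesis and the function name `binomial_two_sided_p` are illustrative choices, not anything from the text): it computes an exact two-sided binomial p-value, the kind of number on which a scientist's accept-or-reject judgement would be based.

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Two-sided exact binomial test: the probability, under the null
    hypothesis (success probability p), of any outcome no more likely
    than the one actually observed (k successes in n trials)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    # Sum the probability of every outcome at least as "extreme" as k,
    # i.e. every outcome whose probability does not exceed pmf[k].
    return min(1.0, sum(q for q in pmf if q <= observed + 1e-12))

# 60 heads in 100 flips of a supposedly fair coin:
p_value = binomial_two_sided_p(60, 100)
print(round(p_value, 4))  # ≈ 0.0569: not rejected at the conventional 5% level
```

Note how the judgement remains conventional: nothing in the arithmetic dictates the 5% threshold, which is exactly the holist's point that evidence alone does not decide.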
Theory-dependence of observations
Suppose some theory T implies an observation O (observation meaning here the result of the observation, rather than the process of observation per se):

T → O

The required observation, however, is not made:

¬O

So by modus tollens,

¬T

All observations make use of prior assumptions, which can be symbolised as:

O ≡ (A₁ ∧ A₂ ∧ … ∧ Aₙ)

and therefore

¬O → ¬(A₁ ∧ A₂ ∧ … ∧ Aₙ)

which is by De Morgan's law equivalent to

¬O → (¬A₁ ∨ ¬A₂ ∨ … ∨ ¬Aₙ).

In other words, the failure to make some observation only implies the failure of at least one of the prior assumptions that went into making the observation. It is always possible to reject an apparently falsifying observation by claiming that only one of its underlying assumptions is false; since there are an indeterminate number of such assumptions, any observation can potentially be made compatible with any theory. So it is quite valid to use a theory to reject an observation.

Similarly, a theory consists of an indeterminate conjunction of hypotheses:

T ≡ (H₁ ∧ H₂ ∧ … ∧ Hₙ)

and so

¬T → (¬H₁ ∨ ¬H₂ ∨ … ∨ ¬Hₙ).

In words, the failure of some theory implies the failure of at least one of its underlying hypotheses. It is always possible to resurrect a falsified theory by claiming that only one of its underlying hypotheses is false; again, since there are an indeterminate number of such hypotheses, any theory can potentially be made compatible with any particular observation. Therefore it is in principle impossible to determine if a theory is false by reference to evidence.
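The De Morgan step in the derivation above can be checked mechanically. The following sketch (the function name `demorgan_holds` is an illustrative choice) exhaustively enumerates truth assignments to confirm that the failure of a conjunction of assumptions is equivalent to the failure of at least one assumption:

```python
from itertools import product

# Check that ¬(A1 ∧ A2 ∧ ... ∧ An) is equivalent to (¬A1 ∨ ¬A2 ∨ ... ∨ ¬An)
# by enumerating every truth assignment for n assumptions.
def demorgan_holds(n):
    for assumptions in product([False, True], repeat=n):
        lhs = not all(assumptions)             # the conjunction fails
        rhs = any(not a for a in assumptions)  # at least one assumption fails
        if lhs != rhs:
            return False
    return True

print(all(demorgan_holds(n) for n in range(1, 6)))  # True
```

The equivalence holds for every n, which is precisely why a single failed observation never pins the blame on any one assumption.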
Conceptual schemes
The framework of a theory (its formal conceptual scheme) is just as open to revision as the "content" of the theory. The aphorism Willard Quine uses is: theories face the tribunal of experience as a whole. This idea is problematic for the analytic-synthetic distinction, because (in Quine's view) such a distinction supposes that some facts are true of language alone; but if the conceptual scheme is as open to revision as the synthetic content, then there can be no plausible distinction between framework and content, hence no distinction between the analytic and the synthetic. One upshot of confirmational holism is the underdetermination of theories: if all theories (and the propositions derived from them) about what exists are not sufficiently determined by empirical data (data, sensory-data, evidence), then each theory with its interpretation of the evidence is equally justifiable or, alternatively, equally indeterminate. Thus,
the Greeks' worldview of Homeric gods is as credible as the physicists' world of electromagnetic waves. Quine later argued for ontological relativity: that our ordinary talk of objects suffers from the same underdetermination and thus does not properly refer to objects. While underdetermination does not invalidate the principle of falsifiability first presented by Karl Popper, Popper himself acknowledged that continual ad hoc modification of a theory provides a means for a theory to avoid being falsified (cf. Lakatos). In this respect, the principle of parsimony, or Occam's Razor, plays a role. This principle presupposes that, among multiple theories explaining the same phenomenon, the simplest theory (in this case, the one least dependent on continual ad hoc modification) is to be preferred.
Transcendental arguments
In recent philosophical literature, starting with Kant, and perhaps associated most popularly with Strawson, attempts have been made at transcendental arguments. This form of argument attempts to prove a proposition from the fact that said proposition is a precondition of some other well-established or accepted proposition(s). If one accepts the validity of this sort of argumentation, then these arguments may serve alongside the Razor, and may perhaps be more conclusive, as a heuristic for selecting between competing, underdetermined theories.
References
Curd, Martin; Cover, J.A. (Eds.) (1998). Philosophy of Science, Section 3, The Duhem-Quine Thesis and Underdetermination, W.W. Norton & Company. Duhem, Pierre. The Aim and Structure of Physical Theory. Princeton, New Jersey, Princeton University Press, 1954. W. V. Quine. 'Two Dogmas of Empiricism.' The Philosophical Review, 60 (1951), pp. 20-43. online text [1] W. V. Quine. Word and Object. Cambridge, Mass., MIT Press, 1960. W. V. Quine. 'Ontological Relativity.' In Ontological Relativity and Other Essays, New York, Columbia University Press, 1969, pp. 26-68. D. Davidson. 'On the Very Idea of a Conceptual Scheme.' Proceedings of the American Philosophical Association, 17 (1973-74), pp. 5-20.
See also
Truth Truth theory Coherentism Underdetermination No true Scotsman
Theories of truth
Coherence theory of truth Consensus theory of truth Correspondence theory of truth Deflationary theory of truth Epistemic theories of truth Pragmatic theory of truth Redundancy theory of truth Semantic theory of truth
Related topics
Belief Epistemology Information Inquiry Knowledge Pragmatism Pragmaticism Pragmatic maxim Reproducibility Scientific method Testability Verificationism
References
[1] http://www.ditext.com/quine/quine.html
Holon (philosophy)
A holon (Greek: ὅλον, holon, neuter form of ὅλος, holos, "whole") is something that is simultaneously a whole and a part. The word was coined by Arthur Koestler in his book The Ghost in the Machine (1967, p. 48). Koestler was compelled by two observations in proposing the notion of the holon. The first observation was influenced by Nobel Prize winner Herbert Simon's parable of the two watchmakers, wherein Simon concludes that complex systems will evolve from simple systems much more rapidly if there are stable intermediate forms present in that evolutionary process than if they are not present. The second observation was made by Koestler himself in his analysis of hierarchies and stable intermediate forms in both living organisms and social organizations. He concluded that, although it is easy to identify sub-wholes or parts, wholes and parts in an absolute sense do not exist anywhere. Koestler proposed the word holon to describe the hybrid nature of sub-wholes and parts within in vivo systems. From this perspective, holons exist simultaneously as self-contained wholes in relation to their sub-ordinate parts, and as dependent parts when considered from the inverse direction. Koestler also points out that holons are autonomous, self-reliant units that possess a degree of independence and handle contingencies without asking higher authorities for instructions. These holons are also simultaneously subject to control from one or more of these higher authorities. The first property ensures that holons are stable forms that are able to withstand disturbances, while the latter property signifies that they are intermediate forms, providing a context for the proper functionality of the larger whole. Finally, Koestler defines a holarchy as a hierarchy of self-regulating holons that function first as autonomous wholes in supra-ordination to their parts, secondly as dependent parts in sub-ordination to controls on higher levels, and thirdly in coordination with their local environment.
General definition
A holon is a system (or phenomenon) which is an evolving, self-organizing, dissipative structure, composed of other holons, whose structures exist at a balance point between chaos and order. It is maintained by the throughput of matter-energy and information-entropy connected to other holons, and is simultaneously a whole in and of itself while at the same time being nested within another holon, and so is a part of something much larger than itself. Holons range in size from the smallest subatomic particles and strings, all the way up to the multiverse, comprising many universes. Individual humans, their societies and their cultures are intermediate-level holons, created by the interaction of forces working upon us both top-down and bottom-up. On a non-physical level, words, ideas, sounds, emotions (everything that can be identified) are simultaneously part of something, and can be viewed as having parts of their own, similar to the sign in semiotics. Since a holon is embedded in larger wholes, it is influenced by and influences these larger wholes. And since a holon also contains subsystems, or parts, it is similarly influenced by and influences these parts. Information flows bidirectionally between smaller and larger systems, as well as rhizomatically, by contagion. When this bidirectionality of information flow and understanding of role is compromised, for whatever reason, the system begins to break down: wholes no longer recognize their dependence on their subsidiary parts, and parts no longer recognize the organizing authority of the wholes. Cancer may be understood as such a breakdown in the biological realm. A hierarchy of holons is called a holarchy. The holarchic model can be seen as an attempt to modify and modernise perceptions of natural hierarchy. Ken Wilber comments that the test of holon hierarchy (e.g. holarchy) is that if a type of holon is removed from existence, then all other holons of which it formed a part must necessarily cease to exist too.
Thus an atom is of a lower standing in the hierarchy than a molecule, because if you removed all molecules, atoms could still exist, whereas if you removed all atoms, molecules, in a strict sense, would cease to exist. Wilber's concept is known as the doctrine of the fundamental and the significant. A hydrogen atom is more fundamental than an ant, but an ant is more significant. The doctrine of the fundamental and the significant is contrasted with the radical, rhizome-oriented pragmatics of Deleuze and Guattari, and other continental philosophy.
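Wilber's removal test lends itself to a toy data-structure sketch. The model below is purely illustrative (the `composition` table and the function `survives` are hypothetical constructions, not anything from Koestler or Wilber): each kind of holon lists the kinds it is composed of, and removing all holons of one kind propagates upward through everything built from them.

```python
# A toy holarchy: each holon kind names the kinds it is composed of.
# Wilber's test: if removing every holon of kind X forces kind Y out of
# existence, then X sits lower (is more fundamental) in the holarchy.
composition = {
    "atom": [],
    "molecule": ["atom"],
    "cell": ["molecule"],
    "organism": ["cell"],
}

def survives(kind, removed, parts=composition):
    """A kind survives if it was not removed and all its parts survive."""
    if kind == removed:
        return False
    return all(survives(p, removed, parts) for p in parts[kind])

print(survives("atom", removed="molecule"))      # True: atoms don't need molecules
print(survives("molecule", removed="atom"))      # False: no atoms, no molecules
print(survives("organism", removed="molecule"))  # False: the failure propagates up
```

The asymmetry of the two middle calls is exactly the atom/molecule example in the text: dependence runs one way only, which is what makes the ordering a holarchy rather than a mere collection.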
Types of holons
Individual holon
An individual holon possesses a dominant monad; that is, it possesses a definable "I-ness". An individual holon is discrete, self-contained, and also demonstrates the quality of agency, or self-directed behavior.[3] The individual holon, although discrete and self-contained, is made up of parts; in the case of a human, examples of these parts would include the heart, lungs, liver, brain, spleen, etc. When a human exercises agency, taking a step to the left, for example, the entire holon, including the constituent parts, moves together as one unit.
Social holon
A social holon does not possess a dominant monad; it possesses only a definable "we-ness", as it is a collective made up of individual holons.[4] In addition, rather than possessing discrete agency, a social holon possesses what is defined as nexus agency. An illustration of nexus agency is best given by a flock of geese. Each goose is an individual holon, while the flock makes up a social holon. Although the flock moves as one unit when flying, and is "directed" by the choices of the lead goose, the flock itself is not mandated to follow that lead goose. Another way to consider this is as collective activity that has the potential for independent internal activity at any given moment.
Artifacts
American philosopher Ken Wilber includes artifacts in his theory of holons. Artifacts are anything (e.g. a statue or a piece of music) that is created by either an individual holon or a social holon. While lacking any of the defining structural characteristics of the previous two holons (agency; self-maintenance; I-ness; self-transcendence), artifacts are useful to include in a comprehensive scheme due to their potential to replicate aspects of, and profoundly affect (via, say, interpretation), the previously described holons. It should also be noted that artifacts are made up of individual or social holons (e.g. a statue is made up of atoms). As an interesting aside, the development of artificial intelligence may force us to question where the line should be drawn between the individual holon and the artifact.
Heaps
Heaps are defined as random collections of holons that lack any sort of organisational significance. A pile of leaves would be an example of a heap. Note, one could question whether a pile of leaves could be an "artifact" of an ecosystem "social holon". This raises a problem of intentionality: in short, if social holons create artifacts but lack intentionality (the domain of individual holons) how can we distinguish between heaps and artifacts? Further, if an artist (individual holon) paints a picture (artifact) in a deliberately chaotic and unstructured way does it become a heap?
See also
Holarchy David Bohm Ken Wilber Heterarchy Holomovement Metasystem transition Protocol stack Quantum physics Philotics Bell's Theorem
External links
A brief history of the concept of holons [1] An even briefer history of the term holon [2] Arthur Koestler text on holon [3] Ecosystems and Holarchies - a new way to look at hierarchies [4] Holons, holarchy, and beyond [5]
Resources
Prigogine, I.; Stengers, I. 1984. Order out of Chaos. New York: Bantam Books. Koestler, Arthur, 1967. The Ghost in the Machine. London: Hutchinson. 1990 reprint edition, Penguin Group. ISBN 0-14-019192-5.
References
[1] http://www.integralworld.net/edwards13x.html [2] http://www.mech.kuleuven.ac.be/pma/project/goa/hms-int/history.html [3] http://www.panarchy.org/koestler/holon.1969.html [4] http://www.holon.se/folke/kurs/Bilder/holarchy2.shtml [5] http://www.beyondwilber.ca/AQALmap/bookdwl/files/WAQALMB_1.pdf
Indeterminacy (philosophy)
Indeterminacy, in philosophy, can refer both to common scientific and mathematical concepts of uncertainty and their implications and to another kind of indeterminacy deriving from the nature of definition or meaning. It is related to deconstructionism and to Nietzsche's criticism of the Kantian noumenon.
Indeterminacy in philosophy
Introduction
Indeterminacy was discussed in one of Jacques Derrida's early works, Plato's Pharmacy (1969),[1] a reading of Plato's Phaedrus and Phaedo. Plato writes of a fictionalized conversation between Socrates and a student, in which Socrates tries to convince the student that writing is inferior to speech. Socrates uses the Egyptian myth of Thoth's creation of writing to illustrate his point. As the story goes, Thoth presents his invention to the god-king of Upper Egypt for judgment. Upon its presentation, Thoth offers script as a pharmakon for the Egyptian people. The Greek word pharmakon poses a quandary for translators: it is both a remedy and a poison. In the proffering of a pharmakon, Thoth presents it in its true meaning: a harm and a benefit. The god-king, however, refuses the invention. Through various reasonings, he determines the pharmakon of writing to be a bad thing for the Egyptian people. The pharmakon, the undecidable, has been returned decided. The problem, as Derrida reasons, is this: since the word pharmakon, in the original Greek, means both a remedy and a poison, it cannot be determined as fully remedy or fully poison. Amon rejected writing as fully poison in Socrates' retelling of the tale, thus shutting out the other possibilities. The problem of indeterminacy arises when one observes the eventual circularity of virtually every possible definition. It is easy to find loops of definition in any dictionary, because this seems to be the only way that certain concepts, and generally very important ones such as that of existence, can be defined in the English language. A definition is a collection of other words, and in any finite dictionary, if one continues to follow the trail of words in search of the precise meaning of any given term, one will inevitably encounter this linguistic indeterminacy.
Philosophers and scientists generally try to eliminate indeterminate terms from their arguments, since any indeterminate thing is unquantifiable and untestable; similarly, any hypothesis which consists of a statement of the properties of something unquantifiable or indefinable cannot be falsified and thus cannot be said to be supported by evidence that does not falsify it. This is related to Popper's discussions of falsifiability in his works on the scientific method. The quantifiability of data collected during an experiment is central to the scientific method, since reliable conclusions can only be drawn from replicable experiments, and since in order to establish observer agreement scientists must be able to quantify experimental evidence. Immanuel Kant unwittingly proposed one answer to this question in his Critique of Pure Reason by stating that there must "exist" a "thing in itself" - a thing which is the cause of phenomena, but not a phenomenon itself. But, so to speak, "approximations" of "things in themselves" crop up in many models of empirical phenomena: singularities in physics, such as gravitational singularities, certain aspects of which (e.g., their unquantifiability) can seem almost to mirror various "aspects" of the proposed "thing in itself", are generally eliminated (or attempts are made at eliminating them) in newer, more precise models of the universe; and definitions of various psychiatric disorders stem, according to philosophers who draw on the work of Michel Foucault, from a belief that something unobservable and indescribable is fundamentally "wrong" with the mind of whoever suffers from such a disorder. Proponents of Foucault's treatment of the concept of insanity would assert that one need only try to quantify various characteristics of such disorders as presented in today's Diagnostic and Statistical Manual (delusion, one of the diagnostic criteria which must be exhibited by a patient if he or she is to be considered schizophrenic, for example) in order to discover that the field of study known as abnormal psychology relies upon indeterminate concepts in defining virtually each "mental disorder" it describes. The quality that makes a belief a delusion is indeterminate to the extent to which it is unquantifiable; arguments that delusion is determined by popular sentiment (i.e., "almost no-one believes that he or she is made of cheese, and thus that belief is a delusion") would lead to the conclusion that, for example, Alfred Wegener's assertion of continental drift was a delusion, since it was dismissed for decades after it was made.
In examples as odd as this, the differences between two approximately equal things may be very small indeed, and it is certainly true that they are quite irrelevant to most discussions. Acceptance of the reflexive property illustrated above has led to useful mathematical discoveries which have influenced the life of anyone reading this article on a computer. But in an examination of the possibility of the determinacy of any possible concept, differences like this are supremely relevant, since that quality which could possibly make two separate things "equal" seems to be indeterminate.
its members on an irrational basis. The less precisely such states as "insanity" and "criminality" are defined in a society, the more likely that society is to fail to continue over time to describe the same behaviors as characteristic of those states (or, alternately, to characterize such states in terms of the same behaviors).
Current work
Richard Dawkins, the man who coined the term meme in the 1970s, described the concept of faith in his documentary, Root of All Evil?, as "the process of non-thinking". In the documentary, he used Bertrand Russell's analogy between a teapot orbiting the sun (something that cannot be observed because the brightness of the sun would obscure it even from the best telescope's view) and the object of one's faith (in this particular case, God) to explain that a highly indeterminate idea can self-replicate freely: "Everybody in the society had faith in the teapot. Stories of the teapot had been handed down for generations as part of the tradition of society. There are holy books about the teapot."[10] In Darwin's Dangerous Idea, Dennett argues against the existence of determinate meaning (in this case, of the subjective experience of vision for frogs) via an explanation of their indeterminacy in the chapter entitled The Evolution of Meanings, in the section The Quest for Real Meanings: "Unless there were 'meaningless' or 'indeterminate' variation in the triggering conditions of the various frogs' eyes, there could be no raw material [...] for selection for a new purpose to act upon. The indeterminacy that Fodor (and others) see as a flaw [...] is actually a prediction for such evolution [of "purpose"]. The idea that there must be something determinate that the frog's eye really means, some possibly unknowable proposition in froggish that expresses exactly what the frog's eye is telling the frog's brain, is just essentialism applied to meaning (or function). Meaning, like function, on which it so directly depends, is not something determinate at its birth. [...]" Dennett argues, controversially,[11] [12] against qualia in Consciousness Explained. Qualia are attacked from several directions at once: he maintains they do not exist (or that they are too ill-defined to play any role in science, or that they are really something else, i.e. behavioral dispositions).
They cannot simultaneously have all the properties attributed to them by philosophers (incorrigible, ineffable, private, directly accessible and so on). The multiple drafts theory is leveraged to show that facts about qualia are not definite. Critics object that one's own qualia are subjectively quite clear and distinct to oneself. The self-replicating nature of memes is a partial explanation of the recurrence of indeterminacies in language and thought. The wide influences of Platonism and Kantianism in Western philosophy can arguably be partially attributed to the indeterminacies of some of their most fundamental concepts (namely, the Idea and the Noumenon, respectively). For a given meme to exhibit replication and heritability - that is, for it to be able to make an imperfect copy of itself which is more likely to share any given trait with its "parent" meme than with some random member of the general "population" of memes - it must in some way be mutable, since memetic replication occurs by means of human conceptual imitation rather than via the discrete molecular processes that govern genetic replication. (If a statement were to generate copies of itself that didn't meaningfully differ from it, that process of copying would more accurately be described as "duplication" than as "replication", and it would be incorrect to term these statements "memes"; the same would be true if the "child" statements did not noticeably inherit a substantial proportion of their traits from their "parent" statements.) In other words, if a meme is defined roughly (and somewhat arbitrarily) as a statement (or as a collection of statements, like Foucault's "discursive formations") that inherits some, but not all, of
its properties (or elements of its definition) from its "parent" memes and which self-replicates, then indeterminacy of definition could be seen as advantageous to memetic replication, since an absolute rigidity of definition would preclude memetic adaptation. It is important to note that indeterminacy in linguistics can arguably be partially defeated by the fact that languages are always changing. However, what the entire language and its collected changes continue to reflect is sometimes still considered to be indeterminate.
Criticism
Persons of faith argue that faith "is the basis of all knowledge". The Wikipedia article on faith states that "one must assume, believe, or have faith in the credibility of a person, place, thing, or idea in order to have a basis for knowledge." In this way the object of one's faith is similar to Kant's noumenon. This would seem to attempt to make direct use of the indeterminacy of the object of one's faith as evidential support of its existence: if the object of one's faith were to be proven to exist (i.e., if it were no longer of indeterminate definition, or if it were no longer unquantifiable, etc.), then faith in that object would no longer be necessary; arguments from authority such as those mentioned above wouldn't either; all that would be needed to prove its existence would be scientific evidence. Thus, if faith is to be considered as a reliable basis for knowledge, persons of faith would seem, in effect, to assert that indeterminacy is not only necessary, but good (see Nassim Taleb).
Proponents of a deterministic universe have criticised various applications of the concept of indeterminacy in the sciences; for instance, Einstein once stated that "God does not play dice" in a succinct (but now unpopular) argument against the theory of quantum indeterminacy, which states that the actions of particles of extremely low mass or energy are unpredictable because an observer's interaction with them changes either their positions or momenta. (The "dice" in Einstein's metaphor refer to the probabilities that these particles will behave in particular ways, which is how quantum mechanics addressed the problem.) At first it might seem that a criticism could be made from a biological standpoint in that an indeterminate idea would seem not to be beneficial to the species that holds it. A strong counterargument, however, is that not all traits exhibited by living organisms will be seen in the long term as evolutionarily advantageous, given that extinctions occur regularly and that phenotypic traits have often died out altogether; in other words, an indeterminate meme may in the long term demonstrate its evolutionary value to the species that produced it in either direction; humans are, as yet, the only species known to make use of such concepts. It might also be argued that conceptual vagueness is an inevitability, given the limited capacity of the human nervous system. We just do not have enough neurons to maintain separate concepts for "dog with 1,000,000 hairs", "dog with 1,000,001 hairs" and so on. But conceptual vagueness is not metaphysical indeterminacy.
See also
Anti-realism Causality Causal loop Daniel Dennett Definition Deconstruction Deterministic system (philosophy) Empty set Event (philosophy) Faith Hyle Indeterminacy of translation Kant Meaning Memetics Nietzsche Occam's razor Philosophy of science Qualia Quantifiability Quantum indeterminacy Quintessence Scientific method Stochastics Theory of everything Thing in itself Vagueness
References
[1] Derrida, Plato's Pharmacy in Dissemination, 1972, Athlone Press, London, 1981 (http://social.chass.ncsu.edu/wyrick/debclass/pharma.htm) [2] Nietzsche, F. On Truth and Lies (http://www.publicappeal.org/library/nietzsche/Nietzsche_various/on_truth_and_lies.htm) [3] Nietzsche, F. Beyond Good and Evil (http://www.marxists.org/reference/archive/nietzsche/1886/beyond-good-evil/ch01.htm) [4] Nietzsche quotes (http://www.wutsamada.com/alma/modern/nietzquo.htm) [5] Nietzsche quote [6] Thompson, Hunter S. [7] Foucault, M. Madness and Civilisation (http://mchip00.nyu.edu/lit-med/lit-med-db/webdocs/webdescrips/foucault12432-des-.html) [8] Foucault, M. The Archaeology of Knowledge [9] Hoenisch, S. Interpretation and Indeterminacy in Discourse Analysis (http://www.criticism.com/da/da_indet.htm) [10] Dawkins World of Dawkins (http://www.simonyi.ox.ac.uk/dawkins/WorldOfDawkins-archive/Dawkins/Work/Articles/1999-10-04snakeoil.shtml) [11] Lormand, E. Qualia! Now Showing at a Theatre near You (http://www-personal.umich.edu/~lormand/phil/cons/qualia.htm) [12] De Leon, D. The Qualities of Qualia (http://www.lucs.lu.se/ftp/pub/LUCS_Studies/LUCS58.pdf) [13] Weinberg, S. PBS interview (http://www.pbs.org/wgbh/nova/elegant/view-weinberg.html) [14] Plank, William. THE IMPLICATIONS OF QUANTUM NON-LOCALITY FOR THE ARCHAEOLOGY OF CONSCIOUSNESS. Provides an expert opinion on the relationship between Nietzsche's critique of Kant's "thing in itself" and quantum indeterminacy. (http://www.msubillings.edu/CASFaculty/Plank/THE IMPLICATIONS OF QUANTUM NON.htm) [15] The Quantum Nietzsche, a site explaining the same ideas, also run by William Plank. (http://www.quantumnietzsche.com/)
Integral (spirituality)
This article is about "Integral" as a theme in spirituality. For the "Integral Theory" associated with Ken Wilber, see Integral Theory. See Integral (disambiguation) for other uses. Integral is a term applied to a wide-ranging set of developments in philosophy, psychology, religious thought, and other areas that seek interdisciplinary and comprehensive frameworks. The term is often combined with others, such as approach,[1][2] consciousness,[3] culture,[4] paradigm,[5][6] philosophy,[7][8] society,[9] theory,[10] and worldview.[3] Major themes of this range of philosophies and teachings include a synthesis of science and religion, evolutionary spirituality, and holistic programs of development for the body, mind, soul, and spirit. In some versions of integral spirituality, integration is seen to necessarily include the three domains of self, culture, and nature.[11] Integral thinkers draw inspiration from the work of Sri Aurobindo, Don Beck, Jean Gebser, Robert Kegan, Ken Wilber, and others. Some individuals affiliated with integral spirituality have claimed that there exists a loosely defined "Integral movement";[12] others, however, have disagreed.[13] Whatever its status as a "movement", there are a variety of religious organizations, think tanks, conferences, workshops, and publications in the US and internationally that use the term integral.
The grandeur of Darwinian thought is not disputed, but it does not explain the integral evolution of man. So it is with all purely physical explanations, which do not recognise the spiritual essence of man's being.[21] [Italics added] The word integral was independently suggested by Jean Gebser (1905–1973), a Swiss phenomenologist and interdisciplinary scholar, in 1939 to describe his own intuition regarding the next state of human consciousness. Gebser was the author of The Ever-Present Origin, which describes human history as a series of mutations in consciousness. He only afterwards discovered the similarity between his own ideas and those of Sri Aurobindo and Teilhard de Chardin.[22] The idea of "Integral Psychology" was first developed in the 1940s and 50s by Indra Sen (1903–1994), a psychologist, author, educator, and devotee of Sri Aurobindo and The Mother. He was the first to coin the term "Integral psychology" to describe the psychological observations he found in Sri Aurobindo's writings (which he contrasted with those of Western psychology), and he developed the themes of "Integral Culture" and "Integral Man".[23] Although these basic ideas were first articulated in the early twentieth century, the movement originates with the California Institute of Integral Studies, founded in 1968 by Haridas Chaudhuri (1913–1975), a Bengali philosopher and academic. Chaudhuri had been a correspondent of Sri Aurobindo and developed his own perspective and philosophy. He established the California Institute of Integral Studies (originally the California Institute of Asian Studies) in 1968 in San Francisco (it became an independent organisation in 1974), and presented his own form of Integral psychology in the early 1970s.[24] Again independently, in Spiral Dynamics, Don Beck and Chris Cowan use the term integral for a developmental stage which sequentially follows the pluralistic stage.
The essential characteristic of this stage is that it continues the inclusive nature of the pluralistic mentality, yet extends this inclusiveness to those outside of the pluralistic mentality. In doing so, it accepts the ideas of development and hierarchy, which the pluralistic mentality finds difficult. Other ideas of Beck and Cowan include the "first tier" and "second tier", which refer to major periods of human development. In the late 1990s and 2000, Ken Wilber, who was influenced by both Aurobindo and Gebser, among many others, adopted the term Integral to refer to the latest revision of his own integral philosophy, which he called Integral Theory.[25] He also established the Integral Institute as a think-tank for the further development of these ideas. In his book Integral Psychology, Wilber lists a number of pioneers of the integral approach, post hoc. These include Goethe, Schelling, Hegel, Gustav Fechner, William James, Rudolf Steiner, Alfred North Whitehead, James Mark Baldwin, Jürgen Habermas, Sri Aurobindo, and Abraham Maslow.[26] The adjective Integral has also been applied to Spiral Dynamics, chiefly the version taught by Don Beck, who for a while collaborated with Wilber.[27] In the Wilber movement, "Integral" when capitalized is given a further definition, being made synonymous with Wilber's AQAL Integral theory,[28] whereas "Integral Studies" refers to the broader field including the range of integral thinkers such as Jean Gebser, Sri Aurobindo, Ken Wilber, and Ervin Laszlo.[29]
Contemporary figures
A variety of intellectuals, academics, writers, and other specialists have advanced the fields of integral thought in recent decades. Because the field remains loosely defined, definitions of Integral psychology and philosophy, and lists of Integral philosophers and visionaries, differ, although there are some common themes. While Wilber was the first to nominate Integral philosophers, thinkers, and visionaries, similar lists have since been proposed by others. According to John Bothwell and David Geier, among the top thinkers in the integral movement are Stanislav Grof, Fred Kofman, George Leonard, Michael Murphy, Jenny Wade, Roger Walsh, Ken Wilber, and Michael Zimmerman.[30] Australian academic Alex Burns mentions among integral theorists Jean Gebser, Clare W. Graves, Jane Loevinger
and Ken Wilber.[31] In 2007, Steve McIntosh mentioned Henri Bergson and Teilhard de Chardin along with many of the names mentioned by Wilber,[32] while in the same year the editors of What Is Enlightenment? listed as contemporary Integralists Don Beck, Allan Combs, Robert Godwin, Sally Goerner, George Leonard, Michael Murphy, William Irwin Thompson, and Wilber.[33] Gary Hampson suggested that there are six intertwined genealogical branches of Integral, based on those who first used the term: those aligned with Aurobindo, Gebser, Wilber, Gangadean, László, and Steiner (noting that the Steiner branch is via the conduit of Gidley).[34] Integral thought is claimed to provide "a new understanding of how evolution affects the development of consciousness and culture."[3] It includes areas such as business, education, medicine, spirituality, sports,[35] psychology, and psychotherapy.[36] The idea of the evolution of consciousness has also become a central theme in much of integral theory.[37] According to the Integral Transformative Practice website, integral means "dealing with the body, mind, heart, and soul."[38]
Integral psychology
Integral psychology is psychology that presents an all-encompassing holistic rather than an exclusivist or reductive approach. It includes lower, ordinary, and spiritual or transcendent states of consciousness. It is originally based on the Yoga psychology of Sri Aurobindo. Other important writers in the field of Integral psychology are Indra Sen,[39] Haridas Chaudhuri,[40] Ken Wilber,[41] and Brant Cortright.[42]
Integral practice
Integral practice is primarily an outgrowth of different integral theories and philosophies as they intersect with various spiritual practices, holistic health modalities, and transformative regimens associated with the New Paradigm and human potential movement. Some ways to describe integral practice are the experiential application of integral theory,[43] the "holistic disciplines we consciously employ to nurture ourselves and others, and most specifically those practices that both inspire and sustain growth in many dimensions at once,"[44] and to "address and support each aspect of life with the goal of fully realizing all levels of human potential...."[45] These self-care practices target different areas of personal development, such as physical, emotional, creative, and psychosocial, in a combined, synergistic fashion. They may have different emphases depending on the theory that supports each approach, but most include a spiritual, introspective or meditative component as a major feature. The objectives of integral practice could be loosely defined as well-being and wholeness, with, in most cases, an underlying imperative of personal and even societal transformation and evolution.[46] [47] There is also the question of how to provide necessary customization and individualization of practice, while avoiding a "cafeteria model" that encourages practitioners to choose components according to their own strengths, rather than what is necessary for integral growth and development.[48] The following can be considered examples of different modalities of integral practice, listed in approximate order of inception: Sri Aurobindo's Integral Yoga; Integral Transformative Practice (ITP), created by George Leonard and Michael Murphy;[49] Holistic Integration, created by Ramon Albarada and Marina Romero;[50] Integral Lifework, created by T. Collins Logan;[51] and Integral Life Practice (ILP), based on Ken Wilber's AQAL framework.
See also
Cultural creatives Integral psychology Integral Theory Integral yoga Integrative learning Post-postmodernism Quantum mysticism Relationship between religion and science Remodernism Transmodernity Integral humanism
External links
Academic programs California Institute of Integral Studies [52], offers programs in integral studies. Fielding Graduate University [53], offers programs in integral studies. John F. Kennedy University, MA in Integral Theory [54] an accredited online Master of Arts degree in Integral Theory. Conferences Integral Theory Conference [55] the official site for the biennial Integral Theory Conference held at JFK University. Integral Leadership in Action [56] the official site for the 4th annual conference on integral conscious leadership. Organizations Integral Institute [57] a non-profit academic think tank. Integral Research Center [58] a grant giving mixed-methods research center based on Integral Methodological Pluralism. Publications Conscious Evolution [59], essays and articles about the multidisciplinary, integral study of consciousness and the Kosmos. Integral Leadership Review [60], the site of the online publications Integral Leadership Review and Leading Digest Integral Life [61] online community website that is the sponsoring organization of Integral Institute, a non-profit academic think tank. Integral Review Journal [62], an online peer reviewed journal. Integral World [63] website and online resource maintained by Frank Visser. Journal of Integral Theory and Practice [64] a peer-reviewed academic journal founded in 2003 with its first issue appearing in 2006. Kosmos Journal [65], founded in 2001, a leading international journal for planetary citizens committed to the birth and emergence of a new planetary culture and civilization. World Futures: Journal of General Evolution [66]. An academic journal devoted to promoting evolutionary models, theories and approaches within and among the natural and the social sciences.
[40] Chaudhuri, Haridas (1975). "Psychology: Humanistic and transpersonal". Journal of Humanistic Psychology, 15 (1), 7–15.
[41] Ken Wilber, Integral Psychology: Consciousness, Spirit, Psychology, Therapy, Shambhala, ISBN 1-57062-554-9
[42] Brant Cortright, Integral Psychology: Yoga, Growth, and Opening the Heart, SUNY, 2007, ISBN 0791470717
[43] Ken Wilber, Terry Patten, Adam Leonard & Marco Morelli, Integral Life Practice, ISBN 9781590304679, p. 6
[44] T. Collins Logan, True Love: Integral Lifework Theory & Practice, ISBN 9780977033638, p. 3
[45] Elliott Dacher, Integral Health: The Path to Human Flourishing, ISBN 9781591201908, p. 118
[46] George Leonard and Michael Murphy, The Life We Are Given, ISBN 0874777925, p. 16
[47] Sri Aurobindo, The Integral Yoga, ISBN 9780941524766, p. 10
[48] Jorge Ferrer, "Integral Transformative Practice: A Participatory Perspective", Journal of Transpersonal Psychology, 2003, Vol. 35, No. 1
[49] http://www.itp-life.com
[50] http://www.estel.es/eng/
[51] http://www.integrallifework.com
[52] http://www.ciis.edu/
[53] http://www.fielding.edu/
[54] http://www.jfku.edu/integraltheory/
[55] http://www.integraltheoryconference/
[56] http://www.integralleadershipinaction.com/
[57] http://www.integralinstitute.com/
[58] http://www.integralresearchcenter.org/
[59] http://www.cejournal.org/
[60] http://www.integralleadershipreview.com
[61] http://www.integrallife.com/
[62] http://www.integral-review.org/
[63] http://www.integralworld.net/
[64] http://www.integraljournals.org/
[65] http://www.kosmosjournal.org/
[66] http://www.tandf.co.uk/journals/titles/02604027.html
Integral Theory
This article is about "Integral Theory" as an emerging area of discourse. For "integral" as a term in spirituality, see Integral (spirituality). See Integral (disambiguation) for other uses. Integral Theory is an area of discourse emerging from the theoretical psychology and philosophy of Ken Wilber, a body of work that has evolved in phases from a transpersonal psychology[1] synthesizing Western and non-Western understandings of consciousness with notions of cosmic, biological, human, and divine evolution[2] into an emerging field of scholarly research focused on the complex interactions of ontology, epistemology, and methodology.[3] It has been claimed to offer a "Theory of Everything",[4] described as a "post-metaphysical"[5] worldview and a "trans-path path"[6] for holistic development; however, the discourse has received limited acceptance in mainstream academia[7] and has been sharply criticized by some for insularity and lack of rigor.[8] Integral Theory (or integral approach,[9][10] consciousness,[11] paradigm,[12] philosophy,[11] society,[13] or worldview[11]) has been applied in a variety of domains: Integral Art, Integral Ecology, Integral Economics, Integral Politics, Integral Psychology, Integral Spirituality, and others. The first interdisciplinary academic conference on Integral Theory took place in 2008.[14] Integral Theory is said to be situated within Integral studies, described as an emerging interdisciplinary field of discourse.[3] Researchers have also developed applications in areas such as leadership, coaching, and organizational development.[15] The Integral Institute was co-founded as a non-profit "think-and-practice tank"[16] by Ken Wilber and others in 2001,[17] to promote the theory and its practice.
While there is no single organization defining the nature of Integral Theory, some have claimed that a loosely defined "Integral movement" has appeared, expressed in a variety of conferences, workshops, publications, and blogs focused on themes in integral thought, such as spiritual evolution, and in academic developmental-studies programs.[18] Others, however, have denied the existence of a single Integral movement, arguing that such claims conflate radically different phenomena.[19]
The project of "The Integral University in Paris" was launched on 28 February 2008. So far, the Integral University (Université Intégrale in French) in Paris refers to a cycle of conferences organized by the French chapter of the Club of Budapest, based on an idea put forward by Michel Saloff Coste. It is not an institute as such, as it is still in its developing stages.[20]
History
Although the first use of the term integral in a spiritual context was in the nineteenth century, Integral Theory's most recent antecedents include the California Institute of Integral Studies, founded in 1968 by Haridas Chaudhuri (1913–1975), a Bengali philosopher and academic. Chaudhuri had been a correspondent of Sri Aurobindo and developed his own perspective and philosophy. He established the California Institute of Integral Studies (originally the California Institute of Asian Studies) in 1968 in San Francisco (it became an independent organisation in 1974), and presented his own form of Integral psychology in the early 1970s.[21] Don Beck and Chris Cowan use the term integral for a developmental stage which sequentially follows the pluralistic stage. The essential characteristic of this stage is that it continues the inclusive nature of the pluralistic mentality, yet extends this inclusiveness to those outside of the pluralistic mentality. In doing so, it accepts the ideas of development and hierarchy, which the pluralistic mentality finds difficult. Other ideas of Beck and Cowan include the "first tier" and "second tier", which refer to major periods of human development. In the late 1990s and 2000, Ken Wilber, who was influenced by both Aurobindo and Gebser, among many others, adopted the term Integral to refer to the latest revision of his own integral philosophy, which he called Integral Theory.[22] He also established the Integral Institute as a think-tank for the further development of these ideas. In his book Integral Psychology, Wilber lists a number of pioneers of the integral approach, post hoc. These include Goethe, Schelling, Hegel, Gustav Fechner, William James, Rudolf Steiner, Alfred North Whitehead, James Mark Baldwin, Jürgen Habermas, Sri Aurobindo, and Abraham Maslow.[23] The adjective Integral has also been applied to Spiral Dynamics, chiefly the version taught by Don Beck, who for a while collaborated with Wilber.[24]
In the movement associated with Wilber, "Integral" when capitalized is given a further definition, being made synonymous with Wilber's AQAL Integral theory,[25] whereas "Integral Studies" refers to the broader field including the range of integral thinkers such as Jean Gebser, Sri Aurobindo, Ken Wilber, Rudolf Steiner, Edgar Morin, and Ervin Laszlo.[26][27]
Methodologies
AQAL, pronounced "ah-qwul", is a widely used framework in Integral Theory; it is also called the Integral Operating System (IOS), among other synonyms. The term stands for "all quadrants, all levels, all lines, all states, and all types." Some integral theorists conceive of it as one of the most comprehensive approaches to reality, a metatheory that attempts to explain how academic disciplines and every form of knowledge and experience fit together coherently.[28] In addition to AQAL, scholars have proposed other methodologies for integral studies. Bonnitta Roy has introduced a "Process Model" of integral theory, combining Western process philosophy, Dzogchen ideas, and Wilberian theory. She distinguishes between Wilber's concept of perspective and the Dzogchen concept of view, arguing that Wilber's view is situated within a framework or structural enfoldment which constrains it, in contrast to the Dzogchen intention of being mindful of view.[29] Wendelin Küpers, Ph.D., a German scholar specializing in phenomenological research, has proposed that an "integral pheno-practice" based on aspects of the work of Maurice Merleau-Ponty can provide the basis of an "adequate phenomenology" useful in integral research. His proposed approach claims to offer a more inclusive and coherent approach than classical phenomenology, including procedures and techniques called epoché, bracketing, reduction, and free variation.[30]
Contemporary figures
A variety of intellectuals, academics, writers, and other specialists have advanced integral theory in recent decades.
Themes
Integral art
In the context of Integral Theory, Integral art can be defined as art that reaches across multiple quadrants and levels. It may also refer to art that was created by someone who thinks or acts in an integral way.
Integral ecology
Integral ecology is a multi-disciplinary approach pioneered by Michael E. Zimmerman and Sean Esbjörn-Hargens. It applies Wilber's integral theory (especially the eight methodological perspectives) to the field of environmental studies and ecological research.[31][32][33][34]
Integral economics
Integral economics is a paradigmatic methodology that translates integral thought and theory into economics. This 'new' praxis offers a structural framework for addressing and resolving problems that the Integral Institute has associated in its mission[35] with evolutionary forms of capitalism and with the culture wars in political, religious, and scientific domains. These efforts are thus affording "theorists and developmental psychologists a needed and useful early look at the formal, dynamic process by which the evolution of higher-order development proceeds" in relation to an integral model.[36]
Integral politics
Integral politics is an endeavor to develop a balanced and comprehensive politics around the principles of integral studies. Theorists including Don Beck, Lawrence Chickering, Jack Crittenden, David Sprecher, and Ken Wilber have applied concepts such as the AQAL methodology of Integral Theory to issues in political philosophy and applications in government.[37]
Integral psychology
Integral psychology is originally based on the Yoga psychology of Sri Aurobindo.[38] In the context of Integral Theory, it applies Wilber's AQAL and related themes to the field of psychology.[39] For Wilber, Integral psychology is psychology that is inclusive or holistic rather than exclusivist or reductive, and it values and integrates multiple explanations and methodologies.[40][41]
...to traverse and converse across these multiple dimensions (transversal-integral).[42]
See also
Integral (spirituality) Post-postmodernism Ken Wilber
External links
Academic programs California Institute of Integral Studies [52], offers programs in integral studies. Fielding Graduate University [53], offers programs in integral studies. John F. Kennedy University, MA in Integral Theory [54] an accredited online Master of Arts degree in Integral Theory. Conferences Integral Theory Conference [55] the official site for the biennial Integral Theory Conference held at JFK University. Integral Leadership in Action [56] the official site for the 4th annual conference on integral conscious leadership. Organizations Integral Institute [57] a non-profit academic think tank. Integral Research Center [58] a grant giving mixed-methods research center based on Integral Methodological Pluralism. Publications Conscious Evolution [59], essays and articles about the multidisciplinary, integral study of consciousness and the Kosmos. Integral Leadership Review [60], the site of the online publications Integral Leadership Review and Leading Digest Integral Life [61] online community website that is the sponsoring organization of Integral Institute, a non-profit academic think tank. Integral Review Journal [62], an online peer reviewed journal. Integral World [63] website and online resource maintained by Frank Visser. Journal of Integral Theory and Practice [64] a peer-reviewed academic journal founded in 2003 with its first issue appearing in 2006. Kosmos Journal [65], founded in 2001, a leading international journal for planetary citizens committed to the birth and emergence of a new planetary culture and civilization. World Futures: Journal of General Evolution [66]. An academic journal devoted to promoting evolutionary models, theories and approaches within and among the natural and the social sciences.
Integral ecology
Integral ecology is an emerging field that applies Ken Wilber's integral theory to environmental studies and ecological research. The field was pioneered in the late 1990s by integral theorist Sean Esbjörn-Hargens and environmental philosopher Michael E. Zimmerman.
Teachings
Integral ecology integrates over 80 schools of ecology and 70 schools of environmental thought. It integrates these approaches by recognizing that environmental phenomena are the result of an observer using a particular method of observation to observe some aspect of nature. This postmetaphysical formula is summarized as Who (the observer) × How (the method of observation) × What (that which is observed). Integral ecology uses a framework of eight ecological worldviews (e.g., eco-manager, eco-holist, eco-radical, eco-sage), eight ecological modes of research (e.g., phenomenology, ethnomethodology, empiricism, systems theory), and four terrains (i.e., experience, behaviors, cultures, and systems). A few of the schools of ecology that integral ecology weaves together are listed below:
Across the four terrains (the terrains of experiences, behaviors, cultures, and systems), these schools include: Ethno-Ecology, Feminist Ecology, Psychoanalytic Ecology, Linguistic Ecology, Deep Ecology, Ecopsychology, Romantic Ecology, Process Ecology, Information Ecology, Mathematical Ecology, Industrial Ecology, Spiritual Ecology, Acoustic Ecology, and Social Ecology.
Integral ecology is defined as the mixed-methods study of the subjective and objective aspects of organisms in relationship to their intersubjective and interobjective environments. As a result, integral ecology doesn't require a new definition of ecology so much as it provides an integral interpretation of the standard definition of ecology, in which organisms and their environments are recognized as having interiority. Integral ecology also examines developmental stages in both nature and humankind, including how nature shows up to people operating from differing worldviews. Key integrative figures drawn on in integral ecology include Thomas Berry, Edgar Morin, Aldo Leopold, and Stan Salthe.
Publications
Articles
Zimmerman, M. (1994). Contesting Earth's Future: Radical Ecology and Postmodernity. Berkeley: University of California Press.
Zimmerman, M. (1996). "A Transpersonal Diagnosis of the Ecological Crisis". ReVision: A Journal of Consciousness and Transformation 18, no. 4: 38–48.
Zimmerman, M. (2000). "Possible Political Problems of Earth-Based Religiosity". In Beneath the Surface: Critical Essays in the Philosophy of Deep Ecology, edited by E. Katz, A. Light, and D. Rothenberg, 169–94. Cambridge, MA: MIT Press.
Zimmerman, M. (2001). "Ken Wilber's Critique of Ecological Spirituality". In Deep Ecology and World Religions, edited by D. Barnhill and R. Gottlieb, 243–69. Albany, NY: SUNY Press.
References
[1] http://integralecology-michaelz.blogspot.com/
[2] http://www.i-edu.org/
[3] http://www.colorado.edu/ArtsSciences/CHA/profiles/zimmerman.html
[4] http://www.rhizomedesigns.org
Logical holism
Logical holism is the belief that the world operates in such a way that no part can be known without the whole being known first.
See also
The doctrine of internal relations
Holography:
1. In optics: holography
2. In metaphysics: holonomic brain theory, the holographic paradigm, and The Holographic Universe (Michael Talbot's book). Proponents: Michael Talbot, David Bohm, Karl H. Pribram
3. In quantum mechanics: the holographic principle (the conjecture that all of the information about the realities in a volume of space is present on the surface of that volume). Proponents: Gerard 't Hooft, Leonard Susskind, John A. Wheeler
Organicism
Organicism is a philosophical orientation that asserts that reality is best understood as an organic whole. By definition it is close to holism. Plato, Hobbes or Constantin Brunner are examples of such philosophical thought. Organicism is also a biological doctrine that stresses the organization, rather than the composition, of organisms. William Emerson Ritter coined the term in 1919. Organicism became well-accepted in the 20th century. Examples of 20th century biologists who were organicists are Ross Harrison, Paul Weiss, and Joseph Needham. Donna Haraway discusses them in her first book. John Scott Haldane (father of J. B. S. Haldane), R. S. Lillie, W. E. Agar, and Ludwig von Bertalanffy are other early twentieth century organicists.
Is it material composition, or organization of parts, that creates the mutual symbiosis between Amphiprion clownfish and tropical sea anemones?
Organicism as a doctrine rejects mechanism and reductionism (doctrines that claim that the smallest parts by themselves explain the behavior of larger organized systems of which they are a part). However, organicism also rejects vitalism, the doctrine that there is a vital force different from physical forces that accounts for living things. A number of biologists in the early to mid-twentieth century embraced organicism. They wished to reject earlier vitalisms but to stress that whole-organism biology was not fully explainable by atomic mechanism. The larger organization of an organic system has features that must be taken into account to explain its behavior. Gilbert and Sarkar distinguish organicism from holism to avoid what they see as the vitalistic or spiritualistic connotations of holism. Dusek notes that holism contains a continuum of degrees of top-down control of organization, ranging from monism (the doctrine that the only complete object is the whole universe, or that there is only one entity, the universe), to organicism, which allows relatively more independence of the parts from the whole, despite the whole being more than the sum of the parts and/or the whole exerting some control on the behavior of the parts. Still more independence is present in relational holism. This doctrine does not assert top-down control of the whole over its parts, but does claim that the relations of the parts are essential to explaining the behavior of the system. Aristotle and early modern philosophers and scientists tended to describe reality as made of substances and their qualities, and to neglect relations. Gottfried Wilhelm Leibniz showed the bizarre conclusions to which a doctrine of the non-existence of relations led. Twentieth-century philosophy has been characterized by the introduction of and emphasis on the importance of relations, whether in symbolic logic, in phenomenology, or in metaphysics.
William Wimsatt has suggested that the number of terms in the relations considered distinguishes reductionism from holism. Reductionistic explanations claim that relations of two or at most three terms are sufficient to account for a system's behavior; at the other extreme, the system could be considered as a single relation of, for instance, ten to the twenty-sixth terms. Organicism has some intellectually and politically controversial or suspect associations. "Holism," the doctrine that the whole is more than the sum of its parts, often used synonymously with organicism or as a broader category under which organicism falls, has been co-opted in recent decades by "holistic medicine" and by New Age thought. German Nazism appealed to organicist and holistic doctrines, discrediting, for many in retrospect, the original organicist doctrines (see Anne Harrington). Soviet dialectical materialism also made appeals to a holistic and organicist approach stemming from Hegel via Karl Marx's co-worker Friedrich Engels, again giving a controversial political association to organicism.
Organicism has also been used to characterize notions put forth by various late-19th-century social scientists who considered human society to be analogous to an organism, and individual humans to be analogous to the cells of an organism. This sort of organicist sociology was articulated by Alfred Espinas, Paul von Lilienfeld, Jacques Novicow, Albert Schäffle, Herbert Spencer, and René Worms, among others (Barberis 2003: 54).
References
Barberis, D. S. (2003). "In search of an object: Organicist sociology and the reality of society in fin-de-siècle France". History of the Human Sciences, vol. 16, no. 3, pp. 51–72.
Beckner, Morton (1967). "Organismic Biology", in Encyclopedia of Philosophy, ed. Paul Edwards, MacMillan Publishing Co., Inc. & The Free Press.
Dusek, Val (1999). The Holistic Inspirations of Physics, Rutgers University Press.
Haraway, Donna (1976). Crystals, Fabrics, and Fields, Johns Hopkins University Press.
Harrington, Anne (1996). Reenchanted Science, Harvard University Press.
Mayr, E. (1997). "The organicists", in "What is the meaning of life?", in This Is Biology, Belknap Press of Harvard University Press.
Gilbert, Scott F. and Sahotra Sarkar (2000). "Embracing complexity: Organicism for the 21st century", Developmental Dynamics 219(1): 1–9. (abstract of the paper: [1])
Wimsatt, William (2007). Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality, Harvard University Press.
See also
Organismic theory
Organic unity
Philosophy of Organism
External links
Orsini, G. N. G., "Organicism" [2], in Dictionary of the History of Ideas (1973)
Dictionary definition [3]
References
[1] http://www3.interscience.wiley.com/cgi-bin/abstract/72513248/ABSTRACT?CRETRY=1&SRETRY=0
[2] http://xtf.lib.virginia.edu/xtf/view?docId=DicHist/uvaBook/tei/DicHist3.xml;chunk.id=dv3-52
[3] http://www.thefreedictionary.com/organicism
Synergetics (Fuller)
Synergetics is the empirical study of systems in transformation, with an emphasis on total system behavior unpredicted by the behavior of any isolated components, including humanity's role as both participant and observer. Since systems are identifiable at every scale from the quantum level to the cosmic, and humanity both articulates the behavior of these systems and is composed of these systems, synergetics is a very broad discipline, embracing a range of scientific and philosophical studies including tetrahedral and close-packed-sphere geometries, thermodynamics, chemistry, psychology, biochemistry, economics, philosophy and theology. Despite a few mainstream endorsements, such as articles by Arthur Loeb and the naming of the molecule buckminsterfullerene, synergetics remains an iconoclastic subject ignored by most traditional curricula and academic departments. Buckminster Fuller (1895-1983) coined the term and attempted to define its scope in his two-volume work Synergetics.[1] [2] [3] His oeuvre inspired many researchers to tackle branches of synergetics. Three examples: Hermann Haken explored self-organizing structures of open systems far from thermodynamic equilibrium, Amy Edmondson explored tetrahedral and icosahedral geometry, and Stafford Beer tackled geodesics in the context of social dynamics. Many other researchers toil today on aspects of synergetics, though many deliberately distance themselves from Fuller's broad, all-encompassing definition, given its problematic attempt to differentiate and relate all aspects of reality, including the ideal and the physically realized, the container and the contained, the one and the many, the observer and the observed, the human microcosm and the universal macrocosm.
What is Synergetics?
Synergetics is defined by R. Buckminster Fuller (1895-1983) in his two books Synergetics: Explorations in the Geometry of Thinking and Synergetics 2: Explorations in the Geometry of Thinking as: "A system of mensuration employing 60-degree vectorial coordination comprehensive to both physics and chemistry, and to both arithmetic and geometry, in rational whole numbers... Synergetics explains much that has not been previously illuminated... Synergetics follows the cosmic logic of the structural mathematics strategies of nature, which employ the paired sets of the six angular degrees of freedom, frequencies, and vectorially economical actions and their multi-alternative, equi-economical action options... Synergetics discloses the excruciating awkwardness characterizing present-day mathematical treatment of the interrelationships of the independent scientific disciplines as originally occasioned by their mutual and separate lacks of awareness of the existence of a comprehensive, rational, coordinating system inherent in nature."[4] Other passages in Synergetics that outline the subject are its introduction (The Wellspring of Reality [5]) and the section on Nature's Coordination (410.01) [6]. The chapter on Operational Mathematics (801.00-842.07) [7] provides an easy-to-follow, easy-to-build introduction to some of Fuller's geometrical modeling techniques, so it can help a new reader become familiar with Fuller's approach, style and geometry. One of Fuller's clearest expositions on "the geometry of thinking" occurs in the two-part essay "Omnidirectional Halo," which appears in his book No More Secondhand God.[8] Amy Edmondson describes synergetics "in the broadest terms, as the study of spatial complexity, and as such is an inherently comprehensive discipline."
[9] In her PhD study, Cheryl Clark synthesizes the scope of synergetics as "the study of how nature works, of the patterns inherent in nature, the geometry of environmental forces that impact on humanity."[10] Here is an abridged list of some of the discoveries Fuller claims for Synergetics (see Controversies below), again quoting directly:
- The rational volumetric quantation or constant proportionality of the octahedron, the cube, the rhombic triacontahedron, and the rhombic dodecahedron when referenced to the tetrahedron as volumetric unity.
- The trigonometric identification of the great-circle trajectories of the seven axes of symmetry with the 120 basic disequilibrium LCD triangles of the spherical icosahedron. (See Sec. 1043.00.)
- The rational identification of number with the hierarchy of all the geometries.
- The A and B Quanta Modules.
- The volumetric hierarchy of Platonic and other symmetrical geometricals based on the tetrahedron and the A and B Quanta Modules as unity of coordinate mensuration.
- The identification of the nucleus with the vector equilibrium.
- Omnirationality: the identification of triangling and tetrahedroning with second- and third-powering factors.
- Omni-60-degree coordination versus 90-degree coordination.
- The integration of geometry and philosophy in a single conceptual system providing a common language and accounting for both the physical and metaphysical.[11]
Significance of Synergetics
Several authors have tried to characterize the importance of synergetics. Amy Edmondson asserts that "Experience with synergetics encourages a new way of approaching and solving problems. Its emphasis on visual and spatial phenomena combined with Fuller's holistic approach fosters the kind of lateral thinking which so often leads to creative breakthroughs."[12] Cheryl Clark points out that "In his thousands of lectures, Fuller urged his audiences to study synergetics, saying 'I am confident that humanity's survival depends on all of our willingness to comprehend feelingly the way nature works.'"[13]
Tetrahedral Accounting
A chief hallmark of this system of mensuration was its unit of volume: a tetrahedron defined by four closest-packed unit-radius spheres. This tetrahedron anchored a set of concentrically arranged polyhedra proportioned in a canonical manner and inter-connected by a twisting-contracting, inside-outing dynamic named the Jitterbug Transformation.
Shape                      Properties
A, B, T modules            tetrahedral voxels
MITE                       space-filler, 2As, 1B
Tetrahedron                self dual
Coupler                    space filler
Cuboctahedron              cb.h = 1/2, cb.v = 1/8 of 20
Duo-Tet Cube               24 MITEs
Octahedron                 dual of cube
Rhombic Triacontahedron    radius rt.h < 1, rt.v = 2/3 of 7.5
Rhombic Dodecahedron       space-filler, dual to cuboctahedron
Rhombic Triacontahedron    rt.h = phi/sqrt(2)
Icosahedron                edges 1 = tetrahedron's edges
Cuboctahedron              edges 1, cb.h = 1
2-Frequency Cube           2-frequency, 8 x 3 volume
Shape                      A     B     T
A module                   1     0     0
B module                   0     1     0
T module                   0     0     1
MITE                       2     1     0
Tetrahedron                24    0     0
Coupler                    16    8     0
Duo-Tet Cube               48    24    0
Octahedron                 48    48    0
Rhombic Triacontahedron    0     0     120
Rhombic Dodecahedron       96    48    0
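Because the tetrahedron (tetravolume 1) dissects into 24 A modules, each A, B, or T module carries a tetravolume of 1/24, and the module counts fix each shape's volume. The sketch below restates the counts and derives the volumes; the rows for the rhombic triacontahedron and rhombic dodecahedron are inferred from the 120-T and 96-A counts rather than spelled out in the extracted table:

```python
from fractions import Fraction

# (A, B, T) module counts per shape, restating the table above
modules = {
    "MITE":                    (2, 1, 0),
    "Tetrahedron":             (24, 0, 0),
    "Coupler":                 (16, 8, 0),
    "Duo-Tet Cube":            (48, 24, 0),
    "Octahedron":              (48, 48, 0),
    "Rhombic Triacontahedron": (0, 0, 120),   # inferred row
    "Rhombic Dodecahedron":    (96, 48, 0),   # inferred row
}

# each module is 1/24 tetravolume, so volume = (A + B + T) / 24
volumes = {name: Fraction(sum(counts), 24) for name, counts in modules.items()}
print(volumes["Octahedron"])  # 4
```

The derived volumes (MITE 1/8, Tetrahedron 1, Coupler 1, Duo-Tet Cube 3, Octahedron 4) agree with the hierarchy described in the surrounding text.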
A & B modules
Corresponding to Fuller's use of a regular tetrahedron as his unit of volume was his replacement of the cube as his model of third powering. (Fig. 990.01 [14]) The relative size of a shape was indexed by its "frequency," a term he deliberately chose for its resonance with scientific meanings. "Size and time are synonymous. Frequency and size are the same phenomenon." (528.00 [15]) Shapes not having any size, because purely conceptual in the Platonic sense, were "prefrequency" or "subfrequency" in contrast. "Prime means sizeless, timeless, subfrequency. Prime is prehierarchical. Prime is prefrequency. Prime is generalized, a metaphysical conceptualization experience, not a special case...." (1071.10 [16]) Generalized principles (scientific laws), although communicated energetically, did not inhere in the "special case" episodes and were considered "metaphysical" in that sense. "An energy event is always special case. Whenever we have experienced energy, we have special case. The physicist's first definition of physical is that it is an experience that is extracorporeally, remotely, instrumentally apprehensible. Metaphysical includes all the experiences that are excluded by the definition of physical. Metaphysical is always generalized principle." (1075.11 [16]) Tetrahedral mensuration also involved substituting what Fuller called the "isotropic vector matrix" (IVM) for the standard XYZ coordinate system, as his principal conceptual backdrop for special case physicality: "The synergetics coordinate system -- in contradistinction to the XYZ coordinate system -- is linearly referenced to the unit-vector-length edges of the regular tetrahedron, each of whose six unit vector edges occur in the isotropic vector matrix as the diagonals of the cube's six faces."
(986.203 [17]) The IVM scaffolding or skeletal framework was defined by cubic closest packed spheres (CCP), alternatively known as the FCC or face-centered cubic lattice, or as the octet truss in architecture (on which Fuller held a patent). The space-filling complementary tetrahedra and octahedra characterizing this matrix had prefrequency volumes 1 and 4 respectively (see above). A third consequence of switching to tetrahedral mensuration was Fuller's review of the standard "dimension" concept. Whereas "height, width and depth" have been promulgated as three distinct dimensions within the Euclidean context, each with its own independence, Fuller considered the tetrahedron a minimal starting point for spatial cognition. His use of "4D" was in many passages close to synonymous with the ordinary meaning of "3D," with the dimensions of physicality (time, mass) considered additional dimensions. Geometers and "schooled" people speak of length, breadth, and height as constituting a hierarchy of three independent dimensional states -- "one-dimensional," "two-dimensional," and "three-dimensional" -- which can be conjoined like building blocks. But length, breadth, and height simply do not exist independently of one another nor independently of all the inherent characteristics of all systems and of all systems' inherent complex of interrelationships with Scenario Universe.... All conceptual consideration is inherently four-dimensional. Thus the primitive is a priori four-dimensional, being always comprised of the four planes of reference of the tetrahedron. There can never be any less than four primitive dimensions. Any one of the stars or point-to-able "points" is a system-ultratunable,
tunable, or infratunable, but inherently four-dimensional. (527.702 [18], 527.712 [19]) Synergetics did not aim to replace or invalidate pre-existing geometry or mathematics; it was designed to carve out a namespace and serve as a glue language providing a new source of insights.
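The isotropic vector matrix described above is the FCC (cubic closest packing) arrangement, in which every sphere touches twelve neighbors, the cuboctahedral configuration Fuller identified with the vector equilibrium. A minimal sketch, using the standard characterization of FCC lattice points as integer triples with even coordinate sum (a convention of this illustration, not Fuller's notation):

```python
from itertools import product

def fcc_points(n):
    """All FCC / CCP lattice points with coordinates in [-n, n]:
    integer triples whose coordinates sum to an even number."""
    return [(x, y, z) for x, y, z in product(range(-n, n + 1), repeat=3)
            if (x + y + z) % 2 == 0]

def nearest_neighbors(p, points):
    """Lattice points in the nearest shell (squared distance 2) around p."""
    return [q for q in points
            if sum((a - b) ** 2 for a, b in zip(p, q)) == 2]

pts = fcc_points(2)
print(len(nearest_neighbors((0, 0, 0), pts)))  # 12
```

The twelve nearest neighbors of the origin are the permutations of (+-1, +-1, 0), i.e. the vertices of a cuboctahedron.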
An Intuitive Geometry
Fuller took an intuitive approach to his studies, often going into exhaustive empirical detail while at the same time seeking to cast his findings in their most general philosophical context. For example, his sphere packing studies led him to generalize a formula for polyhedral numbers: 2PF^2 + 2, where F stands for "frequency" (the number of intervals between balls along an edge) and P for a product of low-order primes (some integer). He then related the "multiplicative 2" and "additive 2" in this formula to the convex versus concave aspects of shapes, and to their polar spinnability respectively.
These same polyhedra, developed through sphere packing and related by tetrahedral mensuration, he then spun around their various poles to form great circle networks and corresponding triangular tiles on the surface of a sphere. He exhaustively cataloged the central and surface angles of these spherical triangles and their related chord factors.
Fuller was continually on the lookout for ways to connect the dots, often purely speculatively. As an example of "dot connecting," he sought to relate the 120 basic disequilibrium LCD triangles of the spherical icosahedron to the plane net of his A module. (915.11 [27], Fig. 913.01 [28], Table 905.65 [29])
The Jitterbug Transformation provided a unifying dynamic in this work, with much significance attached to the doubling and quadrupling of edges that occurred when a cuboctahedron is collapsed through icosahedral, octahedral and tetrahedral stages, then inside-outed and re-expanded in a complementary fashion. The JT formed a bridge between the 3- and 4-fold rotationally symmetric shapes and the 5-fold family, such as the rhombic triacontahedron, which he analyzed in terms of the T module, another tetrahedral wedge with the same volume as his A and B modules. He modeled energy transfer between systems by means of the double-edged octahedron and its ability to turn into a spiral (tetrahelix).
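Fuller's polyhedral-number formula can be checked numerically for the cuboctahedral case, where P = 5 gives the familiar shell counts 12, 42, 92, ... of spheres in successive layers of closest packing (a sketch; the function name is illustrative):

```python
def shell_count(F, P=5):
    """Fuller's generalized polyhedral-number formula 2*P*F**2 + 2:
    balls in the outer shell of a frequency-F packing.
    P = 5 corresponds to the cuboctahedron (and icosahedron)."""
    return 2 * P * F * F + 2

# successive cuboctahedral shells around a nuclear sphere
print([shell_count(F) for F in (1, 2, 3)])  # [12, 42, 92]
```

At F = 1 this recovers the 12 spheres that close-pack around one, the "12 around 1" of the vector equilibrium.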
Energy lost to one system always reappeared somewhere else in his Universe. He modeled a threshold between associative and disassociative energy patterns with his T-to-E module transformation ("E" for
"Einstein"). (Fig. 986.411A [30]) Synergetics is in some ways a library of potential "science cartoons" (scenarios) described in prose and not heavily dependent upon mathematical notations. His demystification of a gyroscope's behavior in terms of a hammer thrower, pea shooter, and garden hose is a good example of his commitment to using accessible metaphors. (Fig. 826.02A [31]) His modular dissection of a space-filling tetrahedron or MITE (minimum tetrahedron) into 2 A and 1 B modules served as a basis for more speculations about energy, the former being more energy-conservative, the latter more dissipative in his analysis. (986.422 [32], 921.20 [33], 921.30 [34]) His focus was reminiscent of later cellular automaton studies in that tessellating modules would affect their neighbors over successive time intervals.
Social Commentary
Synergetics informed Fuller's social analysis of the human condition. He identified "ephemeralization" as the trend towards accomplishing more with less physical resources, as a result of increasing comprehension of such "generalized principles" as E = mc^2. He remained concerned that humanity's conditioned reflexes were not keeping pace with its engineering potential, emphasizing the "touch and go" nature of our current predicament. Fuller hoped the streamlining effects of a more 60-degree-based approach within natural philosophy would help bridge the gap between C.P. Snow's "two cultures" and result in a greater level of scientific literacy in the general population. (935.24 [35])
Controversies
Fuller hoped to gain traction for his ideas and nomenclature by dedicating Synergetics to H.S.M. Coxeter (with permission) and by citing page 71 of the latter's Regular Polytopes to suggest where his A & B modules (depicted above) might enter the literature (see Fig. 950.12 [36]). Dr. Arthur Loeb provided a prologue and an appendix to Synergetics discussing its overlap with crystallography, chemistry and virology. However, few if any academic departments, outside of literature, have much tolerance for such an intuitive and exploratory approach, even with a track record of inventions and successes attached. Synergetics is difficult to pigeonhole and is not in the style of any currently practiced discipline. E.J. Applewhite, Fuller's chief collaborator on Synergetics, related it to Edgar Allan Poe's Eureka: A Prose Poem, in terms of its being a metaphysical work. Fuller might have had more of an audience back in some Renaissance period, when natural philosophers still had an appetite for Neo-Platonism.
Errata
A two-volume work of this size is bound to contain some outright mistakes. A major bug Fuller himself caught involves a misapplication of his Synergetics Constant in Synergetics 1, which led him to believe he had discovered a radius-1 sphere of 5 tetravolumes. He provides a patch in Synergetics 2 in the form of his T&E module thread. (986.206 - 986.212 [37])
About Synergy
Synergetics refers to synergy: either the behavior of a whole system unforeseen by the simple sum of the outputs of each of its parts, or, in a less common usage, another term for negative entropy (negentropy).
See also
Octet Truss
Geodesic Dome
Dymaxion House
Tensegrity
Cloud Nine
Synergetics coordinates
Quadray coordinates
References
R. Buckminster Fuller (in collaboration with E.J. Applewhite), Synergetics: Explorations in the Geometry of Thinking [38], online edition hosted by R. W. Gray with permission [39], originally published by Macmillan [40]: Vol. 1 in 1975 (with a preface and contribution by Arthur L. Loeb; ISBN 002541870X) and Vol. 2 in 1979 (ISBN 0025418807), as two hard-bound volumes, with re-editions in paperback.
Amy Edmondson, A Fuller Explanation [41], EmergentWorld LLC, 2007.
External links
Complete On-Line Edition of Fuller's Synergetics [38]
WNET: Synergetics by E.J. Applewhite [42]
Synergetics 101 [43], video of Joe Clinton at RISD 2007
What is Synergetics? [44] at the Buckminster Fuller Institute [45]
A Fuller Explanation: The Synergetic Geometry of R. Buckminster Fuller [41]
Cheryl Clark's PhD thesis "12 Degrees of Freedom" [46]
Synergetics section of the Buckminster Fuller FAQ [47]
Synergetics on the Web [48]
CJ Fearnley, Reading Synergetics: Some Tips [49]
Synergetics Collaborative [50]
References
[1] Synergetics, http://www.rwgrayprojects.com/synergetics/synergetics.html
[2] Fuller, R. Buckminster (1963). No More Secondhand God. Carbondale and Edwardsville. pp. 118-163.
[3] CJ Fearnley, Presentation to the American Mathematical Society (AMS) 2008 Spring Eastern Meeting (http://www.cjfearnley.com/folding.great.circles.2008.pdf), p. 6. Retrieved on 2010-01-26.
[4] Synergetics, Sec. 200.01-203.07 (http://www.rwgrayprojects.com/synergetics/s02/p0000.html)
[5] http://www.rwgrayprojects.com/synergetics/intro/well.html
[6] http://www.rwgrayprojects.com/synergetics/s04/p1000.html#410.01
[7] http://www.rwgrayprojects.com/synergetics/s08/p0000.html
[8] Fuller, R. Buckminster (1963). No More Secondhand God. Carbondale and Edwardsville. pp. 118-163.
[9] Edmondson, Amy C. (1987). A Fuller Explanation: The Synergetic Geometry of R. Buckminster Fuller. Boston: Birkhauser. p. ix. ISBN 0-8176-3338-3.
[10] Cheryl Clark, 12 Degrees of Freedom, Ph.D. Thesis, p. xiv (http://www.doinglife.com/12FreedomPDFs/Ib_AbstractLitReview.pdf)
[11] Synergetics, Sec. 251.50 (http://www.rwgrayprojects.com/synergetics/s02/p0000.html)
[12] Edmondson 1987, pp. ix-x
[13] Clark, p. xiv
[14] http://www.rwgrayprojects.com/synergetics/s09/figs/f9001.html
[15] http://www.rwgrayprojects.com/synergetics/s05/p2800.html#528.00
[16] http://www.rwgrayprojects.com/synergetics/s10/p7000.html#1075.11
[17] http://www.rwgrayprojects.com/synergetics/s09/p86200.html#986.203
[18] http://www.rwgrayprojects.com/synergetics/s05/p2700.html#527.702
Synergetics (Haken)
This article is about a school of thought initiated by Hermann Haken. For other uses, see Synergetics (disambiguation).
Synergetics is an interdisciplinary science explaining the formation and self-organization of patterns and structures in open systems far from thermodynamic equilibrium. It was founded by Hermann Haken, inspired by laser theory. Self-organization requires a 'macroscopic' system consisting of many nonlinearly interacting subsystems, and takes place depending on the external control parameters (environment, energy fluxes).
Order-parameter concept
Essential in synergetics is the order-parameter concept, originally introduced in the Ginzburg-Landau theory to describe phase transitions in thermodynamics. Haken generalized the order-parameter concept to the "enslaving principle," which says that the dynamics of fast-relaxing (stable) modes is completely determined by the 'slow' dynamics of, as a rule, only a few 'order parameters' (unstable modes). The order parameters can be interpreted as the amplitudes of the unstable modes determining the macroscopic pattern. As a consequence, self-organization means an enormous reduction of the degrees of freedom (entropy) of the system, which macroscopically reveals itself as an increase of 'order' (pattern formation). This far-reaching macroscopic order is independent of the details of the microscopic interactions of the subsystems, which supposedly explains the self-organization of patterns in so many different systems in physics, chemistry, biology and even social systems.
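The enslaving principle can be illustrated with a toy two-mode system: a strongly damped (stable) fast mode v driven by a slow order parameter u. After a brief transient, v simply tracks u squared, i.e. the fast degree of freedom is slaved to the slow one. This is an illustrative sketch, not Haken's laser equations:

```python
import math

# toy slaving: slow drive u(t), fast damped mode dv/dt = -gamma * (v - u**2)
gamma, dt = 50.0, 1e-3
v = 0.0
for step in range(20000):              # integrate 20 time units (Euler)
    t = step * dt
    u = math.sin(0.1 * t)              # slow order parameter
    v += dt * (-gamma * (v - u * u))   # fast mode relaxes toward u**2

# after the transient, the fast mode is enslaved: v ~ u**2
print(abs(v - u * u) < 1e-2)  # True
```

Adiabatic elimination of v (setting dv/dt to zero) gives v = u**2 directly, which is how the many fast modes are removed in favor of the few order parameters.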
In social systems
In management science, synergetics was first applied to deliberative structures by Stafford Beer, whose syntegration method is based so specifically on geodesic dome design that only fixed numbers of persons, determined by geodesic chord factors, can take part in the process at each deliberation stage. Beer's earlier work was briefly applied by the government of Salvador Allende in Chile in the early 1970s as Project Cybersyn, a portmanteau of "cybernetic synergy". The approach is applied today as a series of related management methods. All of these seek some macroscopic order of priorities by taking some path of integrating diverse positions or attitudes to some problem, making the synergetic assumption that priorities will converge under the constraint of viability. There are similar themes in the work of Jay Forrester and Donella Meadows, who sought leverage on social and management problems by seeking out an emerging macroscopic order. Under synergetic assumptions, this could often be reliably found by determining the points of greatest resistance to change by an older or inertial macroscopic order. The twelve leverage points of Meadows apply the order-parameter concept, but without assuming that lower-leverage points are "enslaved" to the higher-leverage ones. A similar view is expressed in the deep framing theory of linguist George Lakoff, in which basic conceptual metaphors partly, but do not completely, determine the actions of their users. As in all social sciences, conscious goals, choices, free will, self-interest and self-awareness prevent any control groups or strictly predictive models from applying to human problems as they do in the natural sciences. In Meadows' leverage model, the leverage of self-organization is explicitly below that of goal-setting, and much below that of mindsets and the ability to change them.
The synergetic assumptions apply mostly to the lower leverage factors, while the higher leverage factors follow principles more like Lakoff's. However, the basic relationship remains: fast-relaxing (stable) modes are at least partly determined or strongly biased by the 'slow' dynamics of only a few parameters. Lakoff argued in his Moral Politics that there could be as few as one basic metaphor (state as parent) determining a vast range of political choices and policy making patterns.
Literature
H. Haken: Synergetics, an Introduction: Nonequilibrium Phase Transitions and Self-Organization in Physics, Chemistry, and Biology, 3rd rev. enl. ed. New York: Springer-Verlag, 1983.
H. Haken: Advanced Synergetics: Instability Hierarchies of Self-Organizing Systems and Devices. New York: Springer-Verlag, 1993.
H. Haken: Synergetik. Springer-Verlag, Berlin Heidelberg New York, 1982, ISBN 3-8017-1686-4.
R. Graham, A. Wunderlin (eds.): Lasers and Synergetics. Springer-Verlag, Berlin Heidelberg New York, 1987, ISBN 3-540-17940-2.
Korotayev A., Malkov A., Khaltourina D.: Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS, 2006. ISBN 5-484-00414-4 [1].
See also
J. Willard Gibbs
Phase rule
Fokker-Planck equation
Ginzburg-Landau theory
Alexander Bogdanov
External links
Homepage of the former Institute for Theoretical Physics and Synergetics (IFTPUS) [2]
Center for Synergetics homepage [3]
References
[1] http://urss.ru/cgi-bin/db.pl?cp=&lang=en&blang=en&list=14&page=Book&id=34250
[2] http://itp1.uni-stuttgart.de/en/
[3] http://www.center-for-synergetics.de/
Synergism
Synergism, in general, may be defined as two or more agents working together to produce a result not obtainable by any of the agents independently. The word synergy or synergism comes from two Greek roots: ergon, meaning "work", and syn, meaning "together"; hence, synergism is a "working together."
Christian Theology
In Calvinism, which holds to monergism, "synergism" is used to describe the Arminian doctrine of salvation, although many Arminians would disagree with the characterisation. According to Calvinists, synergism is the view that God and man work together, each contributing their part to accomplish regeneration in and for the individual. John Hendryx, a Calvinist thinker, has stated it this way: synergism is "...the doctrine that there are two efficient agents in regeneration, namely the human will and the divine Spirit, which, in the strict sense of the term, cooperate. This theory accordingly holds that the soul has not lost in the fall all inclination toward holiness, nor all power to seek for it under the influence of ordinary motives."[1] Arminians, especially of the Wesleyan tradition, might respond with the criticism that Hendryx has merely provided a description of semi-Pelagianism, and they recognize that grace precedes any cooperation of the human soul with the saving power of God. In other words, God has offered salvation, and man must receive it. This is opposed to the monergistic view held by Reformed or Calvinistic groups, in which the objects of God's election participate in, but do not contribute to, the salvific or regenerative processes. Classical Arminians and most Wesleyans would consider this a straw-man description, as they have historically affirmed the Reformed doctrine of total depravity. To this, Hendryx replies by asking the following question: "If two persons receive prevenient grace and only one believes the gospel, why does one believe in Christ and not the other? What makes the two persons to differ? Jesus Christ or something else?" And that "something else" is why Calvinists believe Arminians and other non-Augustinian groups to be synergists. Regeneration, in this case, would occur only when the unregenerate will cooperates with God's Spirit to effectuate redemption. To the monergist, faith does not proceed from our unregenerate human nature.
If faith precedes regeneration, as it does in Arminianism, then the unregenerate person must exercise faith in order to be regenerated. However, it ought to be recognized that such a debate, concerning whether it is possible for an unregenerate will to cooperate with God's Spirit, is a Calvinist concept superimposed on Wesleyan theology. To answer such objections, we need to see what the doctrine of prevenient grace actually teaches. For Wesleyans are in agreement with the monergist; strictly speaking, at no time is it argued that faith proceeds from an unregenerate (that is, a totally natural or graceless) human nature. John Wesley expressed this himself, saying, "The will of man is by nature free only to evil. Yet... every man has a measure of free-will restored to him by grace."[2] "Natural free-will in the present state of mankind, I do not understand: I only assert, that there is a measure of free-will supernaturally restored to every man, together with that supernatural light which 'enlightens every man that comes into the world.'"[3] "This is not a statement about natural ability, or about nature as such working of itself, but about grace working through nature."[4] Synergism is also an important part of the theology of the Eastern Orthodox Church.
Biological Sciences
Synergism is also the name given to a hypothesis on how complex systems operate, advanced by Peter Corning.[5] Environmental systems may react in a non-linear way to perturbations, such as climate change, so that the outcome may be greater than the sum of the individual component alterations. Synergistic responses are a complicating factor in environmental modelling.[6]
See also
Arminianism
Decision theology
Eastern Orthodoxy
Monergism
Regeneration (theology)
Semi-Pelagianism
Soteriology
External links
Universal prevenient grace [7]
Prevenient Grace [8] by Jeff Paton
References
[1] What is Monergism? (http://www.monergism.com/what_is_monergism.php)
[2] "Some Remarks on Mr. Hill's Review" by John Wesley
[3] Predestination Calmly Considered (http://docs.google.com/gview?a=v&q=cache:8FHZ-rIUTw0J:evangelicalarminians.org/files/Wesley.%20PREDESTINATION%20CALMLY%20CONSIDERED.pdf+wesley+predestination+calmly+considered&hl=en&gl=us) by John Wesley
[4] John Wesley's Scriptural Christianity: A Plain Exposition of His Teaching on Christian Doctrine (1994) by Thomas Oden, chapter 8: "On Grace and Predestination", pp. 243-252 (ISBN 031075321X)
[5] Synergy and self-organization in the evolution of complex systems (http://www.complexsystems.org/publications/pdf/synselforg.pdf)
[6] Myers, N. Environmental Unknowns (1995)
Synergy
Synergy describes different entities cooperating advantageously for a final outcome. In a business context, it means that teamwork will produce an overall better result than if each person were working toward the same goal individually. The ancient Greek word syn-ergos signified a rudimentary idea of things 'working together'. The idea was refined by R. Buckminster Fuller, who analysed some of its implications more fully[1] and coined the term Synergetics;[2] it quite literally filled the space missing for an opposite of the concept of entropy, and hence was perhaps more of a 'discovery', etymologically speaking. Related definitions:
- A dynamic state in which combined action is favored over the difference of individual component actions.
- Behavior of whole systems unpredicted by the behavior of their parts taken separately, more accurately known as emergent behavior.
- The cooperative action of two or more stimuli (or drugs), resulting in a different or greater response than that of the individual stimuli.
Drug synergy
Drug synergism occurs when drugs interact in ways that enhance or magnify one or more effects, or side effects, of those drugs. This is sometimes exploited in combination preparations, such as codeine mixed with acetaminophen or ibuprofen to enhance the action of codeine as a pain reliever. It is also seen with recreational drugs, where 5-HTP, a serotonin precursor often used as an antidepressant, is sometimes taken prior to, during, and shortly after recreational use of MDMA, as it allegedly increases the "high" and decreases the "comedown" stages of MDMA use (although most anecdotal evidence has pointed to 5-HTP moderately muting the effect of MDMA). Other examples include the use of cannabis with LSD, where the active chemicals in cannabis enhance the hallucinatory experience of LSD use. Negative effects of synergy are a form of contraindication, which can arise, for instance, if more than one depressant drug affecting the central nervous system (CNS) is used, an example being alcohol and Valium. The combination can cause a greater reaction than simply the sum of the individual effects of each drug if they were used separately. In this particular case, the most serious consequence of drug synergy is exaggerated respiratory depression, which can be fatal if left untreated.
Pest synergy
Pest synergy would occur in a biological host organism population where, for example, the introduction of parasite A may cause 10% fatalities, and parasite B may also cause 10% loss. When both parasites are present, the losses would normally be expected to total less than 20%, yet in some cases, losses are significantly greater. In such cases it is said that the parasites in combination have a synergistic effect.
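The "less than 20%" baseline follows from treating the two parasites as independent risks: the probability of surviving both is the product of the individual survival probabilities. A quick check:

```python
# two parasites, each causing 10% loss on its own
loss_a, loss_b = 0.10, 0.10

# expected combined loss if the parasites act independently
expected_loss = 1 - (1 - loss_a) * (1 - loss_b)
print(round(expected_loss, 2))  # 0.19
```

The independent expectation is 19%, just under the naive 20% sum; observed losses significantly above this baseline indicate synergy between the parasites.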
Toxicological synergy
Toxicologic synergy is of concern to the public and to regulatory agencies because chemicals individually considered safe might pose unacceptable health or ecological risks in combination. Articles in scientific and lay journals include many definitions of chemical or toxicologic synergy, often vague or in conflict with each other. Because toxic interactions are defined relative to the expectation under "no interaction," a determination of synergy (or antagonism) depends on what is meant by "no interaction." The United States Environmental Protection Agency has one of the more detailed and precise definitions of toxic interaction, designed to facilitate risk assessment. In its guidance documents, the no-interaction default assumption is dose addition, so synergy means a mixture response that exceeds that predicted from dose addition. The EPA emphasizes that synergy does not always make a mixture dangerous, nor does antagonism always make a mixture safe; each depends on the predicted risk under dose addition.
For example, a consequence of pesticide use is the risk of health effects. During the registration of pesticides in the US, exhaustive tests are performed to discern health effects on humans at various exposure levels, and a regulatory upper limit on residues in food is then placed on each pesticide. As long as residues in the food stay below this regulatory level, health effects are deemed highly unlikely and the food is considered safe to consume. In normal agricultural practice, however, it is rare to use only a single pesticide; several different materials may be used during the production of a crop, each with its own regulatory level at which it is considered individually safe. In many cases, a commercial pesticide is itself a combination of several chemical agents, so the safe levels actually represent levels of the mixture. In contrast, combinations created by the end user, such as a farmer, are rarely tested as that combination, and the potential for synergy is then unknown or estimated from data on similar combinations. This lack of information also applies to many of the chemical combinations to which humans are exposed, including residues in food, indoor air contaminants, and occupational exposures to chemicals. Some groups think that rising rates of cancer, asthma, and other health problems may be caused by these combination exposures; others have other explanations. This question will likely be answered only after years of exposure by the population in general and research on chemical toxicity, usually performed on animals. Examples of pesticide synergists include piperonyl butoxide and MGK 264.[3]
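The dose-addition default can be sketched with a hazard-index style calculation (a simplified illustration of the additive baseline, not the EPA's full procedure; the function name and numbers are mine):

```python
def hazard_index(exposures, reference_doses):
    # Dose addition: scale each chemical's exposure by its individual
    # acceptable level and sum the ratios.  An index at or below 1 is
    # consistent with the additive "no interaction" baseline; synergy
    # means the mixture's observed effect exceeds what this additive
    # model predicts.
    return sum(e / rfd for e, rfd in zip(exposures, reference_doses))

# Two chemicals, each below its own limit, can still approach the
# additive threshold in combination.
hi = hazard_index([0.5, 0.2], [1.0, 1.0])  # 0.7
```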
Human synergy
Human synergy relates to interaction among humans. For example, say person A alone is too short to reach an apple on a tree, and person B is too short as well. Once person B sits on the shoulders of person A, they are more than tall enough to reach the apple; the product of their synergy is one apple. Another case would be two politicians: if each is able to gather one million votes alone, but together they can appeal to 2.5 million voters, their synergy has produced 500,000 more votes than if they had worked independently. A song is also a good example of human synergy, taking more than one musical part and putting them together to create a piece with a much more dramatic effect than any of the parts played individually. A third form of human synergy is when one person completes two separate tasks with one action: for example, a person asked by both a teacher and a boss at work to write an essay on how he could improve his work, or a drummer using four separate rhythms, one per limb, to create a single drum beat. Synergy usually arises when two persons with different complementary skills cooperate; a fundamental example is the cooperation of partners in a couple, and in business, people with organizational and technical skills cooperate very often. In general, the most common reason people cooperate is that it brings synergy; conversely, people tend to specialize precisely so as to be able to form groups with high synergy (see also division of labor and teamwork). An example is two system-administration teams working together, combining technical and organizational skills to improve the client experience.
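The additive baseline in these examples can be made explicit (a trivial sketch; the function name is mine):

```python
def synergy_gain(combined_result, *individual_results):
    # Positive -> synergy (the whole exceeds the sum of the parts);
    # zero -> purely additive; negative -> the combination underperforms.
    return combined_result - sum(individual_results)

extra_votes = synergy_gain(2_500_000, 1_000_000, 1_000_000)  # 500000
```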
Corporate synergy
Corporate synergy occurs when corporations interact congruently. A corporate synergy refers to a financial benefit that a corporation expects to realize when it merges with or acquires another corporation. This type of synergy is a nearly ubiquitous feature of a corporate acquisition and is a negotiating point between the buyer and seller that impacts the final price both parties agree to. There are three distinct types of corporate synergies:
Revenue
A revenue synergy refers to the opportunity of a combined corporate entity to generate more revenue than its two predecessor stand-alone companies would be able to generate. For example, if company A sells product X through its sales force, company B sells product Y, and company A buys company B, then the new company can use each salesperson to sell both products X and Y, thereby increasing the revenue each salesperson generates. In media, revenue synergy is the promotion and sale of a product throughout the various subsidiaries of a media conglomerate, e.g. films, soundtracks, or video games.[4]
Management
Synergy in terms of management, and in relation to team working, refers to the combined effort of individuals as participants of the team. Synergy can be positive or negative: positive synergy is the condition in which the organization's parts interact to produce a joint effect that is greater than the sum of the parts acting alone.
Cost
A cost synergy refers to the opportunity of a combined corporate entity to reduce or eliminate expenses associated with running a business. Cost synergies are realized by eliminating positions that are viewed as duplicated within the merged entity, for example the headquarters office of one of the predecessor companies, certain executives, the human resources department, or other employees of the predecessor companies. This is related to the economic concept of economies of scale.
Computers
Synergy can also be defined as the combination of human strengths and computer strengths, such as advanced chess. Computers can process data much more quickly than humans, but lack the ability to respond meaningfully to arbitrary stimuli.
Computer Games
Another obvious example of synergy is in video games. Role-playing games such as World of Warcraft and Dragon Age, and other party-based games such as Team Fortress 2, rely heavily on the idea of synergy. An individual playing alone will have difficulty with many of the quest requirements, so a team most often consists of a tank, a healer, a damage (DPS) dealer, and a crowd-control mage to complete quests effectively. In the case of Team Fortress 2, classes such as the Heavy, Spy, Engineer, Pyro, and Medic all work together to win a round; the mixture of offensive and defensive classes can be used to maximize a team's chances of victory.
See also
Synergetics
Synergism
Holism
Emergence
Perfect storm
Systems theory
Behavioral cusp
External links
Linden Dalecki on Media Synergy (2008) [5]
Key Principles [6]
Synergism Hypothesis [7]
Buckminster Fuller's definition of Synergy [8]
EPA Supplementary Guidance for Conducting Health Risk Assessment of Chemical Mixtures [9]
Synergy and Dysergy in Mereologic Geometries [10]
References
[1] SYNERGETICS: Explorations in the Geometry of Thinking by R. Buckminster Fuller (online version) (http://www.rwgrayprojects.com/synergetics/s01/p0100.html)
[2] Fuller, R. B. (1975), Synergetics: Explorations In The Geometry Of Thinking, in collaboration with E.J. Applewhite. Introduction and contribution by Arthur L. Loeb. Macmillan Publishing Company, Inc., New York.
[3] Pyrethroids and Pyrethrins (http://www.epa.gov/oppsrrd1/reevaluation/pyrethroids-pyrethrins.html), U.S. Environmental Protection Agency, epa.gov
[4] Campbell, Richard, Christopher R. Martin, and Bettina Fabos. Media & Culture 5: An Introduction to Mass Communication. Fifth Edition 2007 Update ed. Boston: Bedford/St. Martin's, 2007. 606.
[5] http://jimc.medill.northwestern.edu/JIMCWebsite/2008/HollywoodMediaSynergy.pdf
[6] http://www.humansynergy.com
[7] http://www.complexsystems.org/publications/synhypo.html
[8] http://www.rwgrayprojects.com/synergetics/s01/p0100.html
[9] http://cfpub.epa.gov/ncea/raf/recordisplay.cfm?deid=20533
[10] http://www.wikinfo.org/index.php/Synergy_and_Dysergy_in_Mereologic_Geometries
Systems thinking
Systems thinking is the process of understanding how things influence one another within a whole. In nature, examples of systems thinking include ecosystems, in which various elements such as air, water, movement, plants, and animals work together to survive or perish. In organizations, systems consist of people, structures, and processes that work together to make an organization healthy or unhealthy. Systems thinking has been defined as an approach to problem solving that views "problems" as parts of an overall system, rather than reacting to specific parts, outcomes, or events in a way that potentially contributes to the further development of unintended consequences. Systems thinking is not one thing but a set of habits or practices[1] within a framework based on the belief that the component parts of a system can best be understood in the context of relationships with each other and with other systems, rather than in isolation. Systems thinking focuses on cyclical rather than linear cause and effect. In systems science, it is argued that the only way to fully understand why a problem or element occurs and persists is to understand the parts in relation to the whole.[2] Standing in contrast to Descartes's scientific reductionism and philosophical analysis, it proposes to view systems in a holistic manner. Consistent with systems philosophy, systems thinking concerns an understanding of a system by examining the linkages and interactions between the elements that compose the entirety of the system. It attempts to illustrate how small catalytic events that are separated by distance and time can cause large changes in complex systems. Acknowledging that an improvement in one area of a system can adversely affect another area, it promotes organizational communication at all levels in order to avoid the silo effect.
Systems thinking techniques may be used to study any kind of system: natural, scientific, engineered, human, or conceptual.
Evolutionary systems
Béla H. Bánáthy developed a methodology that is applicable to the design of complex social systems. This technique integrates critical systems inquiry with soft systems methodologies. Evolutionary systems, similar to dynamic systems, are understood as open, complex systems, but with the capacity to evolve over time. Bánáthy uniquely integrated the interdisciplinary perspectives of systems research (including chaos, complexity, cybernetics), cultural anthropology, evolutionary theory, and others.
Differentiation - specialized units perform specialized functions
Equifinality - alternative ways of attaining the same objectives (convergence)
Multifinality - attaining alternative objectives from the same inputs (divergence)
Some examples: Rather than trying to improve the braking system on a car by looking in great detail at the material composition of the brake pads (reductionist), the boundary of the braking system may be extended to include the interactions between the:
brake disks or drums
brake pedal sensors
hydraulics
driver reaction time
tires
road conditions
weather conditions
time of day
Using the tenet of "multifinality", a supermarket could be considered to be:
a "profit making system" from the perspective of management and owners
a "distribution system" from the perspective of the suppliers
an "employment system" from the perspective of employees
a "materials supply system" from the perspective of customers
an "entertainment system" from the perspective of loiterers
a "social system" from the perspective of local residents
a "dating system" from the perspective of single customers
As a result of such thinking, new insights may be gained into how the supermarket works, why it has problems, how it can be improved or how changes made to one component of the system may impact the other components.
Applications
Systems thinking is increasingly being used to tackle a wide variety of subjects in fields such as computing, engineering, epidemiology, information science, health, manufacturing, management, and the environment. Some examples:
Organizational architecture
Job design
Team population and work unit design
Linear and complex process design
Supply chain design
Business continuity planning with FMEA protocol
Critical infrastructure protection via FBI InfraGard
Delphi method, developed by RAND for the USAF
Futures studies
Thought leadership mentoring
The public sector, including examples at The Systems Thinking Review [4]
Leadership development
Oceanography: forecasting complex systems behavior
Permaculture
Quality function deployment (QFD)
Quality management: Hoshin planning [5] methods
Quality storyboard: StoryTech framework (LeapfrogU-EE)
Software quality
Program management
Project management
MECE - the McKinsey Way
See also
Boundary critique
Crossdisciplinarity
Holistic management
Information Flow Diagram
Interdisciplinary
Multidisciplinary
Negative feedback
Soft systems methodology
Synergetics (Fuller)
System dynamics
Systematics - study of multi-term systems
Systemics
Systems engineering
Systems intelligence
Systems philosophy
Systems theory
Systems science
Transdisciplinary
Terms used in systems theory
External links
International Society for the Systems Sciences (ISSS) home page [8]
UK Systems Society [9]
The Systems Thinker newsletter glossary [10]
Dancing With Systems [11] from Project Worldview
Systems-thinking.de [12]: systems thinking links displayed as a network
Systems Thinking [13]
References
[1] http://www.watersfoundation.org/index.cfm?fuseaction=materials.main
[2] Capra, F. (1996) The web of life: a new scientific understanding of living systems (1st Anchor Books ed). New York: Anchor Books. p. 30
[3] Skyttner, Lars (2006). General Systems Theory: Problems, Perspective, Practice. World Scientific Publishing Company. ISBN 9-812-56467-5.
[4] http://www.thesystemsthinkingreview.co.uk/
[5] http://www.qualitydigest.com/may97/html/hoshin.html
[6] http://triarchypress.com/pages/Systems_Thinking_for_Curious_Managers.htm
[7] http://www.triarchypress.co.uk/pages/book5.htm
[8] http://isss.org/world/
[9] http://www.ukss.org.uk
[10] http://www.thesystemsthinker.com/systemsthinkinglearn.html
[11] http://www.projectworldview.org/wvtheme13.htm
[12] http://www.systems-thinking.de/
[13] http://www.thinking.net/Systems_Thinking/systems_thinking.html
Integral art
Integral art can be variously defined: as art that reaches across multiple quadrants and levels; as art that transcends and includes all limited forms, interpretations, or perspectives; as the belief that every human being is creative and that art is integral to all human endeavours; or simply as art created by someone who thinks or acts in an integral way.
A problem of definition
There is no one form of integral art, and although the term is most commonly applied in the Wilberian context, it is in no way limited to that paradigm or organisation. Integral art may equally derive from integral teachers like Sri Aurobindo and the Mother, or from other integral thinkers, or artists may simply have developed integral art independently.
Integral artists
As with Integral thought in general, any list of Integral artists will be controversial.
Saul Williams (b. 1972) is a hip-hop artist associated with the Integral Institute.
Wilber considers the Wachowski brothers' (b. 1965, 1967) Matrix films to convey important spiritual and philosophical truths. With philosopher Cornel West, Wilber created a DVD commentary track on the series, which is available in The Ultimate Matrix Collection.
Paul Lonely (b. 1978) is a mystical poet/artist (author of Suicide Dictionary) associated with the Integral Institute.
Adam Scott Miller (b. 1984) is a painter commonly categorized in the genres visionary, surreal, and fantastic realism. His work is strongly informed by integral aesthetics to embody characteristics of mystery, awe, and inspiration. He is a featured artist at Integral Naked.
Michael Garfield (b. 1984) is a songwriter, painter, and essayist once associated with the Integral Institute as a member of Ken Wilber's editorial team and as a featured performer at Integral Naked. Multi-perspectivism characterizes his work in all media. He has interviewed Wilber at length about integral art.
Joe Perez (b. 1969) is a writer/poet whose literary spiritual memoir, Soulfully Gay (2007), was published with a foreword by Ken Wilber.
External links
Integral Art [1] - Johannes Wallmann
Alchemies of the TransVisible [2] - Elle Nicolai
School of Integral Art [3]
Integrally-inspired paintings by Adam Scott Miller [4]
75-minute interview with Ken Wilber on Integral Art [5]
References
[1] http://www.integral-art.de/english/integralart.html
[2] http://www.ellenicolai.com
[3] http://www.sofia.net.au/
[4] http://www.adamscottmiller.com
[5] http://evolution.bandcamp.com/album/integral-art-mg-interviews-ken-wilber
Integral education
Integral education refers to educational theories or institutions which are informed by integral thought.
Development
In the teachings on education of Sri Aurobindo, and especially those of his co-worker The Mother, Integral Education is the philosophy and practice of education for the whole child: body, emotions, mind, soul, and spirit.[1][2] Several institutions attempt to use their teachings to inform educational methodology, including the Sri Aurobindo International Centre of Education and The Mother's International School. Haridas Chaudhuri, a follower of Sri Aurobindo and The Mother, and Frederic Spiegelberg founded the California Institute of Integral Studies in San Francisco in 1968.[3] Author Michael Murphy, who studied at the Sri Aurobindo Ashram in Pondicherry, India, founded the Esalen Institute with Dick Price in 1961. Ken Wilber's Integral University, a part of the Integral Institute, is a set of programs offered at established schools such as John F. Kennedy University and Fielding Graduate University. Sean Esbjörn-Hargens, whose work uses Wilber's ideas, has written about integral education.[4] Literary figure William Irwin Thompson and mathematician Ralph Abraham, whose ideas about the evolution of consciousness are influenced by, among others, Sri Aurobindo and The Mother, designed a curriculum for the private K-12 Ross School in East Hampton, New York, and the Ross Global Academy in New York City. Thompson wrote an essay in 1998 entitled "Cultural History and the Ethos of the Ross School"; he had also founded the Lindisfarne Association in 1972.
Further reading
Raghunath Pani, Integral Education: Thought and Practice [5], 1987, Ashish Pub. House, ISBN 8170241561. A comprehensive study of the new approach of the education policy of the Government of India, in comparison with the integral education of Aurobindo Ghose (1872-1950).
Indra Sen, Integral Education: In the Words of Sri Aurobindo and the Mother [6], 1952, Sri Aurobindo International University Centre.
Sri Aurobindo Institute of Research in Social Sciences, Integral Education Series: A New Approach to Education. Pondicherry: Sri Aurobindo Society, 1996.
See also
Saybrook Graduate School and Research Center
Waldorf education
Next Step Integral's Integral Education Seminar
References
[1] Collected Works of the Mother, Volume 12 - On Education, Sri Aurobindo Ashram Press, Pondicherry
[2] Sri Aurobindo and the Mother on Education, Sri Aurobindo Ashram Press, Pondicherry, 1986
[3] Ulansey, David. (2001). The Early History of the California Institute of Integral Studies (http://www.well.com/user/davidu/ciishistory.html).
[4] Esbjörn-Hargens, S. (2006). Integral Education By Design: How Integral Theory Informs Teaching, Learning, and Curriculum in a Graduate Program. ReVision 28 (3), p. 21-29
[5] http://books.google.com/books?id=Z7IcAAAAMAAJ&q=%22integral+education%22+aurobindo+OR+wilber&dq=%22integral+education%22+aurobindo+OR+wilber&lr=&ei=yaKQSKz4LobujgGuiNyICQ
[6] http://books.google.com/books?id=E5ZjGQAACAAJ&dq=%22integral+education%22+aurobindo+OR+wilber&lr=&ei=yaKQSKz4LobujgGuiNyICQ
Integral psychology
Integral psychology is psychology that presents an all-encompassing, holistic approach rather than an exclusivist or reductive one. It includes ordinary as well as spiritual or transcendent states of consciousness. Important writers in the field of integral psychology are Sri Aurobindo, Indra Sen, Haridas Chaudhuri, and Ken Wilber. While Sen closely follows Sri Aurobindo, Chaudhuri and Wilber each present very different theories.
Sri Aurobindo's yoga psychology has also been presented in a scientific and evolutionary context by Don Salmon and Jan Maslow.[7]
Haridas Chaudhuri
An original interpretation of integral psychology was proposed in the 1970s by Haridas Chaudhuri, who grouped together the three principles of uniqueness, relatedness, and transcendence, corresponding to the personal, interpersonal, and transpersonal domains of human existence.[8][9]
Ken Wilber
Like Sen, Ken Wilber wrote a book entitled Integral Psychology, in which he applies his integral model of consciousness to the psychological realm. This was the first book in which he embraced the Spiral Dynamics model of human development. In Integral Psychology, Wilber identifies an "integral stage of consciousness" which exhibits "...cognition of unity, holism, dynamic dialecticism, or universal integralism..."[10] Wilber's debt to Sri Aurobindo (despite their very different approaches) is evident in the foreword he wrote to a book on Aurobindonian integral psychology.[11] Wilber began working on the manuscript of a textbook for integral psychology in 1992, tentatively titled System, Self, and Structure, but was diverted because he felt the need to provide more detail on his integral philosophy in Sex, Ecology, Spirituality (1995). The textbook was finally published in 1999 as part of the Collected Works,[12] and then separately in 2000.[13] For Wilber, integral psychology is psychology that is inclusive or holistic rather than exclusivist or reductive. Multiple explanations of phenomena, rather than competing with each other for supremacy, are to be valued and integrated into a coherent overall view.[14][15]
Other interpretations
Bahman Shirazi of the California Institute of Integral Studies has defined integral psychology as "a psychological system concerned with exploring and understanding the totality of the human phenomenon....(which) at its breadth, covers the entire body-mind-psyche-spirit spectrum, while at its depth...encompasses the previously explored unconscious and the conscious dimensions of the psyche, as well as the supra-conscious dimension traditionally excluded from psychological inquiry".[16] Brant Cortright, also of the CIIS, explains integral psychology as born through the synthesis of Sri Aurobindo's teachings with the findings of depth psychology. He presents integral psychology as a synthesis of the two major streams of depth psychology, the humanistic-existential and the contemporary psychoanalytic, within an integrating east-west framework.[17]
See also
Integral psychology (Sri Aurobindo) Ken Wilber
External links
Integral psychology - Metaphors and processes of personal integration [18] - online copy of essay by Bahman Shirazi
References
[1] Patel, Aster, "The Presence of Dr Indra Senji", SABDA - Recent Publications, November 2003, pp. 9-12. PDF (http://sabda.sriaurobindoashram.org/pdf/news/nov2003.pdf)
[2] Indra Sen, Integral Psychology: The Psychological System of Sri Aurobindo, Pondicherry, India: Sri Aurobindo Ashram Trust, 1986
[3] N. V. Subbannachar, Social Psychology: The Integral Approach, Scientific Book Agency, 1966. This work was originally a doctoral thesis approved by the University of Mysore in 1959.
[4] V. Madhusudan Reddy, Integral Yoga Psychology: The Psychic Way to Human Growth and Human Potential, Institute of Human Study, Hyderabad, 1990
[5] Joseph Vrinte, The Quest for the Inner Man: Transpersonal Psychotherapy and Integral Sadhana, 1996
[6] Jyoti and Prem Sobel, The Hierarchy of Minds, Sri Aurobindo Ashram Trust, Pondicherry, 1984
[7] Don Salmon and Jan Maslow, Yoga Psychology and the Transformation of Consciousness: Seeing through the Eyes of Infinity, Paragon House, St Paul, MN, 2007. ISBN 1-55778-835-9
[8] Chaudhuri, Haridas. (1975). "Psychology: Humanistic and transpersonal". Journal of Humanistic Psychology, 15 (1), 7-15.
[9] Chaudhuri, Haridas. (1977). The Evolution of Integral Consciousness. Wheaton, Illinois: Quest Books. 1989 paperback reprint: ISBN 0-8356-0494-2
[10] Ken Wilber, The Collected Works of Ken Wilber: Integral Psychology, Transformations of Consciousness, Selected Essays, v. 4, p. 458. Shambhala, 1999
[11] Ken Wilber, Foreword to A. S. Dalal (ed.), A Greater Psychology - An Introduction to the Psychological Thought of Sri Aurobindo, Tarcher/Putnam, 2000.
[12] Collected Works of Ken Wilber, volume IV. ISBN 1-57062-504-2
[13] Ken Wilber, Integral Psychology: Consciousness, Spirit, Psychology, Therapy. Shambhala. ISBN 1-57062-554-9
[14] Wilber, K., 1997, An integral theory of consciousness (http://www.imprint.co.uk/Wilber.htm); Journal of Consciousness Studies, 4 (1), pp. 71-92
[15] Esbjörn-Hargens, S., & Wilber, K. (2008). Integral Psychology. In The Corsini Encyclopedia of Psychology, 4th Edition. New York: John Wiley and Sons.
[16] Shirazi, Bahman (2001). "Integral psychology, metaphors and processes of personal integration", in Cornelissen, Matthijs (ed.), Consciousness and Its Transformation, Pondicherry: SAICE
[17] Brant Cortright, Integral Psychology: Yoga, Growth, and Opening the Heart, SUNY, 2007. ISBN 0791470717
[18] http://ipi.org.in/texts/ip2/ip2-1.2-.htm
Integral yoga
Religious origins: Hinduism, Vedanta
Regional origins: Sri Aurobindo Ashram, India
Founding Gurus: Sri Aurobindo, The Mother
Mainstream popularity: millions, both in India and abroad
Practice emphases: integral transformation of the whole being, physical immortality
Derivative forms: none
Related schools: incorporates Karma, Jnana, Raja and Bhakti yoga
Other topics: Integral thought - The Synthesis of Yoga - Triple transformation - Psychicisation
In the teachings of Sri Aurobindo, integral yoga (or purna yoga, Sanskrit for "full" or "complete" yoga, sometimes also called supramental yoga) refers to the process of the union of all the parts of one's being with the Divine, and the transmutation of all of their jarring elements into a harmonious state of higher divine consciousness and existence. Sri Aurobindo's Integral Yoga should not be confused with the trademarked "Integral Yoga" of Swami Satchidananda. Sri Aurobindo defined integral yoga in the early 1900s as "a path of integral seeking of the Divine by which all that we are is in the end liberated out of the Ignorance and its undivine formations into a truth beyond the Mind, a truth not only of highest spiritual status but of a dynamic spiritual self-manifestation in the universe." He describes the nature and practice of integral yoga in his opus The Synthesis of Yoga. As the title of that work indicates, his integral yoga is a yoga of synthesis, intended to harmonize the paths of karma, jnana, and bhakti yoga as described in the Bhagavad Gita. It can also be considered a synthesis between Vedanta and Tantra, and even between Eastern and Western approaches to spirituality.
Textual sources
Sri Aurobindo and The Mother
Books
Collected Works Life Divine Synthesis of Yoga Savitri Agenda
Teachings
Involution/Evolution Integral education Integral psychology Integral yoga Intermediate zone Supermind
Places
Matrimandir Pondicherry
Communities
Sri Aurobindo Ashram Auroville
Disciples
Champaklal N.K.Gupta Amal Kiran Nirodbaran Pavitra M.P.Pandit Pranab A.B.Purani D.K.Roy Satprem Indra Sen Kapali Shastri
Arya, Mother India, Collaboration
The theory and practice of Integral Yoga is described in several works by Sri Aurobindo. His book The Synthesis of Yoga, the first version of which appeared in the Arya, was written as a practical guide, and covers all aspects of Integral Yoga. Additional and revised material is found in several of the later chapters of The Life Divine and in other works. Later, his replies to letters and queries by disciples (mostly written during the early 1930s) were collected into a series of volumes, the Letters on Yoga. There is also Sri Aurobindo's personal diary of his yogic experiences, written during the period from 1909 to 1927, and only published under the title Record of Yoga.
No definitive method
Sri Aurobindo and the Mother taught that surrendering to the 'higher' consciousness was one of the most important processes of the supramental yoga. There is no definitive method for every practitioner of the yoga, else it would not be an adventure. As to how the supramental consciousness would act and establish itself in Earthly life, both Sri Aurobindo and the Mother always explained that this is for the Divine to decide and to evolve with time. The Mother decided to take this work down to matter at the cellular level in the late 1960s.
The Inner Being
The Inner Being includes the inner realms or aspects of the physical, vital and mental being, which here have a larger, subtler, freer consciousness than that of the everyday consciousness, and its realisation is essential for any higher spiritual realisation.
Psychic Being
In Integral Yoga the goal is to move inward and discover the Psychic Being, which then can bring about a transformation of the outer nature. This transformation of the outer being or ego by the Psychic is called psychicisation; it is one of the three necessary stages, called the Triple transformation, in the realisation of the Supramental consciousness. This Psychic transformation is the decisive movement that enables a never-ending progress in life through the power of connecting to one's inner spirit or Divine Essence.
Triple Transformation
Introduction
The other major topic in Sri Aurobindo's integral yoga is the Triple transformation. This refers to the process through which reality is transformed into the divine, described in The Life Divine part 2, ch. 25, and Letters on Yoga part 4, section 1. The Triple Transformation begins with a two-fold movement of spiritual transformation: the inward psychicisation by which the sadhak gets in contact with the inner divine principle or Psychic Being, and the spiritual transformation or spiritualisation. The former represents the Inner Guide which is realised through the Heart; the latter can be compared to the traditional concept of Enlightenment in Vedantic, Buddhist, and popular guru teachings, and to the descriptions of the Causal and Ultimate stages of spiritual development in the evolutionary philosophy of the integral thinker Ken Wilber. For Sri Aurobindo, both these stages are equally necessary and important, as both serve as necessary prerequisites for the third and by far the most difficult element of change in the triple transformation, the Supramentalisation of the entire being. ...One must first acquire an inner Yogic consciousness and replace by it our ordinary view of things, natural movements, motives of life; one must revolutionise the whole present build of our being. Next, we have to go still deeper, discover our veiled psychic entity and in its light and under its government psychicise our inner and outer parts, turn mind-nature, life-nature, body-nature and all our mental, vital, physical action and states and movements into a conscious instrumentation of the soul. Afterwards or concurrently we have to spiritualise the being in its entirety by a descent of a divine Light, Force, Purity, Knowledge, freedom and wideness.
It is necessary to break down the limits of the personal mind, life and physicality, dissolve the ego, enter into the cosmic consciousness, realise the self, acquire a spiritualised and universalised mind and heart, life-force, physical consciousness. Then only the passage into the supramental consciousness begins to become possible, and even then there is a difficult ascent to make, each stage of which is a separate arduous achievement." -- Sri Aurobindo, The Synthesis of Yoga, pp. 281-2
Psychicisation
Psychicisation is one of the most essential stages of the integral yoga. As described in The Life Divine (book II, chapter 25), it refers to a spiritual movement inward, so that one realises the psychic being (the psychic personality or Divine Soul) in the core of one's being, enabling it to transform the outer being and to serve as a spiritual guide in the yoga. It is thanks to this Psychic transformation that the sadhak can avoid the pitfalls of the spiritual path, such as the intermediate zone. The three central spiritual methods here are consecration, moving to the depths (concentration), and surrender. Consecration is to open to the Force before engaging in an activity. Moving to the depths, or concentration, is a movement away from the surface existence to a deeper existence within. Surrender means offering all one's work, all one's life, to the Divine Force and Intent (Synthesis of Yoga, part I, ch. II-III; Letters on Yoga, vol. II, pp. 585ff, 3rd ed.). In connecting with the evolving divine soul within, the sadhak moves away from ego, ignorance, finiteness, and the limitations of the outer being. Psychicisation can serve as a precursor to spiritualisation (the equivalent of "Enlightenment"), although the two need not occur in any fixed order. However, both the psychic and the spiritual transformation are equally necessary for the final stage of Supramental transformation.
Spiritualisation
As a result of the Psychic transformation, light, peace and power are drawn down into the body, transforming all of its parts: physical, vital, and mental. This is the Spiritual transformation, or spiritualisation, which refers to the bringing down of the larger spiritual consciousness. The spiritual transformation in itself, however, is not sufficient to avoid the pitfalls of the spiritual path or to bring about supramentalisation. For that, the psychic transformation is needed as well.
Supramentalisation
Supramentalisation is the ultimate stage in the integral yoga. It refers to the bringing down of the Supramental consciousness, and the resulting transformation of the entire being. The supramental transformation is the final stage in the integral yoga, enabling the birth of a new individual fully formed by the supramental power. Such individuals would be the forerunners of a new truth-consciousness based supra-humanity. All aspects of division and ignorance of consciousness at the vital and mental levels would be overcome, replaced with a unity of consciousness at every plane, and even the physical body transformed and divinised. A new supramental species would then emerge, living a supramental, gnostic, divine life on earth. (The Life Divine book II ch.27-28)
"...duality; not only the salokya mukti by which the whole conscious existence dwells in the same status of being as the Divine, in the state of Sachchidananda; but also the acquisition of the divine nature by the transformation of this lower being into the human image of the divine, sadharmya mukti, and the complete and final release of all, the liberation of the consciousness from the transitory mould of the ego and its unification with the One Being, universal both in the world and the individual and transcendentally one both in the world and beyond all universe." -- Sri Aurobindo, Synthesis of Yoga, pp. 47-48
God Descends to the Mundane
Swami Ramakrishnananda, in his book Yoga Union with Reality, writes about the goal of integral yoga according to Sri Aurobindo's teaching: In his description of integral yoga, Sri Aurobindo refers to having God descend to the mundane. In Hassidism, it is said, "make Him a dwelling in the lower worlds". However, if we go deeper, we will discover that the descent of heaven to earth is the revelation that the division of the material from the spiritual, or above from below, exists only in our egoic perspective, and that nothing needs to come down, because heaven is much closer than we believe and God dwells within us. (Swami Ramakrishnananda, Yoga Union with Reality, chapter 1: Purna Yoga)[1]
Quotes
"The movement of nature is twofold: divine and undivine. The distinction is only for practical purposes since there is nothing that is not divine. The undivine nature, that which we are and must remain so long as the faith in us is not changed, acts through limitation and ignorance and culminates in the life of the ego; but the divine nature acts by unification and knowledge, and culminates in life divine. The passage from the lower to the higher may effect itself by the transformation of the lower and its elevation to the higher nature. It is this that must be the aim of an integral yoga." -- The Synthesis of Yoga What is the integral yoga? It is a way of complete God-realisation, a complete Self-realisation, a complete fulfillment of our being and consciousness, a complete transformation of our nature - and this implies a complete perfection of life here and not only a return to an eternal perfection elsewhere -- Sri Aurobindo Archives and Research, Dec 1982, p.197 "The method we have to pursue, then, is to put our whole conscious being into contact with the divine and to call him in to transform our entire being into his, so that in a sense god himself, the real person in us, becomes the sadhaka of the sadhana as well as the master of the yoga by whom the lower personality is used. " -- The Synthesis of Yoga All life is a Yoga of Nature seeking to manifest God within itself. Yoga marks the stage at which this effort becomes capable of self-awareness and therefore of right completion in the individual. It is a gathering up and concentration of the movements dispersed and loosely combined in the lower evolution." -- The Synthesis of Yoga p.47 The first word of the supramental Yoga is surrender; its last word also is surrender. 
It is by a will to give oneself to the eternal Divine, for lifting into the divine consciousness, for perfection, for transformation, that the Yoga begins; it is in the entire giving that it culminates; for it is only when the self-giving is complete that there comes the finality of the Yoga, the entire taking up into the supramental Divine, the perfection of the being, the transformation of the nature."
-- Sri Aurobindo, Seven drafts on Supramental Yoga [for "The Path"], from 1928-1929 to the late 1930s, as found on Bernard's Site for Sri Aurobindo and the Mother [2]
"...to do the integral yoga one must first resolve to surrender entirely to the Divine, there is no other way, this is the way. But after that one must have the five psychological virtues, five psychological perfections, and we say that the perfections are: 1. Sincerity or Transparency; 2. Faith or Trust (Trust in the Divine); 3. Devotion or Gratitude; 4. Courage or Inspiration; 5. Endurance or Perseverance." -- The Mother, Collected Works of the Mother, vol. 8, p. 42
See also
Integral thought Involution (Sri Aurobindo)
References
Sri Aurobindo, The Synthesis of Yoga, fifth edition, Sri Aurobindo Ashram Trust, 1999
Sri Aurobindo, Letters on Yoga, volumes 22, 23 and 24, Sri Aurobindo Ashram Trust, 1972
Anon., The Integral Yoga: Sri Aurobindo's Teaching and Method of Practice, Sri Aurobindo Ashram Trust, 1993
Anon., Glossary to the Record of Yoga [3]
Tulsidas Chatterjee, Sri Aurobindo's Integral Yoga, Aurobindo Ashram, Pondicherry, 1970
Morwenna Donnelly, Founding the Life Divine: An Introduction to the Integral Yoga of Sri Aurobindo, Hawthorn Books, 1956
Madhav Pundalik Pandit, Sri Aurobindo and His Yoga, Lotus Press, 1987, ISBN 0941524256
Madhav Pundalik Pandit, Dictionary of Sri Aurobindo's Yoga, Lotus Press, 1992, ISBN 0941524744
[1] Seven drafts on Supramental Yoga [for "The Path"], from 1928-1929 to the late 1930s, as found on Bernard's Site for Sri Aurobindo and the Mother. [2]
[2] ibid.
External links
The Integral Yoga of Sri Aurobindo and the Mother [4]
The University of Tomorrow [5] - online university offering courses in Sri Aurobindo's Integral Yoga and other related subjects
Core texts of the Integral Yoga [6] (as listed on the website of the Indian Psychology Institute)
Selections from The Synthesis of Yoga [7]
Sri Aurobindo's Teaching and Method of Sadhana [8]
Quotes on integral yoga from Sri Aurobindo's Synthesis of Yoga [9] [10] [11]
Integral yoga - Integral Wiki [12]
Integral Yoga Studies [13]
Patrizia Norelli-Bachelet on the Supramental and Integral Yoga of Sri Aurobindo and the Mother [14]
World Union
World Union is a non-profit, non-political organization founded on 26 November 1958 in Pondicherry, inspired by Sri Aurobindo's vision of carrying forward a movement for human unity, world peace and progress on a spiritual foundation. In this view, the ordinary humanitarian and religious outlook and motivation are inadequate to meet the demands of the New Age, which is already in the process of manifesting under the inevitable programme of the evolutionary nature on earth. Mirra Alfassa, known as The Mother, became the President of the first World Council of World Union on 20 August 1964, and since then 20 August has been celebrated as World Union Day. A. B. Patel was the leading figure in the organization for many years; he was succeeded in that role by M. P. Pandit and Samar Basu. The organization also publishes a quarterly journal of the same title, which draws inspiration from the teachings of Sri Aurobindo and The Mother, particularly from The Human Cycle and The Ideal of Human Unity.
At the instance of Surendra Mohan Ghosh, Anil Mukherjee and other prominent persons formed an association named New World Union on 26 November 1958 in Pondicherry. A. B. Patel joined in May 1959, and on 23 April 1960 the name was changed to World Union on The Mother's advice. The Mother became the President of the first World Council of World Union on 20 August 1964, with Surendra Mohan Ghosh as Chairman.
Executive Body of World Union: Samir Kanta Gupta, Chairman; Anjali Roy, Kittu Reddy, and Jagdish Gandhi, Vice-chairpersons; Sunil Behari Mohanty, General Secretary & Treasurer; Prakash S. Patel, Asst. General Secretary & Asst. Treasurer. Editorial Board: Samir Kanta Gupta, Kittu Reddy, Suresh Chandra De, and Sunil Behari Mohanty (Editor). Location: 52, Rue Debassyns de Richmont, Puducherry - 605002, India. Phone: (0413) 2334834. Web: www.worldunion.bravehost.com. Mail: [email protected]
Select articles from the Journal
Concept of Man in Sri Aurobindo, Indra Sen, July-Sept 1968
The Integral Culture of Man, Indra Sen, April-June 1970
Integral Philosophy and a New Civilization, Karl Heussenstamm, April-June 1971
Unity and Harmony in Practical Prospects, H. Maheswari, March 2006
Drive to Higher Integrations, Purnendu Prasad Bhattacharya, June 2006
A Free World-Union, in Sri Aurobindo, Social and Political Thought, volume 15
See also
World government World citizen World Service Authority Democratic World Federalists (a San Francisco-based civil society organization with supporters worldwide that advocates a democratic federal system of world government)
Contents
Articles
Holography Dennis Gabor Electron holography Yuri Denisyuk Emmett Leith Nicholas J. Phillips Sound recording Interference (wave propagation) Diffraction Diffraction grating Plane wave Point source Photographic plate Concave lens Beam splitter Speckle pattern Coherence (physics) Coherence length Holographic memory Security hologram Holographic interferometry Interferometric microscopy Holonomic brain theory Holographic principle Volume hologram Digital holography Digital planar holography Integral imaging Phase-coherent holography Australian Holographics Holographic data storage Computer generated holography Check weigher Frank DeFreitas
Hogel HoloVID Holographic screen Imagination Dead Imagine International Hologram Manufacturers Association Kinebar MIT Museum Rainbow hologram Reciprocity (photography) SeeReal Technologies X-ray fluorescence holography Holographic paradigm Membrane paradigm David Bohm Karl H. Pribram Aharonov-Bohm effect Bohm diffusion Bohm interpretation Correspondence principle Fractal cosmology EPR paradox Holomovement Self-similarity Implicate order Orch-OR Implicate and Explicate Order John Stewart Bell Debye sheath
Holography
Holography (from the Greek ὅλος holos, "whole", and γραφή graphē, "writing, drawing") is a technique that allows the light scattered from an object to be recorded and later reconstructed so that it appears as if the object is in the same position relative to the recording medium as it was when recorded. The image changes as the position and orientation of the viewing system changes in exactly the same way as if the object were still present, thus making the recorded image (hologram) appear three dimensional.
The technique of holography can also be used to optically store, retrieve, and process information. While holography is commonly used to display static 3-D pictures, it is not yet possible to generate arbitrary scenes by a holographic volumetric display.
Overview
Holography was discovered in 1947 by the Hungarian physicist Dennis Gabor (Hungarian name: Gábor Dénes) (1900-1979),[1] work for which he received the Nobel Prize in Physics in 1971. It was made possible by pioneering work in the field of physics by other scientists, such as Mieczysław Wolfke, who resolved technical issues that had previously made advancement impossible. The discovery was an unexpected result of research into improving electron microscopes at the British Thomson-Houston Company in Rugby, England, and the company filed a patent in December 1947 (patent GB685286). [Image: hologram artwork in the MIT Museum] The technique as originally invented is still used in electron microscopy, where it is known as electron holography, but holography as a light-optical technique did not really advance until the development of the laser in 1960.
The first holograms that recorded 3D objects were made in 1962 by Yuri Denisyuk in the Soviet Union[2] and by Emmett Leith and Juris Upatnieks at the University of Michigan, USA.[3] Advances in photochemical processing techniques to produce high-quality display holograms were achieved by Nicholas J. Phillips.[4]
Several types of holograms can be made. Transmission holograms, such as those produced by Leith and Upatnieks, are viewed by shining laser light through them and looking at the reconstructed image from the side of the hologram opposite the source. A later refinement, the "rainbow transmission" hologram, allows more convenient illumination by white light or other monochromatic sources rather than by lasers. Rainbow holograms are commonly seen today on credit cards as a security feature and on product packaging. These versions of the rainbow transmission hologram are commonly formed as surface relief patterns in a plastic film, and they incorporate a reflective aluminum coating
that provides the light from "behind" to reconstruct their imagery. Another kind of common hologram, the reflection or Denisyuk hologram, is capable of multicolour image reproduction using a white light illumination source on the same side of the hologram as the viewer. One of the most promising recent advances in the short history of holography has been the mass production of low-cost solid-state lasers, such as those found in millions of DVD recorders and used in other common applications, which are sometimes also useful for holography. These cheap, compact, solid-state lasers can, under some circumstances, compete well with the large, expensive gas lasers previously required to make holograms, and are already helping to make holography much more accessible to low-budget researchers, artists and dedicated hobbyists.
Theory
Though holography is often referred to as 3D photography, this is a misconception. A better analogy is sound recording, where the sound field is encoded in such a way that it can later be reproduced. In holography, some of the light scattered from an object or a set of objects falls on the recording medium. A second light beam, known as the reference beam, also illuminates the recording medium, so that interference occurs between the two beams. [Image: the holographic recording process] The resulting light field is an apparently random pattern of varying intensity, which is the hologram. It can be shown that if the hologram is illuminated by the original reference beam, a light field is diffracted by the reference beam which is identical to the light field which was scattered by the object or objects. Thus, someone looking into the hologram "sees" the objects even though they are no longer present. There are a variety of recording materials which can be used, including photographic film.
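As a minimal numeric sketch of this interference (all values below are assumed for illustration, not taken from the article): two coherent plane waves meeting at an angle produce fringes whose period is the wavelength divided by the sine of the angle between the beams, which can be checked directly:

```python
import numpy as np

wavelength = 633e-9                 # assumed He-Ne laser wavelength (m)
theta = np.deg2rad(5.0)             # assumed angle between the two beams
x = np.linspace(0, 50e-6, 100001)   # positions across the recording medium (m)
k = 2 * np.pi / wavelength

reference = np.exp(1j * k * np.sin(theta) * x)   # tilted reference beam
object_beam = np.ones_like(x, dtype=complex)     # normally incident object beam

# The recorded "hologram": intensity of the two superposed beams,
# which oscillates as 2 + 2*cos(k*sin(theta)*x).
intensity = np.abs(object_beam + reference) ** 2

# Locate the fringe maxima and measure their average separation.
expected_period = wavelength / np.sin(theta)
maxima = x[1:-1][(intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] > intensity[2:])]
measured_period = np.mean(np.diff(maxima))

assert abs(measured_period - expected_period) / expected_period < 1e-3
```

The fringe period here (about 7.3 micrometres) is what later acts as the diffraction grating when the plate is re-illuminated.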
The relative phase of the object and reference beams is encoded as the maxima and minima of the fringe pattern. When the photographic plate is developed, the fringe pattern acts as a diffraction grating, and when the reference beam is incident upon the photographic plate, it is partly diffracted into the same angle at which the original object beam was incident. Thus, the object beam has been reconstructed. The diffraction grating created by the two interfering waves has reconstructed the "object beam", and it is therefore a hologram as defined above.
Point sources
A slightly more complicated hologram can be made using a point source of light as object beam and a plane wave as reference beam to illuminate the photographic plate. An interference pattern is formed which in this case takes the form of curves of decreasing separation with increasing distance from the centre. Developing the photographic plate gives a complicated pattern which can be considered to be made up of a diffraction pattern of varying spacing. When the plate is illuminated by the reference beam alone, it is diffracted by the grating into different angles which depend on the local spacing of the pattern on the plate. It can be shown that the net effect of this is to reconstruct the object beam, so that it appears that light is coming from a point source behind the plate, even when the source has been removed. [Image: the holographic reconstruction process] The light emerging from the photographic plate is identical to the light that emerged from the point source that used to be there. An observer looking into the plate from the other side will "see" a point source of light whether the original source of light is there or not. This sort of hologram is effectively a concave lens, since it "converts" a plane wavefront into a divergent wavefront. It will also increase the divergence of any wave incident on it, in exactly the same way as a normal lens does.
Its focal length is the distance between the point source and the plate.
Complex objects
To record a hologram of a complex object, a laser beam is first split into two separate beams of light using a beam splitter of half-silvered glass or a birefringent material. One beam illuminates the object, which scatters it onto the recording medium. The second (reference) beam illuminates the recording medium directly. According to diffraction theory, each point in the object acts as a point source of light. Each of these point sources interferes with the reference beam, giving rise to an interference pattern. The resulting pattern is the sum of a large number (strictly speaking, an infinite number) of point-source-plus-reference-beam interference patterns. When the object is no longer present, the holographic plate is illuminated by the reference beam. Each point-source diffraction grating will diffract part of the reference beam to reconstruct the wavefront from its point source. These individual wavefronts add together to reconstruct the whole of the object beam.
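The point-source fringe pattern described above can be sketched numerically. In the paraxial approximation (an assumption, with the wavelength and source distance chosen for illustration), the phase difference between the plane reference wave and the spherical wave at radius r on the plate is k*r^2/(2f), so the bright fringes sit at radii r_n = sqrt(2*n*f*wavelength) and their separation shrinks outward:

```python
import numpy as np

wavelength = 633e-9   # assumed He-Ne laser wavelength (m)
f = 0.10              # assumed distance from point source to plate (m)
k = 2 * np.pi / wavelength

# Bright fringes occur where the paraxial phase difference k*r^2/(2f)
# is a whole multiple of 2*pi, i.e. at radii r_n = sqrt(2*n*f*wavelength).
n = np.arange(1, 20)
r_n = np.sqrt(2 * n * f * wavelength)

# The intensity |plane wave + spherical wave|^2 (unit amplitudes) is
# maximal (= 4) at exactly those radii...
intensity = 2 + 2 * np.cos(k * r_n**2 / (2 * f))
assert np.allclose(intensity, 4.0)

# ...and the separation between successive fringes decreases with radius,
# matching the "curves of decreasing separation" described in the text.
spacing = np.diff(r_n)
assert np.all(np.diff(spacing) < 0)
```

This ring pattern is the familiar zone-plate geometry, which is why the developed plate focuses light at the original source distance f.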
The viewer perceives a wavefront that is identical to the scattered wavefront of the object illuminated by the reference beam, so that it appears to him or her that the object is still in place. This image is known as a "virtual" image, as it is generated even though the object is no longer there. The direction of the light source seen illuminating the virtual image is that of the original illuminating beam. This explains, albeit in somewhat simple terms, how transmission holograms work. Other holograms, such as rainbow and Denisyuk holograms, are more complex but have similar principles.
Mathematical model
A light wave can be modeled by a complex number U which represents the electric or magnetic field of the light wave. The amplitude and phase of the light are represented by the absolute value and argument of the complex number. The object and reference waves at any point in the holographic system are given by U_O and U_R. The combined beam is given by U_O + U_R. The energy of the combined beams is proportional to the square of the magnitude of the electric wave:
|U_O + U_R|^2 = |U_O|^2 + |U_R|^2 + U_O U_R* + U_O* U_R
If a photographic plate is exposed to the two beams and then developed, its transmittance, T, is proportional to the light energy which was incident on the plate, and is given by
T = k |U_O + U_R|^2 = k (|U_O|^2 + |U_R|^2 + U_O U_R* + U_O* U_R)
where k is a constant. When the developed plate is illuminated by the reference beam, the light transmitted through the plate, U_H, is
U_H = T U_R = k U_O |U_R|^2 + k |U_R|^2 U_R + k |U_O|^2 U_R + k U_O* U_R^2
It can be seen that U_H has four terms. The first of these is kU_O, since U_R U_R* is equal to one (the reference beam being normalised to unit amplitude), and this is the reconstructed object beam. The second term represents the reference beam, whose amplitude has been modified by |U_R|^2. The third also represents the reference beam, with its amplitude modified by |U_O|^2; this modification will cause the reference beam to be diffracted around its central direction. The fourth term is known as the "conjugate object beam". It has the reverse curvature to the object beam itself and forms a real image of the object in the space beyond the holographic plate. Early holograms had both the object and reference beams illuminating the recording medium normally, which meant that all four beams emerging from the hologram were superimposed on one another. The off-axis hologram was developed by Leith and Upatnieks to overcome this problem. The object and reference beams are incident at well-separated angles onto the holographic recording medium, and the virtual, real and reference wavefronts all emerge at different angles, enabling the reconstructed object beam to be imaged clearly.
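The four-term expansion can be verified numerically. The sketch below is illustrative only: plane waves stand in for the scattered object light, and the angle, wavelength and amplitudes are assumed values. It records the transmittance, re-illuminates it with the reference beam, and checks that one of the four terms is exactly the original object wave:

```python
import numpy as np

x = np.linspace(-1e-3, 1e-3, 2001)   # sample positions across the plate (m)
wavelength = 633e-9                  # assumed He-Ne laser wavelength
k_wave = 2 * np.pi / wavelength

# Reference: unit-amplitude plane wave at a small off-axis angle (so |U_R| = 1).
theta = np.deg2rad(2.0)
U_R = np.exp(1j * k_wave * np.sin(theta) * x)

# Object: a weaker, normally incident plane wave stands in for scattered light.
U_O = 0.2 * np.ones_like(x, dtype=complex)

# Recording: transmittance proportional to incident energy (constant k = 1 here).
T = np.abs(U_O + U_R) ** 2

# Reconstruction: illuminate the developed plate with the reference beam alone.
U_H = T * U_R

# The four terms named in the text.
terms = (U_O * np.abs(U_R)**2,      # reconstructed object beam (= U_O, as |U_R| = 1)
         np.abs(U_R)**2 * U_R,      # reference beam, amplitude modified by |U_R|^2
         np.abs(U_O)**2 * U_R,      # reference beam, amplitude modified by |U_O|^2
         np.conj(U_O) * U_R**2)     # conjugate object beam (forms the real image)

assert np.allclose(U_H, sum(terms))   # the four terms sum to the transmitted field
assert np.allclose(terms[0], U_O)     # the first term reproduces the object wave
```

Because the reference beam here is off-axis, the four terms carry different spatial frequencies, which is exactly why the Leith-Upatnieks geometry separates them in angle.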
[Table of general properties of holographic recording materials: reusability, processing (wet or dry), hologram type, and maximum efficiency.]
[Image: a hologram on a Nokia mobile phone battery, intended to show that the battery is an "original Nokia" part and not a fake or an imitation.]
The first step in the embossing process is to make a stamper by electrodeposition of nickel on the relief image recorded on the photoresist or photothermoplastic. When the nickel layer is thick enough, it is separated from the master hologram and mounted on a metal backing plate. The material used to make embossed copies consists of a polyester base film, a resin separation layer and a thermoplastic film constituting the holographic layer. The embossing process can be carried out with a simple heated press. The bottom layer of the duplicating film (the thermoplastic layer) is heated above its softening point and pressed against the stamper so that it takes up its shape. This shape is retained when the film is cooled and removed from the press. In order to permit the viewing of embossed holograms in reflection, an additional reflecting layer of aluminum is usually added on the hologram recording layer. It is now possible to print holograms directly into steel using a sheet explosive charge to create the required surface relief.[9]
Applications
Data storage
Holography can be put to a variety of uses other than recording images. Holographic data storage is a technique that can store information at high density inside crystals or photopolymers. The ability to store large amounts of information in some kind of medium is of great importance, as many electronic products incorporate storage devices. As current storage techniques such as Blu-ray Disc reach the limit of possible data density (due to the diffraction-limited size of the writing beams), holographic storage has the potential to become the next generation of popular storage media. The advantage of this type of data storage is that the volume of the recording medium is used instead of just the surface. Currently available SLMs can produce about 1000 different images a second at 1024×1024-bit resolution. With the right type of medium (probably polymers rather than something like LiNbO3), this would result in about a 1 gigabit-per-second writing speed. Read speeds can surpass this, and experts believe 1-terabit-per-second readout is possible. In 2005, companies such as Optware and Maxell produced a 120 mm disc that uses a holographic layer to store data, with a potential capacity of 3.9 TB (terabytes), which they plan to market under the name Holographic Versatile Disc. Another company, InPhase Technologies, is developing a competing format. While many holographic data storage models have used "page-based" storage, where each recorded hologram holds a large amount of data, more recent research into using submicrometre-sized "microholograms" has resulted in several potential 3D optical data storage solutions. While this approach to data storage cannot attain the high data rates of page-based storage, the tolerances, technological hurdles, and cost of producing a commercial product are significantly lower.
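The writing-speed figure quoted above follows from simple arithmetic (a sanity check, not additional data):

```python
# 1000 pages per second, each page a 1024 x 1024 array of bits.
pages_per_second = 1000
bits_per_page = 1024 * 1024
bits_per_second = pages_per_second * bits_per_page

assert bits_per_second == 1_048_576_000          # ~1.05e9 bits per second
assert round(bits_per_second / 1e9, 2) == 1.05   # i.e. about 1 gigabit per second
```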
Security
Security holograms are very difficult to forge because they are replicated from a master hologram which requires expensive, specialized and technologically advanced equipment. They are used widely in many currencies such as the Brazilian real 20 note, British pound 5/10/20 notes, Canadian dollar 5/10/20/50/100 notes, Euro 5/10/20/50/100/200/500 notes, South Korean won 5000/10000/50000 notes, Japanese yen 5000/10000 notes, etc. They are also used in credit and bank cards as well as passports, books, DVDs, and sports equipment.
Art
Early on, artists saw the potential of holography as a medium and gained access to science laboratories to create their work. Holographic art is often the result of collaborations between scientists and artists, although some holographers would regard themselves as both artist and scientist. Salvador Dalí claimed to have been the first to employ holography artistically. He was certainly the first and best-known surrealist to do so, but the 1972 New York exhibit of Dalí holograms had been preceded by the holographic art exhibition held at the Cranbrook Academy of Art in Michigan in 1968 and by the one at the Finch College gallery in New York in 1970, which attracted national media attention.[10] During the 1970s a number of art studios and schools were established, each with its particular approach to holography. Notable among them were the San Francisco School of Holography established by Lloyd Cross, the Museum of Holography in New York founded by Rosemary (Possie) H. Jackson, the Royal College of Art in London, and the Lake Forest College symposiums organised by Tung Jeong (T.J.).[11] None of these studios still exist; however, there are the Center for the Holographic Arts in New York [12] and the HOLOcenter in Seoul [13], which offer artists a place to create and exhibit work. A small but active group of artists use holography as their main medium, and many more artists integrate holographic elements into their work.[14] The MIT Museum [15] and Jonathan Ross [16] both have extensive collections of holography and on-line catalogues of art holograms.
Hobbyist use
Since the beginning of holography, experimenters have explored its uses. Starting in 1971, Lloyd Cross opened the San Francisco School of Holography and began to teach amateurs the methods of making holograms with inexpensive equipment. This method relied on the use of a large table of deep sand to hold the optics rigid and to damp vibrations that would destroy the image. Many of these holographers would go on to produce art holograms. In 1983, Fred Unterseher published the Holography Handbook, a remarkably easy-to-read description of making holograms at home. This brought in a new wave of holographers and gave simple methods for using the then-available AGFA silver halide recording materials.
In 2000, Frank DeFreitas published the Shoebox Holography Book and introduced the use of inexpensive laser pointers to countless hobbyists. This was a very important development for amateurs, as the cost of a 5 mW laser dropped from $1200 to $5 when semiconductor laser diodes reached the mass market. There are now hundreds to thousands of amateur holographers worldwide. In 2006, a large number of surplus holography-quality green lasers (Coherent C315) became available and put dichromated gelatin (DCG) within the reach of the amateur holographer. The holography community was surprised by the sensitivity of DCG to green light; it had been assumed that the sensitivity would be nonexistent. Jeff Blyth responded with the G307 formulation of DCG to increase the speed and sensitivity to these new lasers.[17] Many film suppliers have come and gone from the silver-halide market. While new film manufacturers have filled in the voids, many amateurs are now making their own film. The favourite formulations are dichromated gelatin, methylene-blue-sensitised dichromated gelatin, and diffusion-method silver halide preparations. Jeff Blyth has published very accurate methods for making film in a small lab or garage.[18] A small group of amateurs are even constructing their own pulsed lasers to make holograms of moving objects.[19]
Holographic interferometry
Holographic interferometry (HI)[20] [21] is a technique which enables static and dynamic displacements of objects with optically rough surfaces to be measured to optical interferometric precision (i.e. to fractions of a wavelength of light). It can also be used to detect optical path length variations in transparent media, which enables, for example, fluid flow to be visualised and analysed. It can also be used to generate contours representing the form of the surface. It has been widely used to measure stress, strain, and vibration in engineering structures.
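The sub-wavelength sensitivity quoted above can be made concrete with a small, hedged sketch. Assuming the common double-exposure arrangement with collinear illumination and viewing (so an out-of-plane displacement d changes the optical path by 2d), each half-wavelength of surface motion adds one interference fringe:

```python
import numpy as np

wavelength = 633e-9   # assumed He-Ne laser wavelength (m)

def fringe_order(displacement, wavelength=wavelength):
    """Whole fringes produced by an out-of-plane displacement; collinear
    illumination and viewing assumed, so the path change is 2*displacement."""
    phase_change = 4 * np.pi * displacement / wavelength
    return phase_change / (2 * np.pi)

# One fringe per half-wavelength of motion: counting N fringes pins the
# displacement to N * wavelength / 2, a fraction of a micrometre here.
assert np.isclose(fringe_order(wavelength / 2), 1.0)
assert np.isclose(fringe_order(5 * wavelength / 2), 5.0)
```

Inverting this relation (displacement = fringe count times half the wavelength) is what lets the technique resolve deformations to fractions of a wavelength.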
Interferometric microscopy
The hologram keeps the information on the amplitude and phase of the field. Several holograms may keep information about the same distribution of light, emitted in various directions. The numerical analysis of such holograms allows one to emulate a large numerical aperture, which in turn enables enhancement of the resolution of optical microscopy. The corresponding technique is called interferometric microscopy. Recent achievements of interferometric microscopy allow one to approach the quarter-wavelength limit of resolution.[22]
Dynamic holography
In static holography, recording, developing and reconstructing occur sequentially, and a permanent hologram is produced. There also exist holographic materials which do not need the developing process and can record a hologram in a very short time. This allows one to use holography to perform some simple operations in an all-optical way. Examples of applications of such real-time holograms include phase-conjugate mirrors ("time-reversal" of light), optical cache memories, image processing (pattern recognition of time-varying images), and optical computing. The amount of processed information can be very high (terabits/s), since the operation is performed in parallel on a whole image. This compensates for the fact that the recording time, which is of the order of a microsecond, is still very long compared with the processing time of an electronic computer. The optical processing performed by a dynamic hologram is also much less flexible than electronic processing. On the one hand, the operation must always be performed on the whole image; on the other, the operation a hologram can perform is basically either a multiplication or a phase conjugation. In optics, however, addition and Fourier transform are already easily performed in linear materials, the latter simply by a lens. This enables some applications, such as a device that compares images in an optical way.[23] The search for novel nonlinear optical materials for dynamic holography is an active area of research. The most common materials are photorefractive crystals, but it has also been possible to generate holograms in semiconductors or semiconductor heterostructures (such as quantum wells), atomic vapors and gases, plasmas and even liquids. A particularly promising application is optical phase conjugation. It allows the removal of the wavefront distortions a light beam receives when passing through an aberrating medium, by sending it back through the same aberrating medium with a conjugated phase. This is useful, for example, in free-space optical communications to compensate for atmospheric turbulence (the phenomenon that gives rise to the twinkling of starlight).
Non-optical applications
In principle, it is possible to make a hologram for any wave.

Electron holography is the application of holography techniques to electron waves rather than light waves. Electron holography was invented by Dennis Gabor to improve the resolution and avoid the aberrations of the transmission electron microscope. Today it is commonly used to study electric and magnetic fields in thin films, as magnetic and electric fields can shift the phase of the interfering wave passing through the sample.[24] The principle of electron holography can also be applied to interference lithography.[25]

Acoustic holography is a method used to estimate the sound field near a source by measuring acoustic parameters away from the source via an array of pressure and/or particle velocity transducers. Measuring techniques included within acoustic holography are becoming increasingly popular in various fields, most notably those of transportation, vehicle and aircraft design, and NVH. The general idea of acoustic holography has led to different versions such as near-field acoustic holography (NAH) and statistically optimal near-field acoustic holography (SONAH). For audio rendition, wave field synthesis is the most closely related procedure.

Atomic holography has evolved out of the development of the basic elements of atom optics. With the Fresnel diffraction lens and atomic mirrors, atomic holography follows as a natural step in the development of the physics (and applications) of atomic beams. Recent developments, including atomic mirrors and especially ridged mirrors, have provided the tools necessary for the creation of atomic holograms,[26] although such holograms have not yet been commercialized.
Other applications
Holographic scanners are in use in post offices, larger shipping firms, and automated conveyor systems to determine the three-dimensional size of a package. They are often used in tandem with checkweighers to allow automated pre-packing of given volumes, such as a truck or pallet for bulk shipment of goods.
See also
Volume hologram
Digital holography
Digital planar holography
Holonomic brain theory
Integral imaging
Holographic principle
Tomography
List of emerging technologies
Phase-coherent holography
Australian Holographics
Holographic data storage
Computer generated holography
Dennis Gabor
The native form of this personal name is Gábor Dénes. This article uses the Western name order.

Born: 5 June 1900, Budapest, Hungary
Died: 9 February 1979 (aged 78), London, England
Fields: Electrical engineering
Institutions: Imperial College London; British Thomson-Houston
Alma mater: Technical University of Berlin; Technical University of Budapest
Known for: Invention of holography
Notable awards: Nobel Prize in Physics (1971); IEEE Medal of Honor (1970)
Dennis Gabor (original Hungarian name: Gábor Dénes) CBE, FRS (5 June 1900, Budapest – 9 February 1979, London) was a Hungarian electrical engineer and inventor, most notable for inventing holography, for which he later received the Nobel Prize in Physics.
Biography
He was born as Gábor Dénes,[1] in Budapest, Hungary.[2] He served with the Hungarian artillery in northern Italy during World War I.[2] He studied at the Technical University of Budapest from 1918, and later in Germany at the Charlottenburg Technical University in Berlin, now known as the Technical University of Berlin.[1] At the start of his career, he analyzed the properties of high-voltage electric transmission lines using cathode-beam oscillographs, which led to his interest in electron optics.[1] Studying the fundamental processes of the oscillograph, Gabor was led to other electron-beam devices such as electron microscopes and TV tubes. He eventually wrote his Ph.D. thesis on the cathode ray tube in 1927, and worked on plasma lamps.[1] Having fled from Nazi Germany in 1933, Gabor was invited to Britain to work at the development department of the British Thomson-Houston company in Rugby, Warwickshire. During his time in Rugby, he met Marjorie Butler, and they married in 1936. It was while working at British Thomson-Houston that he invented holography, in 1947. Gabor's research focused on electron inputs and outputs, which led him to the invention of holography.[1] The basic idea was that for perfect optical imaging, the total of all the information has to be used; not only the amplitude, as in usual optical imaging, but also the phase.
In this manner a complete holo-spatial picture can be obtained.[1] Gabor published his theories of holography in a series of papers between 1946 and 1951.[1] Gabor also researched how human beings communicate and hear; the result of his investigations was the theory of granular synthesis, although the Greek composer Iannis Xenakis claimed that he was actually the first inventor of this synthesis technique.[3] At the time Gabor developed holography, coherent light sources were not available, so the theory had to wait more than a decade until its first practical applications were realized, though he experimented with a heavily filtered mercury arc light source.[1] The invention in 1960 of the laser, the first coherent light source, was followed by the first laser-recorded hologram, in 1964, after which holography became commercially available.
In 1948 Gabor moved from Rugby to Imperial College London, and in 1958 he became Professor of Applied Physics, a post he held until his retirement in 1967. While spending much of his retirement in Italy, he remained connected with Imperial College as a Senior Research Fellow and also became Staff Scientist of CBS Laboratories in Stamford, Connecticut; there, he collaborated with his life-long friend, CBS Labs' president Dr. Peter C. Goldmark, on many new schemes of communication and display. One of Imperial College's new halls of residence in Prince's Gardens, Knightsbridge, is named Gabor Hall in honour of Gabor's contribution to Imperial College. He developed an interest in social analysis and published The Mature Society: a view of the future in 1972. Gabor wrote, "The best way to predict the future is to invent it." Following the rapid development of lasers and a wide variety of holographic applications (e.g. art, information storage, recognition of patterns), Gabor achieved acknowledged success and worldwide attention during his lifetime.[1] He received numerous awards besides the Nobel Prize.
Awards
1956 - Fellow of the Royal Society
1964 - Honorary Member of the Hungarian Academy of Sciences
1964 - D.Sc., University of London
1967 - Young Medal and Prize, for distinguished research in the field of optics
1967 - Columbus Award of the International Institute for Communications, Genoa
1968 - Albert Michelson Medal of The Franklin Institute, Philadelphia
1968 - Rumford Medal of the Royal Society
1970 - Honorary Doctorate, University of Southampton
1970 - Medal of Honor of the Institute of Electrical and Electronics Engineers
1970 - Commander of the Order of the British Empire (CBE)
1971 - Nobel Prize in Physics, for his invention and development of the holographic method
1971 - Honorary Doctorate, Delft University of Technology
1971 - Prix Holweck of the Société Française de Physique
Dennis-Gabor-Straße in Potsdam is named in his honor and is the location of the Potsdamer Centrum für Technologie.[4]
2009 - Imperial College London opens Gabor Hall, a hall of residence named in his honor
Electron holography
Electron holography is the application of holography techniques to electron waves rather than light waves.
Illumination source
Point-like field emission sources are the appropriate sources for coherent electron waves. Unlike optical sources, the wavelength is not fixed but can be readily selected by means of the applied voltage.
Beamsplitter
The coherent beam needs to be split into at least two beams for interference. This can be done by grating diffraction or by use of an electron biprism (essentially a narrow wire filament).
Electromagnetic fields
It is important to shield the interferometric system from electromagnetic fields, as they can induce unwanted phase shifts due to the Aharonov-Bohm effect. Static fields will result in a fixed shift of the interference pattern. Every component and sample must therefore be properly grounded and shielded from outside noise.
Applications
Electron holography was invented by Dennis Gabor to improve the resolution and avoid the aberrations of the transmission electron microscope. Today it is commonly used to study electric and magnetic fields in thin films, as magnetic and electric fields can shift the phase of the interfering wave passing through the sample.[1] The principle of electron holography can also be applied to interference lithography.[2]
References
[1] R. E. Dunin-Borkowski et al., Microsc. Res. Tech., vol. 64, pp. 390–402 (2004).
[2] K. Ogai et al., Jpn. J. Appl. Phys., vol. 32, pp. 5988–5992 (1993).
Yuri Denisyuk
Yuri Nikolaevich Denisyuk (Russian: Юрий Николаевич Денисюк; July 27, 1927, Sochi – May 14, 2006, Saint Petersburg) was a Soviet physicist known for his contribution to holography, in particular for the so-called "Denisyuk hologram".
External links
(Russian) Virtual Museum: Yuri Nikolaevich Denisyuk [1]
Yuri Denisyuk holding a self-portrait hologram.
References
[1] http://www.ifmo.ru/museum/?out=person&per_id=227&letter=196
Emmett Leith
Emmett Leith (March 12, 1927, Detroit, Michigan – December 23, 2005, Ann Arbor, Michigan) was a professor of electrical engineering at the University of Michigan and, with Juris Upatnieks of the University of Michigan, the co-inventor of three-dimensional holography. Leith received his B.S. in physics from Wayne State University in 1949 and his M.S. in physics in 1952. He received his Ph.D. in electrical engineering from Wayne State in 1978. Much of Leith's holographic work was an outgrowth of his research on synthetic aperture radar (SAR), performed while a member of the Radar Laboratory of the University of Michigan's Willow Run Laboratory beginning in 1953. Professor Leith and his coworker Juris Upatnieks displayed the world's first three-dimensional hologram at a conference of the Optical Society of America in 1964. He received the 1960 IEEE Morris N. Liebmann Memorial Award and the Ballantine Medal in 1969. In 1979, President Jimmy Carter awarded Leith the National Medal of Science for his research. He was awarded the 1985 Frederic Ives Medal by the OSA.[1]
Nicholas J. Phillips
Nick Phillips

Nicholas John Phillips wearing a holographic bolo tie, c. 2003.
Born: 26 September 1933, Finchley, London, United Kingdom
Died: 23 May 2009 (aged 75), Loughborough, United Kingdom
Nationality: British
Fields: Physics
Institutions: De Montfort University (DMU); Loughborough University (LUT); Sperry Rand Research Centre; English Electric; AWRE Aldermaston
Alma mater: Imperial College
Known for: Display holograms;[1] Phillips-Bjelkhagen Ultimate (PBU)[2]
Notable students: Derek Abbott
Nicholas (Nick) John Phillips (26 September 1933 – 23 May 2009) was an English physicist, notable for the development of photochemical processing techniques for the color hologram. Holograms typically used to have low signal-to-noise ratios, and Phillips is credited as the pioneer of silver halide holographic processing techniques for producing high-quality reflection holograms.
Career
Phillips graduated with a BSc degree in physics from Imperial College, London. He was a senior researcher at the Atomic Weapons Research Establishment (AWRE), Aldermaston, from 1959 to 1962, a research scientist at the Sperry Rand Research Centre, Sudbury, Massachusetts, USA, from 1962 to 1963, and a theoretical physicist at English Electric, Whetstone, Leicester, UK, from 1963 to 1965. From 1965 to 1993 he worked at Loughborough University, where he rose to Professor of Applied Optics. In October 1993, he was appointed Professor of Imaging Science at De Montfort University, Leicester, UK. Phillips was the co-founder in the early 1970s of Holoco, which, using lasers supplied by The Who (that had been used in laser light shows during their concerts), constructed the Light Fantastic exhibitions at the Royal Academy of Arts, London, in 1977–8. The company became Advanced Holographics in 1980 when The Who withdrew their financial backing, was based in Loughborough, UK,[3] and later became part of Markem Systems.[4]
Holographic Art
Phillips developed a technique for producing white light holograms that work in dim lighting conditions, which are now widely used in the world of holographic art.[7]
Awards
Phillips was awarded the Institute of Physics Thomas Young Medal (1981) in recognition of his contributions to holography, particularly the development of high-quality holograms for visual display. He was a Fellow of the Institute of Physics.
References
Dieter Jung, "Holographic space: A historical view and some personal experiences" [8], Leonardo, Vol. 22, No. 3/4, Holography as an Art Medium: Special Double Issue (1989), pp. 331–336.
Ed Wesly, "A toast to Nick Phillips" [9], Leonardo, Vol. 25, No. 5, Archives of Holography: A Partial View of a Three-Dimensional World: Special Issue (1992), pp. 439–442.
External links
A photographic collection of Phillips's holograms [10] Link to Phillips's company, Advanced Holographics [11] Memorial biography [12] Times obituary [13]
Interference of two circular waves. Absolute value snapshots of the (real-valued, scalar) wave field. Wavelength increasing from top to bottom, distance between wave centers increasing from left to right. The dark regions indicate destructive interference.
Theory
The principle of superposition of waves states that the resultant displacement at a point is equal to the vector sum of the displacements of different waves at that point. If a crest of a wave meets a crest of another wave at the same point then the crests interfere constructively and the resultant wave amplitude is increased. If a crest of a wave meets a trough of another wave then they interfere destructively, and the overall amplitude is decreased. This form of interference can occur whenever a wave can propagate from a source to a destination by two or more paths of different length. Two or more sources can only be used to produce interference when there is a fixed phase relation between them, but in this case the interference generated is the same as with a single source; see Huygens' principle.
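The superposition principle can be checked numerically. Below is a minimal sketch (the function name and values are illustrative, not from the text) adding two equal-amplitude scalar waves that reach a point by paths of different length:

```python
import math

def superpose(amplitude, wavelength, path1, path2):
    """Resultant displacement at a point reached by two equal-amplitude
    waves via paths of different length (scalar superposition)."""
    k = 2 * math.pi / wavelength          # wavenumber
    # Each wave contributes A*cos(k*path); the displacements simply add.
    return amplitude * math.cos(k * path1) + amplitude * math.cos(k * path2)

wavelength = 500e-9  # 500 nm green light

# Path difference of a whole wavelength: crest meets crest (constructive).
constructive = superpose(1.0, wavelength, 2e-6, 2e-6 + wavelength)

# Path difference of half a wavelength: crest meets trough (destructive).
destructive = superpose(1.0, wavelength, 2e-6, 2e-6 + wavelength / 2)

print(round(abs(constructive), 6))  # 2.0: double the single-wave displacement
print(round(abs(destructive), 6))   # 0.0: the waves cancel
```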
Chromatic interference is seen in sea foam, which is composed of plankton. It is an example of naturally occurring interference.
Experiments
Thomas Young's double-slit experiment shows the pattern created when two coherent beams of light interfere. The two beams have the same wavelength range and, at the center of the interference pattern, have the same phases at each wavelength as they both come from the same source.
Interference patterns
For two coherent sources, the spatial separation between sources is half the wavelength times the number of nodal lines. Light from any source can be used to obtain interference patterns, for example, Newton's rings can be produced with sunlight. However, in general white light is less suited for producing clear interference patterns, as it is a mix of a full spectrum of colours, that each have different spacing of the interference fringes. Sodium light is close to monochromatic and is thus more suitable for producing interference patterns. The most suitable is laser light because it is almost perfectly monochromatic.
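The wavelength dependence of fringe spacing can be made concrete with the textbook small-angle double-slit result, spacing ≈ λL/d (a standard formula not derived in the text above; the slit separation and screen distance below are illustrative):

```python
def fringe_spacing(wavelength, slit_separation, screen_distance):
    """Small-angle double-slit bright-fringe spacing: dy ~ lambda * L / d."""
    return wavelength * screen_distance / slit_separation

# Nearly monochromatic sodium light (589 nm), slits 0.2 mm apart, screen 1 m away:
dy_sodium = fringe_spacing(589e-9, 0.2e-3, 1.0)
print(f"{dy_sodium * 1e3:.2f} mm")  # 2.95 mm between bright fringes

# White light spans roughly 400-700 nm, so each colour has a different spacing,
# smearing the pattern after the first few fringes:
dy_blue = fringe_spacing(400e-9, 0.2e-3, 1.0)   # 2.0 mm
dy_red = fringe_spacing(700e-9, 0.2e-3, 1.0)    # 3.5 mm
```

This is why sodium or laser light gives many clean fringes while white light does not.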
Animation of interference of waves coming from two point sources.
Interference pattern produced with a Michelson interferometer. Bright bands are the result of constructive interference while the dark bands are the result of destructive interference.
Examples
A conceptually simple case of interference is a small (compared to wavelength) source, say, a small array of regularly spaced small sources (see diffraction grating). Consider the case of a flat boundary (say, between two media with different densities, or simply a flat mirror) onto which the plane wave is incident at some angle. In this case of a continuous distribution of sources, constructive interference occurs only in the specular direction, the direction at which the angle with the normal is exactly the same as the angle of incidence. This results in the law of reflection, which is simply the result of constructive interference of a plane wave on a plane surface.
In quantum mechanics, the state of a system can be written as a superposition

$|\psi\rangle = \sum_i c_i |x_i\rangle$,

where the $|x_i\rangle$ specify the different quantum "alternatives" available (technically, they form an eigenvector basis) and the $c_i$ are the probability amplitude coefficients, which are complex numbers. The probability of observing the system making a transition or quantum leap from a state $|\psi\rangle$ to a new state $|\varphi\rangle$ is the square of the modulus of the scalar or inner product of the two states:

$P_{\psi\to\varphi} = |\langle\varphi|\psi\rangle|^2$,

where $c_i = \langle x_i|\psi\rangle$. Now let us consider the situation classically and imagine that the system transited from $|\psi\rangle$ to $|\varphi\rangle$ via all the possible intermediate alternatives $|x_i\rangle$. Then we would classically expect the probability of the two-step transition to be the sum of the individual two-step probabilities:

$P^{\mathrm{cl}}_{\psi\to\varphi} = \sum_i |\langle\varphi|x_i\rangle|^2 \, |\langle x_i|\psi\rangle|^2$.

The quantum transition probability, by contrast, expands as

$P_{\psi\to\varphi} = \Big|\sum_i \langle\varphi|x_i\rangle\langle x_i|\psi\rangle\Big|^2 = \sum_i |\langle\varphi|x_i\rangle|^2 |\langle x_i|\psi\rangle|^2 + \sum_{i\neq j} \langle\varphi|x_i\rangle\langle x_i|\psi\rangle\langle\psi|x_j\rangle\langle x_j|\varphi\rangle$.

The classical and quantum derivations for the transition probability thus differ by the presence, in the quantum case, of the extra terms $\sum_{i\neq j} \langle\varphi|x_i\rangle\langle x_i|\psi\rangle\langle\psi|x_j\rangle\langle x_j|\varphi\rangle$; these extra quantum terms represent interference between the different intermediate "alternatives". They are consequently known as the quantum interference terms, or cross terms. This is a purely quantum effect and is a consequence of the non-additivity of the probabilities of quantum alternatives. The interference terms vanish, via the mechanism of quantum decoherence, if the intermediate state is measured or coupled with the environment.[1] [2]
See also
Active noise control
Beat (acoustics)
Coherence (physics)
Diffraction
Double-slit experiment
Haidinger fringes
Hong–Ou–Mandel effect
Interference lithography
Interferometer
List of types of interferometers
Lloyd's Mirror
Moiré pattern
Thin-film interference
Optical feedback
Retroreflector
External links
Expressions of position and fringe spacing [3]
Java demonstration of interference [4]
Java simulation of interference of water waves 1 [5]
Java simulation of interference of water waves 2 [6]
Flash animations demonstrating interference [7]
Lissajous Curves: Interactive simulation of graphical representations of musical intervals, beats, interference, vibrating strings [8]
References
[1] Wojciech H. Zurek, "Decoherence and the transition from quantum to classical", Physics Today, 44, pp. 36–44 (1991)
[2] Wojciech H. Zurek, "Decoherence, einselection, and the quantum origins of the classical", Reviews of Modern Physics, 75, 715 (2003); http://arxiv.org/abs/quant-ph/0105127
[3] http://www.citycollegiate.com/interference1.htm
[4] http://www.falstad.com/ripple/ex-2source.html
[5] http://www.phy.hk/wiki/englishhtm/Interference.htm
[6] http://www.phy.hk/wiki/englishhtm/Interference2.htm
[7] http://www.acoustics.salford.ac.uk/feschools/waves/super2.htm
[8] http://gerdbreitenbach.de/lissajous/lissajous.html
Diffraction
Diffraction refers to various phenomena which occur when a wave encounters an obstacle. It is described as the apparent bending of waves around small obstacles and the spreading out of waves past small openings. Similar effects are observed when light waves travel through a medium with a varying refractive index or a sound wave through one with varying acoustic impedance. Diffraction occurs with all waves, including sound waves, water waves, and electromagnetic waves such as visible light, x-rays and radio waves. As physical objects have wave-like properties (at the atomic level), diffraction also occurs with matter and can be studied according to the principles of quantum mechanics. While diffraction occurs whenever propagating waves encounter such changes,
its effects are generally most pronounced for waves where the wavelength is on the order of the size of the diffracting objects. If the obstructing object provides multiple, closely-spaced openings, a complex pattern of varying intensity can result. This is due to the superposition, or interference, of different parts of a wave that traveled to the observer by different paths (see diffraction grating). The formalism of diffraction can also describe the way in which waves of finite extent propagate in free space. For example, the expanding profile of a laser beam, the beam shape of a radar antenna and the field of view of an ultrasonic transducer are all explained by diffraction theory.
Colors seen in a spider web are partially due to diffraction, according to some analyses.[1]
Examples
The effects of diffraction can be regularly seen in everyday life. The most colorful examples of diffraction are those involving light; for example, the closely spaced tracks on a CD or DVD act as a diffraction grating to form the familiar rainbow pattern we see when looking at a disk. This principle can be extended to engineer a grating with a structure such that it will produce any diffraction pattern desired; the hologram on a credit card is an example. Diffraction in the atmosphere by small particles can cause a bright ring to be visible around a bright light source like the sun or the moon. A shadow of a solid object, using light from a compact source, shows small fringes near its edges. The speckle pattern which is observed when laser light falls on an optically rough surface is also a diffraction phenomenon. All these effects are a consequence of the fact that light propagates as a wave.
Solar glory in the steam from hot springs. A glory is an optical phenomenon produced by light backscattered (a combination of diffraction, reflection and refraction) towards its source by a cloud of uniformly-sized water droplets.
Diffraction can occur with any kind of wave. Ocean waves diffract around jetties and other obstacles. Sound waves can diffract around objects, which is why one can still hear someone calling even when hiding behind a tree.[2] Diffraction can also be a concern in some technical applications; it sets a fundamental limit to the resolution of a camera, telescope, or microscope.
History
The effects of diffraction of light were first carefully observed and characterized by Francesco Maria Grimaldi, who also coined the term diffraction, from the Latin diffringere, 'to break into pieces', referring to light breaking up into different directions. The results of Grimaldi's observations were published posthumously in 1665.[3] [4] [5] Isaac Newton studied these effects and attributed them to inflexion of light rays. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating to be discovered.[6] Thomas Young performed a celebrated experiment in 1803 demonstrating interference from two closely spaced slits; his sketch of two-slit diffraction was presented to the Royal Society in 1803.[7] Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves. Augustin-Jean Fresnel did more definitive studies and calculations of diffraction, made public in 1815[8] and 1818,[9] and thereby gave great support to the wave theory of light that had been advanced by Christiaan Huygens[10] and reinvigorated by Young, against Newton's particle theory.
The interference of these secondary waves produces a pattern of maxima and minima. The form of a diffraction pattern can be determined from the sum of the phases and amplitudes of the Huygens wavelets at each point in space. There are various analytical models which can be used to do this, including the Fraunhofer diffraction equation for the far field and the Fresnel diffraction equation for the near field. Most configurations cannot be solved analytically, but can yield numerical solutions through finite element and boundary element methods.
Diffraction systems
It is possible to obtain a qualitative understanding of many diffraction phenomena by considering how the relative phases of the individual secondary wave sources vary, and in particular, the conditions in which the phase difference equals half a cycle, in which case waves will cancel one another out. The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem. For water waves, this is already the case; water waves propagate only on the surface of the water. For light, we can often neglect one direction if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes, we will have to take into account the full three-dimensional nature of the problem. Some of the simpler cases of diffraction are considered below.
Single-slit diffraction
A long slit of infinitesimal width which is illuminated by light diffracts the light into a series of circular waves, and the wavefront which emerges from the slit is a cylindrical wave of uniform intensity. A slit which is wider than a wavelength has a large number of point sources spaced evenly across the width of the slit. The light at a given angle is made up of contributions from each of these point sources, and if the relative phases of these contributions vary by $2\pi$ or more, we expect to find minima and maxima in the diffracted light.
Numerical approximation of diffraction pattern from a slit of width four wavelengths with an incident plane wave. The main central beam, nulls, and phase reversals are apparent.
We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning. The light from a source located at the top edge of the slit interferes destructively with a source located at the middle of the slit, when the path difference between them is equal to $\lambda/2$. Similarly, the source just below the top of the slit will interfere destructively with the source located just below the middle of the slit at the same angle. We can continue this reasoning along the entire height of the slit to conclude that the condition for destructive interference for the entire slit is the same as the condition for destructive interference between two narrow slits a distance apart that is half the width of the slit. The path difference between these two sources is $\frac{d}{2}\sin\theta$, so that the minimum intensity occurs at an angle $\theta_{\min}$ given by

$d\,\sin\theta_{\min} = \lambda$,
where d is the width of the slit. A similar argument can be used to show that if we imagine the slit to be divided into four, six, eight parts, etc., minima are obtained at angles $\theta_n$ given by

$d\,\sin\theta_n = n\lambda$,

where n is an integer other than zero. There is no such simple argument to enable us to find the maxima of the diffraction pattern. The intensity profile can be calculated using the Fraunhofer diffraction integral as

$I(\theta) = I_0\,\mathrm{sinc}^2\!\left(\frac{d\sin\theta}{\lambda}\right)$,

where the sinc function is given by $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$ if $x \neq 0$, and $\mathrm{sinc}(0) = 1$. This analysis applies only to the far field, that is, at a distance much larger than the width of the slit.
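The minima condition and the sinc-squared intensity profile can be evaluated directly. A minimal sketch (function names are illustrative; the normalized convention sinc(x) = sin(πx)/(πx) is assumed), using a slit four wavelengths wide as in the figure:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def slit_intensity(theta, slit_width, wavelength, i0=1.0):
    """Fraunhofer single-slit intensity I(theta) = I0 * sinc^2(d*sin(theta)/lambda)."""
    return i0 * sinc(slit_width * math.sin(theta) / wavelength) ** 2

lam = 500e-9
d = 4 * lam            # slit four wavelengths wide

# First minimum where d*sin(theta) = lambda, i.e. theta = asin(lambda/d):
theta_min = math.asin(lam / d)
print(math.degrees(theta_min))           # ~14.48 degrees
print(slit_intensity(theta_min, d, lam)) # ~0: destructive interference
print(slit_intensity(0.0, d, lam))       # 1.0: central maximum
```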
Diffraction grating
A diffraction grating is an optical component with a regular pattern. The form of the light diffracted by a grating depends on the structure of the elements and the number of elements present, but all gratings have intensity maxima at angles $\theta_m$ which are given by the grating equation

$d\,(\sin\theta_m + \sin\theta_i) = m\lambda$,

where $\theta_i$ is the angle at which the light is incident, d is the separation of grating elements and m is an integer which can be positive or negative.
The light diffracted by a grating is found by summing the light diffracted from each of the elements, and is essentially a convolution of diffraction and interference patterns. The figure shows the light diffracted by 2-element and 5-element gratings where the grating spacings are the same; it can be seen that the maxima are in the same position, but the detailed structures of the intensities are different.
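The grating equation can be solved numerically for all propagating orders. A minimal sketch, assuming the convention d(sin θ_m + sin θ_i) = mλ; the 600 lines/mm grating and 500 nm wavelength are illustrative choices:

```python
import math

def grating_orders(spacing, wavelength, incidence_deg=0.0):
    """Return {m: angle_deg} for every order with a real diffraction angle,
    solving d*(sin(theta_m) + sin(theta_i)) = m*lambda."""
    sin_i = math.sin(math.radians(incidence_deg))
    orders = {}
    for step in (1, -1):           # walk positive then negative orders
        m = 0 if step == 1 else -1
        while True:
            s = m * wavelength / spacing - sin_i
            if abs(s) > 1:         # no further propagating orders this way
                break
            orders[m] = math.degrees(math.asin(s))
            m += step
    return orders

# 600 lines/mm grating (d ~ 1.67 micron), 500 nm light at normal incidence:
print(grating_orders(1e-3 / 600, 500e-9))
# orders m = -3..3 propagate; |m| = 4 would need |sin(theta)| = 1.2 > 1
```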
Computer-generated light diffraction pattern from a circular aperture of diameter 0.5 micron at a wavelength of 0.6 micron (red light) at distances of 0.1 cm to 1 cm in steps of 0.1 cm. One can see the image moving from the Fresnel region into the Fraunhofer region where the Airy pattern is seen.
The far-field diffraction pattern of a plane wave incident on a circular aperture is known as the Airy disk; the variation in intensity with angle is given by

$I(\theta) = I_0 \left(\frac{2 J_1(ka\sin\theta)}{ka\sin\theta}\right)^2$,

where a is the radius of the circular aperture, k is equal to $2\pi/\lambda$ and $J_1$ is a Bessel function. The smaller the aperture, the larger the spot size at a given distance, and the greater the divergence of the diffracted beams.
Diffraction-limited imaging
The ability of an imaging system to resolve detail is ultimately limited by diffraction. This is because a plane wave incident on a circular lens or mirror is diffracted as described above. The light is not focused to a point but forms an Airy disk having a central spot in the focal plane with radius to first null of

$d = 1.22\,\lambda N$,
The Airy disk around each of the stars from the 2.56m telescope aperture can be seen in this lucky image of the binary star zeta Botis.
where $\lambda$ is the wavelength of the light and N is the f-number (focal length divided by diameter) of the imaging optics. In object space, the corresponding angular resolution is

$\sin\theta = 1.22\,\frac{\lambda}{D}$,
where D is the diameter of the entrance pupil of the imaging lens (e.g., of a telescope's main mirror). Two point sources will each produce an Airy pattern see the photo of a binary star. As the point sources move closer together, the patterns will start to overlap, and ultimately they will merge to form a single pattern, in which case the two point sources cannot be resolved in the image. The Rayleigh criterion specifies that two point sources can be considered to be resolvable if the separation of the two images is at least the radius of the Airy disk, i.e. if the first minimum of one coincides with the maximum of the other. Thus, the larger the aperture of the lens, and the smaller the wavelength, the finer the resolution of an imaging system. This is why telescopes have very large lenses or mirrors, and why optical microscopes are limited in the detail which they can see.
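The Rayleigh criterion is easy to evaluate numerically. A minimal sketch, using the small-angle form θ ≈ 1.22 λ/D; the 2.56 m aperture matches the telescope mentioned in the caption, and 550 nm is an assumed mid-visible wavelength:

```python
import math

def rayleigh_angle(wavelength, aperture_diameter):
    """Minimum resolvable angular separation in radians: 1.22 * lambda / D."""
    return 1.22 * wavelength / aperture_diameter

# The 2.56 m telescope aperture from the caption, at 550 nm:
theta = rayleigh_angle(550e-9, 2.56)
arcsec = math.degrees(theta) * 3600
print(f"{arcsec:.3f} arcseconds")  # ~0.054 arcsec (ignoring atmospheric seeing)

# A 100 mm aperture is ~25x coarser, hence the appeal of large mirrors:
print(f"{math.degrees(rayleigh_angle(550e-9, 0.1)) * 3600:.2f} arcseconds")
```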
Speckle patterns
The speckle pattern which is seen when using a laser pointer is another diffraction phenomenon. It is a result of the superposition of many waves with different phases, which are produced when a laser beam illuminates a rough surface. They add together to give a resultant wave whose amplitude, and therefore intensity, varies randomly.
Particle diffraction
Quantum theory tells us that every particle exhibits wave properties. In particular, massive particles can interfere and therefore diffract. Diffraction of electrons and neutrons stood as one of the powerful arguments in favor of quantum mechanics. The wavelength associated with a particle is the de Broglie wavelength

$\lambda = \frac{h}{p}$,
The upper half of this image shows a diffraction pattern of He-Ne laser beam on an elliptic aperture. The lower half is its 2D Fourier transform approximately reconstructing the shape of the aperture.
where h is Planck's constant and p is the momentum of the particle (mass × velocity for slow-moving particles). For most macroscopic objects, this wavelength is so short that it is not meaningful to assign a wavelength to them. A sodium atom traveling at about 300 m/s would have a de Broglie wavelength of about 50 picometers. Because the wavelength for even the smallest of macroscopic objects is extremely small, diffraction of matter waves is only visible for small particles, like electrons, neutrons, atoms and small molecules. The short wavelength of these matter waves makes them ideally suited to study the atomic crystal structure of solids and large molecules like proteins. Relatively larger molecules like buckyballs have also been shown to diffract.[11]
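The de Broglie relation λ = h/p can be evaluated directly. A minimal sketch; the sodium mass (~23 atomic mass units) and the thermal speed of a few hundred m/s are illustrative values:

```python
H = 6.62607e-34  # Planck's constant, J*s
AMU = 1.66054e-27  # atomic mass unit, kg

def de_broglie_wavelength(mass_kg, speed):
    """lambda = h / p, with p = m*v for slow (non-relativistic) particles."""
    return H / (mass_kg * speed)

# Sodium atom (~23 u) at a thermal speed of about 300 m/s:
lam = de_broglie_wavelength(23 * AMU, 300.0)
print(f"{lam * 1e12:.1f} pm")  # a few tens of picometers

# A 1 g pellet at 10 m/s: the wavelength is absurdly small,
# which is why macroscopic objects show no visible diffraction.
print(de_broglie_wavelength(1e-3, 10.0))  # ~6.6e-32 m
```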
Bragg diffraction
Diffraction from a three dimensional periodic structure such as atoms in a crystal is called Bragg diffraction. It is similar to what occurs when waves are scattered from a diffraction grating. Bragg diffraction is a consequence of interference between waves reflecting from different crystal planes. The condition of constructive interference is given by Bragg's law:
Following Bragg's law, each dot (or reflection), in this diffraction pattern forms from the constructive interference of X-rays passing through a crystal. The data can be used to determine the crystal's atomic structure.
mλ = 2d sin θ, where λ is the wavelength, d is the distance between crystal planes, θ is the angle of the diffracted wave, and m is an integer known as the order of the diffracted beam. Bragg diffraction may be carried out using either light of very short wavelength, like X-rays, or matter waves like neutrons (and electrons) whose wavelength is on the order of (or much smaller than) the atomic spacing.[12] The pattern produced gives information about the separations of crystallographic planes d, allowing one to deduce the crystal structure. Diffraction contrast, in electron microscopes and X-ray topography devices in particular, is also a powerful tool for examining individual defects and local strain fields in crystals.
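Bragg's law can be inverted to find the reflection angle for a given plane spacing. This sketch solves mλ = 2d sin θ for θ; the Cu K-alpha wavelength and the 0.2 nm plane spacing are example values:

```python
import math

def bragg_angle_deg(wavelength_m, d_spacing_m, order=1):
    """Solve m*lambda = 2*d*sin(theta) for the Bragg angle theta, in degrees."""
    s = order * wavelength_m / (2 * d_spacing_m)
    if s > 1:
        raise ValueError("no order-%d reflection: m*lambda exceeds 2d" % order)
    return math.degrees(math.asin(s))

# Cu K-alpha X-rays (0.154 nm) on crystal planes spaced 0.2 nm apart:
theta1 = bragg_angle_deg(0.154e-9, 0.2e-9, order=1)  # about 22.6 degrees
```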
Coherence
The description of diffraction relies on the interference of waves emanating from the same source taking different paths to the same point on a screen. In this description, the difference in phase between waves that took different paths depends only on the effective path length. This does not take into account the fact that waves arriving at the screen at the same time were emitted by the source at different times. The initial phase with which the source emits waves can change over time in an unpredictable way. This means that waves emitted by the source at times that are too far apart can no longer form a constant interference pattern, since the relation between their phases is no longer time independent. The length over which the phase in a beam of light is correlated is called the coherence length. In order for interference to occur, the path length difference must be smaller than the coherence length. This is sometimes referred to as spectral coherence, as it is related to the presence of different frequency components in the wave. In the case of light emitted by an atomic transition, the coherence length is related to the lifetime of the excited state from which the atom made its transition.
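The lifetime–coherence connection can be estimated with the rule of thumb L_c ≈ c·τ. This is an order-of-magnitude sketch only; lineshape factors are ignored, and the lifetimes below are typical illustrative values, not figures from the text:

```python
C_LIGHT = 2.998e8  # speed of light, m/s

def coherence_length(lifetime_s):
    """Rough coherence-length estimate L_c ~ c * tau for light whose emitting
    state lives for time tau (numerical factors of order 1 are ignored)."""
    return C_LIGHT * lifetime_s

# A typical allowed atomic transition (tau ~ 10 ns): coherence length ~ 3 m.
l_atomic = coherence_length(10e-9)

# White light (effective tau of a few femtoseconds): under a micrometre.
l_white = coherence_length(3e-15)
```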
If waves are emitted from an extended source, this can lead to incoherence in the transversal direction. When looking at a cross section of a beam of light, the length over which the phase is correlated is called the transverse coherence length. In the case of Young's double slit experiment, this would mean that if the transverse coherence length is smaller than the spacing between the two slits, the resulting pattern on a screen would look like two single slit diffraction patterns. In the case of particles like electrons, neutrons and atoms, the coherence length is related to the spatial extent of the wave function that describes the particle.
See also
Atmospheric diffraction Bragg diffraction Brocken spectre Cloud iridescence Diffraction formalism Diffraction grating Diffraction limit Diffractometer Dynamical theory of diffraction Electron diffraction Fraunhofer diffraction Fresnel diffraction Fresnel imager Fresnel number Fresnel zone Neutron diffraction Prism Powder diffraction Refraction Schaefer–Bergmann diffraction Thinned array curse X-ray scattering techniques
External links
Diffraction and Crystallography for beginners [13] Do Sensors Outresolve Lenses? [14]; on lens and sensor resolution interaction. Diffraction and acoustics. [15] Diffraction in photography. [16] On Diffraction [17] at MathPages. Diffraction pattern calculators [18] at The Wolfram Demonstrations Project Wave Optics [19] A chapter of an online textbook. 2-D wave Java applet [20] Displays diffraction patterns of various slit configurations. Diffraction Java applet [21] Displays diffraction patterns of various 2-D apertures. Diffraction approximations illustrated [22] MIT site that illustrates the various approximations in diffraction and intuitively explains the Fraunhofer regime from the perspective of linear system theory.
Gap [23] Obstacle [24] Corner [25] Java simulation of diffraction of water wave. Google Maps [26] Satellite image of Panama Canal entry ocean wave diffraction.
Diffraction grating
In optics, a diffraction grating is an optical component with a periodic structure, which splits and diffracts light into several beams travelling in different directions. The directions of these beams depend on the spacing of the grating and the wavelength of the light, so that the grating acts as a dispersive element. Because of this, gratings are commonly used in monochromators and spectrometers.
A very large reflecting diffraction grating.
A photographic slide with a fine pattern of black lines forms a simple grating. For practical applications, gratings generally have grooves or rulings on their surface rather than dark lines. Such gratings can be either transparent or reflective. Gratings which modulate the phase rather than the amplitude of the incident light are also produced, frequently using holography. The principles of diffraction gratings were discovered by James Gregory, about a year after Newton's prism experiments, initially with artifacts such as bird feathers. The first man-made diffraction grating was made around 1785 by Philadelphia inventor David Rittenhouse, who strung hairs between two finely threaded screws. This was similar to notable German physicist Joseph von Fraunhofer's wire diffraction grating of 1821.
Theory of operation
The relationship between the grating spacing and the angles of the incident and diffracted beams of light is known as the grating equation. According to the Huygens–Fresnel principle, each point on the wavefront of a propagating wave can be considered to act as a point source, and the wavefront at any subsequent point can be found by adding together the contributions from each of these individual point sources. An idealised grating is considered here, made up of a set of long and infinitely narrow slits of spacing d. When a plane wave of wavelength λ is incident
normally on the grating, each slit in the grating acts as a point source propagating in all directions. The light in a particular direction, θ, is made up of the interfering components from each slit. Generally, the phases of the waves from different slits will vary from one another, and will cancel one another out partially or wholly. However, when the path difference between the light from adjacent slits is equal to the wavelength, λ, the waves will all be in phase. This occurs at angles θm which satisfy the relationship d sin θm = mλ, where d is the separation of the slits and m is an integer. Thus, the diffracted light will have maxima at angles θm given by sin θm = mλ/d. It is straightforward to show that if a plane wave is incident at an angle θi, the grating equation becomes d(sin θi + sin θm) = mλ. The light that corresponds to direct transmission (or specular reflection in the case of a reflection grating) is called the zero order, and is denoted m = 0. The other maxima occur at angles which are represented by non-zero integers m. Note that m can be positive or negative, resulting in diffracted orders on both sides of the zero order beam. This derivation of the grating equation has used an idealised grating. However, the relationship between the angles of the diffracted beams, the grating spacing and the wavelength of the light applies to any regular structure of the same spacing, because the phase relationship between light scattered from adjacent elements of the grating remains the same. The detailed distribution of the diffracted light depends on the detailed structure of the grating elements as well as on the number of elements in the grating, but it will always give maxima in the directions given by the grating equation.
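The grating equation can be solved for every allowed order at once. The sketch below enumerates the integers m for which the sine of the diffraction angle stays within ±1; the 600 g/mm grating and 532 nm light are example values:

```python
import math

def diffraction_orders(wavelength_m, spacing_m, incident_deg=0.0):
    """Return {m: theta_m in degrees} for every order allowed by the grating
    equation d*(sin(theta_i) + sin(theta_m)) = m*lambda."""
    sin_i = math.sin(math.radians(incident_deg))
    orders = {}
    m = -int(2 * spacing_m / wavelength_m) - 1  # generous lower bound on m
    while m * wavelength_m / spacing_m - sin_i <= 1:
        s = m * wavelength_m / spacing_m - sin_i
        if -1 <= s <= 1:
            orders[m] = math.degrees(math.asin(s))
        m += 1
    return orders

# A 600 g/mm grating (d = 1/600 mm) with 532 nm light at normal incidence
# supports orders m = -3 ... +3, with m = +/-1 at about 18.6 degrees.
orders = diffraction_orders(532e-9, 1e-3 / 600)
```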
Gratings can be made in which various properties of the incident light are modulated in a regular pattern; these include: transparency (transmission amplitude gratings); reflectance (reflection amplitude gratings); refractive index (phase gratings); and direction of optical axis (optical axis gratings).
A light bulb of a flashlight seen through a transmissive grating, showing three diffracted orders. The order m = 0 corresponds to a direct transmission of light through the grating. In the first positive order (m = +1), colors with increasing wavelengths (from blue to red) are diffracted at increasing angles.
A grating can be shaped so that it concentrates most of the diffracted energy in a particular order for a given wavelength. A triangular groove profile is commonly used. This technique is called blazing. The incident angle and wavelength for which the diffraction is most efficient are often called the blazing angle and blazing wavelength. The efficiency of a grating may also depend on the polarization of the incident light. Gratings are usually designated by their groove density, the number of grooves per unit length, usually expressed in grooves per millimeter (g/mm), also equal to the inverse of the groove period. The groove period must be on the order of the wavelength of interest; the spectral range covered by a grating is dependent on groove spacing and is the same for ruled and holographic gratings with the same grating constant. The maximum wavelength that a grating can diffract is equal to twice the grating period, in which case the incident and diffracted light are at ninety degrees to the grating normal. To obtain frequency dispersion over a wider frequency range one must use a prism. In the optical regime, in which the use of gratings is most common, this corresponds to wavelengths between 100 nm and 10 µm. In that case, the groove density can vary from a few tens of grooves per millimeter, as in echelle gratings, to a few thousands of grooves per millimeter. When the groove spacing is less than half the wavelength of light, the only order present is the m = 0 order. Gratings with such small periodicity are called subwavelength gratings and exhibit special optical properties. Made in an isotropic material, subwavelength gratings give rise to form birefringence, in which the material behaves as if it were birefringent.
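The groove-density relations in this paragraph are simple to compute. This sketch converts a groove density to its period and to the longest wavelength the grating can diffract (twice the period); the 1200 g/mm figure is an example value:

```python
def grating_limits(grooves_per_mm):
    """Return (groove period in m, longest diffractable wavelength in m).
    The maximum wavelength is twice the period, reached when incident and
    diffracted beams are both at 90 degrees to the grating normal."""
    period_m = 1e-3 / grooves_per_mm
    return period_m, 2 * period_m

# A 1200 g/mm grating: ~833 nm period, so wavelengths up to ~1.67 um.
period, max_wavelength = grating_limits(1200)
```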
Fabrication
Originally, high-resolution gratings were ruled using high-quality ruling engines whose construction was a large undertaking. Henry Joseph Grayson designed a machine to make diffraction gratings, succeeding with one of 120,000 lines to the inch (approx. 47,000 per cm) in 1899. Later, photolithographic techniques allowed gratings to be created from a holographic interference pattern. Holographic gratings have sinusoidal grooves and may not be as efficient as ruled gratings, but are often preferred in monochromators because they lead to much less stray light. A copying technique allows high quality replicas to be made from master gratings of either type, thereby lowering fabrication costs. Another method for manufacturing diffraction gratings uses a photosensitive gel sandwiched between two substrates. A holographic interference pattern exposes the gel, which is later developed. These gratings, called volume phase holography diffraction gratings (or VPH diffraction gratings), have no physical grooves but instead a periodic modulation of the refractive index within the gel. This removes much of the surface scattering effects typically seen in other types of gratings. These gratings also tend to have higher efficiencies, and allow for the inclusion of complicated patterns into a single grating. In older versions of such gratings, environmental susceptibility was a trade-off, as the gel had to be contained at low temperature and humidity. Typically, the photosensitive substances are sealed between two substrates which make them resistant to humidity and to thermal and mechanical stresses. VPH diffraction gratings are not destroyed by accidental touches and are more scratch resistant than typical relief gratings. Semiconductor technology today is also used to etch holographically patterned gratings into robust materials such as fused silica.
In this way, low stray-light holography is combined with the high efficiency of deep, etched transmission gratings, and can be incorporated into high volume, low cost semiconductor manufacturing technology. A newer technology for grating insertion into integrated photonic lightwave circuits is digital planar holography (DPH). DPH gratings are generated in a computer and fabricated on one or several interfaces of a planar optical waveguide with standard micro-lithography or nano-imprinting methods, compatible with mass production. Light propagates inside the DPH gratings, confined by the refractive index gradient, which provides a longer interaction path and greater flexibility in light steering.
Examples
Diffraction gratings are often used in monochromators, spectrometers, lasers, wavelength division multiplexing devices, optical pulse compressing devices, and many other optical instruments. Ordinary pressed CD and DVD media are everyday examples of diffraction gratings and can be used to demonstrate the effect by reflecting sunlight off them onto a white wall. This is a side effect of their manufacture, as one surface of a CD has many small pits in the plastic, arranged in a spiral; that surface has a thin layer of metal applied to make the pits more visible. The structure of a DVD is optically similar, although it may have more than one pitted surface, and all pitted surfaces are inside the disc.
The grooves of a compact disc can act as a grating and produce iridescent reflections.
A standard pressed vinyl record, viewed from a low angle perpendicular to the grooves, shows a similar but less defined effect to that seen in a CD/DVD. This is due to the viewing angle (less than the critical angle of reflection of the black vinyl) and the path of the reflected light being changed by the grooves, leaving a rainbow relief pattern behind. Diffraction gratings are also present in nature. For example, the iridescent colors of peacock feathers, mother-of-pearl, butterfly wings, and those of some other insects are caused by very fine regular structures that diffract light, splitting it into its component colors.
See also
Henry Augustus Rowland Zone plate
References
This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" [1]. National Optical Astronomy Observatories entry regarding volume phase holography gratings. Hutley, Michael, Diffraction Gratings (Techniques of Physics), Academic Press (1982). ISBN 0-12-362980-2. Loewen, Erwin & Evgeny Popov, Diffraction Gratings and Applications, CRC Press, 1st edition (1997). ISBN 0-8247-9923-2. Palmer, Christopher, Diffraction Grating Handbook [2], 6th edition, Newport Corporation (2005). Greenslade, Thomas B., "Wire Diffraction Gratings [3]," The Physics Teacher, February 2004, Volume 42, Issue 2, pp. 76–77. Abrahams, Peter, Early Instruments of Astronomical Spectroscopy [4].
External links
Diffraction Gratings - The Crucial Dispersive Component [5] Automatic calculation of diffraction angles based on input variables. [6]
References
[1] http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm [2] http://gratings.newport.com/library/handbook/cover.asp [3] http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PHTEAH000042000002000076000001&idtype=cvips&gifs=Yes [4] http://www.europa.com/~telscope/histspec.txt [5] http://gratings.newport.com/information/gratings.asp [6] http://www.calctool.org/CALC/phys/optics/grating
Plane wave
In the physics of wave propagation, a plane wave (also spelled planewave) is a constant-frequency wave whose wavefronts (surfaces of constant phase) are infinite parallel planes of constant amplitude normal to the phase velocity vector.
By extension, the term is also used to describe waves that are approximately plane waves in a localized region of space. For example, a localized source such as an antenna produces a field that is approximately a plane wave in its far-field region. Equivalently, for propagation in a homogeneous medium over lengthscales much longer than the wavelength, the "rays" in the limit where ray optics is valid correspond locally to approximate plane waves. Mathematically, a plane wave is a wave of the following form:
ψ(r, t) = A exp[i(k·r − ωt)], where i is the imaginary unit, k is the wave vector, ω is the angular frequency, and A is the (complex) amplitude. This form of the plane wave uses the physics time convention; in the engineering time convention, −j is used instead of +i in the exponent. The physical solution is found by taking the real part of this expression: Re{ψ} = |A| cos(k·r − ωt + φ), where φ is the phase of the complex amplitude A.
This is the solution for a scalar wave equation in a homogeneous medium. For vector wave equations, such as the ones describing electromagnetic radiation or waves in an elastic solid, the solution for a homogeneous medium is similar: the scalar amplitude A is replaced by a constant vector A. For example, in electromagnetism A is typically the vector for the electric field, magnetic field, or vector potential. A transverse wave is one in which the amplitude vector is orthogonal to k, which is the case for electromagnetic waves in an isotropic medium. By contrast, a longitudinal wave is one in which the amplitude vector is parallel to k, such as for acoustic waves in a gas or fluid. In this equation, the function ω(k) is the dispersion relation of the medium, with the ratio ω/|k| giving the magnitude of the phase velocity and dω/dk giving the group velocity. For electromagnetism in an isotropic medium with index of refraction n, the phase velocity is c/n, which equals the group velocity only if the index is not frequency-dependent. Generally, a wave solution can be expressed as a superposition of plane waves. This approach is known as the angular spectrum method. The form of the planewave solution is actually a general consequence of translational symmetry. More generally, for periodic structures having discrete translational symmetry, the solutions take the form of Bloch waves, most famously in crystalline atomic materials but also in photonic crystals and other periodic wave equations. As another generalization, for structures that are only uniform along one direction x (such as a waveguide
along the x direction), the solutions (waveguide modes) are of the form exp[i(kx − ωt)] multiplied by some amplitude function a(y,z). This is a special case of a separable partial differential equation. The term is used in the same way for telecommunication, e.g. in Federal Standard 1037C and MIL-STD-188.
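A one-dimensional plane wave of the form described above can be written out directly. The wavelength and frequency below are illustrative values, chosen so that ω/k equals the vacuum speed of light:

```python
import cmath

def plane_wave(A, k, omega, x, t):
    """Scalar plane wave A*exp[i(kx - omega*t)] (physics time convention);
    the measurable field is the real part."""
    return A * cmath.exp(1j * (k * x - omega * t))

# Example: 500 nm light, frequency 6e14 Hz, so omega/k = 3e8 m/s.
k = 2 * cmath.pi / 500e-9
omega = 2 * cmath.pi * 6e14
psi0 = plane_wave(1.0, k, omega, 0.0, 0.0)
psi1 = plane_wave(1.0, k, omega, 500e-9, 0.0)  # one wavelength away: same phase
phase_velocity = omega / k
```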
Circularly polarized light. The blocks of vectors represent how the magnitude and direction of the electric field is constant for an entire plane perpendicular to the direction of travel.
Represented in the first illustration toward the right is a linearly polarized electromagnetic wave. Because this is a plane wave, each blue vector, indicating the perpendicular displacement from a point on the axis out to the sine wave, represents the magnitude and direction of the electric field for an entire plane that is perpendicular to the axis. Represented in the second illustration is a circularly polarized electromagnetic plane wave. Each blue vector, indicating the perpendicular displacement from a point on the axis out to the helix, also represents the magnitude and direction of the electric field for an entire plane perpendicular to the axis. In both illustrations, along the axes is a series of shorter blue vectors which are scaled-down versions of the longer blue vectors. These shorter blue vectors are extrapolated out into the block of black vectors which fill a volume of space. Notice that for a given plane, the black vectors are identical, indicating that the magnitude and direction of the electric field is constant along that plane. In the case of the linearly polarized light, the field strength from plane to plane varies from a maximum in one direction, down to zero, and then back up to a maximum in the opposite direction. In the case of the circularly polarized light, the field strength remains constant from plane to plane but its direction steadily changes in a rotary manner. Not indicated in either illustration is the electric field's corresponding magnetic field, which is proportional in strength to the electric field at each point in space but is at a right angle to it.
Illustrations of the magnetic field vectors would be virtually identical to these except all the vectors would be rotated 90 degrees perpendicular to the direction of propagation.
See also
Angular spectrum method
References
J. D. Jackson, Classical Electrodynamics (Wiley: New York, 1998).
Point source
A point source is a single identifiable localized source of something. A point source has negligible extent, distinguishing it from other source geometries. Sources are called point sources because in mathematical modeling, these sources can usually be approximated as a mathematical point to simplify analysis. The actual source need not be physically small, if its size is negligible relative to other length scales in the problem. For example, in astronomy stars are routinely treated as point sources, even though they are in actuality much larger than the Earth. In three dimensions, the density of something leaving a point source decreases in proportion to the inverse square of the distance from the source, if the distribution is homogeneous in all directions, and there is no absorption or other loss.
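The inverse-square falloff follows from spreading the emitted power over a sphere of radius r, as a minimal sketch shows (the 100 W source power is an arbitrary example):

```python
import math

def point_source_intensity(power_w, r_m):
    """Intensity (W/m^2) at distance r from an isotropic point source with
    no absorption: the power spreads over a sphere of area 4*pi*r^2."""
    return power_w / (4 * math.pi * r_m ** 2)

# Doubling the distance quarters the intensity:
i_1m = point_source_intensity(100.0, 1.0)
i_2m = point_source_intensity(100.0, 2.0)
```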
Mathematics
In mathematics, a point source is a singularity from which flux or flow is emanating. Although singularities such as this do not exist in the observable universe, mathematical point sources are often used as approximations to reality in physics and other fields.
Light
Generally a source of light can be considered a point source if the resolution of the imaging instrument is too low to resolve its size, or if the object is at a very great distance. Mathematically, an object may be considered a point source if its angular size, θ, is much smaller than the resolving power of the telescope: θ ≪ λ/D, where λ is the wavelength of light and D is the telescope diameter. Examples: Light from a distant star seen through a small telescope. Light passing through a pinhole or other small aperture, viewed from a distance much greater than the size of the hole. Light from a street light in a large-scale study of light pollution or street illumination.
Radio waves
Radio wave sources which are smaller than one radio wavelength are also generally treated as point sources. Radio emissions generated by a fixed electrical circuit are usually polarized, producing anisotropic radiation. If the propagating medium is lossless, however, the radiant power in the radio waves at a given distance will still vary as the inverse square of the distance if the angle remains constant to the source polarization. Examples: Radio antennas are often smaller than one wavelength, even though they are many metres across Pulsars are treated as point sources when observed using radio telescopes
Sound
Sound is an oscillating pressure wave. As the pressure oscillates up and down, an audio point source acts in turn as a fluid point source and then a fluid point sink. (Such an object does not exist physically, but is often a good simplified model for calculations.) Examples: Seismic vibration from a localised seismic experiment searching for oil Noise pollution from a jet engine in a large-scale study of noise pollution A loudspeaker may be considered as a point source in a study of the acoustics of airport announcements
Heat
In vacuum, heat escapes as radiation isotropically. If the source remains stationary in a compressible fluid such as air, flow patterns can form around the source due to convection, leading to an anisotropic pattern of heat loss. The most common form of anisotropy is the formation of a thermal plume above the heat source. Examples: Geological hotspots on the surface of the Earth which lie at the tops of thermal plumes rising from deep inside the Earth Plumes of heat studied in thermal pollution tracking.
Fluid
Fluid point sources are commonly used in fluid dynamics and aerodynamics. A point source of fluid is the inverse of a fluid point sink (a point where fluid is removed). Whereas fluid sinks exhibit complex, rapidly changing behaviour such as is seen in vortices (for example water running into a plug-hole, or tornadoes generated at points where air is rising), fluid sources generally produce simple flow patterns, with stationary isotropic point sources generating an expanding sphere of new fluid. If the fluid is moving (such as wind in air or currents in water) a plume is generated from the point source.
A mushroom cloud as an example of a thermal plume. A nuclear explosion can be treated as a thermal point source in large-scale atmospheric simulations.
Examples: Air pollution from a power plant flue gas stack in a large scale analysis of air pollution. Water pollution from an oil refinery wastewater discharge outlet in a large scale analysis of water pollution. Gas escaping from a pressurised pipe in a laboratory. Smoke is often released from point sources in a wind tunnel in order to create a plume of smoke which highlights the flow of the wind over an object. Smoke from a localised chemical fire can be blown in the wind to form a plume of pollution.
Pollution
Sources of various types of pollution are often considered as point sources in large-scale studies of pollution.
See also
Line source Dirac delta function
Photographic plate
Photographic plates preceded photographic film as a means of photography. A light-sensitive emulsion of silver salts was applied to a glass plate. This form of photographic material largely faded from the consumer market in the early years of the 20th century, as more convenient and less fragile films were introduced. However, photographic plates were in wide use by the professional astronomical community as late as the 1990s. Such plates respond to ~2% of light received. Glass plates were far superior to film for research-quality imaging because they were extremely stable and less likely to bend or distort, especially in large-format frames for wide-field imaging.
AGFA photographic plates, 1880
Scientific uses
Astronomy
Many famous astronomical surveys were taken using photographic plates, including the first Palomar Observatory Sky Survey (POSS) of the 1950s, the follow-up POSS-II survey of the 1990s, and the UK Schmidt survey of southern declinations.
Negative plate.
A number of observatories, including Harvard University and Sonneberg Observatory, maintain large archives of photographic plates, which are used primarily for historical research on variable stars. Many solar system objects were discovered by using photographic plates, superseding earlier visual methods. Discovery of minor planets using photographic plates was pioneered by Max Wolf beginning with his discovery of 323 Brucia in 1891. The first natural satellite discovered using photographic plates was Phoebe in 1898. Pluto was discovered using photographic plates in a blink comparator; its moon Charon was discovered by carefully examining a bulge in Pluto's image on a plate.
Physics
Photographic plates were also an important tool in early high-energy physics, as they are blackened by ionizing radiation. For example, Victor Franz Hess discovered cosmic radiation in the 1910s through the traces it left on stacks of photographic plates, which he left for that purpose on high mountains or sent into the even higher atmosphere using balloons.
Medical imaging
The sensitivity of certain types of photographic plates to ionizing radiation (usually X-rays) is also useful in medical imaging and materials science applications, although they have been largely replaced by reusable, computer-readable image plate detectors and other types of X-ray detectors.
Decline
Use of photographic plates has declined significantly since the early 1980s, replaced by charge-coupled devices (CCD). CCD cameras have several benefits over glass plates, including high efficiency, linear response to light, and simplicity of image acquisition and processing. However, even the largest format CCDs (e.g., 8192x8192 pixels) still do not have the detecting area and resolution of most photographic plates, which has forced modern survey cameras to use large arrays of CCD chips. Several institutions are setting up archives to preserve the original plates, preventing valuable historical astronomical data from being lost.
See also
Film base Camera
References
Peter Kroll, Constanze La Dous, Hans-Jürgen Bräuer: "Treasure Hunting in Astronomical Plate Archives." (Proceedings of the international Workshop held at Sonneberg Observatory, March 4 to 6, 1999.) Verlag Harri Deutsch, Frankfurt am Main (1999), ISBN 3-8171-1599-7
External links
The Sonneberg Plates Archiv (Sonneberg Observatory) [1] The Harvard College Observatory Plate Stacks [2]
References
[1] http://stw-serv.stw.tu-ilmenau.de/science/plate/index_E.html [2] http://tdc-www.harvard.edu/plates/
Concave lens
A lens is an optical device with perfect or approximate axial symmetry which transmits and refracts light, converging or diverging the beam. A simple lens consists of a single optical element. A compound lens is an array of simple lenses (elements) with a common axis; the use of multiple elements allows more optical aberrations to be corrected than is possible with a single element. Lenses are typically made of glass or transparent plastic. Elements which refract electromagnetic radiation outside the visual spectrum are also called lenses: for instance, a microwave lens can be made from paraffin wax.
A lens.
The variant spelling lense is sometimes seen. While it is listed as an alternative spelling in some dictionaries, most mainstream dictionaries do not list it as acceptable.[1] [2]
History
The oldest lens artifact is the Nimrud lens, which is over three thousand years old, dating back to ancient Assyria.[3] David Brewster proposed that it may have been used as a magnifying glass, or as a burning-glass to start fires by concentrating sunlight.[3] [4] Assyrian craftsmen made intricate engravings, and could have used such a lens in their work. Another early reference to magnification dates back to ancient Egyptian hieroglyphs in the 8th century BC, which depict "simple glass meniscal lenses".[5] The earliest written records of lenses date to Ancient Greece, with Aristophanes' play The Clouds (424 BC) mentioning a burning-glass (a biconvex lens used to focus the sun's rays to produce fire). The writings of Pliny the Elder (23–79) also show that burning-glasses were known to the Roman Empire,[6] and mention what is arguably the earliest use of a corrective lens: Nero was said to watch the gladiatorial games using an emerald[7] (presumably concave to correct for myopia, though the reference is vague). Both Pliny and Seneca the Younger (3 BC – 65 AD) described the magnifying effect of a glass globe filled with water. The word lens comes from the Latin name of the lentil, because a double-convex lens is lentil-shaped. The genus of the lentil plant is Lens, and the most commonly eaten species is Lens culinaris. The lentil plant also gives its name to a geometric figure.
The Golden Gate Bridge refracted in rain droplets, which act as lenses
The Arab physicist and mathematician Ibn Sahl (c. 940 – c. 1000) used what is now known as Snell's law to calculate the shape of lenses.[8] Ibn al-Haytham (965–1038), known in the West as Alhazen, wrote the first major optical treatise, the Book of Optics, which contained the earliest historical proof of a magnifying device, a convex lens forming a magnified image. The book was translated into Latin in the 12th century, became the standard textbook in the field, and influenced many other writers.[5] [9] Excavations at the Viking harbour town of Fröjel, Gotland, Sweden, discovered in 1999 the rock crystal Visby lenses, produced by turning on pole-lathes at Fröjel in the 11th to 12th century, with an imaging quality comparable to that of 1950s aspheric lenses. The Viking lenses concentrate sunlight enough to ignite fires. Widespread use of lenses did not occur until the use of reading stones in the 11th century and the invention of spectacles, probably in Italy in the 1280s. Scholars have noted that spectacles were invented not long after the translation of al-Haytham's book into Latin, but it is not clear what role, if any, the optical theory of the time played in the discovery.[5] [9] Nicholas of Cusa is believed to have been the first to discover the benefits of concave lenses for the treatment of myopia, in 1451. The Abbe sine condition, due to Ernst Abbe (1860s), is a condition that must be fulfilled by a lens or other optical system in order for it to produce sharp images of off-axis as well as on-axis objects. It revolutionized the design of optical instruments such as microscopes, and helped to establish the Carl Zeiss company as a leading supplier of optical instruments.
Toric or sphero-cylindrical lenses have surfaces with two different radii of curvature in two orthogonal planes. They have a different focal power in different meridians. This is a form of deliberate astigmatism. More complex are aspheric lenses. These are lenses where one or both surfaces have a shape that is neither spherical nor cylindrical. Such lenses can produce images with much less aberration than standard simple lenses.
If the lens is biconcave or plano-concave, a collimated beam of light passing through the lens is diverged (spread); the lens is thus called a negative or diverging lens. The beam after passing through the lens appears to be emanating from a particular point on the axis in front of the lens; the distance from this point to the lens is also known as the focal length, although it is negative with respect to the focal length of a converging lens.
Convex-concave (meniscus) lenses can be either positive or negative, depending on the relative curvatures of the two surfaces. A negative meniscus lens has a steeper concave surface and will be thinner at the centre than at the periphery. Conversely, a positive meniscus lens has a steeper convex surface and will be thicker at the centre than at the periphery. An ideal thin lens with two surfaces of equal curvature would have zero optical power, meaning that it would neither converge nor diverge light. All real lenses have a nonzero thickness, however, which affects the optical power. To obtain exactly zero optical power, a meniscus lens must have slightly unequal curvatures to account for the effect of the lens' thickness.
Lensmaker's equation
The focal length of a lens in air can be calculated from the lensmaker's equation:[10]

1/f = (n - 1) [ 1/R1 - 1/R2 + (n - 1)d / (n R1 R2) ]

where f is the focal length of the lens, n is the refractive index of the lens material, R1 is the radius of curvature of the lens surface closest to the light source, R2 is the radius of curvature of the lens surface farthest from the light source, and d is the thickness of the lens (the distance along the lens axis between the two surface vertices).

Sign convention of lens radii R1 and R2

The signs of the lens' radii of curvature indicate whether the corresponding surfaces are convex or concave. The sign convention used to represent this varies, but in this article if R1 is positive the first surface is convex, and if R1 is negative the surface is concave. The signs are reversed for the back surface of the lens: if R2 is positive the surface is concave, and if R2 is negative the surface is convex. If either radius is infinite, the corresponding surface is flat. With this convention the signs are determined by the shapes of the lens surfaces, and are independent of the direction in which light travels through the lens.

Thin lens equation

If d is small compared to R1 and R2, then the thin lens approximation can be made. For a lens in air, f is then given by

1/f = (n - 1) (1/R1 - 1/R2).[11]
The focal length f is positive for converging lenses, and negative for diverging lenses. The reciprocal of the focal length, 1/f, is the optical power of the lens. If the focal length is in metres, this gives the optical power in dioptres (inverse metres). Lenses have the same focal length when light travels from the back to the front as when light goes from the front to the back, although other properties of the lens, such as the aberrations, are not necessarily the same in both directions.
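To make the relationship between curvature, index, and power concrete, here is a small Python sketch (illustrative values, not from the source) evaluating the lensmaker's equation and its thin-lens limit:

```python
# Sketch: the lensmaker's equation, using the sign convention in the text
# (R1 > 0 for a convex first surface, R2 < 0 for a convex back surface).
# All numbers below are illustrative, not from the source.

def lens_power(n, r1, r2, d=0.0):
    """Optical power 1/f in dioptres, for radii and thickness in metres."""
    return (n - 1.0) * (1.0 / r1 - 1.0 / r2 + (n - 1.0) * d / (n * r1 * r2))

# Symmetric biconvex lens, n = 1.5, |R| = 100 mm, thin-lens limit:
p = lens_power(1.5, 0.1, -0.1)   # the two surface terms add: 10 dioptres
f = 1.0 / p                       # focal length 0.1 m

# A meniscus with exactly equal curvatures is NOT powerless once the
# thickness term counts, as the text notes:
p_meniscus = lens_power(1.5, 0.1, 0.1, d=0.005)   # small positive power
```

The equal-curvature meniscus case shows the residual power contributed by the thickness term, which is why a zero-power meniscus needs slightly unequal curvatures.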
Imaging properties
As mentioned above, a positive or converging lens in air will focus a collimated beam travelling along the lens axis to a spot (known as the focal point) at a distance f from the lens. Conversely, a point source of light placed at the focal point will be converted into a collimated beam by the lens. These two cases are examples of image formation in lenses. In the former case, an object at an infinite distance (as represented by a collimated beam of waves) is focused to an image at the focal point of the lens. In the latter, an object at the focal length distance from the lens is imaged at infinity. The plane perpendicular to the lens axis situated at a distance f from the lens is called the focal plane. (Note: In the figure below the image is actually larger than the object; this is a function of f and S1, described below)
This image has three visible reflections and one visible projection of the same lamp; two reflections are on a biconvex lens.
If the distances from the object to the lens and from the lens to the image are S1 and S2 respectively, for a lens of negligible thickness, in air, the distances are related by the thin lens formula

1/S1 + 1/S2 = 1/f.

This can also be put into the "Newtonian" form:[12]

x1 x2 = f²

where x1 = S1 - f and x2 = S2 - f.
What this means is that, if an object is placed at a distance S1 along the axis in front of a positive lens of focal length f, a screen placed at a distance S2 behind the lens will have a sharp image of the object projected onto it, as long as S1 > f (if the lens-to-screen distance S2 is varied slightly, the image will become less sharp). This is the principle behind photography and the human eye. The image in this case is known as a real image.
Note that if S1 < f, S2 becomes negative, and the image is apparently positioned on the same side of the lens as the object. Although this kind of image, known as a virtual image, cannot be projected on a screen, an observer looking through the lens will see the image in its apparent calculated position. A magnifying glass creates this kind of image. The magnification of the lens is given by

M = -S2/S1 = f/(f - S1)

where M is the magnification factor; if |M| > 1, the image is larger than the object. Notice the sign convention here shows that, if M is negative, as it is for real images, the image is upside-down with respect to the object. For virtual images, M is positive and the image is upright. In the special case that S1 = ∞, then S2 = f and M = 0. This corresponds to a collimated beam being focused to a single spot at the focal point. The size of the image in this case is not actually zero, since diffraction effects place a lower limit on the size of the image (see Rayleigh criterion).
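A short Python sketch (with illustrative focal lengths and distances, not from the source) exercising the thin-lens imaging relations, including both the real-image and magnifying-glass cases:

```python
# Sketch: thin-lens imaging with the conventions used in the text.
# The 100 mm lens and the object distances are illustrative values.

def image_distance(f, s1):
    """Solve 1/S1 + 1/S2 = 1/f for S2."""
    return 1.0 / (1.0 / f - 1.0 / s1)

def magnification(f, s1):
    """M = -S2/S1; negative for real (inverted) images."""
    return -image_distance(f, s1) / s1

f = 0.1                         # 100 mm converging lens
# Object beyond f: a real, inverted image forms behind the lens.
s2 = image_distance(f, 0.3)     # 0.15 m behind the lens
m = magnification(f, 0.3)       # -0.5: inverted, half size
# Object inside f: virtual, upright, magnified (magnifying glass).
s2v = image_distance(f, 0.05)   # -0.1 m (same side as the object)
mv = magnification(f, 0.05)     # +2.0: upright, twice the size
# The "Newtonian" form holds: (S1 - f)(S2 - f) = f^2
assert abs((0.3 - f) * (s2 - f) - f ** 2) < 1e-12
```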
The formulas above may also be used for negative (diverging) lenses by using a negative focal length f, but for these lenses only virtual images can be formed. For the case of lenses that are not thin, or for more complicated multi-lens optical systems, the same formulas can be used, but S1 and S2 are interpreted differently. If the system is in air or vacuum, S1 and S2 are measured from the front and rear principal planes of the system, respectively. Imaging in media with an index of refraction greater than 1 is more complicated, and is beyond the scope of this article.
Aberrations
Lenses do not form perfect images, and there is always some degree of distortion or aberration introduced by the lens which causes the image to be an imperfect replica of the object. Careful design of the lens system for a particular application ensures that the aberration is minimized. There are several different types of aberration which can affect image quality.
Spherical aberration
Spherical aberration occurs because spherical surfaces are not the ideal shape with which to make a lens, but they are by far the simplest shape to which glass can be ground and polished and so are often used. Spherical aberration causes beams parallel to, but distant from, the lens axis to be focused in a slightly different place than beams close to the axis. This manifests itself as a blurring of the image. Lenses in which closer-to-ideal, non-spherical surfaces are used are called aspheric lenses. These were formerly complex to make and often extremely expensive, but advances in technology have greatly reduced the manufacturing cost for such lenses. Spherical aberration can be minimised by careful choice of the curvature of the surfaces for a particular application: for instance, a plano-convex lens which is used to focus a collimated beam produces a sharper focal spot when used with the convex side towards the beam source.
Coma
Another type of aberration is coma, which derives its name from the comet-like appearance of the aberrated image. Coma occurs when an object off the optical axis of the lens is imaged, so that rays pass through the lens at an angle θ to the axis. Rays which pass through the centre of a lens of focal length f are focused at a point at distance f tan θ from the axis. Rays passing through the outer margins of the lens are focused at different points, either further from the axis (positive coma) or closer to the axis (negative coma). In general, a bundle of parallel rays passing through the lens at a fixed distance from the centre of the lens is focused to a ring-shaped image in the focal plane, known as a comatic circle. The sum of all these circles results in a V-shaped or comet-like flare. As with spherical aberration, coma can be minimised (and in some cases eliminated) by choosing the curvature of the two lens surfaces to match the application. Lenses in which both spherical aberration and coma are minimised are called best-form lenses.
Chromatic aberration
Chromatic aberration is caused by the dispersion of the lens material: the variation of its refractive index, n, with the wavelength of light. Since, from the formulae above, f is dependent upon n, it follows that different wavelengths of light will be focused to different positions. Chromatic aberration of a lens is seen as fringes of colour around the image. It can be minimised by using an achromatic doublet (or achromat) in which two materials with differing dispersion are bonded together to form a single lens. This reduces the amount of chromatic aberration over a certain range of wavelengths, though it does not produce perfect correction. The use of achromats was an important step in the development of the optical microscope. An apochromat is a lens or lens system which has even better correction of chromatic aberration, combined with improved correction of spherical aberration. Apochromats are much more expensive than achromats. Different lens materials may also be used to minimise chromatic aberration, such as specialised coatings or lenses made from the crystal fluorite. This naturally occurring substance has the highest known Abbe number, indicating that the material has low dispersion.
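A sketch of how an achromat's power is split between its two elements. The condition used, p1/V1 + p2/V2 = 0 for thin lenses in contact (with V the Abbe numbers), is standard textbook optics rather than stated in the text above, and the glass values are hypothetical:

```python
# Sketch: splitting a target power between crown and flint elements of an
# achromatic doublet. The condition p1/V1 + p2/V2 = 0 (thin lenses in
# contact) is standard textbook optics, not taken from the text above.

def achromat_powers(p_total, v_crown, v_flint):
    """Return (p_crown, p_flint) such that p_crown + p_flint = p_total
    and p_crown/v_crown + p_flint/v_flint = 0 (achromatic condition)."""
    p_crown = p_total * v_crown / (v_crown - v_flint)
    p_flint = -p_total * v_flint / (v_crown - v_flint)
    return p_crown, p_flint

# Hypothetical glasses: crown V = 60, flint V = 36; target f = 100 mm (10 D).
p1, p2 = achromat_powers(10.0, 60.0, 36.0)   # 25 D converging, -15 D diverging
```

Note that the individual elements are stronger than the doublet as a whole; this is the usual price paid for cancelling the dispersion.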
Aperture diffraction
Even if a lens is designed to minimize or eliminate the aberrations described above, the image quality is still limited by the diffraction of light passing through the lens' finite aperture. A diffraction-limited lens is one in which aberrations have been reduced to the point where the image quality is primarily limited by diffraction under the design conditions.
Compound lenses
Simple lenses are subject to the optical aberrations discussed above. In many cases these aberrations can be compensated for to a great extent by using a combination of simple lenses with complementary aberrations. A compound lens is a collection of simple lenses of different shapes and made of materials of different refractive indices, arranged one after the other with a common axis. The simplest case is where lenses are placed in contact: if the lenses of focal lengths f1 and f2 are "thin", the combined focal length f of the lenses is given by

1/f = 1/f1 + 1/f2.

Since 1/f is the power of a lens, it can be seen that the powers of thin lenses in contact are additive. If two thin lenses are separated in air by some distance d, the focal length for the combined system is given by

1/f = 1/f1 + 1/f2 - d/(f1 f2).

The distance from the second lens to the focal point of the combined lenses is called the back focal length (BFL):

BFL = f2 (d - f1) / (d - (f1 + f2)).
As d tends to zero, the value of the BFL tends to the value of f given for thin lenses in contact. If the separation distance is equal to the sum of the focal lengths (d = f1+f2), the combined focal length and BFL are infinite. This corresponds to a pair of lenses that transform a parallel (collimated) beam into another collimated beam. This type of system is called an afocal system, since it produces no net convergence or divergence of the beam. Two lenses at this separation form the simplest type of optical telescope. Although the system does not alter the divergence of a collimated beam, it does alter the width of the beam. The magnification of such a telescope is given by
M = -f1/f2,

which is the ratio of the input beam width to the output beam width. Note the sign convention: a telescope with two convex lenses (f1 > 0, f2 > 0) produces a negative magnification, indicating an inverted image. A convex plus a concave lens (f1 > 0 > f2) produces a positive magnification and the image is upright.
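A short Python sketch (with illustrative focal lengths, not from the source) exercising the lens-combination formulas above, including the afocal (telescope) case:

```python
# Sketch: combining two thin lenses. The 100 mm / 50 mm pair and the
# separations are illustrative values, not from the source.

def combined_power(f1, f2, d=0.0):
    """1/f for two thin lenses separated by d (d = 0: lenses in contact)."""
    return 1.0 / f1 + 1.0 / f2 - d / (f1 * f2)

def back_focal_length(f1, f2, d):
    """BFL = f2 (d - f1) / (d - (f1 + f2))."""
    return f2 * (d - f1) / (d - (f1 + f2))

f1, f2 = 0.1, 0.05
# Separated pair, d = 50 mm:
p = combined_power(f1, f2, 0.05)        # 10 + 20 - 10 = 20 D, so f = 50 mm
bfl = back_focal_length(f1, f2, 0.05)   # 25 mm
# Afocal (telescope) separation d = f1 + f2 gives zero net power:
p_afocal = combined_power(f1, f2, f1 + f2)   # ~0: collimated in, collimated out
m = -f1 / f2                                 # magnification -2 (inverted image)
```

As d tends to 0, combined_power reduces to the in-contact formula, and the BFL tends to the combined focal length, matching the limits discussed in the text.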
Uses of lenses
A single convex lens mounted in a frame with a handle or stand is a magnifying glass. Lenses are used as prosthetics for the correction of visual impairments such as myopia, hyperopia, presbyopia, and astigmatism. (See corrective lens, contact lens, eyeglasses.) Most lenses used for other purposes have strict axial symmetry; eyeglass lenses are only approximately symmetric. They are usually shaped to fit in a roughly oval, not circular, frame; the optical centers are placed over the eyeballs; their curvature may not be axially symmetric to correct for astigmatism. Sunglasses' lenses are designed to attenuate light; sunglass lenses that also correct visual impairments can be custom made. Other uses are in imaging systems such as monoculars, binoculars, telescopes, microscopes, cameras and projectors. Some of these instruments produce a virtual image when applied to the human eye; others produce a real image which can be captured on photographic film or an optical sensor, or can be viewed on a screen. In these devices lenses are sometimes paired up with curved mirrors to make a catadioptric system in which the lens's spherical aberration corrects the opposite aberration in the mirror (such as Schmidt and meniscus correctors). Convex lenses produce an image of an object at infinity at their focus; if the sun is imaged, much of the visible and infrared light incident on the lens is concentrated into the small image. A large lens will create enough intensity to burn a flammable object at the focal point. Since ignition can be achieved even with a poorly made lens, lenses have been used as burning-glasses for at least 2400 years.[13] A modern application is the use of relatively large lenses to concentrate solar energy on relatively small photovoltaic cells, harvesting more energy without the need to use larger, more expensive cells.
Radio astronomy and radar systems often use dielectric lenses, commonly called lens antennas, to refract electromagnetic radiation into a collector antenna. The Square Kilometre Array radio telescope, scheduled to be operational by 2020,[14] will employ such lenses to get a collection area nearly 30 times greater than any previous antenna. Lenses can become scratched and abraded. Abrasion-resistant coatings are available to help control this.[15]
See also
Aberration in optical systems Axicon Back focal plane Bokeh Cardinal point (optics) Corrective lens Eyepiece F-number Fresnel lens Gradient index lens Gravitational lens History of lensmaking Lens (anatomy) List of lens designs Microscope Microlens Numerical aperture Optical coatings Optical lens design Optical lenticular Photochromic lens Photographic lens Prime lens Prism (optics) Ray tracing Sunglass lens Superlens Telescope Zoom lens Anti-fogging treatment of optical surfaces
References
General
Hecht, Eugene (1987). Optics (2nd ed.). Addison Wesley. ISBN 0-201-11609-X. Chapters 5 & 6.
Hecht, Eugene (2002). Optics (4th ed.). Addison Wesley. ISBN 0-321-18878-0.
Greivenkamp, John E. (2004). Field Guide to Geometrical Optics. SPIE Field Guides vol. FG01. SPIE. ISBN 0-8194-5294-7.
External links
Applied Photographic Optics (book) [16]
The Properties of Optical Glass (book) [17]
Handbook of Ceramics, Glasses, and Diamonds [18]
Optical glass construction [19]
History of Optics (audio mp3) [20] by Simon Schaffer, Professor in History and Philosophy of Science at the University of Cambridge, Jim Bennett, Director of the Museum of the History of Science at the University of Oxford, and Emily Winterburn, Curator of Astronomy at the National Maritime Museum (recorded by the BBC)
A chapter from an online textbook on refraction and lenses [21]
Thin Spherical Lenses [22] on Project PHYSNET [23]
Lens article at digitalartform.com [24]
Article on Ancient Egyptian lenses [25]
Picture of the Ninive rock crystal lens [26]
Do Sensors Outresolve Lenses? [14]; on lens and sensor resolution interaction
Fundamental optics [27]
Simulations
Learning by Simulations [28]: concave and convex lenses
OpticalRayTracer [29]: open-source (GPL) lens simulator (downloadable Java)
References
[1] Brians, Paul (2003). Common Errors in English (http://wsu.edu/~brians/errors/lense.html). Franklin, Beedle & Associates. p. 125. ISBN 1887902899. Retrieved June 28, 2009. Reports "lense" as listed in some dictionaries, but not generally considered acceptable.
[2] Merriam-Webster's Medical Dictionary. Merriam-Webster. 1995. p. 368. ISBN 0877799148. Lists "lense" as an acceptable alternate spelling.
[3] Whitehouse, David (1999-07-01). "World's oldest telescope?" (http://news.bbc.co.uk/1/hi/sci/tech/380186.stm). BBC News. Retrieved 2008-05-10.
[4] D. Brewster (1852). "On an account of a rock-crystal lens and decomposed glass found in Niniveh" (http://books.google.com/books?id=bHwEAAAAYAAJ&pg=RA1-PA355&dq=niniveh+lens&as_brr=3&ei=ILaBR-mHEoGmswP6jqHDCw) (in German). Die Fortschritte der Physik (Deutsche Physikalische Gesellschaft).
[5] Kriss, Timothy C.; Kriss, Vesna Martich (April 1998). "History of the Operating Microscope: From Magnifying Glass to Microneurosurgery". Neurosurgery 42 (4): 899–907. doi:10.1097/00006123-199804000-00116. PMID 9574655.
[6] Pliny the Elder, The Natural History (trans. John Bostock) Book XXXVII, Chap. 10 (http://www.perseus.tufts.edu/cgi-bin/ptext?lookup=Plin.+Nat.+37.10).
[7] Pliny the Elder, The Natural History (trans. John Bostock) Book XXXVII, Chap. 16 (http://www.perseus.tufts.edu/cgi-bin/ptext?lookup=Plin.+Nat.+37.16).
[8] Rashed, R. (1990). "A pioneer in anaclastics: Ibn Sahl on burning mirrors and lenses." Isis, 81, 464–491.
[9] Ilardi, Vincent (2007). Renaissance Vision from Spectacles to Telescopes. American Philosophical Society. pp. 26–33. ISBN 0871692597.
[10] Greivenkamp, p. 14; Hecht 6.1
[11] Hecht, 5.2.3
[12] Hecht (2002), p. 120.
[13] Aristophanes (424 BC). The Clouds.
[14] http://www.skatelescope.org/pages/page_genpub.htm
[15] Schottner, G (May). "Scratch and Abrasion Resistant Coatings on Plastic Lenses: State of the Art, Current Developments and Perspectives" (http://www.springerlink.com/content/wu963135883p31r8/). Journal of Sol-Gel Science and Technology: pp. 71–79. Retrieved 28 Dec, 2009.
[16] http://books.google.com/books?id=cuzYl4hx-B8C&pg=PA58&lpg=PA58&dq=Fused+quartz+nikon++camera+lens&source=web&ots=n-IqvTABOz&sig=t-YYBNAIsgKQ37D9kTA0CcK6f1k&hl=en&sa=X&oi=book_result&resnum=8&ct=result#PPA100,M1
[17] http://books.google.com/books?id=J0RX1mbhzAEC&printsec=toc&dq=bk7+optical+glass+construction&source=gbs_summary_s&cad=0#PRA1-PA58,M1
[18] http://books.google.com/books?id=_T9dX14rz64C&pg=PT415&lpg=PT415&dq=camera++optical+glass++composition&source=web&ots=YMMv0GjGDL&sig=8VZXryxlUfcVq3nonFvrNWElkoI&hl=en&sa=X&oi=book_result&resnum=7&ct=result
[19] http://books.google.com/books?id=KdYclkhSfTAC&pg=PT49&lpg=PT49&dq=optical+glass+ingredients&source=web&ots=sLEkmvi05g&sig=F6ERFbklTewIvFuKh30POTb0JG0&hl=en&sa=X&oi=book_result&resnum=7&ct=result
[20] http://www.bbc.co.uk/radio4/history/inourtime/inourtime_20070301.shtml
[21] http://www.lightandmatter.com/html_books/5op/ch04/ch04.html
[22] http://www.physnet.org/modules/pdfmodules/m223.pdf
[23] http://www.physnet.org
[24] http://www.digitalartform.com/lenses.htm
[25] http://home.comcast.net/~hebsed/enoch.htm
[26] http://www3.usal.es/%7Ehistologia/aplicacion/english/museum/microsco/micros01/micros01.htm
[27] http://www.cvimellesgriot.com/products/Documents/TechnicalGuide/fundamental-Optics.pdf
[28] http://www.vias.org/simulations/simusoft_lenses.html
[29] http://www.arachnoid.com/OpticalRayTracer/
Beam splitter
A beam splitter is an optical device that splits a beam of light in two. It is the crucial part of most interferometers. In its most common form, a cube, it is made from two triangular glass prisms which are glued together at their base using Canada balsam. The thickness of the resin layer is adjusted such that (for a certain wavelength) half of the light incident through one "port" (i.e. face of the cube) is reflected and the other half is transmitted due to frustrated total internal reflection. Polarizing beam splitters, such as the Wollaston prism, use birefringent materials, splitting light into beams of differing polarization. Another design is the use of a half-silvered mirror. This is a plate of glass with a thin coating of aluminum (usually deposited from aluminum vapor), with the thickness of the coating chosen such that part, typically half, of the light incident at a 45-degree angle is transmitted and the remainder reflected. Instead of a metallic coating, a dielectric optical coating may be used. Such mirrors are commonly used as output couplers in laser construction. A half-silvered mirror used in photography is often called a pellicle mirror. Depending on the coating that is used, the reflection/transmission ratio may vary as a function of wavelength.
Schematic representation of a beam splitter cube 1 - Incident light 2 - 50% Transmitted light 3 - 50% Reflected light
Beamsplitters
A third version of the beam splitter is a dichroic mirrored prism assembly which uses dichroic optical coatings to split the incoming light into three beams, one each of red, green, and blue. Such a device was used in multi-tube color television cameras and in the three-film Technicolor movie cameras. It is also used in 3LCD projectors to separate colors and in ellipsoidal reflector spotlights to eliminate heat radiation. Beam splitters are also used in stereo photography to shoot stereo photos with a single shot from a non-stereo camera. The device attaches in place of the lens of the camera. Some argue that "image splitter" is a more proper name for this device. Beam splitters with single-mode fiber for PON networks use the single-mode behavior to split the beam. The split is made by physically splicing two fibers together as an X.
Speckle pattern
A speckle pattern is a random intensity pattern produced by the mutual interference of a set of wavefronts. This phenomenon has been investigated by scientists since the time of Newton, but speckles have come into prominence since the invention of the laser and have now found a variety of applications.
Laser speckle on a digital camera image from a green laser pointer. This is a subjective speckle pattern.
The speckle effect is observed when radio waves are scattered from rough surfaces such as ground or sea, and can also be found in ultrasonic imaging. In the output of a multimode optical fiber, a speckle pattern results from a superposition of mode field patterns. If the relative modal group velocities change with time, the speckle pattern will also change with time. If differential mode attenuation occurs, modal noise results.[2]
Explanation
The speckle effect is a result of the interference of many waves, having different phases, which add together to give a resultant wave whose amplitude, and therefore intensity, varies randomly. If each wave is modelled by a vector, then it can be seen that if a number of vectors with random angles are added together, the length of the resulting vector can be anything from zero to the sum of the individual vector lengths: a two-dimensional random walk, sometimes known as a drunkard's walk. When a surface is illuminated by a light wave, according to diffraction theory, each point on an illuminated surface acts as a source of secondary spherical waves. The light at any point in the scattered light field is made up of waves which have been scattered from each point on the illuminated surface. If the surface is rough enough to create path-length differences exceeding one wavelength, giving rise to phase changes greater than 2π, the amplitude, and hence the intensity, of the resultant light varies randomly. If light of low coherence (i.e. made up of many wavelengths) is used, a speckle pattern will not normally be observed, because the speckle patterns produced by individual wavelengths have different dimensions and will normally average one another out. However, speckle patterns can be observed in polychromatic light in some conditions.[3]
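The random-walk argument can be checked numerically. The following toy Monte Carlo (an illustration, not from the source) sums unit-amplitude phasors with uniformly random phases and confirms that the resulting intensity fluctuations have a contrast (standard deviation over mean) close to 1, as expected for fully developed speckle:

```python
# Toy model: the "drunkard's walk" of phasors behind fully developed speckle.
import cmath
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

def speckle_intensity(n_waves):
    """Intensity of a sum of n unit phasors with uniform random phases."""
    total = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_waves))
    return abs(total) ** 2

samples = [speckle_intensity(50) for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
contrast = math.sqrt(var) / mean   # close to 1 for fully developed speckle
```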
Subjective speckles
When an image is formed of a rough surface which is illuminated by a coherent light (e.g. a laser beam), a speckle pattern is observed in the image plane; this is called a subjective speckle pattern - see image above. It is called "subjective" because the detailed structure of the speckle pattern depends on the viewing system parameters; for instance, if the size of the lens aperture changes, the size of the speckles changes. If the position of the imaging system is altered, the pattern will gradually change and will eventually be unrelated to the original speckle pattern. This can be explained as follows. Each point in the image can be considered to be illuminated by a finite area in the object. The size of this area is determined by the diffraction-limited resolution of the lens, which is given by the Airy disk whose diameter is 2.4λu/D, where λ is the wavelength of the light, u is the distance between the object and the lens, and D is the diameter of the lens aperture. (This is a simplified model of diffraction-limited imaging.) The light at neighbouring points in the image has been scattered from areas which have many points in common, and the intensity of two such points will not differ much. However, two points in the image which are illuminated by areas in the object which are separated by the diameter of the Airy disk have light intensities which are unrelated. This corresponds to a distance in the image of 2.4λv/D, where v is the distance between the lens and the image. Thus, the size of the speckles in the image is of this order. The change in speckle size with lens aperture can be observed by looking at a laser spot on a wall directly, and then through a very small hole. The speckles will be seen to increase significantly in size.
Objective speckles
When laser light which has been scattered off a rough surface falls on another surface, it forms an objective speckle pattern. If a photographic plate or another 2-D optical sensor is located within the scattered light field without a lens, a speckle pattern is obtained whose characteristics depend on the geometry of the system and the wavelength of the laser. The speckle pattern in the figure was obtained by pointing a laser beam at the surface of a mobile phone so that the scattered light fell onto an adjacent wall. A photograph was then taken of the speckle pattern formed on the wall. (Strictly speaking, this also has a second subjective speckle pattern, but its dimensions are much smaller than the objective pattern, so it is not seen in the image.) The light at a given point in the speckle pattern is made up of contributions from the whole of the scattering surface. The relative phases of these waves vary across the surface, so that the sum of the individual waves varies randomly. The pattern is the same regardless of how it is imaged, just as if it were a painted pattern. The "size" of the speckles is a function of the wavelength of the light, the size of the laser beam which illuminates the first surface, and the distance between this surface and the surface where the speckle pattern is formed. This is the case because when the angle of scattering changes such that the relative path difference between light scattered from the centre of the illuminated area compared with light scattered from the edge of the illuminated area changes by λ, the intensity becomes uncorrelated. Dainty [4] derives an expression for the mean speckle size as λz/L, where L is the width of the illuminated area and z is the distance between the object and the location of the speckle pattern.

A photograph of an objective speckle pattern: the light field formed when a laser beam was scattered from a plastic surface onto a wall.
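As a rough order-of-magnitude calculator for the subjective and objective speckle-size expressions, here is a hedged Python sketch; the function names and the numerical scenario (a He-Ne laser pointed at a wall) are illustrative, not from the source:

```python
# Sketch: order-of-magnitude speckle sizes from the two expressions in the
# text. Variable names and the numbers below are illustrative only.

def subjective_speckle_size(wavelength, v, aperture):
    """~2.4 * lambda * v / D: speckle size in the image plane."""
    return 2.4 * wavelength * v / aperture

def objective_speckle_size(wavelength, z, spot_width):
    """~lambda * z / L: free-space (objective) speckle size, after Dainty."""
    return wavelength * z / spot_width

lam = 633e-9   # He-Ne laser, 633 nm
# Objective: 1 mm illuminated spot, pattern observed on a wall 1 m away.
s_obj = objective_speckle_size(lam, 1.0, 1e-3)       # ~0.63 mm
# Subjective: camera with image distance v = 50 mm, aperture D = 5 mm.
s_sub = subjective_speckle_size(lam, 0.05, 0.005)    # ~15 micrometres
```

The objective speckles in this scenario are easily visible by eye, while the subjective speckles are on the scale of a camera's pixels, which matches everyday experience with laser pointers.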
Near-field speckles
Objective speckles are usually obtained in the far field (also called the Fraunhofer region, that is, the zone where Fraunhofer diffraction happens). This means that they are generated "far" from the object that emits or scatters light. Speckles can also be observed close to the scattering object, in the near field (also called the Fresnel region, that is, the region where Fresnel diffraction happens); these are called near-field speckles. See near and far field for a more rigorous definition of "near" and "far". The statistical properties of a far-field speckle pattern (i.e., the speckle form and dimension) depend on the form and dimension of the region hit by laser light. By contrast, an interesting feature of near-field speckles is that their statistical properties are closely related to the form and structure of the scattering object: objects that scatter at high angles generate small near-field speckles, and vice versa. Under the Rayleigh–Gans condition, in particular, the speckle dimension mirrors the average dimension of the scattering objects, while, in general, the statistical properties of near-field speckles generated by a sample depend on the light scattering distribution.[5] [6] The condition under which near-field speckles appear has been described as more strict than the usual Fresnel condition.[7]
Applications
When lasers were first invented, the speckle effect was considered to be a severe drawback in using lasers to illuminate objects, particularly in holographic imaging because of the grainy image produced. It was later realized that speckle patterns could carry information about the object's surface deformations, and this effect is exploited in holographic interferometry and electronic speckle pattern interferometry. The speckle effect is also used in stellar speckle astronomy, speckle imaging and in eye testing using speckle. Speckle is the chief limitation of coherent imaging in optical heterodyne detection. In the case of near-field speckles, the statistical properties depend on the light scattering distribution of a given sample. This allows near-field speckle analysis to be used as a way to measure the scattering distribution; this is the so-called near-field scattering technique.[8]
Reduction
Speckle is considered to be a problem in laser-based display systems such as laser TV. Speckle is usually quantified by the speckle contrast. Speckle contrast reduction is essentially the creation of many independent speckle patterns, so that they average out on the retina/detector. This can be achieved by:[9]
Angle diversity: illumination from different angles.
Polarization diversity: use of different polarization states.
Wavelength diversity: use of laser sources that differ in wavelength by a small amount.
Rotating diffusers, which destroy the spatial coherence of the laser light, can also be used to reduce speckle. Moving/vibrating screens may also be solutions; the Mitsubishi laser TV appears to use such a screen, which requires special care according to its product manual. Synthetic array heterodyne detection was developed to reduce speckle noise in coherent optical imaging and coherent DIAL LIDAR.
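The effect of averaging independent speckle patterns can be illustrated numerically. A minimal sketch (not from the original article), assuming fully developed speckle, whose complex field is circular Gaussian so that the intensity follows a negative-exponential distribution with unit contrast; averaging N independent patterns reduces the contrast by roughly 1/sqrt(N):

```python
import numpy as np

def speckle_intensity(shape, rng):
    """Fully developed speckle: the complex field is circular Gaussian,
    so the intensity follows a negative-exponential distribution."""
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    return np.abs(field) ** 2

def speckle_contrast(intensity):
    """Speckle contrast C = (standard deviation of I) / (mean of I)."""
    return intensity.std() / intensity.mean()

rng = np.random.default_rng(0)
shape = (512, 512)

single = speckle_intensity(shape, rng)
print(f"single pattern: C = {speckle_contrast(single):.3f}")

# Averaging N independent patterns (e.g. via angle, polarization or
# wavelength diversity) reduces the contrast by roughly 1/sqrt(N).
N = 16
averaged = np.mean([speckle_intensity(shape, rng) for _ in range(N)], axis=0)
print(f"average of {N} patterns: C = {speckle_contrast(averaged):.3f}")
```

For N = 16 the measured contrast comes out close to 0.25, i.e. a fourfold reduction.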
See also
Optical heterodyne detection
Diffusing-wave spectroscopy
Speckle noise
External links
Seeing speckle in your fingernail [1]
Research group on light scattering and photonic materials [10]
Ph.D. thesis: D. Brogioli, "Near Field Speckles" [11]
References
[1] http://www.sciencenewsforkids.org/pages/puzzlezone/muse/muse0705.asp/
[2] This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm) (in support of MIL-STD-188).
[3] McKechnie, T.S. 1976. Image-plane speckle in partially coherent illumination. Optical and Quantum Electronics 8:61-67.
[4] Dainty C (Ed), Laser Speckle and Related Phenomena, 1984, Springer Verlag, ISBN 0387131698
[5] M. Giglio, M. Carpineti and A. Vailati, 2000, "Space intensity correlations in the near field of the scattered light: a direct measurement of the density correlation function", Phys. Rev. Lett. 85: 1416-1419
[6] M. Giglio, M. Carpineti, A. Vailati and D. Brogioli, 2001, Near-field intensity correlations of scattered light, Appl. Opt. 40: 4036
[7] R. Cerbino, Correlations of light in the deep Fresnel region: an extended van Cittert and Zernike theorem, Phys. Rev. A 75 5: 53815-1-4
[8] D. Brogioli, A. Vailati and M. Giglio, 2002, Heterodyne near-field scattering, Appl. Phys. Lett. 81 22: 4109
[9] Jahja I. Trisnadi, 2002. Speckle contrast reduction in laser projection displays. Proc. SPIE Vol. 4657, p. 131-137, Projection Displays VIII.
[10] http://luxrerum.icmm.csic.es/?q=node/research/interference/
[11] http://www.geocities.com/dbrogioli/nfs_phd
Coherence (physics)
In physics, coherence is a property of waves that enables stationary (i.e. temporally and spatially constant) interference. More generally, coherence describes all properties of the correlation between physical quantities of a wave. When interfering, two waves can add together to create a larger wave (constructive interference) or subtract from each other to create a smaller wave (destructive interference), depending on their relative phase. Two waves are said to be coherent if they have a constant relative phase. The degree of coherence is measured by the interference visibility, a measure of how perfectly the waves can cancel due to destructive interference.
Introduction
Coherence was originally introduced in connection with Young's double-slit experiment in optics but is now used in any field that involves waves, such as acoustics, electrical engineering, neuroscience, and quantum physics. The property of coherence is the basis for commercial applications such as holography, the Sagnac gyroscope, radio antenna arrays, optical coherence tomography and telescope interferometers (astronomical optical interferometers and radio telescopes).
Coherence applies to many kinds of waves:
Radio waves and microwaves
Light waves (optics)
Electrons, atoms and any other object (as described by quantum physics)
In most of these systems, one can measure the wave directly. Consequently, its correlation with another wave can simply be calculated. However, in optics one cannot measure the electric field directly, as it oscillates much faster than any detector's time resolution.[6] Instead, we measure the intensity of the light. Most of the concepts involving coherence which will be introduced below were developed in the field of optics and then used in other fields. Therefore, many of the standard measurements of coherence are indirect measurements, even in fields where the wave can be measured directly.
Temporal coherence
Temporal coherence is the measure of the average correlation between the value of a wave and itself delayed by τ, at any pair of times. Temporal coherence tells us how monochromatic a source is. In other words, it characterizes how well a wave can interfere with itself at a different time. The delay over which the phase or amplitude wanders by a significant amount (and hence the correlation decreases by a significant amount) is defined as the coherence time τc. At a delay of τ = 0 the degree of coherence is perfect, whereas it drops significantly as the delay approaches τc. The coherence length Lc is defined as the distance the wave travels in time τc.
Figure 1: The amplitude of a single-frequency wave as a function of time t (red) and a copy of the same wave delayed by τ (green). The coherence time of the wave is infinite since it is perfectly correlated with itself for all delays τ.
One should be careful not to confuse the coherence time with the time duration of the signal, nor the coherence length with the coherence area (see below).
Figure 2: The amplitude of a wave whose phase drifts significantly in time τc as a function of time t (red) and a copy of the same wave delayed by 2τc (green). At any particular time t the wave can interfere perfectly with its delayed copy. But, since half the time the red and green waves are in phase and half the time out of phase, when averaged over t any interference disappears at this delay.
Formally, this follows from the convolution theorem in mathematics, which relates the Fourier transform of the power spectrum (the intensity of each frequency) to its autocorrelation.
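The relation just mentioned is the Wiener-Khinchin theorem, and it can be checked numerically: the inverse Fourier transform of the power spectrum reproduces the signal's (circular) autocorrelation. A small illustrative sketch, not from the original article; the test signal and frequencies are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# A finite-bandwidth test signal: a few cosines with random phases.
n = 4096
t = np.arange(n)
signal = sum(np.cos(2 * np.pi * f * t / n + rng.uniform(0, 2 * np.pi))
             for f in (50, 53, 57, 60))

# Circular autocorrelation computed directly for the first 64 lags.
direct = np.array([np.dot(signal, np.roll(signal, k)) for k in range(64)])

# The same autocorrelation via the power spectrum (Wiener-Khinchin):
# inverse FFT of |FFT(signal)|^2.
power_spectrum = np.abs(np.fft.fft(signal)) ** 2
via_spectrum = np.fft.ifft(power_spectrum).real[:64]

print(np.allclose(direct, via_spectrum))  # True
```

The two computations agree to floating-point precision, which is exactly the statement that the power spectrum and the autocorrelation are a Fourier-transform pair.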
Figure 3: The amplitude of a wavepacket whose amplitude changes significantly in time τc (red) and a copy of the same wave delayed by 2τc (green), plotted as a function of time t. At any particular time the red and green waves are uncorrelated; one oscillates while the other is constant, and so there will be no interference at this delay. Another way of looking at this is that the wavepackets are not overlapped in time, and so at any particular time there is only one nonzero field, so no interference can occur.
Spatial coherence
Figure 4: The time-averaged intensity (blue) detected at the output of an interferometer plotted as a function of delay for the example waves in Figures 2 and 3. As the delay is changed by half a period, the interference switches between constructive and destructive. The black lines indicate the interference envelope, which gives the degree of coherence. Although the waves in Figures 2 and 3 have different time durations, they have the same coherence time.
In some systems, such as water waves or optics, wave-like states can extend over one or two dimensions. Spatial coherence describes the ability of two points in space, x1 and x2, in the extent of a wave to interfere, when averaged over time. More precisely, the spatial coherence is the cross-correlation between two points in a wave for all times. If a wave has only a single value of amplitude over an infinite length, it is perfectly spatially coherent. The range of separation between the two points over which there is significant interference is called the coherence area, Ac. This is the relevant type of coherence for Young's double-slit interferometer. It is also used in optical imaging systems and particularly in various types of astronomy telescopes. Sometimes people also use spatial coherence to refer to the visibility when a wave-like state is combined with a spatially shifted copy of itself.
Figure 6: A wave with a varying profile (wavefront) and infinite coherence length.
Figure 7: A wave with a varying profile (wavefront) and finite coherence length.
Figure 8: A wave with finite coherence area is incident on a pinhole (small aperture). The wave will diffract out of the pinhole. Far from the pinhole the emerging spherical wavefronts are approximately flat. The coherence area is now infinite while the coherence length is unchanged.
Figure 9: A wave with infinite coherence area is combined with a spatially-shifted copy of itself. Some sections in the wave interfere constructively and some will interfere destructively. Averaging over these sections, a detector with length D will measure reduced interference visibility. For example, a misaligned Mach-Zehnder interferometer will do this.
Consider a tungsten light-bulb filament. Different points in the filament emit light independently and have no fixed phase relationship. In detail, at any point in time the profile of the emitted light is going to be distorted, and the profile will change randomly over the coherence time τc. Since τc for a white-light source such as a light bulb is small, the filament is considered a spatially incoherent source. In contrast, a radio antenna array has large spatial coherence because antennas at opposite ends of the array emit with a fixed phase relationship. Light waves produced by a laser often have high temporal and spatial coherence (though the degree of coherence depends strongly on the exact properties of the laser). Spatial coherence of laser beams also manifests itself as speckle patterns and as the diffraction fringes seen at the edges of shadows. Holography requires temporally and spatially coherent light. Its inventor, Dennis Gabor, produced successful holograms more than ten years before lasers were invented. To produce coherent light he passed the monochromatic light from an emission line of a mercury-vapor lamp through a pinhole spatial filter.
Spectral coherence
Waves of different frequencies (in light these are different colours) can interfere to form a pulse if they have a fixed relative phase relationship (see Fourier transform). Conversely, if waves of different frequencies are not coherent, then, when combined, they create a wave that is continuous in time (e.g. white light or white noise). The temporal duration Δt of the pulse is limited by the spectral bandwidth Δν of the light according to

Δν Δt ≥ 1,

which follows from the properties of the Fourier transform (for quantum particles it also results in the Heisenberg uncertainty principle).
Figure 10: Waves of different frequencies (i.e. colors) interfere to form a pulse if they are coherent.
Figure 11: Spectrally incoherent light interferes to form continuous light with a randomly varying phase and amplitude.
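This inverse relationship between pulse duration and bandwidth can be demonstrated with a transform-limited Gaussian pulse. A sketch not from the original article, with all numbers illustrative; note that with RMS widths the Gaussian minimum of the product is 1/(4π) ≈ 0.08, while with FWHM-style definitions the product is of order one, as stated above:

```python
import numpy as np

def rms_width(x, weights):
    """RMS width of a non-negative distribution `weights` over axis `x`."""
    p = weights / weights.sum()
    mean = (x * p).sum()
    return np.sqrt(((x - mean) ** 2 * p).sum())

n, dt = 1 << 14, 1e-3                      # 16384 samples, 1 ms apart
t = (np.arange(n) - n // 2) * dt
freqs = np.fft.fftshift(np.fft.fftfreq(n, dt))

for sigma_t in (0.05, 0.1, 0.2):
    # Transform-limited Gaussian pulse on a 100 Hz carrier.
    pulse = np.exp(-t**2 / (2 * sigma_t**2)) * np.cos(2 * np.pi * 100 * t)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(pulse))) ** 2
    pos = freqs > 0                        # keep the positive-frequency lobe
    dt_rms = rms_width(t, pulse ** 2)
    dnu_rms = rms_width(freqs[pos], spectrum[pos])
    print(f"sigma_t = {sigma_t}: dt * dnu = {dt_rms * dnu_rms:.3f}")
```

Whatever the pulse duration, the product of RMS duration and RMS bandwidth stays pinned near the Gaussian minimum of 1/(4π): a shorter pulse needs a proportionally wider spectrum.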
If the phase depends linearly on the frequency, then the pulse will have the minimum time duration allowed by its bandwidth (a transform-limited pulse).
Polarization coherence
Light also has a polarization, which is the direction in which the electric field oscillates. Unpolarized light is composed of two equally intense, mutually incoherent light waves with orthogonal polarizations. The electric field of unpolarized light wanders in every direction and changes in phase over the coherence time of the two light waves. A polarizer rotated to any angle will always transmit half the incident intensity when averaged over time. If the electric field wanders by a smaller amount, the light will be partially polarized, so that at some angle the polarizer will transmit more than half the intensity. If a wave is combined with an orthogonally polarized copy of itself delayed by less than the coherence time, partially polarized light is created. The polarization of a light beam is represented by a vector on the Poincaré sphere. For polarized light the end of the vector lies on the surface of the sphere, whereas the vector has zero length for unpolarized light. The vector for partially polarized light lies within the sphere.
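The statement that a polarizer always transmits half of unpolarized light on average can be illustrated with Malus's law, modelling unpolarized light as a field whose polarization angle is random from one coherence time to the next (an illustrative sketch, not from the original article):

```python
import numpy as np

rng = np.random.default_rng(2)

# Model unpolarized light as a field whose polarization angle is
# uniformly random from one coherence time to the next.
angles = rng.uniform(0.0, np.pi, size=200_000)

for polarizer_angle in (0.0, np.pi / 4, np.pi / 3):
    # Malus's law: the transmitted intensity fraction is cos^2 of the
    # angle between the field's polarization and the polarizer axis.
    transmitted = np.cos(angles - polarizer_angle) ** 2
    print(f"polarizer at {polarizer_angle:.2f} rad: "
          f"mean transmission = {transmitted.mean():.3f}")
```

The time-averaged transmission comes out near 0.5 regardless of the polarizer angle, which is exactly why a rotating polarizer cannot distinguish one orientation from another for unpolarized light.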
Applications
Holography
Holography relies on coherent superpositions of optical wave fields. Holographic objects are used frequently in daily life, for example in bank notes and credit cards.
Quantum coherence
Large-scale (macroscopic) quantum coherence leads to novel phenomena. For instance, the laser, superconductivity, and superfluidity are examples of highly coherent quantum systems. One example that shows the remarkable possibilities of macroscopic quantum coherence is the Schrödinger's cat thought experiment. Another example of quantum coherence is a Bose-Einstein condensate. Here, all the atoms that make up the condensate are in phase; they are thus necessarily all described by a single quantum wavefunction.
See also
Atomic coherence
Coherence length
Coherence width
Coherent state
Optical heterodyne detection
Quantum decoherence
Quantum Zeno effect
Measurement problem
Measurement in quantum mechanics
References
[1] Rolf G. Winter; Aephraim M. Steinberg. Coherence (http://www.accessscience.com). AccessScience@McGraw-Hill. doi:10.1036/1097-8542.146900.
[2] M. Born; E. Wolf (1999). Principles of Optics (7th ed.).
[3] Loudon, Rodney (2000). The Quantum Theory of Light. Oxford University Press. ISBN 0-19-850177-3.
[4] Leonard Mandel (1995). Optical Coherence and Quantum Optics. Cambridge University Press. ISBN 0521417112.
[5] Arvind Marathay (1982). Elements of Optical Coherence Theory. John Wiley & Sons Inc. ISBN 0471567892.
[6] http://www.springerlink.com/content/l80m18140241381l/
Coherence length
In physics, coherence length is the propagation distance from a coherent source to a point where an electromagnetic wave maintains a specified degree of coherence. The significance is that interference will be strong within a coherence length of the source, but not beyond it. This concept is also commonly used in telecommunication engineering. In long-distance transmission systems, the coherence length may be reduced by propagation factors such as dispersion, scattering, and diffraction. In radio-band systems, the coherence length is approximated by

L = c / (n Δf),

where c is the speed of light in a vacuum, n is the refractive index of the medium, and Δf is the bandwidth of the source. In optical communications, the coherence length is given by

L = λ² / (n Δλ),

where λ is the central wavelength of the source and Δλ is the spectral width of the source. Coherence length is usually applied to the optical regime. The expression above is a frequently used approximation. Due to ambiguities in the definition of spectral width of a source, however, the following definition of coherence length has been suggested: the coherence length can be measured using a Michelson interferometer and is the optical path length difference of a self-interfering laser beam which corresponds to a specified fringe visibility,[1] where the fringe visibility is defined as

V = (Imax - Imin) / (Imax + Imin),

where Imax and Imin are the maximum and minimum intensities of the interference fringes.
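The two approximations above are easy to evaluate. A small sketch; the example numbers (a 1 MHz-wide radio source, a 1550 nm laser with 0.1 nm linewidth) are illustrative choices, not taken from the text:

```python
# Coherence length approximations from the text, in SI units.
c = 299_792_458.0  # speed of light in vacuum, m/s

def coherence_length_rf(delta_f, n=1.0):
    """Radio-band approximation: L = c / (n * delta_f)."""
    return c / (n * delta_f)

def coherence_length_optical(wavelength, delta_lambda, n=1.0):
    """Optical approximation: L = wavelength**2 / (n * delta_lambda)."""
    return wavelength ** 2 / (n * delta_lambda)

# A 1 MHz-wide radio source in vacuum: coherence length of about 300 m.
print(coherence_length_rf(1e6))

# A 1550 nm laser with 0.1 nm spectral width: about 24 mm.
print(coherence_length_optical(1550e-9, 0.1e-9))
```

The two formulas are consistent with each other, since Δf = c Δλ / λ² relates the spectral widths in frequency and in wavelength.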
Multimode helium-neon lasers have a typical coherence length of 20 cm, while semiconductor lasers reach some 100 m. Fiber lasers can have coherence lengths exceeding 100 km.
See also
Coherence time
Laser
References
[1] Ackermann, Gerhard K. (2007). Holography: A Practical Approach. Wiley-VCH. ISBN 3527406638.
This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm) (in support of MIL-STD-188).
Holographic memory
Holographic data storage is a potential replacement technology in the area of high-capacity data storage currently dominated by magnetic and conventional optical data storage. Magnetic and optical data storage devices rely on individual bits being stored as distinct magnetic or optical changes on the surface of the recording medium. Holographic data storage overcomes this limitation by recording information throughout the volume of the medium and is capable of recording multiple images in the same area utilizing light at different angles. Additionally, whereas magnetic and optical data storage records information a bit at a time in a linear fashion, holographic storage is capable of recording and reading millions of bits in parallel, enabling data transfer rates greater than those attained by optical storage.[1]
Recording data
Holographic data storage captures information using an optical interference pattern within a thick, photosensitive optical material. Light from a single laser beam is divided into two separate optical patterns of dark and light pixels. By adjusting the reference beam angle, wavelength, or media position, a multitude of holograms (theoretically, several thousand) can be stored in a single volume. The theoretical limit for the storage density of this technique is approximately several tens of terabytes (1 terabyte = 1024 gigabytes) per cubic centimeter. In 2006, InPhase Technologies published a white paper reporting an achievement of 500 Gb/in². From this figure we can deduce that a regular disc (with a 4 cm radius of writing area) could hold up to a maximum of 3895.6 Gb.[2]
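The 3895.6 Gb figure follows directly from the quoted areal density and the disc geometry; a quick arithmetic check:

```python
import math

areal_density_gbit_per_in2 = 500.0   # InPhase figure quoted in the text
radius_cm = 4.0                      # writing-area radius of the disc
radius_in = radius_cm / 2.54         # centimetres to inches

area_in2 = math.pi * radius_in ** 2
capacity_gbit = areal_density_gbit_per_in2 * area_in2
print(f"{capacity_gbit:.1f} Gb")  # ~3895.6 Gb, matching the text
```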
Reading data
The stored data is read through the reproduction of the same reference beam used to create the hologram. The reference beam's light is focused on the photosensitive material, illuminating the appropriate interference pattern; the light diffracts on the interference pattern and projects the pattern onto a detector. The detector is capable of reading the data in parallel, over one million bits at once, resulting in a fast data transfer rate. Files on the holographic drive can be accessed in less than 200 milliseconds.[3]
Longevity
Holographic data storage can provide companies a method to preserve and archive information. The write-once, read-many (WORM) approach to data storage would ensure content security, preventing the information from being overwritten or modified. Manufacturers believe this technology can provide safe storage for content without degradation for more than 50 years, far exceeding current data storage options. A counterpoint to this claim is that data-reader technology changes roughly every ten years; being able to store data for 50-100 years would not matter if you could not read or access it.[3] However, a storage method that works very well could be around longer before needing a replacement; and with a replacement, the possibility of backwards compatibility exists, similar to how DVD technology is backwards-compatible with CD technology.
Terms used
Sensitivity refers to the extent of refractive index modulation produced per unit of exposure. Diffraction efficiency is proportional to the square of the index modulation times the effective thickness. The dynamic range determines how many holograms may be multiplexed in a single data volume. Spatial light modulators (SLM) are pixelated input devices (liquid crystal panels), used to imprint the data to be stored on the object beam.
Technical aspects
Like other media, holographic media is divided into write-once media (where the storage medium undergoes some irreversible change) and rewritable media (where the change is reversible). Rewritable holographic storage can be achieved via the photorefractive effect in crystals.

Mutually coherent light from two sources creates an interference pattern in the media. These two sources are called the reference beam and the signal beam. Where there is constructive interference the light is bright and electrons can be promoted from the valence band to the conduction band of the material (since the light has given the electrons energy to jump the energy gap). The positively charged vacancies they leave are called holes, and they must be immobile in rewritable holographic materials. Where there is destructive interference, there is less light and few electrons are promoted.

Electrons in the conduction band are free to move in the material. They will experience two opposing forces that determine how they move. The first force is the coulomb force between the electrons and the positive holes that they have been promoted from. This force encourages the electrons to stay put or move back to where they came from. The second is the pseudo-force of diffusion that encourages them to move to areas where electrons are less dense. If the coulomb forces are not too strong, the electrons will move into the dark areas.

Beginning immediately after being promoted, there is a chance that a given electron will recombine with a hole and move back into the valence band. The faster the rate of recombination, the fewer electrons have the chance to move into the dark areas. This rate affects the strength of the hologram. After some electrons have moved into the dark areas and recombined with holes there, there is a permanent space-charge field between the electrons that moved to the dark spots and the holes in the bright spots.
This leads to a change in the index of refraction due to the electro-optic effect.
When the information is to be retrieved or read out from the hologram, only the reference beam is necessary. The beam is sent into the material in exactly the same way as when the hologram was written. As a result of the index changes in the material that were created during writing, the beam splits into two parts. One of these parts recreates the signal beam where the information is stored. Something like a CCD camera can be used to convert this information into a more usable form. Holograms can theoretically store one bit per cubic block the size of the wavelength of the light used in writing. For example, light from a helium-neon laser is red, 632.8 nm wavelength light. Using light of this wavelength, perfect holographic storage could store 4 gigabits per cubic millimetre. In practice, the data density would be much lower, for at least four reasons:
The need to add error correction
The need to accommodate imperfections or limitations in the optical system
Economic payoff (higher densities may cost disproportionately more to achieve)
Design-technique limitations, a problem currently faced in magnetic hard drives, wherein magnetic domain configuration prevents manufacture of disks that fully utilize the theoretical limits of the technology.
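The 4 gigabits per cubic millimetre figure above can be checked by counting wavelength-sized cubes in a cubic millimetre, one bit per cube (a quick arithmetic sketch):

```python
wavelength_mm = 632.8e-6   # 632.8 nm expressed in millimetres

# One bit per wavelength-sized cube of the recording volume.
bits_per_mm3 = (1.0 / wavelength_mm) ** 3
print(f"{bits_per_mm3:.3e} bits per cubic millimetre")  # about 3.9e9, i.e. ~4 Gbit
```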
Unlike current storage technologies that record and read one data bit at a time, holographic memory writes and reads data in parallel in a single flash of light.[4]
Two-color recording
For two-color holographic recording, the reference and signal beams are fixed to a particular wavelength (green, red or IR) and the sensitizing/gating beam is a separate, shorter wavelength (blue or UV). The sensitizing/gating beam is used to sensitize the material before and during the recording process, while the information is recorded in the crystal via the reference and signal beams. It is shone intermittently on the crystal during the recording process for measuring the diffracted beam intensity.
Set-up for holographic recording
Readout is achieved by illumination with the reference beam alone. Hence the readout beam, with its longer wavelength, is not able to excite the recombined electrons from the deep trap centers during readout, as they need the sensitizing light with shorter wavelength to erase them. Usually, for two-color holographic recording, two different dopants are required to promote trap centers; these belong to transition metal and rare earth elements and are sensitive to certain wavelengths. By using two dopants, more trap centers would be created in the lithium niobate crystal, namely a shallow trap and a deep trap. The concept is to use the sensitizing light to excite electrons from the deep traps to the conduction band, and then have them recombine at the shallow traps nearer to the conduction band. The reference and signal beams are then used to excite the electrons from the shallow traps back to the deep traps. The information is hence stored in the deep traps. Reading is done with the reference beam, since the electrons can no longer be excited out of the deep traps by the long-wavelength beam.
Effect of annealing
For a doubly doped LiNbO3 crystal there exists an optimum oxidation/reduction state for desired performance. This optimum depends on the doping levels of shallow and deep traps as well as the annealing conditions for the crystal samples. This optimum state generally occurs when 95-98% of the deep traps are filled. In a strongly oxidized sample, holograms cannot be easily recorded and the diffraction efficiency is very low; this is because the shallow trap is completely empty and the deep trap is almost devoid of electrons. In a highly reduced sample, on the other hand, the deep traps are completely filled and the shallow traps are also partially filled. This results in very good sensitivity (fast recording) and high diffraction efficiency due to the availability of electrons in the shallow traps. However, during readout all the deep traps get filled quickly and the resulting holograms reside in the shallow traps, where they are totally erased by further readout. Hence, after extensive readout, the diffraction efficiency drops to zero and the hologram stored cannot be fixed.
See also
Holographic Versatile Card
Holographic Versatile Disc
Holographic associative memory
3D optical data storage
List of emerging technologies
Holography
External links
Howstuffworks [12]
Daewoo Electronics Develops the World's First High Accuracy Servo Motion Control System for Holographic Digital Data Storage (virtual prototype created with LabVIEW) [13]
Inphase [14]
Comparison of Two Approaches: Page-based and Bit-based HDS [15]
Maxell Holographic Media Press Release [16]
GE Global Research is developing terabyte discs and players that will work with old storage media [17]
Holography speaks volumes [18] - an Instant Insight [19] where Søren Hvilsted and colleagues explain how holograms could be the key to storing increasing amounts of information. From the Royal Society of Chemistry
References
[1] "Holographic data storage." (http://www.research.ibm.com/journal/rd/443/ashley.html). IBM Journal of Research and Development. Retrieved 2008-04-28.
[2] "High speed holographic data storage at 500 Gbit/in.²" (http://www.inphase-technologies.com/technology/whitepapers.asp?subn=2_3). Retrieved 2008-05-05.
[3] Robinson, T. (2005, June). The race for space. netWorker 9,2. Retrieved April 28, 2008 from ACM Digital Library.
[4] "Maxell Introduces the Future of Optical Storage Media With Holographic Recording Technology" (2005). Retrieved January 27, 2007 (http://www.maxell-usa.com/index.aspx?id=-5;0;158;0&a=read&pid=49)
[5] "Update: Aprilis Unveils Holographic Disk Media" (http://www.extremetech.com/article2/0,3973,600628,00.asp). 2002-10-08.
[6] "Holographic-memory discs may put DVDs to shame" (http://www.newscientist.com/article.ns?id=dn8370&feedId=online-news_rss20). New Scientist. 2005-11-24.
[7] "Aprilis to Showcase Holographic Data Technology" (http://www.enterprisestorageforum.com/technology/news/article.php/885351). 2001-09-18.
[8] Sander Olson (2002-12-09). "Holographic storage isn't dead yet" (http://www.geek.com/news/geeknews/2002Dec/bch20021209017652.htm).
[9] GE Unveils 500-GB, Holographic Disc Storage Technology (http://www.crn.com/storage/217200230;jsessionid=PCLSSR1JXVD1OQSNDLOSKHSCJUNN2JVN)
[10] "Could Holography Cure Nintendo's Storage Space Blues? News" (http://www.totalvideogames.com/Nintendo-Wii/news/Could-Holography-Cure-Nintendo039s-Storage-Space-Blues-13031.html).
[11] Inphase Technologies, Inc. (Longmont, CO, US) and Nintendo Co., Ltd. (Kyoto, JP) (2008-02-26). "Miniature Flexure Based Scanners For Angle Multiplexing Patent" (http://www.freepatentsonline.com/7336409.html).
[12] http://computer.howstuffworks.com/holographic-memory.htm
[13] http://sine.ni.com/csol/cds/item/vw/p/id/685/nid/124300
[14] http://www.inphase-technologies.com/technology/default.asp?subn=2_1
[15] http://www.media-tech.net/fileadmin/templates/resources/sc06/mtc06_keynote_day2_hesselink_yuzuru.pdf
[16] http://www.maxell-usa.com/index.aspx?id=-5;0;246;0&a=read&pid=100
[17] http://www.technologyreview.com/computing/21507
[18] http://www.rsc.org/Publishing/ChemTech/Volume/2009/09/holographic_data_storage.asp
[19] http://www.rsc.org/Publishing/ChemTech/Instant_insights.asp
Security hologram
Security holograms are very difficult to forge, because they are replicated from a master hologram whose production requires expensive, specialized and technologically advanced equipment. They are widely used in many banknotes around the world, in particular those of high denominations. They are also used in passports, credit and bank cards, as well as on quality products. Holograms are classified into different types according to the level of optical security incorporated in them during the process of master origination. The different classifications are described below:
A hologram on a Nokia mobile phone battery. This is intended to show the battery is 'original Nokia' and not a cheaper imitation.
2D / 3D "hologram" images
These are by far the most common type of hologram, and in fact they are not holograms in any true sense of the word. The term "hologram" has taken on a secondary meaning due to the widespread use of a multilayer image on credit cards and driver licenses. This type of "hologram" consists of two or more images stacked in such a way that each is alternately visible depending upon the angle of perspective of the viewer. The technology here is similar to the technology used for the past 50 years to make red safety night reflectors for bicycles, trucks, and cars.
A hologram label on a paper box, for security
These holograms (and therefore the artwork of these holograms) may be of two layers (i.e. with a background and a foreground) or three layers (with a background, a middle ground and a foreground). The matter of the middle ground, in the case of the two-layer holograms, is usually superimposed over the matter of the background of the hologram. These holograms display a unique multilevel, multi-colour effect. These images have one or two levels of flat graphics floating above or at the surface of the hologram. The matter in the background appears to be under or behind the hologram, giving the illusion of depth.
Dot matrix
These holograms have a maximum resolution of 10 micrometres per optical element and are produced on specialized machines making forgery difficult and expensive. To design optical elements, several algorithms are used to shape scattered radiation patterns.
Electron-beam lithography
These types of hologram are originated using highly sophisticated and very expensive electron-beam lithography systems; this is currently the most advanced origination technology. It allows the creation of surface holograms with a resolution of up to 0.1 micrometres (12,000 dpi). The technique requires the development of various algorithms for designing optical elements that shape scattered radiation patterns. This type of hologram offers features such as the viewing of four lasers at a single point, 2D/3D raster text, switch effects, 3D effects, concealed images, laser-readable text and true-colour images. The various kinds of features possible in security holograms are described below.
Concealed images
These usually take the form of very thin lines and contours. Concealed images can be seen under large-angle light diffraction, and at one particular angle only.
Guilloché patterns (high-resolution line patterns)
These are sets of thin lines of complicated geometry drawn with high resolution. The technology allows continuous visual changes of colour along each separate line.
Kinetic images
These can be seen when the conditions of hologram observation are changed. Turning or inclining the hologram allows the movement of certain features of the image to be studied.
Microtexts and nanotexts
Dot matrix holograms are capable of embedding microtext at various sizes. There are three types of microtext in holograms: high-contrast microtexts of size 50-150 micrometres; diffractive-grating-filled microtexts of size 50-150 micrometres; and low-contrast microtexts. Microtexts of sizes smaller than 50 micrometres are referred to as nanotext and can be observed only with a microscope.
CLR (Covert Laser Readable) images
Dot matrix holograms also support CLR imagery, where a simple laser device may be used to verify the hologram's authenticity. Computing CLR images is a complicated mathematical task that involves solving ill-posed problems. There are two types of CLR: dynamic CLR and multigrade CLR. Dynamic CLR is a set of CLR fragments that produce animated images on the screen as the control device moves along the hologram surface. Multigrade CLR images produce certain images on the screen of the controlling device which differ in the plus-one and minus-one orders of laser light diffraction. As a variant, a hidden image that is both negative and positive, in the plus-one and minus-one orders respectively, may be created.
Computer-synthesized 2D/3D and 3D images
This technology allows 2D/3D images to be combined with other security features (microtexts, concealed images, CLR, etc.); this combination effect cannot be achieved with any other traditional origination technology.
True colour images
True colour images are very effective decorative pictures. When synthesized by computer, they may include microtexts, hidden images, and other security features, yielding attractive, high-security holograms.
See also
Security printing
Holographic interferometry
Holographic interferometry (HI)[1] [2] is a technique which enables static and dynamic displacements of objects with optically rough surfaces to be measured to optical interferometric precision (i.e. to fractions of a wavelength of light). These measurements can be applied to stress, strain and vibration analysis, as well as to non-destructive testing. It can also be used to detect optical path length variations in transparent media, which enables, for example, fluid flow to be visualised and analysed. It can also be used to generate contours representing the form of a surface. Holography enables the light field scattered from an object to be recorded and replayed. If this recorded field is superimposed on the 'live field' scattered from the object, the two fields will be identical. If, however, a small deformation is applied to the object, the relative phases of the two light fields will alter, and it is possible to observe interference. This technique is known as live holographic interferometry. It is also possible to obtain fringes by making two recordings of the light field scattered from the object on the same recording medium. The reconstructed light fields may then interfere to give fringes which map out the displacement of the surface. This is known as 'frozen fringe' holography. The form of the fringe pattern is related to the changes in surface position or air density. Many methods of analysing such patterns automatically have been developed in recent years.
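As a small worked example of the fringe analysis described above: under the simplest geometry (illumination and observation both normal to the surface), each fringe in a double-exposure interferogram corresponds to half a wavelength of out-of-plane displacement. The function name and numbers below are illustrative, not from the text.

```python
# Convert a counted fringe order to out-of-plane surface displacement in
# double-exposure holographic interferometry. Assumes illumination and
# observation are both normal to the surface, so each fringe corresponds
# to lambda/2 of displacement.

def fringe_to_displacement(fringe_order, wavelength_nm):
    """Displacement (nm) for fringe order N: d = N * lambda / 2."""
    return fringe_order * wavelength_nm / 2.0

# Example: 10 fringes observed with a HeNe laser (632.8 nm)
d = fringe_to_displacement(10, 632.8)
print(f"displacement = {d:.1f} nm")  # 3164.0 nm, i.e. about 3.2 micrometres
```

This illustrates the "interferometric precision" claim: counting a handful of fringes already resolves displacements of a few micrometres to sub-wavelength accuracy.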
See also
Interferometry, Holography
External links
Holographic Interferometry (University of Edinburgh)[3]
Holographic Interferometry (University of Warwick)[4]
Holographic Interferometry (Rice University)[5]
Interferometry[6]
References
[1] Powell RL & Stetson KA, 1965, J. Opt. Soc. Am., 55, 1593-8
[2] Jones R & Wykes C, Holographic and Speckle Interferometry, 1989, Cambridge University Press
[3] http://www.ph.ed.ac.uk/~wjh/teaching/mo/slides/holo-interferometry/holo-inter.pdf
[4] http://www.eng.warwick.ac.uk/OEL/previous/interferometry.htm
[5] http://www.owlnet.rice.edu/~dodds/Files332/holography.pdf
[6] http://www.answers.com/topic/interferometry
Interferometric microscopy
Interferometric microscopy, or imaging interferometric microscopy, is a microscopy technique related to holography, synthetic-aperture imaging, and off-axis dark-field illumination. It enhances the resolution of optical microscopy through the interferometric (holographic) registration of several partial images (amplitude and phase) and their numerical combination.
Fig. 1. Optical arrangement for registering a single partial image for interferometric microscopy.[1]
Non-optical waves
Although interferometric microscopy has been demonstrated only for optical images (visible light), the technique may find application in high-resolution atom optics, i.e. the optics of neutral atom beams (see Atomic de Broglie microscope), where the numerical aperture is usually very limited.[5]
See also
Holography, Numerical Aperture, Diffraction limited
References
[1] Y. Kuznetsova; A. Neumann, S. R. Brueck (2007). "Imaging interferometric microscopy – approaching the linear systems limits of optical resolution" (http://www.opticsexpress.org/abstract.cfm?id=134719). Optics Express 15: 6651–6663. doi:10.1364/OE.15.006651.
[2] C. J. Schwarz; Y. Kuznetsova and S. R. J. Brueck (2003). "Imaging interferometric microscopy". Optics Letters 28 (16): 1424–1426. doi:10.1364/OL.28.001424. PMID 12943079.
[3] J. Hwang; M. M. Fejer, and W. E. Moerner (2003). "Scanning interferometric microscopy for the detection of ultrasmall phase shifts in condensed matter" (http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PLRAAN000073000002021802000001&idtype=cvips&gifs=yes). PRA 73: 021802. doi:10.1103/PhysRevA.73.021802.
[4] J. Hwang; M. M. Fejer, and W. E. Moerner (2004). "Scanning interferometric microscopy for the detection of ultrasmall phase shifts in condensed matter" (http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PLRAAN000073000002021802000001&idtype=cvips&gifs=yes). Optik 115: 94–96.
[5] D. Kouznetsov; H. Oberst, K. Shimizu, A. Neumann, Y. Kuznetsova, J.-F. Bisson, K. Ueda, S. R. J. Brueck (2006). "Ridged atomic mirrors and atomic nanoscope" (http://stacks.iop.org/0953-4075/39/1605). JOPB 39: 1605–1623. doi:10.1088/0953-4075/39/7/005.
Holonomic brain theory
These so-called "quantum minds" are still debated among scientists and philosophers, and there are actually a number of different theories, not just one, that have been suggested. Notable proponents of various quantum mind theories are philosopher David Chalmers and mathematical physicist Roger Penrose. Cosmologist Max Tegmark is a notable opponent of the various quantum mind theories. Tegmark wrote the well-known paper "Problem with Quantum Mind Theory"[4], which demonstrates certain problems with Chalmers' and Penrose's ideas on the subject.
See also
Consciousness Evolutionary neuroscience Gamma wave Holographic memory Holonomic Implicate and Explicate Order according to David Bohm Sensory integration dysfunction Wikibook on consciousness
References
Karen K. DeValois, Russell L. DeValois, and W. W. Yund, "Responses of Striate Cortex Cells to Grating and Checkerboard Patterns", Journal of Physiology, vol. 291, 483-505, 1979.
Russell L. DeValois and Karen K. DeValois, "Spatial vision", Ann. Rev. Psychol., 31, 309-41 (1980).
Paul Pietsch, "Shuffle Brain", Harper's, May 1972, online[5].
Paul Pietsch, Shufflebrain: The Quest for the Hologramic Mind, Houghton-Mifflin, 1981, ISBN 0-395-29480-0; 2nd edition 1996, online[6]: an in-depth but non-technical look at experiments on the neural hologram.
Karl H. Pribram, "The Implicate Brain", in B. J. Hiley and F. David Peat (eds), Quantum Implications: Essays in Honour of David Bohm, Routledge, 1987, ISBN 0-415-06960-2.
Karl H. Pribram, "Holonomic Brain Theory and Motor Gestalts: Recent Experimental Results" (1997).
Michael Talbot, The Holographic Universe, 1991, HarperCollins.
External links
"Holonomic brain theory"[7] – article in Scholarpedia by Karl Pribram, Georgetown University, Washington, DC
ACSA2000.net[8] – 'Comparison between Karl Pribram's "Holographic Brain Theory" and more conventional models of neuronal computation', Jeff Prideaux
NIH.gov[9] – 'Concept-matching in the brain depends on serotonin and gamma-frequency shifts', M. B. Bayly, Medical Hypotheses, Vol. 65, No. 1, pp. 149-51, 2005
ReutersHealth.com[10] – 'Celebrity photos prompt memory study breakthrough: Scientists at two California universities have isolated single neurons responsible for holding the memory of an image' (June 23, 2005)
ToeQuest.com[11] – 'Holonomic Brain Theory: Holographic Theory offers answers for two main paradoxes, Nature of mind and Non-locality'
TWM.co.nz[12] – 'The Holographic Brain: Karl Pribram, Ph.D. interview', Dr. Jeffrey Mishlove (1998)
References
[1] Pribram, 1987
[2] DeValois and DeValois, 1980
[3] Pribram, 1987
[4] http://www.sustainedaction.org/Explorations/problem_with_quantum_mind_theory.htm
[5] http://www.indiana.edu/~pietsch/shufflebrain.html
[6] http://www.indiana.edu/~pietsch/home.html
[7] http://www.scholarpedia.org/article/Holonomic_Brain_Theory
[8] http://www.acsa2000.net/bcngroup/jponkp/
[9] http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=pubmed&dopt=Abstract&list_uids=15893132&query_hl=1
[10] http://www.reutershealth.com/archive/2005/06/23/eline/links/20050623elin007.html
[11] http://www.toequest.com/forum/showthread.php?s=88e90cefda26ac1ea6440a97d7e4342f&p=2473#post2473
[12] http://twm.co.nz/pribram.htm
Holographic principle
The holographic principle is a property of quantum gravity and string theories which states that the description of a volume of space can be thought of as encoded on a boundary to the region, preferably a light-like boundary such as a gravitational horizon. First proposed by Gerard 't Hooft, it was given a precise string-theory interpretation by Leonard Susskind. In a larger and more speculative sense, the theory suggests that the entire universe can be seen as a two-dimensional information structure "painted" on the cosmological horizon, such that the three dimensions we observe are only an effective description at macroscopic scales and at low energies. Cosmological holography has not been made mathematically precise, partly because the cosmological horizon has a finite area and grows with time.[1] [2] The holographic principle was inspired by black hole thermodynamics, which implies that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected. In the case of a black hole, the insight was that the description of all the objects which have fallen in can be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory.[3]
…escape. If neighboring geodesics start moving toward each other they eventually collide, at which point their extension is inside the black hole. So the geodesics are always moving apart, and the number of geodesics which generate the boundary, and hence the area of the horizon, always increases. Hawking's result was called the second law of black hole thermodynamics, by analogy with the law of entropy increase, but at first he did not take the analogy too seriously. Hawking knew that if the horizon area were an actual entropy, black holes would have to radiate. When heat is added to a thermal system, the change in entropy is the increase in mass-energy divided by temperature: ΔS = ΔE / T.
If black holes have a finite entropy, they should also have a finite temperature. In particular, they would come to equilibrium with a thermal gas of photons. This means that black holes would not only absorb photons, but would also have to emit them in the right amount to maintain detailed balance. Time-independent solutions to field equations don't emit radiation, because a time-independent background conserves energy. Based on this principle, Hawking set out to show that black holes do not radiate. But, to his surprise, a careful analysis convinced him that they do, and in just the right way to come to equilibrium with a gas at a finite temperature. Hawking's calculation fixed the constant of proportionality at 1/4: the entropy of a black hole is one quarter of its horizon area in Planck units.[5] The entropy is the logarithm of the number of ways an object can be configured microscopically while leaving the macroscopic description unchanged. Black hole entropy is deeply puzzling: it says that the logarithm of the number of states of a black hole is proportional to the area of the horizon, not to the volume of the interior.[6]
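The quarter-of-horizon-area rule can be made concrete with a back-of-the-envelope calculation. The sketch below evaluates S = A / (4 l_p²), in units of Boltzmann's constant, for a Schwarzschild black hole; the function name and the one-solar-mass example are illustrative, and the constants are rounded SI values.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s

def bh_entropy(mass_kg):
    """Horizon entropy (in units of k_B): one quarter of the horizon area in Planck units."""
    r_s = 2 * G * mass_kg / c**2      # Schwarzschild radius
    area = 4 * math.pi * r_s**2       # horizon area A
    l_p2 = hbar * G / c**3            # Planck length squared
    return area / (4 * l_p2)

# A one-solar-mass black hole carries roughly 1e77 k_B of entropy.
print(f"{bh_entropy(1.989e30):.2e}")
```

The point of the exercise is the scaling: because S grows with A rather than with the enclosed volume, doubling the mass quadruples the entropy, which is the observation that motivates the holographic principle.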
…description. While short strings have zero entropy, he could identify long, highly excited string states with ordinary black holes. This was a deep advance because it revealed that strings have a classical interpretation in terms of black holes. This work showed that the black hole information paradox is resolved when quantum gravity is described in an unusual string-theoretic way. The space-time in quantum gravity should emerge as an effective description of the theory of oscillations of a lower-dimensional black-hole horizon. This suggested that any black hole with appropriate properties, not just strings, would serve as a basis for a description of string theory. In 1995, Susskind, along with collaborators Tom Banks, Willy Fischler, and Stephen Shenker, presented a formulation of the then-new M-theory using a holographic description in terms of charged point black holes, the D0 branes of type IIA string theory. The matrix theory they proposed was first suggested as a description of 2-branes in 11-dimensional supergravity by Bernard de Wit, Jens Hoppe, and Hermann Nicolai. The latter authors reinterpreted the same matrix models as a description of the dynamics of point black holes in particular limits. Holography allowed them to conclude that the dynamics of these black holes give a complete non-perturbative formulation of M-theory. In 1997, Juan Maldacena gave the first holographic description of a higher-dimensional object, the 3+1-dimensional type IIB membrane, which resolved a long-standing problem of finding a string description of a gauge theory. These developments simultaneously explained how string theory is related to quantum chromodynamics, and afterwards holography gained wide acceptance.
Unexpected connection
Bekenstein's topical overview "A Tale of Two Entropies" describes potentially profound implications of Wheeler's trend in part by noting a previously unexpected connection between the world of information theory and classical physics. This connection was first described shortly after the seminal 1948 papers of American applied mathematician Claude E. Shannon introduced today's most widely used measure of information content, now known as Shannon entropy. As an objective measure of the quantity of information, Shannon entropy has been enormously useful, as the design of all modern communications and data storage devices, from cellular phones to modems to hard disk drives and DVDs, all rely on Shannon entropy. In thermodynamics (the branch of physics dealing with heat), entropy is popularly described as a measure of the "disorder" in a physical system of matter and energy. In 1877 Austrian physicist Ludwig Boltzmann described it more precisely in terms of the number of distinct microscopic states that the particles composing a macroscopic "chunk" of matter could be in while still looking like the same macroscopic "chunk". As an example, for the air in a room, its thermodynamic entropy would equal the logarithm of the count of all the ways that the individual gas molecules could be distributed in the room, and all the ways they could be moving.
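The Shannon entropy referred to above has a simple closed form, H = -Σ p·log₂(p), measured in bits. A minimal sketch (the probability values are illustrative):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2 p), skipping zero-probability terms."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per toss; a biased coin carries less,
# which is why compressible data takes fewer bits to store.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # about 0.469
```

This is the same logarithm-of-microstate-count structure as Boltzmann's thermodynamic entropy described in the paragraph above, differing only in the base of the logarithm and a physical constant.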
See also
Bekenstein bound Brane cosmology Gravity as an entropic force Margolus–Levitin theorem Physical cosmology
References
Gerard 't Hooft's original 1993 paper: "Dimensional Reduction in Quantum Gravity"[12]
General: Bousso, Raphael (2002). "The holographic principle". Reviews of Modern Physics 74: 825–874. doi:10.1103/RevModPhys.74.825. arXiv:hep-th/0203101.
Citations
[1] Lloyd, Seth (2002-05-24). "Computational Capacity of the Universe" (http://link.aps.org/abstract/PRL/v88/e237901). Physical Review Letters 88 (23): 237901. doi:10.1103/PhysRevLett.88.237901. Retrieved 2008-03-14.
[2] Davies, Paul. "Multiverse Cosmological Models and the Anthropic Principle" (http://www.google.com/search?hl=en&lr=&as_qdr=all&q=holographic+everything+site:ctnsstars.org). CTNS. Retrieved 2008-03-14.
[3] Susskind, L., The Black Hole War – My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, Little, Brown and Company (2008).
[4] Bekenstein, Jacob D. (January 1981). "Universal upper bound on the entropy-to-energy ratio for bounded systems" (http://www.aeiveos.com/~bradbury/Authors/Computing/Bekenstein-JD/UUBotEtERfBS.html). Physical Review D 23 (215): 287–298. doi:10.1103/PhysRevD.23.287.
[5] Majumdar, Parthasarathi (1998). "Black Hole Entropy and Quantum Gravity". arXiv: General Relativity and Quantum Cosmology. arXiv:gr-qc/9807045.
[6] Bekenstein, Jacob D. (August 2003). "Information in the Holographic Universe – Theoretical results about black holes suggest that the universe could be like a gigantic hologram" (http://www.sciam.com/article.cfm?articleid=000AF072-4891-1F0A-97AE80A84189EEDF). Scientific American 17: p. 59. doi:10.1093/shm/17.1.145.
[7] except in the case of measurements, which the black hole should not be performing
[8] J. D. Brown and M. Henneaux (1986). "Central charges in the canonical realization of asymptotic symmetries: an example from three-dimensional gravity". Commun. Math. Phys. 104: 207-226.
[9] Information in the Holographic Universe (http://www.sciamdigital.com/index.cfm?fa=Products.ViewIssuePreview&ARTICLEID_CHAR=0E90201A-2B35-221B-6BBEB44296C90AAD)
[10] Hogan (2007). "Measurement of Quantum Fluctuations in Geometry". arXiv:0712.3419 [gr-qc].
[11] Chown, Marcus (15 January 2009). "Our world may be a giant hologram" (http://www.newscientist.com/article/mg20126911.300). New Scientist. Retrieved 2010-04-19.
[12] http://lanl.arxiv.org/abs/gr-qc/9310026
External links
UC Berkeley's Raphael Bousso gives an introductory lecture on the holographic principle – video (http://www.uctv.tv/search-details.asp?showID=11140)
Scientific American article on the holographic principle by Jacob Bekenstein (http://community.livejournal.com/ref_sciam/1190.html)
Volume hologram
Volume holograms are holograms where the thickness of the recording material is much larger than the light wavelength used for recording. In this case diffraction of light from the hologram is possible only as Bragg diffraction, i.e., the light has to have the right wavelength (color) and the wave must have the right shape (beam direction, wavefront profile). Volume holograms are also called "thick holograms" or "Bragg holograms".
Theory
Volume holograms were first treated by H. Kogelnik in 1969[1] with his "coupled-wave theory". For volume phase holograms it is possible to diffract 100% of the incoming reference light into the signal wave, i.e., full diffraction of light can be achieved. Volume absorption holograms show much lower efficiencies. Kogelnik provides analytical solutions for transmission as well as for reflection conditions. A good textbook description of the theory of volume holograms can be found in the book by J. Goodman.[2]
Bragg selectivity
In the case of a simple Bragg reflector the wavelength selectivity Δλ, where λ is the vacuum wavelength of the reading light, can be roughly estimated by Δλ/λ ≈ Λ/L, where Λ is the period length of the grating and L is the thickness of the grating. The assumption is just that the grating is not too strong, i.e., that the full length of the grating is used for light diffraction. Considering that because of the Bragg condition the simple relation λ = 2nΛ holds, where n is the refractive index of the material at this wavelength, one sees that for typical values one obtains a very small Δλ/λ, showing the extraordinary wavelength selectivity of such volume holograms. In the case of a simple grating in the transmission geometry the angular selectivity ΔΘ can be estimated as well: ΔΘ ≈ Λ/d, where d is the thickness of the holographic grating. Here Λ is given by the Bragg condition 2Λ sin Θ = λ/n. Using again typical values, one ends up with a very small ΔΘ, showing the impressive angular selectivity of volume holograms.
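The Bragg selectivity estimate can be evaluated numerically. The sketch below uses assumed illustrative values (a 500 nm reading wavelength, index 1.5, 100 µm grating thickness), not figures from the text, and applies the reflection-geometry relations Λ = λ/(2n) and Δλ/λ ≈ Λ/L.

```python
# Rough Bragg wavelength selectivity of a reflection volume hologram.
# All numbers are assumed illustrative values.

wavelength_nm = 500.0     # vacuum wavelength of the reading light
n = 1.5                   # refractive index of the recording material
thickness_um = 100.0      # grating thickness L

grating_period_nm = wavelength_nm / (2 * n)             # Bragg condition: Lambda = lambda / (2n)
selectivity = grating_period_nm / (thickness_um * 1e3)  # Delta_lambda / lambda ~ Lambda / L

print(f"Lambda = {grating_period_nm:.1f} nm")           # 166.7 nm
print(f"Delta_lambda/lambda ~ {selectivity:.1e}")       # ~1.7e-03
```

Even for this modest 100 µm thickness, Δλ ≈ 0.8 nm at 500 nm: the hologram reflects only a sub-nanometre spectral band, which is the selectivity the text describes as extraordinary.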
References
[1] H. Kogelnik (1969). "Coupled-wave theory for thick hologram gratings". Bell System Technical Journal 48: 2909.
[2] J. Goodman (2005). Introduction to Fourier optics. Roberts & Co Publishers.
[3] http://www.ondaxinc.com/ – Ondax, Inc.
[4] http://www.pdld.com/index.htm – PD LD, Inc.
[5] http://www.optigrate.com/ – Optigrate
Digital holography
Digital holography is the technology of acquiring and processing holographic measurement data, typically via a CCD camera or a similar device. In particular, this includes the numerical reconstruction of object data from the recorded measurement data, as distinct from an optical reconstruction, which reproduces an aspect of the object. Digital holography typically delivers three-dimensional surface or optical-thickness data. Different techniques are available in practice, depending on the intended purpose.[1]
Off-axis configuration
In the off-axis configuration, a small angle between the reference and the object beams is used. In this configuration, a single recorded digital hologram is sufficient to reconstruct the information defining the shape of the surface, allowing real-time imaging.
Multiplexing of holograms
Digital holograms can be numerically multiplexed and demultiplexed for efficient storage and transmission; amplitude and phase can be correctly recovered.[2] The numerical access to the optical wave characteristics (amplitude, phase, polarization) makes digital holography a very powerful method. Numerical optics can be applied to increase the depth of focus (numerical focalization) and to compensate for aberrations.[3] Wavelength multiplexing of holograms is also possible in digital holography, as in classical holography: it is possible to record on the same digital hologram interferograms obtained for different wavelengths[4] or different polarizations.[5]
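The "numerical focalization" mentioned above is commonly implemented with the angular-spectrum propagation method: the complex field is Fourier-transformed, multiplied by a propagation kernel for the chosen distance, and transformed back. This is a minimal numpy sketch; the function name and parameters are illustrative, not from any specific library.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Numerically refocus a sampled complex wavefield by the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)   # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=pixel_pitch)   # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    # Propagation kernel exp(i 2 pi z/lambda * sqrt(1 - (lambda fx)^2 - (lambda fy)^2));
    # evanescent components (negative argument) are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.exp(1j * 2 * np.pi * distance / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    kernel[arg < 0] = 0
    return np.fft.ifft2(np.fft.fft2(field) * kernel)

# Sanity check: propagating a field forward and then backward by the same
# distance should recover the original field (no evanescent loss here).
rng = np.random.default_rng(0)
u0 = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
u1 = angular_spectrum_propagate(u0, 633e-9, 5e-6, 1e-3)
u2 = angular_spectrum_propagate(u1, 633e-9, 5e-6, -1e-3)
```

Because the field is held numerically, "refocusing" to any plane is just a choice of `distance`, which is what lets digital holography extend depth of focus after acquisition.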
See also
Digital planar holography
Further reading
S. Grilli, P. Ferraro, S. De Nicola, A. Finizio, G. Pierattini, and R. Meucci (2001). "Whole optical wavefields reconstruction by digital holography"[14]. Optics Express 9: 294–302.
External links
Lyncée Tec[15] – DHM™ instruments for biomedical and metrologic applications
Phase Holographic Imaging[16] – digital holographic microscopy
References
[1] U. Schnars, W. Jüptner (2005). Digital Holography (http://www.springer.com/physics/optics/book/978-3-540-21934-7). Springer.
[2] M. Paturzo; P. Memmolo, L. Miccio, A. Finizio, P. Ferraro, A. Tulino, and B. Javidi (2008). "Numerical multiplexing and demultiplexing of digital holographic information for remote reconstruction in amplitude and phase" (http://www.opticsinfobase.org/abstract.cfm?URI=ol-33-22-2629). Optics Letters 33: 2629–2631. doi:10.1364/OL.33.002629.
[3] T. Colomb; F. Montfort, J. Kühn, N. Aspert, E. Cuche, A. Marian, F. Charrière, S. Bourquin, P. Marquet, and C. Depeursinge (2006). "Numerical parametric lens for shifting, magnification and complete aberration compensation in digital holographic microscopy" (http://josaa.osa.org/abstract.cfm?id=117928). Journal of the Optical Society of America A 23: 3177–3190. doi:10.1364/JOSAA.23.003177.
[4] J. Kühn; T. Colomb, F. Montfort, F. Charrière, Y. Emery, E. Cuche, P. Marquet, and C. Depeursinge (2007). "Real-time dual-wavelength digital holographic microscopy with a single hologram acquisition" (http://www.opticsexpress.org/abstract.cfm?id=137910). Optics Express 15: 7231724. doi:10.1364/OE.15.007231.
[5] T. Colomb; F. Dürr, E. Cuche, P. Marquet, H. Limberger, R.-P. Salathé, and C. Depeursinge (2005). "Polarization microscopy by use of digital holography: application to optical fiber birefringence measurements" (http://ao.osa.org/abstract.cfm?id=84638). Applied Optics 44: 4461–4469. doi:10.1364/AO.44.004461.
[6] M. Paturzo, F. Merola, S. Grilli, S. De Nicola, A. Finizio, and P. Ferraro (2008). "Super-resolution in digital holography by a two-dimensional dynamic phase grating". Optics Express 16: 17107–17118. http://dx.doi.org/10.1364/OE.16.017107.
[7] E. Lam; X. Zhang, H. Vo, T.-C. Poon, G. Indebetouw (2009). "Three-dimensional microscopy and sectional image reconstruction using optical scanning holography" (http://www.opticsinfobase.org/abstract.cfm?URI=ao-48-34-H113). Applied Optics 48: H113–H119. doi:10.1364/AO.48.00H113.
[8] X. Zhang; E. Lam, T.-C. Poon (2008). "Reconstruction of sectional images in holography using inverse imaging" (http://www.opticsinfobase.org/oe/abstract.cfm?uri=oe-16-22-17215). Optics Express 16: 17215–17226. doi:10.1364/OE.16.017215.
[9] P. Ferraro, S. Grilli, D. Alfieri, S. De Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano (2005). "Extended focused image in microscopy by digital holography". Optics Express 13: 6738–6749. http://dx.doi.org/10.1364/OPEX.13.006738.
[10] Y. Kuznetsova; A. Neumann, S. R. Brueck (2007). "Imaging interferometric microscopy – approaching the linear systems limits of optical resolution" (http://www.opticsexpress.org/abstract.cfm?id=134719). Optics Express 15: 6651–6663. doi:10.1364/OE.15.006651.
[11] C. J. Schwarz; Y. Kuznetsova and S. R. J. Brueck (2003). "Imaging interferometric microscopy" (http://www.ncbi.nlm.nih.gov/sites/entrez?cmd=Retrieve&db=PubMed&list_uids=12943079&dopt=Abstract). Optics Letters 28: 1424–1426. doi:10.1364/OL.28.001424.
[12] M. Paturzo; F. Merola, S. Grilli, S. De Nicola, A. Finizio, and P. Ferraro (2008). "Super-resolution in digital holography by a two-dimensional dynamic phase grating" (http://www.opticsinfobase.org/abstract.cfm?URI=oe-16-21-17107). Optics Express 16: 17107–17118. doi:10.1364/OE.16.017107.
[13] F. Shimizu; J. Fujita (March 2002). "Reflection-Type Hologram for Atoms" (http://prola.aps.org/abstract/PRL/v88/i12/e123201). Physical Review Letters 88 (12): 123201. doi:10.1103/PhysRevLett.88.123201.
[14] http://www.opticsinfobase.org/abstract.cfm?id=65208
[15] http://www.lynceetec.com
[16] http://www.phiab.com
Planar waveguide
Light can be confined in waveguides by a refractive index gradient. Light propagates in a core layer surrounded by cladding layers, with the materials selected so that the core refractive index Ncore is greater than that of the cladding, Nclad: Ncore > Nclad. Cylindrical waveguides (optical fibers) allow one-dimensional light propagation along the axis. Planar waveguides, fabricated by sequentially depositing flat layers of transparent materials with a proper refractive index gradient on a standard wafer, confine light in one direction (the z axis) and permit free propagation in the two others (the x and y axes). A light wave propagating through the core extends to some extent into both cladding layers. If the refractive index is modulated in the wave path, light of each given wavelength can be directed to a desired point. The DPH technology comprises the design and fabrication of holographic nano-structures inside a planar waveguide, providing light processing and control. There are many ways of modulating the core refractive index, the simplest of which is engraving the required pattern by nanolithography. The modulation is created by embedding a digital hologram on the lower or upper core surface, or on both. According to NOD, standard lithographic processes can be used, making mass production straightforward and inexpensive. Nanoimprinting could be another viable method of fabricating DPH patterns. Each DPH pattern is customized for a given application and computer-generated. It consists of numerous nano-grooves, each ~100 nm wide, positioned so as to provide maximum efficiency for a specific application. The devices are fabricated on standard wafers; a typical device is presented below (from the NOD web site). While the total number of nano-grooves is huge (~10^6), the typical size of DPH devices is on the millimetre scale.
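The guidance condition Ncore > Nclad above has two standard consequences that are easy to compute: the critical angle for total internal reflection at the core/cladding interface, and the numerical aperture of the guide. A minimal sketch with assumed illustrative indices (roughly a doped-silica core on a silica cladding):

```python
import math

# Guidance condition and numerical aperture for a step-index waveguide.
# The index values are assumed for illustration.
n_core, n_clad = 1.46, 1.45

assert n_core > n_clad, "light is only guided when Ncore > Nclad"

# Critical angle for total internal reflection at the core/cladding interface
theta_c = math.degrees(math.asin(n_clad / n_core))
# Numerical aperture: the sine of the half-angle of the acceptance cone
na = math.sqrt(n_core**2 - n_clad**2)
print(f"critical angle = {theta_c:.1f} deg, NA = {na:.3f}")
```

Even a 0.01 index step suffices to guide light, but it yields a shallow critical angle (above 83°) and a small acceptance cone, which is why the index profile of such guides must be controlled precisely.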
Nano-Optic Devices, LLC (NOD)[1] developed the DPH technology and applied it to fabricating nano-spectrometers and optical interconnects. There are numerous additional applications for DPH in integrated optics. The pictures below from the NOD[1] web site demonstrate a DPH structure (left) and a nano-spectrometer hologram for the visible band (right).
Further reading
Yankov, Vladimir, et al., "Digital Planar Holography and multiplexer/demultiplexer with discrete dispersion", Proc. SPIE, vol. 5246, pp. 608–620 (2003)
Yankov, Vladimir, et al., "Photonic bandgap quasi-crystals for integrated WDM devices", Proc. SPIE, vol. 4989, pp. 131–136 (2003)
References
[1] http://www.nanoopticdevices.com/index.htm
Integral imaging
Integral imaging is an autostereoscopic 3D display technique, meaning that it displays a 3D image without requiring special glasses on the part of the viewer. It achieves this by placing an array of microlenses (similar to a lenticular lens) in front of the image, where each lens looks different depending on the viewing angle. Thus rather than displaying a 2D image that looks the same from every direction, it reproduces a 4D light field, creating stereo images that exhibit parallax when the viewer moves. The concept was proposed in 1908 by Gabriel Lippmann, and to date it has found use largely in the related technique of lenticular printing of static images.
Description
An integral image consists of a tremendous number of closely packed, distinct micro-images that are viewed by an observer through an array of spherical convex lenses, one lens for every micro-image. The term "integral" comes from the integration of all the micro-images into a complete three-dimensional image through the lens array. This special type of lens array is known as a fly's-eye or integral lens array; see Fig. 1. When properly practiced, the result is stunning three-dimensional imagery which conveys a realism matched only by museum-quality holograms. Indeed, it has been demonstrated that an integral image can very accurately reproduce the wavefront that emanated from the original photographed or computer-generated subject, much like a hologram, but without the need for lasers to create the image; see Fig. 2. This allows the eyes to accommodate (focus) on foreground and background elements, something not possible with lenticular or barrier-strip methods. In addition to three-dimensional effects, elaborate animation effects can also be achieved in integral images, or even a combination of these effects.
Figure 1. Fly's eye lens sheet illustration; Okoshi, Academic Press 1976.
Figure 2: Integral image (Left) without lens; Enlargement (Center), note that each lens records its own unique picture; Integral image (Right) resolved through a matching lens from a particular viewing position; Roberts & Villums.
Sampling effect
Integral imaging is based on a principle known as the lens sampling effect. To achieve this effect, the thickness of the lens array sheet is chosen so that parallel incoming light rays generally focus on the opposing side of the array, which is typically flat; see Fig. 3 (right). This flat side is known as the focal plane, and it is at this plane that the micro-images are placed, one for every lens, side by side. Since each lenslet focuses to a point on the micro-image below, an observer can never view two spots within a micro-image simultaneously; just one spot at a time, depending on the angle at which the observer looks through the lens. For example, if there is an array of small white dots on an otherwise black background behind each lens at the focal plane, any given lens will appear either completely black or completely white, depending on whether the lens is focused on a white dot or on the black background; see Fig. 3 (left). The state of each lens will vary depending on the point of observation. If all the dots are precisely ordered in a pre-calculated way, a completely different composite image can be directed to each eye of an observer simultaneously, since each eye looks through the lens array at a different angle. The resolution of an integral image is therefore directly determined by the density of lenses in the array, since each lens effectively becomes a dot, or pixel (picture element), in the picture, with the visual state of each dot being a function of the viewing angle.
Figure 3: Sampling effect using a fly's eye lens array placed on a printed image of white dots on a black background; Left, Roberts. Sampling effect illustration; Okoshi, Academic Press, 1976.
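The sampling geometry can be sketched numerically. In this hedged illustration (the focal length and viewing angles are invented example values, not from the text, and a paraxial thin-lens model is assumed), the spot a viewer sees within a micro-image lies off the lens axis by roughly f·tan(θ):

```python
import math

# Illustrative sketch of the lens sampling effect, assuming a paraxial
# thin-lens model; the 3 mm focal length is a made-up example value.
def sampled_offset_mm(focal_length_mm, viewing_angle_deg):
    """Offset of the viewed spot from the lens axis at the focal plane."""
    return focal_length_mm * math.tan(math.radians(viewing_angle_deg))

# Straight-on viewing samples the on-axis spot; oblique viewing samples
# a different spot, which is why a lens can flip between black and white.
print(sampled_offset_mm(3.0, 0.0))
print(round(sampled_offset_mm(3.0, 10.0), 3))
```

Because each viewing angle selects a different spot, ordering the dots behind every lens in a pre-calculated way steers a different composite image to each eye.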
Integral photography
The first integral imaging method was Integral Photography. In this method the lens array is used to both record and play back a composite three-dimensional image. When an integral lens array sheet is brought into contact with a photographic emulsion at its focal plane, and an exposure is made of an illuminated object that is placed close to the lens side of the sheet, each individual lens (or pin-hole) will record its own unique micro-image of the object. The content of each micro-image changes slightly based on the position, or vantage point, of the lenslet on the array. In other words, the integral method produces a huge number of tiny, juxtaposed pictures behind the lens array onto the film. After development, the film is realigned with the lens sheet and a composite, spatial reconstruction of the object is re-created in front of the lens array, that can be viewed from arbitrary directions within a limited viewing angle.
Integral imaging paper [1]
Integral digital printing holds great promise. While the mass production of integral lens arrays remains limited, they will inevitably become widely accessible in the near future as the relevant replication technologies continue to evolve. Once available, these lenses, coupled with readily available digital interlacing and effects-generation software, will enable lithographic integral imagery to develop as an important advertising medium.
See also
Autostereoscopy Lenticular printing 3D display Stereoscopy
External links
Integral History [3]: comprehensive integral imaging history in PDF form.
Translation [4] of Lippmann's 1908 article
References
[1] http://www.opticsexpress.org/abstract.cfm?id=89306
[2] http://spie.org/x8756.xml
[3] ftp://ftp.umiacs.umd.edu/pub/aagrawal/HistoryOfIntegralImaging/Integral_History.pdf
[4] http://people.csail.mit.edu/fredo/PUBLI/Lippmann.pdf
Phase-coherent holography
Phase-coherent holography is a type of holography in which undiffracted beams are deflected phase-coherently.
Australian Holographics
Australian Holographics was started with the specific objective of producing high-quality large-format holograms. After two years of research and development, the company began commercial operations in 1991. Situated on 80 acres (320,000 m²) of rural farmland 25 miles (40 km) from Adelaide, the lab's facilities included a 5 × 6 metre vibration-isolation table in a studio with air-lock loading doors, large enough to drive a car onto the main table. The main CW (continuous-wave) laser was a 6 W argon laser built by Coherent Scientific. The company also used a 3-joule ruby pulse laser, built in collaboration with Professor Jesper Munch of the School of Chemistry and Physics at Adelaide University. The company mainly specialized in the production of large-format, white-light-viewable rainbow holograms, a type of holography originally invented in 1968 by Dr. Stephen Benton of MIT. In fact, while all rainbow holograms are white-light-viewable, the most commonly known application of the technique has been on reflective substrates such as PVC (polyvinyl chloride) and PET (polyethylene terephthalate), used widely on credit cards and in anti-counterfeiting applications on product labelling. Australian Holographics applied the principle to transmission rather than reflective viewing conditions. In 1992, Australian Holographics produced a 2 × 1 metre rainbow transmission hologram of a Mitsubishi station wagon, which was shown at the Holographics International '92 conference in London.
History
Australian Holographics Pty Ltd was incorporated in Adelaide, South Australia, in 1989 by Dr. David Brotherton Ratcliffe, at the time a Research Fellow in Physics in the School of Physical Sciences at Flinders University. The senior holographers working with Dr. Ratcliffe were initially Mr. Geoffrey Fox and subsequently Mark Trinne. In 1992, David Ratcliffe formed GEOLA Labs [1] in Vilnius, Lithuania, to concentrate on the manufacture of pulsed neodymium-YLF lasers. In May 1992, Mr. Simon Edhouse joined Australian Holographics as Marketing Manager, becoming General Manager later that year. The company then focused its attention on the international science museum community, selling large holograms to museums in Hong Kong, Singapore, Taipei and Japan. In 1993, Australian Holographics was commissioned by the Sunkung Corporation of South Korea to produce an exhibition of ten large-format holograms for Expo '93. In October 1993, David Ratcliffe relocated to Europe and handed operational control of the day-to-day running of the Adelaide studios to Simon Edhouse, who managed the marketing and operational aspects of production until the closure of the Australian facility in 1998.
In 1994, Australian Holographics produced a series of holographic billboards for the Singaporean military to promote the 'NS Men' (National Service Men) campaign. The holograms were of the rainbow transmission variety, enclosed in a compact viewing enclosure which housed a mirror to extend the light path for optimal viewing conditions. Also in 1994, Multi Cellular Media Pty Ltd, trading as Australian Holographics, signed a joint-venture agreement with the South Australian Museum, giving the company access to the Museum's vast collection of exhibits. One of the first projects undertaken by the new venture was a holographic diorama of extinct thylacines: a 1.6 × 1.1 metre rainbow transmission hologram of a family of thylacines. The holographic thylacines, shown standing on a rocky outcrop in a field of dry grass, portray the now-extinct animals as a family group, with the small thylacine pup protruding 50 cm in front of the holographic image plane.
The company also produced a 1.5 × 1.1 metre hologram of a Tyrannosaurus rex skull from the S.A. Museum's collection. In 1995, a large series of holograms was produced of satellites and space vehicles. The most notable of these was the giant 2.1 × 1.1 metre rainbow transmission hologram of the MIR space station, which showed a 2 × 3 metre scale model of MIR apparently floating high above the Earth. The model of the Earth used in this hologram was custom made by Adelaide artist John Haratsis; it measured 4 × 5 × 0.6 metres, resembling a thin slice of a much larger sphere.
In 1996, a 'Great White Shark' hologram was produced by the company from a 4.5 metre model made in Queensland by David Joffe. The resulting 1.5 x 1.1 metre rainbow transmission hologram would become the most popular of all the Australian Holographics stock images, being sold around the world to museums, private collections and tourist venues.
References
[1] http://www.geola.com
Holographic data storage
Holographic data storage is a potential replacement technology in the area of high-capacity data storage currently dominated by magnetic and conventional optical data storage. Magnetic and optical data storage devices rely on individual bits being stored as distinct magnetic or optical changes on the surface of the recording medium. Holographic data storage overcomes this limitation by recording information throughout the volume of the medium and is capable of recording multiple images in the same area utilizing light at different angles. Additionally, whereas magnetic and optical data storage records information a bit at a time in a linear fashion, holographic storage is capable of recording and reading millions of bits in parallel, enabling data transfer rates greater than those attained by optical storage.[1]
Recording data
Holographic data storage captures information using an optical interference pattern within a thick, photosensitive optical material. Light from a single laser beam is divided into two beams: the signal beam, which carries the data as an optical pattern of dark and light pixels, and the reference beam. By adjusting the reference beam angle, wavelength, or media position, a multitude of holograms (theoretically, several thousand) can be stored in a single volume. The theoretical limit for the storage density of this technique is approximately several tens of terabytes (1 terabyte = 1024 gigabytes) per cubic centimetre. In 2006, InPhase Technologies published a white paper reporting an achievement of 500 Gb/in². From this figure one can deduce that a regular disk (with a 4 cm radius of writing area) could hold up to a maximum of 3895.6 Gb.[2]
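The capacity figure deduced above can be reproduced with a short calculation; a sketch whose only inputs are the 500 Gb/in² density and the 4 cm radius from the text:

```python
import math

# Reproduce the deduction above: 500 Gb/in^2 areal density over a disk
# with a 4 cm radius of writing area.
areal_density_gb_per_in2 = 500
radius_cm = 4.0

area_in2 = math.pi * radius_cm ** 2 / 2.54 ** 2   # 1 in = 2.54 cm
capacity_gb = areal_density_gb_per_in2 * area_in2
print(round(capacity_gb, 1))  # ≈ 3895.6 Gb, matching the figure in the text
```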
Reading data
The stored data is read through reproduction of the same reference beam used to create the hologram. The reference beam's light is focused on the photosensitive material, illuminating the appropriate interference pattern; the light diffracts on the interference pattern and projects the pattern onto a detector. The detector is capable of reading the data in parallel, over one million bits at once, resulting in a fast data transfer rate. Files on the holographic drive can be accessed in less than 200 milliseconds.[3]
Longevity
Holographic data storage can provide companies a method to preserve and archive information. The write-once, read-many (WORM) approach to data storage would ensure content security, preventing the information from being overwritten or modified. Manufacturers believe this technology can provide safe storage for content without degradation for more than 50 years, far exceeding current data storage options. Counterpoints to this claim point out that data-reader technology typically changes every ten years; therefore, being able to store data for 50–100 years would not matter if it could not be read or accessed.[3] However, a storage method that works very well could be around longer before needing a replacement; moreover, a replacement could be backwards-compatible, similar to how DVD technology is backwards-compatible with CD technology.
Terms used
Sensitivity refers to the extent of refractive index modulation produced per unit of exposure.
Diffraction efficiency is proportional to the square of the index modulation times the effective thickness.
The dynamic range determines how many holograms may be multiplexed in a single data volume.
Spatial light modulators (SLM) are pixelated input devices (liquid crystal panels), used to imprint the data to be stored on the object beam.
Technical aspects
Like other media, holographic media is divided into write-once media (where the storage medium undergoes some irreversible change) and rewritable media (where the change is reversible). Rewritable holographic storage can be achieved via the photorefractive effect in crystals:

Mutually coherent light from two sources creates an interference pattern in the media. These two sources are called the reference beam and the signal beam. Where there is constructive interference the light is bright, and electrons can be promoted from the valence band to the conduction band of the material (since the light has given the electrons energy to jump the energy gap). The positively charged vacancies they leave are called holes, and they must be immobile in rewritable holographic materials. Where there is destructive interference, there is less light and few electrons are promoted.

Electrons in the conduction band are free to move in the material. They experience two opposing forces that determine how they move. The first is the Coulomb force between the electrons and the positive holes from which they were promoted; this force encourages the electrons to stay put or move back to where they came from. The second is the pseudo-force of diffusion, which encourages them to move to areas where electrons are less dense. If the Coulomb forces are not too strong, the electrons will move into the dark areas.

Beginning immediately after being promoted, there is a chance that a given electron will recombine with a hole and move back into the valence band. The faster the rate of recombination, the fewer electrons have the chance to move into the dark areas; this rate affects the strength of the hologram. After some electrons have moved into the dark areas and recombined with holes there, a permanent space-charge field remains between the electrons that moved to the dark spots and the holes in the bright spots.
This leads to a change in the index of refraction due to the electro-optic effect.
When the information is to be retrieved or read out from the hologram, only the reference beam is necessary. The beam is sent into the material in exactly the same way as when the hologram was written. As a result of the index changes in the material that were created during writing, the beam splits into two parts. One of these parts recreates the signal beam where the information is stored. Something like a CCD camera can be used to convert this information into a more usable form.
Holograms can theoretically store one bit per cubic block the size of the wavelength of the writing light. For example, light from a helium-neon laser is red, with a 632.8 nm wavelength. Using light of this wavelength, perfect holographic storage could store 4 gigabits per cubic millimetre. In practice, the data density would be much lower, for at least four reasons:
The need to add error correction
The need to accommodate imperfections or limitations in the optical system
Economic payoff (higher densities may cost disproportionately more to achieve)
Design technique limitations, a problem currently faced in magnetic hard drives, wherein magnetic domain configuration prevents manufacture of disks that fully utilize the theoretical limits of the technology.
Unlike current storage technologies that record and read one data bit at a time, holographic memory writes and reads data in parallel in a single flash of light.[4]
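The 4-gigabit figure follows directly from the wavelength: a cubic millimetre holds (1 mm / 632.8 nm)³ wavelength-sized cells. A quick check:

```python
# One bit per cube with sides of one wavelength (632.8 nm HeNe red):
wavelength_m = 632.8e-9
cells_per_mm3 = (1e-3 / wavelength_m) ** 3
print(round(cells_per_mm3 / 1e9, 2))  # ≈ 3.95, i.e. about 4 gigabits per mm^3
```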
Two-color recording
For two-color holographic recording, the reference and signal beams are fixed to a particular wavelength (green, red or IR) and the sensitizing/gating beam is a separate, shorter wavelength (blue or UV). The sensitizing/gating beam is used to sensitize the material before and during the recording process, while the information is recorded in the crystal via the reference and signal beams. It is shone intermittently on the crystal during the recording process for measuring the diffracted beam intensity. Readout is achieved by illumination with the reference beam alone; the readout beam, with its longer wavelength, is not able to excite the recombined electrons from the deep trap centers during readout, as they need the shorter-wavelength sensitizing light to erase them. Usually, for two-color holographic recording, two different dopants are required to provide trap centers; these belong to the transition metals and rare-earth elements and are sensitive to certain wavelengths. By using two dopants, more trap centers are created in the lithium niobate crystal: namely, a shallow trap and a deep trap. The concept is to use the sensitizing light to excite electrons from the deep trap, farther from the conduction band, up to the conduction band, and then let them recombine at the shallow traps nearer to the conduction band. The reference and signal beams are then used to excite the electrons from the shallow traps back to the deep traps, so the information is stored in the deep traps. Reading is done with the reference beam, since the
Figure: set-up for holographic recording.
electrons can no longer be excited out of the deep traps by the long-wavelength beam.
Effect of annealing
For a doubly doped LiNbO3 crystal there exists an optimum oxidation/reduction state for desired performance. This optimum depends on the doping levels of shallow and deep traps as well as the annealing conditions for the crystal samples, and it generally occurs when 95–98% of the deep traps are filled. In a strongly oxidized sample, holograms cannot be easily recorded and the diffraction efficiency is very low, because the shallow trap is completely empty and the deep trap is almost devoid of electrons. In a highly reduced sample, on the other hand, the deep traps are completely filled and the shallow traps are also partially filled. This results in very good sensitivity (fast recording) and high diffraction efficiency, due to the availability of electrons in the shallow traps. However, during readout all the deep traps get filled quickly, and the resulting holograms reside in the shallow traps, where they are totally erased by further readout. Hence, after extensive readout the diffraction efficiency drops to zero and the stored hologram cannot be fixed.
See also
Holographic Versatile Card
Holographic Versatile Disc
Holographic associative memory
3D optical data storage
List of emerging technologies
Holography
External links
Howstuffworks [12]
Daewoo Electronics Develops the World's First High Accuracy Servo Motion Control System for Holographic Digital Data Storage (virtual prototype created with LabVIEW) [13]
Inphase [14]
Comparison of Two Approaches: Page-based and Bit-based HDS [15]
Maxell Holographic Media Press Release [16]
GE Global Research is developing terabyte discs and players that will work with old storage media [17]
Holography speaks volumes [18], an Instant Insight [19] where Søren Hvilsted and colleagues explain how holograms could be the key to storing increasing amounts of information. From the Royal Society of Chemistry.
References
[1] "Holographic data storage" (http://www.research.ibm.com/journal/rd/443/ashley.html). IBM Journal of Research and Development. Retrieved 2008-04-28.
[2] "High speed holographic data storage at 500 Gbit/in²" (http://www.inphase-technologies.com/technology/whitepapers.asp?subn=2_3). Retrieved 2008-05-05.
[3] Robinson, T. (2005, June). "The race for space". netWorker 9(2). Retrieved April 28, 2008 from ACM Digital Library.
[4] "Maxell Introduces the Future of Optical Storage Media With Holographic Recording Technology" (2005). Retrieved January 27, 2007 (http://www.maxell-usa.com/index.aspx?id=-5;0;158;0&a=read&pid=49)
[5] "Update: Aprilis Unveils Holographic Disk Media" (http://www.extremetech.com/article2/0,3973,600628,00.asp). 2002-10-08.
[6] "Holographic-memory discs may put DVDs to shame" (http://www.newscientist.com/article.ns?id=dn8370&feedId=online-news_rss20). New Scientist. 2005-11-24.
[7] "Aprilis to Showcase Holographic Data Technology" (http://www.enterprisestorageforum.com/technology/news/article.php/885351). 2001-09-18.
[8] Sander Olson (2002-12-09). "Holographic storage isn't dead yet" (http://www.geek.com/news/geeknews/2002Dec/bch20021209017652.htm).
[9] "GE Unveils 500-GB, Holographic Disc Storage Technology" (http://www.crn.com/storage/217200230;jsessionid=PCLSSR1JXVD1OQSNDLOSKHSCJUNN2JVN)
[10] "Could Holography Cure Nintendo's Storage Space Blues?" (http://www.totalvideogames.com/Nintendo-Wii/news/Could-Holography-Cure-Nintendo039s-Storage-Space-Blues-13031.html).
[11] InPhase Technologies, Inc. (Longmont, CO, US) and Nintendo Co., Ltd. (Kyoto, JP) (2008-02-26). "Miniature Flexure Based Scanners For Angle Multiplexing Patent" (http://www.freepatentsonline.com/7336409.html).
Computer-generated holography
Overview
Holography is a technique originally invented by Hungarian physicist Dennis Gabor (1900–1979) to improve the resolving power of electron microscopes. An object is illuminated with a coherent (usually monochromatic) light beam; the scattered light is brought to interference with a reference beam from the same source, recording the interference pattern. CGH, as defined in the introduction, has broadly three tasks:
1. Computation of the virtual scattered wavefront
2. Encoding the wavefront data, preparing it for display
3. Reconstruction: modulating the interference pattern onto a coherent light beam by technological means, to transport it to the user observing the hologram
Note that it is not always justified to make a strict distinction between these steps; however, it helps to structure the discussion in this way.
Wavefront computation
Computer-generated holograms offer important advantages over optical holograms, since there is no need for a real object. Because of this, a breakthrough in three-dimensional display was expected when the first algorithms were reported in 1966.[2] Unfortunately, researchers soon realized that there are noticeable lower and upper bounds in terms of computational speed and of image quality and fidelity, respectively. Wavefront calculations are computationally very intensive; even with modern mathematical techniques and high-end computing equipment, real-time computation is tricky. There are many different methods for calculating the interference pattern for a CGH. Over the following 25 years, many methods for CGHs [3] [4] [5] [6] [7] [8] were proposed, in the fields of holographic information and computational reduction as well as in computational and quantization techniques. Among computational techniques, the reported algorithms can be categorized into two main concepts.
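As a concrete, heavily simplified illustration of wavefront computation, the sketch below uses a point-cloud approach: each object point contributes a spherical wave at the hologram plane, and the recordable fringe pattern is the intensity of its interference with a tilted plane reference wave. All numbers (sample pitch, point positions, reference angle) are invented for illustration and are not from the text:

```python
import numpy as np

wavelength = 632.8e-9            # metres (HeNe red, an assumed choice)
k = 2 * np.pi / wavelength
pitch = 10e-6                    # hologram-plane sample pitch (assumed)
n = 256                          # hologram sampled as n x n points
xs = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# Hypothetical object: three point sources at (x, y, z) in metres.
points = [(0.0, 0.0, 0.05), (1e-4, -1e-4, 0.06), (-2e-4, 1e-4, 0.055)]

# Task 1: sum the spherical wavefronts scattered by the object points.
field = np.zeros((n, n), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r

# Interfere with a plane reference wave tilted by 1 degree to obtain the
# real-valued fringe pattern that tasks 2 and 3 would encode and display.
reference = np.exp(1j * k * X * np.sin(np.radians(1.0)))
pattern = np.abs(field + reference) ** 2
```

Even this toy version hints at the computational cost: every object point requires a full hologram-sized evaluation, which is why reduction and quantization techniques attracted so much work.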
Reconstruction
The third (technical) issue is beam modulation and actual wavefront reconstruction. Masks may be printed, often resulting in a grained pattern structure, since most printers can make only dots (although very small ones). Films may be developed by laser exposure. Holographic displays are still a challenge (as of 2008), although successful prototypes have been built. An ideal display for computer-generated holograms would consist of pixels smaller than a wavelength of light with adjustable phase and brightness. Such displays have been called phased array optics.[22] Further progress in nanotechnology is required to build them.
References
[1] Ch. Slinger, C. Cameron, M. Stanley (Aug. 2005). "Computer-Generated Holography as a Generic Display Technology". Computer (IEEE).
[2] B. R. Brown, A. W. Lohmann (1966). "Complex spatial filtering with binary masks" (http://www.opticsinfobase.org/abstract.cfm?URI=ao-5-6-967). Appl. Opt. (OSA) 5: 967ff. doi:10.1364/AO.5.000967.
[3] L. B. Lesem, P. M. Hirsch, and J. A. Jordan (1968). "Computer synthesis of holograms for 3-D display" (http://portal.acm.org/citation.cfm?id=364111). Commun. ACM 11: 661–674.
[4] L. B. Lesem, P. M. Hirsch, and J. A. Jordan (1969). "The Kinoform: A New Wavefront Reconstruction Device" (http://www.research.ibm.com/journal/rd/132/lesem.pdf). IBM Journal of Research and Development 13: 150–155.
[5] W. H. Lee (1970). "Sampled Fourier Transform Hologram Generated by Computer" (http://www.opticsinfobase.org/abstract.cfm?URI=ao-9-3-639). Appl. Opt. (OSA) 9: 639–643. doi:10.1364/AO.9.000639.
[6] D. Leseberg and O. Bryngdahl (1984). "Computer-generated rainbow holograms" (http://www.opticsinfobase.org/abstract.cfm?URI=ao-23-14-2441). Appl. Opt. (OSA) 23: 2441–2447. doi:10.1364/AO.23.002441.
[7] F. Wyrowski, R. Hauck and O. Bryngdahl (1987). "Computer-generated holography: hologram repetition and phase manipulation" (http://www.opticsinfobase.org/abstract.cfm?URI=josaa-4-4-694). J. Opt. Soc. Am. A (OSA) 4: 694–698. doi:10.1364/JOSAA.4.000694.
[8] D. Leseberg and C. Frère (1988). "Computer-generated holograms of 3-D objects composed of tilted planar segments" (http://www.opticsinfobase.org/abstract.cfm?URI=ao-27-14-3020). Appl. Opt. (OSA) 27: 3020–3024. doi:10.1364/AO.27.003020.
[9] B. R. Brown, A. W. Lohmann (1966). "Complex spatial filtering with binary masks" (http://www.opticsinfobase.org/abstract.cfm?URI=ao-5-6-967). Appl. Opt. (OSA) 5: 967ff. doi:10.1364/AO.5.000967.
[10] J. J. Burch (1967). "A Computer Algorithm for the Synthesis of Spatial Frequency Filters" (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1447550). Proceedings of the IEEE 55: 599–601. doi:10.1109/PROC.1967.5620.
[11] B. R. Brown and A. W. Lohmann (1969). "Computer-generated Binary Holograms" (http://www.loreti.it/Download/PDF/CGH/ibmrd1302D.pdf). IBM Journal of Research and Development 13: 160–168.
Check weigher
A checkweigher is an automatic machine for checking the weight of packaged commodities. It is normally found at the off-going end of a production process and is used to ensure that the weight of a pack of the commodity is within specified limits. Any packs that are outside the tolerance are taken out of line automatically. A checkweigher can weigh in excess of 500 items per minute (depending on carton size and accuracy requirements). Checkweighers often incorporate additional checking devices, such as metal detectors and X-ray machines, to enable other attributes of the pack to be checked and acted upon accordingly.
A typical machine
A checkweigher incorporates a series of conveyor belts. Checkweighers are also known as belt weighers, in-motion scales, conveyor scales, dynamic scales, and in-line scales. In filler applications, they are known as check scales. Typically, there are three belts or chain beds:
An infeed belt that may change the speed of the package, bringing it up or down to the speed required for weighing. The infeed is also sometimes used as an indexer, which sets the gap between products to an optimal distance for weighing. It sometimes has special belts or chains to position the product for weighing.
A weigh belt. This is typically mounted on a weight transducer, which can be a strain-gauge load cell or a servo-balance (also known as a force-balance or, sometimes, a split-beam). Some older machines may pause the weigh-bed belt before taking the weight measurement; this may limit line speed and throughput. For high-speed precision scales, a load cell using electromagnetic force restoration (EMFR) is appropriate. This kind of system charges an inductive coil, effectively floating the weigh bed in an electromagnetic field. When the weight is added, the movement of a ferrous material through that coil causes a loss of electromagnetic force, and a precision circuit charges the coil back to its original charge; the amount added to the coil is precisely measured. The voltage produced is filtered and sampled into digital data, then passed through a digital signal processor (DSP) filter and ring buffer to further reduce ambient and digital noise, and delivered to a computerized controller. It is usual for a built-in computer to take many weight readings from the transducer over the time that the package is on the weigh bed, to ensure an accurate weight reading. Calibration is critical. A lab scale, which usually sits in an isolated chamber pressurized with dry nitrogen (pressurized at sea level), can weigh an object to within plus or minus a hundredth of a gram, but ambient air pressure is a factor. This is straightforward when there is no motion, but in motion there are factors that are not obvious: noise from the motion of the weigh belt, vibration, and drafts from air-conditioning or refrigeration. Torque on a load cell also causes erratic readings. A dynamic, in-motion checkweigher therefore takes samples and analyzes them to form an accurate weight over a given time period. In most cases, there is a trigger from an optical (or ultrasonic) device to signal the passing of a package. Once the trigger fires, a set delay allows the package to move to the "sweet spot" (center) of the weigh bed, where the weight is sampled for a given duration. If either of these times is wrong, the weight will be wrong. There seems to be no scientific method to predict these timings; some systems have a "graphing" feature to help, but an empirical method generally works best.
A reject conveyor to enable out-of-tolerance packages to be removed from the normal flow while still moving at the conveyor velocity. The reject mechanism can be one of several types, among them a simple pneumatic pusher that pushes the reject pack sideways from the belt, a diverting arm that sweeps the pack sideways, and a reject belt that lowers or lifts to divert the pack vertically. A typical checkweigher usually has a bin to collect the out-of-tolerance packs.
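The trigger/delay/duration sampling described above can be sketched as follows. This is a hypothetical illustration, not any vendor's API, and the trace values are invented:

```python
# Average the transducer samples taken while the pack sits over the
# weigh bed's "sweet spot": wait `delay` samples after the trigger,
# then average the next `duration` samples.
def dynamic_weight(samples, trigger_idx, delay, duration):
    window = samples[trigger_idx + delay : trigger_idx + delay + duration]
    return sum(window) / len(window)

# Synthetic trace: baseline noise, the pack arriving, then settling near 101 g.
trace = [0.1, 0.2, 0.1, 55.0, 98.7, 100.9, 101.1, 101.2, 100.8, 60.2, 0.3]
print(round(dynamic_weight(trace, trigger_idx=3, delay=2, duration=4), 1))
```

If the delay is too short the average includes the pack still settling, and if the duration runs past the pack's exit it includes the drop-off, which is why both timings are usually tuned empirically.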
Tolerance methods
There are several tolerance methods:
The traditional "minimum weight" system, where weights below a specified weight are rejected. Normally the minimum weight is the weight that is printed on the pack, or a weight level that exceeds it to allow for weight losses after production, such as evaporation from commodities that have a moisture content. The larger wholesale companies have mandated that any product shipped to them have accurate weight checks, such that a customer can be confident that they are getting the amount of product for which they paid. These wholesalers charge large fees for inaccurately filled packages.
The European Average Weight System, which follows three specified rules known as the "Packers' Rules".[1]
Other published standards and regulations, such as NIST Handbook 133.[2]
Data Collection
There is also a requirement under the European Average Weight System that data collected by checkweighers be archived and available for inspection. Most modern checkweighers are therefore equipped with communications ports to enable the actual pack weights and derived data to be uploaded to a host computer. This data can also be used for management information, enabling processes to be fine-tuned and production performance monitored. Checkweighers that are equipped with high-speed communications such as Ethernet ports are capable of integrating themselves into groups, such that a group of production lines producing identical products can be considered as one production line for the purposes of weight control. For example, a line that is running with a low average weight can be complemented by another that is running with a high average weight, such that the aggregate of the two lines still complies with the rules. An alternative is to program the checkweigher to check bands of different weight tolerances. For instance, suppose the total valid weight is 100 grams ± 15 grams, so the product can weigh 85 g to 115 g. If you are producing 10,000 packs a day and most of your packs are 110 g, you are losing 100 kg of product; but if you try to run closer to 85 g, you may have a high rejection rate. EXAMPLE: A checkweigher is programmed to indicate 5 zones with resolution to 1 g:
1. Under Reject: the product weighs 84.9 g or less
2. Under OK: the product weighs 85 g, but less than 95 g
3. Valid: the product weighs 96 g, but less than 105 g
4. Over OK: the product weighs 105 g, and less than 114 g
5. Over Reject: the product weighs over the 115 g limit
With a checkweigher programmed as a zone checkweigher, the data collected over the network, as well as local statistics, can indicate the need to check the settings on upstream equipment to better control flow into the packaging. In some cases the dynamic scale sends a real-time signal to a filler, for instance, controlling the actual flow into a barrel, can, bag, etc. In many cases a checkweigher has a light tree with different lights to indicate the weight zone of each product.
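The five-zone scheme described above can be sketched as a small classifier. The band edges printed in the text overlap slightly, so contiguous boundaries at 85, 95, 105 and 115 g are assumed here:

```python
# Sketch of a five-zone checkweigher classification for a 100 g +/- 15 g
# product, 1 g resolution. Contiguous zone boundaries are assumed.

def classify(weight_g):
    """Map a pack weight in grams to one of the five zones."""
    if weight_g < 85.0:
        return "Under Reject"
    elif weight_g < 95.0:
        return "Under OK"
    elif weight_g <= 105.0:
        return "Valid"
    elif weight_g <= 115.0:
        return "Over OK"
    else:
        return "Over Reject"

for w in (84.9, 90.0, 100.0, 110.0, 120.0):
    print(w, classify(w))
```

Counting how many packs land in the "Over OK" band is exactly the signal a line manager would use to pull the fill target back toward nominal.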
Application considerations
The speed and accuracy that can be achieved by a checkweigher are influenced by the following:
- Pack length
- Pack weight
- Line speed required
- Pack content (solid or liquid)
- Motor technology
- Stabilization time of the weight transducer
- Airflow causing erroneous readings
- Vibrations from machinery causing unnecessary rejects
- Sensitivity to temperature, as the load cells can be temperature sensitive
Applications
In-motion scales are dynamic machines that can be designed to perform thousands of tasks. Some are used as simple caseweighers at the end of the conveyor line to ensure the overall finished package is within its target weight. An in-motion conveyor checkweigher can be used to detect missing pieces of a kit, such as a cell phone package that is missing the manual or other collateral. Checkweighers are typically used on the incoming conveyor chain and the output pre-packaging conveyor chain in a poultry processing plant. The bird is weighed when it comes onto the conveyor; after processing and washing at the end, the network computer can determine whether the bird absorbed too much water, which will drain out as it is further processed, leaving the bird under its target weight. A high-speed conveyor scale can be used to change the pacing, or pitch, of the products on the line by speeding up or slowing down the product to change the distance between packs before they reach a machine that boxes multiple packs. A checkweigher can be used to count packs and record the aggregate (total) weight of the boxes going onto a pallet for shipment, including the ability to read each package's weight and cubic dimensions. The controller computer can print a shipping label and a bar-code label to identify the weight, the cubic dimensions, the ship-to address, and other data for machine identification throughout the shipment of the product. A receiving checkweigher can read the label with a bar-code scanner and determine whether the shipment is as it was when the transportation carrier received it from the shipper's loading dock, and whether a box is missing or something was pilfered or broken in transit. Checkweighers are also used for quality management.
For instance, raw material for machining a bearing is weighed prior to beginning the process; after the process, the quality inspector expects that a certain amount of metal was removed in the finishing. The finished bearings are checkweighed, and bearings over- or underweight are rejected for physical inspection. This benefits the inspector, who can have high confidence that the bearings not rejected are within machining tolerance. Quality management can use a checkweigher for nondestructive testing, verifying finished goods with common evaluation methods to detect pieces missing from a "finished" product, such as grease from a bearing or a missing roller within the housing. Checkweighers can be built with metal detectors, x-ray machines, open-flap detection, bar-code scanners, holographic scanners, temperature sensors, vision inspectors, timing screws to set the timing and spacing between products, and indexing gates and concentrator ducts to line up the product in a designated area on the conveyor. An industrial motion checkweigher can sort products from a fraction of a gram to many kilograms; in English units, this is from less than a hundredth of an ounce to 500 lb or more. Specialized checkweighers can weigh commercial aircraft, and even find their center of gravity. Checkweighers can be very high speed, processing products weighing fractions of a gram, such as pharmaceuticals, at over 100 m/min (meters per minute), and 200 lb bags of produce at over 100 fpm (feet per minute). They can be designed in many shapes and sizes, hung from ceilings, raised on mezzanines, operated in ovens or in refrigerators. Their conveying medium can be industrial belting, low-static belting, chains similar to bicycle chains (but much smaller), or interlocked chain belts of any width. They can have chain belts made of special materials, different polymers, metals, etc.
Checkweighers are used in cleanrooms, dry-atmosphere environments, wet environments, produce barns, food processing, drug processing, etc. Checkweighers are specified by the kind of environment and the kind of cleaning that will be used. Typically, a checkweigher for produce is made of mild steel, while one that will be cleaned with harsh chemicals, such as bleach, is made with all stainless steel parts, even the load cells. These machines are labeled "full washdown", and every part and component must be specified to survive the washdown environment. Checkweighers are operated in some applications for extremely long periods: 24/7, year round. Generally, conveyor lines are not stopped unless maintenance is required or there is an emergency stop, called an E-stop.
Checkweighers operating in high-density conveyor lines may include special equipment in their design to ensure that if an E-stop occurs, all power to all motors is removed until the E-stop is cleared and reset.
References
[1] "The Weights and Measures (Packaged Goods) Regulations 2006" (http://www.nmo.bis.gov.uk/Documents/PGR guidance 13 august 2007.pdf), NWML, Dept for Innovation, Universities & Skills, URN 07/1343, 2006.
[2] Checking the Net Contents of Packaged Goods, NIST Handbook 133 (http://ts.nist.gov/WeightsAndMeasures/h1334-05.cfm), Fourth Edition, 2005.
Frank DeFreitas
Frank DeFreitas (born January 1, 1956, in Camden, New Jersey) is the maintainer of the popular website HoloWorld,[1] aimed at amateur holographers, and author of Shoebox Holography.[2] He instructs people new to holography in how to make simple holograms, for example by using a laser pointer. He started working in holography in 1983 with no formal training in science.[3] He is a 2007 recipient of the International Holography Fund[4] (IHF) grant award program, for the documentation of creative holographic artists through internet broadcasting,[5] and has provided laser and holography educational programs for schools, from elementary to university level.[6] He also hosts an internet radio program, HoloTalk,[7] and has recently begun publishing a free companion e-zine.[8]
References
[1] http://www.holoworld.com/
[2] http://www.holoworld.com/shoebox/about.html
[3] Bio at Holowiki (http://www.holowiki.com/index.php/Frank_DeFreitas)
[4] http://www.holographyfund.org
[5] International Holography Fund (http://www.holographyfund.org/index.php?option=com_content&task=view&id=27)
[6] HoloWorld (http://www.holoworld.com/)
[7] HoloTalk radio show (http://holoworld.com/holotalk/index.html)
[8] Holoworld E-zine (http://www.holoworld.com/ezine)
Hogel
A hogel (a portmanteau of the words holographic and element) is a part of a hologram, in particular a computer-generated one. Research into efficient generation and compression of hogels may allow holographic displays to become more widely available.
HoloVID
HoloVID is a tool originally developed by Dr. Jon Dark in 1981 for the holographic dimensional measurement of the internal isogrid webbing of the Delta series (launch vehicle) spacecraft skins.
History
The Delta spacecraft was produced by McDonnell Douglas Astronautics until the line was purchased by Boeing. Milled out of T6 aluminum on huge 40-foot by 20-foot horizontal mills, the inspection of the huge sheets took longer than the original manufacturing. It was estimated that a real-time in-situ inspection device could seriously cut costs, so an IRAD (Independent Research and Development) budget was created to solve the problem. Two solutions were pursued simultaneously by J. Dark: a photo-optical technique utilizing holographic lenses, and an ultrasonic technique utilizing configurable micro-transducer multiplexed arrays. A pair of HoloVids, for simultaneous frontside and backside weld feedback, was later used at Martin Aerospace to inspect the long weld seams which hold the external fuel tanks of the Space Shuttle together. By controlling the weld bead profile in real time as it was TIG generated, an optimum weight-to-performance ratio could be obtained, sparing the launch engines from wasting thrust energy while guaranteeing the highest possible web strengths.
Usage
Many corporations (Kodak, Immunex, Boeing, Johnson and Johnson, Aerospace Corp., Silverline, and others) use customized versions of the Six Dimensional Non-Contact Reader with Integrated Holographic Optical Processing, for applications from supercomputer surface-mount pad assessment to genetic biochemical assay analysis.
Specifications
HoloVID belongs to a class of sensor known as a structured-light 3D scanner. The use of structured light to extract three-dimensional shape information is a well-known technique.[1][2] The use of single planes of light to measure the distance and orientation of objects has been reported several times.[3][4][5] The use of multiple planes[6][7][8] and multiple point estimates[9][10] of objects has also been widely reported.[11]
The use of segmented phase holograms to selectively deflect portions of an image wavefront is unusual. The holographic optical components used in this device split tessellated segments of a returning wavefront into programmable bulk areas and shaped patches to achieve a unique capability: increasing both the size of the object which can be read and the z-axis depth measurable per point, while also increasing the simultaneous operations possible. This was a significant advance on the previous state of the art.
Operational modes
A laser beam is made to impinge on a target surface. The angle of the initially nonlinear optical field can be non-orthogonal to the surface. The beam is reflected by the surface in a wide conical spread function which is geometrically related to the incidence angle, light frequency, wavelength and relative surface finish. A portion of this reflected light enters the optical system coaxially, where a 'stop' shadows the edges. In a single-point reader, this edge is viewed along a radius by a photodiode array. The output of this device is a boxcar output in which the photodiodes are lit sequentially, diode by diode, as the object distance changes in relation to the sensor, until either no diodes or all diodes are lit. The residual charge dynamic value in each photodiode cell is a function of the bias current, the dark current and the incident ionizing radiation (in this case, the returning laser light). In the multipoint system, the HoloVID, the cursor point is acousto-optically scanned in the x-axis across a K-theta monaxial transformer. A monaxial holographic lens collects the wavefront and reconstructs the pattern onto the one-dimensional photodiode array and a two-dimensional matrix sensor. Image processing of the sensor data derives the correlation between the compressed wavefront and the actual physical object.
References
[1] Agin, G.J., "Real Time Control of a Robot with a Mobile Camera", Technical Note 179, SRI International, Feb. 1979.
[2] Bolles, R.C. and M.A. Fischler, "A RANSAC-based Approach to Model Fitting and its Application to Finding Cylinders in Range Data", Proc. Seventh IJCAI, August 1981.
[3] Posdamer, J.L. and M.D. Altschuler, "Surface Measurement by Space-encoded Projected Beam Systems", Computer Graphics and Image Processing 18, 1982, 1-17.
[4] Popplestone, R.J., C.M. Brown, A.P. Ambler, and G.F. Crawford, "Forming models of plane and cylindrical faceted bodies from light stripes", Proc. Fourth IJCAI, Sept. 1975.
[5] Oshima, M. and Y. Shirai, "Object recognition using three dimensional information", Proc. Seventh IJCAI, August 1981, 601-606.
[6] Albus, J.E., M. Nashman, P. Mansbach and L. Palombo, "Six dimensional vision system", Proc. SPIE Vol. 336, 1982, 142-148.
[7] Okada, S., "Welding machine using shape detector", Mitsubishi-Denki-Giho, Vol. 47-2, 1973, 157.
[8] Taenser, D., "Progress report on visual inspection of solder joints", M.I.T. A.I. Lab., Working Paper No. 96, 1976.
[9] Nakagawa, Y., "Automatic visual inspection of solder joints on printed circuit boards", Proc. SPIE Vol. 336, 1982, 121-127.
[10] Duda, R. and D. Nitzan, "Low level processing of registered range and intensity data", Artificial Intelligence Center Technical Note 129, SRI Project 4201, March 1976.
[11] Nitzan, D., A. Brain, and R. Duda, "The measurement and use of registered reflectance and range data in scene analysis", Proc. IEEE, February 1977, 206-220.
Holographic screen
A holographic screen is a display that uses coated glass as the projection surface for a video projector. The "holographic" in the name refers to the coating, which may be formed into micro lenses that bundle light; the lens design and attributes are close to those of holographic elements, with a rough similarity to the Fresnel lenses used in overhead projectors. The whole assembly often looks similar to a free-space display, since the image carrier appears very transparent. The beam manipulation by the lenses can make the image appear not directly on the glass but with some offset. It is still only a 2D display, not a true 3D display, and at present it is unclear whether such a technology can provide acceptable 3D images to a viewer.
Working Principle
The design can use either front or rear projection, with one or more video projectors pointed at the glass plate. The beam widens towards the surface and is then bundled again by the lens arrangement on the glass. This forms a virtual point of origin, so a viewer sees not the projector as the source but an imaginary object somewhere close to the glass. In rear projection the light passes through the glass, whereas in front projection it is reflected there. At present rear projection appears to be the common use case, while front projection may still await realization due to its greater technical complexity.
The projector's beams strike the screen's film, forming the image. Finally, when the user touches the screen, giving instructions with the hands as if they were a computer mouse, the tactile membrane film detects these movements, generates electrical impulses and sends them to the computer. The computer interprets the received impulses and modifies the projected image accordingly.
Projector
The projector generates the beams of light that form the image on the screen's film adhered to the glass support. It is placed behind the screen, at a certain angle above or below it, to avoid dazzling the user. It must be a trapezoidal projector, which allows a certain angle of displacement without deforming the images.
Computer
The computer controls the whole system. It manages the image to be projected on the screen and processes the impulses from the tactile membrane film in order to provide interactivity, coordinating the reproduced image with the orders the user has given. In addition, it allows the manager of the system to choose and modify what is projected on the screen.
Films
The films are layers of plastic that adhere to the glass and allow both visualization of the image and interactivity. There are two types of films:
Screen film: It can be opaque or transparent. For correct reproduction the degree of opacity can vary between 90% and 98%, depending on the intended use (interior, exterior, natural lighting, artificial lighting...).
Tactile membrane film: This film makes interactivity possible. Using projected capacitive technology[1] it detects the movements the user makes on the glass and sends impulses to the computer. It works like a giant touch screen.
Specifications
The glass panels onto which the images are projected can be at most 16 mm thick. The projection films range between 40 and 100. The whole assembly must be mounted on the rear side of the glass support to allow interactivity. It can be mounted on any glass support.
Uses
Nowadays the most common application is interactive shop windows.[2] A holographic interactive screen is mounted on the shop window's glass so that any passer-by can interact with it. Another use is projection onto glass supports at trade fairs. Almost all uses of this technology are related to advertising. There is another type of holographic screen that is not interactive but is also placed in shop windows; these use artificial-vision software that, depending on the person standing in front, offers advertising adapted to his or her characteristics (age, sex...).
See also
Phantasmagoria
Video projector
Rear-projection television
Large-screen television technology
Free-space display
3D display
References
Eresmultimedia [3] (in Spanish)
Globalzepp [4] (in Spanish)
Iberhermes [5]
Orizom [6] (in Spanish)
References
[1] http://elotouch.com.ar/Productos/Touchscreens/CapacitivaProyectada/pcwork.asp
[2] http://www.youtube.com/watch?v=FsBO-CZcLzk
[3] http://www.emesmultimedia.com/escaparate_interactivo_de_retroproyeccion.html
[4] http://www.globalzepp.com/index.php?option=com_content&view=category&layout=blog&id=6&Itemid=18
[5] http://www.iberhermes.com/index_archivos/Page4220.htm
[6] http://www.orizom.com/
In addition to contributing to the aims of the association, IHMA membership gives members certain benefits. These include:
- Intergraf's security certification for secure hologram producers
- Inclusion in the Hologram Image Register, an IHMA initiative that protects holographic images
- Subscription to Holography News
- Publicity and holography patent alerts
Publications
The IHMA has published a number of publications offering guidance to converters, users and manufacturers of holograms and other holographic materials. These are:
- The Glossary of Holographic Terms
- Specifying and Purchasing Authenticating DOVIDs
- Hologram Patent Guidelines
- Hologram Copyright Guidelines
Current members
The IHMA currently has just under 90 members worldwide. They are: 3D AG 3M Company AC Hologram AET Films AHEAD Optoelectronics, Inc. Alpha Lasertek India Limited American Bank Note Holographics, In API Holographics Atech-Holografica Azure Photonics Co Ltd Bajaj Holographics (I) Pvt Ltd BEP Hologram Cavomit Centro Grafico DG SpA Coformex SA de CV
Computer Holography Centre Concern Russian Security Technology Crown Roll Leaf, Inc.
International Hologram Manufacturers Association Dai Nippon Printing Co., Ltd. De La Rue Holographics Demax PLC Diaures SpA Dongnan Industry Company Limited Eskay Holographics Ltd Everest Holovisions Ltd Fast Forms de Mexico Fasver First Print Yard Holographics Flex Industries Ltd Formas Inteligentes SA de CV Garware Polyester Limited Gopsons Papers Ltd Hague Print Hi-Glo Images Pvt Ltd HOLO 3D s.r.l. Holo Security Technologies Holo-Source Corporation Holoflex Limited Hologram Company Rako Hologram Varga Miklos Hologram.Industries Holographic Origination & Machineries Ltd. Holographic Security Marking Systems Pvt. Ltd. HoloGrate,JSC Holostik India Limited Huagong Tech Company Ltd. Hueck Folien GmbH Ignetta Holographic Imprensa Nacional Casa da Moeda Impresora Silvaform Istituto Poligrafico e Zecca Stato ITW Covid JSC Holography Industry Ltd K Laser Technology Inc. Krypten Laser Art Studio Ltd Leonardus SrL Leonhard Kurz GmbH & Co. KG Light Impressions International Ltd Louisenthal GmbH Mano Hologram Ltd Metatex Private Limited Methaz Holografik Company Ltd
MTM Holografi Gvenlikli Basim ve Bilisim Teknolojileri San ve Tic A.S Nikka Techno Inc
NovaVision Inc OpSec Security Ltd Optaglio Group Pacific Holographics Polskie Systemy Holograficzne Pura Barutama, PT Rainbow Holographics Ltd S.C. Optoelectronica 2001 S.A. Schreiner ProSecure Scientific & Technical Centre "Atlas" Shriram Holographics SK Hologram Co Ltd Spatial Imaging Limited Specialised Enterprise Holography Ltd Starcke Oy System Intelligence Products Topac Multimedia Print GmbH Toppan Printing Company Trautwein Security GmbH & Co XETOS AG Zoomsoft
External links
International Hologram Manufacturers Association Website [1]
References
[1] http://www.ihma.org/
Kinebar
A kinebar is a gold bar which contains a hologram to prove its authenticity. Union Bank of Switzerland, through its subsidiary refinery, Argor-Heraeus SA, has applied the holographic kinegram as a security device to the reverse of its minted bars since December 1993. The kinebar, now produced by UBS AG, is a registered trade mark of UBS. The hologram is embossed on the gold itself; it is intended both as a security feature and for visual appeal.[1]
Kinebar certificate
Holographic kinegram
See also
Gold as an investment
External links
Argor-Heraeus - The kinebar [2]
The first Austrian kinebar [3]
References
[1] "New gold kinebar unveiled in the United States; Advanced high-tech security feature marks breakthrough in gold minting" (http://findarticles.com/p/articles/mi_m0EIN/is_1995_Dec_6/ai_17816232/), press release, Dec 6, 1995.
[2] http://www.argor.com/index.php?id=67
[3] http://www.austrian-mint.at/kinebar
MIT Museum
MIT Museum, founded in 1971, is the museum of the Massachusetts Institute of Technology, located in Cambridge, Massachusetts. It hosts collections of holography, artificial intelligence, robotics, maritime history, and the history of MIT. Its holography collection of 1800 pieces is the largest in the world, though not all of it is exhibited. Currently, works of Harold Edgerton and Arthur Ganson are its two largest long-running displays. Occasionally there are other exhibitions, usually at the intersection of art and technology.
Since 2005 the official mission of the museum has been "to engage the wider community with MIT's science, technology and other areas of scholarship in ways that will best serve the nation and the world in the 21st century."
External links
Museum website [1]
Geographical coordinates: 42°21′46″N 71°5′57″W
References
[1] http://web.mit.edu/museum/
Rainbow hologram
The rainbow hologram or Benton hologram is a type of hologram invented in 1968 by Dr. Stephen A. Benton at Polaroid Corporation (later MIT). Rainbow holograms are designed to be viewed under white light illumination, rather than the more esoteric laser light required previously. The rainbow holography recording process uses a horizontal slit to eliminate vertical parallax in the output image, greatly reducing spectral blur while preserving three-dimensionality for most observers. A viewer moving up or down in front of a rainbow hologram sees changing spectral colors rather than different vertical perspectives. Stereopsis and horizontal motion parallax, two relatively powerful cues to depth, are preserved. This invention illustrates an underlying theme of Benton's work: to reduce the amount of information in a hologram to more closely match the requirements of the human visual system. The holograms found on credit cards are examples of rainbow holograms. These very common holograms are technically transmission holograms mounted onto a reflective surface like a metalized polyethylene terephthalate substrate commonly known as PET. The Rainbow Holographic process can also be applied to very large sheets of holographic film, resulting in what is known as a large format Rainbow Transmission Hologram.
See also
Holographic principle
Australian Holographics
External links
Holographic Methods [1]
References
[1] http://www.fou.uib.no/fd/1996/h/404001/kap02.htm
Reciprocity (photography)
In photography and holography, reciprocity refers to the inverse relationship between the intensity and duration of light that determines the reaction of light-sensitive material. Within a normal exposure range for film stock, for example, the reciprocity law states that the film response is determined by the total exposure, defined as intensity × time. Therefore, the same response (for example, the optical density of the developed film) can result from reducing duration and increasing light intensity, and vice versa. The reciprocal relationship is assumed in most sensitometry, for example when measuring a Hurter and Driffield curve (optical density versus logarithm of total exposure) for a photographic emulsion. Total exposure of the film or sensor, the product of focal-plane illuminance times exposure time, is measured in lux seconds.
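As a minimal illustration of the law, the exposure H in lux seconds is just the product of illuminance and time, so trading intensity against duration leaves H, and hence the film response, unchanged:

```python
# Within the reciprocity range, only the product H = E * t (lux seconds)
# matters: halving the illuminance and doubling the time leaves H unchanged.

def exposure(illuminance_lux, time_s):
    """Total exposure H in lux seconds."""
    return illuminance_lux * time_s

h1 = exposure(500.0, 0.02)  # bright light, short time
h2 = exposure(250.0, 0.04)  # half the light, twice the time
print(h1, h2)  # both give 10 lux seconds
```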
History
The idea of reciprocity, once known as BunsenRoscoe reciprocity, originated from the work of Robert Bunsen and Henry Roscoe in 1862.[1] [2] Deviations from the reciprocity law were reported by Captain William de Wiveleslie Abney in 1893,[3] and extensively studied by Karl Schwarzschild in 1899.[4] [5] [6] Schwarzschild's model was found wanting by Abney and by Englisch,[7] and better models have been proposed in subsequent decades of the early twentieth century. In 1913, Kron formulated an equation to describe the effect in terms of curves of constant density,[8] [9] which J. Halm adopted and modified,[10] leading to the "KronHalm catenary equation"[11] or "KronHalmWebb formula"[12] to describe departures from reciprocity.
In chemical photography
In photography, reciprocity refers to the relationship whereby the total light energy (proportional to the total exposure, the product of light intensity and exposure time, controlled by aperture and shutter speed respectively) determines the effect of the light on the film. That is, an increase of brightness by a certain factor is exactly compensated by a decrease of exposure time by the same factor, and vice versa. In other words, under normal circumstances there is a reciprocal proportion between aperture area and shutter speed for a given photographic result, with a wider aperture requiring a faster shutter speed for the same effect. For example, an EV of 10 may be achieved with an aperture (f-number) of f/2.8 and a shutter speed of 1/125 s. The same exposure is achieved by doubling the aperture area to f/2 and halving the exposure time to 1/250 s, or by halving the aperture area to f/4 and doubling the exposure time to 1/60 s; in each case the response of the film is expected to be the same.
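The three settings in the example can be checked numerically with the standard exposure-value formula EV = log2(N²/t); because nominal f-numbers and shutter times are rounded values, the results agree only to about EV 10 rather than exactly:

```python
import math

# Exposure value for an aperture (f-number N) and shutter time t (seconds):
# EV = log2(N^2 / t). The three settings from the text all round to EV 10.

def ev(f_number, time_s):
    return math.log2(f_number ** 2 / time_s)

for n, t in [(2.8, 1 / 125), (2.0, 1 / 250), (4.0, 1 / 60)]:
    print(f"f/{n} at {t:.4f} s -> EV {ev(n, t):.2f}")
```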
Reciprocity failure
For most photographic materials, reciprocity is valid with good accuracy over a range of values of exposure duration, but becomes increasingly inaccurate as we depart from this range: reciprocity failure, reciprocity law failure, or Schwarzschild effect.[13] As the light level decreases out of the reciprocity range, the increase in duration, and hence of total exposure, required to produce an equivalent response becomes higher than the formula states; for instance, at half of the light required for a normal exposure, the duration must be more than doubled for the same result. Multipliers used to correct for this effect are called reciprocity factors (see model below). At very low light levels, film is less responsive. Light can be considered to be a stream of discrete photons, and a light-sensitive emulsion is composed of discrete light-sensitive grains, usually silver halide crystals. Each grain must absorb a certain number of photons in order for the light-driven reaction to occur and the latent image to form. In particular, if the surface of the silver halide crystal has a cluster of approximately four or more reduced silver atoms, resulting from absorption of a sufficient number of photons (usually a few dozen photons are required), it is rendered
developable. At low light levels, i.e. few photons per unit time, photons impinge upon each grain relatively infrequently; if the four photons required arrive over too long an interval, the partial change due to the first one or two is not stable enough to survive until enough photons arrive to make a permanent latent image center. This breakdown in the usual tradeoff between aperture and shutter speed is known as reciprocity failure. Each film type has a different response at low light levels. Some films are very susceptible to reciprocity failure, and others much less so. Some films that are very light sensitive at normal illumination levels and normal exposure times lose much of their sensitivity at low light levels, becoming effectively "slow" films for long exposures. Conversely, some films that are "slow" under normal exposure duration retain their light sensitivity better at low light levels. For example, for a given film, if a light meter indicates a required EV of 5 and the photographer sets the aperture to f/11, then ordinarily a 4-second exposure would be required; a reciprocity correction factor of 1.5 would require the exposure to be extended to 6 seconds for the same result. Reciprocity failure generally becomes significant at exposures longer than about 1 second for film, and above 30 seconds for paper. Reciprocity also breaks down at extremely high levels of illumination with very short exposures. This is a concern for scientific and technical photography, but rarely for general photographers, as exposures significantly shorter than a millisecond are only required for subjects such as explosions and particle physics experiments, or when taking high-speed motion pictures with very high shutter speeds (1/10,000 s or less).
Schwarzschild law
In response to astronomical observations of low-intensity reciprocity failure, Karl Schwarzschild wrote (circa 1900): "In determinations of stellar brightness by the photographic method I have recently been able to confirm once more the existence of such deviations, and to follow them up in a quantitative way, and to express them in the following rule, which should replace the law of reciprocity: Sources of light of different intensity I cause the same degree of blackening under different exposures t if the products I·t^0.86 are equal."[4] Unfortunately, Schwarzschild's empirically determined 0.86 coefficient turned out to be of limited usefulness.[14] A modern formulation of Schwarzschild's law is given as

E = I·t^p
where E is a measure of the "effect of the exposure" that leads to changes in the opacity of the photosensitive material (in the same degree that an equal value of exposure H = It does in the reciprocity region), I is illuminance, t is exposure duration and p is the Schwarzschild coefficient.[15] [16] However, a constant value for p remains elusive, and it has not replaced the need for more realistic models or empirical sensitometric data in critical applications.[17] When reciprocity holds, Schwarzschild's law uses p = 1.0. Since Schwarzschild's formula gives unreasonable values for times in the region where reciprocity holds, a modified formula has been found that fits better across a wider range of exposure times. The modification is expressed as a factor that multiplies the ISO film speed:[18]

Relative film speed = (t + 1)^(p - 1)

where the t + 1 term implies a breakpoint near 1 second, separating the region where reciprocity holds from the region where it fails.
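A short numerical sketch of the law (taking Schwarzschild's p = 0.86 purely for illustration) shows why halving the light requires more than doubling the exposure:

```python
# Under Schwarzschild's law the film effect is proportional to I * t**p
# rather than I * t. Solving I * t1**p == (I * ratio) * t2**p for t2 shows
# that halving the intensity requires more than doubling the time when p < 1.

def equivalent_time(t1, intensity_ratio, p=0.86):
    """Time needed at intensity_ratio * I to match the effect of t1 at I."""
    return t1 * intensity_ratio ** (-1.0 / p)

print(equivalent_time(1.0, 0.5))         # about 2.24 s, not 2 s
print(equivalent_time(1.0, 0.5, p=1.0))  # exactly 2 s when reciprocity holds
```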
where I0 is the photographic material's optimum intensity level and a is a constant that characterizes the material's reciprocity failure.[20]
Electrons are released at a very low rate. They are trapped and neutralised and must remain as isolated silver atoms for much longer than in normal latent image formation. It has already been observed that such an extreme sub-latent image is unstable, and it is postulated that the inefficiency is caused by many isolated atoms of silver losing their acquired electrons during the period of instability.
Astrophotography
Reciprocity failure is an important effect in the field of film-based astrophotography. Deep-sky objects such as galaxies and nebulae are often so faint that they are not visible to the unaided eye. To make matters worse, many objects' spectra do not line up with the film emulsion's sensitivity curves. Many of these targets are small and require long focal lengths, which can push the focal ratio far above f/5. Combined, these parameters make such targets extremely difficult to capture with film; exposures from 30 minutes to well over an hour are typical. As a typical example, capturing an image of the Andromeda Galaxy at f/4 will take about 30 minutes; to get the same density at f/8 would require an exposure of about 200 minutes. Since the telescope must accurately track the object for the whole exposure, every additional minute is difficult; reciprocity failure is therefore one of the biggest motivations for astronomers to switch to digital imaging. Electronic image sensors have their own limitation at long exposure times and low illuminance levels, namely noise from dark current (not usually referred to as reciprocity failure), but this effect can be controlled by cooling the sensor.
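The f/4 versus f/8 comparison above can be reproduced with the Schwarzschild model described earlier in the article. A sketch under stated assumptions: the effective exponent p = 0.75 is my illustrative choice (real emulsions vary), picked because it lands near the article's roughly 200-minute figure.

```python
def equivalent_exposure(t1, n1, n2, p=0.75):
    """Exposure time at f/n2 matching the Schwarzschild-effective exposure of t1 at f/n1.

    Image-plane intensity scales as 1/N**2; requiring equal effect
    I1*t1**p == I2*t2**p gives t2 = t1 * (n2/n1)**(2/p).
    """
    return t1 * (n2 / n1) ** (2 / p)

print(round(equivalent_exposure(30, 4, 8, p=1)))     # 120 min: the reciprocity-only answer
print(round(equivalent_exposure(30, 4, 8, p=0.75)))  # ~190 min, near the article's ~200 min
```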
Holography
A similar problem exists in holography. The total energy required when exposing holographic film using a continuous-wave laser (i.e. for several seconds) is significantly less than the total energy required when exposing holographic film using a pulsed laser (i.e. around 20-40 nanoseconds) due to reciprocity failure. Reciprocity failure can also occur with very long or very short exposures with a continuous-wave laser. To try to offset the reduced brightness of the film due to reciprocity failure, a method called latensification can be used. This is usually done directly after the holographic exposure, using an incoherent light source (such as a 25-40 W light bulb). Exposing the holographic
film to the light for a few seconds can increase the brightness of the hologram by an order of magnitude.
External links
Reciprocity what? [25] - a brief explanation in layman's terms.
Reciprocity charts for slides and black & white [26]
http://www.PetesAstrophotography.com - Retrieved March 8, 2007.
http://www.holography.ru/tech3eng.htm - Retrieved March 8, 2005.
http://www.Shutterbug.net/refreshercourse/outdoor_tips/1003lolight/ - October, 2003.
http://www.Shutterbug.net/techniques/outdoor_travel/0502sb_after/ - May, 2002.
http://www.Shutterbug.net/techniques/outdoor_travel/0699sb_lowlight/index.html - June, 1999.
References
[1] Holger Pettersson, Gustav Konrad von Schulthess, David J. Allison, and Hans-Jürgen Smith (1998). The Encyclopaedia of Medical Imaging (http://books.google.com/books?id=zvDY5unRC4oC&pg=PA59). Taylor & Francis. p. 59. ISBN 9781901865134.
[2] Geoffrey G. Atteridge (2000). "Sensitometry" (http://books.google.com/books?id=MblHnLN2N2kC&pg=PA238). In Ralph E. Jacobson, Sidney F. Ray, Geoffrey G. Atteridge, and Norman R. Axford, The Manual of Photography: Photographic and Digital Imaging (9th ed.). Oxford: Focal Press. p. 238. ISBN 0-240-51574-9.
[3] W. de W. Abney (1893). "On a failure of the law in photography that when the products of the intensity of the light acting and of the time of exposure are equal, equal amounts of chemical action will be produced" (http://rspl.royalsocietypublishing.org/content/54/326-330/143.full.pdf+html). Proc. R. Soc. London 54: 143.
[4] K. Schwarzschild (1900). "On the Deviations from the Law of Reciprocity for Bromide of Silver Gelatine" (http://articles.adsabs.harvard.edu/full/1900ApJ....11...89S). The Astrophysical Journal 11: 89.
[5] S. E. Sheppard and C. E. Kenneth Mees (1907). Investigations on the Theory of the Photographic Process (http://books.google.com/books?id=luNIAAAAIAAJ&pg=PA214). Longmans, Green and Co. p. 214.
[6] Ralph W. Lambrecht and Chris Woodhouse (2003). Way Beyond Monochrome (http://books.google.com/books?id=Q7M0zOHcUxsC&pg=PA113). Newpro UK Ltd. p. 113. ISBN 9780863433542.
[7] Samuel Edward Sheppard and Charles Edward Kenneth Mees (1907). Investigations on the Theory of the Photographic Process (http://books.google.com/books?id=luNIAAAAIAAJ&pg=PA215). Longmans, Green and Co. pp. 214-215.
[8] Erich Kron (1913). "Über das Schwärzungsgesetz Photographischer Platten" (http://adsabs.harvard.edu/abs/1913POPot..67.....K). Publikationen des Astrophysikalischen Observatoriums zu Potsdam 22 (67).
[9] Loyd A. Jones (July 1927). "Photographic Spectrophotometry in the Ultra-Violet Region" (http://books.google.com/books?id=Zi0rAAAAYAAJ&pg=PA111). Bulletin of the National Research Council (National Research Council): 109-123.
[10] J. Halm (January 1915). "On the Determination of Fundamental Photographic Magnitudes" (http://articles.adsabs.harvard.edu/full/1915MNRAS..75..150H/0000151.000.html).
[11] J. H. Webb (1935). "The Effect of Temperature upon Reciprocity Law Failure in Photographic Exposure". Journal of the Optical Society of America 25 (1): 4-20. doi:10.1364/JOSA.25.000004.
[12] Ernst Katz (1941). Contribution to the Understanding of Latent Image Formation in Photography (http://books.google.com/books?id=B-cyAAAAMAAJ). Drukkerij F. Schotanus & Jens. p. 11.
[13] Rudolph Seck and Dennis H. Laney (1983). Leica Darkroom Practice (http://books.google.com/books?id=x6NsV74MSDYC&pg=PA183). MBI Publishing Company. p. 183. ISBN 9780906447246.
[14] Jonathan W. Martin, Joannie W. Chin, and Tinh Nguyen (2003). "Reciprocity law experiments in polymeric photodegradation: a critical review" (http://fire.nist.gov/bfrlpubs/build03/PDF/b03051.pdf). Progress in Organic Coatings 47: 294.
[15] Walter Clark (2007). Photography by Infrared: Its Principles and Applications (http://books.google.com/books?id=sRxVz6J09VQC&pg=PA62). Read Books. p. 62. ISBN 9781406744866.
[16] Graham Saxby (2002). The Science of Imaging (http://books.google.com/books?id=e5mC5TXlBw8C&pg=PA141). CRC Press. p. 141. ISBN 9780750307345.
[17] J. W. Martin et al. (2003). "Reciprocity law experiments in polymeric photodegradation: a critical review" (http://fire.nist.gov/bfrlpubs/build03/PDF/b03051.pdf). Progress in Organic Coatings 47: 306.
[18] Michael A. Covington (1999). Astrophotography for the Amateur (http://books.google.com/books?id=tzXv4WrvZ-EC&pg=PA181). Cambridge University Press. p. 181. ISBN 9780521627405.
[19] Fred Rost and Ron Oldfield (2000). Photography with a Microscope (http://books.google.com/books?id=IaQOh28E0vgC&pg=PA204). Cambridge University Press. p. 204. ISBN 9780521770965.
[20] W. M. H. Greaves (1936). "Time Effects in Spectrophotometry" (http://articles.adsabs.harvard.edu/full/1936MNRAS..96..825G/0000826.000.html). Monthly Notices of the Royal Astronomical Society 96 (9): 825-832.
[21] W. J. Anderson (1987). "Probabilistic Models of the Photographic Process" (http://books.google.com/books?id=suVRUSRyW_cC&pg=PA14). In Ian B. MacNeill, Applied Probability, Stochastic Processes, and Sampling Theory: Advances in the Statistical Sciences. Springer. pp. 9-40. ISBN 9789027723932.
[22] "(Page 65 of)" (http://books.google.com/books?id=aCgJAAAAIAAJ). The Journal of Photographic Science (Royal Photographic Society of Great Britain) 4-5: 65. 1956-1957.
[23] J. H. Webb (1950). "Low Intensity Reciprocity-Law Failure in Photographic Exposure: Energy Depth of Electron Traps in Latent-Image Formation; Number of Quanta Required to Form the Stable Sublatent Image" (http://www.opticsinfobase.org/abstract.cfm?URI=josa-40-1-3). Journal of the Optical Society of America 40: 3-13. doi:10.1364/JOSA.40.000003.
[24] Harry Baines and Edward S. Bomback (1967). The Science of Photography (http://books.google.com/books?id=QNxTAAAAMAAJ) (2nd ed.). Fountain Press. p. 202.
[25] http://www.earthboundlight.com/phototips/reciprocity-what.html
[26] http://mkaz.com/photo/tools/reciprocity.html
SeeReal Technologies
Headquarters: Dresden, Germany
Products: Holographic and Autostereoscopic Displays
Employees: > 30
Website: SeeReal Technologies [1]
SeeReal Technologies GmbH is a Dresden-based company focusing on the development of 3D display solutions. It is owned by its Luxembourg parent company SeeReal Technologies S.A., which is responsible for marketing, partnering and IP licensing. The firm was founded in 2002 by Dr. Armin Schwerdtner, who had previously headed an optics research group at the Dresden University of Technology (TU Dresden). SeeReal develops technology that is licensed to display manufacturers, including several variants of a tracked autostereoscopic display; at the SID 2007 Display Week in Long Beach, California, and at FPD 2007 [2] in Yokohama, Japan, the firm presented a real-time desktop holographic display.[3] The firm owns more than 100 patents in the field of holographic and autostereoscopic 3D displays. SeeReal has won several awards, including the European Information Society Technologies Prize (IST) and the Innovation Prize of the Free State of Saxony [4].
External links
SeeReal Technologies homepage [1]
References
[1] http://www.seereal.com
[2] http://techon.nikkeibp.co.jp/fpd/2007/english
[3] Gail Overton (September 2007). "SeeReal develops practical real-time holographic display" (http://www.laserfocusworld.com/display_article/305707/12/none/none/News/HOLOGRAPHY:-SeeReal-develops-practical-real-time-holographic-displa). Laser Focus World (PennWell Corporation) 43 (9).
[4] http://www.einfallsreich-sachsen.de/index.php?content=inno&sub=wettbewerb&sub3=infos
Holographic paradigm
The holographic paradigm is a theory based on the work of David Bohm and Karl Pribram and extrapolated from two misinterpreted ideas: that the universe is in some sense a holographic structure (proposed by David Bohm), and that consciousness is dependent on holographic structure (proposed by Karl Pribram). This paradigm posits that theories using holographic structures may lead to a unified understanding of consciousness and the universe.
Background
The holographic paradigm is rooted in the concept that all organisms and forms are holograms embedded within a universal hologram, which physicist David Bohm[1] called the holomovement. It is an extrapolation of the optical discovery of 2-dimensional holograms by Dennis Gabor in 1947.[2] Holography created an explosion of scientific and industrial interest starting in 1948. Questionably qualified engineer Thomas Bearden describes holograms as: "photographic recordings of the patterns of interference between coherent light reflected from the object of interest, and light that comes directly from the same source or is reflected by a mirror. When this photo image is illuminated from behind by coherent light, a three-dimensional image of the object appears in space. The characteristic of a hypothetically perfect hologram is that all its content is contained in any finite part of itself (at lower resolution)."[3] In 1973, what has come to be known as the Pribram-Bohm Holographic Model was non-existent, much like any real scientific theory behind these ideas. But the Seattle thinktank Organization for the Advancement of Knowledge (OAK), led by Richard Alan Miller and Burt Webb, was able to synthesize the work of Northrup and Burr on the electromagnetic nature of the human being with Dennis Gabor's work on optical holograms and come up with a new notion: a holographic paradigm. In Languages of the Brain (1971), Pribram[4] had postulated that 2-dimensional interference patterns, physical holograms, underlie all thinking. The holographic component, for him, represented the associative mechanisms and contributed to memory retrieval and storage and problem solving. However, Miller, Webb and Dickson extrapolated that the holographic metaphor extends to n dimensions and therefore constitutes a fundamental description of the universe and our electromagnetic embedding within that greater field.
It suggested the human energy body or bioenergetics was more fundamental than the biochemical domain. The "Holographic Concept of Reality" (1973)[5] was presented at the 1st Psychotronic Conference in Prague in 1973, and later published by Gordon & Breach in 1975, and again in 1979 in Psychoenergetic Systems: the Interaction of
Consciousness, Energy and Matter, edited by Dr. Stanley Krippner. Miller and Webb followed up their ground-breaking paper with "Embryonic Holography,"[6] which was also presented at the Omniversal Symposium at California State College at Sonoma, hosted by Dr. Stanley Krippner, September 29, 1973. Arguably, this is the first paper to address the quantum biological properties of human beings: the first illustrations of the sources of the quantum mind-body. The organization of any biological system is established by a complex electrodynamic field which is, in part, determined by its atomic physiochemical components. This field, in turn, determines the behavior and orientation of these components. This dynamic is mediated through wave-based genomes wherein DNA functions as the holographic projector of the psychophysical system - a quantum biohologram. Dropping a level of observation below quantum biochemistry and conventional biophysics, this holographic paradigm proposes that a biohologram determines the development of the human embryo; that we are a quantum bodymind with consciousness informing the whole process through the level of information. They postulated DNA as the possible holographic projector of the biohologram, patterning the three-dimensional electromagnetic standing and moving wave front that constitutes our psychophysical being: quantum bioholography.
Recent development
The Gariaev (Garyaev) group (1994)[7] has proposed a theory of the wave-based genome in which the DNA-wave functions as a biocomputer. They suggest (1) that there are genetic "texts", similar to natural context-dependent texts in human language; (2) that the chromosome apparatus acts simultaneously both as a source and receiver of these genetic texts, respectively decoding and encoding them; and (3) that the chromosome continuum acts like a dynamical holographic grating, which displays or transduces weak laser light and solitonic electro-acoustic fields.[8] The distribution of the character frequency in genetic texts is fractal, so the nucleotides of DNA molecules are able to form holographic pre-images of biostructures. This process of "reading and writing" the very matter of our being manifests from the genome's associative holographic memory in conjunction with its quantum nonlocality. Rapid transmission of genetic information and gene expression unite the organism as a holistic entity embedded in the larger Whole. The system works as a biocomputer: a wave biocomputer.[9] [10] Gariaev reports as of 2007 that this work in Russia is being actively suppressed.[11]
Bibliography
The Holographic Paradigm and Other Paradoxes (paperback), Ken Wilber (editor).
Gariaev, P.P. (1994), Wave Genome, Public Profit, Moscow, 279 pages [in Russian].
Gariaev, P.P. (1993), Wave based genome, Dep. VINITI 15:12.1993, N 3092-93, 278 pp. [in Russian].
Gariaev, P., Tertinshny, G., and Leonova, K. (2001), "The Wave, Probabilistic and Linguistic Representations of Cancer and HIV", JNLRMI, v.1, No.2.
Marcer, P. and Schempp, W. (1996), "A Mathematically Specified Template for DNA and the Genetic Code, in Terms of the Physically Realizable Processes of Quantum Holography", Proceedings of the Greenwich Symposium on Living Computers, editors Fedorec, A. and Marcer, P., 45-62.
Miller, Iona (1993), The Holographic Paradigm and the Consciousness Restructuring Process, Chaosophy 93, O.A.K., Grants Pass. http://www.geocities.com/iona_m/Chaosophy/chaosophy11.html
Karl H. Pribram, "The Implicate Brain", in B. J. Hiley and F. David Peat (eds.), Quantum Implications: Essays in Honour of David Bohm, Routledge, 1987. ISBN 0-415-06960-2
Talbot, Michael (1991), The Holographic Universe, Harper Collins Publishers, New York. ISBN 0-06-092258-3
Holographic paradigm
See also
David Bohm
Aharonov-Bohm effect
Bohm diffusion of a plasma in a magnetic field
Bohm interpretation
Correspondence principle
EPR paradox
Fractal cosmology
Holographic principle
Holomovement
Membrane paradigm
Self-similarity
Wave gene
Implicate order
Penrose-Hameroff "Orchestrated Objective Reduction" theory of consciousness
Implicate and Explicate Order
John Stewart Bell
Karl Pribram
The Bohm sheath criterion, which states that a plasma must flow with at least the speed of sound toward a solid surface
Influence on John David Garcia
External links
The Universe as a Hologram [12] by Michael Talbot
The Holographic Paradigm: A New Model for the Study of Literature and Science [13] by Mary Ellen Pitts
Consciousness, Physics, and the Holographic Paradigm [14] - essays by A.T. Williams
Comparison between Karl Pribram's "Holographic Brain Theory" and more conventional models of neuronal computation [8] by Jeff Prideaux
Miller, Iona (1993), The Holographic Paradigm and CCP: Explication, Ego Death and Emptiness, Chaosophy 93. http://www.geocities.com/iona_m/Chaosophy/chaosophy11.html
Miller, Iona, From Helix to Hologram: An Ode on the Human Genome. Life is fundamentally electromagnetic. http://www.nwbotanicals.org/oak/newphysics/Helix%20to%20Hologram.pdf
Sue Benford, Empirical Evidence Supporting Macro-Scale Quantum Holography in Non-Local Effects. http://www.journaloftheoretics.com/Articles/2-5/Benford.htm
Holographic paradigm
References
[1] Bohm, David (1980). Wholeness and the Implicate Order. Routledge, London.
[2] Professor T. E. Allibone CBE, FRS. "The Life and Work of Dennis Gabor, his Contributions to Cybernetics, Philosophy and the Social Sciences, 1900-1979." (http://www.cybsoc.org/GaborAllibone.doc)
[3] Bearden, Thomas (1980, 1988, 2002). Excalibur Briefing. Strawberry Hill Press, San Francisco.
[4] Pribram, Karl (1971). Languages of the Brain. Prentice-Hall, Englewood Cliffs, New Jersey.
[5] Miller, R. A., Webb, B., and Dickson, D. (1975). "A Holographic Concept of Reality". Psychoenergetic Systems Journal 1: 55-62. Gordon & Breach Science Publishers Ltd., Great Britain. "Holographic Concept" was later reprinted in the hardback book Psychoenergetic Systems, Stanley Krippner, editor, 1979, 231-237, Gordon & Breach, New York, London, Paris; and again in Psychedelic Monographs and Essays 5, 1992, 93-111, Boynton Beach, FL, Tom Lyttle, editor. Accessed 6/07: http://www.geocities.com/iona_m/Chaosophy/chaosophy13.html
[6] Miller, R. A. and Webb, B. "Embryonic Holography". Psychoenergetic Systems, Stanley Krippner, ed. Presented at the Omniversal Symposium, California State College at Sonoma, Saturday, September 29, 1973. Reprinted in Lyttle's journal Psychedelic Monographs and Essays 6, 1993, 137-156. Accessed 6/07: http://www.geocities.com/iona_m/Chaosophy/chaosophy14.html
[7] Gariaev, Peter, Boris Birshtein, Alexander Iarochenko, et al. "The DNA-wave Biocomputer".
[8] Miller, Iona, Miller, R. A., and Burt Webb (2002). "Quantum Bioholography: A Review of the Field from 1973-2002". Journal of Non-Locality and Remote Mental Interactions I (3). Accessed 6/11/07: http://www.emergentmind.org/MillerWebbI3a.htm
[9] Miller, Iona (2004). "From Helix to Hologram". Nexus Magazine. http://www.ajna.com/articles/science/from_helix_to_hologram.php
[10] Gariaev, P. P., M. J. Friedman, and E. A. Leonova-Gariaeva. "Crisis in Life Sciences: The Wave Genetics Response". http://www.emergentmind.org/gariaev06.htm
[11] Miller, Iona (2007), private correspondence with Peter Gariaev.
[12] http://twm.co.nz/hologram.html
[13] http://links.jstor.org/sici?sici=0047-7729(199023)20%3A4%3C80%3ATHPANM%3E2.0.CO%3B2-B
[14] http://www.cox-internet.com/hermital/book/holoprt7-1.htm
Membrane paradigm
In black hole theory, the black hole membrane paradigm is a useful "toy model" method or "engineering approach" for visualising and calculating the effects predicted by quantum mechanics for the exterior physics of black holes, without using quantum-mechanical principles or calculations. It models a black hole as a thin classically-radiating surface (or membrane) at or vanishingly close to the black hole's event horizon. This approach to the theory of black holes was created by Kip S. Thorne, R. H. Price and D. A. Macdonald. The results of the membrane paradigm are generally considered to be "safe".
Electrical resistance
Thorne (1994) relates that this approach to studying black holes was prompted by the realisation by Hanni, Ruffini, Wald and Cohen in the early 1970s that, since an electrically charged pellet dropped into a black hole should still appear to a distant outsider to be remaining just outside the critical r = 2M radius, if its image persists its electrical fieldlines ought to persist too, and ought to point to the location of the "frozen" image (1994, p. 406). If the black hole rotates and the image of the pellet is pulled around, the associated electrical fieldlines ought to be pulled around with it to create basic "electrical dynamo" effects (see: dynamo theory). Further calculations yielded properties for a black hole such as apparent electrical resistance (p. 408). Since these fieldline properties seemed to be exhibited down to the event horizon, and general relativity insisted that no dynamic exterior interactions could extend through the horizon, it was considered convenient to invent a surface at the horizon that these electrical properties could be said to belong to.
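For reference, the critical radius r = 2M quoted above is written in geometrized units (G = c = 1); restoring the constants gives the familiar Schwarzschild radius, a standard textbook relation not stated in the article:

```latex
r_{s} = 2M \quad (G = c = 1) \qquad \Longleftrightarrow \qquad r_{s} = \frac{2GM}{c^{2}}
```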
Hawking radiation
After being introduced to model the theoretical electrical characteristics of the horizon, the "membrane" approach was then pressed into service to model the Hawking radiation effect predicted by quantum mechanics. In the coordinate system of a distant stationary observer, Hawking radiation tends to be described as a quantum-mechanical particle-pair production effect (involving "virtual" particles), but for stationary observers hovering nearer to the hole, the effect is supposed to look like a purely conventional radiation effect involving "real" particles. In the "membrane paradigm", the black hole is described as it should be seen by an array of these stationary, suspended noninertial observers, and since their shared coordinate system ends at r = 2M (because an observer cannot legally hover at or below the event horizon under general relativity), this conventional-looking radiation is described as being emitted by an arbitrarily thin shell of "hot" material at or just above the critical r = 2M radius, where this coordinate system fails. As in the "electrical" case, the membrane paradigm is useful because these effects should appear all the way down to the event horizon, but are not allowed by general relativity to be coming through the horizon; blaming them on a hypothetical thin radiating membrane at the horizon allows them to be modelled classically without explicitly contradicting general relativity's prediction that the r = 2M surface is inescapable. In 1986, Kip S. Thorne, R. H. Price and D. A. Macdonald published an anthology of papers by various authors that examined this idea: "Black Holes: The Membrane Paradigm".
See also
Holographic principle
References
Price, Richard H., and Kip Thorne, "The Membrane Paradigm for Black Holes", Scientific American, vol. 258, no. 4 (April 1988), pp. 69-77.
Leonard Susskind, "Black holes and the information paradox", Scientific American, April 1997 (cover story [1]); also reprinted in the special edition "The edge of physics" [2].
Kip S. Thorne, R. H. Price and D. A. Macdonald (eds.), Black Holes: The Membrane Paradigm (1986).
Thorne, Kip, Black Holes and Time Warps: Einstein's Outrageous Legacy, W. W. Norton & Company, reprint edition, January 1, 1995, ISBN 0-393-31276-3, chapter 11, pp. 397-411.
References
[1] http://www.sciamdigital.com/browse.cfm?sequencenameCHAR=item2&methodnameCHAR=resource_getitembrowse&interfacenameCHAR=browse.cfm&ISSUEID_CHAR=F1E36413-4C42-4E84-BCE9-A10BEB1E9D3&ARTICLEID_CHAR=8F43C5C9-F4F3-4F4C-A50E-221DB8E68CF&sc=I100322
[2] http://www.sciamdigital.com/browse.cfm?sequencenameCHAR=item&methodnameCHAR=resource_getitembrowse&interfacenameCHAR=browse.cfm&ISSUEID_CHAR=6C2FAA19-0087-C3FE-547CDF8E4C786808
David Bohm
David Joseph Bohm (1917-1992)
Born: December 20, 1917, Wilkes-Barre, Pennsylvania, U.S.
Died: October 27, 1992 (aged 74), London, UK
Residence: United Kingdom
Citizenship: British
Nationality: British
Fields: Physicist
Institutions: Manhattan Project; Princeton University; University of São Paulo; Technion; University of Bristol; Birkbeck College
Alma mater: Pennsylvania State College; California Institute of Technology; University of California, Berkeley
Doctoral advisor: Robert Oppenheimer
Doctoral students: Yakir Aharonov; David Pines; Jeffrey Bub; Henri Bortoft
Other notable students: Jack Sarfatti
Known for: Bohm-diffusion; Bohm interpretation; Aharonov-Bohm effect; Holonomic model; Bohm Dialogue
Influences: Albert Einstein; Jiddu Krishnamurti; Arthur Schopenhauer; Georg Wilhelm Friedrich Hegel
Influenced: John Stewart Bell
David Joseph Bohm (20 December 1917 - 27 October 1992) was an American-born British quantum physicist who made contributions in the fields of theoretical physics, philosophy and neuropsychology, and to the Manhattan Project.
Biography
Youth and college
Bohm was born in Wilkes-Barre, Pennsylvania to a Hungarian Jewish immigrant father and a Lithuanian Jewish mother. He was raised mainly by his father, a furniture-store owner and assistant of the local rabbi. Bohm attended Pennsylvania State College, graduating in 1939, then headed west to the California Institute of Technology for a year before transferring to the theoretical physics group under Robert Oppenheimer at the University of California, Berkeley, where he eventually obtained his doctoral degree. Bohm lived in the same neighborhood as some of Oppenheimer's other graduate students (Giovanni Rossi Lomanitz, Joseph Weinberg, and Max Friedman) and with them became increasingly involved not only with physics, but with radical politics. Bohm gravitated to alternative models of society and became active in organizations like the Young Communist League, the Campus Committee to Fight Conscription, and the Committee for Peace Mobilization, all later branded as Communist organizations by the FBI under J. Edgar Hoover.
Quantum theory and Bohm-diffusion

During his early period, Bohm made a number of significant contributions to physics, particularly in the areas of quantum mechanics and relativity theory. As a post-graduate at Berkeley, he developed a theory of plasmas, discovering the electron phenomenon now known as Bohm-diffusion. His first book, Quantum Theory, published in 1951, was well received by Einstein, among others. However, Bohm became dissatisfied with the orthodox approach to quantum theory, which he had written about in that book, and began to develop his own approach (the Bohm interpretation), a non-local hidden-variable deterministic theory whose predictions agree perfectly with those of the nondeterministic quantum theory. His work and the EPR argument became the major factor motivating John Bell's inequality, whose consequences are still being investigated.

The Aharonov-Bohm effect

In 1955 Bohm moved to Israel, where he spent two years at the Technion in Haifa. There he met his wife Saral, who became an important figure in the development of his ideas. In 1957, Bohm moved to the UK as a research fellow at the University of Bristol. In 1959, he and his student Yakir Aharonov discovered the Aharonov-Bohm effect, showing how a magnetic field could affect a region of space in which the field had been shielded, although its vector potential did not vanish there. This showed for the first time that the magnetic vector potential, hitherto a mathematical convenience, could have real physical (quantum) effects. In 1961, Bohm was made Professor of Theoretical Physics at Birkbeck College London, where his collected papers [1] are kept.
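The effect described above can be summarized by the standard Aharonov-Bohm phase, a textbook formula added here for reference (not taken from this article): a charge q transported around a closed loop C enclosing magnetic flux Φ_B acquires the relative phase

```latex
\Delta\varphi = \frac{q}{\hbar}\oint_{C} \mathbf{A}\cdot d\mathbf{l} = \frac{q\,\Phi_{B}}{\hbar}
```

which depends only on the enclosed flux, even though the field B (but not the potential A) vanishes everywhere along the path.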
The holonomic model of the brain

Bohm also made substantial theoretical contributions to neuropsychology and the development of the holonomic model of the functioning of the brain.[2] In collaboration with Stanford neuroscientist Karl Pribram, Bohm helped establish the foundation for Pribram's theory that the brain operates in a manner similar to a hologram, in accordance with quantum mathematical principles and the characteristics of wave patterns. These wave forms may compose hologram-like organizations, Bohm suggested, basing this concept on his application of Fourier analysis, a mathematical method for decomposing complex waves into component sine waves. The holonomic brain model developed by Pribram and Bohm posits a lens-defined world view, much like the textured prismatic effect of sunlight refracted by the churning mists of a rainbow, a view which is quite different from the more conventional "objective reality" approach (not to be confused with objectivity). Pribram held that if psychology means to understand the conditions that produce the world of appearances, it must look to the thinking of physicists like Bohm.[3]

Thought as a System

Bohm was alarmed by what he considered an increasing imbalance of not only 'man' and nature, but among peoples, as well as within people themselves. Bohm: "So one begins to wonder what is going to happen to the human race. Technology keeps on advancing with greater and greater power, either for good or for destruction." He goes on to ask:

What is the source of all this trouble? I'm saying that the source is basically in thought. Many people would think that such a statement is crazy, because thought is the one thing we have with which to solve our problems. That's part of our tradition. Yet it looks as if the thing we use to solve our problems with is the source of our problems. It's like going to the doctor and having him make you ill. In fact, in 20% of medical cases we do apparently have that going on. But in the case of thought, it's far over 20%.

In Bohm's view:

...the general tacit assumption in thought is that it's just telling you the way things are and that it's not doing anything - that 'you' are inside there, deciding what to do with the info. But you don't decide what to do with the info. Thought runs you. Thought, however, gives false info that you are running it, that you are the one
who controls thought. Whereas actually thought is the one which controls each one of us. Thought is creating divisions out of itself and then saying that they are there naturally. This is another major feature of thought: Thought doesn't know it is doing something and then it struggles against what it is doing. It doesn't want to know that it is doing it. And thought struggles against the results, trying to avoid those unpleasant results while keeping on with that way of thinking. That is what I call "sustained incoherence". Bohm thus proposes in his book, Thought as a System, a pervasive, systematic nature of thought: What I mean by "thought" is the whole thing - thought, felt, the body, the whole society sharing thoughts - it's all one process. It is essential for me not to break that up, because it's all one process; somebody else's thoughts become my thoughts, and vice versa. Therefore it would be wrong and misleading to break it up into my thoughts, your thoughts, my feelings, these feelings, those feelings... I would say that thought makes what is often called in modern language a system. A system means a set of connected things or parts. But the way people commonly use the word nowadays it means something all of whose parts are mutually interdependent - not only for their mutual action, but for their meaning and for their existence. A corporation is organized as a system - it has this department, that department, that department. They don't have any meaning separately; they only can function together. And also the body is a system. Society is a system in some sense. And so on. Similarly, thought is a system. That system not only includes thoughts, "felts" and feelings, but it includes the state of the body; it includes the whole of society - as thought is passing back and forth between people in a process by which thought evolved from ancient times.
A system is constantly engaged in a process of development, change, evolution and structure changes...although there are certain features of the system which become relatively fixed. We call this the structure.... Thought has been constantly evolving and we can't say when that structure began. But with the growth of civilization it has developed a great deal. It was probably very simple thought before civilization, and now it has become very complex and ramified and has much more incoherence than before. Now, I say that this system has a fault in it - a "systematic fault". It is not a fault here, there or there, but it is a fault that is all throughout the system. Can you picture that? It is everywhere and nowhere. You may say "I see a problem here, so I will bring my thoughts to bear on this problem". But "my" thought is part of the system. It has the same fault as the fault I'm trying to look at, or a similar fault. Thought is constantly creating problems that way and then trying to solve them. But as it tries to solve them it makes it worse because it doesn't notice that it's creating them, and the more it thinks, the more problems it creates. (pp. 18-19)

Bohm Dialogue

To address societal problems in his later years, Bohm wrote a proposal for a solution that has become known as "Bohm Dialogue", in which equal status and "free space" form the most important prerequisites of communication and the appreciation of differing personal beliefs. He suggested that if these Dialogue groups were experienced on a sufficiently wide scale, they could help overcome the isolation and fragmentation Bohm observed was inherent in society.
Later years
Bohm continued his work in quantum physics past his retirement in 1987. His final work, the posthumously published The Undivided Universe: An ontological interpretation of quantum theory (1993), resulted from a decades-long collaboration with his colleague Basil Hiley. He also spoke to audiences across Europe and North America on the importance of dialogue as a form of sociotherapy, a concept he borrowed from London psychiatrist and practitioner of group analysis Patrick de Maré, and had a series of meetings with the Dalai Lama. He was elected a Fellow of the Royal Society in 1990.
Near the end of his life, Bohm experienced a recurrence of the depression he had suffered at earlier times in his life. He was admitted to the Maudsley Hospital in South London on 10 May 1991. His condition worsened and it was decided that the only thing that might help him was electroconvulsive therapy. Bohm's wife consulted psychiatrist David Shainberg, Bohm's long-time friend and collaborator, who agreed that electroconvulsive treatments were probably his only option. Bohm showed marked improvement from the treatments and was released on 29 August. However, his depression returned and was treated with medication.[4] David Bohm died of a heart attack in Hendon,[5] London, on 27 October 1992, aged 74. He was travelling in a London taxi that day, conversing with the cab driver; after getting no response from his passenger for a few seconds, the driver turned back and found that Bohm had died.[6] David Bohm was considered by many Nobel laureates to be one of the best quantum physicists of all time, who richly deserved the Nobel Prize but failed to obtain it, possibly owing to political victimization.[7]
Publications
1951. Quantum Theory, New York: Prentice Hall. 1989 reprint, New York: Dover, ISBN 0-486-65969-0
1957. Causality and Chance in Modern Physics, 1961 Harper edition reprinted in 1980 by Philadelphia: U of Pennsylvania Press, ISBN 0-8122-1002-6
1962. Quanta and Reality, A Symposium, with N. R. Hanson and Mary B. Hesse, from a BBC program published by the American Research Council
1965. The Special Theory of Relativity, New York: W. A. Benjamin.
1980. Wholeness and the Implicate Order, London: Routledge, ISBN 0-7100-0971-2; 1983 Ark paperback: ISBN 0-7448-0000-5; 2002 paperback: ISBN 0-415-28979-3
1985. Unfolding Meaning: A weekend of dialogue with David Bohm (Donald Factor, editor), Gloucestershire: Foundation House, ISBN 0-948325-00-3; 1987 Ark paperback: ISBN 0-7448-0064-1; 1996 Routledge paperback: ISBN 0-415-13638-5
1985. The Ending of Time, with Jiddu Krishnamurti, San Francisco, CA: Harper, ISBN 0-06-064796-5
1987. Science, Order and Creativity, with F. David Peat. London: Routledge. 2nd ed. 2000. ISBN 0-415-17182-2
1991. Changing Consciousness: Exploring the Hidden Source of the Social, Political and Environmental Crises Facing our World (a dialogue of words and images), coauthor Mark Edwards, Harper San Francisco, ISBN 0-06-250072-4
1992. Thought as a System (transcript of seminar held in Ojai, California, from 30 November to 2 December 1990), London: Routledge. ISBN 0-415-11980-4
1993. The Undivided Universe: An ontological interpretation of quantum theory, with B. J. Hiley, London: Routledge, ISBN 0-415-12185-X (final work)
1996. On Dialogue, editor Lee Nichol. London: Routledge, hardcover: ISBN 0-415-14911-8, paperback: ISBN 0-415-14912-6; 2004 edition: ISBN 0-415-33641-4
1998. On Creativity, editor Lee Nichol. London: Routledge, hardcover: ISBN 0-415-17395-7, paperback: ISBN 0-415-17396-5; 2004 edition: ISBN 0-415-33640-6
1999. Limits of Thought: Discussions, with Jiddu Krishnamurti, London: Routledge, ISBN 0-415-19398-2
1999. Bohm-Biederman Correspondence: Creativity and Science, with Charles Biederman, editor Paavo Pylkkänen. ISBN 0-415-16225-4
2002. The Essential David Bohm, editor Lee Nichol. London: Routledge, ISBN 0-415-26174-0. Preface by the Dalai Lama
See also
Aharonov–Bohm effect
American philosophy
Bohm diffusion of a plasma in a magnetic field
Bohm interpretation
Correspondence principle
De Broglie–Bohm theory
EPR paradox
Holographic paradigm
Holographic principle
Holomovement
Holonomic brain theory
Implicate and Explicate Order
Implicate order
Jiddu Krishnamurti
Membrane paradigm
Penrose–Hameroff "Orchestrated Objective Reduction" theory of consciousness
The Bohm sheath criterion, which states that a plasma must flow with at least the speed of sound toward a solid surface
Wave gene
John Stewart Bell
Karl Pribram
Influence on John David Garcia
List of American philosophers
References
[1] http://www.aim25.ac.uk/cgi-bin/search2?coll_id=3070&inst_id=33
[2] Comparison between Karl Pribram's "Holographic Brain Theory" and more conventional models of neuronal computation (http://www.acsa2000.net/bcngroup/jponkp/#chap4)
[3] http://homepages.ihug.co.nz/~sai/pribram.htm
[4] F. David Peat, Infinite Potential: The Life and Times of David Bohm, Reading, MA: Addison-Wesley, 1997, pp. 308-317. ISBN 0201328208.
[5] Deaths England and Wales 1984-2006 (http://www.findmypast.com/BirthsMarriagesDeaths.jsp)
[6] F. David Peat, Infinite Potential: The Life and Times of David Bohm, Reading, MA: Addison-Wesley, 1997, pp. 308-317. ISBN 0201328208.
[7] F. David Peat, Infinite Potential: The Life and Times of David Bohm, Reading, MA: Addison-Wesley, 1997, pp. 308-317. ISBN 0201328208.
"Bohm's Alternative to Quantum Mechanics", David Z. Albert, Scientific American (May 1994)
Brotherhood of the Bomb: The Tangled Lives and Loyalties of Robert Oppenheimer, Ernest Lawrence, and Edward Teller, Gregg Herken, New York: Henry Holt (2002), ISBN 0-8050-6589-X (information on Bohm's work at Berkeley and his dealings with HUAC)
Infinite Potential: the Life and Times of David Bohm, F. David Peat, Reading, MA: Addison-Wesley (1997), ISBN 0-201-40635-7; DavidPeat.com (http://www.fdavidpeat.com/)
Quantum Implications: Essays in Honour of David Bohm (B. J. Hiley, F. David Peat, editors), London: Routledge (1987), ISBN 0-415-06960-2
Thought as a System (transcript of seminar held in Ojai, California, from 30 November to 2 December 1990), London: Routledge (1992), ISBN 0-415-11980-4
The Quantum Theory of Motion: an account of the de Broglie-Bohm Causal Interpretation of Quantum Mechanics, Peter R. Holland, Cambridge: Cambridge University Press (2000), ISBN 0-921-48453-9
External links
English site (http://www.david-bohm.net) for David Bohm's ideas about Dialogue.
The David_Bohm_Hub (http://www.thinkg.net/david_bohm/): from thinkg.net, with compilations of David Bohm's life and work in the form of texts, audio, video, and pictures.
Thought Knowledge Perception Institute (http://www.tkpi.org): a non-partisan organization that aims to preserve and continue the work of David Bohm and others.
Lifework of David Bohm: River of Truth (http://www.vision.net.au/~apaterson/science/david_bohm.htm#BOHM'S LEGACY): article by Will Keepin.
Dialogos (http://www.dialogos.com): consulting group, originally founded by Bohm colleagues William Isaacs and Peter Garrett, aiming to bring Bohm dialogue into organizations.
Quantum Mind (http://www.quantum-mind.co.uk)
Interview with David Bohm (http://www.fdavidpeat.com/interviews/bohm.htm), provided and conducted by F. David Peat along with John Briggs, first issued in Omni magazine, January 1987.
David Bohm and Krishnamurti (http://www.wie.org/j11/peat.asp)
Archive of papers at Birkbeck College relating to David Bohm (http://www.bbk.ac.uk/lib/about/hours/bohm)
Oral History interview transcript with David Bohm, 8 May 1981, American Institute of Physics, Niels Bohr Library and Archives (http://www.aip.org/history/ohilist/4513.html)
Karl H. Pribram
Karl H. Pribram (born February 25, 1919 in Vienna, Austria) is a professor at Georgetown University, and an emeritus professor of psychology and psychiatry at Stanford University and Radford University. Board-certified as a neurosurgeon, Pribram did pioneering work on the definition of the limbic system, the relationship of the frontal cortex to the limbic system, the sensory-specific "association" cortex of the parietal and temporal lobes, and the classical motor cortex of the human brain. To the general public, Pribram is best known for his development of the holonomic brain model of cognitive function and his contribution to ongoing neurological research into memory, emotion, motivation and consciousness. American best-selling author Katherine Neville is his significant other.
Holonomic model
Pribram's holonomic model of brain processing states that, in addition to the circuitry accomplished by the large fiber tracts in the brain, processing also occurs in webs of fine fiber branches (for instance, dendrites). This type of processing is properly described by Gabor quanta of information, wavelets that are used in quantum holography, the basis of fMRI, PET scans and other image processing procedures. Gabor wavelets are windowed Fourier transforms that convert complex spatial (and temporal) patterns into component waves whose amplitudes at their intersections become reinforced or diminished. Fourier processes are the basis of holography. Holograms can correlate and store a huge amount of information - and have the advantage that the inverse transform returns the results of correlation into the spatial and temporal patterns that guide us in navigating our universe. David Bohm had suggested that were we to view the cosmos without the lenses that outfit our telescopes, the universe would appear to us as a hologram. Pribram extended this insight by noting that were we deprived of the lenses of our eyes and the lens-like processes of our other sensory receptors, we would be immersed in holographic experiences.
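The "windowed Fourier" idea behind Gabor wavelets can be illustrated with a short sketch. The code below is a hypothetical, self-contained example (the signal, probe frequencies, and window width are invented for illustration): it correlates a sampled signal with a Gaussian-windowed cosine, which measures how strongly a given frequency is present near a given instant - the local frequency analysis the holonomic model attributes to dendritic webs.

```python
import math

def gabor(t, f, sigma):
    """Real part of a Gabor wavelet: a cosine of frequency f (Hz)
    windowed by a Gaussian of width sigma (s)."""
    return math.exp(-t * t / (2 * sigma * sigma)) * math.cos(2 * math.pi * f * t)

def gabor_coefficient(signal, ts, f, sigma, center):
    """Inner product of a sampled signal with a Gabor wavelet centred at
    'center': how much of frequency f is present near that time."""
    dt = ts[1] - ts[0]
    return sum(s * gabor(t - center, f, sigma) for s, t in zip(signal, ts)) * dt

# A pure 5 Hz tone responds far more strongly to a 5 Hz Gabor probe
# than to a 20 Hz one.
ts = [i * 0.001 for i in range(2000)]            # 2 s sampled at 1 kHz
sig = [math.cos(2 * math.pi * 5 * t) for t in ts]
c5 = abs(gabor_coefficient(sig, ts, 5, 0.1, 1.0))
c20 = abs(gabor_coefficient(sig, ts, 20, 0.1, 1.0))
print(c5 > 10 * c20)  # -> True
```

Because the Gaussian window localizes the analysis in time while the cosine selects a frequency, a family of such wavelets trades off time and frequency resolution - the property Gabor originally quantified.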
Other contributions
In the late 1940s and early 1950s, Pribram's neurobehavioral experiments established the composition of the limbic system and the executive functions of the prefrontal cortex. Pribram also discovered the sensory specific systems of the association cortex, and showed that these systems operate to organize the choices we make among sensory stimuli, not the sensing of the stimuli themselves.
Bibliography
Miller, George; Galanter, Eugene, & Pribram, Karl (1960). Plans and the structure of behavior. New York: Holt, Rinehart and Winston. ISBN 0030100755.
Pribram, Karl H. (1969). Brain and behaviour. Harmondsworth: Penguin Books. ISBN 0140805214.
Pribram, Karl (1971). Languages of the brain; experimental paradoxes and principles in neuropsychology. Englewood Cliffs, N. J.: Prentice-Hall. ISBN 0135227305.
Pribram, Karl; Gill, Morton M. (1976). Freud's "Project" re-assessed: preface to contemporary cognitive theory and neuropsychology. New York: Basic Books. ISBN 0465025692.
Pribram, Karl (1991). Brain and perception: holonomy and structure in figural processing. Hillsdale, N. J.: Lawrence Erlbaum Associates. ISBN 0898599954.
Globus, Gordon G.; Pribram, Karl H., & Vitiello, Giuseppe (2004). Brain And Being: At The Boundary Between Science, Philosophy, Language, And Arts (Advances in Consciousness Research, 58). John Benjamins Publishing Co. ISBN 158811550X.
Pribram, Karl (ed.) (1969). On the biology of learning. New York: Harcourt Brace & World. ISBN 0155675206.
Pribram, Karl, & Broadbent, Donald (eds.) (1970). Biology of memory. New York: Academic Press. ISBN 0125643500.
Pribram, K. H., & Luria, A. R. (eds.) (1973). Psychophysiology of the frontal lobes. New York: Academic Press. ISBN 0125643403.
Pribram, Karl, & Isaacson, Robert L. (eds.) (1975). The Hippocampus. New York: Plenum Press. ISBN 0306375354.
Pribram, Karl (ed.) (1993). Rethinking neural networks: quantum fields and biological data. Hillsdale, N. J.: Erlbaum. ISBN 0805814663.
Pribram, Karl (ed.) (1994). Origins: brain and self organization. Hillsdale, N. J.: Lawrence Erlbaum. ISBN 0805817867.
King, Joseph, & Pribram, Karl (eds.) (1995). Scale in conscious experience: Is the brain too important to be left to the specialists to study?. Mahwah, N. J.: Lawrence Erlbaum Associates. ISBN 0805821783.
Pribram, Karl, & King, Joseph (eds.) (1996). Learning as self-organization. Mahwah, N. J.: L. Erlbaum Associates. ISBN 080582586X.
Pribram, Karl (ed.) (1998). Brain and values: is a biological science of values possible. Mahwah, N. J.: Lawrence Erlbaum Associates. ISBN 0805831541.
Pribram, Karl (2004). "Brain and Mathematics" [1]. Pari Center for New Learning. Retrieved 2007-10-25.
"Like Bohm, Karl Pribram sees the holographic nature of reality" [2]. The Ground of Faith. October 2003. Retrieved 2007-10-25.
Mishlove, Jeffrey (1998). "The Holographic Brain with Karl Pribram, MA; Ph.D." [12]. TWM.co.nz. Retrieved 2007-10-25.
External links
"The Holographic Brain" [3] - Dr. Jeffrey Mishlove interviews Karl Pribram
"Comparison between Holographic Brain Theory and conventional models of neuronal computation" [8] - academic paper on Pribram's work
"Pribram Receives Havel Prize For His Work in Neuroscience" [4] - news article
"Winner 1998 Noetic Medal for Consciousness & Brain Research - For Lifetime Achievement" [5]
Global Lens Interview [6] (Video)
quantum mind [7]
References
[1] http://www.paricenter.com/library/papers/pribram01.php
[2] http://homepages.ihug.co.nz/~thegroundoffaith/issues/2003-10/pribram.html
[3] http://homepages.ihug.co.nz/~sai/pribram.htm
[4] http://www.katherineneville.com/karl_havel_prize.htm
[5] http://www.mindspring.com/~quantum.computing/
[6] http://www.immaginehdv.com/detail.php?c=2&i=b90c95bf29b3909ced9b95a10d865cd329684d33
[7] http://www.quantum-mind.co.uk
Aharonov-Bohm effect
The Aharonov–Bohm effect, sometimes called the Ehrenberg–Siday–Aharonov–Bohm effect, is a quantum mechanical phenomenon in which an electrically charged particle shows a measurable interaction with an electromagnetic field despite being confined to a region in which both the magnetic field B and electric field E are zero. The Aharonov–Bohm effect shows that the local E and B fields do not contain full information about the electromagnetic field, and the electromagnetic four-potential, A, must be used instead. By Stokes' theorem, the magnitude of the Aharonov–Bohm effect can be calculated using A alone or using E and B alone. When using E and B, however, the effect depends on the field values in a region from which the test particle is excluded, not only classically but also quantum mechanically. In contrast, the effect depends on A only in the region where the test particle is allowed. Therefore we must either abandon the principle of locality (which most physicists are reluctant to do) or accept that the electromagnetic potential offers a more complete description of electromagnetism than the electric and magnetic fields can. In classical electromagnetism the two descriptions were equivalent. With the addition of quantum theory, though, the electromagnetic potential A is seen as being more fundamental or "real";[1] the E and B fields can be derived from the potential A, but the potential cannot be derived from the E and B fields. Werner Ehrenberg and Raymond E.
Siday first predicted the effect in 1949,[2] and similar effects were later rediscovered by Yakir Aharonov and David Bohm in 1959.[3] (After publication of the 1959 paper, Bohm was informed of Ehrenberg and Siday's work, which was acknowledged and credited in Bohm and Aharonov's subsequent 1961 paper.[4] [5]) The most commonly described case, sometimes called the Aharonov–Bohm solenoid effect, takes place when the wave function of a charged particle passing around a long solenoid experiences a phase shift as a result of the enclosed magnetic field, despite the magnetic field being zero in the region through which the particle passes. This phase shift has been observed experimentally by its effect on interference fringes. (There are also magnetic Aharonov–Bohm effects on bound energies and scattering cross sections, but these cases have not been experimentally tested.) An electric Aharonov–Bohm phenomenon was also predicted, in which a charged particle is affected by regions with different electric potentials but zero electric field; this has also seen experimental confirmation. A separate "molecular" Aharonov–Bohm effect was proposed for nuclear motion in multiply connected regions, but this has been argued to be essentially different, depending only on local quantities along the nuclear path.[6] A general review can be found in Peshkin and Tonomura (1989).[7]
Therefore particles with the same start and end points, but travelling along two different routes, will acquire a phase difference Δφ determined by the magnetic flux Φ_B through the area between the paths (via Stokes' theorem and B = ∇ × A), given by:

Δφ = (q/ħ) ∮ A · dl = q Φ_B / ħ
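The size of this phase difference is easy to evaluate numerically. The following sketch (an illustrative example, not taken from the source; constants are the exact SI values) computes Δφ = qΦ/ħ for an electron whose two paths enclose a flux of h/(2e):

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C
PLANCK = 2 * math.pi * HBAR  # Planck constant h, J*s

def ab_phase(charge, enclosed_flux):
    """Aharonov-Bohm phase difference (radians) between two paths that
    enclose the given magnetic flux (Wb): delta_phi = q * Phi / hbar."""
    return charge * enclosed_flux / HBAR

# An electron whose paths enclose a flux of h/(2e) picks up a phase
# difference of pi: the interference fringes shift by half a spacing.
phase = ab_phase(E_CHARGE, PLANCK / (2 * E_CHARGE))
print(phase / math.pi)  # ~1.0
```

Note how small the relevant flux is: a phase of order π already corresponds to roughly 2 × 10⁻¹⁵ Wb of enclosed flux, which is why the effect is probed with microscopic solenoids and rings.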
In quantum mechanics the same particle can travel between two points by a variety of paths. Therefore this phase difference can be observed by placing a solenoid between the slits of a double-slit experiment (or equivalent). An ideal solenoid encloses a magnetic field B, but does not produce any magnetic field outside of its cylinder, and thus the charged particle (e.g. an electron) passing outside experiences no magnetic field B. However, there is a (curl-free) vector potential A outside the solenoid with an enclosed flux, and so the relative phase of particles passing through one slit or the other is altered by whether the solenoid current is turned on or off. This corresponds to an observable shift of the interference fringes on the observation plane.
Schematic of double-slit experiment in which the Aharonov–Bohm effect can be observed: electrons pass through two slits, interfering at an observation screen, with the interference pattern shifted when a magnetic field B is turned on in the cylindrical solenoid.
The same phase effect is responsible for the quantized-flux requirement in superconducting loops. This quantization occurs because the superconducting wave function must be single-valued: its phase difference around a closed loop must be an integer multiple of 2π (with the charge q = 2e for the electron Cooper pairs), and thus the flux must be a multiple of h/2e. The superconducting flux quantum was actually predicted prior to Aharonov and Bohm, by F. London in 1948, using a phenomenological model.[8] The magnetic Aharonov–Bohm effect was experimentally confirmed by Osakabe et al. (1986),[9] following much earlier work summarized in Olariu and Popescu (1985).[10] Its scope and application continues to expand. Webb et al. (1985)[11] demonstrated Aharonov–Bohm oscillations in ordinary, non-superconducting metallic rings; for a discussion, see Schwarzschild (1986)[12] and Imry & Webb (1989).[13] Bachtold et al. (1999)[14] detected the effect in carbon nanotubes; for a discussion, see Kong et al. (2004).[15]
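The flux quantum itself follows directly from the single-valuedness condition. A minimal sketch (illustrative only; the applied-flux value is invented) computes h/(2e) and the nearest allowed trapped flux in a superconducting ring:

```python
H = 6.62607015e-34          # Planck constant, J*s (exact SI value)
E_CHARGE = 1.602176634e-19  # elementary charge, C (exact SI value)

PHI_0 = H / (2 * E_CHARGE)  # superconducting flux quantum, Wb

def quantized_flux(applied_flux):
    """Nearest flux a superconducting ring can actually trap: an integer
    number of flux quanta h/(2e), since Cooper pairs carry charge 2e."""
    return round(applied_flux / PHI_0) * PHI_0

print(PHI_0)  # ~2.0678e-15 Wb
```

For example, an applied flux of 5 × 10⁻¹⁵ Wb would be trapped as exactly two flux quanta, the nearest allowed value.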
Electric effect
Just as the phase of the wave function depends upon the magnetic vector potential, it also depends upon the scalar electric potential. By constructing a situation in which the electrostatic potential varies for two paths of a particle, through regions of zero electric field, an observable Aharonov–Bohm interference phenomenon from the phase shift has been predicted; again, the absence of an electric field means that, classically, there would be no effect. From the Schrödinger equation, the phase of an eigenfunction with energy E goes as exp(−iEt/ħ). The energy, however, will depend upon the electrostatic potential V for a particle with charge q. In particular, for a region with constant potential V (zero field), the electric potential energy qV is simply added to E, resulting in a phase shift:

Δφ = −qVt/ħ,

where t is the time spent in the potential. The initial theoretical proposal for this effect suggested an experiment where charges pass through conducting cylinders along two paths, which shield the particles from external electric fields in the regions where they travel, but still allow a varying potential to be applied by charging the cylinders. This proved difficult to realize, however. Instead, a different experiment was proposed involving a ring geometry interrupted by tunnel barriers, with a bias voltage V relating the potentials of the two halves of the ring. This situation results in an Aharonov–Bohm phase shift as above, and was observed experimentally in 1998.[16]
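The magnitude of the electric phase shift can be sketched numerically. The code below is an illustrative example (the dwell time and potential offset are invented numbers, and only the magnitude of Δφ = qVt/ħ is computed):

```python
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def electric_ab_phase(charge, voltage, dwell_time):
    """Magnitude of the phase picked up in a field-free region held at
    constant potential V for a time t: |delta_phi| = q*V*t/hbar."""
    return charge * voltage * dwell_time / HBAR

# Hypothetical numbers: an electron spending 1 ns in a region whose
# potential is offset by 1 microvolt.
phase = electric_ab_phase(E_CHARGE, 1e-6, 1e-9)
print(phase)  # ~1.52 rad, easily visible as a fringe shift
```

Even a microvolt-scale potential applied for a nanosecond produces a phase of order one radian, which is why mesoscopic rings with modest bias voltages suffice for the 1998 observation mentioned above.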
Mathematical interpretation
In the terms of modern differential geometry, the Aharonov–Bohm effect can be understood to be the monodromy of a flat complex line bundle. The U(1)-connection on this line bundle is given by the electromagnetic four-potential A as ∇ = d + (ie/ħ)A, where d means exterior differentiation in Minkowski space and A is the 1-form corresponding to the four-potential. The curvature form of the connection, F = dA, is the electromagnetic field strength. The holonomy of the connection around a closed loop is, as a consequence of Stokes' theorem, determined by the magnetic flux through a surface bounded by the loop. This description is general and works inside as well as outside the conductor. Outside of the conducting tube, which is for example a longitudinally magnetized infinite metallic thread, the field strength vanishes (F = 0); in other words, outside the thread the connection is flat, and the holonomy of a loop contained in the field-free region depends only on the winding number around the tube and is, by definition, the monodromy of the flat connection. In any simply connected region outside of the tube we can find a gauge transformation (acting on wave functions and connections) that gauges away the vector potential. However, if the monodromy is non-trivial, there is no such gauge transformation for the whole outside region. If we want to ignore the physics inside the conductor and only describe the physics in the outside region, it becomes natural to describe the quantum electron mathematically by a section in a complex line bundle with an "external" connection rather than an external EM field (by incorporating local gauge transformations we have already acknowledged that quantum mechanics defines the notion of a (locally) flat wavefunction (zero momentum density) but not that of a unit wavefunction). The Schrödinger equation readily generalizes to this situation. In fact, for the Aharonov–Bohm effect we can work in two simply connected regions with cuts that pass from the tube towards or away from the detection screen. In each of these regions we have to solve the ordinary free Schrödinger equation, but in passing from one region to the other, in only one of the two connected components of the intersection (effectively in only one of the slits) we pick up a monodromy factor, which results in a shift in the interference pattern. Effects with similar mathematical interpretation can be found in other fields. For example, in classical statistical physics, quantization of a molecular motor motion in a stochastic environment can be interpreted as an Aharonov–Bohm effect induced by a gauge field acting in the space of control parameters.[21]
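That the holonomy depends only on the winding number, not on the loop's shape, can be checked numerically. The sketch below (an illustrative example under the assumption of an ideal, infinitely thin flux tube at the origin) integrates the curl-free vector potential A = Φ/(2π) · (−y, x)/(x² + y²) around ellipses of different shapes; every loop winding once around the tube returns the same enclosed flux:

```python
import math

def loop_integral(flux, a, b, n=50000):
    """Numerically integrate A . dl around the ellipse x=a*cos t, y=b*sin t
    enclosing an ideal flux tube at the origin, where outside the tube
    A = flux/(2*pi) * (-y, x)/(x^2 + y^2)  (curl-free, but with holonomy)."""
    total = 0.0
    dt = 2 * math.pi / n
    for k in range(n):
        t = k * dt
        x, y = a * math.cos(t), b * math.sin(t)
        r2 = x * x + y * y
        ax = flux / (2 * math.pi) * (-y) / r2
        ay = flux / (2 * math.pi) * x / r2
        dx, dy = -a * math.sin(t) * dt, b * math.cos(t) * dt
        total += ax * dx + ay * dy
    return total

# Any loop winding once around the flux gives the same answer - the
# enclosed flux - regardless of the loop's shape or size.
print(round(loop_integral(1.0, 1.0, 1.0), 6))  # -> 1.0
print(round(loop_integral(1.0, 3.0, 0.5), 6))  # -> 1.0
```

This is the monodromy of the flat connection in elementary form: the integrand varies wildly around the eccentric loop, but the total is pinned to the winding number times the flux.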
External links
A video explaining the use of the Aharonov-Bohm effect in nano-rings. [22]
See also
Geometric phase
Hannay angle
Wannier function
Berry phase
References
[1] Feynman, R. The Feynman Lectures on Physics. Vol. 2. p. 15-5. "...is the vector potential a 'real' field? ... a real field is a mathematical device for avoiding the idea of action at a distance. ... for a long time it was believed that A was not a 'real' field. ... there are phenomena involving quantum mechanics which show that in fact A is a 'real' field in the sense that we have defined it... E and B are slowly disappearing from the modern expression of physical laws; they are being replaced by A and [the scalar potential], and knowledge of the classical electromagnetic field acting locally on a particle is not sufficient to predict its quantum-mechanical behavior."
[2] Ehrenberg, W; Siday, RE (1949). "The Refractive Index in Electron Optics and the Principles of Dynamics". Proceedings of the Physical Society B 62: 8-21. doi:10.1088/0370-1301/62/1/303.
[3] Aharonov, Y; Bohm, D (1959). "Significance of electromagnetic potentials in quantum theory". Physical Review 115: 485-491. doi:10.1103/PhysRev.115.485.
[4] Peat, FD (1997). Infinite Potential: The Life and Times of David Bohm (http://www.fdavidpeat.com/bibliography/books/infinite.htm). Addison-Wesley. ISBN 0-201-40635-7.
[5] Aharonov, Y; Bohm, D (1961). "Further Considerations on Electromagnetic Potentials in the Quantum Theory". Physical Review 123: 1511-1524. doi:10.1103/PhysRev.123.1511.
[6] Sjöqvist, E (2002). "Locality and topology in the molecular Aharonov-Bohm effect". Physical Review Letters 89 (21): 210401. doi:10.1103/PhysRevLett.89.210401. arXiv:quant-ph/0112136.
[7] Peshkin, M; Tonomura, A (1989). The Aharonov-Bohm effect. Springer-Verlag. ISBN 3-540-51567-4.
[8] London, F (1948). "On the Problem of the Molecular Theory of Superconductivity". Physical Review 74: 562. doi:10.1103/PhysRev.74.562.
[9] Osakabe, N; et al. (1986). "Experimental confirmation of Aharonov-Bohm effect using a toroidal magnetic field confined by a superconductor". Physical Review A 34: 815. doi:10.1103/PhysRevA.34.815.
[10] Olariu, S; Popescu, II (1985). "The quantum effects of electromagnetic fluxes". Reviews of Modern Physics 57: 339. doi:10.1103/RevModPhys.57.339.
[11] Webb, RA; Washburn, S; Umbach, CP; Laibowitz, RB (1985). "Observation of h/e Aharonov-Bohm Oscillations in Normal-Metal Rings". Physical Review Letters 54: 2696. doi:10.1103/PhysRevLett.54.2696.
[12] Schwarzschild, B (1986). "Currents in Normal-Metal Rings Exhibit Aharonov-Bohm Effect". Physics Today 39 (1): 17. doi:10.1063/1.2814843.
[13] Imry, Y; Webb, RA (1989). "Quantum Interference and the Aharonov-Bohm Effect". Scientific American 260 (4).
[14] Schönenberger, C (1999). "Aharonov-Bohm oscillations in carbon nanotubes". Nature 397: 673. doi:10.1038/17755.
[15] Kong, J; Kouwenhoven, L; Dekker, C (2004). "Quantum change for nanotubes" (http://physicsworld.com/cws/article/print/19746). Physics World. Retrieved 2009-08-17.
[16] van Oudenaarden, A (1998). "Magneto-electric Aharonov-Bohm effect in metal rings". Nature 391: 768. doi:10.1038/35808.
[17] Fischer, AM (2009). "Quantum doughnuts slow and freeze light at will" (http://www.innovations-report.com/html/reports/physics_astronomy/quantum_doughnuts_slow_freeze_light_128981.html). Innovation Reports. Retrieved 2008-08-17.
[18] Borunda, MF; et al. (2008). "Aharonov-Casher and spin Hall effects in two-dimensional mesoscopic ring structures with strong spin-orbit interaction". arXiv:0809.0880 [cond-mat.mes-hall].
[19] Grbić, B; et al. (2008). "Aharonov-Bohm oscillations in p-type GaAs quantum rings". Physica E 40: 1273. doi:10.1016/j.physe.2007.08.129. arXiv:0711.0489.
[20] Fischer, AM; et al. (2009). "Exciton Storage in a Nanoscale Aharonov-Bohm Ring with Electric Field Tuning". Physical Review Letters 102: 096405. doi:10.1103/PhysRevLett.102.096405. arXiv:0809.3863.
[21] Chernyak, VY; Sinitsyn, NA (2009). "Robust quantization of a molecular motor motion in a stochastic environment". Journal of Chemical Physics 131: 181101. doi:10.1063/1.3263821. arXiv:0906.3032. Bibcode:2009JChPh.131r1101C.
[22] http://www2.warwick.ac.uk/newsandevents/news/quantumdoughnuts
Bohm diffusion
Bohm diffusion is the diffusion of plasma across a magnetic field with a diffusion coefficient equal to , where B is the magnetic field strength, T is the temperature, and e is the elementary charge. It was first observed in 1946 by David Bohm, E. H. S. Burhop, and Harrie Massey while studying magnetic arcs for use in isotope separation [1]. It has since been observed that many other plasmas follow this law. Fortunately there are exceptions where the diffusion rate is lower, otherwise there would be no hope of achieving practical fusion energy. Generally diffusion can be modelled as a random walk of steps of length and time . If the diffusion is collisional, then is the mean free path and is the inverse of the collision frequency. The diffusion coefficient D can be expressed variously as
where v = Δx/τ is the velocity between collisions. In a magnetized plasma, the collision frequency is usually small compared to the gyrofrequency, so that the step size is the gyroradius ρ and the step time is the inverse of the collision frequency ν, leading to D = ρ²ν. If the collision frequency is larger than the gyrofrequency ω_c, then the particles can be considered to move freely with the thermal velocity v_th between collisions, and the diffusion coefficient takes the form D = v_th²/ν. Evidently, the classical (collisional) diffusion is maximum when the collision frequency is equal to the gyrofrequency, in which case D_c = ρ²ω_c = v_th²/ω_c. Substituting ρ = v_th/ω_c, v_th = (k_B T/m)^(1/2), and ω_c = eB/m, we arrive at D_c = k_B T/eB, which is the Bohm scaling. Considering the approximate nature of this derivation, the missing factor of 1/16 in front is no cause for concern. Therefore, at least within a factor of order unity, Bohm diffusion is always greater than classical diffusion. In the common low-collisionality regime, classical diffusion scales as 1/B², compared with the 1/B dependence of Bohm diffusion. This distinction is often used to distinguish between the two. In light of the calculation above, it is tempting to think of Bohm diffusion as classical diffusion with an anomalous collision rate that maximizes the transport, but the physical picture is different. Anomalous diffusion is the result of turbulence. Regions of higher or lower electric potential result in eddies because the plasma moves around them with the E×B drift velocity, equal to E/B. These eddies play a similar role to the gyro-orbits in classical diffusion, except that the physics of the turbulence can be such that the decorrelation time is approximately equal to the turn-over time, resulting in Bohm scaling.
Another way of looking at it is that the turbulent electric field is approximately equal to the potential perturbation δφ divided by the scale length L of the turbulence, and the potential perturbation can be expected to be a sizeable fraction of k_B T/e. The resulting drift velocity is then v ≈ δφ/(L B) ≈ k_B T/(e B L), so the turbulent diffusion constant D = v L is independent of the scale length and approximately equal to the Bohm value. The fraction 1/16, according to Bohm, "has no theoretical justification but is an empirical number agreeing with most experiments to within a factor of two or three".[1] Many physicists, such as L. Spitzer,[2] have considered this fraction to be a factor related to plasma instability.
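The scalings above can be illustrated numerically. The following Python sketch (with hypothetical plasma parameters, not taken from the text) compares the Bohm and classical diffusion coefficients and checks the 1/B versus 1/B² dependence discussed above.

```python
# Illustrative comparison of Bohm vs. classical diffusion, using the formulas
# derived above. The plasma parameters below are hypothetical examples.
import math

e = 1.602e-19     # elementary charge, C
m_i = 1.673e-27   # proton mass, kg (a hydrogen plasma is assumed)

def bohm_diffusion(T_eV, B):
    # D_B = (1/16) k_B T / (e B); with T in eV, k_B T = e * T_eV, so D_B = T_eV / (16 B)
    return T_eV / (16.0 * B)

def classical_diffusion(T_eV, B, nu):
    # Low-collisionality classical diffusion D = rho^2 * nu
    v_th = math.sqrt(e * T_eV / m_i)  # thermal speed, from k_B T = e * T_eV
    omega_c = e * B / m_i             # gyrofrequency
    rho = v_th / omega_c              # gyroradius
    return rho**2 * nu

T_eV, B, nu = 100.0, 1.0, 1.0e4       # 100 eV, 1 T, 10 kHz collision frequency
print(bohm_diffusion(T_eV, B))        # 6.25 m^2/s
print(classical_diffusion(T_eV, B, nu))
# Doubling B halves D_B (1/B scaling) but quarters D_c (1/B^2 scaling):
assert abs(bohm_diffusion(T_eV, 2 * B) / bohm_diffusion(T_eV, B) - 0.5) < 1e-12
assert abs(classical_diffusion(T_eV, 2 * B, nu) / classical_diffusion(T_eV, B, nu) - 0.25) < 1e-12
```

For these example numbers the Bohm coefficient exceeds the classical one by orders of magnitude, consistent with the statement that Bohm diffusion is always at least of order the classical value.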
References
[1] D. Bohm, in The Characteristics of Electrical Discharges in Magnetic Fields, edited by A. Guthrie and R. K. Wakerling (McGraw-Hill, New York, 1949).
[2] L. Spitzer, Phys. Fluids 3, 659 (1960).
External links
Physics Glossary (http://www.faqs.org/faqs/fusion-faq/glossary/b/)
Bohm interpretation
The de Broglie–Bohm theory, also called the pilot-wave theory, Bohmian mechanics, and the causal interpretation, is an interpretation of quantum theory. As in quantum theory, it contains a wavefunction, a function on the space of all possible configurations. Additionally, it also contains an actual configuration, even in situations where nobody observes it. The evolution over time of the configuration (that is, of the positions of all particles or the configuration of all fields) is defined by the wave function via a guiding equation. The evolution of the wavefunction over time is given by Schrödinger's equation. The de Broglie–Bohm theory expresses in an explicit manner the fundamental non-locality in quantum physics. The velocity of any one particle depends on the value of the wavefunction, which depends on the whole configuration of the universe. This theory is deterministic. Most (but not all) relativistic variants require a preferred frame. Variants which handle spin and curved spaces are known. It can be modified to handle quantum field theory. Bell's theorem was inspired by Bell's discovery of the work of David Bohm and his subsequent wondering whether the obvious non-locality of the theory could be removed. This theory gives rise to a measurement formalism, analogous to thermodynamics for classical mechanics, which yields the standard quantum formalism generally associated with the Copenhagen interpretation. The measurement problem is resolved by this theory, since the outcome of an experiment is registered by the configuration of the particles of the experimental apparatus after the experiment is completed. The familiar wavefunction collapse of standard quantum mechanics emerges from an analysis of subsystems and the quantum equilibrium hypothesis. The theory has a number of equivalent mathematical formulations and has been presented under a number of different names.
Overview
De Broglie–Bohm theory is based on the following postulates. There is a configuration q of the universe, described by coordinates q^k, which is an element of the configuration space Q. The configuration space is different for different versions of pilot wave theory. For example, this may be the space of positions of N particles, or, in the case of field theory, the space of field configurations. The configuration evolves according to the guiding equation

dq^k/dt = (ħ/m_k) Im(∂_k ψ / ψ)(q, t).

Here, ψ(q, t) is the standard complex-valued wavefunction known from quantum theory, which evolves according to Schrödinger's equation

iħ ∂ψ/∂t = −Σ_k (ħ²/2m_k) ∂_k² ψ + V(q) ψ.

This already completes the specification of the theory for any quantum theory with a Hamiltonian operator of the type H = −Σ_k (ħ²/2m_k) ∂_k² + V(q). If the configuration is distributed according to |ψ(q, t)|² at some moment of time t, this holds for all times. Such a state is named quantum equilibrium. In quantum equilibrium, this theory will agree with the results of standard quantum mechanics.
Two-slit experiment
The double-slit experiment is an illustration of wave-particle duality. In it, a beam of particles (such as photons) travels through a barrier in which two slits have been cut. If one puts a detector screen on the other side, the pattern of detected particles shows interference fringes characteristic of waves; however, the detector screen responds to particles. The system exhibits behaviour of both waves (interference patterns) and particles (dots on the screen). If we modify this experiment so that one slit is closed, no interference pattern is observed. Thus, the state of both slits affects the final results. We can also arrange to have a minimally invasive detector at one of the slits to see which slit the particle went through. When we do that, the interference pattern disappears.
The Copenhagen interpretation states that the particles are not localised in space until they are detected, so that, if there is no detector on the slits, there is no matter of fact about which slit the particle passed through. If one slit has a detector on it, then the wavefunction collapses due to that detection. In de Broglie–Bohm theory, the wavefunction travels through both slits, but each particle has a well-defined trajectory and passes through exactly one of the slits. The final position of the particle on the detector screen and the slit through which the particle passes are determined by the initial position of the particle. That initial position is not controllable by the experimenter, so there is an appearance of randomness in the pattern of detection. The wave function interferes with itself and guides the particles in such a way that the particles avoid the regions in which the interference is destructive and are attracted to the regions in which the interference is constructive, giving rise to the interference pattern on the detector screen. To explain the behavior when the particle is detected to go through one slit, one needs to appreciate the role of the conditional wavefunction and how it gives rise to the collapse of the wavefunction; this is explained below. The basic idea is that the environment registering the detection effectively separates the two wave packets in configuration space.
The Bohmian trajectories for an electron going through the two-slit experiment.
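Trajectories like those in the figure can be sketched numerically. The following toy model (stdlib Python, units ħ = m = 1, all parameters hypothetical) treats the transverse coordinate as an equal superposition of two freely spreading Gaussian packets, one per slit, and integrates the guiding equation dx/dt = (ħ/m) Im(∂ψ/∂x / ψ) with an Euler step. It illustrates a well-known feature of Bohmian two-slit trajectories: they never cross the symmetry axis between the slits.

```python
# Toy 1-D Bohmian two-slit model, units hbar = m = 1. The transverse
# wavefunction is an equal superposition of two spreading Gaussian packets
# centered at the slit positions +/- a; particles follow the guiding equation.
import cmath

sigma = 1.0   # initial packet width per slit (hypothetical)
a = 5.0       # half the slit separation (hypothetical)

def psi(x, t):
    st = sigma**2 * (1 + 0.5j * t / sigma**2)   # complex spreading width
    def packet(x0):
        return st**-0.5 * cmath.exp(-(x - x0)**2 / (4 * st))
    return packet(-a) + packet(+a)

def velocity(x, t, dx=1e-6):
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return (dpsi / psi(x, t)).imag              # (hbar/m) Im(psi'/psi), hbar = m = 1

def trajectory(x0, t_max=10.0, dt=1e-3):
    x, t = x0, 0.0
    while t < t_max:
        x += velocity(x, t) * dt                # explicit Euler step
        t += dt
    return x

# Particles starting behind the upper slit stay in the upper half-plane:
# by symmetry the velocity vanishes on x = 0, so trajectories cannot cross it.
finals = [trajectory(x0) for x0 in (4.0, 5.0, 6.0)]
print(finals)
assert all(xf > 0 for xf in finals)
```

The no-crossing property is exactly the reason the slit of passage is fixed by the initial position, as described in the text.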
The Theory
The ontology
The ontology of de Broglie–Bohm theory consists of a configuration q(t) of the universe and a pilot wave ψ(q, t). The configuration space Q can be chosen differently, as in classical mechanics and standard quantum mechanics. Thus, the ontology of pilot wave theory contains the trajectory q(t) we know from classical mechanics, as well as the wave function ψ of quantum theory. So, at every moment of time there exists not only a wave function, but also a well-defined configuration of the whole universe. The correspondence to our experiences is made by the identification of the configuration of our brain with some part of the configuration of the whole universe, as in classical mechanics.

While the ontology of classical mechanics is part of the ontology of de Broglie–Bohm theory, the dynamics is very different. In classical mechanics, the accelerations of the particles are given by forces. In de Broglie–Bohm theory, the velocities of the particles are given by the wavefunction. In what follows below, we will give the setup for one particle moving in R³, followed by the setup for N particles moving in 3 dimensions. In the first instance, configuration space and real space are the same, while in the second, real space is still R³ but configuration space becomes R^(3N). While the particle positions themselves are in real space, the velocity field and wavefunction are on configuration space, which is how particles are entangled with each other in this theory. Extensions to this theory include spin and more complicated configuration spaces. We use variations of Q for particle positions, while ψ represents the complex-valued wavefunction on configuration space.
Guiding equation
For a single particle moving in R³, the particle's velocity is given by

dQ/dt = (ħ/m) Im(∇ψ / ψ)(Q, t).

For many particles, we label them as Q_k for the kth particle, and their velocities are given by

dQ_k/dt = (ħ/m_k) Im(∇_k ψ / ψ)(Q_1, ..., Q_N, t).

The key fact to notice is that this velocity field depends on the actual positions of all of the particles in the universe. As explained below, in most experimental situations, the influence of all of those particles can be encapsulated into an effective wavefunction for a subsystem of the universe.
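As a concrete check of the guiding equation, the sketch below (stdlib Python, units ħ = m = 1) integrates the velocity field for a single freely spreading Gaussian packet and compares the result with the known analytic Bohmian trajectory x(t) = x(0) √(1 + (t/2σ²)²); the initial condition and step sizes are arbitrary choices.

```python
# Integrate the guiding equation dx/dt = Im(psi'/psi) for a free Gaussian
# packet psi(x,0) ~ exp(-x^2 / (4 sigma^2)), units hbar = m = 1.
import cmath

sigma = 1.0

def psi(x, t):
    st = sigma**2 * (1 + 0.5j * t / sigma**2)   # complex spreading width
    return st**-0.5 * cmath.exp(-x**2 / (4 * st))

def velocity(x, t, dx=1e-6):
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return (dpsi / psi(x, t)).imag

x, t, dt = 1.0, 0.0, 1e-4
while t < 4.0:
    x += velocity(x, t) * dt                     # explicit Euler step
    t += dt

# Analytic Bohmian trajectory: x(t) = x(0) * sqrt(1 + (t / (2 sigma^2))**2)
analytic = (1 + (4.0 / (2 * sigma**2))**2) ** 0.5
print(x, analytic)   # both close to sqrt(5) ~ 2.236
assert abs(x - analytic) < 0.01
```

The trajectory simply rides the spreading of the packet, a standard textbook example of Bohmian motion.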
Schrdinger's equation
The one-particle Schrödinger equation governs the time evolution of a complex-valued wavefunction on R³. The equation represents a quantized version of the total energy of a classical system evolving under a real-valued potential function V on R³:

iħ ∂ψ/∂t = −(ħ²/2m) ∇²ψ + Vψ.

For many particles, the equation is the same except that ψ and V are now on configuration space R^(3N):

iħ ∂ψ/∂t = −Σ_k (ħ²/2m_k) ∇_k²ψ + Vψ.
The conditional wave function of a subsystem

In the formulation of the de Broglie–Bohm theory, there is only a wave function for the entire universe (which always evolves by the Schrödinger equation). However, once the theory is formulated, it is convenient to introduce a notion of wave function also for subsystems of the universe. Write the wave function of the universe as ψ(t, q^I, q^II), where q^I denotes the configuration variables associated with some subsystem (I) of the universe and q^II denotes the remaining configuration variables. Denote by Q^I(t) and Q^II(t), respectively, the actual configuration of subsystem (I) and of the rest of the universe. For simplicity, we consider here only the spinless case. The conditional wave function of subsystem (I) is defined by

ψ^I(t, q^I) = ψ(t, q^I, Q^II(t)).

It follows immediately from the fact that Q(t) = (Q^I(t), Q^II(t)) satisfies the guiding equation that the configuration Q^I(t) satisfies a guiding equation identical to the one presented in the formulation of the theory, with the universal wave function ψ replaced with the conditional wave function ψ^I. Also, the fact that Q(t) is random with probability density given by the square modulus of ψ(t, ·) implies that the conditional probability density of Q^I(t) given Q^II(t) is given by the square modulus of the (normalized) conditional wave function (in the terminology of Dürr et al., this fact is called the fundamental conditional probability formula).

Unlike the universal wave function, the conditional wave function of a subsystem does not always evolve by the Schrödinger equation, but in many situations it does. For instance, if the universal wave function factors as

ψ(t, q^I, q^II) = ψ^I(t, q^I) ψ^II(t, q^II),

then the conditional wave function of subsystem (I) is (up to an irrelevant scalar factor) equal to ψ^I (this is what Standard Quantum Theory would regard as the wave function of subsystem (I)). If, in addition, the Hamiltonian does not contain an interaction term between subsystems (I) and (II), then ψ^I does satisfy a Schrödinger equation. More generally, assume that the universal wave function ψ can be written in the form

ψ(t, q^I, q^II) = ψ^I(t, q^I) ψ^II(t, q^II) + φ(t, q^I, q^II),

where φ solves Schrödinger's equation and φ(t, q^I, Q^II(t)) = 0 for all t and q^I. Then, again, the conditional wave function of subsystem (I) is (up to an irrelevant scalar factor) equal to ψ^I, and if, in addition, the Hamiltonian does not contain an interaction term between subsystems (I) and (II), ψ^I satisfies a Schrödinger equation. The fact that the conditional wave function of a subsystem does not always evolve by the Schrödinger equation is related to the fact that the usual collapse rule of Standard Quantum Theory emerges from the Bohmian formalism when one considers conditional wave functions of subsystems.
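The factorized case can be illustrated with a small numerical toy (plain Python, with arbitrary made-up amplitudes): when the universal wavefunction factorizes, the conditional wave function of subsystem (I) equals ψ^I up to one overall scalar, whatever the environment configuration happens to be.

```python
# Toy illustration of the conditional wave function on a discrete grid.
# psi(qI, qII) = psiI(qI) * psiII(qII) is a factorized "universal" wavefunction;
# the conditional wave function psi(qI, QII) is proportional to psiI.
nI, nII = 4, 6
psiI = [complex(k + 1, 2 * k - 3) for k in range(nI)]    # arbitrary nonzero amplitudes
psiII = [complex(3 - k, k + 0.5) for k in range(nII)]    # arbitrary environment amplitudes
psi = [[a * b for b in psiII] for a in psiI]             # factorized universal wavefunction

QII = 2                                   # hypothetical actual environment configuration
conditional = [row[QII] for row in psi]   # psi(qI, QII) as a function of qI

# Proportionality: conditional / psiI is one constant scalar, namely psiII[QII].
ratios = [c / a for c, a in zip(conditional, psiI)]
assert all(abs(r - psiII[QII]) < 1e-12 for r in ratios)
print("conditional wavefunction is psiI times the scalar", psiII[QII])
```

Since a guiding equation is insensitive to an overall scalar factor, subsystem (I) is guided exactly as if ψ^I were its own wavefunction, which is the point of the construction.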
Extensions
Spin
To incorporate spin, the wavefunction becomes complex-vector-valued. The value space is called spin space; for a spin-1/2 particle, spin space can be taken to be C². The guiding equation is modified by taking inner products in spin space to reduce the complex vectors to complex numbers. The Schrödinger equation is modified by adding a Pauli spin term coupling each particle's spin to the electromagnetic field: it involves the magnetic field B and the vector potential A in R³ (each evaluated at the position of the kth particle), the charge e_k of the kth particle, and the magnetic moment and spin operator acting on the kth particle's spin space.

For an example of a spin space, a system consisting of two spin-1/2 particles and one spin-1 particle has wavefunctions of the form ψ: R⁹ → C² ⊗ C² ⊗ C³. That is, its spin space is a 12-dimensional space.
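The dimension count in this example is easy to verify mechanically; the snippet below (plain Python) simply enumerates a basis of the product spin space C² ⊗ C² ⊗ C³.

```python
# The tensor-product spin space for two spin-1/2 particles (C^2 each) and
# one spin-1 particle (C^3) has 2 * 2 * 3 = 12 complex dimensions.
from itertools import product

dim_half, dim_one = 2, 3
# Basis labels of C^2 (x) C^2 (x) C^3 are triples of component indices:
basis = list(product(range(dim_half), range(dim_half), range(dim_one)))
print(len(basis))   # 12
assert len(basis) == 12
```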
Curved space
To extend de Broglie–Bohm theory to curved space (Riemannian manifolds in mathematical parlance), one simply notes that all of the elements of these equations make sense, such as gradients and Laplacians. Thus, we use equations that have the same form as above. Topological and boundary conditions may apply in supplementing the evolution of Schrödinger's equation. For a de Broglie–Bohm theory on curved space with spin, the spin space becomes a vector bundle over configuration space, and the potential in Schrödinger's equation becomes a local self-adjoint operator acting on that space.[3]
Nikolic[6] introduces a purely deterministic de Broglie–Bohm theory of particle creation and destruction, according to which particle trajectories are continuous, but particle detectors behave as if particles have been created or destroyed even when a true creation or destruction of particles does not take place.
Exploiting nonlocality
Valentini[7] has extended the de Broglie–Bohm theory to include signal nonlocality that would allow entanglement to be used as a stand-alone communication channel without a secondary classical "key" signal to "unlock" the message encoded in the entanglement. This violates orthodox quantum theory, but it has the virtue that it makes the parallel universes of the chaotic inflation theory observable in principle. Unlike de Broglie–Bohm theory, Valentini's theory has the wavefunction evolution also depend on the ontological variables. This introduces an instability, a feedback loop that pushes the hidden variables out of "sub-quantal heat death". The resulting theory becomes nonlinear and non-unitary.
Relativity
Pilot wave theory is explicitly nonlocal. As a consequence, most relativistic variants of pilot wave theory need a preferred foliation of space-time. While this is in conflict with the standard interpretation of relativity, the preferred foliation, if unobservable, does not lead to any empirical conflicts with relativity. The relation between nonlocality and preferred foliation can be better understood as follows. In de Broglie–Bohm theory, nonlocality manifests as the fact that the velocity and acceleration of one particle depend on the instantaneous positions of all other particles. On the other hand, in the theory of relativity the concept of instantaneousness does not have an invariant meaning. Thus, to define particle trajectories, one needs an additional rule that defines which space-time points should be considered instantaneous. The simplest way to achieve this is to introduce a preferred foliation of space-time by hand, such that each hypersurface of the foliation defines a hypersurface of equal time. However, this way (which explicitly breaks the relativistic covariance) is not the only way. It is also possible that a rule which defines instantaneousness is contingent, by emerging dynamically from relativistic covariant laws combined with particular initial conditions. In this way, the need for a preferred foliation can be avoided and relativistic covariance can be saved. There has been work in developing relativistic versions of de Broglie–Bohm theory. See Bohm and Hiley: The Undivided Universe, and [8], [9], and references therein. Another approach is given in the work of Dürr et al.,[10] in which they use Bohm–Dirac models and a Lorentz-invariant foliation of space-time. In [11], [12] and [13], Nikolic develops a generalized relativistic-invariant probabilistic interpretation of quantum theory, in which |ψ|² is no longer a probability density in space but a probability density in space-time.
He uses this generalized probabilistic interpretation to formulate a relativistic-covariant version of de Broglie–Bohm theory without introducing a preferred foliation of space-time.
Results
Below are some highlights of the results that arise out of an analysis of de Broglie–Bohm theory. Experimental results agree with all of the standard predictions of quantum mechanics in so far as the latter has predictions. However, while standard quantum mechanics is limited to discussing experiments with human observers, de Broglie–Bohm theory is a theory which governs the dynamics of a system without the intervention of outside observers (p. 117 in Bell[14]). The basis for agreement with standard quantum mechanics is that the particles are distributed according to |ψ|². This is a statement of observer ignorance, but it can be proven[1] that for a universe governed by this theory, this will typically be the case. There is apparent collapse of the wave function governing subsystems of the universe, but there is no collapse of the universal wavefunction.
Collapse of the universal wavefunction never occurs in de Broglie–Bohm theory. Its entire evolution is governed by Schrödinger's equation, and the particles' evolutions are governed by the guiding equation. Collapse only occurs in a phenomenological way for systems that seem to follow their own Schrödinger's equation. As this is an effective description of the system, it is a matter of choice as to what to define the experimental system to include, and this will affect when "collapse" occurs.

Operators as observables

In the standard quantum formalism, measuring observables is generally thought of as measuring operators on the Hilbert space. For example, measuring position is considered to be a measurement of the position operator. This relationship between physical measurements and Hilbert space operators is, for standard quantum mechanics, an additional axiom of the theory. The de Broglie–Bohm theory, by contrast, requires no such measurement axioms (and measurement as such is not a dynamically distinct or special sub-category of physical processes in the theory). In particular, the usual operators-as-observables formalism is, for de Broglie–Bohm theory, a theorem.[17] A major point of the analysis is that many of the measurements of the observables do not correspond to properties of the particles; they are (as in the case of spin discussed above) measurements of the wavefunction. In the history of de Broglie–Bohm theory, the proponents have often had to deal with claims that this theory is impossible. Such arguments are generally based on inappropriate analysis of operators as observables. If one believes that spin measurements are indeed measuring the spin of a particle that existed prior to the measurement, then one does reach contradictions. De Broglie–Bohm theory deals with this by noting that spin is not a feature of the particle, but rather that of the wavefunction.
As such, it only has a definite outcome once the experimental apparatus is chosen. Once that is taken into account, the impossibility theorems become irrelevant. There have also been claims that experiments reject the Bohm trajectories[18] in favor of the standard QM lines. But as shown in [19] and [20], such experiments only disprove a misinterpretation of the de Broglie–Bohm theory, not the theory itself. There are also objections to this theory based on what it says about particular situations, usually involving eigenstates of an operator. For example, the ground state of hydrogen is a real wavefunction. According to the guiding equation, this means that the electron is at rest when in this state. Nevertheless, it is distributed according to |ψ|², and no contradiction to experimental results is possible to detect. Operators as observables leads many to believe that many operators are equivalent. De Broglie–Bohm theory, from this perspective, chooses the position observable as a favored observable rather than, say, the momentum observable. Again, the link to the position observable is a consequence of the dynamics. The motivation for de Broglie–Bohm theory is to describe a system of particles. This implies that the goal of the theory is to describe the positions of those particles at all times. Other observables do not have this compelling ontological status. Having definite positions explains having definite results such as flashes on a detector screen. Other observables would not lead to that conclusion, but there need not be any problem in defining a mathematical theory for other observables; see Hyman et al.[21] for an exploration of the fact that a probability density and probability current can be defined for any set of commuting operators.

Hidden variables

De Broglie–Bohm theory is often referred to as a "hidden variable" theory.
The alleged applicability of the term "hidden variable" comes from the fact that the particles postulated by Bohmian mechanics do not influence the evolution of the wavefunction. The argument is that, because adding particles does not have an effect on the wavefunction's evolution, such particles must not have effects at all and are, thus, unobservable, since they cannot have an effect on observers. There is no analogue of Newton's third law in this theory. The idea is supposed to be that, since particles cannot influence the wavefunction, and it is the wavefunction that determines measurement predictions through the Born rule, the particles are superfluous and unobservable.
Such an argument, however, arises from a fundamental misunderstanding of the relation between the ontology posited by the de Broglie–Bohm theory and the world of ordinary observation. In particular, the particles postulated by the de Broglie–Bohm theory are anything but "hidden" variables: they are what the cats and trees and tables and planets and pointers we see are made of! It is the wavefunction itself which is "hidden" in the sense of being invisible and not directly observable. Thus, for example, when the wavefunction of some measuring apparatus is such that its pointer is superposed between pointing to the left and pointing to the right, what accounts for the fact that scientists, when they look at the apparatus, see the pointer pointing to the left (say) is the fact that the de Broglie–Bohmian particles that make up the pointer are actually pointed towards the left. While the exact details of how humans process such information and what it is based on are beyond the scope of the de Broglie–Bohm theory, the basic idea of any particle ontology is that if the particles in the theory appear where they seem to be from human observations, then it is considered a successful prediction.
Alain Aspect performed a series of Bell test experiments that test Bell's inequality using an EPR-type setup. Aspect's results show experimentally that Bell's inequality is in fact violated, meaning that the relevant quantum mechanical predictions are correct. In these Bell test experiments, entangled pairs of particles are created; the particles are separated, traveling to remote measuring apparatus. The orientation of the measuring apparatus can be changed while the particles are in flight, demonstrating the apparent non-locality of the effect. The de Broglie–Bohm theory makes the same (empirically correct) predictions for the Bell test experiments as ordinary quantum mechanics. It is able to do this because it is manifestly nonlocal. It is often criticized or rejected based on this; Bell's attitude was: "It is a merit of the de Broglie–Bohm version to bring this [nonlocality] out so explicitly that it cannot be ignored."[24] The de Broglie–Bohm theory describes the physics in the Bell test experiments as follows: to understand the evolution of the particles, we need to set up a wave equation for both particles; the orientation of the apparatus affects the wavefunction. The particles in the experiment follow the guidance of the wavefunction. It is the wavefunction that carries the faster-than-light effect of changing the orientation of the apparatus. An analysis of exactly what kind of nonlocality is present and how it is compatible with relativity can be found in Maudlin.[25] Note that in Bell's work, and in more detail in Maudlin's work, it is shown that the nonlocality does not allow for signaling at speeds faster than light.
Classical limit
Bohm's formulation of de Broglie–Bohm theory in terms of a classical-looking version has the merit that the emergence of classical behavior seems to follow immediately for any situation in which the quantum potential is negligible, as noted by Bohm in 1952. Modern methods of decoherence are relevant to an analysis of this limit. See Allori et al.[26] for steps towards a rigorous analysis.
away from the node and often crossing the path of other sample points (which violates single-valuedness). Various schemes have been developed to overcome this; however, no general solution has yet emerged. These methods, like Bohm's Hamilton–Jacobi formulation, do not apply to situations in which the full dynamics of spin needs to be taken into account.
Our main criticism of this view is on the grounds of simplicity - if one desires to hold the view that ψ is a real field, then the associated particle is superfluous since, as we have endeavored to illustrate, the pure wave theory is itself satisfactory.
In the Everettian view, then, the Bohm particles are superfluous entities, similar to, and equally as unnecessary as, for example, the luminiferous ether, which was found to be unnecessary in special relativity. This argument of Everett's is sometimes called the "redundancy argument", since the superfluous particles are redundant in the sense of Occam's razor.[32] By omitting the hidden variables, however, Everett had to invoke causally unrelated and therefore experimentally unverifiable parallel universes. Many authors have expressed critical views of the de Broglie–Bohm theory by comparing it to Everett's many-worlds approach. Many (but not all) proponents of the de Broglie–Bohm theory (such as Bohm and Bell) interpret the universal wave function as physically real. According to some supporters of Everett's theory, if the (never collapsing) wave function is taken to be physically real, then it is natural to interpret the theory as having the same many worlds as Everett's theory. In the Everettian view the role of the Bohm particle is to act as a "pointer", tagging, or selecting, just one branch of the universal wavefunction (the assumption that this branch indicates which wave packet determines the observed result of a given experiment is called the "result assumption"[30]); the other branches are designated "empty" and implicitly assumed by Bohm to be devoid of conscious observers.[30] H. Dieter Zeh comments on these "empty" branches:
It is usually overlooked that Bohm's theory contains the same "many worlds" of dynamically separate branches as the Everett interpretation (now regarded as empty wave components), since it is based on precisely the same . . . global wave function . . .[33]
[34]
The fact that such a "pointer" can be made in a self-consistent manner that not only reproduces all known experimental results but also provides a clean classical limit is, however, highly significant in itself. It proves that the existence of alternative universes is not a necessary conclusion of quantum physics.
Derivations
De Broglie–Bohm theory has been derived many times and in many ways. Below are five derivations, all of which are very different and lead to different ways of understanding and extending this theory. Schrödinger's equation can be derived by using Einstein's light quanta hypothesis, E = ħω, and de Broglie's hypothesis, p = ħk.
The guiding equation can be derived in a similar fashion. We assume a plane wave: ψ(x, t) = A e^(i(k·x − ωt)). Notice that ∇ψ/ψ = ik. Assuming p = ħk for the particle's actual momentum, we have the velocity

v = p/m = (ħ/m) Im(∇ψ/ψ).
Thus, we have the guiding equation. Notice that this derivation does not use Schrödinger's equation.

Preserving the density under the time evolution is another method of derivation. This is the method that Bell cites. It is this method which generalizes to many possible alternative theories. The starting point is the continuity equation for the density ρ = |ψ|²:

∂ρ/∂t + ∇·j = 0, with current j = (ħ/m) Im(ψ* ∇ψ).

This equation describes a probability flow along a current. We take the velocity field v = j/ρ associated with this current as the velocity field whose integral curves yield the motion of the particle.

A method applicable for particles without spin is to do a polar decomposition of the wavefunction and transform Schrödinger's equation into two coupled equations: the continuity equation from above and the Hamilton–Jacobi equation. This is the method used by Bohm in 1952. The decomposition and equations are as follows:

Decomposition: ψ = R e^(iS/ħ), where R = |ψ| and S is the phase.

Continuity equation: ∂(R²)/∂t + ∇·(R² ∇S/m) = 0.

Hamilton–Jacobi equation: ∂S/∂t + |∇S|²/(2m) + V − (ħ²/2m) ∇²R/R = 0.

The Hamilton–Jacobi equation is the equation derived from a Newtonian system with potential V − (ħ²/2m) ∇²R/R and velocity field ∇S/m. The potential V is the classical potential that appears in Schrödinger's equation; the remaining term, involving the Laplacian of R, is the quantum potential. Note that R² corresponds to the probability density ρ.
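Bohm's decomposition can be checked numerically. The sketch below (stdlib Python, units ħ = m = 1, potential V = 0) takes an exact free Gaussian solution of Schrödinger's equation, extracts R = |ψ| and S = phase(ψ), and verifies by finite differences that the continuity and quantum Hamilton–Jacobi equations hold at an arbitrarily chosen sample point.

```python
# Finite-difference check of Bohm's polar decomposition for a free Gaussian
# packet, units hbar = m = 1 and V = 0.
import cmath

def psi(x, t):
    st = 1 + 0.5j * t                      # complex spreading width (sigma = 1)
    return st**-0.5 * cmath.exp(-x**2 / (4 * st))

R = lambda x, t: abs(psi(x, t))
S = lambda x, t: cmath.phase(psi(x, t))    # hbar = 1, so S is just the phase

h = 1e-3
x0, t0 = 0.3, 0.7                          # arbitrary sample point

def dx(f, x, t):  return (f(x + h, t) - f(x - h, t)) / (2 * h)
def dt(f, x, t):  return (f(x, t + h) - f(x, t - h)) / (2 * h)
def dxx(f, x, t): return (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h**2

# Quantum Hamilton-Jacobi residual (V = 0): S_t + S_x^2/2 - (1/2) R_xx / R
hj = dt(S, x0, t0) + dx(S, x0, t0)**2 / 2 - 0.5 * dxx(R, x0, t0) / R(x0, t0)

# Continuity residual: (R^2)_t + (R^2 S_x)_x
rho = lambda x, t: R(x, t)**2
flux = lambda x, t: R(x, t)**2 * dx(S, x, t)
cont = dt(rho, x0, t0) + dx(flux, x0, t0)

print(hj, cont)                            # both residuals are ~0 (finite-difference error)
assert abs(hj) < 1e-4 and abs(cont) < 1e-4
```

Both residuals vanish up to discretization error, as the decomposition requires.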
This leads to viewing the quantum theory as particles moving under the classical force modified by a quantum force. However, unlike standard Newtonian mechanics, the initial velocity field is already specified by ∇S/m, which is a symptom of this being a first-order theory, not a second-order theory.

A fourth derivation was given by Dürr et al.[1] In their derivation, they derive the velocity field by demanding the appropriate transformation properties given by the various symmetries that Schrödinger's equation satisfies, once the wavefunction is suitably transformed. The guiding equation is what emerges from that analysis.

A fifth derivation, given by Dürr et al.,[4] is appropriate for generalization to quantum field theory and the Dirac equation. The idea is that a velocity field can also be understood as a first-order differential operator acting on functions. Thus, if we know how it acts on functions, we know what it is. Then, given the Hamiltonian operator H, the equation to satisfy for every function f (with associated multiplication operator f̂) is

(v f)(q) = Re[(ψ, (i/ħ)[H, f̂] ψ) / (ψ, ψ)](q),

where (·,·) denotes the local inner product on the spin components of the wavefunction.
This formulation allows for stochastic theories such as the creation and annihilation of particles.
History
De BroglieBohm theory has a history of different formulations and names. In this section, each stage is given a name and a main reference.
Pilot-wave theory
Louis de Broglie presented his pilot wave theory at the 1927 Solvay Conference,[35] after close collaboration with Schrödinger, who developed his wave equation for de Broglie's theory. At the end of the presentation, Wolfgang Pauli pointed out that it was not compatible with a semi-classical technique Fermi had previously adopted in the case of inelastic scattering. Contrary to a popular legend, de Broglie actually gave the correct rebuttal that the particular technique could not be generalized for Pauli's purpose, although the audience might have been lost in the technical details, and de Broglie's mild manner left the impression that Pauli's objection was valid. He was eventually persuaded to abandon this theory nonetheless in 1932, due to both the Copenhagen school's more successful public-relations efforts and his own inability to understand quantum decoherence. Also in 1932, John von Neumann published a paper[36] claiming to prove that all hidden-variable theories are impossible. This sealed the fate of de Broglie's theory for the next two decades. In truth, von Neumann's proof is based on invalid assumptions, such as the assumption that quantum physics can be made local, and it does not really disprove the pilot-wave theory. De Broglie's theory already applies to multiple spin-less particles, but lacks an adequate theory of measurement, as no one understood quantum decoherence at the time. An analysis of de Broglie's presentation is given in Bacciagaluppi et al.[37][38] Around this time Erwin Madelung[39] also developed a hydrodynamic version of Schrödinger's equation, which is incorrectly considered as a basis for the density-current derivation of de Broglie–Bohm theory. The Madelung equations, being quantum Euler equations (fluid dynamics), differ philosophically from the de Broglie–Bohm theory[40] and are the basis of the hydrodynamic interpretation of quantum mechanics.
De Broglie–Bohm theory
After publishing a popular textbook on quantum mechanics which adhered entirely to the Copenhagen orthodoxy, Bohm was persuaded by Einstein to take a critical look at von Neumann's theorem. The result was 'A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables" I and II' [Bohm 1952]. It extended the original pilot wave theory to incorporate a consistent theory of measurement, and to address a criticism of Pauli that de Broglie did not properly answer; it is taken to be deterministic (though Bohm hinted in the original papers that there should be disturbances to this, in the way Brownian motion disturbs Newtonian mechanics). This stage is known as the de Broglie–Bohm theory in Bell's work [Bell 1987] and is the basis for 'The Quantum Theory of Motion' [Holland 1993]. This stage applies to multiple particles, and is deterministic. The de Broglie–Bohm theory is an example of a hidden-variables theory. Bohm originally hoped that hidden variables could provide a local, causal, objective description that would resolve or eliminate many of the paradoxes of quantum mechanics, such as Schrödinger's cat, the measurement problem and the collapse of the wavefunction. However, Bell's theorem complicates this hope, as it demonstrates that there can be no local hidden-variable theory that is compatible with the predictions of quantum mechanics. The Bohmian interpretation is causal but not local. Bohm's paper was largely ignored by other physicists. Even Albert Einstein did not consider it a satisfactory answer to the quantum non-locality question. The rest of the contemporary objections, however, were ad hominem, focusing on Bohm's sympathy with liberals and supposed communists, as exemplified by his refusal to give testimony to the House Un-American Activities Committee.
Eventually the cause was taken up by John Bell. In Speakable and Unspeakable in Quantum Mechanics [Bell 1987], several of the papers refer to hidden-variables theories (which include Bohm's). Bell showed that von Neumann's objection amounted to showing that hidden-variables theories are nonlocal, and that nonlocality is a feature of all quantum mechanical systems.
Bohmian mechanics
This term is used to describe the same theory, but with an emphasis on the notion of current flow. In particular, it is often used to include most of the further extensions past the spin-less version of Bohm. While de Broglie–Bohm theory has Lagrangians and Hamilton–Jacobi equations as a primary focus and backdrop, with the icon of the quantum potential, Bohmian mechanics considers the continuity equation as primary and has the guiding equation as its icon. They are mathematically equivalent in so far as the Hamilton–Jacobi formulation applies, i.e., to spin-less particles. The papers of Dürr et al. popularized the term. All of non-relativistic quantum mechanics can be fully accounted for in this theory.
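The guiding equation at the heart of the theory, v = (ħ/m) Im(∇ψ/ψ), is concrete enough to integrate numerically. The sketch below is my own toy illustration, not taken from the works cited here: it follows a Bohmian trajectory for a freely spreading one-dimensional Gaussian packet in natural units (ħ = m = 1), for which the exact trajectories are known to scale with the packet width as x(t) = x(0)·√(1 + t²).

```python
import cmath
import math

HBAR = M = 1.0  # natural units; this toy example and its units are my own choice

def psi(x, t):
    """Unnormalized freely spreading Gaussian packet (hbar = m = 1)."""
    z = 1 + 1j * t
    return cmath.exp(-x * x / (2 * z)) / cmath.sqrt(z)

def guidance_velocity(x, t, h=1e-6):
    """Guiding equation: v = (hbar/m) * Im(psi'(x)/psi(x)); derivative by central difference."""
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2 * h)
    return (HBAR / M) * (dpsi / psi(x, t)).imag

def trajectory(x0, t_end, dt=1e-4):
    """Euler-integrate dx/dt = v(x, t) to obtain a Bohmian trajectory."""
    x, t = x0, 0.0
    while t < t_end:
        x += guidance_velocity(x, t) * dt
        t += dt
    return x

# For this packet the exact trajectory is x(t) = x(0) * sqrt(1 + t^2).
x_num = trajectory(1.0, 3.0)
print(x_num, math.sqrt(1 + 3.0 ** 2))
```

The forward-Euler trajectory tracks the analytic scaling to a few parts in ten thousand here; any standard ODE integrator would serve as well.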
See also
David Bohm Interpretation of quantum mechanics Madelung equations Local hidden variable theory Quantum mechanics Pilot wave
References
Albert, David Z. (May 1994). "Bohm's Alternative to Quantum Mechanics". Scientific American 270: 58–67.
Barbosa, G. D.; N. Pinto-Neto (2004). "A Bohmian Interpretation for Noncommutative Scalar Field Theory and Quantum Mechanics". Physical Review D 69: 065014. doi:10.1103/PhysRevD.69.065014. arXiv:hep-th/0304105.
Bohm, David (1952). "A Suggested Interpretation of the Quantum Theory in Terms of 'Hidden Variables' I". Physical Review 85: 166–179. doi:10.1103/PhysRev.85.166.
Bohm, David (1952). "A Suggested Interpretation of the Quantum Theory in Terms of 'Hidden Variables', II". Physical Review 85: 180–193. doi:10.1103/PhysRev.85.180.
Bohm, David (1990). "A new theory of the relationship of mind and matter" [41]. Philosophical Psychology 3 (2): 271–286. doi:10.1080/09515089008573004.
Bohm, David; B.J. Hiley (1993). The Undivided Universe: An ontological interpretation of quantum theory. London: Routledge. ISBN 0-415-12185-X.
Dürr, Detlef; Sheldon Goldstein, Roderich Tumulka and Nino Zanghì (December 2004). "Bohmian Mechanics" [42] (PDF). Physical Review Letters 93 (9): 090402. ISSN 0031-9007. PMID 15447078.
Goldstein, Sheldon (2001). "Bohmian Mechanics" [43]. Stanford Encyclopedia of Philosophy.
Hall, Michael J.W. (2004). "Incompleteness of trajectory-based interpretations of quantum mechanics". Journal of Physics A: Mathematical and General 37: 9549. doi:10.1088/0305-4470/37/40/015. arXiv:quant-ph/0406054. (Demonstrates incompleteness of the Bohm interpretation in the face of fractal, differentiable-nowhere wavefunctions.)
Holland, Peter R. (1993). The Quantum Theory of Motion: An Account of the de Broglie–Bohm Causal Interpretation of Quantum Mechanics. Cambridge: Cambridge University Press. ISBN 0-521-48543-6.
Nikolic, H. (2004). "Relativistic quantum mechanics and the Bohmian interpretation". Foundations of Physics Letters 18: 549. doi:10.1007/s10702-005-1128-1. arXiv:quant-ph/0406173.
Passon, Oliver (2004). "Why isn't every physicist a Bohmian?". arXiv:quant-ph/0412119.
Sanz, A. S.; F. Borondo (2003). "A Bohmian view on quantum decoherence". The European Physical Journal D 44: 319. doi:10.1140/epjd/e2007-00191-8. arXiv:quant-ph/0310096.
Sanz, A.S. (2005). "A Bohmian approach to quantum fractals". J. Phys. A: Math. Gen. 38: 319. doi:10.1088/0305-4470/38/26/013. (Describes a Bohmian resolution to the dilemma posed by non-differentiable wavefunctions.)
Silverman, Mark P. (1993). And Yet It Moves: Strange Systems and Subtle Questions in Physics. Cambridge: Cambridge University Press. ISBN 0-521-44631-7.
Streater, Ray F. (2003). "Bohmian mechanics is a 'lost cause'" [44]. Retrieved 2006-06-25.
Valentini, Antony; Hans Westman (2004). "Dynamical Origin of Quantum Probabilities". arXiv:quant-ph/0403034.
Bohmian mechanics on arxiv.org [45]
External links
"Bohmian Mechanics" (Stanford Encyclopedia of Philosophy) [46] "Pilot waves, Bohmian metaphysics, and the foundations of quantum mechanics" [47], lecture course on Bohm interpretation by Mike Towler, Cambridge University.
References
[1] Dürr, D., Goldstein, S., and Zanghì, N., "Quantum Equilibrium and the Origin of Absolute Uncertainty" (http://arxiv.org/abs/quant-ph/0308039), Journal of Statistical Physics 67: 843–907, 1992.
[2] Quantum Equilibrium and the Origin of Absolute Uncertainty, D. Dürr, S. Goldstein and N. Zanghì, Journal of Statistical Physics 67, 843–907 (1992), http://arxiv.org/abs/quant-ph/0308039.
[3] Dürr, D., Goldstein, S., Taylor, J., Tumulka, R., and Zanghì, N., "Quantum Mechanics in Multiply-Connected Spaces" (http://arxiv.org/abs/quant-ph/0506173), J. Phys. A: Math. Theor. 40, 2997–3031 (2007)
[4] Dürr, D., Goldstein, S., Tumulka, R., and Zanghì, N., 2004, "Bohmian Mechanics and Quantum Field Theory" (http://arxiv.org/abs/quant-ph/0303156), Phys. Rev. Lett. 93: 090402:1–4.
[5] Dürr, D., Tumulka, R., and Zanghì, N., J. Phys. A: Math. Gen. 38, R1–R43 (2005), quant-ph/0407116
[6] Nikolic, H., 2010, "QFT as pilot-wave theory of particle creation and destruction" (http://arxiv.org/abs/0904.2287), Int. J. Mod. Phys. A 25, 1477 (2010)
[7] Valentini, A., 1991, "Signal-Locality, Uncertainty and the Subquantum H-Theorem. II," Physics Letters A 158: 1–8.
[8] http://xxx.lanl.gov/abs/quant-ph/0208185
[9] http://xxx.lanl.gov/abs/quant-ph/0302152
[10] Dürr, D., Goldstein, S., Münch-Berndl, K., and Zanghì, N., 1999, "Hypersurface Bohm-Dirac Models" (http://arxiv.org/abs/quant-ph/9801070), Phys. Rev. A 60: 2729–2736.
[11] http://xxx.lanl.gov/abs/0811.1905
[12] http://xxx.lanl.gov/abs/0904.2287
[13] http://arxiv.org/abs/1002.3226
[14] Bell, John S., Speakable and Unspeakable in Quantum Mechanics, Cambridge University Press, 1987.
[15] Albert, D. Z., 1992, Quantum Mechanics and Experience, Cambridge, MA: Harvard University Press
[16] Daumer, M., Dürr, D., Goldstein, S., and Zanghì, N., 1997, "Naive Realism About Operators" (http://arxiv.org/abs/quant-ph/9601013), Erkenntnis 45: 379–397.
[17] Dürr, D., Goldstein, S., and Zanghì, N., "Quantum Equilibrium and the Role of Operators as Observables in Quantum Theory" (http://arxiv.org/abs/quant-ph/0308038), Journal of Statistical Physics 116, 959–1055 (2004)
[18] http://arxiv.org/abs/quant-ph/0206196
[19] http://arxiv.org/abs/quant-ph/0108038
[20] http://arxiv.org/abs/quant-ph/0305131
[21] Hyman, Ross, et al., "Bohmian mechanics with discrete operators" (http://www.iop.org/EJ/abstract/0305-4470/37/44/L02), J. Phys. A: Math. Gen. 37, L547–L558, 2004
[22] J. S. Bell, "On the Einstein Podolsky Rosen Paradox" (http://www.drchinese.com/David/Bell_Compact.pdf), Physics 1, 195 (1964)
[23] Einstein, Podolsky, Rosen, "Can Quantum Mechanical Description of Physical Reality Be Considered Complete?", Phys. Rev. 47, 777 (1935).
[24] Bell, page 115
[25] Maudlin, T., 1994, Quantum Non-Locality and Relativity: Metaphysical Intimations of Modern Physics, Cambridge, MA: Blackwell.
[26] Allori, V., Dürr, D., Goldstein, S., and Zanghì, N., 2002, "Seven Steps Towards the Classical World" (http://arxiv.org/abs/quant-ph/0112005), Journal of Optics B 4: 482–488.
[27] http://research.cm.utexas.edu/rwyatt/movies/qtm/index.html
[28] http://pubs.acs.org/toc/jpcafh/111/41
[29] http://k2.chem.uh.edu
[30] Harvey R. Brown and David Wallace, "Solving the measurement problem: de Broglie-Bohm loses out to Everett", Foundations of Physics 35 (2005), pp. 517–540. (http://philsci-archive.pitt.edu/archive/00001659/01/Cushing.pdf) Abstract: "The quantum theory of de Broglie and Bohm solves the measurement problem, but the hypothetical corpuscles play no role in the argument. The solution finds a more natural home in the Everett interpretation."
[31] See section VI of Everett's thesis: The Theory of the Universal Wave Function, pp. 3–140 of Bryce Seligman DeWitt, R. Neill Graham, eds., The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X
[32] Craig Callender, "The Redundancy Argument Against Bohmian Mechanics" (http://philosophy.ucsd.edu/faculty/ccallender/The Redundancy Argument Against Bohmian Mechanics.doc)
[33] Daniel Dennett (2000). "With a little help from my friends". In D. Ross, A. Brook, and D. Thompson (eds.), Dennett's Philosophy: A Comprehensive Assessment. MIT Press/Bradford, ISBN 026268117X.
[34] David Deutsch, "Comment on Lockwood", British Journal for the Philosophy of Science 47, 222–228, 1996
[35] Solvay Conference, 1928, Electrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique tenu à Bruxelles du 24 au 29 Octobre 1927 sous les auspices de l'Institut International de Physique Solvay
[36] von Neumann, J., 1932, Mathematische Grundlagen der Quantenmechanik
[37] Bacciagaluppi, G., and Valentini, A., Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference
[38] See the brief summary by Towler, M., "Pilot wave theory, Bohmian metaphysics, and the foundations of quantum mechanics" (http://www.tcm.phy.cam.ac.uk/~mdt26/PWT/lectures/bohm7.pdf)
[39] Madelung, E., "Quantentheorie in hydrodynamischer Form", Zeit. f. Phys. 40 (1927), 322–326
[40] Tsekov, R. (2009), "Bohmian Mechanics versus Madelung Quantum Hydrodynamics" (http://arxiv.org/abs/0904.0723)
[41] http://members.aol.com/Mszlazak/BOHM.html
[42] http://www.math.rutgers.edu/~oldstein/papers/bohmech.pdf
[43] http://plato.stanford.edu/entries/qm-bohm/
[44] http://www.mth.kcl.ac.uk/~streater/lostcauses.html#XI
[45] http://xstructure.inr.ac.ru/x-bin/theme3.py?level=1&index1=-139823
[46] http://plato.stanford.edu/entries/qm-bohm
[47] http://www.tcm.phy.cam.ac.uk/~mdt26/pilot_waves.html
Correspondence principle
This article discusses quantum theory. For other uses, see Correspondence principle (disambiguation). In physics, the correspondence principle states that the behavior of systems described by the theory of quantum mechanics (or by the old quantum theory) reproduces classical physics in the limit of large quantum numbers. The principle was formulated by Niels Bohr in 1920,[1] though he had previously made use of it as early as 1913 in developing his model of the atom.[2] The term is also used more generally, to represent the idea that a new theory should reproduce the results of older well-established theories in those domains where the old theories work.
Quantum mechanics
The rules of quantum mechanics are highly successful in describing microscopic objects, atoms and elementary particles. But macroscopic systems like springs and capacitors are accurately described by classical theories like classical mechanics and classical electrodynamics. If quantum mechanics is to be applicable to macroscopic objects, there must be some limit in which quantum mechanics reduces to classical mechanics. Bohr's correspondence principle demands that classical physics and quantum physics give the same answer when the systems become large. The conditions under which quantum and classical physics agree are referred to as the correspondence limit, or the classical limit. Bohr provided a rough prescription for the correspondence limit: it occurs when the quantum numbers describing the system are large. A more elaborate analysis of quantum-classical correspondence (QCC) in wavepacket spreading leads to the distinction between robust "restricted QCC" and fragile "detailed QCC". See Stotland & Cohen (2006) and references therein. "Restricted QCC" refers to the first two moments of the probability distribution and is true even when the wave packets diffract, while "detailed QCC" requires smooth potentials which vary over scales much larger than the wavelength, which is what Bohr considered. The post-1925 new quantum theory came in two different formulations. In matrix mechanics, the correspondence principle was built in and was used to construct the theory. In the Schrödinger approach classical behavior is not clear because the waves spread out as they move. Once the Schrödinger equation was given a probabilistic interpretation, Ehrenfest showed that Newton's laws hold on average: the quantum statistical expectation values of the position and momentum obey Newton's laws. The correspondence principle is one of the tools available to physicists for selecting quantum theories corresponding to reality.
The principles of quantum mechanics are broad: they say that the states of a physical system form a complex vector space, but they do not say which operators correspond to physical quantities or measurements. The correspondence principle limits the choices to those that reproduce classical mechanics in the correspondence limit. Because quantum mechanics only reproduces classical mechanics in a statistical interpretation, and because the statistical interpretation only gives the probabilities of different classical outcomes, Bohr argued that classical physics does not emerge from quantum physics in the same way that classical mechanics emerges as an approximation of special relativity at small velocities. He argued that classical physics exists independently of quantum theory and cannot be derived from it. His position is that it is inappropriate to understand the experiences of observers using purely quantum mechanical notions such as wavefunctions, because the different states of experience of an observer are defined classically and do not have a quantum mechanical analog. The relative state interpretation of quantum mechanics is an attempt to understand the experience of observers using only quantum mechanical notions. Niels Bohr was an early opponent of such interpretations.
Examples
Bohr model
If an electron in an atom is moving on an orbit with period T, the electromagnetic radiation will classically repeat itself every orbital period. If the coupling to the electromagnetic field is weak, so that the orbit doesn't decay very much in one cycle, the radiation will be emitted in a pattern which repeats every period, so that the Fourier transform will have frequencies which are only multiples of 1/T. This is the classical radiation law: the frequencies emitted are integer multiples of 1/T. In quantum mechanics, this emission must be in quanta of light. The frequencies of the quanta emitted should be integer multiples of 1/T, so that classical mechanics is an approximate description at large quantum numbers. This means that the energy level corresponding to a classical orbit of period T must have nearby energy levels which differ in energy by h/T, and they should be equally spaced near that level: ΔE_n = h/T(E_n).
Bohr worried whether the energy spacing h/T should be best calculated with the period of the energy state E_n, or of the state E_(n+1), or some average. In hindsight, there is no need to quibble, since this theory is only the leading semiclassical approximation. Bohr considered circular orbits. These orbits must classically decay to smaller circles when they emit photons. The level spacing between circular orbits can be calculated with the correspondence formula. For a hydrogen atom, the classical orbits have a period T which is determined by Kepler's third law to scale as r^(3/2). The energy scales as 1/r, so the level spacing formula says that ΔE = h/T ∝ E^(3/2).
It is possible to determine the energy levels by recursively stepping down orbit by orbit, but there is a shortcut. The angular momentum L of the circular orbit scales as √r, so the energy in terms of the angular momentum is E ∝ 1/L². Assuming that quantized values of L are equally spaced, the spacing between neighboring energies is ΔE ∝ ΔL/L³ ∝ ΔL·E^(3/2),
which is what we want for equally spaced angular momenta. If you keep track of the constants, the spacing is h/2π, so the angular momentum should be an integer multiple of ħ = h/2π.
This is how Bohr arrived at his model. Since only the level spacing is determined by the correspondence principle, one could always add a small fixed offset to the quantum number; L could just as well have been (n + 1/2)ħ. Bohr used his physical intuition to decide which quantities were best to quantize. It is a testimony to his skill that he was able to get so much from what is only the leading-order approximation.
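The claim that the correspondence principle fixes the level spacing can be checked numerically. The sketch below is my own check, in atomic units of my choosing (ħ = m_e = e = 1, so h = 2π): it compares the quantum spacing E_(n+1) − E_n of the Bohr levels E_n = −1/(2n²) with the classical h/T for the n-th circular orbit, whose period in these units is T_n = 2πn³.

```python
import math

def energy(n):
    """Bohr hydrogen level in atomic units (hbar = m_e = e = 1): E_n = -1/(2 n^2)."""
    return -1.0 / (2.0 * n * n)

def period(n):
    """Classical period of the n-th circular orbit in the same units: T_n = 2 pi n^3."""
    return 2.0 * math.pi * n ** 3

PLANCK_H = 2.0 * math.pi  # h = 2 pi hbar, with hbar = 1 in atomic units

n = 1000
quantum_spacing = energy(n + 1) - energy(n)
classical_spacing = PLANCK_H / period(n)
print(quantum_spacing, classical_spacing)
```

At n = 1000 the two spacings agree to a fraction of a percent, and the residual discrepancy shrinks like 1/n, as expected for a leading-order semiclassical statement.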
One-dimensional potential
Bohr's correspondence condition can be solved for the level energies in a general one-dimensional potential. Define a quantity J(E) which is a function only of the energy, and has the property that dJ/dE = T, the period of the classical orbit at energy E.
This is the analog of the angular momentum in the case of the circular orbits. The orbits selected by the correspondence principle are the ones that obey J = nh for n an integer, since ΔJ = T ΔE = h then reproduces the required spacing ΔE = h/T.
This quantity J is canonically conjugate to a variable θ which, by the Hamilton equations of motion, changes in time at a rate equal to the derivative of the energy with respect to J. Since this is equal to the inverse period at all times, the variable θ increases steadily from 0 to 1 over one period. The angle variable θ comes back to itself after 1 unit of increase, so the geometry of phase space in J, θ coordinates is that of a half-cylinder, capped off at J = 0, which is the motionless orbit at the lowest value of the energy. These coordinates are just as canonical as x, p, but the orbits are now lines of constant J instead of nested ovoids in x–p space. The area enclosed by an orbit is invariant under canonical transformations, so it is the same in x–p space as in J–θ. But in the J–θ coordinates this area is the area of a cylinder of unit circumference between 0 and J, or just J. So J is equal to the area enclosed by the orbit in x–p coordinates too: J = ∮ p dx.
Each action variable is a separate integer, a separate quantum number. This condition reproduces the circular-orbit condition for two-dimensional motion: let r, θ be polar coordinates for a central potential. Then θ is already an angle variable, and the canonical momentum conjugate to it is L, the angular momentum. So the quantum condition for L reproduces Bohr's rule: ∮ L dθ = 2πL = nh.
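As a concrete check of the J = nh prescription, here is a minimal numerical evaluation of the phase-space area J(E) = ∮ p dx. The example and its units are my own choice: a harmonic oscillator with m = ω = 1, for which J(E) = 2πE/ω exactly, so J = nh reproduces E_n = nħω up to the fixed offset discussed above.

```python
import math

M = OMEGA = 1.0  # harmonic oscillator in units of my choosing

def potential(x):
    return 0.5 * M * OMEGA ** 2 * x ** 2

def action(E, n=200_000):
    """Phase-space area of the closed orbit: J(E) = 2 * integral of sqrt(2 m (E - V(x))) dx."""
    a = math.sqrt(2.0 * E / M) / OMEGA  # classical turning point, where V(a) = E
    dx = 2.0 * a / n
    total = 0.0
    for i in range(n):  # midpoint rule keeps the integrand real at the turning points
        x = -a + (i + 0.5) * dx
        total += math.sqrt(2.0 * M * (E - potential(x))) * dx
    return 2.0 * total

# Exact result for this potential: J(E) = 2 pi E / omega.
print(action(1.0), 2.0 * math.pi)
```

The same `action` routine works for any confining one-dimensional potential once the turning points are located, which is exactly why the J = nh condition is more general than Bohr's circular-orbit rule.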
This allowed Sommerfeld to generalize Bohr's theory of circular orbits to elliptical orbits, showing that the energy levels are the same. He also found some general properties of quantum angular momentum which seemed paradoxical at the time. One of these results was that the z-component of the angular momentum, the classical inclination of an orbit relative to the z-axis, could only take on discrete values, a result which seemed to contradict rotational invariance. This was called space quantization for a while, but this term fell out of favor with the new quantum mechanics since no quantization of space is involved. In modern quantum mechanics, the principle of superposition makes it clear that rotational invariance is not lost. It is possible to rotate objects with discrete orientations to produce superpositions of other discrete orientations, and this resolves the intuitive paradoxes of the Sommerfeld model.
Harmonic oscillator
Quantum mechanics tells us that the energy of a one-dimensional harmonic oscillator takes the discrete values E_n = ħω(n + 1/2), n = 0, 1, 2, ..., where ω is the angular frequency of the oscillator. However, in a classical harmonic oscillator such as a lead ball attached to the end of a spring, we do not perceive any discreteness. Instead, the energy of such a macroscopic system appears to vary over a continuum of values. We can verify that our idea of "macroscopic" systems fall within the correspondence limit. The energy of the classical harmonic oscillator with amplitude A is E = (1/2) m ω² A².
If we apply typical "human-scale" values m = 1 kg, ω = 1 rad/s, and A = 1 m, then n ≈ 4.74×10^33. This is a very large number, so the system is indeed in the correspondence limit. It is simple to see why we perceive a continuum of energy in this limit. With ω = 1 rad/s, the difference between each energy level is ħω ≈ 1.05×10^−34 J, well below what we can detect.
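The arithmetic above is easy to reproduce. A quick check in Python (the constant is the CODATA value of ħ; the script itself is just an illustration):

```python
HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s (CODATA value)

m, omega, amplitude = 1.0, 1.0, 1.0        # "human-scale" values: 1 kg, 1 rad/s, 1 m
E = 0.5 * m * omega ** 2 * amplitude ** 2  # classical oscillator energy: 0.5 J

n = E / (HBAR * omega) - 0.5               # invert E = hbar * omega * (n + 1/2)
level_spacing = HBAR * omega               # gap between adjacent levels, in joules

print(f"n = {n:.3e}, level spacing = {level_spacing:.3e} J")
```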
Relativistic kinetic energy
Here we show that the relativistic expression for kinetic energy becomes arbitrarily close to the classical expression for speeds that are much slower than that of light. The total energy of a body is E = mc²/√(1 − v²/c²), where the velocity v is the velocity of the body relative to the observer, m is the rest mass (the observed mass of the body at zero velocity relative to the observer), and c is the speed of light. When the velocity is zero, the energy expressed above is not zero and represents the rest energy:
E₀ = mc².
When the body is in motion relative to the observer, the total energy exceeds the rest energy by an amount that is, by definition, the kinetic energy: T = mc²/√(1 − v²/c²) − mc². Expanding the square root to first order in v²/c², for speeds much slower than that of light (v ≪ c) we get T ≈ (1/2)mv², which is the classical expression for kinetic energy.
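A short numerical comparison (my own illustration, not from the source) makes the limit concrete: at everyday speeds the relativistic and classical kinetic energies agree to high precision, while at a substantial fraction of the speed of light the classical formula visibly underestimates the energy.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy_rel(m, v):
    """Relativistic kinetic energy: total energy minus rest energy."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return m * C ** 2 * (gamma - 1.0)

def kinetic_energy_classical(m, v):
    return 0.5 * m * v ** 2

m, v = 1.0, 3000.0  # 1 kg at 3 km/s: slow enough that the two formulas nearly coincide
rel = kinetic_energy_rel(m, v)
cls = kinetic_energy_classical(m, v)
print(rel, cls, abs(rel - cls) / cls)

# At half the speed of light the classical formula is a clear underestimate.
print(kinetic_energy_rel(m, 0.5 * C) / kinetic_energy_classical(m, 0.5 * C))
```

(At speeds very close to everyday walking pace the subtraction γ − 1 suffers floating-point cancellation, which is why the example uses 3 km/s rather than 30 m/s; the physics of the limit is unaffected.)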
See also
Quantum decoherence
References
[1] Bohr, N. (1920), "Über die Serienspektra der Elemente", Zeitschrift für Physik 2 (5): 423–478 (English translation in (Bohr 1976, pp. 241–282))
[2] Jammer, Max (1989), The Conceptual Development of Quantum Mechanics, Los Angeles, CA: Tomash Publishers, American Institute of Physics, ISBN 0883186179, Section 3.2
Bohr, Niels (1976), Rosenfeld, L.; Nielsen, J. Rud, eds., Niels Bohr, Collected Works, Volume 3, The Correspondence Principle (1918–1923), Amsterdam: North-Holland, ISBN 0444107843
Sells, Robert L.; Weidner, Richard T. (1980), Elementary Modern Physics, Boston: Allyn and Bacon, ISBN 978-0-205-06559-2
Stotland, A.; Cohen, D. (2006), "Diffractive energy spreading and its semiclassical limit", Journal of Physics A 39 (10703): 10703, doi:10.1088/0305-4470/39/34/008, ISSN 0305-4470
Fractal cosmology
In physical cosmology, fractal cosmology is a set of minority cosmological theories which state that the distribution of matter in the Universe, or the structure of the universe itself, is fractal. More generally, it relates to the usage or appearance of fractals in the study of the universe and matter. A central issue in this field is the fractal dimension of the Universe, or of the matter distribution within it, when measured at very large or very small scales.
[Image: a "galaxy of galaxies" from the Mandelbrot set]
The use of fractals to answer questions in cosmology has been employed by a growing number of serious scholars close to the mainstream, but the metaphor has also been adopted by others outside the mainstream of science, so some varieties of fractal cosmology are solidly in the realm of scientific theories and observations, while others are considered fringe science, or perhaps metaphysical cosmology. Thus, these various formulations enjoy a range of acceptance and/or perceived legitimacy.
However, an analysis of luminous red galaxies in the Sloan survey calculated the fractal dimension of the galaxy distribution (on scales from 70 to 100 Mpc/h) at 3, consistent with homogeneity; they also confirm that the fractal dimension is 2 "out to roughly 20 Mpc/h".[5]
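Fractal-dimension estimates of this kind come from measuring how the number of occupied cells scales with cell size. The toy script below is my own illustration on a mathematical Cantor set, not on galaxy data: it estimates a box-counting dimension by regressing log N(ε) against log(1/ε). The same procedure, applied to galaxy positions in three dimensions, yields dimensions like those quoted above.

```python
import math
from itertools import product

def cantor_points(depth):
    """Left endpoints of the depth-th stage of the middle-thirds Cantor set, in [0, 1)."""
    return [sum(d * 3.0 ** -(i + 1) for i, d in enumerate(digits))
            for digits in product((0, 2), repeat=depth)]

def box_count(points, eps):
    """Number of boxes of side eps needed to cover the point set (tiny offset fights rounding)."""
    return len({math.floor(p / eps + 1e-9) for p in points})

points = cantor_points(10)
scales = [3.0 ** -m for m in range(1, 9)]
xs = [math.log(1.0 / e) for e in scales]
ys = [math.log(box_count(points, e)) for e in scales]

# Least-squares slope of log N(eps) vs log(1/eps) estimates the box-counting dimension.
k = len(xs)
slope = (k * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    k * sum(x * x for x in xs) - sum(xs) ** 2)
print(slope, math.log(2) / math.log(3))
```

For the Cantor set the estimated slope recovers the known dimension log 2 / log 3 ≈ 0.63; a homogeneous point distribution in 3D would give a slope of 3, which is the sense in which the Sloan result above is "consistent with homogeneity".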
Publications
The book Discovery of Cosmic Fractals[14] by Yurij Baryshev and Pekka Teerikorpi gives an overview of fractal cosmology, and recounts other milestones in the development of this subject. It recapitulates the history of cosmology, reviewing the core concepts of ancient, historical, and modern astrophysical cosmology. The book also documents the appearance of fractal-like and hierarchical views of the universe from ancient times to the present. The authors make it apparent that some of the pertinent ideas of these two streams of thought developed together. They show that the view of the universe as a fractal has a long and varied history, though people haven't always had the vocabulary necessary to express things in precisely that way. Beginning with the Sumerian and Babylonian mythologies, they trace the evolution of cosmology through the ideas of Ancient Greeks like Aristotle, Anaximander, and Anaxagoras, and forward through the Scientific Revolution and beyond. They acknowledge the contributions of people like Emanuel Swedenborg, Edmund Fournier D'Albe, Carl Charlier, and Knut Lundmark to the subject of cosmology and a fractal-like interpretation, or explanation thereof. In addition, they document the work of de Vaucouleurs, Mandelbrot, Pietronero, Nottale and others in modern times, who have theorized, discovered, or demonstrated that the universe has an observable fractal aspect. On the 10th of March, 2007, the weekly science magazine New Scientist featured an article entitled "Is the Universe a Fractal?"[15] on its cover. The article by Amanda Gefter focused on the contrasting views of Pietronero and his
colleagues, who think that the universe appears to be fractal (rough and lumpy), with those of David Hogg of NYU and others, who think that the universe will prove to be relatively homogeneous and isotropic (smooth) at a still larger scale, or once we have a large and inclusive enough sample (as is predicted by Lambda-CDM). Gefter gave experts in both camps an opportunity to explain their work and their views on the subject for her readers. This was a follow-up of an earlier article in the same publication on August 21, 1999, by Marcus Chown, entitled "Fractal Universe".[16] Back in November 1994, Scientific American featured a cover article written by physicist Andrei Linde, entitled "The Self-Reproducing Inflationary Universe",[17] whose heading stated that "Recent versions of the inflationary scenario describe the universe as a self-generating fractal that sprouts other inflationary universes," and which described Linde's theory of chaotic eternal inflation in some detail. In July 2008, Scientific American featured an article on Causal Dynamical Triangulation,[18] written by the three scientists who propounded the theory, which again suggests that the universe may have the characteristics of a fractal.
See also
Causal dynamical triangulation Chaotic inflation theory Hoag's Object Holographic paradigm Large scale structure Nebular hypothesis Scale invariance Scale relativity Self-organized criticality Shape of the universe
External links
1st Crisis in Cosmology conference [19] included several talks on Fractals in Cosmology 2nd Crisis in Cosmology conference [20] more presentations on fractals in the cosmos Colin Hill's Fractal Universe site [21] Yun Pyo Jung's Fractal Cosmology site [22] Robert Oldershaw's Fractal Cosmology page [23] Harry Schmitz's Fractal Cosmos page [24] NRAO Press release on giant void [25] Search on arXiv.org for papers containing "fractal" [26] includes a high percentage of papers relevant to Cosmology Statistical physics for complex cosmic structures [27] by L. Pietronero and F. Sylos Labini Fractal Approach to Large-Scale Galaxy Distribution [28] by Y. Baryshev and P. Teerikorpi The large scale inhomogeneity of the galaxy distribution [29] May '08 paper by Labini, Vasilyev, Pietronero, and Baryshev Fractals as nesting of matter (translation of Russian Wikipedia page) [30] "Scale-Relativity and Cosmology" [31] from Fractal Space-time and Microphysics by Laurent Nottale
References
[1] Pietronero, L. (1987). "The Fractal Structure of the Universe: Correlations of Galaxies and Clusters". Physica A (144): 257.
[2] Joyce, M.; Labini, F.S.; Gabrielli, A.; Montuori, M.; Pietronero, L. (2005). "Basic Properties of Galaxy Clustering in the light of recent results from the Sloan Digital Sky Survey" (http://arxiv.org/abs/astro-ph/0501583v2). Astronomy and Astrophysics 443 (11). doi:10.1051/0004-6361:20053658.
[3] Rudnick, L.; Brown, S.; Williams, L. "Extragalactic Radio Sources and the WMAP Cold Spot" (http://arxiv.org/abs/0704.0908v2). ApJ 671 (1): 40–44.
[4] Labini, F.S.; Vasilyev, N.L.; Pietronero, L.; Baryshev, Y. (2009). "Absence of self-averaging and of homogeneity in the large scale galaxy distribution" (http://arxiv.org/abs/0805.1132). Europhys. Lett. 86. doi:10.1209/0295-5075/86/49001.
[5] Hogg, David W.; Eisenstein, Daniel J.; Blanton, Michael R.; Bahcall, Neta A.; Brinkmann, J.; Gunn, James E.; Schneider, Donald P. (2005). "Cosmic homogeneity demonstrated with luminous red galaxies" (http://arxiv.org/abs/astro-ph/0411197). The Astrophysical Journal 624: 54–58. doi:10.1086/429084.
[6] Linde, A.D. (August 1986). "Eternally Existing Self-Reproducing Chaotic Inflationary Universe". Physics Letters B.
[7] Guth, Alan (22 June 2007). "Eternal inflation and its implications" (http://arXiv.org/abs/hep-th/0702178). J. Phys. A: Math. Theor. 40 (25): 6811–6826.
[8] Ambjorn, J.; Jurkiewicz, J.; Loll, R. (2005). "Reconstructing the Universe" (http://arXiv.org/abs/hep-th/0505154). Phys. Rev. D 72.
[9] Lauscher, O.; Reuter, M. (2005). "Asymptotic Safety in Quantum Einstein Gravity" (http://arxiv.org/abs/hep-th/0511260).
[10] Nottale, Laurent (1992). "The theory of Scale Relativity". Intl. Journal of Modern Physics A 7 (20): 4899–4936.
[11] Nottale, Laurent (1993). Fractal Space-time and Microphysics. World Scientific Press.
[12] Hellemans, Alexander. "The Geometer of Particle Physics". Scientific American, August 2006
[13] Connes, A.; Rovelli, C. (1994). "Von Neumann Algebra Automorphisms and Time-Thermodynamics Relation" (http://arxiv.org/abs/gr-qc/9406019). Class. Quant. Grav. 11: 2899–2918.
[14] Baryshev, Y. and Teerikorpi, P. Discovery of Cosmic Fractals. World Scientific Press (2002)
[15] Gefter, Amanda. "Is the Universe a Fractal?" New Scientist, March 10, 2007: issue 2594
[16] Chown, Marcus. "Fractal Universe". New Scientist, August 21, 1999
[17] Linde, Andrei. "The Self-Reproducing Inflationary Universe". Scientific American, November 1994, pp. 48-55
[18] Ambjorn, J.; Jurkiewicz, J.; Loll, R. "The Self-Organizing Quantum Universe". Scientific American, July 2008, pp. 42-49
[19] http://www.cosmology.info/2005conference/index.html
[20] http://www.cosmology.info/2008conference/index.html
[21] http://www.fractaluniverse.org/
[22] http://www.fractalcosmology.com/
[23] http://www.amherst.edu/~rloldershaw/menu.html
[24] http://www.fractalcosmos.com
[25] http://www.nrao.edu/pr/2007/coldspot/
[26] http://arxiv.org/find/all/1/all:+fractal/0/1/0/all/0/1
[27] http://arxiv.org/abs/astro-ph/0406202v1
[28] http://arxiv.org/abs/astro-ph/0505185
[29] http://arxiv.org/abs/0805.1132
[30] http://en.wikiversity.org/wiki/Infinite_Hierarchical_Nesting_of_Matter
[31] http://www.luth.obspm.fr/~luthier/nottale/LIWOS7-1cor.pdf
EPR paradox
In quantum mechanics, the EPR paradox (or Einstein–Podolsky–Rosen paradox) is a thought experiment which challenged long-held ideas about the relation between the observed values of physical quantities and the values that can be accounted for by a physical theory. Einstein, Podolsky, and Rosen introduced the thought experiment in a 1935 paper to argue that quantum mechanics is not a complete physical theory.[1][2] According to its authors, the EPR experiment yields a dichotomy. Either
1. the result of a measurement performed on one part A of a quantum system has a non-local effect on the physical reality of another distant part B, in the sense that quantum mechanics can predict outcomes of some measurements carried out at B; or
2. quantum mechanics is incomplete in the sense that some element of physical reality corresponding to B cannot be accounted for by quantum mechanics (that is, some extra variable is needed to account for it).
As shown later by Bell, one cannot introduce the notion of "elements of reality" without affecting the predictions of the theory: completing quantum mechanics with such "elements" automatically leads to logical contradictions. Einstein never accepted quantum mechanics as a "real" and complete theory, and struggled to the end of his life for an interpretation that could comply with relativity without requiring the Heisenberg uncertainty principle. As he once said, "God does not play dice", skeptically referring to the Copenhagen interpretation of quantum mechanics, which holds that there exists no objective physical reality other than that which is revealed through measurement and observation.
The EPR paradox is a paradox in the following sense: if one adds to quantum mechanics some seemingly reasonable (but actually wrong, or questionable as a whole) conditions like local realism (not to be confused with philosophical realism), counterfactual definiteness, and incompleteness (see Bell inequality and Bell test experiments) then one obtains a contradiction. However, quantum mechanics by itself does not appear to be internally inconsistent, nor as it turns out does it contradict relativity. As a result of further theoretical and experimental developments since the original EPR paper, most physicists today regard the EPR paradox as an illustration of how quantum mechanics violates classical intuitions.
The famous paper "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?",[3] authored by Einstein, Podolsky and Rosen in 1935, condensed the philosophical discussion into a physical argument. They claim that given a specific experiment, in which the outcome of a measurement could be known before the measurement takes place, there must exist something in the real world, an "element of reality", which determines the measurement outcome. They postulate that these elements of reality are local, in the sense that each belongs to a certain point in spacetime. Each element may only be influenced by events which are located in the backward light cone of its point in spacetime. These claims are founded on assumptions about nature which constitute what is now known as local realism. Though the EPR paper has often been taken as an exact expression of Einstein's views, it was primarily authored by Podolsky, based on discussions at the Institute for Advanced Study with Einstein and Rosen. Einstein later expressed to Erwin Schrödinger that "It did not come out as well as I had originally wanted; rather, the essential thing was, so to speak, smothered by the formalism."[4] In 1948 Einstein presented a less formal account of his local realist ideas.
Simple version
Before delving into the complicated logic that leads to the 'paradox', it is perhaps worth mentioning the simple version of the argument, as described by Greene and others, which Einstein used to show that 'hidden variables' must exist. A positron and an electron are emitted from a source, by pion decay, so that their spins are opposite; one particle's spin about any axis is the negative of the other's. Also, due to uncertainty, making a measurement of a particle's spin about one axis disturbs the particle, so you now can't measure its spin about any other axis. Now say you measure the electron's spin about the x-axis. This automatically tells you the positron's spin about the x-axis. Since you've done the measurement without disturbing the positron in any way, it can't be that the positron "only came to have that state when you measured it", because you didn't measure it! It must have had that spin all along. Also, you can now measure the positron's spin about the y-axis. So it follows that the positron has had a definite spin about two axes: much more information than the positron is capable of holding, and a "hidden variable" according to EPR.
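The hidden-variable reading of this argument can be sketched in a few lines of code. This is an illustrative toy model, not a physical simulation, and the function name `make_pair` is ours: each pair is simply created carrying definite, opposite spin values for every axis, which is exactly the "element of reality" the EPR argument infers.

```python
import random

def make_pair(rng):
    """Hidden-variable toy model: each pair is created carrying a definite
    spin value (+1 or -1) for every axis, with the positron's spin the
    negative of the electron's, as the EPR argument requires."""
    hidden = {axis: rng.choice([+1, -1]) for axis in ("x", "y", "z")}
    electron = lambda axis: hidden[axis]
    positron = lambda axis: -hidden[axis]
    return electron, positron

rng = random.Random(0)
# Measuring both particles along the same axis always gives opposite results,
# without any communication between them at measurement time.
for _ in range(1000):
    electron, positron = make_pair(rng)
    axis = rng.choice(["x", "y", "z"])
    assert electron(axis) == -positron(axis)
print("same-axis measurements are always anti-correlated")
```

Such a model reproduces the perfect same-axis anti-correlation; Bell's theorem (discussed below) is what rules out models of this kind for measurements along different axes.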
The EPR thought experiment, performed with electron-positron pairs. A source (center) sends particles toward two observers, electrons to Alice (left) and positrons to Bob (right), who can perform spin measurements.
Alice now measures the spin along the z-axis. She can obtain one of two possible outcomes: +z or -z. Suppose she gets +z. According to quantum mechanics, the quantum state of the system collapses into state I. (Different interpretations of quantum mechanics have different ways of saying this, but the basic result is the same.) The quantum state determines the probable outcomes of any measurement performed on the system. In this case, if Bob subsequently measures spin along the z-axis, he will obtain -z with 100% probability. Similarly, if Alice gets -z, Bob will get +z. There is, of course, nothing special about our choice of the z-axis. For instance, suppose that Alice and Bob now decide to measure spin along the x-axis. According to quantum mechanics, the spin singlet state may equally well be expressed as a superposition of spin states pointing in the x direction. We'll call these states Ia and IIa. In state Ia, Alice's electron has spin +x and Bob's positron has spin -x. In state IIa, Alice's electron has spin -x and Bob's positron has spin +x. Therefore, if Alice measures +x, the system collapses into Ia, and Bob will get -x. If Alice measures -x, the system collapses into IIa, and Bob will get +x. In quantum mechanics, the x-spin and z-spin are "incompatible observables", which means that the Heisenberg uncertainty principle operates between them: a quantum state cannot possess a definite value for both variables. Suppose Alice measures the z-spin and obtains +z, so that the quantum state collapses into state I. Now, instead of measuring the z-spin as well, Bob measures the x-spin. According to quantum mechanics, when the system is in state I, Bob's x-spin measurement will have a 50% probability of producing +x and a 50% probability of -x. Furthermore, it is fundamentally impossible to predict which outcome will appear until Bob actually performs the measurement. Here is the crux of the matter.
You might imagine that, when Bob measures the x-spin of his positron, he would get an answer with absolute certainty, since prior to this he hasn't disturbed his particle at all. But, as described above, Bob's positron has a 50% probability of producing +x and a 50% probability of -x: random behaviour, not certainty. It is as if Bob's positron "knows" that Alice's electron has been measured, and its z-spin detected, and hence the positron's own z-spin calculated, so that its x-spin is now 'out of bounds'. Put another way, how does Bob's positron know, at the same time, which way to point if Alice decides (based on information unavailable to Bob) to measure x (i.e. be the opposite of Alice's electron's spin about the x-axis) and also how to point if Alice measures z (i.e. behave randomly), since it is only supposed to know one thing at a time? Using the usual Copenhagen interpretation rules that say the wave function "collapses" at the time of measurement, there must be action at a distance (entanglement) or the positron must know more than it is supposed to (hidden variables). In case the explanation above is confusing, here is the paradox summed up: an electron-positron pair is emitted, the particles shoot off, and they are measured later. Whatever axis their spins are measured along, they are always found to be opposite. This can only be explained if the particles are linked in some
way. Either they were created with a definite (opposite) spin about every axis (a "hidden variable" argument), or they are linked so that one electron knows what axis the other is having its spin measured along, and becomes its opposite about that one axis (an "entanglement" argument). Moreover, if the two particles have their spins measured about different axes, once the electron's spin has been measured about the x-axis (and the positron's spin about the x-axis deduced), the positron's spin about the y-axis will no longer be certain, as if it knows that the measurement has taken place. Either that or it has a definite spin already, which gives it a spin about a second axis: a hidden variable. Incidentally, although we have used spin as an example, many types of physical quantities (what quantum mechanics refers to as "observables") can be used to produce quantum entanglement. The original EPR paper used momentum for the observable. Experimental realizations of the EPR scenario often use photon polarization, because polarized photons are easy to prepare and measure.
On the other hand, the Bohm interpretation of quantum mechanics instead keeps counterfactual definiteness while introducing a conjectured non-local mechanism called the 'quantum potential'. Some workers in the field have also attempted to formulate hidden variable theories that exploit loopholes in actual experiments, such as the assumptions made in interpreting experimental data, although no such theory has been produced that can reproduce all the results of quantum mechanics. There are also individual EPR-like experiments that have no local hidden variables explanation. Examples have been suggested by David Bohm and by Lucien Hardy.
"acceptability", up to this time mainly concerning theory (even philosophy), finally became experimentally decidable. There are many Bell test experiments hitherto, e.g. those of Alain Aspect and others. They all show that pure quantum mechanics, and not Einstein's "local realism", is acceptable. Thus, according to Karl Popper these experiments falsify Einstein's philosophical assumptions, especially the ideas on "hidden variables", whereas quantum mechanics itself remains a good candidate for a theory, which is acceptable in a wider context.
... define (and measure) quantities in the physical system, not the system itself. In the many-worlds interpretation, a kind of locality is preserved, since the effects of irreversible operations such as measurement arise from the relativization of a global state to a subsystem such as that of an observer. The EPR paradox has deepened our understanding of quantum mechanics by exposing the fundamentally non-classical characteristics of the measurement process. Prior to the publication of the EPR paper, a measurement was often visualized as a physical disturbance inflicted directly upon the measured system. For instance, when measuring the position of an electron, one imagines shining a light on it, thus disturbing the electron and producing the quantum mechanical uncertainties in its position. Such explanations, which are still encountered in popular expositions of quantum mechanics, are debunked by the EPR paradox, which shows that a "measurement" can be performed on a particle without disturbing it directly, by performing a measurement on a distant entangled particle. Technologies relying on quantum entanglement are now being developed. In quantum cryptography, entangled particles are used to transmit signals that cannot be eavesdropped upon without leaving a trace. In quantum computation, entangled quantum states are used to perform computations in parallel, which may allow certain calculations to be performed much more quickly than they ever could be with classical computers.
Mathematical formulation
The above discussion can be expressed mathematically using the quantum mechanical formulation of spin. The spin degree of freedom for an electron is associated with a two-dimensional Hilbert space H, with each quantum state corresponding to a vector in that space. The operators corresponding to the spin along the x, y, and z directions, denoted Sx, Sy, and Sz respectively, can be represented using the Pauli matrices:

Sx = (ħ/2) σx,   Sy = (ħ/2) σy,   Sz = (ħ/2) σz,

where ħ is the reduced Planck constant and the eigenstates of Sz are written |+z⟩ and |−z⟩. The combined system is described by states in H ⊗ H, the tensor product of the two particles' Hilbert spaces. The spin singlet state is

|ψ⟩ = (1/√2) ( |+z⟩ ⊗ |−z⟩ − |−z⟩ ⊗ |+z⟩ ),

where the two terms on the right-hand side are what we have referred to as state I and state II above. From the above equations, it can be shown that the spin singlet can also be written (up to an irrelevant overall phase) as

|ψ⟩ = (1/√2) ( |+x⟩ ⊗ |−x⟩ − |−x⟩ ⊗ |+x⟩ ),

where the terms on the right-hand side are what we have referred to as state Ia and state IIa. To illustrate how this leads to the violation of local realism, we need to show that after Alice's measurement of Sz (or Sx), Bob's value of Sz (or Sx) is uniquely determined, and therefore corresponds to an "element of physical reality". This follows from the principles of measurement in quantum mechanics. When Sz is measured, the system state collapses into an eigenvector of Sz. If the measurement result is +z, this means that immediately after measurement the system state undergoes an orthogonal projection onto the space of states of the form

|+z⟩ ⊗ |φ⟩,

so that the singlet collapses to |+z⟩ ⊗ |−z⟩.
Similarly, if Alice's measurement result is −z, the system undergoes an orthogonal projection onto states of the form

|−z⟩ ⊗ |φ⟩,

so that the singlet collapses to |−z⟩ ⊗ |+z⟩. This implies that the measurement of Sz for Bob's positron is now determined: it will be −z in the first case or +z in the second case. It remains only to show that Sx and Sz cannot simultaneously possess definite values in quantum mechanics. One may show in a straightforward manner that no possible vector can be an eigenvector of both matrices. More generally, one may use the fact that the operators do not commute:

[Sx, Sz] = −iħ Sy ≠ 0.
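The linear algebra above is small enough to verify directly. The following sketch (in units where ħ = 1) builds the spin operators from the Pauli matrices, forms the singlet on the tensor-product space, checks that Sx and Sz do not commute, and confirms that projecting on Alice's +z outcome leaves Bob's particle in the definite state |−z⟩:

```python
import numpy as np

hbar = 1.0  # work in units where ħ = 1

# Pauli matrices and the corresponding spin operators S = (ħ/2) σ.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx, Sy, Sz = (hbar / 2) * sx, (hbar / 2) * sy, (hbar / 2) * sz

up = np.array([1, 0], dtype=complex)  # |+z>
dn = np.array([0, 1], dtype=complex)  # |-z>

# Spin singlet on the tensor-product space H ⊗ H.
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

# Sx and Sz do not commute: [Sx, Sz] = -iħ Sy ≠ 0.
comm = Sx @ Sz - Sz @ Sx
assert np.allclose(comm, -1j * hbar * Sy)

# Alice measures Sz and gets +z: project with P = |+z><+z| ⊗ I.
P = np.kron(np.outer(up, up.conj()), np.eye(2))
post = P @ singlet
post = post / np.linalg.norm(post)

# Bob's outcome is now fully determined: the post-measurement state is
# |+z> ⊗ |-z>, so his Sz measurement gives -z with probability 1.
assert np.allclose(post, np.kron(up, dn))
```

The two assertions are exactly the two facts the argument needs: incompatibility of Sx and Sz, and determination of Bob's value by Alice's outcome.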
See also
Bell test experiments Bell state Bell's theorem CHSH Bell test Coherence (physics) Counter-factual definiteness Fredkin Finite Nature Hypothesis Ghirardi-Rimini-Weber theory GHZ experiment Interpretation of quantum mechanics Local hidden variable theory Many-worlds interpretation Measurement in quantum mechanics Measurement problem Penrose interpretation Philosophy of information Philosophy of physics Pondicherry interpretation Popper's experiment Quantum decoherence Quantum entanglement Quantum gravity Quantum information Quantum pseudo-telepathy Quantum teleportation Quantum Zeno effect Sakurai's Bell inequality Synchronicity Wave function collapse Zero-point field
References
Selected papers
A. Aspect, Bell's inequality test: more ideal than ever, Nature 398, 189 (1999). [5] J.S. Bell, On the Einstein-Podolsky-Rosen paradox [6], Physics 1, 195 (1964). P.H. Eberhard, Bell's theorem without hidden variables, Nuovo Cimento 38B, 75 (1977). P.H. Eberhard, Bell's theorem and the different concepts of locality, Nuovo Cimento 46B, 392 (1978). A. Einstein, B. Podolsky, and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? [7] Phys. Rev. 47, 777 (1935). [3] A. Fine, Hidden Variables, Joint Probability, and the Bell Inequalities, Phys. Rev. Lett. 48, 291 (1982). [8] A. Fine, Do Correlations need to be explained?, in Philosophical Consequences of Quantum Theory: Reflections on Bell's Theorem, edited by Cushing & McMullin (University of Notre Dame Press, 1986).
L. Hardy, Nonlocality for two particles without inequalities for almost all entangled states, Phys. Rev. Lett. 71, 1665 (1993). [9] M. Mizuki, A classical interpretation of Bell's inequality, Annales de la Fondation Louis de Broglie 26, 683 (2001). P. Pluch, "Theory for Quantum Probability", PhD Thesis, University of Klagenfurt (2006). M. A. Rowe, D. Kielpinski, V. Meyer, C. A. Sackett, W. M. Itano, C. Monroe and D. J. Wineland, Experimental violation of a Bell's inequality with efficient detection, Nature 409, 791-794 (15 February 2001). [10] M. Smerlak, C. Rovelli, Relational EPR [11]
Books
John S. Bell (1987) Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press. ISBN 0-521-36869-3. Arthur Fine (1996) The Shaky Game: Einstein, Realism and the Quantum Theory, 2nd ed. Univ. of Chicago Press. Sakurai, J. J. (1994) Modern Quantum Mechanics. Addison-Wesley: 174-187, 223-232. ISBN 0-201-53929-2. Selleri, F. (1988) Quantum Mechanics Versus Local Realism: The Einstein-Podolsky-Rosen Paradox. New York: Plenum Press. ISBN 0-306-42739-7
External links
The original EPR paper. [3] Stanford Encyclopedia of Philosophy: "The Einstein-Podolsky-Rosen Argument in Quantum Theory [12]" by Arthur Fine. Abner Shimony (2004) "Bells Theorem. [13]" EPR, Bell & Aspect: The Original References. [14] Does Bell's Inequality Principle rule out local theories of quantum mechanics? [15] From the Usenet Physics FAQ. Theoretical use of EPR in teleportation. [16] Effective use of EPR in cryptography. [17] EPR experiment with single photons interactive. [18]
References
[1] The God Particle: If the Universe is the Answer, What is the Question - pages 187 to 189, and 21 - by Leon Lederman with Dick Teresi (copyright 1993) Houghton Mifflin Company [2] The Einstein-Podolsky-Rosen Argument in Quantum Theory; 1.2 The argument in the text; http://plato.stanford.edu/entries/qt-epr/#1.2 [3] http://prola.aps.org/abstract/PR/v47/i10/p777_1 [4] Quoted in Kaiser, David. "Bringing the human actors back on stage: the personal context of the Einstein-Bohr debate," British Journal for the History of Science 27 (1994): 129-152, on page 147. [5] http://www-ece.rice.edu/~kono/ELEC565/Aspect_Nature.pdf [6] http://www.drchinese.com/David/Bell_Compact.pdf [7] http://www.drchinese.com/David/EPR.pdf [8] http://prola.aps.org/abstract/PRL/v48/i5/p291_1 [9] http://prola.aps.org/abstract/PRL/v71/i11/p1665_1 [10] http://www.nature.com/nature/journal/v409/n6822/full/409791a0.html [11] http://arxiv.org/abs/quant-ph/0604064 [12] http://plato.stanford.edu/entries/qt-epr/ [13] http://plato.stanford.edu/entries/bell-theorem/ [14] http://www.drchinese.com/David/EPR_Bell_Aspect.htm [15] http://math.ucr.edu/home/baez/physics/Quantum/bells_inequality.html [16] http://www.research.ibm.com/journal/rd/481/brassard.html [17] http://www.dhushara.com/book/quantcos/aq/qcrypt.htm
[18] http://www.QuantumLab.de
Holomovement
The holomovement is a key concept in David Bohm's interpretation of quantum mechanics and for his overall worldview. It brings together the holistic principle of "undivided wholeness" with the idea that everything is in a state of process or becoming (or what he calls the "universal flux"). For Bohm, wholeness is not a static oneness, but a dynamic wholeness-in-motion in which everything moves together in an interconnected process. The concept is presented most fully in Wholeness and the Implicate Order, published in 1980.
Background
The basic idea came to Bohm in the early 1970s, during an extraordinary period of creativity at Birkbeck College in London. The holomovement is one of a number of new concepts which Bohm presented in an effort to move beyond the mechanistic formulations of the standard interpretation of the quantum theory and relativity theory. Along with such concepts as undivided wholeness and the implicate order, the holomovement is central to his formulation of a "new order" in physics which would move beyond the mechanistic order.
Undivided wholeness
The term holomovement is one of many neologisms which Bohm coined in his search to overcome the limitations of the standard Copenhagen interpretation of quantum mechanics. This approach involved not just a critique of the assumptions of the standard model, but a set of new concepts in physics which move beyond the conventional language of quantum mechanics. Wholeness and the Implicate Order is the culmination of these reflections, an attempt to show how the new insights provided by a post-Copenhagen model can be extended beyond physics into other domains, such as life, consciousness, and cosmology.
The holomovement concept is introduced in incremental steps. It is first presented under the aspect of wholeness in the lead essay, called "Fragmentation and Wholeness". There Bohm states the major claim of the book: "The new form of insight can perhaps best be called Undivided Wholeness in Flowing Movement" (Bohm, 1980, 11). This view implies that flow is, in some sense, prior to that of the things that can be seen to form and dissolve in this flow. He notes how "each relatively autonomous and stable structure is to be understood not as something independently and permanently existent but rather as a product that has been formed in the whole flowing movement and that will ultimately dissolve back into this movement. How it forms and maintains itself, then, depends on its place and function within the whole" (14). For Bohm, movement is what is primary; and what seem like permanent structures are only relatively autonomous sub-entities which emerge out of the whole of flowing movement and then dissolve back into it, in an unceasing process of becoming.
All is flux
The general concept is further refined in the third chapter, "Reality and Knowledge considered as Process", this time under the aspect of movement, or process. "Not only is everything changing, but all is flux. That is to say, what is is the process of becoming itself, while all objects, events, entities, conditions, structures, etc., are forms that can be abstracted from this process" (48). His notion of the whole is not a static Parmenidean oneness outside of space and time. Rather, the wholeness to which he refers here is more akin to the Heraclitean flux, or to the process philosophy of Whitehead.
Formal presentation
The formal presentation of the concept comes late in the book, under the general framework of new notions of order in physics. After discussing the concepts of undivided wholeness and the implicate and explicate orders, he presents the formal definition under the subheading "The Holomovement and its Aspects". Consistent with his own earlier Causal Interpretation, and more generally with the de Broglie-Schrödinger approach, he posits that a new kind of description would be appropriate for giving primary relevance to the implicate order. Using the hologram as a model, Bohm argues that the implicate order is enfolded within a more generalized wave structure of the universe-in-motion, or what he calls the holomovement: "Generalizing, so as to emphasize undivided wholeness, we can say that the holomovement, which is an unbroken and undivided totality, carries implicate order. In certain cases, we can abstract particular aspects of the holomovement (e.g. light, electrons, sound, etc.), but more generally, all forms of the holomovement merge and are inseparable. Thus in its totality, the holomovement is not limited in any specifiable way at all. It is not required to conform to any particular order, or to be bounded by any particular measure. Thus, the holomovement is undefinable and immeasurable" (151). As the interconnected totality of all there is, the holomovement is potentially of an infinite order, and so cannot be pinned down to any one notion of order. It is important to note that Bohm's concepts of the implicate order and the holomovement are significant departures from the earlier "Hidden Variables" interpretation, and the conceptual framework is somewhat different from that articulated in the Bohm-Vigier interpretation, sometimes called the Causal-Stochastic Interpretation, and the interpretations of the proponents of "Bohmian Mechanics", where the general assumption is of an underlying Dirac ether (see F. David Peat's Introduction to Quantum Implications). While the concept of the holomovement has been criticized as being "metaphysical", it is actually subtler, while at the same time encompassing the whole range of interconnected physical phenomena.
Publications
1957. Causality and Chance in Modern Physics, 1961 Harper edition reprinted in 1980 by Philadelphia: U of Pennsylvania Press, ISBN 0-8122-1002-6 1980. Wholeness and the Implicate Order, London: Routledge, ISBN 0-7100-0971-2, 1983 Ark paperback: ISBN 0-7448-0000-5, 2002 paperback: ISBN 0-415-28979-3 1987. Science, Order and Creativity, with F. David Peat. London: Routledge. 2nd ed. 2000. ISBN 0-415-17182-2. . 1993. The Undivided Universe: An ontological interpretation of quantum theory, with B.J. Hiley, London: Routledge, ISBN 0-415-12185-X (final work) 1998. On Creativity, editor Lee Nichol. London: Routledge, hardcover: ISBN 0-415-17395-7, paperback: ISBN 0-415-17396-5, 2004 edition: ISBN 0-415-33640-6 Infinite Potential: the Life and Times of David Bohm, F. David Peat, Reading, Massachusetts: Addison Wesley (1997), ISBN 0-201-40635-7 DavidPeat.com Quantum Implications: Essays in Honour of David Bohm, (B.J. Hiley, F. David Peat, editors), London: Routledge (1987), ISBN 0-415-06960-2 The Quantum Theory of Motion: an account of the de Broglie-Bohm Causal Interpretation of Quantum Mechanics, Peter R. Holland, Cambridge: Cambridge University Press. (2000) ISBN 0-921-48453-9.
See also
David Bohm Aharonov-Bohm effect Bohm diffusion of a plasma in a magnetic field Bohm interpretation Correspondence principle EPR paradox Holographic paradigm Holographic principle Membrane paradigm Wave gene Implicate order Penrose-Hameroff "Orchestrated Objective Reduction" theory of consciousness Implicate and Explicate Order John Stewart Bell Karl Pribram
The Bohm sheath criterion, which states that a plasma must flow with at least the speed of sound toward a solid surface Influence on John David Garcia
External links
http://www.fdavidpeat.com/ideas/bohm.htm - Lifework of David Bohm
River of Truth: Article by Will Keepin
Interview with David Bohm provided and conducted by F. David Peat along with John Briggs, first issued in Omni magazine, January 1987
Self-similarity
In mathematics, a self-similar object is exactly or approximately similar to a part of itself (i.e. the whole has the same shape as one or more of the parts). Many objects in the real world, such as coastlines, are statistically self-similar: parts of them show the same statistical properties at many scales.[1] Self-similarity is a typical property of fractals. Scale invariance is an exact form of self-similarity where at any magnification there is a smaller piece of the object that is similar to the whole. For instance, a side of the Koch snowflake is both symmetrical and scale-invariant; it can be continually magnified by a factor of 3 without changing shape.
A Koch curve has an infinitely repeating self-similarity when it is magnified.
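The Koch construction can be written as a short recursion: each segment is replaced by four segments one third as long, which is also why the curve's similarity dimension is log 4 / log 3 ≈ 1.26. A minimal sketch (the function name `koch` is ours):

```python
import math

def koch(p, q, depth):
    """Recursively generate the vertices of one Koch-curve side from p to q."""
    if depth == 0:
        return [p, q]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a = (x0 + dx, y0 + dy)            # one-third point
    b = (x0 + 2 * dx, y0 + 2 * dy)    # two-thirds point
    # Apex of the equilateral bump: rotate the middle third by +60 degrees.
    c = (a[0] + dx * 0.5 - dy * math.sqrt(3) / 2,
         a[1] + dy * 0.5 + dx * math.sqrt(3) / 2)
    pts = []
    for s, t in ((p, a), (a, c), (c, b), (b, q)):
        pts.extend(koch(s, t, depth - 1)[:-1])
    return pts + [q]

pts = koch((0.0, 0.0), (1.0, 0.0), 4)
print(len(pts))  # 4 segments per step: 4**4 segments, hence 257 vertices

# Similarity dimension: 4 copies, each scaled by 1/3.
print(math.log(4) / math.log(3))  # ≈ 1.2619
```

Each recursion level magnifies the 4-copies-at-scale-1/3 structure, which is exactly the scale invariance described above.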
Definition
A compact topological space X is self-similar if there exists a finite set S indexing a set of non-surjective homeomorphisms { f_s : s ∈ S } for which

X = ∪_{s∈S} f_s(X).

If X ⊂ Y, we call X self-similar if it is the only non-empty subset of Y such that the equation above holds for { f_s : s ∈ S }. We call

L = (X, S, { f_s : s ∈ S })

a self-similar structure. The homeomorphisms may be iterated, resulting in an iterated function system. The composition of functions creates the algebraic structure of a monoid. When the set S has only two elements, the monoid is known as the dyadic monoid. The dyadic monoid can be visualized as an infinite binary tree; more generally, if the set S has p elements, then the monoid may be represented as a p-adic tree. The automorphism group of the dyadic monoid is the modular group; the automorphisms can be pictured as hyperbolic rotations of the binary tree.
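A minimal iterated function system is the middle-thirds Cantor set, generated by the two contractions f0(x) = x/3 and f1(x) = x/3 + 2/3; here S has two elements, so compositions of the maps form the dyadic monoid mentioned above. A sketch:

```python
# Iterated function system for the middle-thirds Cantor set:
# the union of the two images reproduces the set, X = f0(X) ∪ f1(X).
f0 = lambda x: x / 3
f1 = lambda x: x / 3 + 2 / 3

def iterate(points, n):
    """Apply both IFS maps n times to a starting set of points."""
    for _ in range(n):
        points = {f(x) for x in points for f in (f0, f1)}
    return points

pts = iterate({0.0, 1.0}, 10)
print(len(pts))  # the point count doubles each step: 2**11 = 2048

# Every point of the n-th approximation lies in f0([0,1]) ∪ f1([0,1]),
# i.e. in [0, 1/3] ∪ [2/3, 1] -- the self-similarity equation in action.
assert all(x <= 1/3 + 1e-12 or x >= 2/3 - 1e-12 for x in pts)
```

The set generated this way has similarity dimension log 2 / log 3 ≈ 0.63, by the same copies-and-scale argument used for the Koch curve.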
Examples
The Mandelbrot set is also self-similar around Misiurewicz points. Self-similarity has important consequences for the design of computer networks, as typical network traffic has self-similar properties. For example, in teletraffic engineering, packet-switched data traffic patterns seem to be statistically self-similar.[2] This property means that simple models using a Poisson distribution are inaccurate, and networks designed without taking self-similarity into account are likely to function in unexpected ways. Similarly, stock market movements are described as displaying self-affinity, i.e. they appear self-similar when transformed via an appropriate affine transformation for the level of detail being shown.[3] Some very natural self-similar objects are plants. The fern image on the right is self-similar, albeit mathematically generated; true ferns, however, come remarkably close to true self-similarity. Other plants, such as Romanesco broccoli, are extremely self-similar.
Self-similarity in the Mandelbrot set shown by zooming in on the Feigenbaum Point at (-1.401155189...,0)
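The Poisson-versus-self-similar distinction is usually made with a variance-time plot: averaging a short-range-dependent series over blocks of size m shrinks the variance like 1/m (log-log slope −1), whereas self-similar traffic with Hurst parameter H decays like m^(2H−2), i.e. much more slowly. The sketch below checks only the Poisson baseline; the rate and block sizes are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(42)

# Per-millisecond packet counts from a memoryless Poisson source
# (the classical model that self-similar traffic measurements refuted).
counts = rng.poisson(lam=10.0, size=2**16)

def block_mean_variance(x, m):
    """Variance of the series averaged over non-overlapping blocks of size m."""
    n = (len(x) // m) * m
    return x[:n].reshape(-1, m).mean(axis=1).var()

# Fit the log-log slope of variance versus block size.
ms = np.array([1, 4, 16, 64, 256])
vs = np.array([block_mean_variance(counts, m) for m in ms])
slope = np.polyfit(np.log(ms), np.log(vs), 1)[0]
print(slope)  # close to -1 for Poisson; self-similar traffic would be flatter
```

Running the same estimator on measured Ethernet traces is essentially how Leland et al. [2] demonstrated self-similarity: the empirical slope is well above −1.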
See also
Droste effect Self-reference Zipf's law Self-dissimilarity
External links
"Copperplate Chevrons" [4] a self-similar fractal zoom movie "Self-Similarity" [5] New articles about the Self-Similarity. Waltz Algorithm
An image of a fern which exhibits affine self-similarity
References
[1] Benoît Mandelbrot, How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension [2] Leland et al. "On the self-similar nature of Ethernet traffic", IEEE/ACM Transactions on Networking, Volume 2, Issue 1 (February 1994) [3] Benoît Mandelbrot (February 1999). "How Fractals Can Explain What's Wrong with Wall Street" (http://www.sciam.com/article.cfm?id=multifractals-explain-wall-street). Scientific American. [4] http://www.ericbigas.com/fractals/cc [5] http://pi.314159.ru/longlist.htm
Implicate order
David Bohm proposed a cosmological order radically different from generally accepted conventions, which he expressed as a distinction between the implicate and explicate order, described in the book Wholeness and the Implicate Order: In the enfolded [or implicate] order, space and time are no longer the dominant factors determining the relationships of dependence or independence of different elements. Rather, an entirely different sort of basic connection of elements is possible, from which our ordinary notions of space and time, along with those of separately existent material particles, are abstracted as forms derived from the deeper order. These ordinary notions in fact appear in what is called the "explicate" or "unfolded" order, which is a special and distinguished form contained within the general totality of all the implicate orders (Bohm, 1980, p. xv).
Bohm's proposals have at times been dismissed largely on the basis of such tenets, without due consideration necessarily given to the fact that they had been challenged by Bohm. Bohm's paradigm is inherently antithetical to reductionism, in most forms, and accordingly can be regarded as a form of ontological holism. On this, Bohm noted of prevailing views among physicists: "the world is assumed to be constituted of a set of separately existent, indivisible and unchangeable 'elementary particles', which are the fundamental 'building blocks' of the entire universe ... there seems to be an unshakable faith among physicists that either such particles, or some other kind yet to be discovered, will eventually make possible a complete and coherent explanation of everything" (Bohm, 1980, p. 173).
In Bohm's conception of order, then, primacy is given to the undivided whole, and the implicate order inherent within the whole, rather than to parts of the whole, such as particles, quantum states, and continua. For Bohm, the whole encompasses all things, structures, abstractions and processes, including processes that result in (relatively) stable structures as well as those that involve metamorphosis of structures or things. In this view, parts may be entities normally regarded as physical, such as atoms or subatomic particles, but they may also be abstract entities, such as quantum states. Whatever their nature and character, according to Bohm, these parts are considered in terms of the whole, and in such terms, they constitute relatively autonomous and independent "sub-totalities". The implication of the view is, therefore, that nothing is entirely separate or autonomous. Bohm (1980, p. 11) said: "The new form of insight can perhaps best be called Undivided Wholeness in Flowing Movement. This view implies that flow is, in some sense, prior to that of the things that can be seen to form and dissolve in this flow". According to Bohm, a vivid image of this sense of analysis of the whole is afforded by vortex structures in a flowing stream. Such vortices can be relatively stable patterns within a continuous flow, but such an analysis does not imply that the flow patterns have any sharp division, or that they are literally separate and independently existent entities; rather, they are most fundamentally undivided. Thus, according to Bohm's view, the whole is in continuous flux, and hence is referred to as the holomovement (movement of the whole).
A hydrogen atom and its constituent particles: an example of a small collection of posited building blocks of the universe
Theories, Bohm held, apply within a certain context, i.e. a set of interrelated conditions within the explicate order, rather than having unlimited scope, and apparent contradictions stem from attempts to overgeneralize by superposing the theories on one another, implying greater generality or broader relevance than is ultimately warranted. Thus, Bohm (1980, pp. 156-167) argued: "... in sufficiently broad contexts such analytic descriptions cease to be adequate ... 'the law of the whole' will generally include the possibility of describing the 'loosening' of aspects from each other, so that they will be relatively autonomous in limited contexts ... however, any form of relative autonomy (and heteronomy) is ultimately limited by holonomy, so that in a broad enough context such forms are seen to be merely aspects, relevated in the holomovement, rather than disjoint and separately existent things in interaction".
Quantum entanglement
Central to Bohm's schema are correlations between observables of entities which seem separated by great distances in the explicate order (such as a particular electron here on earth and an alpha particle in one of the stars in the Abell 1835 galaxy, the farthest galaxy from Earth known to humans), manifestations of the implicate order. Within quantum theory there is entanglement of such objects. This view of order necessarily departs from any notion which entails signalling, and therefore causality. The correlation of observables does not imply a causal influence, and in Bohm's schema the latter represents 'relatively' independent events in space-time; and therefore explicate order. He also used the term unfoldment to characterise processes in which the explicate order becomes relevant (or "relevated"). Bohm likens unfoldment also to the decoding of a television signal to produce a sensible image on a screen. The signal, screen, and television electronics in this analogy represent the implicate order, whilst the image produced represents the explicate order. He also uses an instructive example in which an ink droplet can be introduced into a highly viscous substance (such as glycerine), and the substance rotated very slowly such that there is negligible diffusion of the substance. In this example, the droplet becomes a thread which, in turn, eventually becomes invisible. However, by rotating the substance in the reverse direction, the droplet can essentially reform. When it is invisible, according to Bohm, the order of the ink droplet as a pattern can be said to be implicate within the substance. A similar demonstration drops blue ink into a vat of spinning carbon tetrachloride and watches the ink disperse; reversing the spin of the vat causes the ink to come back together into a blob, after which it spreads out again.
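Bohm's glycerine demonstration works because slow, viscous flow is nearly reversible: each fluid element follows the rotation exactly, so running the rotation backwards retraces every path. A minimal numerical sketch of that reversibility uses points advected in polar coordinates by a radius-dependent rotation rate (the rate profile and droplet coordinates here are arbitrary illustrative choices, not a fluid-dynamics model):

```python
def advect(points, steps, dt, rate=lambda r: 1.0 / (1.0 + r)):
    """Rotate each (radius, angle) point by a radius-dependent rate.

    Differential rotation shears neighbouring points apart, 'enfolding'
    an initial pattern into an apparently featureless smear.
    """
    for _ in range(steps):
        points = [(r, theta + rate(r) * dt) for r, theta in points]
    return points

# An initial 'ink droplet': a tight cluster of points.
droplet = [(1.0 + 0.01 * i, 0.1 * i) for i in range(10)]

smeared = advect(droplet, steps=500, dt=0.1)      # stir forward: droplet smears out
recovered = advect(smeared, steps=500, dt=-0.1)   # stir backward: droplet 'unfolds'

error = max(abs(a[1] - b[1]) for a, b in zip(droplet, recovered))
```

Because each step is an exact rotation, reversing the stirring recovers the droplet to within floating-point error; real glycerine only approximates this because diffusion is slow, not absent.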
In another analogy, Bohm asks us to consider a pattern produced by making small cuts in a folded piece of paper and then, literally, unfolding it. Widely separated elements of the pattern are, in actuality, produced by the same original cut in the folded piece of paper. Here the cuts in the folded paper represent the implicate order, and the unfolded pattern of marks represents the explicate order.
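The folded-paper analogy can be made concrete: fold a strip so that positions i and N-1-i coincide, make one cut, and the unfolded strip shows two widely separated marks produced by a single act. A toy sketch (strip length and cut position are illustrative):

```python
def unfold_cut(length, cut_position):
    """One cut through a once-folded strip marks two mirror positions."""
    assert 0 <= cut_position < length // 2
    # In the folded state, position i lies on top of position length-1-i,
    # so a single cut marks both when the strip is unfolded.
    return sorted({cut_position, length - 1 - cut_position})

marks = unfold_cut(length=16, cut_position=3)   # -> [3, 12]
```

The two marks look independent on the unfolded strip (the explicate order), yet both are images of one cut made in the folded state (the implicate order).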
In a holographic reconstruction, each region of a photographic plate contains the whole image
Bohm noted that although the hologram conveys undivided wholeness, it is nevertheless static. In this view of order, laws represent invariant relationships between explicate entities and structures, and thus Bohm maintained that in physics, the explicate order generally reveals itself within well-constructed experimental contexts as, for example, in the sensibly observable results of instruments. With respect to implicate order, however, Bohm asked us to consider the possibility instead "that physical law should refer primarily to an order of undivided wholeness of the content of description similar to that indicated by the hologram rather than to an order of analysis of such content into separate parts".[2]
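The distributed character of a hologram can be loosely illustrated numerically: in a Fourier (frequency-domain) representation, every coefficient receives contributions from every sample of the signal, so keeping only a small fragment of the coefficients still yields a reconstruction spread over the whole signal, merely blurred, rather than a piece of it. This is an analogy to, not a model of, optical holography; the signal and cutoff below are arbitrary illustrative choices:

```python
import cmath
import math

def dft(x):
    """Plain O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning real parts."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

n = 64
signal = [math.exp(-((t - 16) ** 2) / 18.0) for t in range(n)]  # a smooth bump

spectrum = dft(signal)
# Keep only a small low-frequency 'fragment' of the distributed encoding.
kept = [X if min(k, n - k) <= 8 else 0.0 for k, X in enumerate(spectrum)]
blurred = idft(kept)

peak = max(range(n), key=lambda t: blurred[t])
```

The retained fragment reconstructs the whole bump, peaked in roughly the right place; the discarded coefficients cost only sharpness. Bohm's point is the analogous property of a photographic hologram, where each region of the plate encodes the whole image.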
Implicate order
The implicate order represents the proposal of a general metaphysical concept in terms of which it is claimed that matter and consciousness might both be understood, in the sense that it is proposed that both matter and consciousness: (i) enfold the structure of the whole within each region, and (ii) involve continuous processes of enfoldment and unfoldment. For example, in the case of matter, entities such as atoms may represent continuous enfoldment and unfoldment which manifests as a relatively stable and autonomous entity that can be observed to follow a relatively well-defined path in space-time. In the case of consciousness, Bohm pointed toward evidence presented by Karl Pribram that memories may be enfolded within every region of the brain rather than being localized (for example in particular regions of the brain, cells, or atoms).
Karl Pribram and colleagues have presented evidence that indicates that memories do not in general appear to be localized in specific regions of brains
Bohm went on to say: "As in our discussion of matter in general, it is now necessary to go into the question of how in consciousness the explicate order is what is manifest ... the manifest content of consciousness is based essentially on memory, which is what allows such content to be held in a fairly constant form. Of course, to make possible such constancy it is also necessary that this content be organized, not only through relatively fixed association but also with the aid of the rules of logic, and of our basic categories of space, time, causality, universality, etc. ... there will be a strong background of recurrent, stable, and separable features, against which the transitory and changing aspects of the unbroken flow of experience will be seen as fleeting impressions that tend to be arranged and ordered mainly in terms of the vast totality of the relatively static and fragmented content of [memories]".[3] Bohm also claimed that "as with consciousness, each moment has a certain explicate order, and in addition it enfolds all the others, though in its own way. So the relationship of each moment in the whole to all the others is implied by its total content: the way in which it 'holds' all the others enfolded within it". Bohm characterises consciousness as a process in which at each moment, content that was previously implicate is presently explicate, and content which was previously explicate has become implicate: "One may indeed say that our memory is a special case of the process described above, for all that is recorded is held enfolded within the brain cells and these are part of matter in general. The recurrence and stability of our own memory as a relatively independent sub-totality is thus brought about as part of the very same process that sustains the recurrence and stability in the manifest order of matter in general. It follows, then, that the explicate and manifest order of consciousness is not ultimately distinct from that of matter in general."[4]
Context, on this view, is the ultimate determiner of meaning. The holistic view of context, hence another striking analogy of wholeness, was first put forward in The Meaning of Meaning by C. K. Ogden & I. A. Richards (1923), including the literary, psychological, and external. These are respectively analogous to Karl Popper's worlds 3, 2, and 1 appearing in his Objective Knowledge (1972 and later ed.). Bohm's worldview of "undivided wholeness" is contrasted with Popper's three divided worlds. Bohm's views bear some similarities to those of Immanuel Kant, according to Wouter Hanegraaff. For example, Kant held that the parts of an organism, such as cells, simultaneously exist to sustain the whole, and depend upon the whole for their own existence and functioning. Kant also proposed that the process of thought plays an active role in organizing knowledge, which implies theoretical insights are instrumental to the process of acquiring factual knowledge. Kant restricted knowledge to appearances only and denied the existence of knowledge of any "thing in itself," but Bohm believed that theories in science are "forms of insight that arise in our attempts to obtain a perception of a deeper nature of reality as a whole" (Bohm & Hiley, 1993, p. 323). Thus for Bohm the thing in itself is the whole of existence, conceived of not as a collection of parts but as an undivided movement. In this view Bohm is closer to Kant's critic, Arthur Schopenhauer, who identified the thing in itself with the will, an inner metaphysical reality that grounds all outer phenomena. Schopenhauer's will plays a role analogous to that of the implicate order; for example, it is objectified (Bohm might say it is "made explicate") to form physical matter.
Cells stained for keratin and DNA: such parts of life exist because of the whole, but also to sustain it
And Bohm's concept that consciousness and matter share a common ground resembles Schopenhauer's claim that even inanimate objects possess an inward noumenal nature. In The World as Will and Representation, Schopenhauer (1819/1995) described this ground thus: "When I consider the vastness of the world, the most important factor is that this existence-in-itself, of which the world is the manifestation, cannot, whatever it may be, have its true self spread out and dispersed in this fashion in boundless space, but that this endless extension belongs only to its manifestation, while existence-in-itself, on the contrary, is present entire and undivided in everything in nature and in everything that lives" (p. 60).
See also
Implicature Holographic principle The Holographic Universe Holomovement Arthur Schopenhauer Brahman Buddhism Immanuel Kant Kabbalism Laminar flow Meditation for Spiritual Unfoldment Mind's eye Noumenon Parable of the cave Plato Samsara Taoism
For Bohm, life is a continuous flowing process of enfoldment and unfoldment involving relatively autonomous entities. DNA 'directs' the environment to form a living thing. Life can be said to be implicate in ensembles of atoms that ultimately form life.
Unobservables
References
Bohm, D. (1980). Wholeness and the Implicate Order. London: Routledge. ISBN 0-7100-0971-2
Bohm, D., & Hiley, B. J. (1993). The Undivided Universe. London: Routledge. ISBN 0-415-06588-7
Kauffman, S. (1995). At Home in the Universe. New York: Oxford University Press. Hardcover: ISBN 0-19-509599-5; paperback: ISBN 0-19-511130-3
Kauffman, S. (2000). Investigations. New York: Oxford University Press.
Kuhn, T.S. (1961). The function of measurement in modern physical science. Isis, 52, 161-193.
Schopenhauer, A. (1819/1995). The World as Will and Idea. (D. Berman, Ed.; J. Berman, Trans.). London: Everyman. ISBN 0-460-87505-1
Further reading
Michael Talbot. The Holographic Universe. HarperCollins, 1991.
External links
Interview with David Bohm [5] - An interview with Bohm concerning this particular subject matter, conducted by F. David Peat.
Excerpt from The Holographic Universe [6] - Parallels some of the experiences of the 18th-century Swedish mystic Emanuel Swedenborg with David Bohm's ideas.
References
[1] Bohm, 1980, p. 149
[2] Bohm, 1980, p. 147
[3] Bohm, 1980, p. 205
[4] Bohm, 1980, p. 208
[5] http://www.fdavidpeat.com/interviews/bohm.htm
[6] http://www.soultravel.nu/2004/040907-swedenborg/index.asp
Orch-OR
Orch OR (Orchestrated Objective Reduction) is a theory of consciousness, which is the joint work of theoretical physicist Sir Roger Penrose and anesthesiologist Stuart Hameroff. Mainstream theories assume that consciousness emerges from the brain, and focus particularly on complex computation at connections known as synapses that allow communication between brain cells (neurons). Orch OR combines approaches to the problem of consciousness from the radically different angles of mathematics, physics and anesthesia. Penrose and Hameroff initially developed their ideas quite separately from one another, and it was only in the 1990s that they cooperated to produce the Orch OR theory. Penrose came to the problem from the viewpoint of mathematics, and in particular Gödel's theorem, while Hameroff approached it from a career in cancer research and anesthesia that gave him an interest in brain structures.
When the collapse happens, the choice of position for the particle is random. This is a drastic departure from classical physics. There is no cause-and-effect process, and no system of algorithms, that can describe the choice of position for the particle. This provided Penrose with a candidate for the physical basis of the suggested non-computable process that he proposed as possibly existing in the brain. However, this was not the end of his problems. He had identified something in physics that was not based on algorithms, but at the same time, randomness was not a promising basis for mathematical understanding, the aspect of mind on which Penrose particularly focused.
Objective reduction
Penrose now proposed that existing ideas on wave function collapse might only apply to situations where the quanta are the subject of measurement or of interaction with the environment. He considered the case of quanta that are not the subject of measurements or interactions, but remain isolated from the environment, and proposed that these quanta may be subject to a different form of wave function collapse. In this area, Penrose draws on both Einstein's general theory of relativity, and on his own notions about the possible structure of spacetime.[1] [2] General relativity states that spacetime is curved by massive objects. Penrose, in seeking to reconcile relativity and quantum theory, has suggested that at the very small scale this curved spacetime is not continuous, but constitutes a form of network. Penrose postulates that each quantum superposition has its own piece of spacetime curvature. According to his theory, these different bits of spacetime curvature are separated from one another, and constitute a form of blister in spacetime. Penrose further proposes a limit to the size of this spacetime blister. This is the tiny Planck scale (10⁻³⁵ m). Above this size, Penrose suggests that spacetime can be viewed as continuous, and that gravity starts to exert its force on the spacetime blister. This is suggested to become unstable above the Planck scale, and to collapse so as to choose just one of the possible locations for the particle. Penrose calls this event objective reduction (OR), reduction being another word for wave function collapse. An important feature of Penrose's objective reduction is that the time to collapse is a function of the mass/energy of the object undergoing collapse. Thus the greater the superposition, the faster it will undergo OR, and vice versa. Tiny superpositions, e.g. an electron separated from itself, if isolated from the environment, would require 10 million years to reach OR threshold. An isolated one kilogram object (e.g.
Schrödinger's cat) would reach OR threshold in only 10⁻³⁷ seconds. However, objects somewhere between the scale of an electron and the scale of a cat could collapse within a timescale that was relevant to neural processing. The threshold for Penrose OR is given by the indeterminacy principle E = ħ/t, where E is the gravitational self-energy (the degree of spacetime separation given by the superpositioned mass), ħ is the reduced Planck constant, and t is the time until OR occurs. There is no existing evidence for Penrose's objective reduction, but the theory is considered to be testable, and plans are in hand to carry out a relevant experiment.[3] From the point of view of consciousness theory, an essential feature of Penrose's objective reduction is that the choice of states when objective reduction occurs is selected neither randomly, as are choices following measurement or decoherence, nor completely algorithmically. Rather, states are proposed to be selected by a 'non-computable' influence embedded in the fundamental level of spacetime geometry at the Planck scale. Penrose claimed that such information is Platonic, representing pure mathematical truth, aesthetic and ethical values. More than two thousand years ago, the Greek philosopher Plato had proposed such pure values and forms, but in an abstract realm. Penrose placed the Platonic realm at the Planck scale. This relates to Penrose's ideas concerning the three worlds: physical, mental, and the Platonic mathematical world. In his theory, the physical world can be seen as the external reality, the mental world as information processing in the brain and the Platonic world as the encryption, measurement, or geometry of fundamental spacetime that is claimed to support non-computational understanding.
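The scaling in Penrose's criterion t = ħ/E can be sketched numerically. The gravitational self-energy of a superposition has no single simple formula; as a rough illustrative stand-in one can take E ≈ Gm²/a for a mass m superposed over a separation a. Both separations below are arbitrary choices, not Penrose's own estimates, so only the steep inverse scaling with mass, rather than the exact lifetimes quoted above, should be read off:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34    # reduced Planck constant, J s

def or_collapse_time(mass_kg, separation_m):
    """Time to objective reduction, t = hbar / E, with E ~ G m^2 / a.

    The self-energy expression is a crude illustrative proxy for the
    gravitational self-energy of the superposed mass distribution.
    """
    self_energy = G * mass_kg ** 2 / separation_m
    return HBAR / self_energy

t_electron = or_collapse_time(9.109e-31, 1e-10)  # electron, atomic-scale split
t_kilogram = or_collapse_time(1.0, 0.5)          # kilogram-scale object
```

Whatever separations are chosen, the kilogram object collapses enormously faster than the electron, which is the point of the cat-versus-electron comparison: intermediate, brain-scale superpositions could in principle reach threshold on neurally relevant timescales.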
Objections to Orch OR
Penrose's interpretation of Gödel's first incompleteness theorem is rejected by many philosophers, logicians and artificial intelligence (robotics) researchers.[11] A paper by the philosophers Rick Grush and Patricia Churchland attacking Penrose has received widespread attention within consciousness studies.[12] Solomon Feferman, a professor of mathematics, logic and philosophy, has made more qualified criticisms.[13] He faults detailed points in the reasoning of Penrose's second book, Shadows of the Mind, but says that he does not think that they undermine the main thrust of his argument. As a mathematician, he argues that mathematicians do not progress by computer-like or mechanistic search through proofs, but by trial-and-error reasoning, insight and inspiration, and that machines cannot share this approach with humans. However, he thinks that Penrose goes too far in his arguments. Feferman points out that everyday mathematics, as used in science, can in practice be formalised. He also rejects Penrose's platonism. The main objection to the Hameroff side of the theory is that any quantum feature in the environment of the brain would undergo wave function collapse (reduction), as a result of interaction with the environment, far too quickly for it to have any influence on neural processes. The wave or superposition form of the quanta is referred to as being quantum coherent. Interaction with the environment results in decoherence, otherwise known as wave function collapse. It has been questioned how such quantum coherence could avoid rapid decoherence in the conditions of the brain.
With reference to this question, a paper by the physicist Max Tegmark, refuting the Orch OR model and published in the journal Physical Review E, is widely quoted.[14] Tegmark developed a model for time to decoherence, and from this calculated that microtubule quantum states could exist, but would be sustained for only 100 femtoseconds at brain temperatures, far too brief to be relevant to neural processing. A recent paper by Engel et al. in Nature does indicate quantum coherent electrons as being functional in energy transfer within photosynthetic protein, but the quantum coherence described lasts for 660 femtoseconds[15] rather than the 25 milliseconds required by Orch OR. This reinforces Tegmark's estimate for the decoherence timescale of microtubules, which is comparable to the observed coherence time in the photosynthetic complex. In their reply to Tegmark's paper, also published in Physical Review E, the physicists Scott Hagan and Jack Tuszynski, together with Hameroff,[16] [17] claimed that Tegmark did not address the Orch OR model, but instead a model of his own construction. This involved superpositions of quanta separated by 24 nm rather than the much smaller separations stipulated for Orch OR. As a result, Hameroff's group claimed a decoherence time seven orders of magnitude greater than Tegmark's, but still well short of the 25 ms required if the quantum processing in the theory was to be linked to the 40 Hz gamma synchrony, as Orch OR suggested. To bridge this gap, the group made a series of proposals. It was supposed that the interiors of neurons could alternate between liquid and gel states. In the gel state, it was further hypothesized that the water electrical dipoles are oriented in the same direction, along the outer edge of the microtubule tubulin subunits. Hameroff et al. proposed that this ordered water could screen any quantum coherence within the tubulin of the microtubules from the environment of the rest of the brain.
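The numbers in this exchange can be put side by side. Tegmark's estimate is roughly 100 femtoseconds; the Hagan-Tuszynski-Hameroff reply claims about seven orders of magnitude more; and the 40 Hz gamma synchrony sets a target of one cycle, about 25 ms. A quick arithmetic sketch of the remaining gap:

```python
import math

tegmark_s = 100e-15          # ~100 fs: Tegmark's decoherence estimate
reply_s = tegmark_s * 1e7    # reply: ~seven orders of magnitude longer
required_s = 1.0 / 40.0      # 25 ms: one period of 40 Hz gamma synchrony

# How many orders of magnitude still separate the revised estimate
# from the timescale Orch OR needs.
gap_orders = math.log10(required_s / reply_s)
```

Even granting the revised estimate, the resulting coherence time of about a microsecond falls short of the gamma-cycle requirement by roughly four and a half orders of magnitude, which is why the further screening and error-correction proposals described next were put forward.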
Each tubulin also has a tail extending out from the microtubules, which is negatively charged and therefore attracts positively charged ions. It is suggested that this could provide further screening. Further to this, there was a suggestion that the microtubules could be pumped into a coherent state by biochemical energy. Finally, it is suggested that the configuration of the microtubule lattice might be suitable for quantum error correction, a means of holding together quantum coherence in the face of environmental interaction. In the last decade, some researchers who are sympathetic to Penrose's ideas have proposed an alternative scheme for quantum processing in microtubules based on the interaction of tubulin tails with microtubule-associated proteins, motor proteins and presynaptic scaffold proteins. These proposed alternative processes have the advantage of taking place within Tegmark's time to decoherence. Most of the above-mentioned putative augmentations of the Orch OR model remain disputed. "Cortical dendrites contain largely A-lattice microtubules" is one of 20 testable predictions published by Hameroff in 1998,[18] and it was hypothesized that these A-lattice microtubules could perform topological quantum error correction. The latter
testable prediction had already been experimentally disproved in 1994 by Kikkawa et al., who showed that all in vivo microtubules have a B-lattice and a seam.[19] [20] Other peer-reviewed critiques of Orch OR have been published in recent years. One of these is a paper published in PNAS by Reimers et al.,[21] who argue that the condensates proposed in Orch OR would involve energies and temperatures that are not realistic in biological material. Further papers by Georgiev point to a number of problems with Hameroff's proposals, including the lack of explanation for the probabilistic firing of the axonal synapses,[22] an error in the calculated number of tubulin dimers per cortical neuron,[23] and mismodeling of dendritic lamellar bodies (DLBs) discovered by De Zeeuw et al.,[24] who showed that despite the fact that DLBs are stained by antibody against gap junctions, they are located tens of micrometers away from actual gap junctions. It was also shown that the proposed tubulin-bound GTP pumping of quantum coherence can occur neither in stable microtubules[25] nor in dynamically unstable microtubules undergoing assembly/disassembly.[26]
See also
Electromagnetic theories of consciousness Holonomic brain theory Many-minds interpretation Quantum Aspects of Life (book)
Quantum mind Subjective universe Roger Penrose (1999) Science and the Mind. Kavli Institute for Theoretical Physics Public Lectures, May 12, 1999. [27] Quantum-Mind [7]
References
[1] Penrose, Roger (1989). The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics. Oxford University Press. pp. 480. ISBN 0-19-851973-7. [2] Penrose, Roger (1994). Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford University Press. pp. 457. ISBN 0-19-853978-9. [3] Marshall, W., Simon, C., Penrose, R., and Bouwmeester, D. (2003). "Towards quantum superpositions of a mirror" (http://arxiv.org/abs/quant-ph/0210001). Physical Review Letters 91: 130401. doi:10.1103/PhysRevLett.91.130401. [4] Hameroff, S.R., and Watt, R.C. (1982). "Information processing in microtubules" (http://www.quantumconsciousness.org/documents/informationprocessing_hameroff_000.pdf). Journal of Theoretical Biology 98: 549-561. [5] Hameroff, S.R. (1987). Ultimate Computing (http://www.quantumconsciousness.org/ultimatecomputing.html). Elsevier. [6] Hameroff, Stuart (2008). "That's life! The geometry of electron resonance clouds" (http://www.quantumconsciousness.org/documents/Hameroff_received-1-05-07.pdf). In Abbott, D; Davies, P; Pati, A. Quantum Aspects of Life. World Scientific. pp. 403-434. Retrieved Jan 21, 2010. [7] Hameroff, S.R. (2006). "The entwined mysteries of anesthesia and consciousness". Anesthesiology 105: 400-412. [8] Hameroff, S. (2009). "The conscious pilot - dendritic synchrony moves through the brain to mediate consciousness". Journal of Biological Physics. doi:10.1007/s10867-009-9148-x. [9] Bennett, M.V.L., and Zukin, R.S. (2004). "Electrical Coupling and Neuronal Synchronization in the Mammalian Brain" (http://dx.doi.org/10.1016/S0896-6273(04)00043-1). Neuron 41: 495-511. doi:10.1016/S0896-6273(04)00043-1. [10] Specifically: Buhl, D.L., Harris, K.D., Hormuzdi, S.G., Monyer, H., and Buzsaki, G. (2003). "Selective Impairment of Hippocampal Gamma Oscillations in Connexin-36 Knock-Out Mouse In Vivo". Journal of Neuroscience 23: 1013-1018. Dermietzel, R. (1998).
"Gap junction wiring: a `new' principle in cell-to-cell communication in the nervous system?". Brain Research Reviews 26: 176-183. Draguhn, A., Traub, R.D., Schmitz, D., and Jefferys, J.G.R. (1998). "Electrical coupling underlies high-frequency oscillations in the hippocampus in vitro". Nature 394: 189-192. Fries, P., Schroder, J.-H., Roelfsema, P.R., Singer, W., and Engel, A.K. (2002). "Oscillatory Neuronal Synchronization in Primary Visual Cortex as a Correlate of Stimulus Selection". Journal of Neuroscience 22: 3739-3754. Galarreta, M., and Hestrin, S. (1999). "A network of fast-spiking cells in the neocortex connected by electrical synapses". Nature 402: 72-75. Gibson, J.R., Beierlein, M., and Connors, B.W. (1999). "Two networks of electrically coupled inhibitory neurons in neocortex". Nature 402:
75-79. Hormuzdi, S.G., Filippov, M.A., Mitropoulou, G., Monyer, H., and Bruzzone, R. (2004). "Electrical synapses: a dynamic signaling system that shapes the activity of neuronal networks". Biochimica et Biophysica Acta 1662: 113-137. LeBeau, F.E.N., Traub, R.D., Monyer, H., Whittington, M.A., and Buhl, E.H. (2003). "The role of electrical signaling via gap junctions in the generation of fast network oscillations". Brain Research Bulletin 62: 3-13. Velazquez, J.L.P., and Carlen, P.L. (2000). "Gap junctions, synchrony and seizures". Trends in Neurosciences 23: 68-74. Rozental, R., and de Carvalho, A.C.C. (2000). "Introduction". Brain Research Reviews 32: 1-2. [11] A 1995 issue of Psyche was devoted to this: Maudlin, T. (1995). "Between The Motion And The Act... A Review of Shadows of the Mind by Roger Penrose" (http://journalpsyche.org/ojs-2.2/index.php/psyche/article/view/2396/2325). Psyche 2. Klein, S.A. (1995). "Is Quantum Mechanics Relevant To Understanding Consciousness? A Review of Shadows of the Mind by Roger Penrose" (http://journalpsyche.org/ojs-2.2/index.php/psyche/article/view/2397/2326). Psyche 2. McCullough, D. (1995). "Can Humans Escape Gödel? A Review of Shadows of the Mind by Roger Penrose" (http://journalpsyche.org/ojs-2.2/index.php/psyche/article/view/2398/2327). Psyche 2. Moravec, H. (1995). "Roger Penrose's Gravitonic Brains: A Review of Shadows of the Mind by Roger Penrose" (http://journalpsyche.org/ojs-2.2/index.php/psyche/article/view/2399/2328). Psyche 2. Baars, B.J. (1995). "Can Physics Provide a Theory of Consciousness? A Review of Shadows of the Mind by Roger Penrose" (http://journalpsyche.org/ojs-2.2/index.php/psyche/article/view/2401/2330). Psyche 2. Chalmers, D.J. (1995). "Minds, Machines, And Mathematics: A Review of Shadows of the Mind by Roger Penrose" (http://journalpsyche.org/ojs-2.2/index.php/psyche/article/view/2402/2331). Psyche 2. McCarthy, J. (1995).
"Awareness and Understanding in Computer Programs: A Review of Shadows of the Mind by Roger Penrose" (http://journalpsyche.org/ojs-2.2/index.php/psyche/article/view/2403/2332). Psyche 2. McDermott, D. (1995). "Penrose is Wrong" (http://journalpsyche.org/ojs-2.2/index.php/psyche/article/view/2406/2335). Psyche 2. [12] Grush, R., Churchland, P.S. (1995). "Gaps in Penrose's toilings" (http://mind.ucsd.edu/papers/penrose/penrose.pdf). Journal of Consciousness Studies 2 (1): 10-29. [13] Feferman, S. (1996). "Penrose's Gödelian argument" (http://math.stanford.edu/~feferman/papers/penrose.pdf). Psyche 2: 21-32. [14] Tegmark, M. "Importance of quantum decoherence in brain processes". Physical Review E 61: 4194-4206. [15] Engel, G.S., Calhoun, T.R., Read, E.L., Ahn, T.-K., Mancal, T., Cheng, Y.-C., Blankenship, R.E., and Fleming, G.R. (2007). "Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems". Nature 446: 782-786. [16] Hagan, S., Hameroff, S., and Tuszyński, J. "Quantum Computation in Brain Microtubules? Decoherence and Biological Feasibility" (http://arxiv.org/abs/quant-ph/0005025). Physical Review E 65: 061901. [17] Hameroff, S. (2006). "Consciousness, Neurobiology and Quantum Mechanics". In Tuszynski, Jack, The Emerging Physics of Consciousness, Springer, pp. 193-253. [18] Hameroff, S.R. (1998). "Quantum Computation In Brain Microtubules? The Penrose-Hameroff "Orch OR" model of consciousness" (http://www.quantumconsciousness.org/penrose-hameroff/quantumcomputation.html). Philosophical Transactions of the Royal Society London (A) 356: 1869-1896. [19] Kikkawa, M., Ishikawa, T., Nakata, T., Wakabayashi, T., Hirokawa, N. (1994). "Direct visualization of the microtubule lattice seam both in vitro and in vivo" (http://jcb.rupress.org/cgi/content/abstract/127/6/1965). Journal of Cell Biology 127 (6): 1965-1971. doi:10.1083/jcb.127.6.1965. [20] Kikkawa, M., Metlagel, Z. (2006).
"A molecular "zipper" for microtubules" (http://dx.doi.org/10.1016/j.cell.2006.12.009). Cell 127 (7): 1302-1304. doi:10.1016/j.cell.2006.12.009. [21] Reimers, J.R., McKemmish, L.K., McKenzie, R.H., Mark, A.E., and Hush, N.S. (2009). "Weak, strong, and coherent regimes of Fröhlich condensation and their applications to terahertz medicine and quantum consciousness". Proceedings of the National Academy of Sciences 106: 4219-4224. doi:10.1073/pnas.0806273106. [22] Georgiev, D.D. (2007). "Falsifications of Hameroff-Penrose Orch OR model of consciousness and novel avenues for development of quantum mind theory" (http://philsci-archive.pitt.edu/archive/00003049/). NeuroQuantology 5 (1): 145-174. [23] Georgiev, D.D. (2009). "Remarks on the number of tubulin dimers per neuron and implications for Hameroff-Penrose Orch" (http://precedings.nature.com/documents/3860/version/1). NeuroQuantology 7 (4): 677-679. [24] De Zeeuw, C.I., Hertzberg, E.L., Mugnaini, E. "The dendritic lamellar body: A new neuronal organelle putatively associated with dendrodendritic gap junctions". Journal of Neuroscience 15: 1587-1604. [25] Georgiev, D.D. (2009). "Tubulin-bound GTP can not pump microtubule coherence in stable microtubules. Towards a revision of microtubule based quantum models of mind" (http://www.neuroquantology.com/journal/index.php/nq/article/view/358). NeuroQuantology 7 (4): 538-547. [26] McKemmish, L.K., Reimers, J.R., McKenzie, R.H., Mark, A.E., and Hush, N.S. (2009). "Penrose-Hameroff orchestrated objective-reduction proposal for human consciousness is not biologically feasible" (http://link.aps.org/doi/10.1103/PhysRevE.80.021912). Physical Review E 80: 021912-021916. doi:10.1103/PhysRevE.80.021912. [27] http://online.kitp.ucsb.edu/plecture/penrose/
216
217
A hydrogen atom and its constituent particles: an example of a small collection of posited building blocks of the universe
Implicate and Explicate Order In Bohms conception of order, then, primacy is given to the undivided whole, and the implicate order inherent within the whole, rather than to parts of the whole, such as particles, quantum states, and continua. For Bohm, the whole encompasses all things, structures, abstractions and processes, including processes that result in (relatively) stable structures as well as those that involve metamorphosis of structures or things. In this view, parts may be entities normally regarded as physical, such as atoms or subatomic particles, but they may also be abstract entities, such as quantum states. Whatever their nature and character, according to Bohm, these parts are considered in terms of the whole, and in such terms, they constitute relatively autonomous and independent "sub-totalities". The implication of the view is, therefore, that nothing is entirely separate or autonomous. Bohm (1980, p. 11) said: "The new form of insight can perhaps best be called Undivided Wholeness in Flowing Movement. This view implies that flow is, in some sense, prior to that of the things that can be seen to form and dissolve in this flow". According to Bohm, a vivid image of this sense of analysis of the whole is afforded by vortex structures in a flowing stream. Such vortices can be relatively stable patterns within a continuous flow, but such an analysis does not imply that the flow patterns have any sharp division, or that they are literally separate and independently existent entities; rather, they are most fundamentally undivided. Thus, according to Bohms view, the whole is in continuous flux, and hence is referred to as the holomovement (movement of the whole).
…based on the assumption of the complete universality of certain features of a given theory, however general their domain of validity seems to be". Another aspect of Bohm's motivation was to point out a confusion he perceived to exist in quantum theory. On the dominant approaches in quantum theory, he said: "...we wish merely to point out that this whole line of approach re-establishes at the abstract level of statistical potentialities the same kind of analysis into separate and autonomous components in interaction that is denied at the more concrete level of individual objects" (Bohm 1980, p. 174).
Quantum entanglement
Central to Bohm's schema are correlations between observables of entities which seem separated by great distances in the explicate order (such as a particular electron here on earth and an alpha particle in one of the stars in the Abell 1835 galaxy, the farthest galaxy from Earth known to humans), manifestations of the implicate order. Within quantum theory there is entanglement of such objects. This view of order necessarily departs from any notion which entails signalling, and therefore causality. The correlation of observables does not imply a causal influence, and in Bohm's schema the latter represents 'relatively' independent events in space-time, and therefore explicate order. He also used the term unfoldment to characterise processes in which the explicate order becomes relevant (or "relevated"). Bohm likens unfoldment also to the decoding of a television signal to produce a sensible image on a screen. The signal, screen, and television electronics in this analogy represent the implicate order, whilst the image produced represents the explicate order. He also uses an interesting example in which an ink droplet can be introduced into a highly viscous substance (such as glycerine), and the substance rotated very slowly such that there is negligible diffusion of the substance. In this example, the droplet becomes a thread which, in turn, eventually becomes invisible. However, by rotating the substance in the reverse direction, the droplet can essentially reform. When it is invisible, according to Bohm, the order of the ink droplet as a pattern can be said to be implicate within the substance. Further support for this is illustrated by dropping blue ink into a vat of spinning carbon tetrachloride and watching the ink disperse. Reversing the spin of the vat causes the ink to come back together into a blob, after which it spreads out again.
In another analogy, Bohm asks us to consider a pattern produced by making small cuts in a folded piece of paper and then, literally, unfolding it. Widely separated elements of the pattern are, in actuality, produced by the same original cut in the folded piece of paper. Here the cuts in the folded paper represent the implicate order and the unfolded pattern represents the explicate order.
In a holographic reconstruction, each region of a photographic plate contains the whole image
Bohm noted that although the hologram conveys undivided wholeness, it is nevertheless static. In this view of order, laws represent invariant relationships between explicate entities and structures, and thus Bohm maintained that in physics, the explicate order generally reveals itself within well-constructed experimental contexts as, for example, in the sensibly observable results of instruments. With respect to implicate order, however, Bohm asked us to consider the possibility instead "that physical law should refer primarily to an order of undivided wholeness of the content of description similar to that indicated by the hologram rather than to an order of analysis of such content into separate parts".[2]
Karl Pribram and colleagues have presented evidence that indicates that memories do not in general appear to be localized in specific regions of brains
Bohm went on to say: As in our discussion of matter in general, it is now necessary to go into the question of how in consciousness the explicate order is what is manifest ... the manifest content of consciousness is based essentially on memory, which is what allows such content to be held in a fairly constant form. Of course, to make possible such constancy it is also necessary that this content be organized, not only through relatively fixed association but also with the aid of the rules of logic, and of our basic categories of space, time, causality, universality, etc. ... there will be a strong background of recurrent, stable, and separable features, against which the transitory and changing aspects of the unbroken flow of experience will be seen as fleeting impressions that tend to be arranged and ordered mainly in terms of the vast totality of the relatively static and fragmented content of [memories].[3] Bohm also claimed that "as with consciousness, each moment has a certain explicate order, and in addition it enfolds all the others, though in its own way. So the relationship of each moment in the whole to all the others is implied by its total content: the way in which it 'holds' all the others enfolded within it". Bohm characterises consciousness as a process in which at each moment, content that was previously implicate is presently explicate, and content which was previously explicate has become implicate. One may indeed say that our memory is a special case of the process described above, for all that is recorded is held enfolded within the brain cells and these are part of matter in general. The recurrence and stability of our own memory as a relatively independent sub-totality is thus brought about as part of the very same process that sustains the recurrence and stability in the manifest order of matter in general.
It follows, then, that the explicate and manifest order of consciousness is not ultimately distinct from that of matter in general.[4]
Bohm's views bear some similarities to those of Immanuel Kant, according to Wouter Hanegraaff. For example, Kant held that the parts of an organism, such as cells, simultaneously exist to sustain the whole and depend upon the whole for their own existence and functioning. Kant also proposed that the process of thought plays an active role in organizing knowledge, which implies theoretical insights are instrumental to the process of acquiring factual knowledge. Kant restricted knowledge to appearances only and denied the existence of knowledge of any "thing in itself," but Bohm believed that theories in science are "forms of insight that arise in our attempts to obtain a perception of a deeper nature of reality as a whole" (Bohm & Hiley, 1993, p. 323). Thus for Bohm the thing in itself is the whole of existence, conceived of not as a collection of parts but as an undivided movement. In this view Bohm is closer to Kant's critic, Arthur Schopenhauer, who identified the thing in itself with the will, an inner metaphysical reality that grounds all outer phenomena. Schopenhauer's will plays a role analogous to that of the implicate order; for example, it is objectified (Bohm might say it is "made explicate") to form physical matter. And Bohm's concept that consciousness and matter share a common ground resembles Schopenhauer's claim that even inanimate objects possess an inward noumenal nature.

Cells stained for keratin and DNA: such parts of life exist because of the whole, but also to sustain it
In The World as Will and Representation, Schopenhauer (1819/1995) described this ground thus: When I consider the vastness of the world, the most important factor is that this existence-in-itself, of which the world is the manifestation, cannot, whatever it may be, have its true self spread out and dispersed in this fashion in boundless space, but that this endless extension belongs only to its manifestation, while existence-in-itself, on the contrary, is present entire and undivided in everything in nature and in everything that lives. (p. 60)
See also
Implicature Holographic principle The Holographic Universe Holomovement Arthur Schopenhauer Brahman Buddhism Immanuel Kant Kabbalism Laminar flow Meditation for Spiritual Unfoldment Mind's eye Noumenon Parable of the cave Plato Samsara Taoism
For Bohm, life is a continuous flowing process of enfoldment and unfoldment involving relatively autonomous entities. DNA 'directs' the environment to form a living thing. Life can be said to be implicate in ensembles of atoms that ultimately form life.
Unobservables
References
Bohm, D. (1980). Wholeness and the Implicate Order. London: Routledge. ISBN 0-7100-0971-2
Bohm, D., & Hiley, B. J. (1993). The Undivided Universe. London: Routledge. ISBN 0-415-06588-7
Kauffman, S. (1995). At Home in the Universe. New York: Oxford University Press. Hardcover: ISBN 0-19-509599-5; paperback: ISBN 0-19-511130-3
Kauffman, S. (2000). Investigations. New York: Oxford University Press.
Kuhn, T.S. (1961). The function of measurement in modern physical science. ISIS, 52, 161-193.
Schopenhauer, A. (1819/1995). The World as Will and Idea. (D. Berman, Ed.; J. Berman, Trans.). London: Everyman. ISBN 0-460-87505-1
Further reading
Michael Talbot. The Holographic Universe. HarperCollins (1991)
External links
Interview with David Bohm [5] - An interview with Bohm concerning this particular subject matter, conducted by F. David Peat.
Excerpt from The Holographic Universe [6] - Parallels some of the experiences of the 18th-century Swedish mystic Emanuel Swedenborg with David Bohm's ideas.
References
[1] Bohm, 1980, p. 149
[2] Bohm, 1980, p. 147
[3] Bohm, 1980, p. 205
[4] Bohm, 1980, p. 208
John Stewart Bell

…Bohm, it includes a violation of local causality. In 1972 the first of many experiments that have shown (under the extrapolation to ideal detector efficiencies) a violation of Bell's inequality was conducted. Bell himself concluded from these experiments that "It now seems that the non-locality is deeply rooted in quantum mechanics itself and will persist in any completion."[8] This, according to Bell, also implied that quantum theory is not locally causal and cannot be embedded into any locally causal theory. Bell remained interested in objective 'observer-free' quantum mechanics. He stressed that at the most fundamental level, physical theories ought not to be concerned with observables, but with 'be-ables': "The beables of the theory are those elements which might correspond to elements of reality, to things which exist. Their existence does not depend on 'observation'."[9] He remained impressed with Bohm's hidden variables as an example of such a scheme, and he attacked the more subjective alternatives such as the Copenhagen interpretation.[10] Bell seemed to be quite comfortable with the notion that future experiments would continue to agree with quantum mechanics and violate his inequalities. Referring to the Bell test experiments, he remarked: "It is difficult for me to believe that quantum mechanics, working very well for currently practical set-ups, will nevertheless fail badly with improvements in counter efficiency ..."[11] Some people continue to believe that agreement with Bell's inequalities might yet be saved. They argue that in the future much more precise experiments could reveal that one of the known loopholes, for example the so-called "fair sampling loophole", had been biasing the interpretations. This latter loophole, first publicized by Philip Pearle in 1970[12], is such that increases in counter efficiency decrease the measured quantum correlation, eventually destroying the empirical match with quantum mechanics.
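The size of the violation these experiments test can be illustrated with a short script (an illustrative sketch, not part of the source text): for the spin-singlet state the correlation of spin measurements along directions separated by angle a − b is E(a, b) = −cos(a − b), and the CHSH combination of four such correlations reaches 2√2 at the standard measurement angles, beyond the bound of 2 obeyed by any local hidden-variable theory.

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for spin measurements on a
    # singlet pair along directions at angles a and b (radians)
    return -math.cos(a - b)

# Standard CHSH measurement angles that maximize the quantum violation
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4

# CHSH combination: bounded by 2 for local hidden-variable theories
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

print(f"|S| = {abs(S):.4f}")
print(f"local-realist bound = 2, quantum maximum = {2 * math.sqrt(2):.4f}")
```

The printed |S| equals 2√2, the maximum quantum violation; any local hidden-variable model must keep |S| at or below 2, which is what the experiments Bell discusses put to the test.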
Most mainstream physicists are highly skeptical about all these "loopholes", admitting their existence but continuing to believe that Bell's inequalities must fail.
Bell died unexpectedly of a cerebral hemorrhage in Belfast in 1990. His contribution to the issues raised by EPR was significant. Some regard him as having demonstrated the failure of local realism (local hidden variables). Bell's own interpretation is that locality itself met its demise.
See also
Bell's theorem, published in the mid-1960s Bell's spaceship paradox EPR paradox, a thought experiment by Einstein, Podolsky, and Rosen published in 1935 as an attack on quantum theory CHSH Bell test, an application of Bell's theorem Quantum mechanical Bell test prediction Quantum entanglement Local hidden variable theory Bell state Superdeterminism
References
Aczel, Amir D. (2001). Entanglement: The Greatest Mystery in Physics. New York: Four Walls Eight Windows.
Bell, John S. (1987). Speakable and Unspeakable in Quantum Mechanics. Cambridge Univ. Press, ISBN 0-521-36869-3; 2004 edition with introduction by Alain Aspect and two additional papers: ISBN 0-521-52338-9.
Einstein, Albert, Podolsky, B., and Rosen, N. (1935). "Can Quantum Mechanical Description of Physical Reality Be Considered Complete?" Phys. Rev. 47: 777.
Gilder, Louisa (2008). The Age of Entanglement: When Quantum Physics Was Reborn. New York: Alfred A. Knopf.
Pearle, Philip (1970). "Hidden-Variable Example Based upon Data Rejection". Physical Review D 2: 1418-25.
von Neumann, John (1932). Mathematical Foundations of Quantum Mechanics. Princeton Univ. Press. 1996 ed.: ISBN 0-691-02893-1.
External links
MacTutor profile (University of St. Andrews) [13] John Bell and the most profound discovery of science (December 1998) [14] The Most Profound Discovery of Science (September 2006) [15]
References
[1] John Bell, Speakable and Unspeakable in Quantum Mechanics, p. 14
[2] Einstein, et al., "Can Quantum Mechanical Description of Physical Reality Be Considered Complete?"
[3] Bell, p. 196
[4] Introduction to the hidden-variable question, p. 30, in Speakable and Unspeakable in Quantum Mechanics
[5] Against 'measurement', p. 215, in Speakable and Unspeakable in Quantum Mechanics
[6] Bell, p. 1
[7] John von Neumann, Mathematical Foundations of Quantum Mechanics
[8] Bell, p. 132
[9] Bell, p. 174
[10] Bell, pp. 92, 133, 181
[11] Bell, p. 109
[12] Philip Pearle, Hidden-Variable Example Based upon Data Rejection
[13] http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Bell_John.html
[14] http://physicsweb.org/articles/world/11/12/8
[15] http://www.rds.ie/home/index.aspx?id=1755
Debye sheath
The Debye sheath (also electrostatic sheath) is a layer in a plasma which has a greater density of positive ions, and hence an overall excess positive charge, that balances an opposite negative charge on the surface of a material with which it is in contact. Such a layer is several Debye lengths thick, a value whose size depends on various characteristics of the plasma (e.g. temperature, density). A Debye sheath arises in a plasma because the electrons usually have a temperature on the order of or greater than that of the ions and are much lighter. Consequently they are faster than the ions by at least a factor of $\sqrt{m_i/m_e}$. At the interface to a material surface, therefore, the electrons will fly out of the plasma, charging the surface negative relative to the bulk plasma. Due to Debye shielding, the scale length of the transition region will be the Debye length $\lambda_D$. As the potential increases, more and more electrons are reflected by the sheath potential. An equilibrium is finally reached when the potential difference is a few times the electron temperature. The Debye sheath is the transition from a plasma to a solid surface. Similar physics is involved between two plasma regions that have different characteristics; the transition between these regions is known as a double layer, and features one positive and one negative layer.
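The orders of magnitude involved can be checked with a short script (the plasma parameters below are illustrative assumptions, not values from the text): for a hydrogen plasma at equal electron and ion temperature, the electron/ion thermal-speed ratio is $\sqrt{m_i/m_e} \approx 43$, and the Debye length follows from $\lambda_D = \sqrt{\epsilon_0 k_B T_e / (n e^2)}$.

```python
import math

# Physical constants (SI)
eps0 = 8.854e-12   # vacuum permittivity, F/m
e    = 1.602e-19   # elementary charge, C
k_B  = 1.381e-23   # Boltzmann constant, J/K
m_e  = 9.109e-31   # electron mass, kg
m_i  = 1.673e-27   # proton mass (hydrogen ion), kg

# Illustrative plasma parameters (assumed for this example)
T_e = 1.16e4       # electron temperature ~1 eV, in kelvin
n   = 1e16         # electron density, m^-3

# Debye length: the scale of the sheath transition region
lambda_D = math.sqrt(eps0 * k_B * T_e / (n * e**2))

# Thermal-speed advantage of electrons over ions at equal temperature
speed_ratio = math.sqrt(m_i / m_e)

print(f"lambda_D ~ {lambda_D * 1e6:.1f} micrometres")
print(f"v_e/v_i  ~ {speed_ratio:.0f}")
```

For these assumed parameters the Debye length comes out to tens of micrometres and the speed ratio to roughly 43, which is why the electrons initially outrun the ions and charge the wall negative.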
Description
Sheaths were first described by American physicist Irving Langmuir. In 1923 he wrote: Electrons are repelled from the negative electrode while positive ions are drawn towards it. Around each negative electrode there is thus a sheath of definite thickness containing only positive ions and neutral atoms. [..] Electrons are reflected from the outside surface of the sheath while all positive ions which reach the sheath are attracted to the electrode. [..] it follows directly that no change occurs in the positive ion current reaching the electrode. The electrode is in fact perfectly screened from the discharge by the positive ion sheath, and its potential cannot influence the phenomena occurring in the arc, nor the current flowing to the electrode."[1]
Positive ion sheaths around grid wires in a thermionic gas tube
Langmuir and co-author Albert W. Hull further described a sheath formed in a thermionic valve:
"Figure 1 shows graphically the condition that exists in such a tube containing mercury vapor. The space between filament and plate is filled with a mixture of electrons and positive ions, in nearly equal numbers, to
which has been given the name "plasma". A wire immersed in the plasma, at zero potential with respect to it, will absorb every ion and electron that strikes it. Since the electrons move about 600 times as fast as the ions, 600 times as many electrons will strike the wire as ions. If the wire is insulated it must assume such a negative potential that it receives equal numbers of electrons and ions, that is, such a potential that it repels all but 1 in 600 of the electrons headed for it. "Suppose that this wire, which we may take to be part of a grid, is made still more negative with a view to controlling the current through the tube. It will now repel all the electrons headed for it, but will receive all the positive ions that fly toward it. There will thus be a region around the wire which contains positive ions and no electrons, as shown diagrammatically in Fig. 1. The ions are accelerated as they approach the negative wire, and there will exist a potential gradient in this sheath, as we may call it, of positive ions, such that the potential is less and less negative as we recede from the wire, and at a certain distance is equal to the potential of the plasma. This distance we define as the boundary of the sheath. Beyond this distance there is no effect due to the potential of the wire."[2]
Mathematical treatment
The planar sheath equation
The quantitative physics of the Debye sheath is determined by four phenomena:

Energy conservation of the ions: If we assume for simplicity cold ions of mass $m$ entering the sheath with a velocity $u_0$, having charge opposite to the electron, conservation of energy in the sheath potential requires

$\frac{1}{2} m u(x)^2 = \frac{1}{2} m u_0^2 - e\varphi(x)$,

where $e$ is the charge of the electron taken positively, i.e. $e = 1.602 \times 10^{-19}$ coulombs.
Ion continuity: In the steady state, the ions do not build up anywhere, so the flux is everywhere the same:

$n_i(x)\, u(x) = n_{i0}\, u_0$.

Boltzmann relation for the electrons: Since most of the electrons are reflected, their density is given by

$n_e(x) = n_{e0}\, e^{e\varphi(x)/k_B T_e}$.

Poisson's equation: The curvature of the electrostatic potential is related to the net charge density as follows:

$\frac{d^2\varphi}{dx^2} = \frac{e\,(n_e(x) - n_i(x))}{\epsilon_0}$.

Combining these equations and writing them in terms of the dimensionless potential $\chi = -e\varphi/k_B T_e$, position $\xi = x/\lambda_D$, and ion speed (Mach number) $M = u_0/\sqrt{k_B T_e/m}$ yields the planar sheath equation

$\chi'' = \left(1 + 2\chi/M^2\right)^{-1/2} - e^{-\chi}$.
This is easily rewritten as an integral in closed form, although one that can only be solved numerically. Nevertheless, an important piece of information can be derived analytically. Multiplying by $\chi'$ and integrating once gives

$\frac{1}{2}\chi'^2 = M^2\left[\left(1 + 2\chi/M^2\right)^{1/2} - 1\right] + e^{-\chi} - 1$.

Since the left-hand side is a square, the right-hand side must also be non-negative for every value of $\chi$, in particular for small values. Looking at the Taylor expansion around $\chi = 0$, we see that the first term that does not vanish is the quadratic one, so that we can require

$\frac{1}{2}\chi^2\left(1 - 1/M^2\right) \ge 0$, or $M^2 \ge 1$, or $u_0 \ge \sqrt{k_B T_e/m}$.

This inequality is known as the Bohm sheath criterion after its discoverer, David Bohm. If the ions are entering the sheath too slowly, the sheath potential will "eat" its way into the plasma to accelerate them. Ultimately a so-called pre-sheath will develop with a potential drop on the order of $k_B T_e/2e$ and a scale determined by the physics of the ion source (often the same as the dimensions of the plasma). Normally the Bohm criterion will hold with equality, but there are some situations where the ions enter the sheath with supersonic speed.
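The Bohm criterion is easy to verify numerically. Under the standard planar-sheath model, the first integral of the sheath equation gives $(\chi')^2/2 = M^2[(1 + 2\chi/M^2)^{1/2} - 1] + e^{-\chi} - 1$, where $\chi$ is the normalized potential and $M = u_0/\sqrt{k_B T_e/m}$ the ion Mach number; the sketch below (with assumed Mach numbers 1.2 and 0.8) checks that the right-hand side stays non-negative near $\chi = 0$ only when $M \ge 1$.

```python
import math

def rhs(chi, M):
    """Right-hand side of the first integral of the planar sheath equation:
    (chi')^2 / 2 = M^2 * (sqrt(1 + 2*chi/M^2) - 1) + exp(-chi) - 1.
    Must be non-negative, since the left-hand side is a square."""
    return M**2 * (math.sqrt(1.0 + 2.0 * chi / M**2) - 1.0) + math.exp(-chi) - 1.0

# Sample small positive normalized potentials 0.001 .. 0.199
chis = [0.001 * k for k in range(1, 200)]

ok_super = all(rhs(c, 1.2) >= 0 for c in chis)  # supersonic ion entry
ok_sub   = all(rhs(c, 0.8) >= 0 for c in chis)  # subsonic ion entry

print("M = 1.2: RHS non-negative everywhere?", ok_super)
print("M = 0.8: RHS non-negative everywhere?", ok_sub)
```

For M = 1.2 the right-hand side is non-negative throughout, while for M = 0.8 it goes negative at small $\chi$: a monotonic sheath solution exists only for ions entering at or above the Bohm speed.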
If the potential in the sheath is large compared with the electron temperature ($\chi \gg 1$), the electron density can be neglected and the sheath equation simplifies to $\chi'' = M(2\chi)^{-1/2}$. Integrating twice with $\chi = 0$ and $\chi' = 0$ at the sheath edge gives

$\chi_w^{3/4} = \frac{3}{4}\, 2^{3/4} \sqrt{M}\, \frac{d}{\lambda_D}$,

where $\chi_w$ is the (normalized) potential at the wall (relative to the sheath edge), and $d$ is the thickness of the sheath. Substituting $V_w = \chi_w k_B T_e/e$ and $\lambda_D^2 = \epsilon_0 k_B T_e/(n_0 e^2)$, and noting that the ion current into the wall is $J = e n_0 u_0$, we have

$J = \frac{4}{9}\,\epsilon_0\,\sqrt{\frac{2e}{m}}\,\frac{V_w^{3/2}}{d^2}$.
This equation is known as Child's law, after Clement Dexter Child (1868-1933), who first published it in 1911, or as the Child-Langmuir law, honoring as well Irving Langmuir, who discovered it independently and published it in 1913. It was first used to give the space-charge-limited current in a vacuum diode with electrode spacing $d$. It can also be inverted to give the thickness of the Debye sheath as a function of the voltage drop by setting $J = e n_0 \sqrt{k_B T_e/m}$:

$d = \frac{\sqrt{2}}{3}\,\lambda_D\left(\frac{2 e V_w}{k_B T_e}\right)^{3/4}$.
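Plugging in numbers makes the scales concrete. A minimal sketch, assuming illustrative parameters (a roughly 1 eV plasma at $n = 10^{16}\,\mathrm{m^{-3}}$ with a 100 V drop across the sheath, none of which come from the source) and the inverted Child law $d = (\sqrt{2}/3)\,\lambda_D\,(2 e V_w / k_B T_e)^{3/4}$:

```python
import math

# Physical constants (SI)
eps0, e, k_B = 8.854e-12, 1.602e-19, 1.381e-23

# Illustrative parameters (assumed for this example)
T_e = 1.16e4        # electron temperature ~1 eV, in kelvin
n   = 1e16          # plasma density, m^-3
dV  = 100.0         # voltage drop across the sheath, volts

# Debye length, then sheath thickness from the inverted Child law:
#   d = (sqrt(2)/3) * lambda_D * (2*e*dV / (k_B*T_e))**(3/4)
lambda_D = math.sqrt(eps0 * k_B * T_e / (n * e**2))
d = (math.sqrt(2) / 3) * lambda_D * (2 * e * dV / (k_B * T_e)) ** 0.75

print(f"lambda_D = {lambda_D * 1e6:.1f} um")
print(f"sheath thickness d = {d * 1e6:.0f} um ({d / lambda_D:.1f} Debye lengths)")
```

A drop of a few electron temperatures gives a sheath of a few Debye lengths, consistent with the description above; a large drop like the 100 V assumed here thickens the sheath to a few dozen Debye lengths, since $d$ scales as $V_w^{3/4}$.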
See also
Ambipolar diffusion
References
[1] Langmuir, Irving, "Positive Ion Currents from the Positive Column of Mercury Arcs" (http://adsabs.harvard.edu/abs/1923Sci....58..290L) (1923) Science, Volume 58, Issue 1502, pp. 290-291
[2] Albert W. Hull and Irving Langmuir, "Control of an Arc Discharge by Means of a Grid" (http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=522437), Proc Natl Acad Sci U S A. 1929 March 15; 15(3): 218-225
License
License
Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/