
Holon (philosophy)
A holon (Greek: ὅλον, holon, the neuter form of ὅλος, holos, "whole") is something that is simultaneously a whole and a part. The word was coined by Arthur Koestler in his book The Ghost in the Machine (1967, p. 48). Koestler proposed the notion of the holon on the basis of two observations. He also says holons are autonomous, self-reliant units that possess a degree of independence and handle contingencies without asking higher authorities for instructions. Finally, Koestler defines a holarchy as a hierarchy of self-regulating holons that function first as autonomous wholes in supra-ordination to their parts, secondly as dependent parts in sub-ordination to controls on higher levels, and thirdly in coordination with their local environment. A significant feature of Koestler's concept of holarchy is that it is open-ended in both the macrocosmic and the microcosmic dimensions.
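
The part/whole duality of a holarchy maps naturally onto a recursive containment structure. The sketch below is only an illustration of that reading, not anything Koestler specified; the names (Holon, add_part, handle) are hypothetical.

```python
# Minimal sketch of a holarchy as a recursive containment structure.
# Class and method names are illustrative only.

class Holon:
    def __init__(self, name):
        self.name = name
        self.parent = None   # the higher-level whole this holon is part of
        self.parts = []      # the lower-level holons it is a whole over

    def add_part(self, part):
        part.parent = self
        self.parts.append(part)
        return part

    def handle(self, contingency, can_resolve):
        """Resolve a contingency locally if possible (self-reliance);
        otherwise defer to the higher level (sub-ordination)."""
        if can_resolve(self):
            return f"{self.name} resolved '{contingency}' autonomously"
        if self.parent is not None:
            return self.parent.handle(contingency, can_resolve)
        return f"'{contingency}' escalated past the top of the holarchy"


organism = Holon("organism")
organ = organism.add_part(Holon("organ"))
cell = organ.add_part(Holon("cell"))

print(cell.handle("local damage", lambda h: h.name == "organ"))
```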

Second-order cybernetics Second-order cybernetics, also known as the cybernetics of cybernetics, investigates the construction of models of cybernetic systems. It studies cybernetics with the awareness that the investigators are part of the system, and with attention to self-referentiality, self-organization, the subject–object problem, and related issues. Investigators of a system can never see how it works by standing outside it, because they are always engaged cybernetically with the system being observed; that is, when investigators observe a system, they affect and are affected by it. The anthropologists Gregory Bateson and Margaret Mead contrasted first- and second-order cybernetics in a 1973 interview, using a diagram that emphasizes the requirement for a possibly constructivist participant observer in the second-order case: ". . . essentially your ecosystem, your organism-plus-environment, is to be considered as a single circuit."[1]
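
The claim that the observer is inside the loop can be illustrated with a toy simulation in which the act of measuring a state feeds back into the next state. The dynamics and coefficients below are invented purely for illustration and do not come from the cybernetics literature.

```python
# Toy illustration: an observer whose measurements perturb the system observed.
# Dynamics and coefficients are invented for illustration only.

def step(state, observation_effort):
    # The system relaxes toward 1.0, but each observation nudges it.
    drift = 0.1 * (1.0 - state)
    perturbation = 0.05 * observation_effort
    return state + drift + perturbation

state_unobserved = 0.5
state_observed = 0.5
for _ in range(50):
    state_unobserved = step(state_unobserved, observation_effort=0.0)
    state_observed = step(state_observed, observation_effort=1.0)

print(f"without observer: {state_unobserved:.3f}")
print(f"with observer in the loop: {state_observed:.3f}")
```

The two runs settle to different values: the observed system is literally a different system once the observer's activity is included in the circuit.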

Holonomic brain theory The holonomic brain theory, developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm, is a model of human cognition that describes the brain as a holographic storage network.[1][2] Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which differ from the more commonly known action potentials involving axons and synapses.[3][4][5] These oscillations are waves and create wave interference patterns in which memory is encoded naturally, in a way that can be described with Fourier transform equations.[3][4][5][6][7] Gabor, Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which also uses Fourier transforms.[1][8] In a hologram, any part of the hologram with sufficient size contains the whole of the stored information.
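
The Fourier-transform description can be made concrete in a few lines of NumPy: a pattern is "stored" as its frequency-domain representation and recovered exactly by the inverse transform. This is only a sketch of the mathematical analogy the theory invokes, not a model of dendritic processing.

```python
# Sketch of the Fourier-transform analogy: a pattern is stored in the
# frequency domain and recovered by the inverse transform.
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.normal(size=256)          # stand-in for a memory trace

stored = np.fft.fft(pattern)            # "encode": every coefficient mixes every sample
recovered = np.fft.ifft(stored).real    # "decode": the inverse transform restores it

print(np.allclose(pattern, recovered))  # True: the transform pair is lossless
```

Because every Fourier coefficient depends on every sample, the stored record is distributed rather than local, which is the property the hologram comparison rests on.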

Living systems Some scientists have proposed in the last few decades that a general living systems theory is required to explain the nature of life.[1] Such a general theory, arising out of the ecological and biological sciences, attempts to map general principles for how all living systems work. Instead of examining phenomena by attempting to break things down into components, a general living systems theory explores phenomena in terms of dynamic patterns of the relationships of organisms with their environment.[2] Living systems theory is a general theory about the existence of all living systems, their structure, interaction, behavior and development. Miller said that systems exist at eight "nested" hierarchical levels: cell, organ, organism, group, organization, community, society, and supranational system. The subsystems that process matter–energy are the ingestor, distributor, converter, producer, storage, extruder, motor, and supporter; a parallel set of subsystems processes information. All nature is a continuum.
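
Miller's eight levels can be read as a strict containment hierarchy; the tiny sketch below simply encodes that ordering. The level names come from the text above, while the representation itself is an illustrative assumption.

```python
# Miller's eight nested levels, encoded as an ordered containment hierarchy.
# The level names come from the text; the representation is illustrative only.
LEVELS = [
    "cell", "organ", "organism", "group",
    "organization", "community", "society", "supranational system",
]

def contains(outer: str, inner: str) -> bool:
    """A higher-level living system nests all lower-level ones."""
    return LEVELS.index(outer) > LEVELS.index(inner)

print(contains("society", "organism"))   # True
print(contains("cell", "organ"))         # False
```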

Systems science Systems science is an interdisciplinary field that studies the nature of complex systems in nature, society, and science itself. It aims to develop interdisciplinary foundations that are applicable in a variety of areas, such as engineering, biology, medicine, and the social sciences.[1] Systems science covers formal sciences such as complex systems, cybernetics, dynamical systems theory, and systems theory, as well as applications in the natural and social sciences and engineering, such as control theory, operations research, social systems theory, systems biology, system dynamics, systems ecology, systems engineering and systems psychology.[2] Since the emergence of general systems research in the 1950s,[3] systems thinking and systems science have developed into many theoretical frameworks.

Systems thinking A system is composed of interrelated parts or components (structures) that cooperate in processes (behavior). Natural systems include biological entities, ocean currents, the climate, the solar system and ecosystems. Designed systems include airplanes, software systems, technologies and machines of all kinds, government agencies and business systems. Systems thinking has at least some of its roots in the general system theory advanced by Ludwig von Bertalanffy in the 1940s and furthered by Ross Ashby in the 1950s. The term is sometimes used as a broad catch-all heading for the process of understanding how systems behave, interact with their environment and influence each other. Systems thinking has been applied to problem solving by viewing "problems" as parts of an overall system rather than reacting to specific parts, outcomes or events, an approach that helps avoid contributing to the further development of unintended consequences.

Complex system This article largely discusses complex systems as a subject of mathematics and the attempts to emulate physical complex systems with emergent properties. For other scientific and professional disciplines addressing complexity in their fields, see the complex systems article and references. A complex system is a system composed of many interacting components whose aggregate behaviour exhibits properties, such as emergence and nonlinearity, that cannot be readily predicted from the behaviour of the individual parts. Although it is arguable that humans have been studying complex systems for thousands of years, the modern scientific study of complex systems is relatively young in comparison to conventional fields of science with simple-system assumptions, such as physics and chemistry. For a dynamical system to be classified as chaotic, it must be sensitive to initial conditions, be topologically mixing, and have dense periodic orbits.[2]
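
Sensitivity to initial conditions is the easiest of these properties to see numerically. The logistic map is a standard textbook example of a chaotic system; the short sketch below (our illustration, not part of the article) runs two trajectories that start 10^-8 apart and watches them diverge.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x),
# a standard example of a chaotic system (chosen here for illustration).
def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.20000000)
b = logistic_trajectory(0.20000001)   # initial condition differs by 1e-8

# The tiny initial difference grows until the trajectories are unrelated.
for t in (0, 10, 20, 30, 40):
    print(t, abs(a[t] - b[t]))
```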

System dynamics System dynamics is an approach to understanding the behaviour of complex systems over time; a dynamic stock-and-flow diagram of a new-product adoption model (from an article by John Sterman, 2001) is a typical example. It deals with internal feedback loops and time delays that affect the behaviour of the entire system.[1] What distinguishes system dynamics from other approaches to studying complex systems is its use of feedback loops and of stocks and flows. These elements help describe how even seemingly simple systems display baffling nonlinearity. System dynamics (SD) is a methodology and mathematical modeling technique for framing, understanding, and discussing complex issues and problems. Convenient graphical system dynamics software had developed into user-friendly versions by the 1990s and has been applied to diverse systems. System dynamics is an aspect of systems theory used as a method for understanding the dynamic behavior of complex systems.
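
The stock-and-flow idea can be made concrete in a few lines of code. The sketch below is a minimal Euler-integrated adoption model with a reinforcing word-of-mouth loop, in the spirit of the new-product adoption example mentioned above; the function name and parameter values are illustrative assumptions, not Sterman's published model.

```python
# Minimal stock-and-flow sketch: product adoption with a word-of-mouth
# feedback loop, integrated with a simple Euler step.  Parameter values
# are invented for illustration.
def simulate(population=10_000, p=0.03, q=0.4, dt=0.25, years=10):
    adopters = 0.0                         # stock
    history = []
    for _ in range(int(years / dt)):
        potential = population - adopters
        # flow: adoption from external influence (p) plus word of mouth (q)
        adoption_rate = p * potential + q * (adopters / population) * potential
        adopters += adoption_rate * dt     # the stock accumulates its inflow
        history.append(adopters)
    return history

traj = simulate()
print(f"adopters after 10 years: {traj[-1]:.0f} of 10000")
```

Even this tiny model produces the characteristic S-shaped adoption curve, a nonlinearity that emerges from the loop structure rather than from any complicated equation.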

Autopoiesis A living cell, for example one undergoing mitosis, is an example of an autopoietic system. The original definition can be found in Autopoiesis and Cognition: The Realization of the Living (1st edition 1973, 2nd 1980). Page 78: "An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network."[1] Page 89: "[...] the space defined by an autopoietic system is self-contained and cannot be described by using dimensions that define another space."

List of unsolved problems in philosophy This is a list of some of the major unsolved problems in philosophy. Clearly, unsolved philosophical problems exist in the lay sense (e.g. "What is the meaning of life?", "Where did we come from?"). In aesthetics, essentialism is the idea that each artistic medium has its own particular strengths and weaknesses, contingent on its mode of communication; the related problem of what counts as an art object arose originally from the practice rather than the theory of art. Epistemological problems are concerned with the nature, scope and limitations of knowledge, and include the Gettier problem, infinite regress, the Molyneux problem, the Münchhausen trilemma, and qualia. In 1963, Edmund Gettier published an article in the periodical Analysis entitled "Is Justified True Belief Knowledge?", and in response numerous philosophers have offered modified criteria for "knowledge". The list also covers problems in ethics.

Timeline of Western philosophers A wide-ranging list of philosophers from the Western traditions of philosophy. Included are not only philosophers (Socrates, Plato), but also those who have had a marked importance upon the philosophy of the day. The list stops at the year 1950, after which philosophers fall into the category of contemporary philosophy. It spans the classical philosophers (600–300 BCE); Hellenistic philosophers such as Carneades (c. 214 – 129 BCE) and Lucretius (c. 99 – 55 BCE); Roman-era philosophers such as Cicero (106 – 43 BCE) and Philo (c. 20 BCE – 40 CE); Western medieval-era philosophers such as al-Farabi (c. 870 – 950) and Ibn Sina (Avicenna) (c. 980 – 1037); and the early modern philosophers from 1500 onward.

Comparison between Karl Pribram's "Holographic Brain Theory" and more conventional models of neuronal computation One of the problems facing neural science is how to explain evidence that local lesions in the brain do not selectively impair one or another memory trace. Note that in a hologram, restricted damage does not disrupt the stored information, because the information has become distributed: it is blurred over the entire extent of the holographic film, but in such a precise fashion that it can be deblurred by performing the inverse procedure. This paper discusses in detail the concept of a holograph and the evidence Karl Pribram uses to support the idea that the brain implements holonomic transformations that distribute episodic information over regions of the brain (and later "refocuses" it into the form in which we re-member). It is necessary to explain the concepts of a hologram and Fourier transforms before the physiological experiments can be understood; Chapter 4 then reviews the evidence for the alternative holonomic view.
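
The "blurred but deblurrable" claim can be demonstrated numerically: store a signal as its Fourier transform, destroy part of the stored record, and invert. The damage spreads thinly over the whole reconstruction instead of erasing a contiguous piece of it. The NumPy sketch below is our illustration of that property, not code from the paper.

```python
# Damage to a Fourier-domain record spreads thinly over the whole signal
# ("blurring"), whereas damage to a direct record erases a chunk outright.
# A minimal illustration of the hologram analogy.
import numpy as np

rng = np.random.default_rng(0)
signal = np.cumsum(rng.normal(size=512))          # structured test signal

# Direct storage: a scratch destroys samples 200-311 entirely.
direct = signal.copy()
direct[200:312] = 0

# Fourier ("holographic") storage: the same-sized scratch hits coefficients,
# and the reconstruction is a slightly blurred version of the WHOLE signal.
record = np.fft.fft(signal)
record[200:312] = 0
blurred = np.fft.ifft(record).real

print("max error, direct damage  :", np.max(np.abs(signal - direct)))
print("max error, Fourier damage :", np.max(np.abs(signal - blurred)))
```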

Kolmogorov Complexity – A Primer The Complexity of Things Previously on this blog (quite a while ago), we've investigated some simple ideas of using randomness in artistic design (psychedelic art, and earlier randomized css designs), and measuring the complexity of such constructions. Here we intend to give a more thorough and rigorous introduction to the study of the complexity of strings. This naturally falls into the realm of computability theory and complexity theory, and so we refer the novice reader to our other primers on the subject (Determinism and Finite Automata, Turing Machines, and Complexity Classes; but Turing machines will be the most critical to this discussion). The Problem with Randomness What we would really love to do is be able to look at a string of binary digits and decide how "random" it is. And yet, by the immutable laws of probability, each string has an equal chance ($2^{-50}$) of being chosen at random from all sequences of 50 binary digits. Definition: The Kolmogorov complexity of a string $w$, denoted $K(w)$, is the length of the shortest program that outputs $w$ given no input.
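
Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude upper bound on description length, which is enough to see the structured-versus-random contrast the primer is driving at. The sketch below is our illustration, not code from the post.

```python
# Crude upper-bound proxy for description length: the size of a
# zlib-compressed encoding.  Kolmogorov complexity is uncomputable;
# this only illustrates the structured-vs-random contrast.
import random
import zlib

def compressed_length(bits: str) -> int:
    return len(zlib.compress(bits.encode("ascii"), 9))

structured = "01" * 500                                   # 1000 highly regular bits
random.seed(0)
randomish = "".join(random.choice("01") for _ in range(1000))

print("structured:", compressed_length(structured), "bytes")
print("random-ish:", compressed_length(randomish), "bytes")
```

The regular string compresses to a tiny description, while the coin-flip string resists compression, mirroring the intuition that a random string has no description much shorter than itself.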

The Philosopher Stoned: Neuroscience News (Picower) In the 1983 movie "The Man with Two Brains," Steve Martin kept his second brain in a jar. In reality, he had two brains inside his own skull, as we all do: one in the left hemisphere and one in the right. When it comes to seeing the world around us, each of our two brains works independently, and each has its own bottleneck for working memory. Normally, it takes years or decades after a brand-new discovery about the brain for any practical implications to emerge. Monkeys, amazingly, have the same working memory capacity as humans, so Earl Miller, the Picower Professor of Neuroscience in MIT's Picower Institute for Learning and Memory, and Timothy Buschman, a postdoctoral researcher in his lab, investigated the neural basis of this capacity limitation in two monkeys performing the same test used to explore working memory in humans. In other words, monkeys, and by extension humans, do not have a capacity of four objects, but of two plus two.
