Is a universal ontology feasible? Can a universal ontology help resolve the many "wicked problems" that beset the world today?
These notes begin to tie together an array of possibilities, many drawn from well-established and classical approaches, and some involving exploratory new ideas.
In this framework, by the phrase "semantic ontology" we are referring to a broad and universal approach to the fundamentals of word definition and the nature of categories and concepts. How do human beings form their ideas, how are the objects of the world categorized, and how are judgments formed based on those categories?
Clearly, this is hugely important in human affairs. In law, for example, the categories "guilty" and "not guilty" are critical. How do we define the boundaries of these terms, such that we know at what point some particular person in some particular context should be understood as "guilty" or "not guilty"? What makes the difference, and how do we know for sure?
Human beings "draw lines". They "create boundaries". Those boundaries determine how human beings regulate and manage their world -- at all levels, ranging from the most intimate and local (the family, a small local group) to the regional to the national to the international. This subject is critical to a "world that works for everyone". It's critical to any sort of successful self-governance or democracy.
This project is about firming up and clarifying foundational methods for successful large-scale collaboration.
A WORK IN PROGRESS
We are gathering together many ideas from mathematics and computer science, drawing them together as an exploration of a simple but inclusive thesis: all conceptual structure can be understood or interpreted as the expression of a single constructive principle -- the notion of "cut" or distinction.
In very simple terms, this project is an exploration of this single idea: all conceptual structure (including all natural language and all mathematical forms, such as sets and classes) can be interpreted or integrated through one (vastly simplifying) idea: the notion of "cut on a cut" defined in a recursive (repetitive, self-similar) way. Most of this project can be understood as working notes towards the clarification of this single thesis. The objective of the project is to "prove" the authenticity or accuracy of this thesis, perhaps illustrating its usefulness along the way.
BASIC DEVELOPMENT STRATEGY
This project has roots that extend back to the computer science of the 1970s, and there are many contributing threads and facets. We are putting this vision back together as it originally emerged, but updated in terms of modern understanding and employing the current power of the Internet, Google and Wikipedia. The plan: gather up all the themes that seem relevant, and look for integrating ways to combine them. Keep exploring ways to combine these elements in ways that satisfy the instinct for simplicity and parsimony. Include scientific and mathematical and philosophical ideas -- and be aware of the richly illustrated history of intuition. These are not new questions, but the answers seem to be coming together in powerful new ways.
DIMENSIONS AND CUTS
In the framework explored here, main facets of this approach include an innovative/experimental concept called "synthetic dimensionality", and the underlying notion that all concepts are assembled from ("composed out of") dimensions.
What are dimensions? We explore a general approach to concept formation that understands dimensions (features, factors, attributes, aspects, characteristics, properties) as composite abstract objects composed of "cuts", in a form that includes "quantitative" dimensionality, but also includes "qualitative" descriptors as well.
A "cut " is a distinction -- a small difference we can name and measure in some way, and taking a form similar or identical to the idea of the Dedekind Cut in the traditional foundations of algebraic mathematics and the definition of a "real number". Concepts are composite units composed of nested implicit cuts defined across levels of abstraction -- and their quantitative meaning is grounded in quantitative dimensionality.
A recent article by Stephan Manning and Juliane Reinecke, republished by GovLab, reviews the issue of "wicked problems" and offers a suggestion: don't go for grand solutions to the toughest and most intractable world problems.
We live in a world burdened by large-scale problems that refuse to go away: the refugee crisis; terrorism; rising sea levels; frequent floods, droughts and wildfires; not to mention persistent inequality and violation of basic human rights across the world.
What do these problems have in common? They resist any simple solution. In policy research they are called “wicked.” This is because cause-effect relations are complex and solutions unclear; many of these problems are urgent, yet there is no central authority to solve them; their magnitude is often hard to estimate; and those trying to solve them may even contribute to causing them.
Don't try to fix everything at once. Instead, consider more "modular" approaches -- not entirely "integral" or "holistic" or "unified" -- but still incorporating a kind of "aggregating" or "chunking" process, that pulls together solution methods from a variety of related fields.
"THE HIGH-LEVEL SOLUTION DOESN'T REACH THE GROUND..."
The Manning and Reinecke article cites the UN as offering a "grand solution to a wicked problem", but suggests that such solutions are generally too vague and too controversial to be of much actual help. Yes, perhaps some over-arching and all-inclusive approach might eventually be desirable -- and we're looking into such things here -- but so far, in human experience, these extremely ambitious and theoretical approaches tend to vaporize under the stresses of real world experience.
A CREATIVE INQUIRY
This project as it stands today is a comprehensive approach to the fundamentals of conceptual structure, guided by an instinct for simplification. We are supposing there is a line of reasonable thinking that pulls all these pieces together, and in so doing, takes a crack at interpreting a very wide array of facets and factors from epistemology and cognitive psychology -- as well as the foundations of mathematics and computer science. Hierarchy, taxonomy, top-down, bottom-up, continuity, center-point -- these are all factors. But how? The plan right now is -- take the content that exists here, and keep combing through it -- adding new factors as they appear, getting rid of the high redundancy -- and make it all as concise and clear as possible.
As we are imagining it, the great dream, the "holy grail" -- is something along the lines of a brief block of algebraic definitions that generalize conceptual space in an irrefutably persuasive way, perhaps "closing the space" in a form that locks the energy of logic and rationality into an absolutely stable format. THIS thing is the container, the "silo of all silos", the solar ring, the "one ring to rule them all"....
This is an innovative and exploratory project emerging at the border between psychology, philosophy and computer science. In this framework, we are exploring some challenging questions that could be difficult or impossible to consider in a narrower professional working context. Though grounded in mainline and established academic and scientific ideas, the approach developed here is original and, insofar as I know, not found elsewhere.
Our method involves taking a "constructivist" approach to the definition of data structures and concepts. It is built on the argument that absolutely any mathematical or semantic or logical concept or term can be given an explicit and zero-ambiguity definition in terms of "cuts" (distinctions), in ways that are as closely linked to the real number line and quantitative dimensionality as our computing power and error tolerances allow.
It seems that every mathematical or logical or epistemological concept can be defined in this way, in a bottom-up construction that assembles composite logical/abstract objects from the fundamental constructive element "cut" (or distinction). Distinctions create dimensions, and following this method, everything that exists can be described in algebraic dimensions, and explicitly represented in absolutely every detail in a machine logic.
I am not attempting to solve special-purpose problems that arise within a specific industry or can be located within a "domain". Instead, this is a very broadly conceived and global project that takes on a wide variety of issues and seeks universal general-purpose solutions, perhaps interconnecting domains. These broader objectives involve questions of "interoperability" between social or professional groups, and general issues of human cooperation in a context of diversity.
My definition of "universal ontology" or "upper ontology" might be significantly different from a commonly-accepted industry standard. I believe that all concepts can be defined in a bottom-up way using the particular dimension-based method developed here. This approach leads to the notion that an "upper ontology" that guides or controls or mediates all semantic processes within its framework can become 100% atomically fluid and continuously adaptive in context-sensitive ways in every and any conceivable dimension.
This is true because every element within the logic is defined with this "atomic" level of adaptive fluency. This framework makes the conceptual model "near-continuously variable" in any dimension of concern, and it applies to any question or issue or definition where something can be described in dimensions.
Themes
What is meant by constructivism?
What is a "cut"? How is this idea related to "Dedekind cut"?
How can it make sense to build all mathematical or logical objects from one concept?
How can this approach be grounded in a way that could conceivably become consensual?
A universal theory of concepts
There is today no widely-accepted theory of concepts, and there are many ways to understand what a concept is and how concepts are formed or grouped together. In this framework, I offer a new or innovative approach to the general theory of conceptual structure. This new perspective is based on this atomic approach to compositional semantics, asserting that any concept can be most accurately and parsimoniously understood as an integrated aggregate of cuts or distinctions, which can be interpreted in terms of dimensions.
This project is intended to develop a universal interpreter of conceptual form, and I believe this method is that, or can become that. As is obvious, these notes are a work in progress, with much of the raw processing and sketches visible in this online framework.
Though it is of course true that there is a common and generally-understood pool of word meanings -- general communication in society would not be possible without this approximate shared pool -- under formal circumstances all word and concept definition is essentially stipulative and top-down. A particular interpretation or meaning of some word or concept in a specific context is "assigned" by the parties to that context and explicitly defined by negotiated agreement.
This is the approach to word meaning employed by lawyers drawing up big contracts. Every object, every action, is defined in precise dimensional detail characterized by mutually acceptable boundary values. The exact meaning of a term is "stipulated". Without such stipulation, the meaning of words can be vague, blurry, or ambiguous -- which can be dangerous or very expensive in the context of big-time (hundreds of millions of dollars) contracting. No important specific detail is left to the imagination.
The particulars of word and concept definition embody and convey the intent of the person or group using the term. A contract or agreement between people is a stipulated definition defined in as much precise detail as participants feel is important.
All words and concepts can be stipulatively defined in terms of dimensionality.
Semantic dimensionality can be constructed in all its detail from the single concept of "cut" or distinction, beginning with the real number line, and the definition of a "computable number" defined as a "computable cut".
The intended meaning of "qualitative" dimensions can be stipulatively defined in terms of "quantitative" dimensions.
Dimensions are composite objects composed of the fundamental primitive element "distinction" -- which we define in terms related to the Dedekind definition of the real number line -- as a "cut".
All abstract concepts are therefore composed of a hierarchical cascade of implicitly-nested cuts (distinctions).
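The claims above -- dimensions composed of cuts, concepts as hierarchical cascades of nested distinctions -- can be sketched as a small recursive data structure. The class names, the "boundary value range" on each cut, and the example dimensions are all illustrative assumptions, not a fixed vocabulary of the framework.

```python
from dataclasses import dataclass, field

# Illustrative sketch: a Dimension is composed of primitive cuts and/or
# nested sub-dimensions -- "dimensions made out of dimensions", recursively.

@dataclass
class Cut:
    name: str
    lower: float   # boundary value range: the cut is known only to lie
    upper: float   # between these bounds; no further distinction inside

@dataclass
class Dimension:
    name: str
    cuts: list = field(default_factory=list)    # primitive distinctions
    parts: list = field(default_factory=list)   # nested sub-dimensions

def leaf_cuts(dim):
    """Flatten the cascade: collect every primitive cut at every level."""
    found = list(dim.cuts)
    for sub in dim.parts:
        found.extend(leaf_cuts(sub))
    return found

# A toy composite concept built from two nested dimensions.
temp = Dimension("temperature", cuts=[Cut("warm/cold threshold", 18.0, 20.0)])
climate = Dimension("climate", parts=[
    temp,
    Dimension("humidity", cuts=[Cut("wet/dry threshold", 0.4, 0.6)]),
])

assert [c.name for c in leaf_cuts(climate)] == [
    "warm/cold threshold", "wet/dry threshold"]
```

The recursion terminates, as the text says, at some level determined by practicality: the leaf cuts are simply the distinctions we have chosen not to subdivide further.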
And yes, to advance this or any similar vision of intentional collaboration into the world, its "early adopters" will have to want to do it. This is an optional and intentional choice for cutting-edge cultural leaders who know we must work together on this planet.
Boundary values
Computable or "finite" number - defined in a finite number of "decimal places" or a finite number of matrix cells -- a finite number of bits. The distinction between values defines a "unit interval" between values which is essentially unknown and presumed to be continuous.
The demarcations between data cells are akin to "cuts" -- they make a distinction in the continuity of definition -- so this is part of the general recursion that makes this process so mysterious.
All these levels appear to be "self-similar" in important ways -- and maybe highly/fully/absolutely recursive in some critical dimensions.
Everything is defined as a "cascade of cuts across levels of abstraction" -- with the notion of "cut" defined to some specific level of round-off error -- so, understood this way, even a "cut" has an upper and lower bound -- with a defined range within those bounds containing no further distinction.
In general, in practical contexts, the number of cuts is constrained by "acceptable error tolerance". How accurate do we have to be? How accurate CAN we be, given the immediate practical realities? Accuracy is defined within a boundary value range. The "real" value, we have to accept, is not entirely knowable, but is "somewhere within this defined range".
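The idea of a boundary value range constrained by acceptable error tolerance can be made concrete in a few lines. This is a hedged sketch: the class name and methods are invented for illustration, but they capture the claim that the "real" value is unknowable while the range containing it is explicit.

```python
# Sketch of a "boundary value range": the actual value is not entirely
# knowable, but is known to lie somewhere within lower and upper bounds
# chosen according to the practical error tolerance of the context.

class BoundedValue:
    def __init__(self, lower, upper):
        assert lower <= upper
        self.lower, self.upper = lower, upper

    def contains(self, x):
        """The unknowable 'real' value could be any x inside the bounds."""
        return self.lower <= x <= self.upper

    def within_tolerance(self, tolerance):
        """Is the range narrow enough for the immediate practical context?"""
        return (self.upper - self.lower) <= tolerance

measurement = BoundedValue(9.78, 9.84)   # e.g. a value measured to +/- 0.03
assert measurement.contains(9.81)
assert measurement.within_tolerance(0.1)
assert not measurement.within_tolerance(0.01)
```

"How accurate do we have to be?" then becomes a concrete test: does the width of the range fall inside the tolerance the context demands?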
Integration
If we can understand or frame these issues in the right way, we are supposing it might be possible that the form of "absolutely all" conceptual structure can be understood as 100% linearly recursive -- or, in other words, can be 100% explicitly defined in every detectable detail by a simple algebraic cascade defined in one algebraic primitive.
If we can then close the space of this structure through a topological transformation by showing that this format now "contains everything", we will have created what amounts to something like the global community of ideas, across all sectors, divisions, cultural distinctions and systems of value and measurement.
Reality and its symbolic representation -- everything about the correlation of an abstract symbol to the "physical reality" it is intended to represent.
What is "science"? Can science occur solely within the mind? We are supposing that the answer should be no.
Constructionist definitions -- why they are essential and very empowering -- why a computerized society should base its science on explicitly machine-instantiated logic.
What's wrong with "primitive notions and axioms"? They are ungrounded, and they make sweeping unconscious presumptions about the correlation of their symbolism with reality. This is a radical thesis and somewhat counter-intuitive -- so we have to clearly map the line of connection between an abstract symbol and its referent. Let RBM = "the set of all men with red beards". Can we now safely run set operations on the logical object RBM? The sensitive issue lies in the questions of what is a red beard, and what is a man.... Such primitives are "unscientific" because they are about something (undefined) going on in the mind of the mathematician.
What is "reality"? We say that reality is a continuum with no divisions or distinctions, as per the
quote from Plotinus. All borders and boundaries that appear in reality are creations of the mind. But this too is counter-intuitive and must be tempered.
"Reality" and "Model of reality"
Correspondence between abstract model and the continuous undifferentiated reality that is the continuum.
Science emerges as the process of establishing correlation between this abstract model and testable/reproducible experience. How is the model connected to the reality? How does the experience of the reality confirm the model?
Reality is "the one" -- and "the whole" -- the "all" -- because it is undifferentiated
Cuts and distinctions "arise" within this context, driven by human motivation -- which we can generally ground in survival and human welfare -- "It is helpful for me to make this distinction because it contributes to a skill I need that will help make me more comfortable or safer or accomplish some other fundamental evolutionary objective."
Machine-grounded constructivist definitions and philosophy
We are supposing that a mathematical science that accepts as "proof" something that seems logical to some mathematician or some school of mathematicians is not really a science. For something to be "scientific", it would seem, it must be independently reproducible in some medium outside the mind of the observer or peer-review group.
"Primitives" are confusing and avoidable -- there are better ways. Insofar as possible, all "primitive notions" should be replaced by explicit and absolute compositional definitions grounded in the real number line and the concept of cut.
There are a variety of ways to determine "what a word means".
In this framework, while appreciating and acknowledging common word usage and "lexical definitions" (dictionary definitions), we emphasize "stipulative" definition: a word means what the user of the word intends it to mean.
STIPULATION
Stipulative definition is 100% context-sensitive, and shapes the particular intended meaning of any word or concept for the particular and precise intended meaning in the actual context of usage. This includes the speaker's understanding of the listener's expectations and how the listener will most likely understand the meaning of the words chosen. In the context of dialogue and conversation, stipulative definition enables the user of the word to add further specificity to intended meaning through a process of "drill-down." If the user of a word or phrase has reason to suppose that the intended meaning is being misunderstood, the user can provide additional specificity. If the listener is not certain of particular intended meaning, the listener can inquire, asking questions that help clarify intended meaning.
In general, we propose that all word meaning and conceptual definition should be recognized as stipulative. Any sort of formal consensual definition (an agreement among members of a group) is formally stipulated. Any other approach to definition is inherently uncertain and must be based on statistical probability. Of course, in common every-day usage, people must rely on common meanings. People must have a reasonable expectation that another person will know what they mean by a word. But under pressure, when we must be absolutely certain, definition must be stipulated -- whether by one person, or by a group through a process of formally negotiated agreement.
A WORD IS A LABEL FOR ITS MEANING
A word is a label or name for its intended specific meaning, which may be complex and left implicit for reasons of convenience and economy. If potentially ambiguous -- this word or phrase or concept could be interpreted in more than one way -- this ambiguity of interpretation can be defused through further discussion or "drill-down" inquiry. In some context of communication or negotiation, if more detail is needed and more specifics are required to reduce misinterpretation or ambiguity -- then provide it and defuse any possible misunderstanding.
CONCEPTS ARE MADE OUT OF DIMENSIONS
Word definition and meaning is best understood as stipulative.
Concepts are symbolic structures that "contain" or define intended meaning.
Concepts are abstract, and defined at a level of abstraction (a level of specificity or generality).
The "internal structure" of a concept is implicit and inherent within the concept.
The more abstract a term, the more likely it is to be ambiguous (because its implicit structure is more complex).
Because intended word meaning is implicit (not explicitly defined in particular ways), intended word meaning of abstract concepts can often be ambiguous or confusing.
The ambiguity inherent within any abstract concept can be reduced or clarified by providing additional detail regarding specific intent. This additional information can and should be defined in terms of dimensions.
The internal structure of a concept is "made out of" or composed out of dimensions.
Dimensions are themselves -- or can be understood as -- composite abstract objects. They, too, "have internal dimensional structure".
Bottom line: dimensions are (recursively) made out of dimensions -- and dimensions are made out of "cuts" ("distinctions").
This is a "recursive" definition, taking a form similar to a fractal, and repeating that general form through a descending series of "self-similar" levels that terminate at some point determined by practicality and necessity.
"Cuts" are the baseline and fundamental concept at the foundation of all mathematics. All human conceptual form -- including both the exact quantitative measurements of science, and the fuzzier and more ambiguous concepts of the humanities and liberal arts -- are defined by and can be most usefully understand as composite abstract structures composed from cuts in a context of stipulation. The internal semantic meaning of a concept is what the user of the term intends by it in the actual and immediate context of usage.
PERCEPTUAL INFERENCE
The emergence of distinctions and the construction of perception
THEMES
Ambiguity
Uncertainty
Indeterminacy
Boundary value range (actual value is unknown but is known to exist within lower and upper bounds)
"Fuzziness"
"Primitive" foundational definitions that introduce uncertainty or ambiguity
Abstraction
Levels of abstraction
Range from "actual instance" to specific category to general category
Relation to "parts and wholes"
Relation to taxonomy
Bottom-up and top-down
"Measurement omission"
Similar properties are retained, dissimilar properties are omitted
Types and levels of variables
Stanley Smith Stevens
Ratio variable
Interval variable
Ordinal variable
Nominal variable
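Stevens' four levels of measurement listed above form a natural hierarchy: each level supports the comparisons of the levels below it, plus one more kind of structure. A brief sketch (the helper function is an illustrative assumption):

```python
from enum import Enum

# Stevens' levels of measurement, as a scale of increasing structure.
# Each level supports all the operations of the levels below it.

class Level(Enum):
    NOMINAL = 1    # labels only: equality / inequality
    ORDINAL = 2    # adds ordering: greater / less
    INTERVAL = 3   # adds meaningful differences (but no true zero)
    RATIO = 4      # adds a true zero: meaningful ratios

def supports_order(level):
    """Ordering comparisons make sense from the ordinal level upward."""
    return level.value >= Level.ORDINAL.value

assert not supports_order(Level.NOMINAL)   # "blue vs. green" has no order
assert supports_order(Level.RATIO)         # lengths, counts, durations
```

In the language of this framework, each step up the scale adds further cuts to the dimension: nominal variables distinguish categories, ordinal variables cut an ordering into them, and interval and ratio variables ground the cuts in quantitative distances.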
Relativity
Is the part-whole relationship directly analogous to the relative-absolute relationship?
Is "the whole" the absolute and "the part" the relative?
Symbolic representation
What is the correlation between an abstract symbol and the reality it represents?
What is the correlation between the implicit meaning structure of an abstract symbol and the mechanical/physical medium (computer, bits, bytes, pixels, fonts) that represent it?
What is a "distinction"?
What is a "cut"?
What is a "cut on a cut on a cut on a cut..."?
What is the absolute foundation or ground for the construction of concepts?
How are concepts "made out of dimensions"?
How are "dimensions made out of dimensions?"
WHAT IS ABSTRACTION?
Levels of abstraction
Levels of variables
Abstract variables implicitly (or potentially) contain variables defined at lower levels of abstraction (higher levels of specificity)
Abstract variables can be "grounded" in explicit quantitative dimensionality and measurement through a process of stipulation. What do you mean by "beauty"? At what point, in what dimension, does that thing you defined as beautiful somehow become "not beautiful"? That is a choice made by the speaker, as defined in some dimension chosen by the speaker. The speaker draws the line and sets the boundary.
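The "beauty" example can be sketched in code. Everything here -- the dimensions, the thresholds, the function names -- is a hypothetical stipulation invented for illustration; the point is precisely that the speaker, not the language, draws these lines.

```python
# Hypothetical sketch of stipulation: an abstract term is grounded by the
# speaker choosing explicit boundary values in quantitative dimensions.
# The dimensions and thresholds below are invented for illustration.

stipulated_beautiful = {
    "symmetry": (0.8, 1.0),   # speaker-chosen boundary values
    "harmony":  (0.7, 1.0),
}

def satisfies(stipulation, instance):
    """An instance fits the term iff it falls inside every stipulated range."""
    return all(lo <= instance.get(dim, 0.0) <= hi
               for dim, (lo, hi) in stipulation.items())

assert satisfies(stipulated_beautiful, {"symmetry": 0.9, "harmony": 0.75})
assert not satisfies(stipulated_beautiful, {"symmetry": 0.5, "harmony": 0.9})
```

At the exact point where a value crosses a stipulated boundary, the thing "somehow becomes not beautiful" -- not by any fact of nature, but by the line the speaker chose to draw.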
A UNIVERSAL THEORY OF CONCEPTS
In brief:
A distinction is some small difference or dimension of reality we can perceive and name and perhaps measure.
All concepts -- scientific or religious, empirical or abstract -- are made from or contain specific distinctions.
All distinctions arise under the drive of some ad hoc or local motivation and purpose/need/intention.
Concepts are essentially "stipulative" and serve the intentions of the person using them, taking a specific form in alignment with those intentions.
Distinctions become part of a loose social contract if they are broadly useful.
Social agreements can be formed around these definitions to enable the collaborative interactions of society, but there is no one absolute all-purpose taxonomy of reality.
Concepts are inherently "discrete" and reality is inherently "continuous". Concepts are at best an approximation.
Concepts are "holons" -- an inherently "Janus-faced many/one". Every concept is an abstract "whole", composed of those nested and implicit distinctions which are its inherent "parts" and its meaning.
The "levels of nesting" inherent within any concept characterize the process of abstraction, which occurs within a hierarchical spectrum ranging from concrete and specific to abstract and universal. A taxonomy is such a "hierarchy of abstraction", as it ranges up levels of abstraction from particular concrete instances to broad general categories.
All concepts, in any language, are formed this way, as composite units (holons) of implicitly nested (contained within the holon) distinctions.
All conceptual form, and all ideas, take this same general and universal form -- a broader containing abstraction which has as its "parts" its inherent implications.
Through this universal Rosetta Stone of semantic interpretation, all understanding can be rendered absolutely fluent.
THE ABSOLUTE GROUND
Humans understand the world through concepts.
Concepts are composite structures, composed of distinctions (whether implicit or explicit).
The absolute bottom-line foundational element in this constructive process is "a cut on a cut" -- a "distinction made on a distinction".
This process defines the "real number line".
The absolute ground for any concept definition is quantitative empirical measurement. Any concept can be drilled down under stipulation to connect with empirical measurement.
MINIMAL DISTINCTION
On / Off
Yes / No
Black / White
Figure / Ground
The isomorphism of abstract concepts and their physical machine representation:
At the level of abstraction, concepts are defined by an ascending hierarchical cascade of finite-state on-off distinctions.
These abstractions are represented in computer space by the same logic.
This isomorphism or similarity is surprising, elegant and parsimonious.
MAJOR RESEARCH THEME
The definition of a "cut on a cut" may involve some potent and mysterious topological form, such as a "strange loop" or a Moebius Strip. This involves the definition of continuity and the role of intention. This theme opens up questions of "closing the space" or creating a single integral and absolute foundation for all conceptual structure defined in one algebraic primitive.
One algebraic primitive (cut).
The mystery of figure/ground -- a cut is a figure defined in a ground.
One-sided -- a cut on itself?
All conceptual form is built up from this foundation across levels of abstraction.
Simplification: reduction of redundant and overlapping terminology -- there are far too many ways to say almost the same thing. This overlap reflects fragmented and partial interpretations of a single larger whole.
Minimum primitives: Clarification and simplification of foundational concepts and structures, eliminating redundancy and ambiguity
Universal interpretation of "dimension" - a single general form that characterizes description at multiple levels of abstraction ranging from quantitative ("empirical") to qualitative ("holistic")
Quantitative / qualitative integration
Absolute grounding in "One" -- at bottom and top of abstraction cascade
Features, attributes, properties, qualities -- all defined as abstract dimensions with stipulative definition cascade to quantitative dimensionality
Interdisciplinary integration (e.g., "unity of the sciences") through levels of abstraction
Absolutely uniform and systematic method - one method and approach capable of interpreting all conceptual structure, in any discipline, at any level of abstraction
OBJECTIVE
Absolute definition of all mathematical, logical and semantic elements in one primitive
Hard (concrete, physical) grounding of all conceptual abstractions in terms of physical machine space
COMPARISON
Any form of comparison between two objects, creating a quantitative basis for comparison
All comparisons and all definitions based on comparisons are defined in terms of dimensions and values in those dimensions
Dimensions and values are defined in terms of cuts, including
All affirmations of identity or equality (two things are equal, they are "the same thing", they are "equivalent")
Any affirmation of "similarity"
Any affirmation of "difference"
Metaphor and analogy are examples of comparison.
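The claims above -- that sameness, similarity and difference all reduce to dimensions and values in those dimensions -- can be sketched as a single comparison function. The objects and tolerance here are illustrative assumptions.

```python
# Sketch: any comparison reduces to shared dimensions and value distances
# within them. "Same", "similar" and "different" become statements about
# whether values agree within a stipulated tolerance.

def compare(a, b, tolerance=0.0):
    """Report, per shared dimension, whether a and b agree within tolerance."""
    shared = set(a) & set(b)
    return {dim: abs(a[dim] - b[dim]) <= tolerance for dim in shared}

apple  = {"sweetness": 0.7, "size": 0.3}
cherry = {"sweetness": 0.8, "size": 0.1}

# Similar in sweetness, different in size -- at this tolerance.
assert compare(apple, cherry, tolerance=0.15) == {
    "sweetness": True, "size": False}
```

Note that the verdict depends on two stipulated choices: which dimensions are shared and compared, and how much difference still counts as "the same". Widening the tolerance turns a "difference" into a "similarity", which is the point of grounding comparison in explicit boundary values.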
ISOMORPHISM
We are suggesting a simple and profound self-similarity or isomorphism of logical elements, proposing that all these objects take the same general form
Cut
Dimension
Feature / quality
CONTAINERS
A boundary value is a cut in a dimension. All abstract containers can be defined by boundary values, including
Sets
Classes
Categories
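The idea that sets, classes and categories can all be defined by boundary values rather than by enumeration can be sketched as a container built from cuts. The container name and dimension are hypothetical examples.

```python
# Sketch: an abstract container (set, class, category) defined not by
# listing its members but by boundary values -- cuts in dimensions.
# Membership is the question of falling inside every boundary.

def make_container(boundaries):
    """boundaries: {dimension: (lower, upper)} -> membership predicate."""
    def contains(thing):
        return all(lo <= thing.get(dim, float("-inf")) <= hi
                   for dim, (lo, hi) in boundaries.items())
    return contains

# A hypothetical legal category defined by a single cut in one dimension.
is_adult = make_container({"age": (18, float("inf"))})

assert is_adult({"age": 30})
assert not is_adult({"age": 12})
```

The container is nothing but its boundary values: move the cut (say, from 18 to 21) and the membership of the category changes, with no change to any of the things being categorized.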
INTEGRATION
All of this emerging in the same general form under the same guiding logic implies a profound integration and simplification of what is today a vast terminological complexity.
Plotinus taught that there is a supreme, totally transcendent "One", containing no division, multiplicity or distinction; beyond all categories of being and non-being.
His "One" "cannot be any existing thing", nor is it merely the sum of all things [compare the Stoic doctrine of disbelief in non-material existence], but "is prior to all existents". Plotinus identified his "One" with the concept of 'Good' and the principle of 'Beauty'.
His "One" concept encompassed thinker and object. Even the self-contemplating intelligence (the noesis of the nous) must contain duality. "Once you have uttered 'The Good,' add no further thought: by any addition, and in proportion to that addition, you introduce a deficiency." Plotinus denies sentience, self-awareness or any other action (ergon) to the One [V.6.6]. Rather, if we insist on describing it further, we must call the One a sheer potentiality (dynamis) or without which nothing could exist. [III.8.10] As Plotinus explains in both places and elsewhere [e.g. V.6.3], it is impossible for the One to be Being or a self-aware Creator God. At [V.6.4], Plotinus compared the One to "light", the Divine Nous (first will towards Good) to the "Sun", and lastly the Soul to the "Moon" whose light is merely a "derivative conglomeration of light from the 'Sun'". The first light could exist without any celestial body.
The One, being beyond all attributes including being and non-being, is the source of the world—but not through any act of creation, willful or otherwise, since activity cannot be ascribed to the unchangeable, immutable One. Plotinus argues instead that the multiple cannot exist without the simple. The "less perfect" must, of necessity, "emanate", or issue forth, from the "perfect" or "more perfect". Thus, all of "creation" emanates from the One in succeeding stages of lesser and lesser perfection. These stages are not temporally isolated, but occur throughout time as a constant process. Later Neoplatonic philosophers, especially Iamblichus, added hundreds of intermediate beings as emanations between the One and humanity; but Plotinus' system was much simpler in comparison.
The One is not just an intellectual conception but something that can be experienced, an experience where one goes beyond all multiplicity. Plotinus writes, "We ought not even to say that he will see, but he will be that which he sees, if indeed it is possible any longer to distinguish between seer and seen, and not boldly to affirm that the two are one."
This concept explores the bottom-up construction of a universal top-level composite "one" into which and from which branch all possible or existing distinctions, whether branched across levels of abstraction and generality (or part/whole relationships?), or by geographic region...
Distinctions arise...
Where do distinctions come from?
We propose that distinctions arise semi-spontaneously within the context of human life everywhere, as a natural part of growth and the development of mind and consciousness.
The process begins in great simplicity and direct immediacy, as human beings make distinctions that help support life or convenience or comfort or pleasure or safety. Warm vs. cold, high vs. low, wet vs. dry, good-to-eat vs. poisonous, safe vs. dangerous, green vs. blue. People discover these distinctions, they share them if they are useful, others learn them, people give them names and languages emerge in cultures as ways to support the growth and strength of community.
These fundamental distinctions define the basic dimensions of life and experience, which grow in similar ways in cultures everywhere.
Individuals discover these distinctions by instinct. A newborn baby learns to distinguish mother and not-mother and gradually learns to vocally symbolize this distinction.
These distinctions arise in generally similar ways, but are given different names in different cultures and locales.
These distinctions can form dimensional composites of great variety -- but the general form of the distinction is always the same -- a cut with upper and lower boundaries, eventually extended as a dimension with a range of values.
Languages form as consensual agreements that share these distinctions as ways to support the community.
This approach defines "the one" in a bottom-up and empirical way, rather than postulating the idea as a metaphysical principle or starting point, as suggested by Plotinus. Distinctions emerge locally, everywhere, and compile into an implicit and undefined integral unity at the highest level of the cascade.
A constructivist foundation for universal collaboration
"Atomic semantics in digital machine space"
This approach is intended to create a 100% explicit grounding of all mathematical and semantic definition. It is presumed that by defining these elements in a "machine space" -- as a process running inside a machine (a computer) -- all possible ambiguity can be removed from definitions and structures built from those definitions.
The "atomic" element in this process is the concept of "cut" or distinction -- a point or line or interval ("gap") that creates "difference" in a form that can be described, named, labelled, measured and located. Because it is mechanically and physically constructed, like the so-called "standard meter in a glass case in Paris", this concept is given explicit and physical/actual definition and instantiation in mechanical cellular space and all semantics are constructed from it.
The advantages of constructivist definitions established in this way
Every aspect of every definition reduced to binary/bit/"atomic" level, in exact specific detail in every regard, including both "abstract meaning" and "physical machine implementation"
Every function, definition or structure explicitly defined in every detail as a machine (computer) process
Fully "transparent" and open to objective observation and specification in every regard
Mapped directly to "real number line" as it can best be defined in a computer ("finite and cellular") space
Every algebraic object defined as a construction grounded from this foundation
Every semantic object defined in terms of these algebraic objects
Fundamental primitive element is "cut" or distinction
Primary constructive element is "dimension"
Dimensions are created from cuts
This approach is 100% "transparent". Every facet of every mathematical and semantic definition can be explicitly and objectively observed and studied by a neutral independent observer. This approach does not invalidate or criticize other approaches to the philosophy of mathematics. It does create a workable common standard.
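As a minimal illustration of the constructivist elements listed above -- cut as primitive, dimension as the primary construction, dimensions created from cuts -- the ideas can be sketched in code. This is an interpretive sketch only: the language (Python), the interval representation of a cut (following the later note that cuts generalize Dedekind cuts to allow "width" or a "gap"), and the class names are assumptions introduced here for illustration, not part of the framework's formal definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cut:
    """A 'cut' as a located distinction: an interval ('gap') on a finite
    cellular number line, generalizing a Dedekind cut to allow width."""
    lower: float
    upper: float

    def __post_init__(self):
        if self.lower > self.upper:
            raise ValueError("lower bound must not exceed upper bound")

    @property
    def width(self) -> float:
        # A zero-width cut degenerates to a point distinction.
        return self.upper - self.lower

@dataclass
class Dimension:
    """A dimension assembled from cuts: a named range of values
    partitioned into units by the cuts drawn within it."""
    name: str
    cuts: list  # ordered Cut objects partitioning the range

    def units(self) -> int:
        # Cuts in a dimension define the units of that dimension.
        return len(self.cuts)

# Illustrative example: a 'temperature' dimension built from three unit cuts.
temperature = Dimension("temperature", [Cut(0, 1), Cut(1, 2), Cut(2, 3)])
print(temperature.units())  # 3
```

The same construction would cover the later note that "cuts in a dimension define units in that dimension" and that quantities are multiples of those units (9 feet, 15 apples, 7 degrees).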
At the core of the vision driving this project, there is a strong intuitive sense that a significant algebraic proof or integration is awaiting expression. Cognitive science is identifying profound linkage between once-independent areas of study. Critical ideas in semantics and computer science are converging. Language, mathematics, brain structure and computer science are becoming intimately linked. Are we awaiting a "revolutionary" integration or generalization of conceptual form? Is this possible?
In a global context of extreme complexity, when the convergence in cognitive science is viewed in broad terms there is a growing sense that a potent simplification or generalization of conceptual structure may be possible. Are all these many overlapping and interrelated disciplines and methods so very different, or can they be understood as complementary facets of an emerging new simplification? Could it be possible to conceive a non-trivial unifying framework that contains them all?
Integral
Clearly, any such integration would be an astonishing thing. Yes, it is almost inconceivably ambitious. It would be difficult, highly unlikely, and very complex. Why -- if it is a "simplification" -- is it "complex"? Because it attempts to interconnect and integrate into a single framework hundreds of concepts that today are generally understood as independent facets or factors in the world of conceptual structure, including logic and mathematics, as well as the abstractions of philosophy and the humanities. This integrating framework attempts to "define them all" within the context of a few basic principles, tied together and locked into a stable and unbroken format by a single theorem. Its extreme ontological simplicity seems to make a strong case for its authenticity and value.
In one stroke, on the basis of a simple set of enlightened guiding principles, including stipulative definition and the recursive definition of dimension based on "cut", the integrating framework we are exploring supposedly integrates or "de-frags" the entire context of intellectual space. It creates definitions for hundreds of ideas that today are generally understood in semi-stand-alone ways, as independent logical elements each one of which attracts its own logically independent definition, marginally commensurate with any related concepts.
"Yes, of course, pursue empiricism or local independence as you will. Invent your own system, create a language, build bridges to other systems...
"But this model integrates the entire framework and the entire range of its possibilities...."
All concepts, classes and categories are defined by boundary values in n dimensions, taking a format that defines an "n-dimensional envelope". An object is "in the category" if it falls within the boundary values of that envelope.
All constructive elements in this process are "cuts" or logical distinctions, that map directly into the fundamental and foundational concept of mathematics: the real number line.
All parsing or cuts are stipulative, ad hoc and context-specific, taking a 100% fluent and adaptive form that fits the purposes of the moment as best they can be understood.
All abstract (generalized) categories are mapped in an ascending cascade from this ground, always defined in the same terms by the same methods. The entire structure is "linearly recursive" and self-similar -- beginning from its absolute foundation -- the "unit interval defined as a cut" -- and ascending in levels of generality from there.
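The "n-dimensional envelope" definition above can be sketched as a simple membership test: a category is a set of per-dimension boundary values, and an object is "in the category" exactly when each of its values falls within the corresponding bounds. The dimension names and threshold values below are purely illustrative assumptions, not drawn from the text.

```python
# A category as an "n-dimensional envelope": per-dimension (lower, upper)
# boundary values. An object is "in the category" iff each of its values
# falls within the corresponding boundaries.

def in_category(envelope: dict, obj: dict) -> bool:
    return all(
        dim in obj and lo <= obj[dim] <= hi
        for dim, (lo, hi) in envelope.items()
    )

# Hypothetical envelope for a category "comfortable room":
comfortable = {"temperature_c": (18, 24), "humidity_pct": (30, 60)}

print(in_category(comfortable, {"temperature_c": 21, "humidity_pct": 45}))  # True
print(in_category(comfortable, {"temperature_c": 30, "humidity_pct": 45}))  # False
```

Note that the boundary values here are stipulative and context-specific, in keeping with the point above that all parsing and cuts are ad hoc and fitted to the purposes of the moment.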
Universal container
This framework is a universal container. It defines a "boundary" around the entire content of intellectual structure, including algebra and algebraic logic, and all conceptual form. It composes and compiles all these elements in a "self-similar and recursive" format using one primitive logical element: distinction. Everything exists within it -- every distinction and every composite abstraction composed of distinctions.
It creates a mapping from "the highest level" to "the lowest level" -- understanding "high and low" to be characteristics of a "level of abstraction" -- generally understood in philosophy as the range from the particular to the general, and operating over a bottom-up/top-down hierarchical framework.
This ambition is a direct challenge to some important established principles in the philosophy of mathematics, which assert that the notion of one container is inherently self-contradictory. We must have a solution for that issue.
The bottom is the infinitesimal
The top is the all
Every category of being is defined within this framework by distinctions drawn within it
Recursion at both ends of this framework is terminated by a "one-sided" boundary
This termination defines a limit
This boundary maps the bottom into the top and "closes the space"
Circular definition - one definition containing every definition by implication because every definition is (can be) parsed from it...
This project, or intimations of it, has been out there a long time. As an undergraduate at UC Santa Cruz, I was studying the symbolism of deep intuition, psychology and philosophy, and the mathematics of system and epistemology. I did what I could with a typewriter and file cards -- but when I could finally afford a small personal computer, my studies took a big jump. I was now able to work on hundreds of interconnected elements at the same time, in a fast economical way, enabling rapid and creative free-association between ideas.
Working with an Atari 520ST and an outline processing program called HippoConcept, I went to work on an epistemological dictionary. I came up with about 300 interdependent definitions, considering every term or phrase or concept that I felt was critical for the development of a comprehensive model of cognition and the elements of epistemology.
With a strong academic and technical bibliography, I hammered hard on that system, going over and over and over my network of interconnected definitions. Finally, a central theme emerged: the concept of dimension. After all that work, that project converged to a central thesis: "everything is made out of dimensions" -- and indeed, "dimensions are made out of dimensions".
This was a revelation -- and in the early 1990's, I wrote a series of papers on the general theme of "Synthetic Dimensionality" -- the algebraic explanation and theory of why this is true. The mantra, the elusive key mystery, in a recursive nutshell: conceptual form is "a cut on a cut on a cut on a cut on a cut...."
This model, I felt convinced, explains anything hierarchical, including any process described as top-down or bottom-up. All conceptual form is defined this way, and any concept or term or expression can be "stipulatively defined" by a cascade of "synthetic dimensions". This is the general form of "abstraction" -- the spectrum of levels that characterizes the relationship between "the specific" and "the general" -- between the empirical and the abstract.
Today, these studies and the projects built on them form the background of this new project. Something feels different; maybe it's the astrology or the years or the critical "wicked problems" that seem so threatening to so many today. In any case, as an MS SQL/ColdFusion programmer with a vision, I am banging this stuff into a database, uploading any image I can find that seems illuminating, and gathering up what I am supposing might turn into a well-ordered and comprehensive approach to this very demanding and difficult technical subject.
WHY DO WE NEED TO DO THIS?
What problems can we solve?
How does semantic ontology relate to the issue of "wicked problems"?
What are the broad implications for interdependence of simultaneous issues and factors?
How or in what way could a universal semantic ontology impact the broad issue of governance across multiple levels of scale?
RECURSIVE UNIVERSALITY
This approach revolves around the mystery of absolute recursion and primal simplicity. There are any number of ways to describe and compose the elements of mathematics or semantic structure. This approach looks for an absolute and simple general method, based on a single underlying integral principle which multiplies into every form of conceptual structure.
Recursion, dimension, cut, similarity, difference, boundary value, figure/ground -- the guiding instinct or intuition is that these elements combine in some way that is astonishingly simple and "elegant".
A GUIDING AESTHETIC
"Beauty is truth, truth beauty – that is all Ye know on earth, and all ye need to know..."
This creative process involves an ongoing tension between an instinct for holism and the need to integrate working reality. If there is a single guiding "meme" for this work, it is the vision of "many and one" -- the fundamental principle of "holon", of the governing structure of the United States of America, and the fundamental concept of "set" as originally defined by Georg Cantor.
The fundamental container or concept is "the one" -- and the issue and question is -- how does this one divide up into everything that is...
THE CHALLENGE OF BABEL
Human beings -- for any number of reasons -- are failing to work together to solve collective problems.
Languages and cultural perspectives are not commensurate. We need a protocol for universal interpretation.
Nations, academic sectors and scientific disciplines are commonly locked within their own "silo" -- which makes it difficult for workers in a particular sector or silo to cooperate or work with people in other silos.
The challenge of "interoperability" refers to the problems that emerge when projects are developed that involve the cooperation or participation of groups or agencies existing within separate silos.
These issues are dangerous and threaten human welfare in many ways. This problem merits substantial investigation and development.
Cuts are a generalization of a "Dedekind cut" that enables "width" or a "gap" in the context of finite numbers
Abstract descriptors like "features" are dimensional composites -- multi-dimensional structures
All abstractions can be defined in a descending cascade across levels, taking the general form of a "cut on a cut on a cut on a cut..."
We are posing the notion of "cut on a cut" as the foundational definition of all categorical/conceptual structure. We think it should be possible to show that all conceptual form can be defined in a simple, clear, linearly-coherent algebraic definition chain that extends from the implications of this foundation.
The top level of this cascade is the "absolute container", and "everything" exists within it.
We are supposing that this bottom-line format -- the end of the "turtles all the way down" cascade -- is a space that is "closed on itself" to form a sealed terminal unit at the bottom of the abstraction cascade
What does this notion of "sealed on itself" mean?
This implication or consequence emerges from a perspective, a point of view, an angle of perception
From one angle -- it is a bounded unit -- a unit interval with a lower and an upper bound
That unit interval has no distinctions internal to it. It may be a bounded range -- but all we can say is that this range is continuous with no differentiation (what does this say about "the infinite")?
So -- based on this conceptual unit -- a number of fundamentals must emerge as consequences or implications
This fundamental unit is a definition of continuity -- "the continuum"
Continuity is uncertainty -- an undefined and unmeasurable range of values
It is a "unit interval" -- but it is somehow "unbounded" -- it has no terminal cuts defining lower and upper boundary values
As a cut -- is it still a distinction? Or do we simply say -- this is not a cut -- or it is a "cut cutting itself" -- a distinction on a distinction -- where the range of values is 0 -- the range from "lower bound" (0) to "upper bound" (1) is 0 -- 0 length -- 0 measurable or definable length
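One hypothetical way to make the "cut on a cut" cascade and its termination concrete: each level draws a cut inside the interval produced by the level above, and recursion terminates either at a chosen depth or when the remaining range has zero measurable length -- the degenerate "cut cutting itself" described above. The middle-third refinement rule below is an arbitrary illustrative choice, not a claim about the framework.

```python
def cut_on_a_cut(lower: float, upper: float, depth: int):
    """Recursively draw a cut inside (lower, upper): each level keeps
    the middle third of the previous interval, illustrating the
    self-similar cascade "a cut on a cut on a cut...". Recursion stops
    at the requested depth or when the range has zero length."""
    if depth == 0 or upper - lower == 0:
        # Terminal unit: a bounded range with no internal distinctions.
        return (lower, upper)
    third = (upper - lower) / 3
    return cut_on_a_cut(lower + third, upper - third, depth - 1)

# Each level is a narrower distinction drawn within the previous one;
# one level of refinement keeps the middle third of the unit interval.
level_1 = cut_on_a_cut(0.0, 1.0, 1)
level_5 = cut_on_a_cut(0.0, 1.0, 5)
```

A zero-width result corresponds to the degenerate case sketched above, where the range from lower bound to upper bound has no measurable or definable length.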
Terms that we must immediately be able to define
Cut
Distinction
Figure/ground
Unity -- unit
Duality -- multiplicity
Identity
Similarity
Difference
Dimension
Cuts in a dimension defining units in that dimension
Quantity of units (multiples) in that dimension - 9 feet, 15 apples, 7 degrees
Secondary or derivative concepts we will be able to define
Review of holistic ontology and epistemology from historical examples
Generally and briefly introduce the idea that holistic intuition and symbolism can often (or sometimes) be understood as "pre-mathematical". Such symbols are often attempts to conceptualize some profound and holistic view of ontology, following guiding visions and intuitions taking a suggestive and encouraging form, but not yet or quite meeting the standards of mathematical rigor.
Visionary scientists and mystics have been speculating on these mysteries since the beginning of time, and doing what they can to accurately conceptualize their form and implications. Without the benefit of modern tools and concepts, these far-seeing mystics and prophets and philosophers did what they could to form a comprehensive and reliable map of reality.
The relevance of mysticism and deep intuition -- the "vision of the whole" -- the whole and its parts
The Great Chain of Being
Ramon Lull
Leibniz
Axis Mundi
Imagery - the theory of imagery - metaphor - symbolism - "pre-mathematical" - right-brain / left-brain
Diagrams, models, theories, visions, major themes like trees, hierarchies, staircases, towers
Brief history of emerging models -- bridge between intuition and science
Intuitive psychology -- Jung, etc.
Myth -- comparative cultural anthropology -- Joseph Campbell
Artists like M. C. Escher
Modern scientific writers like Douglas Hofstadter (Gödel, Escher, Bach) or Rudy Rucker
Rise of computer science -- how computer science represents "constructivism" -- how mechanical logic helps filter out unconscious gaps or presumptions in purely abstract logic
The model of conceptual structure developed in this framework is inspired by similar intuitive ideas that have emerged in science and philosophy since the beginning of civilization. The general form of the framework is hierarchical, composed of multiple levels (or "scales"), often intended as a complete catalog of all possibilities in existence.
A common traditional model is "The Great Chain of Being" or "Scala Naturae", intended as a kind of universal divine ontology of all things, with God at the top level and all things in existence organized under or within God. The concept is similar to the biological taxonomy or classification created by Carolus Linnaeus, intended to classify all life forms, organized in a system of categories that extend from the most general and inclusive to the narrowest and most specific.
The traditional game "Twenty Questions" -- "Is it plant, animal, or mineral?" -- is a kind of taxonomic drill-down across levels, as the questions endeavor to narrow down some specific category or object from the widest possible range.
There are many related metaphors and descriptive concepts that have emerged from slightly different angles, or as expressions of different intentions or theologies.
In the models and interpretations we develop today, this hierarchic cascade of levels is preserved, but is defined slightly differently, so as to overcome what have emerged as the clearly disqualifying properties of this rigidly top-down categorical framework.
Scientists and philosophers have learned over the years -- through a slow evolutionary process of co-creative discovery, as we gradually began to understand the real nature of taxonomic form -- that it is not realistic to suppose that there is any one set of "levels" that can characterize all categorization and classification. The general principles that hold true are descriptions of the process of abstraction, as broader categories are defined as "containing" narrower categories.
In the model developed here, this hierarchical/taxonomic form is understood not as a stable and enduring/constant property of reality or of a system of things, but as a property of the human mind and the nature of categories and concepts themselves. In the end, the actual choices in a biological taxonomy tend to be negotiated stipulations and agreements among some working group. There are no absolute differences -- for example, in the difference between a wolf and a dog.
What is preserved, it seems, across all cases and different uses of hierarchical classification, is that the general principle of "drill-down" across levels of scale or abstraction always takes the same general form, moving from the more abstract and general/universal to the less abstract and more specific/particular.
An infinite regress in a series of propositions arises if the truth of proposition P1 requires the support of proposition P2, the truth of proposition P2 requires the support of proposition P3, ... , and the truth of proposition Pn-1 requires the support of proposition Pn, as n approaches infinity.
"Turtles all the way down" is a jocular expression of the infinite regress problem in cosmology posed by the "unmoved mover" paradox. The metaphor in the anecdote represents a popular notion of the theory that Earth is actually flat and is supported on the back of a World Turtle, which itself is propped up by a chain of larger and larger turtles. Questioning what the final turtle might be standing on, the anecdote humorously concludes that it is "turtles all the way down".
The expression is an illustration of the concept of Anavastha in Indian philosophy, and refers to the defect of infinite regress in any philosophical argument. Contrary to most extant western references, it is not a popular Hindu belief. Rather, it is a widely accepted principle in Indian philosophy, commonly used to reject arguments for a creator God or "unmoved mover".
The phrase has been commonly known since at least the early 20th century. A comparable metaphor describing the circular cause and consequence for the same problem is the "chicken and egg problem". The same problem in epistemology is known as the Münchhausen trilemma.
We relate this theme to an absolute center, logically independent of all particular forces yet integral to all of them. All force moves relative to the unmoved, absolutely abstract, logically independent "highest level" in the ontological cascade of abstraction.
The unmoved mover (Ancient Greek: "that which moves without being moved") or prime mover (Latin: primum movens) is a monotheistic concept advanced by Aristotle, a polytheist, as a primary cause or "mover" of all the motion in the universe. As is implicit in the name, the "unmoved mover" moves other things, but is not itself moved by any prior action. In Book 12 of his Metaphysics, Aristotle describes the unmoved mover as being perfectly beautiful, indivisible, and contemplating only the perfect contemplation: itself contemplating. He equates this concept also with the Active Intellect. This Aristotelian concept had its roots in cosmological speculations of the earliest Greek "Pre-Socratic" philosophers and became highly influential and widely drawn upon in medieval philosophy and theology. St. Thomas Aquinas, for example, elaborated on the Unmoved Mover in the Quinque viae.
First philosophy
Aristotle argues, in Book 8 of the Physics and Book 12 of the Metaphysics, "that there must be an immortal, unchanging being, ultimately responsible for all wholeness and orderliness in the sensible world".
An Archimedean point (or "Punctum Archimedis") is a hypothetical vantage point from which an observer can objectively perceive the subject of inquiry, with a view of totality. The ideal of "removing oneself" from the object of study so that one can see it in relation to all other things, but remain independent of them, is described by a view from an Archimedean point. For example, the philosopher John Rawls uses the heuristic device of the original position in an attempt to remove the particular biases of individual agents in an attempt to demonstrate how rational beings might arrive at an objective formulation of justice.
The expression comes from Archimedes, who supposedly claimed that he could lift the Earth off its foundation if he were given a place to stand, one solid point, and a long enough lever. This is also mentioned in Descartes' second meditation with regards to finding certainty, the 'unmovable point' Archimedes sought.
Sceptical and anti-realist philosophers criticise the possibility of an Archimedean point, claiming it is a form of scientism.
Example quote: "We can no more separate our theories and concepts from our data and percepts than we can find a true Archimedean point—a god’s-eye view—of ourselves and our world."
In the Physics (VIII 4–6) Aristotle finds "surprising difficulties" explaining even commonplace change, and in support of his approach of explanation by four causes, he required "a fair bit of technical machinery". This "machinery" includes potentiality and actuality, hylomorphism, the theory of categories, and "an audacious and intriguing argument, that the bare existence of change requires the postulation of a first cause, an unmoved mover whose necessary existence underpins the ceaseless activity of the world of motion". Aristotle's "first philosophy", or Metaphysics ("after the Physics"), develops his peculiar theology of the prime mover, as an independent divine eternal unchanging immaterial substance.
It can be said that every distinct idea in mathematics or logic or semantics (or any science) is a concept.
Concept is a very broad term, and is understood in a wide variety of ways. We are approaching the subject from an angle intended to be "absolutely primal" or "absolutely foundational" -- presuming nothing -- including the very alphabet and fonts in which we describe these issues, as well as any other facet implicitly included in the existence of a concept. Alphabets and fonts, too, are composed of distinctions -- they are composite structures assembled from distinctions and represented in mechanical/physical ways. The entire structure of a concept, including its representation in language and its abstract (or implicit) meaning, can all be "assembled" or constructed in the same way, in absolutely every discernible/detectable detail, from physical to abstract.
So, taking this approach, we are less interested in debating what a concept is, or considering whether concepts have some Platonic existence somewhere, or whether they are representational or probabilistic or abstract or concrete or should be considered as "atomic" (without parts) objects, but instead want to follow a strict constructivist approach based on stipulative/intended definition and an explicit "unpacking" or specification of implicit context-specific meaning nested within or under the word. We are saying something like "every idea is a concept -- here's how you build them". Or maybe "the world is ruled by ideas and concepts, and the human mind can be usefully defined as a system of concepts."
OUR CLAIM
In this context, we explore the proposition that....
There is no facet or aspect of the theory of concepts that cannot be exactly modeled or described in terms of dimensionality as presented in this context. "Concept" is a very broad and inclusive idea, with many implications and possible meanings, extending throughout many areas of study, including various types of psychology (cognitive, gestalt), as well as mathematics (sets, classes, categories), computer science, semantics, linguistics and philosophy. Confusion, fragmentation or argument are understandable side-effects of interdisciplinary boundaries, misunderstanding, or partial models. In this approach, we look at all major theories of concepts, as found in several mainline sources, as presented by major theorists listed in those sources. In an interdisciplinary spirit, we recognize value in many perspectives.
REVIEW OF THE THEORY OF CONCEPTS
We look at several broad reviews of concept theory, to help ensure inclusion of all relevant factors as seen from multiple alternative perspectives.
Subject is a bit confused today because there is no broad consensus
Points of view are becoming more fragmented and redundant
Many independent and overlapping theories
This project offers a single integrated and consolidated perspective that does not contrast existing models on an "either/or" basis, but instead recognizes specific concerns addressed by specific alternative models, incorporating or interpreting them all
General point of view on concepts
The notion of "concept" can be understood in many complementary ways
In this context, we understand "concept" to be a broad and flexible term that adapts to human intention, with no strict Platonic meaning. Seen this way, there is no need to debate what a concept is.
The meaning of concepts can be developed from observation of actual human usage and psychological behavior.
Wikipedia article on theory of concepts, with links to various authors and related/alternative theories
Stanford University encyclopedia of philosophy
Internet Encyclopedia of Philosophy
Major review document: the book "Categories and Concepts" by Smith and Medin from 1981
Articles reviewing each major theory as represented by these theorists
Objective
Brief overview of each major perspective
List of aspects of each major theory
Comments addressing each facet of these major theories
What is "realism"? -- etc. We take positions on all these questions according to our "guiding principles" and the "correspondence theory of reality".
Terms -- a concept is
an abstract object
a mental representation
a generalization
a category
an ability (Wikipedia)
a name or label or symbol for an abstraction
a fundamental category of existence (Platonic pure form -- questionable)
concepts are created (named) to capture or describe reality
definitional or classical theory: concept specified by necessary and sufficient conditions
Major factors for considering alternative theories and perspectives
Stipulative definition versus probabilistic interpretation
The person using a word can stipulate its meaning according to their intention (this intentional action is top-down)
Unless the context makes the meaning 100% unambiguous, the person hearing the word must "guess" (or "triangulate") what it means
Bottom up observation and empiricism
The Platonic "existence" of a concept
Consistency with computer science and the theory of languages
All these factors are meaningful and significant. We want to incorporate them all into a single balanced and inclusive interpretation, using one method and one interpretive model.
A concept is an abstract idea representing the fundamental characteristics of what it represents. Concepts arise as abstractions or generalizations from experience or the result of a transformation of existing ideas. The concept is instantiated (reified) by all of its actual or potential instances, whether these are things in the real world or other ideas. Concepts are treated in many if not most disciplines both explicitly, such as in linguistics, psychology, philosophy, etc., and implicitly, such as in mathematics, physics, etc. In informal use the word concept often just means any idea, but formally it involves the abstraction component.
In metaphysics, and especially ontology, a concept is a fundamental category of existence. In contemporary philosophy, there are at least three prevailing ways to understand what a concept is:
Concepts as mental representations, where concepts are entities that exist in the brain (mental objects)
Concepts as abilities, where concepts are abilities peculiar to cognitive agents (mental states)
Concepts as Fregean senses (see sense and reference), where concepts are abstract objects, as opposed to mental objects and mental states
Main article: Abstract object
In a Platonist theory of mind, concepts are construed as abstract objects. This debate concerns the ontological status of concepts – what they are really like.
There is debate as to the relationship between concepts and natural language. However, it is necessary at least to begin by understanding that the concept "dog" is philosophically distinct from the things in the world grouped by this concept – or the reference class or extension. Concepts that can be equated to a single word are called "lexical concepts".
Study of concepts and conceptual structure falls into the disciplines of linguistics, philosophy, psychology, and cognitive science.
In the simplest terms, a concept is a name or label that regards or treats an abstraction as if it had concrete or material existence, such as a person, a place, or a thing. It may represent a natural object that exists in the real world like a tree, an animal, a stone, etc. It may also name an artificial (man-made) object like a chair, computer, house, etc. Abstract ideas and knowledge domains such as freedom, equality, science, happiness, etc., are also symbolized by concepts. It is important to realize that a concept is merely a symbol, a representation of the abstraction. The word is not to be mistaken for the thing. For example, the word "moon" (a concept) is not the large, bright, shape-changing object up in the sky, but only represents that celestial object. Concepts are created (named) to describe, explain and capture reality as it is known and understood.
Issues in concept theory
A priori concepts
Main articles: A priori and a posteriori and Category (Kant)
Kant declared that human minds possess pure or a priori concepts. Instead of being abstracted from individual perceptions, like empirical concepts, they originate in the mind itself. He called these concepts categories, in the sense of the word that means predicate, attribute, characteristic, or quality. But these pure categories are predicates of things in general, not of a particular thing. According to Kant, there are 12 categories that constitute the understanding of phenomenal objects. Each category is that one predicate which is common to multiple empirical concepts. In order to explain how an a priori concept can relate to individual phenomena, in a manner analogous to an a posteriori concept, Kant employed the technical concept of the schema. He held that the account of the concept as an abstraction of experience is only partly correct. He called those concepts that result from abstraction "a posteriori concepts" (meaning concepts that arise out of experience). An empirical or an a posteriori concept is a general representation (Vorstellung) or non-specific thought of that which is common to several specific perceived objects.
A concept is a common feature or characteristic. Kant investigated the way that empirical a posteriori concepts are created.
The logical acts of the understanding by which concepts are generated as to their form are:
comparison, i.e., the likening of mental images to one another in relation to the unity of consciousness;
reflection, i.e., the going back over different mental images, how they can be comprehended in one consciousness; and finally
abstraction or the segregation of everything else by which the mental images differ ...
In order to make our mental images into concepts, one must thus be able to compare, reflect, and abstract, for these three logical operations of the understanding are essential and general conditions of generating any concept whatever. For example, I see a fir, a willow, and a linden. In firstly comparing these objects, I notice that they are different from one another in respect of trunk, branches, leaves, and the like; further, however, I reflect only on what they have in common, the trunk, the branches, the leaves themselves, and abstract from their size, shape, and so forth; thus I gain a concept of a tree.
- Kant, Logic, §6
Embodied content
Main article: Embodied cognition
In cognitive linguistics, abstract concepts are transformations of concrete concepts derived from embodied experience. The mechanism of transformation is structural mapping, in which properties of two or more source domains are selectively mapped onto a blended space (Fauconnier & Turner, 1995; see conceptual blending). A common class of blends are metaphors. This theory contrasts with the rationalist view that concepts are perceptions (or recollections, in Plato's term) of an independently existing world of ideas, in that it denies the existence of any such realm. It also contrasts with the empiricist view that concepts are abstract generalizations of individual experiences, because the contingent and bodily experience is preserved in a concept, and not abstracted away. While the perspective is compatible with Jamesian pragmatism, the notion of the transformation of embodied concepts through structural mapping makes a distinct contribution to the problem of concept formation.
ONTOLOGY
Plato was the starkest proponent of the realist thesis of universal concepts. By his view, concepts (and ideas in general) are innate ideas that were instantiations of a transcendental world of pure forms that lay behind the veil of the physical world. In this way, universals were explained as transcendent objects. Needless to say this form of realism was tied deeply with Plato's ontological projects. This remark on Plato is not of merely historical interest. For example, the view that numbers are Platonic objects was revived by Kurt Gödel as a result of certain puzzles that he took to arise from the phenomenological accounts.
Gottlob Frege, founder of the analytic tradition in philosophy, famously argued for the analysis of language in terms of sense and reference. For him, the sense of an expression in language describes a certain state of affairs in the world, namely, the way that some object is presented. Since many commentators view the notion of sense as identical to the notion of concept, and Frege regards senses as the linguistic representations of states of affairs in the world, it seems to follow that we may understand concepts as the manner in which we grasp the world. Accordingly, concepts (as senses) have an ontological status (Margolis: 7).
According to Carl Benjamin Boyer, in the introduction to his The History of the Calculus and its Conceptual Development, concepts in calculus do not refer to perceptions. As long as the concepts are useful and mutually compatible, they are accepted on their own. For example, the concepts of the derivative and the integral are not considered to refer to spatial or temporal perceptions of the external world of experience. Neither are they related in any way to mysterious limits in which quantities are on the verge of nascence or evanescence, that is, coming into or going out of existence. The abstract concepts are now considered to be totally autonomous, even though they originated from the process of abstracting or taking away qualities from perceptions until only the common, essential attributes remained.
MENTAL REPRESENTATIONS
In a physicalist theory of mind, a concept is a mental representation, which the brain uses to denote a class of things in the world. This is to say that it is literally a symbol or group of symbols together made from the physical material of the brain.
Concepts are mental representations that allow us to draw appropriate inferences about the type of entities we encounter in our everyday lives. Concepts do not encompass all mental representations, but are merely a subset of them. The use of concepts is necessary to cognitive processes such as categorization, memory, decision making, learning, and inference.
Notable theories on the structure of concepts
Classical theory
Main article: Definitionism
The classical theory of concepts, also referred to as the empiricist theory of concepts, is the oldest theory about the structure of concepts (it can be traced back to Aristotle), and was prominently held until the 1970s.
The classical theory of concepts says that concepts have a definitional structure. Adequate definitions of the kind required by this theory usually take the form of a list of features. These features must have two important qualities to provide a comprehensive definition. Features entailed by the definition of a concept must be both necessary and sufficient for membership in the class of things covered by a particular concept.
A feature is considered necessary if every member of the denoted class has that feature. A feature is considered sufficient if something having all the features required by the definition is guaranteed membership. For example, the classic example bachelor is said to be defined by unmarried and man. An entity is a bachelor (by this definition) if and only if it is both unmarried and a man. To check whether something is a member of the class, you compare its qualities to the features in the definition. Another key part of this theory is that it obeys the law of the excluded middle, which means that there are no partial members of a class: you are either in or out.
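The definitional structure described above can be sketched directly in code. This is a minimal illustration, not an implementation of any established system: concepts are assumed to be sets of features, and membership is the all-or-nothing check the classical theory requires.

```python
# A minimal sketch of classical (definitional) categorization:
# a concept is a checklist of necessary-and-sufficient features.

BACHELOR = {"unmarried", "man"}  # the classic definition from the text

def is_member(features: set, definition: set) -> bool:
    # Classical theory obeys the excluded middle: an entity is fully
    # in the class or fully out -- every defining feature must be present.
    return definition <= features

joe = {"man", "unmarried", "tall"}   # illustrative entities
ann = {"woman", "unmarried"}

print(is_member(joe, BACHELOR))  # True
print(is_member(ann, BACHELOR))  # False
```

Note that the extra feature "tall" does not matter: only the presence of all defining features decides membership.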
The classical theory persisted for so long unquestioned because it seemed intuitively correct and has great explanatory power. It can explain how concepts would be acquired, how we use them to categorize and how we use the structure of a concept to determine its referent class. In fact, for many years it was one of the major activities in philosophy – concept analysis.
Concept analysis is the act of trying to articulate the necessary and sufficient conditions for the membership in the referent class of a concept.
Arguments against the classical theory
Given that most later theories of concepts were born out of the rejection of some or all of the classical theory, it seems appropriate to give an account of what might be wrong with this theory. In the 20th century, philosophers such as Wittgenstein and Rosch argued against the classical theory. There are six primary arguments[7] summarized as follows:
It seems that there simply are no definitions – especially those based in sensory primitive concepts.[7]
It seems as though there can be cases where our ignorance or error about a class means that we either don't know the definition of a concept, or have incorrect notions about what a definition of a particular concept might entail.[7]
Quine's argument against analyticity in Two Dogmas of Empiricism also holds as an argument against definitions.[7]
Some concepts have fuzzy membership. There are items for which it is vague whether or not they fall into (or out of) a particular referent class. This is not possible in the classical theory as everything has equal and full membership.[7]
Rosch found typicality effects which cannot be explained by the classical theory of concepts; these sparked the prototype theory.[7] See below.
Psychological experiments show no evidence for our using concepts as strict definitions.[7]
Prototype theory
Main article: Prototype theory
Prototype theory came out of problems with the classical view of conceptual structure.
Prototype theory says that concepts specify properties that members of a class tend to possess, rather than must possess.
Wittgenstein, Rosch, Mervis, Berlin, Anglin, and Posner are a few of the key proponents and creators of this theory.
Wittgenstein describes the relationship between members of a class as family resemblances. There need not be any necessary conditions for membership; a dog can still be a dog with only three legs.
This view is particularly supported by psychological experimental evidence for prototypicality effects. Participants willingly and consistently rate objects in categories like 'vegetable' or 'furniture' as more or less typical of that class.
It seems that our categories are fuzzy psychologically, and so this structure has explanatory power.
We can judge an item's membership to the referent class of a concept by comparing it to the typical member – the most central member of the concept. If it is similar enough in the relevant ways, it will be cognitively admitted as a member of the relevant class of entities. Rosch suggests that every category is represented by a central exemplar which embodies all or the maximum possible number of features of a given category.
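The graded-membership idea above can be contrasted with the classical checklist in a small sketch. This is an illustrative toy, not Rosch's actual model: the prototype and the Jaccard overlap measure are assumptions made for the example.

```python
# A sketch of prototype-style categorization: membership is graded by
# similarity to a central exemplar, not decided by a strict definition.

PROTOTYPE_BIRD = {"feathers", "wings", "flies", "sings", "small"}  # assumed exemplar

def typicality(features: set, prototype: set) -> float:
    # Jaccard overlap as a toy similarity measure: shared features
    # divided by all features appearing in either description.
    return len(features & prototype) / len(features | prototype)

robin = {"feathers", "wings", "flies", "sings", "small"}
penguin = {"feathers", "wings", "swims", "large"}

print(typicality(robin, PROTOTYPE_BIRD))    # 1.0 -- a highly typical bird
print(typicality(penguin, PROTOTYPE_BIRD))  # a partial, graded score
```

The penguin is neither cleanly in nor out; it simply scores lower, which is exactly the fuzziness the classical theory cannot express.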
Theory-theory
Theory-theory is a reaction to the previous two theories and develops them further. This theory postulates that categorization by concepts is something like scientific theorizing. Concepts are not learned in isolation, but rather are learned as a part of our experiences with the world around us. In this sense, the structure of concepts relies on their relationships to other concepts as mandated by a particular mental theory about the state of the world. How this is supposed to work is a little less clear than in the previous two theories, but it is still a prominent and notable theory. It is supposed to explain some of the issues of ignorance and error that come up in the prototype and classical theories, since concepts that are structured around each other account for errors such as classifying a whale as a fish (a misconception that came from an incorrect theory about what a whale is like, combined with our theory of what a fish is). When we learn that a whale is not a fish, we are recognizing that whales don't in fact fit the theory we had about what makes something a fish. In this sense, the theory-theory of concepts responds to some of the issues of both prototype theory and classical theory.
Ideasthesia
According to the theory of ideasthesia (or "sensing concepts"), activation of a concept may be the main mechanism responsible for creation of phenomenal experiences. Therefore, understanding how the brain processes concepts may be central to solving the mystery of how conscious experiences (or qualia) emerge within a physical system e.g., the sourness of the sour taste of lemon. This question is also known as the hard problem of consciousness. Research on ideasthesia emerged from research on synesthesia where it was noted that a synesthetic experience requires first an activation of a concept of the inducer. Later research expanded these results into everyday perception.
Begin developing a glossary or list of terms, drawn from mainline reviews of concept theory, and redefined or interpreted in terms of this dimensional model.
How does our model address the concern raised by the existing theory?
What principles are involved? Why do we take a different view?
What is the relationship to computer science?
What is the relationship to cognitive psychology?
Why or how -- if at all -- is our approach in some ways superior -- more accurate, more effective, more parsimonious, more consistent across multiple disciplines --?
Various objectives
One model that works in multiple disciplines
Simplest possible algebraic/logical construction
Fewest moving parts
Minimal redundancy
Works harmoniously (supportively, constructively) with existing theories
In this context, "dimension" is understood as a broadly inclusive term for a variety of specific meanings, ranging from qualitative (abstract, holistic) to quantitative (numeric, linear, empirical). In our approach, these meanings are connected into a single multi-level concept that extends across levels of abstraction.
For us, a dimension is an aspect of an abstract model that may exist in the mind or in some physical representation (a drawing, a description, etc.)
We want to review this broad usage, then derive an inclusive general definition of dimension that is built up from the fundamental constructive element "cut" (or distinction). This definition can be psychologically grounded in the basics of cognitive psychology and mathematically grounded in the real number line.
Brief comprehensive overview
Qualitative and quantitative dimensionality -- including quantitative (stipulative drill-down) factoring of qualitative dimensionality to exactly specify intention
"Dimension" is a rather illusive concept. It has a wide array of meanings which can seem unrelated. The default Google definition offers two primary interpretations which cover the meaning we are developing here.
QUALITATIVE
The first of the Google definitions describes a dimension in abstract and "qualitative" (or intuitive or holistic) terms, as an aspect, feature, element, facet or side of something. These terms would not commonly be considered "quantitative" descriptors, and contrast with the common understanding of a dimension as linear and quantitative.
QUANTITATIVE
And the second definition describes "quantitative" interpretations of the concept, in terms of "measurable extent" that can be assigned some numeric value as multiples of some unit.
These are the primary meanings of "dimension" as we explore the concept here. One objective of this project is to show why it is reasonable to understand all these interpretations or meanings as interconnected facets of the same underlying general concept. It is this underlying general concept that we believe contains or unfolds the surprising recursion we believe is characteristic of conceptual structure when understood in this way.
What dimensions are required to specify a dimension?
Qualitative dimensions vary along a range of values which are abstract, carrying implicit meaning that must be dimensionally stipulated in order to be grounded in quantitative dimensionality.
Quantitative dimensions are "continuously variable"
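The "stipulative drill-down" from a qualitative term to a quantitative dimension can be sketched as follows. The choice of underlying dimension (temperature in degrees Celsius) and the boundary value are assumptions made purely for illustration; the point is that the qualitative judgment reduces to a stipulated cut on the real number line.

```python
# A sketch of grounding a qualitative dimension ("warm") by stipulating
# a quantitative cut on an underlying measurable dimension.

WARM_CUT_C = 20.0  # stipulated boundary (an assumed value): at or above this, "warm"

def is_warm(temp_c: float) -> bool:
    # the qualitative judgment reduces to a cut on the number line
    return temp_c >= WARM_CUT_C

print(is_warm(25.0))  # True
print(is_warm(10.0))  # False
```

Different speakers may stipulate the cut differently; what the drill-down supplies is an exact, shareable specification of one speaker's intention.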
Qualitative dimensionality
Dimension
Aspect
Feature
Characteristic
Quality
Facet
Attribute
Differentia
Property
Quantitative dimensionality
Quantity
Measurement
How is a dimension each of these things?
How are these things "composed of dimensions"?
What is the "dimensionality" of a feature? Why is a feature a dimension?
VALUES OF A DIMENSION
These terms should be defined in dimensions:
Equal
More
Less
All "value" should be defined in a dimension. Values can be quantitative ("one dimensional") or qualitative ("multi-dimensional").
We want to be able to define all aspects of "relation" or "relationship" in terms of dimensions and dimensional decomposition, as per these basic linear elements (equal, more, less).
In this framework, we make all comparisons in terms of dimensions.
Physical quantities that are commensurable have the same dimension and can be directly compared to each other, even if they are originally expressed in differing units of measure. If they have different dimensions, they are incommensurable and cannot be directly compared in quantity. For example, it is meaningless to ask whether a kilogram is greater than, equal to, or less than an hour.
Any physically meaningful equation (and likewise any inequality and inequation) will have the same dimensions on the left and right sides, a property known as "dimensional homogeneity".
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their fundamental dimensions (such as length, mass, time, and electric charge) and units of measure (such as miles vs. kilometers, or pounds vs. kilograms vs. grams) and tracking these dimensions as calculations or comparisons are performed.
Converting from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions using the rules of algebra.
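The factor-label method mentioned above can be shown in a one-function sketch: multiply by a conversion fraction equal to one, so the unit labels cancel algebraically. The conversion factor below is the standard kilometres-to-miles value.

```python
# A sketch of the factor-label (unit-factor) method: multiply by a
# fraction equal to 1 until only the target unit's label remains.

MILES_PER_KM = 1 / 1.609344  # miles/km, a fraction whose value is 1 in unit terms

def km_to_miles(km: float) -> float:
    # (km) * (miles / km) -- the "km" labels cancel, leaving miles
    return km * MILES_PER_KM

print(round(km_to_miles(10), 3))  # ~6.214 miles
```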
The concept of physical dimension was introduced by Joseph Fourier in 1822.
Checking dimensional homogeneity is a common application of dimensional analysis, which is also routinely used as a check on the plausibility of derived equations and computations, and to categorize types of physical quantities and units based on their relationship to or dependence on other units.
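The commensurability rule can be sketched by representing a dimension as a vector of exponents over base dimensions (length L, mass M, time T): two quantities are comparable only when their exponent vectors match exactly. This representation is an assumption of the illustration.

```python
# A sketch of commensurability: dimensions as exponent vectors over
# base dimensions (L = length, M = mass, T = time).

from collections import Counter

def dims(**exponents) -> Counter:
    return Counter(exponents)

def commensurable(a: Counter, b: Counter) -> bool:
    # Quantities can be directly compared only if their dimensions match.
    return a == b

velocity = dims(L=1, T=-1)     # e.g. metres per second
speed_limit = dims(L=1, T=-1)  # e.g. kilometres per hour -- same dimension
mass = dims(M=1)               # kilograms
duration = dims(T=1)           # hours

print(commensurable(velocity, speed_limit))  # True: differ only by a unit factor
print(commensurable(mass, duration))         # False: "kg vs hour" is meaningless
```

Velocity in m/s and a speed limit in km/h share the dimension L¹T⁻¹, so a conversion factor relates them; a kilogram and an hour share no dimension at all, which is exactly the "kilogram versus hour" point in the text.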
General form of a categorical proposition:
"The members of one category (the subject term) are included in another (the predicate term)."
All S are P. (A form)
No S are P. (E form)
Some S are P. (I form)
Some S are not P. (O form)
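If the categories are treated extensionally (as finite sets, an assumption of this illustration), the four forms above become simple set checks:

```python
# A sketch of the four categorical forms (A, E, I, O) evaluated over
# finite sets; the classes here are toy data for illustration.

S = {"socrates", "plato", "aristotle"}           # subject class
P = {"socrates", "plato", "aristotle", "joe"}    # predicate class

a_form = S <= P               # All S are P
e_form = S.isdisjoint(P)      # No S are P
i_form = not S.isdisjoint(P)  # Some S are P
o_form = bool(S - P)          # Some S are not P

print(a_form, e_form, i_form, o_form)  # True False True False
```

The hard part, as the text goes on to argue, is not evaluating these checks but fixing the boundaries of S and P in the first place.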
What is the role of dimensions and cuts in defining and validating this structure? They may be critical for determining the actual truth of a proposition. The proposition may be stated in abstract terms, and those terms may be in some way undefined, ambiguous, or impossible to resolve.
The boundaries of the subject term -- the class, the category --
S and P are abstract objects that are specified in terms of dimensions and values in those dimensions
These propositions are different: this form is like a set specification -- "All S are P" -- every object, howsoever the boundaries of S are defined, is a member of the set P, howsoever the boundaries of P are defined
All men are liars
Joe is a man
Therefore, Joe is a liar
There is no uncertainty in any of these statements:
Men who tell lies are liars
Joe is a man
Joe told a lie
Therefore Joe is a liar
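The contrast drawn above can be made concrete: in the second syllogism every premise is a checkable set fact, so the conclusion follows mechanically. The individuals here are toy data for illustration.

```python
# A sketch of the second (certain) syllogism, evaluated extensionally:
# "men who tell lies are liars; Joe is a man; Joe told a lie."

men = {"joe", "sam"}
told_a_lie = {"joe"}

# the stipulated rule: liars are exactly the men who told a lie
liars = {p for p in men if p in told_a_lie}

print("joe" in liars)  # True: therefore, Joe is a liar
```

The first syllogism ("all men are liars") fails not in its logic but in its major premise, whose category boundary is drawn too broadly to be true.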
Some humans are not clever
What we want to say is
Joe is a composite abstract object (but is this always true? Am I, Bruce, a composite abstract object? No -- but some representation of me is. I am a unitary object, with an identifying label -- my name)
We are talking about the dimensional properties of the categories into which we place Joe. The being of Joe is not in question. What we want to know is -- did Joe, on or about Nov 8, 2016, vote for Donald Trump? Or, more abstractly, did he think a nasty thought about Hillary Clinton?
In simplest terms, this project can be understood as the expression of a single idea. Reality is One. Reality becomes innumerably divided by conceptual distinction. All words, all concepts, all ideas unfold within this framework.
The fundamentals of mathematics can be understood in these terms (sets, elements). The fundamentals of political organization can be understood in these terms (nations, regions). The fundamentals of all categorization and classification can be understood in these terms.
Reality is a Many thought of as a One. Reality is a One experienced as a Many. Everything works this way....
A symbol is something that represents an idea, a process, or a physical entity.
A symbol is a name for something -- a label that represents it. That something can be highly abstract or absolutely concrete. A picture can be a symbol. A word can be a symbol. A single letter of that word is a symbol.
Symbol may also refer to:
Computing
Symbol (data), the smallest amount of data transmitted at a time in digital communications
Symbol (programming), a primitive data type in many programming languages used to name variables and functions
Symbol (typeface), a font designed by Aldo Novarese (1982), one of the four standard PostScript fonts
Debug symbol, debugging information used to troubleshoot computer programs, analyze memory dumps
Symbol rate, the number of symbols transmitted per second
Logic
Symbol (formal), a string, used in formal languages and formal systems
Symbol grounding, the problem of how symbols acquire meaning
Mandala
Graphic imagery and maps of the mind, maps of the spirit, maps of cosmos
Metaphor, parable and analogy are types of comparisons between distinct objects or items.
In the dimension model developed here, objects are composed entirely of their dimensions. This approach makes the subject simple and easy to understand.
We are defining all types of comparison in terms of dimensionality and values in dimensions. Basic facets of comparison are
Identity Two things are exactly the same -- they have "exactly the same values in the same dimensions".
Similarity Two things can be defined in some common dimensions and have closely-related or identical values in those dimensions.
Difference There are discernible distinctions (dimensions) of the two objects that are not shared, and/or measurable differences in the dimensions they do share.
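The three facets above can be sketched directly, assuming (as this illustration does) that an object is simply a mapping from dimensions to values. The tolerance rule in the similarity measure is an arbitrary choice made for the example.

```python
# A sketch of the comparison model: objects as dimension -> value
# mappings; identity and similarity defined over shared dimensions.

def identical(a: dict, b: dict) -> bool:
    # "exactly the same values in the same dimensions"
    return a == b

def similarity(a: dict, b: dict, tolerance: float = 0.1) -> float:
    # fraction of shared dimensions whose values are close (toy measure)
    shared = a.keys() & b.keys()
    if not shared:
        return 0.0
    close = sum(
        1 for d in shared
        if abs(a[d] - b[d]) <= tolerance * max(abs(a[d]), abs(b[d]), 1)
    )
    return close / len(shared)

chair = {"height_cm": 90, "legs": 4, "mass_kg": 6.0}  # illustrative objects
stool = {"height_cm": 75, "legs": 3, "mass_kg": 4.0}

print(identical(chair, stool))   # False
print(similarity(chair, chair))  # 1.0 -- identity is maximal similarity
```

Difference then falls out for free: any unshared dimension, or any shared dimension whose values diverge, is a discernible distinction.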
[fill in this discussion]
This image from Wikipedia illustrates the approach taken by George Lakoff, which is not grounded in this simple analytic method....
The hierarchical nature of categories -- (see spectrum -- http://origin.org/one/spectrum.cfm )
top-down parsing
stipulative definition
Categories emerge within the framework of abstraction across levels
Is abstraction always across levels?
We can probably say yes, because it is always in the form of a ("Janus-faced") holon -- always in the form of a many-to-one mapping, unless we are talking about the "atomic" level -- the "lowest level" -- where we are talking about "undifferentiable units"
"Opposites" are defined along dimensions at high levels of abstraction. For convenience, we refer to that kind of dimension as "synthetic", because they are composed of nested sub-dimensions which characterize implicit features.
Are "hot and cold" opposites?
"Tall and short"?
"Rich and poor"?
The answer is "yes" if these terms are defined in the same dimensions.
Abstraction is a simple process, but because the study of epistemology and conceptual structure is not built on a common theoretical basis, abstraction remains controversial and not well understood.
In general concept formation, abstraction refers to the process of generalization, where general categories are created by combining similarities and ignoring differences.
Abstraction is a process for the symbolic representation of objects and categories. A name for some object or attribute of some object is an abstraction.
Abstraction is defined across a range of "levels" -- ranging from specific and concrete (where every object is essentially unique and "absolutely particular") to general and abstract. There can be any number of intervening levels in the categorization of objects.
Particular numeric and quantitative values drop out as categories become increasingly abstract and general.
From Wikipedia:
Abstraction in its main sense is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or "concrete") signifiers, first principles, or other methods. "An abstraction" is the product of this process — a concept that acts as a super-categorical noun for all subordinate concepts, and connects any related concepts as a group, field, or category.
Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only the aspects which are relevant for a particular purpose. For example, abstracting a leather soccer ball to the more general idea of a ball selects only the information on general ball attributes and behavior, eliminating the other characteristics of that particular ball. In a type–token distinction, a type (e.g., a 'ball') is more abstract than its tokens (e.g., 'that leather soccer ball').
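The filtering described in the soccer-ball example can be sketched as attribute selection. The particular attribute names below are assumptions made for the illustration; the point is that the type retains only the dimensions shared by its tokens.

```python
# A sketch of abstraction as attribute filtering: moving from token
# (this leather soccer ball) to type (a ball) drops the particulars.

soccer_ball = {
    "shape": "sphere", "bounces": True,  # general ball attributes
    "material": "leather", "pattern": "hexagons", "owner": "Joe",  # particulars
}

BALL_ATTRIBUTES = {"shape", "bounces"}  # assumed relevant dimensions of 'ball'

def abstract(token: dict, relevant: set) -> dict:
    # the abstraction keeps only the information relevant to the category
    return {k: v for k, v in token.items() if k in relevant}

print(abstract(soccer_ball, BALL_ATTRIBUTES))
# {'shape': 'sphere', 'bounces': True}
```

This also shows why particular quantitative values "drop out" as categories become more general: they belong to the token, not the type.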
"When I use a word," Humpty-Dumpty said, "it means just what I chose it to mean, neither more nor less."
There are many ways to understand the concept of word definition -- including, most popularly and commonly, the definition given in a dictionary (called a "lexical" definition by Wikipedia).
Here in this context, we take a stronger and more exacting view, and argue that in the end, when looked at carefully, all actual word usage, in any particular context, is most accurately understood as stipulative.
In other words, "Words mean what the person using the word intends for it to mean".
This is the most accurate psychological model of what people are actually doing, and the most faithful and workable way to understand what they mean.
Not only this -- when scientific or public agencies make decisions as to what words or symbols are to mean, those definitions are stipulated.
Actual word usage is context-specific -- intended by the user of the word for a specific purpose and a specific intended audience. We could make the case that it is always context-specific -- but rather than get tangled up in that, let's just say that "the best" way to understand definition is the stipulative approach.
If a term or word can be interpreted in more than one way, other approaches to definition do not resolve which interpretation is intended
Other approaches to definition do not imply the ability to "drill down" to detailed intended specifics
Wikipedia:
A stipulative definition is a type of definition in which a new or currently-existing term is given a specific meaning for the purposes of argument or discussion in a given context. When the term already exists, this definition may, but does not necessarily, contradict the dictionary (lexical) definition of the term. Because of this, a stipulative definition cannot be "correct" or "incorrect"; it can only differ from other definitions, but it can be useful for its intended purpose.
For example, in the riddle of induction by Nelson Goodman, "grue" was stipulated to be "a property of an object that makes it appear green if observed before some future time t, and blue if observed afterward." "Grue" has no meaning in standard English; therefore, Goodman created the new term and gave it a stipulative definition.
A governmental example is from the USNRC Final Safety Culture Policy Statement, which defines "nuclear safety culture" as “the core values and behaviors resulting from a collective commitment by leaders and individuals to emphasize safety over competing goals to ensure protection of people and the environment.” By this definition there can never be bad, poor, weak, or otherwise problematic nuclear safety culture. It is, by definition, always good.
Stipulative definitions of existing terms are useful in making theoretical arguments, or stating specific cases. For example:
Suppose we say that to love someone is to be willing to die for that person.
Take "human" to mean any member of the species Homo sapiens.
For the purposes of argument, we will define a "student" to be "a person under 18 enrolled in a local school."
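Stipulation as described above is essentially a context-local binding: the same term carries different meanings in different argument contexts. The dictionary entries below are invented for the illustration.

```python
# A sketch of stipulative definition as context-scoped binding.

lexical = {"student": "a person who studies"}  # illustrative dictionary sense

# within this argument, "student" is stipulated to mean something narrower
argument_context = {**lexical,
                    "student": "a person under 18 enrolled in a local school"}

def meaning(term: str, context: dict) -> str:
    # a term means what the operative context says; fall back to the lexicon
    return context.get(term, lexical.get(term, "<undefined>"))

print(meaning("student", argument_context))  # the stipulated sense wins
print(meaning("student", lexical))           # the lexical sense elsewhere
```

This mirrors the point that a stipulative definition is neither correct nor incorrect; it simply governs within its declared scope.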
Some of these are also precising definitions, a subtype of stipulative definition that may not contradict but only extend the lexical definition of a term. Theoretical definitions, used extensively in science and philosophy, are similar in some ways to stipulative definitions (although theoretical definitions are somewhat normative -- more like persuasive definitions).
Many holders of controversial and highly charged opinions use stipulative definitions in order to attach the emotional or other connotations of a word to the meaning they would like to give it; for example, defining "murder" as "the killing of any living thing for any reason." The other side of such an argument is likely to use a different stipulative definition for the same term: "the premeditated killing of a human being." The lexical definition in such a case is likely to fall somewhere in between.
When a stipulative definition is confused with a lexical definition within an argument there is a risk of equivocation.
The lexical definition of a term, also known as the dictionary definition, is the meaning of the term in common usage. As its other name implies, this is the sort of definition one is likely to find in the dictionary. A lexical definition is usually the type expected from a request for definition, and it is generally expected that such a definition will be stated as simply as possible in order to convey information to the widest audience.
Note that a lexical definition is descriptive, reporting actual usage within speakers of a language, and changes with changing usage of the term, rather than prescriptive, which would be to stick with a version regarded as "correct", regardless of drift in accepted meaning. They tend to be inclusive, attempting to capture everything the term is used to refer to, and as such are often too vague for many purposes.
When the breadth or vagueness of a lexical definition is unacceptable, a precising definition or a stipulative definition is often used.
Words can be classified as lexical or nonlexical. Lexical words are those that have independent meaning, such as a noun (N), verb (V), adjective (A), adverb (Adv), or preposition (P).
The definition which reports the meaning of a word or a phrase as it is actually used by people is called a lexical definition. Meanings of words given in a dictionary are lexical definitions. As a word may have more than one meaning, it may also have more than one lexical definition.
Lexical definitions are either true or false. If the definition is the same as the actual use of the word then it is true, otherwise it is false.
“I don’t know what you mean by ‘glory,’" Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t–till I tell you. I meant ‘there’s a nice knock-down argument for you!’”
“But ‘glory’ doesn’t mean ‘a nice knock-down argument,’” Alice objected.
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean–neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master–that’s all.”
Alice was too much puzzled to say anything; so after a minute Humpty Dumpty began again. “They’ve a temper, some of them–particularly verbs, they’re the proudest–adjectives you can do anything with, but not verbs–however, I can manage the whole lot of them! Impenetrability! That’s what I say!”
“Would you tell me, please,” said Alice, “what that means?”
“Now you talk like a reasonable child,” said Humpty Dumpty, looking very much pleased. “I meant by ‘impenetrability’ that we’ve had enough of that subject, and it would be just as well if you’d mention what you mean to do next, as I suppose you don’t mean to stop here all the rest of your life.”
“That’s a great deal to make one word mean,” Alice said in a thoughtful tone.
“When I make a word do a lot of work like that,” said Humpty Dumpty, “I always pay it extra.”
(Lewis Carroll, Through the Looking-Glass, 1871)
Ontology is the philosophical study of the nature of being, becoming, existence or reality as well as the basic categories of being and their relations. Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology often deals with questions concerning what entities exist or may be said to exist and how such entities may be grouped, related within a hierarchy, and subdivided according to similarities and differences. Although ontology as a philosophical enterprise is highly theoretical, it also has practical application in information science and technology, such as ontology engineering.
We are exploring a strong hypothesis regarding the absolute integration of knowledge within the One and are considering these claims:
All knowledge can be understood to exist within the same all-inclusive framework, or be understood as representing different aspects of the same framework.
Major traditional divisions -- such as "science" and "religion" -- can be held together within this framework and "reconciled" through a universal model of conceptual form that sees concepts at different levels of abstraction within a single framework.
We call this all-inclusive framework "integral" because it integrates all facets of understanding.
This integration implies a major transformation or "revolution" in science and philosophy, and has not yet been proven to be possible. But there are solid reasons for supposing this integration is feasible and will be achieved.
This project is about finding pieces of this puzzle and exploring different ways of combining them.
A wedding band in the form of a Moebius strip might offer a strong clue as to how this is possible.
THE ABSOLUTE
This integral framework proposes that all human discourse and deliberation can be grounded in the absolute. This is a radical claim in many regards.
Throughout history, any such affirmation would of course be seen as absolutely religious, and not grounded in a consensual reality that can be verified by science. This fact has always made any such claim questionable and extremely controversial, with very good reason. The problem of "church-state separation" is wisely based on this concern.
Any notion of "absolute" that is defined in vague or uncertain terms is not reliable. If the concept is ambiguous, its use can lead to conflicting alternative interpretations. This inherent potential for conflict is the basic reason for religious schism.
In this project, we are defining an ontology (a method and system for creating systematic word definitions and categories) which we believe is more or less consistent with widespread usage, and can be authentically derived from common usage in both science and religion.
The concept "absolute" as used here implies "highest level of abstraction". The absolute is the comprehensive and all-inclusive container of everything. It is the framework that contains reality and the context from which all particular concepts or distinctions emerge.
All human disagreements emerge in the context of this framework. There are many reasons for disagreement, and many of them are based on genuine differences that must be resolved through mediation/negotiation. But many others are grounded in semantic and logical weaknesses that we should repair.
EPISTEMOLOGICAL CONCEPTS
Distinction
Concept
Attribute / property / characteristic / quality
Dimension
Abstraction
Comparison
Metaphor
Analogy
Simile
Greater than / less than (in some dimension)
In rough terms, the derivation of this project from a universal scientific ontology:
All reality is held in the supreme context of "the one". This is the foundation not only of science, philosophy, and religion, but also of mathematics and systems science.
All words and concepts and ideas are derived from within the framework of this universal ontology.
The essential form of any concept is "holon" -- an abstract unit or "whole" composed of "parts", where the parts are either composite holons themselves, or are "bottom level" distinctions, which form the atomic level elements from which all (abstract, composite) concepts are constructed.
We are defining a universal scientific ontology that is consistent with the foundations of mathematics (real number line, Dedekind cut, definition of numbers) and at the same time, following basic insights from the Perennial Philosophy of religion, is grounded in "the absolute one", which can be and is known by many names.
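The "holon" form described above can be sketched as a recursive data type: a node is either a bottom-level distinction (an atomic cut, representable as a single bit) or a named whole composed of parts that are themselves holons or distinctions. This is a minimal illustrative sketch; the names "Distinction", "Holon", and the example concepts are assumptions, not part of the source.

```python
from dataclasses import dataclass
from typing import Tuple, Union

# A bottom-level distinction: the atomic "cut", representable as one bit.
@dataclass(frozen=True)
class Distinction:
    bit: int  # 0 or 1

# A holon: a named "whole" whose parts are themselves holons or distinctions.
@dataclass(frozen=True)
class Holon:
    name: str
    parts: Tuple["Node", ...]

Node = Union[Distinction, Holon]

def depth(node: Node) -> int:
    """Levels of composition above the atomic distinctions."""
    if isinstance(node, Distinction):
        return 0
    return 1 + max(depth(p) for p in node.parts)

# Example: a composite concept built from two lower-level wholes.
hot = Holon("hot", (Distinction(1),))
cold = Holon("cold", (Distinction(0),))
temperature = Holon("temperature", (hot, cold))
print(depth(temperature))  # 2
```

The point of the sketch is only that every composite concept bottoms out, after finitely many levels, in atomic distinctions.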
MYSTERY OF THE ABSOLUTE CONTAINER
We are supposing that there is a kind of mysterious algebraic puzzle at the core of this work, involving a conjunction of ideas with roots in intuition and modern mathematics. These ideas include:
Traditional intuitive symbolism
Mandala - many types
Tree of life
Kabbalah
Axis Mundi
Cross (Christianity)
Yin/Yang
Wheel of dharma
Ouroboros - snake swallowing its tail
Mathematical structures
Moebius strip
Klein bottle
Matrix row
Matrix column
Turing tape
Mathematical concepts
Cartesian coordinate frame
Dimension
Unit
Hierarchy
Tree
Circle
Mathematical foundations
Continuity
Set theory
Real number line
Dedekind cut
Boundary value
Limit
Concept of infinity
Concept of infinitesimal
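The Dedekind cut listed among these foundations can be made concrete: a real number is defined by the set of all rationals below it. In the illustrative sketch below (function names are assumptions), the square root of 2 is represented purely as a membership predicate on rationals, and bisection shows that this single cut pins the number down to arbitrary precision.

```python
from fractions import Fraction

def below_sqrt2(q: Fraction) -> bool:
    """Lower set of the Dedekind cut defining sqrt(2):
    all rationals q with q < 0 or q*q < 2."""
    return q < 0 or q * q < 2

def approximate(cut, lo: Fraction, hi: Fraction, steps: int = 30) -> Fraction:
    """Bisect between a rational inside the cut (lo) and one outside (hi)."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if cut(mid):
            lo = mid
        else:
            hi = mid
    return lo

x = approximate(below_sqrt2, Fraction(1), Fraction(2))
print(float(x))  # approximately 1.41421356...
```

Nothing about the number itself is stored anywhere; the "cut" predicate alone determines it, which is the sense in which a boundary can constitute an object.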
STRATEGY AND DESIGN
Get a general-purpose intuitive statement of the broad thesis and agenda, then slowly work to fill in the blanks and the pieces, showing how this general form unpacks into the endless myriad facets of symbolic representation and understanding.
Broad intuitions, the history and symbolic metaphors, the general closed space model, and then going through the details of each major facet.
Gather the fragments
This might be the best way to do this.
Recognize that this project exceeds cognitive bandwidth -- certainly my own, and probably anybody's.
So -- it will be necessary to work in localized parts of the problem at any one time.
General design might be one localized part.
Then individually work through the specific list of identified or suggested contributing elements.
Stay relaxed, and deal with what comes up. If there is a burst of inspiration or vision of some facet of the whole, try to locate that insight within this emerging broader design.
Go back and forth between these levels. Let the details help inform the geometry and topology of the whole, and help position and interconnect the details by mapping back from the whole.
Keep going over a broad outline, looking for ways to file subjects into the framework.
Remove redundancy (there will be a lot).
Can the entire subject be presented in a strict linear/hierarchical format (which an outline is)?
I am starting to like this very broad agenda for ontology -- now, where does this fit into the broader concept that is supposedly the intention of this "book" -- as outlined by Black Elk?
The idea of a universal ontology is often regarded with skepticism by working professionals. To some degree it's true that this kind of work is something of a religious quest, and the case can be made that those who pursue it are distracted and naive Don Quixotes chasing a useless dream while tilting at windmills.
But in this context, we are taking the strong view that this doubt and skepticism is myopic and based on something like "bad science". We make the case that "we can do better -- and here's how".
Briefly, elements of an argument against skepticism -- as based on Wikipedia quotes on "infeasibility" below.
A universal theory of conceptual structure IS possible and feasible -- as well as highly desirable.
There are good and solid reasons for pursuing this kind of work, and the right kind of solution could be enormously helpful.
Yes, there are "political" challenges and issues to be overcome -- but those are true for any sort of new scientific idea that may have disruptive consequences for an industry or establishment with a substantial investment in existing methods and operating principles/assumptions.
A universal ontology does not have to be defined as "above" or "ranked over" all other word meanings and definitions. It should be an "interpreter" rather than an "imposer". Like the "Rosetta Stone" -- it should understand and offer fluent interconnection between languages.
It does not have to be "complex", and defined by hundreds of rules. Indeed, it can and should be profoundly simple.
If defined in truly general terms, it does not have to emerge as "one ontology among many" -- and thus be forced to "compete" for dominance in the industry -- if it can clearly and unambiguously show that its inherent constructive methods are capable of defining any other ontology.
This universal ontology can and should be 100% flexible and adaptive and context-specific, without imposing "rules" over anything, other than making informed suggestions for better and faster ways to understand semantic constructions.
To the argument that there cannot be any one "God's eye view" of ontology (or anything else), we can respond by saying that there are indeed ways to rank theories as better or worse, and the approach we develop here is extremely simple ("parsimonious") and logically elegant. Generally speaking, our approach is written at a "lower level" than any approach defined by "primitives and axioms" -- since it is capable of constructing those objects, and argues that those objects must be explicitly constructed or the logic built on them will not be universal and unambiguous.
Problems in existing assumptions
A successful universal ontology is not likely to emerge through an empirical perspective. It almost certainly must be developed from a universal and "top-down" point of view. If its definitions emerge in some "bottom-up" format -- for example, as grounded in fundamental algebraic definitions like "continuity" -- those definitions must be universally applicable.
The notion of "primitive" concepts or axioms as widely accepted in science and mathematics is highly problematic. Definitions for a universal ontology cannot embody inherent ambiguity, and must be clearly resolved in a constructive way down to their "absolutely atomic" level.
The basic approach we are taking -- and which we continue to expand -- is briefly reviewed elsewhere in this project.
In information science, an upper ontology (also known as a top-level ontology or foundation ontology) is an ontology (in the sense used in information science) which describes very general concepts that are the same across all knowledge domains. An important function of an upper ontology is to support broad semantic interoperability among a large number of domain-specific ontologies which rank "under" this upper ontology. As the rank metaphor suggests, it is usually a hierarchy of entities and associated rules (both theorems and regulations) that attempts to describe those general entities that do not belong to a specific problem domain.
There have been many upper ontologies proposed, each with its own proponents. Each upper ontology can be considered as a computational implementation of natural philosophy, which itself is a more empirical method for investigating the topics within the philosophical discipline of physical ontology.
Development
Upper ontologies are also commercially valuable, creating competition to define them. Peter Murray-Rust has claimed that this leads to "semantic and ontological warfare due to competing standards", and accordingly any standard foundation ontology is likely to be contested among commercial or political parties, each with their own idea of "what exists". An important factor exacerbating the failure to arrive at a common approach has been the lack of open-source applications that would permit the testing of different ontologies in the same computational environment.
The differences have been debated largely on theoretical grounds, or are merely the result of personal preferences, with no method to objectively compare practical performance.
No particular upper ontology has yet gained widespread acceptance as a de facto standard. Different organizations have attempted to define standards for specific domains. The 'Process Specification Language' (PSL) created by the National Institute for Standards and Technology (NIST) is one example.
Another important factor leading to the absence of wide adoption of any existing upper ontology is the complexity. An upper ontology typically has from 2,000 to 10,000 elements (classes, relations), with complex interactions among them.
The resulting complexity is similar to that of a human natural language, and the learning process can be even longer than for a natural language because of the unfamiliar format and logical rules. The motivation to overcome this learning barrier is largely absent because of the paucity of publicly accessible examples of use.
As a result, those building domain ontologies for local applications tend to create the simplest possible domain-specific ontology, not related to any upper ontology. Such domain ontologies may function adequately for the local purpose, but they are very time-consuming to relate accurately to other domain ontologies.
There has been debate over whether the concept of using a single, shared upper ontology is even feasible or practical at all. There has been further debate over whether the debates are valid – often leading to outright censorship and boosterism of particular approaches in supposedly neutral sources. Some of these arguments are outlined below.
Arguments for the infeasibility of an upper ontology
Historically, many attempts in many societies have been made to impose or define a single set of concepts as more primal, basic, foundational, authoritative, true or rational than others.
In the kind of modern societies that have computers at all, the existence of academic and political freedoms imply that many ontologies will simultaneously exist and compete for adherents. While the differences between them may be narrow and appear petty to those not deeply involved in the process, so too did many of the theological debates of medieval Europe, but they still led to schisms or wars, or were used as excuses for same. The tyranny of small differences, that standard ontologies seek to end, may continue simply because other forms of tyranny are even less desirable. So private efforts to create competitive ontologies that achieve adherents by virtue of better communication may proceed, but tend not to result in long-standing monopolies.
A deeper objection derives from ontological constraints that philosophers have found historically inescapable. Some argue that a transcendent perspective or omniscience is implied by even searching for any general-purpose ontology – see God's eye view. Since any ontology is a social or cultural artifact, there is no purely objective perspective from which to observe the whole terrain of concepts and derive any one standard.
A narrower and much more widely held objection is implicature: the more general the concept and the more useful in semantic interoperability, the less likely it is to be reducible to symbolic concepts or logic and the more likely it is to be simply accepted by the complex beings and cultures relying on it. In the same sense that a fish doesn't perceive water, we don't see how complex and involved the process of understanding basic concepts is.
There is no self-evident way of dividing the world up into concepts, and certainly no non-controversial one.
There is no neutral ground that can serve as a means of translating between specialized (or "lower" or "application-specific") ontologies.
Human language itself is already an arbitrary approximation of just one among many possible conceptual maps. To draw any necessary correlation between English words and any number of intellectual concepts we might like to represent in our ontologies is just asking for trouble. (WordNet, for instance, is successful and useful precisely because it does not pretend to be a general-purpose upper ontology; rather, it is a tool for semantic / syntactic / linguistic disambiguation, which is richly embedded in the particulars and peculiarities of the English language.)
Any hierarchical or topological representation of concepts must begin from some ontological, epistemological, linguistic, cultural, and ultimately pragmatic perspective. Such pragmatism does not allow for the exclusion of politics between persons or groups, indeed it requires they be considered as perhaps more basic primitives than any that are represented.
Those who doubt the feasibility of general purpose ontologies are more inclined to ask “what specific purpose do we have in mind for this conceptual map of entities and what practical difference will this ontology make?” This pragmatic philosophical position surrenders all hope of devising the encoded ontology version of “everything that is the case,” (Wittgenstein, Tractatus Logico-Philosophicus).
According to Barry Smith in The Blackwell Guide to the Philosophy of Computing and Information (2004), "the initial project of building one single ontology, even one single top-level ontology, which would be at the same time non-trivial and also readily adopted by a broad population of different information systems communities, has largely been abandoned." (p. 159)
Finally there are objections similar to those against artificial intelligence. Technically, the complexity of concept acquisition and the social / linguistic interactions of human beings suggest any axiomatic foundation of "most basic" concepts must be cognitive, biological or otherwise difficult to characterize, since we don't have axioms for such systems. Ethically, any general-purpose ontology could quickly become an actual tyranny by recruiting adherents into a political program designed to propagate it and its funding means, and possibly defend it by violence. Historically, inconsistent and irrational belief systems have proven capable of commanding obedience to the detriment or harm of persons both inside and outside a society that accepts them. How much more harmful would a consistent rational one be, were it to contain even one or two basic assumptions incompatible with human life?
Aiming for absolute, almost inconceivable simplicity
Single undifferentiated "closed space" -- closed on itself in some circular way -- such as a Moebius Strip --
"Two becomes one" in an inherently undifferentiated form that is differentiated only by perspective or viewpoint
All conceptual structure built up from one algebraic primitive or process
That one primitive/process is "cut" -- or distinction
This cut or distinction at its bottom level of abstract representation can be defined by strict binary logic: 0/1
Cut or distinction arises or is driven by some "motivation" -- some reason, some purpose, some advantage that is given by asserting and identifying/naming and remembering this cut
All algebraic or conceptual or abstract objects are constructed as composite assemblies of cuts, which are combined in various ways and given names that identify useful composites
The fundamental universal form is
Many/one
Holon ("Janus-faced")
"A cut on a cut on a cut on a cut on a cut..."
All boundaries are cuts -- "distinctions between objects"
It may be critically important, or inherently necessary, to link abstract meaning and "name" to some "physical" manifestation or representation, employing fonts and alphabets and a physical medium (person, sheet of paper, computer) to support and contain and represent that composite of distinctions as a unitary "object"
From this model and framework arise many other significant algebraic constructions as consequences/implications. We can make a long list of objects and categories and definitions and principles that can be derived within it.
The "absolute foundation" is a "one-sided cut" where the distinction between 0 and 1 is dissolved and merged, and is entirely a matter of perspective. All other distinctions are also thus merged.
"What are the dimensions of this primal cut"?
If it is a "strip", how is it differentiated, and what is the meaning of the (x,y) values in its rectangular form? If its length is x, what is the y dimension?
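One way to make the "cut on a cut on a cut" recursion above concrete, purely as an illustrative sketch: start from an undivided unit interval, and let each binary cut split the current region in two. A composite concept is then the sequence of 0/1 choices that locates a region, and every further cut refines an earlier one. The function name and interval framing are assumptions for illustration.

```python
# Each "cut" splits a region of the unit interval in two; a concept is the
# sequence of 0/1 choices (a cut on a cut on a cut ...) that selects a region.
def region(cuts):
    """Map a sequence of binary cuts to the (lo, hi) subinterval it selects."""
    lo, hi = 0.0, 1.0
    for bit in cuts:
        mid = (lo + hi) / 2
        if bit == 0:
            hi = mid  # keep the lower side of the cut
        else:
            lo = mid  # keep the upper side
    return lo, hi

# "A cut on a cut on a cut": three successive distinctions.
print(region([1, 0, 1]))  # (0.625, 0.75)
```

In the limit of infinitely many cuts the region shrinks to a single point, which connects this recursion back to the real number line and the Dedekind cut.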
In computer science and information science, an ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that really or fundamentally exist for a particular domain of discourse. It is thus a practical application of philosophical ontology, with a taxonomy.
An ontology compartmentalizes the variables needed for some set of computations and establishes the relationships between them.
The fields of artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture all create ontologies to limit complexity and to organize information. The ontology can then be applied to problem solving.
ETYMOLOGY AND DEFINITION
The term ontology has its origin in philosophy and has been applied in many different ways. The word element onto- comes from the Greek ὄν ("being", "that which is"), present participle of the verb εἰμί ("be"). The core meaning within computer science is a model for describing the world that consists of a set of types, properties, and relationship types. There is also generally an expectation that the features of the model in an ontology should closely resemble the real world (related to the object).
OVERVIEW
What many ontologies have in common in both computer science and in philosophy is the representation of entities, ideas, and events, along with their properties and relations, according to a system of categories. In both fields, there is considerable work on problems of ontological relativity (e.g., Quine and Kripke in philosophy, Sowa and Guarino in computer science), and debates concerning whether a normative ontology is viable (e.g., debates over foundationalism in philosophy, and over the Cyc project in AI). Differences between the two are largely matters of focus. Computer scientists are more concerned with establishing fixed, controlled vocabularies, while philosophers are more concerned with first principles, such as whether there are such things as fixed essences or whether enduring objects must be ontologically more primary than processes.
Other fields make ontological assumptions that are sometimes explicitly elaborated and explored. For instance, the definition and ontology of economics (also sometimes called the political economy) is hotly debated especially in Marxist economics where it is a primary concern, but also in other subfields. Such concerns intersect with those of information science when a simulation or model is intended to enable decisions in the economic realm; for example, to determine what capital assets are at risk and if so by how much (see risk management). Some claim all social sciences have explicit ontology issues because they do not have hard falsifiability criteria like most models in physical sciences and that indeed the lack of such widely accepted hard falsification criteria is what defines a social or soft science.
HISTORY
Historically, ontologies arise out of the branch of philosophy known as metaphysics, which deals with the nature of reality – of what exists. This fundamental branch is concerned with analyzing various types or modes of existence, often with special attention to the relations between particulars and universals, between intrinsic and extrinsic properties, and between essence and existence. The traditional goal of ontological inquiry in particular is to divide the world "at its joints" to discover those fundamental categories or kinds into which the world’s objects naturally fall.
During the second half of the 20th century, philosophers extensively debated the possible methods or approaches to building ontologies without actually building any very elaborate ontologies themselves. By contrast, computer scientists were building some large and robust ontologies, such as WordNet and Cyc, with comparatively little debate over how they were built.
Since the mid-1970s, researchers in the field of artificial intelligence (AI) have recognized that capturing knowledge is the key to building large and powerful AI systems. AI researchers argued that they could create new ontologies as computational models that enable certain kinds of automated reasoning. In the 1980s, the AI community began to use the term ontology to refer to both a theory of a modeled world and a component of knowledge systems. Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy.
In the early 1990s, the widely cited Web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" by Tom Gruber is credited with a deliberate definition of ontology as a technical term in computer science. Gruber introduced the term to mean a specification of a conceptualization:
An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.
According to Gruber (1993):
Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions — that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world. To specify a conceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms.
COMPONENTS
Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. As mentioned above, most ontologies describe individuals (instances), classes (concepts), attributes, and relations. In this section each of these components is discussed in turn.
Common components of ontologies include:
Individuals
instances or objects (the basic or "ground level" objects)
Classes
Sets, collections, concepts, classes in programming, types of objects, or kinds of things
Attributes
Aspects, properties, features, characteristics, or parameters that objects (and classes) can have
Relations
Ways in which classes and individuals can be related to one another
Function terms
Complex structures formed from certain relations that can be used in place of an individual term in a statement
Restrictions
Formally stated descriptions of what must be true in order for some assertion to be accepted as input
Rules
Statements in the form of an if-then (antecedent-consequent) sentence that describe the logical inferences that can be drawn from an assertion in a particular form
Axioms
Assertions (including rules) in a logical form that together comprise the overall theory that the ontology describes in its domain of application. This definition differs from that of "axioms" in generative grammar and formal logic. In those disciplines, axioms include only statements asserted as a priori knowledge. As used here, "axioms" also include the theory derived from axiomatic statements
Events
The changing of attributes or relations
Ontologies are commonly encoded using ontology languages.
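The component list above can be mirrored in a minimal data model: a class hierarchy with subsumption, individuals typed by class, attributes, relation triples, and a restriction checked before an assertion is accepted. This is a hedged sketch, not any standard ontology language; the class names, individuals, and the "knows" restriction are invented for illustration.

```python
# Minimal ontology sketch: classes, individuals, attributes, relations,
# and one restriction checked before an assertion is accepted as input.
classes = {"Person": {"Agent"}, "Agent": set()}   # class -> superclasses
individuals = {"alice": "Person"}                 # individual -> class
attributes = {"alice": {"age": 30}}               # individual -> attributes
relations = []                                    # (subject, relation, object)

def is_a(cls: str, ancestor: str) -> bool:
    """Subsumption: walk the superclass hierarchy."""
    if cls == ancestor:
        return True
    return any(is_a(sup, ancestor) for sup in classes.get(cls, set()))

def assert_relation(subj: str, rel: str, obj: str) -> bool:
    """Restriction: 'knows' may only hold between Agents."""
    if rel == "knows":
        if not all(is_a(individuals[x], "Agent") for x in (subj, obj)):
            return False
    relations.append((subj, rel, obj))
    return True

individuals["bob"] = "Person"
print(assert_relation("alice", "knows", "bob"))  # True
```

Real ontology languages (OWL, for example) express the same components declaratively rather than procedurally, but the moving parts correspond one-to-one with the list above.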
TYPES
Domain ontology
A domain ontology (or domain-specific ontology) represents concepts which belong to part of the world. The domain ontology provides the particular meanings of terms as applied to that domain. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card" meaning of the word, while an ontology about the domain of computer hardware would model the "punched card" and "video card" meanings.
Since domain ontologies represent concepts in very specific and often eclectic ways, they are often incompatible. As systems that rely on domain ontologies expand, they often need to merge domain ontologies into a more general representation. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.).
At present, merging ontologies that are not developed from a common foundation ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same foundation ontology to provide a set of basic elements with which to specify the meanings of the domain ontology elements can be merged automatically. There are studies on generalized techniques for merging ontologies, but this area of research is still largely theoretical.
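The claim above -- that domain ontologies grounded in the same foundation ontology can be merged automatically -- can be sketched as follows. The foundation identifiers and domain terms here are invented assumptions, used only to show the mechanism: terms that map to the same foundation element are identified without any manual alignment.

```python
# Two domain ontologies ground their local terms in a shared foundation
# ontology; merging is then mechanical grouping by foundation identifier.
poker_en = {"card": "F:PlayingCard", "deck": "F:Collection"}
poker_fr = {"carte": "F:PlayingCard", "jeu": "F:Collection"}

def merge(*ontologies):
    """Group local terms under the shared foundation element they map to."""
    merged = {}
    for onto in ontologies:
        for term, foundation_id in onto.items():
            merged.setdefault(foundation_id, set()).add(term)
    return merged

merged = merge(poker_en, poker_fr)
# "card" and "carte" are now known to denote the same foundation concept.
```

Without the shared foundation identifiers, establishing that "card" and "carte" coincide would be exactly the manual, expensive alignment work the passage describes.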
Upper ontology
An upper ontology (or foundation ontology) is a model of the common objects that are generally applicable across a wide range of domain ontologies. It usually employs a core glossary that contains the terms and associated object descriptions as they are used in various relevant domain sets.
Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes, and relations.
OVERVIEW
Common components of ontologies include:
Individuals: instances or objects (the basic or "ground level" objects)
Classes: sets, collections, concepts, types of objects, or kinds of things.
Attributes: aspects, properties, features, characteristics, or parameters that objects (and classes) can have
Relations: ways in which classes and individuals can be related to one another
Function terms: complex structures formed from certain relations that can be used in place of an individual term in a statement
Restrictions: formally stated descriptions of what must be true in order for some assertion to be accepted as input
Rules: statements in the form of an if-then (antecedent-consequent) sentence that describe the logical inferences that can be drawn from an assertion in a particular form
Axioms: assertions (including rules) in a logical form that together comprise the overall theory that the ontology describes in its domain of application. This definition differs from that of "axioms" in generative grammar and formal logic. In these disciplines, axioms include only statements asserted as a priori knowledge. As used here, "axioms" also include the theory derived from axiomatic statements.
Events: the changing of attributes or relations
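As a rough illustration (not drawn from any actual ontology language), the components above can be sketched as simple data structures. All names here -- OntClass, Individual, and the example values -- are hypothetical:

```python
# A minimal, illustrative sketch of common ontology components.
# These class names and the relation format are invented for this note;
# real ontology languages (OWL, etc.) define such constructs rigorously.

class OntClass:
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)   # subsumption: the parent classes

class Individual:
    def __init__(self, name, ont_class):
        self.name = name
        self.ont_class = ont_class     # the class that classifies this individual
        self.attributes = {}           # attribute name -> value

# Relations link individuals (or classes) as (subject, relation-type, object).
relations = []

vehicle = OntClass("Vehicle")
car = OntClass("Car", parents=[vehicle])
explorer = Individual("Ford Explorer", car)
explorer.attributes["doors"] = 4
relations.append((explorer.name, "is-classified-by", car.name))
```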
INDIVIDUALS
Individuals (instances) are the basic, "ground level" components of an ontology. The individuals in an ontology may include concrete objects such as people, animals, tables, automobiles, molecules, and planets, as well as abstract individuals such as numbers and words (although there are differences of opinion as to whether numbers and words are classes or individuals). Strictly speaking, an ontology need not include any individuals, but one of the general purposes of an ontology is to provide a means of classifying individuals, even if those individuals are not explicitly part of the ontology.
In formal extensional ontologies, only the utterances of words and numbers are considered individuals – the numbers and names themselves are classes. In a 4D ontology, an individual is identified by its spatio-temporal extent. Examples of formal extensional ontologies are ISO 15926 and the model in development by the IDEAS Group.
CLASSES
Classes – concepts that are also called type, sort, category, and kind – can be defined as an extension or an intension. According to an extensional definition, they are abstract groups, sets, or collections of objects. According to an intensional definition, they are abstract objects that are defined by the values of aspects that are constraints for being a member of the class. The first definition of class results in ontologies in which a class is a subclass of collection. The second definition results in ontologies in which collections and classes are more fundamentally different. Classes may classify individuals, other classes, or a combination of both.
Some examples of classes:
Person, the class of all people, or the abstract object that can be described by the criteria for being a person.
Vehicle, the class of all vehicles, or the abstract object that can be described by the criteria for being a vehicle.
Car, the class of all cars, or the abstract object that can be described by the criteria for being a car.
Class, representing the class of all classes, or the abstract object that can be described by the criteria for being a class.
Thing, representing the class of all things, or the abstract object that can be described by the criteria for being a thing (and not nothing).
Ontologies vary on whether classes can contain other classes, whether a class can belong to itself, whether there is a universal class (that is, a class containing everything), etc. Sometimes restrictions along these lines are made in order to avoid certain well-known paradoxes.
The classes of an ontology may be extensional or intensional in nature. A class is extensional if and only if it is characterized solely by its membership. More precisely, a class C is extensional if and only if for any class C', if C' has exactly the same members as C, then C and C' are identical. If a class does not satisfy this condition, then it is intensional. While extensional classes are more well-behaved and well-understood mathematically, as well as less problematic philosophically, they do not permit the fine grained distinctions that ontologies often need to make. For example, an ontology may want to distinguish between the class of all creatures with a kidney and the class of all creatures with a heart, even if these classes happen to have exactly the same members. In most upper ontologies, the classes are defined intensionally. Intensionally defined classes usually have necessary conditions associated with membership in each class. Some classes may also have sufficient conditions, and in those cases the combination of necessary and sufficient conditions make that class a fully defined class.
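The kidney/heart example can be made concrete in a few lines. This sketch (with an invented domain of creatures and invented property names) shows two classes whose extensions coincide even though their defining criteria differ:

```python
# Two classes defined by different membership criteria (intensions)
# that happen to have the same members (extensions).
# The domain and predicates are invented for illustration.

creatures = {
    "cat":   {"has_kidney": True,  "has_heart": True},
    "dog":   {"has_kidney": True,  "has_heart": True},
    "robot": {"has_kidney": False, "has_heart": False},
}

def extension(criterion):
    """The extension of a class: the set of individuals satisfying its criterion."""
    return {name for name, props in creatures.items() if criterion(props)}

with_kidney = extension(lambda p: p["has_kidney"])
with_heart  = extension(lambda p: p["has_heart"])

# Extensionally identical...
assert with_kidney == with_heart
# ...yet defined by different criteria, so an intensional ontology
# would still treat them as two distinct classes.
```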
Importantly, a class can subsume or be subsumed by other classes; a class subsumed by another is called a subclass (or subtype) of the subsuming class (or supertype). For example, Vehicle subsumes Car, since (necessarily) anything that is a member of the latter class is a member of the former. The subsumption relation is used to create a hierarchy of classes, typically with a maximally general class like Anything at the top, and very specific classes like 2002 Ford Explorer at the bottom. The critically important consequence of the subsumption relation is the inheritance of properties from the parent (subsuming) class to the child (subsumed) class. Thus, anything that is necessarily true of a parent class is also necessarily true of all of its subsumed child classes. In some ontologies, a class is only allowed to have one parent (single inheritance), but in most ontologies, classes are allowed to have any number of parents (multiple inheritance), and in the latter case all necessary properties of each parent are inherited by the subsumed child class. Thus a particular class of animal (HouseCat) may be a child of the class Cat and also a child of the class Pet.
(Figure caption: A partial ontology. The class Car has as subsumed classes 2-Wheel Drive Car and 4-Wheel Drive Car.)
A partition is a set of related classes and associated rules that allow objects to be classified by the appropriate subclass. The rules correspond to the aspect values that distinguish the subclasses from the superclass. For example, the partial ontology described above partitions the Car class into the classes 2-Wheel Drive Car and 4-Wheel Drive Car. The partition rule (or subsumption rule) determines whether a particular car is classified by the 2-Wheel Drive Car or the 4-Wheel Drive Car class.
If the partition rule(s) guarantee that a single Car cannot be in both classes, then the partition is called a disjoint partition. If the partition rules ensure that every concrete object in the super-class is an instance of at least one of the partition classes, then the partition is called an exhaustive partition.
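These two properties of a partition can be checked mechanically. A minimal sketch, using invented data for the Car partition from the text (the labels and the representation are illustrative, not any standard format):

```python
# Checking whether a partition of a class is disjoint and exhaustive,
# using the Car example. The car instances are invented for illustration.

cars = {
    "sedan_a":  "2-Wheel Drive Car",
    "explorer": "4-Wheel Drive Car",
    "sedan_b":  "2-Wheel Drive Car",
}
partition_classes = {"2-Wheel Drive Car", "4-Wheel Drive Car"}

def members(cls):
    return {name for name, c in cars.items() if c == cls}

# Disjoint: no car falls in two partition classes (trivially true here,
# since each car carries exactly one class label).
pairs = [(a, b) for a in partition_classes for b in partition_classes if a < b]
disjoint = all(members(a).isdisjoint(members(b)) for a, b in pairs)

# Exhaustive: every car in the superclass falls in some partition class.
exhaustive = set(cars) == set().union(*(members(c) for c in partition_classes))

assert disjoint and exhaustive
```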
ATTRIBUTES
Objects in an ontology can be described by relating them to other things, typically aspects or parts. These related things are often called attributes, although they may be independent things. Each attribute can be a class or an individual. The kind of object and the kind of attribute determine the kind of relation between them. A relation between an object and an attribute expresses a fact that is specific to the object to which it is related. For example, the Ford Explorer object has attributes such as:
Ford Explorer
door (with minimum and maximum cardinality 4)
{4.0L engine, 4.6L engine}
6-speed transmission
The value of an attribute can be a complex data type; in this example, the related engine can only be one of a list of subtypes of engines, not just a single thing.
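A hedged sketch of how such attribute constraints might be checked -- the constraint format, attribute names, and `check` function are all invented for illustration:

```python
# Attribute constraints from the Ford Explorer example: a cardinality
# bound on "door" and an enumerated set of engine subtypes.
# The constraint representation is an invented, minimal format.

constraints = {
    "door":   {"min": 4, "max": 4},
    "engine": {"one_of": {"4.0L engine", "4.6L engine"}},
}

def check(attr, value):
    """Return True if the value satisfies the attribute's constraint."""
    c = constraints[attr]
    if "one_of" in c:                      # enumerated complex data type
        return value in c["one_of"]
    return c["min"] <= value <= c["max"]   # cardinality bounds

assert check("door", 4)
assert not check("door", 2)
assert check("engine", "4.0L engine")
assert not check("engine", "2.0L engine")
```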
Ontologies are only true ontologies if concepts are related to other concepts (the concepts do have attributes). If that is not the case, then you would have either a taxonomy (if hyponym relationships exist between concepts) or a controlled vocabulary. These are useful, but are not considered true ontologies.
RELATIONSHIPS
Relationships (also known as relations) between objects in an ontology specify how objects are related to other objects. Typically, a relation is of a particular type (or class) that specifies in what sense one object is related to another object in the ontology. For example, in an ontology containing the concepts Ford Explorer and Ford Bronco, the two might be related by a relation of type "is defined as a successor of". The full expression of that fact then becomes:
Ford Explorer is defined as a successor of: Ford Bronco
This tells us that the Explorer is the model that replaced the Bronco. This example also illustrates that the relation has a direction of expression. The inverse expression expresses the same fact, but with a reverse phrase in natural language.
Much of the power of ontologies comes from the ability to describe relations. Together, the set of relations describes the semantics of the domain. The set of used relation types (classes of relations) and their subsumption hierarchy describe the expression power of the language in which the ontology is expressed.
Ford Explorer is-a-subclass-of 4-Wheel Drive Car, which in turn is-a-subclass-of Car.
An important type of relation is the subsumption relation (is-a-superclass-of, the converse of is-a, is-a-subtype-of or is-a-subclass-of). This defines which objects are classified by which class. For example, we have already seen that the class Ford Explorer is-a-subclass-of 4-Wheel Drive Car, which in turn is-a-subclass-of Car.
The addition of is-a-subclass-of relationships creates a taxonomy: a tree-like structure (or, more generally, a partially ordered set) that clearly depicts how objects relate to one another. In such a structure, each object is the 'child' of a 'parent class'. (Some languages restrict the is-a-subclass-of relationship to one parent for all nodes, but many do not.)
Another common type of relation is the mereological relation, written as part-of, which represents how objects combine to form composite objects. For example, if we extended our example ontology to include concepts like Steering Wheel, we would say that a "Steering Wheel is-by-definition-a-part-of-a Ford Explorer", since a steering wheel is always one of the components of a Ford Explorer. If we introduce meronymy relationships into our ontology, the hierarchy that emerges can no longer be represented by a simple tree-like structure, since members can now appear under more than one parent or branch. Instead, the new structure that emerges is known as a directed acyclic graph.
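A sketch of the part-of hierarchy as a directed acyclic graph; the component data and the `ancestors` helper are invented for illustration:

```python
# The part-of hierarchy as a directed acyclic graph: a part may appear
# under more than one parent, so a tree no longer suffices.
# The component data is invented for illustration.

part_of = {
    "Steering Wheel": {"Ford Explorer", "Ford Bronco"},  # two parents -> DAG
    "Ford Explorer":  {"Car"},
    "Ford Bronco":    {"Car"},
}

def ancestors(node):
    """All composites that (transitively) contain the given part."""
    seen = set()
    stack = [node]
    while stack:
        for parent in part_of.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

assert ancestors("Steering Wheel") == {"Ford Explorer", "Ford Bronco", "Car"}
```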
As well as the standard is-a-subclass-of and is-by-definition-a-part-of-a relations, ontologies often include additional types of relations that further refine the semantics they model. Ontologies might distinguish between different categories of relation types. For example:
relation types for relations between classes
relation types for relations between individuals
relation types for relations between an individual and a class
relation types for relations between a single object and a collection
relation types for relations between collections
Relation types are sometimes domain-specific and are then used to store specific kinds of facts or to answer particular types of questions. If the definitions of the relation types are included in an ontology, then the ontology defines its own ontology definition language. An example of an ontology that defines its own relation types and distinguishes between various categories of relation types is the Gellish ontology.
For example, in the domain of automobiles, we might need a made-in type of relationship which tells us where each car is built. So the Ford Explorer is made-in Louisville. The ontology may also know that Louisville is-located-in Kentucky, and that Kentucky is-classified-as-a state and is-a-part-of the U.S. Software using this ontology could then answer a question like "which cars are made in the U.S.?"
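A rough sketch of how software might chain these facts to answer that question. The triple representation and the helper functions are illustrative assumptions, not any standard query engine:

```python
# Answering "which cars are made in the U.S.?" by chaining made-in with
# the containment relations is-located-in and is-a-part-of.
# The triples mirror the example facts in the text.

triples = [
    ("Ford Explorer", "made-in",       "Louisville"),
    ("Louisville",    "is-located-in", "Kentucky"),
    ("Kentucky",      "is-a-part-of",  "U.S."),
]
CONTAINMENT = {"is-located-in", "is-a-part-of"}

def located_within(place, region):
    """True if the place lies (transitively) within the region."""
    if place == region:
        return True
    return any(located_within(o, region)
               for s, r, o in triples if s == place and r in CONTAINMENT)

def cars_made_in(region):
    return {s for s, r, o in triples
            if r == "made-in" and located_within(o, region)}

assert cars_made_in("U.S.") == {"Ford Explorer"}
```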
The objective of this section is to review the idea of absolute foundations, take a look at the tradition and its historical exemplars, and list the raw elements in big-picture terms. Of course, this idea is historically unprecedented and experimental.
What these things are, why they matter
The absolute ontological foundations -- big picture, highly controversial, common ground, fragmented and confused at best
Society
Language
Intuition
Religion and spirituality
Intuitive foundations
Circle
Ouroboros
Relative / Absolute (defined relative to "wholeness")
We are exploring the notion of a "universal container" -- a "universal set" -- a "set of all sets" -- a "container that contains everything conceivable" -- a container that contains every container -- a container of everything that contains itself...
These ideas are explored in the traditional philosophy of mathematics.
This idea is generally ruled out or seen as untenable. We are considering the idea that something like the "one-sided" structure of the Moebius Strip -- perhaps symbolized in traditional metaphysics by the Ouroboros (the snake swallowing its tail) -- can help address or overcome the weaknesses of these traditional approaches.
In set theory, a universal set is a set which contains all objects, including itself.
This is a confusing or ambiguous or counter-intuitive idea. If a house is a container, does it "contain itself"? If a jar is a container, does it contain itself? No -- not in any ordinary intuitive or "common" sense way. So the entire question seems artificial, artifactual, an unreal question that arises as an artifact of logic rather than an inherent property of reality. The issue suggests that there is something wrong with the logic -- and this seems to be the conclusion of leading mathematicians.
Instead -- what does seem to make sense, and might be the most desirable objective, involves the question of a "set that contains everything".
So we are presuming "set as container" -- that the concept of container is the right way to understand a set. Sets contain things -- they are boundaries around things -- they contain things by setting boundaries -- and the things they contain are defined in various ways, and are most commonly called their "members". A set contains the "members of the set" -- the things that are defined within the boundaries of the set.
We want to define sets and membership in a set by means of boundary values -- defining a set (or class or category) as an "n-dimensional envelope" with upper and lower boundary values in each defining dimension.
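A minimal sketch of this "n-dimensional envelope" idea, with an invented category and invented boundary values:

```python
# "Set as n-dimensional envelope": membership is defined by upper and
# lower boundary values in each defining dimension.
# The category ("tall and warm") and its bounds are invented for illustration.

# Each dimension maps to a (lower, upper) boundary pair.
tall_warm = {"height_cm": (180, 250), "temp_c": (20, 45)}

def member(envelope, point):
    """A point is inside the envelope iff it lies within every dimension's bounds."""
    return all(lo <= point[dim] <= hi for dim, (lo, hi) in envelope.items())

assert member(tall_warm, {"height_cm": 190, "temp_c": 30})
assert not member(tall_warm, {"height_cm": 150, "temp_c": 30})
```

On this view, "drawing a line" or "creating a boundary" is literally the act of fixing these boundary values for some dimension.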
In set theory as usually formulated, the conception of a universal set leads to a paradox (Russell's paradox) and is consequently not allowed. However, some non-standard variants of set theory include a universal set. It is often symbolized by V.
Reasons for nonexistence
Zermelo–Fraenkel set theory and related set theories, which are based on the idea of the cumulative hierarchy, do not allow for the existence of a universal set. Its existence would cause paradoxes which would make the theory inconsistent.
What is a cumulative hierarchy?
A universal set "contains the universe"
But -- one would think -- if this thing exists -- the universe contains it. It's only an idea, some scribbles on a piece of paper or in a computer. But that thing does exist within or inside the universe. So, these things are "defined at different levels." One is "the real thing" -- and the other is just an abstract symbol representing the real thing.
Russell's paradox
Russell's paradox prevents the existence of a universal set in Zermelo–Fraenkel set theory and other set theories that include Zermelo's axiom of comprehension. This axiom states that, for any formula φ(x) and any set A, there exists another set

    {x ∈ A | φ(x)}

that contains exactly those elements x of A that satisfy φ. If a universal set V existed and the axiom of comprehension could be applied to it, then there would also exist another set {x ∈ V | x ∉ x}, the set of all sets that do not contain themselves. However, as Bertrand Russell observed, this set is paradoxical: if it contains itself, then it should not contain itself, and vice versa. For this reason, it cannot exist.
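The diagonal argument can even be illustrated with finite sets: for any collection A whatsoever, the separation-style set {x ∈ A | x ∉ x} can never itself be a member of A. A small sketch, with Python frozensets standing in for sets:

```python
# A finite illustration of the Russell/diagonal argument: for ANY
# collection A, the set R = {x in A | x not in x} cannot itself be a
# member of A -- so no set can contain all sets.

def russell(A):
    """The separation-style set {x in A | x not in x}."""
    return frozenset(x for x in A if x not in x)

# Frozensets can never contain themselves, so here R simply equals A.
A = frozenset({frozenset(), frozenset({frozenset()})})
R = russell(A)

# If R were in A, then R in R iff R not in R -- a contradiction.
# And indeed R lies outside A:
assert R not in A
```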
Cantor's theorem
A second difficulty with the idea of a universal set concerns the power set of the set of all sets. Because this power set is a set of sets, it would automatically be a subset of the set of all sets, provided that both exist. However, this conflicts with Cantor's theorem that the power set of any set (whether infinite or not) always has strictly higher cardinality than the set itself.
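For finite sets the theorem is easy to verify directly: the power set of an n-element set has 2^n elements, which is always strictly greater than n. A small sketch:

```python
# Cantor's theorem illustrated for finite sets: the power set always has
# strictly greater cardinality (2^n > n), so a "set of all sets" could
# not accommodate its own power set.
from itertools import combinations

def power_set(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

for n in range(6):
    s = set(range(n))
    assert len(power_set(s)) == 2 ** n > n
```

(Cantor's actual proof is again a diagonal argument, and covers infinite sets as well; the finite case above is just the easily computable corner of it.)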
Theories of universality
The difficulties associated with a universal set can be avoided either by using a variant of set theory in which the axiom of comprehension is restricted in some way, or by using a universal object that is not considered to be a set.
Restricted comprehension
There are set theories known to be consistent (if the usual set theory is consistent) in which the universal set V does exist (and V ∈ V is true). In these theories, Zermelo's axiom of comprehension does not hold in general, and the axiom of comprehension of naive set theory is restricted in a different way. A set theory containing a universal set is necessarily a non-well-founded set theory. The most widely studied set theory with a universal set is Willard Van Orman Quine's New Foundations. Alonzo Church and Arnold Oberschelp also published work on such set theories. Church speculated that his theory might be extended in a manner consistent with Quine's,[2][3] but this is not possible for Oberschelp's, since in it the singleton function is provably a set,[4] which leads immediately to paradox in New Foundations.[5] The most recent advances in this area have been made by Randall Holmes, who published an online draft version of the book Elementary Set Theory with a Universal Set in 2012.[6]
Another example is positive set theory, where the axiom of comprehension is restricted to hold only for the positive formulas (formulas that do not contain negations). Such set theories are motivated by notions of closure in topology.
Universal objects that are not sets
The idea of a universal set seems intuitively desirable in the Zermelo–Fraenkel set theory, particularly because most versions of this theory do allow the use of quantifiers over all sets (see universal quantifier). One way of allowing an object that behaves similarly to a universal set, without creating paradoxes, is to describe V and similar large collections as proper classes rather than as sets. One difference between a universal set and a universal class is that the universal class does not contain itself, because proper classes cannot be elements of other classes. Russell's paradox does not apply in these theories because the axiom of comprehension operates on sets, not on classes.
The category of sets can also be considered to be a universal object that is, again, not itself a set. It has all sets as elements, and also includes arrows for all functions from one set to another. Again, it does not contain itself, because it is not itself a set.
Notes
Forster 1995 p. 1.
Church 1974 p. 308. See also Forster 1995 p. 136 or 2001 p. 17.
Flash Sheridan (2016). "A Variant of Church's Set Theory with a Universal Set in which the Singleton Function is a Set". Logique et Analyse 59 (233), §0.2. doi:10.2143/LEA.233.0.3149532.
Oberschelp 1973 p. 40.
Holmes 1998 p. 110.
"Overview of Randall Holmes's Home Page".
References
Alonzo Church (1974). “Set Theory with a Universal Set,” Proceedings of the Tarski Symposium. Proceedings of Symposia in Pure Mathematics XXV, ed. L. Henkin, American Mathematical Society, pp. 297–308.
T. E. Forster (1995). Set Theory with a Universal Set: Exploring an Untyped Universe (Oxford Logic Guides 31). Oxford University Press. ISBN 0-19-851477-8.
T. E. Forster (2001). “Church’s Set Theory with a Universal Set.”
Bibliography: Set Theory with a Universal Set, originated by T. E. Forster and maintained by Randall Holmes at Boise State University.
Randall Holmes (1998). Elementary Set theory with a Universal Set, volume 10 of the Cahiers du Centre de Logique, Academia, Louvain-la-Neuve (Belgium).
Arnold Oberschelp (1973). “Set Theory over Classes,” Dissertationes Mathematicae 106.
Willard Van Orman Quine (1937) “New Foundations for Mathematical Logic,” American Mathematical Monthly 44, pp. 70–80.
Is the Moebius Band a "cut on a cut"? Can it be? How could it be? A "cut on itself" made possible because of the twist or one-sidedness?
Can it somehow intersect itself?
Can it define a single point as an x,y coordinate or origin?
Can it be a (the) monad?
If we twist the center of the monad we can (?) connect the center to the outer rim (?)
If we do this -- does this create an "orthogonal" ("perpendicular") intersection or crossing?
September 15, 2016
The Moebius strip is a leading edge -- one edge of it --
Argument, case: this form is "The Logos" -- "the unitary object that contains everything and from which all things emerge, including all concepts, all categories, and all logic"
Our issue is to show how this is true -- by
1) defining the fundamental integral algebra, and
2) mapping from this algebra into a wide variety of specific applications and areas that we think are important and can be usefully understood in this way
The strip is of zero thickness
Its length is the unit interval, which is recursively parsed across a potential infinity of levels
One edge of the strip is the unit interval -- that becomes infinite on the strip
The other side of the strip is also infinite -- it is the infinitesimal, the real number line defining continuity
The tfractal framework is defined across the moebius strip
The top level (leading edge) of the strip is "the one" at the highest level of the universal ontology -- and the lowest level (trailing edge) is continuity at the infinitesimal level
So we want to characterize the basic dimensions of the general tfractal framework
We want to talk about how the lowest level is "recursively isomorphic" to the top level -- "self-similar all the way down"
The strip turns and closes on itself -- the infinite maps into the infinitesimal -- because they are both defined as "one unit in the unit interval" -- with no bound
Or maybe there are bounds at the lowest level that bound the highest level (??)
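The "recursive parsing" of the unit interval described above can be sketched as repeated cuts, each cut subdividing a previously cut cell -- a toy rendering of "cut on a cut", using bisection as one arbitrary choice of cutting rule:

```python
# A toy sketch of "cut on a cut": recursively cutting the unit interval,
# so every level's cells are self-similar images of the whole.
# Bisection is just one possible cutting rule; any interior point would do.

def cut(interval, depth):
    """Recursively bisect an interval to the given depth of levels."""
    lo, hi = interval
    if depth == 0:
        return [interval]
    mid = (lo + hi) / 2
    return cut((lo, mid), depth - 1) + cut((mid, hi), depth - 1)

cells = cut((0.0, 1.0), 3)
assert len(cells) == 8          # 2^3 self-similar cells
assert cells[0] == (0.0, 0.125)
assert cells[-1] == (0.875, 1.0)
```

As depth grows without bound, the cells shrink toward the infinitesimal while their count grows toward the infinite -- the two "edges" this note is trying to connect.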
September 7, 2016
Woke up last night with a lot of thoughts on this, but did not take any notes, other than to sketch down the title. The idea was to focus on this one subject and write everything I can think of about this, and keep this topic in focus (using showonly).
In this writing framework, it might be good to group some larger themes together -- even though that might be artificial and misleading.
The intuitive claim is: the desired solution is a single algebraic form based on all these elements -- combining all these elements in some way (and defining some of them precisely?)
Or is this not about defining more than absolute foundational elements -- like a cut, and how a cut is given a label
Cutting edge - where is the edge that makes the distinction -- makes the cut
What is "the fundamental distinction"? It is the distinction between one and two -- two sides that are a unit (?)
Do "1" and "2" of anything require a unit -- or does a "pure number" have a meaning? Does a pure number exist, or does a number always require a unit
What is the dynamic -- the larger intuitive picture of what is happening? Reality is undifferentiated and without concepts. Mind is a tabula rasa. But then -- motivation arises. Hunger, some factor in survival or well-being -- and it becomes helpful to make a distinction -- tall or short, warm or cold, tasty or poisonous, safe or unsafe -- and those concepts begin to be stored in the mind in some way.
Should I make this theme a topic in the broader database system? Or is that a waste of time and a diversion? Maybe I should hammer on this with the simplest intuitive approach -- just a raw list that I sort out by hand....
Question: How does this form create a "cut in itself"? Is it doing that? Yes -- it seems -- this form creates a cut in itself. That cut replicates and creates all cuts (?)
That could be the primary question (assuming that it does): How does a Moebius Strip make a cut in itself? And relating to my own spirituality -- how does this cut relate to spiritual identity and the realization of wholeness....?? And -- how does all of reality branch from this cut? Is this cut the primary trunk of the tree? The Christ, the Whole, the Axis Mundi?
WHAT CAN EMERGE FROM THIS?
Why do this, how does it help, what is the explanatory power, what practical problems does it solve?
This subject is (highly) confused and scattered in the real world
There is no agreement on any one single framework or formulation -- for many reasons, perhaps healthy reasons
But regardless of why the subject is fragmented, this fragmentation is highly problematic, because people cannot work together (their potential for collaborative work is severely limited)
Is this a foundation for all law? (yes)
Is this a foundation for all semantics and all meaning? (yes)
Is this a foundation for all mathematics and logic? (yes)
Is this a foundation for all categorization? (yes)
Are these wild claims? (very likely)
ABSOLUTE FOUNDATION
From an intuitive and somewhat fragmented perspective, it seems that all these elements are facets or interpretations of an invariant underlying topological form that is "closed on itself". Let's take some time to gather up as many of these elements as seem to emerge as likely facets or interpretations, and experiment with various ways to combine them all into a single unit.
The notion of a "one-sided strip" -- composed of differentiable cells defined in two dimensions, closing on itself to form a single "one-sided surface" -- seems ultimately profound, and capable of integrating into a single conceptual form all the laws of logic, mathematics, and conceptual structure as we know them. We want to figure out if and how this can be true, or which facets of the idea are true.
This is a statement about conceptual form. A "correspondence theory of reality" can connect abstractions to experience.
Visual or topological imagery
Moebius band (strip)
Ouroboros (snake swallowing its tail)
Cascaded coordinate frame connecting absolute to relative
The nature of abstraction - how does this form contain the essence of abstraction?
The general form of abstraction is "a cut on a cut on a cut on a cut on a cut" across descending levels
Holonic -- a many/one
Relationship of the general and the specific - levels of containers
Except in situations that are controlled by formal agreement among individuals or groups so as to facilitate cooperation, this parsing is always ad hoc and context specific
The One
Undifferentiated
Generic name for God depending on how viewed
Tao
The Absolute
Top level in the ontology
What is a "unit"?
Holon - part/whole - many/one
The infinite (absolutely large) closes on itself to connect to the infinitesimal (absolutely small) -- with everything else that exists (all categories, all distinctions) "in between"
One-sided
An "edge" that is itself a "cut"
It "cuts itself"
Real number line
Continuity
Definition of real numbers
Definition of integer or "whole" numbers
"Cuts" or distinctions separating these distinctions, as "boundaries" between them
Do these boundaries themselves have "thickness"? (the presumption is yes they do)
A "strip" -- with x ("width") and y ("height") dimensions -- with x also being "length"
Turing tape?
Matrix row with cells
Database row with cells
"Length" is "number of cells"
Unit circle
The length or height is 1
Unit square
The length or height is 1
An "absolute" coordinate frame, with an "absolute invariant centerpoint" (origin), from which (in some fashion to be determined) an infinitude of "relative" coordinate frames cascade and are dependently defined
How are these relative frames "dependent"? They are nested within one another like a cascade of fractals, and all cascade from the single central origin
This absolute central origin is defined "at the highest level of abstraction" -- it is the top level in the ontology
This definition implies absolute linear inheritance -- absolute linear hierarchy
Buddhist "beginningless universe" cosmology (Dalai Lama)
A Möbius strip employed as a gold wedding band, inscribed with a scripture quote [Gen 2:24, quoted later in Eph.5:31]. Visible: "Two shall become (one)" http://goo.gl/vhno4a
This project develops the argument that the foundational definitions of traditional mathematics, generally based on "primitives" and "axioms", can be misleading and inherently ambiguous to a significant degree.
On the presumption that deeper analysis is impossible, these traditional approaches can involve detailed implicit complexities that are left unexplained and undefined, inherent within foundational definitions (see the definition by Tarski below) that are the logical bedrock of civilization.
This approach leaves philosophy and mathematical logic on a confused and argument-prone foundation. Because these vague and confusing definitions make agreement and solid reasoning almost impossible, discussions of semantics often quickly devolve into unresolvable and bottomless complexities.
In the age of computer science, where we have well-defined methods for approaching all levels of symbolic representation in an absolutely explicit way, clinging to this old-fashioned and traditional approach is unnecessary and very often misleading.
We need a fully "constructivist" approach to foundational definitions, taking a form that removes absolutely every micro-iota of unconscious or non-explicit presumption from our foundational postulates.
We must have explicit reproducible answers to the question "how is each facet of this construction defined?" -- in both its abstract aspects ("what does it mean?") and its concrete/physical aspects ("how is this representation instantiated in a physical medium?").
These questions have traditionally been ignored by philosophers and mathematicians as not relevant. This is a misleading and short-sighted presumption that leads even our best scholars today into bottomless pits of useless semantic ambiguity and confusion, leaving critical questions contentious and unresolvable.
Put simply, if any facet of a primitive definition can be interpreted in more than one way, that definition is incoherently ambiguous.
In this essay, we understand or presume that this traditional approach to primitives and definitions was unavoidable because there was no explicit way to construct every possible facet of abstract symbolic objects. Today, in the context of computer science, it IS possible to explicitly construct every facet of an abstract symbolic object, removing every possible degree of ambiguity down to something like the atomic level of machine hardware.
In mathematics, logic, and formal systems, a primitive notion is an undefined concept. In particular, a primitive notion is not defined in terms of previously defined concepts, but is only motivated informally, usually by an appeal to intuition and everyday experience. In an axiomatic theory or other formal system, the role of a primitive notion is analogous to that of axiom. In axiomatic theories, the primitive notions are sometimes said to be "defined" by one or more axioms, but this can be misleading. Formal theories cannot dispense with primitive notions, under pain of infinite regress.
Alfred Tarski explained the role of primitive notions as follows:
When we set out to construct a given discipline, we distinguish, first of all, a certain small group of expressions of this discipline that seem to us to be immediately understandable; the expressions in this group we call PRIMITIVE TERMS or UNDEFINED TERMS, and we employ them without explaining their meanings. At the same time we adopt the principle: not to employ any of the other expressions of the discipline under consideration, unless its meaning has first been determined with the help of primitive terms and of such expressions of the discipline whose meanings have been explained previously. The sentence which determines the meaning of a term in this way is called a DEFINITION...
An inevitable regress to primitive notions in the theory of knowledge was explained by Gilbert de B. Robinson:
To a non-mathematician it often comes as a surprise that it is impossible to define explicitly all the terms which are used. This is not a superficial problem but lies at the root of all knowledge; it is necessary to begin somewhere, and to make progress one must clearly state those elements and relations which are undefined and those properties which are taken for granted.
The necessity for primitive notions is illustrated in several axiomatic foundations in mathematics:
In set theory, the concept of a set is an example of a primitive notion. As Mary Tiles writes:
The 'definition' of 'set' is less a definition than an attempt at explication of something which is being given the status of a primitive, undefined, term. As evidence, she quotes Felix Hausdorff: "A set is formed by the grouping together of single objects into a whole. A set is a plurality thought of as a unit."
In naive set theory, the empty set is a primitive notion. To assert that it exists would be an implicit axiom.
In Peano arithmetic, the successor function and the number zero are primitive notions. Since Peano arithmetic is useful in regard to properties of the numbers, the objects that the primitive notions represent may not strictly matter.
In axiomatic systems, the primitive notions will depend upon the set of axioms chosen for the system. Alessandro Padoa discussed this selection at the International Congress of Philosophy in Paris in 1900.[4] The notions themselves may not necessarily need to be stated; Susan Haack (1978) writes, "A set of axioms is sometimes said to give an implicit definition of its primitive terms."
In Euclidean geometry, under Hilbert's axiom system the primitive notions are point, line, plane, congruence, betweenness, and incidence.
In Euclidean geometry, under Peano's axiom system the primitive notions are point, segment, and motion.
In the philosophy of mathematics, Bertrand Russell considered the "indefinables of mathematics" in building the case for logicism in his book The Principles of Mathematics (1903).
[Figure: a set (in light blue) and its boundary (in dark blue).]
In topology and mathematics in general, the boundary of a subset S of a topological space X is the set of points which can be approached both from S and from the outside of S.
More precisely, it is the set of points in the closure of S not belonging to the interior of S. An element of the boundary of S is called a boundary point of S. The term boundary operation refers to finding or taking the boundary of a set. Notations used for the boundary of a set S include bd(S), fr(S), and ∂S. Some authors (for example Willard, in General Topology) use the term frontier instead of boundary in an attempt to avoid confusion with the concept of boundary used in algebraic topology and manifold theory. However, frontier sometimes refers to a different set, the set of boundary points that are not actually in the set; that is, cl(S) \ S.
A connected component of the boundary of S is called a boundary component of S.
If the set consists of discrete points only, then the set has only a boundary and no interior.
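As a rough, concrete illustration of these definitions, the boundary of an interval can be approximated numerically. Everything below is invented for the example: the grid, the subset S = [0.3, 0.7] of X = [0, 1], and the neighborhood radius. A grid point counts as a boundary point when its small neighborhood contains points both inside and outside S -- it "can be approached both from S and from the outside of S".

```python
# Sketch: approximate the topological boundary of S = [0.3, 0.7] inside
# a discretized X = [0, 1]. A grid point is a "boundary point" if its
# neighborhood contains points both inside and outside S.
def boundary_points(grid, in_S, eps):
    result = []
    for p in grid:
        near = [q for q in grid if abs(q - p) <= eps]
        if any(in_S(q) for q in near) and any(not in_S(q) for q in near):
            result.append(p)
    return result

grid = [i / 100 for i in range(101)]       # discretized X = [0, 1]
in_S = lambda x: 0.3 <= x <= 0.7           # the subset S
print(boundary_points(grid, in_S, 0.011))  # grid points near 0.3 and 0.7
```

On this grid the result is the four points straddling the two boundary values 0.3 and 0.7, which is the discrete shadow of the true boundary {0.3, 0.7}.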
The "Strange Loop" phenomenon occurs whenever, by moving upwards (or downwards) through the levels of some hierarchical system, we unexpectedly find ourselves right back where we started. Sometimes I use the term Tangled Hierarchy to describe a system in which a Strange Loop occurs. As we go on, the theme of Strange Loops will recur again and again.
p. 23
Implicit in the concept of Strange Loops is the concept of infinity, since what else is a loop but a way of representing an endless process in a finite way?
In some of his drawings, one single theme can appear on different levels of reality. For instance, one level in a drawing might clearly be recognizable as representing fantasy or imagination; another level would be recognizable as reality. These two levels might be the only explicitly portrayed levels. But the mere presence of these two levels invites the viewer to look upon himself as part of yet another level; and by taking that step, the viewer cannot help getting caught up in Escher's implied chain of levels, in which, for any one level, there is always another level above it of greater "reality", and likewise, there is always a level below, "more imaginary" than it is. This can be mind-boggling in itself. However, what happens if the chain of levels is not linear, but forms a loop? What is real, then, and what is fantasy? The genius of Escher was that he could not only concoct, but actually portray, dozens of half-real, half-mythical worlds, worlds filled with Strange Loops, which he seems to be inviting his viewers to enter.
Gödel
In the examples we have seen of Strange Loops by Bach and Escher, there is a conflict between the finite and the infinite, and hence a strong sense of paradox. Intuition senses that there is something mathematical involved here. And indeed in our own century a mathematical counterpart was discovered, with the most enormous repercussions. And, just as the Bach and Escher loops appeal to very simple and ancient intuitions -- a musical scale, a staircase -- so this discovery, by K. Gödel, of a Strange Loop in...
p. 26
We shall examine the Gödel construction quite carefully in Chapters to come, but so that you are not left completely in the dark, I will sketch here, in a few strokes, the core of the idea, hoping that what you see will trigger ideas in your mind. First of all, the difficulty should be made absolutely clear. Mathematical statements -- let us concentrate on number-theoretical ones -- are about properties of whole numbers. Whole numbers are not statements, nor are their properties. A statement of number theory is not about a statement of number theory; it just is a statement of number theory. This is the problem; but Gödel realized that there was more here than meets the eye.
Gödel had the insight that a statement of number theory could be about a statement of number theory (possibly even itself), if only numbers could somehow stand for statements. The idea of a code, in other words, is at the heart of his construction. In the Gödel Code, usually called "Gödel-numbering", numbers are made to stand for symbols and sequences of symbols. That way, each statement of number theory, being a sequence of specialized symbols, acquires a Gödel number, something like a telephone number or a license plate, by which it can be referred to. And this coding trick enables statements of number theory to be understood on two different levels: as statements of number theory, and also as statements about statements of number theory.
Once Gödel had invented this coding scheme, he had to work out in detail a way of transporting the Epimenides paradox into a number-theoretical formalism. His final transplant of Epimenides did not say, "This statement of number theory is false", but rather, "This statement of number theory does not have any proof".
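The coding idea itself can be sketched in a few lines. The symbol table below is invented for illustration (these are not Gödel's own code assignments); the essential trick is the same: a formula, being a sequence of symbols, is encoded as a product of prime powers, so every formula corresponds to a unique natural number.

```python
# Toy Gödel numbering. The symbol codes are hypothetical, for illustration.
SYMBOL_CODES = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6}

def first_primes(n):
    """Return the first n primes by trial division (fine for short formulas)."""
    primes, k = [], 2
    while len(primes) < n:
        if all(k % p != 0 for p in primes):
            primes.append(k)
        k += 1
    return primes

def godel_number(formula):
    """Encode the i-th symbol, with code c, as the factor p_i ** c."""
    n = 1
    for p, s in zip(first_primes(len(formula)), formula):
        n *= p ** SYMBOL_CODES[s]
    return n

print(godel_number('0=0'))  # 2**1 * 3**3 * 5**1 = 270
```

Because prime factorization is unique, the number 270 can be decoded back into the exact symbol sequence '0=0', which is what lets statements about numbers double as statements about statements.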
General principle (refine this into a constructive cascade of clearly-defined layers/levels)
Bit
Pixel
Font
Label / alphabet
All conceptual form is represented in data structures ("information structure")
All categories can be defined in quantitative dimensionality by factoring and stipulation
All conceptual form and categorization is represented by data structures
All data structures can be defined by cuts
Every form of classification can be defined by cuts
Every facet of the physical/mechanical representation of information/data can be defined by (binary) cuts
This is pointing towards a universal global semantics grounded in constructivism
List of data structures:
https://en.wikipedia.org/wiki/List_of_data_structures
Linear data structures
A data structure is said to be linear if its elements form a sequence.
Arrays
Array
Elastic Array
Bit array
Bit field
Bitboard
Bitmap
Circular buffer
Control table
Image
Dope vector
Dynamic array
Gap buffer
Hashed array tree
Heightmap
Lookup table
Matrix
Parallel array
Sorted array
Sparse array
Sparse matrix
Iliffe vector
Variable-length array
Lists
Doubly linked list
Array list
Linked list
Self-organizing list
Skip list
Unrolled linked list
VList
Conc-Tree list
Xor linked list
Zipper
Doubly connected edge list
Difference list
Free list
Trees
Binary trees
AA tree
AVL tree
Binary search tree
Binary tree
Cartesian tree
Left-child right-sibling binary tree
Order statistic tree
Pagoda
Randomized binary search tree
Red–black tree
Rope
Scapegoat tree
Self-balancing binary search tree
Splay tree
T-tree
Tango tree
Threaded binary tree
Top tree
Treap
WAVL tree
Weight-balanced tree
Find all notes and materials relevant to strategy and objective and bring them together into one place
GENERAL OVERVIEW OF OBJECTIVE AND STRATEGY
Explore and consider a "constructivist" approach to a shared foundation for global semantics, because the subject is (seems to be) confused, fragmented and conflicted, for many reasons that could require a book to explain and that still might be controversial.
Explore and consider a "new foundation" for algebraic definitions based on the concept of "cut", because this approach seems to remove or "cut through" a variety of issues regarding the notion of "primitive definition", apparently making it possible to explicitly define the actual operating foundations of rationality in the world on a fully explicit and transparent basis.
Explain why this is true or possible in the simplest possible terms:
The point (origin) is defined by 2 cuts (and there is no definition possible prior to this?)
The line is a cut
The interval is a cut
The boundary values at either end of the interval are cuts
Cuts are a generalization of a "Dedekind cut" that enables "width" or a "gap" in the context of finite numbers
Abstract descriptors like "features" are dimensional composites -- multi-dimensional structures
All abstractions can be defined in a descending cascade across levels, taking the general form of a "cut on a cut on a cut on a cut..."
We are posing the notion of "cut on a cut" as the foundational definition of all categorical/conceptual structure. We think it should be possible to show that all conceptual form can be defined in a simple, clear, linearly-coherent algebraic definition chain that extends from the implications of this foundation.
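As a minimal executable sketch of this thesis (the function and its names are our own, purely illustrative), a "cut" can be modeled as an operation that splits a bounded interval into two subintervals, and a "cut on a cut" as the recursive application of that same operation to one of the resulting pieces:

```python
# Sketch: a "cut" splits a bounded interval at an interior point t;
# a "cut on a cut" applies the same operation to one of the pieces.
def cut(interval, t):
    lo, hi = interval
    assert lo < t < hi, "a cut must fall strictly inside the interval"
    return (lo, t), (t, hi)

left, right = cut((0.0, 1.0), 0.5)   # first cut on the unit interval
ll, lr = cut(left, 0.25)             # a cut on a cut
print(ll, lr, right)                 # (0.0, 0.25) (0.25, 0.5) (0.5, 1.0)
```

The recursion is self-similar in exactly the sense the thesis requires: each piece returned by `cut` is again an interval of the same kind, so the cascade can descend indefinitely.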
The top level of this cascade is the "absolute container", and "everything" exists within it.
We are supposing that this bottom-line format -- the end of the "turtles all the way down" cascade -- is a space that is "closed on itself" to form a sealed terminal unit at the bottom of the abstraction cascade
What does this notion of "sealed on itself" mean?
This implication or consequence emerges from a perspective, a point of view, an angle of perception
From one angle -- it is a bounded unit -- a unit interval with a lower and an upper bound
That unit interval has no distinctions internal to it. It may be a bounded range -- but all we can say is that this range is continuous with no differentiation (what does this say about "the infinite")?
So -- based on this conceptual unit -- a number of fundamentals must emerge as consequences or implications
This fundamental unit is a definition of continuity -- "the continuum"
Continuity is uncertainty -- an undefined and unmeasurable range of values
It is a "unit interval" -- but it is somehow "unbounded" -- it has no terminal cuts defining lower and upper boundary values
As a cut -- is it still a distinction? Or do we simply say -- this is not a cut -- or it is a "cut cutting itself" -- a distinction on a distinction -- where the range of values is 0 -- the range from "lower bound" (0) to "upper bound" (1) is 0 -- 0 length -- 0 measurable or definable length
Terms that we must immediately be able to define
Cut
Distinction
Figure/ground
Unity -- unit
Duality -- multiplicity
Identity
Similarity
Difference
Dimension
Cuts in a dimension defining units in that dimension
Quantity of units (multiples) in that dimension - 9 feet, 15 apples, 7 degrees
Secondary or derivative concepts we will be able to define
This is a very abstract project. What problem(s) does it solve, and why is this worth doing? If it were fully successful, what would that look like? What could happen that cannot happen today?
List of critical words, probably in order of development of definition sequence -- maybe give each one of these words its own database record
Consider the concept of "mathematical object" -- and the properties of that object
Fundamental terms and constructive elements
Mathematical object
Cut
Distinction
Cut in a cut (creates what) -- "which one is on top" -- which one is first - the second one cuts the first one.... a distinction on a distinction...
Interval (gap)
Boundary
Line (real line)
Dimension
Boundary value
Unit
Figure/ground
Unity - unit
Duality - multiplicity
Label / word / name / value
Qualitative dimensionality
Dimension
Aspect
Feature
Characteristic
Quality
Facet
Attribute
Property
Terms like continuity are side-effects or implications (?)
Continuity
Continuum
Linear
One-dimensional
Unit
Value
Number (multiple units -- whole or fractional)
Order
Discrete
Analog
Digital
Division
Binary
Bisection
Arithmetic
Equation
Addition
Subtraction
Multiplication
Division
Language
Fonts
Alphabets
Words
Sentences
Languages
Empiricism / Holism Spectrum
Quantity
Quality
Type - implicit level of abstraction (dimensional composition of these elements)
Arithmetic
Logical
Boolean
Level of abstraction (see Plato/Aristotle diagram)
Specific
General
Mathematics
Point
Line
Set
Universal set
Element / member
Class
Category
Order
Recursion
Self-similarity
Variable
Function
Operator
Correspondence theory of reality
Abstract symbolic object or item has correspondence with "something" in physical reality
In equations, abstract expressions like "A" or "X" have some meaning assigned to them -- "Let X equal the distance to Texas"
"State" - object, system or situation is in a "state" -- which undergoes "state transformation" as the definition of "event"
Comparison (abstract dimensionality)
Metaphor
Analogy
Simile
Epistemology - concepts
Abstraction
Similarity
Difference
Identity - equality - equation ("A = B") -- A is defined in the same dimension(s) as B and has the same value in those dimensions
Equal or more than
Equal or less than
Intuition
Holon
Gestalt
Chunk
Whole
Unit
Linear structures
Line
Matrix
Array
List
Row vector
Column
Dimension
Relativity
Coordinate frame
Taxonomy
Taxa (taxonomic unit, plural)
Taxon (singular of taxa)
Class
Order
Rank
Genus
Species
Instance
Can apply to relationship schemes other than parent-child hierarchies, such as networks
Considered a narrower term than "ontology", since ontologies include a wider variety of relation types (and if possible, we want to define those relation types in terms of linear dimensionality, GTE, LTE, etc)
Mathematically, a hierarchical taxonomy is a tree structure
A hierarchical taxonomy is a containment hierarchy
Containment hierarchy https://en.wikipedia.org/wiki/Hierarchy#Containment_hierarchy
Meronomy - the classification of parts of a whole - https://en.wikipedia.org/wiki/Taxonomy_(general)
Hierarchy
Level
Rank
Tree
Bottom-up
Top-down
Center
Circle
Point
Origin
0,0
Higher-level holistic abstractions like
Feature
Property
Attribute
Model
Equations, principles, axioms
Axiom
A set is formed by the grouping together of single objects into a whole. A set is a plurality thought of as a unit (sb 100574)
Origin
Coordinate
Intersection
Union
Boolean
AND
OR
NOT
IS (equals)
GTE
LTE
GT
LT
Ontology -- subject 100602
Individuals:
instances or objects (the basic or "ground level" objects)
Classes:
sets, collections, concepts, types of objects, or kinds of things.
Attributes:
aspects, properties, features, characteristics, or parameters that objects (and classes) can have
Relations:
ways in which classes and individuals can be related to one another
Function terms:
complex structures formed from certain relations that can be used in place of an individual term in a statement
Restrictions:
formally stated descriptions of what must be true in order for some assertion to be accepted as input
Rules:
statements in the form of an if-then (antecedent-consequent) sentence that describe the logical inferences that can be drawn from an assertion in a particular form.
Axioms:
assertions (including rules) in a logical form that together comprise the overall theory that the ontology describes in its domain of application. This definition differs from that of "axioms" in generative grammar and formal logic. In these disciplines, axioms include only statements asserted as a priori knowledge. As used here, "axioms" also include the theory derived from axiomatic statements.
In this framework, every definition describes and constructs an (abstract) "object". An "object" is a logical construction composed of "parts" combined to form a "whole". A "cut" is an object, and so are all other objects defined in terms of ("constructed out of") cuts.
In the "correspondence theory of reality", the process of creating a "model of the world" involves correlating an abstract object (the model) with the real world (a concrete object).
We propose to explore "cut" as the fundamental constructive unit, from which all abstract objects are constructed/composed.
This form shows a cut as a rectangular object defined in two dimensions, x and y, length and height.
The exploratory argument made here is that to actually represent the cut as more than an idealized abstract object with no possible concrete or actual existence, it must be shown (represented, modeled, instantiated) in actual physical dimensions. This implies "width" or "thickness" as well as "length".
This thickness has to do with finite numbers, and is a simple measure along the line. Actual proportions of this cut: 243 in x, 9 in y.
X
Y
Make a drawing of this constructivist concept
Real number line is a cut defined as "0 thickness" -- no extension in the y dimension
Draw a cut emerging from real number line -- this is a "cut in a cut"
The Dedekind cut is also "0 thickness" -- it's a limit
But if defined as a "computable number", this cut has "thickness" ("width") -- extension in x
Draw second cut extending from first cut -- a "cut in a cut"
Show progression of construction of basic objects
Set -- one dimensional set: a linear interval
Two-dimensional set -- objects defined in two dimensions
Three dimensions
Definition of abstract dimensions -- abstract range of values
Features, properties, attributes of abstract objects, all defined in this way in a hierarchy of ascending implicit complexity
Would this be defining a set or a class?
We want to explore the relationship between several interconnected ideas:
Dedekind cut
The real number line is a Dedekind cut (confirm)
Cantor's diagonal argument
How does it relate to
Genus/species
Can we say that a species is a cut on a genus?
Computable number
Defined in a finite number of decimal places or cellular addresses
Show that every additional decimal place (or "cellular place" -- cell required to hold a binary value, either 0 or 1) -- is defined as a cut on the previous decimal place -- it is a bounded range within the bounded range of the previous decimal place.
Every decimal place (cellular place) in a number specification is a bounded range
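This narrowing can be sketched directly (an illustrative sketch; the digit sequence is arbitrary). Each successive digit selects one of ten subintervals of the previous interval -- a bounded range within the bounded range of the previous decimal place:

```python
# Sketch: each decimal digit of 0.d1d2d3... cuts the previous interval
# into tenths and selects one of them.
def decimal_intervals(digits):
    lo, width = 0.0, 1.0
    intervals = []
    for d in digits:
        width /= 10
        lo += d * width
        # rounding only suppresses float noise in the printed endpoints
        intervals.append((round(lo, 12), round(lo + width, 12)))
    return intervals

# The digits 1, 4, 1 (i.e. 0.141...) narrow the unit interval step by step:
print(decimal_intervals([1, 4, 1]))
# [(0.1, 0.2), (0.14, 0.15), (0.141, 0.142)]
```

Each interval in the output is strictly contained in the one before it, which is the "cut on the previous decimal place" relation in concrete form.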
Potent ontology here, right at the intersection of hardware and software
[If the elements are ordered] A (one dimensional) array is a synthetic dimension and an ordered class
It is a list
It is totally ordered
It is composed of cells
Cells have boundaries in the x and y dimensions
This thing looks like a Turing tape (is that significant?)
There is an analogy between the index for the cell and the index for a number on the real number line
The mechanical definition of an array
ARRAY PROGRAMMING
https://en.wikipedia.org/wiki/Array_programming
ARRAY DATA STRUCTURE
https://en.wikipedia.org/wiki/Array_data_structure
In computer science, an array data structure, or simply an array, is a data structure consisting of a collection of elements (values or variables), each identified by at least one array index or key. An array is stored so that the position of each element can be computed from its index tuple by a mathematical formula.[1][2][3] The simplest type of data structure is a linear array, also called one-dimensional array.
For example, an array of 10 32-bit integer variables, with indices 0 through 9, may be stored as 10 words at memory addresses 2000, 2004, 2008, ... 2036, so that the element with index i has the address 2000 + 4 × i.[4]
The memory address of the first element of an array is called first address or foundation address.
Because the mathematical concept of a matrix can be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called matrices. In some cases the term "vector" is used in computing to refer to an array, although tuples rather than vectors are more correctly the mathematical equivalent. Arrays are often used to implement tables, especially lookup tables; the word table is sometimes used as a synonym of array.
Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing logic of computers. In most modern computers and many external storage devices, the memory is a one-dimensional array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for array operations.
Arrays are useful mostly because the element indices can be computed at run time. Among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size and should use the same data representation. The set of valid index tuples and the addresses of the elements (and hence the element addressing formula) are usually,[3][5] but not always,[2] fixed while the array is in use.
The term array is often used to mean array data type, a kind of data type provided by most high-level programming languages that consists of a collection of values or variables that can be selected by one or more indices computed at run-time. Array types are often implemented by array structures; however, in some languages they may be implemented by hash tables, linked lists, search trees, or other data structures.
The term is also used, especially in the description of algorithms, to mean associative array or "abstract array", a theoretical computer science model (an abstract data type or ADT) intended to capture the essential properties of arrays.
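The addressing formula quoted above (address = base + element_size × index) can be written out directly; the numbers below reproduce the 32-bit integer example from the excerpt:

```python
# The Wikipedia example: 10 32-bit (4-byte) integers stored at base
# address 2000, so element i lives at address 2000 + 4 * i.
def element_address(base, element_size, index):
    return base + element_size * index

addresses = [element_address(2000, 4, i) for i in range(10)]
print(addresses)  # [2000, 2004, 2008, ..., 2036]
```

In the vocabulary of these notes, the formula is itself a cascade of cuts: the element size cuts a linear address space into equal cells, and the index selects one cell.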
ARRAY DATA TYPE Array data type: https://en.wikipedia.org/wiki/Array_data_type
We want to define the concept of "set" in terms of cuts
We want to define the elements of the set in terms of cuts (why are some objects in the set and others not; this distinction defined in dimensions)
We want to be able to define/characterize those elements in terms of dimensions and values in those dimensions
Do all the elements of the set have to be defined in the same dimensions? No -- but perhaps they have at least one dimension in common (they are all in the set?)
We want to explore the notion of an "ordered class", where we say that an ordered class is a set where the elements of the set can be put into an unambiguous serial order in terms of at least one dimension of their definition. This "ordered class" is identical to a "synthetic dimension"
This definition of order is linear in one dimension -- LT, EQ, GT
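A minimal sketch of such an ordered class (the elements and the "weight" dimension are invented for illustration): the elements are put into unambiguous serial order by comparing their values in one dimension of their definition, which is exactly the linear LT/EQ/GT comparison described above.

```python
# Sketch: an "ordered class" -- a set of elements sorted into serial
# order along one dimension of their definition (here, weight in grams).
elements = [
    {"name": "pumpkin", "weight_g": 4000},
    {"name": "apple",   "weight_g": 150},
    {"name": "ball",    "weight_g": 400},
]
ordered_class = sorted(elements, key=lambda e: e["weight_g"])
print([e["name"] for e in ordered_class])  # ['apple', 'ball', 'pumpkin']
```

Once sorted, the class behaves as a synthetic dimension: each element's position can be read as a value along the ordering dimension.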
All categories are created by cuts creating boundaries in dimensions
All objects within those categories (boundaries) are defined by dimensions (characteristics) and cuts in those dimensions (values)
Set - category - class - all defined by boundaries and boundary values -- and the objects contained within those boundaries
We are going to define the structure of any category, and the process of creating categories, as a process of parsing reality by cuts -- by making distinctions.
Categories, classes, sets, types -- are all created by a process of free-hand cutting or distinction -- and then become socialized when broadly useful
The general view on this is -- "a species is a cut on a genus" -- and we want to explore whether that idea makes sense, or is consistent across many related definitions...
And -- we want to look at decimal number systems -- by saying something like
The "next decimal place" is a species of the previous decimal place -- as it parses (splits, divides) the bounded range of that decimal-place interval
or
each decimal place is a cut on the previous decimal place
This framework is an algebraic interpretation of the "real number line" as it is generally understood in mainstream mathematics, with particular emphasis on the concept of "cut" as introduced by Richard Dedekind, and exploring the notion that all elements within the framework can be understood as a cut or as a composite of cuts.
This "T-fractal" (or "H-tree fractal") shown here on the right is a "fractal cascade of cuts" showing each end of every T as a boundary value cut. The cut itself -- the line itself -- is a boundary value. If drawn in the x dimension, it is a cut in the y dimension, and a limit value -- upper or lower -- in y. And each distinction within the line (cut) is also a cut.
The immediate objective is to create a comprehensive list of subjects that are directly linked to the primary task of "mapping the line" into all its primary interpretations when understood in this way. Trees, gaps, intervals, numbers, categories, boundaries, taxonomies, etc., are all defined in all their facets as a cut or a cascade of cuts. Every term in mathematics or epistemology can be defined in this way.
This creates an absolutely systematic and consistent method for "constructing" all elements of natural languages or mathematical languages, in ways that are absolutely minimalist, built from the most "atomic" level possible, in a way that presumes no "composite constructions" within its semantics.
There are no "axioms", and no presumed or undefined elements or "primitives". Absolutely every facet is explicitly defined, every symbol and every facet of every symbol explicitly constructed.
Explicit construction
We argue that this "explicit machine-compatible constructivism" is the only way (and a very natural and organic one) to remove inherent ambiguity from symbolic construction and language. Human beings fight among themselves for many reasons, but inherent ambiguity is a leading cause, and it should be removed by the emergence of a clear-cut and fully reproducible algebraic semantics -- a format that does not have to be "believed" by some theorist who "sees it in their mind and decides it is true" (ie, accepts that some statement has been "proven" because they see a way to interpret the symbolism that seems convincing).
Every dimension or facet explicit-- no facet undefined or presumed.
Every facet explicitly a "cut" (or gap or interval or bounded range), or explicitly composed of cuts.
The line itself is a cut -- and every distinction within it is a cut. Every label or name or facet of this construction is composed of cuts.
All the major concepts directly related to "the line"
Linear continuum
All the major concepts having to do with intervals and gaps.
All the major concepts having to do with boundaries and boundary values.
Extreme reduction in terminological diversity / profusion.
Absolute simplicity, immaculate elegance, perfect conformity with Occam's Razor.
Links all facets of human thinking, at any level of abstraction, through a common absolutely simple framework, in a format that can be explained to a child.
Kronecker constructivism -- mathematics is loosely related to computation (and I am tending to say that in a political/shared context it should be understood as identical to computation, at least in some dimensions -- but don't limit it to constructivism only; build from this common/visible/transparent foundation wherever possible).
This process begins with similarity / difference which becomes a dimension which is "cut" into values by distinctions
But the dimension itself is a cut
Is similarity itself a "cut"? Similarity, it could be said, begins with identity -- "object A is the same thing as object B" ("they are identical") -- and then further perception begins to notice differences.
What is the difference between a ball and an apple?
What is the difference between a ball and a pumpkin?
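One way to make these comparison questions concrete is to count shared values in shared dimensions. The objects and dimensions below are invented for illustration; this is a sketch, not a full theory of similarity, and it anticipates the formal definition given later ("elements are 'similar' to the degree that they have identical values in identical dimensions"):

```python
# Sketch: similarity as the count of dimensions in which two objects
# carry identical values (hypothetical objects and dimensions).
def similarity(a, b):
    shared = set(a) & set(b)
    return sum(1 for dim in shared if a[dim] == b[dim])

ball    = {"shape": "round", "color": "red",    "edible": False}
apple   = {"shape": "round", "color": "red",    "edible": True}
pumpkin = {"shape": "round", "color": "orange", "edible": True}

print(similarity(ball, apple))    # 2: same shape and color, differ in edibility
print(similarity(ball, pumpkin))  # 1: same shape only
```

The differences are then the complementary count: the dimensions in which the values are not identical, or which one object has and the other lacks.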
All objects defined by cuts
First cut (by what, in what, distinguishing what -- it is a figure/ground distinction)
Second cut (by what, in what, distinguishing what)
Third cut (by what, in what, distinguishing what)
This definition by cuts in this order looks a bit like the basic definitions of Euclidean geometry
What are the dimensions of a dimension?
How figure/ground in each of these cases?
We say there is a lowest level in the parsing of the real interval: 0...1
That level is an open interval with no cuts
It is "pure uncertainty" -- and continuity
How is that lowest level defined by, or held by, the twist of the Möbius strip?
From these objects -- we want to be able to construct all basic and common categorical objects, including
Sets
Elements within a set
Class
Ordered class (synthetic dimension, list, array, row, column)
How is this guided by H-tree fractal?
Each cut is digital/binary (with potential for further value differentiation) -- as per the cantor set -- middle third removed
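The middle-third construction mentioned here can be sketched exactly with rational arithmetic: each interval is cut into two pieces by removing its open middle third, so every step is a binary cut applied to the results of the previous step.

```python
# Sketch: one step of the Cantor-set construction -- each interval keeps
# its left and right thirds, removing the middle third.
from fractions import Fraction as F

def remove_middle_thirds(intervals):
    out = []
    for lo, hi in intervals:
        third = (hi - lo) / 3
        out.append((lo, lo + third))   # left third kept
        out.append((hi - third, hi))   # right third kept
    return out

level = [(F(0), F(1))]
for _ in range(2):
    level = remove_middle_thirds(level)
print(level)  # [(0, 1/9), (2/9, 1/3), (2/3, 7/9), (8/9, 1)]
```

After n steps there are 2**n intervals, which is the "digital/binary with potential for further value differentiation" character noted above.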
Why this Möbius angle?
Because -- like the Dewey Decimal System defined in a floating point value -- it can always be cut in half
The power of the Möbius is that it gives us a way to define the absolute container viewed in terms of perspective
A dimension is an ordered class of values, with the following
properties:
It is a set of distinct elements.
This set of elements have in common one or more similarities.
- These similarities among the elements are defined in terms of
values in dimensions; ie, elements are "similar" to the degree that
they have identical values in identical dimensions.
These similar elements may have distinguishable differences.
- These differences are defined in terms of values in dimensions;
ie, elements are "different" to the degree that they are defined in
non-identical dimensions, or have non- identical values in identical
dimensions.
- If these similar elements do not have distinguishable differences
(ie, they are identical), the dimension is a "quantitative
dimension" of the type which describes physical measurements, and
the elements are the units of measure (such as "feet" or "pounds" or
"apples").
These distinct elements can be ranked in serial/linear order,
according to their values in the dimensions in which their differences are
defined.
- A consequence of this ranking is that each ranked element or
object in a dimension/class can be interpreted as a value of the
dimension.
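The ranking idea above can be sketched concretely. This is an illustrative sketch with invented data, not part of the source definition: ordering a class by its value in one dimension lets each ranked element be read as a value of that dimension.

```python
# Illustrative sketch, invented data: a class of cups, each described by
# values in dimensions. Ranking the class by its value in one dimension
# lets each ranked element stand as a value of that (synthetic) dimension.
cups = [
    {"name": "mug",      "height_cm": 10, "volume_ml": 350},
    {"name": "espresso", "height_cm": 6,  "volume_ml": 60},
    {"name": "teacup",   "height_cm": 7,  "volume_ml": 200},
]

# Order the class by its value in the "height_cm" dimension.
by_height = sorted(cups, key=lambda c: c["height_cm"])

# Rank 0 -> "espresso", rank 1 -> "teacup", rank 2 -> "mug".
print([c["name"] for c in by_height])
```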
This definition has a number of properties or consequences which are worth
noting:
It provides a systematic and precise way to define the somewhat
ambiguous concepts of similarity and difference.
It defines a dimension as a class in such a way that it is simultaneously
consistent with the ordinary intuitive definition of a dimension (such as
length as measured in inches) and the intuitive definition of a class (a
set of objects with one or more properties in common). If the elements of
the class are identical, the dimension is quantitative, and the elements
are the units of measure.
Both the common properties (the similarities), and the
distinguishing differences of the elements of the class are defined in
terms of dimensions and values in dimensions.
This approach allows us to use one highly compact and recursively
defined algebraic element (ie, dimension) to both abstractly classify and
fully describe any object, to any desired degree of specificity. Clearly,
when we wish to exactly describe any object, we give its exact
measurements in some set of dimensions. Both abstractly classifying and
empirically describing any object in terms of a single algebraic concept
is elegant, compact, and convenient.
A dimension is "recursively compositional" -- which is to say that
a dimension, like a fractal, is built out of "self-similar" elements.
There is a complementary duality between a value and a dimension. Every
value is itself a dimension; every dimension is a value.
Since a dimension is a class of values, and a value is a dimension,
"dimensions are built out of dimensions".
"Class" and "dimension" are isomorphic concepts: an ordered class is
a synthetic dimension, and a synthetic dimension is an ordered class.
As an ordered class, a dimension is not merely any set of objects
which we can group together by common properties. In order for a class to
be defined as a dimension, we must be able to order (or "sort") the
elements of this class into an unambiguous serial/linear sequential list.
A class of elements defined in four different dimensions can be sorted in
four different ways, according to the values of the elements in each of
these dimensions. (An example is the sorting of files on a floppy disk by
a personal computer operating system, according to their values in four
descriptive parameters: name, date, size, type.)
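The four-way sorting above can be sketched directly. This is an illustrative sketch with invented file records: one class, four dimensions, four distinct unambiguous orderings.

```python
# Illustrative sketch, invented records: one class of files, sortable four
# different ways according to values in four descriptive dimensions.
files = [
    {"name": "notes.txt", "date": "2024-03-01", "size": 1200,  "type": "txt"},
    {"name": "graph.png", "date": "2023-11-15", "size": 58000, "type": "png"},
    {"name": "draft.txt", "date": "2024-01-20", "size": 800,   "type": "txt"},
]

orderings = {}
for dim in ("name", "date", "size", "type"):
    # Each dimension induces its own serial order on the class.
    orderings[dim] = [f["name"] for f in sorted(files, key=lambda f: f[dim])]
    print(dim, "->", orderings[dim])
```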
A dimension is thus an (unambiguously sequential) list of
values.
Just as any value is itself a dimension, any element or member or
value of this list (in this definition, "element", "member" and "value"
are equivalent and interchangeable concepts) may itself be either a single
undifferentiated "unit", or can be itself another list.
It is interesting to note that these above two points make the
definition of dimension intimately related to the fundamental definitions
in the LISP ("list processing") programming language, oftentimes
considered the primary language for artificial intelligence (see Douglas
Hofstadter, Metamagical Themas, pp 396-454).
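The point that any member may be either an undifferentiated unit or itself another list can be sketched in a few lines. This is an illustrative sketch (the unit names are invented), echoing LISP's atoms and lists.

```python
# Illustrative sketch: a dimension as a list whose members are either
# atomic "units" or themselves lists -- echoing LISP's atoms and lists.
dimension = ["inch", ["foot", ["yard", "mile"]], "meter"]

def depth(x):
    """An atom has depth 0; a list is one deeper than its deepest member."""
    if not isinstance(x, list):
        return 0
    return 1 + max(depth(member) for member in x)

print(depth(dimension))  # the nesting here bottoms out three levels down
```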
The definition of dimension is recursive (ie, "defined in terms of
itself") in some profound and subtle ways. Not only is there an
equivalence between dimension (a class of values) and an ordered class (a
class of abstract objects), but each of the values of the dimension are
themselves recursively definable as dimensions. It is this recursion
which allows me to argue that not only are "all concepts built from
dimensions -- but dimensions themselves are built from dimensions".
This can be illustrated in detail by systematically demonstrating that
all elements of this above definition can be defined in terms of
dimensions. That is, the concepts of "class", "set", "distinction",
"similarity", "difference", "category", "element", "member", "value",
"rank", and "unit" can all be defined in terms of dimensions -- as can
any other basic concept from epistemology. Additionally, I have found that
all the basic "data structures" of computer science and linear algebra can
be defined in terms of dimensions. The process begins simply by noting
the fact that a dimension can be represented as a row vector. Thus, every
row vector in a data structure is a dimension of the structure.
In a "quantitative dimension" such as "length in inches", all the
inches are the same. Each inch is exactly identical to every other inch,
-- with the one exception that each inch is labeled or identified as a
particular numeric multiple; ie, the first inch, the second inch, the
third inch, etc. This dimension is thus a scale of values, like a ruler
or yardstick, where the values are "one dimensional".
I use the phrase "synthetic dimension" to describe any dimension which
involves multi-dimensional (or linearly decomposable) values. A
"synthetic" dimension is a range of values, just like any other dimension,
but its values are not simply identical units, but are instead "similar"
units which nevertheless have some distinguishable difference. In this
sense, the concept "synthetic dimension" includes the normal intuitive
definition of dimension, but is more general, and is defined at a higher
level of abstraction.
A consequence of the above fundamental definition is that any ordered
class can be thought of as a dimension. Thus, a "set of tea cups", if the
cups can be placed in serial order according to some criteria inherent in
their description, becomes a (synthetic) dimension. In this dimension,
the unit is "tea cups", and they are ordered or sorted by their value in
some criteria of their description, such as height or weight or volume.
Synthetic dimensionality offers a way to not only define or fully
characterize and describe all objects in terms of quantitative dimensions,
but also defines a consistent way that all abstract features, properties,
characteristics, and attributes of any object can be defined as
(synthetic) dimensions.
Naive set theory is one of several theories of sets used in the discussion of the foundations of mathematics. Unlike axiomatic set theories, which are defined using a formal logic, naive set theory is defined informally, in natural language. It describes the aspects of mathematical sets familiar in discrete mathematics (for example Venn diagrams and symbolic reasoning about their Boolean algebra), and suffices for the everyday usage of set theory concepts in contemporary mathematics.
Sets are of great importance in mathematics; in fact, in modern formal treatments, most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets. Naive set theory can be seen as a stepping-stone to more formal treatments, and suffices for many purposes.
Method
A naive theory is considered to be a non-formalized theory, that is, a theory that uses a natural language to describe sets and operations on sets. The words and, or, if ... then, not, for some, for every are not here subject to rigorous definition. It is useful to study sets naively at an early stage of mathematics in order to develop facility for working with them. Furthermore, a firm grasp of set theory's concepts from a naive standpoint is a step to understanding the motivation for the formal axioms of set theory. As a matter of convenience, usage of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself.
Sets are defined informally and a few of their properties are investigated. Links to specific axioms of set theory describe some of the relationships between the informal discussion here and the formal axiomatization of set theory, but no attempt is made to justify every statement on such a basis. The first development of set theory was a naive set theory. It was created at the end of the 19th century by Georg Cantor as part of his study of infinite sets[3] and developed by Gottlob Frege in his Begriffsschrift.
Noticing of similarity -- first as a form of unity -- thing/not-thing -- then as a range of similarity -- where there are discernible facets that are identical and discernible facets that are not identical
Emergence of distinction as a psychological and as a mathematical phenomenon -- "telling two things apart" -- "noticing something about something" -- seeing or defining an object -- defining a boundary around an object
Model the construction of this process in terms of Dedekind cut and in terms of dimensionality -- build every facet of this process through the "cut" process -- and define the meaning of "a cut on a cut" -- or more recursively, of a "cut on a cut on a cut..."
Levels of abstraction -- descending from genus to species to sub-species to instance -- with a "species" defined as a "cut on genus", and sub-species defined as a "cut on species".
Is a species a subset of a genus? Would therefore "every subset be defined as a cut on a set"?
This no doubt (?) must be defined in a slightly broader way than the definition of Dedekind cut, which is defined in terms of the lower set containing no greatest element
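The genus/species idea above can be sketched as nested subset selection. This is an illustrative sketch with an invented toy taxonomy: a species as a cut on a genus, a sub-species as a cut on the cut, with the same operation applying at every level.

```python
# Illustrative sketch, invented taxonomy: "cut on a cut" as nested subset
# selection. A species is a cut on a genus; a sub-species is a cut on the
# species; the same cut operation repeats at every level.
genus = [
    {"name": "a", "legs": 4, "fur": True},
    {"name": "b", "legs": 4, "fur": False},
    {"name": "c", "legs": 2, "fur": True},
]

def cut(population, predicate):
    """A cut partitions a class into the part marked by a distinction and the rest."""
    inside = [x for x in population if predicate(x)]
    outside = [x for x in population if not predicate(x)]
    return inside, outside

species, _ = cut(genus, lambda x: x["legs"] == 4)   # a cut on the genus
subspecies, _ = cut(species, lambda x: x["fur"])    # a cut on the cut

print([x["name"] for x in subspecies])
```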
Show how this process of "cut" defines the fundamentals of algebraic set theory and concepts like Real Number Line
Show how it is possible to build an exact description of anything in terms of numeric measurement in quantitative dimensions emerging from, or defined through, this process
Show how it is possible to build all "higher-level" conceptual objects from these constructive elements
Abstraction
Generality
Comparison (identity - similarity - difference)
Speculative exploration of the creation of distinction and dimensionality as it emerges from the undefined and unconceptualized. This subject has been studied by scholars and analysts who have suggested themes about the "emergence" of distinctions, among them Dr. Francis Heylighen, a founder of the "Principia Cybernetica" project.
On the emergence of semantic and conceptual distinctions. http://psycnet.apa.org/index.cfm?fa=buy.optionToBuy&id=1984-09160-001
This is a movement out of "the undefined infinite" -- and into the finite... as motivated -- by "something"
This distinction can be highly "holistic" -- an integral "gestalt" -- a perceptual unit that is seen or understood as a "whole", a composite perceptual unit -- and not as some linear "dimension", which does not become defined until further development
A distinction emerges, from undefined to defined, and a range of values becomes defined in that distinction
So, we want to talk about this process in terms of "Dedekind Cut"
And from there -- we want to explore how it becomes possible to define all constituent elements of "any concept whatsoever, at any level of abstraction, defined by intentional stipulation."
Does this process create a "total order" -- significantly different from a "partial order" that would be created by attempts to model this process based on empirical observation?
So -- does this process create a "set" -- or is it a general definition of "set" -- perhaps as guided by this original definition from Georg Cantor:
I need a simple definition of a computable number, because I am guessing this concept is the foundation for measurement in a constructivist/finite-state framework.
This diagram feels appealing and magnetic. I am seeing these intersections as points of "cut" -- but with the intriguing fractal aspect of 90-degree rotation (where a binary cut intersects continuity)
What are the factors here?
Real number line -- and sequential redifferentiation of the real number line at successive levels of decimal point -- or successive levels of binary differentiation
The band has two edges along the length of the strip -- which I have tended to see as width=unit interval
We want to "seal the space." "Close the space"
Form unity out of diversity
"Two become One"
In the band, there is a rotation around -- what -- a 180-degree turn -- so that 0 maps to 1 and 1 maps to 0
Each cut is a "dimension" of variation, defined in units cascaded below the level of that cut
The range from the "start" of the interval to the "end" of the interval is always one unit - until it is parsed in the middle
The beginning of the interval has value of 0
The end of the interval has value 1
The interval is always parsed "in the exact middle"
The range of the interval is analogous to continuous variation and when it is parsed, there is an analog-to-digital conversion defined at the boundary value edges
"Potentially continuous" variation along each bounded range from 0 to 1 (could be differentiated in any way, since it is recursively isomorphic to real number line)
Should we call those bounded ranges "intervals"?
But actually -- each interval has one start point (minimum value: 0) and one end point (maximum value: 1) -- and one center point where it can be split/differentiated -- forming a new 0 point
Top level -- first cut -- is unbounded on one end
Direction of cascade as shown is "top-down"
Every T intersection is a boundary value cut
Every T intersection defines a digital opposite
This framework shows the basic "T" (or "H") structure of a hierarchical cascade of bounded cuts.
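The cascade described above can be sketched as recursive bisection. This is an illustrative sketch, assuming only what the notes state: each bounded interval runs from 0 to 1 and is always parsed at its exact middle, and each half is itself a unit interval that can be parsed again.

```python
# Illustrative sketch: the descending cascade of cuts. Each interval is
# parsed at its exact middle; each half is recursively self-similar.
from fractions import Fraction

def cascade(lo, hi, levels):
    """Return, in order, the cut points made by bisecting [lo, hi] `levels` times."""
    if levels == 0:
        return []
    mid = (lo + hi) / 2
    return cascade(lo, mid, levels - 1) + [mid] + cascade(mid, hi, levels - 1)

cuts = cascade(Fraction(0), Fraction(1), 3)
print(cuts)  # three levels of cuts: 1/8, 1/4, 3/8, 1/2, 5/8, 3/4, 7/8
```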
The concept of "cut" is the baseline constructive element in the model we develop here. This is a subtle idea with an illusive and chameleon-like character. A cut is a single object, of an extremely or even absolutely simple character, but at the same time, it has many aspects and can be understood in a variety of ways. To help bring this definition into clear focus, we need to list these ways and facets.
A cut is a distinction
It can be a difference -- a "difference between two things" -- as defined in some dimension?
And it can be a similarity -- a range of values that two things have in common (this is a cut as a dimension)
A cut is a distinction
It is a boundary -- or part of a boundary
Does it have "thickness"? We are saying that in a finite space, where every value is defined in a finite set of characters or distinctions, yes it must have "thickness"
The thickness of an n-dimensional envelope depends on distinctions in each individual dimension that defines the envelope
The distinction is a "difference" perceived against a background
It can simply be a difference -- the boundary line between two numbers on a line
But as its meaning morphs -- while remaining "the same thing" -- it can also become a dimension -- a range of variation
Heat (how is heat detected? by the body or by a finely calibrated instrument?)
Height
Width
A "dimension" is a cut
A dimension itself can be "cut" -- into a range of values -- measuring "amount" or "quantity"
How "thick" is the distinction between 0 and 1? If this distinction is represented in a machine space -- such as a row vector -- a set of "cells" -- it has the thickness of one cell (???) What is the actual transition from 0 to 1? Is this nonsense? Does a table border in html have a thickness? yes -- one pixel at minimum.
Yin/yang -- we can say that in "yang" mode -- "assertive" mode -- the cut is always figure -- and the thing it cuts is "yin" -- feminine (?)
All of these things are defined in a context of figure/ground
Identity -- two things are "identical" -- meaning that two separate objects have the same values in the same dimensions. There are no discernible differences between two objects
Similarity -- two things have dimensions or values in common -- but there are discernible differences
Difference -- requires a dimension to define --
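The three comparisons above can be sketched concretely. This is an illustrative sketch (dimension names invented), with objects modeled as mappings from dimensions to values, following the definitions in these notes.

```python
# Illustrative sketch: identity, similarity, and difference computed from
# values in dimensions (dimension names invented for illustration).
def compare(a, b):
    """Classify two objects (dicts of dimension -> value)."""
    shared = set(a) & set(b)
    if set(a) == set(b) and all(a[d] == b[d] for d in shared):
        return "identical"   # same dimensions, same values: no discernible difference
    if shared:
        return "similar"     # dimensions in common, but some discernible difference
    return "different"       # no dimension in common in which to compare

x = {"height": 10, "color": "red"}
y = {"height": 10, "color": "red"}
z = {"height": 12, "color": "red"}
w = {"weight": 3}

print(compare(x, y), compare(x, z), compare(x, w))
```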
Can we build up a comprehensive list of definitions -- for terms like identity or similarity -- grounded in the fundamental bottom-line notion of a "cut in a cut" terminating in -- continuity? It terminates in the moebius strip --
MAJOR INTUITION (?): The moebius strip is the key to overcoming the figure/ground issue -- from one angle of perception what is figure (content view against a background), and from "the other" point of view, that "same" thing is now ground (background)
This is fundamental and foundational -- not incidental. This "relativity" of definition and interpretation could become a huge source of confusion. If we want a clear-cut chain of definitions extending from a foundational structure, this "duality" has to be absolutely defined and clear
Identity / Similarity / Difference
A dimension first appears as a similarity (OR as a difference)?
A cut IS a dimension
A cut is a demarcation IN a dimension
A dimension is (or implies) a range of similarity among class/groups/objects/distinctions that are in at least one dimension "different"
A "synthetic dimension" (old definition) is
The fundamental constructive unit
This form shows a "cut" as a rectangular object defined in two dimensions, x and y, length and height
The exploratory argument made here is that to actually represent the cut as more than an idealized abstract object with no possible concrete or actual existence, it must be shown (represented, modeled, instantiated) in actual physical dimensions in an actual physical medium
The end-points in the line are cuts (boundary values)
The cuts "contain" the values the represent
A bit is a cut in the boundary value range 0 - 1 -- which is cut "in the middle" by the Cantor Set and also, in the same way, by the T fractal
So the bottom level is the line ("the real line") -- which is a cut -- which is bounded by cuts, and which is defined by cuts, and all the demarcations within the line are cuts
In mathematics, a Dedekind cut, named after Richard Dedekind, is a partition of the rational numbers into two non-empty sets A and B, such that all elements of A are less than all elements of B, and A contains no greatest element. Dedekind cuts are one method of construction of the real numbers.
The set B may or may not have a smallest element among the rationals. If B has a smallest element among the rationals, the cut corresponds to that rational. Otherwise, that cut defines a unique irrational number which, loosely speaking, fills the "gap" between A and B. In other words, A contains every rational number less than the cut, and B contains every rational number greater than or equal to the cut. An irrational cut is equated to an irrational number which is in neither set. Every real number, rational or not, is equated to one and only one cut of rationals.
Dedekind cut - square root of two
More generally, a Dedekind cut is a partition of a totally ordered set into two non-empty parts A and B, such that A is closed downwards (meaning that for all a in A, x <= a implies that x is in A as well) and B is closed upwards, and A contains no greatest element. See also completeness (order theory).
It is straightforward to show that a Dedekind cut among the real numbers is uniquely defined by the corresponding cut among the rational numbers. Similarly, every cut of reals is identical to the cut produced by a specific real number (which can be identified as the smallest element of the B set). In other words, the number line where every real number is defined as a Dedekind cut of rationals is a complete continuum without any further gaps.
Dedekind used the German word Schnitt (cut) in a visual sense rooted in Euclidean geometry. His theorem asserting the completeness of the real number system is nevertheless a theorem about numbers and not geometry. Classical Euclidean geometry lacked a treatment of continuity (although Eudoxus did construct a sophisticated theory of incommensurable quantities such as square-root-of-2): thus the very first proposition of the very first book of Euclid's geometry (constructing an equilateral triangle) was criticised by Pappus of Alexandria on the grounds that there was nothing in the axioms that asserted two intersecting circles in fact intersect in points. In David Hilbert's axiom system, continuity is provided by the Axiom of Archimedes, while in Alfred Tarski's system continuity is provided by what is essentially Dedekind's section. In mathematical logic, the identification of the real numbers with the real number line is provided by the Cantor–Dedekind axiom.
This is the issue to explore:
All elements of A are less than all elements of B, and A contains no greatest element.
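The classic example can be sketched as a membership test. This is an illustrative sketch of the standard Dedekind cut for the square root of 2, expressed over exact rationals: A is the set of rationals that are negative or whose square is less than 2, and A has no greatest element.

```python
# Illustrative sketch: the Dedekind cut for sqrt(2) as a membership test
# over the rationals. A = {q : q < 0 or q*q < 2}; B is the rest.
# All of A lies below all of B, and A contains no greatest element.
from fractions import Fraction

def in_A(q):
    """True if the rational q falls on the lower side of the sqrt(2) cut."""
    return q < 0 or q * q < 2

samples = [Fraction(1), Fraction(14, 10), Fraction(141, 100), Fraction(3, 2), Fraction(2)]
print([in_A(q) for q in samples])  # the first three fall below the cut
```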
Now -- my issue is -- that I want to construct all these things out of "finite numbers" -- numbers defined in a finite number of symbols -- in any base, whether decimal or binary or whatever --
And I understand that every finite number can be subdivided
I think
but this sub-division comes up against a limit -- which will be like a Dedekind cut -- something like "the left inside edge of the right side of a digital cell containing a 0 or a 1"
I think (?) that I am defining a "finite number" as the concept named by Alan Turing as "computable number"
which I think (??) means that it can be represented as the contents of a row matrix (row vector, a one-dimensional array) -- each square of which has one symbol in it representing a "digit" in the value of the number -- though we want this to be binary, so there are only 2 symbols -- 0 and 1
and no nulls (??)
every square has one positive symbol in it -- to represent some number -- some value, some quantity, some multiple of some unit
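The row-vector picture above can be sketched directly. This is an illustrative sketch under the stated assumptions: a "finite number" as a row of cells, each holding exactly one of two symbols (0 or 1), no nulls, read as a binary fraction in the unit interval.

```python
# Illustrative sketch: a "finite number" as a row vector of cells, each
# holding one of two symbols (0 or 1), no nulls -- read as a binary
# fraction in the unit interval.
from fractions import Fraction

def value_of(bits):
    """Interpret a row vector of binary digits as an exact fraction in [0, 1)."""
    assert all(b in (0, 1) for b in bits)   # only two symbols, every cell filled
    return sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))

print(value_of([1, 0, 1]))  # 1/2 + 0/4 + 1/8 = 5/8
```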
The straight-line decomposition of the real number line is a local (hence relative) point of view. The fractal model is a global (hence absolute) point of view
This is "linear convergence" -- in the same form as the schedule for a basketball tournament.
At the tips of each branch, player A (team A) plays Player B -- and the winner goes on to play the winner from the opposing branch of the tree -- and there is a continuing convergence to the top of the tree
In the "decomposition of the one" -- the one at the top of the hierarchy is undefined -- unbounded -- because all these lines are "cuts" in the form of boundary values
The cut itself IS continuous. It IS continuity. But it is interpreted as a boundary value by the cut that intersects it from a higher level.
So this is a descending cascade of digital (finite-state, discrete) cuts that terminates in analog continuity. "Continuity is orthogonal to the lowest level cut -- continuity is the unknown and immeasurable"
Converging towards the square root of 2 across descending levels of decimal places: the "real" value is somewhere in there between the boundary values.
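This convergence can be sketched as repeated middle cuts. An illustrative sketch: successive halvings pin the square root of 2 between boundary values, and the "real" value stays strictly between them at every level.

```python
# Illustrative sketch: successive middle cuts bracket sqrt(2) between
# boundary values; the value itself lies strictly between them.
from fractions import Fraction

def bisect_sqrt2(levels):
    """Narrow the interval around sqrt(2) by repeated middle cuts."""
    lo, hi = Fraction(1), Fraction(2)       # sqrt(2) lies between 1 and 2
    for _ in range(levels):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid                        # the cut falls below the value
        else:
            hi = mid                        # the cut falls above the value
    return lo, hi

lo, hi = bisect_sqrt2(20)
print(float(lo), float(hi))  # boundary values bracketing sqrt(2)
```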
“Center everywhere”
How is the TH fractal a cascade of Cartesian coordinate frames?
How are they linked?
Where is the “origin”?
They are linked through their origins
Their origins are zero-points
Because the cascade is infinitely fractal, “the center” is “everywhere”
"Close the space"
Map the top level of the cascade into the bottom level
In the mathematical field of order theory, a continuum or linear continuum is a generalization of the real line.
Formally, a linear continuum is a linearly ordered set S of more than one element that is densely ordered, i.e., between any two distinct elements there is another (and hence infinitely many others), and which "lacks gaps" in the sense that every non-empty subset with an upper bound has a least upper bound. More symbolically:
a) S has the least-upper-bound property
b) For each x in S and each y in S with x < y, there exists z in S such that x < z < y
A set has the least upper bound property, if every nonempty subset of the set that is bounded above has a least upper bound. Linear continua are particularly important in the field of topology where they can be used to verify whether an ordered set given the order topology is connected or not.
Unlike the standard real line, a linear continuum may be bounded on either side: for example, any (real) closed interval is a linear continuum.
In mathematics, a linear order, total order, simple order, or (non-strict) ordering is a binary relation on some set X which is transitive, antisymmetric, and total (this relation is denoted here by infix ≤). A set paired with a total order is called a totally ordered set, a linearly ordered set, a simply ordered set, or a chain.
Dedekind cut
Whenever, then, we have to do with a cut produced by no rational number, we create a new irrational number, which we regard as completely defined by this cut ... . From now on, therefore, to every definite cut there corresponds a definite rational or irrational number ....
—Richard Dedekind[1]
"In mathematics, a self-similar object is exactly or approximately similar to a part of itself (i.e. the whole has the same shape as one or more of the parts).
"Many objects in the real world, such as coastlines, are statistically self-similar: parts of them show the same statistical properties at many scales.
"Self-similarity is a typical property of fractals. Scale invariance is an exact form of self-similarity where at any magnification there is a smaller piece of the object that is similar to the whole. For instance, a side of the Koch snowflake is both symmetrical and scale-invariant; it can be continually magnified 3x without changing shape. The non-trivial similarity evident in fractals is distinguished by their fine structure, or detail on arbitrarily small scales.
"As a counterexample, whereas any portion of a straight line may resemble the whole, further detail is not revealed."
Every facet of the real number line is a cut. A cut is a "figure/ground" distinction. A cut distinguishes figure from ground.
Figure–ground organization is a type of perceptual grouping which is a vital necessity for recognizing objects through vision. In Gestalt psychology it is known as identifying a figure from the background. For example, you see words on a printed paper as the "figure" and the white sheet as the "background".
An H-tree is a binary recursive cascade of intervals
So -- we want to interpret it as a universal decomposition of unit interval. Every level or bounded interval within the fractal is a unit interval divided at the center
Recursive -- exactly self-similar and isomorphic/identical at all levels
Symmetric -- always divided in the exact middle, as per Cantor set
Binary
Bounded intervals - each orthogonal/perpendicular intersection is a bounding limit of the interval -- lowest or highest, a 0 or 1 depending on how interpreted
In fractal geometry, the H tree is a fractal tree structure constructed from perpendicular line segments, each smaller by a factor of the square root of 2 from the next larger adjacent segment. It is so called because its repeating pattern resembles the letter "H". It has Hausdorff dimension 2, and comes arbitrarily close to every point in a rectangle. Its applications include VLSI design and microwave engineering.
The H tree is a self-similar fractal; its Hausdorff dimension is equal to 2.
The points of the H tree come arbitrarily close to every point in a rectangle (the same as the starting rectangle in the constructing by centroids of subdivided rectangles). However, it does not include all points of the rectangle; for instance, the perpendicular bisector of the initial line segment is not included.
According to this statement from Wikipedia, this below image is not strictly an H-tree, because the perpendicular bisector of the initial line segment IS included -- and is important in additional development and interpretation because it is the link to infinite recursion in the vertical/expansive direction.
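The recursive construction described above can be sketched in code. This is an illustrative sketch of H-tree generation under the construction stated in the excerpt: each segment spawns two perpendicular children centered on its endpoints, each shorter by a factor of the square root of 2, alternating horizontal and vertical.

```python
# Illustrative sketch: generating H-tree segments. Each segment spawns two
# perpendicular children centered on its endpoints, each shorter by a
# factor of sqrt(2), alternating horizontal and vertical.
import math

def h_tree(x, y, length, depth, horizontal=True):
    """Return line segments ((x1, y1), (x2, y2)) of an H-tree of the given depth."""
    if depth == 0:
        return []
    half = length / 2
    if horizontal:
        seg = ((x - half, y), (x + half, y))
    else:
        seg = ((x, y - half), (x, y + half))
    segments = [seg]
    for endpoint in seg:
        # Each endpoint is the center of a perpendicular child segment.
        segments += h_tree(*endpoint, length / math.sqrt(2), depth - 1, not horizontal)
    return segments

segs = h_tree(0.0, 0.0, 1.0, 3)
print(len(segs))  # 1 + 2 + 4 segments across three levels
```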