Heylighen F. (1997): "Bootstrapping knowledge representations: from entailment meshes via semantic nets to learning webs", International Journal of Human-Computer Studies [submitted]

Bootstrapping knowledge representations: from entailment meshes via semantic nets to learning webs

Francis HEYLIGHEN

Center "Leo Apostel", Free University of Brussels, Pleinlaan 2, B-1050 Brussels, Belgium

E-mail: fheyligh@vnet3.vub.ac.be

http://pespmc1.vub.ac.be/HEYL.html

ABSTRACT. The symbol-based, correspondence epistemology used in AI is contrasted with the constructivist, coherence epistemology promoted by cybernetics. The latter leads to bootstrapping knowledge representations, in which different parts of the cognitive system mutually support each other. Gordon Pask's entailment meshes and their implementation in the ThoughtSticker program are reviewed as a basic application of this methodology. Entailment meshes are then extended to entailment nets: directed graph representations governed by the "bootstrapping axiom", determining which concepts are to be distinguished or merged. This allows a constant restructuring and elicitation of the conceptual network. Semantic networks and frame-like representations with inheritance can be expressed in this very general scheme by introducing a basic ontology of node and link types. Entailment nets are then generalized to associative nets characterized by weighted links. Learning algorithms are presented which can adapt the link strengths, based on the frequency with which links are selected by hypertext browsers. It is argued that these different bootstrapping methods could be applied to make the World-Wide Web more intelligent, by allowing it to self-organize and support inferences through spreading activation.

Contents

  • 1. Introduction
  • 2. Epistemology: correspondence versus coherence
  • 3. Entailment Meshes and the THOUGHTSTICKER program
  • 4. Bootstrapping in graphs
    • 4.1. ENTAILMENT MESHES AS NON-DIRECTED GRAPHS
    • 4.2. ENTAILMENT NETS AND THE BOOTSTRAPPING AXIOM
    • 4.3. NODE INTEGRATION
  • 5. Bootstrapping semantic networks
    • 5.1. ONTOLOGICAL DISTINCTIONS AS BASIC NODE TYPES
    • 5.2. BASIC LINK TYPES
    • 5.3. FRAMES AND INHERITANCE
  • 6. Learning Webs
    • 6.1. ASSOCIATIVE NETWORKS
    • 6.2. ALGORITHMS FOR LEARNING HYPERTEXTS
    • 6.3. EXPERIMENTAL RESULTS
  • 7. Towards an Intelligent Web
  • Acknowledgments
  • References

1. Introduction

In my contribution to this special issue devoted to the memory of Gordon Pask, I wish to review my research on knowledge representation and knowledge acquisition. Although it was started independently, this research program in a number of ways parallels and extends Pask's work on Conversation Theory, and in particular its representation through entailment meshes and their implementation in the computer program THOUGHTSTICKER (Pask, 1975, 1976, 1980, 1984, 1990; Pask, & Gregory, 1986). My work started around 1983 with the development of a "structural language" for representing the fundamental space-time structures of physics (Heylighen, 1990a). This primitive modelling scheme plays a role similar to Pask's (1984) "protolanguage" or "protologic" Lp (cf. Heylighen, 1992). An investigation into artificial intelligence (AI) and cognitive science made me understand that the main application of this language might lie in knowledge representation rather than in the foundations of physics. However, the limitations of AI made me turn to cybernetics as a more general theoretical framework (Heylighen, 1987, 1990).

I first came into contact with Gordon Pask through the conference "Self-Steering and Cognition in Complex Systems" (Heylighen, Rosseel & Demeyere, 1990) which I co-organized at the Free University of Brussels in 1987. As one of the founding fathers of cybernetics, Pask was invited to give a keynote speech (Pask, 1990). In his inimitable style, he reviewed his work on conversation theory and Lp. His manners of an archetypal British eccentric and his dandyish appearance, with bow-tie, cape and walking-stick, did not fail to impress the audience, myself included. At that moment, I did not yet understand the relevance of his approach for my own work, though.

I had the chance to get better acquainted with the Paskian philosophy during a short stay at the University of Amsterdam in 1988-1989. There I worked for the research program "Support, Survival and Culture", headed by Gerard de Zeeuw and Gordon Pask. The program's focus on computer-supported collaborative work incited me to implement my ideas into a prototype software system, the CONCEPTORGANIZER, which supports one or more users in exteriorizing their implicit knowledge in the form of a hypertext network of concepts (Heylighen, 1989, 1991a). Although I was not consciously influenced, the similarity in both intention and name between this program and Pask's THOUGHTSTICKER seems obvious in retrospect.

My interest in the foundations of cybernetics and in conceptual networks led me to join Valentin Turchin and Cliff Joslyn in the creation of the Principia Cybernetica Project (Joslyn, Heylighen & Turchin, 1993). The project's aim is the computer-supported collaborative development of an evolutionary-cybernetic philosophy. Its results are presently implemented as a large hypertext net on the World-Wide Web (Heylighen, Joslyn & Turchin, 1997). Although some cyberneticists reacted critically to the project's foundational ambitions, it was welcomed enthusiastically by Gordon Pask. In his contribution to the project's first workshop, he noted that "a philosophy of Cybernetics, encapsulated in the [...] title, 'Principia Cybernetica', is not only justifiable, but necessary and in this day and age, utterly essential" (Pask, 1991). In my own contribution, I sketched an extension of my work on the structural language to encompass semantic networks as a possible framework to structure the knowledge produced by the project (Heylighen, 1991b).

A focus on the philosophical content as well as on the organizational and technical implementation of the project during the following years kept me from further developing my scheme for knowledge representation. However, in 1994, when I was joined by Johan Bollen, we started to work on a new application, an associative network of concepts that would self-organize or "learn" from the way it is used (Bollen & Heylighen, 1996). Although I knew his health was deteriorating, I had hoped to present this work to Gordon Pask during the 13th European Meeting of Cybernetics and Systems in Vienna (April 1996), where we were both chairing a symposium. Unfortunately, Gordon died a few weeks before the congress, and the meeting became the first occasion for those who had known him to reminisce and pay tribute to his work.

The present paper will review my on-going research on "bootstrapping" methods for knowledge representation, emphasizing the similarities (and differences) with Pask's approach. Although this review will mostly summarize ideas scattered over different papers (Heylighen, 1989, 1990a, 1990d, 1991a, 1991b; Bollen & Heylighen, 1996, 1997; Heylighen & Bollen, 1996), some of the results included here have not been published before.

2. Epistemology: correspondence versus coherence

Both Pask's work on knowledge representation and my own are distinguished from the more traditional AI approach by their underlying epistemology. Most AI work implicitly assumes a correspondence epistemology, which sees knowledge as a simple mapping or "reflection" of the external world. Every conceptual object (symbol) in the knowing subject's model is supposed to correspond to a physical object in the environment. The structure of the model can be seen as a homomorphic map, or an encoding, of the structure of outside reality. This epistemology, which has been called the "reflection-correspondence" theory by Turchin (1993), leads to a host of conceptual and practical problems (Bickhard & Terveen, 1995).

The most pressing ones center around the origin and nature of the mapping from reality to its symbolic representation. Since a cognitive system has no access to reality (Kant's "Ding an Sich") except through perceptions--which are already internal models--how can it ever determine whether it uses a correct mapping? Another formulation of this difficulty is the symbol grounding problem (Harnad, 1990): how are the symbols, the elements of the model, "grounded" in the external reality which they are supposed to represent? This problem cannot be solved within the model itself. This follows from the "linguistic complementarity" principle (Löfgren, 1991), which generalizes classic epistemological restrictions such as the theorem of Gödel or the Heisenberg indeterminacy principle. It states that no language can fully describe its own description or interpretation processes. In other words, models cannot include a representation of the mapping that connects their symbols to their interpretations.

Because there is no inherent procedure to determine a correct mapping, models in AI tend to be arbitrarily imposed by the system's programmer or designer. The model's foundations or building blocks, the symbols, are primitives which have to be accepted at face value, without formal justification. The model can be more or less adequate for the problem domain, but never complete, in the sense of covering all potentially relevant situations. As argued by van Brakel (1992), this 'problem of complete description' generalizes the famous 'frame problem' in AI (Ford & Hayes, 1991). Incompleteness would not be a real obstacle if the models could adapt or learn, that is, extend their capabilities each time a problem is encountered. But the correspondence philosophy does not allow any simple way for a model to be changed. New symbols cannot be derived from those that are already there, since they are supposed to reflect outside phenomena to which the model does not have access. Introducing a new symbol must be done by the programmer, and requires a redefinition of the syntax and semantics of the model. Thus, models in AI tend to be static, absolutist and largely arbitrary in structure and contents.

These problems have helped focus attention on an alternative epistemological position, constructivism, which is espoused by most cyberneticists, and emphasized in "second-order" cybernetics (von Foerster, 1996) and the theory of autopoiesis (Maturana & Varela, 1992). According to this philosophy, knowledge is not a passive mapping of outside objects, but an active construction by the subject. That construction is not supposed to reflect an objective reality, but to help the subject adapt or "fit in" to the world which it subjectively experiences.

This means that the subject will try to build models which are coherent with the models which it already possesses, or which it receives through the senses or through communication with others. Since models are only compared with other models, the lack of access to exterior reality no longer constitutes an obstacle to further development. In such an epistemology, knowledge is not justified or "true" because of its correspondence with an outside reality, but because of its coherence with other pieces of knowledge (Rescher, 1973; Thagard, 1989). The problem remains to specify what "coherence" precisely means: mere consistency is clearly not sufficient, since any collection of unrelated facts is logically consistent.

Model construction can be seen as a trial-and-error process, where different variations are generated, but only those variations are retained which "fit in" with the rest of the experiential material. Thus, the process is selectionist (Cziko, 1995; Bickhard & Terveen, 1995) rather than instructionist: instead of instructing the subject on how to build a model, the (inside or outside) environment merely helps it to select the most "fit" models among all of the subject's autonomously generated trials.

Many constructivists tend to emphasize the role of social interaction in this selection process: those models will be retained about which there is a consensus within the community. This is the social constructivist position, which is popular especially in the social sciences and humanities. Psychologists like von Glasersfeld (1984) and Piaget (1937), on the other hand, emphasize the individual subject, who tries to find coherence between his or her different models and perceptions. The selectionists inspired by Popper's (1959) evolutionary epistemology and theory of falsification, finally, emphasize the role of the outside environment in weeding out inadequate models. For obvious reasons, this more "realist" position is most popular in the natural sciences. My own philosophy is pragmatic, and acknowledges the combined role of individual, social and physical ("objective") factors in the selection of knowledge (Heylighen, 1993, 1997a).

Although constructivists usually also accept some form of a coherence view of truth, few people have proposed concrete mechanisms that show how the dynamic process of construction (variation) can result in the static requirement of coherence (selective retention). The originality of Pask's Conversation Theory is that it provides a detailed formal model of such mechanisms. The metaphor of conversation is aptly chosen to describe such a process of cognitive interaction, in which concepts are exchanged, combined and recombined (the construction phase), with the aim of achieving agreement about shared meanings (the coherence phase). Although the conversation metaphor seems to favor the social construction of knowledge, Pask is quick to point out that his theory applies equally well to interactions between different roles or perspectives ("p-individuals") within a single individual. I have suggested elsewhere (Heylighen, 1990c) that the conversational perspective could even be extended to interactions between observers and objects. Pask might not have disagreed with this generalization, since he saw his THOUGHTSTICKER program as a way to turn a computer into a conversational partner.

To describe my own solution to the problem, however, I prefer the metaphor of bootstrapping (cf. Heylighen, 1990a, 1990d, 1992). As said, the problem with correspondence epistemologies is that they lack grounding: everything is built on top of the symbols, which constitute the atoms of meaning; yet, the symbols themselves are not supported. The advantage of a coherence epistemology is that there is no need for a fixed ground or foundation on which to build models: coherence is a two-way relation. In other words, coherent concepts support each other. The dynamic equivalent of this mutual support relation may be called "bootstrapping": Model A can be used to help construct model B, while B is used to help construct A. It is as if I am pulling myself up by my own bootstraps: while my arms (A) pull up my boots (B), my boots at the same time--through my legs, back and shoulders--push up my arms. The net effect is that more (complexity, meaning, quality, ...) is produced out of less. This is the hallmark of self-organization: the creation of structure without need for external intervention.

I will now show how this bootstrapping philosophy can be applied to the practical problem of knowledge representation, first by reviewing Pask's entailment meshes, then by extending the underlying formalism to my own entailment nets.

3. Entailment Meshes and the THOUGHTSTICKER program

During the development of Pask's Conversation Theory, it turned out that many of its statements could be succinctly expressed by graphical schemes that were called "entailment meshes". These informal representations were later interpreted as expressions in a formal language of distinction and coherence, which Pask called Lp. The manipulation of Lp expressions was facilitated by an interactive computer program called THOUGHTSTICKER (Pask, 1984; Pask & Gregory, 1986). As the name implies, the main aim of THOUGHTSTICKER is to help users exteriorize their thoughts in the form of a stable and explicit knowledge representation. (The same aim underlies my own CONCEPTORGANIZER prototype (Heylighen, 1989, 1991a).) I will here summarize the representation of knowledge through entailment meshes, and the rules that are embedded in the THOUGHTSTICKER software for manipulating and eliciting these meshes.

The basic elements of an entailment mesh are called topics. They are "public" concepts, in the sense that their meaning is shared by a number of conversational participants. These concepts are connected through coherences. A coherence is a collection or cluster of topics which are so interrelated that the meaning of any topic of the coherence can be derived from the meaning of the other topics in the cluster. In other words, the topics in a coherence entail or define each other. A simple example of a coherence is the cluster <pen, paper, writing>. This means that we can somehow start from the concepts of writing and paper and produce the concept of an instrument that allows you to write on paper: a pen. Complementarily, we can start from the notions of pen and paper and derive from them the activity you do when applying a pen to paper: writing.
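The mutual entailment within a coherence can be sketched in a few lines of code. This is an illustrative reconstruction of the idea, not Pask's Lp or THOUGHTSTICKER code: topics are assumed to be strings, and a coherence a frozenset of topics, so that each topic is "produced" simply as the remainder of its cluster.

```python
def derive(coherence, topic):
    """Within a coherence, each topic is produced from all the others."""
    assert topic in coherence
    return coherence - {topic}

# The example cluster <pen, paper, writing> from the text:
writing_cluster = frozenset({"pen", "paper", "writing"})

print(sorted(derive(writing_cluster, "pen")))      # ['paper', 'writing']
print(sorted(derive(writing_cluster, "writing")))  # ['paper', 'pen']
```

The symmetry of the set difference mirrors the symmetry of the coherence relation: no topic in the cluster is more "primitive" than any other.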

Fig. 1: (a) a simple entailment mesh consisting of two coherences overlapping on the topic writing; (b) a 'prune' unfoldment of the same mesh from the point of view of the topic pen.

The same topic, e.g. writing, can belong to different coherences, for example: <pen, paper, writing> and <table, chair, writing> (see Fig. 1a). In other words, coherences can overlap in one or more topics. An entailment mesh is then a complex of overlapping coherences, that is, a collection of topics and coherences such that every topic belongs to at least one coherence (see Fig. 2). Every topic in an entailment mesh should be entailed by, or unambiguously derivable from, the other topics in the mesh. This derivation can be represented in THOUGHTSTICKER by the operation called 'Prune'. Prune produces an unfoldment of the mesh from the perspective of the concept you want to derive. For example, Fig. 1b shows a pruning of the mesh in Fig. 1a from the point of view of pen. In symbols, this derivation could be represented as: pen <- {writing, paper}, writing <- {table, chair}.
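The Prune unfoldment can be sketched as a recursive traversal. This is a reconstruction under my own assumptions (a mesh as a set of frozensets of topic names; the function name `prune` is borrowed from the operation's name), not THOUGHTSTICKER's actual implementation.

```python
def prune(mesh, topic, used=frozenset()):
    """Unfold the mesh from the perspective of `topic`: in every coherence
    containing it that has not yet been used, the topic is derived from the
    remaining topics, which are then unfolded in turn.
    Returns a list of (topic, remainder) derivation pairs."""
    derivations = []
    for coh in mesh:
        if topic in coh and coh not in used:
            others = coh - {topic}
            derivations.append((topic, others))
            for other in sorted(others):
                derivations.extend(prune(mesh, other, used | {coh}))
    return derivations

# The mesh of Fig. 1a:
mesh = {frozenset({"pen", "paper", "writing"}),
        frozenset({"table", "chair", "writing"})}

for topic, rest in prune(mesh, "pen"):
    print(topic, "<-", sorted(rest))
# prints: pen <- ['paper', 'writing']
#         writing <- ['chair', 'table']
```

This reproduces the derivation given above: pen <- {writing, paper}, then writing <- {table, chair}; marking coherences as "used" prevents the unfoldment from looping back through the cluster it came from.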

Fig. 2: a more complex entailment mesh

The main function of the Prune operation is to discover structural ambiguities or conflicts in a mesh. An ambiguity arises when different topics are derived in the same way. In that case, there is no way to distinguish the topics within the mesh. Consider for example the two coherences <pen, paper, writing> and <pencil, paper, writing>. In this case, it is not clear which of the two topics pencil or pen should be produced from paper and writing: pen <- {writing, paper} and pencil <- {writing, paper} both hold. This poses a problem for the knowledge representation, which can be resolved in one of the following ways:

· merge topics: the ambiguous topics may be really equivalent, in the sense that either they are synonyms (e.g. persons and people), or they are two tokens (e.g. pen and pencil) standing for the same basic concept (e.g. writing device). The resolution is to merge the topics, thus producing the single new coherence: <writing device, paper, writing>

· add topic: if the topics are really distinct, this must be reflected in the entailment mesh by changing the coherences that define them. The simplest way to do that is to add a topic to one of the two coherences. For example, one could replace <pencil, paper, writing> by <pencil, paper, writing, erasable>. The derivation of pencil from its coherence cluster is now different from the derivation of pen, and thus the ambiguity is resolved.

· split topic: another way to create distinction where there is none is to split or "bifurcate" one of the topics occurring in both coherences into two distinct topics. For example, writing could be split into writing letters and writing notes, resulting in the two coherences <pen, paper, writing letters> and <pencil, paper, writing notes>. Since the two new topics derive from the same underlying idea, they remain related in THOUGHTSTICKER by a relation of analogy.

· merge coherences: another way to eliminate the ambiguity is to replace the two coherences by one new coherence including all topics, e.g. <pen, pencil, paper, writing>. Now, pen and pencil have each a different derivation.
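The detection of such structural ambiguities, and the first of the resolutions above, can be sketched as follows. This is again an illustrative reconstruction, not Pask's code; the function names `ambiguities` and `merge_topics` are my own.

```python
from collections import defaultdict

def ambiguities(mesh):
    """Map each derivation remainder to the set of topics it produces;
    a remainder producing more than one topic is structurally ambiguous."""
    by_remainder = defaultdict(set)
    for coh in mesh:
        for topic in coh:
            by_remainder[coh - {topic}].add(topic)
    return {rem: ts for rem, ts in by_remainder.items() if len(ts) > 1}

# The ambiguous pair of coherences from the text:
mesh = {frozenset({"pen", "paper", "writing"}),
        frozenset({"pencil", "paper", "writing"})}

print(ambiguities(mesh))
# {writing, paper} produces both pen and pencil

def merge_topics(mesh, old_topics, new_topic):
    """The 'merge topics' resolution: replace the ambiguous topics by one."""
    return {frozenset(new_topic if t in old_topics else t for t in coh)
            for coh in mesh}

print(merge_topics(mesh, {"pen", "pencil"}, "writing device"))
# the single coherence <writing device, paper, writing>
```

After the merge, the two clusters collapse into one, exactly as in the resolution described above; the other three resolutions (add topic, split topic, merge coherences) would similarly be simple set operations on the mesh.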

Another type of ambiguity can arise when coherences are nested, i.e. when one coherence, e.g. <pen, pencil, ball-point>, is a subset of another coherence, e.g. <pen, pencil, ball-point, paper, writing>. In that case, it is not clear how to derive a topic belonging to both coherences, e.g. is it pen <- {writing, paper, ball-point, pencil} or pen <- {ball-point, pencil}? Such a nested construction is illegal in THOUGHTSTICKER. The way to resolve it is to "condense" the topics of the inner cluster (<pen, pencil, ball-point>) into a new, "generalized" concept, e.g. writing device. This leaves a single new cluster, <writing device, paper, writing> (see Fig. 3).

Fig. 3: a condensation of the subcoherence <pen, pencil, ball-point> into the new, more general topic writing device.
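The condensation of Fig. 3 can likewise be sketched as a set operation. This is a minimal reconstruction under the same assumptions as before (mesh as a set of frozensets); the function name `condense` echoes the operation's name but is not THOUGHTSTICKER's API.

```python
def condense(mesh, inner, new_topic):
    """Resolve an illegal nesting: drop the inner cluster and, in every
    coherence strictly containing it, replace its topics by a single
    generalized topic."""
    result = set()
    for coh in mesh:
        if coh == inner:
            continue                      # the inner cluster is absorbed
        elif inner < coh:                 # strict superset: substitute
            result.add((coh - inner) | frozenset({new_topic}))
        else:
            result.add(coh)               # unrelated coherences are kept
    return result

# The illegal nesting from the text:
mesh = {frozenset({"pen", "pencil", "ball-point"}),
        frozenset({"pen", "pencil", "ball-point", "paper", "writing"})}
inner = frozenset({"pen", "pencil", "ball-point"})

print(condense(mesh, inner, "writing device"))
# the single cluster <writing device, paper, writing>
```

The result is the single legal coherence of Fig. 3, with the generalized topic standing in for the condensed subcluster.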

THOUGHTSTICKER will assist the user in representing her implicit knowledge more explicitly, in searching for conflicts, and in proposing resolutions. The latter will elicit new knowledge by pointing to gaps and ambiguities in the knowledge that is already there. Moreover, THOUGHTSTICKER can directly suggest possible expansions of the knowledge base through the 'Saturate' operation. This will produce new candidate coherences that would not create structural ambiguities. These suggested combinations of topics may be completely random, or restricted to certain user-defined ranges of topics. It is up to the user to decide whether the proposed cluster is meaningful or not. Thus, THOUGHTSTICKER constantly interacts or "converses" with the user, helping to construct an ever more complete and well-balanced system of concepts.
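A generate-and-test sketch conveys the spirit of Saturate: enumerate candidate clusters over the existing topics and keep only those that would introduce neither a structural ambiguity nor an illegal nesting. This is my own reconstruction, not Pask's algorithm, and the function names are assumptions.

```python
from itertools import combinations

def has_ambiguity(mesh):
    """True if two distinct topics share the same derivation remainder."""
    remainders = {}
    for coh in mesh:
        for topic in coh:
            rem = coh - {topic}
            if remainders.setdefault(rem, topic) != topic:
                return True
    return False

def has_nesting(mesh):
    """True if one coherence is a strict subset of another (illegal)."""
    return any(a < b for a in mesh for b in mesh)

def saturate(mesh, size):
    """Propose all new clusters of `size` topics admissible in the mesh."""
    pool = sorted(set().union(*mesh))
    candidates = []
    for combo in combinations(pool, size):
        cand = frozenset(combo)
        if cand in mesh:
            continue
        trial = mesh | {cand}
        if not has_ambiguity(trial) and not has_nesting(trial):
            candidates.append(cand)
    return candidates

# The mesh of Fig. 1a:
mesh = {frozenset({"pen", "paper", "writing"}),
        frozenset({"table", "chair", "writing"})}

print(saturate(mesh, 4))  # only <chair, paper, pen, table> is admissible
print(saturate(mesh, 3))  # no admissible 3-topic candidate exists here
```

On this small, densely overlapping mesh every 3-topic candidate would duplicate some existing derivation, so the only admissible proposal is the 4-topic cluster <chair, paper, pen, table>; whether that cluster is meaningful is, as the text notes, for the user to judge.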