Bootstrapping semantic networks

A shortcoming of both Pask's entailment meshes and the entailment nets as I have presented them until now is that there is no clear interpretation of the "entailment" that keeps the concepts together. This very abstract relation tells us that concepts somehow depend on each other, but it does not tell us how.

Another knowledge representation scheme from AI, the semantic network (Brachman, 1977; Shastri, 1988; Sowa, 1991), is based on similar nets of interdependent concepts, but here the dependencies are classified into distinct types with specific interpretations. For example, different types of relations might specify that a concept a "causes" another concept b, that a "is a part of" b, or that a "is a special case of" b. The motivation underlying semantic networks is that concepts get their meaning through the semantic relations they have with other concepts. This is similar to the bootstrapping philosophy underlying entailment meshes and entailment nets.

However, semantic networks do not solve the "symbol grounding" problem; they merely push it to another level of description. Although the nodes are supposed to mutually define each other, the relations or link types do not. Like the symbols in a more traditional representation, the link types used in a semantic network are primitives, which are more or less arbitrarily imposed by the system designer. In practice, this has produced confusion, with different researchers using different link types, or interpreting what seem to be the same link types in a different manner. Moreover, since the number of link types is not a priori limited, there is a tendency to solve problems by creating a new ad hoc link type each time it is not clear how a particular relation can be expressed using the existing link types. Because of this semantic confusion, the empirical verification of semantic network-based theories of cognition has produced ambiguous results, sometimes seeming to confirm the theory, sometimes seeming to contradict it. Yet, a well-chosen set of link types can produce a kind of intuitive recognition, which may help the user to understand knowledge formulated as a semantic network more easily than knowledge expressed in a more abstract or sparse formalism.

For this reason I have tried to reconstruct a semantic network-like structure within my entailment nets (Heylighen, 1991b), while keeping to the fundamental requirement that all meaning or distinction only be justified by bootstrapping within the system, not by appealing to external reality as an invisible arbiter. The trick to bootstrap link types is simply to reduce them to node types, which themselves are reduced to nodes. Since all nodes are subject to the bootstrapping axiom, this indirectly subjects link types to bootstrapping as well. Let me explain this procedure in more detail.
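One plausible way to picture this reduction is to "reify" each typed link as an ordinary node: a typed link a -R-> b becomes a new node connected by plain, untyped entailments to a, to b, and to a node standing for the type R itself. The following sketch is an illustrative assumption about the construction, not the paper's exact procedure; the naming convention for the reified node is hypothetical.

```python
# Sketch: reducing link types to nodes. A typed link (a, R, b) is
# replaced by an ordinary node connected via plain entailments to a,
# b, and a node representing the type R. Since R is now just another
# node, it too falls under the bootstrapping axiom.

edges = set()  # plain, untyped entailments: (source, target) pairs

def reify(a, rel_type, b):
    """Replace a typed link a -R-> b with an ordinary link node."""
    link_node = f"{a}|{rel_type}|{b}"   # hypothetical naming convention
    edges.add((a, link_node))           # a entails the link node
    edges.add((link_node, b))           # the link node entails b
    edges.add((link_node, rel_type))    # ...and entails the type node
    return link_node

reify("dog", "is-a", "mammal")
# The type "is-a" is now itself a node in the net.
```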

The first step is to propose a general interpretation of the existing nodes and links in an entailment net. Until now, I have merely sketched the representation formalism and pointed out its analogies with Pask's Lp expressions, without clarifying what its elements concretely stand for. Since nodes are defined by the way they are distinguished, it is natural to interpret them as basic cognitive distinctions (cf. Spencer Brown, 1969; Heylighen, 1990b,d), that is, as classes of phenomena that are separated or distinguished by an observer from all other phenomena that do not belong to the class. It can be argued that all perceptual and cognitive entities, such as concepts, patterns, categories, or experiences, are distinctions. They are all triggered by certain phenomena but not by others. They thus separate the universe of phenomena into two complementary classes: those that fit the concept (the "indication" or "marked" side of the distinction, according to Spencer Brown, 1969), and those that do not. A distinction can be seen as the most fundamental unit of cognition.

Whereas the distinctions cut up the phenomenal universe, the entailment relation connects everything back together. If distinction a entails distinction b, we might say that given a we can somehow expect to see b as well. This is a kind of generalized or weakened form of "if a, then b". In order to recover more of the properties of the traditional "if...then" of logic (implication), we should at least demand transitivity of the relation. Indeed, "if a, then b" and "if b, then c" together imply "if a, then c". The entailment relation does not have any a priori properties such as transitivity, symmetry or reflexivity. Yet, we can always choose to focus attention on that part of the entailment relation that is transitive, and to interpret it as an "if...then". (One way to mathematically generate the transitive part is to take the intersection of an entailment relation E with its second power: Etrans = E ∩ E².)
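The construction of the transitive part can be sketched concretely. The following is a minimal illustration, assuming entailments are represented as pairs of node labels; it keeps exactly those entailments (a, c) that are also mediated by some intermediate node b.

```python
# Sketch: the transitive part of an entailment relation,
# Etrans = E ∩ E², where E² = {(a, c) : (a, b) ∈ E and (b, c) ∈ E}.
# The pair-of-strings representation is illustrative.

def transitive_part(E):
    """Return the entailments (a, c) in E that are also mediated
    by some intermediate node b, i.e. a -> b and b -> c."""
    E = set(E)
    E_squared = {(a, c) for (a, b1) in E for (b2, c) in E if b1 == b2}
    return E & E_squared

E = {("dog", "mammal"), ("mammal", "animal"),
     ("dog", "animal"), ("dog", "barks")}
print(transitive_part(E))  # only ("dog", "animal") is mediated
```

Here ("dog", "animal") survives because it is mediated by "mammal", while ("dog", "barks") is dropped: it may be a valid entailment, but it has no transitive support within E.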

The interpretation of this "if a, then b" between distinctions is that the class a is somehow subsumed in or followed by the class b. For example, if a phenomenon is a dog, then it is also a mammal: dog -> mammal. It means that a phenomenon denoted by the first concept cannot be present or actual, without a phenomenon denoted by the second one being (simultaneously) or becoming (afterwards) actual. As the example shows, a primary instantiation of entailment is the relation between an instance or subclass and the more general class to which it belongs. The more general class (e.g. mammal) can be seen as grouping a number of related, more concrete concepts (e.g. dog, cat, mouse, deer, etc.). With such an interpretation we can reinterpret the input of a concept x, I(x), as its "extension", i.e. the set of its instances, and its output O(x) as its "intension", i.e. the conjunction of its defining features. The meaning of x, as expressed by the bootstrapping axiom, can then be interpreted as determined by the disjunction of its input elements, and the conjunction of its output elements.
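Under this interpretation, a node's meaning is carried by its connections. A minimal sketch of the bookkeeping follows; the Node structure and helper function are hypothetical, introduced only to make the extension/intension reading concrete.

```python
# Sketch: I(x) read as extension (instances/subclasses entailing x),
# O(x) read as intension (defining features entailed by x).
# The Node class and link() helper are illustrative assumptions.

class Node:
    def __init__(self, name):
        self.name = name
        self.inputs = set()   # I(x): the node's "extension"
        self.outputs = set()  # O(x): the node's "intension"

def link(a, b):
    """Record the entailment a -> b: a joins I(b), b joins O(a)."""
    a.outputs.add(b)
    b.inputs.add(a)

dog, mammal, animal = Node("dog"), Node("mammal"), Node("animal")
link(dog, mammal)     # dog -> mammal
link(mammal, animal)  # mammal -> animal
# mammal's extension now contains dog; its intension contains animal
```

The bootstrapping axiom then reads naturally off this structure: the meaning of mammal is the disjunction over its inputs (dog, cat, ...) together with the conjunction over its outputs (animal, ...).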

This interpretation suggests yet another heuristic for distinguishing "similar" nodes. Suppose you have two nodes a and b, such that I(a) ⊂ I(b), while O(b) ⊂ O(a). In that case the system might suggest that the user create a link from a to b, a -> b, assuming that a concept a with a smaller extension and a larger intension than b is likely to be a special case or subcategory of b.
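The heuristic can be sketched as a simple check over the nodes' inputs and outputs. The dictionary representation below (sets of labels keyed by node name) is an illustrative assumption, not a prescribed data structure.

```python
# Sketch of the suggestion heuristic: if I(a) is a proper subset of
# I(b) (smaller extension) and O(b) a proper subset of O(a) (larger
# intension), propose the link a -> b.

def suggest_links(I, O):
    """Return candidate links a -> b where a looks like a
    special case or subcategory of b."""
    suggestions = []
    for a in I:
        for b in I:
            if a != b and I[a] < I[b] and O[b] < O[a]:
                suggestions.append((a, b))
    return suggestions

I = {"dog": {"poodle"}, "mammal": {"poodle", "cat"}}
O = {"dog": {"barks", "animal"}, "mammal": {"animal"}}
print(suggest_links(I, O))  # suggests dog -> mammal, not the reverse
```

Note that the proper-subset test (`<` on Python sets) runs in both directions over each pair, so the heuristic only proposes the asymmetric case; two nodes with identical inputs and outputs yield no suggestion here (they would instead be candidates for merging).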







