ENTAILMENT NETS AND THE BOOTSTRAPPING AXIOM

Let us generalize the above scheme by considering a general, directed, i.e. asymmetric, relation. I will call this relation "entailment" and its instances "links", denoting them by right arrows: -> . Instead of "topics", I will call the elements connected by the entailment relation "nodes", or "concepts". The graphical representation of nodes and links will take the form of an unlabelled directed graph, with the only constraint that there can be at most one link between any two nodes (see Fig. 3 for an example). In analogy to the entailment meshes, we may call such graphs entailment nets.

Fig. 3: an entailment net consisting of nodes (a, b, c, ...) connected by links. I(a) denotes the set of input nodes of a, O(a) the set of its output nodes.

For every node or concept a, there will now be two relevant sets, depending on the direction in which we follow the relation, i.e. depending on whether we look at the concepts that a entails, or that are entailed by it. We can thus define the input and output sets of a concept a (see Fig. 3):

Input: I(a) = { x | x -> a}

Output : O(a) = { x | a -> x }
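To make the definitions concrete, here is a minimal sketch (not from the text; the link set is a hypothetical example) that encodes an entailment net as a set of pairs, where (x, y) stands for the link x -> y, and computes the input and output sets as defined above:

```python
# Hypothetical entailment net: each pair (x, y) encodes a link x -> y.
links = {("d", "a"), ("a", "b"), ("a", "c"), ("b", "c")}

def I(node, net):
    """Input set: I(node) = {x | x -> node}."""
    return {x for (x, y) in net if y == node}

def O(node, net):
    """Output set: O(node) = {y | node -> y}."""
    return {y for (x, y) in net if x == node}

print(I("a", links) == {"d"})       # True
print(O("a", links) == {"b", "c"})  # True
```

The "at most one link between any two nodes" constraint holds automatically, since a set cannot contain the same pair twice.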

As in an entailment mesh, the meaning (definition, distinction) of a can be interpreted as derived or produced from these two sets. The requirement of ambiguity avoidance can be formulated most generally as the following bootstrapping axiom (Heylighen, 1990a,d, 1991b):

two concepts are distinct if and only if their input sets or their output sets are distinct:

a != b <=> I(a) != I(b) or O(a) != O(b)

Thus, the concept a is unambiguously defined or determined by the other concepts it is connected with by the entailment relation. This definition is "bootstrapping" because the elements in the input and output sets are of course themselves only distinguished by virtue of their own connections with distinct elements, including the original a and b. It is not recursive in the conventional sense, because there is no privileged set of primitive elements in terms of which all others are defined, as in a traditional symbol-based knowledge representation. Note that the axiom implies that concepts with empty input and output sets (i.e. independent, disconnected nodes) cannot be distinguished at all.

Although the bootstrapping axiom is formulated as a static, logical requirement, its practical value lies in the dynamic construction of new concepts and entailments. Suppose that a user is developing a knowledge representation consisting of different concepts together with their entailment relations. A computer program like CONCEPTORGANIZER would then constantly analyse the resulting network looking for ambiguities, i.e. nodes that are different yet have the same input and output sets. If it found such an ambiguity, it would suggest a number of methods for resolving it, analogous to the methods proposed by THOUGHTSTICKER. These obviously divide into two main strategies: either merge the ambiguous nodes if they are really equivalent, or differentiate their respective input and output sets by adding, splitting or deleting nodes (Heylighen, 1991a).
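The ambiguity scan such a program might perform can be sketched as follows (a hypothetical illustration, not CONCEPTORGANIZER's actual implementation): encode each link x -> y as a pair (x, y) and report every pair of nodes with identical input and output sets.

```python
from itertools import combinations

def inputs(node, net):
    """I(node) = {x | x -> node}."""
    return {x for (x, y) in net if y == node}

def outputs(node, net):
    """O(node) = {y | node -> y}."""
    return {y for (x, y) in net if x == node}

def ambiguous_pairs(nodes, net):
    """All node pairs that violate the bootstrapping axiom:
    distinct nodes with identical input AND output sets."""
    return [(a, b) for a, b in combinations(sorted(nodes), 2)
            if inputs(a, net) == inputs(b, net)
            and outputs(a, net) == outputs(b, net)]

# b and c both receive a single link from a and have no other connections,
# so they are indistinguishable:
links = {("a", "b"), ("a", "c")}
print(ambiguous_pairs({"a", "b", "c"}, links))  # [('b', 'c')]
```

Each reported pair would then be presented to the user for merging or differentiation.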

The THOUGHTSTICKER strategy of merging coherences could be translated into an entailment net by adding a two-way entailment between the ambiguous nodes. For example, if I(a) = I(b) and O(a) = O(b), then adding the entailments b -> a and a -> b would add a to I(b) and O(b), and add b to I(a) and O(a), thus distinguishing the respective input and output sets. The two-way entailment could be interpreted as a "similar, yet different" relation. (However, if the entailment were also reflexive, i.e. included the links a -> a and b -> b, then we would be back to ambiguity, since the input and output sets would again be identical.)
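This resolution step can be verified directly. In the hypothetical sketch below (links encoded as pairs (x, y) for x -> y), a and b start out ambiguous; adding the two-way entailment makes their input and output sets distinct:

```python
def inputs(node, net):
    return {x for (x, y) in net if y == node}

def outputs(node, net):
    return {y for (x, y) in net if x == node}

# a and b are ambiguous: same input set {x} and same output set {y}.
net = {("x", "a"), ("x", "b"), ("a", "y"), ("b", "y")}
assert inputs("a", net) == inputs("b", net)
assert outputs("a", net) == outputs("b", net)

# Adding the two-way entailment a <-> b distinguishes them:
# a enters I(b) and O(b), b enters I(a) and O(a).
net |= {("a", "b"), ("b", "a")}
print(inputs("a", net) != inputs("b", net))    # True
print(outputs("a", net) != outputs("b", net))  # True
```

Note that adding the reflexive links ("a", "a") and ("b", "b") on top of this would restore the ambiguity, as the parenthetical remark above points out.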

The richer structure of entailment nets makes ambiguities less likely in an arbitrary net, since both the input and the output sets must be identical for an ambiguity to arise. However, there are different types of "approximate" ambiguities, which do not strictly transgress the bootstrapping axiom, but where the difference is small. For example, the input sets of a pair of nodes could be identical, while their output sets differ in only one element. In such a case, the single difference may be accidental, and the nodes had better be merged. Alternatively, if the nodes are genuinely different, it is likely that there is more than one difference, and the computer system will suggest that the user add a difference in the input sets reflecting the existing difference in the output sets.

NODE INTEGRATION

A system based on entailment nets would also allow an equivalent of THOUGHTSTICKER's "condense" operation for creating a more general concept, by integrating a number of more specific concepts. This may be called node integration, as distinguished from the node identification (merging) suggested in the case of ambiguity (cf. Bakker, 1987; Stokman & de Vries, 1988).

A cluster of concepts that are strictly distinct according to the bootstrapping axiom may still be indistinguishable from outside the cluster, because they have the same input and output links with nodes outside the cluster: consider a set of nodes A = {ai | i = 1, ..., n}, with the property that for all 1 <= i, j <= n : I(ai) \ A = I(aj) \ A and O(ai) \ A = O(aj) \ A (where "\" stands for the set theoretic difference). In that case, the nodes belonging to A can be integrated and replaced by a single new node A, resulting in a much simplified graph, without affecting the distinctions between nodes outside of A. See Fig. 4 for an example.

Fig. 4: (a) a cluster {a1, a2, a3} of nodes that are externally indistinguishable; (b) the graph resulting from the replacement of that cluster by the single node A.
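The external-indistinguishability test and the integration step can be sketched as follows (a hypothetical illustration with made-up node names; links are encoded as pairs (x, y) for x -> y):

```python
def inputs(node, net):
    return {x for (x, y) in net if y == node}

def outputs(node, net):
    return {y for (x, y) in net if x == node}

def externally_indistinguishable(cluster, net):
    """True if all cluster members have identical input and output
    sets outside the cluster, i.e. I(ai) \\ A = I(aj) \\ A and
    O(ai) \\ A = O(aj) \\ A for all members ai, aj."""
    ext = [(inputs(a, net) - cluster, outputs(a, net) - cluster)
           for a in cluster]
    return all(e == ext[0] for e in ext)

def integrate(cluster, net, new_node):
    """Replace the whole cluster by a single node; intra-cluster
    links disappear (self-loops are dropped in this simple sketch)."""
    renamed = {(new_node if x in cluster else x,
                new_node if y in cluster else y) for (x, y) in net}
    return {(x, y) for (x, y) in renamed if x != y}

net = {("p", "a1"), ("p", "a2"), ("a1", "a2"), ("a1", "q"), ("a2", "q")}
cluster = {"a1", "a2"}
print(externally_indistinguishable(cluster, net))  # True
print(sorted(integrate(cluster, net, "A")))        # [('A', 'q'), ('p', 'A')]
```

As in Fig. 4, the resulting graph is simpler, and the distinctions between the nodes outside the cluster (p and q here) are unaffected.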

Again, the property of external indistinguishability of a cluster can be relaxed to approximate indistinguishability. This means that the external input and output sets of the cluster elements are not completely identical, but have most elements in common. This would be the case with concepts that have a "family resemblance", with many common properties, but some idiosyncratic differences. Fisher (1987) has proposed an algorithm for clustering together such concepts on the basis of the number of properties (in an entailment net, the input and output elements) they have in common. (The algorithm uses the conditional entropy as a measure of difference between sets of properties.) Such conceptual clustering is a basic form of what is usually called "machine learning", "knowledge discovery" or "data mining", i.e. the retrieval of regularities in large sets of interrelated data.
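Fisher's algorithm uses an information-theoretic measure; for illustration only, a much simpler overlap measure conveys the idea of property-based resemblance (a hypothetical sketch, not Fisher's method): treat a node's input and output elements as its properties and compute their Jaccard overlap.

```python
def properties(node, net):
    """A node's 'properties': its input and output elements,
    tagged by role so that x -> a and a -> x count as distinct."""
    return ({("in", x) for (x, y) in net if y == node}
            | {("out", y) for (x, y) in net if x == node})

def resemblance(a, b, net):
    """Jaccard overlap of property sets, in [0, 1]:
    1.0 means identical neighbourhoods, 0.0 means nothing shared."""
    pa, pb = properties(a, net), properties(b, net)
    return len(pa & pb) / len(pa | pb) if pa | pb else 0.0

# a and b share the input x and the output p; a additionally outputs q:
net = {("x", "a"), ("x", "b"), ("a", "p"), ("b", "p"), ("a", "q")}
print(round(resemblance("a", "b", net), 3))  # 0.667
```

A clustering procedure would then greedily group nodes whose pairwise resemblance exceeds some threshold, merging "family resemblance" concepts while leaving clearly distinct ones apart.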







