Comparing heterogeneous sources often involves comparing conflicts. Suppose we are dealing with a group of clinicians advising on a patient, a group of witnesses to an incident, or a set of newspaper reports covering the same event. These are all situations in which we expect some degree of inconsistency in the information.
When an intelligent agent works with a set of information (beliefs, knowledge, preferences, ...) expressed in logical form, two notions are of crucial interest: the informational content of a piece of information and the amount of contradiction it involves. In many high-level reasoning tasks, one needs to know how much information a given piece of information conveys and/or how much contradiction it involves. This is particularly important for complex information about the real world, where inconsistencies are hard to avoid.
While information measures allow us to say how "valuable" a piece of information is, by showing how precise it is, contradiction measures allow us to say how "problematic" it is, by showing how conflicting it is. Just as joint/conditional information measures are useful for defining the pertinence of a new piece of information with respect to an old one (or, more generally, to a set of information), joint/conditional contradiction measures can be used to define a notion of conflict between pieces of information. The two kinds of measure are to a large extent independent of one another, but both are needed in numerous applications.
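As a toy illustration of how an information measure captures precision, consider a Shannon-style measure over a fixed set of atoms: a formula's information content can be taken as the number of atoms minus the log of its model count, so that formulas ruling out more interpretations score higher. The sketch below is our own simplification, restricting formulas to conjunctions of literals; the atom set and function names are illustrative assumptions, not a fixed definition from the literature.

```python
from itertools import product
from math import log2

ATOMS = ['a', 'b', 'c']

def models(literals):
    """Brute-force count of interpretations over ATOMS that satisfy a
    conjunction of literals ('-a' denotes the negation of atom 'a')."""
    count = 0
    for values in product([True, False], repeat=len(ATOMS)):
        world = dict(zip(ATOMS, values))
        if all(world[l.lstrip('-')] == (not l.startswith('-')) for l in literals):
            count += 1
    return count

def information(literals):
    """Shannon-style information content: |ATOMS| - log2(model count).
    A tautology (no literals) scores 0; an inconsistent set has no models,
    and we arbitrarily assign it the maximum value here."""
    m = models(literals)
    return len(ATOMS) - log2(m) if m else float(len(ATOMS))
```

With three atoms, `information([])` is 0.0 (nothing ruled out), `information(['a'])` is 1.0, and `information(['a', 'b'])` is 2.0: the more precise the formula, the higher the score.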
Five key approaches to measuring inconsistent information are: (1) consistency-based analysis, which focuses on the consistent and inconsistent subsets of a knowledgebase; (2) information-theoretic analysis, an adaptation of Shannon's information measure; (3) probabilistic semantic analysis, which assumes a probability distribution over a set of formulae; (4) epistemic actions analysis, which measures the degree of information in a knowledgebase by the number of actions required to identify the truth value of each atomic proposition, and the degree of contradiction by the number of actions needed to render the knowledgebase consistent; and (5) model-theoretic analysis, which evaluates a knowledgebase in three- or four-valued models that permit an "inconsistent" truth value. Whilst the inter-relationships between these approaches are yet to be fully established, it is clear that they offer a range of formalisms that can be harnessed for evaluating sources of information prior to merging.
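To make the consistency-based approach concrete, one common idea is to enumerate the minimal inconsistent subsets of a knowledgebase and use their number as a measure. The sketch below is a deliberately simplified instance: formulas are restricted to propositional literals (so consistency checking reduces to looking for complementary pairs), whereas a real implementation would call a theorem prover or SAT solver; the function names are ours.

```python
from itertools import combinations

def consistent(literals):
    """A set of literals is consistent iff it contains no complementary
    pair, e.g. both 'a' and '-a'."""
    return not any(('-' + l if not l.startswith('-') else l[1:]) in literals
                   for l in literals)

def minimal_inconsistent_subsets(kb):
    """Enumerate subsets by increasing size, keeping each inconsistent
    subset that contains no previously found minimal inconsistent subset."""
    mis = []
    for size in range(1, len(kb) + 1):
        for subset in combinations(kb, size):
            s = set(subset)
            if not consistent(s) and not any(m <= s for m in mis):
                mis.append(s)
    return mis

kb = ['a', '-a', 'b', '-b', 'c']
mis = minimal_inconsistent_subsets(kb)
# Two minimal inconsistent subsets: {a, -a} and {b, -b}, so a
# count-based measure would give this knowledgebase the value 2.
```

The brute-force enumeration is exponential in the size of the knowledgebase, which is why practical consistency-based measures rely on solver technology rather than exhaustive search.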
Having some understanding of the "degree of inconsistency" of a structured report can help in deciding how to act on it. Moreover, inconsistencies between the information in a structured report and domain knowledge can reveal important features of the report. For this we use a significance function that gives a value to each possible inconsistency that can arise in a structured report in a given domain. We may also use significance in the following ways: (1) to reject reports that are too inconsistent; (2) to highlight unexpected information; (3) to focus on repairing significant inconsistencies; and (4) to monitor sources of information to identify sources that are unreliable.
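A minimal sketch of use (1), rejecting reports that are too inconsistent, might look as follows. The conflict labels, weights, and threshold are all illustrative assumptions for a hypothetical weather-report domain, not values taken from any actual significance function.

```python
# Hypothetical significance assignments for conflicts that may arise
# between a weather report and domain knowledge (values are assumptions).
significance = {
    ('sunny', 'rain'): 0.9,   # directly contradictory forecasts: serious
    ('mild', 'cold'):  0.3,   # adjacent temperature bands: minor
}

def report_significance(conflicts):
    """Total significance of the inconsistencies detected in a report;
    conflicts with no assigned weight default to a neutral 0.5."""
    return sum(significance.get(c, 0.5) for c in conflicts)

def accept(conflicts, threshold=1.0):
    """Reject a report whose total inconsistency significance exceeds
    the threshold (use (1) from the list above)."""
    return report_significance(conflicts) <= threshold
```

Here a report containing only the minor `('mild', 'cold')` conflict would be accepted, while one also containing `('sunny', 'rain')` would exceed the threshold and be rejected.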
Contact a.hunter@cs.ucl.ac.uk or +44 20 7679 7295.