Monday, August 27, 2007

Explaining Inferences

One of the nice capabilities of OWL is its rich built-in semantics. These semantics allow generic inference engines to make implicit relationships in your OWL model explicit. However, once you start using an inference engine, you are often working with a black box and may discover that the system draws inferences that are difficult to understand.
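
To make this concrete, here is a minimal sketch using the Jena ontology API of that era (the travel namespace and class names are made up for illustration): only two subclass axioms are asserted, and a generic rule reasoner makes the third, transitive one explicit.

```java
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class ImplicitToExplicit {
    public static void main(String[] args) {
        String ns = "http://example.org/travel#"; // hypothetical namespace
        // An OntModel backed by Jena's built-in OWL rule reasoner
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
        OntClass activity    = model.createClass(ns + "Activity");
        OntClass sightseeing = model.createClass(ns + "Sightseeing");
        OntClass cityTour    = model.createClass(ns + "CityTour");
        // Only these two subclass axioms are asserted...
        sightseeing.addSuperClass(activity);
        cityTour.addSuperClass(sightseeing);
        // ...but the reasoner makes the implicit transitive link explicit
        System.out.println(cityTour.hasSuperClass(activity)); // prints: true
    }
}
```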

Among OWL tools, SWOOP was probably the first to provide an explanation facility that points users to the set of base axioms from which an inference follows. That capability was based on an experimental version of the Pellet inference engine, which was folded into the regular Pellet distribution with version 1.5. The new Pellet release made it straightforward for us to add a similar capability to the new TopBraid Composer version 2.2. After running inferences via Pellet, you can click on the menu next to an inferred triple to open the Explanation view:

[Screenshot: the context menu next to an inferred triple, with the command that opens the Explanation view]
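
If you want to reproduce this outside of the Composer UI, the following rough sketch runs Pellet's Jena binding programmatically (the ontology URL is hypothetical); the difference between the inferred closure and the asserted base model is exactly the set of inferred triples the view attaches to.

```java
import org.mindswap.pellet.jena.PelletReasonerFactory;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class RunPelletInferences {
    public static void main(String[] args) {
        // An OntModel whose inference graph is backed by the Pellet reasoner
        OntModel model = ModelFactory.createOntologyModel(PelletReasonerFactory.THE_SPEC);
        model.read("http://example.org/travel.owl"); // hypothetical ontology URL
        // Everything in the closure that is not among the asserted triples
        // corresponds to the "inferred triples" shown in TopBraid Composer
        Model inferred = model.difference(model.getBaseModel());
        inferred.write(System.out, "N-TRIPLE");
    }
}
```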
The Explanation view displays a clickable list of axioms. In the case below, it shows that the class Safari is inconsistent (i.e., a subclass of owl:Nothing) because it is a subclass of the two mutually disjoint classes Sightseeing and Adventure.

[Screenshot: the Explanation view listing the axioms that make Safari a subclass of owl:Nothing]
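
The same example can be reconstructed in a few lines of Jena code. The assertions below come straight from the example above; the two explanation calls at the end (setDoExplanation/getExplanation) are an assumption about Pellet 1.5's tracing API, so treat them as a sketch rather than the exact interface TopBraid Composer uses.

```java
import aterm.ATermAppl;
import org.mindswap.pellet.KnowledgeBase;
import org.mindswap.pellet.jena.PelletInfGraph;
import org.mindswap.pellet.jena.PelletReasonerFactory;
import org.mindswap.pellet.utils.ATermUtils;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.vocabulary.OWL;

public class ExplainSafari {
    public static void main(String[] args) {
        String ns = "http://example.org/travel#"; // hypothetical namespace
        OntModel model = ModelFactory.createOntologyModel(PelletReasonerFactory.THE_SPEC);
        OntClass sightseeing = model.createClass(ns + "Sightseeing");
        OntClass adventure   = model.createClass(ns + "Adventure");
        OntClass safari      = model.createClass(ns + "Safari");
        // The three axioms from the screenshot: disjointness plus two subclass axioms
        sightseeing.addDisjointWith(adventure);
        safari.addSuperClass(sightseeing);
        safari.addSuperClass(adventure);
        // Pellet concludes that Safari can have no instances,
        // i.e. it is inferred to be a subclass of owl:Nothing
        System.out.println(safari.hasSuperClass(OWL.Nothing)); // prints: true
        // Assumption: Pellet's tracing-based explanation API of that era
        KnowledgeBase kb = ((PelletInfGraph) model.getGraph()).getKB();
        kb.setDoExplanation(true);
        ATermAppl safariTerm = ATermUtils.makeTermAppl(ns + "Safari");
        kb.isSatisfiable(safariTerm); // false; tracing records the axioms used
        System.out.println(kb.getExplanation());
    }
}
```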
At TopQuadrant we have already had several use cases where this capability was a huge time-saver, in particular when ontologies grow as large as those we develop for NASA. Although the explanations may look weird or geeky at first, any pointer in the right direction can make a huge difference.
