### Model Theory – Normal Semantics

• The problem of inductive inference:
• Given are background (prior) knowledge B and evidence E
• The evidence E = E+ ∪ E- consists of positive evidence E+ and negative evidence E-
• The aim is then to find a hypothesis H such that the following conditions hold:
Prior Satisfiability: B ∪ E- ⊭ □
Posterior Satisfiability: B ∪ H ∪ E- ⊭ □
Prior Necessity: B ⊭ E+
Posterior Sufficiency: B ∪ H ⊧ E+
• The Sufficiency criterion is sometimes called completeness with respect to the positive evidence
• The Posterior Satisfiability criterion is also known as consistency with the negative evidence
• In this general setting, the background theory, the evidence, and the hypotheses can be arbitrary (well-formed) formulae
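The four conditions can be illustrated with a small propositional sketch. This is a toy example, not part of the original slides: the atoms (bird, animal, flies, swims) and the brute-force entailment checker are invented for illustration, with clauses represented as sets of (atom, polarity) literals.

```python
from itertools import product

def atoms(clauses):
    # Collect all atom names occurring in a list of clauses
    return sorted({a for c in clauses for a, _ in c})

def satisfiable(clauses):
    # Brute force: try every truth assignment over the atoms
    names = atoms(clauses)
    for vals in product([False, True], repeat=len(names)):
        m = dict(zip(names, vals))
        if all(any(m[a] == pos for a, pos in c) for c in clauses):
            return True
    return False

def entails(kb, atom):
    # kb |= atom  iff  kb together with the negation of atom is unsatisfiable
    return not satisfiable(kb + [{(atom, False)}])

B = [{("bird", True)},                          # fact: bird
     {("bird", False), ("animal", True)}]       # clause: bird -> animal
H = [{("bird", False), ("flies", True)}]        # hypothesis: bird -> flies
E_neg = [{("swims", False)}]                    # negative evidence: not swims
E_pos = "flies"                                 # positive evidence: flies

print(satisfiable(B + E_neg))        # Prior Satisfiability
print(satisfiable(B + H + E_neg))    # Posterior Satisfiability
print(not entails(B, E_pos))         # Prior Necessity
print(entails(B + H, E_pos))         # Posterior Sufficiency
```

With this toy data all four conditions hold, so H would be an acceptable hypothesis under the normal semantics.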

### Model Theory – Definite Semantics

• In most practical ILP systems, the background theory and the hypotheses are restricted to definite clauses
• Clause: A disjunction of literals
• Horn Clause: A clause with at most one positive literal
• Definite Clause: A Horn clause with exactly one positive literal
• This setting has the advantage that a definite clause theory T has a unique minimal Herbrand model M+(T)
• Any logical formula is either true or false in this minimal model (all formulae are decidable, and the Closed World Assumption holds)
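The minimal Herbrand model of a ground definite program can be computed by naive forward chaining (the least fixpoint of the T_P operator). A minimal sketch, assuming ground atoms are represented as strings; non-ground clauses would first have to be grounded:

```python
def minimal_model(facts, rules):
    """Least Herbrand model of a ground definite program.
    facts: set of ground atoms; rules: list of (head, [body atoms])."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # Fire a rule only when its whole body is already in the model
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

facts = {"father(henry,jane)", "parent(jane,john)"}
rules = [("grandfather(henry,john)",
          ["father(henry,jane)", "parent(jane,john)"])]

print("grandfather(henry,john)" in minimal_model(facts, rules))  # True
```

Because definite programs have no negation, this iteration always terminates on a finite Herbrand base and yields the unique minimal model.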

### Model Theory – Definite Semantics (2)

• The definite semantics again require a set of conditions to hold
• We can now refer to every formula in E, since each is guaranteed to have a truth value in the minimal model

• Consistency:
Prior Satisfiability: all e in E- are false in M+(B)
• Negative evidence should not be part of the minimal model
Posterior Satisfiability: all e in E- are false in M+(B ∪ H)
• Negative evidence should not be supported by our hypotheses
• Completeness:
Prior Necessity: some e in E+ are false in M+(B)
• If all positive examples are already true in the minimal model of the background knowledge, then no hypothesis we derive will add useful information
Posterior Sufficiency: all e in E+ are true in M+(B ∪ H)
• All positive examples are true (explained by the hypothesis) in the minimal model of the background theory and the hypothesis
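The four conditions reduce to membership tests against the minimal models of B and B ∪ H. A minimal Python sketch, assuming ground atoms as strings; the atoms bird(tweety)/flies(tweety) anticipate the example used later in these slides, and the negative example flies(stone) is invented for illustration:

```python
def minimal_model(facts, rules):
    # Naive forward chaining to the least fixpoint
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

B_facts = {"bird(tweety)"}
H_rules = [("flies(tweety)", ["bird(tweety)"])]   # ground instance of H
E_pos, E_neg = {"flies(tweety)"}, {"flies(stone)"}  # flies(stone) is hypothetical

M_B  = minimal_model(B_facts, [])        # M+(B)
M_BH = minimal_model(B_facts, H_rules)   # M+(B u H)

print(all(e not in M_B  for e in E_neg))   # Prior Satisfiability
print(all(e not in M_BH for e in E_neg))   # Posterior Satisfiability
print(any(e not in M_B  for e in E_pos))   # Prior Necessity
print(all(e in  M_BH    for e in E_pos))   # Posterior Sufficiency
```

All four checks print True here, so the hypothesis is consistent and complete in the definite semantics.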

### Model Theory – Definite Semantics (3)

• A further restriction on top of the definite semantics is to allow only true and false ground facts as examples (evidence)
• This is called the example setting
• The example setting is the main setting employed by ILP systems
• It allows only factual, not causal, evidence (the latter usually captures more knowledge)
• Example:
• B: grandfather(X, Y) ← father(X, Z), parent(Z, Y)
father(henry, jane) ←
etc.
• E: grandfather(henry, john) ←
grandfather(henry, alice) ←
• Not allowed in example setting:
• ← grandfather(X, X)     [Not allowed in definite semantics]
grandfather(henry, john) ← father(henry, jane), mother(jane, john)     [Not a ground fact]
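The "etc." facts above are left unspecified in the slide; in the sketch below they are filled in with hypothetical facts (the intermediate parent "sue" is invented) just to show Posterior Sufficiency holding in the example setting. The non-ground clause in B is grounded over the known constants before forward chaining:

```python
from itertools import product

# Hypothetical ground facts standing in for the "etc." in the slide
facts = {"father(henry,jane)", "parent(jane,john)",
         "father(henry,sue)",  "parent(sue,alice)"}
consts = ["henry", "jane", "sue", "john", "alice"]

# All ground instances of  grandfather(X,Y) <- father(X,Z), parent(Z,Y)
rules = [(f"grandfather({x},{y})", [f"father({x},{z})", f"parent({z},{y})"])
         for x, y, z in product(consts, repeat=3)]

# Forward chain to the minimal model M+(B u H)
model = set(facts)
changed = True
while changed:
    changed = False
    for head, body in rules:
        if head not in model and all(b in model for b in body):
            model.add(head)
            changed = True

E_pos = {"grandfather(henry,john)", "grandfather(henry,alice)"}
print(E_pos <= model)   # True: all positive examples are in the minimal model
```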

### Model Theory – Non-monotonic Semantics

• In the non-monotonic setting:
• The background theory is a set of definite clauses

• The evidence is empty
• The positive evidence is considered part of the background theory
• The negative evidence is derived implicitly, by making the closed world assumption (realized by the minimal Herbrand model)

• The hypotheses are sets of general clauses expressible using the same alphabet as the background theory

### Model Theory – Non-monotonic Semantics (2)

• Since only positive evidence is present, it is assumed to be part of the background theory:
B’ = B ∪ E
• The following conditions should hold for H and B’:
• Validity: all h in H are true in M+(B’)
• All clauses belonging to a hypothesis hold in the database B’, i.e., they are true properties of the data
• Completeness: if a general clause g is true in M+(B’), then H ⊧ g
• All information that is valid in the minimal model of B’ should follow from the hypothesis
• Additionally the following can be a requirement:
• Minimality: there is no proper subset G of H which is valid and complete
• The hypothesis should not contain redundant clauses

### Model Theory – Non-monotonic Semantics (3)

• Example for B (definite clauses):
male(luc) ←
female(lieve) ←
human(lieve) ←
human(luc) ←
• A possible solution is then H (a set of general clauses):
← female(X), male(X)
human(X) ← male(X)
human(X) ← female(X)
female(X), male(X) ← human(X)
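Validity of each clause in H can be checked directly against the minimal model: a general clause is true in M+(B’) iff every grounding that makes its whole body true also makes at least one head atom true. A minimal sketch, assuming string atoms with the single variable X filled in via templates:

```python
M = {"male(luc)", "female(lieve)", "human(lieve)", "human(luc)"}  # M+(B)
consts = ["luc", "lieve"]

def valid(heads, body):
    # Clause  h1, ..., hn <- b1, ..., bm  is true in M iff no grounding
    # satisfies the full body while falsifying every head atom
    for x in consts:
        body_g = [a.format(X=x) for a in body]
        head_g = [a.format(X=x) for a in heads]
        if all(a in M for a in body_g) and not any(a in M for a in head_g):
            return False
    return True

H = [
    ([],                           ["female({X})", "male({X})"]),  # <- female(X), male(X)
    (["human({X})"],               ["male({X})"]),                 # human(X) <- male(X)
    (["human({X})"],               ["female({X})"]),               # human(X) <- female(X)
    (["female({X})", "male({X})"], ["human({X})"]),                # female(X), male(X) <- human(X)
]

print(all(valid(h, b) for h, b in H))   # True: every clause of H is valid in M+(B)
```

By contrast, an invented clause such as flies(X) ← human(X) would fail this check, since no flies/1 atom occurs in M.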

### Model Theory – Non-monotonic Semantics (4)

• One more example to illustrate the difference between the example setting and the non-monotonic setting
• Consider:
• Background theory B
bird(tweety) ←
bird(oliver) ←
• Examples E+:
flies(tweety)

• For the non-monotonic setting B’ = B ∪ E+ because positive examples are considered part of the background knowledge

### Model Theory – Non-monotonic Semantics (5)

• Example setting:
• An acceptable hypothesis H1 would be
flies(X) ← bird(X)
• It is acceptable because it fulfills the completeness and consistency criteria of the definite semantics
• This realizes an inductive leap, because flies(oliver) is true in
M+(B ∪ H) = { bird(tweety), bird(oliver), flies(tweety), flies(oliver) }
• Non-monotonic setting:
• H1 is not a solution, since there exists a substitution {X ← oliver} which makes the clause false in M+(B’) (the validity criterion is violated):
M+(B’) = { bird(tweety), bird(oliver), flies(tweety) }
{X ← oliver}: flies(oliver) ← bird(oliver)     [false: body true, head false]
{X ← tweety}: flies(tweety) ← bird(tweety)     [true]
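The failing substitution can be found mechanically by testing each ground instance of H1 against M+(B’). A minimal sketch, assuming string atoms:

```python
M_B_prime = {"bird(tweety)", "bird(oliver)", "flies(tweety)"}  # M+(B')
consts = ["tweety", "oliver"]

def clause_true(x, model):
    # Ground instance of H1:  flies(x) <- bird(x)
    # True unless the body holds while the head does not
    return ("bird(%s)" % x not in model) or ("flies(%s)" % x in model)

violations = [x for x in consts if not clause_true(x, M_B_prime)]
print(violations)   # ['oliver']: the substitution {X <- oliver} falsifies H1
```

This is exactly why H1 is rejected in the non-monotonic setting although it is accepted (as an inductive leap) in the example setting.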