Propositional Logic

1  Introduction 
2  Propositional Logic 
3  Predicate Logic 
4  Reasoning 
5  Search Methods 
6  CommonKADS 
7  Problem-Solving Methods 
8  Planning 
9  Software Agents 
10 Rule Learning 
11 Inductive Logic Programming 
12 Formal Concept Analysis 
13 Neural Networks 
14 Semantic Web and Services 
Motivation
Technical Solution
Syntax
Semantics
Inference
Illustration by a Larger Example
Extensions
Summary
References
MOTIVATION
Logic is used to formalize deduction
Deduction = derivation of true statements (called conclusions ) from statements that are assumed to be true (called premises )
Natural language is not precise, so the careless use of logic can lead to claims that false statements are true, or to claims that a statement is true even though its truth does not necessarily follow from the premises
=> Logic provides a way to talk about truth and correctness in a rigorous way, so that we can prove things, rather than make intelligent guesses and just hope they are correct
Propositional logic is a good vehicle to introduce basic properties of logic; used to:
Associate natural language expressions with semantic representations
Evaluate the truth or falsity of semantic representations relative to a knowledge base
Compute inferences over semantic representations
One of the simplest and most common logics
The core of (almost) all other logics
An unambiguous formal language, akin to a programming language
Syntax: Vocabulary for expressing concepts without ambiguity
Semantics: Connection to what we're reasoning about
Interpretation: what the syntax means
Reasoning: How to prove things
What steps are allowed
TECHNICAL SOLUTION
SYNTAX
Logical constants: true, false
Propositional symbols: P, Q, S, ...
Wrapping parentheses: ( … )
Atomic formulas: Propositional Symbols or logical constants
Formulas are either atomic formulas, or can be formed by combining atomic formulas with the following connectives:
∧ ...and [conjunction]
∨ ...or [disjunction]
→...implies [implication / conditional]
↔ ...is equivalent [biconditional]
¬ ...not [negation]
A sentence (well formed formula) is defined as follows:
A symbol is a sentence
If S is a sentence, then ¬S is a sentence
If S is a sentence, then (S) is a sentence
If S and T are sentences, then (S ∨ T), (S ∧ T), (S → T), and (S ↔ T) are sentences
A sentence results from a finite number of applications of the above rules
Sentence → AtomicSentence | ComplexSentence
AtomicSentence → True | False | P | Q | R | ...
ComplexSentence → ( Sentence ) | Sentence Connective Sentence | ¬ Sentence
Connective → ∧ | ∨ | → | ↔
Ambiguities are resolved through precedence (from highest to lowest: ¬, ∧, ∨, →, ↔) or parentheses
e.g. ¬ P ∨ Q ∧ R → S is equivalent to ((¬ P) ∨ (Q ∧ R)) → S
Example meanings:
P means "It is hot"
Q means "It is humid"
R means "It is raining"
(P ∧ Q) → R: "If it is hot and humid, then it is raining"
Q → P: "If it is humid, then it is hot"
Further examples of well-formed formulas:
¬p ∧ ¬q
¬(p ∧ q)
(p ∧ q) ∨ r
p ∧ q ∧ r
(((¬p) ∧ q) → r) ↔ ((¬r) ∨ p)
(¬(p ∨ q) → q) → r
((¬p) ∨ (¬q)) ↔ (¬r)
Etc.
SEMANTICS
Given the truth values of all symbols in a sentence, it can be “evaluated” to determine its truth value (True or False)
A model for a KB is a “possible world” (assignment of truth values to propositional symbols) in which each sentence in the KB is True
A valid sentence or tautology is a sentence that is True under all interpretations, no matter what the world is actually like or how the semantics are defined (example: “It’s raining or it’s not raining”)
An inconsistent sentence or contradiction is a sentence that is False under all interpretations (the world is never like what it describes, as in “It’s raining and it’s not raining”)
P entails Q , written P ⊧ Q, means that whenever P is True, so is Q; in other words, all models of P are also models of Q
In propositional logic, truth values are assigned to the atoms of a formula in order to evaluate the truth value of the formula
An assignment is a function v : P → {T,F} which assigns a truth value to any atom in a given formula (P is the set of all propositional letters, i.e. atoms)
Suppose F denotes the set of all propositional formulas. We can extend an assignment v to a function v : F → {T,F} which assigns the truth value v(A) to any formula A in F; v is then called an interpretation.
Example: let v(p) = F, v(q) = T. If A = (¬p → q) ↔ (p ∨ q), what is v(A)?
Solution:
v(A) = v((¬p → q) ↔ (p ∨ q))
= v(¬p → q) ↔ v(p ∨ q)
= (v(¬p) → v(q)) ↔ (v(p) ∨ v(q))
= (¬v(p) → v(q)) ↔ (v(p) ∨ v(q))
= (¬F → T) ↔ (F ∨ T)
= (T → T) ↔ (F ∨ T)
= T ↔ T
= T
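The hand evaluation above can be mirrored directly in code; the following is a small sketch, where the helper names implies and iff are our own, not part of any library:

```python
# Evaluating v(A) for A = (¬p → q) ↔ (p ∨ q) under v(p) = F, v(q) = T.
def implies(a, b):
    # truth condition of a → b
    return (not a) or b

def iff(a, b):
    # truth condition of a ↔ b
    return a == b

def v_A(p, q):
    return iff(implies(not p, q), p or q)

print(v_A(False, True))  # True, matching the hand evaluation
```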
If A, B are formulas such that v(A) = v(B) for all interpretations v, then A is (logically) equivalent to B: A ≡ B
Example: ¬p ∨ q ≡ p → q, since both formulas are true in all interpretations except v(p) = T, v(q) = F, and are false for that particular interpretation
Caution : ≡ does not mean the same thing as ↔ :
A ↔ B is a formula (syntax)
A ≡ B is a relation between two formulas (semantics)
Theorem : A ≡ B if and only if A ↔ B is true in every interpretation; i.e. A ↔ B is a tautology .
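The theorem can be checked by brute force for any particular pair of formulas; here is an illustrative sketch for A = p ∧ q and B = ¬(¬p ∨ ¬q):

```python
from itertools import product

A = lambda p, q: p and q
B = lambda p, q: not ((not p) or (not q))

# A ≡ B iff A ↔ B (here: A(..) == B(..)) holds in every interpretation
equiv = all(A(p, q) == B(p, q) for p, q in product([True, False], repeat=2))
print(equiv)  # True
```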
A propositional formula A is satisfiable iff v(A) = T in some interpretation v; such an interpretation is called a model for A.
A is unsatisfiable (or contradictory) if it is false in every interpretation
A set of formulas U = {A1, A2, …, An} is satisfiable iff there exists an interpretation v such that v(A1) = v(A2) = … = v(An) = T; such an interpretation is called a model of U.
U is unsatisfiable if no such interpretation exists
Relevant properties:
If U is satisfiable, then so is U − {Ai} for any i = 1, 2, …, n
If U is satisfiable and B is valid, then U ∪ {B} is also satisfiable
If U is unsatisfiable and B is any formula, U ∪ {B} is also unsatisfiable
If U is unsatisfiable and some Ai is valid, then U − {Ai} is also unsatisfiable
A is valid (or a tautology), denoted ⊧ A, iff v(A) = T for all interpretations v
A is not valid (or falsifiable), denoted ⊭ A, if we can find some interpretation v such that v(A) = F
The relationship between validity, satisfiability, falsifiability, and unsatisfiability is captured by the theorem below.
Examples:
Valid (tautology): e.g. p ∨ ¬p
Not valid, but satisfiable: e.g. p ∨ q
False (contradiction): e.g. p ∧ ¬p
Theorem:
(a) A is valid if and only if ¬A is unsatisfiable
(b) A is satisfiable if and only if ¬A is falsifiable
Let U be a set of formulas and A a formula. A is a (logical) consequence of U , if any interpretation v which is a model of U is also a model for A :
U ⊧ A
Example:
If some interpretation v is a model for the set {p, p → q}, it must satisfy v(p) = T and v(p → q) = T; but in this interpretation, we also have v(q) = T, so {p, p → q} ⊧ q
A set of formulas T is a theory if it is closed under logical consequence. This means that, for every formula A , if T ⊧ A , then A is in T
Let U be a set of formulas. Then the set of all consequences of U,
T(U) = { A | U ⊧ A },
is called the theory of U. The formulas in U are called the axioms for the theory T(U).
INFERENCE
Several basic methods for determining whether a given set of premises propositionally entails a given conclusion
Truth Table Method
Deductive (Proof) Systems
Resolution
One way of determining whether or not a set of premises logically entails a possible conclusion is to check the truth table for the logical constants of the language
This is called the truth table method and can be formalized as follows:
Step 1: Starting with a complete truth table for the propositional constants, iterate through all the premises of the problem, for each premise eliminating any row that does not satisfy the premise
Step 2: Do the same for the conclusion
Step 3: Finally, compare the two tables; If every row that remains in the premise table, i.e. is not eliminated, also remains in the conclusion table, i.e. is not eliminated, then the premises logically entail the conclusion
Simple sentences:
Amy loves Pat: lovesAmyPat
Amy loves Quincy: lovesAmyQuincy
It is Monday: ismonday
Premises:
If Amy loves Pat, Amy loves Quincy:
lovesAmyPat → lovesAmyQuincy
If it is Monday, Amy loves Pat or Quincy:
ismonday → lovesAmyPat ∨ lovesAmyQuincy
Question:
If it is Monday, does Amy love Quincy?
i.e. is ismonday → lovesAmyQuincy entailed by the premises?


Premise table (writing P for lovesAmyPat, Q for lovesAmyQuincy, M for ismonday):

P  Q  M  |  P → Q  |  M → P ∨ Q
T  T  T  |    T    |      T
T  T  F  |    T    |      T
T  F  T  |    F    |      T
T  F  F  |    F    |      T
F  T  T  |    T    |      T
F  T  F  |    T    |      T
F  F  T  |    T    |      F
F  F  F  |    T    |      T

Eliminating every row that falsifies some premise (rows 3, 4, and 7) leaves rows 1, 2, 5, 6, and 8.


Conclusion table:

P  Q  M  |  M → Q
T  T  T  |    T
T  T  F  |    T
T  F  T  |    F
T  F  F  |    T
F  T  T  |    T
F  T  F  |    T
F  F  T  |    F
F  F  F  |    T

Eliminating every row that falsifies the conclusion (rows 3 and 7) leaves rows 1, 2, 4, 5, 6, and 8.
Finally, in order to make the determination of logical entailment, we compare the premise table and the conclusion table and notice that every row remaining in the premise table also remains in the conclusion table.
In other words, the premises logically entail the conclusion.
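The three steps of the truth table method can be mirrored in a short program; the Python encoding of the premises as functions is our own:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# p = lovesAmyPat, q = lovesAmyQuincy, m = ismonday
premises = [lambda p, q, m: implies(p, q),
            lambda p, q, m: implies(m, p or q)]
conclusion = lambda p, q, m: implies(m, q)

# keep only the rows that satisfy every premise, then check the conclusion there
surviving = [w for w in product([True, False], repeat=3)
             if all(prem(*w) for prem in premises)]
entailed = all(conclusion(*w) for w in surviving)
print(len(surviving), entailed)  # 5 True
```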
The truth table method has the merit that it is easy to understand
It is a direct implementation of the definition of logical entailment.
In practice, it is awkward to manage two tables, especially since there are simpler approaches in which only one table needs to be manipulated
Validity Checking
Unsatisfiability Checking
Approach: To determine whether a set of sentences {φ_{1},…,φ_{n}} logically entails a sentence φ, form the sentence (φ_{1} ∧…∧ φ_{n} → φ) and check that it is valid.
To see how this method works, consider the previous example and write the tentative conclusion as shown below.
(lovesAmyPat → lovesAmyQuincy) ∧ (ismonday → lovesAmyPat ∨ lovesAmyQuincy) → (ismonday → lovesAmyQuincy)
Then, form a truth table for our language with an added column for this sentence and check its satisfaction under each of the possible interpretations for our logical constants
It is almost exactly the same as the validity checking method, except that it works negatively instead of positively.
To determine whether a finite set of sentences {φ_{1},…,φ_{n}} logically entails a sentence φ, we form the sentence
(φ_{1} ∧…∧ φ_{n} ∧ ¬ φ)
and check that it is unsatisfiable .
Both the validity checking method and the satisfiability checking method require about the same amount of work as the truth table method, but they have the merit of manipulating only one table
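Both single-table checks for the running example reduce to one pass over the eight interpretations; a sketch, with helper names of our own choosing:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# for each interpretation, the truth values of premise 1, premise 2, conclusion
rows = [(implies(p, q), implies(m, p or q), implies(m, q))
        for p, q, m in product([True, False], repeat=3)]

# validity check: (P1 ∧ P2) → C is true in every row
valid = all(implies(p1 and p2, c) for p1, p2, c in rows)

# unsatisfiability check: P1 ∧ P2 ∧ ¬C is true in no row
unsat = not any(p1 and p2 and (not c) for p1, p2, c in rows)

print(valid, unsat)  # True True
```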


p  q  r | p → q | p → r | p → r ∨ q | (p → q) ∧ (p → r) → (p → r ∨ q) | p → r ∧ q | (p → q) ∧ (p → r) → (p → r ∧ q)
T  T  T |   T   |   T   |     T     |                T                |     T     |                T
T  T  F |   T   |   F   |     T     |                T                |     F     |                T
T  F  T |   F   |   T   |     T     |                T                |     F     |                T
T  F  F |   F   |   F   |     F     |                T                |     F     |                T
F  T  T |   T   |   T   |     T     |                T                |     T     |                T
F  T  F |   T   |   T   |     T     |                T                |     T     |                T
F  F  T |   T   |   T   |     T     |                T                |     T     |                T
F  F  F |   T   |   T   |     T     |                T                |     T     |                T
Semantic methods for checking logical entailment have the merit of being conceptually simple; they directly manipulate interpretations of sentences
Unfortunately, the number of interpretations of a language grows exponentially with the number of logical constants.
When the number of logical constants in a propositional language is large, the number of interpretations may be impossible to manipulate.
Deductive (proof) systems provide an alternative way of checking and communicating logical entailment that addresses this problem
In many cases, it is possible to create a “proof” of a conclusion from a set of premises that is much smaller than the truth table for the language;
Moreover, it is often possible to find such proofs with less work than is necessary to check the entire truth table
An important component in the treatment of proofs is the notion of a schema
A schema is an expression satisfying the grammatical rules of our language except for the occurrence of metavariables in place of various subparts of the expression.
For example, the following expression is a pattern with metavariables φ and ψ.
φ → (ψ → φ)
An instance of a sentence schema is the expression obtained by substituting expressions for the metavariables.
For example, the following is an instance of the preceding schema.
p →( q → p )
The basis for proof systems is the use of correct rules of inference that can be applied directly to sentences to derive conclusions that are guaranteed to be correct under all interpretations
Since the interpretations are not enumerated, time and space can often be saved
A rule of inference is a pattern of reasoning consisting of:
One set of sentence schemata, called premises , and
A second set of sentence schemata, called conclusions
A rule of inference is sound if and only if, for every instance, the premises logically entail the conclusions
Modus Ponens (MP): from φ → ψ and φ, infer ψ
Instances:
from P → (Q → R) and P, infer Q → R
from raining → wet and raining, infer wet
from (P → Q) → R and P → Q, infer R
from wet → slippery and wet, infer slippery

The implication introduction schema ( II ), together with Modus Ponens, allows us to infer implications
φ → (ψ → φ)
The implication distribution schema ( ID ) allows us to distribute one implication over another
(φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
The contradiction realization schemata ( CR ) permit us to infer a sentence if the negation of that sentence implies some sentence and its negation
(ψ → ¬ φ) → ((ψ → φ) → ¬ ψ)
(¬ ψ → ¬ φ) → ((¬ ψ → φ) → ψ)
The equivalence schemata (EQ) capture the meaning of the ↔ operator
(φ ↔ ψ) → (φ → ψ)
(φ ↔ ψ) → (ψ → φ)
(φ → ψ) → ((ψ → φ) → (ψ ↔ φ))
The meaning of the other operators in propositional logic is captured in the following axiom schemata
(φ ← ψ) ↔ (ψ → φ)
(φ ∨ ψ) ↔ (¬φ → ψ)
(φ ∧ ψ) ↔ ¬(¬φ ∨ ¬ψ)
The above axiom schemata are jointly called the standard axiom schemata for Propositional Logic
They all are valid
A proof of a conclusion from a set of premises is a sequence of sentences terminating in the conclusion in which each item is either
a premise,
an instance of an axiom schema, or
the result of applying a rule of inference to earlier items in sequence
Example:
1. p → q                                    Premise
2. q → r                                    Premise
3. (q → r) → (p → (q → r))                  II
4. p → (q → r)                              MP: 3, 2
5. (p → (q → r)) → ((p → q) → (p → r))      ID
6. (p → q) → (p → r)                        MP: 5, 4
7. p → r                                    MP: 6, 1
If there exists a proof of a sentence φ from a set Δ of premises and the standard axiom schemata using Modus Ponens, then φ is said to be provable from Δ, written as
Δ ⊢ φ
There is a close connection between provability and logical entailment (⊧):
A set of sentences Δ logically entails a sentence φ if and only if φ is provable from Δ
Soundness Theorem:
If φ is provable from Δ, then Δ logically entails φ.
Completeness Theorem:
If Δ logically entails φ, then φ is provable from Δ.
The concept of provability is important because it suggests how we can automate the determination of logical entailment
Starting from a set of premises Δ, we enumerate conclusions from this set
If a sentence φ appears, then it is provable from Δ and is, therefore, a logical consequence
If the negation of φ appears, then ¬φ is a logical consequence of Δ and φ is not logically entailed (unless Δ is inconsistent)
Note that it is possible that neither φ nor ¬φ will appear
Propositional resolution is an extremely powerful rule of inference for Propositional Logic
Using propositional resolution (without axiom schemata or other rules of inference), it is possible to build a theorem prover that is sound and complete for all of Propositional Logic
The search space using propositional resolution is much smaller than with the standard axiom schemata and rules of inference
Propositional resolution works only on expressions in clausal form
Before the rule can be applied, the premises and conclusions must be converted to this form
A clause is a set of literals which is assumed (implicitly) to be a disjunction of those literals
Example: {p, ¬q, r} stands for p ∨ ¬q ∨ r
Unit clause: a clause with only one literal; e.g. {¬q}
Clausal form of a formula: an implicit conjunction of clauses
Example: {{p, ¬q}, {¬p, r}} stands for (p ∨ ¬q) ∧ (¬p ∨ r)
Notation:
l: a literal; l^c: the complement of l
C: a clause (a set of literals)
S: a clausal form (a set of clauses)
(1) If l appears in some clause of S, but l^c does not appear in any clause, then, if we delete all clauses in S containing l, the new clausal form S' is satisfiable if and only if S is satisfiable
Example: satisfiability of {{p, q}, {p, ¬r}, {q, r}} (where p is such a literal) is equivalent to satisfiability of {{q, r}}
(2) Suppose C = {l} is a unit clause and we obtain S' from S by deleting C and l^c from all clauses that contain it; then S is satisfiable if and only if S' is satisfiable
Example: {{p}, {¬p, q}} is satisfiable iff {{q}} is satisfiable
(3) If S contains two clauses C and C', such that C is a subset of C', we can delete C' without affecting the (un)satisfiability of S
Example: {{p}, {p, q}} is satisfiable iff {{p}} is satisfiable
(4) If a clause C in S contains a pair of complementary literals l, l^c, then C can be deleted from S without affecting its (un)satisfiability
Example: {{p, ¬p, q}, {r}} is satisfiable iff {{r}} is such
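Two of these simplifications can be sketched in code, representing a literal as a (name, sign) pair and a clause as a frozenset of literals; this encoding is our own choice:

```python
def unit_propagate(s, unit):
    """(2): delete the unit clause and remove its complement from other clauses."""
    (name, sign), = unit
    comp = (name, not sign)
    return {c - {comp} for c in s if c != unit}

def drop_tautologies(s):
    """(4): delete clauses containing a complementary pair l, l^c."""
    return {c for c in s
            if not any((n, not b) in c for (n, b) in c)}

# S = {{p}, {¬p, q}, {r, ¬r}} simplifies to {{q}}
S = {frozenset({('p', True)}),
     frozenset({('p', False), ('q', True)}),
     frozenset({('r', True), ('r', False)})}
S1 = drop_tautologies(unit_propagate(S, frozenset({('p', True)})))
print(S1)  # {frozenset({('q', True)})}
```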
Theorem : Every propositional formula can be transformed into an equivalent formula in CNF
1. Eliminate implications:
φ1 → φ2 ⇒ ¬φ1 ∨ φ2
φ1 ← φ2 ⇒ φ1 ∨ ¬φ2
φ1 ↔ φ2 ⇒ (¬φ1 ∨ φ2) ∧ (φ1 ∨ ¬φ2)
2. Push negations inward:
¬¬φ ⇒ φ
¬(φ1 ∧ φ2) ⇒ ¬φ1 ∨ ¬φ2
¬(φ1 ∨ φ2) ⇒ ¬φ1 ∧ ¬φ2
3. Distribute and associate:
φ1 ∨ (φ2 ∧ φ3) ⇒ (φ1 ∨ φ2) ∧ (φ1 ∨ φ3)
(φ1 ∧ φ2) ∨ φ3 ⇒ (φ1 ∨ φ3) ∧ (φ2 ∨ φ3)
(φ1 ∨ φ2) ∨ φ3 ⇒ φ1 ∨ (φ2 ∨ φ3)
(φ1 ∧ φ2) ∧ φ3 ⇒ φ1 ∧ (φ2 ∧ φ3)
4. Rewrite into set notation:
φ1 ∨ ... ∨ φn ⇒ {φ1, ..., φn}
φ1 ∧ ... ∧ φn ⇒ {φ1}, ..., {φn}
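Steps 1 to 3 of this transformation can be sketched on a small formula AST; the tuple encoding below is an assumption of ours, not standard notation:

```python
# Formulas as nested tuples: ('var', name), ('not', f),
# ('and', f, g), ('or', f, g), ('imp', f, g), ('iff', f, g).

def elim_imp(f):
    """Step 1: rewrite → and ↔ away (← is analogous)."""
    op = f[0]
    if op == 'var':
        return f
    if op == 'not':
        return ('not', elim_imp(f[1]))
    if op == 'imp':
        return ('or', ('not', elim_imp(f[1])), elim_imp(f[2]))
    if op == 'iff':
        a, b = elim_imp(f[1]), elim_imp(f[2])
        return ('and', ('or', ('not', a), b), ('or', a, ('not', b)))
    return (op, elim_imp(f[1]), elim_imp(f[2]))   # 'and' / 'or'

def push_not(f):
    """Step 2: push negations to the atoms (double negation, De Morgan)."""
    op = f[0]
    if op == 'var':
        return f
    if op == 'not':
        g = f[1]
        if g[0] == 'var':
            return f
        if g[0] == 'not':
            return push_not(g[1])
        if g[0] == 'and':
            return ('or', push_not(('not', g[1])), push_not(('not', g[2])))
        if g[0] == 'or':
            return ('and', push_not(('not', g[1])), push_not(('not', g[2])))
    return (op, push_not(f[1]), push_not(f[2]))   # 'and' / 'or'

def distribute(f):
    """Step 3: distribute ∨ over ∧."""
    op = f[0]
    if op == 'and':
        return ('and', distribute(f[1]), distribute(f[2]))
    if op == 'or':
        a, b = distribute(f[1]), distribute(f[2])
        if a[0] == 'and':
            return ('and', distribute(('or', a[1], b)), distribute(('or', a[2], b)))
        if b[0] == 'and':
            return ('and', distribute(('or', a, b[1])), distribute(('or', a, b[2])))
        return ('or', a, b)
    return f                                      # 'var' or negated atom

def cnf(f):
    return distribute(push_not(elim_imp(f)))

p, q = ('var', 'p'), ('var', 'q')
print(cnf(('imp', p, q)))  # ('or', ('not', ('var', 'p')), ('var', 'q'))
```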
Transform the formula
(p → q) → (¬q → ¬p)
into an equivalent formula in CNF
Solution:
(p → q) → (¬q → ¬p)
⇒ ¬(¬p ∨ q) ∨ (¬¬q ∨ ¬p)
⇒ (p ∧ ¬q) ∨ (q ∨ ¬p)
⇒ (p ∨ q ∨ ¬p) ∧ (¬q ∨ q ∨ ¬p)
Suppose C1, C2 are clauses such that l ∈ C1, l^c ∈ C2. The clauses C1 and C2 are said to be clashing clauses and they clash on the complementary literals l, l^c
C, the resolvent of C1, C2, is the clause Res(C1, C2) = (C1 − {l}) ∪ (C2 − {l^c})
C1 and C2 are called the parent clauses of C.
Example:
The clauses C1 = {P, Q} and C2 = {¬P, ¬Q} clash on P, ¬P, yielding the resolvent {Q, ¬Q}
C1, C2 also clash on Q, ¬Q, so another way to find a resolvent for these two clauses is to resolve on Q, yielding {P, ¬P}
Theorem : Resolvent C is satisfiable if and only if the parent clauses C 1 ,C 2 are simultaneously satisfiable
Resolution Algorithm:
Input: S, a set of clauses
Output: “S is satisfiable” or “S is not satisfiable”
1. Set S0 := S
2. Suppose Si has already been constructed
3. To construct Si+1, choose a pair of clashing literals and clauses C1, C2 in Si (if there are any), derive C := Res(C1, C2), and set Si+1 := Si ∪ {C}
4. If C is the empty clause, output “S is not satisfiable”; if Si+1 = Si, output “S is satisfiable”
5. Otherwise, set i := i + 1 and go back to Step 2
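The loop above can be sketched as a saturation procedure; the frozenset encoding of clauses and (name, sign) encoding of literals are our own:

```python
def resolve(c1, c2):
    """All resolvents of two clauses; a literal is a (name, sign) pair."""
    res = []
    for (name, sign) in c1:
        if (name, not sign) in c2:   # clashing pair l, l^c
            res.append(frozenset((c1 - {(name, sign)}) | (c2 - {(name, not sign)})))
    return res

def satisfiable(clauses):
    s = set(clauses)
    while True:
        new = set()
        for c1 in list(s):
            for c2 in list(s):
                if c1 != c2:
                    for r in resolve(c1, c2):
                        if not r:        # empty clause derived: unsatisfiable
                            return False
                        new.add(r)
        if new <= s:                     # saturated without the empty clause
            return True
        s |= new

# S = {{¬p, q}, {¬q}, {p}}, the clausal form of ¬[(p → q) → (¬q → ¬p)]
S = [frozenset({('p', False), ('q', True)}),
     frozenset({('q', False)}),
     frozenset({('p', True)})]
print(satisfiable(S))  # False
```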
Example: Show that
(p → q) → (¬q → ¬p)
is a valid formula
Solution: We will show that
¬[(p → q) → (¬q → ¬p)]
is not satisfiable.
(1) Transform the formula into CNF: ¬[(p → q) → (¬q → ¬p)] ⇒ (¬p ∨ q) ∧ ¬q ∧ p, i.e. the clausal form S = {{¬p, q}, {¬q}, {p}}
(2) Show, using resolution, that the empty clause can be derived: Res({¬p, q}, {¬q}) = {¬p}, and Res({¬p}, {p}) = {}, the empty clause, so S is not satisfiable
A derivation of the empty clause from S is called a refutation of S
Theorem: If the set of clauses labeling the leaves of a resolution tree is satisfiable, then the clause at the root is satisfiable
Theorem (Soundness): If the empty clause is derived from a set of clauses, then the set of clauses is unsatisfiable
Theorem (Completeness): If a set of clauses is unsatisfiable, then the empty clause can be derived from it using the resolution algorithm
ILLUSTRATION BY A LARGER EXAMPLE
For each of these sets of premises, what relevant conclusion or conclusions can be drawn? Explain the rules of inference used to obtain each conclusion from the premises.
(a) “If I eat spicy foods, then I have strange dreams.” “I have strange dreams if there is thunder while I sleep.” “I did not have strange dreams.”
(b) “I am dreaming or hallucinating.” “I am not dreaming.” “If I am hallucinating, I see elephants running down the road.”
(c) “If I work, it is either sunny or partly sunny.” “I worked last Monday or I worked last Friday.” “It was not sunny on Tuesday.” “It was not partly sunny on Friday.”
(b) “I am dreaming or hallucinating.” “I am not dreaming.” “If I am hallucinating, I see elephants running down the road.”
The relevant conclusion is: “I see elephants running down the road.”
Let the primitive statements be:
d, ‘I am dreaming’
h, ‘I am hallucinating’
e, ‘I see elephants running down the road’
Then the premises are translated as: d ∨ h, ¬d, and h → e.
And the conclusion: e.
Steps  Reason 
1  d ∨ h  premise 
2  ¬d  premise 
3  h  rule of disjunctive syllogism to Steps 1 and 2 
4  h → e  premise 
5  e  Modus Ponens to Steps 4 and 3 
(c) “If I work, it is either sunny or partly sunny.” “I worked last Monday or I worked last Friday.” “It was not sunny on Tuesday.” “It was not partly sunny on Friday.”
There is no single relevant conclusion in this problem; its main difficulty is to represent the premises so that one is able to infer anything at all. One possible relevant conclusion is: “It was sunny or partly sunny last Monday or it was sunny last Friday.”
Let the primitive statements be:
wm, ‘I worked last Monday’
wf , ‘I worked last Friday’
sm, ‘It was sunny last Monday’
st, ‘It was sunny last Tuesday’
sf , ‘It was sunny last Friday’
pm, ‘It was partly sunny last Monday’
pf , ‘It was partly sunny last Friday’
Then the premises are translated as: wm ∨ wf , wm → (sm ∨ pm), wf → (sf ∨ pf ), ¬st, and ¬pf .
And the conclusion: sf ∨ sm ∨ pm.
Steps  Reason  
1  wf → (sf ∨ pf )  premise 
2  ¬wf ∨ sf ∨ pf  expression for implication 
3  ¬pf → (¬wf ∨ sf )  expression for implication 
4  ¬pf  premise 
5  ¬wf ∨ sf  modus ponens to Steps 3 and 4 
6  wf → sf  expression for implication 
7  wm ∨ wf  premise 
8  ¬wm → wf  expression for implication 
9  ¬wm → sf  rule of syllogism to Steps 8 and 6 
10  wm ∨ sf  expression for implication 
11  ¬sf → wm  expression for implication 
12  wm → (sm ∨ pm)  premise 
13  ¬sf → (sm ∨ pm)  rule of syllogism to Steps 11 and 12 
14  sf ∨ sm ∨ pm  expression for implication. 
Steps  Reason  
1  wf → (sf ∨ pf )  premise 
2  ¬wf ∨ sf ∨ pf  expression for implication 
3  ¬pf  premise 
4  ¬wf ∨ sf  rule of resolution to Steps 2 and 3 
5  wm ∨ wf  premise 
6  wm ∨ sf  rule of resolution to Steps 4 and 5 
7  wm → (sm ∨ pm)  premise 
8  ¬wm ∨ sm ∨ pm  expression for implication 
9  sf ∨ sm ∨ pm  rule of resolution to Steps 6 and 8 
EXTENSIONS
Propositional logic is not adequate for formalizing valid arguments that rely on the internal structure of the propositions involved
In propositional logic the smallest atoms represent whole propositions (propositions are atomic)
Propositional logic does not capture the internal structure of the propositions
It is not possible to work with units smaller than a proposition
Example:
“A Mercedes Benz is a Car” and “A car drives” are two individual, unrelated propositions
We cannot conclude “A Mercedes Benz drives”
It is possible to represent everything you want in propositional logic
But often this is not very efficient
Basic idea: A proposition is expressed as a predicate about (one or more) objects in the world
Propositions are predicates and arguments
I.e. Car(Mercedes Benz).
The most immediate way to develop a more complex logical calculus is to introduce rules that are sensitive to more fine-grained details of the sentences being used
When the atomic sentences of propositional logic are broken up into terms, variables, predicates, and quantifiers, they yield first-order logic, which keeps all the rules of propositional logic and adds some new ones
SUMMARY
Propositional logic is one of the simplest and most common logics and is the core of (almost) all other logics
Propositional logic commits only to the existence of facts that may or may not be the case in the world being represented
Propositional logic quickly becomes impractical, even for very small worlds
This lecture focused on three core aspects of the propositional logic:
Syntax: Vocabulary for expressing concepts without ambiguity
Semantics: Connection to what we're reasoning about
Interpretation: what the syntax means
Reasoning: How to prove things
What steps are allowed
REFERENCES
Mandatory Reading:
First-Order Logic and Automated Theorem Proving (2nd edition) by Melvin Fitting
Further Reading:
Mathematical Logic for Computer Science (2nd edition) by Mordechai Ben-Ari
http://www.springer.com/computer/foundations/book/9781852333195
Propositional Logic at The Internet Encyclopedia of Philosophy
http://www.iep.utm.edu/p/proplog.htm
Wikipedia links:
http://en.wikipedia.org/wiki/Propositional_calculus

Questions?