Strongest postconditions

Strongest postconditions generate verification conditions for checking the validity of Floyd-Hoare triples by starting at the precondition and moving forward through the program, just like an ordinary program execution.

Definition

The strongest postcondition of a program statement S and a precondition P is the predicate SP describing the smallest possible set of states such that the Floyd-Hoare triple

{ P } S { SP }

is valid for partial correctness. In other words, whenever { P } S { Q } is valid for partial correctness, then SP implies Q.

Notice that strongest postconditions on their own only make sense for partial correctness. For example, how should we choose SP such that { true } assert false { SP } is valid? There is no SP such that the triple is valid for total correctness. For partial correctness, we have SP = false and, in fact, every postcondition leads to a triple that is valid for partial correctness.

Intuition

The strongest postcondition of a program S and a precondition P is the predicate that describes all final states in which program S can terminate when started on states that satisfy P. It thus corresponds to symbolically executing the program, that is, we run the program by modifying sets of states - described by predicates - instead of single states.

When determining strongest postconditions, we thus aim to modify the precondition such that it captures the changes made to the program states by the executed program.

Computation

Similarly to weakest preconditions, we can compute strongest postconditions by recursion on the program structure using the following definition:

S                       SP(P, S)
==================================================================
var x                   exists x :: P
assert R                P && R
assume R                P && R
x := a                  exists x0 :: P[x / x0] && x == a[x / x0]
S1; S2                  SP(SP(P, S1), S2)
S1 [] S2                SP(P, S1) || SP(P, S2)

Here, P[x / x0] is the predicate P in which every (free) occurrence of x has been substituted by x0.
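
To make the recursion concrete, the following Python sketch implements the table above, representing predicates as z3 expressions. It assumes the z3-solver package; the tuple encoding of statements and the helper names sp and fresh are illustrative choices, not taken from any particular verification tool.

# A minimal sketch of the recursive SP computation, representing predicates
# as z3 expressions. Assumes the z3-solver package; the tuple encoding of
# statements below is an illustrative choice.
from z3 import Int, And, Or, Exists, Tactic, substitute

counter = 0

def fresh(x):
    # Fresh copy of variable x, standing for its original value.
    global counter
    counter += 1
    return Int(f"{x}_{counter}")

def sp(P, stmt):
    kind = stmt[0]
    if kind == "var":                     # var x
        _, x = stmt
        return Exists([x], P)             # forget everything about x
    if kind in ("assert", "assume"):      # assert R / assume R
        _, R = stmt
        return And(P, R)                  # identical for partial correctness
    if kind == "assign":                  # x := a
        _, x, a = stmt
        x0 = fresh(x)                     # x0 is the old value of x
        return Exists([x0], And(substitute(P, (x, x0)),
                                x == substitute(a, (x, x0))))
    if kind == "seq":                     # S1; S2
        _, S1, S2 = stmt
        return sp(sp(P, S1), S2)
    if kind == "choice":                  # S1 [] S2
        _, S1, S2 = stmt
        return Or(sp(P, S1), sp(P, S2))
    raise ValueError(f"unknown statement: {kind}")

x = Int("x")
post = sp(x == 4, ("assign", x, x + 7))
print(post)                  # Exists(x_1, And(x_1 == 4, x == x_1 + 7))
print(Tactic("qe")(post))    # quantifier elimination simplifies to x == 11

The final two lines reproduce the assignment example that we work out by hand further below.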

As for weakest preconditions, we briefly go over the above rules for computing strongest postconditions.

Variable declarations

The variable declaration var x declares x as an uninitialized (integer-valued) variable named x. All information about the value of any variable named x that might have existed before is forgotten. This is reflected by an operational rule that nondeterministically assigns an arbitrary integer to x.

Now, assume that precondition P holds for the initial state. What are the final states that we can reach by running var x on this state? Exactly those states that agree with the initial state except that x now holds some (nondeterministically chosen) integer. Since the old value of x is forgotten, any information P carries about x must be discarded, which is what the existential quantifier achieves. Hence, the strongest postcondition is SP(P, var x) ::= exists x :: P.
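
For example, applying this rule to the precondition x == 4 yields

SP(x == 4, var x)
= (by definition)
exists x :: x == 4
= (simplification)
true

which reflects that nothing is known about the value of x after the declaration.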

Assignments

The assignment x := a evaluates expression a in the current (initial) program state and assigns the resulting value to variable x. This is reflected by an operational rule that updates the value of x accordingly. As with the variable declaration, only variable x is potentially changed by the assignment.

Now, assume that precondition P holds for the initial state. What are the possible final states that we can reach by executing x := a on it?

A first guess might be to define SP(P, x := a) ::= P && x == a, that is, P should still hold and after the assignment x equals a. However, the following triples are not valid:

  • { x == 4 } x := 7 { x == 4 && x == 7 }
  • { true } x := x + 1 { true && x == x + 1 }

Hence, the above proposal is not necessarily correct if P or a depends on x. The underlying issue is that both P and a are originally evaluated in initial states, but the strongest postcondition talks about final states. The variable x is the only variable that might have a different value before and after executing the assignment.

To define the correct strongest postcondition, we introduce an auxiliary variable, say x0, that represents the original value of x in some initial state. The predicate P[x / x0] then expresses that precondition P was true in that initial state; similarly, a[x / x0] refers to the evaluation of a in the same initial state. Put together, we keep the shape of the original proposal but evaluate both P and a with respect to the original value x0 of x, which we quantify existentially.

Hence, the strongest postcondition of assignments is defined as SP(P, x := a) ::= exists x0 :: P[x / x0] && x == a[x / x0].

As an example, let us consider the precondition x == 4 and the assignment x := x + 7:

SP(x == 4, x := x + 7)
= (by definition)
exists x0 :: x0 == 4 && x == x0 + 7
= (simplification)
x == 11

Indeed, x will be 11 after running x := x + 7 on a state where x is initially 4.

Assumptions

The assumption assume R is a somewhat strange statement because we cannot execute it on an actual machine; it is a verification-specific statement, similar to making an assumption in a mathematical proof. Intuitively, assume R checks whether the predicate R holds in the current (initial) state: there is no effect if the predicate holds; if it does not hold, we reach a "magical" configuration, called magic, in which anything goes. This is reflected by two operational rules, one for each case.

Now, assume that precondition P holds for the initial state. We do not have to consider the configuration magic; executions that move to magic cannot invalidate a triple anyway.

What are the possible final states that we can reach by executing assume R on such a state?

Since we do not have to consider executions that move to magic, we know that the final state satisfies R. Moreover, since the assumption does not change the state, P also holds in the final state.

Hence, the strongest postcondition of assumptions is defined as SP(P, assume R) ::= P && R.
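
For example, SP(x > 0, assume x < 10) = x > 0 && x < 10, which describes exactly those states in which x lies strictly between 0 and 10.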

Assertions

The assertion assert R checks whether the predicate R holds in the current (initial) state and causes a runtime error if this is not the case; otherwise, it has no effect. Since we reason about partial correctness with strongest postconditions, it does not matter whether an execution reaches an error configuration or the magical configuration: neither contributes any final states.

The strongest postcondition of the assertion assert R thus does not differ from the strongest postcondition of the assumption assume R. That is, SP(P, assert R) ::= P && R.
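
For example, SP(x == 4, assert x > 0) = x == 4 && x > 0, which simplifies to x == 4 because the assertion succeeds in every state that satisfies the precondition.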

Sequential composition

The sequential composition S1; S2 first executes program S1 and then program S2. This is reflected by four operational rules: the first rule keeps executing S1; termination of S1 is handled by the second rule, which moves on to executing S2; the last two rules propagate the case that we have already encountered a runtime error or a wrong assumption.

What are the possible final states that we can reach when executing S1; S2 on some initial state that satisfies precondition P?

By the inductive definition of strongest postconditions, we can assume that we already know how to construct strongest postconditions for S1 and S2, namely SP(P, S1) and SP(SP(P, S1), S2).

SP(P, S1) then describes the set of all final states we can reach after running S1 on a state satisfying P. We then use this predicate as the precondition of S2. By the same argument, SP(SP(P, S1), S2) describes the set of final states we can reach after running S2 on a state satisfying SP(P, S1).

Putting both arguments together, SP(SP(P, S1), S2) describes the set of all final states reached after running S1 followed by S2 on initial states that satisfy P.

Hence, the strongest postcondition of sequential composition is defined as SP(P, S1; S2) ::= SP(SP(P, S1), S2).
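
For example, chaining two assignments and reusing the rules from above:

SP(x == 4, x := x + 7; x := 2 * x)
= (by definition)
SP(SP(x == 4, x := x + 7), x := 2 * x)
= (assignment rule and simplification, see above)
SP(x == 11, x := 2 * x)
= (assignment rule)
exists x0 :: x0 == 11 && x == 2 * x0
= (simplification)
x == 22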

Nondeterministic choice

The nondeterministic choice S1 [] S2 executes either program S1 or program S2; we do not know which one. This is reflected by two operational rules, one for running S1 and one for running S2.

If we run S1 [] S2 on an initial state that satisfies precondition P, then we end up either in a state reached by executing S1 on a state satisfying P or in a state reached by executing S2 on such a state.

Hence, the strongest postcondition of nondeterministic choice is defined as SP(P, S1 [] S2) ::= SP(P, S1) || SP(P, S2).
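
For example,

SP(x == 0, x := 1 [] x := 2)
= (by definition)
SP(x == 0, x := 1) || SP(x == 0, x := 2)
= (assignment rule and simplification)
x == 1 || x == 2

which captures that we know one of the two branches was taken, but not which one.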