Chapter Three: Introduction to Digital Design
Logic circuits are the basis for modern digital computer systems. To appreciate how computer systems operate, you will need to understand digital logic and boolean algebra.
This chapter provides only a basic introduction to boolean algebra; that subject alone often fills an entire textbook. This chapter concentrates on those topics that support other chapters in this text.
Boolean logic forms the basis for computation in modern binary computer systems. You can represent any algorithm, or any electronic computer circuit, using a system of boolean equations. This chapter provides a brief introduction to boolean algebra, truth tables, canonical representation of boolean functions, boolean function simplification, logic design, and combinatorial and sequential circuits.
This material is especially important to those who want to design electronic circuits or write software that controls electronic circuits. Even if you never plan to design hardware or write software that controls hardware, the introduction to boolean algebra this chapter provides is still important, since you can use such knowledge to optimize certain complex conditional expressions within IF, WHILE, and other conditional statements.
The section on minimizing (optimizing) logic functions uses Veitch Diagrams or Karnaugh Maps. The optimizing techniques this chapter uses reduce the number of terms in a boolean function. You should realize that many people consider this optimization technique obsolete because reducing the number of terms in an equation is not as important as it once was. This chapter uses the mapping method as an example of boolean function optimization, not as a technique one would regularly employ. If you are interested in circuit design and optimization, you will need to consult a text on logic design for better techniques.
3.1 Boolean Algebra
Boolean algebra is a deductive mathematical system closed over the values zero and one (false and true). A binary operator "°" defined over this set of values accepts a pair of boolean inputs and produces a single boolean value. For example, the boolean AND operator accepts two boolean inputs and produces a single boolean output (the logical AND of the two inputs).
For any given algebra system, there are some initial assumptions, or postulates, that the system follows. You can deduce additional rules, theorems, and other properties of the system from this basic set of postulates. Boolean algebra systems often employ the following postulates:
 Closure. The boolean system is closed with respect to a binary operator if for every pair of boolean values, it produces a boolean result. For example, logical AND is closed in the boolean system because it accepts only boolean operands and produces only boolean results.
 Commutativity. A binary operator "°" is said to be commutative if A°B = B°A for all possible boolean values A and B.
 Associativity. A binary operator "°" is said to be associative if
 (A ° B) ° C = A ° (B ° C)
 for all boolean values A, B, and C.
 Distribution. Two binary operators "°" and "%" are distributive if
 A °(B % C) = (A ° B) % (A ° C)
 for all boolean values A, B, and C.
 Identity. A boolean value I is said to be the identity element with respect to some binary operator "°" if A ° I = A for all boolean values A.
 Inverse. A boolean value I is said to be the inverse element with respect to some binary operator "°" if A ° I = B and B ≠ A (i.e., B is the opposite value of A in a boolean system) for all boolean values A and B.
For our purposes, we will base boolean algebra on the following set of operators and values:
The two possible values in the boolean system are zero and one. Often we will call these values false and true (respectively).
The symbol "" represents the logical AND operation; e.g., A B is the result of logically ANDing the boolean values A and B. When using single letter variable names, this text will drop the "" symbol; Therefore, AB also represents the logical AND of the variables A and B (we will also call this the product of A and B).
The symbol "+" represents the logical OR operation; e.g., A + B is the result of logically ORing the boolean values A and B. (We will also call this the sum of A and B.)
Logical complement, negation, or not, is a unary operator. This text will use the (') symbol to denote logical negation. For example, A' denotes the logical NOT of A.
If several different operators appear in a single boolean expression, the result of the expression depends on the precedence of the operators. We'll use the following precedences (from highest to lowest) for the boolean operators: parentheses, logical NOT, logical AND, then logical OR. The logical AND and OR operators are left associative; if two operators with the same precedence are adjacent, you must evaluate them from left to right. The logical NOT operation is right associative, although it would produce the same result using left or right associativity since it is a unary operator.
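These precedence rules can be illustrated in code. The sketch below (not part of the text; the function names are my own) evaluates the expression A + B•C' using plain Python integers 0 and 1, applying NOT first, then AND, then OR:

```python
# Illustrative sketch: evaluating A + B*C' under the precedence rules
# above (NOT binds tightest, then AND, then OR), with 0/1 integers.

def NOT(x):
    return 1 - x

def AND(x, y):
    return x & y

def OR(x, y):
    return x | y

def f(a, b, c):
    # A + B*C' : apply NOT to C first, then AND with B, then OR with A.
    return OR(a, AND(b, NOT(c)))

# With A=0, B=1, C=0: C' = 1, B*C' = 1, A + 1 = 1.
print(f(0, 1, 0))  # 1

# Misreading the precedence as (A + B)*C' yields a different function;
# with A=1, B=1, C=1 the two readings disagree:
print(f(1, 1, 1))             # 1  (correct reading: A + B*C')
print(AND(OR(1, 1), NOT(1)))  # 0  (misreading: (A + B)*C')
```

The disagreement on the last two lines shows why the precedence rules matter: the same string of symbols denotes two different functions under the two groupings.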
We will also use the following set of postulates:
 P1 Boolean algebra is closed under the AND, OR, and NOT operations.
 P2 The identity element with respect to • is one and with respect to + is zero. There is no identity element with respect to logical NOT.
 P3 The • and + operators are commutative.
 P4 • and + are distributive with respect to one another. That is, A • (B + C) = (A • B) + (A • C) and A + (B • C) = (A + B) • (A + C).
 P5 For every value A there exists a value A' such that A•A' = 0 and A+A' = 1. This value is the logical complement (or NOT) of A.
 P6 • and + are both associative. That is, (A•B)•C = A•(B•C) and (A+B)+C = A+(B+C).
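Because the boolean system contains only the values zero and one, each of these postulates can be checked exhaustively. The following sketch (not from the text) verifies P2 through P6 over all input combinations, using Python's bitwise `&` and `|` for AND and OR:

```python
# Exhaustive check of postulates P2-P6 over the boolean values 0 and 1.

from itertools import product

B = (0, 1)

def check_postulates():
    for a, b, c in product(B, repeat=3):
        assert a & 1 == a and a | 0 == a              # P2: identities
        assert a & b == b & a and a | b == b | a      # P3: commutativity
        assert a & (b | c) == (a & b) | (a & c)       # P4: AND over OR
        assert a | (b & c) == (a | b) & (a | c)       # P4: OR over AND
        assert a & (1 - a) == 0 and a | (1 - a) == 1  # P5: complement
        assert (a & b) & c == a & (b & c)             # P6: AND associative
        assert (a | b) | c == a | (b | c)             # P6: OR associative
    return True

print(check_postulates())  # True
```

Exhaustive checking works here precisely because the system is closed over just two values; eight (a, b, c) combinations cover every case.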
You can prove all other theorems in boolean algebra using these postulates. This text will not go into the formal proofs of these theorems; however, it is a good idea to familiarize yourself with some important theorems in boolean algebra. A sampling includes:
 Th1: A + A = A
 Th2: A • A = A
 Th3: A + 0 = A
 Th4: A • 1 = A
 Th5: A • 0 = 0
 Th6: A + 1 = 1
 Th7: (A + B)' = A' • B'
 Th8: (A • B)' = A' + B'
 Th9: A + AB = A
 Th10: A • (A + B) = A
 Th11: A + A'B = A + B
 Th12: A' • (A + B') = A'B'
 Th13: AB + AB' = A
 Th14: (A' + B') • (A' + B) = A'
 Th15: A + A' = 1
 Th16: A • A' = 0
Theorems seven and eight above are known as DeMorgan's Theorems after the mathematician who discovered them.
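Like the postulates, DeMorgan's Theorems can be confirmed by brute force over the four possible input combinations. A quick sketch (illustrative, not part of the text):

```python
# Exhaustive verification of DeMorgan's Theorems (Th7 and Th8).

def NOT(x):
    return 1 - x

ok = all(
    NOT(a | b) == NOT(a) & NOT(b) and  # Th7: (A + B)' = A'*B'
    NOT(a & b) == NOT(a) | NOT(b)      # Th8: (A*B)' = A' + B'
    for a in (0, 1) for b in (0, 1)
)
print(ok)  # True
```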
The theorems above appear in pairs. Each pair (e.g., Th1 & Th2, Th3 & Th4, etc.) forms a dual. An important principle in the boolean algebra system is that of duality. Any valid expression you can create using the postulates and theorems of boolean algebra remains valid if you interchange the operators and constants appearing in the expression. Specifically, if you exchange the • and + operators and swap the 0 and 1 values in an expression, you will wind up with an expression that obeys all the rules of boolean algebra. This does not mean the dual expression computes the same values; it only means that both expressions are legal in the boolean algebra system. Therefore, this is an easy way to generate a second theorem for any fact you prove in the boolean algebra system.
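The dual of an expression can be produced mechanically. The sketch below (my own illustration, with "*" standing in for the • symbol) swaps each operator and constant; note that the input must be fully parenthesized, because swapping AND and OR also swaps their precedence:

```python
# Computing the dual of a (fully parenthesized) boolean expression by
# swapping AND with OR and 0 with 1. "*" stands in for the AND symbol.

def dual(expr):
    swap = {"*": "+", "+": "*", "0": "1", "1": "0"}
    return "".join(swap.get(ch, ch) for ch in expr)

# Th9 is A + AB = A; its dual is Th10, A*(A + B) = A.
print(dual("(A + (A*B))"))  # (A * (A+B))

# Th6 is A + 1 = 1; its dual is Th5, A*0 = 0.
print(dual("A + 1"))        # A * 0
```

Applying `dual` to the left-hand side of each odd-numbered theorem above yields the left-hand side of its even-numbered partner.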
Although we will not be proving any theorems for the sake of boolean algebra in this text, we will use these theorems to show that two boolean equations are identical. This is an important operation when attempting to produce canonical representations of a boolean expression or when simplifying a boolean expression.
3.2 Boolean Functions and Truth Tables
A boolean expression is a sequence of zeros, ones, and literals separated by boolean operators. A literal is a primed (negated) or unprimed variable name. For our purposes, all variable names will be a single alphabetic character. A boolean function is a specific boolean expression; we will generally give boolean functions the name F with a possible subscript. For example, consider the following boolean function:
F0 = AB + C

This function computes the logical AND of A and B and then logically ORs this result with C. If A = 1, B = 0, and C = 1, then F0 returns the value one (1•0 + 1 = 1).
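This evaluation is easy to mirror in code. A minimal sketch (not from the text) of F0 using Python's bitwise operators:

```python
# A sketch of F0 = AB + C: AND A with B, then OR the result with C.

def F0(a, b, c):
    return (a & b) | c

print(F0(1, 0, 1))  # 1, since 1*0 + 1 = 1
print(F0(0, 0, 0))  # 0
```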
Another way to represent a boolean function is via a truth table. A previous chapter (see "Logical Operations on Bits" on page 65) used truth tables to represent the AND and OR functions. Those truth tables took the forms:
Table 13: AND Truth Table

    AND | 0 | 1
    ----+---+---
     0  | 0 | 0
     1  | 0 | 1

Table 14: OR Truth Table

    OR | 0 | 1
    ---+---+---
     0 | 0 | 1
     1 | 1 | 1

For binary operators and two input variables, this form of a truth table is very natural and convenient. However, reconsider the boolean function F0 above. That function has three input variables, not two. Therefore, one cannot use the truth table format given above. Fortunately, it is still very easy to construct truth tables for three or more variables. The following example shows one way to do this for functions of three or four variables:
In the truth tables above, the four columns represent the four possible combinations of zeros and ones for A & B (B is the H.O. or leftmost bit, A is the L.O. or rightmost bit). Likewise the four rows in the second truth table above represent the four possible combinations of zeros and ones for the C and D variables. As before, D is the H.O. bit and C is the L.O. bit.
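One convenient way to build such a table is to count through all the input combinations, extracting each variable from the bits of the counter. The sketch below (illustrative, not part of the text) prints the truth table for F0 = AB + C, with C as the H.O. bit and A as the L.O. bit:

```python
# Building the truth table for the three-variable function F0 = AB + C
# by enumerating 0..7 and extracting the bits: A is the L.O. bit,
# C is the H.O. bit.

def F0(a, b, c):
    return (a & b) | c

print("C B A | F0")
for n in range(8):
    a = n & 1          # L.O. bit
    b = (n >> 1) & 1
    c = (n >> 2) & 1   # H.O. bit
    print(c, b, a, "|", F0(a, b, c))
```

The same loop extends to four or more variables by widening the range and extracting additional bits.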
The following table shows another way to represent truth tables. This form has two advantages over the forms above: it is easier to fill in the table, and it provides a compact representation for two or more functions.
Note that the truth table above provides the values for three separate functions of three variables.
Although you can create an infinite variety of boolean functions, they are not all unique. For example, F = A and F = AA are two different functions. By theorem two, however, it is easy to show that these two functions are equivalent; that is, they produce exactly the same outputs for all input combinations. If you fix the number of input variables, there are a finite number of unique boolean functions possible. For example, there are only 16 unique boolean functions with two inputs and there are only 256 possible boolean functions of three input variables. Given n input variables, there are 2**(2**n) (two raised to the two raised to the nth power)^{1} unique boolean functions of those n input values. For two input variables, 2**(2**2) = 2**4 or 16 different functions. With three input variables there are 2**(2**3) = 2**8 or 256 possible functions. Four input variables create 2**(2**4) or 2**16, or 65,536 different unique boolean functions.
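These counts are easy to check directly, since Python's `**` operator is the same exponentiation the footnote describes:

```python
# Counting the unique boolean functions of n input variables: 2**(2**n).

for n in (2, 3, 4):
    print(n, 2 ** (2 ** n))
# 2 16
# 3 256
# 4 65536
```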
When dealing with only 16 boolean functions, it's easy enough to name each function. The following table lists the 16 possible boolean functions of two input variables along with some common names for those functions:
Beyond two input variables there are too many functions to provide specific names. Therefore, we will refer to the function's number rather than the function's name. For example, F8 denotes the logical AND of A and B for a two-input function and F14 is the logical OR operation. Of course, the only problem is to determine a function's number. This number is easy to compute by looking at the truth table for the function. For example, given the function of three variables F = AB + C, what is the corresponding function number? If we treat the values for A, B, and C as bits in a binary number with C being the H.O. bit and A being the L.O. bit, they produce the binary numbers in the range zero through seven. Associated with each of these binary strings is a zero or one function result. If we construct a binary value by placing the function result in the bit position specified by A, B, and C, the resulting binary number is that function's number. Consider the truth table for F = AB + C:
    CBA:       7 6 5 4 3 2 1 0
    F = AB+C:  1 1 1 1 1 0 0 0

If we treat the function values for F as a binary number, this produces the value F8 (hexadecimal) or 248 (decimal). We will usually denote function numbers in decimal.
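The procedure just described translates directly into code. This sketch (my own illustration) places each function result in the bit position given by the CBA input value and recovers the function number 248:

```python
# Computing the function number for F = AB + C: place each result bit
# at the bit position specified by the CBA input value (0..7).

def F(a, b, c):
    return (a & b) | c

number = 0
for n in range(8):
    a = n & 1          # A is the L.O. bit of the input value
    b = (n >> 1) & 1
    c = (n >> 2) & 1   # C is the H.O. bit
    number |= F(a, b, c) << n   # result bit goes to position CBA = n

print(number, hex(number))  # 248 0xf8
```

The same loop, widened to 2**n input combinations, numbers any function of n variables.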
This also provides the insight into why there are 2**(2**n) different functions of n variables: if you have n input variables, there are 2**n bits in the function's number. If you have m bits, there are 2**m different values. Therefore, for n input variables there are m = 2**n possible bits and 2**m or 2**(2**n) possible functions.
^{1}In this context, the operator "**" means exponentiation.
