Lecture Notes for Digital Electronics
Raymond E. Frey
Physics Department
University of Oregon
Eugene, OR 97403, USA
rayfrey@cosmic.uoregon.edu
March, 2000

1 Basic Digital Concepts

Continuous analog signals can be converted into a finite number of discrete states, a process called digitization. To the extent that the states are sufficiently well separated that noise does not create errors, the resulting digital signals allow the following (slightly idealized):

  • storage over arbitrary periods of time
  • flawless retrieval and reproduction of the stored information
  • flawless transmission of the information

Some information is intrinsically digital, so it is natural to process and manipulate it using purely digital techniques. Examples are numbers and words. The drawback to digitization is that a single analog signal (e.g. a voltage which is a function of time, like a stereo signal) needs many discrete states, or bits, in order to give a satisfactory reproduction. For example, it requires a minimum of 10 bits to determine a voltage at any given time to an accuracy of ≈ 0.1%. For transmission, one now requires 10 lines instead of the one original analog line.

The explosion in digital techniques and technology has been made possible by the incredible increase in the density of digital circuitry, its robust performance, its relatively low cost, and its speed. The requirement of using many bits in reproduction is no longer an issue: the more the better. This circuitry is based upon the transistor, which can be operated as a switch with two states. Hence, the digital information is intrinsically binary. So in practice, the terms digital and binary are used interchangeably. In the following sections we summarize some conventions for defining the binary states and for doing binary arithmetic.
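As a quick check of the 10-bit figure: 0.1% accuracy requires at least 1/0.001 = 1000 distinct levels, and 2^10 = 1024 ≥ 1000. The same arithmetic as a short Python sketch (the helper name is my own):

    import math

    # Number of bits needed to resolve a value to a given relative accuracy:
    # we need at least 1/accuracy distinct levels, hence ceil(log2(levels)) bits.
    def bits_needed(relative_accuracy):
        levels = 1.0 / relative_accuracy
        return math.ceil(math.log2(levels))

    print(bits_needed(0.001))   # -> 10, matching the estimate above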

1.1 Binary Logic States

The following table attempts to make correspondences between conventions for defining binary logic states. In the case of the TTL logic gates we will be using in the lab, the Low voltage state is roughly 0–1 Volt and the High state is roughly 2.5–5 Volts. See page 475 of the text for the exact conventions for TTL as well as other hardware gate technologies.

Boolean Logic   Boolean Algebra   Voltage State     Voltage State
                                  (positive true)   (negative true)
True (T)        1                 High (H)          Low (L)
False (F)       0                 Low (L)           High (H)

The convention for naming these states is illustrated in Fig. 1, which shows the “positive true” case. The relationship between the logic state and the label (in this case “switch open”) at some point in the circuit can be summarized as follows: the labelled voltage is High (Low) when the label’s stated function is True (False). In the figure, the stated function is certainly true (switch open), and this does correspond to a high voltage at the labelled point. (Recall that with the switch open, Ohm’s Law implies that with zero current, the voltage difference across the “pull-up” resistor is zero, so that the labelled point sits at the positive supply voltage, i.e. High.)

Yet another convention is Gray code. You have a homework problem to practice this. This is less commonly used.

1.2.1 Representation of Negative Numbers

There are two commonly used conventions for representing negative numbers. With sign magnitude, the MSB is used to flag a negative number. So for example with 4-bit numbers we would have 0011 = 3 and 1011 = −3. This is simple to see, but is not good for doing arithmetic. With 2’s complement, negative numbers are designed so that the sum of a number and its 2’s complement is zero. Using the 4-bit example again, we have 0101 = 5 and its 2’s complement −5 = 1011. Adding (remember to carry) gives 10000 = 0. (The 5th bit doesn’t count!) Both addition and multiplication work as you would expect using 2’s complement. There are two methods for forming the 2’s complement:

  1. Make the transformation 0 → 1 and 1 → 0, then add 1.
  2. Add the appropriate positive number to $-2^{N-1}$, the (negative) weight carried by the MSB of an $N$-bit word, to get the number you want. For 4-bit numbers, an example of finding the 2's complement of 5 is $-5 = -8 + 3 = 1000 + 0011 = 1011$.
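Both methods can be written out for N-bit words; the following Python sketch (function names are my own) reproduces the example above:

    N = 4   # word length in bits

    def twos_complement_invert_add(x, n=N):
        """Method 1: flip every bit (0 -> 1, 1 -> 0), then add 1."""
        flipped = (~x) & ((1 << n) - 1)       # bitwise NOT, masked to n bits
        return (flipped + 1) & ((1 << n) - 1)

    def twos_complement_offset(x, n=N):
        """Method 2: write -x as -2^(n-1) plus a positive remainder; the
        resulting bit pattern is just 2^n - x, i.e. (-x) modulo 2^n."""
        return (-x) % (1 << n)

    print(format(twos_complement_invert_add(5), "04b"))   # 1011
    print(format(twos_complement_offset(5), "04b"))       # 1011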

1.2.2 Hexadecimal Representation

It is very often quite useful to represent blocks of 4 bits by a single digit. Thus in base 16 there is a convention for using one digit for the numbers 0, 1, 2, ..., 15, which is called hexadecimal. It follows decimal for 0–9, then uses letters A–F.

Decimal   Binary   Hex
0         0000     0
1         0001     1
2         0010     2
3         0011     3
4         0100     4
5         0101     5
6         0110     6
7         0111     7
8         1000     8
9         1001     9
10        1010     A
11        1011     B
12        1100     C
13        1101     D
14        1110     E
15        1111     F
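A small illustrative Python helper (the name is my own) that performs exactly this grouping of 4-bit blocks into hexadecimal digits:

    # Group a binary string into 4-bit blocks (nibbles) and write each block
    # as one hexadecimal digit.
    def binary_to_hex(bits):
        value = int(bits, 2)
        n_nibbles = (len(bits) + 3) // 4
        return format(value, "0{}X".format(n_nibbles))

    print(binary_to_hex("10101100"))   # "AC": 1010 -> A, 1100 -> C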

2 Logic Gates and Combinational Logic

2.1 Gate Types and Truth Tables

The basic logic gates are AND, OR, NAND, NOR, XOR, INV, and BUF. The last two are not standard terms; they stand for “inverter” and “buffer”, respectively. The symbols for these gates and their corresponding Boolean expressions are given in Table 8.2 of the text which, for convenience, is reproduced (in part) in Fig. 2.

Figure 2: Table 8.2 from the text.

All of the logical gate functions, as well as the Boolean relations discussed in the next section, follow from the truth tables for the AND and OR gates. We reproduce these below. We also show the XOR truth table, because it comes up quite often, although, as we shall see, it is not elemental.

2.2 Boolean Algebra and DeMorgan’s Theorems

Boolean algebra can be used to formalize the combinations of binary logic states. The fundamental relations are given in Table 8.3 of the text. In these relations, A and B are binary quantities, that is, they can be either logical true (T or 1) or logical false (F or 0). Most of these relations are obvious. Here are a few of them:

$AA = A$ ;  $A + A = A$ ;  $A + \overline{A} = 1$ ;  $A\overline{A} = 0$ ;  $\overline{\overline{A}} = A$

Recall that the text sometimes uses an apostrophe for inversion (A′). We use the standard overbar notation ($\overline{A}$). We can use algebraic expressions to complete our definitions of the basic logic gates we began above. Note that the Boolean operations of “multiplication” and “addition” are defined by the truth tables for the AND and OR gates given above in Figs. 3 and 4. Using these definitions, we can define all of the logic gates algebraically. The truth tables can also be constructed from these relations, if necessary. See Fig. 2 for the gate symbols.

  • AND: Q = AB (see Fig. 3)
  • OR: Q = A + B (see Fig. 4)
  • NAND: Q = $\overline{AB}$
  • NOR: Q = $\overline{A + B}$
  • XOR: Q = A ⊕ B (defined by truth table Fig. 5)
  • INV: Q = $\overline{A}$
  • BUF: Q = A
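For reference, the same definitions can be written as functions of 0/1 values; this Python sketch (function names are my own) simply mirrors the algebraic forms above:

    # The basic gates as functions of 0/1 inputs.
    def AND(a, b):  return a & b               # Q = AB
    def OR(a, b):   return a | b               # Q = A + B
    def INV(a):     return 1 - a               # Q = NOT A
    def NAND(a, b): return INV(AND(a, b))      # Q = NOT (AB)
    def NOR(a, b):  return INV(OR(a, b))       # Q = NOT (A + B)
    def XOR(a, b):  return a ^ b               # Q = A xor B
    def BUF(a):     return a                   # Q = A

    # Truth tables for AND and OR (compare Figs. 3 and 4):
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, AND(a, b), OR(a, b))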

2.2.1 Example: Combining Gates

Let’s re-express the XOR operation in terms of standard Boolean operations. The following truth table evaluates the expression $Q = \overline{A}B + A\overline{B}$.

A   B   $\overline{A}B$   $A\overline{B}$   Q
0   0   0                 0                 0
1   0   0                 1                 1
0   1   1                 0                 1
1   1   0                 0                 0

We see that this truth table is identical to the one for the XOR operation. Therefore, we can write

$A \oplus B = \overline{A}B + A\overline{B}$     (1)

A schematic of this expression in terms of gates is given in Fig. 6 (as well as Fig. 8.25 of the text). Recall that the open circles at the output or input of a gate represent inversion.


Figure 6: Realization of the XOR gate in terms of AND and OR gates.

2.2.2 Gate Interchangeability

In an example from the homework, we can make an INV gate from a 2-input NOR gate: simply connect the two inputs of the NOR gate together. Algebraically, if the two original NOR gate inputs are labelled B and C, and they are combined to form A, then we have $Q = \overline{B + C} = \overline{A + A} = \overline{A}$, which is the INV operation. Note that an INV gate cannot be made from OR or AND gates. For this reason the OR and AND gates are not universal. So, for example, no combination of AND gates can be combined to substitute for a NOR gate. However, the NAND and NOR gates are universal.
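A one-line check of the inverter-from-NOR construction, using the same 0/1 convention (a sketch with illustrative names):

    # Gate interchangeability: an inverter made from a 2-input NOR gate with
    # both inputs tied together, as in the homework example above.
    def NOR(a, b):
        return 1 - (a | b)

    def INV_from_NOR(a):
        return NOR(a, a)   # Q = NOT (A + A) = NOT A

    print([INV_from_NOR(a) for a in (0, 1)])   # [1, 0]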

2.2.3 DeMorgan

Perhaps the most interesting of the Boolean identities are the two known as DeMorgan’s Theorems:

$\overline{A + B} = \overline{A}\;\overline{B}$   (or, $A + B = \overline{\overline{A}\;\overline{B}}$)     (2)

$\overline{AB} = \overline{A} + \overline{B}$   (or, $AB = \overline{\overline{A} + \overline{B}}$)     (3)

These expressions turn out to be quite useful, and we shall use them often. An example of algebraic logic manipulation follows. It is the one mentioned at the end of Lab 1: one is to show that an XOR gate can be composed of 4 NAND gates. From the section above we know $A \oplus B = \overline{A}B + A\overline{B}$. Since $A\overline{A} = 0$ and $B\overline{B} = 0$, we can add these terms, rearrange, and apply the two DeMorgan relations to give

$A \oplus B = A(\overline{A} + \overline{B}) + B(\overline{A} + \overline{B}) = A\,\overline{AB} + B\,\overline{AB} = \overline{\left(\overline{A\,\overline{AB}}\right)\left(\overline{B\,\overline{AB}}\right)}$
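The final expression is built entirely from 2-input NAND operations — $\overline{AB}$, $\overline{A\,\overline{AB}}$, $\overline{B\,\overline{AB}}$, and the outer NAND — i.e. four NAND gates in total. A brute-force check of this construction in Python (function names are my own):

    # Verify the 4-NAND realization of XOR over all input combinations.
    def NAND(a, b):
        return 1 - (a & b)

    def xor_from_nands(a, b):
        m = NAND(a, b)                        # NOT (AB)
        return NAND(NAND(a, m), NAND(b, m))   # the outer NAND of the two inner ones

    for a in (0, 1):
        for b in (0, 1):
            assert xor_from_nands(a, b) == (a ^ b)
    print("XOR built from 4 NAND gates checks out")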

2.3 Symbolic Logic

The two DeMorgan expressions above can be invoked using gate symbols by following this prescription: change the gate shape (AND↔OR) and invert all inputs and outputs. By examining the two rightmost columns of Fig. 2, one sees that the transformation between the 3rd and 4th columns for the gates involving AND/OR works exactly in this way. For example, the DeMorgan expression $\overline{AB} = \overline{A} + \overline{B}$ is represented symbolically by the equivalence between the 3rd and 4th columns of the 2nd row (“NAND”) of Fig. 2. We will go over how this works, and some more examples, in class.

Finally, one can try a K-map (Karnaugh map) solution. The first step is to write out the truth table in the form shown below, with the input states as the headings of the rows and columns of a table and the corresponding outputs within.

Table 2: K-map of the truth table.

A\B   0   1
0     1   1
1     0   1

The steps/rules are as follows:

  1. Form the 2-dimensional table as above. Combine 2 inputs in a “gray code” way – see 2nd example below.
  2. Form groups of 1’s and circle them; the groups are rectangular and must have sides of length $2^n \times 2^m$, where $n$ and $m$ are integers 0, 1, 2, ....
  3. The groups can overlap.
  4. Write down an expression of the inputs for each group.
  5. OR together these expressions. That’s it.
  6. Groups can wrap across table edges.
  7. As before, one can alternatively form groups of 0’s to give a solution for $\overline{Q}$.
  8. The bigger the groups one can form, the better (simpler) the result.
  9. There are usually many alternative solutions, all equivalent, some better than others depending upon what one is trying to optimize.

Here is one way of doing it:

A\B   0   1
0     1   1
1     0   1

The two groups we have drawn are $\overline{A}$ (the A = 0 row) and $B$ (the B = 1 column). So the solution (as before) is:

$Q = \overline{A} + B$

2.4.1 K-map Example 2

Let’s use this to determine which 3-bit numbers are prime. (This is a homework problem.) We assume that 0, 1, and 2 are not prime. We will let our input number have digits $a_2 a_1 a_0$. The truth table and the corresponding K-map are given below. Note that where two inputs are combined in a row or column, their progression follows Gray code; that is, only one bit changes at a time.

Table 3: 3-digit prime finder.

Decimal   a2   a1   a0   Q
0         0    0    0    0
1         0    0    1    0
2         0    1    0    0
3         0    1    1    1
4         1    0    0    0
5         1    0    1    1
6         1    1    0    0
7         1    1    1    1

Table 4: K-map of the truth table.

a2\a1a0   00   01   11   10
0         0    0    1    0
1         0    1    1    0

One solution, obtained by grouping the two 1’s of the a1a0 = 11 column and the two 1’s of the a2 = 1 row (columns 01 and 11), is:

$Q = a_1 a_0 + a_2 a_0 = a_0 (a_1 + a_2)$
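As a cross-check (not part of the original notes), one can compare this expression against the truth table by brute force in Python; the helper names below are illustrative only:

    # Compare Q = a0 (a1 + a2) against the truth table of the 3-bit prime finder.
    def q_kmap(a2, a1, a0):
        return a0 & (a1 | a2)

    primes = {3, 5, 7}   # the inputs the truth table marks with Q = 1
    for n in range(8):
        a2, a1, a0 = (n >> 2) & 1, (n >> 1) & 1, n & 1
        assert q_kmap(a2, a1, a0) == (1 if n in primes else 0)
    print("K-map solution matches the truth table")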

This yields

$C^{i}_{out} = a_i b_i + C^{i}_{in} a_i + C^{i}_{in} b_i = a_i b_i + C^{i}_{in}(a_i + b_i)$

which in hardware would be two 2-input OR gates and two 2-input AND gates. As stated above, the carry bits allow our adder to be expanded to add any number of bits. As an example, a 4-bit adder circuit is depicted in Fig. 8. The sum can be 5 bits, where the MSB is formed by the final carry out. (Sometimes this is referred to as an “overflow” bit.)


Figure 8: Expansion of 1-bit full adder to make a 4-bit adder.
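To make the structure concrete, here is a short behavioural Python sketch (function names are my own) of the 1-bit full adder chained into the 4-bit ripple arrangement of Fig. 8; it models the logic only, not gate delays:

    # Behavioural model of the 1-bit full adder and the 4-bit ripple-carry chain.
    def full_adder(a, b, cin):
        s = a ^ b ^ cin                    # sum bit
        cout = (a & b) | (cin & (a | b))   # Cout = ab + Cin(a + b), as derived above
        return s, cout

    def add4(a_bits, b_bits):
        """a_bits, b_bits: [a0, a1, a2, a3], LSB first.  Returns 5 sum bits."""
        carry = 0
        s = []
        for a, b in zip(a_bits, b_bits):
            bit, carry = full_adder(a, b, carry)
            s.append(bit)
        return s + [carry]                 # S4 is the final carry out

    # Example: 0110 (6) + 0111 (7) = 01101 (13), written LSB first below.
    print(add4([0, 1, 1, 0], [1, 1, 1, 0]))   # [1, 0, 1, 1, 0]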

2.4.3 Making a Multiplier from an Adder

In class we will discuss how to use our full adder (the “Σ chip”) to make a multiplier.

2.5 Multiplexing

A multiplexer (MUX) is a device which selects one of many inputs to a single output. The selection is done by using an input address. Hence, a MUX can take many data bits and put them, one at a time, on a single output data line in a particular sequence. This is an example of transforming parallel data to serial data. A demultiplexer (DEMUX) performs the inverse operation, taking one input and sending it to one of many possible outputs. Again the output line is selected using an address.

A MUX-DEMUX pair can be used to convert data to serial form for transmission, thus reducing the number of required transmission lines. The address bits are shared by the MUX and DEMUX at each end. If n data bits are to be transmitted, then after multiplexing, the number of separate lines required is $\log_2 n + 1$, compared to n without the conversion to serial. Hence for large n the saving can be substantial. In Lab 2, you will build such a system.

Multiplexers consist of two functionally separate components, a decoder and some switches or gates. The decoder interprets the input address to select a single data bit. We use the example of a 4-bit MUX in the following section to illustrate how this works.

2.5.1 A 4-bit MUX Design

We wish to design a 4-bit multiplexer. The block diagram is given in Fig. 9. There are 4 input data bits D0–D3, 2 input address bits A0 and A1, one serial output data bit Q, and an (optional) enable bit E which is used for expansion (discussed later). First we will design the decoder.


Figure 9: Block diagram of 4-bit MUX.

We need m address bits to specify $2^m$ data bits. So in our example, we have 2 address bits. The truth table for our decoder is straightforward:

A1   A0   C0   C1   C2   C3
0    0    1    0    0    0
0    1    0    1    0    0
1    0    0    0    1    0
1    1    0    0    0    1

The implementation of the truth table with standard gates is also straightforward, as given in Fig. 10.


Figure 10: Decoder for the 4-bit MUX.

For the “gates/switches” part of the MUX, the design depends upon whether the input data lines carry digital or analog signals. We will discuss the analog possibility later. The digital case is the usual and simplest case. Here, the data routing can be accomplished
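The standard digital realization is to AND each data bit with the corresponding decoder line and OR the results; since the preview text is cut off at this point, the following Python sketch of the complete 4-bit MUX (decoder plus gating) should be read as an illustration under that assumption, with all names my own:

    # Behavioural sketch of the 4-bit MUX: decoder (Fig. 10) plus AND/OR gating.
    def decoder(a1, a0):
        c0 = (1 - a1) & (1 - a0)
        c1 = (1 - a1) & a0
        c2 = a1 & (1 - a0)
        c3 = a1 & a0
        return [c0, c1, c2, c3]

    def mux4(d, a1, a0, enable=1):
        """d = [D0, D1, D2, D3]; returns the selected data bit Q."""
        c = decoder(a1, a0)
        # Q = E (C0 D0 + C1 D1 + C2 D2 + C3 D3)
        return enable & ((c[0] & d[0]) | (c[1] & d[1]) |
                         (c[2] & d[2]) | (c[3] & d[3]))

    print(mux4([1, 0, 1, 1], a1=1, a0=0))   # address 10 selects D2 -> 1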

3 Flip-Flops and Introductory Sequential Logic

We now turn to digital circuits whose states change in time, usually according to an external clock. The flip-flop is an important element of such circuits. It has the interesting property of memory: it can be set to a state which is retained until explicitly reset.

3.1 Simple Latches

The following 3 figures are equivalent representations of a simple circuit. In general these are called flip-flops. Specifically, these examples are called SR (“set-reset”) flip-flops, or SR latches.


Figure 11: Two equivalent versions of an SR flip-flop (or “SR latch”).


Figure 12: Yet another equivalent SR flip-flop, as used in Lab 3.

The truth table for the SR latch is given below.

S   $\overline{S}$   R   $\overline{R}$   Q   $\overline{Q}$
1   0                0   1                1   0
0   1                1   0                0   1
0   1                0   1                (retains previous state)
1   0                1   0                0   0

The state described by the last row is clearly problematic, since Q and $\overline{Q}$ should not be the same value. Thus, the S = R = 1 inputs should be avoided. From the truth table, we can develop a sequence such as the following:

  1. R = 0, S = 1 ⇒ Q = 1 (set)
  2. R = 0, S = 0 ⇒ Q = 1 (Q = 1 state retained: “memory”)
  3. R = 1, S = 0 ⇒ Q = 0 (reset)
  4. R = 0, S = 0 ⇒ Q = 0 (Q = 0 state retained)

In alternative language, the first operation “writes” a true state into one bit of memory. It can subsequently be “read” until it is erased by the reset operation of the third line.
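As an illustration of this set/retain/reset behaviour, here is a minimal behavioural sketch in Python (the class and method names are my own; the state is simply held in a variable rather than in cross-coupled gates):

    # Minimal behavioural model of the SR latch truth table.
    class SRLatch:
        def __init__(self):
            self.q = 0

        def apply(self, s, r):
            if s == 1 and r == 1:
                raise ValueError("S = R = 1 is the disallowed state")
            if s == 1:
                self.q = 1        # set
            elif r == 1:
                self.q = 0        # reset
            return self.q         # S = R = 0: previous state retained

    latch = SRLatch()
    for s, r in [(1, 0), (0, 0), (0, 1), (0, 0)]:   # the sequence listed above
        print(s, r, latch.apply(s, r))              # Q: 1, 1, 0, 0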

3.1.1 Latch Example: Debounced Switch

A useful example of the simple SR flip-flop is the debounced switch, like the ones on the lab prototyping boards. The point is that any simple mechanical switch will bounce as it makes contact. Hence, an attempt to provide a simple transition from digital HIGH to LOW with a mechanical switch may result in an unintended series of transitions between the two states as the switch damps to its final position. So, for example, a digital counter connected to Q would count every bounce, rather than the single push of the button which was intended. The debounced configuration and corresponding truth table are given below. When the switch is moved from A to B, for example, the output Q goes LOW. A bounce would result in A = B = 1, which is the “retain previous” state of the flip-flop. Hence, the bounces do not appear at the output Q.


Figure 13: A debounced switch.

A   B   Q
0   1   1
1   0   0
1   1   (retains previous state)
0   0   (not allowed)

D. This is shown in Fig. 15. Note that we have explicitly eliminated the bad S = R = 1 state with this configuration. We can override this data-input and clock synchronization scheme by including the “jam set” ($\overline{S}$) and “jam reset” ($\overline{R}$) inputs shown in Fig. 15. These function just as before with the unclocked SR flip-flop. Note that these “jam” inputs go by various names: sometimes the set is called “preset” and the reset is called “clear”, for example.


Figure 15: A “D-type transparent” flip-flop with jam set and reset.

A typical timing diagram for this flip-flop is given in Fig. 16. Note that the jam reset signal $\overline{R}$ overrides any action of the data or clock inputs.


Figure 16: Example of a timing diagram for the transparent D flip-flop. (It is assumed that $\overline{S}$ is held HIGH throughout.)

3.2.1 Edge Triggered Flip-Flops

We need to make one final modification to our clocked flip-flop. Note in the timing diagram of Fig. 16 that there is quite a bit of apparent ambiguity regarding exactly when the D input gets latched into Q. If a transition in D occurs sometime during a clock HIGH, for example, what will occur? The answer will depend upon the characteristics of the particular electronics being used. This lack of clarity is often unacceptable. As a point of terminology, the clocked flip-flop of Fig. 15 is called a transparent D-type flip-flop or latch. (An example in TTL is the 7475 IC.)

The solution to this is the edge-triggered flip-flop. We will discuss how this works for one example in class. It is also discussed in the text. Triggering on a clock rising or falling edge is similar in all respects to what we have discussed, except that it requires 2–3 coupled SR-type flip-flops, rather than just one clocked SR flip-flop. The most common type is the positive-edge triggered D-type flip-flop. This latches the D input upon the clock transition from LOW to HIGH. An example of this in TTL is the 7474 IC. It is also common to employ a negative-edge triggered D-type flip-flop, which latches the D input upon the clock transition from HIGH to LOW. The symbols used for these three D-type flip-flops are depicted in Fig. 17. Note that the small triangle at the clock input denotes positive-edge triggering; with an inversion symbol it denotes negative-edge triggering. The JK type of flip-flop is a slightly fancier version of the D-type which we will discuss briefly later. Not shown in the figure are the jam set and reset inputs, which are typically included in the flip-flop IC packages. In timing diagrams, the clocks for edge-triggered devices are indicated by arrows, as shown in Fig. 18.


Figure 17: Symbols for D-type and JK flip-flops. Left to right: transparent D-type, positive-edge triggered D-type, negative-edge triggered D-type, and positive-edge triggered JK-type.


Figure 18: Clocks in timing diagrams for positive-edge triggered (left) and negative-edge triggered (right) devices.

For edge-triggered devices, the ambiguity regarding latch timing is reduced significantly. But at high clock frequency it will become an issue again. Typically, the requirements are as follows:

  • The data input must be held for a time $t_{setup}$ before the clock edge. Typically, $t_{setup} \approx 20$ ns or less.
  • For some ICs, the data must be held for a short time $t_{hold}$ after the clock edge. Typically $t_{hold} \approx 3$ ns, but it is zero for most newer ICs.
  • The output Q appears after a short propagation delay $t_{prop}$ of the signal through the gates of the IC. Typically, $t_{prop} \approx 10$ ns.
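To make the edge-triggered behaviour concrete, here is a small behavioural Python sketch of a positive-edge-triggered D-type flip-flop (all names are my own; the setup, hold, and propagation times listed above are not modelled, only the latching on the LOW → HIGH clock transition):

    # Behavioural model of a positive-edge-triggered D-type flip-flop:
    # Q takes the value of D only on the LOW -> HIGH clock transition.
    class DFlipFlop:
        def __init__(self):
            self.q = 0
            self._prev_clk = 0

        def tick(self, clk, d):
            if self._prev_clk == 0 and clk == 1:   # rising edge detected
                self.q = d
            self._prev_clk = clk
            return self.q

    ff = DFlipFlop()
    for clk, d in [(0, 1), (1, 1), (1, 0), (0, 0), (1, 0)]:
        print(clk, d, ff.tick(clk, d))   # Q changes only on the rising edges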