
Infinite impulse response (IIR) digital filters are fundamentally different from FIR filters because practical IIR filters always require feedback. Where FIR filter output samples depend only on past input samples, each IIR filter output sample depends on previous input samples and on previous filter output samples. IIR filters' memory of past outputs is both a blessing and a curse. Like all feedback systems, perturbations at the IIR filter input could, depending on the design, cause the filter output to become unstable and oscillate indefinitely. This characteristic of possibly having an infinite duration of nonzero output samples, even if the input becomes all zeros, is the origin of the phrase infinite impulse response. It's interesting at this point to know that, relative to FIR filters, IIR filters have more complicated structures (block diagrams), are harder to design and analyze, and do not have linear phase responses. Why in the world, then, would anyone use an IIR filter? Because they are very efficient. IIR filters require far fewer multiplications per filter output sample to achieve a given frequency magnitude response. From a hardware standpoint, this means that IIR filters can be very fast, allowing us to build real-time IIR filters that operate over much higher sample rates than FIR filters. [At the end of this section, we briefly compare the advantages and disadvantages of IIR filters relative to FIR filters.]
To illustrate the utility of IIR filters, FIG. 1 contrasts the frequency magnitude responses of what's called a fourth-order low-pass IIR filter and the 19-tap FIR filter of Figure 19(b) from Section 5. Where the 19-tap FIR filter in FIG. 1 requires 19 multiplications per filter output sample, the fourth-order IIR filter requires only 9 multiplications for each filter output sample. Not only does the IIR filter give us reduced passband ripple and a sharper filter roll-off, it does so with less than half the multiplication workload of the FIR filter. Recall from Section 5.3 that, to force an FIR filter's frequency response to have very steep transition regions, we had to design an FIR filter with a very long impulse response. The longer the impulse response, the more ideal our filter frequency response becomes. From a hardware standpoint, the maximum number of FIR filter taps we can have (the length of the impulse response) depends on how fast our hardware can perform the required number of multiplications and additions to get a filter output value before the next filter input sample arrives. IIR filters, however, can be designed to have impulse responses that are longer than their number of taps! Thus, IIR filters can give us much better filtering for a given number of multiplications per output sample than FIR filters. With this in mind, let's take a deep breath, flex our mathematical muscles, and learn about IIR filters.

1.
IIR filters get their name from the fact that some of the filter's previous output samples are used to calculate the current output sample. Given a finite duration of nonzero input values, the effect is that an IIR filter could have an infinite duration of nonzero output samples. So, if the IIR filter's input suddenly becomes a sequence of all zeros, the filter's output could conceivably remain nonzero forever. This peculiar attribute of IIR filters comes about because of the way they're realized, i.e., the feedback structure of their delay units, multipliers, and adders. Understanding IIR filter structures is straightforward if we start by recalling the building blocks of an FIR filter. FIG. 2(a) shows the now familiar structure of a 4-tap FIR digital filter that implements the time-domain FIR equation

y(n) = h(0)x(n) + h(1)x(n-1) + h(2)x(n-2) + h(3)x(n-3). (eqn. 1)

Although not specifically called out as such in Section 5, Eq. (eqn. 1) is known as a difference equation. To appreciate how past filter output samples are used in the structure of IIR filters, let's begin by reorienting our FIR structure in FIG. 2(a) to that of FIG. 2(b). Notice how the structures in FIG. 2 are computationally identical, and both are implementations, or realizations, of Eq. (eqn. 1). We can now show how past filter output samples are combined with past input samples by using the IIR filter structure in FIG. 3. Because IIR filters have two sets of coefficients, we'll use the standard notation of the variables b(k) to denote the feedforward coefficients and the variables a(k) to indicate the feedback coefficients in FIG. 3. OK, the difference equation describing the IIR filter in FIG. 3 is

y(n) = b(0)x(n) + b(1)x(n-1) + b(2)x(n-2) + b(3)x(n-3) + a(1)y(n-1) + a(2)y(n-2) + a(3)y(n-3). (eqn. 2)

Look at FIG. 3 and Eq. (eqn. 2) carefully. It's important to convince ourselves that FIG. 3 really is a valid implementation of Eq. (eqn. 2) and that, conversely, difference equation Eq. (eqn. 2) fully describes the IIR filter structure in FIG. 3.
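To make the two kinds of difference equation concrete, here is a minimal Python sketch (an illustration, not from the text) of the 4-tap FIR computation of Eq. (eqn. 1) and of an IIR recursion with feedforward b(k) and feedback a(k) coefficients. The coefficient values in the usage note are arbitrary, and the a(k) sign convention follows the additive form used above (some texts negate the feedback terms instead).

```python
def fir_output(h, x, n):
    """One FIR output sample: y(n) = h(0)x(n) + h(1)x(n-1) + ...
    Input samples x(n-k) with n-k < 0 are taken as zero."""
    return sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)

def iir_filter(b, a, x):
    """Full IIR output sequence: feedforward b(k) taps on the input
    plus feedback a(k) taps (k >= 1) on previous output samples."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc += sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y
```

Feeding a unit impulse into the IIR recursion with, say, b = [1.0] and a = [0.0, 0.5] yields the output 1, 0.5, 0.25, 0.125, ... — nonzero forever in principle, exactly the infinite-impulse-response behavior described above.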
Keep in mind now that the sequence y(n) in FIG. 3 is not the same y(n) sequence that's shown in FIG. 2. The d(n) sequence in FIG. 3 is equal to the y(n) sequence in FIG. 2. By now you're probably wondering, "Just how do we determine those a(k) and b(k) IIR filter coefficients if we actually want to design an IIR filter?" Well, fasten your seat belt because this is where we get serious about understanding IIR filters. Recall from the last section concerning the window method of low-pass FIR filter design that we defined the frequency response of our desired FIR filter, took the inverse Fourier transform of that frequency response, and then shifted that transform result to get the filter's time-domain impulse response. Happily, due to the nature of transversal FIR filters, the desired h(k) filter coefficients turned out to be exactly equal to the impulse response sequence. Following that same procedure with IIR filters, we could define the desired frequency response of our IIR filter and then take the inverse Fourier transform of that response to yield the filter's time-domain impulse response. The bad news is that there's no direct method for computing the filter's a(k) and b(k) coefficients from the impulse response! Unfortunately, the FIR filter design techniques that we've learned so far simply cannot be used to design IIR filters. Fortunately for us, this wrinkle can be ironed out by using one of several available methods of designing IIR filters. Standard IIR filter design techniques fall into three basic classes: the impulse invariance, bilinear transform, and optimization methods. These design methods use a mathematical transformation process for discrete sequences, known as the z-transform, whose origin is the Laplace transform traditionally used in the analysis of continuous systems. With that in mind, let's start this IIR filter analysis and design discussion by briefly reacquainting ourselves with the fundamentals of the Laplace transform.
[Heaviside (1850-1925), who was interested in electrical phenomena, developed an efficient algebraic process of solving differential equations. He initially took a lot of heat from his contemporaries because they thought his work was not sufficiently justified from a mathematical standpoint. However, the discovered correlation of Heaviside's methods with the rigorous mathematical treatment of the French mathematician Marquis Pierre Simon de Laplace's (1749-1827) operational calculus verified the validity of Heaviside's techniques.]

2. The Laplace transform is a mathematical method of solving linear differential equations that has proved very useful in the fields of engineering and physics. This transform technique, as it's used today, originated from the work of the brilliant English physicist Oliver Heaviside. The fundamental process of using the Laplace transform goes something like the following:

Step 1: A time-domain differential equation is written that describes the input/output relationship of a physical system (and we want to find the output function that satisfies that equation with a given input).
Step 2: The differential equation is Laplace transformed, converting it to an algebraic equation.
Step 3: Standard algebraic techniques are used to determine the desired output function's equation in the Laplace domain.
Step 4: The desired Laplace output equation is then inverse Laplace transformed to yield the desired time-domain output function's equation.

This procedure, at first, seems cumbersome because it forces us to go the long way around, instead of just solving a differential equation directly. The justification for using the Laplace transform is that although solving differential equations by classical methods is a very powerful analysis technique, it can be tedious and (for some of us) error prone for all but the most simple systems.
The reduced complexity of using algebra outweighs the extra effort needed to perform the required forward and inverse Laplace transformations. This is especially true now that tables of forward and inverse Laplace transforms exist for most of the commonly encountered time functions. Well-known properties of the Laplace transform also allow practitioners to decompose complicated time functions into combinations of simpler functions and, then, use the tables. (Tables of Laplace transforms allow us to translate quickly back and forth between a time function and its Laplace transform--analogous to, say, a German-English dictionary if we were studying the German language.) Let's briefly look at a few of the more important characteristics of the Laplace transform that will prove useful as we make our way toward the discrete z-transform used to design and analyze IIR digital filters. [Although tables of commonly encountered Laplace transforms are included in almost every system analysis textbook, very comprehensive tables are also available.] The Laplace transform of a continuous time-domain function f(t), where f(t) is defined only for positive time (t > 0), is expressed mathematically as

F(s) = ∫ f(t)e^(-st)dt, integrated over t from 0 to ∞. (eqn. 3)

F(s) is called "the Laplace transform of f(t)," and the variable s is the complex number

s = σ + jω. (eqn. 4)

A more general expression for the Laplace transform, called the bilateral or two-sided transform, uses negative infinity (-∞) as the lower limit of integration. However, for the systems that we'll be interested in, where system conditions for negative time (t < 0) are not needed in our analysis, the one-sided Eq. (eqn. 3) applies. Those systems, often referred to as causal systems, may have initial conditions at t = 0 that must be taken into account (velocity of a mass, charge on a capacitor, temperature of a body, etc.), but we don't need to know what the system was doing prior to t = 0. In Eq. (eqn. 4), σ is a real number and ω is frequency in radians/second.
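The complex exponential inside the transform can be explored numerically. Below is a minimal Python sketch (an illustration, not part of the text) that evaluates e^(-st) for s = σ + jω and confirms that its magnitude decays as e^(-σt) while its phase rotates at ω radians/second.

```python
import cmath
import math

def damped_phasor(sigma, omega, t):
    """e^(-st) with s = sigma + j*omega: a phasor of magnitude
    e^(-sigma*t) that rotates clockwise at omega radians/second."""
    s = complex(sigma, omega)
    return cmath.exp(-s * t)
```

For positive σ the phasor tip spirals in toward the origin of the complex plane, which is exactly the behavior described in the paragraphs that follow.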
Because e^(-st) is dimensionless, the exponent term st must be dimensionless, so s must have the dimension of 1/time, or frequency. That's why the Laplace variable s is often called a complex frequency. To put Eq. (eqn. 3) into words, we can say that it requires us to multiply, point for point, the function f(t) by the complex function e^(-st) for a given value of s. (We'll soon see that using the function e^(-st) here is not accidental; e^(-st) is used because it's the general form for the solution of linear differential equations.) After the point-for-point multiplications, we find the area under the curve of the function f(t)e^(-st) by summing all the products. That area, a complex number, represents the value of the Laplace transform for the particular value of s = σ + jω chosen for the original multiplications. If we were to go through this process for all values of s, we'd have a full description of F(s) for every value of s. I like to think of the Laplace transform as a continuous function, where the complex value of that function for a particular value of s is a correlation of f(t) and a damped complex e^(-st) sinusoid whose frequency is ω and whose damping factor is σ. What do these complex sinusoids look like? Well, they are rotating phasors described by

e^(-st) = e^(-(σ + jω)t) = e^(-jωt)/e^(σt). (eqn. 5)

From our knowledge of complex numbers, we know that e^(-jωt) is a unity magnitude phasor rotating clockwise around the origin of a complex plane at a frequency of ω radians per second. The denominator of Eq. (eqn. 5) is a real number whose value is one at time t = 0. As t increases, the denominator e^(σt) gets larger (when σ is positive), and the complex e^(-st) phasor's magnitude gets smaller as the phasor rotates on the complex plane. The tip of that phasor traces out a curve spiraling in toward the origin of the complex plane. One way to visualize a complex sinusoid is to consider its real and imaginary parts individually. We do this by expressing the complex e^(-st) sinusoid from Eq. (eqn. 5) in rectangular form as

e^(-st) = e^(-σt)[cos(ωt) - jsin(ωt)]. (eqn. 5')

FIG. 4 shows the real parts (cosine) of several complex sinusoids with different frequencies and different damping factors. In FIG. 4(a), the complex sinusoid's frequency is the arbitrary ω', and the damping factor is the arbitrary σ'. So the real part of F(s), at s = σ' + jω', is equal to the correlation of f(t) and the wave in FIG. 4(a). For different values of s, we'll correlate f(t) with different complex sinusoids as shown in FIG. 4. (As we'll see, this correlation is very much like the correlation of f(t) with various sine and cosine waves when we were calculating the discrete Fourier transform.) Again, the real part of F(s), for a particular value of s, is the correlation of f(t) with a cosine wave of frequency ω and a damping factor of σ, and the imaginary part of F(s) is the correlation of f(t) with a sinewave of frequency ω and a damping factor of σ. Now, if we associate each of the different values of the complex s variable with a point on a complex plane, rightfully called the s-plane, we could plot the real part of the F(s) correlation as a surface above (or below) that s-plane and generate a second plot of the imaginary part of the F(s) correlation as a surface above (or below) the s-plane. We can't plot the full complex F(s) surface on paper because that would require four dimensions. That's because s is complex, requiring two dimensions, and F(s) is itself complex and also requires two dimensions. What we can do, however, is graph the magnitude |F(s)| as a function of s because this graph requires only three dimensions. Let's do that as we demonstrate this notion of an |F(s)| surface by illustrating the Laplace transform in a tangible way. Say, for example, that we have the linear system shown in FIG. 5. Also, let's assume that we can relate the x(t) input and the y(t) output of the linear time-invariant physical system in FIG. 5 with the following messy homogeneous constant-coefficient differential equation
a2·d²y(t)/dt² + a1·dy(t)/dt + a0·y(t) = b1·dx(t)/dt + b0·x(t). (eqn. 6)

We'll use the Laplace transform toward our goal of figuring out how the system will behave when various types of input functions are applied, i.e., what the y(t) output will be for any given x(t) input. Let's slow down here and see exactly what FIG. 5 and Eq. (eqn. 6) are telling us. First, if the system is time invariant, then the a and b coefficients in Eq. (eqn. 6) are constant. They may be positive or negative, zero, real or complex, but they do not change with time. If the system is electrical, the coefficients might be related to capacitance, inductance, and resistance. If the system is mechanical with masses and springs, the coefficients could be related to mass, coefficient of damping, and coefficient of resilience. Then, again, if the system is thermal with masses and insulators, the coefficients would be related to thermal capacity and thermal conductance. To keep this discussion general, though, we don't really care what the coefficients actually represent. OK, Eq. (eqn. 6) also indicates that, ignoring the coefficients for the moment, the sum of the y(t) output plus derivatives of that output is equal to the sum of the x(t) input plus the derivative of that input. Our problem is to determine exactly what input and output functions satisfy the elaborate relationship in Eq. (eqn. 6). (For the stouthearted, classical methods of solving differential equations could be used here, but the Laplace transform makes the problem much simpler for our purposes.) Thanks to Laplace, the complex exponential time function e^(st) is the one we'll use. It has the beautiful property that it can be differentiated any number of times without destroying its original form. That is,

d^n(e^(st))/dt^n = s^n·e^(st). (eqn. 7)

If we let x(t) and y(t) be functions of e^(st), x(e^(st)) and y(e^(st)), and use the properties shown in Eq. (eqn. 7), Eq. (eqn. 6) becomes

(a2·s² + a1·s + a0)·y(e^(st)) = (b1·s + b0)·x(e^(st)). (eqn. 8)

Although it's simpler than Eq. (eqn. 6), we can further simplify the relationship in the last line in Eq. (eqn. 8) by considering the ratio of y(e^(st)) over x(e^(st)) as the Laplace transfer function of our system in FIG. 5. If we call that ratio of polynomials the transfer function H(s),

H(s) = y(e^(st))/x(e^(st)) = (b1·s + b0)/(a2·s² + a1·s + a0). (eqn. 9)

To indicate that the original x(t) and y(t) have the identical functional form of e^(st), we can follow the standard Laplace notation of capital letters and show the transfer function as

H(s) = Y(s)/X(s), (eqn. 10)

where the output Y(s) is given by

Y(s) = X(s)·H(s). (eqn. 11)

Equation (eqn. 11) leads us to redraw the original system diagram in a form that highlights the definition of the transfer function H(s) as shown in FIG. 6. The cautious reader may be wondering, "Is it really valid to use this Laplace analysis technique when it's strictly based on the system's x(t) input being some function of e^(st), or x(e^(st))?" The answer is that the Laplace analysis technique, based on the complex exponential x(e^(st)), is valid because all practical x(t) input functions can be represented with complex exponentials. For example:

• a constant, c = c·e^(0t),
• sinusoids, sin(ωt) = (e^(jωt) - e^(-jωt))/2j or cos(ωt) = (e^(jωt) + e^(-jωt))/2,
• a monotonic exponential, e^(at), and
• an exponentially varying sinusoid, e^(at)·cos(ωt).
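The exponential identities in that list are easy to confirm numerically. This short Python sketch (illustrative only, not from the text) rebuilds sine and cosine from counter-rotating complex exponentials and checks them at an arbitrary point:

```python
import cmath
import math

def cos_from_exponentials(w, t):
    """cos(wt) rebuilt from counter-rotating complex exponentials."""
    return (cmath.exp(1j * w * t) + cmath.exp(-1j * w * t)) / 2

def sin_from_exponentials(w, t):
    """sin(wt) rebuilt the same way, with the difference divided by 2j."""
    return (cmath.exp(1j * w * t) - cmath.exp(-1j * w * t)) / 2j
```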
With that said, if we know a system's transfer function H(s), we can take the Laplace transform of any x(t) input to determine X(s), multiply that X(s) by H(s) to get Y(s), and then inverse Laplace transform Y(s) to yield the time-domain expression for the output y(t). In practical situations, however, we usually don't go through all those analytical steps because it's the system's transfer function H(s) in which we're most interested. Being able to express H(s) mathematically or graph the surface |H(s)| as a function of s will tell us the two most important properties we need to know about the system under analysis: Is the system stable, and if so, what is its frequency response? "But wait a minute," you say. "Equations (eqn. 10) and (eqn. 11) indicate that we have to know the Y(s) output before we can determine H(s)!" Not really. All we really need to know is the time-domain differential equation like that in Eq. (eqn. 6). Next we take the Laplace transform of that differential equation and rearrange the terms to get the H(s) ratio in the form of Eq. (eqn. 10). With practice, systems designers can look at a diagram (block, circuit, mechanical, whatever) of their system and promptly write the Laplace expression for H(s). Let's use the concept of the Laplace transfer function H(s) to determine the stability and frequency response of simple continuous systems. 2.1 One of the most important characteristics of any system involves the concept of stability. We can think of a system as stable if, given any bounded input, the output will always be bounded. This sounds like an easy condition to achieve because most systems we encounter in our daily lives are indeed stable. Nevertheless, we have all experienced instability in a system containing feedback. Recall the annoying howl when a public address system's microphone is placed too close to the loudspeaker.
A sensational example of an unstable system occurred in western Washington when the first Tacoma Narrows Bridge began oscillating on the afternoon of November 7th, 1940. Those oscillations, caused by 42 mph winds, grew in amplitude until the bridge destroyed itself. For IIR digital filters with their built-in feedback, instability would result in a filter output that's not at all representative of the filter input; that is, our filter output samples would not be a filtered version of the input; they'd be some strange oscillating or pseudorandom values. A situation we'd like to avoid if we can, right? Let's see how. We can determine a continuous system's stability by examining several different examples of H(s) transfer functions associated with linear time-invariant systems. Assume that we have a system whose Laplace transfer function is of the form of Eq. (eqn. 10), the coefficients are all real, and the coefficients b1 and a2 are equal to zero. We'll call that Laplace transfer function H1(s), where

H1(s) = b0/(a1·s + a0). (eqn. 12)

Notice that if s = -a0/a1, the denominator in Eq. (eqn. 12) equals zero and H1(s) would have an infinite magnitude. This s = -a0/a1 point on the s-plane is called a pole, and that pole's location is shown by the "x" in FIG. 7(a). Notice that the pole is located exactly on the negative portion of the real σ axis. If the system described by H1(s) were at rest and we disturbed it with an impulse-like x(t) input at time t = 0, its continuous time-domain y(t) output would be the damped exponential curve shown in FIG. 7(b). We can see that H1(s) is stable because its y(t) output approaches zero as time passes. By the way, the distance of the pole from the σ = 0 axis, a0/a1 for our H1(s), gives the decay rate of the y(t) impulse response. To illustrate why the term pole is appropriate, FIG. 8(b) depicts the three-dimensional surface of |H1(s)| above the s-plane. Look at FIG. 8(b) carefully and see how we've reoriented the s-plane axis.
This new axis orientation allows us to see how the H1(s) system's frequency magnitude response can be determined from its three-dimensional s-plane surface. If we examine the |H1(s)| surface at σ = 0, we get the bold curve in FIG. 8(b). That bold curve, the intersection of the vertical σ = 0 plane (the jω axis plane) and the |H1(s)| surface, gives us the frequency magnitude response |H1(ω)| of the system--and that's one of the things we're after here. The bold |H1(ω)| curve in FIG. 8(b) is shown in a more conventional way in FIG. 8(c). FIGs. 8(b) and 8(c) highlight the very important property that the Laplace transform is a more general case of the Fourier transform because if σ = 0, then s = jω. In this case, the |H1(s)| curve for σ = 0 above the s-plane becomes the |H1(ω)| curve above the jω axis in FIG. 8(c).
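This single-pole behavior is easy to check numerically. The Python sketch below is illustrative only; the coefficient values b0 = 1, a1 = 1, a0 = 2 are hypothetical choices giving a pole at s = -2. It evaluates H1(s) to show both the magnitude blow-up near the pole and the σ = 0 slice that is the frequency magnitude response:

```python
def H1(s, b0=1.0, a1=1.0, a0=2.0):
    """H1(s) = b0 / (a1*s + a0); for these hypothetical coefficients
    the single pole sits at s = -a0/a1 = -2 on the real axis."""
    return b0 / (a1 * s + a0)

def freq_mag(omega):
    """Frequency magnitude response: |H1(s)| evaluated on the jw axis
    (the sigma = 0 slice of the |H1(s)| surface)."""
    return abs(H1(complex(0.0, omega)))
```

The magnitude grows without bound as s approaches the pole at s = -2, while along the jω axis the response is a low-pass curve that peaks at ω = 0.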
Another common system transfer function leads to an impulse response that oscillates. Let's think about an alternate system whose Laplace transfer function is of the form of Eq. (eqn. 10), the coefficient b0 equals zero, and the coefficients lead to complex terms when the denominator polynomial is factored. We'll call this particular second-order transfer function H2(s), where

H2(s) = b1·s/(a2·s² + a1·s + a0). (eqn. 13)

(By the way, when a transfer function has the Laplace variable s in both the numerator and denominator, the order of the overall function is defined by the largest exponential order of s in the denominator polynomial. So our H2(s) is a second-order transfer function.) To keep the following equations from becoming too messy, let's factor its denominator and rewrite Eq. (eqn. 13) as

H2(s) = A·s/[(s + p)(s + p*)], (eqn. 14)

where A = b1/a2, p = p_real + jp_imag, and p* = p_real - jp_imag (the complex conjugate of p). Notice that, if s is equal to -p or -p*, one of the factors in the denominator of Eq. (eqn. 14) will equal zero, and H2(s) will have an infinite magnitude. Those two complex poles, shown in FIG. 9(a), are located off the negative portion of the real σ axis. If the H2(s) system were at rest and we disturbed it with an impulse-like x(t) input at time t = 0, its continuous time-domain y(t) output would be the damped sinusoidal curve shown in FIG. 9(b). We see that H2(s) is stable because its oscillating y(t) output, like a plucked guitar string, approaches zero as time increases. Again, the distance of the poles from the σ = 0 axis (p_real) gives the decay rate of the sinusoidal y(t) impulse response. Likewise, the distance of the poles from the jω = 0 axis (±p_imag) gives the frequency of the sinusoidal y(t) impulse response. Notice something new in FIG. 9(a). When s = 0, the numerator of Eq. (eqn. 14) is zero, making the transfer function H2(s) equal to zero.
Any value of s where H2(s) = 0 is sometimes of interest and is usually plotted on the s-plane as the little circle, called a "zero," shown in FIG. 9(a). At this point we're not very interested in knowing exactly what p and p* are in terms of the coefficients in the denominator of Eq. (eqn. 13). However, an energetic reader could determine the values of p and p* in terms of a0, a1, and a2 by using the following well-known quadratic factorization formula: Given the second-order polynomial f(s) = as² + bs + c, then f(s) can be factored as

f(s) = a·[s + (b + √(b² - 4ac))/(2a)]·[s + (b - √(b² - 4ac))/(2a)]. (eqn. 15)

FIG. 10(b) illustrates the |H2(s)| surface above the s-plane. Again, the bold |H2(ω)| curve in FIG. 10(b) is shown in the conventional way in FIG. 10(c) to indicate the frequency magnitude response of the system described by Eq. (eqn. 13). Although the three-dimensional surfaces in Figures 8(b) and 10(b) are informative, they're also unwieldy and unnecessary. We can determine a system's stability merely by looking at the locations of the poles on the two-dimensional s-plane. To further illustrate the concept of system stability, FIG. 11 shows the s-plane pole locations of several example Laplace transfer functions and their corresponding time-domain impulse responses. We recognize Figures 11(a) and 11(b), from our previous discussion, as indicative of stable systems. When disturbed from their at-rest condition they respond and, at some later time, return to that initial condition. The single pole location at s = 0 in FIG. 11(c) is indicative of the 1/s transfer function of a single element of a linear system. In an electrical system, this 1/s transfer function could be a capacitor that was charged with an impulse of current, and there's no discharge path in the circuit. For a mechanical system, FIG. 11(c) would describe a kind of spring that's compressed with an impulse of force and, for some reason, remains under compression. Notice, in FIG. 11(d), that, if an H(s) transfer function has conjugate poles located exactly on the jω axis (σ = 0), the system will go into oscillation when disturbed from its initial condition. This situation, called conditional stability, happens to describe the intentional transfer function of electronic oscillators. Instability is indicated in Figures 11(e) and 11(f). Here, the poles lie to the right of the jω axis. When disturbed from their initial at-rest condition by an impulse input, their outputs grow without bound. See how the value of σ, the real part of s, for the pole locations is the key here? When σ < 0, the system is well behaved and stable; when σ = 0, the system is conditionally stable; and when σ > 0, the system is unstable.
So we can say that, when a pole is located in the right half of the s-plane, the system is unstable. We show this characteristic of linear continuous systems in FIG. 12. Keep in mind that real-world systems often have more than two poles, and a system is only as stable as its least stable pole. For a system to be stable, all of its transfer-function poles must lie on the left half of the s-plane. [Impulse response testing in a laboratory can be an important part of the system design process. The difficult part is generating a true impulse-like input. If the system is electrical, for example, although somewhat difficult to implement, the input x(t) impulse would be a very short duration voltage or current pulse. If, however, the system were mechanical, a whack with a hammer would suffice as an x(t) impulse input. For digital systems, on the other hand, an impulse input is easy to generate; it's a single unity-valued sample preceded and followed by all zero-valued samples.]
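The σ-based rule above translates directly into a few lines of code. This Python sketch (an illustration, not from the text) classifies a continuous system from a list of its s-plane poles, using the same three cases: every real part negative, some poles on the jω axis, or any pole in the right half-plane:

```python
def classify_s_plane(poles):
    """Classify stability from s-plane pole locations.
    poles: iterable of complex pole positions s = sigma + j*omega."""
    if any(p.real > 0 for p in poles):
        return "unstable"              # a pole right of the jw axis
    if any(p.real == 0 for p in poles):
        return "conditionally stable"  # pole(s) exactly on the jw axis
    return "stable"                    # every pole in the left half-plane
```

Because a system is only as stable as its least stable pole, a single right-half-plane pole is enough to make the whole list come back "unstable".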
To consolidate what we've learned so far: H(s) is determined by writing a linear system's time-domain differential equation and taking the Laplace transform of that equation to obtain a Laplace expression in terms of X(s), Y(s), s, and the system's coefficients. Next we rearrange the Laplace expression terms to get the H(s) ratio in the form of Eq. (eqn. 10). (The really slick part is that we do not have to know what the time-domain x(t) input is to analyze a linear system!) We can get the expression for the continuous frequency response of a system just by substituting jω for s in the H(s) equation. To determine system stability, the denominator polynomial of H(s) is factored to find each of its roots. Each root is set equal to zero and solved for s to find the location of the system poles on the s-plane. Any pole located to the right of the jω axis on the s-plane will indicate an unstable system. OK, returning to our original goal of understanding the z-transform, the process of analyzing IIR filter systems requires us to replace the Laplace transform with the z-transform and to replace the s-plane with a z-plane. Let's introduce the z-transform, determine what this new z-plane is, discuss the stability of IIR filters, and design and analyze a few simple IIR filters. [In the early 1960s, James Kaiser, after whom the Kaiser window function is named, consolidated the theory of digital filters using a mathematical description known as the z-transform. Until that time, the use of the z-transform had generally been restricted to the field of discrete control systems.] 3. The z-transform is the discrete-time cousin of the continuous Laplace transform. While the Laplace transform is used to simplify the analysis of continuous differential equations, the z-transform facilitates the analysis of discrete difference equations. Let's define the z-transform and explore its important characteristics to see how it's used in analyzing IIR digital filters.
The z-transform of a discrete sequence h(n), expressed as H(z), is defined as

H(z) = Σ h(n)z^(-n), summed over all n, (eqn. 16)

where the variable z is complex. Where Eq. (eqn. 3) allowed us to take the Laplace transform of a continuous signal, the z-transform is performed on a discrete h(n) sequence, converting that sequence into a continuous function H(z) of the continuous complex variable z. Similarly, as the function e^(st) is the general form for the solution of linear differential equations, z^(-n) is the general form for the solution of linear difference equations. Moreover, as a Laplace function F(s) is a continuous surface above the s-plane, the z-transform function H(z) is a continuous surface above a z-plane. To whet your appetite, we'll now state that, if H(z) represents an IIR filter's z-domain transfer function, evaluating the H(z) surface will give us the filter's frequency magnitude response, and H(z)'s pole and zero locations will determine the stability of the filter. We can determine the frequency response of an IIR digital filter by expressing z in polar form as z = re^(jω), where r is a magnitude and ω is the angle. In this form, the z-transform equation becomes

H(re^(jω)) = Σ [h(n)r^(-n)]e^(-jωn), summed over all n. (eqn. 17)
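For a finite-length h(n), Eq. (eqn. 16) can be evaluated directly. The Python sketch below (illustrative; the two-tap averager h = [0.5, 0.5] is an arbitrary example, not from the text) computes H(z) and then samples it on the unit circle z = e^(jω) to obtain a frequency response:

```python
import cmath
import math

def H_of_z(h, z):
    """H(z) = sum of h(n) * z^(-n) for a finite-length causal h(n)."""
    return sum(hn * z ** (-n) for n, hn in enumerate(h))

def unit_circle_response(h, omega):
    """Evaluate H(z) at z = e^(j*omega), i.e., |z| = 1: the filter's
    frequency response at normalized angle omega radians."""
    return H_of_z(h, cmath.exp(1j * omega))
```

For the averager, |H| is 1 at ω = 0 and falls to 0 at ω = π (the point on the unit circle corresponding to half the sample rate), the expected low-pass shape.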
Equation (eqn. 17) can be interpreted as the Fourier transform of the product of the original sequence h(n) and the exponential sequence r ^-n . When r equals one, Eq. (eqn. 17) simplifies to the Fourier transform. Thus on the z-plane, the contour of the H(z) surface for those values where |z| = I is the Fourier trans form of h(n). If h(n) represents a filter impulse response sequence, evaluating the H(z) transfer function for |z| = 1 yields the frequency response of the filter. So where on the z-plane is |z| = 1? It's a circle with a radius of one, centered about the z = 0 point. This circle, so important that it's been given the name unit circle, is shown in FIG. 13. Recall that the JO) frequency axis on the continuous Laplace s-plane was linear and ranged from - to + 00 radians/s. The co frequency axis on the complex z-plane, however, spans only the range from -it to +1C radians. With this relationship between the je.) axis on the Laplace s-plane and the unit circle on the z-plane, we can see that the z-plane frequency axis is equivalent to coiling the s-plane's jo.) axis about the unit circle on the z-plane as shown in FIG. 14. Then, frequency co on the z-plane is not a distance along a straight line, but rather an angle around a circle. With o.) in FIG. 13 being a general normalized angle in radians ranging from - pi to + pi , we can relate co to an equivalent f, sampling rate by defining a new frequency variable cos = 27cfs in radians/s. The periodicity of discrete frequency representations, with a period of (o, = 27cfs radians/s or I", Hz, is indicated for the z-plane in FIG. 14. Where a 'walk along the je.) frequency axis on the s-plane could take us to infinity in either direction, a trip on the co frequency path on the z-plane leads us in circles (on the unit circle). FIG. 
14 shows us that only the -πf_s to +πf_s radians/s frequency range for ω can be accounted for on the z-plane, and this is another example of the universal periodicity of the discrete frequency domain. (Of course, the -πf_s to +πf_s radians/s range corresponds to a cyclic frequency range of -f_s/2 to +f_s/2.) With the perimeter of the unit circle being z = e^jω, later, we'll show exactly how to substitute e^jω for z in a filter's H(z) transfer function, giving us the filter's frequency response. 3.1 One of the most important characteristics of the z-plane is that the region of filter stability is mapped to the inside of the unit circle on the z-plane. Given the H(z) transfer function of a digital filter, we can examine that function's pole locations to determine filter stability. If all poles are located inside the unit circle, the filter will be stable. On the other hand, if any pole is located outside the unit circle, the filter will be unstable. FIG. 15 shows the z-plane pole locations of several example z-domain transfer functions and their corresponding discrete time-domain impulse responses. It's a good idea for the reader to compare the z-plane and discrete-time responses of FIG. 15 with the s-plane and continuous-time responses of FIG. 11. The y(n) outputs in FIGs. 15(d) and (e) show examples of how unstable filter outputs increase in amplitude, as time passes, whenever their x(n) inputs are nonzero. To avoid this situation, any IIR digital filter that we design should have an H(z) transfer function with all of its individual poles inside the unit circle. Like a chain that's only as strong as its weakest link, an IIR filter is only as stable as its least stable pole. The ω_o oscillation frequency of the impulse responses in FIGs. 15(c) and (e) is, of course, proportional to the angle of the conjugate pole pairs from the z_real axis, with ω_o radians/s corresponding to f_o = ω_o/2π Hz. 
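The stable-versus-unstable pole behavior described above is easy to demonstrate with the simplest possible IIR filter, a single real pole (a sketch with illustrative pole values, not an example from the text):

```python
import numpy as np

def single_pole_impulse_response(pole, n_samples=50):
    """Impulse response of H(z) = 1/(1 - pole*z^-1), i.e. h(n) = pole^n."""
    x = np.zeros(n_samples)
    x[0] = 1.0                         # unit impulse input
    y = np.zeros(n_samples)
    for n in range(n_samples):
        y[n] = x[n] + (pole * y[n - 1] if n > 0 else 0.0)
    return y

stable = single_pole_impulse_response(0.9)     # pole inside the unit circle
unstable = single_pole_impulse_response(1.1)   # pole outside the unit circle

assert abs(stable[-1]) < abs(stable[0])        # output decays toward zero
assert abs(unstable[-1]) > abs(unstable[0])    # output grows without bound
```

Even though the input is all zeros after the first sample, the feedback keeps producing nonzero outputs, which is exactly the "infinite impulse response" behavior; whether those outputs die out or blow up is set entirely by the pole magnitude.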
Because the intersection of the -z_real axis and the unit circle, point z = -1, corresponds to π radians (or πf_s radians/s = f_s/2 Hz), the ω_o angle of π/4 in FIG. 15 means that f_o = f_s/8 and our y(n) will have eight samples per cycle of f_o.
3.2 We have one last concept to consider before we can add the z-transform to our collection of digital signal processing tools. We need to determine what the time delay operation in FIG. 3 is relative to the z-transform. To do this, assume we have a sequence x(n) whose z-transform is X(z) and a sequence y(n) = x(n-1) whose z-transform is Y(z), as shown in FIG. 16. The z-transform of y(n) is, by definition, (eqn. 18) (eqn. 19) (eqn. 20) Thus, the effect of a single unit of time delay is to multiply the z-transform by z^-1. Interpreting a unit time delay to be equivalent to the z^-1 operator leads us to the relationship shown in FIG. 17, where we can say X(z)z^0 = X(z) is the z-transform of x(n), X(z)z^-1 is the z-transform of x(n) delayed by one sample, X(z)z^-2 is the z-transform of x(n) delayed by two samples, and X(z)z^-k is the z-transform of x(n) delayed by k samples. So a transfer function of z^-k is equivalent to a delay of kt_s seconds from the instant when t = 0, where t_s is the period between data samples, or one over the sample rate: specifically, t_s = 1/f_s. Because a delay of one sample is equivalent to the factor z^-1, the unit time delay symbol used in FIGs. 2 and 3 is usually indicated by the z^-1 operator. Let's pause for a moment and consider where we stand so far. Our acquaintance with the Laplace transform with its s-plane, the concept of stability based on H(s) pole locations, the introduction of the z-transform with its z-plane poles, and the concept of the z^-1 operator signifying a single unit of time delay has led us to our goal: the ability to inspect an IIR filter difference equation or filter structure and immediately write the filter's z-domain transfer function H(z). Accordingly, by evaluating an IIR filter's H(z) transfer function appropriately, we can determine the filter's frequency response and its stability. With those ambitious thoughts in mind, let's develop the z-domain equations we need to analyze IIR filters.
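The delay property Y(z) = z^-1 X(z) can be verified numerically on the unit circle. In this sketch (illustrative x(n) values, not from the text), delaying a sequence by one sample multiplies its transform by e^-jω at every frequency:

```python
import numpy as np

def z_transform_on_circle(seq, w):
    """Evaluate the z-transform of a finite sequence at z = e^{jw}."""
    return sum(seq[n] * np.exp(-1j * w * n) for n in range(len(seq)))

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
y = np.concatenate(([0.0], x))          # y(n) = x(n-1): one-sample delay

w = np.linspace(-np.pi, np.pi, 256)
X = z_transform_on_circle(x, w)
Y = z_transform_on_circle(y, w)

# A unit time delay multiplies the z-transform by z^-1 = e^{-jw}.
assert np.allclose(Y, np.exp(-1j * w) * X)
```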
Using the relationships of FIG. 17, we redraw FIG. 3 as a general Mth-order IIR filter using the z^-1 operator as shown in FIG. 18. (In hardware, those z^-1 operations are memory locations holding successive filter input and output sample values. When implementing an IIR filter in a software routine, the z^-1 operation merely indicates sequential memory locations where input and output sequences are stored.) The IIR filter structure in FIG. 18 is often called the Direct Form I structure. The time-domain difference equation describing the general Mth-order IIR filter, having N feedforward stages and M feedback stages, in FIG. 18 is (eqn. 21) In the z-domain, that IIR filter's output can be expressed by (eqn. 22) where Y(z) and X(z) represent the z-transforms of y(n) and x(n). Look Eqs. (eqn. 21) and (eqn. 22) over carefully and see how the unit time delays translate to negative powers of z in the z-domain expression. A more compact form for Y(z) is (eqn. 23) OK, now we've arrived at the point where we can describe the transfer function of a general IIR filter. Rearranging Eq. (eqn. 23) to collect like terms, (eqn. 24) Finally, we define the filter's z-domain transfer function as H(z) = Y(z)/X(z), where H(z) is given by (eqn. 25) (Just like Laplace transfer functions, the order of our z-domain transfer function is defined by the largest exponential order of z in the denominator, in this case M.) Equation (eqn. 25) tells us all we need to know about an IIR filter. We can evaluate the denominator of Eq. (eqn. 25) to determine the positions of the filter's poles on the z-plane, indicating the filter's stability. Remember, now, just as the Laplace transfer function H(s) in Eq. (eqn. 9) was a complex-valued surface on the s-plane, H(z) is a complex-valued surface above, or below, the z-plane. The intersection of that H(z) surface and the perimeter of a cylinder representing the z = e^jω unit circle is the filter's complex frequency response. 
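The Direct Form I difference equation of Eq. (eqn. 21) translates directly into a double loop over the b(k) feedforward and a(k) feedback coefficients. This sketch uses the text's sign convention, in which the a(k) feedback terms are added; the feedforward values are illustrative (only the feedback pair comes from the example later in this section), and scipy's `lfilter` is used only as an independent cross-check:

```python
import numpy as np
from scipy import signal

def direct_form_1(b, a, x):
    """Eq. (21): y(n) = sum_k b(k)x(n-k) + sum_k a(k)y(n-k).
    Text's convention: feedback terms ADDED, so the H(z)
    denominator is 1 - sum_k a(k) z^-k."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(len(b)):                 # feedforward stages
            if n - k >= 0:
                y[n] += b[k] * x[n - k]
        for k in range(1, len(a) + 1):          # feedback stages
            if n - k >= 0:
                y[n] += a[k - 1] * y[n - k]
    return y

b = [0.0605, 0.121, 0.0605]   # feedforward coefficients (illustrative)
a = [1.194, -0.436]           # feedback coefficients, text's sign convention

x = np.random.default_rng(0).standard_normal(100)
y = direct_form_1(b, a, x)

# scipy uses the OPPOSITE feedback-sign convention (denominator
# coefficients listed directly), so we negate a(k) before the call.
y_ref = signal.lfilter(b, [1.0, -a[0], -a[1]], x)
assert np.allclose(y, y_ref)
```

The sign flip in the cross-check is exactly the a(k) sign-reversal pitfall the text warns about shortly.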
This means that substituting e^jω for z in Eq. (eqn. 25)'s transfer function gives us the expression for the filter's H_IIR(ω) frequency response as (eqn. 26) Let's alter the form of Eq. (eqn. 26) to obtain more useful expressions for H_IIR(ω)'s frequency magnitude and phase responses. Because a typical IIR filter frequency response H_IIR(ω) is the ratio of two complex functions, we can express H_IIR(ω) in its equivalent rectangular form as (eqn. 27), (eqn. 28) It's usually easier and, certainly, more useful to consider the complex frequency response expression in terms of its magnitude and phase. Let's do this by representing the numerator and denominator in Eq. (eqn. 28) as two complex functions of radian frequency ω. Calling the numerator of Eq. (eqn. 28) Num(ω), then, (eqn. 29) (eqn. 30)
(eqn. 31), (eqn. 32) These Num(ω) and Den(ω) definitions allow us to represent H_IIR(ω) in the simpler forms of (eqn. 33), (eqn. 33') Given the form in Eq. (eqn. 33) and the rules for dividing one complex number by another, provided by Eqs. (A-2) and (A-19'), the frequency magnitude response of a general IIR filter is (eqn. 34) Furthermore, the filter's phase response ø(ω) is the phase of the numerator minus the phase of the denominator, or (eqn. 35) To reiterate our intent here, we've gone through the above algebraic gymnastics to develop expressions for an IIR filter's frequency magnitude response |H_IIR(ω)| and phase response ø_IIR(ω) in terms of the filter coefficients in Eqs. (eqn. 30) and (eqn. 32). Shortly, we'll use these expressions to analyze an actual IIR filter.
Pausing a moment to gather our thoughts, we realize that we can use Eqs. (eqn. 34) and (eqn. 35) to compute the magnitude and phase response of IIR filters as a function of the frequency ω. And again, just what is ω? It's the normalized radian frequency represented by the angle around the unit circle in FIG. 13, having a range of -π to +π. In terms of a discrete sampling frequency ω_s measured in radians/s, from FIG. 14, we see that ω covers the range -ω_s/2 to +ω_s/2. In terms of our old friend f_s Hz, Eqs. (eqn. 34) and (eqn. 35) apply over the equivalent frequency range of -f_s/2 to +f_s/2 Hz. So, for example, if digital data is arriving at the filter's input at a rate of f_s = 1000 samples/s, we could use Eq. (eqn. 34) to plot the filter's frequency magnitude response over the frequency range of -500 Hz to +500 Hz. Although the equations describing the transfer function H_IIR(ω), its magnitude response |H_IIR(ω)|, and phase response ø(ω) look somewhat complicated at first glance, let's illustrate their simplicity and utility by analyzing the simple second-order low-pass IIR filter in FIG. 19 whose positive cutoff frequency is ω_s/10. By inspection, we can write the filter's time-domain difference equation as (eqn. 36) or the z-domain expression as (eqn. 37) Using Eq. (eqn. 25), we write the z-domain transfer function H(z) as (eqn. 38) Replacing z with e^jω, we see that the frequency response of our example IIR filter is (eqn. 39) We're almost there. Remembering Euler's equations and that cos(0) = 1 and sin(0) = 0, we can write the rectangular form of H_IIR(ω) as (eqn. 40) Equation (eqn. 40) is what we're after here, and if we calculate its magnitude over the frequency range of -π ≤ ω ≤ π, we get the |H_IIR(ω)| shown as the solid curve in FIG. 20(a). For comparison purposes we also show a 5-tap low-pass FIR filter magnitude response in FIG. 20(a). 
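The |H_IIR(ω)| = |Num(ω)|/|Den(ω)| recipe of Eqs. (eqn. 29)-(eqn. 34) is a few lines of code. The sketch below uses the feedback coefficients implied by the example's denominator (z² - 1.194z + 0.436, given shortly in Eq. (eqn. 41)); the feedforward coefficients and the 1000 samples/s rate are illustrative assumptions chosen for unity gain at 0 Hz:

```python
import numpy as np
from scipy import signal

a = [1.194, -0.436]            # feedback, text's added-feedback convention
b = [0.0605, 0.121, 0.0605]    # feedforward (illustrative; unity DC gain)

fs = 1000.0                    # assumed sample rate, samples/s
f = np.linspace(-fs / 2, fs / 2, 1001)   # -500 Hz ... +500 Hz
w = 2 * np.pi * f / fs                   # normalized ω, -π ... +π

# Num(ω) and Den(ω) as in Eqs. (29)-(32), then |H| = |Num|/|Den| (Eq. 34).
num = sum(bk * np.exp(-1j * w * k) for k, bk in enumerate(b))
den = 1 - sum(ak * np.exp(-1j * w * (k + 1)) for k, ak in enumerate(a))
mag = np.abs(num) / np.abs(den)

# Cross-check against scipy (note the feedback-sign reversal).
_, H = signal.freqz(b, [1.0, -a[0], -a[1]], worN=w)
assert np.allclose(mag, np.abs(H))
assert abs(mag[len(mag) // 2] - 1.0) < 1e-6   # unity gain at 0 Hz
```

The phase of Eq. (eqn. 35) is likewise `np.angle(num) - np.angle(den)`, the numerator phase minus the denominator phase.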
Although both filters require the same computational workload, five multiplications per filter output sample, the low-pass IIR filter has the superior frequency magnitude response. Notice the steeper roll-off and lower sidelobes of the IIR filter relative to the FIR filter. A word of warning here: it's easy to reverse some of the signs for the terms in the denominator of Eq. (eqn. 40), so be careful if you attempt these calculations at home. Some authors avoid this problem by showing the a(k) coefficients in FIG. 18 as negative values, so that the summation in the denominator of Eq. (eqn. 25) is always positive. Moreover, some commercial IIR design software routines provide a(k) coefficients whose signs must be reversed before they can be applied to the IIR structure in FIG. 18. (If, while using software routines to design or analyze IIR filters, your results are very strange or unexpected, the first thing to do is reverse the signs of the a(k) coefficients and see if that doesn't solve the problem.) [To make this IIR and FIR filter comparison valid, the coefficients used for both filters were chosen so that each filter would approximate the ideal low-pass frequency response shown in Figure 5-17(a).] The solid curve in FIG. 20(b) is our IIR filter's ø_IIR(ω) phase response. Notice its nonlinearity relative to the FIR filter's phase response. (Remember, now, we're only interested in the filter phase responses over the low-pass filters' passband, so those phase discontinuities for the FIR filter are of no consequence.) Phase nonlinearity is inherent in IIR filters and, based on the ill effects of nonlinear phase introduced in the group delay discussion of Section 5.8, we must carefully consider its implications whenever we decide to use an IIR filter instead of an FIR filter in any given application. 
The question any filter designer must ask and answer is "How much phase distortion can I tolerate to realize the benefits of the reduced computational load and high data rates afforded by IIR filters?" To determine our IIR filter's stability, we must find the roots of Eq. (eqn. 38) H(z)'s second-order denominator polynomial. Those roots are the poles of H(z); if their magnitudes are less than one, the filter will be stable. To determine the two poles z_p1 and z_p2, first we multiply H(z) by z²/z² to obtain polynomials with positive exponents. After doing so, H(z)'s denominator is H(z) denominator: z² - 1.194z + 0.436. (eqn. 41) Factoring Eq. (eqn. 41) using the quadratic equation from Eq. (eqn. 15), we obtain the factors H(z) denominator: (z + z_p1)(z + z_p2) = (z - 0.597 + j0.282)(z - 0.597 - j0.282). (eqn. 42) So when z = -z_p1 = 0.597 - j0.282 or z = -z_p2 = 0.597 + j0.282, the filter's H(z) transfer function's denominator is zero and H(z) is infinite. We show the 0.597 + j0.282 and 0.597 - j0.282 pole locations, along with a crude depiction of the |H(z)| surface, in FIG. 21(a). Because those pole locations are inside the unit circle (their magnitudes are less than one), our example IIR filter is stable. While we're considering the |H(z)| surface, we can show its intersection with the unit circle as the bold curve in FIG. 21(b). Because z = re^jω, with r restricted to unity, then z = e^jω and the bold curve is |H(z)| evaluated at |z| = 1, that is |H(ω)|, representing the low-pass filter's frequency magnitude response on the z-plane. The |H(ω)| curve corresponds to the |H_IIR(ω)| in FIG. 20(a).
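Rather than working the quadratic formula by hand, the pole locations of Eq. (eqn. 41) can be found numerically, confirming the values quoted in Eq. (eqn. 42) and the stability conclusion:

```python
import numpy as np

# H(z) denominator from Eq. (41): z^2 - 1.194z + 0.436.
poles = np.roots([1.0, -1.194, 0.436])

# The roots are the conjugate pair 0.597 ± j0.282 quoted in Eq. (42).
assert np.allclose(poles.real, 0.597, atol=1e-3)
assert np.allclose(sorted(poles.imag), [-0.282, 0.282], atol=1e-3)

# Both magnitudes are below one, so the filter is stable.
assert np.all(np.abs(poles) < 1.0)
```

For a conjugate pair like this, the common magnitude is simply the square root of the constant coefficient (√0.436 ≈ 0.66), a handy quick stability check for second-order sections.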
3.3 The Direct Form I structure of the IIR filter in FIG. 18 can be converted to alternate forms. It's easy to explore this idea by assuming that there are an equal number of feedforward and feedback stages, letting M = N = 2 as in FIG. 22(a), and thinking of the feedforward and feedback stages as two separate filters. Because both halves of the filter are linear, we can swap them, as shown in FIG. 22(b), with no change in the final y(n) output. The two identical delay paths in FIG. 22(b) provide the motivation for this reorientation. Because the sequence g(n) is being shifted down along both delay paths in FIG. 22(b), we can eliminate one of the paths and arrive at the simplified Direct Form II filter structure shown in FIG. 22(c), where only half the delay storage registers are required compared to the Direct Form I structure. Another popular IIR structure is the transposed form of the Direct Form II filter. We obtain a transposed form by starting with the Direct Form II filter, converting its signal nodes to adders, converting its adders to signal nodes, reversing the direction of its arrows, and swapping x(n) and y(n). (The transposition steps can also be applied to FIR filters.) Following these steps yields the transposed Direct Form II structure given in FIG. 22(d). The filters in FIG. 22 all have the same performance just so long as infinite-precision arithmetic is used. However, using quantized filter coefficients, and with truncation or rounding being used to combat binary overflow errors, the various filters exhibit different noise and stability characteristics. In fact, the transposed Direct Form II structure was developed because it has improved behavior over the Direct Form II structure when integer arithmetic is used. Common consensus among IIR filter designers is that the Direct Form I filter has the most resistance to coefficient quantization and stability problems. We'll revisit these finite-precision arithmetic issues in Section 7. 
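The Direct Form I / Direct Form II equivalence under infinite-precision arithmetic can be demonstrated directly. This sketch (illustrative second-order coefficients, text's added-feedback sign convention) runs the same input through both structures; the Direct Form II version computes the shared intermediate sequence g(n) first, then the feedforward taps, using a single delay line:

```python
import numpy as np

def direct_form_1(b, a, x):
    """FIG. 18 structure: separate delay lines for x(n) and y(n)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(3):
            if n - k >= 0:
                y[n] += b[k] * x[n - k]
        for k in (1, 2):
            if n - k >= 0:
                y[n] += a[k - 1] * y[n - k]
    return y

def direct_form_2(b, a, x):
    """FIG. 22(c) structure: one shared delay line g(n)."""
    g = np.zeros(3)                    # g(n), g(n-1), g(n-2)
    y = np.zeros(len(x))
    for n in range(len(x)):
        g[0] = x[n] + a[0] * g[1] + a[1] * g[2]         # feedback half first
        y[n] = b[0] * g[0] + b[1] * g[1] + b[2] * g[2]  # then feedforward half
        g[2], g[1] = g[1], g[0]        # shift the single delay line
    return y

b = [0.0605, 0.121, 0.0605]    # illustrative coefficients
a = [1.194, -0.436]
x = np.random.default_rng(1).standard_normal(50)

# With floating-point arithmetic the structures match; they differ
# only under quantized (fixed-point) arithmetic.
assert np.allclose(direct_form_1(b, a, x), direct_form_2(b, a, x))
```

Note the Direct Form II version stores only three g values instead of separate x and y histories, the register saving the text describes.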
By the way, because of the feedback nature of IIR filters, they're often referred to as recursive filters. Similarly, FIR filters are often called nonrecursive filters. A common misconception is that all recursive filters are IIR. This is not true; FIR filters can be designed with recursive structures. So, the terminology recursive and nonrecursive should be applied to a filter's structure, and the terms IIR and FIR should be used to describe the duration of the filter's impulse response. [Due to its popularity, throughout the rest of this section we'll use the phrase analog filter to designate those hardware filters made up of resistors, capacitors, and (maybe) operational amplifiers.] Now that we have a feeling for the performance and implementation structures of IIR filters, let's briefly introduce three filter design techniques. These IIR design methods go by the impressive names of impulse invariance, bilinear transform, and optimized methods. The first two methods use analytical (pencil and paper algebra) filter design techniques to approximate continuous analog filters. Because analog filter design methods are very well understood, designers can take advantage of an abundant variety of analog filter design techniques to design, say, a digital IIR Butterworth filter with its very flat passband response, or perhaps go with a Chebyshev filter with its fluctuating passband response and sharper passband-to-stopband cutoff characteristics. The optimized methods (by far the most popular way of designing IIR filters) comprise linear algebra algorithms available in commercial filter design software packages. The impulse invariance and bilinear transform design methods are somewhat involved, so a true DSP beginner is justified in skipping those subjects upon first reading this guide. However, those filter design topics may well be valuable sometime in your future as your DSP knowledge, experience, and challenges grow.
4. IMPULSE INVARIANCE IIR FILTER DESIGN METHOD The impulse invariance method of IIR filter design is based upon the notion that we can design a discrete filter whose time-domain impulse response is a sampled version of the impulse response of a continuous analog filter. If that analog filter (often called the prototype filter) has some desired frequency response, then our IIR filter will yield a discrete approximation of that desired response. The impulse response equivalence of this design method is depicted in FIG. 23, where we use the conventional notation of δ to represent an impulse function and h_c(t) is the analog filter's impulse response. We use the subscript "c" in FIG. 23(a) to emphasize the continuous nature of the analog filter. FIG. 23(b) illustrates the definition of the discrete filter's impulse response: the filter's time-domain output sequence when the input is a single unity-valued sample (impulse) preceded and followed by all zero-valued samples. Our goal is to design a digital filter whose impulse response is a sampled version of the analog filter's continuous impulse response. Implied in the correspondence of the continuous and discrete impulse responses is the property that we can map each pole on the s-plane for the analog filter's H_c(s) transfer function to a pole on the z-plane for the discrete IIR filter's H(z) transfer function. What designers have found is that the impulse invariance method does yield useful IIR filters, as long as the sampling rate is high relative to the bandwidth of the signal to be filtered. In other words, IIR filters designed using the impulse invariance method are susceptible to aliasing problems because practical analog filters cannot be perfectly band-limited. Aliasing will occur in an IIR filter's frequency response as shown in FIG. 24. FIG. 
24 Aliasing in the impulse invariance design method: (a) prototype analog filter magnitude response; (b) replicated magnitude responses where H_IIR(ω) is the discrete Fourier transform of h(n) = h_c(nt_s); (c) potential resultant IIR filter magnitude response with aliasing effects. From what we've learned in Section 2 about the spectral replicating effects of sampling, if FIG. 24(a) is the spectrum of the continuous h_c(t) impulse response, then the spectrum of the discrete h_c(nt_s) sample sequence is the replicated spectra in FIG. 24(b). In FIG. 24(c) we show the possible effect of aliasing, where the dashed curve is a desired H_IIR(ω) frequency magnitude response. However, the actual frequency magnitude response, indicated by the solid curve, can occur when we use the impulse invariance design method. For this reason, we prefer to make the sample frequency f_s as large as possible to minimize the overlap between the primary frequency response curve and its replicated images spaced at multiples of ±f_s Hz. To see how aliasing can affect IIR filters designed with this method, let's list the necessary impulse invariance design steps and then go through a filter design example. [In a low-pass filter design, for example, the filter type (Chebyshev, Butterworth, elliptic), filter order (number of poles), and the cutoff frequency are parameters to be defined in this step.] There are two different methods for designing IIR filters using impulse invariance. The first method, which we'll call Method 1, requires that an inverse Laplace transform as well as a z-transform be performed. The second impulse invariance design technique, Method 2, uses a direct substitution process to avoid the inverse Laplace and z-transformations, at the expense of needing the partial fraction expansion algebra necessary to handle polynomials. Both of these methods seem complicated when described in words, but they're really not as difficult as they sound. 
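The aliasing-versus-sample-rate tradeoff above can be quantified with the simplest prototype, a single-pole analog low-pass filter (an illustrative prototype chosen for this sketch, not the text's example). The impulse-invariant digital filter's magnitude response drifts away from the analog prototype near the band edge, and raising f_s shrinks that error:

```python
import numpy as np

# Single-pole analog prototype H_c(s) = a/(s + a): unity DC gain,
# impulse response h_c(t) = a*e^{-a t}. (Illustrative prototype.)
a = 2 * np.pi * 50.0            # 50 Hz analog cutoff

def iir_mag_error(fs, f_test=40.0):
    """Magnitude mismatch, at f_test Hz, between the impulse-invariant
    digital filter and the analog prototype for sample rate fs."""
    ts = 1.0 / fs
    w = 2 * np.pi * f_test / fs                       # normalized frequency
    # h(n) = ts*h_c(n*ts)  ->  H(z) = a*ts / (1 - e^{-a ts} z^{-1})
    H_digital = a * ts / (1 - np.exp(-a * ts) * np.exp(-1j * w))
    H_analog = a / (1j * 2 * np.pi * f_test + a)
    return abs(abs(H_digital) - abs(H_analog))

# Raising fs reduces the overlap of the replicated spectra, so the
# digital response tracks the analog prototype more closely.
assert iir_mag_error(fs=8000.0) < iir_mag_error(fs=400.0)
```

The error never reaches zero because e^{-at} is not band-limited; it can only be pushed down by oversampling, which is exactly why the text urges making f_s as large as practical.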
Let's compare the two methods by listing the steps required for each of them. The impulse invariance design Method 1 goes like this: Method 1, Step 1: Design (or have someone design for you) a prototype analog filter with the desired frequency response. The result of this step is a continuous Laplace transfer function H_c(s) expressed as the ratio of two polynomials, such as (eqn. 43) which is the general form of Eq. (eqn. 10) with N < M, and a(k) and b(k) are constants. Method 1, Step 2: Determine the analog filter's continuous time-domain impulse response h_c(t) from the H_c(s) Laplace transfer function. I hope this can be done using Laplace tables as opposed to actually evaluating an inverse Laplace transform equation. Method 1, Step 3: Determine the digital filter's sampling frequency f_s, and calculate the sample period as t_s = 1/f_s. The f_s sampling rate is chosen based on the absolute frequency, in Hz, of the prototype analog filter. Because of the aliasing problems associated with this impulse invariance design method, later, we'll see why f_s should be made as large as practical. Method 1, Step 4: Find the z-transform of the continuous h_c(t) to obtain the IIR filter's z-domain transfer function H(z) in the form of a ratio of polynomials in z. Method 1, Step 5: Substitute the value (not the variable) t_s for the continuous variable t in the H(z) transfer function obtained in Step 4. In performing this step, we are ensuring, as in FIG. 23, that the IIR filter's discrete h(n) impulse response is a sampled version of the continuous filter's h_c(t) impulse response so that h(n) = h_c(nt_s), for 0 ≤ n. Method 1, Step 6: Our H(z) from Step 5 will now be of the general form (eqn. 44) Because the process of sampling the continuous impulse response results in a digital filter frequency response that's scaled by a factor of 1/t_s, many filter designers find it appropriate to include the t_s factor in Eq. (eqn. 44). So we can rewrite Eq. (eqn. 44) as (eqn. 
45) Incorporating the value of t_s in Eq. (eqn. 45), then, makes the IIR filter time-response scaling independent of the sampling rate, and the discrete filter will have the same gain as the prototype analog filter. Method 1, Step 7: Because Eq. (eqn. 44) is in the form of Eq. (eqn. 25), by inspection, we can express the filter's time-domain difference equation in the general form of Eq. (eqn. 21) as (eqn. 46) Choosing to incorporate t_s, as in Eq. (eqn. 45), to make the digital filter's gain equal to the prototype analog filter's gain by multiplying the b(k) coefficients by the sample period t_s leads to an IIR filter time-domain expression of the form (eqn. 47) Notice how the signs changed for the a(k) coefficients from Eqs. (eqn. 44) and (eqn. 45) to Eqs. (eqn. 46) and (eqn. 47). These sign changes always seem to cause problems for beginners, so watch out. Also, keep in mind that the time-domain expressions in Eq. (eqn. 46) and Eq. (eqn. 47) apply only to the filter structure in FIG. 18. The a(k) and b(k), or t_s·b(k), coefficients, however, can be applied to the improved IIR structure shown in FIG. 22 to complete our design. Before we go through an actual example of this design process, let's discuss the other impulse invariance design method. [Some authors have chosen to include the t_s factor in the discrete h(n) impulse response in the above Step 4, that is, make h(n) = t_s·h_c(nt_s). The final result of this, of course, is the same as that obtained by including t_s as described in Step 5.] FIG. 25 Mathematical flow of the impulse invariance design Method 2. The impulse invariance design Method 2, also called the standard z-transform method, takes a different approach. It mathematically partitions the prototype analog filter into multiple single-pole continuous filters and then approximates each one of those by a single-pole digital filter. The set of M single-pole digital filters is then algebraically combined to form an M-pole, Mth-order IIR filter. 
This process of breaking the analog-to-discrete filter approximation into manageable pieces is shown in FIG. 25. The steps necessary to perform an impulse invariance Method 2 design are: Method 2, Step 1: Obtain the Laplace transfer function H_c(s) for the prototype analog filter in the form of Eq. (eqn. 43). (Same as Method 1, Step 1.) Method 2, Step 2: Select an appropriate sampling frequency f_s, and calculate the sample period as t_s = 1/f_s. (Same as Method 1, Step 3.) Method 2, Step 3: Express the analog filter's Laplace transfer function H_c(s) as the sum of single-pole filters. This requires us to use partial fraction expansion methods to express the ratio of polynomials in Eq. (eqn. 43) in the form of (eqn. 48) where the individual A_k factors are constants and the kth pole is located at -p_k on the s-plane. We'll denote the kth single-pole analog filter as H_k(s), or (eqn. 49) Method 2, Step 4: Substitute 1 - e^(-p_k·t_s) z^-1 for s + p_k in Eq. (eqn. 48). This mapping of each H_k(s) pole, located at s = -p_k on the s-plane, to the z = e^(-p_k·t_s) location on the z-plane is how we approximate the impulse response of each single-pole analog filter by a single-pole digital filter. (The reader can find the derivation of this 1 - e^(-p_k·t_s) z^-1 substitution, illustrated in our FIG. 25, in references [14] through [16].) So, the kth analog single-pole filter H_k(s) is approximated by a single-pole digital filter whose z-domain transfer function is (eqn. 50) The final combined discrete filter transfer function H(z) is the sum of the single-pole discrete filters, or (eqn. 51) Keep in mind that the above H(z) is not a function of time. The t_s factor in Eq. (eqn. 51) is a constant equal to the discrete-time sample period. Method 2, Step 5: Calculate the z-domain transfer function of the sum of the M single-pole digital filters in the form of a ratio of two polynomials in z. Because the H(z) in Eq. (eqn. 
51) will be a series of fractions, we'll have to combine those fractions over a common denominator to get a single ratio of polynomials in the familiar form of (eqn. 52) Method 2, Step 6: Just as in Method 1, Step 6, by inspection, we can express the filter's time-domain equation in the general form of (eqn. 53) Again, notice the a(k) coefficient sign changes from Eq. (eqn. 52) to Eq. (eqn. 53). As described in Method 1, Steps 6 and 7, if we choose to make the digital filter's gain equal to the prototype analog filter's gain by multiplying the b(k) coefficients by the sample period t_s, then the IIR filter's time-domain expression will be in the form (eqn. 54) yielding a final H(z) z-domain transfer function of (eqn. 54') Finally, we can implement the improved IIR structure shown in FIG. 22 using the a(k) and b(k) coefficients from Eq. (eqn. 53) or the a(k) and t_s·b(k) coefficients from Eq. (eqn. 54). To provide a more meaningful comparison between the two impulse invariance design methods, let's dive in and go through an IIR filter design example using both methods. 4.1 Assume that we need to design an IIR filter that approximates a second-order Chebyshev prototype analog low-pass filter whose passband ripple is 1 dB. Our f_s sampling rate is 100 Hz (t_s = 0.01 s), and the filter's 1 dB cutoff frequency is 20 Hz. Our prototype analog filter will have a frequency magnitude response like that shown in FIG. 26. Given the above filter requirements, assume that the analog prototype filter design effort results in the H_c(s) Laplace transfer function of (eqn. 55) It's the transfer function in Eq. (eqn. 55) that we intend to approximate with our discrete IIR filter. To find the analog filter's impulse response, we'd like to get H_c(s) into a form that allows us to use Laplace transform tables to find h_c(t).
Searching through systems analysis textbooks, we find the following Laplace transform pair: (eqn. 56) Our intent, then, is to modify Eq. (eqn. 55) to get it into the form on the left side of Eq. (eqn. 56). We do this by realizing that the Laplace transform expression in Eq. (eqn. 56) can be rewritten as (eqn. 57) If we set Eq. (eqn. 55) equal to the right side of Eq. (eqn. 57), we can solve for α, ω, and A. Doing that, (eqn. 59) (eqn. 60) (eqn. 61) [...] OK, hang in there; we're almost finished. Here are the final steps of Method 1. Because of the transfer function H(z) = Y(z)/X(z), we can cross-multiply the denominators to rewrite the bottom line of Eq. (eqn. 67) as (eqn. 68) By inspection of Eq. (eqn. 68), we can now get the time-domain expression for our IIR filter. Performing Method 1, Steps 6 and 7, we multiply the x(n-1) coefficient by the sample period value of t_s = 0.01 to allow for proper scaling as … and there we (finally) are. The coefficients from Eq. (eqn. 69) are what we use in implementing the improved IIR structure shown in FIG. 22 to approximate the original second-order Chebyshev analog low-pass filter. Let's see if we get the same result if we use the impulse invariance design Method 2 to approximate the example prototype analog filter. 4.2 Given the original prototype filter's Laplace transfer function as (eqn. 70) and the value of t_s = 0.01 for the sample period, we're ready to proceed with Method 2's Step 3. To express H_c(s) as the sum of single-pole filters, we'll have to factor the denominator of Eq. (eqn. 70) and use partial fraction expansion methods. For convenience, let's start by replacing the constants in Eq. (eqn. 70) with variables in the form of (eqn. 71) where b = 137.94536 and c = 17410.145. Next, using Eq. (eqn. 15) with a = 1, we can factor the quadratic denominator of Eq. (eqn. 71) into (eqn. 72) If we substitute the values for b and c in Eq. (eqn. 72), we'll find that the quantity under the radical sign is negative. 
This means that the factors in the denominator of Eq. (eqn. 72) are complex. Because we have lots of algebra ahead of us, let's replace the radicals in Eq. (eqn. 72) with the imaginary term jR, where j = √-1 and R = √|(b² - 4c)/4|, such that (eqn. 73) OK, partial fraction expansion methods allow us to partition Eq. (eqn. 73) into two separate fractions of the form (eqn. 74) where the K1 constant can be found to be equal to jc/2R and constant K2 is the complex conjugate of K1, or K2 = -jc/2R. (To learn the details of partial fraction expansion methods, the interested reader should investigate standard college algebra or engineering mathematics textbooks.) Thus, H_c(s) can be of the form in Eq. (eqn. 48), or (eqn. 75) We can see from Eq. (eqn. 75) that our second-order prototype filter has two poles, one located at p1 = -b/2 - jR and the other at p2 = -b/2 + jR. We're now ready to map those two poles from the s-plane to the z-plane as called out in Method 2, Step 4. Making our 1 - e^(-p_k·t_s) z^-1 substitution for the s + p_k terms in Eq. (eqn. 75), we have the following expression for the z-domain single-pole digital filters, (eqn. 76) Our objective in Method 2, Step 5 is to massage Eq. (eqn. 76) into the form of Eq. (eqn. 52), so that we can determine the IIR filter's feedforward and feedback coefficients. Putting both fractions in Eq. (eqn. 76) over a common denominator gives us (eqn. 77) Collecting like terms in the numerator and multiplying out the denominator gives us (eqn. 78) Factoring the exponentials and collecting like terms of powers of z in Eq. (eqn. 78), (eqn. 79) Continuing to simplify our H(z) expression by factoring out the real part of the exponentials, (eqn. 80) We now have H(z) in a form with all the like powers of z combined into single terms, and Eq. (eqn. 80) looks something like the desired form of Eq. (eqn. 52). Knowing that the final coefficients of our IIR filter must be real numbers, the question is "What do we do with those imaginary j terms in Eq. 
(eqn. 80)?" Once again, Euler to the rescue. Using Euler's equations for sinusoids, we can eliminate the imaginary exponentials, and Eq. (eqn. 80) becomes ... If we plug the values c = 17410.145, b = 137.94536, R = 112.48517, and ts = 0.01 into Eq. (eqn. 81), we get the following IIR filter transfer function: (eqn. 82) Because the transfer function H(z) = Y(z)/X(z), we can again cross-multiply the denominators to rewrite Eq. (eqn. 82) as (eqn. 83) Now we take the inverse z-transform of Eq. (eqn. 83), by inspection, to get the time-domain expression for our IIR filter as (eqn. 84) One final step remains. To force the IIR filter gain to equal the prototype analog filter's gain, we multiply the x(n-1) coefficient by the sample period ts as suggested in Method 2, Step 6. In this case, there's only one x(n) coefficient, giving us (eqn. 85) which compares well with the Method 1 result in Eq. (eqn. 69). (Isn't it comforting to work a problem two different ways and get the same result?) FIG. 27 shows, in graphical form, the result of our IIR design example. The s-plane pole locations of the prototype filter and the z-plane poles of the IIR filter are shown in FIG. 27(a). Because the s-plane poles are to the left of the origin and the z-plane poles are inside the unit circle, both the prototype analog filter and the discrete IIR filter are stable. We find the prototype filter's s-plane pole locations by evaluating Hc(s) in Eq. (eqn. 75). When s = −b/2 − jR, the denominator of the first term in Eq. (eqn. 75) becomes zero and Hc(s) is infinitely large. That s = −b/2 − jR value is the location of the lower s-plane pole in FIG. 27(a). When s = −b/2 + jR, the denominator of the second term in Eq. (eqn. 75) becomes zero, and s = −b/2 + jR is the location of the second s-plane pole. The IIR filter's z-plane pole locations are found from Eq. (eqn. 76). If we multiply the numerators and denominators of Eq. (eqn. 76) by z, (eqn. 86)
In Eq. (eqn. 86), when z is set equal to e^((−b/2 − jR)·ts), the denominator of the first term in Eq. (eqn. 86) becomes zero and H(z) becomes infinitely large. That value of z, (eqn. 87) defines the location of the lower z-plane pole in FIG. 27(a). Specifically, this lower pole is located at a distance of e^(−b·ts/2) = 0.5017 from the origin, at an angle of θ = −R·ts radians, or −64.45°. Being conjugate poles, the upper z-plane pole is located the same distance from the origin at an angle of θ = R·ts radians, or +64.45°. FIG. 27(b) illustrates the frequency magnitude response of the IIR filter in Hz. Two different implementations of our IIR filter are shown in FIG. 28. FIG. 28(a) is an implementation of our second-order IIR filter based on the general IIR structure given in FIG. 22, and FIG. 28(b) shows the second-order IIR filter implementation based on the alternate structure from FIG. 21(b). Knowing that the b(0) coefficient on the left side of FIG. 28(b) is zero, we arrive at the simplified structure on the right side of FIG. 28(b). Looking carefully at FIG. 28(a) and the right side of FIG. 28(b), we can see that they are equivalent.
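The pole locations quoted above can be reproduced with a few lines of Python. A minimal sketch, using the impulse invariance pole mapping z = e^(p·ts) on the two s-plane poles from this example, recovers the 0.5017 radius and ±64.45° angles:

```python
import cmath
import math

ts = 0.01              # sample period (fs = 100 Hz)
b = 137.94536
c = 17410.145
R = math.sqrt(abs(b**2 - 4*c) / 4)    # 112.48517...

# s-plane poles of the prototype filter: s = -b/2 -/+ jR.
s_poles = [complex(-b/2, -R), complex(-b/2, +R)]

# Impulse invariance maps each s-plane pole p to a z-plane pole at e^(p*ts).
z_poles = [cmath.exp(p * ts) for p in s_poles]

for z in z_poles:
    print(abs(z), math.degrees(cmath.phase(z)))
# Both poles lie 0.5017 from the origin, at angles of -/+64.45 degrees.
# Because both radii are less than 1, the IIR filter is stable.
```
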
Although both impulse invariance design methods are covered in the literature, we might ask, "Which one is preferred?" There's no definite answer to that question because it depends on the Hc(s) of the prototype analog filter. Although our Method 2 example above required more algebra than Method 1, if the prototype filter's s-domain poles were located only on the real axis, Method 2 would have been much simpler because there would be no complex variables to manipulate. In general, Method 2 is more popular for two reasons: (1) the inverse Laplace and z-transformations, although straightforward in our Method 1 example, can be very difficult for higher-order filters, and (2) unlike Method 1, Method 2 can be coded in a software routine or a computer spreadsheet. [A piece of advice: whenever you encounter any frequency representation (be it a digital filter magnitude response or a signal spectrum) that has nonzero values at ±fs/2, be suspicious, very suspicious, that aliasing is taking place.]
Upon examining the frequency magnitude response in FIG. 27(b), we can see that this second-order IIR filter's roll-off is not particularly steep. This is, admittedly, a simple low-order filter, but its attenuation slope is so gradual that it doesn't appear to be of much use as a low-pass filter. We can also see that the filter's passband ripple is greater than the desired value of 1 dB in FIG. 26. What we'll find is that it's not the low order of the filter that causes its poor performance, but the sampling rate used. That second-order IIR filter response is repeated as the shaded curve in FIG. 29. If we increased the sampling rate to 200 Hz, we'd get the frequency response shown by the dashed curve in FIG. 29. Increasing the sampling rate to 400 Hz results in the much improved frequency response indicated by the solid line in the figure. Sampling rate changes do not affect our filter order or implementation structure. Remember, if we change the sampling rate, only the sample period ts changes in our design equations, resulting in a different set of filter coefficients for each new sampling rate. So we can see that the smaller we make ts (the larger fs), the better the resulting filter when either impulse invariance design method is used, because the replicated spectral overlap indicated in FIG. 24(b) is reduced by the larger fs sampling rate. The bottom line here is that impulse invariance IIR filter design techniques are most appropriate for narrowband filters; that is, low-pass filters whose cutoff frequencies are much smaller than the sampling rate. The second analytical technique for analog filter approximation, the bilinear transform method, alleviates the impulse invariance method's aliasing problems at the expense of what's called frequency warping. Specifically, there's a nonlinear distortion between the prototype analog filter's frequency scale and the frequency scale of the approximating IIR filter designed using the bilinear transform.
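The claim that only ts changes in the design equations is easy to see numerically. A small sketch, evaluating the z-plane pole radius e^(−b·ts/2) from this chapter's example at the three sampling rates just discussed, shows the poles (and hence the coefficients) changing with fs; the 0.5017 value matches the text, and the other two radii are computed the same way:

```python
import math

b = 137.94536   # prototype filter coefficient from the design example

# Only ts = 1/fs changes in the impulse invariance design equations, so the
# z-plane pole radius e^(-b*ts/2) changes with the sampling rate.
for fs in (100.0, 200.0, 400.0):
    ts = 1.0 / fs
    radius = math.exp(-b * ts / 2)
    print(fs, round(radius, 4))
```

At 100 Hz the poles sit at radius 0.5017; raising fs to 200 Hz and 400 Hz moves them to roughly 0.7083 and 0.8416, yielding a different coefficient set for each sampling rate.
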
Let's see why.

5. There's a popular analytical IIR filter design technique known as the bilinear transform method. Like the impulse invariance method, this design technique approximates a prototype analog filter defined by the continuous Laplace transfer function Hc(s) with a discrete filter whose transfer function is H(z). However, the bilinear transform method has great utility because

• it allows us simply to substitute a function of z for s in Hc(s) to get H(z), thankfully eliminating the need for Laplace and z-transformations as well as any necessity for partial fraction expansion algebra;

• it maps the entire s-plane to the z-plane, enabling us to completely avoid the frequency-domain aliasing problems we had with the impulse invariance design method; and

• it induces a nonlinear distortion of H(z)'s frequency axis, relative to the original prototype analog filter's frequency axis, that sharpens the final roll-off of digital low-pass filters.

Don't worry. We'll explain each one of these characteristics and see exactly what they mean to us as we go about designing an IIR filter. If the transfer function of a prototype analog filter is Hc(s), then we can obtain the discrete IIR filter z-domain transfer function H(z) by substituting the following for s in Hc(s): (eqn. 88) where, again, ts is the discrete filter's sampling period (1/fs). Just as in the impulse invariance design method, when using the bilinear transform method, we're interested in where the analog filter's poles end up on the z-plane after the transformation. This s-plane to z-plane mapping behavior is exactly what makes the bilinear transform such an attractive design technique. [The bilinear transform is a technique in the theory of complex variables for mapping a function on the complex plane of one variable to the complex plane of another variable. It maps circles and straight lines to straight lines and circles, respectively. 
] Let's investigate the major characteristics of the bilinear transform's s-plane to z-plane mapping. First we'll show that any pole on the left side of the s-plane will map to the inside of the unit circle in the z-plane. It's easy to show this by solving Eq. (eqn. 88) for z in terms of s. Multiplying Eq. (eqn. 88) by (ts/2)(1 + z⁻¹) and collecting like terms of z leads us to (eqn. 89) If we designate the real and imaginary parts of s as (eqn. 90) where the subscript in the radian frequency ωa signifies analog, Eq. (eqn. 89) becomes ... We see in Eq. (eqn. 91) that z is complex, comprising the ratio of two complex expressions. As such, if we denote z as a magnitude at an angle in the form of z = |z|∠θ, we know that the magnitude of z is given by (eqn. 92) OK, if σ is negative (σ < 0), the numerator of the ratio on the right side of Eq. (eqn. 92) will be less than the denominator, and |z| will be less than 1. On the other hand, if σ is positive (σ > 0), the numerator will be larger than the denominator, and |z| will be greater than 1. This confirms that, when using the bilinear transform defined by Eq. (eqn. 88), any pole located on the left side of the s-plane (σ < 0) will map to a z-plane location inside the unit circle. This characteristic ensures that any stable s-plane pole of a prototype analog filter will map to a stable z-plane pole for our discrete IIR filter. Likewise, any analog filter pole located on the right side of the s-plane (σ > 0) will map to a z-plane location outside the unit circle when using the bilinear transform. This reinforces our notion that, to avoid filter instability during IIR filter design, we should avoid allowing any z-plane poles to lie outside the unit circle. Next, let's show that the jωa axis of the s-plane maps to the perimeter of the unit circle in the z-plane. We can do this by setting σ = 0 in Eq. (eqn. 91) to get (eqn. 93) Here, again, we see in Eq. (eqn. 
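The mapping argument above can be demonstrated numerically. Solving the bilinear transform for z gives z = (1 + s·ts/2)/(1 − s·ts/2), and a short sketch (the specific s values are arbitrary test points) shows the three cases for the sign of σ:

```python
ts = 0.01  # sample period; any positive value exhibits the same behavior

def bilinear_z(s):
    """Solve s = (2/ts)*(1 - z^-1)/(1 + z^-1) for z, per Eq. (eqn. 89)."""
    return (1 + s*ts/2) / (1 - s*ts/2)

# sigma < 0 (left half of the s-plane)  -> |z| < 1 (inside the unit circle)
# sigma > 0 (right half of the s-plane) -> |z| > 1 (outside the unit circle)
# sigma = 0 (the j*omega axis)          -> |z| = 1 (on the unit circle)
for s in (complex(-50, 120), complex(50, 120), complex(0, 120)):
    print(s.real, abs(bilinear_z(s)))
```
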
93) that z is a complex number comprising the ratio of two complex numbers, and we know the magnitude of this z is given by (eqn. 94) The magnitude of z in Eq. (eqn. 94) is always 1. So, as we stated, when using the bilinear transform, the jωa axis of the s-plane maps to the perimeter of the unit circle in the z-plane. However, this frequency mapping from the s-plane to the unit circle in the z-plane is not linear. It's important to know why this frequency nonlinearity, or warping, occurs and to understand its effects. So we shall, by showing the relationship between the s-plane frequency ωa and the z-plane frequency that we'll designate as ωd. If we define z on the unit circle in polar form as z = r·e^(jωd) as we did for FIG. 13, where r is 1 and ωd is the angle, we can substitute z = e^(jωd) in Eq. (eqn. 88) as (eqn. 95) If we show s in its rectangular form and partition the ratio in brackets into half-angle expressions, (eqn. 96) [...] FIG. 30 Nonlinear relationship between the z-domain frequency ωd and the s-domain frequency ωa. Notice that, because tan⁻¹(ωa·ts/2) approaches π/2 as ωa gets large, ωd must then approach twice that value, or π. This means that no matter how large ωa gets, the z-plane ωd will never be greater than π. Remember how we considered FIG. 14 and stated that only the −πfs to +πfs radians/s frequency range for ωa can be accounted for on the z-plane? Well, our new mapping from the bilinear transform maps the entire s-plane to the z-plane, not just the primary strip of the s-plane shown in FIG. 14. Now, just as a walk along the jωa frequency axis on the s-plane takes us to infinity in either direction, a trip halfway around the unit circle in a counterclockwise direction takes us from ωa = 0 to ωa = +∞ radians/s. As such, the bilinear transform maps the s-plane's entire jωa axis onto the unit circle in the z-plane. We illustrate these bilinear transform mapping properties in FIG. 31. 
To show the practical implications of this frequency warping, let's relate the s-plane and z-plane frequencies to the more practical measure of the fs sampling frequency. We do this by remembering the fundamental relationship between radians/s and Hz of ω = 2πf and solving for f to get (eqn. 100) (eqn. 101) (eqn. 102) FIG. 31 Bilinear transform mapping of the s-plane to the z-plane. FIG. 32 Nonlinear relationship between the fd and fa frequencies: (a) frequency warping curve scaled in terms of the IIR filter's fs sampling rate; (b) s-domain frequency response Ha(fa) transformation to a z-domain frequency response Hd(fd). Equation (eqn. 102), the relationship between the s-plane frequency fa in Hz and the z-plane frequency fd in Hz, is plotted in FIG. 32(a) as a function of the IIR filter's sampling rate fs. The distortion of the fa frequency scale, as it translates to the fd frequency scale, is illustrated in FIG. 32(b), where an s-plane bandpass frequency magnitude response |Ha(fa)| is subjected to frequency compression as it is transformed to |Hd(fd)|. Notice how the low-frequency portion of the IIR filter's |Hd(fd)| response is reasonably linear, but the higher-frequency portion has been squeezed in toward zero Hz. That's frequency warping. This figure shows why no IIR filter aliasing can occur with the bilinear transform design method. No matter what the shape or bandwidth of the |Ha(fa)| prototype analog filter, none of its spectral content can extend beyond half the sampling rate, fs/2, in |Hd(fd)|, and that's what makes the bilinear transform design method as popular as it is. The steps necessary to perform an IIR filter design using the bilinear transform method are as follows: Step 1: Obtain the Laplace transfer function Hc(s) for the prototype analog filter in the form of Eq. (eqn. 43). Step 2: Determine the digital filter's equivalent sampling frequency fs and establish the sample period ts = 1/fs. 
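The warping behavior can be sketched in a few lines. Combining the half-angle result with ω = 2πf gives the Hz form fd = (fs/π)·tan⁻¹(π·fa/fs), matching the relationship plotted in FIG. 32(a); the code below checks the two properties just described, near-linearity at low frequencies and hard confinement below fs/2:

```python
import math

fs = 100.0   # IIR filter sampling rate in Hz

def warped_fd(fa):
    """z-domain frequency fd (Hz) that analog frequency fa (Hz) maps to."""
    return (fs / math.pi) * math.atan(math.pi * fa / fs)

# Low frequencies map almost linearly...
print(warped_fd(1.0))      # very close to 1 Hz
# ...but even an arbitrarily large analog frequency stays below fs/2,
# which is why bilinear transform designs cannot alias.
print(warped_fd(1e9))      # just under 50 Hz
```
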
Step 3: In the Laplace Hc(s) transfer function, substitute the expression (eqn. 103) for the variable s to get the IIR filter's H(z) transfer function. Step 4: Multiply the numerator and denominator of H(z) by the appropriate power of (1 + z⁻¹) and grind through the algebra to collect terms of like powers of z in the form (eqn. 104) Step 5: Just as in the impulse invariance design methods, by inspection, we can express the IIR filter's time-domain equation in the general form of (eqn. 105) Although the expression in Eq. (eqn. 105) only applies to the filter structure in FIG. 18, to complete our design, we can apply the a(k) and b(k) coefficients to the improved IIR structure shown in FIG. 22. To show just how straightforward the bilinear transform design method is, let's use it to solve the IIR filter design problem first presented for the impulse invariance design method.

5.1 Again, our goal is to design an IIR filter that approximates the second-order Chebyshev prototype analog low-pass filter, shown in FIG. 26, whose passband ripple is 1 dB. The fs sampling rate is 100 Hz (ts = 0.01), and the filter's 1 dB cutoff frequency is 20 Hz. As before, given the original prototype filter's Laplace transfer function as (eqn. 106) and the value of ts = 0.01 for the sample period, we're ready to proceed with Step 3. For convenience, let's replace the constants in Eq. (eqn. 106) with variables in the form of (eqn. 107) where b = 137.94536 and c = 17410.145. Performing the substitution of Eq. (eqn. 103) in Eq. (eqn. 107), (eqn. 108) To simplify our algebra a little, let's substitute the variable a for the fraction 2/ts to give [...] The frequency magnitude response of our bilinear transform IIR design example is shown as the dark curve in FIG. 33(a), where, for comparison, we've shown the result of the impulse invariance design example as the shaded curve. 
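The algebra elided above can be carried out numerically. A sketch, under the assumption that the substitution s = a·(1 − z⁻¹)/(1 + z⁻¹) with a = 2/ts is multiplied through by (1 + z⁻¹)², collects the like powers of z⁻¹ and normalizes the denominator's leading term to 1 (the sign convention for the feedback a(k) coefficients follows the time-domain form used in this text):

```python
# Prototype Chebyshev constants and sample period from the example.
b = 137.94536
c = 17410.145
ts = 0.01
a = 2.0 / ts          # the shorthand variable a = 2/ts used in the text

# Substituting s = a*(1 - z^-1)/(1 + z^-1) into Hc(s) = c/(s^2 + b*s + c)
# and multiplying through by (1 + z^-1)^2 gives a ratio of quadratics:
#   numerator:   c*(1 + 2z^-1 + z^-2)
#   denominator: (a^2 + a*b + c) + (2c - 2a^2)*z^-1 + (a^2 - a*b + c)*z^-2
d0 = a*a + a*b + c
d1 = 2*c - 2*a*a
d2 = a*a - a*b + c

# Normalize so the denominator's leading term is 1 (the usual H(z) form).
b_coeffs = [c/d0, 2*c/d0, c/d0]   # feed forward b(0), b(1), b(2)
a_coeffs = [-d1/d0, -d2/d0]       # feedback a(1), a(2)

print([round(x, 4) for x in b_coeffs])   # [0.2048, 0.4097, 0.2048]
print([round(x, 4) for x in a_coeffs])   # [0.5315, -0.3508]
```
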
Notice how the magnitude response of the bilinear-transform-designed filter approaches zero at the folding frequency of fs/2 = 50 Hz. This is as it should be; that's the whole purpose of the bilinear transform design method. FIG. 33(b) illustrates the nonlinear phase response of the bilinear-transform-designed IIR filter. FIG. 33 Comparison of the bilinear transform and impulse invariance design IIR filters: (a) frequency magnitude responses; (b) phase of the bilinear transform IIR filter. FIG. 34 Implementation of the bilinear transform design example filter. We might be tempted to think that not only is the bilinear transform design method easier to perform than the impulse invariance design method, but that it gives us a much sharper roll-off for our low-pass filter. Well, the frequency warping of the bilinear transform method does compress (sharpen) the roll-off portion of a low-pass filter, as we saw in FIG. 32, but an additional reason for the improved response is the price we pay in terms of the additional complexity of the implementation of our IIR filter. We see this by examining the implementation of our IIR filter as shown in FIG. 34. Notice that our new filter requires five multiplications per filter output sample, where the impulse invariance design filter in FIG. 28(a) required only three multiplications per filter output sample. The additional multiplications are, of course, required by the additional feed forward z terms in the numerator of Eq. (eqn. 113). These added b(k) coefficient terms in the H(z) transfer function correspond to zeros in the z-plane created by the bilinear transform that did not occur in the impulse invariance design method. Because our example prototype analog low-pass filter had a cutoff frequency that was fs/5, we don't see a great deal of frequency warping in the bilinear transform curve in FIG. 33. 
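The five-multiplications-per-sample structure is easy to see in code. Here's a minimal sketch of a second-order section in the Direct Form II style of FIG. 22, with a shared delay line; the coefficient values passed in at the bottom are illustrative numbers for a second-order low-pass section, not authoritative design output:

```python
def biquad_df2(x_in, b_coeffs, a_coeffs):
    """Second-order IIR section, Direct Form II: one shared delay line."""
    b0, b1, b2 = b_coeffs
    a1, a2 = a_coeffs
    w1 = w2 = 0.0          # the two shared delay elements
    y_out = []
    for x in x_in:
        w0 = x + a1*w1 + a2*w2       # feedback part   (2 multiplies)
        y = b0*w0 + b1*w1 + b2*w2    # feed forward part (3 multiplies)
        w2, w1 = w1, w0              # shift the delay line
        y_out.append(y)
    return y_out

# Five multiplications per output sample, as in FIG. 34's structure.
ys = biquad_df2([1.0] + [0.0]*7,
                (0.2048, 0.4097, 0.2048),    # illustrative b(k)
                (0.5315, -0.3508))           # illustrative a(k)
```
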
(In fact, Kaiser has shown that, when fs is large, the impulse invariance and bilinear transform design methods result in essentially identical H(z) transfer functions.) Had our cutoff frequency been a larger percentage of fs, bilinear transform warping would have been more serious, and our resultant |Hd(fd)| cutoff frequency would have been below the desired value. What the pros do to avoid this is to pre-warp the prototype analog filter's cutoff frequency requirement before the analog Hc(s) transfer function is derived in Step 1. In that way, they compensate for the bilinear transform's frequency warping before it happens. We can use Eq. (eqn. 98) to determine the pre-warped prototype analog filter low-pass cutoff frequency that we want mapped to the desired IIR low-pass cutoff frequency. We plug the desired IIR cutoff frequency ωd into Eq. (eqn. 98) to calculate the prototype analog ωa cutoff frequency used to derive the prototype analog filter's Hc(s) transfer function. Although we explained how the bilinear transform design method avoids the impulse invariance method's inherent frequency response aliasing, it's important to remember that we still have to avoid filter input data aliasing. No matter what kind of digital filter or filter design method is used, the original input signal data must always be obtained using a sampling scheme that avoids the aliasing described in Section 2. If the original input data contains errors due to sample rate aliasing, no filter can remove those errors. Our introductions to the impulse invariance and bilinear transform design techniques have, by necessity, presented only the essentials of those two design methods. Although rigorous mathematical treatment of the impulse invariance and bilinear transform design methods is inappropriate for an introductory text such as this, more detailed coverage is available to the interested reader. 
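Pre-warping is a one-line calculation. Assuming the Hz form of the warping relationship, fa = (fs/π)·tan(π·fd/fs) inverts the mapping; for this chapter's example (desired 20 Hz cutoff at fs = 100 Hz), the prototype analog filter would be designed with a slightly higher cutoff:

```python
import math

fs = 100.0         # sampling rate, Hz
fd_desired = 20.0  # desired IIR filter cutoff frequency, Hz

# Invert the warping relationship: fa = (fs/pi) * tan(pi * fd / fs).
# Designing the analog prototype with this pre-warped cutoff makes the
# bilinear transform land the digital cutoff at the desired 20 Hz.
fa_prewarped = (fs / math.pi) * math.tan(math.pi * fd_desired / fs)
print(round(fa_prewarped, 1))   # about 23.1 Hz

# Sanity check: warping the pre-warped frequency recovers the desired cutoff.
fd_check = (fs / math.pi) * math.atan(math.pi * fa_prewarped / fs)
print(round(fd_check, 6))       # 20.0
```
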
References [13] and [15], by the way, have excellent material on the various prototype analog filter types used as a basis for the analytical IIR filter design methods. Although our examples of IIR filter design using the impulse invariance and bilinear transform techniques approximated analog low-pass filters, it's important to remember that these techniques apply equally well to designing bandpass and highpass IIR filters. To design a highpass IIR filter, for example, we'd merely start our design with a Laplace transfer function for the prototype analog highpass filter. Our IIR digital filter design would then proceed to approximate that prototype highpass filter. As we have seen, the impulse invariance and bilinear transform design techniques are both powerful and a bit difficult to perform. The mathematics is intricate and the evaluation of the design equations is arduous for all but the simplest filters. As such, we'll introduce a third class of IIR filter design methods based on software routines that take advantage of iterative optimization computing techniques. In this case, the designer defines the desired filter frequency response, and the algorithm generates successive approximations until the IIR filter coefficients converge (hopefully) to an optimized design.

6. The final class of IIR filter design methods we'll introduce are broadly categorized as optimization methods. These IIR filter design techniques were developed for the situation when the desired IIR filter frequency response is not of the standard low-pass, bandpass, or highpass form. When the desired response has an arbitrary shape, closed-form expressions for the filter's z-transform do not exist, and we have no explicit equations to work with to determine the IIR filter's coefficients. For this general IIR filter design problem, algorithms were developed to solve sets of linear, or nonlinear, equations on a computer. 
These software routines require that the designer describe, in some way, the desired IIR filter frequency response. The algorithms then assume a filter transfer function H(z) as a ratio of polynomials in z and calculate the filter's frequency response. Based on some error criterion, the algorithm iteratively adjusts the filter's coefficients to minimize the error between the desired and the actual filter frequency response. The process ends when the error cannot be further minimized, or a predefined number of iterations has occurred, and the final filter coefficients are presented to the filter designer. Although these optimization algorithms are too mathematically complex to cover in any detail here, descriptions of the most popular optimization schemes are readily available in the literature. FIG. 35 Example low-pass IIR filter design parameters required for the optimized IIR filter design method. The reader may ask, "If we're not going to cover optimization methods in any detail, why introduce the subject here at all?" The answer is that if we spend much time designing IIR filters, we'll end up using optimization techniques in the form of computer software routines most of the time. The vast majority of commercially available digital signal processing software packages include one or more IIR filter design routines that are based on optimization methods. When a computer-aided design technique is available, filter designers are inclined to use it to design the simpler low-pass, bandpass, or highpass forms even though analytical techniques exist. With all due respect to Laplace, Heaviside, and Kaiser, why plow through all the z-transform design equations when the desired frequency response can be applied to a software routine to yield acceptable filter coefficients in a few seconds? As it turns out, using commercially available optimized IIR filter design routines is very straightforward. 
Although they come in several flavors, most optimization routines only require the designer to specify a few key amplitude and frequency values, plus the desired order of the IIR filter (the number of feedback taps); the software then computes the final feed forward and feedback coefficients. In specifying a low-pass IIR filter, for example, a software design routine might require us to specify the parameter values shown in FIG. 35. Some optimization design routines require the user to specify the order of the IIR filter; those routines then compute the filter coefficients that best approach the required frequency response. Other software routines don't require the user to specify the filter order; they compute the minimum order of the filter that actually meets the desired frequency response.

7. There's an old saying in engineering: "It's one thing to design a system on paper, and another thing to actually build one and make it work." (Recall the Tacoma Narrows Bridge episode!) Fabricating a working system based on theoretical designs can be difficult in practice. Let's see why this is often true for IIR digital filters. Again, the IIR filter structures in FIGs. 18 and 22 are called Direct Form implementations of an IIR filter. That's because they're all equivalent to directly implementing the general time-domain expression for an Mth-order IIR filter given in Eq. (eqn. 21). As it turns out, there can be stability problems and frequency response distortion errors when Direct Form implementations are used for high-order filters. Such problems arise because we're forced to represent the IIR filter coefficients and the results of intermediate filter calculations with binary numbers having a finite number of bits. There are three major categories of finite-wordlength errors that plague IIR filter implementations: coefficient quantization, overflow errors, and roundoff errors. 
Coefficient quantization (limited-precision coefficients) will result in filter pole and zero shifting on the z-plane, and a frequency magnitude response that may not meet our requirements. The response distortion worsens for higher-order IIR filters. Overflow, the second finite-wordlength effect that troubles IIR filters, is what happens when the result of an arithmetic operation is too large to be represented in the fixed-length hardware registers assigned to contain that result. Because we perform so many additions when we implement IIR filters, overflow is always a potential problem. If no precautions are taken to handle overflow, large nonlinearity errors can appear in our filter output samples, often in the form of overflow oscillations. The most common way of dealing with binary overflow errors is called roundoff, or rounding, where a data value is represented by, or rounded off to, the b-bit binary number that's nearest the unrounded data value. It's usually valid to treat roundoff errors as a random process, but conditions occur in IIR filters where rounding can cause the filter output to oscillate forever, even when the filter input sequence is all zeros. This situation, caused by the roundoff noise being highly correlated with the signal (and going by the names limit cycles and deadband effects), has been well analyzed in the literature. We can demonstrate limit cycles by considering the second-order IIR filter in FIG. 36(a), whose time-domain expression is y(n) = x(n) + 1.3y(n−1) − 0.76y(n−2). (eqn. 115) Let's assume this filter rounds the adder's output to the nearest integer value. If the situation ever arises where y(−2) = 0, y(−1) = 8, and x(0) and all successive x(n) inputs are zero, the filter output goes into endless oscillation, as shown in FIG. 36(b). If this filter were used in an audio application, when the input signal went silent the listener could end up hearing an audio tone instead of silence. The dashed line in FIG. 
36(b) shows the filter's stable response to this particular situation if no rounding is used. With rounding, however, this IIR filter certainly lives up to its name. FIG. 36 Limit cycle oscillations due to rounding: (a) second-order IIR filter; (b) one possible time-domain response of the IIR filter. There are several ways to reduce the ill effects of coefficient quantization errors and limit cycles. We can increase the word widths of the hardware registers that contain the results of intermediate calculations. Because roundoff limit cycles affect the least significant bits of an arithmetic result, larger word sizes diminish the impact of limit cycles should they occur. To avoid filter input sequences of all zeros, some practitioners add a dither sequence to the filter's input signal sequence. A dither sequence is a sequence of low-amplitude pseudorandom numbers that interferes with an IIR filter's roundoff error generation tendency, allowing the filter output to reach zero should the input signal remain at zero. Dithering, however, decreases the effective signal-to-noise ratio of the filter output. Finally, to avoid limit cycle problems, we can just use an FIR filter. Because FIR filters by definition have finite-length impulse responses and have no feedback paths, they cannot support output oscillations of any kind. As for overflow errors, we can eliminate them if we increase the word width of the hardware registers so overflow never takes place in the IIR filter. Filter input signals can be scaled (reduced in amplitude by multiplying signals within the filter by a factor less than one) so overflow is avoided, but this signal amplitude loss reduces the signal-to-noise ratio. Overflow oscillations can be avoided by using saturation arithmetic logic, where signal values aren't permitted to exceed a fixed limit when an overflow condition is detected. 
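The limit cycle scenario of FIG. 36 can be simulated directly. The sketch below uses exact rational arithmetic (so the integer-rounding step behaves predictably; Python's built-in round() resolves exact .5 ties to the even integer, which affects only a single sample in this trajectory) and compares the ideal response with the rounded one:

```python
from fractions import Fraction

def filter_response(n_samples, quantize):
    """y(n) = x(n) + 1.3*y(n-1) - 0.76*y(n-2) with x(n) = 0 for n >= 0,
    and initial conditions y(-1) = 8, y(-2) = 0."""
    a1, a2 = Fraction(13, 10), Fraction(-19, 25)   # 1.3 and -0.76, exactly
    y1, y2 = Fraction(8), Fraction(0)              # y(-1), y(-2)
    out = []
    for _ in range(n_samples):
        y = a1*y1 + a2*y2
        if quantize:
            y = Fraction(round(y))   # round the adder output to an integer
        y2, y1 = y1, y
        out.append(float(y))
    return out

ideal = filter_response(60, quantize=False)   # stable poles: decays to zero
limit = filter_response(60, quantize=True)    # settles into an oscillation

print(max(abs(v) for v in ideal[-8:]))   # tiny residue
print(limit[-8:])                        # a repeating pattern that never dies
```

With rounding, the output locks into a periodic pattern of small integers instead of decaying, exactly the audible-tone hazard described above.
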
It may be useful for the reader to keep in mind that when the signal data is represented in two's complement arithmetic, multiple summations resulting in intermediate overflow errors cause no problems if we can guarantee that the final magnitude of the sum of the numbers is not too large for the final accumulator register. Of course, standard floating point and block floating point data formats can greatly reduce the errors associated with overflow oscillations and limit cycles [30]. (We discuss floating point number formats in Section 12.4.) These quantized coefficient and overflow errors, caused by finite-width words, have different effects depending on the IIR filter structure used. Referring to FIG. 22, practice has shown the Direct Form II structure to be the most error-prone implementation. The most popular technique for minimizing the errors associated with finite word widths is to design IIR filters comprising a cascade string, or parallel combination, of low-order filters. The next section tells us why.

8. Filter designers minimize IIR filter stability and quantization noise problems by building high-performance filters from combinations of cascaded lower-performance filters. Before we consider this design idea, let's review important issues regarding the behavior of combinations of multiple filters. FIG. 37 Combinations of two filters: (a) cascaded filters; (b) parallel filters.

8.1 Here we summarize the combined behavior of linear filters (be they IIR or FIR) connected in cascade and in parallel. As indicated in FIG. 37(a), the resultant transfer function of two cascaded filter transfer functions is the product of those functions, or [...] FIG. 38 Definition of filter passband ripple R. For small values of R1 and R2, the R1·R2 term becomes negligible, and we state our rule of thumb as (eqn. 
121) Thus, in designs using cascaded filters it's prudent to specify their individual passband ripple values to be roughly half the desired Rcas ripple specification for the final combined filter, or (eqn. 122) FIG. 39 IIR filter partitioning: (a) initial sixth-order IIR filter; (b) three second-order sections.

8.2 Experienced filter designers routinely partition high-order IIR filters into a string of second-order IIR filters arranged in cascade, because these lower-order filters are easier to design, are less susceptible to coefficient quantization errors and stability problems, and their implementations allow easier data word scaling to reduce the potential overflow effects of data word size growth. Optimizing the partitioning of a high-order filter into multiple second-order filter sections is a challenging task, however. For example, say we have the sixth-order Direct Form I filter in FIG. 39(a) that we want to partition into three second-order sections. In factoring the sixth-order filter's H(z) transfer function we could get up to three separate sets of feed forward coefficients in the factored H(z) numerator: b′(k), b″(k), and b‴(k). Likewise, we could have up to three separate sets of feedback coefficients in the factored denominator: a′(k), a″(k), and a‴(k). Because there are three second-order sections, there are 3 factorial, or six, ways of pairing the sets of coefficients. Notice in FIG. 39(b) how the first section uses the a′(k) and b′(k) coefficients, and the second section uses the a″(k) and b″(k) coefficients. We could just as well have interchanged the sets of coefficients so the first second-order section uses the a′(k) and b″(k) coefficients, and the second section uses the a″(k) and b′(k) coefficients. So, there are six different mathematically equivalent ways of combining the sets of coefficients. 
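Cascading two sections multiplies their transfer functions, and multiplying polynomials in z⁻¹ is just a convolution of their coefficient lists. A minimal sketch (the two biquad numerators here are hypothetical values, chosen only for illustration) shows how two second-order factors combine into one fourth-order polynomial:

```python
def poly_multiply(p, q):
    """Multiply two polynomials in z^-1 given as coefficient lists
    (lowest power first); cascading filters multiplies their H(z)'s."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# Two hypothetical biquad numerators (coefficients of 1, z^-1, z^-2)...
b1 = [1.0, 0.5, 0.25]
b2 = [1.0, -0.4, 0.1]
# ...cascade into a single fourth-order numerator:
print(poly_multiply(b1, b2))
```

The same routine applied to the two denominators gives the cascade's overall feedback polynomial, which is how a sixth-order H(z) relates to its three second-order factors.
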
Add to this the fact that for each different combination of low-order sections there are 3 factorial distinct ways those three separate second-order sections can be arranged in cascade. This means that if we want to partition a 2Mth-order IIR filter into M distinct second-order sections, there are M factorial squared, (M!)², ways to do so. As such, there are (3!)² = 36 different cascaded filters we could obtain when going from FIG. 39(a) to FIG. 39(b). To further complicate this filter partitioning problem, the errors due to coefficient quantization will, in general, be different for each possible filter combination. Although full details of this subject are outside the scope of this introductory text, ambitious readers can find further material on optimizing cascaded filter sections in References 19 and 27, and in Part 3 of Reference 31.

One simple (although perhaps not optimum) method for arranging cascaded second-order sections has been proposed [14]. First, factor a high-order IIR filter's H(z) transfer function into a ratio of the form

H(z) = [(z - z1)(z - z2)···(z - zN)] / [(z - p1)(z - p2)···(z - pN)] (eqn. 123)

with the zk zeros in the numerator and the pk poles in the denominator. (Hopefully you have a signal processing software package to perform the factorization.) Next, the second-order section assignments go like this:

1. Find the pole, or pole pair, in H(z) closest to the unit circle.
2. Find the zero, or zero pair, closest to the pole, or pole pair, found in step 1.
3. Combine those poles and zeros into a single second-order filter section. This means your first second-order section may be something like (eqn. 124).
4. Repeat steps 1 to 3 until all poles and zeros have been combined into second-order sections.
5. The final ordering (cascaded sequence) of the sections is based on how far the sections' poles are from the unit circle. Order the sections in either increasing or decreasing pole distance from the unit circle.
6. Implement your filter as cascaded second-order sections in the order from step 5.
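The steps above can be sketched in a few lines of code. This is only a hedged illustration of the idea, not reference [14]'s exact procedure: it treats poles and zeros individually rather than as conjugate pairs, uses plain Euclidean z-plane distance, and the pole/zero values in the example are hypothetical.

```python
# Sketch of the pairing recipe (steps 1-5 above): greedily pair the pole
# nearest the unit circle with its closest zero, then order the resulting
# sections by pole distance from the unit circle. For simplicity, poles and
# zeros are handled individually, not as conjugate pairs.
def pair_poles_zeros(poles, zeros):
    poles, zeros = list(poles), list(zeros)
    pairs = []
    while poles:
        p = min(poles, key=lambda p: abs(1.0 - abs(p)))  # step 1
        poles.remove(p)
        z = min(zeros, key=lambda z: abs(z - p))          # step 2
        zeros.remove(z)
        pairs.append((p, z))                              # step 3
    # Step 5: order sections by increasing pole distance from the unit circle.
    pairs.sort(key=lambda pz: abs(1.0 - abs(pz[0])))
    return pairs

# Hypothetical z-plane poles and zeros of a sixth-order filter:
poles = [0.9 + 0.3j, 0.9 - 0.3j, 0.6 + 0.2j, 0.6 - 0.2j, 0.3, 0.2]
zeros = [-1.0, -1.0, 0.95 + 0.31j, 0.95 - 0.31j, 1.0, 1.0]
for p, z in pair_poles_zeros(poles, zeros):
    print(f"section: pole {p:.3f}, zero {z:.3f}")
```

Note that the first section produced pairs the pole pair nearest the unit circle (the most stability-critical one) with the zeros that best cancel its gain peak, which is the point of the recipe.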
In digital filter vernacular, a second-order IIR filter is called a "biquad" for two reasons. First, the filter's z-domain transfer function includes two quadratic polynomials. Second, the word biquad sounds cool. By the way, we started our second-order sectioning discussion with a high-order Direct Form I filter in FIG. 39(a). We chose that filter form because it's the structure most resistant to coefficient quantization and overflow problems. As seen in FIG. 39(a), we have redundant delay elements. These can be combined, as shown in FIG. 40, to reduce our temporary storage requirements, as we did with the Direct Form II structure in FIG. 22. There's much material in the literature concerning finite word effects as they relate to digital IIR filters. (References 14, 16, and 19 discuss quantization noise effects in some detail, as well as providing extensive bibliographies on the subject.) As for IIR filter scaling to avoid overflow errors, readable design guidance is available.
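To make the biquad-cascade idea concrete, here is a minimal sketch. It uses the transposed Direct Form II arrangement, which, like the structure of FIG. 40, needs only two delay elements per section; the coefficient values in the usage line are hypothetical, not from a real filter design.

```python
# Minimal biquad (second-order section) and cascade in transposed Direct
# Form II: only two delay-element states (w1, w2) per section.
def biquad(x, b, a):
    """One second-order section: b = (b0, b1, b2), a = (1, a1, a2)."""
    b0, b1, b2 = b
    _, a1, a2 = a
    w1 = w2 = 0.0
    y = []
    for xn in x:
        yn = b0 * xn + w1
        w1 = b1 * xn - a1 * yn + w2
        w2 = b2 * xn - a2 * yn
        y.append(yn)
    return y

def cascade(x, sections):
    """Run x through a list of (b, a) biquad sections in series."""
    for b, a in sections:
        x = biquad(x, b, a)
    return x

# Impulse response of a single section with hypothetical coefficients
# b = (1, 0, 0), a = (1, -0.5, 0), i.e. a one-pole filter:
h = biquad([1.0, 0.0, 0.0, 0.0], (1.0, 0.0, 0.0), (1.0, -0.5, 0.0))
# h -> [1.0, 0.5, 0.25, 0.125]
```

The infinite-impulse-response behavior shows up immediately: a single nonzero input sample produces an output that decays geometrically but never formally ends.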
9. The question naturally arises as to which filter type, IIR or FIR, is best suited for a given digital filtering application. That's not an easy question to answer, but we can point out a few factors that should be kept in mind. First, we can assume that the differences in ease of design between the two filter types are unimportant. There are usually more important performance and implementation properties to consider than design difficulty when choosing between an IIR and an FIR filter. One design consideration that may be significant is the IIR filter's ability to simulate a predefined prototype analog filter. FIR filters do not have this design flexibility.

From a hardware standpoint, with so many fundamental differences between IIR and FIR filters, our choice must be based on those filter characteristics that are most and least important to us. For example, if we need a filter with exactly linear phase, then an FIR filter is the only way to go. If, on the other hand, our design requires a filter that accepts very high data rates and slight phase nonlinearity is tolerable, we might lean toward IIR filters with their reduced number of necessary multiplications per output sample. One caveat, though: just because an FIR filter has, say, three times the number of multiplies per output sample relative to an IIR filter does not mean the IIR filter will execute faster on a programmable DSP chip. Typical DSP chips have a zero-overhead looping capability whose parallelism speeds the execution of the multiply-and-accumulate (MAC) routines of which FIR filtering is composed. The code for IIR filtering has more data/coefficient pointer bookkeeping to accommodate than FIR filter code. So, if you're choosing between an IIR filter requiring K multiplies per output sample and an FIR filter needing 2K (or 3K) multiplies per output sample, code both filters and measure their execution speeds.
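As a toy illustration of that last piece of advice: code both structures and time them. (Python on a PC says nothing about a DSP chip's MAC pipelines, and the tap counts and coefficients below are made up; only the measurement procedure carries over.)

```python
import time

# Illustrative timing comparison of a direct-convolution FIR filter and a
# Direct Form I IIR filter. Tap counts and coefficients are hypothetical.
def fir(x, h):
    """Direct convolution: len(h) multiplies per output sample."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:
                acc += hk * x[n - k]
        y.append(acc)
    return y

def iir_df1(x, b, a):
    """Direct Form I: len(b) + len(a) - 1 multiplies per output sample."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

x = [1.0] + [0.0] * 999
t0 = time.perf_counter(); fir(x, [0.05] * 19); t_fir = time.perf_counter() - t0
t0 = time.perf_counter(); iir_df1(x, [0.1, 0.2, 0.1], [1.0, -0.5, 0.25]); t_iir = time.perf_counter() - t0
print(f"FIR: {t_fir:.4f} s, IIR: {t_iir:.4f} s")
```

On a DSP chip the relative ranking can come out differently than a multiply count (or this toy benchmark) suggests, which is exactly why the text recommends measuring on the target hardware.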
Table 1 presents a brief comparison of IIR and FIR filters based on several performance and implementation properties.
[1] Churchill, R. V. Modern Operational Mathematics in Engineering, McGraw-Hill, New York, 1944, pp. 307-334.
[2] Aseltine, J. A. Transform Method in Linear System Analysis, McGraw-Hill, New York, 1958, pp. 287-292.
[3] Nixon, F. E. Handbook of Laplace Transformation, Tables and Examples, Prentice-Hall, Englewood Cliffs, New Jersey, 1960.
[4] Kaiser, J. F. "Digital Filters," in System Analysis by Digital Computer, ed. by F. F. Kuo and J. F. Kaiser, John Wiley and Sons, New York, 1966, pp. 218-277.
[5] Kaiser, J. F. "Design Methods for Sampled Data Filters," Section 7 in 1963 Proc. 1st Allerton Conference, pp. 221-236.
[6] Ragazzini, J. R., and Franklin, G. F. Sampled-Data Control Systems, McGraw-Hill, New York, 1958, pp. 52-83.
[7] Milne-Thomson, L. M. The Calculus of Finite Differences, Macmillan, London, 1951, pp. 232-251.
[8] Truxal, J. G. Automatic Feedback Control System Synthesis, McGraw-Hill, New York, 1955, p. 283.
[9] Blackman, R. B. Linear Data-Smoothing and Prediction in Theory and Practice, Addison-Wesley, Reading, Mass., 1965, pp. 81-84.
[10] Gold, B., and Jordan, K. L., Jr. "A Note on Digital Filter Synthesis," Proceedings of the IEEE, Vol. 56, October 1968, p. 1717.
[11] Rabiner, L. R., et al. "Terminology in Digital Signal Processing," IEEE Trans. on Audio and Electroacoustics, Vol. AU-20, No. 5, December 1972, p. 327.
[12] Stearns, S. D. Digital Signal Analysis, Hayden Book Co. Inc., Rochelle Park, New Jersey, 1975, p. 114.
[13] Stanley, W. D., et al. Digital Signal Processing, Reston Publishing Co. Inc., Reston, Virginia, 1984, p. 191.
[14] Oppenheim, A. V., and Schafer, R. W. Discrete-Time Signal Processing, Prentice-Hall, Englewood Cliffs, New Jersey, 1989, p. 406.
[15] Williams, C. S. Designing Digital Filters, Prentice-Hall, Englewood Cliffs, New Jersey, 1986, pp. 166-186.
[16] Rabiner, L. R., and Gold, B. Theory and Application of Digital Signal Processing, Prentice-Hall, Englewood Cliffs, New Jersey, 1975, p. 216.
[17] Johnson, M. "Implement Stable IIR Filters Using Minimal Hardware," EDN, April 14, 1983.
[18] Oppenheim, A. V., Willsky, A. S., and Young, I. T. Signals and Systems, Prentice-Hall, Englewood Cliffs, New Jersey, 1983, p. 659.
[19] Kaiser, J. F. "Some Practical Considerations in the Realization of Linear Digital Filters," Proc. Third Annual Allerton Conference on Circuit and System Theory, 1965, pp. 621-633.
[20] Deczky, A. G. "Synthesis of Digital Recursive Filters Using the Minimum p-Error Criterion," IEEE Trans. on Audio and Electroacoustics, Vol. AU-20, No. 2, October 1972, p. 257.
[21] Steiglitz, K. "Computer-Aided Design of Recursive Digital Filters," IEEE Trans. on Audio and Electroacoustics, Vol. AU-18, No. 2, 1970, p. 123.
[22] Richards, M. A. "Application of Deczky's Program for Recursive Filter Design to the Design of Recursive Decimators," IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. ASSP-30, October 1982, p. 811.
[23] Parks, T. W., and Burrus, C. S. Digital Filter Design, John Wiley and Sons, New York, 1987, p. 244.
[24] Rabiner, L., Graham, Y., and Helms, H. "Linear Programming Design of IIR Digital Filters with Arbitrary Magnitude Functions," IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. ASSP-22, No. 2, April 1974, p. 117.
[25] Friedlander, B., and Porat, B. "The Modified Yule-Walker Method of ARMA Spectral Estimation," IEEE Trans. on Aerospace and Electronic Systems, Vol. AES-20, No. 2, March 1984, pp. 158-173.
[26] Jackson, L. B. "On the Interaction of Roundoff Noise and Dynamic Range in Digital Filters," Bell System Technical Journal, Vol. 49, February 1970, pp. 159-184.
[27] Jackson, L. B. "Roundoff Noise Analysis for Fixed-Point Digital Filters Realized in Cascade or Parallel Form," IEEE Trans. on Audio and Electroacoustics, Vol. AU-18, June 1970, pp. 107-122.
[28] Sandberg, I. W. "A Theorem Concerning Limit Cycles in Digital Filters," Proc. Seventh Annual Allerton Conference on Circuit and System Theory, Monticello, Illinois, October 1969.
[29] Ebert, P. M., et al. "Overflow Oscillations in Digital Filters," Bell Sys. Tech. Journal, Vol. 48, November 1969, pp. 2999-3020.
[30] Oppenheim, A. V. "Realization of Digital Filters Using Block Floating Point Arithmetic," IEEE Trans. on Audio and Electroacoustics, Vol. AU-18, June 1970, pp. 130-136.
[31] Rabiner, L. R., and Rader, C. M., eds. Digital Signal Processing, IEEE Press, New York, 1972, p. 361.
[32] Grover, D. "Subject: Re: How to arrange the (gain, pole, zero) of the cascaded biquad filter," Usenet group comp.dsp post, Dec. 28, 2000.
[33] Grover, D., and Deller, J. Digital Signal Processing and the Microcontroller, Prentice Hall, Upper Saddle River, New Jersey, 1998.

Updated: Sunday, August 4, 2019 16:27 PST