
The filtering of digitized data, if not the most fundamental, is certainly the oldest discipline in the field of digital signal processing. Digital filtering's origins go back 50 years. The growing availability of digital computers in the early 1950s led to efforts in the smoothing of discrete sampled data and the analysis of discrete data control systems. However, it wasn't until the early to mid-1960s, around the time the Beatles came to America, that the analysis and development of digital equivalents of analog filters began in earnest. That's when digital signal processing experts realized that computers could go beyond the mere analysis of digitized signals into the domain of actually changing signal characteristics through filtering. Today, digital filtering is so widespread that the quantity of literature pertaining to it exceeds that of any other topic in digital signal processing. In this section, we introduce the fundamental attributes of digital filters, learn how to quantify their performance, and review the principles associated with the design of finite impulse response digital filters. So let's get started by illustrating the concept of filtering a time-domain signal as shown in FIG. 1.
In general, filtering is the processing of a time-domain signal resulting in some change in that signal's original spectral content. The change is usually the reduction, or filtering out, of some unwanted input spectral components; that is, filters allow certain frequencies to pass while attenuating other frequencies. FIG. 1 shows both analog and digital versions of a filtering process. Where an analog filter operates on a continuous signal, a digital filter processes a sequence of discrete sample values. The digital filter in FIG. 1(b), of course, can be a software program in a computer, a programmable hardware processor, or a dedicated integrated circuit. Traditional linear digital filters typically come in two flavors: finite impulse response (FIR) filters and infinite impulse response (IIR) filters. Because FIR filters are the simplest type of digital filter to analyze, we'll examine them in this section and cover IIR filters in Section 6.
1. First of all, FIR digital filters use only current and past input samples, and none of the filter's previous output samples, to obtain a current output sample value. (That's also why FIR filters are sometimes called non-recursive filters.) Given a finite duration of nonzero input values, the effect is that an FIR filter will always have a finite duration of nonzero output values, and that's how FIR filters got their name. So, if the FIR filter's input suddenly becomes a sequence of all zeros, the filter's output will eventually be all zeros. While not sounding all that unusual, this characteristic is, however, very important, and we'll soon find out why as we learn more about digital filters. FIR filters use addition to calculate their outputs in a manner much the same as the process of averaging uses addition. In fact, averaging is a kind of FIR filter that we can illustrate with an example. Let's say we're counting the number of cars that pass over a bridge every minute, and we need to know the average number of cars per minute over five-minute intervals; that is, every minute we'll calculate the average number of cars/minute over the last five minutes. If the results of our car counting for the first ten minutes are those values shown in the center column of Table 1, then the average number of cars/minute over the previous five one-minute intervals is listed in the right column of the table. We've added the number of cars for the first five one-minute intervals and divided by five to get our first five-minute average output value, (10+22+24+42+37)/5 = 27. Next we've averaged the number of cars/minute for the second to the sixth one-minute intervals to get our second five-minute average output of 40.4. Continuing, we average the number of cars/minute for the third to the seventh one-minute intervals to get our third average output of 53.8, and so on. With the number of cars/minute for the one-minute intervals represented by the dashed line in FIG. 2, we show our five-minute average output as the solid line. (FIG. 2 shows cars/minute input values beyond the first ten minutes listed in Table 1 to illustrate a couple of important ideas to be discussed shortly.)
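The five-minute averaging just described is easy to sketch in code. Here is a minimal Python sketch; the car counts beyond the first five are taken from the 6th and 7th input values (77 and 89) quoted later in this section:

```python
def moving_average(x, n=5):
    """Return the n-point running averages of sequence x."""
    return [sum(x[i - n + 1:i + 1]) / n for i in range(n - 1, len(x))]

# Cars/minute for the first seven one-minute intervals.
cars_per_minute = [10, 22, 24, 42, 37, 77, 89]
print(moving_average(cars_per_minute))  # [27.0, 40.4, 53.8]
```

The first valid output appears only once five input samples are available, which is why the averages start at the end of the 5th minute.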
There's much to learn from this simple averaging example. In FIG. 2, notice that the sudden changes in our input sequence of cars/minute are flattened out by our averager. The averager output sequence is considerably smoother than the input sequence. Knowing that sudden transitions in a time sequence represent high frequency components, we can say that our averager is behaving like a low-pass filter and smoothing sudden changes in the input. Is our averager an FIR filter? It sure is--no previous averager output value is used to determine a current output value; only input values are used to calculate output values. In addition, we see that, if the bridge were suddenly closed at the end of the 19th minute, the dashed line immediately goes to zero cars/minute at the end of the 20th minute, and the averager's output in FIG. 2 approaches and settles to a value of zero by the end of the 24th minute.
FIG. 2 shows the first averager output sample occurring at the end of the 5th minute because that's when we first have five input samples to calculate a valid average. The 5th output of our averager can be denoted as y_ave(5), where

y_ave(5) = [x(1) + x(2) + x(3) + x(4) + x(5)]/5. (eqn. 1)

In the general case, if the kth input sample is x(k), then the nth output is

y_ave(n) = [x(n-4) + x(n-3) + x(n-2) + x(n-1) + x(n)]/5. (eqn. 2)

Look at (eqn. 2) carefully now. It states that the nth output is the average of the nth input sample and the four previous input samples. We can formalize the digital filter nature of our averager by creating the block diagram in FIG. 3 showing how the averager calculates its output samples. This block diagram, referred to as the filter structure, is a physical depiction of how we might calculate our averaging filter outputs, with the input sequence of values shifted, in order, from left to right along the top of the filter as new output calculations are performed. This structure, implementing Eqs. (eqn. 1) and (eqn. 2), shows those values used when the first five input sample values are available. The delay elements in FIG. 3, called unit delays, merely indicate a shift register arrangement where input sample values are temporarily stored during an output calculation. In averaging, we add five numbers and divide the sum by five to get our answer. In a conventional FIR filter implementation, we can just as well multiply each of the five input samples by the coefficient 1/5 and then perform the summation as shown in FIG. 4(a). Of course, the two methods in FIGs 3 and 4(a) are equivalent because (eqn. 2), describing the structure shown in FIG. 3, is equivalent to

y_ave(n) = (1/5)x(n-4) + (1/5)x(n-3) + (1/5)x(n-2) + (1/5)x(n-1) + (1/5)x(n), (eqn. 3)

which describes the structure in FIG. 4(a). [We've used the venerable distributive law for multiplication and addition of scalars, a(b+c+d) = ab+ac+ad, in moving (eqn. 2)'s factor of 1/5 inside the summation in (eqn. 3).] Let's make sure we understand what's happening in FIG. 4(a).
Each of the first five input values is multiplied by 1/5, and the five products are summed to give the 5th filter output value. The left-to-right sample shifting is illustrated in FIGs 4(b) and 4(c). To calculate the filter's 6th output value, the input sequence is right shifted, discarding the 1st input value of 10, and the 6th input value 77 is accepted on the left. Likewise, to calculate the filter's 7th output value, the input sequence is right shifted, discarding the 2nd value of 22, and the 7th input value 89 arrives on the left. So, when a new input sample value is applied, the filter discards the oldest sample value, multiplies the samples by the coefficients of 1/5, and sums the products to get a single new output value. The filter's structure, using this bucket brigade shifting process, is often called a transversal filter due to the cross-directional flow of the input samples. Because we tap off five separate input sample values to calculate an output value, the structure in FIG. 4 is called a 5-tap FIR filter in digital filter vernacular. One important and, perhaps, most interesting aspect of understanding FIR filters is learning how to predict their behavior when sinusoidal samples of various frequencies are applied to the input, i.e., how to estimate their frequency-domain response. Two factors affect an FIR filter's frequency response: the number of taps and the specific values used for the multiplication coefficients. We'll explore these two factors using our averaging example and then see how we can use them to design FIR filters. This brings us to the point where we have to introduce the C word: convolution. (Actually, we already slipped a convolution equation in on the reader without saying so. It was (eqn. 3), and we'll examine it in more detail later.)

2. OK, here's where we get serious about understanding the mathematics behind FIR filters. We can graphically depict (eqn. 3)'s and FIG. 4's calculations as shown in FIG. 5.
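The bucket brigade shifting described above maps directly onto a shift register structure. Below is a minimal Python sketch of that transversal structure; the class and method names are our own illustration, not from the text:

```python
from collections import deque

class TransversalFIR:
    """Tap-delay-line FIR filter: shift in a sample, sum the products."""
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)   # the tap weights
        # The delay line holds the most recent inputs, newest first.
        self.delay = deque([0.0] * len(self.coeffs), maxlen=len(self.coeffs))

    def step(self, sample):
        self.delay.appendleft(sample)    # the oldest sample falls off the end
        return sum(h * x for h, x in zip(self.coeffs, self.delay))

averager = TransversalFIR([1/5] * 5)     # 5-tap averaging filter
y = [averager.step(x) for x in [10, 22, 24, 42, 37, 77, 89]]
print(y[4:])   # steady-state outputs, matching the five-minute averages
```

Each call to `step` discards the oldest stored sample, exactly the behavior of the shift register arrangement in FIG. 4.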
Also, let's be formal and use the standard notation of digital filters for indexing the input samples and the filter coefficients by starting with an initial index value of zero; that is, we'll call the initial input value the 0th sample x(0). The next input sample is represented by the term x(1), the following input sample is called x(2), and so on. Likewise, our five coefficient values will be indexed from zero to four, h(0) through h(4). (This indexing scheme makes the equations describing our example consistent with conventional filter notation found in the literature.) In (eqn. 3) we used the factor of 1/5 as the filter coefficients multiplied by our averaging filter's input samples. The left side of FIG. 5 shows the alignment of those coefficients, black squares, with the filter input sample values represented by the white squares. Notice in FIG. 5(a) through 5(e) that we're marching the input samples to the right, and, at each step, we calculate the filter output sample value using (eqn. 3). The output samples on the right side of FIG. 5 match the first five values represented by the black squares in FIG. 2. The input samples in FIG. 5 are those values represented by the white squares in FIG. 2. Notice that the time order of the inputs in FIG. 5 has been reversed from the input sequence order in FIG. 2! That is, the input sequence has been flipped in the time domain in FIG. 5. This time order reversal is what happens to the input data using the filter structure in FIG. 4. Repeating the first part of (eqn. 3) and omitting the subscript on the output term, our original FIR filter's y(n)th output is given by

y(n) = (1/5)x(n) + (1/5)x(n-1) + (1/5)x(n-2) + (1/5)x(n-3) + (1/5)x(n-4). (eqn. 4)

Because we'll explore filters whose coefficients are not all the same value, we need to represent the individual filter coefficients by a variable, such as the term h(k), for example. Thus we can rewrite the averaging filter's output from (eqn. 4) in a more general way as

y(n) = h(0)x(n) + h(1)x(n-1) + h(2)x(n-2) + h(3)x(n-3) + h(4)x(n-4), (eqn. 5)

where h(0) through h(4) all equal 1/5. Equation (eqn. 5) is a concise way of describing the filter structure in FIG. 4 and the process illustrated in FIG. 5. Let's take (eqn. 5) one step further and say, for a general M-tap FIR filter, the nth output is

y(n) = the sum of h(k)x(n-k) over k = 0 to M-1. (eqn. 6)

FIG. 6 Convolution of filter coefficients and an input impulse to obtain the filter's output impulse response: (a) impulse sample aligned with the first filter coefficient, index n = 4; (b) impulse sample shifts to the right and index n = 5; (c) index n = 6; (d) index n = 7; (e) index n = 8.

Well, there it is. (eqn. 6) is the infamous convolution equation as it applies to digital FIR filters. Beginners in the field of digital signal processing often have trouble understanding the concept of convolution. It need not be that way. Eq. (eqn. 6) is merely a series of multiplications followed by the addition of the products. The process is actually rather simple. We just flip the time order of an input sample sequence and start stepping the flipped sequence across the filter's coefficients as shown in FIG. 5. For each new filter input sample, we sum a series of products to compute a single filter output value. Let's pause for a moment and introduce a new term that's important to keep in mind, the impulse response. The impulse response of a filter is exactly what its name implies--it's the filter's output time-domain sequence when the input is a single unity-valued sample (impulse) preceded and followed by zero-valued samples. FIG. 6 illustrates this idea in the same way we determined the filter's output sequence in FIG. 5. The left side of FIG. 6 shows the alignment of the filter coefficients, black squares, with the filter input impulse sample values represented by the white squares. Again, in FIG. 6(a) through 6(e) we're shifting the input samples to the right, and, at each step, we calculate the filter output sample value using (eqn. 4). The output samples on the right side of FIG. 6 are the filter's impulse response.
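The convolution sum of (eqn. 6) can be evaluated directly in a few lines. The helper below is our own illustration; applying the 5-tap averager to a single unity-valued impulse returns the coefficients themselves, the filter's impulse response:

```python
def fir_filter(h, x):
    """Evaluate y(n) = sum over k of h(k)*x(n-k), treating x as zero
    outside its defined range."""
    M, L = len(h), len(x)
    return [sum(h[k] * x[n - k] for k in range(M) if 0 <= n - k < L)
            for n in range(L + M - 1)]

h = [1/5] * 5                                   # averager coefficients
impulse = [1.0] + [0.0] * 8                     # unity sample amid zeros
y = fir_filter(h, impulse)
print(y[:5])                                    # equals h(0) ... h(4)
```

After the impulse passes through the taps, the remaining outputs are all zero, the finite-duration behavior that gives FIR filters their name.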
Notice the key point here: the FIR filter's impulse response is identical to the five filter coefficient values. For this reason, the terms FIR filter coefficients and impulse response are synonymous. Thus, when someone refers to the impulse response of an FIR filter, they're also talking about the coefficients. Returning to our averaging filter, recall that coefficients (or impulse response) h(0) through h(4) were all equal to 1/5. As it turns out, our filter's performance can be improved by using coefficients whose values are not all the same. By performance we mean how well the filter passes desired signals and attenuates unwanted signals. We judge that performance by determining the shape of the filter's frequency-domain response that we obtain by the convolution property of linear systems. To describe this concept, let's repeat (eqn. 6) using the abbreviated notation of

y(n) = h(k) * x(n), (eqn. 7)

where the * symbol means convolution. (Equation 7 is read as "y of n equals the convolution of h of k and x of n.") The process of convolution, as it applies to FIR filters, is as follows: the discrete Fourier transform (DFT) of the convolution of a filter's impulse response (coefficients) and an input sequence is equal to the product of the spectrum of the input sequence and the DFT of the impulse response. The idea we're trying to convey here is that if two time-domain sequences h(k) and x(n) have DFTs of H(m) and X(m), respectively, then the DFT of y(n) = h(k) * x(n) is H(m)·X(m). Making this point in a more compact way, we state this relationship with the expression

y(n) = h(k) * x(n) <--> H(m)·X(m). (eqn. 8)
With IDFT indicating the inverse DFT, (eqn. 8) indicates that the two sequences resulting from h(k)*x(n) and H(m)·X(m) are Fourier transform pairs. So taking the DFT of h(k)*x(n) gives us the product H(m)·X(m) that is the spectrum of our filter output Y(m). Likewise, we can determine h(k)*x(n) by taking the inverse DFT of H(m)·X(m). The very important conclusion to learn from (eqn. 8) is that convolution in the time domain is equivalent to multiplication in the frequency domain. To help us appreciate this principle, FIG. 7 sketches the relationship between convolution in the time domain and multiplication in the frequency domain. The process of convolution with regard to linear systems is discussed in more detail in Section 9. The beginner is encouraged to review that material to get a general idea of why and when the convolution process can be used to analyze digital filters. Equation (eqn. 8) and the relationships in FIG. 7 tell us what we need to do to determine the frequency response of an FIR filter. The product X(m)·H(m) is the DFT of the filter output. Because X(m) is the DFT of the filter's input sequence, the frequency response of the filter is then defined as H(m), the DFT of the filter's impulse response h(k). Getting back to our original problem, we can determine our averaging filter's frequency-domain response by taking the DFT of the individual filter coefficients (impulse response) in (eqn. 4). If we take the five h(k) coefficient values of 1/5 and append 59 zeros, we have the sequence depicted in FIG. 8(a). Performing a 64-point DFT on that sequence, and normalizing the DFT magnitudes, gives us the filter's frequency magnitude response |H(m)| in FIG. 8(b) and phase response shown in FIG. 8(c). H(m) is our old friend, the sin(x)/x function from Section 3.13. [We use the term impulse response here, instead of coefficients, because this concept also applies to IIR filters.
IIR filter frequency responses are also equal to the DFT of their impulse responses. There's nothing sacred about using a 64-point DFT here. We could just as well have appended only enough zeros to take a 16- or 32-point FFT. We chose 64 points to get a frequency resolution that would make the shape of the response in FIG. 8(b) reasonably smooth. Remember, the more points in the DFT, the finer the frequency resolution--right?] Let's relate the discrete frequency response samples in FIGs 8(b) and 8(c) to the physical dimension of the sample frequency fs. We know, from Section 3.5 and our experience with the DFT, that the m = N/2 discrete frequency sample, m = 32 in this case, is equal to the folding frequency, or half the sample rate, fs/2. Keeping this in mind, we can convert the discrete frequency axis in FIG. 8 to that shown in FIG. 9. In FIG. 9(a), notice that the filter's magnitude response is, of course, periodic in the frequency domain with a period of the equivalent sample rate fs. Because we're primarily interested in the filter's response between 0 and half the sample rate, FIG. 9(c) shows that frequency band in greater detail, affirming the notion that averaging behaves like a low-pass filter. It's a relatively poor low-pass filter compared to an arbitrary, ideal low-pass filter indicated by the dashed lines in FIG. 9(c), but our averaging filter will attenuate higher frequency inputs relative to its response to low-frequency input signals. We can demonstrate this by way of example. Suppose we applied a low-frequency sinewave to the averaging FIR filter, as shown by the white squares in FIG. 10(a), and that the input sinewave's frequency is fs/32 and its peak amplitude is unity. The filter output in this case would be a sinewave of frequency fs/32, but its peak amplitude would be reduced to a value of 0.96, and the output sinewave is delayed by a phase angle of 22.5°.
Next, if we applied a higher frequency sinewave of 3fs/32 to the FIR filter as shown in FIG. 10(b), the filter output would, then, be a sinewave of frequency 3fs/32, but its peak amplitude is even further reduced to a value of 0.69. In addition, the FIG. 10(b) output sinewave has an even larger phase angle delay of 67.5°. Although the output amplitudes and phase delays in FIG. 10 were measured values from actually performing a 5-tap FIR filter process on the input sinewave samples, we could have obtained those amplitude and phase delay values directly from FIGs 8(b) and 8(c). The emphasis here is that we don't have to implement an FIR filter and apply various sinewave inputs to discover what its frequency response will be. We need merely take the DFT of the FIR filter's coefficients (impulse response) to determine the filter's frequency response as we did for FIG. 8. FIG. 11 is another depiction of how well our 5-tap averaging FIR filter performs, where the dashed line is the filter's magnitude response |H(m)|, and the shaded line is the |X(m)| magnitude spectrum of the filter's input values (the white squares in FIG. 2). The solid line is the magnitude spectrum of the filter's output sequence, which is shown by the black squares in FIG. 2. So in FIG. 11, the solid output spectrum is the product of the dashed filter response curve and the shaded input spectrum, or |X(m)·H(m)|. Again, we see that our averager does indeed attenuate the higher frequency portion of the input spectrum. Let's pause for a moment to let all of this soak in a little.
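The claim that the response can be read straight off the DFT of the coefficients is easy to check. The NumPy sketch below pads the five 1/5 coefficients to 64 points, as in FIG. 8, and reads off the magnitude and phase at fs/32 (m = 2) and 3fs/32 (m = 6):

```python
import numpy as np

h = np.full(5, 1/5)            # averager coefficients (impulse response)
H = np.fft.fft(h, 64)          # 64-point DFT: 5 coefficients + 59 zeros

for m in (2, 6):               # discrete frequencies fs/32 and 3*fs/32
    mag = abs(H[m])
    phase_deg = np.degrees(np.angle(H[m]))
    # Magnitudes come out near 0.96 and 0.69; phases near -22.5
    # and -67.5 degrees, matching the FIG. 10 measurements.
    print(f"m = {m}: |H| = {mag:.2f}, phase = {phase_deg:.1f} degrees")
```

The negative phase values express the same thing as the text's "delayed by a phase angle": the output lags the input.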
So far we've gone through the averaging filter example to establish that:

• FIR filters perform time-domain convolution by summing the products of the shifted input samples and a sequence of filter coefficients,
• an FIR filter's output sequence is equal to the convolution of the input sequence and a filter's impulse response (coefficients),
• an FIR filter's frequency response is the DFT of the filter's impulse response,
• an FIR filter's output spectrum is the product of the input spectrum and the filter's frequency response, and
• convolution in the time domain and multiplication in the frequency domain are Fourier transform pairs.
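Several of these bullet points can be verified numerically in a few lines. A quick NumPy sketch (the input here is arbitrary random data, not the car counts):

```python
import numpy as np

h = np.full(5, 1/5)                                # impulse response
x = np.random.default_rng(0).standard_normal(32)   # arbitrary input

N = len(h) + len(x) - 1        # length of the full linear convolution
y = np.convolve(h, x)          # time-domain filtering (convolution)

# DFT of the output vs. product of the individual DFTs:
lhs = np.fft.fft(y, N)
rhs = np.fft.fft(h, N) * np.fft.fft(x, N)
print(np.allclose(lhs, rhs))   # True: convolution <-> multiplication
```

Zero-padding both DFTs out to the full convolution length N is what makes the circular DFT product agree with the linear convolution.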
OK, here's where FIR filters start to get really interesting. Let's change the values of the five filter coefficients to modify the frequency response of our 5-tap, low-pass filter. In fact, FIG. 12(a) shows our original five filter coefficients and two other arbitrary sets of 5-tap coefficients. FIG. 12(b) compares the frequency magnitude responses of those three sets of coefficients. Again, the frequency responses are obtained by taking the DFT of the three individual sets of coefficients and plotting the magnitude of the transforms, as we did for FIG. 9(c). So we see three important characteristics in FIG. 12. First, as we expected, different sets of coefficients give us different frequency magnitude responses. Second, a sudden change in the values of the coefficient sequence, such as the 0.2 to 0 transition in the first coefficient set, causes ripples, or sidelobes, in the frequency response. Third, if we minimize the suddenness of the changes in the coefficient values, such as the third set of coefficients in FIG. 12(a), we reduce the sidelobe ripples in the frequency response. However, reducing the sidelobes results in increasing the main lobe width of our low-pass filter. (As we'll see, this is exactly the same effect encountered in the discussion of window functions used with the DFT in Section 3.9.) To reiterate the function of the filter coefficients, FIG. 13 shows the 5-tap FIR filter structure using the third set of coefficients from FIG. 12. The implementation of constant-coefficient transversal FIR filters does not get any more complicated than that shown in FIG. 13. It's that simple. We can have a filter with more than five taps, but the input signal sample shifting, the multiplications by the constant coefficients, and the summation are all there is to it. (By constant coefficients, we don't mean coefficients whose values are all the same; we mean coefficients whose values remain unchanged, or time-invariant.
There is a class of digital filters, called adaptive filters, whose coefficient values are periodically changed to adapt to changing input signal parameters. While we won't discuss these adaptive filters in this introductory text, their descriptions are available in the literature.) So far, our description of an FIR filter implementation has been presented from a hardware perspective. In FIG. 13, to calculate a single filter output sample, five multiplications and five additions must take place before the arrival of the next input sample value. In a software implementation of a 5-tap FIR filter, however, all of the input data samples would be previously stored in memory. The software filter routine's job, then, is to access different five-sample segments of the x(n) input data space, perform the calculations shown in FIG. 13, and store the resulting y(n) filter output sequence in an array of memory locations. Now that we have a basic understanding of what a digital FIR filter is, let's see what effect using more than five filter taps has by learning to design FIR filters.

3. OK, instead of just accepting a given set of FIR filter coefficients and analyzing their frequency response, let's reverse the process and design our own low-pass FIR filter. The design procedure starts with the determination of a desired frequency response followed by calculating the filter coefficients that will give us that response. There are two predominant techniques used to design FIR filters: the window method and the so-called optimum method. Let's discuss them in that order.

3.1 Window Design Method

[In reviewing the literature of FIR filters, the reader will often find the term z^-1 replacing the delay function in FIG. 13. This equivalence is explained in the next section when we study IIR filters.]
The window method of FIR filter design (also called the Fourier series method) begins with our deciding what frequency response we want for our low-pass filter. We can start by considering a continuous low-pass filter and simulating that filter with a digital filter. We'll define the continuous frequency response H(f) to be ideal, i.e., a low-pass filter with unity gain at low frequencies and zero gain (infinite attenuation) beyond some cutoff frequency, as shown in FIG. 14(a). Representing this H(f) response by a discrete frequency response is straightforward enough because the idea of a discrete frequency response is essentially the same as a continuous frequency response--with one important difference. As described in Sections 2.2 and 3.13, discrete frequency-domain representations are always periodic with the period being the sample rate fs. The discrete representation of our ideal, continuous low-pass filter H(f) is the periodic response H(m) depicted by the frequency-domain samples in FIG. 14(b).
We have two ways to determine our low-pass filter's time-domain coefficients. The first way is algebraic:

1. Develop an expression for the discrete frequency response H(m).
2. Apply that expression to the inverse DFT equation to get the time-domain h(k).
3. Evaluate that h(k) expression as a function of time index k.

The second method is to define the individual frequency-domain samples representing H(m) and then have a software routine perform the inverse DFT of those samples, giving us the FIR filter coefficients. In either method, we need only define the periodic H(m) over a single period of fs Hz. As it turns out, defining H(m) in FIG. 14(b) over the frequency span -fs/2 to fs/2 is the easiest form to analyze algebraically, and defining H(m) over the frequency span 0 to fs is the best representation if we use the inverse DFT to obtain our filter's coefficients. Let's try both methods to determine the filter's time-domain coefficients. In the algebraic method, we can define an arbitrary discrete frequency response H(m) using N samples to cover the -fs/2 to fs/2 frequency range and establish K unity-valued samples for the passband of our low-pass filter as shown in FIG. 15. To determine h(k) algebraically, we need to take the inverse DFT of H(m) in the form of

h(k) = (1/N) · the sum of H(m)e^(j2πmk/N) over m = -(N/2)+1 to N/2, (eqn. 9)

where our time-domain index is k. The solution to (eqn. 9), derived in Section 3.13 as Eq. (3-59), is repeated here as

h(k) = (1/N) · sin(πkK/N)/sin(πk/N). (eqn. 10)

If we evaluate (eqn. 10) as a function of k, we get the sequence shown in FIG. 16, taking the form of the classic sin(x)/x function. By reviewing the material in Section 3.13, it's easy to see the great deal of algebraic manipulation required to arrive at (eqn. 10) from (eqn. 9). So much algebra, in fact, with its many opportunities for making errors, that digital filter designers like to avoid evaluating (eqn. 9) algebraically. They prefer to use software routines to perform inverse DFTs (in the form of an inverse FFT) to determine h(k), and so will we.
We can demonstrate the software inverse DFT method of FIR filter design with an example. Let's say we need to design a low-pass FIR filter simulating the continuous frequency response shown in FIG. 17(a). The discrete representation of the filter's frequency response H(m) is shown in FIG. 17(b), where we've used N = 32 points to represent the frequency-domain variable H(f). Because it's equivalent to FIG. 17(b) but avoids the negative values of the frequency index m, we represent the discrete frequency samples over the range 0 to fs in FIG. 17(c), as opposed to the -fs/2 to +fs/2 range in FIG. 17(b). OK, we're almost there. Using a 32-point inverse FFT to implement a 32-point inverse DFT of the H(m) sequence in FIG. 17(c), we get the 32 h(k) values depicted by the dots from k = -15 to k = 16 in FIG. 18(a). We have one more step to perform. Because we want our final 31-tap h(k) filter coefficients to be symmetrical with their peak value in the center of the coefficient sample set, we drop the k = 16 sample and shift the k index to the left from FIG. 18(a), giving us the desired sin(x)/x form of h(k) as shown in FIG. 18(b). This shift of the index k will not change the frequency magnitude response of our FIR filter. (Remember from our discussion of the DFT Shifting Theorem in Section 3.6 that a shift in the time domain manifests itself only as a linear phase shift in the frequency domain with no change in the frequency-domain magnitude.) The sequence in FIG. 18(b), then, is now the coefficients we use in the convolution process of FIG. 5 to implement a low-pass FIR filter. [If you want to use this FIR design method but only have a forward FFT software routine available, Section 13.6 shows a slick way to perform an inverse FFT with the forward FFT algorithm.]
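The whole recipe above takes only a few lines with an FFT routine. Here is a sketch under the same N = 32 assumption as FIGs 17 and 18; the number of unity-valued passband samples is our own choice for illustration, not taken from the figures:

```python
import numpy as np

N = 32
H = np.zeros(N)
H[:4] = 1.0     # passband samples at m = 0, 1, 2, 3
H[-3:] = 1.0    # matching samples for the negative frequencies (m = 29..31)

h = np.real(np.fft.ifft(H))          # 32 time-domain samples, peak at k = 0
h = np.roll(h, N // 2 - 1)[:N - 1]   # center the peak, drop the k = 16 sample

# h is now a 31-tap, symmetric, sin(x)/x-shaped coefficient set.
print(len(h), np.argmax(h))          # 31 taps, peak at the center (index 15)
```

Defining H(m) with matching positive- and negative-frequency samples is what makes the inverse FFT come out purely real (up to rounding), so taking `np.real` only discards numerical noise.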
It's important to demonstrate that the more h(k) terms we use as filter coefficients, the closer we'll approximate our ideal low-pass filter response. Let's be conservative, just use the center nine h(k) coefficients, and see what our filter response looks like. Again, our filter's magnitude response in this case will be the DFT of those nine coefficients as shown on the right side of FIG. 19(a). The ideal filter's frequency response is also shown for reference as the dashed curve. (To show the details of its shape, we've used a continuous curve for |H(m)| in FIG. 19(a), but we have to remember that |H(m)| is really a sequence of discrete values.) Notice that using nine coefficients gives us a low-pass filter, but it's certainly far from ideal. Using more coefficients to improve our situation, FIG. 19(b) shows 19 coefficients and their corresponding frequency magnitude response that is beginning to look more like our desired rectangular response. Notice that magnitude fluctuations, or ripples, are evident in the passband of our H(m) filter response. Continuing, using all 31 of the h(k) values for our filter coefficients results in the frequency response in FIG. 19(c). Our filter's response is getting better (approaching the ideal), but those conspicuous passband magnitude ripples are still present. It's important that we understand why those passband ripples are in the low-pass FIR filter response in FIG. 19. Recall the above discussion of convolving the 5-tap averaging filter coefficients, or impulse response, with an input data sequence to obtain the averager's output. We established that convolution in the time domain is equivalent to multiplication in the frequency domain, which we symbolized with (eqn. 8) and repeat here as

h(k) * x(n) <--> H(m)·X(m). (eqn. 11)
This association between convolution in the time domain and multiplication in the frequency domain, sketched in FIG. 7, indicates that, if two time-domain sequences h(k) and x(n) have DFTs of H(m) and X(m), respectively, then the DFT of h(k) * x(n) is H(m) · X(m). No restrictions whatsoever need be placed on what the time-domain sequences h(k) and x(n) in (eqn. 11) actually represent. As detailed later in Section 9, convolution in one domain is equivalent to multiplication in the other domain, allowing us to state that multiplication in the time domain is equivalent to convolution in the frequency domain, or

h(k) · x(n) <--> H(m) * X(m). (eqn. 12)

Now we're ready to understand why the magnitude ripples are present in FIG. 19. Rewriting (eqn. 12) and replacing the h(k) and x(n) expressions with h∞(k) and w(k), respectively, gives

h∞(k) · w(k) <--> H∞(m) * W(m). (eqn. 13)

Let's say that h∞(k) represents an infinitely long sin(x)/x sequence of ideal low-pass FIR filter coefficients and that w(k) represents a window sequence that we use to truncate the sin(x)/x terms as shown in FIG. 20. Thus, the w(k) sequence is a finite-length set of unity values, and its DFT is W(m). The length of w(k) is merely the number of coefficients, or taps, we intend to use to implement our low-pass FIR filter. With h∞(k) defined as such, the product h∞(k) · w(k) represents the truncated set of filter coefficients h(k) in FIGs 19(a) and 19(b). So, from (eqn. 13), the FIR filter's true frequency response H(m) is the convolution

H(m) = H∞(m) * W(m). (eqn. 14)

We depict this convolution in FIG. 21 where, to keep the figure from being so busy, we show H∞(m) (the DFT of the h∞(k) coefficients) as the dashed rectangle. Keep in mind that it's really a sequence of constant-amplitude sample values.
Let's look at FIG. 21(a) very carefully to see why all three |H(m)|s exhibit passband ripple in FIG. 19. We can view a particular sample value of the H(m) = H∞(m) * W(m) convolution as being the sum of the products of H∞(m) and W(m) for a particular frequency shift of W(m). H∞(m) and the unshifted W(m) are shown in FIG. 21(a). With an assumed value of unity for all of H∞(m), a particular H(m) value is now merely the sum of the W(m) samples that overlap the H∞(m) rectangle. So, with a W(m) frequency shift of 0 Hz, the sum of the W(m) samples that overlap the H∞(m) rectangle in FIG. 21(a) is the value of H(m) at 0 Hz. As W(m) is shifted to the right to give us additional positive-frequency H(m) values, we can see that the sum of the positive and negative values of W(m) under the rectangle oscillates during the shifting of W(m). As the convolution shift proceeds, FIG. 21(b) shows why there are ripples in the passband of H(m): again, the sum of the positive and negative W(m) samples under the H∞(m) rectangle continues to vary as the W(m) function is shifted. The W(m) frequency shift indicated in FIG. 21(c), where the peak of W(m)'s main lobe is now outside the H∞(m) rectangle, corresponds to the frequency where H(m)'s passband begins to roll off. FIG. 21(d) shows that, as the W(m) shift continues, there will be ripples in H(m) beyond the positive cutoff frequency. The point of all this is that the ripples in H(m) are caused by the sidelobes of W(m). [In FIG. 21(b), had we started to shift W(m) to the left in order to determine the negative-frequency portion of H(m), we would have obtained the mirror image of the positive-frequency portion of H(m).] FIG. 22 helps us answer the question: How many sin(x)/x coefficients do we have to use (or how wide must w(k) be) to get nice sharp falling edges and no ripples in our H(m) passband? The answer is that we can't get there from here.
It doesn't matter how many sin(x)/x coefficients (filter taps) we use; there will always be filter passband ripple. As long as w(k) is a finite number of unity values (i.e., a rectangular window of finite width), there will be sidelobe ripples in W(m), and these will induce passband ripples in the final H(m) frequency response. To illustrate that increasing the number of sin(x)/x coefficients doesn't reduce passband ripple, we repeat the 31-tap, low-pass filter response in FIG. 22(a). The frequency response, using 63 coefficients, is shown in FIG. 22(b), and the passband ripple remains. We can make the filter's transition region narrower using additional h(k) filter coefficients, but we cannot eliminate the passband ripple. That ripple, known as Gibbs' phenomenon, manifests itself anytime a function (w(k) in this case) with an instantaneous discontinuity is represented by a Fourier series [6-8]. No finite set of sinusoids will be able to change fast enough to be exactly equal to an instantaneous discontinuity. Another way to state this Gibbs' dilemma is that, no matter how wide our w(k) window is, its DFT W(m) will always have sidelobe ripples. As shown in FIG. 22(b), we can use more coefficients by extending the width of the rectangular w(k) to narrow the filter transition region, but a wider w(k) does not eliminate the filter passband ripple, nor does it even reduce the peak-to-peak ripple magnitude, as long as w(k) has sudden discontinuities. 3.2 OK. The good news is that we can minimize FIR passband ripple with window functions the same way we minimized DFT leakage in Section 3.9. Here's how. Looking back at FIG. 20, by truncating the infinitely long h∞(k) sequence through multiplication by the rectangular w(k), our final h(k) exhibited ripples in the frequency-domain passband. FIG. 21 shows us that the passband ripples were caused by W(m)'s sidelobes that, in turn, were caused by the sudden discontinuities from zero to one and one to zero in w(k).
If we think of w(k) in FIG. 20 as a rectangular window, then it is w(k)'s abrupt amplitude changes that are the source of our filter passband ripple. The window FIR design method is the technique of reducing w(k)'s discontinuities by using window functions other than the rectangular window. Consider FIG. 23 to see how a nonrectangular window function can be used to design low-ripple FIR digital filters. Imagine if we replaced FIG. 20's rectangular w(k) with the Blackman window function, whose discrete values are defined as (eqn. 15). This situation is depicted for N = 31 in FIG. 23(a), where (eqn. 15)'s w(k) looks very much like the Hanning window function in Figure 3-17(a). This Blackman window function results in the 31 smoothly tapered h(k) coefficients at the bottom of FIG. 23(a). Notice two things about the resulting H(m) in FIG. 23(b). First, the good news: the passband ripples are greatly reduced from those evident in FIG. 22(a), so our Blackman window function did its job. Second, the price we paid for reduced passband ripple is a wider H(m) transition region. We can get a steeper filter response roll-off by increasing the number of taps in our FIR filter. FIG. 23(c) shows the improved frequency response had we used a 63-coefficient Blackman window function for a 63-tap FIR filter. So using a nonrectangular window function reduces passband ripple at the expense of slower passband-to-stopband roll-off. [As we mentioned in Section 3.9, specific expressions for window functions depend on the range of the sample index k. Had we defined k to cover the range -N/2 < k < N/2, for example, the expression for the Blackman window would have a sign change and be w(k) = 0.42 + 0.5cos(2πk/N) + 0.08cos(4πk/N).] A graphical comparison of the frequency responses for the rectangular and Blackman windows is provided in FIG.
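The windowing step can be sketched in a few lines of Python (same assumed 0.125·fs cutoff as before; NumPy's np.blackman implements the (eqn. 15) form of the window). Multiplying the truncated sin(x)/x coefficients by the Blackman taper collapses the passband ripple:

```python
import numpy as np

def windowed_lowpass(num_taps, cutoff=0.125):
    """Truncated sin(x)/x low-pass coefficients (cutoff as a fraction
    of fs), both rectangular-windowed and tapered by a Blackman window."""
    k = np.arange(num_taps) - (num_taps - 1) / 2
    h_rect = 2 * cutoff * np.sinc(2 * cutoff * k)   # rectangular window
    return h_rect, h_rect * np.blackman(num_taps)   # Blackman taper, (eqn. 15)

h_rect, h_black = windowed_lowpass(31)
H_rect = np.abs(np.fft.rfft(h_rect, 512))
H_black = np.abs(np.fft.rfft(h_black, 512))
# Reduced passband ripple is bought with a wider transition region.
print("rect peak/DC gain:    ", round(H_rect.max() / H_rect[0], 4))
print("blackman peak/DC gain:", round(H_black.max() / H_black[0], 4))
```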
24 were obtained for the window functions defined by 32 discrete samples, to which 480 zeros were appended, applied to a 512-point DFT.) The sidelobe magnitudes of the Blackman window's |W(m)| are too small to see on a linear scale. We can see those sidelobe details by plotting the two windows' frequency responses on a logarithmic scale and normalizing each plot so that their main lobe peak values are both zero dB. For a given window function, we can get the log magnitude response WdB(m) by using the expression (eqn. 16). (The |W(0)| term in (eqn. 16) is the magnitude of W(m) at the peak of the main lobe when m = 0.) FIG. 24(b) shows us the greatly reduced sidelobe levels of the Blackman window and how that window's main lobe is almost three times as wide as the rectangular window's main lobe. Of course, we could have used any of the other window functions, discussed in Section 3.9, for our low-pass FIR filter. That's why this FIR filter design technique is called the window design method. We pick a window function and multiply it by the sin(x)/x values of h∞(k) in FIG. 23(a) to get our final h(k) filter coefficients. It's that simple. Before we leave the window method of FIR filter design, let's introduce two other interesting window functions. Although the Blackman window and those windows discussed in Section 3.9 are useful in FIR filter design, we have little control over their frequency responses; that is, our only option is to select some window function and accept its corresponding frequency response. Wouldn't it be nice to have more flexibility in trading off, or striking a compromise between, a window's main lobe width and sidelobe levels? Fortunately, there are two popular window functions that give us this opportunity. Called the Chebyshev and the Kaiser window functions, they're defined by the following formidable expressions: (eqn. 17), (eqn.
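The normalization in (eqn. 16) can be applied directly. A sketch assuming the same setup the figure describes (32-sample windows zero-padded to a 512-point DFT); the rectangular window's peak sidelobe sits near the well-known -13 dB level, while the Blackman window's sidelobes fall far below that:

```python
import numpy as np

def window_log_response(w, nfft=512):
    """WdB(m) per (eqn. 16): 20*log10(|W(m)| / |W(0)|), so the main
    lobe peak is normalized to 0 dB. Zero-padding gives a smooth curve."""
    W = np.abs(np.fft.rfft(w, nfft))
    return 20 * np.log10(W / W[0] + 1e-12)        # offset avoids log(0)

rect_db = window_log_response(np.ones(32))
black_db = window_log_response(np.blackman(32))
# Peak sidelobe levels, sampled past each window's main lobe.
print("rectangular:", round(rect_db[20:].max(), 1), "dB")
print("blackman:   ", round(black_db[60:].max(), 1), "dB")
```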
18) Two typical Chebyshev and Kaiser window functions and their frequency magnitude responses are shown in FIG. 25. For comparison, the rectangular and Blackman window functions are also shown in that figure. (Again, the curves in FIG. 25(b) were obtained for window functions defined by 32 discrete samples, with 480 zeros appended, applied to a 512-point DFT.)
Equation (eqn. 17) was originally based on the analysis of antenna arrays using the mathematics of Chebyshev polynomials. Equation (eqn. 18) evolved from Kaiser's approximation of prolate spheroid functions using zeroth-order Bessel functions [12-13]. Don't be intimidated by the complexity of Eqs. (eqn. 17) and (eqn. 18); at this point, we need not be concerned with the mathematical details of their development. We just need to realize that the control parameters γ and β, in Eqs. (eqn. 17) and (eqn. 18), give us control over the windows' main lobe widths and sidelobe levels.
Let's see how this works for Chebyshev window functions, having four separate values of γ, and their frequency responses shown in FIG. 26. FIR filter designers applying the window method typically use predefined software routines to obtain their Chebyshev window coefficients. Commercial digital signal processing software packages allow the user to specify three things: the window function (Chebyshev in this case), the desired number of coefficients (the number of taps in the FIR filter), and the value of γ. Selecting different values for γ enables us to adjust the sidelobe levels and see what effect those values have on main lobe width, a capability that we didn't have with the Blackman window or the window functions discussed in Section 3.9. The Chebyshev window function's stopband attenuation, in decibels, is equal to (eqn. 19). So, for example, if we needed our sidelobe levels to be no greater than 60 dB below the main lobe, we use (eqn. 19) to establish a γ value of 3.0 and let the software generate the Chebyshev window coefficients. [By the way, some digital signal processing software packages require that we specify AttenCheb in decibels instead of γ. That way, we don't have to bother using (eqn. 19) at all.] The same process applies to the Kaiser window, as shown in FIG. 27. Commercial software packages allow us to specify β in (eqn. 18) and provide us with the associated window coefficients. The curves in FIG. 27(b), obtained for Kaiser window functions defined by 32 discrete samples, show that we can select the desired sidelobe levels and see what effect this has on the main lobe width. Chebyshev or Kaiser: which is the best window to use? It depends on the application. Returning to FIG. 25(b), notice that, unlike the constant sidelobe peak levels of the Chebyshev window, the Kaiser window's sidelobes decrease with increased frequency. However, the Kaiser sidelobes are higher than the Chebyshev window's sidelobes near the main lobe.
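The Kaiser trade-off can be sketched with NumPy's np.kaiser (NumPy has no built-in Chebyshev window, so this sketch covers only (eqn. 18)'s β parameter). Raising β lowers the peak sidelobe level at the cost of a wider main lobe:

```python
import numpy as np

def peak_sidelobe_db(w, nfft=2048):
    """Peak sidelobe level in dB relative to the main lobe peak:
    walk down the main lobe to its first null, then take the maximum
    of everything beyond it."""
    W_db = 20 * np.log10(np.abs(np.fft.rfft(w, nfft)) / w.sum() + 1e-12)
    i = 1
    while i < len(W_db) - 1 and W_db[i + 1] < W_db[i]:
        i += 1                                    # descend to the first null
    return W_db[i:].max()

# Larger beta -> lower sidelobes (and a wider main lobe).
for beta in (2.0, 5.0, 8.0):
    level = peak_sidelobe_db(np.kaiser(32, beta))
    print("beta =", beta, "-> peak sidelobe", round(level, 1), "dB")
```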
Our primary trade-off here is trying to reduce the sidelobe levels without broadening the main lobe too much. Digital filter designers typically experiment with various values of γ and β for the Chebyshev and Kaiser windows to get the optimum WdB(m) for a particular application. (For that matter, the Blackman window's very low sidelobe levels outweigh its wide main lobe in many applications.) Different window functions have their own individual advantages and disadvantages for FIR filter design. Regardless of which nonrectangular window function is used, it always decreases the FIR filter passband ripple over that of the rectangular window. For the enthusiastic reader, a very thorough discussion of window functions can be found in reference [14]. 4. The window method of low-pass FIR filter design can be used as the first step in designing a bandpass FIR filter. Let's say we want a 31-tap FIR filter with the frequency response shown in FIG. 22(a), but instead of being centered about zero Hz, we want the filter's passband to be centered about fs/4 Hz. If we define a low-pass FIR filter's coefficients as hlp(k), our problem is to find the hbp(k) coefficients of a bandpass FIR filter. As shown in FIG. 28, we can shift Hlp(m)'s frequency response by multiplying the filter's hlp(k) low-pass coefficients by a sinusoid of fs/4 Hz. That sinusoid is represented by the s_shift(k) sequence in FIG. 28(a), whose values are a sinewave sampled at a rate of four samples per cycle. Our final 31-tap hbp(k) FIR bandpass filter coefficients are (eqn. 20), whose frequency magnitude response |Hbp(m)| is shown as the solid curves in FIG. 28(b). The actual magnitude of |Hbp(m)| is half that of the original |Hlp(m)| because half the values in hbp(k) are zero when s_shift(k) corresponds exactly to fs/4. This effect has an important practical implication.
It means that, when we design an N-tap bandpass FIR filter centered at a frequency of fs/4 Hz, we only need to perform approximately N/2 multiplications for each filter output sample. (There's no reason to multiply an input sample value, x(n-k), by zero before we sum all the products from (eqn. 6) and FIG. 13, right? We just don't bother to perform the unnecessary multiplications at all.) Of course, when the bandpass FIR filter's center frequency is other than fs/4, we're forced to perform the full number of N multiplications for each FIR filter output sample. Notice, here, that the hlp(k) low-pass coefficients in FIG. 28(a) have not been multiplied by any window function. In practice, we'd use an hlp(k) that has been windowed prior to implementing (eqn. 20) to reduce the passband ripple. If we wanted to center the bandpass filter's response at some frequency other than fs/4, we merely need to modify s_shift(k) to represent sampled values of a sinusoid whose frequency is equal to the desired bandpass center frequency. That new s_shift(k) sequence would then be used in (eqn. 20) to get the new hbp(k). 5. Going one step further, we can use the bandpass FIR filter design technique to design a highpass FIR filter. To obtain the coefficients for a highpass filter, we need only modify the shifting sequence s_shift(k) to make it represent a sampled sinusoid whose frequency is fs/2. This process is shown in FIG. 29. Our final 31-tap highpass FIR filter's hhp(k) coefficients are hhp(k) = hlp(k) · s_shift(k), (eqn. 21) whose |Hhp(m)| frequency response is the solid curve in FIG. 29(b). Because s_shift(k) in FIG. 29(a) has alternating plus and minus ones, we can see that hhp(k) is merely hlp(k) with the sign changed for every other coefficient. Unlike |Hbp(m)| in FIG. 28(b), the |Hhp(m)| response in FIG. 29(b) has the same amplitude as the original |Hlp(m)|. Again, notice that the hlp(k) low-pass coefficients in FIG.
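Here's a Python sketch of (eqn. 20) and (eqn. 21), using a Blackman-windowed low-pass prototype with an assumed cutoff of 0.0625·fs (illustrative values, not the book's). Note how the fs/4 shift zeroes every other coefficient, while the fs/2 shift merely flips alternating signs:

```python
import numpy as np

num_taps = 31
m = np.arange(num_taps) - (num_taps - 1) // 2          # centered tap index
h_lp = 2 * 0.0625 * np.sinc(2 * 0.0625 * m) * np.blackman(num_taps)

s_shift_bp = np.cos(np.pi * m / 2)   # fs/4: four samples per cycle (1, 0, -1, 0, ...)
h_bp = h_lp * s_shift_bp             # (eqn. 20); every other tap is zero

s_shift_hp = np.cos(np.pi * m)       # fs/2: alternating +1, -1
h_hp = h_lp * s_shift_hp             # (eqn. 21); sign flips only, no zeros

H_bp = np.abs(np.fft.rfft(h_bp, 512))
# fs/4 corresponds to bin 128 of this 512-point grid.
print("bandpass response peaks at bin", np.argmax(H_bp))
```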
29(a) have not been modified by any window function. In practice, we'd use a windowed hlp(k) to reduce the passband ripple before implementing (eqn. 21). 6. Let's introduce one last FIR filter design technique that has found wide acceptance in practice. The Remez Exchange FIR filter design method (also called the Parks-McClellan, or Optimal, method) is a popular technique used to design high-performance FIR filters. To use this design method, we have to visualize a desired frequency response Hd(m) like that shown in FIG. 30. We have to establish a desired passband cutoff frequency fpass and the frequency where the attenuated stopband begins, fstop. In addition, we must establish the variables δp and δs that define our desired passband and stopband ripple. Passband and stopband ripples, in decibels, are related to δp and δs by [15] (eqn. 22) and (eqn. 22'). (Some of the early journal papers describing the Remez design method used the equally valid expression -20·log10(δp) to define the passband ripple in decibels. However, (eqn. 22) is the most common form used today.) Next, we apply these parameters to a computer software routine that generates the filter's N time-domain h(k) coefficients, where N is the minimum number of filter taps needed to achieve the desired filter response.
On the other hand, some software Remez routines assume that we want δp and δs to be as small as possible and require us only to define the desired values of the Hd(m) response as shown by the solid black dots in FIG. 31. The software then adjusts the undefined (shaded dot) values of Hd(m) to minimize the error between our desired and actual frequency response while minimizing δp and δs. The filter designer has the option to define some of the Hd(m) values in the transition band, and the software calculates the remaining undefined Hd(m) transition band values. With this version of the Remez algorithm, the issue of most importance becomes how we define the transition region. We want to minimize its width while, at the same time, minimizing passband and stopband ripple. So exactly how we design an FIR filter using the Remez Exchange technique is specific to the available filter design software. Although the mathematics involved in the development of the Remez Exchange method is rather complicated, we don't have to worry about that here; just remember that the Remez Exchange design method gives us a Chebyshev-type filter whose actual frequency response is as close as possible to the desired Hd(m) response for a given number of filter taps. To illustrate the advantage of the Remez method, the solid curve in FIG. 32 shows the frequency response of a 31-tap FIR filter designed using this technique. For comparison, FIG. 32 also shows the frequency responses of two 31-tap FIR filters for the same passband width designed using the Chebyshev and Kaiser windowing techniques. Notice how the three filters have roughly the same stopband sidelobe levels near the main lobe, but the Remez filter has the more desirable (steeper) transition band roll-off. 7. There's a specialized FIR filter that's proved useful in decimation applications [21-25]. Called a half-band FIR filter, its frequency response is symmetrical about the fs/4 point as shown in FIG. 33(a).
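Most numerical packages expose such a routine. A sketch using SciPy's scipy.signal.remez (assuming SciPy is available; the band edges fpass = 0.10·fs and fstop = 0.18·fs are illustrative values, not ones taken from the figures):

```python
import numpy as np
from scipy.signal import remez   # Parks-McClellan / Remez Exchange routine

# We supply the number of taps, the band edges, and the desired gains;
# the routine returns the equiripple (Chebyshev-type) h(k) coefficients.
numtaps = 31
h = remez(numtaps, [0.0, 0.10, 0.18, 0.5], [1.0, 0.0], fs=1.0)

H = np.abs(np.fft.rfft(h, 512))
print("gain at DC:", round(H[0], 3))
print("worst stopband gain:", round(H[int(0.18 * 512):].max(), 4))
```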
As such, the sum of fpass and fstop is fs/2. This symmetry has the beautiful property that the time-domain FIR impulse response has every other filter coefficient being zero, except at the peak. This enables us to avoid approximately half the number of multiplications when implementing this kind of filter. By way of example, FIG. 33(b) shows the coefficients for a 31-tap half-band filter where Δf was defined to be approximately fs/32 using the Remez Exchange method. (To preserve symmetry, the parameters δp and δs were specified to be equal to each other.) Notice how the alternating h(k) coefficients are zero, so we perform 17 multiplications per output sample instead of the expected 31 multiplications. For an S-tap half-band FIR filter, we need only perform (S + 1)/2 + 1 multiplications per output sample. Be careful though; there's a restriction on the number of coefficients. To build linear phase half-band FIR filters, S + 1 must be an integer multiple of four. The structure of a simple seven-coefficient half-band filter is shown in FIG. 33(c), with the h(1) and h(5) multipliers absent. [Section 13.7 shows a technique to further reduce the number of necessary multiplications for linear phase tapped delay line FIR filters, including half-band filters.] 8. Although we illustrated a couple of output phase shift examples for our original averaging FIR filter in FIG. 10, the subject of FIR phase response deserves additional attention. One of the dominant features of FIR filters is their linear phase response, which we can demonstrate by way of example. Given the 25 h(k) FIR filter coefficients in FIG. 34(a), we can perform a DFT to determine the filter's H(m) frequency response. The normalized real part, imaginary part, and magnitude of H(m) are shown in FIGs 34(b) and 34(c), respectively. Being complex values, each H(m) sample value can be described by its real and imaginary parts, or equivalently, by its magnitude |H(m)| and its phase Hø(m) shown in FIG.
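The zero-coefficient property follows directly from the fs/4 symmetry: the ideal half-band impulse response is h(k) = 0.5·sinc(k/2), which vanishes at every even k except the center tap. A quick NumPy check for S = 31 taps (so S + 1 = 32 is an integer multiple of four):

```python
import numpy as np

S = 31
k = np.arange(S) - (S - 1) // 2          # tap indices centered on zero
h = 0.5 * np.sinc(0.5 * k)               # truncated ideal half-band coefficients

# Every other coefficient is zero, so only (S + 1)/2 + 1 multiplications
# per output sample are needed.
nonzero = np.count_nonzero(np.abs(h) > 1e-12)
print("multiplications per output sample:", nonzero)   # (31 + 1)/2 + 1 = 17
```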
35(a). The phase of a complex quantity is, of course, the arctangent of the imaginary part divided by the real part, or ø = tan⁻¹(imag/real). Thus the phase Hø(m) is determined from the samples in FIG. 34(b). [Any DFT size greater than the h(k) width of 25 is sufficient to obtain H(m). The h(k) sequence was padded with 103 zeros to take a 128-point DFT, resulting in the H(m) sample values in FIG. 34.] The phase response in FIG. 35(a) certainly looks linear over selected frequency ranges, but what do we make of those sudden jumps, or discontinuities, in this phase response? If we were to plot the angles of Hø(m) starting with the m = 0 sample on a polar graph, using the nonzero real part of H(0) and the zero-valued imaginary part of H(0), we'd get the zero-angled Hø(0) phasor shown on the right side of FIG. 35(b). Continuing to use the real and imaginary parts of H(m) to plot additional phase angles results in the phasors going clockwise around the circle in increments of -33.75°. It's at Hø(6) that we discover the cause of the first discontinuity in FIG. 35(a). Taking the real and imaginary parts of H(6), we'd plot our phasor oriented at an angle of -202.5°. But FIG. 35(a) shows that Hø(6) is equal to 157.5°. The problem lies in the software routine used to generate the arctangent values plotted in FIG. 35(a). The software adds 360° to any negative angles in the range of -180° > ø ≥ -360°, i.e., angles in the upper half of the circle. This makes ø a positive angle in the range of 0° < ø ≤ 180°, and that's what gets plotted. (This apparent discontinuity between Hø(5) and Hø(6) is called phase wrapping.) So the true Hø(6) of -202.5° is converted to +157.5°, as shown in parentheses in FIG. 35(b). If we continue our polar plot for additional Hø(m) values, we'll see that their phase angles continue to decrease with an angle increment of -33.75°.
If we compensate for the software's behavior and plot phase angles more negative than -180°, by unwrapping the phase, we get the true Hø(m) shown in FIG. 35(c). Notice that Hø(m) is, indeed, linear over the passband of H(m). It's at Hø(17) that our particular H(m) experiences a polarity change of its real part while its imaginary part remains negative; this induces a true phase angle discontinuity that really is a constituent of H(m) at m = 17. (Additional phase discontinuities occur each time the real part of H(m) reverses polarity, as shown in FIG. 35(c).) The reader may wonder why we care about the linear phase response of H(m). The answer, an important one, requires us to introduce the notion of group delay. [When plotting filter phase responses, if we encounter a phase angle sample that looks like an unusual discontinuity, it's a good idea to add 360° to ø when ø is negative, or -360° when ø is positive, to see if that compensates for any software anomalies.] Group delay is defined as the negative of the derivative of the phase with respect to frequency, or G = -Δø/Δf. For FIR filters, then, group delay is determined by the slope of the Hø(m) response curve. When the group delay is constant, as it is over the passband of all FIR filters having symmetrical coefficients, all frequency components of the filter input signal are delayed by an equal amount of time G before they reach the filter's output. This means that no phase distortion is induced in the filter's desired output signal, and this is crucial in communications signals. For amplitude modulation (AM) signals, constant group delay preserves the time waveform shape of the signal's modulation envelope. That's important because the modulation portion of an AM signal contains the signal's information.
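Phase unwrapping of this sort is easy to sketch. Assuming a stand-in set of 25 symmetric (triangular) coefficients in place of FIG. 34's h(k), np.unwrap removes the wrapping discontinuities, and the recovered slope matches a 12-sample delay:

```python
import numpy as np

S = 25
h = 1 - np.abs(np.arange(S) - (S - 1) / 2) / 13.0   # symmetric stand-in taps
H = np.fft.fft(h, 128)                               # 128-point DFT

phase_wrapped = np.angle(H)            # jumps wherever |phase| crosses 180 deg
phase_true = np.unwrap(phase_wrapped)  # linear over the passband

# Phase increment per DFT bin, and the implied delay in samples.
delta_deg = np.degrees(phase_true[2] - phase_true[1])
delay = -(phase_true[2] - phase_true[1]) / (2 * np.pi / 128)
print("phase increment:", round(delta_deg, 2), "deg")   # -33.75 for G = 12
print("group delay:", round(delay, 2), "samples")
```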
Conversely, a nonlinear phase will distort the audio of AM broadcast signals, blur the edges of television video images, blunt the sharp edges of received radar pulses, and increase data errors in digital communication signals. (Group delay is sometimes called envelope delay because group delay was originally the subject of analysis due to its effect on the envelope, or modulation signal, of amplitude modulation AM systems.) Of course, we're not really concerned with the group delay outside the passband because signal energy outside the passband is what we're trying to eliminate through filtering. Over the passband frequency range for an S-tap FIR digital filter, group delay has been shown to be given by (eqn. 23), where ts is the sample period (1/fs). This group delay is measured in seconds. Eliminating the ts factor in (eqn. 23) would change its dimensions to samples. The value G, measured in samples, is always an integer for odd-tap FIR filters, and a non-integer for even-tap filters. Although we used a 128-point DFT to obtain the frequency responses in FIGs 34 and 35, we could just as well have used N = 32-point or N = 64-point DFTs. These smaller DFTs give us the phase response curves shown in FIGs 36(a) and 36(b). Notice how different the phase response curves are when N = 32 in FIG. 36(a) compared to when N = 128 in FIG. 36(c). The phase angle resolution is much finer in FIG. 36(c). The passband phase angle resolution, or increment Δø, is given by (eqn. 24), where N is the number of points in the DFT. So, for our S = 25-tap filter in FIG. 34(a), G = 12, and Δø is equal to -12·360°/32 = -135° in FIG. 36(a), while Δø is -33.75° in FIG. 36(c). If we look carefully at the sample values in FIG. 36(a), we'll see that they're all included within the samples in FIGs 36(b) and 36(c). Let's conclude this FIR phase discussion by reiterating the meaning of phase response.
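The arithmetic in (eqn. 23) and (eqn. 24) is worth spelling out for the S = 25-tap case:

```python
# Group delay of a symmetric S-tap FIR filter: G = (S - 1)/2 samples
# (multiply by ts for seconds), per (eqn. 23). The passband phase
# increment for an N-point DFT is -G * 360 / N degrees, per (eqn. 24).
S = 25
G = (S - 1) / 2                        # 12 samples, an integer for odd S
for N in (32, 64, 128):
    print(f"N = {N:3d}: delta_phase = {-G * 360 / N} degrees")
```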
The phase, or phase delay, at the output of an FIR filter is the phase of the first output sample relative to the phase of the filter's first input sample. Over the passband, that phase shift, of course, is a linear function of frequency. This will be true only as long as the filter has symmetrical coefficients. FIG. 10 is a good illustration of an FIR filter's output phase delay. For FIR filters, the output phase shift measured in degrees, for the passband frequency f = mfs/N, is expressed as (eqn. 25). Table 2: Values Used in (eqn. 25) for the Frequency fs/32. We can illustrate (eqn. 25) and show the relationship between the phase responses in FIG. 36 by considering the phase delay associated with the frequency fs/32 in Table 2. The subject of group delay is described further in Sect. F, where an example of envelope delay distortion, due to a filter's nonlinear phase, is illustrated. 9. Although convolution was originally an analysis tool used to prove continuous signal processing theorems, we now know that convolution affects every aspect of digital signal processing. Convolution influences our results whenever we analyze or filter any finite set of data samples from a linear time-invariant system. Convolution not only constrains DFTs to be just approximations of the continuous Fourier transform; it is the reason that discrete spectra are periodic in the frequency domain. It's interesting to note that, although we use the process of convolution to implement FIR digital filters, convolution effects induce frequency response ripple, preventing us from ever building a perfect digital filter. Its influence is so pervasive that, to repeal the law of convolution, quoting a phrase from Dr. Who, would "unravel the entire causal nexus" of digital signal processing. Convolution has always been a somewhat difficult concept for the beginner to grasp. That's not too surprising for several reasons.
Convolution's effect on discrete signal processing is not intuitively obvious for those without experience working with discrete signals, and the mathematics of convolution does seem a little puzzling at first. Moreover, in their sometimes justified haste, many authors present the convolution equation and abruptly start using it as an analysis tool without explaining its origin and meaning. For example, this author once encountered what was called a tutorial article on the FFT in a professional journal that proceeded to define convolution merely by presenting something like that shown in FIG. 37 with no further explanation! Unfortunately, few beginners can gain an understanding of the convolution process from FIG. 37 alone. Here, we avoid this dilemma by defining the process of convolution and gently proceeding through a couple of simple convolution examples. We conclude this section with a discussion of the powerful convolution theorem and show why it's so useful as a qualitative tool in discrete system analysis. 9.1 Discrete convolution is a process whose input is two sequences and whose output is a single sequence. Convolution inputs can be two time-domain sequences giving a time-domain output, or two frequency-domain input sequences providing a frequency-domain result. (Although the two input sequences must both be in the same domain for the process of convolution to have any practical meaning, their sequence lengths need not be the same.) Let's say we have two input sequences, h(k) of length P and x(k) of length Q, in the time domain. The output sequence y(n) of the convolution of the two inputs is defined mathematically as (eqn. 26)
Let's examine (eqn. 26) by way of example using the h(k) and x(k) sequences shown in FIG. 38. In this example, we can write the terms for each y(n) in (eqn. 26) as (eqn. 27). With P = 4 and Q = 3, we need evaluate only 4 + 3 - 1 = 6 individual y(n) terms. Because h(4) and h(5) are zero, we can eliminate some of the terms in (eqn. 27) and evaluate the remaining x(n-k) indices, giving the following expressions for y(n) as (eqn. 28). Looking at the indices of the h(k) and x(k) terms in (eqn. 28), we see two very important things occurring. First, convolution is merely the summation of a series of products, so the process itself is not very complicated. Second, notice that, for a given y(n), h(k)'s index is increasing as x(k)'s index is decreasing. This fact has led many authors to introduce a new sequence, x(-k), and use that new sequence to graphically illustrate the convolution process. The x(-k) sequence is simply our original x(k) reflected about the 0 index of the k axis, as shown in FIG. 39. Defining x(-k) as such enables us to depict the products and summations of (eqn. 28)'s convolution as in FIG. 40; that is, we can now align the x(-k) samples with the samples of h(k) for a given n index to calculate y(n). As shown in FIG. 40(a), the alignment of h(k) and x(n-k), for n = 0, yields y(0) = 1. This is the result of the first line in (eqn. 28), repeated on the right side of FIG. 40(a). The calculation of y(1), for n = 1, is depicted in FIG. 40(b), where x(n-k) is shifted one element to the right, resulting in y(1) = 3. We continue this x(n-k) shifting and incrementing n until we arrive at the last nonzero convolution result of y(5) shown in FIG. 40(f).
So, performing the convolution of h(k) and x(k) comprises: Step 1: plotting both h(k) and x(k); Step 2: flipping the x(k) sequence about the k = 0 value to get x(-k); Step 3: summing the products of h(k) and x(0-k) for all k to get y(0); Step 4: shifting x(-k) one sample to the right; Step 5: summing the products of h(k) and x(1-k) for all k to get y(1); and Step 6: shifting and summing products until there's no overlap of h(k) and the shifted x(n-k), in which case all further y(n)s are zero and we're done. The full convolution of our h(k) and x(k) is the y(n) sequence on the right side of FIG. 40(f). We've scanned the x(-k) sequence across the h(k) sequence and summed the products where the sequences overlap. By the way, notice that the y(n) sequence in FIG. 40(f) has six elements where h(k) had a length of four and x(k) was of length three. In the general case, if h(k) is of length P and x(k) is of length Q, y(n) will have a sequence length of L, where (eqn. 29). At this point, it's fair for the beginner to ask, "OK, so what? What does this strange convolution process have to do with digital signal processing?" The answer to that question lies in understanding the effects of the convolution theorem. 9.2 The convolution theorem is a fundamental constituent of digital signal processing. It impacts our results anytime we filter or Fourier transform discrete data. To see why this is true, let's simplify the notation of (eqn. 26) and use the abbreviated form y(n) = h(k) * x(k), (eqn. 30) where, again, the "*" symbol means convolution. The convolution theorem may be stated as follows: If two time-domain sequences h(k) and x(k) have DFTs of H(m) and X(m), respectively, then the DFT of h(k) * x(k) is the product H(m) • X(m). Likewise, the inverse DFT of H(m) • X(m) is h(k) * x(k). We can represent this relationship with the expression (eqn. 31). Equation (eqn.
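The six steps above can be followed literally in code. The sequences below are assumed stand-ins, chosen to reproduce the walk-through's y(0) = 1 and y(1) = 3 (FIG. 38 itself isn't reproduced here):

```python
import numpy as np

def convolve_by_steps(h, x):
    """Flip x(k) to x(-k), slide it across h(k), and sum the
    overlapping products at each shift n, per (eqn. 26)."""
    P, Q = len(h), len(x)
    y = []
    for n in range(P + Q - 1):            # (eqn. 29): L = P + Q - 1 outputs
        total = 0
        for k in range(P):
            if 0 <= n - k < Q:            # only where the sequences overlap
                total += h[k] * x[n - k]
        y.append(total)
    return y

h = [1, 1, 1, 1]                          # length P = 4 (assumed values)
x = [1, 2, 3]                             # length Q = 3 (assumed values)
print(convolve_by_steps(h, x))            # same result as np.convolve(h, x)
```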
31) tells us that the two sequences resulting from h(k) * x(k) and H(m) • X(m) are Fourier transform pairs. So, taking the DFT of h(k) * x(k) always gives us H(m) • X(m). Likewise, we can determine h(k) * x(k) by taking the inverse DFT of H(m) • X(m). The important point to learn from (eqn. 31) is that convolution in the time domain is equivalent to multiplication in the frequency domain. (We won't derive the convolution theorem here because its derivation is readily available to the interested reader.) To help us appreciate this principle, FIG. 41 sketches the relationship between convolution in the time domain and multiplication in the frequency domain.

FIG. 41 Relationships of the convolution theorem.

We can easily illustrate the convolution theorem by taking 8-point DFTs of h(k) and x(k) to get H(m) and X(m), respectively, and listing these values as in Table 3. (Of course, we have to pad h(k) and x(k) with zeros so they both have lengths of 8 to take 8-point DFTs.) Tabulating the inverse DFT of the product H(m) • X(m) allows us to verify (eqn. 31), as listed in the last two columns of Table 3, where the acronym IDFT again means inverse DFT. The values from Table 3 are shown in FIG. 42. (For simplicity, only the magnitudes of H(m), X(m), and H(m) • X(m) are shown in the figure.) We need to become comfortable with convolution in the time domain because, as we've learned, it's the process used in FIR filters. As detailed in Section 2, we perform discrete time-domain FIR filtering by convolving an input sequence, x(n) say, with the impulse response h(k) of a filter; for FIR filters, that impulse response happens to also be the filter's coefficients. The result of that convolution is a filtered time-domain sequence whose spectrum is modified (multiplied) by the filter's frequency response H(m). Section 13.10 describes a clever scheme to perform FIR filtering efficiently using the FFT algorithm to implement convolution.
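The Table 3-style check is easy to reproduce numerically. The sketch below again uses assumed stand-in sequences for FIG. 38's h(k) and x(k), zero-pads them to 8 points, and confirms both directions of (eqn. 31):

```python
import numpy as np

# Assumed example sequences standing in for FIG. 38's h(k) and x(k).
h = np.array([1.0, 1.0, 1.0, 1.0])
x = np.array([1.0, 2.0, 3.0])

N = 8
H = np.fft.fft(h, N)   # the length-N argument zero-pads h to 8 points
X = np.fft.fft(x, N)

# Forward direction: the DFT of h(k) * x(k) equals H(m) . X(m).
lhs = np.fft.fft(np.convolve(h, x), N)
rhs = H * X
assert np.allclose(lhs, rhs)

# Reverse direction: the inverse DFT of H(m) . X(m) recovers h(k) * x(k).
y = np.fft.ifft(H * X).real
assert np.allclose(y[:6], np.convolve(h, x))   # first P + Q - 1 = 6 samples
```

Note that the 8-point DFT length exceeds the convolution's length of P + Q - 1 = 6, which is why circular and linear convolution coincide here; with shorter DFTs the products would wrap around.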
[As we'll see in Section 6, the coefficients used for an infinite impulse response (IIR) filter are not equal to that filter's impulse response.]

Table 3 Convolution values of h(k) and x(k) from FIG. 38.

FIG. 42 Convolution relationships of h(k), x(k), H(m), and X(m) from FIG. 38.

Because of the duality of the convolution theorem, we could have swapped the time and frequency domains in our discussion of convolution and multiplication being a Fourier transform pair. This means that, similar to (eqn. 31), we can also write (eqn. 32) So the convolution theorem can be stated more generally as: Convolution in one domain is equivalent to multiplication in the other domain. FIG. 43 shows the relationship between multiplication in the time domain and convolution in the frequency domain. Equation (eqn. 32) is the fundamental relationship used in the process of windowing time-domain data to reduce DFT leakage, as discussed in Section 3.9.

9.3 Applying the Convolution Theorem

The convolution theorem is useful as a qualitative tool in predicting the effects of different operations in discrete linear time-invariant systems. For example, many authors use the convolution theorem to show why periodic sampling of continuous signals results in discrete samples whose spectra are periodic in the frequency domain. Consider the real continuous time-domain waveform in FIG. 44(a), with the one-sided spectrum of bandwidth B.

FIG. 43 Relationships of the convolution theorem related to multiplication in the time domain.

FIG. 44 Using convolution to predict the spectral replication effects of periodic sampling.

Being a real signal, of course, its spectrum is symmetrical about 0 Hz. (In FIG. 44, the large right-pointing shaded arrows represent Fourier transform operations.) Sampling this waveform is equivalent to multiplying it by a sequence of periodically spaced impulses, FIG. 44(b), whose values are unity. If we say that the sampling rate is f_s samples/second, then the sample period is t_s = 1/f_s seconds.
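We can check numerically what sampling at f_s implies for any spectrum computed from the samples: because the samples exist only at multiples of t_s, their spectrum repeats every f_s Hz. A minimal sketch (the 100 Hz rate and 12.5 Hz tone are arbitrary assumed values):

```python
import numpy as np

fs = 100.0                               # assumed sampling rate, Hz
n = np.arange(64)
x = np.cos(2 * np.pi * 12.5 * n / fs)    # samples of an assumed 12.5 Hz tone

def dtft(x, f, fs):
    """Spectrum of the sample sequence x evaluated at frequency f Hz."""
    k = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * f * k / fs))

# The sampled signal's spectrum is periodic with period fs:
# evaluating it at f and at f + fs gives the same value.
for f in (5.0, 12.5, 30.0):
    assert np.isclose(dtft(x, f, fs), dtft(x, f + fs, fs))
```

The equality holds at every frequency, not just these three, because the complex exponential at f + f_s is indistinguishable from the one at f when evaluated only at the sample instants.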
The result of this multiplication is the sequence of discrete time-domain impulses shown in FIG. 44(c). We can use the convolution theorem to help us predict the frequency-domain effect of this multiplication in the time domain. From our theorem, we now realize that the spectrum of the time-domain product must be the convolution of the original spectra. Well, we know what the spectrum of the original continuous waveform is. What about the spectrum of the time-domain impulses? It has been shown that the spectrum of periodic impulses, whose period is t_s seconds, is also periodic impulses in the frequency domain, with a spacing of f_s Hz, as shown in FIG. 44(b) [30]. Now, all we have to do is convolve the two spectra. In this case, convolution is straightforward because both of the frequency-domain functions are symmetrical about the zero-Hz point, and flipping one of them about zero Hz is superfluous. So we merely slide one of the functions across the other and plot the product of the two. The convolution of the original waveform spectrum and the spectral impulses results in replications of the waveform spectrum every f_s Hz, as shown in FIG. 44(c). This discussion reiterates the fact that the DFT is always periodic with a period of f_s Hz.

Here's another example of how the convolution theorem can come in handy when we try to understand digital signal processing operations. This author once used the theorem to resolve the puzzling result, at the time, of a triangular window function having its first frequency response null at twice the frequency of the first null of a rectangular window function. The question was "If a rectangular time-domain function of width T has its first spectral null at 1/T Hz, why does a triangular time-domain function of width T have its first spectral null at 2/T Hz?" We can answer this question by considering convolution in the time domain. FIG.
45 Using convolution to show that the Fourier transform of a triangular function has its first null at twice the frequency of the Fourier transform of a rectangular function. [The sin(x)/x function was introduced in our discussion of window functions in Section 3.9 and is covered in greater detail in Section 3.13.]

Look at the two rectangular time-domain functions shown in FIGs. 45(a) and 45(b). If their widths are each T seconds, their spectra are shown to have nulls at 1/T Hz, as depicted in the frequency-domain functions in FIGs. 45(a) and 45(b). We know that the frequency magnitude responses will be the absolute value of the classic sin(x)/x function. If we convolve those two rectangular time-domain functions of width T, we'll get the triangular function shown in FIG. 45(c). Again, in this case, flipping one rectangular function about the zero time axis is unnecessary. To convolve them, we need only scan one function across the other and determine the area of their overlap. The time shift where they overlap the most happens to be a zero time shift. Thus, our resultant convolution has a peak at a time shift of zero seconds because there's 100 percent overlap. If we slide one of the rectangular functions in either direction, the convolution decreases linearly toward zero. When the time shift is T/2 seconds, the rectangular functions have a 50 percent overlap. The convolution is zero when the time shift is T seconds; that's when the two rectangular functions cease to overlap. Notice that the triangular convolution result has a width of 2T, and that's really the key to answering our question. Because convolution in the time domain is equivalent to multiplication in the frequency domain, the Fourier transform magnitude of our 2T-width triangular function is the |sin(x)/x| in FIG. 45(a) times the |sin(x)/x| in FIG. 45(b), or the (sin(x)/x)^2 function in FIG. 45(c).
If a triangular function of width 2T has its first frequency-domain null at 1/T Hz, then the same function of width T must have its first frequency-domain null at 2/T Hz, as shown in FIG. 45(d), and that's what we set out to show. Comparison of FIG. 45(c) and FIG. 45(d) illustrates a fundamental Fourier transform property: compressing a function in the time domain results in an expansion of its corresponding frequency-domain representation.
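The rectangle-versus-triangle argument is easy to reproduce numerically. The sketch below (the 32-sample rectangle width and 512-point transform size are arbitrary assumed values) builds the triangle by convolving two equal rectangles and confirms that its spectrum is the rectangle's spectrum squared, sharing the rectangle's first null even though the triangle is twice as wide:

```python
import numpy as np

N = 512                           # transform size (zero-padded for resolution)
T = 32                            # rectangle width, in samples

rect = np.ones(T)
tri = np.convolve(rect, rect)     # triangle of width 2T (2T - 1 samples)

R = np.abs(np.fft.fft(rect, N))   # |sin(x)/x|-shaped magnitude
Tr = np.abs(np.fft.fft(tri, N))   # (sin(x)/x)^2-shaped magnitude

# Convolution in time = multiplication in frequency, so |Tri| = |Rect|^2.
assert np.allclose(Tr, R ** 2)

# The width-T rectangle's first spectral null falls at bin N/T
# (i.e., 1/T in normalized frequency); the triangle shares that null.
assert R[N // T] < 1e-9
assert Tr[N // T] < 1e-9
```

Since the triangle here has width 2T and its first null at 1/T, a triangle of width T scales to a first null at 2/T, which is the scaling property stated above.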
References

[1] Shynk, J. J. "Adaptive IIR Filtering," IEEE ASSP Magazine, April 1989.
[2] Laundrie, A. "Adaptive Filters Enable Systems to Track Variations," Microwaves & RF, September 1989.
[3] Bullock, S. R. "High Frequency Adaptive Filter," Microwave Journal, September 1990.
[4] Haykin, S. S. Adaptive Filter Theory, Prentice-Hall, Englewood Cliffs, New Jersey, 1986.
[5] Goodwin, G. C., and Sin, K. S. Adaptive Filtering Prediction and Control, Prentice-Hall, Englewood Cliffs, New Jersey, 1984.
[6] Gibbs, J. W. Nature, Vol. 59, 1899, p. 606.
[7] Stockham, T. G. "High-Speed Convolution and Correlation with Applications to Digital Filtering," Chapter 7 in Digital Processing of Signals. Ed. by B. Gold et al., McGraw-Hill, New York, 1969, pp. 203-232.
[8] Wait, J. V. "Digital Filters," in Active Filters: Lumped, Distributed, Integrated, Digital, and Parametric. Ed. by L. P. Huelsman, McGraw-Hill, New York, 1970, pp. 200-277.
[9] Dolph, C. L. "A Current Distribution for Broadside Arrays Which Optimizes the Relationship Between Beam Width and Side-Lobe Level," Proceedings of the IRE, Vol. 35, June 1946.
[10] Barbiere, D. "A Method for Calculating the Current Distribution of Chebyshev Arrays," Proceedings of the IRE, Vol. 40, January 1952.
[11] Cook, C. E., and Bernfeld, M. Radar Signals, Academic Press, New York, 1967, pp. 178-180.
[12] Kaiser, J. F. "Digital Filters," in System Analysis by Digital Computer. Ed. by F. F. Kuo and J. F. Kaiser, John Wiley and Sons, New York, 1966, pp. 218-277.
[13] Williams, C. S. Designing Digital Filters, Prentice-Hall, Englewood Cliffs, New Jersey, 1986, p. 117.
[14] Harris, F. J. "On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform," Proceedings of the IEEE, Vol. 66, No. 1, January 1978.
[15] McClellan, J. H., Parks, T. W., and Rabiner, L. R. "A Computer Program for Designing Optimum FIR Linear Phase Digital Filters," IEEE Trans. on Audio and Electroacoustics, Vol. AU-21, No. 6, December 1973, p. 515.
[16] Rabiner, L. R., and Gold, B. Theory and Application of Digital Signal Processing, Prentice-Hall, Englewood Cliffs, New Jersey, 1975, p. 136.
[17] Parks, T. W., and McClellan, J. H. "Chebyshev Approximation for Nonrecursive Digital Filters with Linear Phase," IEEE Trans. on Circuit Theory, Vol. CT-19, March 1972.
[18] McClellan, J. H., and Parks, T. W. "A Unified Approach to the Design of Optimum FIR Linear Phase Digital Filters," IEEE Trans. on Circuit Theory, Vol. CT-20, November 1973.
[19] Rabiner, L. R., McClellan, J. H., and Parks, T. W. "FIR Digital Filter Design Techniques Using Weighted Chebyshev Approximation," Proceedings of the IEEE, Vol. 63, No. 4, April 1975.
[20] Oppenheim, A. V., and Schafer, R. W. Discrete-Time Signal Processing, Prentice-Hall, Englewood Cliffs, New Jersey, 1989, p. 478.
[21] Funderburk, D. M., and Park, S. "Implementation of a C-QUAM AM-Stereo Receiver Using a General Purpose DSP Device," RF Design, June 1993.
[22] Harris Semiconductor Inc. "A Digital, 16-Bit, 52 Msps Halfband Filter," Microwave Journal, September 1993.
[23] Bellanger, M. G. "Computation Rate and Storage Estimation in Multirate Digital Filtering with Half-Band Filters," IEEE Trans. on Acoust., Speech, and Signal Proc., Vol. ASSP-25, No. 4, August 1977.
[24] Crochiere, R. E., and Rabiner, L. R. "Decimation and Interpolation of Digital Signals--A Tutorial Review," Proceedings of the IEEE, Vol. 69, No. 3, March 1981, p. 318.
[25] Bellanger, M. G., Daguet, J. L., and Lepagnol, G. P. "Interpolation, Extrapolation, and Reduction of Computational Speed in Digital Filters," IEEE Trans. on Acoust., Speech, and Signal Proc., Vol. ASSP-22, No. 4, August 1974.
[26] Oppenheim, A. V., Willsky, A. S., and Young, I. T. Signals and Systems, Prentice-Hall, Englewood Cliffs, New Jersey, 1983, p. 212.
[27] Stearns, S. Digital Signal Analysis, Hayden Book Co., Rochelle Park, New Jersey, 1975, p. 93.
[28] Oppenheim, A. V., and Schafer, R. W. Discrete-Time Signal Processing, Prentice-Hall, Englewood Cliffs, New Jersey, 1989, p. 58.
[29] Rabiner, L. R., and Gold, B. Theory and Application of Digital Signal Processing, Prentice-Hall, Englewood Cliffs, New Jersey, 1975, p. 59.
[30] Oppenheim, A. V., Willsky, A. S., and Young, I. T. Signals and Systems, Prentice-Hall, Englewood Cliffs, New Jersey, 1983, p. 201.
