James W. Gregory Introduction to Flight Testing


Figure 4.5 Power spectrum based on the FFT of the triangle waveform.

4.2 Filtering

We'll now discuss filtering techniques, and how they can be used to improve the interpretation of the signal by removing unwanted frequency content. When considering a signal represented in the frequency domain, filtering suppresses the amplitude of signal content over a select range of frequencies. This is done by defining and applying a transfer function, which is a frequency‐dependent weighting value that is multiplied with the signal in the frequency domain.

Common types of filters include low pass, high pass, band pass, and band stop schemes, which are illustrated in Figure 4.6. A low pass filter will attenuate signal content at frequencies above a specified cutoff frequency, and preserve signal content below that cutoff frequency. Low pass filtering is useful for removing high‐frequency noise in a signal, which may obscure the desired low frequency data. High pass filtering does just the opposite – it attenuates signal content at frequencies below the cutoff, and preserves signal content at higher frequencies. High pass filtering is particularly useful for removing the steady‐state voltage (a DC mean value) from a signal. The band pass filter is essentially a combination of the two, where the high‐pass cutoff frequency is at a lower frequency than the cutoff for the low‐pass filter. The region between the two cutoff frequencies is the passband. Finally, a band stop filter is designed to selectively remove a range of frequencies – it is the logical inverse of a band pass filter, where the low‐pass cutoff frequency is below the high‐pass cutoff frequency.

Figure 4.6 illustrates the effect of each of these filters on a signal (Eq. (4.1)) in both the time domain (left column) and in the frequency domain (center column). The right‐most column represents the transfer function for each filter. A filter can be applied to a given signal by transforming that signal into the frequency domain (via FFT), multiplying the signal's spectrum by the filter's transfer function, and then transforming the signal back into the time domain (via the inverse FFT). In Figure 4.6, the baseline signal is the same waveform that we considered in an earlier example (see Eq. (4.1)). The effect of the low pass filter is to suppress the higher frequency components at 40 and 80 Hz, leaving only the DC level and the sinusoidal component at 8 Hz. In contrast, the high pass filter removes the DC level and the lowest frequency component (8 Hz), leaving both high frequency components unattenuated. The band pass and band stop filters behave analogously: each suppresses signal content in the portions of the spectrum where its transfer function has a high level of attenuation.
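The transform–multiply–inverse-transform procedure described above can be sketched in a few lines of Python. The signal below is a stand-in for Eq. (4.1) – a DC level plus tones at 8, 40, and 80 Hz with assumed amplitudes, not the exact coefficients of the original equation – and the filter is an idealized brick-wall low pass with a 20 Hz cutoff:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2)); an FFT gives the same result faster."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstructed samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

# One second of data sampled at 256 Hz: a DC level plus tones at 8, 40, and 80 Hz
# (illustrative amplitudes, not the exact coefficients of Eq. (4.1)).
fs, N = 256, 256
t = [n / fs for n in range(N)]
x = [1.0 + math.sin(2 * math.pi * 8 * ti)
         + 0.5 * math.sin(2 * math.pi * 40 * ti)
         + 0.25 * math.sin(2 * math.pi * 80 * ti) for ti in t]

# Ideal (brick-wall) low pass transfer function: unity below the 20 Hz cutoff,
# zero above it. DFT bin k corresponds to min(k, N - k) * fs / N hertz.
H = [1.0 if min(k, N - k) * fs / N <= 20.0 else 0.0 for k in range(N)]

# Filter in the frequency domain, then transform back to the time domain.
X = dft(x)
y = idft([Xk * Hk for Xk, Hk in zip(X, H)])
# y now contains only the DC level and the 8 Hz component.
```

Real filters (Butterworth, Chebyshev, etc.) roll off gradually rather than cutting off abruptly, but the mechanics of applying them in the frequency domain are the same.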


Figure 4.6 Examples of various filtering schemes applied to signals in the time domain (left column) and frequency domain (center column). The right column shows the transfer function associated with each defined filter: low pass, high pass, band pass, and band stop.


Beyond the definition of the various filter types, there are other filter characteristics that are important to consider. The first is the filter class: common classes include Butterworth, Bessel, Chebyshev, and elliptic, each of which has varying characteristics. The second is the filter order, which governs the stopband attenuation rate – a description of how much increase in attenuation can be achieved over a given frequency interval. Attenuation rates are typically specified as dB/octave or dB/decade, where an octave is a factor of two change in frequency and a decade is an order of magnitude change in frequency. Filter order is related to attenuation rate for the Butterworth filter by 6m dB/octave, where m is the filter order. The third important parameter is the amount of ripple allowed in the passband or the stopband. Figure 4.6 illustrates ripple throughout the stopband, where the transfer function is not flat across a range of frequencies. There is typically a tradeoff between ripple and attenuation rate, where high attenuation rate is achieved at the expense of increased ripple, and the filter class has a significant impact on the attenuation rate.
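The 6m dB/octave rule for a Butterworth filter can be checked numerically using the standard Butterworth magnitude response, |H(f)| = 1/sqrt(1 + (f/fc)^(2m)). This is only an illustrative sketch; the cutoff frequency and order below are chosen arbitrarily:

```python
import math

def butterworth_mag(f, fc, m):
    # Standard low pass Butterworth magnitude response of order m with cutoff fc
    return 1.0 / math.sqrt(1.0 + (f / fc) ** (2 * m))

def attenuation_db(f, fc, m):
    # Attenuation in decibels relative to unity gain
    return -20.0 * math.log10(butterworth_mag(f, fc, m))

fc, m = 100.0, 4                         # arbitrary cutoff and order for illustration
a_low = attenuation_db(800.0, fc, m)     # attenuation well into the stopband
a_high = attenuation_db(1600.0, fc, m)   # one octave higher in frequency
# a_high - a_low is close to 6*m = 24 dB per octave
```

Well into the stopband the (f/fc)^(2m) term dominates, so doubling the frequency adds about 20·log10(2^m) ≈ 6m dB of attenuation.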

One final, critical characteristic of filters is the phase lag induced by the filter. In the same way that a filter exhibits attenuation as a function of frequency, it also induces a phase delay that is frequency‐dependent. In most data processing applications this is an undesirable feature of the filter, but it can be worked around through careful filter design or creative application of the filter (e.g., the filtfilt function in MATLAB's Signal Processing Toolbox, which feeds a signal forward and then backward through the filter in order to cancel the phase effect). Phase delay in processed flight test data can be important when comparing a filtered signal with an unfiltered signal. A detailed discussion of filter design is beyond the scope of this text, but appropriate resources may be consulted for further details (Wheeler and Ganji 2003; Bendat and Piersol 2010).
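The forward–backward idea can be demonstrated with a simple first‐order recursive smoothing filter. This is only a sketch of the concept, not MATLAB's actual implementation: running the filter once delays the waveform, while running it forward and then backward cancels the phase lag.

```python
import math

def smooth_forward(x, alpha=0.2):
    """First-order recursive low pass: y[n] = alpha*x[n] + (1 - alpha)*y[n-1]."""
    y = [x[0]]
    for v in x[1:]:
        y.append(alpha * v + (1.0 - alpha) * y[-1])
    return y

def smooth_zero_phase(x, alpha=0.2):
    """Run the filter forward, then backward, so the phase lags cancel."""
    forward = smooth_forward(x, alpha)
    return smooth_forward(forward[::-1], alpha)[::-1]

# A 2 Hz sine wave sampled at 100 Hz for one second
fs = 100
x = [math.sin(2 * math.pi * 2.0 * n / fs) for n in range(fs)]

yf = smooth_forward(x)
yzp = smooth_zero_phase(x)

peak = x.index(max(x))          # sample index of the true peak
peak_fwd = yf.index(max(yf))    # forward-only output peaks several samples late
peak_zp = yzp.index(max(yzp))   # forward-backward output peaks with the input
```

The cost of the forward–backward approach is that it requires the entire record, so it is a post‐processing technique rather than something that can run in real time.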

Filtering can be applied before digitization or afterwards. Typically pre‐DAQ analog filtering is used to prevent aliasing, which will be discussed in Section 4.4. Analog filtering involves the use of dedicated circuitry, and once the signal has been digitized there is no longer any flexibility to change the filter cutoff frequency or characteristics. Digital filtering, on the other hand, can be changed at will during post‐processing, allowing an interactive and adaptive approach to data analysis.

4.3 Digital Sampling: Bit Depth Resolution and Sample Rate

Let's now consider the details of how a signal is actually captured in digital form. The fundamental principle of DAQ is creating a digital representation of an analog signal. An analog signal is defined as one where the signal level (e.g., voltage) is a continuous function of time. Digital signals, however, are always a discretized representation of that continuous function, with the fidelity of that representation depending on how many discretization levels are used across amplitude and time. Resolution of the signal amplitude depends on the bit‐depth resolution of the data acquisition device, and the defined input range. The input range essentially determines the minimum and maximum voltages that can be recorded for a given measurement, where any input values exceeding those limits will be clipped. Typical data acquisition ranges can be unipolar, where all of the input voltages are of the same sign (e.g., 0–5 V, or 0–10 V), or bipolar, where positive or negative voltages can be measured (e.g., −5 to 5 V, or −10 to 10 V). Bit depth resolution is a measure of how many discretization levels are used to subdivide the input range. This resolution is typically expressed as a power of 2, due to the architecture of the data acquisition hardware. For example, a 12‐bit data acquisition device will have 2^12 (4096) discretization levels spanning the input range.

A combination of input range and the bit depth resolution defines the minimum change in voltage that can be resolved in a digital waveform. The maximum error between a given analog voltage and its digital representation is the quantization error,

eQ = R/2^(B+1),     (4.10)

where R is the input range, and B is the number of bits of the DAQ device. Digitization of a desired signal should use an input range that is as close to the limits of the anticipated signal as possible (with little risk of the signal exceeding that input range) and a bit depth resolution as high as possible. The downsides of increased bit depth resolution are increased cost of the data acquisition hardware, and larger file sizes required to store the digitized signals. In practice, the bit depth resolution should be sufficiently high such that the discretization error is small relative to the smallest voltage change in the desired signal.
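The quantization process can be sketched as follows. Note that the code-assignment and rounding conventions vary between real DAQ devices; this sketch assumes rounding to the nearest level with a step size of R/2^B:

```python
def quantize(v, v_min, v_max, bits):
    """Clip v to the input range, then round to the nearest of 2**bits levels."""
    levels = 2 ** bits
    step = (v_max - v_min) / levels          # one least-significant bit (LSB) in volts
    v = min(max(v, v_min), v_max)            # out-of-range inputs are clipped
    code = min(round((v - v_min) / step), levels - 1)  # integer code 0 .. 2**bits - 1
    return v_min + code * step               # reconstructed voltage

# 12-bit converter on a 0-5 V unipolar range: step = 5/4096 V (about 1.22 mV),
# so the worst-case quantization error is roughly half a step (about 0.61 mV).
step = 5.0 / 2 ** 12
err = abs(quantize(1.234, 0.0, 5.0, 12) - 1.234)
# err is below step / 2
```

Inputs beyond the range limits are simply clipped to the nearest limit, which is why the input range should be chosen to comfortably bracket the anticipated signal.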

Figures 4.7 and 4.8 illustrate the effects of bit depth resolution on the digital representation of an analog signal. For this example, a simple sine wave with frequency of 1000 Hz (6283 rad/s) is defined as


(4.11)


A digital representation of that signal with an input range of 0–5 V and a 4‐bit converter (2^4, or 16 steps) is shown in Figure 4.7, compared to the original analog function. The stair‐step appearance of the signal is due to quantization error, where the digital representation of the continuous waveform is rounded off to the nearest quantization level. If the bit depth resolution is increased to 2^12, as shown in the zoomed‐in waveform in Figure 4.8, the signal representation is much more faithful to the original analog signal. The increase in bit depth resolution from 2^4 to 2^12 provides a factor of 256 more levels to represent the analog waveform than the 4‐bit case shown in Figure 4.7.

Sample rate, defined as the number of digital samples acquired per second, is the other predominant factor that dictates the fidelity of the digital representation of the analog waveform. Sampling rate is determined by the time required to perform the analog‐to‐digital conversion process, limiting how many samples can be digitized in a given amount of time. If the sample rate is low – i.e., there is a long period of time between each sample of the analog signal – the acquisition process could miss important changes in the analog signal in the intervening time. Not only is important information missed by an insufficient sample rate, but the resulting digital representation can be misleading.


Figure 4.7 Digital representation of an analog waveform with input range of 0–5 V, a 4‐bit converter, and very high sample rate.



Figure 4.8 Digital representation of the analog waveform with input range of 0–5 V, a 12‐bit converter, and very high sample rate.


The sample rate must be sufficiently high relative to the highest frequency present in the signal. Note that signals can be a superposition of many constituent frequencies. As a general rule of thumb, a sample rate of at least 10 to 20 times the highest frequency present in the signal will provide a reasonable representation of the analog signal in the time domain. For example, Figure 4.9 shows a generally insufficient representation of the analog signal, where the sample rate of 4400 Hz is only 4.4 times higher than the 1000 Hz frequency being measured. In contrast, Figure 4.10 shows a fairly good representation of the signal, with a sample rate of 22 000 Hz (22 times the frequency of the sampled waveform), with only minor residual error in capturing the magnitude of the waveform peaks (approximately 80 mV error).


Figure 4.9 Digital representation of the analog waveform with a sample rate of 4400 Hz (4.4 samples per cycle of the 1000 Hz analog waveform), with very high bit depth resolution.



Figure 4.10 Digital representation of the analog waveform with a sample rate of 22 000 Hz (22 samples per cycle of the 1000 Hz analog waveform), with very high bit depth resolution.

4.4 Aliasing

A theoretical minimum sampling rate needed to capture all frequency content is given by the Nyquist criterion (Nyquist 1928). This limit states that the sampling rate (fs) must be greater than twice the maximum expected frequency component (fmax),

fs > 2fmax.     (4.12)

Notice that the criterion dictates that the sample rate must be strictly greater than, not greater than or equal to, twice the maximum frequency. If a sample rate of exactly twice the maximum frequency were employed, the digital representation of that frequency (if the signal were a single‐frequency waveform) could be a straight line, a situation illustrated by Figure 4.11. Based on this understanding of the Nyquist criterion, it is clear that the sample rate in the scenario illustrated in Figure 4.9, while not capturing the magnitude of the waveform peaks very well, is sufficiently high to identify the frequency of the signal being measured.

The consequences of not meeting the Nyquist criterion can be significant. If the Nyquist limit is not met, then frequencies in the signal beyond fs/2 cannot be resolved. This situation alone would be somewhat benign if the high frequencies were simply omitted. However, these high frequencies beyond the cutoff are actually represented as lower frequency content in the digital misrepresentation of the waveform. This can be insidious if the flight test engineer is unaware of the presence of this high‐frequency content and misinterprets these false low‐frequency representations as being real. This phenomenon of false representation of high frequency content (beyond the Nyquist cutoff) as low frequency content is referred to as aliasing.

The effects of aliasing can be represented in the time and frequency domains, as shown in Figure 4.12. The top row of the figure illustrates the actual waveform to be represented in digital form (the analog signal). The frequency of this signal is 20 Hz (left column), 80 Hz (center column), or 120 Hz (right column). Below this, the second row shows the digital representation of each signal when sampled at 100 Hz. Note that only the left‐most column satisfies the Nyquist criterion. Even though the indicated sample rate (100 Hz) is higher than the signal's fundamental frequency in the center column (80 Hz), the Nyquist criterion in this case (fs > 160 Hz) is not satisfied, so the sampled waveform appears at a lower, aliased frequency.
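The apparent (aliased) frequency of an undersampled tone can be predicted by folding the true frequency into the band from 0 to fs/2. A small sketch reproducing the three cases of Figure 4.12:

```python
def alias_frequency(f, fs):
    """Apparent frequency, in Hz, of a tone at f Hz when sampled at fs Hz.

    The true frequency is folded into the Nyquist band [0, fs/2].
    """
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

# Sampling at 100 Hz, as in Figure 4.12:
f20 = alias_frequency(20, 100)    # 20 Hz is below Nyquist and appears unchanged
f80 = alias_frequency(80, 100)    # 80 Hz appears as a 20 Hz tone
f120 = alias_frequency(120, 100)  # 120 Hz also appears as a 20 Hz tone
```

This is why aliasing is so insidious: the 80 and 120 Hz tones are indistinguishable from a genuine 20 Hz signal once digitized, which is the motivation for the analog anti-aliasing filters discussed in Section 4.2.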


Notes

1

There have been many iterations of the U.S. Standard Atmosphere over the years. The original version was published in 1958, and as scientific understanding of the atmosphere advanced, was updated in 1962, 1966, and finally 1976. Some older versions of the standard atmosphere persist today – for example, Anderson (2016) continues to refer to a 1959 definition of the standard atmosphere from the U.S. Air Force. However, the 1976 U.S. Standard Atmosphere and the 1993 ICAO standard atmosphere are widely accepted as the appropriate standards to use today.

2

There are two different definitions of absolute altitude that we will use in this chapter. The first one, considered here, is for development of the standard atmosphere. The second definition is widely used in aviation as the height above ground level. We will clarify these distinctions at the end of this chapter.

3

Additional details are available in an online supplement, “Effects of Kollsman Setting on Altimeter Reading.”
