Noise

Noise is a random fluctuation in an electrical signal, a characteristic of all electronic circuits. Noise generated by electronic devices varies greatly, as it can be produced by several different effects. Thermal and shot noise are unavoidable and due to the laws of nature, rather than to the device exhibiting them, while other types depend mostly on manufacturing quality and semiconductor defects.

In communication systems, noise is an error or undesired random disturbance of a useful information signal, introduced before or after the detector and decoder. The noise is a summation of unwanted or disturbing energy from natural and sometimes man-made sources. Noise is, however, typically distinguished from interference (e.g. cross-talk, deliberate jamming or other unwanted electromagnetic interference from specific transmitters), for example in the signal-to-noise ratio (SNR), signal-to-interference ratio (SIR) and signal-to-noise plus interference ratio (SNIR) measures. Noise is also typically distinguished from distortion, which is an unwanted alteration of the signal waveform, for example in the signal-to-noise and distortion ratio (SINAD).

In a carrier-modulated passband analog communication system, a certain carrier-to-noise ratio (CNR) at the radio receiver input would result in a certain signal-to-noise ratio in the detected message signal. In a digital communications system, a certain Eb/N0 (normalized signal-to-noise ratio) would result in a certain bit error rate (BER). While noise is generally unwanted, it can serve a useful purpose in some applications, such as random number generation or dithering.
Thermal noise: Johnson–Nyquist noise (sometimes thermal noise, Johnson noise, or Nyquist noise) is unavoidable, and generated by the random thermal motion of charge carriers (usually electrons) inside an electrical conductor, which happens regardless of any applied voltage.
Thermal noise is approximately white, meaning that its power spectral density is nearly equal throughout the frequency spectrum. The amplitude of the signal has very nearly a Gaussian probability density function. A communication system affected by thermal noise is often modelled as an additive white Gaussian noise (AWGN) channel. The root mean square (RMS) voltage due to thermal noise, v_n, generated in a resistance R (ohms) over bandwidth Δf (hertz), is given by

v_n = sqrt(4 k_B T R Δf)

where k_B is Boltzmann's constant (joules per kelvin) and T is the resistor's absolute temperature (kelvin). As the amount of thermal noise generated depends upon the temperature of the circuit, very sensitive circuits such as preamplifiers in radio telescopes are sometimes cooled in liquid nitrogen to reduce the noise level.
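As a quick sketch of the formula above in Python (the resistance, temperature, and bandwidth values below are illustrative, not from the text):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, joules per kelvin

def thermal_noise_vrms(resistance_ohms, temperature_k, bandwidth_hz):
    """RMS Johnson-Nyquist noise voltage: v_n = sqrt(4 k_B T R df)."""
    return math.sqrt(4 * K_B * temperature_k * resistance_ohms * bandwidth_hz)

# A 1 kOhm resistor at 300 K observed over a 10 kHz bandwidth:
v_room = thermal_noise_vrms(1e3, 300.0, 10e3)   # roughly 0.4 microvolts RMS
# The same resistor cooled toward liquid-nitrogen temperature (77 K):
v_cold = thermal_noise_vrms(1e3, 77.0, 10e3)
```

Cooling from 300 K to 77 K lowers the noise voltage by a factor of sqrt(300/77), about 2, which is why cooling sensitive preamplifiers pays off.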
Shot noise: Shot noise in electronic devices consists of unavoidable random statistical fluctuations of the electric current in an electrical conductor. Random fluctuations are inherent when current flows, as the current is a flow of discrete charges (electrons).
Flicker noise: Flicker noise, also known as 1/f noise, is a signal or process with a frequency spectrum that falls off steadily towards the higher frequencies, i.e. a pink spectrum. It occurs in almost all electronic devices, and results from a variety of effects, though always related to a direct current.
Burst noise: Burst noise consists of sudden step-like transitions between two or more levels (non-Gaussian), as high as several hundred millivolts, at random and unpredictable times. Each shift in offset voltage or current lasts for several milliseconds, and the intervals between pulses tend to be in the audio range (less than 100 Hz), leading to the term popcorn noise for the popping or crackling sounds it produces in audio circuits.
Avalanche noise: Avalanche noise is the noise produced when a junction diode is operated at the onset of avalanche breakdown, a semiconductor junction phenomenon in which carriers in a high voltage
gradient develop sufficient energy to dislodge additional carriers through physical impact, creating ragged current flows.
Quantification: The noise level in an electronic system is typically measured as an electrical power N in watts or dBm, a root mean square (RMS) voltage (identical to the noise standard deviation) in volts, dBμV, or a mean squared error (MSE) in volts squared. Noise may also be characterized by its probability distribution and noise spectral density N0(f) in watts per hertz. A noise signal is typically considered as a linear addition to a useful information signal.

Typical signal quality measures involving noise are signal-to-noise ratio (SNR or S/N), signal-to-quantization-noise ratio (SQNR) in analog-to-digital conversion and compression, peak signal-to-noise ratio (PSNR) in image and video coding, Eb/N0 in digital transmission, carrier-to-noise ratio (CNR) before the detector in carrier-modulated systems, and noise figure in cascaded amplifiers.

Noise is a random process, characterized by stochastic properties such as its variance, distribution, and spectral density. The spectral distribution of noise can vary with frequency, so its power density is measured in watts per hertz (W/Hz). Since the power in a resistive element is proportional to the square of the voltage across it, noise voltage (density) can be described by taking the square root of the noise power density, resulting in volts per root hertz (V/√Hz). Integrated circuit devices, such as operational amplifiers, commonly quote equivalent input noise level in these terms (at room temperature).

Noise power is measured in watts or decibels (dB) relative to a standard power, usually indicated by adding a suffix after dB. Examples of electrical noise-level measurement units are dBu, dBm0, dBrn, dBrnC, dBrn(f1 − f2), and dBrn(144-line). Noise levels are usually viewed in opposition to signal levels and so are often seen as part of a signal-to-noise ratio (SNR). Telecommunication systems strive to increase the ratio of signal level to noise level in order to effectively transmit data.
In practice, if the transmitted signal falls below the level of the noise (often designated as the noise floor) in the system, data can no longer be decoded at the receiver. Noise in telecommunication systems is a product of both internal and external sources to the system.
Dither: If the noise source is correlated with the signal, such as in the case of quantisation error, the intentional introduction of additional noise, called dither, can reduce overall noise in the
bandwidth of interest. This technique allows retrieval of signals below the nominal detection threshold of an instrument. This is an example of stochastic resonance.
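The effect can be illustrated with a minimal Python sketch of dithered quantisation (the signal value and quantizer step below are hypothetical): without dither, a constant sub-step signal always quantizes to the same level, while added dither lets averaging recover it.

```python
import random
random.seed(0)

def quantize(x, step=1.0):
    """Round x to the nearest multiple of the quantizer step."""
    return step * round(x / step)

true_value = 0.3   # a constant signal smaller than one quantization step
n = 20000

# Without dither, every sample quantizes to the same level (0.0), so the
# quantisation error is fully correlated with the signal and averaging
# cannot recover the sub-step value:
plain = sum(quantize(true_value) for _ in range(n)) / n

# With dither (uniform noise, one step peak-to-peak) added before the
# quantizer, the average of many quantized samples converges to ~0.3:
dithered = sum(quantize(true_value + random.uniform(-0.5, 0.5))
               for _ in range(n)) / n
```

The dither decorrelates the quantisation error from the signal, trading a small noise floor for the removal of the systematic error.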
White noise

Colours of noise: White · Pink · Brown/Red · Grey
White noise is a random signal (or process) with a flat power spectral density. In other words, the signal contains equal power within a fixed bandwidth at any center frequency. White noise draws its name from white light, in which the power spectral density of the light is distributed over the visible band in such a way that the eye's three color receptors (cones) are approximately equally stimulated. An infinite-bandwidth white noise signal is a purely theoretical construction. The bandwidth of white noise is limited in practice by the mechanism of noise generation, by the transmission medium and by finite observation capabilities. A random signal is considered "white noise" if it is observed to have a flat spectrum over a medium's widest possible bandwidth.
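The flatness of the spectrum can be checked numerically. A short Python sketch (the sample count and the 16-band split are arbitrary choices, not from the text): unit-variance Gaussian white noise should carry roughly the same power, about 1, in every frequency band.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 1 << 16
white = rng.standard_normal(n)          # Gaussian white noise samples

# Periodogram estimate of the power spectral density; for white noise of
# unit variance each bin has expectation ~1 across the whole spectrum.
psd = np.abs(np.fft.rfft(white)) ** 2 / n

# Average the noisy per-bin estimates over 16 coarse frequency bands
# (dropping the DC bin); a flat spectrum means all bands are ~equal.
bands = psd[1:].reshape(16, -1).mean(axis=1)
```

Repeating this with, say, a low-pass-filtered signal would show the band averages falling off with frequency instead of staying level.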
While it is usually applied in the context of frequency-domain signals, the term white noise is also commonly applied to a noise signal in the spatial domain. In this case, it has an autocorrelation which can be represented by a delta function over the relevant space dimensions. The signal is then "white" in the spatial frequency domain (this is equally true for signals in the angular frequency domain, e.g., the distribution of a signal across all angles in the night sky).
[Figure: An example realization of a Gaussian white noise process.]

The figure displays a finite-length, discrete-time realization of a white noise process generated by a computer. Being uncorrelated in time does not restrict the values a signal can take. Any distribution of values is possible (although it must have zero DC component). Even a binary signal which can only take on the values +1 or −1 will be white if the sequence is statistically uncorrelated. Noise having a continuous distribution, such as a normal distribution, can of course be white. It is often incorrectly assumed that Gaussian noise (i.e., noise with a Gaussian amplitude distribution; see normal distribution) is necessarily white noise, yet neither property implies the other. Gaussianity refers to the probability distribution with respect to the value, i.e. the probability that the signal has a certain given value, while the term "white" refers to the way the signal power is distributed over time or among frequencies.
[Figure: FFT spectrogram of pink noise (left) and white noise (right), shown with a linear frequency axis (vertical).]

We can therefore find Gaussian white noise, but also Poisson, Cauchy, etc. white noises. Thus, the two words "Gaussian" and "white" are often both specified in mathematical models of systems. Gaussian white noise is a good approximation of many real-world situations and generates mathematically tractable models. These models are used so frequently that the term
additive white Gaussian noise has a standard abbreviation: AWGN. Gaussian white noise has the useful statistical property that its values are independent (see Statistical independence). White noise is the generalized mean-square derivative of the Wiener process or Brownian motion.
Uses: White noise is used by some emergency vehicle sirens due to its ability to cut through background noise, which makes it easier to locate.

White noise is commonly used in the production of electronic music, usually either directly or as an input for a filter to create other types of noise signal. It is used extensively in audio synthesis, typically to recreate percussive instruments such as cymbals, which have high noise content in their frequency domain. It is also used to generate impulse responses. To set up the equalization (EQ) for a concert or other performance in a venue, a short burst of white or pink noise is sent through the PA system and monitored from various points in the venue so that the engineer can tell if the acoustics of the building naturally boost or cut any frequencies. The engineer can then adjust the overall equalization to ensure a balanced mix.

White noise can be used for frequency response testing of amplifiers and electronic filters. It is not used for testing loudspeakers, as its spectrum contains too great an amount of high-frequency content. Pink noise is used for testing transducers such as loudspeakers and microphones.

White noise is a common synthetic noise source used for sound masking by a tinnitus masker.[1] White noise is a particularly good source signal for masking devices as it contains higher frequencies in equal volumes to lower ones, and so is capable of more effective masking for the high-pitched ringing tones most commonly perceived by tinnitus sufferers. White noise is used as the basis of some random number generators. For example, Random.org uses a system of atmospheric antennae to generate random digit patterns from white noise. White noise machines are sold as privacy enhancers and sleep aids and to mask tinnitus. Some people claim white noise, when used with headphones, can aid concentration by masking irritating or distracting noises in a person's environment.[2]
Mathematical definitions
White random vector: A random vector w is a white random vector if and only if its mean vector and autocorrelation matrix are the following:

μ_w = E{w} = 0
R_ww = E{w wᵀ} = σ² I

That is, it is a zero-mean random vector, and its autocorrelation matrix is a multiple of the identity matrix. When the autocorrelation matrix is a multiple of the identity, we say that it has spherical correlation.
White random process (white noise): A continuous-time random process w(t), where t ∈ ℝ, is a white noise process if and only if its mean function and autocorrelation function satisfy the following:

μ_w(t) = E{w(t)} = 0
R_ww(t1, t2) = E{w(t1) w(t2)} = (N0/2) δ(t1 − t2)

i.e. it is a zero-mean process for all time and has infinite power at zero time shift, since its autocorrelation function is the Dirac delta function. The above autocorrelation function implies the following power spectral density:

S_ww(ω) = N0/2

since the Fourier transform of the delta function is equal to 1. Since this power spectral density is the same at all frequencies, we call it white as an analogy to the frequency spectrum of white light. A generalization to random elements on infinite-dimensional spaces, such as random fields, is the white noise measure.
Random vector transformations: Two theoretical applications using a white random vector are the simulation and whitening of another arbitrary random vector. To simulate an arbitrary random vector, we transform a white random vector with a carefully chosen matrix. We choose the transformation matrix so that the mean and covariance matrix of the transformed white random vector matches the mean and
covariance matrix of the arbitrary random vector that we are simulating. To whiten an arbitrary random vector, we transform it by a different carefully chosen matrix so that the output random vector is a white random vector. These two ideas are crucial in applications such as channel estimation and channel equalization in communications and audio. These concepts are also used in data compression.
Simulating a random vector: Suppose that a random vector x has covariance matrix K_xx. Since this matrix is Hermitian symmetric and positive semidefinite, by the spectral theorem from linear algebra we can diagonalize or factor the matrix in the following way:

K_xx = E Λ Eᵀ

where E is the orthogonal matrix of eigenvectors and Λ is the diagonal matrix of eigenvalues. We can simulate the 1st and 2nd moment properties of this random vector with mean μ and covariance matrix K_xx via the following transformation of a white vector w of unit variance:

x = E Λ^(1/2) w + μ

Thus, the output of this transformation has expectation

E{x} = E Λ^(1/2) E{w} + μ = μ

and covariance matrix

E{(x − μ)(x − μ)ᵀ} = E Λ^(1/2) E{w wᵀ} Λ^(1/2) Eᵀ = E Λ Eᵀ = K_xx
Whitening a random vector: The method for whitening a vector x with mean μ and covariance matrix K_xx is to perform the following calculation:

w = Λ^(−1/2) Eᵀ (x − μ)

Thus, the output of this transformation has expectation

E{w} = Λ^(−1/2) Eᵀ (E{x} − μ) = 0

and covariance matrix

E{w wᵀ} = Λ^(−1/2) Eᵀ E{(x − μ)(x − μ)ᵀ} E Λ^(−1/2) = Λ^(−1/2) Eᵀ K_xx E Λ^(−1/2)

By diagonalizing K_xx, we get the following:

Λ^(−1/2) Eᵀ (E Λ Eᵀ) E Λ^(−1/2) = Λ^(−1/2) Λ Λ^(−1/2) = I

Thus, with the above transformation, we can whiten the random vector to have zero mean and the identity covariance matrix.
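The simulation and whitening transformations can be sketched numerically in Python with NumPy (the mean vector and covariance matrix below are illustrative values): colour a unit-variance white vector to match a target covariance, then invert the transformation to recover white samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target mean and (symmetric, positive-definite) covariance to simulate:
mu = np.array([1.0, -2.0])
K = np.array([[4.0, 1.5],
              [1.5, 2.0]])

# Spectral factorization K = E diag(lam) E^T.
lam, E = np.linalg.eigh(K)

# --- Simulating: transform a unit-variance white vector w ---
n = 200_000
w = rng.standard_normal((2, n))
x = E @ np.diag(np.sqrt(lam)) @ w + mu[:, None]

# --- Whitening: invert the transformation on x ---
w2 = np.diag(1.0 / np.sqrt(lam)) @ E.T @ (x - mu[:, None])

K_est = np.cov(x)    # sample covariance of x, should approximate K
I_est = np.cov(w2)   # sample covariance of w2, should approximate I
```

With enough samples the estimated covariances converge to K and to the identity, matching the expectations derived above.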
Random signal transformations: We can extend the same two concepts of simulating and whitening to the case of continuous-time random signals or processes. For simulating, we create a filter into which we feed a white noise signal. We choose the filter so that the output signal simulates the 1st and 2nd moments of any arbitrary random process. For whitening, we feed any arbitrary random signal into a specially chosen filter so that the output of the filter is a white noise signal.
Simulating a continuous-time random signal:

[Figure: White noise fed into a linear, time-invariant filter to simulate the 1st and 2nd moments of an arbitrary random process.]

We can simulate any wide-sense stationary, continuous-time random process x(t) with constant mean μ and covariance function

K_x(τ) = E{(x(t) − μ)(x(t + τ) − μ)}

and power spectral density

S_x(ω) = ∫ K_x(τ) e^(−jωτ) dτ, integrated over all τ.

We can simulate this signal using frequency-domain techniques. Because K_x(τ) is Hermitian symmetric and positive semi-definite, it follows that S_x(ω) is real and can be factored as

S_x(ω) = |H(ω)|² = H(ω) H*(ω)

if and only if S_x(ω) satisfies the Paley–Wiener criterion.
If S_x(ω) is a rational function, we can then factor it into pole–zero form. Choosing a minimum-phase H(ω), so that its poles and zeros lie inside the left half s-plane, we can then simulate x(t) with H(ω) as the transfer function of the filter, by constructing the following linear, time-invariant filter:

x̂(t) = μ + ∫ h(t − τ) w(τ) dτ

where w(t) is a continuous-time white-noise signal with the following 1st and 2nd moment properties:

E{w(t)} = 0
E{w(t1) w(t2)} = K_w(t1, t2) = δ(t1 − t2)

Thus, the resultant signal x̂(t) has the same 2nd moment properties as the desired signal x(t).
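In discrete time the same idea can be sketched with a one-pole filter; the Python example below is a hypothetical stand-in for the general filter h(t). For the recursion x[k] = a·x[k−1] + w[k] driven by unit-variance white noise, the stationary variance is 1/(1 − a²) and the lag-1 correlation is a, so both can be checked against the simulated output.

```python
import random

random.seed(7)

# One-pole low-pass shaping filter: x[k] = a*x[k-1] + w[k], with w white.
a, n = 0.9, 400_000
x_prev, xs = 0.0, []
for _ in range(n):
    x_prev = a * x_prev + random.gauss(0.0, 1.0)
    xs.append(x_prev)

xs = xs[1000:]                     # discard the start-up transient
mean = sum(xs) / len(xs)
# Sample variance; theory predicts 1/(1 - a^2) ~ 5.26 for a = 0.9.
var = sum((v - mean) ** 2 for v in xs) / len(xs)
# Sample lag-1 autocorrelation; theory predicts a = 0.9.
r1 = sum(xs[i] * xs[i + 1] for i in range(len(xs) - 1)) / (len(xs) - 1) / var
```

The filter has coloured the flat input spectrum into a low-pass shape whose second moments match the chosen covariance model.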
Whitening a continuous-time random signal:
[Figure: An arbitrary random process x(t) fed into a linear, time-invariant filter that whitens x(t) to create white noise at the output.]

Suppose we have a wide-sense stationary, continuous-time random process defined with the same mean μ, covariance function K_x(τ), and power spectral density S_x(ω) as above. We can whiten this signal using frequency-domain techniques. We factor the power spectral density S_x(ω) as described above. Choosing the minimum-phase H(ω), so that its poles and zeros lie inside the left half s-plane, we can then whiten x(t) with the following inverse filter:

H_inv(ω) = 1 / H(ω)

We choose the minimum-phase filter so that the resulting inverse filter is stable. Additionally, we must be sure that S_x(ω) is strictly positive for all ω so that 1/H(ω) does not have any singularities. The final form of the whitening procedure is as follows:

w(t) = ∫ h_inv(t − τ) (x(τ) − μ) dτ

so that w(t) is a white noise random process with zero mean and constant, unit power spectral density

S_w(ω) = 1

Note that this power spectral density corresponds to a delta function for the covariance function of w(t):

K_w(τ) = δ(τ)
In music: White noise, pink noise, and brown noise are used as percussion in 8-bit (chiptune) music.
Flicker noise

Flicker noise is a type of electronic noise with a 1/f, or pink, spectrum. It is therefore often referred to as 1/f noise or pink noise, though these terms have wider definitions. It occurs in almost all electronic devices, and results from a variety of effects, such as impurities in a conductive channel and generation and recombination noise in a transistor due to base current. It is always related to a direct current. Its origin is not well known.

In electronic devices, it is a low-frequency phenomenon, as the higher frequencies are overshadowed by white noise from other sources. In oscillators, however, the low-frequency noise is mixed up to frequencies close to the carrier, which results in oscillator phase noise. Flicker noise is often characterized by the corner frequency f_c between the regions dominated by each type of noise. MOSFETs have a higher f_c than JFETs or bipolar transistors; for the latter it is usually below 2 kHz. The flicker noise voltage power in a MOSFET can be expressed by K/(C_ox · W · L · f), where K is a process-dependent constant and W and L are the channel width and length, respectively.[1]

Flicker noise is found in carbon-composition resistors, where it is referred to as excess noise, since it increases the overall noise level above the thermal noise level, which is present in all resistors. In contrast, wire-wound resistors have the least amount of flicker noise. Since flicker noise is related to the level of DC, if the current is kept low, thermal noise will be the predominant effect in the resistor, and the type of resistor used will not affect noise levels.
Measurement: For measurements, the interest is in the "drift" of a variable with respect to a measurement at a previous time. This is calculated by applying time differencing to the signal V(t):

ΔV(t) = V(t + θ) − V(t)

where θ is the time between measurements. The spectral density of the 1/f noise can be written

S_V(f) = K/f

and includes the contribution of both positive and negative frequency terms. After some manipulation, the variance of the voltage difference is:

⟨ΔV²⟩ = 4 ∫ S_V(f) sin²(π f θ) df = 2K · Cin(2π f_ul θ)

where Cin is the cosine integral [2]

Cin(x) = ∫ from 0 to x of (1 − cos t)/t dt

and f_ul is a brick-wall filter limiting the upper bandwidth during measurement. Real measurements involve more complicated calculations.
Removing flicker noise: For DC measurements, 1/f noise can be particularly troublesome, as it is very significant at low frequencies (tending to infinity with integration/averaging at DC). One powerful technique involves moving the signal of interest to a higher frequency and using a phase-sensitive detector to measure it. For example, the signal of interest can be chopped at a chosen frequency.
Equivalent noise resistance

In telecommunication, an equivalent noise resistance is a quantitative representation, in resistance units, of the spectral density of a noise-voltage generator, given by

R_n = W_n / (4 k T_0)

where W_n is the spectral density, k is Boltzmann's constant, and T_0 is the standard noise temperature (290 K), so that k T_0 = 4.00 × 10⁻²¹ W/Hz.

The equivalent noise resistance in terms of the mean-square noise-generator voltage, v_n², within a frequency increment Δf, is given by

R_n = v_n² / (4 k T_0 Δf)
Noise temperature

In electronics, noise temperature is a temperature (in kelvins) assigned to a component such that the noise power delivered by the noisy component to a noiseless matched resistor is given by

P_N = k T B

in watts, where:
- k is the Boltzmann constant (1.381 × 10⁻²³ J/K, joules per kelvin)
- T is the noise temperature (K)
- B is the noise bandwidth (Hz)

Engineers often model noisy components as an ideal component in series with a noisy resistor. The source resistor is often assumed to be at room temperature, conventionally taken as 290 K (17 °C, 62 °F).[1]
Noise in communication systems: A communications system is typically made up of a transmitter, a communications channel, and a receiver. The communications channel may consist of any one or a combination of many different physical media (air, coaxial cable, printed wiring board traces, etc.). The important thing to note is that no matter what physical media the channel consists of, the transmitted signal will be randomly corrupted by a number of different processes. The most common form of signal degradation is called additive noise.[2]

The additive noise in a receiving system can be of thermal origin (thermal noise) or can be from other noise-generating processes. Most of these other processes generate noise whose spectrum and probability distributions are similar to thermal noise. Because of these similarities, the
contributions of all noise sources can be lumped together and regarded as thermal noise. The noise power generated by all these sources (P_N) can be described by assigning to the noise a noise temperature T defined as:[3]

T = P_N / (k B)

In a wireless communications receiver, T would equal the sum of two noise temperatures:

T = T_ant + T_sys

T_ant is the antenna noise temperature and determines the noise power seen at the output of the antenna. The physical temperature of the antenna has no effect on T_ant. T_sys is the noise temperature of the receiver circuitry and is representative of the noise generated by the non-ideal components inside the receiver.
Noise factor and noise figure: An important application of noise temperature is its use in the determination of a component's noise factor. The noise factor quantifies the noise power that the component adds to the system when its input noise temperature is T_0:

F = 1 + T_e / T_0

where T_e is the component's noise temperature and T_0 is the standard noise temperature (290 K). The noise factor (a linear term) can be converted to noise figure (in decibels) using:

NF = 10 log₁₀(F)
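A minimal Python sketch of these two conversions (the 290 K example value is illustrative): a component whose noise temperature equals T_0 doubles the input noise power, giving F = 2 and a noise figure of about 3 dB.

```python
import math

T0 = 290.0  # standard noise temperature, kelvin

def noise_factor(t_e):
    """Noise factor F = 1 + T_e / T0 (linear)."""
    return 1.0 + t_e / T0

def noise_figure_db(t_e):
    """Noise figure NF = 10 log10(F), in decibels."""
    return 10.0 * math.log10(noise_factor(t_e))

# A component with equivalent noise temperature 290 K doubles the input
# noise power: F = 2, NF ~ 3.01 dB.
f = noise_factor(290.0)
nf = noise_figure_db(290.0)
```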
Noise temperature of cascaded devices: If there are multiple noisy components in cascade, the noise temperature of the cascade can be calculated using the Friis equation:[1]

T_cas = T_1 + T_2/G_1 + T_3/(G_1 G_2) + … + T_n/(G_1 G_2 ⋯ G_(n−1))

where
- T_cas = cascade noise temperature
- T_1 = noise temperature of the first component in the cascade
- T_2 = noise temperature of the second component in the cascade
- T_3 = noise temperature of the third component in the cascade
- T_n = noise temperature of the nth component in the cascade
- G_1 = linear gain of the first component in the cascade
- G_2 = linear gain of the second component in the cascade
- G_3 = linear gain of the third component in the cascade
- G_(n−1) = linear gain of the (n−1)th component in the cascade
Components early in the cascade have a much larger influence on the overall noise temperature than those later in the chain. This is because noise introduced by the early stages is, along with the signal, amplified by the later stages. The Friis equation shows why a good quality preamplifier is important in a receive chain.
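The Friis equation, and the dominance of the first stage, can be sketched in Python (the stage temperatures and gains below are hypothetical example values):

```python
def cascade_noise_temperature(temps_k, gains_linear):
    """Friis equation: T_cas = T1 + T2/G1 + T3/(G1*G2) + ...

    temps_k      -- noise temperature of each stage, in kelvin
    gains_linear -- linear power gain of each stage; the last stage's
                    gain is never used, so this list may be one shorter.
    """
    total, g = 0.0, 1.0
    for i, t in enumerate(temps_k):
        total += t / g
        if i < len(gains_linear):
            g *= gains_linear[i]
    return total

# A quiet LNA (35 K, gain 100) in front of a noisy mixer (1000 K):
# the LNA's gain divides the mixer's contribution down to 10 K.
t_good = cascade_noise_temperature([35.0, 1000.0], [100.0])   # 45.0 K
# Putting the noisy stage first ruins the chain:
t_bad = cascade_noise_temperature([1000.0, 35.0], [10.0])     # 1003.5 K
```

This is the numerical form of the point in the text: a good-quality preamplifier at the front of the receive chain sets the noise performance of the whole system.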
Measuring noise temperature: The direct measurement of a component's noise temperature is a difficult process. Suppose that the noise temperature of a low-noise amplifier (LNA) is measured by connecting a noise source to the LNA with a piece of transmission line. From the cascade noise temperature it can be seen that the noise temperature of the transmission line has the potential of being the largest contributor to the output measurement (especially when you consider that LNAs can have noise temperatures of only a few kelvins). To accurately measure the noise temperature of the LNA, the noise from the input coaxial cable needs to be accurately known.[4] This is difficult because poor surface finishes and reflections in the transmission line make actual noise temperature values higher than those predicted by theoretical analysis.[1]

Similar problems arise when trying to measure the noise temperature of an antenna. Since the noise temperature is heavily dependent on the orientation of the antenna, the direction that the antenna was pointed during the test needs to be specified. In receiving systems, the system noise temperature will have three main contributors: the antenna (T_ant), the transmission line, and the receiver circuitry (T_sys). The antenna noise temperature is considered to be the most difficult to measure because the measurement must be made in the field on an open system. One technique for measuring antenna noise temperature involves using cryogenically cooled loads to calibrate a noise figure meter before measuring the antenna. This provides a direct reference comparison at a noise temperature in the range of very low antenna noise temperatures, so that little extrapolation of the collected data is required.[5]
Noise figure
The noise figure (NF) is a measure of degradation of the signal-to-noise ratio (SNR), caused by components in a radio-frequency (RF) signal chain. The noise figure is defined as the ratio of the output noise power of a device to the portion thereof attributable to thermal noise in the input termination at standard noise temperature T_0 (usually 290 K). The noise figure is thus the ratio of actual output noise to that which would remain if the device itself did not introduce noise. It is a number by which the performance of a radio receiver can be specified.
General: The noise figure is the difference in decibels (dB) between the noise output of the actual receiver and the noise output of an "ideal" receiver with the same overall gain and bandwidth, when the receivers are connected to sources at the standard noise temperature T_0 (usually 290 K). The noise power from a simple load is equal to k T B, where k is Boltzmann's constant, T is the absolute temperature of the load (for example a resistor), and B is the measurement bandwidth.

This makes the noise figure a useful figure of merit for terrestrial systems, where the antenna effective temperature is usually near the standard 290 K. In this case, one receiver with a noise figure, say, 2 dB better than another will have an output signal-to-noise ratio that is about 2 dB better than the other. However, in the case of satellite communications systems, where the antenna is pointed out into cold space, the antenna effective temperature is often colder than 290 K. In these cases a 2 dB improvement in receiver noise figure will result in more than a 2 dB improvement in the output signal-to-noise ratio. For this reason, the related figure of effective noise temperature is often used instead of the noise figure for characterizing satellite-communication receivers and low-noise amplifiers.

In heterodyne systems, output noise power includes spurious contributions from image-frequency transformation, but the portion attributable to thermal noise in the input termination at standard noise temperature includes only that which appears in the output via the principal frequency transformation of the system, and excludes that which appears via the image-frequency transformation.
Definition: The noise factor F of a system is defined as:

F = SNR_in / SNR_out
where SNR_in and SNR_out are the input and output power signal-to-noise ratios, respectively. The noise figure NF is defined as:

NF = SNR_in,dB − SNR_out,dB

where SNR_in,dB and SNR_out,dB are in decibels (dB). The noise figure is the noise factor, given in dB:

NF = 10 log₁₀(F)

These formulae are only valid when the input termination is at standard noise temperature T_0, although in practice small differences in temperature do not significantly affect the values. The noise factor of a device is related to its noise temperature T_e:

F = 1 + T_e / T_0

Devices with no gain (e.g., attenuators) have a noise factor equal to their attenuation L (absolute value, not in dB) when their physical temperature equals T_0. More generally, for an attenuator at a physical temperature T, the noise temperature is T_e = (L − 1) T, giving a noise factor of:

F = 1 + (L − 1) T / T_0

If several devices are cascaded, the total noise factor can be found with Friis' formula:

F = F_1 + (F_2 − 1)/G_1 + (F_3 − 1)/(G_1 G_2) + … + (F_n − 1)/(G_1 G_2 ⋯ G_(n−1))

where F_n is the noise factor for the n-th device and G_n is the power gain (linear, not in dB) of the n-th device. In a well-designed receive chain, only the noise factor of the first amplifier should be significant.
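A small Python sketch of Friis' formula for noise factors (the stage values are hypothetical): a 1 dB preamp with 20 dB of gain in front of a 10 dB mixer leaves the cascade noise figure close to that of the preamp alone.

```python
import math

def cascade_noise_factor(factors, gains):
    """Friis' formula: F = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ...

    factors -- linear noise factor of each stage
    gains   -- linear power gain of each stage; the last stage's gain
               is never used, so this list may be one shorter.
    """
    total, g = 0.0, 1.0
    for i, f in enumerate(factors):
        total += f if i == 0 else (f - 1.0) / g
        if i < len(gains):
            g *= gains[i]
    return total

def db(ratio):
    """Convert a linear power ratio to decibels."""
    return 10.0 * math.log10(ratio)

# Preamp: F = 1.26 (~1 dB NF), gain 100 (20 dB). Mixer: F = 10 (10 dB NF).
f_tot = cascade_noise_factor([1.26, 10.0], [100.0])   # 1.26 + 9/100 = 1.35
nf_tot = db(f_tot)                                    # ~1.3 dB overall
```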
Superheterodyne receiver
[Figure: A 5-tube superheterodyne receiver made in Japan around 1955.]

In electronics, a superheterodyne receiver uses frequency mixing or heterodyning to convert a received signal to a fixed intermediate frequency, which can be more conveniently processed than the original radio carrier frequency. Virtually all modern radio and television receivers use the superheterodyne principle.
Two-section variable capacitor, used in superheterodyne receivers

The word heterodyne is derived from the Greek roots hetero- ("different") and dyne ("power"). The heterodyne technique was developed by Tesla circa 1896. Isochronous electro-mechanical local oscillators were used in connection with those receiving circuits.[1] The heterodyne technique was subsequently adopted by Canadian inventor Reginald Fessenden,[2] but it was not pursued because the local oscillator used was unstable in its frequency output, and vacuum tubes were not yet available.[3]
The superheterodyne principle was revisited in 1918 by U.S. Army Major Edwin Armstrong in France during World War I.[2] He invented this receiver as a means of overcoming the deficiencies of early vacuum tube triodes used as high-frequency amplifiers in radio direction finding equipment. Unlike simple radio communication, which only needs to make transmitted signals audible, direction-finders measure the received signal strength, which necessitates linear amplification of the actual carrier wave. In a triode radio-frequency (RF) amplifier, if both the plate and grid are connected to resonant circuits tuned to the same frequency, stray capacitive coupling between the grid and the plate will cause the amplifier to go into oscillation if the stage gain is much more than unity. In early designs, dozens (in some cases over 100) of low-gain triode stages had to be connected in cascade to make workable equipment, which drew enormous amounts of power in operation and required a team of maintenance engineers. The strategic value was so high, however, that the British Admiralty felt the high cost was justified. Armstrong realized that if RDF receivers could be operated at a higher frequency, this would allow better detection of enemy shipping. However, at that time no practical "short wave" amplifier (defined then as any frequency above 500 kHz) existed, due to the limitations of existing triodes. A "heterodyne" refers to a beat or "difference" frequency produced when two or more radio frequency carrier waves are fed to a detector. The term was coined by Canadian engineer Reginald Fessenden to describe his proposed method of producing an audible signal from the Morse code transmissions of an Alexanderson alternator-type transmitter. With the spark gap transmitters then in use, the Morse code signal consisted of short bursts of a heavily modulated carrier wave, which could be clearly heard as a series of short chirps or buzzes in the receiver's headphones.
However, the signal from an Alexanderson alternator did not have any such inherent modulation, and Morse code from one of those would only be heard as a series of clicks or thumps. Fessenden's idea was to run two Alexanderson alternators, one producing a carrier frequency 3 kHz higher than the other. In the receiver's detector, the two carriers would beat together to produce a 3 kHz tone; thus, in the headphones, the Morse signals would be heard as a series of 3 kHz beeps. For this he coined the term "heterodyne", meaning "generated by a difference" (in frequency). Later, when vacuum triodes became available, the same result could be achieved more conveniently by incorporating a "local oscillator" in the receiver, which became known as a "beat frequency oscillator" (BFO). As the BFO frequency was varied, the pitch of the heterodyne could be heard to vary with it. If the frequencies were too far apart, the heterodyne became ultrasonic and hence no longer audible.
It had been noticed some time before that if a regenerative receiver was allowed to go into oscillation, other receivers nearby would suddenly start picking up stations on frequencies different from those on which the stations were actually transmitting. Armstrong (and others) eventually deduced that this was caused by a "supersonic heterodyne" between the station's carrier frequency and the oscillator frequency. Thus if a station was transmitting on 300 kHz and the oscillating receiver was set to 400 kHz, the station would be heard not only at the original 300 kHz, but also at 100 kHz and 700 kHz. Armstrong realized that this was a potential solution to the "short wave" amplification problem, since the beat frequency still retained its original modulation, but on a lower carrier frequency. To monitor a frequency of 1500 kHz, for example, he could set up an oscillator to, say, 1560 kHz, which would produce a heterodyne difference frequency of 60 kHz, a frequency that could then be more conveniently amplified by the triodes of the day. He termed this the "intermediate frequency", often abbreviated to "IF". In December 1919, Major E. H. Armstrong gave publicity to an indirect method of obtaining short-wave amplification, called the super-heterodyne. The idea is to reduce the incoming frequency, which may be, say, 1,500,000 cycles (200 meters), to some suitable super-audible frequency which can be amplified efficiently, then passing this current through a radio frequency amplifier and finally rectifying and carrying on to one or two stages of audio frequency amplification.[4] Early superheterodyne receivers used IFs as low as 20 kHz, often based on the self-resonance of iron-cored transformers. This made them extremely susceptible to image frequency interference, but at the time the main objective was sensitivity rather than selectivity. Using this technique, a small number of triodes could be made to do the work that formerly required dozens of triodes.
In the 1920s, commercial IF filters looked very similar to 1920s audio interstage coupling transformers, had very similar construction and were wired up in an almost identical manner, and so they were referred to as "IF transformers". By the mid-1930s, however, superheterodynes were using higher intermediate frequencies (typically around 440-470 kHz), with tuned coils similar in construction to the aerial and oscillator coils; the name "IF transformer" nevertheless persisted and is still used today. Modern receivers typically use a mixture of ceramic resonators or SAW (surface acoustic wave) resonators as well as traditional tuned-inductor IF transformers. Armstrong was able to rapidly put his ideas into practice, and the technique was rapidly adopted by the military. However, it was less popular when commercial radio broadcasting began in the 1920s, mostly due to the need for an extra tube (for the oscillator), the generally higher cost of the receiver, and the level of technical skill required to operate it. For early domestic radios, tuned radio frequency (TRF) receivers, also called the Neutrodyne, were more popular because they were cheaper, easier for a non-technical owner to use, and less costly to operate. Armstrong
eventually sold his superheterodyne patent to Westinghouse, who then sold it to RCA, the latter monopolizing the market for superheterodyne receivers until 1930.[5] By the 1930s, improvements in vacuum tube technology rapidly eroded the TRF receiver's cost advantages, and the explosion in the number of broadcasting stations created a demand for cheaper, higher-performance receivers. The development of practical indirectly-heated-cathode tubes allowed the mixer and oscillator functions to be combined in a single pentode tube, in the so-called autodyne mixer. This was rapidly followed by the introduction of low-cost multi-element tubes specifically designed for superheterodyne operation. These allowed the use of much higher intermediate frequencies (typically around 440-470 kHz), which eliminated the problem of image frequency interference. By the mid-1930s, the TRF technique was obsolete for commercial receiver production. The superheterodyne principle was eventually taken up for virtually all commercial radio and TV designs.
Operation

The superheterodyne receiver has three elements: the local oscillator, a frequency mixer that mixes the local oscillator's signal with the received signal, and a tuned amplifier. Reception starts with an antenna signal, optionally amplified, including the frequency the user wishes to tune, f_d. The local oscillator is tuned to produce a frequency close to f_d, called f_LO. The received signal is mixed with the local oscillator signal. This stage does not just linearly add the two inputs, like an audio mixer. Instead, it multiplies the input by the local oscillator, producing four frequencies in the output: the original signal f_d, the original f_LO, and the two new frequencies f_d + f_LO and f_d - f_LO. The output signal also generally contains a number of undesirable mixture products as well. (These are third- and higher-order intermodulation products. If the mixing were performed as a pure, ideal multiplication, the original f_d and f_LO would also not appear; in practice they do appear because mixing is done by a nonlinear process that only approximates true ideal multiplication.) The amplifier portion of the system is tuned to be highly selective at a single frequency, f_IF. By changing f_LO, the resulting f_d - f_LO (or f_d + f_LO) signal can be tuned to the amplifier's f_IF. In typical amplitude modulation ("AM radio" in the U.S., or MW) receivers, that frequency is 455 kHz; for FM receivers, it is usually 10.7 MHz; for television, 45 MHz. Other signals from the mixed output of the heterodyne are filtered out by the amplifier.
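The mixing arithmetic described above can be illustrated numerically. The sketch below, with an illustrative sample rate and frequencies of my own choosing, multiplies a received tone by a local-oscillator tone and inspects the spectrum; an ideal multiplier leaves energy only at the sum and difference frequencies:

```python
import numpy as np

fs = 1_000_000                      # sample rate in Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)      # 10 ms observation window
f_d, f_lo = 200_000, 155_000        # received and local-oscillator frequencies

# Ideal multiplying mixer: cos(a)*cos(b) = 0.5*cos(a-b) + 0.5*cos(a+b)
mixed = np.cos(2 * np.pi * f_d * t) * np.cos(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
peaks = freqs[spectrum > 0.25 * spectrum.max()]
print(peaks)  # energy near f_d - f_lo = 45 kHz and f_d + f_lo = 355 kHz
```

A real mixer is only approximately a multiplier, so f_d, f_lo and intermodulation products would also appear at reduced levels.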
Design
The diagram below shows the basic elements of a single-conversion superheterodyne receiver. The essential elements of a local oscillator and a mixer followed by a fixed-tuned filter and IF amplifier are common to all superhet circuits.[6] Cost-optimized designs may use one active device for both local oscillator and mixer; this is sometimes called a "converter" stage. One such example is the pentagrid converter.
The advantage of this method is that most of the radio's signal path has to be sensitive to only a narrow range of frequencies. Only the front end (the part before the frequency converter stage) needs to be sensitive to a wide frequency range. For example, the front end might need to be sensitive to 1-30 MHz, while the rest of the radio might need to be sensitive only to 455 kHz, a typical IF. Only one or two tuned stages need to be adjusted to track over the tuning range of the receiver; all the intermediate-frequency stages operate at a fixed frequency which need not be adjusted. To overcome obstacles such as image response, multiple IF stages are used, and in some cases multiple stages with two IFs of different values are used. For example, the front end might be sensitive to 1-30 MHz, the first half of the radio to 5 MHz, and the last half to 50 kHz. Two frequency converters would be used, and the radio would be a "double conversion superheterodyne";[6] a common example is a television receiver, where the audio information is obtained from a second stage of intermediate-frequency conversion. Receivers which are tunable over a wide bandwidth (e.g. scanners) may use an intermediate frequency higher than the signal, in order to improve image rejection.[7]

Superheterodyne receivers have superior characteristics to simpler receiver types in frequency stability and selectivity. They offer better stability than tuned radio frequency (TRF) receivers because a tuneable oscillator is more easily stabilized than a tuneable amplifier, especially with modern frequency synthesizer technology. IF filters can give narrower passbands at the same Q factor than an equivalent RF filter. A fixed IF also allows the use of a crystal filter when exceptionally high selectivity is necessary.[6] Regenerative and super-regenerative receivers offer better sensitivity than a TRF receiver, but suffer from stability and selectivity problems.
In the case of modern television receivers, no other technique was able to produce the precise bandpass characteristic needed for vestigial sideband reception, first used with the original NTSC system introduced in 1941. This originally involved a complex collection of tuneable inductors which needed careful adjustment, but since the 1970s or early 1980s these have been replaced with precision electromechanical surface acoustic wave (SAW) filters. Fabricated by precision laser milling techniques, SAW filters are cheaper to produce, can be made to extremely close tolerances, and are stable in operation. To avoid the tooling costs associated with these components, most manufacturers then tended to design their receivers around the fixed range of frequencies offered, which resulted in de facto standardization of intermediate frequencies. Microprocessor technology allows replacing the superheterodyne receiver design with a software defined radio architecture, where the IF processing after the initial IF filter is implemented in software. This technique is already in use in certain designs, such as very low-cost FM radios incorporated into mobile phones, since the system already has the necessary microprocessor. Radio transmitters may also use a mixer stage to produce an output frequency, working more or less as the reverse of a superheterodyne receiver.
Intermediate frequencies

Usually the intermediate frequency is lower than either the carrier or oscillator frequencies, but with some types of receiver (e.g. scanners and spectrum analyzers) it is more convenient to use a higher intermediate frequency. In order to avoid interference to and from signal frequencies close to the intermediate frequency, in many countries IFs are controlled by regulatory authorities. Examples of common IFs are 455 kHz for medium-wave AM radio, 10.7 MHz for FM, 38.9 MHz (Europe) or 45 MHz (US) for television, and 70 MHz for satellite and terrestrial microwave equipment.
High-side and low-side injection

The amount that a signal is down-shifted by the local oscillator depends on whether its frequency f is higher or lower than f_LO. That is because its new frequency is |f - f_LO| in either case. Therefore, there are potentially two signals that could both shift to the same f_IF: one at f = f_LO + f_IF and another at f = f_LO - f_IF. One of those signals, called the image frequency, has to be filtered out prior to the mixer to avoid aliasing. When the upper one is filtered out, it is called high-side injection, because f_LO is above the frequency of the received signal. The other case is called low-side injection. High-side injection also reverses the order of a signal's frequency components. Whether that actually changes the signal depends on whether it has spectral symmetry. The reversal can be undone later in the receiver, if necessary.
Image frequency (f_image)

One major disadvantage of the superheterodyne receiver is the problem of image frequency. In heterodyne receivers, an image frequency is an undesired input frequency equal to the station frequency plus twice the intermediate frequency. The image frequency results in two stations being received at the same time, thus producing interference. Image frequencies can be eliminated by sufficient attenuation of the incoming signal by the RF amplifier filter of the superheterodyne receiver.
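The image-frequency arithmetic is simple enough to sketch directly; the helper name and example frequencies are illustrative:

```python
def image_frequency(f_station, f_if, high_side=True):
    """Image frequency for a superheterodyne receiver.

    With high-side injection (f_LO = f_station + f_IF) the image lies
    2*f_IF above the station; with low-side injection it lies 2*f_IF below."""
    return f_station + 2 * f_if if high_side else f_station - 2 * f_if

# An AM station at 600 kHz received with a 455 kHz IF (high-side injection):
# the image is at 600 + 2*455 = 1510 kHz
print(image_frequency(600e3, 455e3))
```

Any signal near 1510 kHz reaching the mixer in this example would also land on the 455 kHz IF, which is why it must be attenuated before mixing.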
Early autodyne receivers typically used IFs of only 150 kHz or so, as it was difficult to maintain reliable oscillation at higher frequencies. As a consequence, most autodyne receivers needed quite elaborate antenna tuning networks, often involving double-tuned coils, to avoid image interference. Later superhets used tubes especially designed for oscillator/mixer use, which were able to work reliably with much higher IFs, reducing the problem of image interference and so allowing simpler and cheaper aerial tuning circuitry. For medium-wave AM radio, a variety of IFs have been used, but 455 kHz is the most common.
Local oscillator radiation

It is difficult to keep stray radiation from the local oscillator below the level that a nearby receiver can detect. The receiver's local oscillator can act like a miniature CW transmitter. This means that there can be mutual interference in the operation of two or more superheterodyne receivers in close proximity. In espionage, oscillator radiation gives a means to detect a covert receiver and its operating frequency. One effective way of preventing the local oscillator signal from radiating out of the receiver's antenna is to add a stage of RF amplification between the receiver's antenna and its mixer stage.
Local oscillator sideband noise

Local oscillators typically generate a single-frequency signal that has negligible amplitude modulation but some random phase modulation. Either of these impurities spreads some of the signal's energy into sideband frequencies. That causes a corresponding widening of the receiver's
frequency response, which would defeat the aim of making a very narrow-bandwidth receiver, such as one intended to receive low-rate digital signals. Care needs to be taken to minimise oscillator phase noise, usually by ensuring that the oscillator never enters a non-linear mode.
Modulation
In electronics, modulation is the process of varying one or more properties of a high-frequency periodic waveform, called the carrier signal, with respect to a modulating signal. This is done in a similar fashion as a musician may modulate a tone (a periodic waveform) from a musical instrument by varying its volume, timing and pitch. The three key parameters of a periodic waveform are its amplitude ("volume"), its phase ("timing") and its frequency ("pitch"), all of which can be modified in accordance with a low-frequency signal to obtain the modulated signal. Typically a high-frequency sinusoid waveform is used as the carrier signal, but a square-wave pulse train may also be used.

In telecommunications, modulation is the process of conveying a message signal, for example a digital bit stream or an analog audio signal, inside another signal that can be physically transmitted. Modulation of a sine waveform is used to transform a baseband message signal into a passband signal, for example a radio-frequency signal (RF signal). In radio communications, cable TV systems or the public switched telephone network, for instance, electrical signals can only be transferred over a limited passband frequency spectrum, with specific (non-zero) lower and upper cutoff frequencies. Modulating a sine-wave carrier makes it possible to keep the frequency content of the transferred signal as close as possible to the centre frequency (typically the carrier frequency) of the passband. When coupled with demodulation, this technique can be used to, among other things, transmit a signal through a channel which may be opaque to the baseband frequency range (for instance, when sending a telephone signal through a fiber-optic strand).

In music synthesizers, modulation may be used to synthesise waveforms with a desired overtone spectrum. In this case the carrier frequency is typically of the same order as, or much lower than, the modulating waveform. See for example frequency modulation synthesis or ring modulation.
A device that performs modulation is known as a modulator, and a device that performs the inverse operation is known as a demodulator (sometimes detector or demod). A device that can do both operations is a modem (short for "modulator-demodulator").
The aim of digital modulation is to transfer a digital bit stream over an analog bandpass channel, for example over the public switched telephone network (where a bandpass filter limits the frequency range to between 300 and 3400 Hz) or over a limited radio frequency band.

The aim of analog modulation is to transfer an analog baseband (or lowpass) signal, for example an audio signal or TV signal, over an analog bandpass channel, for example a limited radio frequency band or a cable TV network channel.

Analog and digital modulation facilitate frequency-division multiplexing (FDM), where several lowpass information signals are transferred simultaneously over the same shared physical medium, using separate passband channels.

The aim of digital baseband modulation methods, also known as line coding, is to transfer a digital bit stream over a baseband channel, typically a non-filtered copper wire such as a serial bus or a wired local area network.

The aim of pulse modulation methods is to transfer a narrowband analog signal, for example a phone call, over a wideband baseband channel or, in some of the schemes, as a bit stream over another digital transmission system.
Analog modulation methods

In analog modulation, the modulation is applied continuously in response to the analog information signal.
A low-frequency message signal (top) may be carried by an AM or FM radio wave.

Common analog modulation techniques are:
- Amplitude modulation (AM): the amplitude of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal
  - Double-sideband modulation (DSB)
    - Double-sideband modulation with carrier (DSB-WC), used on the AM radio broadcasting band
    - Double-sideband suppressed-carrier transmission (DSB-SC)
    - Double-sideband reduced-carrier transmission (DSB-RC)
  - Single-sideband modulation (SSB, or SSB-AM)
    - SSB with carrier (SSB-WC)
    - SSB suppressed-carrier modulation (SSB-SC)
  - Vestigial sideband modulation (VSB, or VSB-AM)
  - Quadrature amplitude modulation (QAM)
- Angle modulation
  - Frequency modulation (FM): the frequency of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal
  - Phase modulation (PM): the phase shift of the carrier signal is varied in accordance with the instantaneous amplitude of the modulating signal
The accompanying figure shows the results of (amplitude-)modulating a signal onto a carrier (both of which are sine waves). At any point along the y-axis, the amplitude of the modulated signal is equal to the sum of the carrier signal and the modulating signal amplitudes.
Simple example of amplitude modulation.
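Amplitude modulation of a sine-wave message onto a sine-wave carrier can be sketched numerically. The sample rate, frequencies and modulation index below are illustrative choices, not values from the text:

```python
import numpy as np

fs = 50_000                        # sample rate in Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of signal
f_c, f_m = 5_000, 500              # carrier and message frequencies

message = np.sin(2 * np.pi * f_m * t)
carrier = np.cos(2 * np.pi * f_c * t)

m = 0.5                            # modulation index (0 < m <= 1 avoids overmodulation)
am = (1 + m * message) * carrier   # envelope follows the message

# the modulated signal stays within (1 + m) times the carrier amplitude
print(am.max(), am.min())
```

Plotting `am` against `t` would reproduce the kind of envelope shown in the figure: a carrier whose amplitude swings between (1 - m) and (1 + m) at the message rate.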
Digital modulation methods

In digital modulation, an analog carrier signal is modulated by a digital bit stream. Digital modulation methods can be considered as digital-to-analog conversion, and the corresponding demodulation or detection as analog-to-digital conversion. The changes in the carrier signal are chosen from a finite number of M alternative symbols (the modulation alphabet).
Schematic of a 4 baud (8 bit/s) data link.

Example: A telephone line is designed for transferring audible sounds, for example tones, and not digital bits (zeros and ones). Computers may, however, communicate over a telephone line by means of modems, which represent the digital bits by tones, called symbols. If there are four alternative symbols (corresponding to a musical instrument that can generate four different tones, one at a time), the first symbol may represent the bit sequence 00, the second 01, the third 10 and the fourth 11. If the modem plays a melody consisting of 1000 tones per second, the symbol rate is 1000 symbols/second, or 1000 baud. Since each tone (i.e., symbol) represents a message consisting of two digital bits in this example, the bit rate is twice the symbol rate, i.e. 2000 bits per second. This is similar to the technique used by dialup modems as opposed to DSL modems.
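The four-tone modem example above can be sketched directly; the tone frequencies are illustrative, not from any modem standard:

```python
# Map bit pairs to four tone frequencies, as in the 4-symbol example above.
TONES = {'00': 600, '01': 800, '10': 1000, '11': 1200}  # Hz, illustrative

def bits_to_symbols(bits):
    """Group a bit string into 2-bit symbols and look up their tones."""
    return [TONES[bits[i:i + 2]] for i in range(0, len(bits), 2)]

symbols = bits_to_symbols('00101101')
print(symbols)        # [600, 1000, 1200, 800]

baud = 1000           # symbols per second
bits_per_symbol = 2   # four symbols -> log2(4) = 2 bits each
bit_rate = baud * bits_per_symbol
print(bit_rate)       # 2000 bit/s, twice the symbol rate
```

A real modem would then synthesize each tone for one symbol period; here only the mapping and rate arithmetic are shown.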
According to one definition of digital signal, the modulated signal is a digital signal, and according to another definition, the modulation is a form of digital-to-analog conversion. Most textbooks would consider digital modulation schemes as a form of digital transmission, synonymous with data transmission; very few would consider it analog transmission.
Fundamental digital modulation methods

The most fundamental digital modulation techniques are based on keying:
- PSK (phase-shift keying): a finite number of phases are used.
- FSK (frequency-shift keying): a finite number of frequencies are used.
- ASK (amplitude-shift keying): a finite number of amplitudes are used.
- QAM (quadrature amplitude modulation): a finite number of at least two phases and at least two amplitudes are used.
In QAM, an in-phase signal (the I signal, for example a cosine waveform) and a quadrature-phase signal (the Q signal, for example a sine wave) are amplitude-modulated with a finite number of amplitudes and summed. It can be seen as a two-channel system, each channel using ASK. The resulting signal is equivalent to a combination of PSK and ASK.

In all of the above methods, each of these phases, frequencies or amplitudes is assigned a unique pattern of binary bits. Usually, each phase, frequency or amplitude encodes an equal number of bits. This number of bits comprises the symbol that is represented by the particular phase, frequency or amplitude. If the alphabet consists of M = 2^N alternative symbols, each symbol represents a message consisting of N bits. If the symbol rate (also known as the baud rate) is f_s symbols/second (or baud), the data rate is N*f_s bit/second. For example, with an alphabet consisting of 16 alternative symbols, each symbol represents 4 bits. Thus, the data rate is four times the baud rate.

In the case of PSK, ASK or QAM, where the carrier frequency of the modulated signal is constant, the modulation alphabet is often conveniently represented on a constellation diagram, showing the amplitude of the I signal on the x-axis and the amplitude of the Q signal on the y-axis, for each symbol.
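The relation between alphabet size, bits per symbol and data rate can be checked with a few lines; the helper name and example baud rate are my own:

```python
import math

def data_rate(symbol_rate, alphabet_size):
    """Gross data rate: N = log2(M) bits per symbol times the baud rate."""
    bits_per_symbol = int(math.log2(alphabet_size))
    return symbol_rate * bits_per_symbol

# 16 alternative symbols -> 4 bits per symbol, so the data rate
# is four times the baud rate, as in the example above
print(data_rate(2400, 16))   # 9600 bit/s from a 2400 baud line
```

This gross rate ignores any overhead from framing or error-correcting codes.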
Modulator and detector principles of operation

PSK and ASK, and sometimes also FSK, are often generated and detected using the principle of QAM. The I and Q signals can be combined into a complex-valued signal I + jQ (where j is the imaginary unit). The resulting so-called equivalent lowpass signal or equivalent baseband signal is a complex-valued representation of the real-valued modulated physical signal (the so-called passband signal or RF signal).

These are the general steps used by the modulator to transmit data:

1. Group the incoming data bits into codewords, one for each symbol that will be transmitted.
2. Map the codewords to attributes, for example amplitudes of the I and Q signals (the equivalent lowpass signal), or frequency or phase values.
3. Apply pulse shaping or some other filtering to limit the bandwidth and form the spectrum of the equivalent lowpass signal, typically using digital signal processing.
4. Perform digital-to-analog conversion (DAC) of the I and Q signals (since today all of the above is normally achieved using digital signal processing, DSP).
5. Generate a high-frequency sine-wave carrier waveform, and perhaps also a cosine quadrature component. Carry out the modulation, for example by multiplying the sine and cosine waveforms with the I and Q signals, so that the equivalent lowpass signal is frequency-shifted into a modulated passband signal or RF signal. Sometimes this is achieved using DSP technology, for example direct digital synthesis using a waveform table, instead of analog signal processing. In that case the above DAC step should be done after this step.
6. Amplify and bandpass-filter the signal to avoid harmonic distortion and periodic spectrum.

At the receiver side, the demodulator typically performs:

1. Bandpass filtering.
2. Automatic gain control (AGC), to compensate for attenuation, for example fading.
3. Frequency shifting of the RF signal to the equivalent baseband I and Q signals, or to an intermediate-frequency (IF) signal, by multiplying the RF signal with a local-oscillator sine wave and cosine wave (see the superheterodyne receiver principle).
4. Sampling and analog-to-digital conversion (ADC), sometimes before or instead of the above step, for example by means of undersampling.
5. Equalization filtering, for example a matched filter, compensating for multipath propagation, time spreading, phase distortion and frequency-selective fading, to avoid intersymbol interference and symbol distortion.
6. Detection of the amplitudes of the I and Q signals, or of the frequency or phase of the IF signal.
7. Quantization of the amplitudes, frequencies or phases to the nearest allowed symbol values.
8. Mapping of the quantized amplitudes, frequencies or phases to codewords (bit groups).
9. Parallel-to-serial conversion of the codewords into a bit stream.
10. Passing the resultant bit stream on for further processing, such as removal of any error-correcting codes.

As is common to all digital communication systems, the design of both the modulator and demodulator must be done simultaneously. Digital modulation schemes are possible because the transmitter-receiver pair have prior knowledge of how data is encoded and represented in the communications system. In all digital communication systems, both the modulator at the
transmitter and the demodulator at the receiver are structured so that they perform inverse operations. Non-coherent modulation methods do not require a receiver reference clock signal that is phase synchronized with the sender carrier wave. In this case, modulation symbols (rather than bits, characters, or data packets) are asynchronously transferred. The opposite is coherent modulation.
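The modulator and demodulator steps above can be sketched end-to-end. The following is a minimal QPSK example under idealised assumptions (rectangular pulses, coherent detection, no channel noise, no pulse shaping or equalization); the Gray-coded constellation mapping and all parameter values are illustrative choices, not from the text:

```python
import numpy as np

# Illustrative Gray-coded QPSK constellation: bit pair -> I + jQ point
CONSTELLATION = {(0, 0): 1 + 1j, (0, 1): -1 + 1j,
                 (1, 1): -1 - 1j, (1, 0): 1 - 1j}

def modulate(bits, f_c, fs, sps):
    """Steps 1-5: group bits, map to I/Q, shift to a passband carrier."""
    symbols = [CONSTELLATION[(bits[i], bits[i + 1])]
               for i in range(0, len(bits), 2)]
    baseband = np.repeat(symbols, sps)        # rectangular pulses, sps samples each
    t = np.arange(len(baseband)) / fs
    return np.real(baseband * np.exp(2j * np.pi * f_c * t)), t

def demodulate(signal, t, f_c, sps):
    """Receiver steps: mix to baseband, integrate per symbol, slice, map back."""
    baseband = 2 * signal * np.exp(-2j * np.pi * f_c * t)  # factor 2 restores amplitude
    inv = {v: k for k, v in CONSTELLATION.items()}
    out = []
    for i in range(0, len(baseband), sps):
        z = baseband[i:i + sps].mean()        # crude matched filter / integrator
        point = min(CONSTELLATION.values(),
                    key=lambda p: abs(p - z)) # nearest-symbol decision
        out.extend(inv[point])
    return out

bits = [0, 0, 1, 1, 0, 1, 1, 0]
sig, t = modulate(bits, f_c=2000, fs=16000, sps=16)
recovered = demodulate(sig, t, f_c=2000, sps=16)
print(recovered == bits)
```

Note how the modulator and demodulator are exact inverses built from shared knowledge (the constellation, carrier and timing), which is the point made in the paragraph above; a real system would add pulse shaping, carrier and timing recovery, and equalization.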
List of common digital modulation techniques

The most common digital modulation techniques are:

- Phase-shift keying (PSK):
  - Binary PSK (BPSK), using M = 2 symbols
  - Quadrature PSK (QPSK), using M = 4 symbols
  - 8PSK, using M = 8 symbols
  - 16PSK, using M = 16 symbols
  - Differential PSK (DPSK)
  - Differential QPSK (DQPSK)
  - Offset QPSK (OQPSK)
  - π/4-QPSK
- Frequency-shift keying (FSK):
  - Audio frequency-shift keying (AFSK)
  - Multi-frequency shift keying (M-ary FSK or MFSK)
  - Dual-tone multi-frequency (DTMF)
  - Continuous-phase frequency-shift keying (CPFSK)
- Amplitude-shift keying (ASK)
- On-off keying (OOK), the most common ASK form
  - M-ary vestigial sideband modulation, for example 8VSB
- Quadrature amplitude modulation (QAM), a combination of PSK and ASK
  - Polar modulation, like QAM a combination of PSK and ASK
- Continuous phase modulation (CPM) methods:
  - Minimum-shift keying (MSK)
  - Gaussian minimum-shift keying (GMSK)
- Orthogonal frequency-division multiplexing (OFDM) modulation:
  - Discrete multitone (DMT), including adaptive modulation and bit-loading
- Wavelet modulation
- Trellis coded modulation (TCM), also known as trellis modulation
- Spread-spectrum techniques:
  - Direct-sequence spread spectrum (DSSS)
  - Chirp spread spectrum (CSS); according to IEEE 802.15.4a, CSS uses pseudo-stochastic coding
c c c c c ccccc c ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc c c
c
î
Frequency-hopping spread spectrum (FHSS) applies a special scheme for channel release
MSK and GMSK are particular cases of continuous phase modulation. Indeed, MSK is a particular case of the sub-family of CPM known as continuous-phase frequency-shift keying (CPFSK), which is defined by a rectangular frequency pulse (i.e. a linearly increasing phase pulse) of one symbol-time duration (total response signaling).

OFDM is based on the idea of frequency-division multiplexing (FDM), but is utilized as a digital modulation scheme. The bit stream is split into several parallel data streams, each transferred over its own sub-carrier using some conventional digital modulation scheme. The modulated sub-carriers are summed to form an OFDM signal. OFDM is considered a modulation technique rather than a multiplexing technique, since it transfers one bit stream over one communication channel using one sequence of so-called OFDM symbols. OFDM can be extended to a multi-user channel access method in the orthogonal frequency-division multiple access (OFDMA) and multi-carrier code division multiple access (MC-CDMA) schemes, allowing several users to share the same physical medium by giving different sub-carriers or spreading codes to different users.

Of the two kinds of RF power amplifier, switching amplifiers (Class C amplifiers) cost less and use less battery power than linear amplifiers of the same output power. However, they only work with relatively constant-amplitude signals such as angle modulation (FSK or PSK) and CDMA, not with QAM and OFDM. Nevertheless, even though switching amplifiers are completely unsuitable for normal QAM constellations, the QAM modulation principle is often used to drive switching amplifiers with these FM and other waveforms, and sometimes QAM demodulators are used to receive the signals put out by these switching amplifiers.
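The OFDM idea described above, splitting the bit stream across parallel sub-carriers and forming one OFDM symbol, can be sketched numerically with an inverse FFT. This is an illustration in Python/NumPy, not from the source; the 8-carrier size and BPSK per-carrier mapping are assumptions for the example.

```python
import numpy as np

# One BPSK symbol per sub-carrier; 8 sub-carriers chosen for illustration.
bits = np.array([1, 0, 0, 1, 1, 1, 0, 1])
symbols = 2 * bits - 1                  # BPSK mapping: 0 -> -1, 1 -> +1

# The inverse FFT sums the modulated sub-carriers into one time-domain
# OFDM symbol (this is what would be transmitted).
ofdm_symbol = np.fft.ifft(symbols)

# Receiver: the FFT is the inverse operation and separates the sub-carriers.
received = np.fft.fft(ofdm_symbol)
recovered_bits = (received.real > 0).astype(int)
```

The orthogonality of the FFT sub-carriers is what lets each data stream be recovered without interference from its neighbours.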
Digital baseband modulation or line coding

The term digital baseband modulation (or digital baseband transmission) is synonymous with line coding. These are methods to transfer a digital bit stream over an analog baseband channel (a.k.a. lowpass channel) using a pulse train, i.e. a discrete number of signal levels, by directly modulating the voltage or current on a cable. Common examples are unipolar, non-return-to-zero (NRZ), Manchester and alternate mark inversion (AMI) codings.
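The line codes named above map bits directly to signal levels. Here is a small Python sketch (not from the source) of three of them; level conventions vary between standards, so the ones coded below (e.g. Manchester 1 = high-to-low) are one common choice, assumed for illustration.

```python
def unipolar_nrz(bits):
    """Unipolar NRZ: 1 -> high level, 0 -> zero, held for the whole bit."""
    return [1 if b else 0 for b in bits]

def manchester(bits):
    """Manchester: two half-bit levels per bit, with a mid-bit transition."""
    out = []
    for b in bits:
        out += [1, -1] if b else [-1, 1]   # assumed: 1 = high->low
    return out

def ami(bits):
    """Alternate mark inversion: zeros at 0 V, ones alternate +1/-1."""
    out, polarity = [], 1
    for b in bits:
        if b:
            out.append(polarity)
            polarity = -polarity           # alternate the mark polarity
        else:
            out.append(0)
    return out

bits = [1, 0, 1, 1, 0, 1]
```

A useful property visible in the sketch: AMI's alternating marks give the encoded waveform zero DC content.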
Pulse modulation methods

Pulse modulation schemes aim at transferring a narrowband analog signal over an analog baseband channel as a two-level signal by modulating a pulse wave. Some pulse modulation schemes also allow the narrowband analog signal to be transferred as a digital signal (i.e. as a
quantized discrete-time signal) with a fixed bit rate, which can be transferred over an underlying digital transmission system, for example some line code. These are not modulation schemes in the conventional sense, since they are not channel coding schemes; they should be considered as source coding schemes, and in some cases analog-to-digital conversion techniques.

Analog-over-analog methods:
- Pulse-amplitude modulation (PAM)
- Pulse-width modulation (PWM)
- Pulse-position modulation (PPM)
Analog-over-digital methods:

- Pulse-code modulation (PCM)
  - Differential PCM (DPCM)
  - Adaptive DPCM (ADPCM)
- Delta modulation (DM or Δ-modulation)
- Sigma-delta modulation (ΣΔ)
- Continuously variable slope delta modulation (CVSDM), also called adaptive delta modulation (ADM)
- Pulse-density modulation (PDM)
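Of the analog-over-digital methods listed above, delta modulation is the simplest to sketch: each output bit records whether the input has risen or fallen relative to a staircase approximation. The Python sketch below is illustrative only; the step size and test tone are assumed values, chosen small and slow enough to avoid slope overload.

```python
import numpy as np

STEP = 0.15                                # staircase step size (assumed)

def dm_encode(samples):
    bits, approx = [], 0.0
    for x in samples:
        bit = 1 if x > approx else 0       # compare input with the staircase
        approx += STEP if bit else -STEP   # staircase tracks the input
        bits.append(bit)
    return bits

def dm_decode(bits):
    out, approx = [], 0.0
    for bit in bits:
        approx += STEP if bit else -STEP   # rebuild the same staircase
        out.append(approx)
    return np.array(out)

t = np.arange(200) / 200.0
tone = np.sin(2 * np.pi * 2 * t)           # slowly varying test signal
decoded = dm_decode(dm_encode(tone))
error = np.max(np.abs(decoded - tone))     # bounded granular noise
```

In a real codec the decoder output would also be low-pass filtered to smooth the staircase.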
Miscellaneous modulation techniques

- The use of on-off keying to transmit Morse code at radio frequencies is known as continuous wave (CW) operation.
- Adaptive modulation
- Space modulation, a method whereby signals are modulated within airspace, such as that used in instrument landing systems.
Amplitude modulation
Amplitude modulation (AM) is a technique used in electronic communication, most commonly for transmitting information via a radio carrier wave. AM works by varying the strength of the transmitted signal in relation to the information being sent. For example, changes in the signal strength can be used to specify the sounds to be reproduced by a loudspeaker, or the light intensity of television pixels. (Contrast this with frequency modulation, also commonly used for sound transmissions, in which the frequency is varied, and phase modulation, often used in remote controls, in which the phase is varied.)

In the mid-1870s, a form of amplitude modulation, initially called "undulatory currents", was the first method to successfully produce quality audio over telephone lines. Beginning with Reginald Fessenden's audio demonstrations in 1906, it was also the original method used for audio radio transmissions, and remains in use today by many forms of communication; "AM" is often used to refer to the mediumwave broadcast band (see AM radio).
Fig 1: An audio signal (top) may be carried by an AM or FM radio wave.
Forms of amplitude modulation

As originally developed for the electric telephone, amplitude modulation was used to add audio information to the low-powered direct current flowing from a telephone transmitter to a receiver. As a simplified explanation, at the transmitting end, a telephone microphone was used to vary the strength of the transmitted current, according to the frequency and loudness of the sounds received. Then, at the receiving end of the telephone line, the transmitted electrical current affected an electromagnet, which strengthened and weakened in response to the strength of the current. In turn, the electromagnet produced vibrations in the receiver diaphragm, thus closely reproducing the frequency and loudness of the sounds originally heard at the transmitter.

In contrast to the telephone, in radio communication what is modulated is a continuous wave radio signal (carrier wave) produced by a radio transmitter. In its basic form, amplitude modulation produces a signal with power concentrated at the carrier frequency and in two adjacent sidebands. This process is known as heterodyning. Each sideband is equal in bandwidth to that of the modulating signal and is a mirror image of the other. Amplitude modulation that results in two sidebands and a carrier is often called double-sideband amplitude modulation (DSB-AM).

Amplitude modulation is inefficient in terms of power usage and much of it is wasted. At least two-thirds of the power is concentrated in the carrier signal, which carries no useful information (beyond the fact that a signal is present); the remaining power is split between two identical sidebands, though only one of these is needed since they contain identical information.

To increase transmitter efficiency, the carrier can be removed (suppressed) from the AM signal. This produces a reduced-carrier transmission or double-sideband suppressed-carrier (DSBSC) signal. A suppressed-carrier amplitude modulation scheme is three times more power-efficient than traditional DSB-AM. If the carrier is only partially suppressed, a double-sideband reduced-carrier (DSBRC) signal results. DSBSC and DSBRC signals need their carrier to be regenerated (by a beat frequency oscillator, for instance) to be demodulated using conventional techniques.

Even greater efficiency is achieved, at the expense of increased transmitter and receiver complexity, by completely suppressing both the carrier and one of the sidebands. This is single-sideband modulation, widely used in amateur radio due to its efficient use of both power and bandwidth. A simple form of AM often used for digital communications is on-off keying, a type of amplitude-shift keying by which binary data is represented as the presence or absence of a carrier
wave. This is commonly used at radio frequencies to transmit Morse code, referred to as continuous wave (CW) operation.
ITU designations

In 1982, the International Telecommunication Union (ITU) designated the various types of amplitude modulation as follows:

Designation  Description
A3E          double-sideband full-carrier (the basic AM modulation scheme)
R3E          single-sideband reduced-carrier
H3E          single-sideband full-carrier
J3E          single-sideband suppressed-carrier
B8E          independent-sideband emission
C3F          vestigial-sideband
Lincompex    linked compressor and expander
Example: double-sideband AM

Fig 2: The (2-sided) spectrum of an AM signal.

A carrier wave is modeled as a simple sine wave, such as:

    c(t) = C · sin(ω_c t + φ_c),

where the radio frequency (in Hz) is given by:

    ω_c / (2π).
The constants C and φ_c represent the carrier amplitude and initial phase, and are introduced for generality. For simplicity, however, their respective values can be set to 1 and 0.

Let m(t) represent an arbitrary waveform that is the message to be transmitted, and let the constant M represent its largest magnitude. For instance:

    m(t) = M · cos(ω_m t + φ).

Thus, the message might be just a simple audio tone of frequency

    f_m = ω_m / (2π).

It is generally assumed that f_m ≪ f_c and that

    min[m(t)] = −M.

Then amplitude modulation is created by forming the product:

    y(t) = [A + m(t)] · sin(ω_c t).

A represents the carrier amplitude, a constant that we would choose to demonstrate the modulation index. The values A = 1 and M = 0.5 produce a y(t) depicted by the graph labelled "50% Modulation" in Figure 4. For this simple example, y(t) can be trigonometrically manipulated into the following equivalent form:

    y(t) = A · sin(ω_c t) + (M/2) · [sin((ω_c + ω_m) t + φ) + sin((ω_c − ω_m) t − φ)].
Therefore, the modulated signal has three components: a carrier wave and two sinusoidal waves (known as sidebands) whose frequencies are slightly above and below ω_c. Also notice that the choice A = 0 eliminates the carrier component, but leaves the sidebands; that is the DSBSC transmission mode. To generate double-sideband full carrier (A3E), we must choose:

    A ≥ M.
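The tone-modulation example above can be checked numerically: forming y(t) = [A + M·cos(ω_m t)]·sin(ω_c t) with A = 1 and M = 0.5 and taking an FFT should show exactly three components, the carrier and the two sidebands. This Python/NumPy sketch is not from the source; the frequencies f_c = 100 Hz and f_m = 10 Hz are illustrative choices.

```python
import numpy as np

FS, FC, FM = 1000, 100, 10                   # sample rate and tone frequencies
A, M = 1.0, 0.5                              # 50% modulation, as in the text
t = np.arange(FS) / FS                       # one second of signal, 1 Hz bins
y = (A + M * np.cos(2 * np.pi * FM * t)) * np.sin(2 * np.pi * FC * t)

# Single-sided amplitude spectrum: bin k holds the amplitude of a k Hz tone.
spectrum = np.abs(np.fft.rfft(y)) / (FS / 2)
peaks = {f for f in range(FS // 2) if spectrum[f] > 0.01}
```

The spectrum contains only the carrier at 100 Hz (amplitude A) and sidebands at 90 Hz and 110 Hz (amplitude M/2 each), matching the trigonometric expansion.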
For more general forms of m(t), trigonometry is not sufficient. But if the top trace of Figure 2 depicts the frequency spectrum of m(t), then the bottom trace depicts the modulated carrier. It has two groups of components: one at positive frequencies (centered on +ω_c) and one at negative frequencies (centered on −ω_c). Each group contains the two sidebands and a narrow component in between that represents the energy at the carrier frequency. We need only be concerned with the positive frequencies; the negative ones are a mathematical artifact that contains no additional information. Therefore, we see that an AM signal's spectrum consists basically of its original (2-sided) spectrum shifted up to the carrier frequency. Figure 2 is the result of computing the Fourier transform of y(t), using the standard transform pairs for sinusoids.
Fig 3: The spectrogram of an AM broadcast shows its two sidebands (green) separated by the carrier signal (red).
Spectrum and power efficiency

In terms of the positive frequencies, the transmission bandwidth of AM is twice the signal's original (baseband) bandwidth, since both the positive and negative sidebands are shifted up to the carrier frequency. Thus, double-sideband AM (DSB-AM) is spectrally inefficient, meaning that fewer radio stations can be accommodated in a given broadcast band. The various suppression methods in Forms of AM can be readily understood in terms of the diagram in
Figure 2. With the carrier suppressed, there would be no energy at the center of a group, and with a sideband suppressed, the "group" would have the same bandwidth as the positive frequencies of m(t). The transmitter power efficiency of DSB-AM is relatively poor (about 33%). The benefit of this system is that receivers are cheaper to produce. The forms of AM with suppressed carriers are 100% power efficient, since no power is wasted on the carrier signal, which conveys no information.
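The power figures above follow directly from the tone-modulation formula: the carrier carries power A²/2 and each sideband (M A/2)²/2, so the useful-power fraction is m²/(2 + m²) for modulation index m. A small Python check (not from the source):

```python
def sideband_power_fraction(m, A=1.0):
    """Fraction of total transmitted power carried by the sidebands for
    sinusoidal modulation with modulation index m = M/A."""
    carrier = A**2 / 2
    sidebands = 2 * ((m * A / 2) ** 2 / 2)   # two sidebands of amplitude mA/2
    return sidebands / (carrier + sidebands)

# Even at 100% modulation, only one-third of the power is in the sidebands,
# i.e. at least two-thirds is spent on the carrier, as stated in the text.
eff_full = sideband_power_fraction(1.0)
```

At lower modulation depths the efficiency is worse still, which is why suppressed-carrier schemes are attractive.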
Modulation index

The modulation index can be defined as the measure of the extent of amplitude variation about the unmodulated carrier. As with other modulation indices, in AM this quantity, also called modulation depth, indicates by how much the modulated variable varies around its 'original' level. For AM, it relates to the variations in the carrier amplitude and is defined as:

    m = M / A,

where M and A were introduced above.
So if m = 0.5, the carrier amplitude varies by 50% above and below its unmodulated level, and for m = 1.0 it varies by 100%. To avoid distortion in the A3E transmission mode, a modulation depth greater than 100% must be avoided. Practical transmitter systems will usually incorporate some kind of limiter circuit, such as a VOGAD, to ensure this. However, AM demodulators can be designed to detect the inversion (or 180-degree phase reversal) that occurs when modulation exceeds 100% and automatically correct for this effect.

Variations of the modulated signal with percentage modulation are shown below. In each image, the maximum amplitude is higher than in the previous image. Note that the scale changes from one image to the next.
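In practice the modulation index can be measured from the envelope of the AM waveform using the standard relation m = (E_max − E_min) / (E_max + E_min). A Python sketch (not from the source; signal parameters are illustrative):

```python
import numpy as np

FS, FC, FM, M = 5000, 500, 20, 0.5           # illustrative values
t = np.arange(FS) / FS
envelope = 1 + M * np.cos(2 * np.pi * FM * t)
am = envelope * np.cos(2 * np.pi * FC * t)   # the transmitted AM wave

# In practice E_max and E_min would be read from the detected envelope.
emax = np.max(envelope)
emin = np.min(envelope)
m_measured = (emax - emin) / (emax + emin)   # recovers m = M/A
```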
Fig 4: Modulation depth
AM modulator designs

A wide range of different circuits have been used for AM, but one of the simplest uses anode or collector modulation applied via a transformer. While it is perfectly possible to create good designs using solid-state electronics, valved (vacuum tube) circuits are shown here. In general, valves are able to yield RF powers in excess of what can easily be achieved using solid-state transistors. Many high-power broadcast stations still use valves.
Anode modulation using a transformer: the tetrode is supplied with an anode supply (and screen-grid supply) which is modulated via the transformer. The resistor R1 sets the grid bias; both the input and output are tuned LC circuits which are tapped into by inductive coupling.

Modulation circuit designs can be broadly divided into low-level and high-level.
Low level

Here a small audio stage is used to modulate a low-power stage; the output of this stage is then amplified using a linear RF amplifier. Wideband power amplifiers are used to preserve the sidebands of the modulated waves. In this arrangement, modulation is done at low power; to amplify it, a wideband power amplifier is used at the output.

Advantages

The advantage of using a linear RF amplifier is that the smaller early stages can be modulated, which only requires a small audio amplifier to drive the modulator.
Disadvantages

The great disadvantage of this system is that the amplifier chain is less efficient, because it has to be linear to preserve the modulation. Hence Class C amplifiers cannot be employed.

An approach which marries the advantages of low-level modulation with the efficiency of a Class C power amplifier chain is to arrange a feedback system to compensate for the substantial distortion of the AM envelope. A simple detector at the transmitter output (which can be little more than a loosely coupled diode) recovers the audio signal, and this is used as negative feedback to the audio modulator stage. The overall chain then acts as a linear amplifier as far as the actual modulation is concerned, though the RF amplifier itself still retains the Class C efficiency. This approach is widely used in practical medium-power transmitters, such as AM radiotelephones.
High level

With high-level modulation, the modulation takes place at the final amplifier stage, where the carrier signal is at its maximum.

Advantages

One advantage of using class C amplifiers in a broadcast AM transmitter is that only the final stage needs to be modulated, and all the earlier stages can be driven at a constant level. These class C stages will be able to generate the drive for the final stage for a smaller DC power input. However, in many designs, in order to obtain better-quality AM the penultimate RF stages will need to be subject to modulation as well as the final stage.

Disadvantages

A large audio amplifier will be needed for the modulation stage, at least equal to the power of the transmitter output itself. Traditionally the modulation is applied using an audio transformer, and this can be bulky. Direct coupling from the audio amplifier is also possible (known as a cascode arrangement), though this usually requires quite a high DC supply voltage (say 30 V or more), which is not suitable for mobile units.
AM demodulation

The simplest form of AM demodulator consists of a diode configured to act as an envelope detector. Another type of demodulator, the product detector, can provide better-quality demodulation at the cost of added circuit complexity.
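The diode envelope detector just described can be sketched numerically: rectify the AM wave (the diode) and smooth it (the RC filter, modelled here by a crude FFT low-pass). This Python example is not from the source, and all signal parameters and the π rescaling of the half-wave average are illustrative modelling choices.

```python
import numpy as np

FS, FC, FM, M = 20000, 2000, 50, 0.5          # illustrative values
t = np.arange(FS) / FS
true_env = 1 + M * np.cos(2 * np.pi * FM * t)
am = true_env * np.cos(2 * np.pi * FC * t)

rectified = np.maximum(am, 0)                 # ideal diode: half-wave rectifier
spec = np.fft.rfft(rectified)
spec[500:] = 0                                # smoothing: keep low frequencies
# A half-wave rectified cosine averages to 1/pi of its peak, so rescale.
envelope = np.fft.irfft(spec, FS) * np.pi

ripple = np.max(np.abs(envelope - true_env))  # residual detection error
```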
Double-sideband suppressed-carrier transmission

Double-sideband suppressed-carrier (DSB-SC) transmission: transmission in which (a) frequencies produced by amplitude modulation are symmetrically spaced above and below the carrier frequency and (b) the carrier level is reduced to the lowest practical level, ideally completely suppressed.

In double-sideband suppressed-carrier (DSB-SC) modulation, unlike in AM, the wave carrier is not transmitted; thus, the large share of power that would be dedicated to it is distributed between the sidebands instead, which implies an increase of coverage in DSB-SC, compared to AM, for the same power used. DSB-SC transmission is a special case of double-sideband reduced-carrier transmission. It is used for RDS (Radio Data System) because it is difficult to decouple.
Spectrum

This is basically an amplitude modulation wave without the carrier, therefore reducing power wastage and giving it a 50% efficiency rate.
Generation

DSB-SC is generated by a mixer. This consists of an audio source multiplied with the carrier frequency.
Demodulation

For demodulation, the locally generated carrier must match the transmitted carrier exactly in frequency and phase; otherwise we get distortion.
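The mixer generation and coherent (product-detector) demodulation described above can be sketched numerically: multiplying the DSB-SC signal again by a synchronized carrier and low-pass filtering recovers the message. This Python example is not from the source; the frequencies and the crude FFT low-pass are illustrative choices.

```python
import numpy as np

FS, FC, FM = 8000, 1000, 50                    # illustrative values
t = np.arange(FS) / FS                         # one second, 1 Hz FFT bins
message = np.cos(2 * np.pi * FM * t)
carrier = np.cos(2 * np.pi * FC * t)

dsb_sc = message * carrier                     # mixer output: no carrier term

def product_detect(signal, local_carrier, cutoff=200):
    """Multiply by the synchronized carrier, then low-pass filter."""
    mixed = signal * local_carrier             # message/2 + terms near 2*FC
    spec = np.fft.rfft(mixed)
    spec[cutoff:] = 0                          # crude brick-wall low-pass
    return 2 * np.fft.irfft(spec, len(signal))

recovered = product_detect(dsb_sc, carrier)
error = np.max(np.abs(recovered - message))
```

Note that the transmitted spectrum has energy only at FC ± FM, with nothing at the carrier frequency itself, which is the "suppressed carrier" property.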
How it works

This is best shown graphically. Below is a message signal that one may wish to modulate onto a carrier, consisting of a couple of sinusoidal components.
The carrier, in this case, is a plain 5 kHz sinusoid, pictured below.
The modulation is performed by multiplication in the time domain, which yields a 5 kHz carrier signal, whose amplitude varies in the same manner as the message signal.
The name "suppressed carrier" comes about because the carrier signal component is suppressed: it does not appear (theoretically) in the output signal. This is apparent when the spectrum of the output signal is viewed:
Single-sideband modulation

Single-sideband modulation (SSB) is a telecommunication technique which belongs to the amplitude modulation class. The information represented by the modulating signal is contained in both the upper and the lower sidebands. Since each modulating frequency f_m produces corresponding upper and lower side-frequencies f_c + f_m and f_c − f_m, it is not necessary to transmit both sidebands. Either one can be suppressed at the transmitter without any loss of information.
Advantages

- Less transmitter power.
- Less bandwidth: one-half that of double-sideband (DSB).
- Less noise at the receiver.
- The size, weight and peak antenna voltage of a single-sideband (SSB) transmitter are significantly less than those of a standard AM transmitter.
Radio transmitter design

Radio transmitter design is a complex topic which can be broken down into a series of smaller topics. A radio communication system requires two tuned circuits each at the transmitter and receiver, all four tuned to the same frequency.[1] The transmitter is an electronic device which, usually with the aid of an antenna, propagates an electromagnetic signal such as radio, television, or other telecommunications.
History

At the beginning of the 20th century, there were four chief methods of arranging the transmitting circuits:[2]

1. The transmitting system consists of two tuned circuits such that the one containing the spark-gap is a persistent oscillator; the other, containing the aerial structure, is a free radiator maintained in oscillation by being coupled to the first (Nikola Tesla and Guglielmo Marconi).
2. The oscillating system, including the aerial structure with its associated inductance coils and condensers, is designed to be both a sufficiently persistent oscillator and a sufficiently active radiator (Oliver Joseph Lodge).
3. The transmitting system consists of two electrically coupled circuits, one of which, containing the air-gap, is a powerful but not persistent oscillator, being provided with a device for quenching the spark as soon as it has imparted sufficient energy to the other circuit containing the aerial structure, this second circuit then independently radiating the train of slightly damped waves at its own period (Oliver Joseph Lodge and Wilhelm Wien).
4. The transmitting system, by means either of an oscillating arc (Valdemar Poulsen) or a high-frequency alternator (Rudolf Goldschmidt), emits a persistent train of undamped waves interrupted only by being broken up into long and short groups by the operator's key.
Frequency sources

Fixed-frequency sources

For a fixed-frequency transmitter, one commonly used method is to use a resonant quartz crystal in a crystal oscillator to fix the frequency. Where the frequency has to be variable, several options can be used.

Variable-frequency sources

- An array of crystals, used to enable a transmitter to be used on several different frequencies; rather than being a truly variable-frequency system, it is a system which is fixed to several different frequencies (a subset of the above).
- Variable-frequency oscillator (VFO)
- Phase-locked loop frequency synthesiser
- Direct digital synthesis
Basic designs for a frequency doubler and a frequency tripler (screen grids, bias supplies and other elements are not shown).
Frequency multipliers

For VHF transmitters, it is often not possible to operate the oscillator at the final output frequency. In such cases, for reasons including frequency stability, it is better to multiply the frequency of the free-running oscillator up to the final required frequency. If the output of an amplifier stage is tuned to a multiple of the frequency with which the stage is driven, the stage will give a larger harmonic output than a linear amplifier. In a push-push stage, the output will contain only even harmonics. This is because the currents which would generate the fundamental and the odd harmonics in this circuit (if one valve were removed) are canceled by the second valve. In the diagrams, bias supplies and neutralization measures have been omitted for clarity. In a real system, it is likely that tetrodes would be used, as plate-to-grid capacitance in a tetrode is lower, thereby reducing stage instability. In a push-pull stage, the output will contain only odd harmonics because of the canceling effect.
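The harmonic-generation idea above can be sketched numerically: driving a non-linear stage (modelled here as a simple squaring non-linearity, a stand-in for a valve biased well into its curved region) with a tone at f produces energy at 2f, which the tuned output circuit then selects. This Python example and its parameter values are illustrative, not from the source.

```python
import numpy as np

FS, F = 1000, 50                          # sample rate and drive frequency
t = np.arange(FS) / FS
drive = np.cos(2 * np.pi * F * t)         # tone driving the stage

# Squaring non-linearity: cos^2(x) = 1/2 + cos(2x)/2, so the output
# contains DC and the second harmonic, but no fundamental.
doubled = drive ** 2
spectrum = np.abs(np.fft.rfft(doubled)) / (FS / 2)
```

A real multiplier stage has a richer non-linearity (so it generates many harmonics), and the tuned anode circuit picks out the wanted one.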
Modulation
The task of many transmitters is to transmit some form of information using a radio signal (carrier wave) which has been modulated to carry the intelligence. A few rare types of transmitter do not carry information: examples include the RF generators in microwave ovens, electrosurgery equipment, and induction heaters. RF transmitters that do not carry information are required by law to operate in an ISM band.
Amplitude modulation

In many cases the carrier wave is mixed with another electrical signal to impose information upon it. This occurs in amplitude modulation (AM): the instantaneous amplitude of the carrier is changed in proportion to the amplitude of the modulating or baseband signal.

Low level
Here a small audio stage is used to modulate a low-power stage; the output of this stage is then amplified using a linear RF amplifier.

Advantages
The advantage of using a linear RF amplifier is that the smaller early stages can be modulated, which only requires a small audio amplifier to drive the modulator.

Disadvantages
The great disadvantage of this system is that the amplifier chain is less efficient, because it has to be linear to preserve the modulation. Hence class C amplifiers cannot be employed.

An approach which marries the advantages of low-level modulation with the efficiency of a Class C power amplifier chain is to arrange a feedback system to compensate for the substantial distortion of the AM envelope. A simple detector at the transmitter output (which can be little more than a loosely coupled diode) recovers the audio signal, and this is used as negative feedback to the audio modulator stage. The overall chain then acts as a linear amplifier as far as the actual modulation is concerned, though the RF amplifier itself still retains the Class C efficiency. This approach is widely used in practical medium-power transmitters, such as AM radiotelephones.

High level
Advantages
One advantage of using class C amplifiers in a broadcast AM transmitter is that only the final stage needs to be modulated, and that all the earlier stages can be driven at a constant level. These class C stages will be able to generate the drive for the final stage for a smaller DC power input. However, in many designs, in order to obtain better-quality AM the penultimate RF stages will need to be subject to modulation as well as the final stage.
Disadvantages
A large audio amplifier will be needed for the modulation stage, at least equal to the power of the transmitter output itself. Traditionally the modulation is applied using an audio transformer, and this can be bulky. Direct coupling from the audio amplifier is also possible (known as a cascode arrangement), though this usually requires quite a high DC supply voltage (say 30 V or more), which is not suitable for mobile units.
A wide range of different circuits have been used for AM. While it is perfectly possible to create good designs using solid-state electronics, valved (tube) circuits are shown here. In general, valves are able to easily yield RF powers far in excess of what can be achieved using solid-state devices. Most high-power broadcast stations still use valves.
Anode modulation using a transformer.

An example of a series-modulated amplitude modulation stage.

Plate AM modulators
In plate modulation systems the voltage delivered to the stage is changed. As the power output available is a function of the supply voltage, the output power is modulated. This can be done using a transformer to alter the anode (plate) voltage. The advantage of the transformer method is that the audio power can be supplied to the RF stage and converted into RF power.

With anode modulation using a transformer, the tetrode is supplied with an anode supply (and screen-grid supply) which is modulated via the transformer. The resistor R1 sets the grid bias; both the input and output are tuned LC circuits which are tapped into by inductive coupling.

In series-modulated amplitude modulation, the tetrode is supplied with an anode supply (and screen-grid supply) which is modulated by the modulator valve. The resistor VR1 sets the grid bias for the
modulator valve; both the RF input (tuned grid) and the output are tuned LC circuits which are tapped into by inductive coupling. When the valve at the top conducts more, the potential difference between the anode and cathode of the lower valve (the RF valve) will increase. The two valves can be thought of as two resistors in a potentiometer.

Screen AM modulators
Screen AM modulator.
Under steady-state conditions (no audio drive) the stage will be a simple RF amplifier whose grid bias is set by the cathode current. When the stage is modulated, the screen potential changes and so alters the gain of the stage.
Types of amplitude modulation

Several derivatives of AM are in common use. These are:

Single sideband

SSB, or SSB-AM, single-sideband full-carrier modulation, is very similar to single-sideband suppressed-carrier modulation (SSB-SC).

Filter method
Using a balanced mixer, a double-sideband signal is generated; this is then passed through a very narrow bandpass filter to leave only one sideband. By convention it is normal to use the upper sideband (USB) in communication systems, except in amateur (ham) radio when the carrier frequency is below 10 MHz, where the lower sideband (LSB) is normally used.

Phasing method
This method is an alternative means of generating single-sideband signals. One of its weaknesses is the need for a network which imposes a constant 90° phase shift on
audio signals throughout the entire audio spectrum. By reducing the audio bandwidth, the task of designing the phase-shift network can be made easier.

Imagine that the audio is a single sine wave, v = A·sin(ωt). The audio signal is passed through the phase-shift network to give two identical signals which differ by 90°. So, as the audio input is a single sine wave, the outputs will be

    A·sin(ωt)

and

    A·cos(ωt).

These audio outputs are mixed in non-linear mixers with a carrier; the carrier drive for one of these mixers is shifted by 90°. The outputs of these mixers are combined in a linear circuit to give the SSB signal.
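The phasing method above can be verified numerically for a single audio tone: mixing the in-phase and quadrature audio paths with the in-phase and quadrature carrier paths and combining them cancels one sideband. This Python/NumPy sketch is not from the source; the frequencies are illustrative.

```python
import numpy as np

FS, FC, FM = 4000, 500, 100                   # illustrative values
t = np.arange(FS) / FS                        # one second, 1 Hz FFT bins
audio_i = np.cos(2 * np.pi * FM * t)          # audio path
audio_q = np.sin(2 * np.pi * FM * t)          # audio shifted by 90 degrees

# Mix each audio path with a carrier path 90 degrees apart, then combine.
# cos(a)cos(b) - sin(a)sin(b) = cos(a + b): only the upper sideband remains.
ssb = (audio_i * np.cos(2 * np.pi * FC * t)
       - audio_q * np.sin(2 * np.pi * FC * t))

spectrum = np.abs(np.fft.rfft(ssb)) / (FS / 2)
```

Adding instead of subtracting the second product would select the lower sideband instead.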
Vestigial sideband

Vestigial-sideband modulation (VSB, or VSB-AM) is a type of modulation system commonly used in analogue TV systems. It is normal AM which has been passed through a filter that reduces one of the sidebands. Typically, components of the lower sideband more than 0.75 MHz or 1.25 MHz below the carrier will be heavily attenuated.
Strictly speaking, the commonly used 'AM' is double-sideband full carrier. Morse is often sent using on-off keying of an unmodulated carrier (continuous wave); this can be thought of as an AM mode.
Angle modulation

Angle modulation is the proper term for modulation by changing the instantaneous frequency or phase of the carrier signal. True FM and phase modulation are the most commonly employed forms of analogue angle modulation.
Direct FM

Direct FM (true frequency modulation) is where the frequency of an oscillator is altered to impose the modulation upon the carrier wave. This can be done by using a voltage-controlled capacitor (varicap diode) in a crystal-controlled oscillator or frequency synthesiser. The frequency of the oscillator is then multiplied up using a frequency multiplier stage, or translated upwards using a mixing stage, to the output frequency of the transmitter.

Indirect FM
Indirect FM solid state circuit.
Indirect FM employs a varicap diode to impose a phase shift (which is voltage-controlled) in a tuned circuit that is fed with a plain carrier; this is termed phase modulation. The modulated signal from a phase-modulated stage can be received with an FM receiver, but for good audio quality the audio applied to the phase-modulation stage must be suitably shaped. The amount of modulation is referred to as the deviation: the amount by which the frequency of the carrier instantaneously deviates from the centre carrier frequency. In some indirect FM solid-state circuits, an RF drive is applied to the base of a transistor. The tank circuit (LC), connected to the collector via a capacitor, contains a pair of varicap diodes. As the voltage applied to the varicaps is changed, the phase shift of the output changes.

Phase modulation is mathematically equivalent to direct frequency modulation with a 6 dB/octave high-pass filter applied to the modulating signal. This high-pass effect can be exploited or compensated for using suitable frequency-shaping circuitry in the audio stages ahead of the modulator. For example, many FM systems employ pre-emphasis and de-emphasis for noise reduction, in which case the high-pass equivalency of phase modulation automatically provides the pre-emphasis. Phase modulators are typically only capable of relatively small amounts of deviation while remaining linear, but any frequency multiplier stages also multiply the deviation in proportion.
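The 6 dB/octave equivalence can be checked numerically: for a modulating tone of frequency fm, the peak frequency deviation produced by a phase modulator is proportional to fm, so it doubles per octave. A minimal sketch (the modulator gain k_p and frequencies are illustrative):

```python
import math

def inst_freq_deviation(k_p, fm, t, dt=1e-9):
    """Instantaneous frequency deviation (Hz) of a phase-modulated carrier,
    estimated by numerically differentiating the modulated phase term
    phi(t) = k_p * cos(2*pi*fm*t)."""
    phase = lambda tau: k_p * math.cos(2 * math.pi * fm * tau)
    return (phase(t + dt) - phase(t - dt)) / (2 * dt) / (2 * math.pi)

# Peak deviation occurs where sin(2*pi*fm*t) = 1; it scales with fm,
# which is the 6 dB/octave high-pass behaviour described above.
peak_1k = abs(inst_freq_deviation(0.5, 1000, 1 / 4000))   # about 0.5 * 1000
peak_2k = abs(inst_freq_deviation(0.5, 2000, 1 / 8000))   # about 0.5 * 2000
assert abs(peak_1k - 500) < 1
assert abs(peak_2k - 1000) < 1
```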
Sigma-delta modulation (ΣΔ)
Valved power stages

For high-power systems it is normal to use valves; please see Valve RF amplifier for details of how valved RF power stages work.
Advantages
î Good for high-power systems
î Electrically very robust; they can tolerate overloads for minutes which would destroy bipolar transistor systems in milliseconds
Disadvantages
î Heater supplies are required for the cathodes
î High voltages are required for the anodes
î Valves often have a shorter working life than solid-state parts because the heaters tend to fail
Solid state

For low and medium power it is often the case that solid-state power stages are used. For higher-power systems these cost more per watt of output power than a valved system.
Linking the transmitter to the aerial

The majority of modern transmitting equipment is designed to operate with a resistive load fed via coaxial cable of a particular characteristic impedance, often 50 ohms. To connect the aerial to this coaxial-cable transmission line, a matching network and/or a balun may be required. Commonly an SWR meter and/or an antenna analyzer is used to check the extent of the match between the aerial system and the transmitter via the transmission line (feeder). An SWR meter indicates forward power, reflected power, and the ratio between them. See Antenna tuner and balun for details of matching networks and baluns respectively.
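The ratio an SWR meter reports follows from the forward and reflected powers; a minimal sketch (the 100 W / 4 W and 75-ohm figures are illustrative):

```python
import math

def swr_from_power(p_fwd, p_ref):
    """Standing-wave ratio from the forward and reflected power an SWR
    meter indicates; rho is the magnitude of the reflection coefficient."""
    rho = math.sqrt(p_ref / p_fwd)
    return (1 + rho) / (1 - rho)

def swr_from_impedance(z_load, z0=50.0):
    """SWR for a purely resistive load on a line of characteristic
    impedance z0 (commonly 50 ohms, as noted above)."""
    rho = abs(z_load - z0) / (z_load + z0)
    return (1 + rho) / (1 - rho)

# 4 W reflected out of 100 W forward is a 20 % voltage reflection, SWR 1.5;
# a 75-ohm resistive load on a 50-ohm line gives the same SWR.
assert abs(swr_from_power(100.0, 4.0) - 1.5) < 1e-9
assert abs(swr_from_impedance(75.0) - 1.5) < 1e-9
```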
EMC matters

While this section was written from the point of view of an amateur radio operator in relation to television interference, it applies to the construction and use of all radio transmitters, and to other electronic devices which generate high RF powers with no intention of radiating them. For instance, a dielectric heater might contain a 2000 watt 27 MHz source within it; if the machine operates as intended then none of this RF power will leak out. However, if the device develops a fault then RF will leak out when it operates, and it will then in effect be a transmitter. Computers are also RF devices: if the case is poorly made then the computer will radiate at VHF. For example, if you attempt to tune into a weak radio station (88 to 108 MHz, band II) at your desk, you may lose reception when you switch on your PC. Equipment which is not intended to generate RF, but does so through, for example, sparking at switch contacts, is not considered here.
Shielding and filtering

All equipment using RF electronics should be inside a screened metal box, and all connections into or out of the box should be filtered to avoid the ingress or egress of radio signals. A common and effective method of doing so for wires carrying DC supplies, 50/60 Hz AC connections, audio and control signals is to use a feedthrough capacitor. This is a capacitor mounted in a hole in the shield: one terminal is its metal body, which touches the shielding of the box, while its other two terminals are on either side of the shield. A feedthrough capacitor can be thought of as a metal rod with a dielectric sheath which in turn has a metal coating. In addition to the feedthrough capacitor, either a resistor or an RF choke can be used to increase the filtering on the lead.

In transmitters it is vital to prevent RF from entering the transmitter through any lead, such as an electric power, microphone or control connection. If RF does enter a transmitter in this way then an instability known as motorboating can occur. Motorboating is an example of a self-inflicted EMC problem.

If a transmitter is suspected of being responsible for a television interference problem, it should be run into a dummy load: a resistor in a screened box or can which allows the transmitter to generate radio signals without sending them to the antenna. If the transmitter does not cause interference during this test, then it is safe to assume that a signal has to be radiated from the antenna to cause a problem. If the transmitter does cause interference during this test, then a path exists by which RF power is leaking out of the equipment; this can be due to bad shielding. This is a rare but insidious problem and it is vital that it be tested for. Such leakage is most likely to occur with homemade equipment or equipment that has been modified.

RF leakage from microwave ovens may also sometimes be observed, especially when the oven's RF seal has been compromised.
Spurious emissions
Early in the development of radio technology it was recognised that the signals emitted by transmitters had to be 'pure'. For instance, spark-gap transmitters were quickly outlawed because they give an output which is extremely wide in terms of frequency. In modern equipment there are three main types of spurious emissions.
The term spurious emissions refers to any signal which comes out of a transmitter other than the wanted signal. Spurious emissions include harmonics, out-of-band mixer products which are not fully suppressed, and leakage from the local oscillator and other systems within the transmitter.
Harmonics

These are multiples of the operating frequency of the transmitter. They can be generated in a stage of the transmitter even if it is driven with a perfect sine wave, because no real-life amplifier is perfectly linear.
Note that B+ is the anode supply and C− is the grid bias. While the circuit shown here uses tetrode valves (for example 2 × 4CX250B), many designs have used solid-state semiconductor parts (such as MOSFETs). NC is a neutralization capacitor.

Avoiding harmonic generation
It is best if these harmonics are designed out at an early stage. For instance, a push-pull amplifier consisting of two tetrode valves attached to an anode tank resonant LC circuit, whose coil is connected to the high-voltage DC supply at its centre (which is also RF ground), will only give output at the fundamental and the odd harmonics.

Removal of harmonics with filters
In addition to the good design of the amplifier stages, the transmitter's output should be filtered with a low-pass filter to reduce the level of the harmonics.
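How a non-linear stage creates harmonics, and why the push-pull arrangement suppresses the even ones, can be demonstrated with a small DFT experiment (a minimal sketch; the polynomial coefficients standing in for amplifier non-linearity are illustrative):

```python
import cmath
import math

def dft_mag(samples):
    """Normalized DFT magnitudes for bins 0 .. n/2 - 1 (naive O(n^2) DFT)."""
    n = len(samples)
    return [abs(sum(samples[k] * cmath.exp(-2j * math.pi * b * k / n)
                    for k in range(n))) / n for b in range(n // 2)]

n = 64
tone = [math.cos(2 * math.pi * 4 * k / n) for k in range(n)]  # DFT bin 4

# A single-ended amplifier with curvature produces even and odd harmonics:
single = [x + 0.2 * x * x + 0.1 * x ** 3 for x in tone]
# An ideal push-pull pair has odd-symmetric transfer, so the even terms cancel:
push_pull = [x + 0.1 * x ** 3 for x in tone]

s, p = dft_mag(single), dft_mag(push_pull)
assert s[8] > 1e-3     # 2nd harmonic present in the single-ended stage
assert p[8] < 1e-9     # 2nd harmonic cancelled by the push-pull stage
assert p[12] > 1e-3    # 3rd harmonic remains, hence the low-pass filter
```

The surviving odd harmonics are what the output low-pass filter must then remove.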
Detection
The harmonics can be tested for using an RF spectrum analyser (expensive) or an absorption wavemeter (cheap). If a harmonic falls on the same frequency as a signal wanted at a receiver, then this spurious emission can prevent the wanted signal from being received.
Mixer products

Imagine a transmitter which has an intermediate frequency (IF) of 144 MHz, which is mixed with 94 MHz to create a signal at 50 MHz, which is then amplified and transmitted. If the local oscillator signal were to enter the power amplifier and not be adequately suppressed, then it could be radiated. It would then have the potential to interfere with radio signals at 94 MHz in the FM audio (band II) broadcast band. The unwanted mixing product at 238 MHz could, in a poorly designed system, also be radiated. Normally, with a good choice of the intermediate and local oscillator frequencies, this type of trouble can be avoided, but one potentially bad situation is in the construction of a 144 to 70 MHz converter: here the local oscillator is at 74 MHz, which is very close to the wanted output. Good, well-made units have been built which use this conversion, but their design and construction has been challenging; for instance, in the late 1980s Practical Wireless published a design (Meon-4) for such a transverter.[1][2] This problem can be thought of as related to the image response problem which exists in receivers.
Simple but poor mixer
One method of reducing the potential for this transmitter defect is the use of balanced and double-balanced mixers. If the mixer is assumed to obey the equation E = k·E_f1·E_f2 and is driven by two simple sine waves at f1 and f2, then the output will be a mixture of four frequencies: f1, f1+f2, f1−f2 and f2. If the simple mixer is replaced with a balanced mixer then the number of possible products is reduced. Imagine that two mixers which obey the equation E = k·E_f1·E_f2 are wired up so that their current outputs drive the two ends of a coil (the centre of this coil is wired to ground); the total current flowing through the coil is then the difference between the outputs of the two mixer stages. If the f1 drive for one of the mixers is phase-shifted by 180°, then the overall system will be a balanced mixer.
Note that while this hypothetical design uses tetrodes, many designs have used solid-state semiconductor parts (such as MOSFETs).
The output of the balanced pair is E = k·E_f2·ΔE_f1, so the output will now contain only three frequencies: f1+f2, f1−f2 and f2. As the frequency mixer now has fewer outputs, the task of making sure that the final output is clean will be simpler.
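The cancellation can be demonstrated numerically. The sketch below models each mixer as an idealized square-law device (an assumption for illustration; real balanced mixers use diodes or transistors, and the text's k·E_f1·E_f2 model is the ideal multiplier case): subtracting the outputs of two such devices, with one f1 drive inverted, leaves only the sum and difference products.

```python
import cmath
import math

def dft_mag(samples):
    """Normalized DFT magnitudes for bins 0 .. n/2 - 1 (naive O(n^2) DFT)."""
    n = len(samples)
    return [abs(sum(samples[k] * cmath.exp(-2j * math.pi * b * k / n)
                    for k in range(n))) / n for b in range(n // 2)]

n = 128
f1, f2 = 10, 3  # input frequencies, expressed in DFT bins
e1 = [math.cos(2 * math.pi * f1 * k / n) for k in range(n)]
e2 = [math.cos(2 * math.pi * f2 * k / n) for k in range(n)]

# Single square-law device: output also contains the inputs' own
# second harmonics and feedthrough, not just the wanted products.
single = [(a + b) ** 2 for a, b in zip(e1, e2)]

# Balanced arrangement: two such devices, the f1 drive to one inverted
# (180 degrees), outputs subtracted (the centre-tapped coil above).
# (a + b)^2 - (b - a)^2 = 4ab, so only f1 + f2 and f1 - f2 survive.
balanced = [(a + b) ** 2 - (-a + b) ** 2 for a, b in zip(e1, e2)]

s, bal = dft_mag(single), dft_mag(balanced)
assert s[2 * f1] > 1e-3               # 2*f1 product in the simple mixer
assert bal[2 * f1] < 1e-9             # cancelled in the balanced mixer
assert bal[f1 + f2] > 1e-3 and bal[f1 - f2] > 1e-3
```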
Instability

If a stage in a transmitter is unstable and able to oscillate, then it can start to generate RF at either a frequency close to the operating frequency or at a very different frequency. One good sign that this is occurring is if an RF stage has power output even without being driven by an exciter stage. Another sign is if the output power suddenly increases wildly when the input power is increased slightly; note that in a class C stage this behaviour can be seen under normal conditions. The best defence against this transmitter defect is good design; it is also important to pay attention to the neutralization of the valves or transistors.
Radio receiver design

Radio receiver design is a complex topic which can be broken down into a series of smaller topics. A radio communication system requires two tuned circuits each at the transmitter and receiver, all four tuned to the same frequency.[1] The term radio receiver is understood in this article to mean any device which is intended to receive a radio signal and, if need be, to extract information from the signal.
Crystal set

A crystal radio receiver, consisting of an antenna, a variable inductor, a cat's whisker, and a capacitor.
Advantages
î Simple and easy to make; this is the classic design for a clandestine receiver in a POW camp.

Disadvantages
î Insensitive; it needs a very strong RF signal to operate.
î Poor selectivity; it often has only one tuned circuit.
Direct amplification receiver

The directly amplifying receiver contains an input radio-frequency filter, a radio-frequency amplifier (amplifying the radio signal of the tuned station), a detector and an audio-frequency amplifier. This design is simple and reliable, but much less sensitive than the superheterodyne (described below).
Reflex (reflectional) receiver

The reflex receiver contains a single amplifier that amplifies first the radio frequency and then (after detection) the audio frequency. It is simpler, smaller and consumes less power, but it is also comparatively unstable.
Regenerative receiver

The regenerative circuit has the advantage of being potentially very sensitive; it uses positive feedback to increase the gain of the stage. Many valved sets were made which used a single stage. However, if misused it has great potential to cause radio interference: if the set is adjusted wrongly (too much feedback used), the detector stage will oscillate, causing the interference.
Regenerative Receiver Schematic
Tuned radio frequency (TRF) receiver

Main article: Tuned radio frequency receiver
The RF interference that the local oscillator can generate can be controlled with the use of a buffer stage between the LO and the detector, and a buffer or RF amplifier stage between the LO and the antenna.
Direct conversion receiver

Main article: Direct-conversion receiver

In the direct-conversion receiver, the signals from the aerial pass through a band-pass filter and an amplifier before reaching a non-linear mixer, where they are mixed with a signal from a local oscillator tuned to the carrier-wave frequency of an AM or SSB transmitter. The output of this mixer is passed through a low-pass filter and then an audio amplifier; this is the output of the radio. For CW (Morse), the local oscillator is tuned to a frequency slightly different from that of the transmitter to make the received signal audible.
Advantages
î Simpler than a superhet
î Better tuning than a simple crystal set

Disadvantages
î Less selective than a superhet with regard to strong in-band signals
î A wider bandwidth than a good SSB communications radio, because no sideband filtering exists in this circuit
Superheterodyne receiver

Main article: Superheterodyne receiver

Here are two superheterodyne designs, for AM and FM respectively. The FM design is a cheap one intended for a broadcast-band household receiver.
A schematic of a superhet AM receiver. Note that the radio has an AGC loop. For single-conversion superheterodyne AM receivers designed for medium wave and long wave, the IF is commonly 455 kHz.
A schematic of a simple, cheap superhet FM receiver. Note that the radio lacks an AGC loop and that the IF amplifier has a very high gain and is driven into clipping. For many single-conversion superheterodyne receivers designed for band II FM (88 to 108 MHz) the IF is commonly 10.7 MHz. For TV sets the IF tends to be at 33 to 40 MHz.
AM and FM compared

To make a good AM receiver, an automatic gain control (AGC) loop is essential, and this requires good design. To make a good FM receiver, a large number of RF amplifier stages driven into limiting are required to create a receiver which can take advantage of the capture effect, one of the biggest advantages of FM. With valved (tube) systems it is more expensive to make active stages than it is with solid-state parts, so for a valved superhet it is simpler to make an AM receiver with the AGC loop, while for a solid-state receiver it is simpler to make an FM unit. Hence, even though the idea of FM was known before World War II, its use was rare because of the cost of valves (in the UK the government had a valve-holder tax which encouraged radio receiver designers to use as few active stages as possible), but when solid-state parts became available FM started to gain favour.
Frequency modulation

In telecommunications and signal processing, frequency modulation (FM) conveys information over a carrier wave by varying its instantaneous frequency (contrast this with amplitude modulation, in which the amplitude of the carrier is varied while its frequency remains constant). In analog applications, the difference between the instantaneous and the base frequency of the carrier is directly proportional to the instantaneous value of the input signal amplitude. Digital data can be sent by shifting the carrier's frequency among a set of discrete values, a technique known as frequency-shift keying. Frequency modulation can be regarded as phase modulation where the carrier phase modulation is the time integral of the FM modulating signal.
Theory

Suppose the baseband data signal (the message) to be transmitted is x_m(t) and the sinusoidal carrier is x_c(t) = A_c cos(2π f_c t), where f_c is the carrier's base frequency and A_c is the carrier's amplitude. The modulator combines the carrier with the baseband data signal to get the transmitted signal:

    y(t) = A_c cos(2π ∫₀ᵗ [f_c + f_Δ x_m(τ)] dτ)
         = A_c cos(2π f_c t + 2π f_Δ ∫₀ᵗ x_m(τ) dτ)    (1)

In this equation, f_c + f_Δ x_m(τ) is the instantaneous frequency of the oscillator and f_Δ is the frequency deviation, which represents the maximum shift away from f_c in one direction, assuming x_m(t) is limited to the range ±1.
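Equation (1) can be sketched numerically by accumulating phase sample by sample (a minimal sketch; the sample rate and frequencies are illustrative):

```python
import math

def fm_modulate(message, fc, f_dev, fs, ac=1.0):
    """FM per equation (1): the phase advances by the running integral of
    the instantaneous frequency fc + f_dev * x_m(t).

    message: samples of x_m(t), limited to [-1, 1]; fs: sample rate (Hz)."""
    out, phase = [], 0.0
    for x in message:
        phase += 2 * math.pi * (fc + f_dev * x) / fs
        out.append(ac * math.cos(phase))
    return out

fs = 1_000_000
# With x_m held at +1 the output is a pure tone at fc + f_dev = 101 kHz;
# check it against a directly generated tone.
y = fm_modulate([1.0] * 100, fc=100_000, f_dev=1_000, fs=fs)
ref = [math.cos(2 * math.pi * 101_000 * (k + 1) / fs) for k in range(100)]
assert all(abs(a - b) < 1e-6 for a, b in zip(y, ref))
```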
Although it may seem that this limits the frequencies in use to f_c ± f_Δ, this neglects the distinction between instantaneous frequency and spectral frequency. The frequency spectrum of an actual FM signal has components extending out to infinite frequency, although they become negligibly small beyond a point.
Sinusoidal baseband signal

While it is an over-simplification, a baseband modulated signal may be approximated by a sinusoidal continuous-wave signal with frequency f_m, that is, x_m(t) = A_m cos(2π f_m t). The integral of such a signal is:

    ∫₀ᵗ x_m(τ) dτ = A_m sin(2π f_m t) / (2π f_m)

Thus, in this specific case, equation (1) above simplifies to:

    y(t) = A_c cos(2π f_c t + (Δf / f_m) sin(2π f_m t))

where the amplitude A_m of the modulating sinusoid is represented in the peak deviation Δf = f_Δ A_m (see frequency deviation). The harmonic distribution of a sine-wave carrier modulated by such a sinusoidal signal can be represented with Bessel functions; this provides a basis for a mathematical understanding of frequency modulation in the frequency domain.
Modulation index

As with other modulation indices, this quantity indicates by how much the modulated variable varies around its unmodulated level. It relates to the variations in the frequency of the carrier signal:

    h = Δf / f_m

where f_m is the highest frequency component present in the modulating signal x_m(t), and Δf is the peak frequency deviation, i.e. the maximum deviation of the instantaneous frequency from the carrier frequency. If h << 1, the modulation is called narrowband FM, and its bandwidth is approximately 2 f_m. If h >> 1, the modulation is called wideband FM, and its bandwidth is approximately 2 Δf. While wideband FM uses more bandwidth, it can improve the signal-to-noise ratio significantly. With a tone-modulated FM wave, if the modulation frequency is held constant and the modulation index is increased, the (non-negligible) bandwidth of the FM signal increases but the spacing between spectral components stays the same; some spectral components decrease in strength as others increase. If the frequency deviation is held constant and the modulation frequency is increased, the spacing between spectral components increases.
Carson's rule

A rule of thumb, Carson's rule, states that nearly all (~98%) of the power of a frequency-modulated signal lies within a bandwidth of:

    B = 2 (Δf + f_m)

where Δf, as defined above, is the peak deviation of the instantaneous frequency from the centre carrier frequency, and f_m is the highest modulating frequency.
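As a quick sketch of Carson's rule (the 75 kHz deviation and 15 kHz audio are the usual broadcast FM figures; the narrowband voice figures are illustrative):

```python
def carson_bandwidth(f_dev, fm):
    """Carson's rule: ~98% of an FM signal's power lies within
    B = 2 * (peak deviation + highest modulating frequency)."""
    return 2 * (f_dev + fm)

# Broadcast FM: 75 kHz deviation, 15 kHz audio -> 180 kHz occupied bandwidth
assert carson_bandwidth(75e3, 15e3) == 180e3
# Narrowband voice FM (illustrative figures): 2.5 kHz each -> 10 kHz
assert carson_bandwidth(2.5e3, 2.5e3) == 10e3
```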
Noise quieting

The noise power decreases as the signal power increases; therefore the SNR goes up significantly.
Bessel functions

The carrier and sideband amplitudes for different modulation indices of FM signals, based on the Bessel functions:

Modulation index | Carrier | Sideband amplitudes (1st, 2nd, 3rd, ...)
0.00  |  1.00 |
0.25  |  0.98 |  0.12
0.5   |  0.94 |  0.24   0.03
1.0   |  0.77 |  0.44   0.11   0.02
1.5   |  0.51 |  0.56   0.23   0.06   0.01
2.0   |  0.22 |  0.58   0.35   0.13   0.03
2.41  |  0.00 |  0.52   0.43   0.20   0.06   0.02
2.5   | -0.05 |  0.50   0.45   0.22   0.07   0.02   0.01
3.0   | -0.26 |  0.34   0.49   0.31   0.13   0.04   0.01
4.0   | -0.40 | -0.07   0.36   0.43   0.28   0.13   0.05   0.02
5.0   | -0.18 | -0.33   0.05   0.36   0.39   0.26   0.13   0.05   0.02
5.53  |  0.00 | -0.34  -0.13   0.25   0.40   0.32   0.19   0.09   0.03   0.01
6.0   |  0.15 | -0.28  -0.24   0.11   0.36   0.36   0.25   0.13   0.06   0.02
7.0   |  0.30 |  0.00  -0.30  -0.17   0.16   0.35   0.34   0.23   0.13   0.06   0.02
8.0   |  0.17 |  0.23  -0.11  -0.29  -0.10   0.19   0.34   0.32   0.22   0.13   0.06   0.03
8.65  |  0.00 |  0.27   0.06  -0.24  -0.23   0.03   0.26   0.34   0.28   0.18   0.10   0.05   0.02
9.0   | -0.09 |  0.25   0.14  -0.18  -0.27  -0.06   0.20   0.33   0.31   0.21   0.12   0.06   0.03   0.01
10.0  | -0.25 |  0.04   0.25   0.06  -0.22  -0.23  -0.01   0.22   0.32   0.29   0.21   0.12   0.06   0.03   0.01
12.0  |  0.05 | -0.22  -0.08   0.20   0.18  -0.07  -0.24  -0.17   0.05   0.23   0.30   0.27   0.20   0.12   0.07   0.03   0.01
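The table entries are values of the Bessel function of the first kind, J_n(β), where β is the modulation index and n = 0 is the carrier. They can be reproduced from the power series, a minimal pure-Python sketch:

```python
import math

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind by its power series:
    J_n(x) = sum_k (-1)^k / (k! (k+n)!) * (x/2)^(2k+n).

    J_n(beta) gives the amplitude of the n-th sideband pair for
    modulation index beta (n = 0 is the carrier), as in the table above."""
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (x / 2) ** (2 * k + n) for k in range(terms))

# Spot-check rows of the table (carrier and first sidebands):
assert abs(bessel_j(0, 1.0) - 0.77) < 0.01
assert abs(bessel_j(1, 1.0) - 0.44) < 0.01
assert abs(bessel_j(0, 2.41)) < 0.01          # first carrier null
assert abs(bessel_j(0, 2.5) - (-0.05)) < 0.01
```

The carrier nulls near β = 2.41 and β = 5.53 are a practical way of setting deviation: with a known modulating tone, the deviation is adjusted until the carrier disappears on a spectrum analyser.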
Generating FM

FM signals can be generated using either direct or indirect frequency modulation:

î Direct FM modulation can be achieved by directly feeding the message into the input of a VCO.
î For indirect FM modulation, the message signal is integrated to generate a phase-modulated signal. This is used to modulate a crystal-controlled oscillator, and the result is passed through a frequency multiplier to give an FM signal.[1]
Demodulating FM

A common method for recovering the information signal is through a Foster-Seeley discriminator. By the phenomenon of slope detection, whereby FM is converted to AM in a frequency-selective circuit tuned slightly away from the nominal signal frequency, AM receivers may detect some FM transmissions, though this does not provide an efficient method of detection for FM broadcasts. There are other established methods for estimating the instantaneous frequency, including differentiation of the phase, adaptive frequency estimation techniques such as the phase-locked loop (PLL), and extraction of the peak from time-varying spectral representations and time-frequency distributions.[2]
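The "differentiation of the phase" approach can be sketched on an ideal, noise-free complex-baseband (I/Q) signal (a minimal sketch; the sample rate and modulation figures are illustrative):

```python
import cmath
import math

def fm_demodulate(iq, fs):
    """Recover the instantaneous frequency (Hz) from complex I/Q samples by
    differentiating the phase: f[k] = fs * arg(z[k+1] * conj(z[k])) / (2*pi)."""
    return [fs * cmath.phase(b * a.conjugate()) / (2 * math.pi)
            for a, b in zip(iq, iq[1:])]

fs = 100_000
# Build a baseband FM test signal whose instantaneous frequency is
# 1 kHz plus a 500 Hz swing at a 500 Hz modulation rate:
phase, iq = 0.0, []
for k in range(200):
    f_inst = 1_000 + 500 * math.sin(2 * math.pi * 500 * k / fs)
    phase += 2 * math.pi * f_inst / fs
    iq.append(cmath.exp(1j * phase))

est = fm_demodulate(iq, fs)
assert all(abs(e) < 1_600 for e in est)       # stays near 1 kHz +/- 500 Hz
assert max(est) > 1_400 and min(est) < 700    # and actually swings
```

A PLL demodulator achieves the same goal by steering a local oscillator to track this phase instead of differentiating it directly.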
Radio

FM is commonly used at VHF radio frequencies for high-fidelity broadcasts of music and speech (see FM broadcasting). Normal (analog) TV sound is also broadcast using FM. A narrowband form is used for voice communications in commercial and amateur radio settings. The type of FM used in broadcasting is generally called wide-FM, or W-FM. In two-way radio, narrowband FM (N-FM) is used to conserve bandwidth. In addition, it is used to send signals into space.
Magnetic tape storage

FM is also used at intermediate frequencies by all analog VCR systems, including VHS, to record both the luminance (black and white) and the chrominance portions of the video signal. FM is the only feasible method of recording video to and retrieving video from magnetic tape without extreme distortion, as video signals have a very large range of frequency components (from a few hertz to several megahertz), too wide for equalizers to work with due to electronic noise below −60 dB. FM also keeps the tape at saturation level, and therefore acts as a form of noise reduction; a simple limiter can mask variations in the playback output, and the FM capture effect removes print-through and pre-echo. A continuous pilot tone, if added to the signal (as was done on V2000 and many Hi-band formats), can keep mechanical jitter under control and assist timebase correction.

These FM systems are unusual in that they have a ratio of carrier to maximum modulation frequency of less than two; contrast this with FM audio broadcasting, where the ratio is around 10,000. Consider for example a 6 MHz carrier modulated at a 3.5 MHz rate; by Bessel analysis the first sidebands are on 9.5 and 2.5 MHz, while the second sidebands are on 13 MHz and −1 MHz. The result is a sideband of reversed phase on +1 MHz; on demodulation, this results in an unwanted output at 6 − 1 = 5 MHz. The system must be designed so that this is at an acceptable level.[3]
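The sideband arithmetic in the example above (Bessel sidebands sit at the carrier plus and minus integer multiples of the modulating frequency) can be checked with a few lines, using the 6 MHz / 3.5 MHz figures from the text:

```python
def sideband_pairs(fc, fm, n):
    """First n sideband pairs (lower, upper) of a carrier at fc
    modulated at rate fm; frequencies in MHz here for readability."""
    return [(fc - k * fm, fc + k * fm) for k in range(1, n + 1)]

# 6 MHz carrier, 3.5 MHz modulation: first pair 2.5/9.5 MHz,
# second pair -1/13 MHz; the -1 MHz term folds back to +1 MHz.
assert sideband_pairs(6.0, 3.5, 2) == [(2.5, 9.5), (-1.0, 13.0)]
```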
Sound

FM is also used at audio frequencies to synthesize sound. This technique, known as FM synthesis, was popularized by early digital synthesizers and became a standard feature for several generations of personal computer sound cards.
An audio signal (top) may be carried by an AM or FM radio wave.
History

Edwin Howard Armstrong (1890-1954) was an American electrical engineer who invented frequency modulation (FM) radio.[4] He patented the regenerative circuit in 1914, the superheterodyne receiver in 1918 and the super-regenerative circuit in 1922.[5] He presented his paper "A Method of Reducing Disturbances in Radio Signaling by a System of Frequency Modulation", which first described FM radio, before the New York section of the Institute of Radio Engineers on November 6, 1935. The paper was published in 1936.[6]

As the name implies, wideband FM (W-FM) requires a wider signal bandwidth than amplitude modulation by an equivalent modulating signal, but this also makes the signal more robust against noise and interference. Frequency modulation is also more robust against simple signal-amplitude fading phenomena. As a result, FM was chosen as the modulation standard for high-frequency, high-fidelity radio transmission: hence the term "FM radio" (although for many years the BBC called it "VHF radio", because commercial FM broadcasting uses a well-known part of the VHF band, the FM broadcast band).

FM receivers employ a special detector for FM signals and exhibit a phenomenon called the capture effect, where the tuner is able to clearly receive the stronger of two stations being broadcast on the same frequency. Problematically, however, frequency drift or lack of selectivity may cause one station or signal to be suddenly overtaken by another on an adjacent channel. Frequency drift typically constituted a problem on very old or inexpensive receivers, while inadequate selectivity may plague any tuner.

An FM signal can also be used to carry a stereo signal: see FM stereo. However, this is done by using multiplexing and demultiplexing before and after the FM process. The rest of this article ignores the stereo multiplexing and demultiplexing process used in "stereo FM", and concentrates on the FM modulation and demodulation process, which is identical in stereo and mono.

A high-efficiency radio-frequency switching amplifier can be used to transmit FM signals (and other constant-amplitude signals). For a given signal strength (measured at the receiver antenna), switching amplifiers use less battery power and typically cost less than a linear amplifier. This gives FM another advantage over modulation schemes that require linear amplifiers, such as AM and QAM.
FM broadcasting

FM broadcasting is a broadcast technology pioneered by Edwin Howard Armstrong that uses frequency modulation (FM) to provide high-fidelity sound over broadcast radio.
Terminology

The term 'FM band' is effectively shorthand for 'frequency band in which FM is used for broadcasting'. This term can upset purists because it conflates a modulation scheme with a range of frequencies. The term 'VHF' (Very High Frequency) was previously in common use in Europe. 'UKW', which stands for Ultrakurzwelle (ultra short wave) in German, is still widely used in Germany, as is 'UKV' (ultrakortvåg) in Sweden.
Broadcast band

Throughout the world, the broadcast band falls within the VHF part of the radio spectrum. Usually 87.5 to 108.0 MHz is used, or some portion thereof, with a few exceptions:
î In the former Soviet republics, and some former Eastern Bloc countries, the older 65 to 74 MHz band is also used. Assigned frequencies are at intervals of 30 kHz. This band, sometimes referred to as the OIRT band, is slowly being phased out in many countries. In those countries the 87.5-108.0 MHz band is referred to as the CCIR band.
î In Japan, the band 76-90 MHz is used.
The frequency of an FM broadcast station (more strictly its assigned nominal centre frequency) is usually an exact multiple of 100 kHz. In most of the Americas and the Caribbean, only odd multiples are used. In some parts of Europe, Greenland and Africa, only even multiples are used. In Italy, multiples of 50 kHz are used. There are other unusual and obsolete standards in some countries, including 0.001, 0.01, 0.03, 0.074, 0.5, and 0.3 MHz.
Modulation characteristics

Frequency modulation

Frequency modulation (FM) is a form of modulation which conveys information over a carrier wave by varying its frequency (contrast this with amplitude modulation, in which the amplitude of the carrier is varied while its frequency remains constant). In analog applications, the instantaneous frequency of the carrier is directly proportional to the instantaneous value of the input signal. This form of modulation is commonly used in the FM broadcast band.
3 c *c* c Random noise has a 'triangular' spectral distribution in an FM system, with the effect that noise occurs predominantly at the highest frequencies within the baseband. This can be offset, to a limited extent, by boosting the high frequencies before transmission and reducing them by a corresponding amount in the receiver. Reducing the high frequencies in the receiver also reduces the high-frequency noise. These processes of boosting and then reducing certain frequencies are known as pre-emphasis and de-emphasis, respectively. The amount of pre-emphasis and de-emphasis used is defined by the time constant of a simple RC filter circuit. In most of the world a 50 µs time constant is used. In North America, 75 µs is used. This applies to both mono and stereo transmissions and to baseband audio (not the subcarriers). The amount of pre-emphasis that can be applied is limited by the fact that many forms of contemporary music contain more high-frequency energy than the musical styles which prevailed at the birth of FM broadcasting. They cannot be pre-emphasized as much because it would cause excessive deviation of the FM carrier. (Systems more modern than FM broadcasting
tend to use either programme-dependent variable pre-emphasis, e.g. dbx in the BTSC TV sound system, or none at all.)
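The time constants above map directly to corner frequencies of the first-order RC network: f_c = 1/(2πτ). A minimal sketch of that relationship (the function name is illustrative, not from the source):

```python
import math

def preemphasis_corner_hz(tau_seconds: float) -> float:
    """Corner (turnover) frequency of a first-order RC pre-emphasis
    network with time constant tau: f_c = 1 / (2 * pi * tau)."""
    return 1.0 / (2.0 * math.pi * tau_seconds)

print(round(preemphasis_corner_hz(50e-6)))  # ~3183 Hz (most of the world)
print(round(preemphasis_corner_hz(75e-6)))  # ~2122 Hz (North America)
```

Above the corner frequency, the network boosts the signal at roughly 6 dB per octave until a second (de-emphasis-side) limit is reached.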
Stereo FM

In the late 1950s, several systems to add stereo to FM radio were considered by the FCC. Included were systems from 14 proponents, including Crosley, Halstead, Electrical and Musical Industries, Ltd (EMI), Zenith Electronics Corporation and General Electric. The individual systems were evaluated for their strengths and weaknesses during field tests in Uniontown, Pennsylvania, using KDKA-FM in Pittsburgh as the originating station. The Crosley system was rejected by the FCC because it degraded the signal-to-noise ratio of the main channel and did not perform well under multipath RF conditions. In addition, it did not allow for SCA services because of its wide FM sub-carrier bandwidth. The Halstead system was rejected due to lack of high-frequency stereo separation and reduction in the main channel signal-to-noise ratio. The GE and Zenith systems, so similar that they were considered theoretically identical, were formally approved by the FCC in April 1961 as the standard stereo FM broadcasting method in the USA and later adopted by most other countries.[1]

It is important that stereo broadcasts be compatible with mono receivers. For this reason, the left (L) and right (R) channels are algebraically encoded into sum (L+R) and difference (L−R) signals. A mono receiver will use just the L+R signal, so the listener will hear both channels in the single loudspeaker. A stereo receiver will add the difference signal to the sum signal to recover the left channel, and subtract the difference signal from the sum to recover the right channel. The (L+R) main channel signal is transmitted as baseband audio in the range of 30 Hz to 15 kHz. The (L−R) sub-channel signal is modulated onto a 38 kHz double-sideband suppressed carrier (DSBSC) signal occupying the baseband range of 23 to 53 kHz. A 19 kHz pilot tone, at exactly half the 38 kHz sub-carrier frequency and with a precise phase relationship to it, as defined by the formula below, is also generated.
This is transmitted at 8–10% of the overall modulation level and is used by the receiver to regenerate the 38 kHz sub-carrier with the correct phase. The final multiplex signal from the stereo generator contains the main channel (L+R), the pilot tone, and the sub-channel (L−R). This composite signal, along with any other sub-carriers, modulates the FM transmitter. The instantaneous deviation of the transmitter carrier frequency due to the stereo audio and pilot tone (at 10% modulation) is:
[0.9 × ((A+B)/2 + ((A−B)/2) × sin(4π f_p t)) + 0.1 × sin(2π f_p t)] × 75 kHz [2]
where A and B are the pre-emphasized left and right audio signals and f_p = 19 kHz is the frequency of the pilot tone. Slight variations in the peak deviation may occur in the presence of other subcarriers or because of local regulations. Converting the multiplex signal back into left and right audio signals is performed by a stereo decoder, which is built into stereo receivers. In order to preserve stereo separation and signal-to-noise parameters, it is normal practice to apply pre-emphasis to the left and right channels before encoding, and to apply de-emphasis at the receiver after decoding. Stereo FM signals are more susceptible to noise and multipath distortion than are mono FM signals.[3] In addition, for a given RF level at the receiver, the signal-to-noise ratio for the stereo signal will be worse than for the mono receiver. For this reason many FM stereo receivers include a stereo/mono switch to allow listening in mono when reception conditions are less than ideal, and most car radios are arranged to reduce the separation as the signal-to-noise ratio worsens, eventually going to mono while still indicating that a stereo signal is being received.
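The stereo multiplex described above can be sketched numerically. This is a minimal illustration under stated assumptions: it uses the 90%/10% deviation split between audio and pilot described in the text, a 19 kHz pilot, and treats the result as a fraction of peak deviation; the function name is my own. A useful sanity check is that full-scale audio can never push the composite past 100% modulation.

```python
import math

F_PILOT = 19_000.0  # Hz, stereo pilot tone (38 kHz subcarrier = 2 * F_PILOT)

def stereo_multiplex(a: float, b: float, t: float) -> float:
    """Composite baseband sample (fraction of peak deviation) for
    pre-emphasized left/right audio samples a, b at time t.
    Audio (sum + DSBSC difference) takes 90%; the pilot takes 10%."""
    audio = (a + b) / 2.0 + ((a - b) / 2.0) * math.sin(4 * math.pi * F_PILOT * t)
    return 0.9 * audio + 0.1 * math.sin(2 * math.pi * F_PILOT * t)

# Worst-case audio (hard-panned, full-scale): the composite stays within 100%.
peak = max(abs(stereo_multiplex(1.0, -1.0, n / 1_000_000.0)) for n in range(1000))
print(peak <= 1.0)  # True
```

The bound holds because the audio term interpolates between A and B (each at most full scale), so 0.9 of it plus the 0.1 pilot can never exceed 1.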
Quadraphonic FM

In 1969 Louis Dorren invented the Quadraplex system of single-station, discrete, compatible four-channel FM broadcasting. There are two additional subcarriers in the Quadraplex system, supplementing the single one used in standard stereo FM. The baseband layout is as follows:
- 50 Hz to 15 kHz: main channel (sum of all four channels) (LF+LB+RF+RB), for mono FM listening compatibility.
- 23 to 53 kHz (cosine quadrature subcarrier): (LF+LB) − (RF+RB), the left-minus-right difference signal. This signal's modulation in algebraic sum and difference with the main channel was used for two-channel stereo listener compatibility.
- 23 to 53 kHz (sine quadrature 38 kHz subcarrier): (LF+RF) − (LB+RB), the front-minus-back difference signal. This signal's modulation in algebraic sum and difference with the main channel and all the other subcarriers is used for the quadraphonic listener.
- 61 to 91 kHz (cosine quadrature 76 kHz subcarrier): (LF+RB) − (LB+RF), the diagonal difference signal. This signal's modulation in algebraic sum and difference with the main channel and all the other subcarriers is also used for the quadraphonic listener.
- 95 kHz: SCA subcarrier, phase-locked to the 19 kHz pilot, for reading services for the blind, background music, etc.
There were several variations on this system submitted by GE, Zenith, RCA, and Denon for testing and consideration during the National Quadraphonic Radio Committee field trials for the FCC. The original Dorren Quadraplex system outperformed all the others and was chosen as the national standard for quadraphonic FM broadcasting in the United States. The first commercial FM station to broadcast quadraphonic program content was WIQB (now called WWWW-FM) in Ann Arbor/Saline, Michigan, under the guidance of Chief Engineer Brian Brown.[4]
Other subcarrier services
Typical spectrum of a composite baseband signal

The subcarrier system has been further extended to add other services. Initially these were private analog audio channels which could be used internally or rented out. Radio reading services for the blind are also still common, and there were experiments with quadraphonic sound. If a station does not broadcast in stereo, everything from 23 kHz on up can be used for other services. The guard band around 19 kHz (±4 kHz) must still be maintained, so as not to trigger stereo decoders on receivers. If there is stereo, there will typically be a guard band between the upper limit of the DSBSC stereo signal (53 kHz) and the lower limit of any other subcarrier. Digital services are now also available. A 57 kHz subcarrier (phase-locked to the third harmonic of the stereo pilot tone) is used to carry a low-bandwidth digital Radio Data System signal,
providing extra features such as Alternative Frequency (AF) and Network (NN). This narrowband signal runs at only 1187.5 bits per second, and is thus only suitable for text. A few proprietary systems are used for private communications. A variant of RDS is the North American RBDS or "smart radio" system. In Germany the analog ARI system was used prior to RDS for broadcasting traffic announcements to motorists (without disturbing other listeners). Plans to use ARI in other European countries led to the development of RDS as a more powerful system. RDS is designed to be capable of being used alongside ARI despite using identical subcarrier frequencies.

In the United States, digital radio services are being deployed within the FM band rather than using Eureka 147 or the Japanese standard ISDB. This in-band on-channel approach, like all digital radio techniques, makes use of advanced compressed audio. The proprietary iBiquity system, branded as "HD Radio", is currently authorized for "hybrid" mode operation, wherein both the conventional analog FM carrier and digital sideband subcarriers are transmitted. Eventually, presuming widespread deployment of HD Radio receivers, the analog services could theoretically be discontinued and the FM band become all digital.

In the USA, services (other than stereo, quad and RDS) using subcarriers are sometimes referred to as SCA (subsidiary communications authorisation) services. Uses for such subcarriers include book/newspaper reading services for blind listeners, private data transmission services (for example, sending stock market information to stockbrokers or stolen credit card number blacklists to stores), subscription commercial-free background music services for shops, paging ("beeper") services, and providing a program feed for the AM transmitters of AM/FM stations. SCA subcarriers are typically at 67 kHz and 92 kHz.
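The seemingly odd 1187.5 bit/s RDS rate is not arbitrary: the bit clock is derived from the 57 kHz subcarrier, which is in turn the third harmonic of the 19 kHz stereo pilot. A quick numeric check of those relationships:

```python
PILOT_HZ = 19_000             # stereo pilot tone
RDS_SUBCARRIER_HZ = 57_000    # RDS subcarrier, phase-locked to the pilot

# The subcarrier is the pilot's 3rd harmonic, and the RDS bit clock
# divides the subcarrier by 48, giving the quoted 1187.5 bit/s rate.
print(RDS_SUBCARRIER_HZ == 3 * PILOT_HZ)   # True
print(RDS_SUBCARRIER_HZ / 48)              # 1187.5
```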
Dolby FM

Dolby FM was a commercially unsuccessful noise reduction system used with FM radio in some countries during the late 1970s. It used a modified 25 µs pre-emphasis time constant and a frequency-selective companding arrangement to reduce noise. See: Dolby noise reduction system.
Distance covered by FM transmission
The range of an FM mono transmission is related to the transmitter RF power, the antenna gain and antenna height. The FCC (USA) publishes curves that aid in calculation of this maximum distance as a function of signal strength at the receiving location. For FM stereo, the maximum distance covered is significantly reduced. This is due to the presence of the 38 kHz subcarrier modulation. Vigorous audio processing improves the coverage area of an FM stereo station.
Adoption of FM broadcasting worldwide

Despite FM having been patented in 1933, commercial FM broadcasting did not begin until 1939, when it was initiated by WRVE, the FM station of General Electric's main factory in Schenectady, NY. In countries outside of Europe it took many years for FM to be adopted by the majority of radio listeners. The first FM broadcasting stations were in the United States, but initially they were primarily used to broadcast classical music to an upmarket listenership in urban areas, and for educational programming. By the late 1960s FM had been adopted by fans of "alternative rock" music (the "Album Oriented Rock" format), but it was not until 1978 that listenership to FM stations exceeded that of AM stations in North America. During the 1980s and 1990s, Top 40 music stations and later even country music stations largely abandoned AM for FM. Today AM is mainly the preserve of talk radio, news, sports, religious programming, ethnic (minority-language) broadcasting and some types of minority-interest music. This shift has transformed AM into the "alternative band" that FM once was.

The medium wave band (known as "AM" in North America) is overcrowded in Western Europe, leading to interference problems and, as a result, many MW frequencies are suitable only for speech broadcasting. Belgium, the Netherlands, Denmark and particularly Germany were among the first countries to adopt FM on a widespread scale. Among the reasons for this were:

1. The medium wave band in Western Europe became overcrowded after World War II, mainly due to the best available medium wave frequencies being used at high power levels by the Allied occupation forces, both for broadcasting entertainment to their troops and for broadcasting Cold War propaganda across the Iron Curtain.

2. After World War II, broadcasting frequencies were reorganized and reallocated by delegates of the victorious countries in the Copenhagen Frequency Plan.
German broadcasters were left with only two remaining AM frequencies, and were forced to look to FM for expansion.
Public service broadcasters in Ireland and Australia were far slower at adopting FM radio than those in either North America or continental Europe. However, in Ireland several unlicensed commercial FM stations were on air by the mid-1980s; these generally simulcast on AM and FM.

In the United Kingdom, the BBC began FM broadcasting in 1955, with three national networks carrying the Light Programme, Third Programme and Home Service (renamed Radio 2, Radio 3 and Radio 4 respectively in 1967). These three networks used the sub-band 88.0–94.6 MHz. The sub-band 94.6–97.6 MHz was later used for BBC and local commercial services. Only when commercial broadcasting was introduced to the UK in 1973 did the use of FM pick up in Britain. With the gradual clearance of other users (notably public services such as police, fire and ambulance) and the extension of the FM band to 108.0 MHz between 1980 and 1995, FM expanded rapidly throughout the British Isles and effectively took over from LW and MW as the delivery platform of choice for fixed and portable domestic and vehicle-based receivers. In addition, Ofcom (previously the Radio Authority) in the UK issues, on demand, Restricted Service Licences on FM and also on AM (MW) for short-term local-coverage broadcasting, which is open to anyone who does not carry a prohibition and can put up the appropriate licensing and royalty fees. In 2006 almost 500 such licences were issued.

FM started in Australia in 1947 but did not catch on and was shut down in 1961 to expand the television band. It was not reopened until 1975. Subsequently, it developed steadily until, in the 1980s, many AM stations transferred to FM because of its superior sound quality. Today, as elsewhere in the developed world, most Australian broadcasting is on FM, although AM talk stations are still very popular. Most other countries expanded their use of FM through the 1990s.
Because it takes a large number of FM transmitting stations to cover a geographically large country, particularly where there are terrain difficulties, FM is more suited to local broadcasting than for national networks. In such countries, particularly where there are economic or infrastructural problems, "rolling out" a national FM broadcast network to reach the majority of the population can be a slow and expensive process.
FM broadcast frequencies

The frequencies available for FM broadcasting were decided at several important ITU conferences, the milestone among them being the Stockholm Agreement of 1961 among 38 countries.
- Final acts of the conference
A 1984 conference in Geneva made some modifications to the original Stockholm Agreement, particularly in the frequency range above 100 MHz.
Use of the FM broadcasting band by consumer devices
Consumer FM transmitters

In some countries, small-scale (Part 15, in United States terms) transmitters are available that can transmit a signal from an audio device (usually an MP3 player or similar) to a standard FM radio receiver. Such devices range from small units built to carry audio to a car radio with no audio-in capability (often formerly provided by special adapters for audio cassette decks, which are becoming less common on car radio designs), up to full-sized, near-professional-grade broadcasting systems that can be used to transmit audio throughout a property. Most such units transmit in full stereo, though some models designed for beginner hobbyists may not. Similar transmitters are often included in satellite radio receivers and some toys.

The legality of these devices varies by country. The FCC in the US and Industry Canada allow them. Starting on 1 October 2006 these devices became legal in most countries in the European Union, and devices made to the harmonised European specification became legal in the UK on 8 December 2006.
Wireless microphones

The FM broadcast band can also be used by some inexpensive wireless microphones, but professional-grade wireless microphones generally use bands in the UHF region so they can run on dedicated equipment without broadcast interference. Such inexpensive wireless microphones are generally sold as toys for karaoke or similar purposes, allowing the user to use an FM radio as an output rather than a dedicated amplifier and speaker.
Microbroadcasting

Low-power transmitters such as those mentioned above are also sometimes used for neighborhood or campus radio stations, though campus radio stations are often run over carrier current. This is generally considered a form of microbroadcasting. As a general rule, enforcement towards low-power FM stations is stricter than for AM stations, due to issues such as the capture effect; as a result, FM microbroadcasters generally do not reach as far as their AM competitors.
Clandestine use of FM transmitters

FM transmitters have been used to construct miniature wireless microphones for espionage and surveillance purposes (covert listening devices, or so-called "bugs"); the advantage of using the FM broadcast band for such operations is that the receiving equipment would not be considered particularly suspect. Common practice is to tune the bug's transmitter off the ends of the broadcast band, into what in the United States would be TV channel 6 (below 87.9 MHz); most FM radios with analog tuners have sufficient overcoverage to pick up these slightly-beyond-outermost frequencies, although many digitally tuned radios do not. Constructing a "bug" is a common early project for electronics hobbyists, and project kits to do so are available from a wide variety of sources. The devices constructed, however, are often too large and poorly shielded for use in clandestine activity. In addition, much pirate radio activity is broadcast in the FM range because of the band's greater clarity and listenership, and the smaller size and lower cost of equipment.
Sampling (signal processing)
Signal sampling representation: the continuous signal is shown in green and the discrete samples in blue.
In signal processing, sampling is the reduction of a continuous signal to a discrete signal. A common example is the conversion of a sound wave (a continuous-time signal) to a sequence of samples (a discrete-time signal). A sample refers to a value or set of values at a point in time and/or space. A sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points.
Theory

For convenience, we will discuss signals which vary with time. However, the same results can be applied to signals varying in space or in any other dimension, and similar results are obtained in two or more dimensions.

Let x(t) be a continuous signal which is to be sampled, and suppose that sampling is performed by measuring the value of the continuous signal every T seconds, where T is called the sampling interval. The sampled signal x[n] is then given by:

x[n] = x(nT), with n = 0, 1, 2, 3, ...
The sampling frequency or sampling rate f_s is defined as the number of samples obtained in one second, i.e. f_s = 1/T. The sampling rate is measured in hertz or in samples per second.

We can now ask: under what circumstances is it possible to reconstruct the original signal completely and exactly (perfect reconstruction)? A partial answer is provided by the Nyquist–Shannon sampling theorem, which provides a sufficient (but not always necessary) condition under which perfect reconstruction is possible. The sampling theorem guarantees that bandlimited signals (i.e., signals which have a maximum frequency) can be reconstructed perfectly from their sampled version, if the sampling rate is more than twice the maximum frequency. Reconstruction in this case can be achieved using the Whittaker–Shannon interpolation formula.

The frequency equal to one-half of the sampling rate is therefore a bound on the highest frequency that can be unambiguously represented by the sampled signal. This frequency (half the sampling rate) is called the Nyquist frequency of the sampling system. Frequencies above the Nyquist frequency can be observed in the sampled signal, but their frequency is ambiguous. That is, a frequency component with frequency f cannot be distinguished from other components with frequencies Nf_s + f and Nf_s − f for nonzero integers N. This ambiguity is called aliasing. To handle this problem as gracefully as possible, most analog signals are filtered with an anti-aliasing filter (usually a low-pass filter with cutoff near the Nyquist frequency) before conversion to the sampled discrete representation.
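The aliasing ambiguity is easy to demonstrate numerically. A minimal sketch (values chosen for illustration): at f_s = 8 Hz, a 3 Hz cosine and its alias at f_s − f = 5 Hz produce identical sample sequences.

```python
import math

FS = 8.0  # sampling rate, Hz

def sample_cosine(freq_hz: float, n_samples: int = 16) -> list:
    """Samples of cos(2*pi*f*t) taken at t = n / FS."""
    return [math.cos(2 * math.pi * freq_hz * n / FS) for n in range(n_samples)]

# 3 Hz and FS - 3 = 5 Hz are indistinguishable once sampled at 8 Hz:
low, high = sample_cosine(3.0), sample_cosine(5.0)
print(all(abs(a - b) < 1e-9 for a, b in zip(low, high)))  # True -> aliasing
```

Without prior knowledge of the signal's band, a receiver cannot tell which of the two tones was actually present, which is exactly why an anti-aliasing filter is applied before sampling.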
Observation period

The observation period is the span of time during which a series of data samples are collected at regular intervals. More broadly, it can refer to any specific period during which a set of data points is gathered, regardless of whether or not the data is periodic in nature. Thus a researcher might study the incidence of earthquakes and tsunamis over a particular time period, such as a year or a century. The observation period is simply the span of time during which the data is studied, regardless of whether the data so gathered represents a set of discrete events having arbitrary timing within the interval, or whether the samples are explicitly bound to specified sub-intervals.
Practical considerations

In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a non-ideal device with various physical limitations. This results in deviations from the theoretically perfect reconstruction capabilities, collectively referred to as distortion. Various types of distortion can occur, including:
- Aliasing. A precondition of the sampling theorem is that the signal be bandlimited. However, in practice, no time-limited signal can be bandlimited. Since signals of interest are almost always time-limited (e.g., at most spanning the lifetime of the sampling device in question), it follows that they are not bandlimited. However, by designing a sampler with an appropriate guard band, it is possible to obtain output that is as accurate as necessary.
- Integration effect or aperture effect. This results from the fact that the sample is obtained as a time average within a sampling region, rather than just being equal to the signal value at the sampling instant. The integration effect is readily noticeable in photography when the exposure is too long and creates a blur in the image. An ideal camera would have an exposure time of zero. In a capacitor-based sample-and-hold circuit, the integration effect is introduced because the capacitor cannot instantly change voltage, thus requiring the sample to have non-zero width.
- Jitter, or deviation from the precise sample timing intervals.
- Noise, including thermal sensor noise, analog circuit noise, etc.
- Slew rate limit error, caused by an inability of an ADC output value to change sufficiently rapidly.
- Quantization, as a consequence of the finite precision of words that represent the converted values.
- Error due to other non-linear effects of the mapping of input voltage to converted output value (in addition to the effects of quantization).
The conventional, practical digital-to-analog converter (DAC) does not output a sequence of Dirac impulses (which, if ideally low-pass filtered, would result in the original signal before
sampling) but instead outputs a sequence of piecewise-constant values or rectangular pulses. This means that there is an inherent effect of the zero-order hold on the effective frequency response of the DAC, resulting in a mild roll-off of gain at the higher frequencies (a 3.9224 dB loss at the Nyquist frequency). This zero-order hold effect is a consequence of the hold action of the DAC and is not due to the sample and hold that might precede a conventional ADC, as is often misunderstood. The DAC can also suffer errors from jitter, noise, slewing, and non-linear mapping of input value to output voltage.

Jitter, noise, and quantization are often analyzed by modeling them as random errors added to the sample values. Integration and zero-order hold effects can be analyzed as a form of low-pass filtering. The non-linearities of either ADC or DAC are analyzed by replacing the ideal linear function mapping with a proposed nonlinear function.
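The 3.9224 dB figure quoted above follows from the zero-order hold's sinc-shaped frequency response; at the Nyquist frequency the gain is sin(π/2)/(π/2) = 2/π. A quick check (the function name is illustrative):

```python
import math

def zoh_gain_db(f: float, fs: float) -> float:
    """Gain of a zero-order hold at frequency f for sample rate fs:
    |sinc(f/fs)| = |sin(pi*f/fs) / (pi*f/fs)|, expressed in dB."""
    x = math.pi * f / fs
    return 20 * math.log10(math.sin(x) / x)

# Loss at the Nyquist frequency (f = fs/2) is 20*log10(2/pi):
print(round(zoh_gain_db(0.5, 1.0), 4))  # -3.9224
```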
Audio sampling
When it is necessary to capture audio covering the entire 20–20,000 Hz range of human hearing, such as when recording music or many types of acoustic events, audio waveforms are typically sampled at 44.1 kHz (CD), 48 kHz (professional audio), or 96 kHz. The approximately double-rate requirement is a consequence of the Nyquist theorem. There has been an industry trend towards sampling rates well beyond the basic requirements; 96 kHz and even 192 kHz are available.[1] This is in contrast with laboratory experiments, which have failed to show that ultrasonic frequencies are audible to human observers; however, in some cases ultrasonic sounds do interact with and modulate the audible part of the frequency spectrum (intermodulation distortion). It is noteworthy that intermodulation distortion is not present in the live audio, so it represents an artificial coloration of the live sound.[2] One advantage of higher sampling rates is that they can relax the low-pass filter design requirements for ADCs and DACs, but with modern oversampling sigma-delta converters this advantage is less important.
Audio is typically recorded at 8-, 16-, and 20-bit depth, which yield a theoretical maximum signal-to-quantization-noise ratio (SQNR) for a pure sine wave of approximately 49.93 dB, 98.09 dB and 122.17 dB, respectively.[3] Eight-bit audio is generally not used due to prominent and inherent quantization noise (low maximum SQNR), although the A-law and µ-law 8-bit encodings pack
more resolution into 8 bits while increasing total harmonic distortion. CD-quality audio is recorded at 16 bits. In practice, not many consumer stereos can produce more than about 90 dB of dynamic range, although some can exceed 100 dB. Thermal noise limits the true number of bits that can be used in quantization. Few analog systems have signal-to-noise ratios (SNR) exceeding 120 dB; consequently, few situations will require more than 20-bit quantization. For playback (as opposed to recording) purposes, a proper analysis of typical programme levels throughout an audio system reveals that the capabilities of well-engineered 16-bit material far exceed those of the very best hi-fi systems, with the microphone noise and loudspeaker headroom being the real limiting factors.
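The SQNR figures quoted for 8, 16 and 20 bits come from the standard full-scale-sine result SQNR = 20·log10(2^N·√1.5) ≈ 6.02·N + 1.76 dB. Reproducing them:

```python
import math

def max_sqnr_db(bits: int) -> float:
    """Theoretical maximum SQNR of a full-scale sine wave quantized to
    `bits` bits: 20*log10(2**bits * sqrt(1.5)) ~= 6.02*bits + 1.76 dB."""
    return 20 * math.log10(2 ** bits * math.sqrt(1.5))

for b in (8, 16, 20):
    print(b, round(max_sqnr_db(b), 2))  # 49.93, 98.09, 122.17 dB
```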
Speech sampling

Speech signals, i.e., signals intended to carry only human speech, can usually be sampled at a much lower rate. For most phonemes, almost all of the energy is contained in the 5 Hz – 4 kHz range, allowing a sampling rate of 8 kHz. This is the sampling rate used by nearly all telephony systems, which use the G.711 sampling and quantization specifications.
Video sampling

Standard-definition television (SDTV) uses either 720 by 480 pixels (US NTSC 525-line) or 704 by 576 pixels (UK PAL 625-line) for the visible picture area. High-definition television (HDTV) is currently moving towards three standards referred to as 720p (progressive), 1080i (interlaced) and 1080p (progressive, also known as Full HD), which all 'HD-Ready' sets will be able to display.
Undersampling
Plot of sample rates (y axis) versus the upper edge frequency (x axis) for a band of width 1; gray areas are combinations that are "allowed" in the sense that no two frequencies in the band alias to the same frequency. The darker gray areas correspond to undersampling with the lowest allowable sample rate.
When one samples a bandpass signal at a rate lower than the Nyquist rate, the samples are equal to samples of a low-frequency alias of the high-frequency signal; the original signal will still be uniquely represented and recoverable if the spectrum of its alias does not cross over half the sampling rate. Such undersampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF-to-digital conversion.[4]
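The idea can be illustrated with a deliberately sub-Nyquist example (values are mine, chosen for illustration): a 25 Hz tone sampled at 20 Hz produces exactly the samples of its 5 Hz alias, and if the signal is known in advance to occupy, say, the 20–30 Hz band, the original frequency is still recoverable from that alias.

```python
import math

FS = 20.0  # Hz, well below the 50 Hz Nyquist rate for a 25 Hz tone

samples_25 = [math.cos(2 * math.pi * 25.0 * n / FS) for n in range(40)]
samples_5  = [math.cos(2 * math.pi *  5.0 * n / FS) for n in range(40)]

# The 25 Hz tone lands exactly on its 5 Hz alias; knowledge of the band
# (e.g. 20-30 Hz) is what lets a receiver undo the frequency translation.
print(all(abs(a - b) < 1e-9 for a, b in zip(samples_25, samples_5)))  # True
```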
Oversampling

Oversampling is used in most modern analog-to-digital converters to reduce the distortion introduced by practical digital-to-analog converters, such as a zero-order hold instead of idealizations like the Whittaker–Shannon interpolation formula.
Complex sampling

Complex sampling refers to the simultaneous sampling of two different, but related, waveforms, resulting in pairs of samples that are subsequently treated as complex numbers. Usually one waveform is the Hilbert transform of the other, and the resulting complex-valued function is called an analytic signal, whose Fourier transform is zero for all negative values of frequency. In that case, the Nyquist rate for a waveform with no frequencies ≥ B can be reduced to just B (complex samples per second), instead of 2B (real samples per second).[note 1] Equivalently, the baseband waveform has a Nyquist rate of B, because all of its non-zero frequency content is shifted into the interval [−B/2, B/2).

Although complex-valued samples can be obtained as described above, they are much more commonly created by manipulating samples of a real-valued waveform. For instance, the equivalent baseband waveform can be created without explicitly computing the Hilbert transform, by processing the product sequence[note 2] through a digital lowpass filter whose cutoff frequency is B/2.[note 3] Computing only every other sample of the output sequence reduces the sample rate commensurate with the reduced Nyquist rate. The result is half as many complex-valued samples as the original number of real samples. No information is lost, and the original s(t) waveform can be recovered, if necessary.
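Why complex samples buy a factor of two can be seen with the same aliasing pair used earlier (an illustrative sketch with values of my choosing): at 8 complex samples per second, the analytic tones exp(j2π·3t) and exp(j2π·5t) remain distinguishable even though their real parts sample identically.

```python
import cmath

FS = 8.0  # complex samples per second

def complex_tone(freq_hz: float, n_samples: int = 8) -> list:
    """Samples of the analytic signal exp(j*2*pi*f*t) at t = n / FS."""
    return [cmath.exp(2j * cmath.pi * freq_hz * n / FS) for n in range(n_samples)]

a, b = complex_tone(3.0), complex_tone(5.0)
# Real sampling cannot tell 3 Hz from 5 Hz at FS = 8 Hz (cosines agree),
# but the complex samples differ, so a one-sided band of width FS is
# represented without ambiguity.
reals_equal = all(abs(x.real - y.real) < 1e-9 for x, y in zip(a, b))
complex_differ = any(abs(x - y) > 1e-6 for x, y in zip(a, b))
print(reals_equal, complex_differ)  # True True
```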
Nyquist–Shannon sampling theorem
Fig. 1: Hypothetical spectrum of a bandlimited signal as a function of frequency.

The Nyquist–Shannon sampling theorem, named after Harry Nyquist and Claude Shannon, is a fundamental result in the field of information theory, in particular telecommunications and signal processing. Sampling is the process of converting a signal (for example, a function of continuous time or space) into a numeric sequence (a function of discrete time or space). Shannon's version of the theorem states:[1]

If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.

The theorem is commonly called the Shannon sampling theorem; since it was also discovered independently by E. T. Whittaker, by Vladimir Kotelnikov, and by others, it is also known by names such as the Whittaker–Shannon–Kotelnikov or Whittaker–Nyquist–Kotelnikov–Shannon sampling theorem, as well as the cardinal theorem of interpolation theory. It is often referred to simply as the sampling theorem.

In essence, the theorem shows that a bandlimited analog signal that has been sampled can be perfectly reconstructed from an infinite sequence of samples if the sampling rate exceeds 2B samples per second, where B is the highest frequency in the original signal. If a signal contains a component at exactly B hertz, then samples spaced at exactly 1/(2B) seconds do not completely
determine the signal, Shannon's statement notwithstanding. This sufficient condition can be weakened, as discussed at Sampling of non-baseband signals below. More recent statements of the theorem are sometimes careful to exclude the equality condition; that is, the condition is that x(t) contains no frequencies higher than or equal to B; this condition is equivalent to Shannon's except when the function includes a steady sinusoidal component at exactly frequency B. The theorem assumes an idealization of any real-world situation, as it only applies to signals that are sampled for infinite time; any time-limited x(t) cannot be perfectly bandlimited. Perfect reconstruction is mathematically possible for the idealized model but only an approximation for real-world signals and sampling techniques, albeit in practice often a very good one. The theorem also leads to a formula for reconstruction of the original signal. The constructive proof of the theorem leads to an understanding of the aliasing that can occur when a sampling system does not satisfy the conditions of the theorem. The sampling theorem provides a sufficient condition, but not a necessary one, for perfect reconstruction. The field of compressed sensing provides a stricter sampling condition when the underlying signal is known to be sparse. Compressed sensing specifically yields a sub-Nyquist sampling criterion.
Introduction

A signal or function is bandlimited if it contains no energy at frequencies higher than some bandlimit or bandwidth B. A signal that is bandlimited is constrained in how rapidly it changes in time, and therefore how much detail it can convey in an interval of time. The sampling theorem asserts that the uniformly spaced discrete samples are a complete representation of the signal if this bandwidth is less than half the sampling rate. To formalize these concepts, let x(t) represent a continuous-time signal and X(f) be the continuous Fourier transform of that signal:

X(f) = ∫ x(t) e^(-i2πft) dt  (integral over all t)
The signal x(t) is said to be bandlimited to a one-sided baseband bandwidth, B, if:

X(f) = 0 for all |f| > B
or, equivalently, supp(X) ⊆ [-B, B]. Then the sufficient condition for exact reconstructability from samples at a uniform sampling rate fs (in samples per unit time) is:

fs > 2B

or equivalently:

B < fs/2

2B is called the Nyquist rate and is a property of the bandlimited signal, while fs/2 is called the Nyquist frequency and is a property of this sampling system. The time interval between successive samples is referred to as the sampling interval:

T = 1/fs

and the samples of x(t) are denoted by:

x[n] = x(nT), for integer n.

The sampling theorem leads to a procedure for reconstructing the original x(t) from the samples and states sufficient conditions for such a reconstruction to be exact.
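A minimal numeric restatement of these definitions may help fix the terms; the numbers below (a signal bandlimited to 4 kHz, sampled at 10 kHz) are made-up examples, not values from the text.

```python
# Hypothetical example values illustrating the definitions above.
B = 4000.0                   # one-sided bandwidth (Hz) of the signal
nyquist_rate = 2 * B         # property of the signal: 8000 samples/s
fs = 10000.0                 # chosen sampling rate (samples/s)
nyquist_frequency = fs / 2   # property of the sampling system: 5000 Hz
T = 1.0 / fs                 # sampling interval: 100 microseconds
sufficient = fs > nyquist_rate   # True: exact reconstruction is possible
```

Note the asymmetry the text points out: the Nyquist rate belongs to the signal, while the Nyquist frequency belongs to the sampling system.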
The sampling process

The theorem describes two processes in signal processing: a sampling process, in which a continuous-time signal is converted to a discrete-time signal, and a reconstruction process, in which the original continuous signal is recovered from the discrete-time signal. The continuous signal varies over time (or space in a digitized image, or another independent variable in some other application), and the sampling process is performed by measuring the continuous signal's value every T units of time (or space), which is called the sampling interval. In practice, for signals that are a function of time, the sampling interval is typically quite small, on the order of milliseconds, microseconds, or less. This results in a sequence of numbers, called samples, to represent the original signal. Each sample value is associated with the instant in time when it was measured. The reciprocal of the sampling interval (1/T) is the sampling frequency, denoted fs, which is measured in samples per unit of time. If T is expressed in seconds, then fs is expressed in Hz.
Reconstruction

Reconstruction of the original signal is an interpolation process that mathematically defines a continuous-time signal x(t) from the discrete samples x[n] and at times in between the sample instants nT.
Figure: The normalized sinc function, sin(πx)/(πx), showing the central peak at x = 0 and zero-crossings at the other integer values of x.
• The procedure: Each sample value is multiplied by the sinc function, scaled so that the zero-crossings of the sinc function occur at the sampling instants and that the sinc function's central point is shifted to the time of that sample, nT. All of these shifted and scaled functions are then added together to recover the original signal. The scaled and time-shifted sinc functions are continuous, making the sum of these also continuous, so the result of this operation is a continuous signal. This procedure is represented by the Whittaker–Shannon interpolation formula.
• The result: The signal obtained from this reconstruction process can have no frequencies higher than one-half the sampling frequency. According to the theorem, the reconstructed signal will match the original signal provided that the original signal contains no frequencies at or above this limit. This condition is called the Nyquist criterion, or sometimes the Raabe condition.
If the original signal contains a frequency component equal to one-half the sampling rate, the condition is not satisfied. The resulting reconstructed signal may have a component at that frequency, but the amplitude and phase of that component generally will not match the original component. This reconstruction or interpolation using sinc functions is not the only interpolation scheme. Indeed, it is impossible in practice because it requires summing an infinite number of terms.
However, it is the interpolation method that in theory exactly reconstructs any given bandlimited x(t) with bandlimit B < 1/(2T); any other method that does so is formally equivalent to it.
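The Whittaker–Shannon formula can be sketched numerically by truncating the infinite sum to the finitely many available samples, which is exactly the practical approximation discussed above. The signal parameters below are invented for the demonstration, and the truncation means the result is only approximate, especially near the edges of the record.

```python
import numpy as np

def sinc_interpolate(samples, T, t):
    """Finite-sum approximation of Whittaker-Shannon interpolation:
    evaluate sum_n samples[n] * sinc((t - n*T)/T) at the times in t."""
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc: sin(pi*x)/(pi*x)
    return np.sum(samples[None, :] * np.sinc((t[:, None] - n * T) / T),
                  axis=1)

fs = 100.0                    # sampling rate (Hz), hypothetical
T = 1.0 / fs
n = np.arange(200)            # 2 seconds of samples
x_n = np.sin(2 * np.pi * 5.0 * n * T)   # 5 Hz tone, well below fs/2

# Reconstruct at off-grid times near the middle of the record, where
# the error from truncating the infinite sum is smallest.
t = np.linspace(0.8, 1.2, 41)
x_hat = sinc_interpolate(x_n, T, t)
```

At the sample instants themselves the shifted sincs vanish except for the central one, so the interpolant passes exactly through every sample, as the theory requires.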
Consequences

A few consequences can be drawn from the theorem:
• If the highest frequency B in the original signal is known, the theorem gives the lower bound on the sampling frequency for which perfect reconstruction can be assured. This lower bound to the sampling frequency, 2B, is called the Nyquist rate.
• If instead the sampling frequency is known, the theorem gives us an upper bound for frequency components, B < fs/2, of the signal that allow for perfect reconstruction.
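A brief numeric illustration of why this bound matters: when a tone lies above fs/2, its samples coincide exactly with those of a lower-frequency alias, so no reconstruction method can tell the two apart. The frequencies below are arbitrary example values.

```python
import numpy as np

fs = 100.0                     # sampling rate (Hz), hypothetical
n = np.arange(64)
t = n / fs
f_high = 70.0                  # above the Nyquist frequency fs/2 = 50 Hz
f_alias = fs - f_high          # its 30 Hz alias

high = np.cos(2 * np.pi * f_high * t)    # samples of the 70 Hz tone
alias = np.cos(2 * np.pi * f_alias * t)  # samples of the 30 Hz tone
# The two sample sequences are numerically indistinguishable: sampling
# at 100 Hz has folded the 70 Hz tone down to 30 Hz (aliasing).
```

This is the aliasing the constructive proof of the theorem predicts: any reconstruction from these samples will contain the 30 Hz component, not the original 70 Hz one.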