Digigram - Transport of the FM MPX Composite Signal over IP, White Paper
Abstract
For more than two decades, digital transformation has been discussed in every industry. In
Broadcast and Media, most of our central infrastructure is digital by now, and we are already dealing with
the next transformation: moving away from SDI, AES3, MADI, etc. to IP-based technologies like SMPTE ST
2110, AES67, etc. Especially in audio, where we at Digigram come from, we see even
microphones and loudspeakers, our most standard analogue equipment, more and more connected via
AoIP to our infrastructure, and this is great. Considering transport via IP, one undeniable advantage is
point-to-multipoint transmission, compared to technologies where we need active distribution if we want
to share signals.
When it comes to Studio-to-Transmitter applications, we can see the same evolution. Coming from analogue
times with 950 MHz radio links or analogue telephone lines used for STL, we have long since surpassed
digital T1, ISDN lines, etc., and nowadays most links are IP-based, either via private WAN connections
or maintained by telcos. We at Digigram are a reliable partner for numerous installations
worldwide, as one of the first manufacturers to specialise entirely in IP-based codec connections.
We have therefore focused on the transmission of baseband audio signals, as there are some
clear advantages. We often have multiple kinds of transmission technologies, such as FM, AM or the
new digital radio formats (DAB, DRM, etc.), so the most efficient way is to process the final signal directly
at the transmitter sites. And with highly optimised audio codec algorithms, we can compress the signal
efficiently to fit the available bandwidth.
But transporting the baseband signal also means that we need to do the processing at the transmitter site.
It is not only about the EQing, the dynamics or the limiting to avoid clipping of the audio signal; it is also,
for instance, about processing and limiting the MPX signal itself that we feed to the FM transmitters, so
that we can ensure compliance with legal requirements such as the maximum MPX signal power or
the maximum frequency deviation. So what if you want to have the same processed MPX signal at multiple
transmitter sites? Provisioning the required equipment at each site is associated with significant capital
expenditure and operational costs.
This is why we see a growing demand for processing the MPX signal for FM radio at the studio side and then
distributing it from there to all the transmitter sites. And with the newest developments, the transport of
MPX signals via IP over private WAN connections or even internet connections becomes a more and more
viable option for broadcasters.
And when we look at FM transmission itself and compare it, for instance, with the various digital
radio transmission technologies, it is still the most widely used technology to send the signal to our radios
in cars or at home. For broadcasters in most countries worldwide, this will remain a relevant technology
to provide and maintain for years to come. So there is no resting on old technologies here; we
continuously need to look for more efficient ones.
What is MPX?
So what is an MPX signal, actually? And why are we using it? MPX stands for multiplexing, basically a technique
to transport multiple signals on one carrier. In this case, we want to put together all the elements that
we need for our FM transmission, all the information that a radio receiver needs to play back a particular
radio channel. This includes our stereo audio signal, the RDS or RBDS information (Radio Data System, or
Radio Broadcast Data System in the US), perhaps some additional subcarrier information like paging signals
or traffic control signal switching, and, not to forget, some essential synchronisation signals.
Before we look at the MPX signal itself, we should look at the frequency spectrum that we
are using for FM broadcasting. In most countries worldwide, the range from 87.5 to 108.0 MHz of the radio
spectrum is used. Every signal has its own assigned carrier frequency within this spectrum,
normally with a maximum frequency deviation of 75 kHz.
In the pioneering times of FM broadcasting, this signal was specified for standard stereo FM broadcasting.
The technology had to fulfil certain criteria, such as:
- Stereo broadcasts needed to be compatible with mono receivers.
- It should be compatible with existing Subsidiary Communications Authorisation (SCA) services.
In April 1961, the 'pilot tone' system from GE/Zenith was formally approved in the USA and later adopted by most
other countries worldwide. In figure 2, we can see this signal with RDS/RBDS data and two additional audio
subcarrier bands; this is known as the composite baseband, multiplex or MPX signal.
This MPX signal is basically the input signal of an FM exciter, the first stage of the FM transmitter.
The first element of this signal is the arithmetic sum of the left and right channels (L+R), which represents the
mono audio signal and corresponds to the baseband signal. This signal can be processed by any mono FM
receiver. The pilot tone at 19 kHz, if present in the signal, indicates that the signal carries stereo information.
The additional elements are modulated onto the MPX signal. The stereo information (L-R), the arithmetic
difference between the left and right channels, is modulated at a centre frequency of 38 kHz, the second
harmonic of the pilot tone, and enables the receiver to calculate and reconstruct the stereo signal with a clear
separation between the channels.
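To make this structure concrete, here is a minimal NumPy sketch of how a stereo MPX baseband is composed. It is purely illustrative: the test tones and levels are assumptions, and a real stereo generator adds filtering, pre-emphasis and level control.

```python
# Minimal sketch of stereo MPX baseband composition (illustrative only).
import numpy as np

fs = 192_000               # sample rate in Hz, as in AES192
t = np.arange(fs) / fs     # one second of signal

left = 0.5 * np.sin(2 * np.pi * 440 * t)     # hypothetical left channel
right = 0.5 * np.sin(2 * np.pi * 1000 * t)   # hypothetical right channel

mono = (left + right) / 2                    # L+R: mono-compatible baseband
diff = (left - right) / 2                    # L-R: stereo information
pilot = 0.1 * np.sin(2 * np.pi * 19_000 * t) # 19 kHz pilot tone
carrier = np.sin(2 * np.pi * 38_000 * t)     # 2nd harmonic of the pilot

mpx = mono + pilot + diff * carrier          # DSB-SC modulation at 38 kHz
```

A mono receiver simply plays the L+R part, while a stereo receiver uses the pilot to regenerate the 38 kHz carrier and recover L-R.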
This MPX signal is generated by a stereo generator. Usually, this is a processing function within an audio
processor that takes the stereo baseband audio signal from the audio mixer or the central audio infrastructure,
enhances the audio for the desired sound profile, generates the multiplexed signal and processes this
signal to improve transmission efficiency.
Other stages may enrich the signal with the RDS/RBDS information, which is modulated at 57 kHz, the third
harmonic of the pilot tone, and with additional SCA data.
This MPX signal can be transported as an analogue signal, but also digitally within an AES/EBU frame. The
signal is typically sampled at fs = 192 kHz and includes the audio signals, the RDS/RBDS data and the first
subcarrier. In reference to this sampling rate, the format is known as AES192.
All this means that we have to deal with two kinds of audio signals in the FM signal chain: the baseband audio
and the composite MPX signal.
The question is, how do we build a signal chain where we benefit the most from the respective advantages
of these signals, based on our overall system requirements?
So what are the advantages of the MPX signal compared to the other signals that we have to deal
with? All FM transmitters need this MPX signal as their input, so there are two different concepts for
providing it to the transmitters:
1. We generate the MPX signal directly at the transmitter sites.
2. We generate the MPX signal in the studio facilities and send it to the transmitter sites via STL.
In the first approach, where we generate the MPX signal directly at the transmitter sites, all the
equipment for MPX processing, such as audio processors or RDS encoders, is required directly at the
transmitter site, and the studio sends all the data as elementary signals to the transmitter sites.
Depending on the overall system, this might be the best and most efficient solution. If you have a transmitter
site that also needs to provide AM or any kind of digital radio (DAB, DRM, etc.), then you might want to
generate the signals individually for the different transmission technologies, while processing the baseband
audio differently for each. In that case, it is probably beneficial to send the baseband audio along with the
ancillary data via the STL to the transmitter, instead of sending multiple different multiplexed audio signals.
Sending all the elementary signals to the transmitter sites and processing them on-site clearly gives a lot
of flexibility. For sending baseband audio signals via STL, we have well-advanced technologies and
experience nowadays. And even when it comes to sending audio over less reliable communication links like
the internet, we can now achieve a very high degree of QoS.
With encoding formats like Opus or the AAC family, we have a great toolset to optimise the required bitrate
and adapt to the bandwidth constraints of the given network links while maintaining superb audio quality.
Moreover, with redundancy technologies like DUAL Streaming or FEC (Forward Error Correction), where
parity information is sent alongside the actual audio stream and can be sized between roughly
10% and 100% of the main audio bitrate, we really can tailor the right compromise between audio quality and
reliability, as the sketch below illustrates.
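As a quick sizing example for the FEC parity range mentioned above, here is a small calculation; the 256 kbit/s audio bitrate is an assumed example value, not a recommendation.

```python
# Rough service-bitrate estimate for an audio stream with FEC parity,
# for the 10%-100% parity range discussed above (example numbers only).
audio_kbps = 256  # assumed main audio stream bitrate

for parity_ratio in (0.10, 0.50, 1.00):
    total_kbps = audio_kbps * (1 + parity_ratio)
    print(f"{parity_ratio:.0%} parity -> {total_kbps:.0f} kbit/s total")
```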
But processing the MPX signal directly at the transmitter stations requires having all the audio processing
equipment at each transmitter site, which means you need individual devices at every station.
Especially if there are multiple FM transmitter sites with the same requirements, this is
a massive investment in multiple devices that might all do the same processing, only at different locations.
It also makes everything more difficult to monitor, control and maintain.
In this case, it would be great if we could do all the processing and the MPX signal generation directly at
the studio facilities and then send this one signal to all the transmitter stations. This allows centralised
management and can significantly reduce costs:
• Lower your CAPEX: no need for sound processors, RDS and stereo encoders at transmitter sites
• Lower your OPEX: less power consumption, less space required, and fewer maintenance operations at
transmitter sites.
But of course, these advantages only become significant if we can transport the MPX signal from the
studio to the transmitter over the same cost-efficient networks that we use for baseband audio STLs
nowadays. And this is why we talk about MPX over IP.
To send any kind of signal over an IP network, the first step is to have the signal in digital form. This can be
achieved via A/D conversion of the analogue MPX signal, or, if the audio processor can already provide the
signal in digital form as AES192, it is possible to remain entirely in the digital domain.
To understand the bandwidth requirements for the IP link, we first need to look at the options we have
for sampling the MPX signal.
The sample rate basically specifies the bandwidth of the analogue signal that we can use. According to
the Nyquist-Shannon sampling theorem, we need to sample a signal at no less than twice the bandwidth
used to achieve a correct representation in the digital domain. So let's look at the theoretical lower limit
of the sample rate needed for the A/D conversion, depending on which elements we want to have in
our digital MPX signal; a quick check of these numbers follows the list.
- Audio only:
To include the whole stereo audio, we need to cover at least 53 kHz of bandwidth; this results in a minimum
sampling rate of 106 kHz.
- Audio + RDS:
The RDS signal is modulated at 57 kHz with a maximum frequency deviation of ± 2 kHz on the MPX signal;
this results in a minimum required bandwidth of 59 kHz and a minimum sampling rate of 118 kHz.
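A minimal sketch that reproduces these two minimum rates from the Nyquist criterion:

```python
# Quick check of the minimum sample rates above (fs >= 2 x bandwidth).
options = {
    "audio only": 53_000,           # occupied MPX bandwidth in Hz
    "audio + RDS": 57_000 + 2_000,  # RDS centre frequency + max deviation
}
for name, bandwidth_hz in options.items():
    print(f"{name}: fs >= {2 * bandwidth_hz / 1000:.0f} kHz")
```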
But of course, these are very simplified theoretical numbers, and the sample rate actually used is higher.
The typical sample rate is 192 kHz, widely used as the AES192 interface. In fact, this is basically
just a 192 kHz sampled PCM stream packed into a standardised AES/EBU (AES3) frame. But as we can see, with
192 kHz we do not include the second SCA in our digital MPX signal. And since SCA signals are almost
exclusively used in the US, in most countries it is reasonable to consider only the audio + RDS signals.
Reducing the sample rate to 144 kHz, for instance, can help reduce the signal bitrate significantly.
Besides the sample rate, we need to consider the bit depth, or quantisation, to define the resulting bitrate
of the A/D conversion. The bit depth basically determines the resolution of the signal: the higher the
resolution, the better the quality. The important question to consider here is: what quality is required,
and when does a degradation in quality become effectively audible?
To quantify this quality, we use the Signal-to-Noise Ratio (SNR). In table 1, we can see the quantisations
typically used for high-fidelity audio and the resolution they provide.
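As a cross-check of such figures, the theoretical SNR of ideal PCM quantisation of a full-scale sine follows the standard approximation SNR ≈ 6.02 · N + 1.76 dB for a bit depth of N bits:

```python
# Theoretical SNR of ideal PCM quantisation (full-scale sine):
# SNR ~ 6.02 * N + 1.76 dB, for a bit depth of N bits.
for bits in (16, 20, 24):
    print(f"{bits} bit -> {6.02 * bits + 1.76:.1f} dB SNR")
# 16 bit -> 98.1 dB, 24 bit -> 146.2 dB
```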
To put this into perspective, here are the two most commonly used formats for audio signals.
- 16 bit / 44.1 kHz: This is used for CD audio and represents a high-quality audio experience for most people.
Differences from higher-quality audio are often not perceivable for listeners who do not have trained or
especially good hearing, and hearing such differences also requires high-end audio equipment and
loudspeakers. This is just an estimate, but probably 99% of the radios typically used for FM listening could
not deliver any better audio experience, even if the audio signal were of higher quality.
- 24 bit / 48 kHz: This format is used for professional audio transmission in broadcast facilities. The reason
why much higher quality signals are required here is that the audio often traverses multiple
processing stages or devices in the signal chain. Every stage diminishes the signal quality, and in order to
avoid a noticeable generation loss, it is necessary to work with a signal quality far beyond any audible
resolution.
So if we can afford to use a 24 bit depth for the MPX signal, then this is probably the right choice.
Realistically, though, we can also consider that this MPX signal is effectively the last generation in our
signal chain before it is processed in the FM transmitter. If we look at the audio processing done
directly before generating the MPX signal, the signal is effectively compressed and bandwidth-limited to a
15 kHz audio spectrum. And if we look at the FM transmission itself, we are talking about a noise
performance with an SNR below 70 dB. The question is then whether the potential benefit that 24 bit sampling
provides over 16 bit sampling justifies the potentially higher costs for higher STL bandwidth.
Several tests have shown that there are no audible differences at the end devices.
To determine the required bandwidth, we can calculate sample rate × bit depth, which gives our signal
bitrate. To obtain our service bitrate, we also need to account for some overhead for the IP transport.
The sketch below compares the most relevant options.
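A minimal calculation for the sample-rate and bit-depth combinations discussed above; the IP overhead factor is an assumption, since the real overhead depends on packet size and the RTP/UDP/IP framing used.

```python
# Signal bitrate = sample rate x bit depth; service bitrate adds transport
# overhead. The overhead factor below is an assumed example value.
IP_OVERHEAD = 1.08  # assumed ~8% for RTP/UDP/IP framing

for fs_khz, bits in ((192, 24), (192, 16), (144, 16)):
    signal_mbps = fs_khz * 1000 * bits / 1e6
    print(f"{fs_khz} kHz / {bits} bit: {signal_mbps:.2f} Mbit/s signal, "
          f"~{signal_mbps * IP_OVERHEAD:.2f} Mbit/s service bitrate")
```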
As a comparison, if we stream baseband audio in stereo with 24 bit / 48 kHz PCM, the typical format used
in broadcast facilities, then we require a service bitrate of 2.5 Mbit/s.
One advantage we mentioned about transporting the baseband signal is that we can reduce this
bitrate very efficiently via data compression while still maintaining the required audio quality. But these
compression algorithms, more specifically the lossy codecs, are mostly based on psychoacoustic models that
remove information in frequencies that are inaudible to the human ear. Our multiplexed MPX signal
does not work like this; here, we cannot simply use the same methods without losing critical information.
There are also lossless codec algorithms available that could be used to compress signals like the MPX, but
here, on the contrary, the compression factor depends on the signal itself and varies over time, which is not
suitable for streaming, where we need to work with a guaranteed bandwidth.
So the question is whether there are other methods to compress the MPX signal.
Like with any other compression, the target is to reduce the amount of data that we need to transport, and
there are various techniques to achieve that:
- Entropy encoding: information is transformed into coded representations that can be transported more
efficiently.
- Filter banks: the signal is divided into multiple frequency ranges that can be compressed more or less
strongly, individually, according to the importance of the information in each band, often based on
psychoacoustic modelling.
- Decorrelation: approximated difference information between related signals or values is used, as these
can often be quantised with a lower bit depth than the actual information. The accuracy of the
approximation controls the compression ratio.
- Predictive modelling: future information is predicted, and only the difference from the actual information
is stored. This basically works like decorrelation, but can be even more efficient (see the sketch after
this list).
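Here is a toy illustration of the predictive idea, assuming nothing more than a trivial "same as the last sample" predictor; predictable signals such as the pilot tone leave small residuals that can be quantised with fewer bits.

```python
# Toy predictive-modelling sketch: transmit only the residual between a
# trivial prediction and the actual sample (illustrative numbers only).
import numpy as np

fs = 192_000
t = np.arange(1024) / fs
pilot = np.sin(2 * np.pi * 19_000 * t)            # highly predictable signal

prediction = np.concatenate(([0.0], pilot[:-1]))  # predict previous sample
residual = pilot - prediction                     # what actually gets coded

print(f"signal peak:   {np.max(np.abs(pilot)):.2f}")     # ~1.00
print(f"residual peak: {np.max(np.abs(residual)):.2f}")  # noticeably smaller
```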
The problem with the MPX signal is that it is hard to apply these kinds of methods to the entire signal.
Psychoacoustic models and filter banks are not really suitable, as the information across the entire spectrum
is important. On the other hand, methods that are potentially very effective for compressing information
like the pilot tone or the RDS signal are not very effective for the audio signal itself, which still accounts
for the majority of the information in the MPX signal.
In 2015, the first studies were presented on approaches and technologies for compressing FM MPX
signals. The basic principle is to demodulate the several parts of the MPX signal and to process and
compress each part individually, optimised for its specific signal characteristics.
The audio information can then be compressed with the well-known methods that we also use in
our audio codec algorithms. Other information, like the pilot tone, is very predictable, so decorrelation
algorithms work very effectively here. The same applies to the RDS signal, where other very
specific algorithms transport the information in an efficient way.
All this compressed information can then be encoded into an IP stream and sent to the decoder.
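As a rough illustration of this component-wise approach, here is a sketch that splits a digitised MPX block into its parts with simple band filters. It is not the actual µMPX algorithm: the filter orders and edge frequencies are assumptions, and real codecs use far more sophisticated processing.

```python
# Sketch: split an MPX block into components so each can be coded with a
# method suited to its characteristics (illustrative filters only).
import numpy as np
from scipy.signal import butter, sosfilt

def bandlimit(x, fs, low, high):
    sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
    return sosfilt(sos, x)

fs = 192_000
mpx = np.random.default_rng(0).normal(size=fs)  # stand-in for a real MPX block

mono   = bandlimit(mpx, fs, 30, 15_000)      # L+R audio
pilot  = bandlimit(mpx, fs, 18_500, 19_500)  # 19 kHz pilot tone
stereo = bandlimit(mpx, fs, 23_000, 53_000)  # L-R, DSB-SC around 38 kHz
rds    = bandlimit(mpx, fs, 55_000, 59_000)  # RDS around 57 kHz
```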
For example, with µMPX, a codec algorithm developed by Thimeo Audio Technology, it is possible to
send a compressed MPX signal with a bitrate as low as 320 kbit/s, including the audio and the RDS information.
This codec is by now used in many hardware platforms, like our IQOYA X/LINK-MPX, to enable the transport
of the MPX signal with a much lower bitrate compared to uncompressed MPX.
If we find only analogue interfaces here, then the transmission path must support them. Often the MPX codecs
will support both options and will even allow bridging between audio processing devices that provide
digital interfaces and FM exciters that might have only analogue inputs, or vice versa.
Another point we have already discussed is that digital interfaces are normally based on
AES192, which includes the SCA information and, uncompressed, requires a higher bandwidth than we
might actually need if we only want the audio and the RDS information.
Other considerations for interfaces are redundancy concepts. In broadcast, we often need to provide a
24/7 service, and serious outages due to failures are not acceptable. In order to compensate for device
breakdowns, it is important to design redundancy concepts that ensure that, at any time, at least one device
or one transmission path can fail without breaking the service itself. So it is also necessary to consider
interface redundancy on the devices themselves.
For instance, if we compare the signal chain in workflow 2 with the one in workflow 3, we can see that
an error on one of the main devices will cause a changeover to a completely different signal chain, and that
a fault of a second device, in the backup chain, will interrupt the service completely.
Especially if the different signal chains are based on different technologies, as in our example
with the main transmission via a direct line and the backup via a microwave link, a changeover to the
backup can have a big impact on operational costs.
If we can provide devices with advanced redundancy features, such as redundant input and output interfaces,
it is possible to design much more reliable system concepts.
What do we provide?
With the IQOYA X/LINK-MPX, we at Digigram have developed a product that directly addresses the
requirements discussed above. IQOYA IP codecs are already well known for their feature-rich transmission
capabilities and rock-solid reliability, thanks to outstanding redundancy concepts and options.
The X/LINK-MPX features analogue and digital input and output interfaces, so that on top of using the
codec for MPX encoding or decoding, it is also possible to send baseband return audio back to the MPX
encoder in the studio. Analogue and digital interfaces also mean that it is possible to realise analogue-to-
analogue, analogue-to-digital, digital-to-digital or digital-to-analogue transmissions, which makes it suitable
for integration into any existing infrastructure.
It is designed to stream one MPX signal, either uncompressed or compressed with µMPX, to the transmitter
side. Uncompressed, it is possible to adjust the quantisation to the characteristics of the
transmission path. For analogue MPX inputs, it is even possible to specify the sampling rate: either 144 kHz
for audio and RDS data, or 192 kHz if additional SCA data is required as well. This gives a lot of options to
fine-tune the resulting final bitrate.
As we are used to from all the IQOYAs, auxiliary data can be transported as well.
And we would not be Digigram if redundancy and service reliability were not among our highest priorities.
With redundant inputs to allow for processing backups, redundant outputs to use backup inputs on the
exciter, and three priority levels with automatic changeover, for instance to play back locally stored MPX
files, it really enables you to design high-availability system concepts. And stream reliability is ensured in
any case by redundancy technologies like our DUAL Streaming® or FEC streaming, whether compressed or
uncompressed.
As we all know, digital radio is supposed to replace traditional analogue AM and FM transmission
completely, and there are already countries where this transition is complete. Nevertheless, this is not the
case everywhere, and FM transmission will be required and must be maintained for at least the next
decade. This also means we still need to find ways to optimise these systems.
We are happy to provide an optimised solution for this kind of use case. This solution is in demand by
many of our partners and clients, and we are proud to present our newest development: the
IQOYA X/LINK-MPX.