Remotes via IP
In the January Trends in Technology I discussed the difference between TDM and IP in the context of a station's STL system. I asserted that with a LAN extension to the transmitter site or even a WAN with known and controllable characteristics it made sense to consider IP for an STL system (most certainly for backup systems). It may seem logical to apply the same thought to a remote broadcast, but remote broadcasting is really a different ball game. The nature of remote broadcasts is that they are always somewhat risky. Everyone knows that you are walking on a tightrope without a net below. We are all willing to accept a certain amount of risk when it comes to executing a remote; indeed, when you pull it off against the odds, it makes the winning result that much sweeter.
The means by which you could carry out a remote broadcast used to be fairly limited. You could contact the telephone provider to see if it could drop a broadcast loop at the proposed location. Whether it could was often predicated on whether there was enough time to do so. If the line was dropped in time, the results could be good (if you got a good installer) or not so good, for a variety of reasons.
If you were lucky enough to have a licensed RPU channel and the proper equipment, then likely you could pull off remotes fairly easily, though quite often the audio quality wasn't that great. A “remote” really sounded like it was remote, in most cases. Interference from other users of the shared radio channels was always the wildcard to be wary of.
In the early 1990s, when ISDN started to see widespread use, the ordering issues were similar. The telephone provider had to be able to actually deliver ISDN at the proposed site, and there had to be enough time before the remote date to get the line installed and tested. The good thing about ISDN is that (assuming you have a good connection) you know what audio quality to expect ahead of time. Because ISDN provides a full-duplex connection, mix-minus and talkback could be provided to the remote site too, which was a nice improvement in functionality.
At my station, we use ISDN frequently for remotes, but I know the future of remote broadcasting is wrapped up in the almost ubiquitous nature of the Internet. We don't make use of RPU, and I have found most programmers have been spoiled by the audio quality we get with ISDN codecs. The audio quality of RPU has become unacceptable in most cases. We're preparing ourselves for the inevitability of using the Internet for remotes, and you probably should as well.
Using the Internet for remotes, though, really represents a paradigm shift. Up to this point, the means by which audio was sent back to the station from the remote site was yours and yours alone (with the exception of those shared RPU frequencies). The audio quality was a function of the bandwidth - and the bandwidth was known ahead of time. Now, in making use of an Internet connection, you will contend with an unknown number of users, each of whom will occupy varying amounts of bandwidth at random times, all through a pipeline with a fixed maximum bandwidth shared among its users. Sounds a little scary, but not surprisingly, many manufacturers of broadcast remote equipment have studied and attacked the problems associated with this shared usage successfully. Let's examine some of those problems.
When contending with other users for bandwidth in a bandwidth-limited system, there is an advantage in minimizing your own bandwidth requirements. For this reason, codecs that use the Internet for connectivity make use of many of the same audio compression schemes we've become familiar with that work over synchronous networks (such as ISDN or TDM).
However, the packet-switched nature of the Internet (as opposed to the circuit-switched nature of the PSTN) complicates the situation considerably. The data stream that represents the audio output of the encoder is broken into pieces - called packets - and each packet has additional information appended prior to its injection into the network. That packet overhead is the same on a per-packet basis, so changing the packet size affects the overall bandwidth needed to move the packets.
Say you have an audio encoder with an output of 128kb/s, or 16 kilobytes per second (16KB/s). Now break that data stream into 800-byte pieces and add overhead of (for example) 100 bytes to each 800-byte payload. You've now generated packets that add up to 18KB/s, an increase in required bandwidth of 12.5 percent. Or break the data into packets of 200 bytes, again adding 100 bytes of overhead per packet. Now the bandwidth requirement is 24KB/s, a 50 percent increase over the encoder output by itself.
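The arithmetic above can be checked with a short script. The 128kb/s encoder rate, the payload sizes and the 100-byte header figure are the illustrative numbers from the text, not the overhead of any particular protocol:

```python
# Bandwidth cost of packetizing a 128kb/s encoder stream.
# Payload sizes and the 100-byte per-packet overhead are the
# article's example figures, not any specific protocol's.

ENCODER_BYTES_PER_SEC = 128_000 // 8  # 128kb/s = 16,000 bytes/s (16KB/s)
HEADER_BYTES = 100                    # example per-packet overhead

def wire_rate(payload_bytes):
    """Total bytes per second on the wire for a given payload size."""
    packets_per_sec = ENCODER_BYTES_PER_SEC / payload_bytes
    return packets_per_sec * (payload_bytes + HEADER_BYTES)

for payload in (800, 200):
    total = wire_rate(payload)
    increase = 100 * (total - ENCODER_BYTES_PER_SEC) / ENCODER_BYTES_PER_SEC
    print(f"{payload}-byte payload: {total / 1000:.0f}KB/s "
          f"({increase:.1f} percent increase)")
```

Running this reproduces the two cases in the text: 18KB/s (12.5 percent) for 800-byte payloads and 24KB/s (50 percent) for 200-byte payloads.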
An unfortunate characteristic of the Internet is that sometimes these packets are lost along the way, for various reasons. Ideally, you would want the packet size to be large because, as I just demonstrated, the overall bandwidth requirement is reduced. However, if one of those large packets is lost, then a substantial amount of the encoded audio data will be missing at the far end.
One capability considered important for success in transmitting audio across the Internet, then, is the ability to alter the packet size on the sending end, so that different network conditions can be met and the effect of dropouts minimized. Some units actually adjust the packet size dynamically based on changing network conditions. In any case, the user needs to be able to adjust the packet size so the best compromise between overhead bandwidth and the impact of lost packets can be struck.
Inevitably, some packets will still be lost though, and there are other mechanisms designed to further minimize the negative effects of packet loss. One such method is known as forward error correction (FEC). FEC is basically the addition of redundant packets to the data stream - the idea being that this redundant data will effectively take the place of the packets that somehow end up missing at the far end. One can easily see that the addition of too many redundant packets could create a problem in and of itself with respect to network congestion. Therefore, like packet size, the amount of FEC should be adjustable by the user, to best meet network conditions.
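As a rough illustration of the principle (a simple XOR parity scheme - not the specific FEC any broadcast codec actually uses), one redundant parity packet per group lets the receiver rebuild a single lost packet from that group:

```python
# Toy FEC: one XOR parity packet per group of equal-length data packets.
# This sketches the redundancy idea only; real codecs use their own
# schemes and let the user tune how much redundancy is added.

def xor_parity(packets):
    """Build the parity packet: byte-wise XOR of all packets in the group."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild the one missing packet (marked None) using the parity packet."""
    missing_index = received.index(None)
    rebuilt = bytearray(parity)
    for pkt in received:
        if pkt is not None:
            for i, b in enumerate(pkt):
                rebuilt[i] ^= b
    return missing_index, bytes(rebuilt)

group = [b"ABCD", b"EFGH", b"IJKL"]
parity = xor_parity(group)
# Simulate losing the second packet in transit:
idx, rebuilt = recover([b"ABCD", None, b"IJKL"], parity)
print(idx, rebuilt)  # 1 b'EFGH'
```

The congestion trade-off is visible in the numbers: one parity packet per group of three adds a third more packets to the stream, which is exactly why the amount of FEC needs to be user-adjustable.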
Oh, but it doesn't end there, friends. The nature of the Internet also means the packets getting to the receive end may be late, or even out of order. For an audio stream, this is obviously a problem - one addressed by way of a packet jitter buffer. This buffer stores received packets for a certain amount of time, allowing late packets to catch up; out-of-order packets can also be re-sequenced prior to being sent to the audio decoder. The obvious problem here is that the buffer adds delay, generally considered bad when doing remotes. Therefore, once again, a compromise must be struck between problems in the audio caused by late or out-of-sequence packets, and the amount of delay that can be tolerated at the remote site.
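The re-sequencing half of that job can be sketched in a few lines. This is a bare-bones illustration (the class name and structure are mine, not any vendor's design, and a real buffer would also time out packets that never arrive): packets are held keyed on their sequence number, then handed to the decoder in order regardless of arrival order.

```python
# Toy jitter buffer: hold packets that may arrive out of order,
# then release them to the decoder in sequence-number order.
# Illustrative only; a real buffer also bounds its depth (delay)
# and gives up on packets that never arrive.
import heapq

class JitterBuffer:
    def __init__(self):
        self.heap = []  # min-heap keyed on sequence number

    def receive(self, seq, payload):
        """Accept a packet in whatever order the network delivers it."""
        heapq.heappush(self.heap, (seq, payload))

    def drain(self):
        """Hand buffered packets to the decoder in sequence order."""
        while self.heap:
            yield heapq.heappop(self.heap)

buf = JitterBuffer()
for seq, data in [(1, "a"), (3, "c"), (2, "b"), (4, "d")]:  # 3 arrives early
    buf.receive(seq, data)
print([seq for seq, _ in buf.drain()])  # [1, 2, 3, 4]
```

The delay trade-off the text describes lives in how long `drain` waits before running: the longer the buffer holds packets, the more late arrivals it can rescue, and the more end-to-end delay the remote talent has to work around.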