[Table: Modulation Modes and FEC Rates (the original table could not be fully recovered from this copy). For Star and Mesh Networks and for iSCPC Links, the table lists the supported modulation modes (BPSK, QPSK, 8PSK), the TDMA, SCPC, iSCPC, and Spread Spectrum BPSK FEC rates (.431, .495, .533, .660, .793, .879; Spread Spectrum applies to the 8350 and M1D1-TSS only), the FEC block sizes (1K, 4K, 16K) with their payload bytes, and the supporting hardware (iNFINITI, Evolution MxD1, M1D1-iSCPC, 5xxx, 73xx, 8350, II, II+). One or more entries are marked "Not recommended for new designs."]
Note: The TDMA Payload Bytes value excludes the TDMA header overhead of 10 bytes (Demand = 2 + LL = 6 + PAD = 2). SAR, Encryption, and VLAN features add additional overhead.
Note: SCPC channel framing uses a modified HDLC header, which requires bit-stuffing to prevent false end-of-frame detection. The actual payload is variable, and always slightly less than the numbers indicated in the table.
UPSTREAM AND DOWNSTREAM COMBINATIONS
Any supported combination of modulation modes and FEC rates can be used on either the
upstream or the downstream. However, the remote antenna size, radio size, and price
constraints limit the combinations that are most useful to you.
Table 6 shows the combinations of the upstream and downstream modulation types, along with an
indication of how likely they are to be implemented in an operational network. The combinations shown in
italic font have not been tested for iDS release 8.2. If you need to use one of these
combinations, please contact the iDirect Technical Assistance Center (TAC) for more information
(refer to Getting Help on page xvi).
Table 6. Upstream and Downstream Combinations
Note: For specific Eb/No values for each FEC rate and Modulation combination, refer to the
iDirect Link Budget Analysis Guide, which is available for download from the TAC
web page located at http://tac.idirect.net.
Likeliness    Downstream    Upstream
Most Likely   8PSK          8PSK
              QPSK          QPSK
              BPSK          BPSK
Likely        BPSK          QPSK
              QPSK          8PSK
Less Likely   QPSK          BPSK
              8PSK          QPSK
Not Likely    8PSK          BPSK
              BPSK          8PSK
4 IDIRECT SPREAD SPECTRUM
NETWORKS
This section provides information about Spread Spectrum technology in an iDirect network. It
discusses the following topics:
What is Spread Spectrum?
Downstream Specifications
Upstream Specifications
WHAT IS SPREAD SPECTRUM?
Spread Spectrum (SS) is a transmission technique in which a pseudo-noise (PN) code is employed
as a modulation waveform to spread the signal energy over a bandwidth much greater than
the signal information bandwidth. The signal is despread at the receiver by using a
synchronized replica of the pseudo-noise code. By spreading the signal information over greater
bandwidth, less transmit power is required. A sample SS network diagram is shown in Figure 41.
Figure 41. Spread Spectrum Network Diagram
Spreading takes place when the input data (d_t) is multiplied with the PN code (pn_t), which results
in the transmit baseband signal (tx_b). The baseband signal is then modulated and transmitted to
the receiving station. Despreading takes place at the receiving station when the baseband signal
is demodulated (rx_b) and correlated with the replica PN code (pn_r), which results in the data
output (d_r).
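As an illustration only (this is not part of the iDirect software), the following Python sketch shows the spread/despread relationship described above, using an invented four-chip PN code and BPSK data symbols; NumPy is assumed to be available.

# Illustrative direct-sequence spreading/despreading (not the actual iDirect implementation).
import numpy as np

sf = 4                                   # chips per symbol (spreading factor)
pn = np.array([1, -1, 1, 1])             # example PN code, one period per symbol
d_t = np.array([1, -1, -1, 1])           # BPSK data symbols (+1/-1)

# Spreading: each data symbol is multiplied by the PN code (tx_b = d_t * pn_t).
tx_b = np.repeat(d_t, sf) * np.tile(pn, len(d_t))

# Despreading: correlate the received baseband chips with a synchronized PN replica.
rx_b = tx_b                              # assume an ideal, noise-free channel
d_r = np.sign(np.sum((rx_b * np.tile(pn, len(d_t))).reshape(-1, sf), axis=1))

assert np.array_equal(d_r, d_t)          # data is recovered after despreading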
Beginning with iDS Release 8.0, Spread Spectrum transmission is supported in both TDMA and
SCPC configurations. SS mode is employed in iDirect networks to minimize adjacent satellite
interference (ASI). ASI can occur in applications such as Comms-On-The-Move (COTM) because
the small antenna (typically sub-meter) used on mobile vehicles has small aperture size, large
beam width, and high pointing error which can combine to cause ASI. Enabling SS reduces the
spectral density of the transmission so that it is low enough to avoid interfering with adjacent
satellites.
Conversely, when receiving through a COTM antenna, SS improves carrier performance in cases
of ASI (that is, when the carrier-to-interference ratio is degraded).
The iDirect SS is an extension of BPSK modulation in both upstream and downstream. The signal
is spread over wider bandwidth according to a Spreading Factor (SF) that you select. You can
select an SF of 1, 2, or 4. Each symbol in the spreading code is called a chip, and the spread
rate is the rate at which chips are transmitted. For example, selecting an SF of 1 means that the
spread rate is one chip per symbol (which is equivalent to regular BPSK, and therefore, there is
no spreading). Selecting an SF of 4 means that the spread rate is four chips per symbol.
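The following sketch, using assumed carrier values, shows how the chip rate and approximate occupied bandwidth relate to the symbol rate and the selected Spreading Factor; the 1.2 roll-off multiplier follows the occupied-bandwidth entries in the specification tables later in this chapter.

# Sketch of the symbol-rate / chip-rate relationship for a selected Spreading Factor.
def chip_rate(symbol_rate_sps: float, spreading_factor: int) -> float:
    """Chips per second: one chip per symbol at SF=1, four chips per symbol at SF=4."""
    return symbol_rate_sps * spreading_factor

def occupied_bandwidth(symbol_rate_sps: float, spreading_factor: int) -> float:
    """Approximate occupied bandwidth in Hz (excludes oscillator stability margin)."""
    return 1.2 * chip_rate(symbol_rate_sps, spreading_factor)

# Example: a 468.75 ksym/s upstream carrier spread with SF=4 reaches the 1.875 Mchip/s limit.
print(chip_rate(468.75e3, 4))            # 1875000.0 chips/s
print(occupied_bandwidth(468.75e3, 4))   # 2250000.0 Hz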
Release 8.2 adds an additional Spreading Factor for upstream TDMA carriers only. This new
spreading factor, COTM SF=1, is intended for use by fast moving mobile remotes only. Like an SF
of 1, if you select COTM SF=1, there is no spreading. However, the size of the carrier unique
word is increased, allowing mobile remotes to remain in the network when they might otherwise
drop out. A slightly lower C/N is required to receive a carrier with this spreading factor.
However, carriers with COTM SF=1 transmit at a slightly lower information rate.
SS may also be useful in situations where local or RF interference is unavoidable, such as hostile
jamming. SS can also be used to hide a carrier in the noise of an empty transponder. However, SS
should not be confused with Code Division Multiple Access (CDMA), which is the process of
transmitting multiple SS channels simultaneously on the same bandwidth.
SPREAD SPECTRUM HARDWARE COMPONENTS
The Hub Line Card (HLC) that supports Spread Spectrum is the M1D1-TSS line card and it
occupies two slots in the hub chassis. Therefore, the maximum number of SS HLCs you can have
in one chassis is 10, and you cannot install an M1D1-TSS HLC in slot 20.
Note: You must install the M1D1-TSS HLC in a slot that has one empty slot to the right. For
example, if you want to install the HLC in slot 4, slot 5 must be empty. Be sure that
you also check chassis slot configuration in iBuilder to verify that you are not
installing the HLC in a reserved slot.
The remote that supports spread spectrum is the 8350 series iNFINITI remote. The 3000, 5000,
and 7000 series iNFINITI remotes do not support spread spectrum.
DOWNSTREAM SPECIFICATIONS
The specifications for the spread spectrum downstream channel are outlined in Table 5.
Note: In Release 8.2, the iBuilder selections for Spreading Factors of 2 and 4 on the
iBuilder Carrier Information tab have changed to COTM SF=2 and COTM SF=4. These
Spreading Factors are identical to 2 and 4 in Release 8.0.
TABLE 5. Downstream Specification

PARAMETERS            VALUES                          ADDITIONAL INFORMATION
Modulation            BPSK                            QPSK is not supported in SS
Spreading Factor      1, 2, or 4                      SF=1 results in no spreading
Symbol Rate           64 ksym/s - 15 Msym/s
Chip Rate             15 Mchip/s maximum
FEC Rate              0.879, 0.793, 0.495
BER Performance       < 1 x 10^-8                     at 1 dB above theoretical C/N threshold
Occupied BW           1.2 * Chip Rate                 plus hub downconverter oscillator stability factor
Spectral Mask         IESS-308/309, MIL-STD 188xxx
Carrier Suppression   > -30 dBc
Hardware Platform     M1D1-TSS HLC
SUPPORTED FORWARD ERROR CORRECTION
(FEC) RATES
The upstream and downstream FEC rates that are supported in this release are described in
Table 6.
TABLE 6. Supported FEC Rates
BLOCK SIZE    UPSTREAM FEC    DOWNSTREAM FEC
1K            .66             N/A
              .431            N/A
              .533            N/A
4K            N/A             .495
              N/A             .793
16K           N/A             .879
CALCULATING CRL ACQUISITION TIME
Carrier Recovery Loop (CRL) acquisition time is a function of LNB stability and the transmit
symbol rate. The acquisition range, measured in Hz, is set equal to or slightly larger than the
LNB stability range. The demodulator uses a step-and-dwell approach to step through the
acquisition range until it detects carrier lock.
Total Acquisition Time
Total acquisition time is the sum of the DLL (Delay Locked Loop) acquisition time, the CRL
acquisition time, and the TDM lock time.
During acquisition, the SS DLL controls the despreading code generator which aligns the local
copy of the PN code with the incoming SS signal to allow the despreader to successfully recover
symbols. CRL lock is achieved immediately following DLL lock. TDM lock occurs four frames (0.5
second) after CRL lock is achieved.
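As a rough model only, total acquisition time can be sketched as the sum of the three components above. The DLL time, sweep step size, dwell time, and LNB stability values below are illustrative assumptions, not published iDirect figures.

# Hypothetical model of total acquisition time (all tuning parameters are assumptions).
def crl_acquisition_time(lnb_stability_hz: float, step_hz: float, dwell_s: float) -> float:
    """Worst-case step-and-dwell sweep across an acquisition range set slightly
    larger than the +/- LNB stability range."""
    acquisition_range_hz = 2 * lnb_stability_hz * 1.1   # +/- stability, plus ~10% margin
    steps = acquisition_range_hz / step_hz
    return steps * dwell_s

def total_acquisition_time(dll_s: float, crl_s: float, frame_s: float = 0.125) -> float:
    """DLL acquisition + CRL acquisition + TDM lock (four frames = 0.5 s at 125 ms frames)."""
    return dll_s + crl_s + 4 * frame_s

crl = crl_acquisition_time(lnb_stability_hz=25e3, step_hz=1e3, dwell_s=0.005)
print(total_acquisition_time(dll_s=0.2, crl_s=crl))   # about 0.98 s with these assumptions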
UPSTREAM SPECIFICATIONS
The specifications for the spread spectrum upstream channel are outlined in Table 7. The
Spreading Factor COTM 1, used in fast moving mobile applications, is described on page 52.
Note: In Release 8.2, the iBuilder selections for Spreading Factors of 2 and 4 on the
iBuilder Carrier Information tab have changed to COTM SF=2 and COTM SF=4. These
Spreading Factors are identical to 2 and 4 in Release 8.0.
TABLE 7. Upstream Specifications

PARAMETERS                  VALUES
Modulation                  BPSK
Spreading Factor            1, COTM 1, 2, or 4
Symbol Rate                 64 ksym/s - 1.875 Msym/s
Chip Rate                   1.875 Mchip/s maximum
FEC Rate                    .66, .431, .533
BER Performance             Refer to the iDirect Link Budget Analysis Guide
Maximum Frequency Offset    1/8% of Fsym
Unique Word Overhead        128 symbols
Burst Size                  1024 bits
Occupied Bandwidth          1.2 * Symbol Rate
Hardware Platform           iNFINITI series 8350
5 QOS IMPLEMENTATION PRINCIPLES
This chapter describes how you can configure Quality of Service definitions to achieve maximum
efficiency by prioritizing traffic.
QUALITY OF SERVICE (QOS)
Quality of Service is defined as the way IP traffic is classified and prioritized as it flows through
the iDirect system.
QOS MEASURES
When discussing QoS, at least four interrelated measures are considered. These are Throughput,
Latency, Jitter, and Packet Loss. This section describes these parameters in general terms,
without specific regard to an iDirect network.
Throughput. Throughput is a measure of capacity and indicates the amount of user data that is
received by the end user application. For example, a G729 voice call without additional
compression (such as cRTP), or voice suppression, requires a constant 24 Kbps of application
level RTP data to achieve acceptable voice quality for the duration of the call. Therefore this
application requires 24 Kbps of throughput. When adequate throughput cannot be achieved on a
continuous basis to support a particular application, QoS can be adversely affected.
Latency. Latency is a measure of the amount of time between events. Unqualified latency is the
amount of time between the transmission of a packet from its source and the receipt of that
packet at the destination. If explicitly qualified, it may also mean the amount of time between
a request for a network resource and the time when that resource is received. In general,
latency accounts for the total delay between events and it includes transit time, queuing, and
processing delays. Keeping latency to a minimum is very important for VoIP applications for
human factor reasons.
Jitter. Jitter is a measure of the variation of latency on a packet-by-packet basis. Referring to
the G729 example again, if voice packets (containing two 10 ms voice samples) are transmitted
every 20 ms from the source VoIP equipment, ideally those voice packets arrive at the
destination every 20 ms; this is a jitter value of zero. When dealing with a packet-switched
network, zero jitter is particularly difficult to guarantee. To compensate for this, all VoIP
equipment contains a jitter buffer that collects voice packets and sends them at the appropriate
interval (20 ms in this example).
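A minimal sketch of this jitter-buffer behavior, assuming 20 ms packets and an invented 40 ms buffering delay; the arrival times are fabricated for illustration.

# Illustrative jitter buffer: packets arrive with variable delay but are played
# out at a fixed 20 ms interval after an initial buffering delay.
PACKET_INTERVAL_MS = 20
BUFFER_DEPTH_MS = 40        # assumed initial buffering delay (two packets)

arrival_ms = [0, 21, 45, 58, 83]          # jittered arrival times of packets 0..4

def playout_schedule(arrivals, interval=PACKET_INTERVAL_MS, depth=BUFFER_DEPTH_MS):
    """Return the time each packet is played out, or None if it arrived too late."""
    schedule = []
    for seq, t_arrive in enumerate(arrivals):
        t_play = arrivals[0] + depth + seq * interval   # fixed playout clock
        schedule.append(t_play if t_arrive <= t_play else None)
    return schedule

print(playout_schedule(arrival_ms))        # [40, 60, 80, 100, 120]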
Packet Loss. Packet Loss is a measure of the number of packets that are transmitted by a
source, but not received by the destination. The most common cause of packet loss on a
network is network congestion. Congestion occurs whenever the volume of traffic exceeds the
available bandwidth. In these cases, packets are filling queues internal to network devices at a
rate faster than those packets can be transmitted from the device. When this condition exists,
network devices drop packets to keep the network in a stable condition. Applications that are
built on a TCP transport interpret the absence of these packets (and the absence of their
related ACKs) as congestion and they invoke standard TCP slow start and congestion avoidance
techniques. With real time applications, such as VoIP or streaming video, it is often impossible
to gracefully recover these lost packets because there is not enough time to retransmit lost
packets. Packet loss may affect the application in adverse ways. For example, parts of words in
a voice call may be missing or there may be an echo; video images may break up or become
block-like (pixelation effects).
QoS Application, iSCPC and Filter Profiles
QoS Profiles are defined by Application Profiles, iSCPC Profiles and Filter Profiles. An
Application or iSCPC Profile is a group of service levels, collected together and given a user-
defined name. A QoS Filter Profile encapsulates a single filter definition, and it consists of a set
of rules rather than a set of service levels. Application, iSCPC and Filter Profiles are applied to
downstream and upstream traffic independently, so that upstream traffic may have certain QoS
definitions, whereas downstream traffic may have a different set of QoS definitions (Figure 42).
iSCPC Profiles and Application Profiles are used differently in TDMA networks than they are in
iSCPC connections.
For TDMA networks, Application Profiles define the Group QoS Applications that you add
to your Service Profiles. You then assign the Service Profile to your TDMA remotes using
the Group QoS tab for your Bandwidth Pools.
iSCPC Profiles are assigned directly to iSCPC line cards on the QoS tab. The Line Card
assignments of iSCPC Profiles are mirrored on the iSCPC remote.
Application Profiles are only used for Group QoS. iSCPC Profiles are used only by iSCPC line cards
and remotes and are not associated with Group QoS. See Group QoS on page 63 for a general
discussion of Group QoS. For details on configuring profiles, see chapter 8, Configuring Quality
of Service for iDirect Networks of the iBuilder User Guide.
Figure 42. Remote and QoS Profile Relationship
CLASSIFICATION PROFILES FOR APPLICATIONS
This section describes how the iDirect system distinguishes application IP packets from less
important background traffic. Each packet that enters the iDirect system is classified into one of
the configured Service Levels.
Service Levels
A Service Level may represent a single application (such as VoIP traffic from a single IP address)
or a broad class of applications (such as all TCP based applications). Each Service Level is
defined by one or more packet-matching rules. The set of rules for a Service Level allows logical
combinations of comparisons to be made between the following IP packet fields:
Source IP address
Destination IP address
Source port
Destination port
Protocol (such as DiffServ DSCP)
TOS priority
TOS precedence
VLAN ID
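As an illustration of rule-based classification against fields such as these (the field names, service level names, and matching syntax below are assumptions, not the actual iBuilder rule format):

# Hypothetical service-level classifier: first service level whose rules all match wins.
def matches(rule: dict, packet: dict) -> bool:
    """A rule matches when every field it specifies equals the packet's value."""
    return all(packet.get(field) == value for field, value in rule.items())

service_levels = [
    ("VoIP",     [{"protocol": "UDP", "dst_port": 5060}]),       # e.g. SIP signaling
    ("All TCP",  [{"protocol": "TCP"}]),
    ("Default",  [{}]),                                          # empty rule matches anything
]

def classify(packet: dict) -> str:
    for name, rules in service_levels:
        if any(matches(rule, packet) for rule in rules):
            return name
    return "Default"

print(classify({"protocol": "UDP", "dst_port": 5060, "vlan": 91}))   # VoIP
print(classify({"protocol": "TCP", "dst_port": 80}))                 # All TCP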
Packet Scheduling
Packet Scheduling is a method used to transmit traffic according to priority and classification.
In a network that has a remote that always has enough bandwidth for all of its applications,
packets are transmitted in the order that they are received without significant delay.
Application priority makes little difference since the remote never has to select which packet to
transmit next.
In a network where there are periods of time in which a remote does not have sufficient
bandwidth to transmit all queued packets, the remote's scheduling algorithm must determine
which packet, from the set of queued packets across a number of service levels, to transmit next.
For each service level you define in iBuilder, you can select any one of three queue types to
determine how packets using that service level are to be selected for transmission. These are
Priority Queue, Class-Based Weighted Fair Queue (CBWFQ), and Best-Effort Queue.
The procedures for defining profiles and service levels are detailed in chapter 8, Configuring
Quality of Service for iDirect Networks of the iBuilder User Guide.
Priority Queues are emptied before CBWFQ queues are serviced and CBWFQ queues are in turn
emptied before Best Effort queues are serviced. Figure 43 presents an overview of the iDirect
packet scheduling algorithm.
Figure 43. iDirect Packet Scheduling Algorithm
The packet scheduling algorithm (Figure 43) first services packets from Priority Queues in order
of priority, P1 being the highest priority. It selects CBWFQ packets only after all Priority Queues
are empty. Similarly, packets are taken from Best Effort Queues only after all CBWFQ packets
are serviced.
You can define multiple service levels using any combination of the three queue types. For
example, you can use a combination of Priority and Best Effort Queues only.
Priority Queues
There are four levels of user Priority Queues:
Level 1: P1 (Highest priority)
Level 2: P2
Level 3: P3
Level 4: P4 (Lowest priority)
All queues of higher priority must be empty before any lower-priority queue is serviced. If two
or more queues are set to the same priority level, then all queues of equal priority are emptied
using a round-robin selection algorithm prior to selecting any packets from lower priority
queues.
Class-Based Weighted Fair Queues
Packets are selected from Class-Based Weighted Fair Queues for transmission based on the
service level (or class) of the packet. Each service level is assigned a cost. Packet cost is
defined as the cost of its service level multiplied by its length. Packets with the lowest cost are
transmitted first, regardless of service level.
The cost of a service level changes during operation. Each time a queue is passed over in favor
of other service levels, the cost of the skipped queue is credited, which lowers the cost of the
packets in that queue. Over time, all service levels get an opportunity to transmit occasionally
even in the presence of higher priority traffic. Assuming there is a continuously congested link
with an equal amount of traffic on each service level, the total bandwidth available is divided
more evenly by deciding transmission priority based on each service level's cost.
Best Effort Queues
Packets in Best Effort queues do not have priority or cost. All packets in these queues are
treated equally by applying a round-robin selection algorithm to the queues. Best Effort queues
are only serviced if there are no packets waiting in Priority Queues and no packets waiting
in CBWFQ Queues.
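The overall selection order can be sketched as follows. The queue contents and cost values are invented, and the real scheduler also applies the cost credits described above; this is only an illustration of the Priority, CBWFQ, and Best Effort ordering.

from collections import deque
from itertools import cycle

# Illustrative queues (assumed contents); each CBWFQ entry carries (packet, accumulated cost).
priority_queues = {1: deque(), 2: deque(["p2-pkt"]), 3: deque(), 4: deque()}
cbwfq_queues = {"gold": deque([("gold-pkt", 1.0)]), "silver": deque([("silver-pkt", 2.0)])}
best_effort = [deque(["be-a"]), deque(["be-b"])]
_be_rotation = cycle(range(len(best_effort)))

def next_packet():
    # 1. Empty Priority Queues first, P1 before P2 before P3 before P4.
    for level in sorted(priority_queues):
        if priority_queues[level]:
            return priority_queues[level].popleft()
    # 2. Then take the CBWFQ packet with the lowest cost (cost = service-level cost x length).
    candidates = [(q[0][1], name) for name, q in cbwfq_queues.items() if q]
    if candidates:
        _, name = min(candidates)
        return cbwfq_queues[name].popleft()[0]
    # 3. Finally, serve Best Effort queues round-robin.
    for _ in range(len(best_effort)):
        q = best_effort[next(_be_rotation)]
        if q:
            return q.popleft()
    return None

print(next_packet())   # 'p2-pkt'   (priority traffic goes first)
print(next_packet())   # 'gold-pkt' (lowest CBWFQ cost next)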
GROUP QOS
Group QoS (GQoS), introduced in iDirect Release 8.0, enhances the power and flexibility of
iDirect's QoS feature for TDMA networks. It allows advanced network operators a high degree of
flexibility in creating subnetworks and groups of remotes with various levels of service tailored
to the characteristics of the user applications being supported.
Group QoS is built on the Group QoS tree: a hierarchical construct within which containership
and inheritance rules allow the iterative application of basic allocation methods across groups
and subgroups. QoS properties configured at each level of the Group QoS tree determine how
bandwidth is distributed when demand exceeds availability.
Group QoS enables the construction of very sophisticated and complex allocation models. It
allows network operators to create network subgroups with various levels of service on the same
outbound carrier or inroute group. It allows bandwidth to be subdivided among customers or
Service Providers, while also allowing oversubscription of one group's configured capacity when
bandwidth belonging to another group is available.
Note: Group QoS applies only to TDMA networks. It does not apply to iDirect iSCPC
connections.
Note: If you are upgrading from a pre-8.0 iDirect Release, your TDMA networks can be
converted from the older QoS implementation to comply with the Group QoS feature.
See your Network Upgrade Procedure for special upgrade instructions regarding this
conversion.
For details on using the Group QoS feature, see the chapter titled Configuring Quality of
Service for iDirect Networks in the iBuilder User Guide.
Group QoS Structure
The iDirect Group QoS model has the following structure as shown in Figure 16:
Figure 16. Group QoS Structure
Bandwidth Pool
A Bandwidth Pool is the highest node in the Group QoS hierarchy. As such, all sub-nodes of a
Bandwidth Pool represent subdivisions of the bandwidth within that Bandwidth Pool. In the
iDirect network, a Bandwidth Pool consists of an outbound carrier or an inroute group.
Bandwidth Group
A Bandwidth Pool can be divided into multiple Bandwidth Groups. Bandwidth Groups allow a
network operator to subdivide the bandwidth of an outroute or inroute group. Different
Bandwidth Groups can then be assigned to different Service Providers or Virtual Network
Operators (VNO).
Bandwidth Groups can be configured with any of the following:
CIR and MIR: Typically, the sum of the CIR bandwidth of all Bandwidth Groups equals the
total bandwidth. When MIR is larger than CIR, the Bandwidth Group is allowed to exceed
its CIR when bandwidth is available.
Priority: A group with highest priority receives its bandwidth before lower-priority
groups.
Cost: Cost allows bandwidth allocations to different groups to be unequally apportioned
within the same priority. For equal requests, lower cost nodes are granted more
bandwidth than higher cost nodes.
Bandwidth Groups are typically configured using CIR and MIR for a strict division of the total
bandwidth among the groups. By default, any Bandwidth Pool is configured with a single
Bandwidth Group.
Service Group
A Service Provider or a Virtual Network Operator can further divide a Bandwidth Group into sub-
groups called Service Groups. A Service Group can be used strictly to group remotes into sub-
groups or, more typically, to differentiate groups by class of service. For example, a platinum,
gold, silver and best effort service could be defined as Service Groups under the same
Bandwidth Group.
Like Bandwidth Groups, Service Groups can be configured with CIR, MIR, Priority and Cost.
Service Groups are typically configured with either a CIR and MIR for a physical separation of the
groups, or with a combination of Priority, Cost and CIR/MIR to create tiered service. By default,
a single Service Group is created for each Bandwidth Group.
Application Group
An Application defines a specific service available to the end user. Application Groups are
associated with each Service Group. The following are examples of Applications:
VoIP
Video
Oracle
Citrix
VLAN
NMS Traffic
Default
Each Application List can have one or more matching rules such as:
Protocol: TCP, UDP, and ICMP
Source and/or Destination IP or IP Subnet
Source and/or Destination Port Number
DSCP Value or DSCP Ranges
VLAN
Each Application List can be configured with any of the following:
CIR/MIR
Priority
Cost
Service Profiles
Service Profiles are derived from the Application Group by selecting Applications and matching
rules and assigning per remote CIR and MIR when applicable. While the Application Group
specifies the CIR/MIR by Application for the whole Service Group, the Service Profile specifies
the per-remote CIR/MIR by Application. For example, the VoIP Application could be configured
with a CIR of 1 Mbps for the Service Group in the Application Group and a CIR of 14 Kbps per-
remote in the Service Profile.
Typically, all remotes in a Service Group use the Default Profile for that Service Group. When a
remote is created under an inroute group, the QoS Tab allows the operator to assign the remote
to a Bandwidth Group and Service Group. The new remote automatically receives the default
profile for the Service Group. The Group QoS interface can also be used to assign a remote to a
Service Group or change the assignment of the remote from one Service Group to another.
In order to accommodate special cases, however, additional profiles (other than the Default
Profile) can be created. For example, profiles can be used by a specific remote to prioritize an
Application that is not used by other remotes; to prioritize a specific VLAN on a remote; or to
prioritize traffic to a specific IP address (such as a file server) connected to a specific remote in
the Service Group. Or a Network Operator may want to configure some remotes for a single VoIP
call and others for two VoIP calls. This can be accomplished by assigning different profiles to
each group of remotes.
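The hierarchy described above (Bandwidth Pool, Bandwidth Group, Service Group, Application) can be pictured as a tree of nodes that each carry CIR, MIR, Priority, and Cost. The following sketch is a data-structure illustration only, not the iBuilder object model, and all names and values are invented.

# Illustrative Group QoS tree: every node can carry CIR, MIR, priority, and cost.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GQoSNode:
    name: str
    cir_kbps: float = 0.0
    mir_kbps: Optional[float] = None    # None = unlimited (bounded only by the pool)
    priority: Optional[int] = None
    cost: float = 1.0
    children: List["GQoSNode"] = field(default_factory=list)

voip = GQoSNode("VoIP Application", cir_kbps=512)
default_app = GQoSNode("Default Application", cost=1.0)
gold_sg = GQoSNode("Gold Service Group", cir_kbps=2000, mir_kbps=4000,
                   children=[voip, default_app])
bw_group = GQoSNode("Service Provider A", cir_kbps=6000, mir_kbps=8000,
                    children=[gold_sg])
pool = GQoSNode("10 Mbps Outbound Carrier", cir_kbps=10000, children=[bw_group])

def walk(node: GQoSNode, depth: int = 0) -> None:
    print("  " * depth + f"{node.name}: CIR={node.cir_kbps} kbps, MIR={node.mir_kbps}")
    for child in node.children:
        walk(child, depth + 1)

walk(pool)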
Group QoS Scenarios
Physical Segregation Scenario
Example: A satellite provider would like to split a network with a 10 Mbps outbound carrier for
two Service Providers, allocating 6 Mbps for one and 4 Mbps for the other. The first group should
be allowed to burst up to 8 Mbps when the bandwidth is not being used by the second group.
Configuration:
The satellite provider could configure two Bandwidth Groups as follows:
The first group with: CIR/MIR of 6 Mbps/8 Mbps
The second group with: CIR/MIR of 4 Mbps/4 Mbps
The sum of all CIR bandwidth should not exceed the total bandwidth. A scenario depicting
physical segregation is shown in Figure 17 on page 68.
Figure 17. Physical Segregation Scenario
Note: Another solution would be to create a single Bandwidth Group with two Service
Groups. This solution would limit the flexibility, however, if the satellite provider
decides in the future to further split each group into sub-groups.
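As an arithmetic illustration of this scenario (a simplified two-step allocation that grants CIR first and then shares spare capacity up to each group's MIR; this is not the actual Protocol Processor algorithm):

# Illustrative CIR-then-MIR allocation for the 10 Mbps physical segregation example.
def allocate(total_kbps, groups):
    """groups: list of dicts with 'cir', 'mir', and 'demand' in kbps."""
    grants = [min(g["cir"], g["demand"]) for g in groups]          # CIR is satisfied first
    spare = total_kbps - sum(grants)
    for i, g in enumerate(groups):                                 # then burst toward MIR
        extra = min(g["mir"] - grants[i], g["demand"] - grants[i], spare)
        grants[i] += max(extra, 0)
        spare -= max(extra, 0)
    return grants

groups = [
    {"cir": 6000, "mir": 8000, "demand": 9000},   # Service Provider 1, fully loaded
    {"cir": 4000, "mir": 4000, "demand": 1000},   # Service Provider 2, mostly idle
]
print(allocate(10000, groups))   # [8000, 1000] -- group 1 bursts to its 8 Mbps MIR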
CIR Per Application Scenario
Example: A Service Provider has a 1 Mbps outbound carrier and would like to make sure that half
of it is dedicated to VoIP, with up to two VoIP calls per remote. The Service Provider also has a
critical Citrix application that requires an average of 8 Kbps per remote, or 128 Kbps in total.
Configuration:
The Service Group's Application List could be configured as follows:
VoIP CIR 512 Kbps
Citrix CIR 128 Kbps
NMS Priority 1, MIR 16K (Set NMS MIR to 1% to 2% of total BW)
Default Cost 1.0 (Default cost is 1.0)
The derived Default Application Profile could be configured as follows:
VoIP CIR 28 Kbps
Citrix CIR 8 Kbps
NMS Priority 1
Default Cost 1.0
A scenario depicting CIR per application is shown in Figure 18 on page 70.
Figure 18. CIR Per Application Scenario
VoIP could also be configured as priority 1 traffic. In that case, demand for VoIP must be
fully satisfied before serving lower priority applications. Therefore, it is important to
configure an MIR to avoid having VoIP consume all available bandwidth.
Tiered Service Scenario
Example: A network operator with an 18 Mbps outbound carrier would like to provide different
classes of service for customers. The Platinum service will have the highest priority and is
designed for 50 remotes bursting up to an MIR of 256 Kbps. The Gold Service, sold to 200
customers, will have an MIR of 128 Kbps. The Silver Service will be a best effort service, and
will allow bursting up to 128 Kbps when bandwidth is available.
Configuration:
There are several ways to configure tiered services. The operator should keep in mind that when
priority is used for a Service Group, the Service Group is satisfied up to the MIR before lower
priority Service Groups are served. Here is one example of how the tiered service could be
configured:
Platinum Priority 1 MIR 12 Mbps
Gold Priority 2 MIR 18 Mbps (Identical to no MIR, since the Bandwidth Pool is only 18
Mbps.)
Silver Priority 3 No MIR Defined (The same as an MIR of 18 Mbps)
A scenario depicting tiered service is shown in Figure 19 on page 72.
Figure 19. Tiered Service Scenario
Note that cost could be used instead of priority if the intention were to have a fair allocation
rather than to satisfy the Platinum service before any bandwidth is allocated to Gold; and then
satisfy the Gold service before any bandwidth is allocated to Silver. For example:
Platinum Cost 0.1 - CIR 6 Mbps, MIR 12 Mbps
Gold Cost 0.2 - CIR 6 Mbps, MIR 18 Mbps
Silver Cost 0.3 - No CIR, No MIR Defined
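A sketch of the cost-based alternative, interpreting "lower cost nodes are granted more bandwidth" as shares proportional to the inverse of cost; this interpretation and the rounding are assumptions for illustration only, and CIR/MIR limits are ignored.

# Illustrative cost-weighted division of an 18 Mbps pool (lower cost => larger share).
def cost_weighted_split(total_kbps, services):
    """services: list of (name, cost). Shares are proportional to 1/cost."""
    weights = {name: 1.0 / cost for name, cost in services}
    total_weight = sum(weights.values())
    return {name: round(total_kbps * w / total_weight) for name, w in weights.items()}

print(cost_weighted_split(18000, [("Platinum", 0.1), ("Gold", 0.2), ("Silver", 0.3)]))
# {'Platinum': 9818, 'Gold': 4909, 'Silver': 3273}  (before CIR/MIR limits are applied)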
Third Level of Segregation by VLAN Scenario
The iDirect Group QoS model is designed for two levels of physical segregation of bandwidth. If
the user has a need to split the bandwidth into a third level, this could be accomplished by using
VLANs.
Example: A satellite provider would like to divide an 18 Mbps carrier among six distributors,
each with 3 Mbps of bandwidth. One of the distributors would like to offer service to three
network operators, giving them 1 Mbps each. Another would like to provide a tiered service
(Platinum, Gold and Silver), dedicating 256 Kbps for the Platinum VoIP service. This effectively
provides a third level of physical segregation. It could be accomplished by using VLANs as shown
in the example below.
Configuration:
The Service Group's Application Group for the tiered service could be configured as follows:
Platinum VLAN-91 & VoIP - Priority 1 CIR 256 Kbps, MIR 256 Kbps
Platinum VLAN-91 & All Others - Priority 1 CIR 256 Kbps, MIR 512 Kbps
Gold VLAN 92 - Priority 2 CIR 256 Kbps, MIR 1 Mbps
Silver VLAN 93 - Priority 2 CIR 0, MIR 1 Mbps
A scenario depicting a third level VLAN is shown in Figure 20 on page 74.
Figure 20. Third Level VLAN Scenario
The Exxon/Shell Scenario
Example: A network operator provides service to oil rigs for Exxon and Shell. Many of the oil rigs
have both Exxon and Shell present. Exxon bought 8 Mbps of outbound bandwidth, while Shell
bought 2 Mbps of outbound bandwidth. The network operator would like to use a single
outbound carrier of 10 Mbps to provide service for both Exxon and Shell, while ensuring that
each customer receives the bandwidth that they paid for. This scenario is complicated by the
fact that, on oil rigs with both Exxon and Shell present, the network operator would like to use
a single remote to provide service to both by separating their terminals into VLAN-51 for Exxon
and VLAN-52 for Shell. Both Exxon and Shell would also like to prioritize their VoIP.
Configuration:
If we had separate remotes for Exxon and Shell, this would be a simple Physical Segregation
scenario. However, keeping both Exxon and Shell in the same Service Group and allocating
bandwidth by VLAN and application would not provide the strict separation of 8 Mbps for Exxon
and 2 Mbps for Shell. Instead, the solution is to create two Service Groups:
Exxon: CIR/MIR 8 Mbps/8 Mbps
Shell: CIR/MIR 2 Mbps /2 Mbps
Both Service Profiles for Exxon and Shell would have VoIP and Default with the appropriate
priority, cost, CIR and MIR. In order to allow the same remote to serve both Exxon and Shell, the
remote is assigned to both Service Groups, as shown in the Exxon/Shell Scenario figure on page 76. Note that this is an
unusual configuration and is not recommended for the typical application.
Exxon/Shell Scenario
APPLICATION THROUGHPUT
Application throughput depends on proper QoS classification and prioritization and on proper
management of the available bandwidth. For example, if a VoIP application requires 16 Kbps and a
remote is only given 10 Kbps, the application fails regardless of priority, since there is not enough
available bandwidth.
Bandwidth assignment is controlled by the Protocol Processor. As a result of the various network
topologies (for example, a shared TDM downstream with a deterministic TDMA upstream), the
Protocol Processor has different mechanisms for downstream control versus upstream control.
Downstream control of bandwidth is provided by continuously evaluating network traffic flow and
assigning bandwidth to remotes as needed. The Protocol Processor assigns bandwidth and
controls the transmission of packets for each remote according to the QoS parameters defined
for the remote's downstream.
Upstream bandwidth is requested continuously with each TDMA burst from each remote. A
centralized bandwidth manager integrates the information contained in each request and
produces a TDMA burst time plan which assigns individual bursts to specific remotes. The burst
time plan is produced once per TDMA frame (typically 125 ms or 8 times per second).
Note: There is a 250 ms delay between the time that the remote makes a request for
bandwidth and the time that the Protocol Processor transmits the burst time plan to it.
iDirect has developed a number of features to address the challenges of providing adequate
bandwidth for a given application. These features are discussed in the sections that follow.
QoS Properties
There are several QoS properties that you can configure based on your traffic throughput
requirements. These are discussed in the sections that follow. For information on configuring
these properties, see chapter 8, Configuring Quality of Service for iDirect Networks of the
iBuilder User Guide.
Static CIR
You can configure a static Committed Information Rate (CIR) or an upstream minimum
information rate for any upstream (TDMA) channel. Static CIR is bandwidth that is guaranteed
even if the remote does not need the capacity. By default, a remote is configured with a single
slot per TDMA frame. Increasing this value is considered an inefficient configuration because
these slots are wasted if the remote is inactive. No other remote can be given these slots unless
the remote with the static CIR has not been acquired into the network. A static CIR is considered
the highest priority upstream bandwidth. Static CIR only applies in the upstream direction.
The downstream does not need or support the concept of a static CIR.
Dynamic CIR
You can configure Dynamic CIR values for remotes in both the downstream and upstream
directions. Dynamic CIR is not statically committed and is granted only when demand is actually
present. This allows you to support CIR based service level agreements and, based on statistical
analysis, oversubscribe networks with respect to CIR. If a remote has a CIR but demand is less
than the CIR, only the actual demanded bandwidth is granted. It is also possible to indicate that
only certain QoS service levels trigger a CIR request. In these cases, traffic must be present in
a triggering service level before the CIR is granted. Triggering is specified on a per-service level
basis.
Additional burst bandwidth is assigned evenly among all remotes in the network by default. All
available burstable bandwidth (BW) is equally divided between all remotes requesting additional
BW, regardless of already allocated CIR.
Previously, a remote in a highly congested network would often not get burst bandwidth above
its CIR. For example, consider a network with a 3 Mbps upstream and three remotes, R1, R2, and
R3. R1 and R2 are assigned a CIR of 1 Mbps each and R3 has no CIR. If all remotes request 2 Mbps
each, 1 Mbps is given to R3, making the total used BW 3 Mbps. In this case, R1 and R2 receive no
additional BW.
Using the same example network, the additional 1 Mbps BW is evenly distributed by giving each
remote an additional 333 Kbps. The default configuration is to allow even bandwidth
distribution.
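The example above can be sketched numerically as follows (simplified to kilobits per second; the real bandwidth manager works in TDMA slots and applies additional constraints):

# Illustrative even split of bandwidth above CIR for the R1/R2/R3 example.
def grant_with_even_burst(total_kbps, remotes):
    """remotes: list of dicts with 'cir' and 'demand' in kbps."""
    grants = [min(r["cir"], r["demand"]) for r in remotes]        # CIR first
    spare = total_kbps - sum(grants)
    unmet = [r["demand"] - g for r, g in zip(remotes, grants)]
    share = spare / len(remotes)                                  # even split, regardless of CIR
    return [g + min(share, u) for g, u in zip(grants, unmet)]

remotes = [
    {"cir": 1000, "demand": 2000},   # R1
    {"cir": 1000, "demand": 2000},   # R2
    {"cir": 0,    "demand": 2000},   # R3
]
print(grant_with_even_burst(3000, remotes))   # roughly [1333, 1333, 333] kbps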
Further QoS configuration procedures can be found in chapter 8, Configuring Quality of Service
for iDirect Networks of the iBuilder User Guide.
Free Slot Allocation
Free slot allocation is a round-robin distribution of unused TDMA slots by the centralized
bandwidth manager on a frame-by-frame basis. The bandwidth manager assigns TDMA slots to
particular remotes for each TDMA allocation interval based on current demand and configuration
constraints (such as minimum and maximum data rates, static CIR, dynamic CIR, and others). At
the end of this process it is possible that there are unused TDMA slots. In this case, if Free Slot
Allocation is enabled, the bandwidth manager gives these extra slots to remotes in a fair
manner, respecting any remote's maximum configured data rate. Beginning with Release 8.2,
Free Slot Allocation is always enabled. It is no longer configurable in iBuilder. You can disable
Free Slot Allocation with a custom key.
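As an illustration of the round-robin hand-out of leftover slots (hypothetical data; the real allocator also honors the CIR and demand constraints computed earlier in the frame):

# Illustrative Free Slot Allocation: leftover TDMA slots are dealt out round-robin,
# skipping any remote that has already reached its maximum configured data rate.
from itertools import cycle

def free_slot_allocation(free_slots, remotes):
    """remotes: dict of name -> {'slots': already-assigned, 'max_slots': cap per frame}."""
    rotation = cycle(sorted(remotes))
    while free_slots > 0:
        eligible = [n for n, r in remotes.items() if r["slots"] < r["max_slots"]]
        if not eligible:
            break                                  # everyone is capped; slots stay unused
        name = next(rotation)
        if name in eligible:
            remotes[name]["slots"] += 1
            free_slots -= 1
    return remotes

remotes = {"R1": {"slots": 3, "max_slots": 4},
           "R2": {"slots": 1, "max_slots": 8},
           "R3": {"slots": 0, "max_slots": 8}}
print(free_slot_allocation(5, remotes))
# R1 gets 1 more slot (then hits its cap); R2 and R3 share the remaining 4.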
Compressed Real-Time Protocol (cRTP)
You can enable Compressed Real-Time Protocol (cRTP) to significantly reduce the bandwidth
requirements of VoIP flows. cRTP is implemented via standard header compression techniques.
It allows for better use of real-time bandwidth especially for RTP-based applications, which
utilize large numbers of small packets since the 40-byte IP/UDP/RTP header often accounts for a
significant fraction of the total packet length. iDirect has implemented a standard header
compression scheme including heuristic-based RTP detection with negative cache support for
misidentified UDP streams. For example, G729 voice RTP results in less than 12 Kbps
(uncompressed is 24 Kbps). To enable cRTP, see the section titled QoS Tab in chapter 7,
Configuring Remotes of the iBuilder User Guide.
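A back-of-the-envelope sketch of the savings for G.729, assuming two 10 ms samples (20 bytes of payload) per packet and an assumed compressed header of about 4 bytes; exact figures vary and link-layer overhead is ignored.

# Illustrative G.729 bandwidth with and without cRTP header compression.
def rtp_stream_kbps(payload_bytes, header_bytes, packet_interval_ms=20):
    packets_per_s = 1000 / packet_interval_ms
    return (payload_bytes + header_bytes) * 8 * packets_per_s / 1000

print(rtp_stream_kbps(20, 40))   # 24.0 kbps -- full 40-byte IP/UDP/RTP header
print(rtp_stream_kbps(20, 4))    #  9.6 kbps -- assumed ~4-byte compressed header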
Configurable Minimum CIR
It is possible to configure a remote's upstream minimum statically committed CIR to less than one
burst in each TDMA frame. This feature allows many remotes to be packed into a single
upstream. Reducing a remote's minimum statically committed CIR increases ramp latency. Ramp
latency is the amount of time it takes a remote to acquire the necessary bandwidth. The lower
the upstream static CIR, the fewer TDMA time plans contain a burst dedicated to that remote,
and the greater the ramp latency. Some applications may be sensitive to this latency, which may
result in a poor user experience. iDirect recommends that this feature be used with care. The
iBuilder GUI enforces a minimum of one slot per remote every two seconds. For more
information, please see the section titled Upstream and Downstream Rate Shaping in chapter
7, Configuring Remotes of the iBuilder User Guide.
Sticky CIR
Sticky CIR is activated only when CIR is over-subscribed on the downstream or on the upstream.
When enabled, Sticky CIR favors remotes that have already received their CIR over remotes that
are currently asking for it. When disabled (the default setting), the Protocol Processor reduces
assigned bandwidth to all remotes to accommodate a new remote in the network. Sticky CIR can
be configured in the Bandwidth Group and Service Group level interfaces in iBuilder.
Application Jitter
Jitter is the variation of latency on a packet-by-packet basis of application traffic. For an
application like VoIP, the transmitting equipment spaces each packet at a known fixed interval
(every 20 ms, for example). However, in a packet switched network, there is no guarantee that
the packets will arrive at their destination with the same interval rate. To compensate for this,
the receiving equipment employs a jitter buffer that attempts to play out the arriving packets at
the desired perfect interval rate. To do this it must introduce latency by buffering packets for a
certain amount of time and then playing them out at the fixed interval.
While jitter plays a role in both downstream and upstream directions, a TDMA network tends to
introduce more jitter in the upstream direction. This is due to the discrete nature of the TDMA
time plan where a remote may only burst in an assigned slot. The inter-slot times assigned to a
particular remote do not match the desired play out rate, which results in jitter.
Another source of jitter is other traffic that a node transmits between (or in front of) successive
packets in the real-time stream. In situations where a large packet needs to be transmitted in
front of a real-time packet, jitter is introduced because the node must wait longer than normal
before transmission.
The iDirect system offers features that limit the effect of such problems; these features are
described in the sections that follow.
TDMA Slot Feathering
The Protocol Processor bandwidth manager attempts to feather, or spread out, each individual
remote's TDMA slots across the upstream frame. This is a desirable attribute in that a particular
remote's bursts are spread out in time, often reducing TDMA-induced jitter. This feature is
enabled by selecting Reduce Jitter for an Application's Service Level in iBuilder. For details,
see the chapter titled Configuring Quality of Service for iDirect Networks in the iBuilder User
Guide.
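The feathering idea can be sketched as spreading a remote's bursts evenly across the slots of a frame (illustrative only; the real time plan must also satisfy the assignments of all other remotes).

# Illustrative slot feathering: spread k bursts evenly across an N-slot TDMA frame.
def feather_slots(k_slots, frame_slots):
    """Return k slot indices spaced as evenly as possible within the frame."""
    return [round(i * frame_slots / k_slots) for i in range(k_slots)]

print(feather_slots(3, 24))   # [0, 8, 16] -- evenly spaced, lower jitter
print(list(range(3)))         # [0, 1, 2]  -- packed back-to-back, higher jitter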
PACKET SEGMENTATION
Beginning with Release 8.2, Segmentation and Reassembly (SAR) and Packet Assembly and
Disassembly (PAD) have been replaced by a more efficient iDirect application. Although you can
continue to configure the downstream segment size in iBuilder, all upstream packet
segmentation is handled internally for optimal efficiency.
You may wish to change the downstream segment size if you have a small outbound carrier and
need to reduce jitter in your downstream packets. Typically, this is not required. For details on
configuring the downstream segment size, see the chapter on Configuring Remotes in the
iBuilder User Guide.
APPLICATION LATENCY
Application latency is typically a concern for transaction-based applications such as credit card
verification systems. For applications like these, it is important that the priority traffic be
expedited through the system and sent, regardless of the less important background traffic. This
is especially important in bandwidth-limited conditions where a remote may only have a single
or a few TDMA slots. In this case, it is important to minimize latency as much as possible after
the distributor's QoS decision. This allows a highly prioritized packet to make its way
immediately to the front of the transmit queue.
MAXIMUM CHANNEL EFFICIENCY VS. MINIMUM
LATENCY
Each TDMA burst carries a discrete number of payload bytes. The remote must break higher-
level packets into TDMA-burst-sized chunks to pack these bursts for transmission. You can
control how bursts are packaged for transmission by selecting between two options on the
iBuilder Service level dialog box: Maximum Channel Efficiency (default) and Minimum Latency.
Maximum Channel Efficiency delays the release of a partially filled TDMA burst to allow for the
possibility that the next packet will fill the burst completely. In this configuration, the system
waits for up to four TDMA transmission attempts before releasing a partial burst. Minimum
Latency never delays partially filled TDMA bursts. Instead, it transmits them immediately.
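The two settings can be sketched as a small decision function. The four-attempt wait matches the description above; everything else is an illustrative assumption.

# Illustrative decision: release a partially filled TDMA burst now, or keep waiting?
MAX_WAIT_ATTEMPTS = 4   # Maximum Channel Efficiency waits up to 4 transmission attempts

def release_partial_burst(fill_ratio, attempts_waited, minimum_latency):
    if fill_ratio >= 1.0:            # a full burst is always sent
        return True
    if minimum_latency:              # Minimum Latency: never hold a partial burst
        return True
    return attempts_waited >= MAX_WAIT_ATTEMPTS   # Max Channel Efficiency: wait, then send

print(release_partial_burst(0.6, attempts_waited=1, minimum_latency=False))  # False: keep waiting
print(release_partial_burst(0.6, attempts_waited=4, minimum_latency=False))  # True: give up waiting
print(release_partial_burst(0.6, attempts_waited=0, minimum_latency=True))   # True: send immediately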
In general, Maximum Channel Efficiency is the desired choice, except in certain situations when
it is vitally important to achieve minimum latency for a prioritized service level. For example, if
your network is typically congested and you are configuring the system to work with a
transaction-based application which is bursty in nature and requires a minimum round trip time,
then Minimum Latency may be the better choice. You can configure these settings in iBuilder
from the QoS Service Level dialog box. For details, see the chapter titled Configuring Quality of
Service for iDirect Networks in the iBuilder User Guide.
6 CONFIGURING TRANSMIT INITIAL
POWER
During acquisition, the iNFINITI remote attempts to join the network according to the burst plan
assigned to the remote by the hub. The initial transmit power must be set correctly so that the
remote can join the network and stay in the network. This chapter describes the best practices
for setting Transmit (TX) Initial Power in an iDirect network.
Note: It is important to set TX Initial Power on a Netmodem correctly to ensure optimal
Upstream channel performance.
WHAT IS TX INITIAL POWER?
TX Initial Power is the power level at which a Netmodem transmits when joining the network.
You can set the Initial Power through iSite or iBuilder. When a Netmodem is attempting to join
the network, the hub sends SWEEP commands to it. These tell the Netmodem to burst into the
acquisition slot of the upstream channel. Each SWEEP command contains a different frequency
offset which tells the Netmodem to change its frequency slightly and then send a burst. During
these acquisition bursts, the Netmodem sets its output power to the TX Initial Power parameter.
If TX Initial Power is not set correctly, the acquisition bursts may not be received and the
Netmodem cannot join the network.
HOW TO DETERMINE THE CORRECT TX INITIAL
POWER
There are two ways to determine the correct TX Initial power:
Locally, by using iSite during site commissioning.
Remotely, by using iBuilder any time after site commissioning.
During site commissioning, the installer uses iSite to set TX Initial Power. This parameter is set at
a low value and it is manually increased until the Netmodem is acquired into the network. The
hub then automatically adjusts the Netmodem output power to a nominal setting. With the acq
on command enabled, UCP messages are displayed at the console and the installer can observe
the TX power adjustments being made by the hub. When the hub determines that the bursts are
arriving in the nominal C/N range, power adjustments are stopped (displayed at the console as
0.0 dB adjustment). The installer can type tx power to read the current power setting.
iDirect recommends that you set the TX Initial Power value to 3 dB above the tx power reading.
For example, if the tx power is -17 dBm, set TX Initial Power to -14 dBm.
At any time after site commissioning, you can check the TX Initial Power setting by observing the
Remote Status and UCP tabs in iMonitor. If the Netmodem is in a steady state and no power
adjustments are being made, you can compare the current TX Power to the TX Initial Power
parameter to verify that TX Initial Power is 3 dB higher than the TX Power. For detailed
information on how to set TX Initial Power, refer to the Remote Installation and Commissioning
Guide.
Note: Best nominal Tx Power measurements are made during clear sky conditions at the
hub and remote sites.
ALL REMOTES NEED TO TRANSMIT BURSTS IN THE
SAME C/N RANGE
In a burst mode demodulator, the gain must be set at some nominal point prior to the arrival of
a data burst so that the burst is correctly detected and demodulated. Since a single Hub Line
Card receives bursts from many different remote modems, it constantly calculates the optimal
gain point by taking into account the average levels of all bursts arriving at that Hub Line Card.
If all the bursts arrive at similar C/N levels, the average is very near optimal for all of them.
However, if many bursts arrive at varying C/N levels, the highest and lowest level bursts can
skew the average so that it is no longer optimal.
The nominal range is 2 dB wide (the green range in the iBuilder Acquisition/Uplink Control tab).
The actual range at which bursts can be optimally detected is approximately 8 dB wide centered
at the nominal gain point (Figure 22).
Figure 22. C/N Nominal Range
[Figure 22, ideal case: under ideal circumstances, the average C/N of all remotes on the upstream channel is equal to the center of the UCP adjustment range (C/N axis approximately 6-14 dB), so the optimal detection range extends to below the threshold C/N. This example illustrates the TPC Rate 0.66 threshold.]
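These ranges can be expressed numerically as follows. The 8 dB detection width comes from the text above; the nominal center and the example burst C/N values are assumptions chosen to mirror the figures in this section.

# Illustrative check of whether a burst C/N falls inside the optimal detection range.
DETECTION_WIDTH_DB = 8.0    # optimal detection range, centered on the nominal gain point
NOMINAL_WIDTH_DB = 2.0      # "green" nominal range in iBuilder

def detection_range(center_cn_db):
    half = DETECTION_WIDTH_DB / 2
    return (center_cn_db - half, center_cn_db + half)

center = 10.0                                  # assumed nominal C/N for this upstream
low, high = detection_range(center)
for burst_cn in (6.5, 9.8, 14.5):              # a faded, a nominal, and a hot remote
    print(burst_cn, low <= burst_cn <= high)   # 6.5 True, 9.8 True, 14.5 False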
WHAT HAPPENS WHEN TX INITIAL POWER IS SET
INCORRECTLY?
If the Initial Power is not set correctly, your network performance can be negatively impacted.
When a remote is acquired by the hub, the center point of the 8 dB wide detection range is set at
the C/N value at the time that it is acquired. This section describes what happens if the Initial
Power is too high or too low.
When TX Initial Power is Too High
If TX Initial Power is set too high and the C/N at the time of acquisition is 11.0 dB, the C/N
detection window range is from 7 dB to 15 dB and the Hub Line Card gain approaches the
upper limit of the nominal range. Since UCP updates occur every 20 seconds, it may take a
minute or more for carriers with too much initial power to adjust lower into the nominal range.
During this time, remotes that are operating under atmospheric fade conditions could drop out
of the network because the bursts no longer fall within the optimal detection range. Remotes
that are trying to acquire with a C/N value of less than 7 dB will not acquire the network
(Figure 23).
Figure 23. TX Initial Power Too High
[Figure 23: when the TX Initial Power is set too high, remotes entering the network skew the average C/N above the center of the UCP adjustment range (C/N axis approximately 6-14 dB). During this period the optimal detection range does not include the threshold C/N, and remotes experiencing rain fade may experience a performance degradation (skewed detection range).]
When TX Initial Power is Too Low
If TX Initial Power is set too low and the C/N at the time of acquisition is 9.0 dB, the C/N
detection window range is from 5 dB to 13 dB and the Hub Line Card gain approaches the lower
limit of the nominal range. Since UCP updates occur every 20 seconds, it may take a minute or
more for carriers with too little initial power to adjust higher into the nominal range. During
this time, remotes that are operating under atmospheric fade conditions could drop out of the
network because the bursts no longer fall within the optimal detection range. Remotes that are
trying to acquire with a C/N value of greater than 13 dB will not acquire the network. Bursts can
still be detected below threshold but the probability of detection and demodulation reduces.
This can lead to long acquisition times (Figure 24).
Figure 24. TX Initial Power Too Low
[Figure 24: when the TX Initial Power is set too low, remotes entering the network skew the average C/N below the center of the UCP adjustment range (C/N axis approximately 6-14 dB). During this period the optimal detection range does not include the threshold C/N, and remotes experiencing rain fade may experience a performance degradation (skewed detection range).]
7 GLOBAL NMS ARCHITECTURE
This chapter describes how the Global NMS works in a global architecture and presents a sample
Global NMS architecture.
HOW THE GLOBAL NMS WORKS
The Global NMS allows you to add a single physical remote, as identified by its Derived ID (DID),
to multiple networks at the same time.
A remote that is a member of multiple networks is called a roaming remote. For details on
defining and managing roaming remotes, refer to Chapter 7, Configuring Remotes, Roaming
Remotes of the iBuilder User Guide.
Figure 25 illustrates the current and Global NMS database relationships.
Figure 25. Global NMS Database Relationships
SAMPLE GLOBAL NMS NETWORK
This section illustrates a sample global NMS architecture, and it explains how the NMS works in
this type of network (Figure 26).
Figure 26. Sample Global NMS Network Diagram
In this example, there are four different networks connected to three different Regional Network
Control Centers (RNCCs). A group of remote terminals has been configured to roam among the
four networks.
Note: This diagram shows only one example from the set of possible network
configurations. In practice, there may be any number of RNCCs and any number of
protocol processors at each RNCC.
On the left side of the diagram, a single NMS installed at the Global Network Control Center
(GNCC) manages all the RNCC components and the group of roaming remotes. Network
operators, both remote and local, can share the NMS server simultaneously with any number of
VNOs. (Only one VNO is shown in Figure 26.) All users can run iBuilder, iMonitor, or both on
their PCs.
The connection between the GNCC and each RNCC must be a dedicated high-speed link.
Connections between NOC stations and the NMS server are typically standard Ethernet. Remote
NMS connections are made either over the public Internet (protected by a VPN or port
forwarding) or over a dedicated leased line.
8 HUB NETWORK SECURITY
RECOMMENDATIONS
This chapter describes basic recommended security measures to ensure that the NMS and
Protocol Processor servers are secure when connected to the public Internet. iDirect
recommends that you implement additional security measures over and above these minimal
steps.
LIMITED REMOTE ACCESS
Access to the NMS and Protocol Processor servers should be protected behind a commercial-
grade firewall. If remote access is necessary for support, the iDirect Technical Assistance Center
can help you set up appropriate VPN access. Contact the TAC for details (see Getting Help on
page xvi).
ROOT PASSWORDS
Root password access to the NMS and Protocol Processor servers should be reserved for only
those you want to have administrator-level access to your network. Restrict the distribution of
this password information.
Servers are shipped with default passwords. Change the default passwords after the installation
is complete and make sure these passwords are changed on a regular basis and when an
employee leaves your company.
When selecting your new passwords, iDirect recommends that you follow these practices for
constructing difficult-to-guess passwords:
Use passwords that are at least 8 characters in length.
Do not base passwords on dictionary words.
Use passwords that contain a mixture of letters, numbers, and symbols.
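These practices can be checked mechanically. The following sketch is illustrative only; the word list is a stand-in for a real dictionary, and the check does not replace your organization's password policy.

# Minimal password-practice check for the three recommendations above.
import string

COMMON_WORDS = {"password", "idirect", "admin", "letmein"}   # stand-in for a real dictionary

def follows_recommendations(password: str) -> bool:
    long_enough = len(password) >= 8
    not_dictionary = password.lower() not in COMMON_WORDS
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_symbol = any(c in string.punctuation for c in password)
    return long_enough and not_dictionary and has_letter and has_digit and has_symbol

print(follows_recommendations("password"))      # False
print(follows_recommendations("N0c!2009-hub"))  # True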
9 GLOBAL PROTOCOL PROCESSOR
ARCHITECTURE
This chapter describes how the Protocol Processor works in a global architecture. Specifically, it
contains Remote Distribution, which describes how the Protocol Processor balances remote
traffic loading, and De-coupling of NMS and Datapath Components, which describes how the
Protocol Processor Blades continue to function in the event of a Protocol Processor Controller
failure.
REMOTE DISTRIBUTION
The actual distribution of remotes and processes across a blade set is determined by the
Protocol Processor Controller dynamically in the following situations:
At system startup, the Protocol Processor Controller determines the distribution of processes
based on the number of remotes in the network(s).
When a new remote is added in iBuilder, the Protocol Processor Controller analyzes the
current system load and adds the new remote to the blade with the least load.
When a blade fails, the Protocol Processor Controller re-distributes the load across the
remaining blades, ensuring that each remaining blade takes a portion of the load.
The Protocol Processor controller does not perform dynamic load-balancing on remotes. Once a
remote is assigned to a particular blade, it remains there unless it is moved due to one of the
situations described above.
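The placement rule above amounts to least-loaded assignment with sticky placement. The following Python sketch illustrates that behavior under simple assumptions; the load metric (remote count per blade), the function names, and the blade names are illustrative only and are not the actual pp_controller implementation:

# Illustrative sketch of least-loaded, sticky remote placement.
# Load here is simply the remote count per blade; the real controller
# may weight remotes differently.

def assign_remote(remote_id, assignments, blade_loads):
    """Place a new remote on the blade with the least load."""
    if remote_id in assignments:          # sticky: remotes are never rebalanced
        return assignments[remote_id]
    blade = min(blade_loads, key=blade_loads.get)
    assignments[remote_id] = blade
    blade_loads[blade] += 1
    return blade

def redistribute_after_failure(failed_blade, assignments, blade_loads):
    """Spread the failed blade's remotes across the remaining blades."""
    orphans = [r for r, b in assignments.items() if b == failed_blade]
    del blade_loads[failed_blade]
    for r in orphans:
        del assignments[r]
        assign_remote(r, assignments, blade_loads)

# Example
loads = {"blade-1": 40, "blade-2": 35, "blade-3": 38}
remotes = {}
assign_remote("remote-101", remotes, loads)   # lands on blade-2, the least loaded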
DE-COUPLING OF NMS AND DATAPATH
COMPONENTS
If the Protocol Processor Controller fails, the Protocol Processor Blades continue to function
normally, since the datapath components are de-coupled from the NMS and the Protocol
Processor Controller. However, during a period of Controller failure, automatic failover does not
occur and you cannot apply configuration changes.
You can build process redundancy into your design by running duplicate processes over multiple
Protocol Processor Blades. A high-level architecture of the Protocol Processor, with one possible
configuration of processes across two blades is shown in Figure 27.
Figure 27. Protocol Processor Architecture
(Figure 27 shows the NMS servers, the pp_controller process, and two Protocol Processor blades,
PP Blade 1 and PP Blade 2. Each blade runs a samnc process that spawns and controls the blade's
other processes, such as sarmt, sada, sana, and sarouter, and monitor-and-control links connect
the NMS servers, the pp_controller, and the blades.)
10 DISTRIBUTED NMS SERVER
This chapter describes how you can design your network around a Distributed NMS server,
manage it with the supporting iDS software, and back up or restore the configuration.
You can distribute your NMS server processes across multiple IBM eServers. The primary benefits
of machine distribution are improved server performance and better utilization of disk space.
iDirect recommends a distributed NMS server configuration once the number of remotes
controlled by a single NMS exceeds 500 to 600. iDirect has tested the distributed platform
with over 3000 remotes on iDS 7.0.0. Future releases continue to push this number higher.
DISTRIBUTED NMS SERVER ARCHITECTURE
The distributed NMS architecture allows you to match your NMS server processes to the IBM
eServers. For example, you can run all server processes on a single platform (the current
default), you can assign each server process to its own server, or you can assign groups of
processes to individual servers.
Server configuration is performed one time using a special script distributed with the NMS
servers installation package. Once configured, the distribution of server processes across the
servers remains unchanged unless you reconfigure it. This is true even when you upgrade your
system.
The most common distribution scheme for larger networks is shown in Figure 28.
Figure 28. Sample Distributed NMS Configuration
This configuration has the following process distribution:
NMS Server 1 runs the configuration server (nmssvr), latency server (latsvr), and the PP
controller (cntrlsvr) process.
NMS Server 2 runs only the Statistics processes (nrdsvr).
NMS Server 3 runs only the Event processes (evtsvr).
The busiest NMS processes, nrdsvr and evtsvr, are placed on their own servers for maximum
processing efficiency. All other NMS server processes are grouped on NMS Server 1.
IBUILDER AND IMONITOR
From the iBuilder or iMonitor user perspective, a distributed NMS server functions identically to
a single NMS server. In both server configurations, users provide a user name, password, and the
IP address or Host Name of the NMS configuration server at the time of login. The configuration
server stores the location of all other NMS servers and provides this information to the iBuilder
or iMonitor client. Using this information, the client automatically establishes connections to
the server processes on the correct machines.
To set up a distributed NMS, refer to the iBuilder User Guide.
DBBACKUP/DBRESTORE AND THE DISTRIBUTED
NMS
The dbBackup and dbRestore scripts are completely compatible with the new distributed NMS.
You can have 1:1 or 1:n redundancy for your NMS servers.
1:n redundancy means that one physical machine backs up all of your active servers. If you
choose this form of redundancy, you must modify the dbBackup.ini file on each NMS server to
ensure that the separate databases are copied to separate locations on the backup machine.
The following diagram shows three servers, each copying its database to a single backup NMS. If
NMS 1 fails, you do not need to run dbRestore prior to switch-over since the configuration data
has already been sent to the backup NMS. If NMS 2 or NMS 3 fails, you need to run dbRestore
prior to the switch-over if you want to preserve and add to the archive data in the failed
server's database. See Figure 29.
Figure 29. dbBackup and dbRestore with a Distributed NMS
DISTRIBUTED NMS RESTRICTIONS
Some of the server processes must be run on the configuration server, and others can be run on
separate machines, as listed below.
Server processes that must be run on the configuration server machine are:
Control Server
Revision Server
SNMP Proxy Agent Server
Server processes that can run on separate machines are:
Latency Server
Event Server
Real-Time Data Server (nrdsvr)
11 TRANSMISSION SECURITY
(TRANSEC)
This section describes how TRANSEC and FIPS are implemented in an iDirect Network. It includes:
"What is TRANSEC?" which defines Transmission Security.
"iDirect TRANSEC" which describes the protocol implementation.
"TRANSEC Downstream" which describes the data path from the hub to the remote.
"TRANSEC Upstream" which describes the data path from the remote to the hub.
"TRANSEC Key Management" which describes public and private key usage.
"TRANSEC Remote Admission Protocol" which describes acquisition and authentication.
"Reconfiguring the Network for TRANSEC" which describes conversion requirements.
WHAT IS TRANSEC?
Transmission Security (TRANSEC) prevents an adversary from exploiting information available in
a communications channel without necessarily having defeated the encryption inherent in the
channel. Even if an encrypted wireless transmission is not compromised, information such as
timing and traffic volumes can be determined by using basic signal processing techniques. This
could provide someone monitoring the network with a variety of information about unit
activity. For example, even if an adversary cannot defeat the encryption placed on individual
packets, it might be able to answer questions such as:
What types of applications are active on the network currently?
Who is talking to whom?
Is the network or a particular remote site active now?
Is it possible to correlate network activity with real-world activity, based on traffic
analysis and correlation?
There are a number of components to TRANSEC, one of which is protection against activity
detection. With current VSAT systems, an adversary can determine traffic volumes and
communications activity with a simple spectrum analyzer. With a TRANSEC-compliant VSAT
system, an adversary is presented with a strongly encrypted and constant wall of data. Other
components of TRANSEC
include remote and hub authentication. TRANSEC eliminates the ability of an adversary to bring
a non-authorized remote into a secured network.
IDIRECT TRANSEC
iDirect achieves full TRANSEC compliance by presenting an adversary who may be eavesdropping
on the RF link with a constant wall of fixed-size, strongly encrypted traffic segments (using the
Advanced Encryption Standard (AES) with a 256 bit key in Cipher Block Chaining (CBC) mode),
whose frequency of transmission does not vary in response to network utilization.
Other than network messages that control the admission of a remote terminal into the network,
all portions of all packets are encrypted, and their original size is hidden. The content and size
of all user traffic (Layer 3 and above), as well as network link layer (Layer 2) traffic is
completely indeterminate from an adversary's perspective. Further, no higher layer information
is revealed by monitoring the physical layer (Layer 1) signal.
The solution includes a remote-to-hub and a hub-to-remote authentication protocol based on
standard X.509 certificates designed to prevent man-in-the-middle attacks. This authentication
mechanism prevents an adversary's remote from joining an iDirect TRANSEC-secured network. In
a similar manner, it prevents an adversary from coercing a TRANSEC remote into joining the
adversary's network. While these types of attacks are extremely difficult to achieve even on a
non-TRANSEC iDirect network, the mechanisms put in place for the TRANSEC feature render
them completely impossible.
All hub line cards and remote model types associated with a protocol processor must be
TRANSEC compatible. You must also ensure that all protocol processor blades have HiFin
encryption cards installed. These cards are required for TRANSEC key management.
The only iDirect hardware that operates in TRANSEC mode consists of the M1D1-T and M1D1-TSS
Hub Line Cards, the iNFINITI 7350 and 8350 remotes, and the iConnex 100 and iConnex 700
remotes. These are therefore the only iDirect products capable of operating in a FIPS 140-2
Level 1 compliant mode.
For more information, see Chapter 16, Converting an Existing Network to TRANSEC of the
iBuilder User Guide.
TRANSEC DOWNSTREAM
A simplified block diagram for the iDirect TRANSEC downstream data path is shown in Figure 30.
Each function represented in the diagram is implemented in software and firmware on a
TRANSEC capable line card.
Figure 30. Downstream Data Path
Consider the diagram from left to right with variable length packets arriving on the far left into
the block named Packet Ingest. In this diagram, the encrypted path is shown as solid black, and
the unencrypted (clear) path is shown in dashed red. The Packet Ingest function receives
variable length packets which can belong to four logical classes: User Data, Bypass Burst Time
plan (BTP), Encrypted BTP, and Bypass Queue. All packets arriving at the transmit Hub Line Card
have this indication present as a pre-pended header placed there by the protocol processor (not
shown). The Packet Ingest function determines the message type and places the packet in the
appropriate queue. If the packet is not valid, it is not placed in any queue and it is dropped.
Packets extracted from the Data Queue are always encrypted. Packets extracted from the Clear
Queue are always sent unencrypted, and time-sensitive BTP messages from the BTP Queue can
be sent in either mode. A BTP sent in the clear contains minimal traffic analysis information for
an adversary and is only utilized to allow remotes attempting to exchange admission control
messages with the hub to do so. Traffic sent in the clear bypasses the Segmentation Engine and
the AES Encryption Engine, and proceeds directly to the physical framing and FEC engines for
transmission. Clear, unencrypted packets are transmitted without regard to segmentation; they
are allowed to exist on the RF link with variable-sized framing.
Encrypted traffic next enters the Segmentation Engine. The Segmentation Engine segments
incoming packets based on a configured size and provides fill-packets when necessary. The
Segmentation Engine allows the iDirect TRANSEC downstream to transmit a configurable, fixed
size TDM packet segment on a continuous basis.
After segmentation, fixed sized packets enter the Encryption Engine. The encryption algorithm
utilizes the AES algorithm with a 256 bit key and operates in CBC Mode. Packets exit the
Encryption Engine with a pre-pended header as shown in Figure 31.
Figure 31. SCPC TRANSEC Frame
The Encryption Header consists of five 32 bit words with four fields. The fields are:
Code. This field indicates if the frame is encrypted or not, and if encrypted indicates the
entry within the key ring (described under the key management section later in this
document) to be utilized for this frame. The Code field is one byte in length.
Seq. This field is a sequence number that increments with each segment. The Seq field is two
bytes in length (16 bits, unsigned).
Rsvd. This field is 1 byte and is reserved for future use.
Initialization Vector (IV). IV is utilized by the encryption/decryption algorithm and contains
random data. The IV field is 16 bytes in length (128 bits unsigned).
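As an illustration of this layout, the following Python sketch packs the 20-byte header from the four fields listed above. The field order matches the list, but the byte order and the bit-level encoding of the Code field are assumptions made for the sketch; this guide does not specify them:

import os
import struct

def pack_encryption_header(code: int, seq: int, iv: bytes) -> bytes:
    """Pack Code (1 byte), Seq (2 bytes), Rsvd (1 byte), and IV (16 bytes).

    Big-endian field order is assumed here; the actual on-air encoding
    is not specified in this guide.
    """
    if len(iv) != 16:
        raise ValueError("IV must be 128 bits")
    return struct.pack("!BHB", code, seq & 0xFFFF, 0) + iv

header = pack_encryption_header(code=0x01, seq=42, iv=os.urandom(16))
assert len(header) == 20   # five 32 bit words, as stated above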
A new IV is generated for each segment. The first IV is generated from the cipher text of the
initial Known Answer Test (KAT) conducted at system boot time. Subsequent IVs are taken from
the last 128 bits of the cipher text of the previously encrypted segment. IVs are continuously
updated regardless of key rotations and they are independent of the key rotation process. They
are also continuously updated regardless of the presence of user traffic since the filler segments
are encrypted. While no logic is included to ensure that IVs do not repeat, the chance of
repetition is very small; estimates place the probability of an IV repeating at 1:2^102 for a
maximum iDirect downstream data rate.
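The IV chaining described above (each new IV taken from the last 128 bits of the previous ciphertext) is standard CBC practice. The following sketch, written with the third-party Python cryptography package, shows the idea only; it is not the line card firmware, and the key and segment contents are placeholders:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)     # AES-256 key (placeholder)
iv = os.urandom(16)      # first IV; per the text it comes from the boot-time KAT

def encrypt_segment(segment: bytes, iv: bytes):
    """Encrypt one fixed-size segment in CBC mode.

    Returns the ciphertext and the IV to use for the next segment,
    which is simply the last 16 bytes of this ciphertext.
    """
    assert len(segment) % 16 == 0    # fixed-size segments, a multiple of the AES block
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = enc.update(segment) + enc.finalize()
    return ct, ct[-16:]

seg1, seg2 = os.urandom(256), os.urandom(256)
ct1, next_iv = encrypt_segment(seg1, iv)
ct2, next_iv = encrypt_segment(seg2, next_iv)   # IV chained from the previous segment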
The Segment is of fixed, configurable length and consists of a series of fixed length Fragment
Headers (FH) followed by variable length data Fragments (F). The entire Segment is encrypted
in a single operation by the encryption engine. The FH contains sufficient information for the
source packet stream, post decryption on the receiver, to be reconstructed. Each Fragment
contains a portion of a source packet.
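The following Python sketch shows one way a segmentation engine of this kind could pack variable-length packets into fixed-size segments of fragment-header/fragment pairs, padding with fill when the queue runs dry. The segment size and the fragment header layout (packet ID, offset, length) are hypothetical; this guide does not define the actual FH format:

import struct
from collections import deque

SEG_SIZE = 256      # configured segment size (illustrative)
FH_SIZE = 6         # hypothetical fragment header: id (2), offset (2), length (2)

def build_segment(queue: deque) -> bytes:
    """Fill one fixed-size segment with fragments of queued (pkt_id, offset, data) packets."""
    seg = bytearray()
    while queue and len(seg) + FH_SIZE < SEG_SIZE:
        pkt_id, offset, data = queue[0]
        room = SEG_SIZE - len(seg) - FH_SIZE
        chunk = data[offset:offset + room]
        seg += struct.pack("!HHH", pkt_id, offset, len(chunk)) + chunk
        if offset + len(chunk) >= len(data):
            queue.popleft()                          # packet fully consumed
        else:
            queue[0] = (pkt_id, offset + len(chunk), data)
    seg += b"\x00" * (SEG_SIZE - len(seg))           # fill bytes keep every segment full-size
    return bytes(seg)

q = deque([(1, 0, b"A" * 300), (2, 0, b"B" * 50)])
segments = [build_segment(q), build_segment(q)]      # each segment is exactly SEG_SIZE bytes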
The Encryption Header is transmitted unencrypted but contains only enough information for a
receiver to decrypt the segment if it is in possession of the symmetric key.
Once an encrypted packet exits the Encryption Engine it undergoes normal processing such as
framing and forward error correction coding. These functions are essentially independent of
TRANSEC but complete the downstream transmission chain and are thus depicted in Figure 30.
TRANSEC UPSTREAM
A simplified block diagram for the iDirect TRANSEC upstream data path is shown in Figure 32.
The functions represented in this diagram are implemented in software and firmware on a
TRANSEC capable remote.
Figure 32. Upstream Data Path
The encrypted path is shown in solid black, and the unencrypted (clear) path is shown in dashed
red. The Packet Ingest function determines the message type and places the packet in the
appropriate queue or drops it if it is not valid.
Consider the diagram from left to right with variable length packets arriving on the far left into
the block named Packet Ingest. The upstream (remote to hub) path differs from the downstream
(hub to remote) in that the upstream is configured for TDMA. Variable length packets from a
remote LAN are segmented in software, and can be considered as part of the Packet Ingest
function. Therefore there is no need for the firmware level segmentation present in the
downstream. Additionally, since the remote is not responsible for the generation of BTPs, there
is no need for the additional queues present in the downstream.
Packets extracted from the Data Queue are always encrypted. Packets extracted from the Clear
Queue are always sent unencrypted. The overwhelming majority of traffic will be extracted
from the Data Queue. Traffic sent in the clear bypasses the Encryption Engine and proceeds
directly to the FEC engine for transmission.
The encryption algorithm utilizes the AES algorithm with a 256 bit key and operates in CBC Mode.
Packets exit the Encryption Engine with a pre-pended header as described in Figure 33.
Figure 33. TDMA TRANSEC Slot
Note: TRANSEC overhead reduces the payload size shown in Table 5 on page 48 by the
following amounts for each FEC rate: .431: 7 bytes; .533: 4 bytes; .660: 4 bytes;
.793: 6 bytes.
The Encryption Header consists of a single 32 bit word with 3 fields. The fields are:
IV Seed. This field is a 29 bit field utilized to generate a 128 bit IV. The IV Seed field starts at
zero and increments for each transmitted burst. The full 128 bit IV is generated by padding the
seed and encrypting it with the current AES key for the inroute (a seed-expansion sketch follows
the field descriptions). Remotes can therefore expand the same seed into the same full IV.
However, this does not create any problems because, due to addressing requirements, it is
impossible for any two remotes within the same upstream to generate the same plain text data.
While no logic is included to ensure that IVs do not repeat for a single terminal, repetition is
impossible because the key rotates every two hours by default. Since the seed increments for
each transmission burst, the number of total bursts prior to a seed wrapping around is 2^29, or
536,870,912. Given the two-hour key rotation period, a single terminal would need to send
roughly 75,000 TDMA bursts per second to exhaust the range of the seed. This exceeds any
possible iDirect upstream data rate by far.
Key ID. This field indicates the entry within the key ring (described under the key management
section later in this document) to be utilized for this frame.
Enc. This field indicates if the frame is encrypted or not.
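A minimal sketch of the seed expansion described under IV Seed follows, assuming the 29-bit seed is zero-padded into a single 16-byte block and encrypted with the current inroute key; the padding layout is an assumption, since this guide does not specify it:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

inroute_key = os.urandom(32)     # current AES-256 key for the inroute (placeholder)

def expand_iv(seed: int) -> bytes:
    """Expand a 29-bit burst seed into a 128-bit IV by encrypting the padded seed."""
    assert 0 <= seed < 2**29
    block = seed.to_bytes(16, "big")     # zero-padded to one AES block (assumed layout)
    enc = Cipher(algorithms.AES(inroute_key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

iv = expand_iv(seed=12345)
# Seed exhaustion check from the text: 2**29 bursts over a two-hour key period
print(2**29, 2**29 / 7200)               # 536870912 bursts, about 74,565 bursts per second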
The Segment is of fixed, configurable length and consists of what might be called the standard
iDirect TDMA frame. A detailed description of the standard frame is beyond the scope of this
document, but in general it consists of a Demand Header, which indicates the amount of
bandwidth a remote is requesting, the iDirect Link Layer (LL) Header, and ultimately the actual
Payload. This Segment is encrypted. The Encryption Header is transmitted unencrypted but
contains only enough information for a receiver to decrypt the segment if it is in possession of
the symmetric key.
Once an encrypted packet exits the Encryption Engine it undergoes normal processing such as
forward error correction coding. This function is essentially independent of TRANSEC but
completes the upstream transmission chain (as shown in Figure 32).
A remote always bursts in its assigned slots, even when no traffic is present, by generating
encrypted fill payloads as needed. The iDirect Hub dynamic allocation algorithm always
operates in a mode whereby all available time slots within all time plans are filled.
TRANSEC KEY MANAGEMENT
All hosts in an iDirect Network must have X.509 public key certificates. Hosts include NMS
servers, protocol processor blades, TRANSEC hub line cards, and TRANSEC remotes. Certificates
are required to join an authenticated network. They serve to prevent man-in-the-middle attacks
and unauthorized admission to the network. You must use the iDirect Certificate Authority (CA)
utility (called the CA Foundry) to issue the certificates for your TRANSEC network. For more
information on using and creating certificates, see Appendix A, Using the iDirect CA Foundry
of the iBuilder User Guide.
You must ensure that all protocol processor blades are equipped with HiFin encryption cards for
TRANSEC key management. The 1401 card, which supports AES encryption, is replacing the 1201
card. However, if you have 1201 cards currently installed, you do not need to replace them with
1401 cards.
Key Distribution Protocol (Figure 34), Key Rolling (Figure 35), and Host Keying Protocol (Figure
36) are based on standard techniques utilized within an X.509 based PKI.
Figure 34. Key Distribution Protocol
The Key Distribution Protocol assumes that, upon receipt of a certificate from a peer, the host
is able to validate the certificate and establish a chain of trust based on its contents.
iDirect TRANSEC utilizes standard X.509 certificates and methodologies to verify the peer's
certificate.
After the completion of the sequence shown in Figure 34, a peer may again provide a key update
message, unsolicited, as needed. The data structure utilized to complete a key update (also
called a key roll) is shown in Figure 35.
Figure 35. Key Rolling and Key Ring
This data structure conceptually consists of a set of pointers (Current, Next, Fallow), a two bit
identification field (utilized in the Encryption Headers described above), and the actual
symmetric keys themselves. A key update consists of generating a new key, placing it in the last
fallow slot just prior to the Current pointer, updating the Next and Current pointers (a circular
update, so 11 rolls to 00), and generating a Key Update message reflecting these changes.
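One possible reading of the key ring, shown as a Python sketch: four key slots addressed by the two-bit ID, with the roll advancing circularly so that slot 3 (binary 11) rolls over to slot 0 (binary 00). The slot count follows from the two-bit ID field; the class layout and the simplified pointer handling are illustrative, not the actual Current/Next/Fallow bookkeeping:

import os

class KeyRing:
    """Illustrative four-slot symmetric key ring (two-bit slot IDs, so IDs 0..3)."""

    def __init__(self):
        self.slots = [None, None, None, None]
        self.current = 0
        self.slots[self.current] = os.urandom(32)   # initial AES-256 key

    @property
    def next(self):
        return (self.current + 1) % 4               # circular, so 3 (binary 11) rolls to 0

    def roll(self) -> dict:
        """Generate a new key, place it in the slot ahead of Current, and advance."""
        new_slot = self.next
        self.slots[new_slot] = os.urandom(32)
        self.current = new_slot
        # The returned structure stands in for the Key Update message.
        return {"key_id": new_slot, "key": self.slots[new_slot]}

ring = KeyRing()
update = ring.roll()    # by default a roll like this occurs every two hours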
The key roll mechanism allows for multiple keys to be in play simultaneously so that seamless
key rolls can be achieved. By default the iDirect TRANSEC solution rolls any symmetric key every
two hours, but this is a user-configurable parameter. The iDirect Host Keying Protocol is
shown in Figure 36.
Figure 36. Host Keying Protocol
This protocol describes how hosts are originally provided an X.509 certificate from a
Certificate Authority. iDirect provides a Certificate Authority Foundry module with its
TRANSEC hub. Host key generation is done on the host in all cases.
TRANSEC REMOTE ADMISSION PROTOCOL
Remotes acquire into the network over the clear channel. Specifically, a protocol processor
blade is designated to be in charge of controlling remote admission into the network. The only
time unencrypted traffic is permitted to traverse the network is during the remote admission
sequence. When a remote is given the opportunity to acquire into the network, the acquisition
sequence takes place as follows:
First, the protocol processor generates two time plans per inroute. One is the normal time plan
utilized to indicate to remotes which slots in which inroutes they may burst on. This time plan is
always encrypted. The second time plan is not encrypted, and it indicates the owner of the
acquisition slot and which remotes may burst in the clear (unencrypted) on selected slots. The
union of the two time plans covers all slots in all inroutes.
The time plans are then forwarded and broadcast to all remotes in the normal method. Remotes
that are not yet acquired receive the unencrypted time plan and wait for an invitation to join
the network via this unencrypted message.
The remote designated in the acquisition slot acquires in the normal fashion by sending an
unencrypted response in the acquisition slot of a specific inroute.
Once physical layer acquisition occurs, the remote must follow the key distribution protocol
before it is trusted by the network and before it can trust the network it is joining. This step
must be carried out in the clear. Therefore, remotes in this state request bandwidth normally
and are granted unencrypted TDMA slots. The hub and remotes exchange key negotiation
messages in the cleartext channel. Three message types exist:
Solicitations, which are used to synchronize, request, inform, and acknowledge a peer.
Certificate Presentations, which contain X.509 certificates.
Key Updates, which contain AES key information that is signed and RSA encrypted; the RSA
encryption is accomplished by using the remote's public key and the signature is created by
using the hub's private key.
After authentication, the key update message must also be completed in the clear. The actual
symmetric keys are encrypted using the remote's public key obtained from the exchanged
certificate. Once the symmetric key is exchanged, the remote enters the network as a trusted
entity and begins normal operation in an encrypted mode.
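As a hedged illustration of the sign-and-encrypt pattern described for Key Updates, the sketch below uses the Python cryptography package: the symmetric key is RSA-encrypted with the remote's public key and the result is signed with the hub's private key. The OAEP and PSS padding and SHA-256 hashes are common defaults assumed for the sketch; this guide does not state which are actually used:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in key pairs; in the real system these come from the X.509 certificates.
hub_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
remote_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
remote_public = remote_private.public_key()

aes_key = os.urandom(32)    # new symmetric key to distribute

# Hub side: encrypt the key for the remote, then sign the result.
wrapped = remote_public.encrypt(
    aes_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
signature = hub_private.sign(
    wrapped,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Remote side: verify the hub's signature, then recover the key.
hub_private.public_key().verify(
    signature, wrapped,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
recovered = remote_private.decrypt(
    wrapped,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
assert recovered == aes_key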
RECONFIGURING THE NETWORK FOR TRANSEC
Once you have ensured that all hardware is TRANSEC-compatible and you have issued
certificates to all X.509 hosts, you can reconfigure your network to operate in TRANSEC mode.
For detailed configuration procedures, see the Reconfiguring the Network for TRANSEC section in
the Converting an Existing Network to TRANSEC chapter of the iBuilder User Guide.
12 FAST ACQUISITION
The Fast Acquisition feature reduces the average acquisition time for remotes, particularly in
large networks with hundreds or thousands of remotes. The acquisition messaging process used
in prior versions is included in this release. However, the Protocol Processor now makes better
use of the information available regarding hub receive frequency offsets common to all remotes
to reduce the overall network acquisition time. No additional license is required for this
feature.
FEATURE DESCRIPTION
Fast Acquisition is configured on a per-remote basis. When a remote is attempting to acquire
the network, the Protocol Processor determines the frequency offset at which a remote should
transmit and conveys it to the remote in a time plan message. From the time plan message, the
remote learns when to transmit and at what frequency offset. The remote transmit power level
is configured in the option file. Based on the time plan message, the remote calculates the
correct Frame Start Delay (FSD). The fundamental aspects of acquisition are how often a remote
gets an opportunity to come into the network, and how many frequency offsets need to be tried
for each remote before it acquires the network.
If a remote can acquire the network more quickly by trying fewer frequency offsets, the number
of remotes that are out of the network at any one time can be reduced. This determines how
often other remotes get a chance to acquire. This feature reduces the number of frequency
offsets that need to be tried for each remote.
By using a common hub receive frequency offset, the fast acquisition algorithm can determine
an anticipated range smaller than the complete frequency sweep space configured for each
remote. As the common receive frequency offset is updated and refined, the sweep window is
reduced.
If an acquisition attempt fails within the reduced sweep window, the sweep window is widened
to include the entire sweep range. Fast Acquisition is enabled by default. You can disable it by
applying a custom key.
For a given ratio x:y, the hub informs the remote to acquire using the smaller frequency offset
range calculated by the Fast Acquisition scheme. After x attempts, the remote sweeps the entire
range y times before returning to the narrower acquisition range. The default ratio is 100:1;
that is, the remote tries 100 frequency offsets within the reduced (common) range before
resorting to one full sweep of the remote's frequency offsets.
If you want to modify the ratio, you can use the custom keys that follow to override the defaults.
You must apply the custom keys on the hub side for each remote in the network.
[REMOTE_DEFINITION]
sweep_freq_fast = 100
sweep_freq_entire_range = 1
[SWEEP_METHOD]
sweep_method = 1 (Fast Acquisition enabled)
sweep_method = 0 (Fast Acquisition disabled)
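The x:y behavior can be pictured with a small scheduling sketch: for the default 100:1 ratio, the remote makes 100 attempts within the reduced (common) offset range, then one attempt sweeping its entire configured range, and repeats. The function below is illustrative only and is not the modem's acquisition code:

from itertools import count

def sweep_plan(x: int = 100, y: int = 1):
    """Yield which frequency range to sweep on each successive acquisition attempt."""
    for attempt in count():
        if attempt % (x + y) < x:
            yield "reduced common offset range"
        else:
            yield "entire configured offset range"

plan = sweep_plan()
first_102 = [next(plan) for _ in range(102)]
# Attempts 0..99 use the reduced range, attempt 100 sweeps the full range,
# and the pattern repeats starting at attempt 101.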
A number of new console commands are available related to this feature. These are described in
the Console Commands Reference Guide.
Fast Acquisition cannot be used on 3100 series remotes when the upstream symbol rate is less
than 260 Ksym/s. This is because the FLL on 3100 series remotes is disabled for upstream rates
less than 260 Ksym/s.
The NMS disables Fast Acquisition for any remote that is enabled for an iDirect Music Box and for
any remote that is not configured to utilize the 10 MHz reference clock. In IF-only networks,
such as a test environment, the 10 MHz reference clock is not used.
13 REMOTE SLEEP MODE
The Remote Sleep Mode feature conserves remote power consumption during periods of network
inactivity. This section explains how Remote Sleep Mode is implemented. It includes:
Feature Description" which explains how Remote Sleep Mode works.
Awakening Methods" which describes how remotes exit Remote Sleep Mode.
FEATURE DESCRIPTION
Remote Sleep Mode is supported on all iNFINITI series remotes. In this mode, the BUC is
powered down, thus reducing power consumption.
When Sleep Mode is enabled on the iBuilder GUI for a remote, the remote enters Remote Sleep
Mode after a configurable period elapses with no data to transmit. By default, the remote exits
Remote Sleep Mode whenever packets arrive on the local LAN for transmission on the inbound
carrier.
Note: You can use the powermgt mode set sleep console command to enable Remote Sleep Mode
or powermgt mode set awake to disable it.
The stimulus for a remote to exit sleep mode is also configurable in iBuilder. You can select
which types of traffic automatically trigger wakeup on the remote by selecting or clearing a
check box for any of the QoS service levels used by the remote. If no service levels are
configured to trigger wakeup, you can manually force the remote to exit sleep mode
by disabling sleep mode on the remote configuration screen.
AWAKENING METHODS
There are two methods by which a remote is awakened from Sleep Mode. They are Operator-
Commanded Awakening, and Activity-Related Awakening.
Operator-Commanded Awakening
With Operator-Commanded Awakening, you can manually force a remote into Remote Sleep Mode
and subsequently awaken it via the NMS. This can be done remotely from the Hub since the
remote continues to receive the downstream while in sleep mode.
Activity Related Awakening
With Activity-Related Awakening, the remote enters Remote Sleep Mode after a configurable
period elapses with no data to transmit. The remote wakes up as soon as it receives traffic with
a service level marking that is configured to trigger wakeup. When a remote is reset, the
activity timer also resets.
When the remote sees no traffic that triggers the wake up condition for the configured sleep
time-out, it goes into Remote Sleep Mode. In this mode, all the IP traffic that does not trigger a
wake up condition is dropped. When a packet with the service level marking that triggers a
wakeup is detected, the remote resets the sleep timer and wakes up. In Remote Sleep Mode, the
remote processes the burst time plans but it does not apply them to the firmware. No indication
is sent to the remote's router that the interface is down, and therefore the packets from the
local LAN are still passed to the remote's distributor queues. Packets that would wake up the
interface will not be dropped by the router and are available to the layers that process this
information. The protocol layer that manages the sleep function drops the packets that do not
trigger the wakeup mode.
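The behavior in this paragraph can be summarized as a small filter at the sleep-management layer. The sketch below is illustrative only; the class, method names, and service-level labels are hypothetical:

import time

class SleepManager:
    """Illustrative sleep-mode filter: drop non-wakeup traffic, wake on marked packets."""

    def __init__(self, wakeup_service_levels, timeout_s):
        self.wakeup_levels = wakeup_service_levels
        self.timeout_s = timeout_s
        self.asleep = False
        self.last_activity = time.monotonic()

    def on_packet(self, service_level) -> bool:
        """Return True if the packet should be transmitted, False if dropped."""
        if self.asleep:
            if service_level in self.wakeup_levels:
                self.asleep = False                    # wake up and reset the timer
                self.last_activity = time.monotonic()
                return True
            return False                               # dropped while asleep
        self.last_activity = time.monotonic()
        return True

    def tick(self):
        """Enter sleep after the configured period with no transmit activity."""
        if not self.asleep and time.monotonic() - self.last_activity > self.timeout_s:
            self.asleep = True

mgr = SleepManager(wakeup_service_levels={"NMS", "VoIP"}, timeout_s=300)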
Power consumed by the remote during normal operation and in low-power (Partial Sleep Mode)
operation is shown in Table 8 on page 119.
Table 8. Power Consumption in Remote Sleep Mode
ENABLING REMOTE SLEEP MODE
You can enable Remote Sleep Mode by using iBuilder. You can also configure the service levels
that trigger the remote to wake up. A sleep time-out period is configurable for each remote.
The sleep time-out is the period of inactivity after which the remote enters low power mode.
The iDirect Sleep Mode feature requires a custom key in Release 8.0. When you enable Sleep
Mode on the Remote QoS tab, the remote will conserve power by disabling the 10 MHz reference
for the BUC after the specified number of seconds have elapsed with no remote upstream data
transmissions. A remote should automatically wake from sleep mode when packets arrive for
transmission on the upstream carrier, provided that Trigger Wakeup is selected for the service
level associated with the packets.
However, in Release 8.0, without the appropriate custom key a remote will not wake from Sleep
Mode even if packets arrive for transmission that match a service level with Trigger Wakeup
selected. You must configure the following remote-side custom key in iBuilder on the Remote
Custom tab for all remotes with Sleep Mode enabled:
[SAT0]
forced = 1
Note: When this custom key is set to 1, a remote with RIP enabled will always advertise the
satellite route as available on the local LAN, even if the satellite link is down.
Therefore, the Sleep Mode feature is not compatible with configurations that rely on the
ability of the local router to detect loss of the satellite link.
To enable Remote Sleep Mode, see Chapter 7, Configuring Remotes, Information Tab of the
iBuilder User Guide.
To configure service level based wake up, see Chapter 8, Creating and Managing QoS Profiles,
Adding a Service Level of the iBuilder User Guide.
14 AUTOMATIC BEAM SELECTION
This section contains information pertaining to Automatic Beam Selection (ABS) for roaming
remotes in a maritime environment.
AUTOMATIC BEAM SELECTION OVERVIEW
An iDirect network is defined as a single outroute and one or more inroutes, all operating with
one satellite and one hub. A Network Management System (NMS) can manage and control
multiple networks.
You can define remotes that roam from network to network around the globe. These roaming
remotes are not constrained to a single location or limited to any geographic region. Instead, by
using the capabilities provided by the iDirect Global NMS feature, remote terminals have true
global IP access.
The decision of which network a particular remote joins is made by the remote. When joining a
new network, the remote must re-point its antenna to receive a new beam and tune to a new
outroute. Selection of the new beam can be performed manually (by using remote modem
console commands) or automatically. This chapter describes how automatic beam selection is
implemented in an iDirect network.
For detailed information on configuring and monitoring roaming remotes, see the iBuilder User
Guide and iMonitor User Guide. For additional information on the ABS feature, see the iBuilder
User Guide.
THEORY OF OPERATION
Since the term network is used in many ways, the term beam is used rather than the term
network to refer to an outroute and its associated inroutes.
ABS is built on iDirect's existing mobile remote functionality. When a modem is in a particular
beam, it operates as a traditional mobile remote in that beam.
In a maritime environment, a roaming remote terminal consists of an iDirect modem and a
controllable, steerable, stabilized antenna. The ABS software in the modem can command the
antenna to find and lock to any satellite. Using iBuilder, you can define an instance of the
remote in each beam that the modem is permitted to use. You can also configure and monitor all
instances of the remote as a single entity. The remote options file (which conveys configuration
parameters to the remote from the NMS) contains the definition of each of the remote's beams.
Options files for roaming remotes, called consolidated options files, are described in detail in
Chapter 11, Retrieving and Applying Active and Saved Configurations, Configuration Changes
on Roaming Remotes of the iBuilder User Guide.
As a vessel moves from the footprint of one beam into the footprint of another, the remote must
shift from the old beam to the new beam. Automatic Beam Selection enables the remote to
select a new beam, decide when to switch, and to perform the switch-over, without human
intervention. ABS logic in the modem reads the current location from the antenna and decides
which beam will provide optimal performance for that location. This decision is made by the
remote, rather than by the NMS, because the remote must be able to select a beam even if it is
not communicating with the network.
To determine the best beam for the current location, the remote relies on a beam map file that
is downloaded from the NMS to the remote and stored in memory. The beam map file is a large
data file containing beam quality information for each point on the Earth's surface as computed
by the satellite provider. Whenever a new beam is required by remotes using ABS, the satellite
provider must generate new map data in a pre-defined format referred to as a conveyance
beam map file. iDirect provides a utility that converts the conveyance beam map file from the
satellite provider into a beam map file that can be used by the iDirect system.
Note: In order to use the iDirect ABS feature, the satellite provider must enter into an
agreement with iDirect to provide the beam map data in a specified format.
The iDirect NMS software consists of multiple server applications. One such server application,
known as the map server, manages the iDirect beam maps for remotes in its networks. The map
server reads the beam maps and waits for map requests from remote modems.
A modem has a limited amount of non-volatile storage, so it cannot save an entire map of all
beams. Instead, the remote asks the map server to send a map of a smaller area (called a beam
maplet) that encompasses its current location. When the vessel nears the edge of its current
maplet, the remote asks for another beam maplet centered on its new location. The
geographical size of these beam maplets varies in order to keep the file size approximately
constant. A beam maplet typically covers a 1000 km square.
Beam Characteristics: Visibility and Usability
The remote can determine two characteristics of each beam even without the map:
A beam is defined as visible if the look elevation to the satellite is greater than the minimum
look elevation. The minimum look elevation defaults to ten degrees above the horizon.
A beam is usable unless an attempt to use it fails. The beam is considered unusable for a
period of one hour after the failure, or until all visible beams are unusable.
If the selected beam is unusable, the remote attempts to use another beam, provided one or
more usable beams are available. A beam can become unusable for many reasons, but each
reason ultimately results in the inability of the remote to communicate with the outside world
using the beam. Therefore, the only usability check is based on the Layer 3 state of the
satellite link; that is, whether or not the remote can exchange IP data with the upstream router.
Examples of causes that might result in a beam becoming unusable include:
The NMS operator disables the modem instance.
A Hub Line Card fails with no available backup.
The Protocol Processor fails with no backup.
A component in the upstream or downstream RF chain fails.
The satellite fails.
The beam is reconfigured.
The remote cannot lock to the downstream carrier.
The receive line card stops receiving the modem.
Anything that causes the remote to inhibit its transmitter causes the receive line card to stop
receiving the modem, which eventually causes Layer 3 to fail. The modem stops transmitting if
it loses downstream lock. A mobile remote will also stop transmitting under the following
conditions:
The remote has not acquired and no GPS information is available.
The remote antenna declares loss-of-lock.
The antenna declares a blockage.
Selecting a Beam without a Map
Under certain circumstances the remote will not have a beam maplet that covers its current
location. When this occurs, the remote uses a round-robin selection algorithm, attempting to use
each visible, usable beam defined in its options file in turn, for five minutes each, until it is
acquired. This can occur under various conditions:
When a remote is being commissioned.
If the vessel travels with the modem turned off and must locate a beam when returned to
service.
If the remote cannot remain in the network for an extended period due to blockage or
network outage.
If the map server is unreachable.
In all cases, after the remote establishes communications with the map server, it immediately
asks for a new maplet. When a maplet becomes available, the remote uses the maplet to
compute the optimal beam, and switches to that beam if it is not the current beam.
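The visibility, usability, and fallback rules above can be summarized in a short sketch. The data structures and the maplet quality lookup are placeholders; the constants reflect the defaults stated in this chapter (10-degree minimum elevation, one-hour unusable period, five-minute dwell):

import time

MIN_ELEVATION_DEG = 10.0   # default minimum look elevation
UNUSABLE_PERIOD_S = 3600   # a failed beam stays unusable for one hour
DWELL_S = 300              # dwell time per beam during mapless round-robin

def is_visible(beam):
    """Visible means the look elevation to the satellite exceeds the minimum."""
    return beam["elevation_deg"] > MIN_ELEVATION_DEG

def is_usable(beam, now):
    return now - beam.get("failed_at", -UNUSABLE_PERIOD_S) >= UNUSABLE_PERIOD_S

def select_beam(beams, maplet_quality=None, now=None):
    """Pick a beam: best maplet quality if a maplet is available, else options-file order."""
    now = time.monotonic() if now is None else now
    candidates = [b for b in beams if is_visible(b) and is_usable(b, now)]
    if not candidates:                     # all visible beams unusable: clear the marks
        for b in beams:
            b.pop("failed_at", None)
        candidates = [b for b in beams if is_visible(b)]
    if not candidates:
        return None
    if maplet_quality:                     # maplet lookup is a placeholder callable
        return max(candidates, key=lambda b: maplet_quality(b["name"]))
    return candidates[0]                   # mapless: the caller dwells DWELL_S seconds and
                                           # marks the beam failed if acquisition does not occur

beams = [{"name": "beam-A", "elevation_deg": 35.0}, {"name": "beam-B", "elevation_deg": 8.0}]
chosen = select_beam(beams)                # beam-A; beam-B is below the minimum elevation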
Controlling the Antenna
To make the system work, the remote must be able to control the antenna. The remote software
communicates with the antenna control unit supplied with the antenna over the local LAN. Since
there is no standard antenna control protocol, the remote code must be written specifically for
each protocol. The following antenna protocols are currently supported:
Orbit-Marine AL-7104
Schlumberger SpaceTrack 4000
SeaTel DAC
Open AMIP
A steerable, stabilized antenna must know its geographical location in order to point to the
satellite. The antenna includes a GPS receiver for this purpose. The remote must also know its
geographical location to select the correct beam and to compute its distance from the satellite.
The remote periodically commands the antenna controller to send the current location to the
modem.
IP Mobility
Communications to the customer intranet (or to the Internet) are automatically re-established
after a beam switch-over. The process of joining the network after a new beam is selected uses
the same internet routing protocols that are already established in the iDirect system. When a
remote joins a beam, the Protocol Processor for that beam begins advertising the remote's IP
addresses to the upstream router using the RIP protocol. When a remote leaves a beam, the
Protocol Processor for that beam withdraws the advertisement for the remote's IP addresses.
When the upstream routers see these advertisements and withdrawals, they communicate with
each other using the appropriate IP protocols to determine their routing tables. This permits
other devices on the Internet to send data to the remote over the new path with no manual
intervention.
OPERATIONAL SCENARIOS
This section presents a series of top-level operational scenarios that can be followed when
configuring and managing iDirect networks that contain roaming remotes using Automatic Beam
Selection. Steps for configuring network elements such as iDirect networks (beams) and roaming
remotes are documented in the iBuilder User Guide. Steps specific to configuring ABS functionality,
such as adding an ABS-capable antenna or converting a conveyance beam map file, are
described in Appendix C, Configuring Networks for Automatic Beam Selection of the iBuilder
User Guide.
Creating the Network
This scenario outlines the steps that must be performed by the customer, the satellite provider,
and the network operator to create a network that uses ABS.
1. The customer selects the satellite provider and agrees with the provider on the set of beams
(satellites, transponders, frequencies, and footprints) to be used by remotes using ABS.
2. The satellite provider enters into an agreement with iDirect specifying the format of the
conveyance beam map file.
3. The satellite provider supplies the link budget for the hub and remotes.
4. iDirect delivers to the customer a map conversion program specific to the conveyance
beam map file specification.
5. The satellite provider delivers to the customer one conveyance beam map file for each
beam that the customer will use.
6. The customer orders and installs all required equipment and an NMS.
7. The NMS operator configures the beams (iDirect networks).
8. The NMS operator runs the conversion program to create the server beam map file from the
conveyance beam map file or files.
9. The NMS operator runs the map server as part of the NMS.
Adding a Vessel
This scenario outlines the steps required to add a roaming remote using ABS to all available
beams.
1. The NMS operator configures the remote modem in one beam.
2. The NMS operator adds the remote to the remaining beams.
3. The NMS operator saves the modem's options file and delivers it to the installer.
4. The installer installs the modem aboard a ship.
5. The installer copies the options file to the modem using iSite.
6. The installer manually selects a beam for commissioning.
7. The modem commands the antenna to point to the satellite.
8. The modem receives the current location from the antenna.
9. The installer commissions the remote in the initial beam.
10. The modem enters the network and requests a maplet from the NMS map server.
11. The modem checks the maplet. If the commissioning beam is not the best beam, the modem
switches to the best beam as indicated in the maplet. This beam is then assigned a high
preference rating by the modem to prevent the modem from switching between overlapping
beams of similar quality.
12. Assuming center beam in clear sky conditions:
The installer sets the initial transmit power to 3 dB above the nominal transmit power.
The installer sets the maximum power to 6 dB above the nominal transmit power.
Note: Check the levels the first time the remote enters each new beam and adjust
the transmit power settings if necessary.
Normal Operations
This scenario describes the events that occur during normal operations when a modem is
receiving map information from the NMS.
1. The ship leaves port and travels to its next destination.
2. The modem receives the current location from the antenna every five minutes.
3. While in the beam, the antenna automatically tracks the satellite.
4. As the ship approaches the edge of the current maplet, the modem requests a new maplet
from the map server.
5. When the ship reaches a location where the maplet shows a better beam, the remote
switches by doing the following:
a. Computes best beam.
b. Saves best beam to non-volatile storage.
c. Reboots.
d. Reads the new best beam from non-volatile storage.
e. Commands the antenna to move to the correct satellite and beam.
f. Joins the new beam.
Mapless Operations
This scenario describes the events that occur during operations when a modem is not receiving
beam mapping information from the NMS.
1. While operational in a beam, the remote periodically asks the map server for a maplet. The
remote does not attempt to switch to a new beam unless one of the following conditions is true:
a. The remote drops out of the network.
b. The remote receives a maplet indicating that a better beam exists.
c. The satellite drops below the minimum look elevation defined for that beam.
2. If not acquired, the remote selects a visible, usable beam based only on satellite longitude
and attempts to switch to that beam.
3. After five minutes, if the remote is still not acquired, it marks the new beam as unusable
and selects the best beam from the remaining visible, usable beams in the options file. This
step is repeated until the remote is acquired in a beam, or all visible beams are marked as
unusable.
4. If all visible beams are unusable, the remote marks them all as usable, and continues to
attempt to use each beam in a round-robin fashion as described in step 3.
Blockages and Beam Outages
This scenario describes the events that occur when a modem cannot join or loses the
selected beam.
1. If the remote fails to join the selected beam after five minutes, it marks the beam as
unusable and selects a new beam based on the maplet.
2. If the remote loses network connectivity for five minutes, it marks the current beam as
unusable and selects a new beam based on the maplet.
3. Any beam marked as unusable remains unusable for an hour or until all beams are marked as
unusable.
4. If only the current beam is visible, the remote will not attempt to switch from that beam,
even after losing connectivity for five minutes.
Error Recovery
This section describes the actions taken by the modem under certain error conditions.
1. If the remote cannot communicate with the antenna and is not acquired into the network, it
will reboot after five minutes.
2. If the antenna is initializing, the remote waits for the initialization to complete. It will not
attempt to switch beams during this time.
15 HUB GEOGRAPHIC REDUNDANCY
This chapter describes how you can establish a primary and a backup hub that are geographically
diverse. It includes:
"Feature Description" which describes how geographic redundancy is accomplished.
"Configuring Wait Time Interval for an Out-of-Network Remote" which describes how you can
set the wait period before switchover.
FEATURE DESCRIPTION
The Hub Geographic Redundancy feature builds on the previously developed Global NMS feature
(see iDS Release 8.0 Features), and the existing dbBackup/dbRestore utility. You configure the
Hub Geographic Redundancy feature by defining all the network information for both the
Primary and Backup Teleports in the Primary NMS. All remotes are configured as roaming
remotes and they are defined identically in both the Primary and Backup Teleport network
configurations.
Only iNFINITI remotes can currently participate in Global NMS networks. Since the backup
teleport feature also uses the Global NMS capability, this feature is also restricted to iNFINITI
remotes.
During normal (non-failure) operations, carrier transmission is inhibited on the Backup Teleport.
During failover conditions (when roaming network remotes fail to see the downstream carrier
through the Primary Teleport NMS) you can manually enable the downstream transmission on the
Backup Teleport, allowing the remotes to automatically (after the configured default wait
period of five minutes) acquire the downstream transmission through the Backup Teleport NMS.
iDirect recommends the following for most efficient switchover:
A separate IP connection (at least 128 Kbps) between the Primary and Backup Teleport NMS
for database backup and restore operations. A higher rate line can be employed to reduce
this database archive time.
The downstream carrier characteristics for the Primary and Backup Teleports MUST be
different. For example, either the FEC, frequency, frame length, or data rate values must be
different.
On a periodic basis, backup and restore your NMS configuration database between your
Primary and Backup Teleports. See the NMS Redundancy and Failover Technical Note for
complete NMS redundancy procedures.
CONFIGURING WAIT TIME INTERVAL FOR
AN OUT-OF-NETWORK REMOTE
If a roaming remote is configured at both a Primary and Backup Hub, and the remote loses the
Downstream carrier from the Primary Hub, the remote attempts to lock to the Downstream
carrier from the Backup Hub, after a configured interval in seconds. By default this wait time
before attempting the switch is 300 seconds (5 minutes). This wait time for beam switchover
can be changed by setting the net _st at e_t i meout custom key value (in seconds) to the
desired wait period.
For example, if you want to make the wait period 10 minutes, use the following custom key:
[REMOTE_DEFINITION]
net_state_timeout = 600
For further configuration information, see Chapter 5, Defining Network Components, Adding a
Backup Teleport of the iBuilder User Guide.
16 CARRIER BANDWIDTH
OPTIMIZATION
This chapter describes carrier bandwidth optimization and carrier spacing. It includes:
Overview" which describes how reducing carrier spacing increases overall available
bandwidth.
Increasing User Data Rate" which provides an example of how you can increase user data
rates with out increasing occupied bandwidth.
Decreasing Channel Spacing to Gain Additional Bandwidth" which provides an example of
how you can increase occupied bandwidth.
OVERVIEW
The Field Programmable Gate Array (FPGA) firmware uses optimized digital filtering which
reduces the amount of satellite bandwidth required for an iDirect carrier. Instead of using a 40%
guard band between carriers, now the guard band may be reduced to as low as 20% on both the
broadcast Downstream channel and the TDMA Upstream. Figure 37 on page 132 shows an overlay
of the original spectrum and the optimized spectrum.
Figure 37. Overlay of Carrier Spectrums
This optimization translates directly into a cost savings for existing and future networks
deployed with iDirect NetModems.
The spectral shape of the carrier is not the only factor contributing to the guard band
requirement. Frequency stability parameters of a system may result in the need for a guard
band of slightly greater than 20% to be used. iDirect complies with the adjacent channel
interference specification in IESS 308 which accounts for adjacent channels on either side with
+7 dB higher power.
Be sure to consult the designer of your satellite link prior to changing any carrier parameters to
verify that they do not violate the policy of your satellite operator.
INCREASING USER DATA RATE
Since the amount of required guard band between carriers has been reduced, it is now possible
to fit a higher bit rate carrier into the same satellite bandwidth that was required previously.
Therefore, a network operator can increase the bit rate of existing carriers without purchasing
additional bandwidth.
A consequence of choosing this option is that increasing the bit rate of the carrier to fill the
extra bandwidth requires slightly more power. Increasing the bit rate by 15% would result in an
additional 0.5 dB of power. Be sure to consult the provider of your link budget prior to adjusting
the bit rate of your carriers.
Frequency stability in the system may limit the amount of bit rate increase by increasing the
guard band requirement.
The example that follows illustrates a scenario applicable to a system with negligible frequency
stability concerns. It shows how the occupied bandwidth does not increase when the user data
rate increases. In this example, FEC rate 0.793 with 4 kbit Turbo Product Code is used.
Current Carrier Parameters:
User Bit (info) Rate: 1000 kbps
Carrier Bit Rate: 1261.034 kbps
Carrier Symbol Rate: 630.517 ksps
Occupied Bandwidth: 882.724 kHz
Guard Band Between Carriers: 40% (Channel Spacing = 1.4)
New Carrier Parameters
User Bit (info) Rate: 1166.667 kbps
Carrier Bit Rate: 1471.206 kbps
Carrier Symbol Rate: 735.603 ksps
Occupied Bandwidth: 882.724 kHz
Guard Band Between Carriers: 20% (Channel Spacing = 1.2)
A 16.67% improvement in user data rate is achieved at no additional cost.
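The arithmetic behind the two parameter sets can be reproduced as follows. The relationships used (carrier bit rate = information rate / FEC rate, symbol rate = carrier bit rate / 2 for a two-bit-per-symbol modulation, occupied bandwidth = symbol rate x channel spacing) are inferred from the numbers in this example rather than stated explicitly in this guide:

def carrier_params(info_rate_kbps, fec_rate=0.793, bits_per_symbol=2, spacing=1.4):
    """Return (carrier bit rate, symbol rate, occupied bandwidth) for a carrier."""
    bit_rate = info_rate_kbps / fec_rate
    symbol_rate = bit_rate / bits_per_symbol
    occupied_khz = symbol_rate * spacing
    return bit_rate, symbol_rate, occupied_khz

old = carrier_params(1000.0, spacing=1.4)     # (1261.03 kbps, 630.52 ksps, 882.72 kHz)
new = carrier_params(1166.667, spacing=1.2)   # (1471.21 kbps, 735.60 ksps, 882.72 kHz)
# Same occupied bandwidth, 16.67% more user data rate.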
It is possible that due to instability of frequency references in a satellite network system, a
carrier may not fall exactly on its assigned center frequency. iDirect networks combat frequency
offset using an automatic frequency control algorithm. Any additional instability must be
accommodated by additional guard band.
The frequency references to the hub transmitter and to the satellite itself are generally very
stable, so the main source of frequency instability is the downconverter at the hub. This is
because the automatic frequency control algorithm uses the hub receiver's estimate of
frequency offset to adjust each remote's transmit frequency. Hub stations that use a
feedback control system to lock their downconverter to an accurate reference may have
negligible offsets. Hub stations using a locked LNB will have a finite frequency stability range.
Another reason to add guard band is to account for the frequency stability of other carriers
directly adjacent on the satellite that are not part of an iDirect network. Be sure to review this
situation with your satellite link designer before changing carrier parameters.
The example that follows accounts for a frequency stability range for systems using equipment
with more significant stability concerns. Given the Current Carrier Parameters from the previous
example and a total frequency stability of ±5 kHz, compute the new carrier parameters:
Solution:
1. Subtract the total frequency uncertainty from the available bandwidth to determine the
amount of bandwidth left for the carrier (882.724 kHz − 10 kHz = 872.724 kHz).
2. Divide this result by the minimum channel spacing (872.724 / 1.2 = 727.270 kHz).
3. Use the result as the carrier symbol rate and compute the remaining parameters.
New Carrier Parameters:
User Bit (info) Rate: 1153.450 kbps
Carrier Bit Rate: 1454.540 kbps
Carrier Symbol Rate: 727.270 ksps
Occupied Bandwidth: 882.724 kHz
Guard Band Between Carriers: 21.375% (Channel Spacing = 1.21375)
A 15.345% improvement in user bit rate was achieved at no additional cost.
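The same sketch can be extended to account for the frequency uncertainty. Again this is an illustration under the same QPSK and bandwidth assumptions, not a configuration procedure:

FEC_RATE = 0.793
BITS_PER_SYMBOL = 2            # QPSK (assumed)

available_khz = 882.724        # occupied bandwidth of the original carrier
uncertainty_khz = 2 * 5.0      # ±5 kHz total frequency stability
min_spacing = 1.2

usable_khz = available_khz - uncertainty_khz        # 872.724 kHz
symbol_ksps = usable_khz / min_spacing              # 727.270 ksps
carrier_kbps = symbol_ksps * BITS_PER_SYMBOL        # 1454.540 kbps
user_kbps = carrier_kbps * FEC_RATE                 # 1153.450 kbps
effective_spacing = available_khz / symbol_ksps     # 1.21375 (21.375% guard band)
improvement_pct = (user_kbps / 1000.0 - 1.0) * 100  # 15.345 %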
DECREASING CHANNEL SPACING TO GAIN ADDITIONAL BANDWIDTH
The amount of required guard band between carriers can also be expressed as the channel
spacing requirement. For example, if the required guard band is 20%, the channel spacing
requirement is 1.2 × Carrier_Symbol_Rate (Hz).
Therefore, a network operator may take advantage of the new carrier bandwidth optimization
by reworking the network's frequency plan so that excess bandwidth becomes available for use by
another carrier.
For example, consider an iDirect network with a user data (information) rate of 5 Mbps on the
downstream and three upstream carriers of 1 Mbps each. FEC rate 0.793 with 4 kbit TPC is used
for all carriers in this example. Figure 38 shows that an additional Upstream carrier may be
added by reducing the channel spacing of the existing carriers.
Figure 38. Adding an Upstream Carrier By Reducing Carrier Spacing
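Because the symbol rates behind Figure 38 are not listed here, the following sketch simply totals the occupied bandwidth of the example carriers under the same illustrative assumptions (QPSK, FEC rate 0.793) to show why a fourth 1 Mbps upstream carrier fits once the channel spacing drops from 1.4 to 1.2; the numbers are illustrative only.

FEC_RATE = 0.793
BITS_PER_SYMBOL = 2    # QPSK assumed for every carrier in this illustration

def occupied_khz(user_kbps, spacing):
    # occupied bandwidth = symbol rate * channel spacing
    return user_kbps / FEC_RATE / BITS_PER_SYMBOL * spacing

# Original plan: one 5 Mbps downstream plus three 1 Mbps upstreams at 1.4 spacing
original_khz = occupied_khz(5000, 1.4) + 3 * occupied_khz(1000, 1.4)   # ~7061.8 kHz

# Optimized plan: the same carriers at 1.2 spacing
optimized_khz = occupied_khz(5000, 1.2) + 3 * occupied_khz(1000, 1.2)  # ~6053.0 kHz

freed_khz = original_khz - optimized_khz     # ~1008.8 kHz recovered
extra_carrier_khz = occupied_khz(1000, 1.2)  # ~756.6 kHz for one more 1 Mbps upstream
print(freed_khz >= extra_carrier_khz)        # True: the fourth upstream carrier fits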
17 HUB LINE CARD FAILOVER
This chapter describes basic hub line card failover concepts, transmit/receive versus receive-
only line card failover, the failover sequence of events, and failover operation from a user's point
of view.
For information about configuring your line cards for failover, refer to the Networks, Line Cards,
and Inroute Groups chapter of the iBuilder User Guide.
BASIC FAILOVER CONCEPTS
Each second, every line card sends a diagnostic message to the NMS. This message contains the
status of various onboard components. If the NMS fails to receive any diagnostic messages from
a line card in a five-second time period, and all failover prerequisites are met, it begins the
failover process.
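As a rough illustration of this timeout logic (a sketch only; the message handling and function names are assumptions, not the NMS implementation), the heartbeat check might look like this:

import time

HEARTBEAT_PERIOD_S = 1.0    # each line card sends one diagnostic message per second
FAILOVER_TIMEOUT_S = 5.0    # five seconds of silence triggers failover

last_seen = {}              # line card ID -> time of the last diagnostic message

def on_diagnostic_message(card_id):
    # Record when each line card last reported its component status.
    last_seen[card_id] = time.monotonic()

def cards_needing_failover(prerequisites_met):
    # Return the line cards that have been silent longer than the timeout,
    # provided all failover prerequisites are met.
    if not prerequisites_met:
        return []
    now = time.monotonic()
    return [card for card, seen in last_seen.items()
            if now - seen > FAILOVER_TIMEOUT_S]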
For Tx(Rx) line cards, the standby assumes the failed card's role immediately. For Rx-only line
cards, the standby must flash a new options file and reset. The estimated time to complete a
Tx(Rx) line card failover is less than 10 seconds; the estimated time to complete an Rx-only line
card failover is less than 1 minute.
Note: If your Tx line card fails, or you only have a single Rx line card and it fails, all
remotes must re-acquire into the network after failover is complete.
TX(RX) VERSUS RX-ONLY LINE CARD FAILOVER
The most important line card in a network is the Tx(Rx) line card; if this card fails, all remotes
drop out of the network. When an Rx-only card in a frequency-hopping inroute group fails, all
remotes automatically begin sharing the other inroutes. While this may result in diminished
bandwidth, remotes do not drop out of the network.
iDirect's failover method guarantees the fastest possible failover for Tx(Rx) line cards. The
standby line card in each network is pre-configured with the parameters of the Tx card for that
network, and has those parameters loaded into memory. The only difference between the active
Tx(Rx) card and the standby is that the standby mutes its transmitter (and receiver). When the
NMS detects a Tx(Rx) line card failure, it sends a command to the standby to un-mute its
transmitter (and receiver), and the standby immediately assumes the role of the Tx(Rx) card.
Rx-only line cards take longer to fail over than Tx(Rx) cards because they need to receive a new
options file, flash it, and reset.
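The difference between the two failover paths can be summarized in a small model. This is purely illustrative; the class and method names are assumptions for the sketch, not NMS commands.

from dataclasses import dataclass

@dataclass
class LineCard:
    # Minimal stand-in for a hub line card, for illustration only.
    role: str = "Standby"
    muted: bool = True
    options_file: str = ""

def fail_over_tx_rx(standby: LineCard) -> None:
    # The standby already holds the Tx(Rx) parameters in memory,
    # so it only needs to un-mute to take over (estimated < 10 seconds).
    standby.muted = False
    standby.role = "Primary"

def fail_over_rx_only(standby: LineCard, active_options_file: str) -> None:
    # An Rx-only spare must flash the failed card's options file and reset
    # before taking over (estimated < 1 minute).
    standby.options_file = active_options_file
    standby.role = "Primary"   # effective only after the card resets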
FAILOVER SEQUENCE OF EVENTS
Figure 39 shows the sequence of events performed on the NMS server to execute a complete
failover.
Figure 39. Failover Sequence of Events, NMS Server Perspective
The flow chart captures the following sequence. All operations after the Event Server notification
are handled by the Configuration Server unless otherwise noted.
1. The Event Server determines that a line card has failed.
2. If automatic failover is not selected, the process ends. iMonitor shows the line card in the
Alarm state, and the user may initiate a manual failover if desired.
3. When automatic failover is selected (or the user initiates a manual failover), the Configuration
Server is notified.
4. If the failover prerequisites are not met, the process ends; the user will have already been
notified that failover cannot happen.
5. If the prerequisites are met, the Configuration Server powers down the slot of the failed card.
At this point the Configuration Server must grab the exclusive write lock; any user holding the
lock loses it, along with any unsaved changes.
6. For a Tx(Rx) line card, a command is sent to the spare to switch its role from Standby to
Primary, and the ACTIVE options file of the failed card is sent to it without a reset. For an
Rx-only line card, the ACTIVE options file of the failed card is sent to the spare, which is then
reset.
7. The necessary changes (serial number) are applied to puma.
8. The former spare takes on the role of the failed card (Tx, TxRx, or Rx) and its carrier/inroute
group assignments; the failed unit is given the new role of Failed.
FAILOVER OPERATION: USER'S PERSPECTIVE
Portions of the failover sequence of events are revealed in real time. You may perform a
historical condition query at any time to see the alarms and warnings that are generated and
archived during the failover operation. Figure 40 shows the sequence of events seen on the
screen in real time during a failover operation.
Figure 40. Failover Sequence of Events, NMS User Perspective
The flow chart shows the following sequence as seen by the NMS user:
1. iMonitor is notified that the line card has failed. This is a standard ALARM with the
appropriate alarm type, and the ALARM is displayed in all appropriate displays.
2. If automatic failover is not selected, the process ends. iMonitor shows the line card in the
Alarm state, and the user may initiate a manual failover if desired.
3. If the server prerequisites are not met, the process ends; the user will have already been
notified that failover cannot happen.
4. If the prerequisites are met, iMonitor is notified (from the line card via the event server) that
the spare is assuming the role of Primary, and displays the line card in the WARNING state.
5. For a Tx(Rx) line card, iMonitor is notified that the spare has assumed the role of the primary
and clears the WARNING. For an Rx-only line card, iMonitor is notified that the spare has gone
down and displays an ALARM; when the spare comes back up, iMonitor clears the ALARM and
the WARNING.
6. The Accept Changes button highlights in iMonitor (and iBuilder). (The source figure marks it as
TBD whether this step applies to receive-only line cards.)
7. The user accepts the changes and sees that the failed card is in the Failed role and that the
spare now has the failed card's old role.