Neptune (Hybrid) V6.0 Reference Manual
Version 6.0
Catalog No: X92376
Drawing No: 417006-2710-063-A00
May 2017
Rev01
ECI's NPT-1800, NPT-1200, NPT-1050, NPT-1021, and NPT-1010 are CE2.0 certified.
ECI's qualification lab is accredited by A2LA for competence in electrical testing according to
the International Standard ISO/IEC 17025:2005, General Requirements for the Competence of
Testing and Calibration Laboratories.
Related documents
Neptune General Description
NPT-1200 Installation and Maintenance Manual
NPT-1030 Installation and Maintenance Manual
NPT-1050 Installation and Maintenance Manual
NPT-1021 Installation and Maintenance Manual
NPT-1020 Installation and Maintenance Manual
NPT-1010 Installation and Maintenance Manual
EMS-NPT User Manual
LCT-NPT User Manual
LightSoft® User Manual
The second variant, Packet NPT, is equipped with a central Ethernet/MPLS switch and supports TDM
services through Circuit Emulation Service (CES). NPT is equipped with a broad mix of Ethernet and TDM
interfaces, supporting both packet and TDM based services over a converged packet infrastructure.
Both Hybrid and Packet NPT comply with all MEF CE2.0 service standards, as well as offering extensive
synchronization, protection, and resiliency schemes. Whether the network traffic is transported over legacy
equipment, supporting only TDM, or over packet equipment, supporting only Ethernet, NPT can provide
the optimal solution. As the network evolves, there is no need for costly replacements of existing
infrastructure or cumbersome external adaptive boxes.
NPT's flexible traffic handling architecture offers the most cost-efficient traffic handling in a mixed TDM and
packet environment while supporting all transport attributes. The result is the lowest TCO throughout the
network life cycle and over the course of the network transition from TDM to packet. The same holds true
when building new carrier Ethernet and packet-based transport networks.
The NPT's value propositions include:
Lowest TCO
Flexible multi-service (Packet, Optics, TDM)
Cost-effective scalability through the modular architecture
Dual Stack MPLS, offering seamless, service-optimized interworking
Transport grade service assurance
Performance: Predictable and guaranteed
Availability: Carrier grade redundancy and protection
Security: Secure and transparent
E2E control
Intuitive GUI: Easy point-and-click operation
Unified multi-layer NMS: Enabling smooth, converged control
Visibility: Providing extensive OAM for E2E SLA visibility
10 Gbps Add/Drop Multiplexer (ADM) service on a double card for GbE, 1GFC, 2GFC, OTU1, and
STM-16 services. ADM on a Card (MXP10) benefits include the ability to route client signals to
different locations along the optical ring, as well as per-service selectable protection and
drop-and-continue features. The MXP10 can also be used as a multi-rate combiner up to OTU2. The
MXP10 combines the cost efficiency of an optical platform with the granularity and flexibility
previously available only in SDH networks.
Up to 320 Gbps capacity with 40 Gbps per slot on Neptune Hybrid platforms.
High Order (HO) and Low Order (LO) transmission paths available for both high-order and low-order
subnetworks, with a high-capacity matrix that maintains LO connectivity.
Comprehensive MPLS Carrier Ethernet capabilities, including use of MPLS technology to carry
Ethernet services across the metro and core network.
HO transmission paths for IP networks (for example, LAN-to-LAN connectivity at the GbE-to-GbE
level).
Multireach, for metro and regional applications spanning up to 800 km without electrical
regeneration.
Supporting cost-effective access CWDM applications and core DWDM networks of up to 44/88
channels.
Transport of Ethernet traffic over WDM.
Subrate traffic aggregation over optical cards.
Channel by channel, non-traffic-affecting upgrade, starting from a single channel.
Full compliance with applicable ITU-T and Telcordia standards for optical equipment and safety
standards.
Extremely powerful management that renders the system easy to control, monitor, and maintain.
NOTE: All installation instructions, technical specifications, restrictions, and safety warnings
are provided in the Neptune Installation and Maintenance Manuals. See these manuals for
specific instructions before beginning any Neptune platform installation.
The NPT-1020 offers enhanced MPLS-TP data network functionality, including the complete range of
Ethernet based services (CES, EoS, MoT, MoE, and PoE+), as described in the NPT General Description.
The NPT-1020 is a compact (1U) base platform housed in a 243 mm deep, 465 mm wide, and 44 mm high
equipment cage with all interfaces accessible from the front of the unit. The platform includes the following
components:
Traffic processing through:
21 built-in native E1s
14 ports, divided between:
2 x STM-1/STM-4 ports (native)
8 x 10/100/1000BaseT electrical ports (with 4 x PoE+)
4 x SFP – each can be configured as 100/1000Base-X or 10/100/1000Base-T (with ETGBE) or
GPON ONU interface (with GTGBE_L3BD)
1 traffic card slot (Tslot)
Compact flash card (NVM)
Traffic connector to the (optional) EXT 2U expansion unit
Timing module (T3/T4, ToD, and 1pps)
Alarms connector
Redundant or non-redundant power supply modules (INF)
Figure 2-5: NPT-1020 platform
The NPT-1020 can be fed by either -48 VDC or 110 VAC to 230 VAC. In DC power feeding, two INF modules
can be configured in two power supply module slots for redundant power supply. AC power feeding
requires the use of a conversion module to implement AC/DC conversion.
The NPT-1020 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform
design also makes this platform a good choice for street cabinet use, withstanding temperatures up to
70°C.
The platform offers unique non-traffic-affecting upgrades from 1G-based configurations to 10GE-based
(with up to 4 x 10GE interfaces). This is supported through the CPS50 card, a central packet switch (CPS)
Tslot card for the NPT-1021. This card provides the NPT-1021 with scalable upgrades to high capacity 10GE
configurations. The CPS50 makes it possible to upgrade the system packet switching capacity to 60 Gbps. It
supports up to 2 × 10GE (SFP+) interfaces and two flexible SFP housings. Each of these can support 1 × 10GE with SFP+,
1 × GE with SFP, or 2 × GE with CSFP.
NPT-1021 offers enhanced MPLS-TP data network functionality, including the complete range of
Ethernet-based services (CES, MoE, and PoE+), as described in MPLS-TP and Ethernet solutions.
NPT-1021 is a compact (1U) base platform housed in a 243 mm deep, 465 mm wide, and 44 mm high
equipment cage. All its interfaces are accessible from the front of the unit. The platform includes the
following components:
Traffic processing modules:
12 ports, divided between:
4 x SFP – each can be configured as 100/1000Base-X or 10/100/1000Base-T (with ETGBE) or
GPON ONU interface (with GTGBE_L3BD)
8 x RJ-45 – 10/100/1000Base-T, 4 of which support PoE
One traffic card slot (Tslot)
Compact flash card (NVM)
Traffic connector to the (optional) EXT 2U expansion unit
Timing module (T3/T4, ToD, and 1pps)
Redundant or non-redundant power supply modules (INF)
Figure 2-6: NPT-1021 platform
NPT-1021 can be fed by 24 VDC, -48 VDC, or 110 to 230 VAC. In DC power feeding, two INF modules can be
configured in the two power module slots for redundancy. One double slot INF module with dual-feeding
can be configured as well. AC power feeding requires the use of a conversion module to implement AC/DC
conversion.
NPT-1021 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform design
also makes this platform suitable for street cabinet use, withstanding temperatures up to 70°C.
NPT-1021 can also be configured as an expanded platform when combined with the EXT-2U expansion unit,
as illustrated in the following figure.
Figure 2-7: NPT-1021 with EXT-2U expansion unit
Typical power consumption of the NPT-1021 is 40 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the Neptune
System Specifications.
NPT-1010DC can be fed by -48 VDC and the NPT-1010AC by 110 VAC to 230 VAC. In DC power
(NPT-1010DC) feeding, dual DC feed is supported. AC power feeding (NPT-1010AC) requires the use of a
conversion module to implement AC/DC conversion.
NPT-1010 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform design
also makes this platform a good choice for street cabinet use, withstanding temperatures up to 70°C.
Used in many subnetwork topologies, NPT-1200 can handle a mixture of P2P, hub, and mesh traffic
patterns. This combined functionality means that operators benefit from improved network efficiency and
significant savings in terms of cost and footprint.
The NPT-1200 platform:
Increases the number of Ethernet interfaces, and upgrades from 10M to 100GE (100GE in future
version) easily and smoothly.
Increases the number of STM-1 interfaces, and upgrades from STM-1 to STM-4/STM-16/STM-64 easily
and smoothly.
Adds OTN capabilities for seamless integration and interconnection with optical-based networking.
Allows you to start as small as necessary and attain ultrahigh expandability in a build-as-you-grow™
fashion by combining the standard Neptune unit with an expansion unit (EXT-2U).
Aggregates traffic arriving over Ethernet, PCM low-bitrate interfaces, E1/T1, E3/DS-3, and STM-1
directly over STM-1/STM-4/STM-16/STM-64 and GbE/10GbE/100GbE.
Is suitable for indoor and outdoor installations.
Supports an extended operating temperature range up to 70°C (with CPS/CPTS100 only).
The NPT-1200 platform is housed in a 243 mm deep, 442.4 mm wide, and 88.9 mm high equipment cage
with all interfaces accessible from the front of the unit.
Figure 3-2: NPT-1200 general view
The following table lists the modules that can be configured in each NPT-1200 slot.
DC PSA DC PSB MS XS A XS B TS 1# to TS 7# FS
INF_1200
FCU_1200
MCP1200
CPTS100
CPS100
CPTS320
CPS320
XIO64
XIO16_4
PME1_21¹
PME1_21B
PME1_63²
PM345_3
SMQ1
SMQ1&4
SMS16
DMFE_4_L1
DMFX_4_L1
DMFE_4_L2
DMFX_4_L2
DMGE_2_L2
1 The PME1_21/PME1_21B module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced
conversion unit, the xDDF-21.
2 The PME1_63 module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced conversion
unit, the xDDF-21.
3 Depends on the platform power consumption.
4 Depends on the platform power consumption.
All cards support live insertion. The NPT-1200 platform provides full 1+1 redundancy in power feeding,
cross connections, and the TMU, as well as 1:N redundancy in the fans.
NOTE: Failure of the MCP1200 does not affect any existing TDM and Packet traffic on the
platform.
The NPT-1200 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks.
NOTES:
The NPT-1200 platform with CPTS100/CPS100 supports a maximum of 48 x GbE or a maximum of
10 x 10GbE.
The NPT-1200 platform with CPTS320/CPS320 supports a maximum of 64 x GbE or a maximum of 32 x 10GbE
(MBP-1200 HW revision should be >= B01).
The NPT-1200 platform must be configured with identical switching card types.
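As an informal aid, the following Python sketch checks a requested Ethernet port mix against the per-card limits quoted in the notes above. The data structure and helper function are hypothetical illustrations (not part of the NPT software), and the check treats the GbE and 10GbE limits independently, which is the simplest reading of the note.

```python
# Illustrative check of an NPT-1200 Ethernet port mix against the switching
# card limits quoted above. Card names and numeric limits come from the note;
# the dictionary layout and helper function are hypothetical.
SWITCH_LIMITS = {
    "CPTS100": {"gbe": 48, "xge": 10},   # max 48 x GbE or max 10 x 10GbE
    "CPS100":  {"gbe": 48, "xge": 10},
    "CPTS320": {"gbe": 64, "xge": 32},   # requires MBP-1200 HW rev >= B01
    "CPS320":  {"gbe": 64, "xge": 32},
}

def ports_supported(card: str, gbe_ports: int, xge_ports: int) -> bool:
    """Return True if the requested port mix stays within the quoted limits."""
    limits = SWITCH_LIMITS[card]
    return gbe_ports <= limits["gbe"] and xge_ports <= limits["xge"]

print(ports_supported("CPTS100", gbe_ports=40, xge_ports=8))   # True
print(ports_supported("CPTS100", gbe_ports=40, xge_ports=12))  # False
```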
The NPT-1200 main controller card (MCP-1200) is the most essential card of the system, creating virtually a
complete standalone native packet system. Moreover, it accommodates one service traffic slot for flexible
configuration of virtually any type of PDH, SDH, and Ethernet interfaces. This integrated flexible design
ensures a very compact equipment structure and reduces costs, making NPT an ideal native choice for the
access and metro access layers.
NPT-1200 control and communication functions include:
Internal control and processing
Communication with external equipment and management
Network element (NE) software and configuration backup
Built-in Test (BIT)
NOTE: The NPT-1200 supports in band and DCN management connections for PB and MPLS:
4 Mbps policer for PB UNI, which connects to the external DCN
10 Mbps shaper for MCC packets to the MCP
No rate limit for the MNG port (rate up to 100 Mbps full duplex)
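The policer and shaper rates in this note can be illustrated with a generic token-bucket rate limiter. The sketch below is only a schematic model under assumed parameters (class name, burst size, and timestamps are hypothetical); it is not the NPT implementation.

```python
# Minimal token-bucket sketch illustrating rate limiting of the kind applied
# by the 4 Mbps PB UNI policer or the 10 Mbps MCC shaper mentioned above.
# Class, burst size, and timestamps are hypothetical illustrations only.
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0    # refill rate in bytes per second
        self.capacity = burst_bytes   # maximum bucket depth in bytes
        self.tokens = burst_bytes
        self.last = 0.0               # time of the previous update, in seconds

    def allow(self, packet_bytes: int, now: float) -> bool:
        """Policer behaviour: admit the packet if enough tokens remain, else drop."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

uni_policer = TokenBucket(rate_bps=4_000_000, burst_bytes=32_000)
print(uni_policer.allow(1500, now=0.001))  # True while within the 4 Mbps profile
```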
3.3 Timing
NPT-1200 provides high-quality system timing to all traffic modules and functions in compliance with
applicable ITU-T recommendations for functionality and performance.
The main component in the NPT-1200 synchronization subsystem is the timing and synchronization unit
(TMU). Timing is distributed redundantly from the TMUs to all traffic and matrix cards, minimizing unit
types and reducing operation and maintenance costs.
The TMU and the internal and external timing paths are fully redundant. The high-level distributed BIT
mechanism ensures top performance and availability of the synchronization subsystem. In case of
hardware failure, the redundant synchronization subsystem takes over the timing control with no traffic
disruption.
To support reliable timing, NPT-1200 provides multiple synchronization reference options. Up to four
timing references can be monitored simultaneously:
1PPS and ToD interfaces, using external timing input sources
2 x 2 MHz (T3) external timing input sources
2 x 2 Mbps (T3) external timing input sources
STM-n line timing from any SDH interface card
E1 2M PDH line timing from any PDH interface card
Local internal clock
Holdover mode
SyncE
1588V2 – Master, Slave, transparent, and boundary clock
In the NPT, any timing signal can be selected as a reference source. The TMU provides direct control over
the source selection (received from the system software) and the frequency control loop. The definition of
the synchronization source depends on the source quality and synchronization mode of the network timing
topology (set by the EMS-NPT or LCT-NPT):
Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. The NPT-1200 synchronization subsystem synchronizes to the best available
timing source using the SSM protocol. The TMU is frequency-locked to this source, providing internal
system and SDH line transmission timing. The platform is synchronized to this central timing source.
NPT-1200 provides synchronization outputs for the synchronization of external equipment within the
exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
NPT-1200 supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
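The reference classification and SSM-based selection described in this list can be summarized with a small sketch. Quality-level ordering follows ITU-T G.781 conventions; the data layout, function name, and example sources are hypothetical and not taken from the NPT software.

```python
# Illustrative selection of the best timing reference by SSM quality level and
# configured priority, as described above. QL ordering follows ITU-T G.781
# (lower number = better quality); the structures here are hypothetical.
QL_ORDER = {"PRC": 1, "SSU-A": 2, "SSU-B": 3, "SEC": 4, "DNU": 5}

def select_reference(references):
    """references: list of dicts with 'name', 'ql', 'priority', and 'failed'."""
    usable = [r for r in references if not r["failed"] and r["ql"] != "DNU"]
    if not usable:
        return None  # no valid source: fall back to holdover or internal clock
    # Best quality wins; priority breaks ties between equal-quality sources.
    return min(usable, key=lambda r: (QL_ORDER[r["ql"]], r["priority"]))

refs = [
    {"name": "T3-1",   "ql": "SSU-A", "priority": 1, "failed": False},
    {"name": "STM-16", "ql": "PRC",   "priority": 2, "failed": False},
]
print(select_reference(refs)["name"])  # "STM-16" - the better-quality source wins
```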
The IEEE 1588 Precision Time Protocol (PTP) provides a standard method for high precision synchronization
of network connected clocks. PTP is a time transfer protocol enabling slave clocks to synchronize to a
known master clock, ensuring that multiple devices operate using the same time base. The protocol
operates in master/slave configuration using UDP packets over IP or multicast packets over Ethernet. The
IEEE 1588v2 is supported in the NPT-1200 to provide Ordinary Clock (OC) and Boundary Clock (BC)
capabilities.
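For orientation, the standard IEEE 1588 two-way exchange lets a slave compute its offset and the mean path delay from four timestamps. The sketch below shows that textbook calculation; the function name and example timestamps are illustrative only.

```python
# Standard IEEE 1588 slave-side calculation: t1 = Sync sent by the master,
# t2 = Sync received by the slave, t3 = Delay_Req sent by the slave,
# t4 = Delay_Req received by the master. Names and values are illustrative.
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock error relative to master
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # mean one-way path delay
    return offset, delay

offset, delay = ptp_offset_and_delay(t1=100.000000, t2=100.000150,
                                     t3=100.000300, t4=100.000400)
print(offset, delay)  # 2.5e-05 s offset, 1.25e-04 s mean path delay
```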
The CPS100 is a central packet switch card providing 100G packet switching, and includes timing control
with 2 x 10GE interfaces.
The CPTS320 is a central packet and TDM switch card providing 320G packet switching and 40G TDM
switching, and includes timing control with 1 x STM-64, 2 x STM-1/4/16, and 4 x 10GE interfaces.
The CPS320 is a central packet switch card providing 320G packet switching, and includes timing control
with 4 x 10GE interfaces.
3.6.1 INF_1200
The INF_1200 is a DC power-filter module that can be plugged into the NPT-1200 platform. Two INF_1200
modules are needed for power feeding redundancy. It performs the following functions:
Single DC power input and power supply for all modules in the NPT-1200
Input filtering function for the entire NPT-1200 platform
Adjustable output voltage for fans in the NPT-1200
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply when under-/over-voltage is detected
High-power INF for up to 550 W, or 650 W with INF_1200 HW revision D02 and above
Figure 3-5: INF_1200 front panel
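The under-/over-voltage behavior listed above amounts to keeping the -48 VDC feed inside an allowed window. The sketch below models that check with hypothetical threshold values; the real INF_1200 limits are specified in the installation documentation, not here.

```python
# Illustrative under-/over-voltage check of the kind the INF module performs
# before shutting down its output. Threshold values are hypothetical
# placeholders, not the INF_1200 specification.
UNDER_VOLTAGE_LIMIT = -40.0   # volts (hypothetical: feed too weak above this)
OVER_VOLTAGE_LIMIT = -60.0    # volts (hypothetical: feed too strong below this)

def dc_input_ok(measured_volts: float) -> bool:
    """Return True when a nominally -48 VDC feed is inside the allowed window."""
    return OVER_VOLTAGE_LIMIT <= measured_volts <= UNDER_VOLTAGE_LIMIT

for v in (-48.0, -36.0, -63.0):
    print(v, "OK" if dc_input_ok(v) else "shut down power supply")
```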
3.6.2 FCU_1200
The FCU_1200 is a pluggable fan control module with eight fans for cooling the NPT-1200 platform. The
fans’ running speed can be set to 16 different levels. The speed is controlled by the MCP1200 according to
the temperature of the installed cards.
Figure 3-6: FCU_1200 front panel
3.6.3 MCP1200
The MCP1200 card is the main processing card of the NPT-1200. It integrates functions such as control,
communications and overhead processing. It provides:
Control-related functions:
Communications with and control of all other modules in the NPT-1200 and EXT-2U through the
backplane (by the CPU)
Communications with the EMS-APT, LCT-APT, or other NEs through a management interface
(MNG), DCC, MCC, or VLAN
Routing and handling of up to 32 x RS DCC and 32 x MS DCC (total 32 channels), and two clear
channels
Alarms and maintenance
Fan control
Overhead processing, including overhead byte cross connections, OW interface, and user channel
interface
External timing reference interfaces (T3/T4), which provide the line interface unit for one 2 Mbps
T3/T4 interface and one 2 MHz T3/T4 interface
The MCP1200 supports the following interfaces:
MNG and T3/T4 directly from its front panel
RS-232, OW access, housekeeping alarms, and V.11 through a concentrated SCSI auxiliary I/F
connector (on the front panel)
In addition, the MCP1200 has LED indicators and one reset pushbutton. As the NPT-1200 is a front-access
platform, all its interfaces, LEDs, and pushbutton are located on the front panel of the MCP1200.
NOTE: An MCP30 ICP can be used to distribute the concentrated auxiliary connector into
dedicated connectors for each function.
NOTE: The ACT, FAIL, MJR, and MNR LEDs are combined to show various failure reasons during
the system boot. For details, see the Troubleshooting Using Component Indicators section in
the NPT-1200 Installation, Operation, and Maintenance Manual.
3.7.1 CPTS100
The CPTS100 is a powerful, high capacity, non-blocking dual cross connect matrix. It includes a TDM matrix
for native-SDH switching and a packet switch to support native packet-level switching.
Legacy TDM-level cross connects through the TDM matrix (as in the XIO cards) consume much of the
available bandwidth, because bandwidth is allocated statically. In the CPTS100, bandwidth is
dynamically allocated, ensuring high flexibility and efficient utilization of this limited resource.
Furthermore, when slots must be unassigned and reassigned, the matrix uses a sophisticated
bandwidth rearrangement algorithm to achieve the best bandwidth utilization.
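As a loose illustration of the static-versus-dynamic distinction above, the toy sketch below grants bandwidth to slots on demand from a shared matrix budget. All names and capacities are hypothetical; the actual CPTS100 rearrangement algorithm is proprietary and not described in this manual.

```python
# Toy illustration of dynamic slot bandwidth allocation: slots draw from a
# shared matrix budget instead of receiving fixed static shares. Slot names
# and capacities are hypothetical.
MATRIX_CAPACITY_GBPS = 100

def allocate(requests):
    """Grant bandwidth per slot, in request order, until the matrix is exhausted."""
    granted, remaining = {}, MATRIX_CAPACITY_GBPS
    for slot, need in requests.items():
        grant = min(need, remaining)
        granted[slot] = grant
        remaining -= grant
    return granted, remaining

granted, free = allocate({"TS1": 10, "TS2": 40, "TS3": 20})
print(granted, "free:", free)  # each slot gets what it requested; 30 Gbps remains
```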
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic-affecting.
The CPTS100 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.
3.7.2 CPS100
The CPS100 is a powerful, high capacity, non-blocking switching card. It includes a pure packet switch
that supports native packet-level switching.
In the CPS100, bandwidth is dynamically allocated, ensuring high flexibility and efficient utilization of this
limited resource. Furthermore, when slots must be unassigned and reassigned, the matrix uses
a sophisticated bandwidth rearrangement algorithm to achieve the best bandwidth utilization.
A functional diagram of the CPS100 matrix is shown in the following figure.
Figure 3-10: CPS100 functional diagram
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic affecting.
The CPS100 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.
3.7.3 CPTS320
CPTS320 dual matrix cards are centralized packet and TDM switches that support any to any direct data
card connectivity as well as native TDM switching capacity. These matrix cards, designed for use in the
NPT-1200 metro access platform, offer a choice of capacity and configuration options, including:
All Native Ethernet packet switch, supporting native packet-level switching with a capacity of up to
320 Gbps with 240 Gbps TM, providing:
Management and internal control, in addition to user traffic switching
Non-blocking data switch fabric
P2P MPLS internal links via the packet switch
Any slot to any slot connectivity
Any card installed in any slot
HO/LO nonblocking TDM cross connections, enabling native SDH/SONET switching with a capacity of
up to 40G (256 x VC-4, fully LO traffic)
5G HEoS connectivity between the packet and TDM matrix
Aggregate ports:
1 x STM-64 XFP-based interface
2 x STM-16/STM-4/STM-1 SFP-based configurable interfaces
4 x 10 GbE SFP+ based interfaces
Comprehensive range of timing and synchronization capabilities (IEEE 1588v2, SyncE)
The following figure shows the traffic flow in an NPT-1200 configured with a CPTS320 matrix card.
Figure 3-12: CPTS320 traffic flow
NOTE: The CPTS320 HEoS functionality is supported as of V6.0; make sure to use the version
with HEoS FIX.
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade is non-traffic-affecting.
The CPTS320 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.
3.7.5 CPS320
The CPS320 is a centralized packet switch that supports any to any direct data card connectivity. This switch
card, designed for use in the NPT-1200 metro access platform, offers a choice of capacity and configuration
options, including:
All Native Ethernet packet switch, supporting native packet-level switching with a capacity of up to 320
Gbps with 240 Gbps TM, providing:
Management and internal control, in addition to user traffic switching
Non-blocking data switch fabric
P2P MPLS internal links via the packet switch
Traffic management including:
Guaranteed CIR
Eight CoS with differentiated services
Two CoS (within the switch)
E2E flow control
Any card installed in any slot
Any to any slot connectivity
Aggregate ports:
4 x 10 GbE SFP+ based interfaces
Comprehensive range of timing and synchronization capabilities (ToD, 1pps)
The following figure shows the traffic flow in an NPT-1200 configured with a CPS320 matrix card.
Figure 3-15: CPS320 traffic flow
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic affecting.
The CPS320 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.
3.7.6 XIO64
The XIO64 card is the cross-connect matrix card with one aggregation line interface for the NPT-1200. It
also includes the TMU. The NPT-1200 should always be configured with two XIO64 cards for the
cross-connect matrix and TMU redundancy. The XIO64 has a cross-connect capability of 40 Gbps.
In addition, the XIO64 provides one STM-64 aggregate line interface based on the XFP module. The XFP
housing on the XIO64 panel supports STM-64 optical transceivers with a pair of LC optical connectors. The
card also supports OTN (with FEC) by a unique XFP type, the OTRN_xx.
3.7.7 XIO16_4
The XIO16_4 is the cross-connect matrix card with four aggregation line interfaces for the NPT-1200. It also
includes the TMU. The NPT-1200 should always be configured with two XIO16_4 cards for the
cross-connect matrix and TMU redundancy. The XIO16_4 has a cross-connect capability of 40 Gbps.
In addition, the XIO16_4 provides four STM-16/4/1 aggregate line interfaces based on the SFP modules. The
SFP housings on the XIO16_4 panel support STM-1, STM-4, and STM-16 (colored, non-colored, and BD)
optical transceivers, each with a pair of LC optical connectors. The type of the interface can be configured
separately for each port through the management.
The interface between the OW and the NPT-1200 and NPT-1030 platforms is based on a framed E1
interface. A special cable connects the host NPT-1200 or NPT-1030 unit to the OW unit, providing the E1
connection. The framed E1 carries various information to and from the OW unit.
The OW module consists of an integrated DTMF handset, cable connections, and configuration interfaces.
No other ancillary equipment is required.
Type Designation
Optical GbE interface module with direct connection to the packet switch DHGE_24
Optical 10GE interface module with direct connection to the packet switch DHXE_2
Optical 10GE interface module with direct connection to the packet switch DHXE_4
Optical 10GE interface module with direct connection to the packet switch, with OTN wrapping DHXE_4O
NFV module with 4 x GbE front panel ports for Virtual Network Functions. NFVG_4
This fully redundant Packet Optical Access (POA) platform offers enhanced MPLS-TP data network
functionality, including full traffic and IOP protection and the complete range of Ethernet based services
(CES, EoS, MoT, MoE, and PoE), as described in MPLS-TP and Ethernet Solutions.
The NPT-1050 is designed around a centralized dual matrix card that supports any-to-any direct data card
connectivity as well as native TDM switching capacity. The platform can be configured with the MCPTS100
matrix card (100G packet switch + 15G TDM switch), the MCPS100 switching card (100G packet switch), or
the MCIPS300 switching card (300G packet switch, in a future version). MCPTS100 cards provide a TDM
capacity of up to 15G (96 x VC-4, fully LO traffic).
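A quick back-of-the-envelope check ties the quoted VC-4 counts to the TDM capacities: one VC-4 corresponds to one STM-1 (155.52 Mbps line rate). Whether the manual counts line rate or VC-4 payload rate is an assumption here, but either way the numbers land near the quoted figures.

```python
# Rough sanity check of the quoted TDM capacities: one VC-4 corresponds to one
# STM-1 (155.52 Mbps line rate). Counting line rate is an assumption; the
# VC-4 payload rate (150.336 Mbps) gives similar results.
STM1_MBPS = 155.52

def tdm_capacity_gbps(vc4_count: int) -> float:
    return vc4_count * STM1_MBPS / 1000.0

print(round(tdm_capacity_gbps(96), 1))    # ~14.9 Gbps -> "up to 15G" (MCPTS100)
print(round(tdm_capacity_gbps(256), 1))   # ~39.8 Gbps -> "up to 40G" (CPTS320)
```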
The NPT-1050 is a 1U base platform housed in a 243 mm deep, 465 mm wide, and 44 mm high equipment
cage with all interfaces accessible from the front of the unit. The platform includes the following
components:
Redundant dual matrix cards for robust provisioning of the following functionalities:
All native packet switching (MCPTS) or pure packet switching (MCPS).
HO/LO nonblocking TDM cross connections (MCPTS).
Two SFP+ based 10 GbE interfaces (MCPTS and MCPS).
Two SFP based GE interfaces (MCPTS and MCPS).
One SFP based STM-16/STM-4/STM-1 interface (MCPTS).
Comprehensive range of timing and synchronization capabilities (T3/T4, ToD, and 1pps) (MCPTS
and MCPS).
In band management interfaces.
Three I/O card slots (TS1 to TS3), for processing a comprehensive range of traffic interfaces, including
PDH/Async, SDH/SONET, Ethernet Layer 1, and Ethernet Layer 2/MPLS.
The Tslots can be configured for 2.5G, 20GbE, or 40GbE service.
Traffic connector for the (optional) EXT-2U expansion unit.
Redundant DC power supply modules (INF_B1UH).
Fan unit (FCU_1050) with alarm indications and monitoring.
The following figure identifies the slot arrangement in the NPT-1050 platform.
Figure 4-2: NPT-1050 platform slots layout
The following table lists the modules that can be configured in each NPT-1050 slot.
INF-B1UH
FCU_1050
MCPS100/MCPTS100
AIM100
PME1_21⁵
PME1_21B⁶
PME1_63⁷
PM345_3
SMQ1
SMQ1&4
SMS16
DMCES1_4
MSE1_16
MS1_4
MSE1_32
DHGE_4E
DHGE_8
DHGE_16
DHGE_24
5 The PME1_21/PME1_21B module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced
conversion unit, the xDDF-21.
6 The PME1_21/PME1_21B module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced
conversion unit, the xDDF-21.
7 The PME1_63 module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced conversion
unit, the xDDF-21.
DHXE_2
NFVG_4
The NPT-1050 is fed from -48 VDC. Two INF_B1UH modules can be configured in two power supply module
slots for a redundant power supply.
The NPT-1050 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The NPT-1050 can also
be configured as an NPT-1050E, when combined with the EXT-2U expansion unit.
Typical power consumption for the NPT-1050 is less than 250 W. Power consumption is monitored through
the management software. For more information about power consumption requirements, see the
NPT-1050 Installation and Maintenance Manual and the NPT System Specifications.
The NPT-1050 main controller card (MCPS) is the most essential card of the system, creating virtually a
complete standalone native packet system. NPT-1050 control and communication functions include:
Internal control and processing
Communication with external equipment and management
Network element (NE) software and configuration backup
Built-in Test (BIT)
NOTE: NPT-1050 supports in band and DCN management connections for PB and MPLS:
4 Mbps policer for PB UNI which connects to external DCN
10 Mbps shaper for MCC packet to MCP
No rate limit for the MNG port (rate up to 100 Mbps full duplex)
4.3 Timing
NPT-1050 provides high-quality system timing to all traffic modules and functions in compliance with
applicable ITU-T recommendations for functionality and performance.
The main component in the NPT-1050 synchronization subsystem is the timing and synchronization unit
(TMU). Timing is distributed redundantly from the TMUs to all traffic and matrix cards, minimizing unit
types and reducing operation and maintenance costs.
The TMU and the internal and external timing paths are fully redundant. The high-level distributed BIT
mechanism ensures top performance and availability of the synchronization subsystem. If there is a
hardware failure, the redundant synchronization subsystem takes over the timing control with no traffic
disruption.
To support reliable timing, NPT-1050 provides multiple synchronization reference options. Up to four
timing references can be monitored simultaneously:
1PPS and ToD interfaces, using external timing input sources
2 x 2 MHz (T3) external timing input sources
2 x 2 Mbps (T3) external timing input sources
STM-n line timing from any SDH interface card
E1 2M PDH line timing from any PDH interface card
Local internal clock
Holdover mode
SyncE
1588V2 – Master, Slave, transparent, and boundary clock
In the Neptune, any timing signal can be selected as a reference source. The TMU provides direct control
over the source selection (received from the system software) and the frequency control loop. The
definition of the synchronization source depends on the source quality and synchronization mode of the
network timing topology (set by the EMS-NPT or LCT-NPT):
Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. NPT-1050 synchronization subsystem synchronizes to the best available
timing source using the SSM protocol. The TMU is frequency-locked to this source, providing internal
system and SDH line transmission timing. The platform is synchronized to this central timing source.
NPT-1050 provides synchronization outputs for the synchronization of external equipment within the
exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
NPT-1050 supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
The IEEE 1588 Precision Time Protocol (PTP) provides a standard method for high precision synchronization
of network connected clocks. PTP is a time transfer protocol enabling slave clocks to synchronize to a
known master clock, ensuring that multiple devices operate using the same time base. The protocol
operates in master/slave configuration using UDP packets over IP or multicast packets over Ethernet. The
IEEE 1588v2 is now supported in the NPT-1050 to provide Ordinary Clock (OC) and Boundary Clock (BC)
capabilities.
4.6.1 INF_B1UH
The INF_B1UH is a DC power-filter module that can be plugged into the NPT-1050 platform. Two INF_B1UH
modules are needed for power feeding redundancy. It performs the following functions:
Single DC power input and power supply for all modules in the NPT-1050
Input filtering function for the entire NPT-1050 platform
Adjustable output voltage for fans in the NPT-1050
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply when under-/over-voltage is detected
High-power INF for up to 450 W
Figure 4-4: INF_B1UH front panel
4.6.2 FCU_1050
The FCU_1050 is a pluggable fan control module with four fans for cooling the NPT-1050 platform. The
fans’ running speed can be set to 16 different levels. The speed is controlled by the MCPS/MCPTS according
to the temperature of the installed cards.
In addition, the FCU_1050 includes the ALARM interface connector of the NPT-1050 platform.
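The 16-level fan control described above can be pictured as a simple mapping from measured card temperature to a speed level. The temperature range and linear mapping below are hypothetical placeholders; the real MCPS/MCPTS control curve is not specified in this manual.

```python
# Illustrative mapping from measured card temperature to one of the 16 fan
# speed levels mentioned above. The temperature range and the linear curve
# are hypothetical placeholders.
MIN_TEMP_C, MAX_TEMP_C, LEVELS = 25.0, 70.0, 16

def fan_level(temp_c: float) -> int:
    """Return a fan speed level between 1 (slowest) and 16 (fastest)."""
    if temp_c <= MIN_TEMP_C:
        return 1
    if temp_c >= MAX_TEMP_C:
        return LEVELS
    span = (temp_c - MIN_TEMP_C) / (MAX_TEMP_C - MIN_TEMP_C)
    return 1 + int(span * (LEVELS - 1))

print(fan_level(30.0), fan_level(55.0), fan_level(75.0))  # e.g. 2 11 16
```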
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic-affecting.
The following figure shows the traffic flow in an NPT-1050 configured with an MCPS100 switching card.
Figure 4-8: MCPS100 traffic flow
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic-affecting.
4.7.4 AIM100
The AIM100 is an aggregate interface module (AIM) for the aggregate (MCPS/MCPTS) slot in a
non-redundant configuration. The card makes it possible to achieve the maximum number of interfaces
with a single MCPS/MCPTS card in a non-redundant installation. The AIM100, designed for use in the
NPT-1050 metro access platform, offers a choice of configuration options, including:
Aggregate ports:
2 x 10 GbE SFP+ based interfaces
4 x GbE CSFP based interfaces
2 x GbE SFP based interfaces
1 x STM-1/4/16 SFP based interface
NOTE: The MCP slot can be equipped with the MCP30B and an NVM (CF).
The following figure identifies the slot arrangement in the NPT-1030 platform.
Figure 5-2: NPT-1030 platform slots layout
The following table lists the modules that can be configured in each NPT-1030 slot.
DC PSA DC PSB AC PS MS XS A XS B TS 1# TS 2# TS 3# FS
INF-B1U
AC_PS-B1U
MCP30B
XIO30_4
XIO30Q_1&4
XIO30_16
PME1_21⁸
PME1_63⁹
PM345_3
SMD4
SMS4
SMD1B
SMQ1
SMQ1&4
SMS16
DMFE_4_L1
DMFX_4_L1
DMGE_1_L1
DMGE_4_L1
DMFE_4_L2
DMFX_4_L2
DMGE_4_L2
8 The PME1_21/PME1_21B module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced
conversion unit, the xDDF-21.
9 The PME1_63 module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced conversion
unit, the xDDF-21.
DMXE_22_L2
DMCES1_4
All cards support live insertion. The NPT-1030 platform provides full 1+1 redundancy in power feeding,
cross connections, and the TMU, as well as 1:N redundancy in the fans. Failure of the MCP30B does not
affect any existing traffic on the platform.
All cards are connected using a backplane that supports one traffic connector for the connection between
the NPT-1030 and the EXT-2U.
The NPT-1030 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks.
The NPT-1030 main controller card (MCP30B) is the most essential card of the system, creating virtually a
complete standalone native packet system. Moreover, it accommodates one service traffic slot for flexible
configuration of virtually any type of PDH, SDH, and Ethernet interfaces. This integrated flexible design
ensures a very compact equipment structure and reduces costs, making NPT an ideal native choice for
the access and metro access layers.
NPT-1030 control and communication functions include:
Internal control and processing
Communication with external equipment and management
Network element (NE) software and configuration backup
Built-in Test (BIT)
NOTE: The NPT-1030 supports in band and DCN management connections for PB and MPLS:
4 Mbps policer for PB UNI, which connects to the external DCN
No rate limit for the MNG port (rate up to 100 Mbps full duplex)
5.4 Timing
NPT-1030 provides high-quality system timing to all traffic modules and functions in compliance with
applicable ITU-T recommendations for functionality and performance.
The main component in the NPT-1030 synchronization subsystem is the timing and synchronization unit
(TMU). Timing is distributed redundantly from the TMUs to all traffic and matrix cards, minimizing unit
types and reducing operation and maintenance costs.
The TMU and the internal and external timing paths are fully redundant. The high-level distributed BIT
mechanism ensures top performance and availability of the synchronization subsystem. If a hardware
failure occurs, the redundant synchronization subsystem takes over the timing control with no traffic
disruption.
To support reliable timing, NPT-1030 provides multiple synchronization reference options. Up to four
timing references can be monitored simultaneously.
In NPT-1030, any timing signal can be selected as a reference source. The TMU provides direct control over
the source selection (received from the system software) and the frequency control loop. The definition of
the synchronization source depends on the source quality and synchronization mode of the network timing
topology (set by the EMS-APT or LCT-APT).
Synchronization references are classified at any given time according to a predefined priority and prevailing
signal quality. The NPT-1030 synchronization subsystem synchronizes to the best available timing source
using the Synchronization Status Marker (SSM) protocol. The TMU is frequency-locked to this source,
providing internal system and SDH line transmission timing. The shelf is synchronized to this central timing
source.
NPT-1030 provides synchronization outputs for the synchronization of external equipment within the
exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to synchronize
any peripheral equipment or switch.
NPT-1030 supports synchronous Ethernet as per ITU-T G.8261.
NOTE: The NPT-1030 platform must be configured with two XIO cards. However, in a pure
optical configuration (with an OBC card only), the XIO cards are not required.
5.6.1 INF_B1U
The INF_B1U is a DC power-filter module for high-power applications that can be plugged into the
NPT-1030 platform. Two INF_B1U modules are needed for power feeding redundancy. It performs the
following functions:
High-power INF for up to 200 W for more than one DMXE_22_L2, DMGE_4_L2, or DMCES1_4 module
Single DC power input and power supply for all modules in the NPT-1030
CAUTION: When more than one DMXE_22_L2, DMGE_4_L2, or DMCES1_4 card is installed in
the NPT-1030, an INF_B1U must be configured in the platform.
5.6.2 AC_PS-B1U
The AC_PS-B1U is an AC power module that can be plugged into the NPT-1030 platform. It performs the
following functions:
Converts AC power to DC power for the NPT-1030.
Filters input for the entire NPT-1030 platform.
Supplies adjustable output voltage for fans in the NPT-1030.
Supplies up to 180 W.
Figure 5-6: AC_PS-B1U front panel
5.6.3 FCU_1030
The FCU_1030 is a pluggable fan control module for high power applications with four fans for cooling the
NPT-1030 platform. The FCU_1030 fans provide cooling air in an environment that dissipates up to 200 W,
and are intended to work in conjunction with DMGE_4_L2 modules. The fans’ running speed can be low,
normal, or turbo. The speed is controlled by the MCP30B according to the environmental temperature and
fan failure status.
The following figure shows the front panel of the FCU_1030.
Figure 5-7: FCU_1030 front panel
5.6.4 MCP30B
The MCP30B is the second generation of MCP30 cards and serves as the main processing card of the
NPT-1030. It integrates functions such as control, communication, and overhead processing. It provides
the following functions:
Control-related functions:
Communications with and control of all other modules in the NPT-1030 and EXT-2U through the
backplane (by the CPU)
Communications with the EMS-APT, LCT-APT, or other NEs through a management interface
(MNG) or DCC
Routing and handling of up to 32 x RS DCC, 32 x MS DCC (total 32 channels), and two clear
channels
Alarms and maintenance
Fan control
NOTE: An MCP30 ICP can be used to distribute the concentrated auxiliary connector into
dedicated connectors for each function.
NOTE: ACT, FAIL, MJR, and MNR LEDs are combined to show various failure reasons during
the system boot. For details, see the Troubleshooting Using Component Indicators section in
the NPT-1030 Installation, Operation, and Maintenance Manual.
NOTE: The connectivity of the XIO30_4 to the Tslots (TS1, TS2, and TS3) is limited to up to 6 x
VC-4s. There are no limitations for Hardware Rev. B00 and above.
The total NPT-1030 capacity accommodating two XIO30_4 cards is 2.5 Gbps. The capacity is
distributed as follows: 1 slot with 622 Mbps and 2 slots with 2 x 622 Mbps.
The slot capacity is depicted in the following figure.
Figure 5-9: NPT-1030 with two XIO30_4 slot capacity
XIO30Q_1&4: In addition to 15 Gbps cross-connect matrix and TMU, this card provides four
STM-1/STM-4 compatible aggregate line interfaces based on SFP modules. The interface rate, STM-1 or
STM-4, is configurable per port from the management. The SFP housings on the XIO30Q_1&4
panel support STM-1 and STM-4 optical transceivers with a pair of LC optical connectors (bidirectional
STM-1 and STM-4 Tx/Rx over a single fiber using two different lambdas). STM-1 electrical SFPs with
coaxial connectors are also supported.
XIO30_16: In addition to 15 Gbps cross-connect matrix and TMU, this card provides one STM-4/16
aggregate line interface based on the SFP module. The SFP housing on the XIO30_16 panel supports
STM-4 or STM-16 optical transceivers with a pair of LC optical connectors (bidirectional STM-4 and
STM-16 Tx/Rx over a single fiber using two different lambdas).
The total NPT-1030 capacity accommodating two XIO30Q_1&4 or two XIO30_16 cards is 15 Gbps. The
capacity is evenly distributed between the three I/O slots and is 2.5 Gbps per slot.
The slot capacity is depicted in the following figure.
Figure 5-10: NPT-1030 with two XIO30Q_1&4 or two XIO30_16 slot capacity
The following figures show the front panel of the XIO30 cards.
Figure 5-11: XIO30_4 front panel
The panels of the XIO30_4, XIO30Q_1&4, and XIO30_16 include the LED indications described in the
following table.
Type Designation
Electrical/optical GbE interface module with L2 functionality DMGE_4_L2
Optical 10 GbE and GbE interface module with L2 functionality DMXE_22_L2
CES services for STM-1/STM-4 interfaces module DMCES1_4
The NPT-1021 can be fed by 24 VDC, -48 VDC, or 110 to 230 VAC. In DC power feeding, two INF modules
can be configured in the two power module slots for redundancy. One double slot INF module with
dual-feeding can be configured as well. AC power feeding requires the use of a conversion module to
implement AC/DC conversion.
The NPT-1021 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform
design also makes this platform suitable for street cabinet use, withstanding temperatures up to 70°C.
The NPT-1021 can also be configured as an expanded platform when combined with the EXT-2U expansion
unit, as illustrated in the following figure.
Figure 6-2: NPT-1021 with EXT-2U expansion unit
Typical power consumption of the NPT-1021 is 40 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the NPT System
Specifications.
6.2 CPS50
CPS50 is a T-slot 60 Gbps central packet switch card for the NPT-1021 or NPT-1020 with up to 4 x 10 GE
aggregate ports or 2 x 10GE aggregate ports plus 4 x GbE ports. The card supports the following main
functions:
60 Gbps packet switching capacity with MPLS-TP and PB functionality
Flexible port type definition for front panel ports:
Two SFP+ based 10GE ports, each can be configured as 10GBase-R or 10GBase-W with EDC
support
Two SFP+/SFP/CSFP compatible cages, each one can be configured as:
1 x 10 GE port with SFP+ (10GBase-R/10GBase-W with EDC support)
1 x GbE port with SFP (1000Base-X)
2 x GbE ports with CSFP (1000Base-X)
Summary – supported port assignments in CPS50:
4 x 10GE
3 x 10GE + 2 x GbE
2 x 10GE + 4 x GbE
When the CPS50 is assigned and its switch engine is enabled, the 10G switch on the base card is
disabled, and the built-in 12 x GbE ports on the base card and the Ethernet buses of the three E-slots are
connected to the switch core of the CPS50.
A CPS50 card can be inserted and replaced without affecting the traffic flow.
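The three supported CPS50 port mixes listed above can be validated with a one-line lookup. The sketch below is a hypothetical configuration check, not part of the NPT management software; only the three allowed combinations come from the manual.

```python
# Sketch validating a CPS50 front-panel port assignment against the supported
# combinations listed above. The representation and function are hypothetical.
SUPPORTED_MIXES = {(4, 0), (3, 2), (2, 4)}   # (10GE ports, GbE ports)

def cps50_mix_supported(ten_ge: int, gbe: int) -> bool:
    return (ten_ge, gbe) in SUPPORTED_MIXES

print(cps50_mix_supported(4, 0))  # True  - 4 x 10GE
print(cps50_mix_supported(2, 4))  # True  - 2 x 10GE + 4 x GbE
print(cps50_mix_supported(1, 6))  # False - not a listed combination
```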
The following figure shows the front panel of the CPS50.
NOTE: NPT-1021 supports in band and DCN management connections for PB and MPLS:
4 Mbps policer for PB UNI which connects to external DCN
10 Mbps shaper for MCC packet to MCP
No rate limit for the MNG port (rate up to 100 Mbps full duplex)
6.5 Timing
NPT-1021 provides high-quality system timing to all traffic modules and functions in compliance with
applicable ITU-T recommendations for functionality and performance. Timing functionality and
performance should comply with ITU-T G.781, G.783 and G.813.
The main component in the NPT-1021 synchronization subsystem is the timing and synchronization unit
(TMU). Timing is distributed from the TMUs to all traffic and matrix cards, to minimize unit types and
reduce operation and maintenance costs.
The timing system in the NPT-1021 includes two clock domains: the System TMU and the PTP TMU. The
System TMU clock sources can be T1/T2/T3, SyncE, or the PTP slave clock; the PTP TMU clock sources can
be T0, SyncE, or an external 1PPS+ToD.
To support reliable timing, the NPT-1021 provides multiple synchronization reference options:
2 x 2 MHz (T3) external timing input sources
2 x 2 Mbps (T3) external timing input sources
E1/T1 interfaces
STM-1 of CES cards
Local internal clock
Holdover mode
SyncE
1588V2 – Master, Slave, and transparent
1PPS+ToD interface
In the NPT, any timing signal can be selected as a reference source. The TMU provides direct control over
the source selection (received from the system software) and the frequency control loop. The definition of
the synchronization source depends on the source quality and synchronization mode of the network timing
topology (set by the EMS-NPT or LCT-NPT):
Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. NPT-1021 synchronization subsystem synchronizes to the best available
timing source using the SSM (ESMC) protocol. The TMU is frequency-locked to this source, providing
internal system timing. The platform is synchronized to this central timing source.
NPT-1021 provides synchronization outputs for the synchronization of external equipment within the
exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
NPT-1021 supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
The IEEE 1588 Precision Time Protocol (PTP) provides a standard method for high precision synchronization
of network connected clocks. PTP is a time transfer protocol enabling slave clocks to synchronize to a
known master clock, ensuring that multiple devices operate using the same time base. The protocol
operates in master/slave configuration using UDP packets over IP.
6.9.1 INF-B1U
The INF-B1U is a -48 VDC power-filter module for high-power applications that can be plugged into the
NPT-1020/NPT-1021 platforms. Two INF-B1U modules are needed for power feeding redundancy. It
performs the following functions:
High-power INF for up to 200 W
Single DC power input and power supply for all modules in the NPT-1020/NPT-1021
Input filtering function for the entire NPT-1020/NPT-1021 platforms
Adjustable output voltage for fans in the NPT-1020/NPT-1021
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply when under-/over-voltage is detected
6.9.2 INF-B1U-24V
The INF-B1U-24V is a 24 VDC power-filter module for high-power applications that can be plugged into the
NPT-1020/NPT-1021 platforms. Two INF-B1U-24V modules are needed for power feeding redundancy. It
performs the following functions:
Power supply for all modules in the NPT-1020/NPT-1021 platforms
Input filtering function for the entire NPT-1020/NPT-1021 platforms
Adjustable output voltage for fans in the NPT-1020/NPT-1021
Support of fan power loss alarm and LED display
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply in the event of under-/over-voltage
6.9.3 INF-B1U-D
The INF-B1U-D is a DC power-filter module that can be plugged into the NPT-1020/NPT-1021 platforms. It
performs the following functions:
Dual DC power input and power supply for all modules in the NPT-1020/NPT-1021
Input filtering for the entire NPT-1020/NPT-1021 platforms
Adjustable output voltage for fans in the NPT-1020/NPT-1021
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply when under-/over-voltage is detected
Figure 6-8: INF-B1U-D front panel
6.9.4 AC_PS-B1U
The AC_PS-B1U is an AC power module that can be plugged into the NPT-1020/NPT-1021 platforms. It
performs the following functions:
Converts AC power to DC power for the NPT-1020/NPT-1021.
Filters input for the entire NPT-1020/NPT-1021 platforms.
Supplies adjustable output voltage for fans in the NPT-1020/NPT-1021.
Supplies up to 180 W.
The NPT-1020 can be fed by 24 VDC, -48 VDC, or 110 VAC to 230 VAC. In DC power feeding, two INF
modules can be configured in two power supply module slots for redundant power supply. AC power
feeding requires the use of a conversion module to implement AC/DC conversion.
The NPT-1020 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform
design also makes this platform suitable for street cabinet use, withstanding temperatures up to 70°C.
The NPT-1020 can also be configured as an NPT-1020E, when combined with the EXT-2U expansion unit, as
illustrated in the following figure.
Figure 7-2: NPT-1020 platform with expansion unit
Typical power consumption for the NPT-1020 is 50 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the Neptune
Installation and Maintenance Manual and the Neptune System Specifications.
7.2 CPS50
CPS50 is a T-slot 60 Gbps central packet switch card for the NPT-1021 or NPT-1020 with up to 4 x 10 GE
aggregate ports or 2 x 10GE aggregate ports plus 4 x GbE ports. The card supports the following main
functions:
60 Gbps packet switching capacity with MPLS-TP and PB functionality
Flexible port type definition for front panel ports:
Two SFP+ based 10GE ports, each can be configured as 10GBase-R or 10GBase-W with EDC
support
Two SFP+/SFP/CSFP compatible cages, each one can be configured as:
1 x 10 GE port with SFP+ (10GBase-R/10GBase-W with EDC support)
1 x GbE port with SFP (1000Base-X)
2 x GbE ports with CSFP (1000Base-X)
Summary – supported port assignments in CPS50:
4 x 10GE
3 x 10GE + 2 x GbE
2 x 10GE + 4 x GbE
When the CPS50 is assigned and its switch engine is enabled, the 10G switch on the base card is
disabled, and the built-in 12 x GbE ports on the base card and the Ethernet buses of the three E-slots are
connected to the switch core of the CPS50.
A CPS50 card can be inserted and replaced without affecting the traffic flow.
The following figure shows the front panel of the CPS50.
NOTE: The NPT-1020 supports in band and DCN management connections for PB and MPLS:
4 Mbps policer for PB UNI, which connects to the external DCN
10 Mbps shaper for MCC packets to the MCP
No rate limit for the MNG port (rate up to 100 Mbps full duplex)
7.5 Timing
The NPT-1020 provides high-quality system timing to all traffic modules and functions in compliance with
applicable ITU-T recommendations for functionality and performance. Timing functionality and
performance should comply with ITU-T G.781, G.783 and G.813.
The main component in the NPT-1020 synchronization subsystem is the timing and synchronization unit
(TMU). Timing is distributed from the TMUs to all traffic and matrix cards, to minimize unit types and
reduce operation and maintenance costs.
The timing system in the NPT-1020 includes two clock domains: the System TMU and the PTP TMU. The
System TMU clock sources can be T1/T2/T3, SyncE, or the PTP slave clock; the PTP TMU clock sources can
be T0, SyncE, or an external 1PPS+ToD.
To support reliable timing, the NPT-1020 provides multiple synchronization reference options:
2 x 2 MHz (T3) external timing input sources
2 x 2 Mbps (T3) external timing input sources
STM-n line timing from any SDH interface card
E1 2M PDH line timing from any PDH interface card
Local internal clock
Holdover mode
SyncE
1588V2 – Master, Slave, and transparent
1PPS+ToD interface
In the NPT, any timing signal can be selected as a reference source. The TMU provides direct control over
the source selection (received from the system software) and the frequency control loop. The definition of
the synchronization source depends on the source quality and synchronization mode of the network timing
topology (set by the EMS-APT or LCT-APT):
Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. The NPT-1020 synchronization subsystem synchronizes to the best available
timing source using the SSM protocol. The TMU is frequency-locked to this source, providing internal
system and SDH line transmission timing. The platform is synchronized to this central timing source.
The NPT-1020 provides synchronization outputs for the synchronization of external equipment within
the exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
The NPT-1020 supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
The IEEE 1588 Precision Time Protocol (PTP) provides a standard method for high precision synchronization
of network connected clocks. PTP is a time transfer protocol enabling slave clocks to synchronize to a
known master clock, ensuring that multiple devices operate using the same time base. The protocol
operates in master/slave configuration using UDP packets over IP.
7.8.1 INF-B1U
The INF-B1U is a -48 VDC power-filter module for high-power applications that can be plugged into the
NPT-1020/NPT-1021 platforms. Two INF-B1U modules are needed for power feeding redundancy. It
performs the following functions:
High-power INF for up to 200 W
Single DC power input and power supply for all modules in the NPT-1020/NPT-1021
Input filtering function for the entire NPT-1020/NPT-1021 platforms
Adjustable output voltage for fans in the NPT-1020/NPT-1021
7.8.2 INF-B1U-24V
The INF-B1U-24V is a 24 VDC power-filter module for high-power applications that can be plugged into the
NPT-1020/NPT-1021 platforms. Two INF-B1U-24V modules are needed for power feeding redundancy. It
performs the following functions:
Power supply for all modules in the NPT-1020/NPT-1021 platforms
Input filtering function for the entire NPT-1020/NPT-1021 platforms
Adjustable output voltage for fans in the NPT-1020/NPT-1021
Support of fan power loss alarm and LED display
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply in the event of under-/over-voltage
Single DC power input: 18 VDC to 36 VDC
Maximum power consumption of 85 W (the CPS50 card is not supported)
The front panel of the INF-B1U-24V is shown in the following figure.
7.8.3 INF-B1U-D
The INF-B1U-D is a DC power-filter module that can be plugged into the NPT-1020/NPT-1021 platforms. It
performs the following functions:
Dual DC power input and power supply for all modules in the NPT-1020/NPT-1021
Input filtering for the entire NPT-1020/NPT-1021 platforms
Adjustable output voltage for fans in the NPT-1020/NPT-1021
7.8.4 AC_PS-B1U
The AC_PS-B1U is an AC power module that can be plugged into the NPT-1020/NPT-1021 platforms. It
performs the following functions:
Converts AC power to DC power for the NPT-1020/NPT-1021.
Filters input for the entire NPT-1020/NPT-1021 platforms.
Supplies adjustable output voltage for fans in the NPT-1020/NPT-1021.
Supplies up to 180 W.
Type    Designation
Electrical PDH E3/DS-3 interface Tslot module    PM345_3
2 x STM-1 electrical or optical ports SDH interface card    SMD1B
1 x STM-4 port SDH interface card    SMS4
CES services for STM-1/STM-4 interfaces module    DMCES1_4
Electrical GbE interface module with direct connection to the packet switch    DHGE_4E
Optical GbE interface module with direct connection to the packet switch    DHGE_8
CES multi-service card with 16 x E1/T1 interfaces    MSE1_16
CES multi-service card with 8 x E1/T1 and 2 x STM-1/OC-3 interfaces    MSC_2_8
CES multi-service module for 4 x OC3/STM-1 or 1 x OC12/STM-4 interfaces    MS1_4
CES multi-service module with 32 x E1/T1 interfaces    MSE1_32
Central packet switching card    CPS50
NFV module with 4 x GbE front panel ports for Virtual Network Functions    NFVG_4
The NPT-1010 is a single board system with all traffic interfaces housed on its front panel and an optional mini
slot for CES E1 and 1588V2. The interfaces are identical on both NPT-1010 options, including:
4 x 10/100/1000BaseT interfaces (with PoE+)
4 x 100/1000BaseT, SFP based, interfaces
The NPT-1010 performs the following functions:
Integrated switching, timing, system control, and in band management (MCC)
Control-related functions:
Communications and control
NOTE: ACT, FAIL, MJR, and MNR LEDs are combined to show various failure reasons during
the system boot. For details, see the Troubleshooting Using Component Indicators section in
the NPT-1010 Installation, Operation, and Maintenance Manual.
The four SFP housings on the NPT-1010 support four types of SFP module:
GE SFP optical transceivers with a pair of LC optical connectors
GE SFP electrical transceivers with an RJ-45 connector
Bidirectional GE SFP optical transceivers with one LC optical connector (bidirectional GE Tx/Rx over a
single fiber using two different lambdas)
Colored GE SFP optical transceivers with a pair of LC optical connectors (colored C/DWDM SFP)
NOTE: NPT-1010 supports in band and DCN management connections for PB and MPLS:
4 Mbps policer for the PB UNI that connects to the external DCN
3 Mbps shaper for MCC packets to the MCP
No rate limit for the MNG port (rate up to 100M full duplex)
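The policing principle behind these limits can be sketched with a simple token bucket (illustrative only; the actual policer implementation, burst size, and accounting in the NPT-1010 are internal):

    # Illustrative token-bucket policer, e.g. a 4 Mbps limit on management traffic.
    class TokenBucketPolicer:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0      # token fill rate, bytes per second
            self.burst = burst_bytes        # bucket depth
            self.tokens = burst_bytes
            self.last = 0.0

        def allow(self, now, packet_bytes):
            """True if the packet conforms (forwarded), False if it exceeds the rate (dropped)."""
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True
            return False

    policer = TokenBucketPolicer(rate_bps=4_000_000, burst_bytes=32_000)
    print(policer.allow(now=0.001, packet_bytes=1500))  # -> True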
CES services for E1/T1 interfaces module with 1588V2 slave TMSE1_8
8.3.1 TMSE1_8
The TMSE1_8 is a CES and timing module that provides Circuit Emulation Services (CES) for up to 8 x E1/T1
interfaces. It supports the SAToP and CESoPSN standards and has a SCSI 36-pin connector for connecting
the customer E1/T1 signals. It also provides the Time of Day (ToD) and 1PPS signals for supporting Ethernet
timing per IEEE 1588v2 standard.
The front panel of the TMSE1_8 is shown in the following figure.
Figure 8-5: TMSE1_8 front panel
8.3.2 TM10
The TM10 is optional in the NPT-1010 mini slot. It provides the Time of Day (ToD) and 1PPS signals for
supporting Ethernet timing per IEEE 1588v2 standard.
The front panel of the TM10 is shown in the following figure.
Figure 8-6: TM10 front panel
Type    Designation
Electrical and Optical GbE interface module with direct connection to the packet switch    DHGE_16
Optical GbE interface module with direct connection to the packet switch    DHGE_24
Optical 10GE interface module with direct connection to the packet switch    DHXE_2
Optical 10GE interface module with direct connection to the packet switch    DHXE_4
Optical 10GE interface module with OTN wrapping and direct connection to the packet switch    DHXE_4O
NFV module with 4 x GbE front panel ports for Virtual Network Functions    NFVG_4
NOTE: The PME1_21 supports only balanced E1s directly from its connectors. For unbalanced
E1s, configure an xDDF 21, an external DDF with E1 balanced-to-unbalanced conversion.
9.1.2 PME1_21B
The PME1_21B is a Tslot module with 21 x E1 (2.048 Mbps) balanced electrical interfaces. The PME1_21B
can be configured in any Tslot and supports retiming of up to 8 x E1s.
The cabling of the PME1_21B module is directly from the front panel with a 100-pin SCSI female connector.
PME1_63 enables easy expansion of I/O slots equipped with PME1_21B modules by additional 42 x E1
interfaces, while significantly reducing the cost per E1 interface. This is done by removing the working
PME1_21B, replacing it with a PME1_63, and connecting it with appropriate cables. The I/O slot must then
be reassigned through the management as a PME1_63. The attributes (including cross connects, trails, and
so on) of the first 21 x E1s are retained as they were in the replaced PME1_21B. For a detailed description
of this procedure, see the corresponding IMM.
NOTES:
The PME1_21B is backward compatible in the Neptune product line from the first version
(V1.2) and in the BG product line from V14.
When the card is installed in a platform running one of the supported previous versions, it
emulates a PME1_63 card but with 21 x E1s only.
The management system will display PME1_63 rather than PME1_21B.
When trying to assign the PME1_21, a "Card-Underutilized" warning appears. Ignore this alarm.
In the inventory information the card is displayed as PME1_63. This is normal for this card.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E1 interfaces are listed in the following table.
NOTE: The PME1_21B supports only balanced E1s directly from its connectors. For
unbalanced E1s, configure an xDDF 21, an external DDF with E1 balanced-to-unbalanced
conversion.
9.1.3 PME1_63
The PME1_63 is a Tslot module with 63 x E1 (2.048 Mbps) balanced electrical interfaces. The PME1_63 can
be configured in any Tslot and supports retiming of up to 63 x E1s. It supports LOS inhibit functionality (very
low sensitivity signal detection), which means that the LOS alarm is masked for signal levels down to -20 dB.
The cabling of the PME1_63 module is directly from the front panel with one dense unique 272-pin VHDCI
female connector.
PME1_63 enables easy expansion of I/O slots equipped with PME1_21 modules by additional 42 x E1
interfaces, while significantly reducing the cost per E1 interface. This is done by removing the working
PME1_21, replacing it with a PME1_63, and connecting it with appropriate cables. The I/O slot must then
be reassigned through the management as a PME1_63. The attributes (including cross connects, trails, and
so on) of the first 21 x E1s are retained as they were in the replaced PME1_21. For a detailed description of
this procedure, see the corresponding IMM.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E1 interfaces are listed in the following table.
NOTE: The PME1_63 supports only balanced E1s directly from its connectors. For unbalanced
E1s, configure an xDDF 21, an external DDF with E1 balanced-to-unbalanced conversion.
9.1.4 PM345_3
The PM345_3 is a Tslot module with 3 x E3/DS-3 (34 Mbps/45 Mbps) unchannelized electrical interfaces.
Each interface can be configured independently as E3 or DS-3 by the EMS-APT or the LCT-APT. The
PM345_3 can be configured in any Tslot.
The cabling of the PM345_3 module is directly from the front panel with six DIN 1.0/2.3 connectors.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E3/DS-3/STS-1 interfaces are listed in the following table.
9.2.2 SMQ1
The SMQ1 is an SDH interface card used to expand ring closures and SDH tributaries. It provides four STM-1
ports, which can be optical or electrical.
SMQ1 enables easy expansion of I/O slots equipped with SMD1B modules by additional two STM-1
interfaces, while significantly reducing the cost per STM-1 interface. This is done by removing the working
SMD1B, replacing it with an SMQ1, and connecting it with appropriate fibers. The I/O slot must then be
reassigned through the management as an SMQ1. The attributes (including cross-connects, trails, and so on)
of the first two STM-1s are retained as they were in the replaced SMD1B. For a detailed description of this
procedure, see the corresponding IMM.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1 interfaces are listed in the following table.
9.2.3 SMQ1&4
The SMQ1&4 is an SDH interface card used to expand ring closures and SDH tributaries. It provides four
configurable STM-1 or STM-4 ports, which can be optical or electrical for STM-1 configuration. The interface
rate is configurable per port from the management.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1/STM-4 interfaces are listed in the following table.
9.2.4 SMS4
The SMS4 is an SDH interface card used to expand ring closures and SDH tributaries. It provides one STM-4
port.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-4 interfaces are listed in the following table.
9.2.5 SMD4
The SMD4 is an SDH interface card used to expand ring closures and SDH tributaries. It provides two STM-4
ports.
NOTE: The NPT-1030 supports SMD4 only in TS2 and TS3, and is only applicable in an ADM16
or QADM-1/4 (4 x ADM-1/4) system.
A maximum of two SMD4 modules can be installed in the NPT-1030, totaling four STM-4 interfaces in the
platform.
9.2.6 SMS16
The SMS16 is an SDH interface card used to expand ring closures and SDH tributaries. It provides one
STM-16 SFP-based port.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-16 interfaces are listed in the following table.
The cabling of the DMFE_4_L1 module is directly from the front panel with four RJ-45 connectors.
9.3.2 DMFX_4_L1
The DMFX_4_L1 is an EoS processing module with L1 functionality. It provides four optical FE (also referred
to as FX) LAN interfaces for the insertion of SFP transceivers, and four EoS WAN interfaces. The total WAN
bandwidth is up to 4 x VC-4. The DMFX_4_L1 can be configured in any Tslot.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of FX interfaces are listed in the following table.
9.3.3 DMGE_1_L1
The DMGE_1_L1 is an L1 data Tslot module with one GbE interface on the LAN side and one EoS interface
on the WAN side. The total WAN bandwidth is 4 x VC-4. The DMGE_1_L1 supports electrical or optical
inputs (both inputs are internally connected to the GbE interface) as follows:
RJ-45 connector for connecting electrical signals
SFP housing for connecting optical signals
The DMGE_1_L1 can be configured in any Tslot.
NOTE: The DMGE_1_L1 is supported only by the NPT-1030 platform (with XIO30_4 only).
A maximum of three DMGE_1_L1 modules can be installed in the NPT-1030, totaling three GbE
(electrical/optical) interfaces in the platform.
9.3.4 DMGE_4_L1
The DMGE_4_L1 is an EoS processing module with L1 functionality. It provides four GbE LAN interfaces for
the insertion of SFP transceivers, and four EoS WAN interfaces. Both electrical and optical GbE interfaces
are supported by insertion of different types of SFP - copper SFP for electrical GbE with RJ45 connector, and
optical SFP for optical GbE with LC connectors. The total WAN bandwidth is up to 16 x VC-4.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of GbE (electrical/optical) interfaces are listed in the following table.
9.4.2 DMFX_4_L2
The DMFX_4_L2 is an EoS/MoT processing module with L2 functionality (MPLS ready). It provides four
optical FX LAN interfaces for the insertion of SFP transceivers, and 8 x EoS WAN interfaces. The total WAN
bandwidth is up to 4 x VC-4. The DMFX_4_L2 can be configured in any Tslot.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of FX interfaces are listed in the following table.
9.4.3 DMGE_2_L2
The DMGE_2_L2 is an L2 data Tslot module with two GbE interfaces on the LAN side and 64 x EoS interfaces
on the WAN side. The module supports MPLS by appropriate licensing. The total WAN bandwidth is 14 x
VC-4. The DMGE_2_L2 supports electrical or optical GbE by insertion of different types of SFP – copper or
optical.
The DMGE_2_L2 can be configured in any Tslot.
DMGE_4_L2 enables easy expansion of I/O slots equipped with DMGE_2_L2 modules by additional two GbE
interfaces, while significantly reducing the cost per GbE interface. This is done by removing the working
DMGE_2_L2, replacing it with a DMGE_4_L2, and connecting it with appropriate fibers. The I/O slot must
then be reassigned through the management as a DMGE_4_L2. The attributes of the first two GbE
interfaces are retained as they were in the replaced DMGE_2_L2. For a detailed description of this
procedure, see the corresponding IMM.
The maximum number of modules that can be installed in each of the supported platforms and the
resulting total number of GbE (electrical/optical) interfaces are listed in the following table.
The maximum number of modules that can be installed in the NPT-1200 and the resulting total number of
GbE interfaces are listed in the following table.
9.4.4 DMGE_4_L2
The DMGE_4_L2 is an L2 data Tslot module with four GbE interfaces on the LAN side and 64 x EoS or up to
30 x MoT interfaces on the WAN side. The module supports MPLS by appropriate licensing. The total WAN
bandwidth can be configured to 16 x VC-4. The DMGE_4_L2 supports electrical or optical GbE by insertion
of different types of SFP – copper or optical.
NOTE: The DMGE_4_L2 can be configured in any Tslot in the NPT-1200, except for TS5.
DMGE_4_L2 enables easy expansion of I/O slots equipped with DMGE_2_L2 modules by additional two GbE
interfaces, while significantly reducing the cost per GbE interface. This is done by removing the working
DMGE_2_L2, replacing it with a DMGE_4_L2, and connecting it with appropriate fibers. The I/O slot must
then be reassigned through the management as a DMGE_4_L2. The attributes of the first two GbE
interfaces are retained as they were in the replaced DMGE_2_L2. For a detailed description of this
procedure see the corresponding IMM.
NOTE: The expansion of a slot capacity (accommodating a DMGE_2_L2 module) from two GbE
to four GbE interfaces by installing a DMGE_4_L2 is not relevant for the NPT-1030, as it
doesn't support the DMGE_2_L2.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of GbE (electrical/optical) interfaces are listed in the following table.
NOTE: It is highly recommended to install the DMGE_4_L2 close to the fan units (FCUs) in TS2
and TS3 of the NPT-1030 and TS2, TS3, TS4, and TS6 of the NPT-1200.
9.4.5 DMGE_8_L2
The DMGE_8_L2 is an L2 data Tslot module with 8 GbE interfaces on the LAN side and 96 x EoS or up to 60 x
MoT interfaces on the WAN side. The module occupies a double slot in the Tslot module space and can be
installed only in slot pairs TS1+TS2 and TS6+TS7 of the NPT-1200. A spacer between each of these slot pairs
must be removed to enable the installation of the DMGE_8_L2. The procedure for removing this spacer is
described in the NPT-1200 Installation, Operation, and Maintenance Manual.
The module supports MPLS by appropriate licensing. The total WAN bandwidth is 32 x VC-4. The
DMGE_8_L2 has two combo ports and six optical ports. The combo ports support direct connection of
electrical signals through dedicated RJ-45 connectors, or optical signals through SFP housings. The other six
ports are SFP-based and enable electrical or optical GbE by insertion of different types of SFP – copper or
optical.
NOTE: The DMGE_8_L2 is supported only by the NPT-1200 platform (up to two modules in
slots TS1+TS2 and TS6+TS7).
NOTE:
It is highly recommended to offer and install one DMGE_8_L2 instead of two DMGE_4_L2
cards.
The DMGE_8_L2 supports in band management over MOE by the NPT-1200.
Because the DMGE_8_L2 occupies a double slot, it can be installed only in two adjacent horizontal slots
(TS1+TS2 and TS6+TS7) in the NPT-1200. Therefore, a maximum of two DMGE_8_L2 modules can be
installed in the NPT-1200, totaling 16 GbE (electrical/optical) interfaces per platform.
9.4.6 DMXE_22_L2
The DMXE_22_L2 is an L2 (MPLS-TP ready) data Tslot module with two 10GbE and 2 x GbE interfaces on the
LAN side and 64 x EoS or up to 30 x MoT interfaces on the WAN side. The module occupies a single slot in
the Tslot module space. The module supports MPLS by appropriate licensing. The total WAN bandwidth is
16 x VC-4. The card supports 1588v2 master, slave, and transparent modes.
NOTE: The DMXE_22_L2 supports unique TM. Make sure to read about the card TM
functionalities and features before the implementation. See DMXE_22_L2 Traffic
Management (TM).
NOTE: The DMXE_22_L2 supports in band management over MoE by the NPT-1200.
The 10 GbE ports use new SFP+ (OTP10_xx) transceivers that provide 10 GbE connectivity in a small form
factor, similar in size to the legacy SFPs. The SFP+ enables design modules with higher density and lower
power consumption.
The two GbE ports are SFP-based and enable electrical or optical GbE by insertion of different types of SFP
– copper or optical.
NOTE: The DMXE_22_L2 is supported in the NPT-1030 with up to 2 modules, and in the
NPT-1200 platform with up to four modules.
The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of GbE (electrical/optical) and 10 GbE interfaces are listed in the following table.
Table 9-37: DMXE_22_L2 modules, GbE, and 10 GbE interfaces per platform
Platform    Max. DMXE_22_L2 modules    Max. GbE (electrical/optical) interfaces    Max. 10 GbE interfaces
NPT-1030    2    4    4
NPT-1200    4    8    8
9.4.7 DMXE_48_L2
The DMXE_48_L2 is an L2 (MPLS-ready) data Tslot module with four 10GbE and 8 x GbE interfaces on the
LAN side and 96 x EoS/MoT interfaces on the WAN side. The module occupies a double slot in the Tslot
module space and can be installed only in slot pairs TS1+TS2 and TS6+TS7 of the NPT-1200. A spacer
between each of these slot pairs must be removed to enable the installation of the DMXE_48_L2. The
procedure for removing this spacer is described in the NPT-1200 Installation, Operation, and Maintenance
Manual. The module supports MPLS by appropriate licensing. The total WAN bandwidth is 32 x VC-4. The
card supports 1588v2 master, slave, and transparent modes.
The 10 GbE ports use new SFP+ (OTP10_xx) transceivers that provide 10 GbE connectivity in a small form
factor, similar in size to the legacy SFPs. The SFP+ enables design modules with higher density and lower
power consumption relative to XFP transceivers.
The eight GbE ports are SFP-based and enable electrical or optical GbE by insertion of different types of SFP
– copper or optical.
NOTE: The DMXE_48_L2 is supported only in the NPT-1200 platform with XIO matrix (up to
two modules in slots TS1+TS2 and TS6+TS7).
NOTE: The DMXE_48_L2 supports in band management over MOE by the NPT-1200.
Because the DMXE_48_L2 occupies a double slot, it can be installed only in two adjacent horizontal slots
(TS1+TS2 and TS6+TS7) in the NPT-1200. Therefore, a maximum of two DMXE_48_L2 modules can be
installed in the NPT-1200, totaling 8 x 10 GbE and 16 x GbE (electrical/optical) interfaces per platform.
NOTES:
The DMCES1_4 can be installed in any Tslot in the NPT-1200, except for TS5.
The DMCES1_4 can be installed in the NPT-1030 with MCP30B and XIO30_16 or
XIO30Q_1&4 only.
The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of STM-1/STM-4 interfaces, are listed in the following table.
Connectivity to the packet network is made through one of the following options:
Direct 1.25G SGMII connection to central packet switch on CPS cards through backplane.
Connection to 3rd party device (router/switch) through SFP based GbE port on the front panel,
working in standalone mode with CESoETH and CESoIP/UDP encapsulation.
Each client port can be configured to support an STM-1 interface. Port No. 1 can also be configured to
support channelized STM-4; in this case, the other three ports are disabled.
NOTE: The front panel GbE port is not needed when the card is installed in the supported platforms,
as the connection is made through the backplane.
9.5.2 MSE1_16
The MSE1_16 is a CES multiservice card that provides CES for up to 16 x E1/T1 interfaces. It supports the
SAToP and CESoPSN standards and has a SCSI 100-pin female connector on the front panel for connecting
the E1/T1 customer signals.
Connectivity to the packet network is made by direct 1.25G SGMII connection to the central packet switch
on CPS card through the backplane.
NOTES:
When the MSE1_16 is installed in the NPT-1200 the card can be configured in any Tslot,
except for TS5.
The MSE1_16 card isn’t supported by the NPT-1800 and NPT-1200 with MCIPS320.
The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of E1/T1 interfaces are listed in the following table.
The cabling of the MSE1_16 module is directly from the front panel with a 100-pin SCSI female connector.
Figure 9-23: MSE1_16 front panel
9.5.3 MSC_2_8
The MSC_2_8 is a CES multiservice card that provides CES for up to 8 x E1/T1 and 2 x STM-1/OC-3
interfaces. It supports the SAToP and CESoPSN standards and has two SFP housings for connecting
STM-1/OC-3 customer signals and a 36-pin SCSI female connector for connecting E1/T1 customer signals on
front panel.
Connectivity to the packet network is made by direct 1.25G SGMII connection to central packet switch on
CPS card through backplane.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1 interfaces are listed in the following table.
Table 9-44: MSC_2_8 modules and STM-1/OC-3 and E1/T1 interfaces per platform
Platform    Max. MSC_2_8 modules    Max. STM-1/OC-3 interfaces    Max. E1/T1 interfaces
NPT-1020/NPT-1021    1    2    8
NPT-1050    3    6    24
9.5.4 MS1_4
The MS1_4 is a CES multiservice card that provides Circuit Emulation Services (CES) for up to 4 x STM-1
interfaces, or a single STM-4 interface. It supports the SAToP and CESoPSN standards and has four SFP
housings on the front panel for connecting STM-1 or STM-4 customer signals; in total it can support up to
252 E1 CES services. In addition, it supports the CESoETH and CESoMPLS emulation formats.
NOTE: The STM-4 interface is supported only in the leftmost port (P1) of the MS1_4.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1 interfaces are listed in the following table.
9.5.5 MSE1_32
MSE1_32 is a CES multiservice card that provides CES for up to 32 x E1/T1 balanced interfaces. It supports
the SAToP and CESoPSN standards and has two SCSI 100-pin female connectors on the front panel for
connecting the E1/T1 customer signals.
Connectivity to the packet network is made by direct 1.25G SGMII connection to the central packet switch
on CPS card through the backplane.
NOTES:
When the MSE1_32 is installed in the NPT-1200 the card can be configured in any Tslot,
except for TS5.
When the MSE1_32 is installed in the NPT-1800 the card can be configured in any Tslot,
except for TS22.
NOTE: Two external xDDF-21 units are required for connecting 32 x E1/T1 unbalanced
interfaces to the MSE1_32.
The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of E1/T1 interfaces are listed in the following table.
32 clock domains
Per-channel PM counter support
Per-channel alarm support
The cabling of the MSE1_32 module is directly from the front panel with two 100-pin SCSI female
connectors.
NOTES:
The DHGE_4E can be configured in any Tslot in the NPT-1200, except for TS5.
The DHGE_4E can be configured in Group I Tslots only in the NPT-1800, except for TS22.
The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of 10/100/1000BaseT interfaces are listed in the following table.
The cabling of the DHGE_4E module is directly from the front panel with four RJ-45 connectors.
Figure 9-27: DHGE_4E front panel
NOTE: When the DHGE_4E is installed in the NPT-1021, only optics cards are supported by the
EXT-2U.
NOTE: PoE+
When the DHGE_4E is configured with PoE, the main power feeding voltage must be less than 58 VDC.
The DHGE_4E card's maximum power consumption for PoE is 62 W; any mixture of PD devices is
allowed up to a total of 62 W.
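A planned PD mix can be checked against this budget as follows (a sketch only; the per-class figures are the nominal IEEE 802.3af/at maximum power allocated at the PSE, and the 62 W limit is the card figure quoted above):

    # Check a planned mix of PoE powered devices against the DHGE_4E 62 W PoE budget.
    PSE_POWER_W = {1: 4.0, 2: 7.0, 3: 15.4, 4: 30.0}  # nominal PSE allocation per PD class

    def poe_budget_ok(pd_classes, budget_w=62.0):
        total = sum(PSE_POWER_W[c] for c in pd_classes)
        return total <= budget_w, total

    print(poe_budget_ok([4, 3, 3]))  # one class-4 and two class-3 devices -> (True, 60.8)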
9.6.2 DHGE_8
The DHGE_8 is a data hybrid card that supports up to 8 x GbE/FX ports with connection to the packet
switching matrix (CSFP for 8 ports, SFP for 4 ports).
NOTE:
When the DHGE_8 is installed in the NPT-1200 it can be configured in any Tslot, except for
TS5.
When installed in the NPT-1020/NPT-1021 the DHGE_8 supports up to four GbE ports, as
the NPT-1020/NPT-1021 doesn't support CSFPs.
When the DHGE_8 is installed in the NPT-1020/NPT-1021, the EXT-2U packet based cards
are not supported (optics and TDM only).
The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of GbE interfaces are listed in the following table.
The cabling of the DHGE_8 module is directly from the front panel with four SFP or CSFP transceivers. The
card has four positions for installing SFP or CSFP transceivers; the positions are gathered in pairs: P1~P5,
P2~P6, P3~P7, and P4~P8. Each pair can house one SFP or one CSFP. Each SFP supports one optical GbE/FX
port, totaling 4 x GbE/FX ports in a card. Each CSFP supports two GbE/FX ports, totaling 8 x GbE/FX ports in
a card. A mix of SFP and CSFP transceivers in the same card is also supported.
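The position-to-port relationship can be expressed as a small mapping (an illustrative sketch of the pairing described above): position k (1 to 4) serves port Pk with an SFP, or ports Pk and Pk+4 with a CSFP.

    # DHGE_8 transceiver position to front panel port mapping (pairs P1~P5 ... P4~P8).
    def ports_for_position(position, transceiver):
        if transceiver == "SFP":
            return [position]
        if transceiver == "CSFP":
            return [position, position + 4]
        raise ValueError("unknown transceiver type")

    print(ports_for_position(3, "SFP"))   # -> [3]
    print(ports_for_position(3, "CSFP"))  # -> [3, 7]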
NOTE:
When the DHGE_8 is installed in TS3 or TS4 of an NPT-1200 equipped with MCIPS320, it
supports only SFPs in these slots.
When DHGE_8 is installed in TS7 to TS18 of an NPT-1800, it supports only SFPs in these
slots.
NOTE: When the DHGE_8 or DHGE_4E are installed in the NPT-1020 Tslot, only TDM and
EoS/MoT cards are supported by the EXT-2U.
9.6.3 DHGE_16
The DHGE_16 is a data hybrid card that supports up to 8 x 10/100/1000BaseT ports and 8 x GbE/FX ports
with connection to the packet switching matrix (CSFP support for 8 optical ports, SFP for 4 optical ports).
The module occupies a double slot in the Tslot module space and can be installed only in slot pairs TS1+TS2
and TS6+TS7 of the NPT-1200, or TS2+TS3 of the NPT-1050 and NPT-1800 (Group I). A spacer between each
of these slot pairs must be removed to enable the installation of the DHGE_16. The procedure for removing
this spacer is described in the NPT-1800, NPT-1200 and NPT-1050 Installation, Operation, and Maintenance
Manual.
The module supports MPLS by appropriate licensing. The card supports 1588v2 master, slave, and
transparent modes.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of 1000Base-X/100Base-FX and 10/100/1000BaseT (electrical) interfaces are listed in the following
table.
Table 9-54: DHGE_16 modules, 1000Base-X/100Base-FX and 10/100/1000BaseT interfaces per platform
Platform    Max. DHGE_16 modules    Max. 1000Base-X interfaces    Max. 100Base-FX interfaces    Max. 10/100/1000BaseT (electrical) interfaces
NPT-1050    1    8    4    12
NPT-1200    2    16    8    24
NPT-1800    4    32    16    48
Ports P1 to P8 are RJ-45 connectors for 8 x 10/100/1000BaseT electrical interfaces. Ports P9 to P16 are
grouped in pairs: P9~P13, P10~P14, P11~P15, and P12~P16. Each pair position can house one SFP or one
CSFP transceiver, supporting one 1000Base-X/100Base-FX port (for SFP) or two bidirectional 1000Base-X
ports (for CSFP).
Figure 9-29: DHGE_16 front panel
9.6.4 DHGE_24
The DHGE_24 is a data hybrid card that supports up to 24 x GbE/FX ports with connection to the packet
switching matrix (CSFP/SFP support).
The module occupies a double slot in the Tslot module space and can be installed in slot pairs TS1+TS2 and
TS6+TS7 of the NPT-1200 and TS2+TS3 of the NPT-1050. A spacer between each of these slot pairs must be
removed to enable the installation of the DHGE_24. The procedure for removing this spacer is described in
the NPT-1800, NPT-1200 and NPT-1050 Installation, Operation, and Maintenance Manual.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of 1000Base-X/100Base-FX and 10/100/1000BaseT (electrical) interfaces are listed in the following
table.
Table 9-56: DHGE_24 modules, 1000Base-X/100Base-FX and 10/100/1000BaseT interfaces per platform
Platform    Max. DHGE_24 modules    Max. 1000Base-X & 100Base-FX interfaces    Max. 10/100/1000BaseT (electrical) interfaces
NPT-1050    1    24    12
NPT-1200    2    48    24
NPT-1800    4    72    36
The card ports are grouped in pairs: P1~P13, P2~P14, P3~P15, P4~P16, P5~P17, P6~P18, P7~P19, P8~P20,
P9~P21, P10~P22, P11~P23, and P12~P24. Each pair position can house one SFP or one CSFP transceiver,
supporting one 1000Base-X/100Base-FX port (for SFP) or two bidirectional 1000Base-X ports (for CSFP).
9.6.5 DHXE_2
The DHXE_2 is a data hybrid card that supports up to 2 x 10GbE ports with connection to the packet
switching matrix.
NOTES:
Up to six cards are supported in the NPT-1200 (any Tslot except TS5).
With this card, the NPT-1200 (100G) and NPT-1050 support a maximum of 10 x 10GbE interfaces; only
three cards can be assigned when the CPS100 is used in the NPT-1200.
The DHXE_2 card isn't supported by the NPT-1800.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of 10GbE interfaces are listed in the following table.
The cabling of the DHXE_2 module is directly from the front panel with two SFP+ transceivers. The card has
two positions for installing SFP+ transceivers.
9.6.6 DHXE_4
The DHXE_4 is a data hybrid card that supports up to 4 x 10GbE ports with connection to the packet
switching matrix.
NOTES:
The DHXE_4 is supported in the NPT-1200 with CPS320/CPTS320 and in the NPT-1800.
Up to six cards are supported in the NPT-1200, excluding TS5.
Up to 18 cards are supported in the NPT-1800, excluding TS23.
The NPT-1200 supports max. 32 x 10GbEs with the CPS320/CPTS320.
A maximum of 6 modules can be installed in the NPT-1200, thus resulting in a total of 24 x 10GE interfaces
in the platform and 32 x 10GE including the 8 x 10GE on the CPTS320/CPS320 matrix cards.
The cabling of the DHXE_4 module is directly from the front panel with four SFP+ transceivers. The card has
four positions for installing SFP+ transceivers.
Figure 9-32: DHXE_4 front panel
9.6.7 DHXE_4O
The DHXE_4O is a data hybrid card that supports up to 4 x 10GbE/OTU2/2e ports with OTN wrapping and
connection to the packet switching matrix.
NOTES:
The DHXE_4O is supported in the NPT-1800 and NPT-1200 with CPS320/MCIPS320.
NPT-1800 supports max. 71 x 10GbEs (up to 18 cards).
The cabling of the DHXE_4O module is directly from the front panel with four SFP+ transceivers. The card
has four positions for installing SFP+ transceivers.
9.7.1 NFVG_4
The NFVG_4 card is a common Tslot card for Neptune platforms that can implement various VNFs
(embedded NFV solution). NFVG_4 is a single slot NFV card with four GE ports. It can be installed in any
Neptune platform with Tslots. The maximum connection bandwidth to the central packet switch is 4 x 1 Gbps
and may be affected by the NP NIF resource limitation (due to the dynamic allocation mechanism) in the
following systems.
The following figure shows the NFVG_4 general view.
Figure 9-34: NFVG_4 general view
The VNF traffic processing is based on Intel x86 (E3-1105Cv2) CPU with DH8903CC PCH (Platform Controller
Hub), 2 x 8GB DDR-3, and 32GB (High Endurance). This system can process the packets from 4 x GE ports
arriving through the i350 Ethernet controller. The GE lanes can be from the front panel ports (SFP based) or
from internal backplane SGMII ports. To support flexible routing, all GE lanes from the panel and backplane
are connected to the matrix (X-point). This supports traffic path provisioning.
The FPGA block mainly implements the control interfaces for the MCP to manage the NFVG_4 card in a Tslot
of Neptune platforms, and the timing interfaces to/from the CPS/CIPS. The NFVG_4 includes an IDPROM so
that the MCP can read it via IIC to identify the card.
The NFVG_4 block diagram includes the following main parts:
Power supply
Traffic subsystem
Control subsystem
Timing
Backplane and front panel Interfaces
Because the NFVG_4 has to be supported in all Neptune platforms with Tslots, and different Neptune
platforms have different control interfaces, the control interface of the card must be flexible and compatible
with all supported Neptune platforms.
NOTE: The applications for Inline NFV and Mirroring will only be supported in later versions of
the NFVG_4 card.
NOTE: In general it is recommended to install NFVG_4 cards as near as possible to the cooling
fans of the platform.
Backplane connectivity of Neptune platforms should be considered when planning a system with NFVG_4
cards. The connectivity of the platforms is as follows:
NPT-1020 - 2 x 1 GbE
NPT-1050 - 4 x 1 GbE in all Tslots
NPT-1200 with CPS320:
2 x 1 GbE in TS2/3/4/7
4 x 1 GbE in TS1/6
NPT-1200 with any other matrix cards: - 4 x 1 GbE in all Tslots
NPT-1800 - 4 x 1 GbE in all Tslots
When installing NFVG_4 in an NPT-1050 it is recommended to install up to two cards in slots TS2/TS3.
NOTE: For more information on the “move slot” see the EMS-APT User’s Manual.
NOTE: The EXT-2U expansion unit can be combined with the NPT-1020, NPT-1021, NPT-1030,
NPT-1050, NPT-1200, and NPT-1800 platforms. For easier reading, the shelf layout is not
repeated in the sections describing each of those platforms. The reader is simply referred back
to this shelf layout description.
The EXT-2U expansion unit is housed in a 243 mm deep, 465 mm wide, and 88 mm high equipment cage
with all interfaces accessible from the front of the unit. The expansion unit includes its own independent
power supply and fan unit, for additional reliability and security. The platform includes the following
components:
Three multipurpose slots (ES1 to ES3) for any combination of extractable traffic cards. PCM, TDM,
ADM, Ethernet, and CES traffic are all handled through cards in these traffic slots. All interfaces are
configured through convenient SFP modules, supporting up to 2.5G or 2GbE traffic per slot. Each slot
in the EXT-2U has a TDM capacity of up to 16 x VC-4s; the total capacity of the EXT-2U is 48 x VC-4s.
Two slots for INF power supply units. There are two units for system redundancy. Note that the INF
modules are extractable in the EXT-2U.
One FCU fan unit consisting of multiple separate fans to support cooling system redundancy.
The following figure shows the slot layout for the EXT-2U platform.
Typical power consumption of the EXT-2U is less than 150 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the
corresponding Neptune platform Installation and Maintenance Manual and the Neptune System
Specifications.
The following modules are supported in the EXT-2U (in slots PSA, PSB, ES1 to ES3, and FS):
INF_E2U
FCU_E2U
SM_10E
EM_10E
PE1_63
P345_3E
S1_4
MPS_2G_8F
TP63_1
DMCES1_32
DMPoE_12G
DHFE_12
DHXE_12
MXP10
TPS1_1
TPEH8_1
OBC
10.1.1 INF_E2U
The INF_E2U is a DC power-filter module that can be plugged into the EXT-2U platform. Two INF_E2U
modules are needed for power feeding redundancy. The module performs the following functions:
Single DC power input and power supply for all modules in the EXT-2U
Input filtering function for the entire EXT-2U platform
Adjustable output voltage for fans in the EXT-2U
Indication of input power loss and detection of under-/over-voltage
10.1.2 AC_PS-E2U
The AC_PS-E2U is an AC power module that can be plugged into the EXT-2U platform. It performs the
following functions:
Converts AC power to DC power for the EXT-2U.
Filters input for the entire EXT-2U platform.
Supplies adjustable output voltage for fans in the EXT-2U.
Supplies up to 180 W of power.
Figure 10-4: AC_PS-E2U front panel
NOTE: When using the MPoE_12G with PoE+ functionality and AC_PS-E2U feeding, check the
power consumption calculation. Only one card of this type is allowed.
10.1.3 FCU_E2U
The FCU_E2U is a pluggable fan control module with four fans for cooling the EXT-2U platform. The fans’
running speed can be low, medium, or turbo and is controlled by the corresponding MCP card in the base
platform according to the environmental temperature and fan failure status.
Figure 10-5: FCU_E2U front panel
10.2.1 PE1_63
The PE1_63 is an electrical traffic card with 63 x E1 (2 Mbps) balanced electrical interfaces that supports
retiming of up to 63 x E1s. A maximum of three PE1_63 cards can be installed in one EXT-2U platform. The
PE1_63 supports LOS inhibit functionality (very low sensitivity signal detection), which means that
the LOS alarm is masked for signal levels down to -20 dB.
The cabling of the PE1_63 card is directly from the front panel with three twin 68-pin VHDCI female
connectors.
10.2.2 P345_3E
The P345_3E is an electrical traffic card with 3 x E3/DS-3 (34 Mbps/45 Mbps) electrical interfaces. A
maximum of three P345_3E cards can be installed in one EXT-2U platform.
The cabling of the P345_3E card is directly from the front panel with DIN 1.0/2.3 connectors.
NOTE: ACTIVE, FAIL, and ALARM LEDs are combined to show various failure reasons during
the expansion card boot. For details, see the Troubleshooting Using Component Indicators
section in the NPT-1200 Installation, Operation, and Maintenance Manual.
10.2.3 S1_4
The S1_4 card is an SDH expansion card with four STM-1 (155 Mbps) interfaces (either optical or electrical).
Each SFP housing in the S1_4 supports three types of SFP module, as follows:
SFP STM-1 optical transceivers with a pair of LC optical connectors. Interfaces can be S1.1, L1.1, or
L1.2, depending on the SFP module.
SFP STM-1 electrical transceivers with a pair of DIN 1.0/2.3 connectors.
SFP STM-1 optical transceivers with one LC optical connector (bidirectional STM-1 TX/RX over a single
fiber using two different lambdas). The wavelength of the Tx laser can be 1310 nm (BD3) or
1550 nm (BD5).
The four STM-1 interfaces in the S1_4 can be assigned using these three SFP module types independently.
NOTE: ACTIVE, FAIL, and ALARM LEDs are combined to show various failure reasons during
the expansion card boot. For details, see the Troubleshooting Using Component Indicators
section in the NPT-1200 Installation, Operation, and Maintenance Manual.
10.2.4 S4_1
The S4_1 is an SDH expansion card with one STM-4 (622 Mbps) interface. The SFP housing in the S4_1
supports the following SFP modules:
SFP STM-4 optical transceivers with a pair of LC optical connectors. Interfaces can be S4.1, L4.1, or
L4.2, depending on the SFP module.
NOTE: ACTIVE, FAIL, and ALARM LEDs are combined to show various failure reasons during
the expansion card boot. For details, see the Troubleshooting Using Component Indicators
section in the Neptune Installation, Operation, and Maintenance Manuals.
10.2.5.1 OM_BA
The OM_BA is a single channel booster amplifier module with constant output power for links up to 10
Gbps. The OM_BA can be installed in the Optical base card (OBC) wide sub-slots. Up to two modules can be
installed in each OBC, totaling six modules in an EXT-2U platform.
The module has two LC connectors: Rx (input), and Tx (output), protected by a spring-loaded cover.
10.2.5.2 OM_PA
The OM_PA is a single channel amplifier working in Channel 35 of the C-band for links up to 10 Gbps. The
amplifier works in a constant power mode and provides a power output of -15 dBm. The OM_PA can be
installed in the Optical base card (OBC) wide sub-slots. Up to two modules can be installed in each OBC,
totaling six modules in an EXT-2U platform.
The module can be connected in two link applications:
Receives optical signals from an SFP/XFP transmitter, with the preamplifier connected before the
receiver. In this mode the module is capable of delivering signals over 80 to 120 km.
Includes a booster amplifier after the SFP/XFP transmitter and the preamplifier connected before the
receiver. In this option the total power budget enables the amplifier to deliver signals over 120 km
to 180 km.
The module has two LC connectors: Rx (input), and Tx (output), protected by a spring-loaded cover.
10.2.5.3 OM_ILA
The OM_ILA is a DWDM amplifier working in the C-band for links up to 44/88 channels. It is a fixed 21 dB
gain EDFA based DWDM amplifier for links of up to 500 km with up to 80 channels. The OM_ILA can be
installed in the wide sub-slots. Up to two modules can be installed in each OBC, totaling six modules in an
EXT-2U platform.
The OM_ILA provides the following main functions:
Operation as a preamplifier, booster, or inline amplifier
Output power of 16 dBm with a gain of 21 dB
Minimum input power of -24 dBm
Monitoring and alarms
Support for DWDM filters (Mux/DeMux or OADM) in a separate Artemis shelf
The module has two LC connectors: Rx (input), and Tx (output), protected by a spring-loaded cover.
10.2.5.4 OM_LVM
The OM_LVM is a DWDM two stage VGA amplifier working in the C-band for links up to 44/88 DWDM
channels. The module includes a 20.5 dBm variable gain EDFA with mid-stage access (MSA). The OM_LVM
can be installed in the Optical base card (OBC) wide sub-slots. Up to two modules can be installed in each
OBC, totaling six modules in an EXT-2U platform.
The OM_LVM provides the following main functions:
Operation as a preamplifier, booster, or inline amplifier
Output power of 20.5 dBm with a variable gain of 15 to 30 dB
Minimum input power of -28 dBm
Monitoring and alarms
Support for DWDM filters (Mux/DeMux or OADM) in a separate Artemis shelf
10.2.5.5 OM_DCMxx
The OM_DCMxx is a micro dispersion compensation module used to correct excessive dispersion on long
fibers. The OM_DCMxx is available for several distance ranges: 40, 80, and 100 km (xx in the module name
designates the distance in km). The OM_DCMxx can be installed in the Optical base card (OBC) narrow
sub-slot. One module can be installed in the OBC, totaling three modules in an EXT-2U platform.
The module has two LC connectors: Rx (input), and Tx (output), protected by a spring-loaded cover.
10.2.6 MXP10
The MXP10 is a muxponder base card supporting up to 12 (CSFP-based) built-in client interfaces, which are
multiplexed into a G.709 multiplexing structure and sent via two OTU-2/2e line interfaces. It can be installed
in the Eslots of EXT-2U platforms; up to three MXP10 cards can be installed in an EXT-2U.
The MXP10 can also be configured to operate as a transponder where it can map any 10
GbE/STM-64/FC-800/FC-1200 signal into an OTU2/2e line.
In addition, the MXP10 has an optical module slot for installing an OM_AOC4. This module expands the
client interface capacity by 4 additional ports, totaling 16 client ports per MXP10.
Any of the client interfaces can be configured to accept an STM-1, STM-4, STM-16, GbE, FC/FC2/FC4,
OTU-1, or HD-SDI signal. The card has integrated cross-connect capabilities, providing more efficient
utilization of the lambda. Any of the signals can be added or dropped at each site, while the rest of the
traffic continues on to the next site. Broadcast TV services can be dropped and continued (duplicated),
eliminating the need for external equipment to provide this functionality.
Hardware protection is supported using a pair of MXP10 cards configured in slots ES1 and ES2 of the
EXT-2U. In protection mode, each service is connected to both MXP10 cards by splitters/couplers. A
traffic or equipment failure triggers a switch to the protection card.
The MXP10 is a single Eslot card with the following main features:
12 CSFP-based client ports, software configurable to support GbE, FC/FC2/FC4, STM-1, STM-4,
STM-16, and OTU-1 services
Client interfaces can be expanded by 4 by installing an OM_AOC4 module in the card's optical module slot
Two independent SFP+ based OTU-2/2e line ports
Can be used as a multi-rate combiner up to OTU-2/2e
Can be used as a multi OTU-1 transponder – up to 5
Can operate as two separate muxponders with sets of eight clients multiplexed into one OTU-2 line
Can operate as 5 separate 2.5G muxponders with up to 5 clients multiplexed into an OTU-1 line
Regeneration mode is supported for OTU-2 (single) and OTU-1 (up to 5)
Any mix of functionality is supported as long as the occupied resources do not exceed the MXP10 OTN
capacity of 40G (see the capacity-check sketch after this list)
Per port HW protection
Supports G.709 FEC for OTU-1, G.709 FEC and EFEC (I.4 and I.7) for OTU-2, and ignore-FEC modes
towards the line
Supports Subnetwork Connection Protection (SNCP) mechanisms
Complies with ITU-T standards for 50 GHz and 100 GHz multichannel spacing (DWDM)
Supports two GCC channels, one for each OTU-2 interface, to allow management over the OTN interface
Supports in-service module insertion and removal without any effect on other active ports
Supports interoperability with Apollo AoC cards
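The 40G capacity rule mentioned above can be checked by summing the nominal rates of the provisioned clients (an illustrative sketch; the rates listed are nominal line rates assumed for the example, and the card's actual OTN resource accounting may differ):

    # Illustrative check of MXP10 client provisioning against its 40G OTN capacity.
    NOMINAL_RATE_G = {                      # nominal client rates in Gbps (assumed values)
        "STM-1": 0.155, "STM-4": 0.622, "STM-16": 2.5,
        "GbE": 1.25, "FC-100": 1.0625, "FC-200": 2.125, "FC-400": 4.25,
        "OTU-1": 2.7,
    }

    def within_capacity(clients, capacity_g=40.0):
        used = sum(NOMINAL_RATE_G[c] for c in clients)
        return used <= capacity_g, used

    ok, used = within_capacity(["STM-16"] * 4 + ["GbE"] * 8 + ["OTU-1"] * 2)
    print(ok, round(used, 2))  # -> True 25.4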
The cabling of the MXP10 card is directly from the front panel. It includes 6 positions for installing CSFP
client transceivers; the positions are gathered in pairs: P1~P7, P2~P8, P3~P9, P4~P10, P5~P11, and P6~P12.
Each pair can house one CSFP. Each CSFP supports two configurable ports, totaling 12 client ports on the
base card. In addition, the MXP10 has two positions for installing SFP transceivers that serve the line ports.
10.2.6.1 OM_AOC4
The OM_AOC4 is an optical ADM-on-a-card module for installation in the MXP10 card. It expands the
MXP10 capacity by 4 client ports.
The OM_AOC4 module provides 4 client ports; each port can be configured to operate as one of the
following interfaces:
STM-1/STM-4/STM-16
GbE
FC1/2/4/8
HD-SDI
ODU-1
When operating in the base card, each port supports the same functionality as the client ports incorporated
on the MXP10.
The following figure shows the front panel of the OM_AOC4.
OTU1
GbE
FC-100/200/400
HDSDI1485/HDSDI3G
Video270
MXP10 can support TRP25/REG25/AoC25 applications:
Up to 5 x OTU1 transponders/combiners
Supported client interfaces:
STM-1/OC-3
STM-4/OC-12
STM-16/OC-48
GbE
FC-100/200
Video270
NOTE: The MXP10 is not supported in NPT-1800 and NPT-1200 with MCIPS320.
10.2.7 DHFE_12
The DHFE_12 is a data hybrid card that supports up to 12 x FE ports with connection to the packet
switching matrix.
The cabling of the DHFE_12 module is directly from the front panel with RJ-45 based connectors.
NOTE:
With the NPT-1020/NPT-1021, the DHFE_12 supports up to 8 x FE ports.
With the NPT-1020/NPT-1021 and CPS50, the DHFE_12 supports up to 12 x FE ports.
With the NPT-1200, the DHFE_12 decreases the base unit's maximum GE fan-out by 16 ports.
10.2.8 DHFX_12
The DHFX_12 is a data hybrid card that supports up to 12 x 10/100 FX ports with connection to the packet
switching matrix.
The cabling of the DHFX_12 module is directly from the front panel with SFP based slots.
NOTE:
With the NPT-1020/NPT-1021, the DHFX_12 supports up to 8 x FE ports.
With the NPT-1020/NPT-1021 and CPS50, the DHFX_12 supports up to 12 x FE ports.
With the NPT-1200, the DHFX_12 decreases the base unit's maximum GE fan-out by 16 ports.
10.2.9 MPS_2G_8F
The MPS_2G_8F is an EoS metro Ethernet L2 switching card with MPLS capabilities. It includes 8 x
10/100BaseT LAN interfaces, 2 x GbE/FE combo LAN interfaces, and 64 EoS WAN interfaces. The total WAN
bandwidth is up to 4 x VC-4. A maximum of three MPS_2G_8F cards can be installed in one EXT-2U
platform.
10.2.10 MPoE_12G
The MPoE_12G is a metro Ethernet L2 switching card with MPLS capabilities and Power over
Ethernet (PoE) support. It can be installed in the EXT-2U, providing four GbE/FX and eight
10/100/1000BaseT interfaces with Power over Ethernet functionality (IEEE 802.3af and IEEE 802.3at). It
provides Layer 1 and Layer 2 with MPLS-TP switch functionality (64 EoS WAN interfaces) over native
Ethernet (MoE) and SDH (MoT) virtual concatenated streams. It is suitable for powering IP phones, IP
cameras, and RF "all outdoor units" directly from the Ethernet port.
The card supports 1588v2 master, slave, and transparent modes. It provides up to 64 EoS WAN interfaces
and the total WAN bandwidth is up to 4 x VC-4. A maximum of three MPoE_12G cards can be installed in
one EXT-2U platform.
10.2.11 DMCE1_32
The DMCE1_32 is a CES multiservice card that provides CES for up to 32 x E1 interfaces. It supports the
SAToP and CESoPSN standards and has two SCSI 68-pin connectors for connecting the E1 customer signals
on the front panel.
Connectivity to the packet network is made through one of the following options:
Direct 1.25G SGMII connection to the central packet switch on CPS cards through the backplane.
Connection to 3rd party device (router/switch) through the combo GbE port on the front panel,
working in standalone mode with CESoETH and CESoIP/UDP encapsulation.
Connectivity to the packet network is made through backplane connection (to central packet switch), or
combo GbE port on front panel.
10.2.12 SM_10E
The SM_10E is a multiservice access card that provides various 64 Kbps and N x 64 Kbps PCM
interfaces and DXC1/0 functionality. It provides the mappers for up to 44 E1s, and a DXC1/0 with a total
capacity of 1,216 DS-0 x 1,216 DS-0. There are three module slots, each of which accommodates traffic
bandwidth of six E1s per slot. Through the configuration of different types of traffic modules, the SM_10E
can provide up to 24 channels of different types of PCM interfaces, such as FXO, FXS, 2W, 4W, 6W, E&M,
V.24, V.35, V.11, Omni, V.36, RS-422, RS-449 C37.94, EoP, and codirectional 64 Kbps interfaces. A maximum
of three SM_10E cards can be installed in one EXT-2U platform.
The SM_10E base card has no external interfaces. Each traffic module for the SM_10E has its own external
interfaces on its front panel.
10.2.13 EM_10E
The EM_10E is a multiservice access card that introduces various 64 Kbps, N x 64 Kbps PCM interfaces, and
DXC1/0 functionality. It provides the mappers for up to 16 E1s, and a DXC1/0 with a total capacity of 589
DS-0 x 589 DS-0. There are three module slots, each of which accommodates traffic bandwidth of six E1s
per slot. Through the configuration of different types of traffic modules, the EM_10E can provide up to 24
channels of different types of PCM interfaces, such as FXO, FXS, 2W, 4W, 6W, E&M, V.24, V.35, V.11, Omni,
V.36, RS-422, RS-449 C37.94, and codirectional 64 Kbps interfaces. A maximum of three EM_10E cards can
be installed in one EXT-2U platform.
NOTE: The EM_10E is not supported by the NPT-1800 and NPT-1200 with MCIPS320.
The EM_10E base card has no external interfaces. Each traffic module for the EM_10E has its own external
interfaces on its front panel.
SM_EM_24W_6E: EM_10E/SM_10E traffic module for six 2W/4W/6W E&M interfaces. Each interface can be
set to 2W, 4W, 6W, 2WE&M, or 4WE&M independently.
SM_V24E: EM_10E/SM_10E traffic module for V.24 interfaces that supports three modes:
Transparent (eight channels), Asynchronous with controls (four channels), and Synchronous with
controls (two channels). Both point-to-point and point-to-multipoint services are supported.
SM_V35_V11: EM_10E/SM_10E traffic module for two V.35/V.11/V.24/V.36/RS-422/RS-449 (64 Kbps
only) compatible interfaces with full controls. Each interface can independently be configured as V.35
or V.11/X.24 or V.24 64 Kbps.
SM_CODIR_4E: EM_10E/SM_10E traffic module for four codirectional 64 Kbps (G.703) interfaces.
SM_OMNI_E: EM_10E/SM_10E traffic module for one OmniBus 64 Kbps interface.
SM_EOP: SM_10E traffic module for Ethernet data. It supports standard EoP functionality of E1 VCAT,
GFP and LCAS.
SM_C37.94S: EM_10E/SM_10E traffic module for two teleprotection (IEEE C37.94) interfaces.
SM_IO18: EM_10E/SM_10E traffic module for 18 input/output configurable ports (dry contacts) for
utilities teleprotection interfaces.
Additional types of EM_10E/SM_10E traffic modules will be supported in a later version. Each
EM_10E/SM_10E traffic module can be inserted into any of the three module slots in the EM_10E/SM_10E.
All EM_10E/SM_10E traffic modules support live insertion.
Each module provides corresponding traffic interfaces through a SCSI-36 connector on its front panel. The
cabling of these interfaces can be directly via the SCSI-36 connector, or via the corresponding ICP that
connects the SCSI-36 connector through a special cable.
SM_FXO_8E
SM_FXO_8E is a traffic module with eight FXO interfaces for the SM_10E/EM_10E card. Up to three
modules can be configured in one SM_10E/EM_10E card, totaling 24 FXO interfaces. The SM_FXO_8E
provides telephone line interfaces for the central office side.
SM_FXS_8E
SM_FXS_8E is a traffic module with eight FXS or FXD interfaces for the SM_10E/EM_10E card. Each
interface can be set to FXS or FXD independently. Up to three modules can be configured in one
SM_10E/EM_10E card, totaling 24 FXS or FXD interfaces. The SM_FXS_8E provides telephone line interfaces
for the remote side.
SM_EM_24W_6E
SM_EM_24W6E is a traffic module with six 2/4W/6W E&M interfaces for the SM_10E/EM_10E card. It
provides two wire and four wire voice frequency interfaces, with ear and mouth signaling interfaces. Each
interface can be set to 2W, 4W, 6W, 2WE&M, or 4WE&M independently. Up to three modules can be
configured in one SM_10E/EM_10E card, totaling 18 2W, 4W, 6W, 2WE&M, or 4WE&M interfaces.
SM_V24E
SM_V24E is a traffic module with V.24 interfaces (RS232) for the SM_10E/EM_10E. V.24 is a low bit rate data
interface, also known as RS232. The module supports three operating modes:
Transparent mode, with eight channels
Asynchronous mode with controls, with four channels
Synchronous mode with controls, with two channels
The SM_V24E supports a wide range of bit rates in two grades (low and high) and three operating modes as
described in the following table.
Rate grade    Mode    TC mode    Baud rate (bps)    Operation mode    Rate adaptation
SM_V35_V11
SM_V35_V11 is a traffic module with two V.35/V.11/V.24/V.36/RS-422/RS-449 64 Kbps compatible
interfaces with full controls. Each interface can independently be configured as V.35 or V.11/X.24 and can
be mapped to unframed E1 or N x 64K of framed E1 (the interface rate N is configurable). Up to three
modules can be configured in one SM_10E/EM_10E card, totaling six V.35/V.11/V.24 64 Kbps interfaces.
SM_CODIR_4E
SM_CODIR_4E is a traffic module with four codirectional 64 Kbps (per ITU-T G.703) interfaces for the
SM_10E/EM_10E.
SM_EOP
The SM_EOP is a traffic module with two Ethernet interfaces for the SM_10E. It provides two 10/100BaseT
interfaces and supports EoP functionality, including N x E1 virtual concatenation, GFP-F encapsulation, and
LCAS. It also supports N x 64K HDLC encapsulation. The total bandwidth of the SM_EoP is four E1s.
SM_OMNI_E
SM_OMNI_E is a traffic module with OmniBus functionality and four 2W/4W interfaces for the
SM_10E/EM_10E. Each interface can be set to 2W or 4W mode by the management.
Omnibus is a special interface for railway applications, featuring P2MP communications. This interface is very
similar in nature to SDH OW (orderwire).
SM_C37.94
The SM_C37.94 module provides two teleprotection interfaces per IEEE C37.94 for the EM_10E/SM_10E.
The interfaces enable transparent communications between different vendors' teleprotection equipment
and multiplexer devices, using multimode optical fibers.
In general, teleprotection equipment is employed to control and protect different system elements in
electricity distribution lines.
Traditionally, the interface between teleprotection equipment and multiplexers in high-voltage
environments at electric utilities was copper-based. This media transfers the critical information to the
network operation center. These high-speed, low-energy signal interfaces are vulnerable to
intra-substation electromagnetic and frequency interference (EMI and RFI), signal ground loops, and
ground potential rise, which considerably reduce the reliability of communications during electrical faults.
The optimal solution is based on optical fibers. Optical fibers don't have ground paths and are immune to
noise interference, which eliminates data errors common to electrical connections.
SM_C37.94S
SM_C37.94S Sub module provides two teleprotection interfaces per IEEE C37.94 for the SM_10E/EM_10E.
As with the SM_C37.94, the interfaces enable transparent communications between different vendors'
teleprotection equipment and multiplexer devices, using multimode optical fibers.
NOTE: SM_C37.94S supports two SFP based C37.94 interfaces (OTR2M_MM and OTR2M_SM,
which should be ordered separately).
SM_IO18
The SM_IO18 is a submodule of the SM_10E/EM_10E that provides 18 dry contact ports and is used for
substation alarm monitoring and control. Each port can be defined as input or output by configuration:
Input port:
Port name and severity are configurable
Monitor type is configurable as either alarm or event
Output port:
Supports manual control
Supports automatic control by association with an input port
NOTE: MPLS and Ethernet data cards, with optical ports, incorporate SFP transceivers with LC
connectors. Purchase these SFPs only through your local sales representative.
Type Designation
DHFE_12
DHFX_12
Ethernet Layer 1 cards
DMFE_4_L1
DMFX_4_L1
DMFX_1_L1
DMGE_4_L1
NOTE: All modules have a handle to enable easy removal and insertion. The handle has been
removed from the illustrations in this section so as not to obscure the front panel markings.
SDH mapper – supporting standard Ethernet and MPLS mapping to GFP/VCAT/LCAS with SDH
VC-12/3/4 granularity.
Ethernet ports – incorporating GbE ports and FE ports.
Up to 8/64 EoS/MoT (WAN) ports - with standard GFP/VCAT/LCAS mapping with SDH n x VC-12/3/4
(VCG) granularity.
Each MPLS card includes an Ethernet switch, an MPLS switch, and an SDH mapper. A powerful Network
Processor Unit (NPU) incorporated in each card fulfills the functions of Ethernet and MPLS switches. The
NPU is software programmable, allowing the cards to work as an Ethernet Provider Bridge (QinQ) switch
and/or as Ethernet Provider Bridge plus MPLS switch.
Neptune platforms enable E1/T1 CES emulation, providing TDM transport over PSNs for backhaul
applications offering a wide range of new broadband data services. These boost the advantages inherent in
packet based networks, including flexibility, simplicity, and cost effectiveness. Neptune platforms support
CESoPSN for E1/T1 interfaces with encapsulation support for CES over MPLS-TP (CESoMPLS).
For cellular operators managing 2G, 2.5G, and 3G base stations connected to the BSC/RNC via multiple
E1/T1 lines, Neptune enables lower cost transport between these locations, replacing more expensive
leased E1/T1 lines.
At the hub or BSC/RNC sites, the Neptune functions as a carrier class multiservice aggregator, optimizing
cellular backhaul by multiplexing various TDM services into a single ChSTM-n. STM-1 support includes
channelized STM-1 with up to 63 x VC-12 channels for SDH or 84 VT1.5 channels for SONET, and
channelized STM-4 with up to 252 x VC-12 channels for SDH or 336 VT1.5 channels for SONET.
Layer 2 Control Protocol Handling - Ethernet ports handle certain specific MAC addresses in a special
way to provide predictable, efficient network behavior. Unlike standard service frames, which are
transported untouched from end to end, these special frames must be treated differently.
For example, PAUSE frames are meaningful only within the local link and are therefore
discarded immediately upon reception. Other MAC addresses can be configured to be discarded or
forwarded transparently.
End to End Service Management through a single comprehensive Network Management System
(NMS) that provisions, monitors, and controls many network layers simultaneously. Advancement in
the management of converged networks takes advantage of the “condensed” transport layer for
provisioning and troubleshooting while presenting operators with tiered physical and technology
views that are familiar and easy to navigate. The comprehensive NMS simplifies operations by
allowing customers and member companies to monitor and/or control well-defined and secure
resource domains with partitioning down to the port.
Security, with a safe environment that protects subscribers, servers, and network devices, blocking
malicious users, Denial of Service (DoS), and other types of attacks. Use of provider network
constraints, as well as complete traffic segregation, ensures the highest level of security and privacy
for even the most sensitive data transmissions.
Figure 11-4: Carrier class Ethernet requirements
MPLS-TP is both a subset and an extension of MPLS, already widely used in core networks. It bridges the
gap between packet and transport worlds by combining the efficiency of packet networks with the
reliability, carrier-grade features, and OAM tools traditionally found in SDH transport networks. MPLS-TP
builds upon existing MPLS forwarding and MPLS-based pseudowires, extending these features with in-band
active and reactive OAM enhancements, deterministic path protection, and a network management-based
static provisioning option. To strengthen transport and management functionality, MPLS-TP excludes
certain functions of IP/MPLS, such as label-switched path (LSP) merge, Penultimate Hop Popping (PHP) and
Equal Cost Multi Path (ECMP).
As the following figure illustrates, MPLS-TP is both a subset of MPLS and an extension of MPLS, tailored for
transport networks.
Figure 11-5: Relationship of MPLS-TP to IP/MPLS
As part of MPLS, MPLS-TP falls under the umbrella of the IETF standards. RFC 5317 outlined the general
approach for the MPLS-TP standard and has been followed by more than 10 additional requirement and
framework RFCs. There are also many more working group documents in the editor's queue or in late-stage
development. Although MPLS-TP is not yet fully standardized, operators are comfortable enough with its
status to have begun rolling out networks based on it.
MPLS-TP is supported across product lines, enabling E2E QoS assurance across network domains. As a
leader in MPLS-TP technology, we are participating in the standards development process as it unfolds. Our
MPLS-TP components are designed to be future proof, capable of incorporating and supporting new
standard requirements as they are defined.
Classic VPLS service creates a full mesh between all network nodes and, under certain circumstances,
this may not be the most efficient use of network resources. With H-VPLS, full mesh is created only
between hub nodes using Split Horizon Groups (SHGs). Spoke nodes are only connected to their hubs,
without SHGs. This efficient approach improves MP2MP service scaling and allows less powerful
devices such as access switches to be used as spoke nodes, since it removes the burden of
unnecessary connections.
E-Tree (Rooted-Multipoint) for point-to-multipoint (P2MP) multicast tree connectivity, designed for
BTV/IPTV services. These include:
Ethernet Private Tree (EP-Tree): In its simplest form, an E-Tree service type provides a single
root for multiple leaf UNIs. Each leaf UNI only exchanges data with the root UNI. This service is
useful and enables very efficient bandwidth use for BTV or IPTV applications, such as
multicast/broadcast packet video. With this approach, different copies of the packet need to be
sent only to roots that are not sharing the same branch of the tree.
Ethernet Virtual Private Tree (EVP-Tree): An EVP-Tree is an E-Tree service that provides
rooted-multipoint connectivity across a shared infrastructure supporting statistical multiplexing
and over-subscription. EVP-Tree is used for hub and spoke architectures in which multiple
remote offices require access to a single headquarters, or multiple customers require access to
an internet SP's point of presence (POP).
E-Tree services may be implemented, for example, through an MPLS Rooted-P2MP Multicast Tree
that provides an MPLS drop-and-continue multicast tree on a shared P2MP multicast tree tunnel,
supporting multiple Digital TV (DTV)/IPTV services as part of a full triple play solution. LightSOFT
provides full support for classic E-Tree functionality as of the current release.
E-Access (Ethernet Access) for Ethernet services between UNI and E-NNI endpoints, based on
corresponding Operator Virtual Connection (OVC) associated endpoints. Ethernet services defined
within the scope of this specification use a P2P OVC which associates at least one OVC endpoint as an
E-NNI and at least one OVC endpoint as a UNI. These services are typically Ethernet access services
offered by an Ethernet Access Provider. The Ethernet Access Provider operates the access network
used to reach SP out-of-franchise subscriber locations as part of providing E2E service to subscribers.
Figure 11-6: MEF definitions for Ethernet services
The Neptune product line supports the full set of MEF services, including E2E QoS, C-VLAN translation, flow
control, and Differentiated Services Code Point (DSCP) classification (see MPLS-TP and Ethernet Solutions).
Sites that belong to the same MPLS VPN expect their packets to be forwarded to the correct destinations.
This is accomplished through the following means:
Establishing a full mesh of MPLS LSPs or tunnels between the PE sites.
MAC address learning on a per-site basis at the PE devices.
MPLS tunneling of customer Ethernet traffic over PWs while it is forwarded across the provider
network.
Packet replication onto MPLS tunnels at the PE devices, for multicast-/broadcast-type traffic and for
flooding unknown unicast traffic.
The triple play service delivery network architecture includes the following components:
E2E MPLS carrier class capabilities. MPLS capabilities assure the QoS of IPTV service delivery over
dedicated P2MP tunnels (MPLS multicast tree).
Multiple distributed PE service edges (leaf PE). Leaf PEs terminate the IPTV downstream traffic
arriving over P2MP tunnels and apply IGMP snooping, policing, and traffic engineering on upstream
traffic. This gives SPs the ability to scale their IPTV network.
Efficient IPTV multicast distribution. IPTV distribution utilizes an efficient drop-and-continue
methodology, using an MPLS P2MP multicast tree to deliver IPTV content across the metro
aggregation network. This allows SPs to optimize bandwidth utilization over the metro aggregation
network. It also enables simple scaling capabilities as IPTV service demands increase.
IGMP snooping at the PE leaf service edges allows the PE device to deliver only the IPTV channels
requested by the user, further improving bandwidth consumption over the Ethernet access ports and
enabling easy scalability as the number of IPTV channels grows.
Star VPLS topology to carry the VoIP, VoD, and HSI P2P services. The star VPLS is built over the
aggregation network from the root PE (aggregator) device that connects the edge router/BRAS to the
leaf PE that connects the IPDSLAM/MSAN. This star VPLS also carries the bidirectional IPTV control
traffic that is either sent by the router downstream (IGMP query), or sent by the subscriber
set-top-box (STB) upstream at channel zapping events (IGMP join/leave requests).
E2E interoperability with the DSLAM/MSAN and MSER, implemented either by the Ethernet or the
MPLS layer. The P2MP multicast tree continues from the PIM-SM multicast tree over the core
network.
A P2MP tunnel originates at the source PE and terminates at multiple destination PEs. This tunnel has a
tree-and-branch structure, where packet replication occurs only at branching points along the tree. This
scheme achieves high multicast efficiency since only one copy of each packet ever traverses an MPLS
P2MP tunnel. The Neptune can act as both a transit P and as a destination PE within the same P2MP
tunnel, in which case it can be referred to as a Transit PE rather than a Transit P.
The following figure illustrates a P2MP multicast tree with PE1 as the source PE (root), P1 as a transit P, PE2
as a transit PE (leaf PE), and PE3, PE4, and PE5 as the destination or leaf PEs. The link from PE1 to P1 is
shared by all transit and destination leaf PEs; therefore the data plane sends only one packet copy on that
link.
Figure 11-9: P2MP multicast tunnel example
The following figure illustrates a second example of a P2MP multicast tree arranged over a multi-ring
topology network. The multicast tunnel paths are illustrated in both a physical layout and a logical
presentation. In this example, PE1 is the source PE (root); P1 and P2 are transit Ps; PE2, PE3, PE5, and PE6
are transit leaf PEs; and PE4 and PE7 are destination leaf PEs.
Figure 11-10: P2MP multicast tunnel example - physical and logical networks
The full triple play solution, incorporating P2MP multicast tunnels, star VPLS, and IGMP snooping, is
illustrated in the following figure. The P2MP multicast tunnels carry IPTV content in an efficient
drop-and-continue manner from the TV channel source, headend router, and MSER, through the root PE
(PE1) to all endpoint leaf PEs. The VPLS star carries all other P2P triple play services, such as VoIP, VoD, and
HSI. The VPLS star also carries the IGMP messages both upstream (request/leave messages from the
customer) and downstream (query messages from the router). IGMP snooping is performed at the
endpoint leaf PEs to deliver only the IPTV channels requested by the user. This allows scalability in the
number of channels, as well as freeing up bandwidth for other triple play services.
Figure 11-11: Triple play network solution for IPTV VoD VoIP and HSI services
IGMP-aware MP2MP VSIs augment the network elements illustrated in the preceding figure by combining
multicast and unicast traffic on the same interfaces, and reducing multicast traffic towards subscribers at
the domain edge. This approach uses standard VPLS mechanisms for intra-domain delivery. Multicast
delivery is implemented through ingress replication across a full mesh of PWs, filtered based on subscriber
requests to eliminate unnecessary traffic. These elements are highlighted in the following figure.
On the management plane, this approach is implemented through an enhanced VSI configuration that
includes enabling IGMP proxy functionality. Upstream (host) and downstream (router) AC (link) and peer
(node) must be explicitly configured as IGMP-aware, and assigned their own IP addresses and subnet
masks. On the control plane, IGMP proxy is implemented through configuring one instance per VSI,
including the corresponding upstream and downstream node and interface parameters. IGMP queries and
responses are handled at the control plane level. On the data plane, traffic received from an IGMP-aware
AC or peer is separated and handled according to its type (IGMP traffic, non-IGMP routable IP multicast, or
other MP2MP VSI traffic).
For example, the following figure illustrates a network reference model for IGMP-aware VSI.
Figure 11-13: Simple network reference model for IGMP-aware VSI
This diagram shows an IP/MPLS domain representing a single AS with an IGP (IS-IS or OSPF) running on all
intra-AS links. An MP2MP L2VPN service (VPLS) is set up between some PEs, with a full mesh of PWs set up
between all VSIs representing this service in each of the affected NEs using tLDP.
An edge multicast router is connected to one of the PEs of an MP2MP L2VPN (VPLS) service. Multiple
subscribers to this content are connected to other PEs participating in this VPLS instance via access LANs.
Each subscriber indicates its interest in one or more IPTV channels using IGMPv3, with each IPTV channel
mapped to exactly one SSM Multicast Channel.
The VSI representing the VPLS service in question in each of the affected PEs is marked as IGMP-aware. Its
relevant ACs are marked as Upstream or Downstream. Each PW that connects the VSI that is directly
connected through the edge multicast router to a VSI that is directly connected to a subscriber LAN is
treated as an Upstream interface in the former and as a Downstream interface in the latter. An IGMP Proxy
instance is associated with this VSI and treats its Downstream and Upstream ACs and PWs as if they were
Upstream and Downstream.
When an Ethernet frame is received from the Upstream AC or PW associated with an IGMP-aware VSI, it is
checked to see whether it belongs to one of the following traffic types:
IGMP packets. These are identified by Ethertype being IPv4 and IP protocol number being IGMP. The
IGMP packets are trapped to the IGMP Proxy instance for processing.
Routable IP multicast packets. These are identified by Ethertype being IP, IP protocol number being
different from IGMP, and Destination IP address being a routable IP multicast address. The routable IP
multicast packets undergo normal VPLS flooding, subject to additional filtering based on the contents
of the Group Membership DB built by the corresponding IGMP Proxy instance.
All other packets. These frames receive normal VSI forwarding in accordance with the L2 FIB of the VSI
created by the normal MAC Learning process.
With this network model, these rules result in the following handling of routable multicast traffic
transmitted by the IPTV Content Server:
Unicast traffic will be forwarded as if in the normal MP2MP VSI. For example:
Unicast traffic generated by triple play services (such as VoIP, internet access, or VoD traffic)
Fast delivery of the baseline picture after selecting a new IPTV channel by the subscriber
Each routable IP multicast packet received from the server by the directly-connected PE would be
forwarded (using ingress replication) to all PEs connected to the subscriber LANs that have requested
the corresponding Multicast Channel.
The PE that is directly connected to the subscriber LANs will forward each routable IP multicast packet
received from its single Upstream PW to all subscriber LANs where subscribers have requested this
channel. The packet is not sent to LANs where nobody has requested the channel.
Figure 11-14: IPTV solution - focus on IGMP awareness
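For illustration only, the classification and forwarding rules described above can be summarized in a short Python sketch. The helper and attribute names (is_routable_multicast, group_membership_db, vsi.forward_by_mac, and so on) are assumptions introduced for readability, not the NE's actual data model or implementation.

```python
# Illustrative sketch of the IGMP-aware VSI handling rules described above.
# All names and constants are hypothetical placeholders.

ETHERTYPE_IPV4 = 0x0800
IP_PROTO_IGMP = 2

def is_routable_multicast(dst_ip: str) -> bool:
    # 224.0.0.0/4 is IPv4 multicast; 224.0.0.0/24 is link-local (not routable).
    first_octet = int(dst_ip.split(".")[0])
    return 224 <= first_octet <= 239 and not dst_ip.startswith("224.0.0.")

def handle_upstream_frame(frame, igmp_proxy, vsi, group_membership_db):
    """Handle a frame received from an Upstream AC/PW of an IGMP-aware VSI."""
    if frame.ethertype == ETHERTYPE_IPV4 and frame.ip_proto == IP_PROTO_IGMP:
        # IGMP packets are trapped to the IGMP Proxy instance for processing.
        igmp_proxy.process(frame)
    elif frame.ethertype == ETHERTYPE_IPV4 and is_routable_multicast(frame.dst_ip):
        # Routable IP multicast undergoes VPLS flooding, filtered against the
        # Group Membership DB so only interfaces with interested subscribers get a copy.
        for interface in vsi.downstream_interfaces:
            if group_membership_db.has_member(interface, frame.dst_ip):
                vsi.forward(frame, interface)
    else:
        # All other traffic receives normal VSI forwarding based on the L2 FIB.
        vsi.forward_by_mac(frame)
```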
Auto WRED mechanism for TCP-friendly congestion management. Optional manual WRED, where the
user can configure WRED curves and assign them per CoS on both MPLS and non-MPLS ports.
Auto Shaping that provides rate limiting and burst smoothing. Optional manual shaping, where the user
can configure committed and excess rate limits per CoS on non-MPLS ports.
Auto Weighted Fair Queuing (WFQ) scheduling mechanism, ensuring that bandwidth is distributed
fairly between individual queues. Optional manual scheduling, where the user can configure weight per
CoS per switch.
Policing: TM in the Neptune utilizes two-rate three-color policing to achieve a notable combination of
efficiency and flexibility, supporting CIR, EIR, Committed Burst Size (CBS), and Excess Burst Size (EBS)
traffic categories. Intelligent bandwidth management enables profile enhancement capabilities that
improve handling of 'bursty' traffic as well. Bandwidth management profiles are extended based on
MEF5 standards. Policing is implemented on both the ingress and egress sides, allowing greater
flexibility when managing different customer scenarios.
Strict TM: QoS is implemented on a per-flow basis, with SPQ between two CoS groups, high and low.
This service ensures that each traffic queue receives its guaranteed bandwidth and other resources
while simultaneously allocating extra available bandwidth fairly among the queues. The TE manager
implements buffer management (WRED), scheduling (WFQ), shaping, and counting on a three-level
hierarchy per port, per class, and per tunnel.
Figure 11-16: Network traffic management
DiffServ TM: QoS is implemented on a per-port basis. This method bypasses the hierarchical approach
of Strict TM. DiffServ TM improves scalability by dividing traffic into a small number of classes, and
allocating resources on a per-class basis.
TIP: Neptune platforms allow you to configure both TM models within a single port,
increasing the service options available to network operators. Some of the port LSPs can be
configured with Strict TM, and other LSPs in the same port can be configured with DiffServ
TM.
Flow control with frame buffering (802.3x) reduces traffic congestion. When the input buffer
memory on an Ethernet port is nearly full, the data card sends a 'Pause' packet back to the traffic
source, requesting a halt in packet transmission for a specified time period. After the period has
passed, traffic transmission is resumed. This approach gives the overloaded input buffer a little
'breathing room' while the card clears out the input data and sends it on its way. The following figure
illustrates an NE sending a 'Pause' packet to the link partner.
Figure 11-17: Pause frame example
11.8.3 Shaping
Dual-rate token bucket shaping provides both maximum BW limits and smoothing. Shaping is applied at the
port and CoS level with the following objectives:
Rate limiting for high-CoS traffic, thereby avoiding starvation of low-CoS traffic.
Marking excess traffic (in excess of the guaranteed quota). This marking serves as input to the WFQ
scheduler, allowing it to distinguish between guaranteed and excess bandwidth usage.
Smoothing the output rate before transmission to the line.
Each element is assigned values for CIR/CBS and PIR/PBS to determine the element's committed and excess
rates and burst size limits.
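As an informal illustration of the dual-rate (CIR/CBS, PIR/PBS) behavior described above, the following Python sketch classifies each packet as guaranteed, excess, or above the peak rate. The class name, refill logic, and return labels are assumptions for illustration; they are not the card's shaping implementation.

```python
# Minimal sketch of a dual-rate token bucket (CIR/CBS and PIR/PBS), as used for
# shaping per port and CoS. Values and names are illustrative only.
import time

class DualRateTokenBucket:
    def __init__(self, cir_bps, cbs_bytes, pir_bps, pbs_bytes):
        self.cir, self.cbs = cir_bps / 8.0, cbs_bytes   # committed rate (B/s) and burst
        self.pir, self.pbs = pir_bps / 8.0, pbs_bytes   # peak rate (B/s) and burst
        self.c_tokens, self.p_tokens = cbs_bytes, pbs_bytes
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        elapsed = now - self.last
        self.c_tokens = min(self.cbs, self.c_tokens + elapsed * self.cir)
        self.p_tokens = min(self.pbs, self.p_tokens + elapsed * self.pir)
        self.last = now

    def classify(self, packet_len):
        """Return 'exceed', 'excess', or 'guaranteed' for a packet of packet_len bytes."""
        self._refill()
        if self.p_tokens < packet_len:
            return "exceed"              # above PIR/PBS
        if self.c_tokens < packet_len:
            self.p_tokens -= packet_len
            return "excess"              # above CIR but within PIR; input to the WFQ scheduler
        self.c_tokens -= packet_len
        self.p_tokens -= packet_len
        return "guaranteed"              # within CIR/CBS
```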
In addition to automatic WRED, PACKET supports user-configurable (manual) WRED profiles. Each CoS
within every port can use any one of these profiles. PACKET WRED is hierarchical, meaning it is applied on
multiple levels (flow or tunnel, CoS, port). A packet is queued for transmission only if the WRED decision at
all three levels is Pass, or when the packet is within the guaranteed range. Otherwise the packet is dropped.
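The three-level decision just described can be pictured with a brief Python sketch. The linear drop-probability curve, the threshold parameters, and the function names are assumptions used purely for illustration of the hierarchy, not the actual WRED profiles.

```python
# Sketch of the hierarchical (three-level) WRED decision described above.
# Profile shape and parameter names are assumptions for illustration only.
import random

def wred_pass(queue_fill, min_th, max_th, max_drop_prob):
    """Classic WRED: pass below min_th, drop above max_th, probabilistic in between."""
    if queue_fill <= min_th:
        return True
    if queue_fill >= max_th:
        return False
    drop_prob = max_drop_prob * (queue_fill - min_th) / (max_th - min_th)
    return random.random() >= drop_prob

def admit_packet(levels, in_guaranteed_range):
    """levels: (queue_fill, min_th, max_th, max_p) tuples for flow/tunnel, CoS, and port."""
    if in_guaranteed_range:
        return True                                     # guaranteed traffic is always queued
    return all(wred_pass(*lvl) for lvl in levels)       # must pass at all three levels
```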
11.8.8 Policing
High granularity policing and priority marking (802.1p) per SLA enables the provider to control the amount
of bandwidth for each individual user and service. Two-rate three-color policing enhances the service
offering, combining high priority service with BE traffic for the same user. Policer profiles, encapsulating the
bandwidth parameters defined for Ethernet services, allow greater flexibility when managing different
customer scenarios. Bandwidth allocations and traffic priority can be configured per ingress or egress ports,
as well as per EVC and per CoS. This hierarchical approach is illustrated in the following figure.
These MPLS cards implement two-rate three-color dual token bucket policing that supports 1000 profiles,
defining rate limitations and achieving a notable combination of efficiency and flexibility. Intelligent
bandwidth management improves handling of bursty traffic. Bandwidth management profiles are extended
based on MEF 5 standards. Traffic policing is configured in two stages, in this order:
Per VLAN (EVC): A single ingress BW profile is applied to all ingress service frames for a specific EVC.
This BW profile attribute is associated with each VLAN (EVC) in the UNI port. The following figure
illustrates how the BW profiles are assigned per EVC.
Per Ingress UNI Port: A single ingress BW profile is applied to all ingress service frames for a specific
UNI port. This BW profile attribute is independent of the EVCs in the UNI port. The following figure
illustrates how the BW profiles are assigned per UNI.
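The two-stage ordering can be sketched as follows. This is an illustrative Python fragment: the policer objects are abstracted, and in particular the rule that the more conservative color of the two stages is kept is an assumption for the example, not a statement of the card's behavior.

```python
# Illustrative sketch of the two-stage policing order described above:
# a frame is policed first against its EVC bandwidth profile, then against the
# ingress UNI port profile. All helper names are hypothetical.

GREEN, YELLOW, RED = "green", "yellow", "red"

def police_frame(frame, evc_policers, uni_policer):
    # Stage 1: per-VLAN (EVC) ingress bandwidth profile
    evc_color = evc_policers[frame.vlan_id].color(frame.length)
    if evc_color == RED:
        return RED                      # dropped; never reaches the port-level policer

    # Stage 2: per ingress UNI port bandwidth profile (independent of the EVC)
    uni_color = uni_policer.color(frame.length)
    if uni_color == RED:
        return RED

    # Assumption for this sketch: keep the more conservative of the two colors
    # (yellow marks excess/EIR traffic).
    return YELLOW if YELLOW in (evc_color, uni_color) else GREEN
```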
E2E OAM can be achieved by combining the various OAM techniques, as illustrated in the following figure.
Ethernet link OAM can be used to monitor and localize failure at the connection point between the
customer and the NE. MPLS tunnel OAM can be used to monitor the connections along the provider's MPLS
network. Service OAM provides E2E service monitoring.
Figure 11-26: E2E OAM model for a mobile backhaul network
BFD provides proactive E2E tunnel CC (Continuity Check), CV (Connectivity Verification), and Remote Defect
Indication (RDI):
Continuity Check (CC): Continuously monitors the integrity of the continuity of the path. In addition to
failure indication, detection of Loss of Continuity may trigger the switch over to a backup LSP.
Connectivity Verification (CV): Monitors the integrity of routing of the path between sink and source
for any connectivity issues, continuously or on-demand. Detection of unintended continuity blocks
the traffic received from the misconnected transport path.
Remote Defect Indication (RDI): Enables an End Point to report to its peer a fault or defect condition
that it detects on a path.
NE platforms work with BFD according to IETF RFC 5880, using the CC mechanism for pro-active monitoring
of MPLS-TP LSPs. Similar to other transport technologies, AoC10_L2/Neptune provides sub-50 msec
protection switchover in case of forwarding path failure, triggered by BFD's consistent failure detection
method.
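To give a sense of the timing budget, the following minimal sketch shows the standard RFC 5880 relationship between the negotiated transmit interval, the detect multiplier, and the detection time. The interval and multiplier values in the example are illustrative assumptions, not the NE defaults.

```python
# Rough sketch of BFD failure detection timing (per RFC 5880): the session is
# declared down after detect_mult consecutive control packets are missed.

def bfd_detection_time_ms(negotiated_tx_interval_ms: float, detect_mult: int) -> float:
    """Detection time = detect multiplier x negotiated transmit interval."""
    return detect_mult * negotiated_tx_interval_ms

# Example: 3.3 ms intervals with a detect multiplier of 3 give ~10 ms detection,
# leaving ample margin inside a sub-50 msec protection switchover budget.
if __name__ == "__main__":
    print(bfd_detection_time_ms(3.3, 3))   # -> 9.9 (ms)
```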
The following figure illustrates a typical OAM editing window, through which you could, for example,
enable or disable BFD or LDI on the main and protected LSPs.
Figure 11-29: Edit OAM
CFM relies on a functional model consisting of hierarchical Maintenance Domains (MDs). Each MD is an
administrative domain for the purpose of managing and administering a network. A typical domain is
illustrated in the following figure. The service network in this figure is partitioned into customer, provider,
and operator maintenance levels.
Figure 11-31: Multidomain Ethernet service OAM
Continuity Check: A simple, reliable, and effective tool for fault detection. These multicast messages
are transmitted regularly and automatically by each MEP, providing a constant network
'heartbeat' that verifies transmission integrity. If a MEP misses three consecutive 'heartbeats'
from another MEP, the network is immediately alerted to a connectivity problem.
Continuity check functionality is illustrated in the following figure.
Figure 11-32: Continuity check functionality
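The heartbeat check described above can be sketched in a few lines of Python. The interval value and the 3.5x loss criterion (the common CFM convention corresponding to roughly three missed heartbeats) are assumptions for illustration, not the configured values of the NE.

```python
# Sketch of continuity check 'heartbeat' monitoring: a remote MEP is declared
# faulty when no CCM has been received for about three transmission intervals.

CCM_INTERVAL_S = 1.0          # example CCM transmission period (illustrative)
LOSS_THRESHOLD = 3.5          # assumed loss criterion: 3.5 x interval

def remote_mep_alive(last_ccm_rx_time: float, now: float) -> bool:
    """Return False when the remote MEP's heartbeat has been lost."""
    return (now - last_ccm_rx_time) < LOSS_THRESHOLD * CCM_INTERVAL_S
```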
Loopback: A request/response protocol similar to the classic IP Ping tool. MEPs send Loopback
Messages (LBMs) to verify connectivity with another MP (MEP or MIP) within a specific MA. The
target MP generates a Loopback Reply Message (LBR) in response. LBMs and LBRs are used to verify
bidirectional connectivity, and are initiated by operator command. The path of a typical loopback
sequence is illustrated in the following figure.
Figure 11-33: Loopback protocol
Link Trace: Another request/response protocol similar to the classic IP Traceroute tool. Link trace may
be used to trace the path to a target MP (MEP or MIP) and for fault isolation. MEPs send multicast
Link Trace Messages (LTMs) within a specific MA to identify adjacency relationships with remote MPs
at the same administrative level. When an MP receives an LTM, it completes one of the following
actions:
If the NE is aware of the target MP destination MAC address in the LTM frame and associates
that address with a single egress port, the current MP generates a unicast Link Trace Reply (LTR)
to the initiating MEP and forwards the LTM to the target MEP destination MAC address.
Otherwise the LTM frame is relayed unchanged to all egress ports associated with the MA
except for the port from which the message was received.
The path of a short link trace sequence is illustrated in the following figure.
Figure 11-34: Link trace
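The LTM handling decision listed above can be expressed as a short sketch. The FDB lookup and message helpers (fdb_lookup, send_unicast_ltr, forward) are hypothetical names that simply mirror the decision logic in the text.

```python
# Sketch of the link trace (LTM) handling rule described above.

def handle_ltm(mp, ltm):
    egress_ports = mp.fdb_lookup(ltm.target_mac)          # ports associated with the target MAC
    if egress_ports is not None and len(egress_ports) == 1:
        mp.send_unicast_ltr(to=ltm.initiator_mep)          # unicast LTR to the initiating MEP
        mp.forward(ltm, port=egress_ports[0])               # forward LTM towards the target MEP
    else:
        # Relay the LTM unchanged to all MA ports except the one it arrived on.
        for port in mp.ma_ports:
            if port != ltm.ingress_port:
                mp.forward(ltm, port=port)
```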
CFM Alarm Management: Various types of CFM alarms can be received at the service level when
Alarms functionality is enabled for an MA.
CFM-PM (Y.1731) performance management operations are configured through the Performance
Management windows. The selected service name appears at the top of the window. For example, you
would configure a DM session through the Set DM Session pane, used to define a new DM session or to
reconfigure an existing DM session.
Figure 11-35: Set DM Session pane
The throughput test must be performed for each frame size. The test time during which frames are
transmitted must be at least 60 seconds. Each throughput test result is recorded in a report, using frames
per second (f/s or fps) or bits per second (bit/s or bps) as the measurement unit.
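Since results may be reported in either unit, the small helper below shows how a frames-per-second figure converts to bits per second for a given frame size. Whether to include the 20-byte preamble and inter-frame gap depends on whether payload rate or line rate is being quoted; the example values are illustrative.

```python
# Convert a throughput result from frames per second to bits per second.

def fps_to_bps(fps: float, frame_size_bytes: int, include_line_overhead: bool = False) -> float:
    overhead = 20 if include_line_overhead else 0   # preamble (8 B) + inter-frame gap (12 B)
    return fps * (frame_size_bytes + overhead) * 8

# Example: 64-byte frames at 14,880,952 f/s correspond to ~10 Gbps of line rate.
if __name__ == "__main__":
    print(round(fps_to_bps(14_880_952, 64, include_line_overhead=True) / 1e9, 3), "Gbps")
```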
Connection type
QoS (including VLAN information), traffic type (data vs management), etc.
Bandwidth profile: CIR, CBS, EIR, EBS, CF, and CM
Performance criteria: FTD, FDV, FLR, AVAIL, etc.
The service bandwidth is called the bandwidth profile, and the SLA features are called the Service Acceptance
Criteria (SAC). The bandwidth profile specifies the traffic volume allowed for the client and the way in
which the frames are prioritized within the network. The following values describe the service bandwidth
profile:
Committed Information Rate (CIR)
Excess Information Rate (EIR)
Committed Burst Size (CBS)
Excess Burst Size (EBS)
Color mode (CM)
The service acceptance criteria are a set of attributes defining the performance objectives. These
values define the minimum requirements that ensure the service complies with the Service Level Agreement
(SLA).
The service acceptance criteria include the following values:
Frame Transfer Delay (FTD)
Frame Delay Variation (FDV)
Frame Loss Ratio (FLR)
Availability (AVAIL)
The test methodology checks if the service is in accordance with the bandwidth profile and with the
acceptance criteria. It includes two phases:
Service configuration test. The services running on the same line are tested one by one, to check the
correct provisioning of their profile.
Service performance test. The services functioning on the same line are tested simultaneously for a
significant period of time, to check the robustness of the network.
The built-in tester is based on ITU-T Y.1564 and is supported in the NPT-1200 with MCIPS320 and in the
NPT-1800. The test performed by the tester is an out-of-service application that checks the SLA performance
of any connection before it is commissioned.
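For orientation, the following sketch shows how a Y.1564 test definition might combine the bandwidth profile and the SAC described above, and how measured results could be checked against the SAC. The field names and values are placeholders introduced for the example, not the tester's actual configuration model.

```python
# Illustrative data model for a Y.1564 test definition and SAC check.
from dataclasses import dataclass

@dataclass
class BandwidthProfile:
    cir_mbps: float       # Committed Information Rate
    eir_mbps: float       # Excess Information Rate
    cbs_kb: int           # Committed Burst Size
    ebs_kb: int           # Excess Burst Size
    color_mode: str       # e.g. "color-aware" or "color-blind"

@dataclass
class ServiceAcceptanceCriteria:
    max_ftd_ms: float     # Frame Transfer Delay
    max_fdv_ms: float     # Frame Delay Variation
    max_flr: float        # Frame Loss Ratio
    min_avail: float      # Availability

def service_passes(measured: dict, sac: ServiceAcceptanceCriteria) -> bool:
    """Compare measured results against the SAC (the SLA minimum requirements)."""
    return (measured["ftd_ms"] <= sac.max_ftd_ms
            and measured["fdv_ms"] <= sac.max_fdv_ms
            and measured["flr"] <= sac.max_flr
            and measured["avail"] >= sac.min_avail)
```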
11.11 DMXE_22_L2 TM
The DMXE_22_L2 card has a unique and intelligent Traffic Management (TM) function, which enables reliable
provisioning of different SLA levels. For example, policer profiles encapsulating the bandwidth parameters
defined for Ethernet services are one of the tools used by the TM, allowing greater flexibility when managing
different customer scenarios.
Basically, the TM has a simple architecture that provides a high capacity infrastructure (up to 10 GbE) for
access rings with a small amount of capacity per node.
Below are the basic building blocks for access applications for the egress 10GE traffic flow and traffic
management.
Ingress classification
On ingress all traffic is classified into two groups:
High CoS - traffic is CIR only
Low CoS
Egress scheduling
In general, strict priority is implemented between High CoS and Low CoS traffic.
Either High CoS or Low CoS traffic can reach the 10 Gbps line rate, with burst handling provided by a tail
drop algorithm.
The 10 GbE port egress queue has a threshold as shown in the following figure.
Figure 11-36: 10 GbE port egress queue threshold
Low CoS traffic is checked against the 10 GbE port egress queue threshold. If the threshold is reached, the
packet is discarded.
High CoS traffic is colored by a 10 Gbps token bucket. If the packet color is green, it is admitted to the egress
queue. If the packet color is red, the egress queue threshold is checked; if the threshold is reached, the
packet is discarded.
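For illustration, the admission logic just described can be condensed into a short Python sketch. The queue and token bucket objects are hypothetical helpers; only the decision flow mirrors the text.

```python
# Sketch of the 10 GbE port egress admission logic described above.

def admit_to_egress_queue(packet, cos, queue, token_bucket_10g) -> bool:
    if cos == "low":
        # Tail drop: discard if the egress queue threshold has been reached.
        return not queue.threshold_reached()

    # High CoS: color against the 10 Gbps token bucket.
    color = token_bucket_10g.color(packet.length)
    if color == "green":
        return True                          # green packets always enter the egress queue
    # Red packets are admitted only while the queue threshold is not reached.
    return not queue.threshold_reached()
```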
The following figure shows a P2MP tunnel that flows from P1 to P2, where it branches towards destination
PEs (PE3 and PE4). If P1 detects that the link to P2 has failed, it switches the traffic to the bypass tunnel.
When the rerouted traffic merges at P2, the FRR label is removed.
Figure 11-40: P2MP link protection example
With facility backup FRR node protection for a P2MP tunnel, the node upstream from the failure redirects
the traffic through a bypass tunnel that merges with the original P2MP tree at the NNH node. If the NH is a
P2MP branching point to N links, N bypass tunnels are required for complete protection.
The following figure shows a P2MP tunnel that flows from P1 to P2, where the tunnel branches towards
destinations PE3 and PE4. If the P2 branching point fails, P1 switches all traffic meant for PE3 to go through
bypass tunnel 1 to PE3. P1 also switches all traffic meant for PE4 to go through bypass tunnel 2 to PE4.
Figure 11-41: P2MP node protection example
FRR protection provides alternative traffic routes. These routes are activated if a connection link or a
connecting node fails. The following figure shows a portion of a P2MP tunnel. Node PE2 connects to both
transit and tail subtunnels. The transit subtunnel leads to node PE3, and the tail subtunnel terminates at
the access port of PE2. To fully protect the tunnels leading from PE2, the preceding node PE1 has been
designated the PLR. Protection bypass tunnel B1 runs from PE1 to PE2, providing link protection in case the
link from PE1 to PE2 fails. Protection bypass tunnel B2 runs from PE1 to PE3, providing node protection in
case node PE2 fails. Note that both link and node protection is required for this network configuration,
since node protection alone does not provide a backup for the subtunnel that terminates at PE2.
Figure 11-42: FRR protection typical scenario
This scenario is a classic illustration of the traffic duplication problem which, when it occurs, invalidates all
the traffic of the P2MP tunnel. If link PE1-PE2 fails and triggers both link and node protection, protective
traffic can be sent via bypass tunnel B1 (to reach node PE2) as well as via bypass tunnel B2 (to reach nodes
PE3 and continue to node PE4). Because node PE2 is also a tail endpoint for B1, node PE2 forwards the traffic
it has received onto PE3 along the P2MP tunnel. Therefore, PE3 receives two copies of
the packet (one from PE2 and one over B2), and the traffic is thus rendered useless.
To resolve this problem, the data cards implement a method called Dual FRR. A single bypass tunnel is
defined that provides both link and node protection simultaneously. A corresponding rule is defined to
avoid traffic duplication. The Dual FRR bypass tunnel originates at PE1, the point of local repair, then drops
node-protected traffic at PE3, the node protection merge point, and continues on to drop link-protected
traffic at PE2, the link protection merge point. The protective behavior at node PE3 can be referred to as
drop-and-continue. The traffic packets dropped at PE2 as part of Dual FRR are identified as such and
therefore are not transmitted back to PE3, thus avoiding the problem of traffic duplication. Dual FRR
enables concurrent link and node protection. In this example, Dual FRR works in the event of a failure of
the link between PE1 and PE2 and/or failure of the node PE2. This is illustrated in the following figure.
Figure 11-43: Dual FRR protection
In this H-VPLS network, the dual-homed PE has configured spoke PWs to H-VPLS gateways PE1 and PE2.
One of the gateway PEs is currently active, linked to the dual-homed PE via the primary PW. The primary PW is given priority by
the EMS and is responsible for forwarding traffic to the peer H-VPLS domain. Failure of an H-VPLS gateway
PE generates an OAM defect, which in turn triggers the dual-homed PE to select a new primary PW. A
hold-off timer can be used to mask temporary server layer faults.
Another option for triggering PW redundancy is to use the PW status from the gateway PE. The end-to-end PW
traverses two H-VPLS domains, and tunnel OAM is maintained over each domain. Hence, if a failure in
Domain #2 is not recovered by the tunnel protection, the gateway PE marks the PW as down and generates a
defect status message towards the pivot node, which triggers a PWR switch.
A PW switchover requires an FDB flush at PE1, PE2, and the far H-VPLS domain. This is achieved by the
transmission of CCN messages between data cards that indicate for which PE(s) the FDB entries should be
deleted (see Configuring CCN).
PW Redundancy can also be used for load balancing between the H-VPLS gateways. By configuring some
PEs with the primary PWs toward PE1 (where PE1 becomes the default H-VPLS gateway), and other PEs
with primary PWs toward PE2, the traffic load can be reasonably balanced between the two gateway PEs.
NOTE: In dual-homing to H-VPLS topology, BFD must be used to monitor the status of the
remote PE and the status of the transport layer, in order for the pivot PE to select the
appropriate PW. BFD should therefore be enabled on the tunnel carrying the PW (see
Configuring MPLS-TP Linear Protection).
11.13.9 Multi-segment PW
An L2VPN multisegment pseudowire (MS-PW) is a set of two or more PW segments that function as a single
PW, as illustrated in the following figure. The routers participating in the PW segments are identified as
switching provider edge (S-PE) routers, which are located at the switching points connecting the tunnels of
the participating PW segments, or terminating provider edge (T-PE) routers, which are located at the
MS-PW endpoints. The S-PE routers can switch the control and data planes of the preceding and
succeeding PW segments. MS-PWs can span multiple cores or autonomous systems of the same or
different carrier networks.
Figure 11-47: Stitching PE
MS-PW service enables a hierarchical network structure for data networks, similar to H-VPLS capabilities.
MS-PW functionality improves scalability, facilitates multi-operator deployments, and facilitates use of
different control plane techniques in different domains. These are valuable capabilities in network
configurations that must typically be able to integrate static PW segments in the access domains and
signaled PW segments in the IP/MPLS core.
Signaling gateways (SGW) are used to tie PW segments together into a single connection (stitching) at a
given point. This functionality is implemented within a single platform located at the border of two network
domains. The two domains may both be static, both dynamic, or one static and one dynamic. Network
interworking enables LSP and service stitching, interaction between the data planes, and E2E OAM.
Figure 11-48: Signaling gateway concept
MPLS-TP and IP/MPLS domains can be connected through SGWs. In PW-based backhaul, this is
implemented through multisegment PWs (MS-PWs), including:
Static MPLS-TP segments
Dynamic IP-MPLS segments
Gateway interconnections or "stitches" of both types of segments
In the current Neptune Hybrid products, MS-PWs are used to stitch together static MPLS-TP segments.
With NPT-1800 and NPT-1200 with MCIPS320, MS-PWs can also be configured as SGWs, stitching together
static and dynamic segments. MS-PWs make it possible to offer a single E2E service that seamlessly spans
network domains, simplifying service management and OAM.
Figure 11-49: PW switching point
The following figure shows the link aggregation approach. Two variations are displayed, one for Ethernet
(MoE) ports and one for EoS WAN ports.
Figure 11-50: LAG: link aggregation examples
MC-LAG improves the performance of data networks and provides higher network protection with
improved reliability. It extends the link-level redundancy capabilities of link aggregation and adds
support for device-level redundancy. This is achieved by allowing one end of the link aggregated port
group to be dual-homed into two different devices to provide device-level redundancy.
In the MC-LAG protection scheme, the CE behaves as a normal LAG device from the perspective of hashing
and traffic distribution. The PE1 and PE2 devices communicate with each other, exchanging LAG messages by
multi-chassis LACP (mLACP) over the Inter-Chassis Communication Protocol (ICCP).
As a result of this communication, the group of member ports on a PE is either Active or Standby. When the
member ports are active, load sharing is applied locally between the ports. Since ports can be active only
on one PE, the two PEs exchange port status information between them, so PE1 knows if the LAG on PE2 is
up or down, and vice versa. If the LAG contains multiple ports, the LAG Link Down Threshold is used to decide
whether the LAG is up or down. A local decision is made on each PE whether to activate its local ports or to
keep them on standby. Equipment failure of the peer PE is detected via OAM (MPLS-TP BFD) and triggers the
local LAG to activate its ports.
When a failure is detected, the system reacts by triggering a switchover from the Active PE to the Standby
PE.
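The local decision process described above can be illustrated with a brief sketch. The function names, the tie-break rule, and the way peer state is combined are assumptions made for the example; they are not the mLACP/ICCP implementation.

```python
# Sketch of the local MC-LAG active/standby decision described above: each PE
# compares its operational member ports against the LAG Link Down Threshold and,
# together with peer state learned over ICCP/mLACP and BFD, decides its role.

def local_lag_up(up_ports: int, link_down_threshold: int) -> bool:
    """A multi-port LAG is considered up while enough member ports are operational."""
    return up_ports >= link_down_threshold

def decide_role(local_up: bool, peer_lag_up: bool, peer_alive: bool, prefer_local: bool) -> str:
    if not local_up:
        return "standby"                 # cannot carry traffic locally
    if not peer_alive or not peer_lag_up:
        return "active"                  # peer failure detected (e.g. via BFD) or peer LAG down
    # Both sides usable: only one PE may be active; tie-break is configuration-driven (assumed).
    return "active" if prefer_local else "standby"
```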
A link failure along LSP1 automatically triggers 1:1 protection and traffic is redirected to LSP2, the
original protection path.
Figure 11-53: Automatic restoration: phase 2 (link failure)
The NMS now recalculates and downloads a new path, restoring traffic to most of the original LSP1
route while bypassing the link failure.
Figure 11-54: Automatic restoration: phase 3 (NMS recalculates)
If multiple link failures are detected in the original LSP, LightSOFT dynamically restores the relevant tunnels
by configuring alternative routes, working link by link and taking all active failures into account when
performing restoration. As the participating links are repaired, LightSOFT reverts the tunnels where
possible to the original links. Network restoration is a dynamic, flexible feature that intelligently chooses
the most efficient route based on the current network status, correlating all affected tunnels and
identifying the optimal path for the current network topology. As link failures are fixed,
LightSOFT efficiently reverts the affected tunnels, correlating the tunnels and repaired links and completing
either full or partial reversions.
Automatic network restoration can be configured for protected and unprotected tunnels, for either one or
both main and protection paths. Operators can choose how they prefer to optimize resource usage, either
maximizing disjoint route selection or focusing on resource sharing to minimize resource utilization.
Network restoration provides protection from multiple network failures, since new LSP paths are
dynamically prepared and ready for use before they are needed.
You can view the tunnel status in the Tunnel List window. In the event of a failure, a dotted line indicates
the original path of the tunnel and a solid line of the same color indicates the active (restoration) path.
Figure 11-55: Tunnel restoration
Intelligent use of CCN enhances network resiliency and enables more effective use of dual-homed device
protection in dual homing scenarios as well as H-VPLS networks. In some H-VPLS dual homing topologies,
when there is a need for CCN to cross VPLS domains, CCN forwarding can be enabled on the relevant NEs.
Figure 11-56: CCN functionality
The Neptune supports mesh and ring traffic protection schemes through Dual Node Interconnection (DNI), Dual
Ring Interconnection (DRI), and restoration. The restoration mechanism ensures traffic rerouting in the
event of a major contingency. Telecom operators can define their own major contingencies based on
individual operating parameters. Traffic restoration time generally depends on network complexity and
traffic load.
For more information about the traffic restoration feature, see the LightSoft User Manual and the relevant
EMS user manuals.
11.14.1 SNCP
SNCP provides independent trail protection for individual subnetworks connected to the Neptune Product
Line platforms. Combined with the system’s drop-and-continue capability, SNCP is a powerful defense
against multifailure conditions in a mesh topology. By integrating SNCP into the Neptune Products,
operators achieve superior traffic availability figures. Therefore, SNCP is extremely important for leased
lines or other traffic requiring superior SLA availability.
SNCP/N and SNCP/I at any VC level (VC-4, VC-3, VC-12) are supported. The SNCP mode can be configured
through the EMS-APT/LCT-APT per VC. Automatic SNCP switching is enabled, without operator intervention
or path redefinition. The Neptune Product Line supports path protection by TDM based matrices, such
as XIOxx, CPTSxxx, and MCPTSxxx. The result is exceptionally fast protection switching in less than 30 msec,
with typical switching taking only a few milliseconds. Protection switching is performed via the
cross-connect matrix in the XIOxx, CPTSxxx, and MCPTSxxx cards.
11.14.2.1 MSP
MSP is designed to protect single optical links. This protection is most suitable for appendage TM/star links
or for 4-fiber links in chain topologies.
The Neptune supports MSP in all optical line cards (STM-1, STM-4, STM-16, and STM-64). MSP 1+1
unidirectional and bidirectional modes are supported. MSP 1+1 is implemented between two SDH
interfaces (working and protection) of the same bitrate that communicate with two interfaces on another
platform. As with SNCP and path protection, in MSP mode the Neptune provides protection for both fiber
and hardware faults.
The following figure shows a 4-fiber star Neptune with all links protected. This ensures uninterrupted
service even in the case of a double fault. The Neptune automatically performs MSP switching within
50 msec.
Figure 11-57: MSP protection modes
11.14.2.2 MS-SPRing
In addition to SNCP protection that may also be implemented in mesh topologies, the Neptune supports
MS-SPRing that provides bandwidth advantages for selected ring-based traffic patterns.
Two-fiber MS-SPRing supports any 2.5 Gbps and/or 10 Gbps rings closed by the Neptune via
XIO30_16/XIO64/XIO16_4/CPTS100/CPTS320 cards, in compliance with applicable ITU-T standards. Protection
switching is fully automatic and performed in less than 50 msec.
NOTES:
In the NPT-1030 and NPT-1200 products, MS-SPRing is supported by the following card
sets:
XIO30_16
XIO64
XIO16_4
CPTS100
As explained in this section, MS-SPRing is a network protocol that runs on the ring
aggregate cards. The PDH, STM-1, STM-4, and data cards (electrical and optical) that serve
as drop cards connected to the client are not part of the MS-SPRing ring protocol.
However, all client services can be delivered via MS-SPRing on Neptune networks through
the drop cards and the SDH aggregate cards that create the MS-SPRing protection ring.
MS-SPRing can support LO traffic arriving at the nodes in the same way it does HO traffic.
In MS-SPRing modes, the STM-n signal is divided into working and protection capacity per MS. In case of a
failure in one MS of the ring, the protection capacity loops back the affected traffic at both ends of the
faulty MS. The platform supports the full squelching protocol to prevent traffic misconnections in cases of
failure at isolated nodes. Trails to be dropped at such nodes are muted to prevent their being delivered to
the wrong destination.
MS-SPRing is particularly beneficial in ring applications with uniform or adjacent traffic patterns, as it offers
significant capacity advantages compared to other protection schemes.
The following figure shows a Neptune in a 2-fiber MS-SPRing. In this configuration, two fibers are
connected between each site. Each fiber delivers 50% of the active traffic and 50% of the shared protection
traffic. For example, in an STM-16 ring, 8 VC-4s are active and 8 VC-4s are reserved for shared protection.
In the event of a fiber cut between sites A and D, traffic is transported through sites B and C on the black
portion of the counterclockwise fiber. The switch in traffic is triggered by the APS protocol that transmits
control signals over the K1 and K2 bytes in the fiber from site D to site A.
Figure 11-58: Two-fiber protection
The preceding figure portrays two endpoints linked by main and protection paths. Two links are configured
between the two paths, represented by the X shape link topology in the center of the figure. The first fiber
cut on the main path (labeled A), triggers a switch at both endpoints from the main path to the protection
path. A second fiber cut on the protection path (labeled B), triggers a switch at the appropriate points from
the protection path back to the main path. After each fiber cut, the optical equipment used at the DRI
configured nodes at either end of the DRI links must also switch their internal Rx/Tx settings accordingly.
If PM on the main transponder/combiner does not indicate a problem, a message is sent through the
backplane to the protection transponder/combiner for it to shut down its laser to the client, thereby
ensuring transmission to the client from only one transponder/combiner (the main). Protection switching
to the protection transponder/combiner occurs automatically when a failure is detected by the main
transponder/combiner.
The protected channels in the following figure are user selected.
Figure 11-61: OCH 1+1 protection
OCH protection is currently the most popular protection method for the optical layer. The
mechanism transports each optical channel in two directions, clockwise and counterclockwise. The shortest
path is defined as the main or working channel; the longer path as the protection channel.
The main benefit of OCH protection is its ability to separately choose the shortest path as the working path
for each channel. There are no dedicated working and protection fibers. Each fiber carries traffic with both
working and protection signals in a single direction.
The OCH 1+1 protection scheme provides separate protection for each channel. For SDH, GbE, and 10G
services, protection switching is based on PM parameters. Switching criteria can be Loss of Signal (LOS), Loss
of Frame (LOF), or Degraded Signal (SD). The switch-to-protection mode is automatic when a malfunction is
detected in a single channel. This is very convenient as users can choose the channels for protection and
the main or protection paths. Switch-to-protection time in the OCH 1+1 protection scheme is less than
50 msec.
With the MXP10, you can choose any combination of protected network traffic, unprotected traffic, fully
protected traffic including client port protection, and so on. Dual homing from access to ring is also
supported.
With traditional Fast IOP, a link failure between DM #1 and the router would result in traffic loss, since DM
#2 remains designated as standby. This means that the router would not be able to find any route available
for traffic. To prevent this loss of traffic, the links are configured over splitter/coupler cables that link both
cards to the router ports (as illustrated in the figure Fast IOP: 1+1 Card Protection).
DM cards resolve this problem through the use of eIOP, by adding LOS as an IOP trigger on selected LAN
ports. With eIOP, a failure on the link to the active DM card triggers an IOP switchover. DM #2 becomes
active and activates transmissions on the LAN ports. The router detects this link is now up and
sets/advertises a new traffic route. Traffic is restored.
With eIOP, the splitter/coupler cable is no longer required. A regular fiber cable can be used between the
DM cards and the router, as illustrated in the preceding figure. This frees a port on each DM card to carry
additional traffic.
Protected cards: One or two tributary card(s) (one for a 1:1 scheme) can be selected as protected
cards. A protected card can have existing trails. This means that TP can be performed for a card
carrying traffic, without removing existing traffic.
Associate the protecting card and protected cards with a proper TP card.
The Neptune has three types of managed TP cards:
TPEH8_1
TPS1_1
TP63_1
The following tables list the various tributary protection options for the platforms.
11.16.5.1 TP63_1
The TP63_1 provides 1:1 protection for two PE1_63 cards installed in the EXT-2U platform and PME1_63
cards in the base unit. It is activated by the MCP1200, enabling a single I/O backup card to protect the main
(working) I/O card when a failure is detected.
11.16.5.2 TPS1_1
The TPS1_1 provides 1:1 protection for up to four high rate interfaces. It is activated by the MCP1200
according to the corresponding platform it is installed on, enabling a single I/O backup module to protect
the main (working) card when a failure is detected.
The TPS1_1 is connected as follows:
The traffic connectors on the protection I/O module are connected to the PROTECTING CARD1 coaxial
8W8 connector on the TPS1_1.
The traffic connectors on the active I/O module are connected to the PROTECTED CARD2 coaxial 8W8
connector on the TPS1_1.
The traffic cables from the DDF are connected to the CUSTOMER CONNECTION connectors on the
TPS1_1.
11.16.5.3 TPEH8_1
The TPEH8_1 provides 1:1 protection for up to eight electrical Ethernet interfaces (10/100/1000BaseT). It is
activated by the MCP1200, enabling a single I/O backup module to protect the main (working) card when a
failure is detected.
The card design also supports the protection of two separate modules, each with up to four electrical
Ethernet ports. The markings on the TPEH8_1 are divided into two groups that indicate such an option.
The TPEH8_1 is connected as follows:
The customer's Ethernet traffic is connected to the four RJ-45 connectors marked CUSTOMER
CONNECTION 1.
The protected (operating) module is connected to the SCSI connector marked PROTECTED CARD 1.
The protecting (standby) module is connected to the SCSI connector marked PROTECTING CARD 1.
The second group of connectors marked with the suffix 2 is connected similarly for protecting a
second set of four electrical Ethernet interfaces.
The purpose of the protection module is to replace a malfunctioning I/O card automatically with the
redundant I/O card. When the protection is activated, the protection module disconnects the external
ports connected to the electrical protection module of the malfunctioning I/O card and connects them to
the redundant card. In parallel, the matrix card switches the traffic from the malfunctioning card slot to the
protection slot (the slot of the redundant I/O card).
11.17 Security
Comprehensive security mechanisms protect both the complete transport network and individual clients
within the network. ECI is committed to incorporating powerful, advanced security technology and
methodology across the full range of our product offering. The current Neptune release includes certain
new security features, with additional key security enhancements now in development, to be implemented
in upcoming releases.
EMS-APT can be upgraded to apply enhanced security settings to the EMS and to selected NEs managed by
the EMS. Communication channels between entities with enhanced security settings are secured, and
information is sent via the SSH-2 protocol.
The main security functions are implemented through the following functionality:
Radius clients (authentication, and two levels of authorization – viewer and administrator)
SSH V2.0 and SFTP
SW integrity based on SHA-2
Public key authentication for NEs
Extensible Authentication Protocol (EAP) – the protocol used between the client and the
authenticator. The 802.1x protocol specifies encapsulation methods for transmitting EAP messages so
they can be carried over different media types.
Port Access Entity (PAE) – the 802.1x "logical" device of the client and authenticator that exchanges
EAP messages.
12.1 RAP-4B
The RAP-4B is a power distribution and alarm panel for ECI platforms installed in racks.
NOTE: The RAP-4B supports operation with BG, XDM (100, 300, 900), 9600 series, and
OPT9603 platforms.
Power distribution for up to four protected platforms installed on the same rack. The nominal DC
power voltage is 48 VDC or 60 VDC. Since the supported platforms can use redundant power sources,
the RAP-4B supports connection to two separate DC power circuits.
Each DC power circuit of each platform is protected by a circuit breaker, which also serves as a power
on/off switch for the corresponding circuit. The required circuit breakers are included in the
installation parts kit supplied with the platforms, and therefore their current rating is in accordance
with the order requirements. The CB rating installed in the RAP-4B for feeding a single platform is
max. 35 A. The total power that can be provided by the RAP-4B is max. 4 x 1.1 kW (4.4 kW).
NOTE: The maximum power that can be supplied by the RAP-4B to a single platform is not
more than 1.1 kW.
The circuit breakers are installed during the RAP-4B installation. To prevent accidentally changing a
circuit breaker state, the circuit breakers can be reached only after removing the RAP-4B front cover.
The circuit breaker state (ON/OFF) can be seen through translucent covers.
Bay alarm indications: The RAP-4B includes three alarm indicators, one for each alarm severity. When
alarms of different severities are received simultaneously, the different alarm indications light
simultaneously.
NOTE: BG platforms support only two alarm indications, Major and Minor.
A buzzer is activated whenever a Major or Critical alarm is present in an XDM platform or a Major
alarm in a BG or 9600 series platform connected to the RAP-4B.
Connection of alarms from up to four platforms, with max. four alarm inputs and two alarm outputs.
The following figure shows the front panel of the RAP-4B, and the table lists the functions of the front panel
components corresponding to the figure callout numbers.
Figure 12-1: RAP-4B front panel
The RAP-4B alarm connectors are on its circuit board, as shown in the following figure. The table lists the
connector functions. The index numbers in the table correspond to those in the figure.
Figure 12-2: RAP-4B alarm connectors
12.2 RAP-BG
The RAP-BG is a DC power distribution panel for BG and other telecommunication platforms installed in
racks. It distributes power for up to four NPT series platforms installed on the same rack. The nominal DC
power voltage is -48 VDC, -60 VDC, or 24 VDC. Since NPT series platforms can use redundant power
sources, the RAP-BG supports connection to two separate DC power circuits.
Each DC power circuit of each platform is protected by a circuit breaker, which also serves as a power
ON/OFF switch for the corresponding circuit. The required circuit breakers are included in the installation
parts kit supplied with the NPT series platforms, and therefore their current rating is in accordance with the
order requirements. The maximum current that can be supplied to a platform fed from the RAP-BG is 16 A.
The circuit breakers are installed during the RAP-BG installation. To prevent accidental changing of a circuit
breaker state, the circuit breakers can be reached only after opening the front cover of the RAP-BG. The
circuit breaker state (ON or OFF) can be seen through translucent covers.
The following figure shows the front panel of the RAP-BG, and the table lists the functions of the front
panel components as indicated by the figure callouts.
12.3 xRAP-100
The xRAP-100 is a power distribution and alarm panel for different ECI communication platforms installed
in racks. The xRAP-100 performs the following main functions:
Power distribution for up to four platforms: The nominal DC power voltage is -48 VDC or -60 VDC.
Since most ECI platforms can use redundant power sources, the xRAP-100 supports connection to two
separate DC power circuits. The internal circuits of the xRAP-100 are powered whenever at least one
power source is connected. The presence of DC power within the xRAP-100 is indicated by a POWER
ON indicator.
Each DC power circuit of each platform is protected by a circuit breaker, which also serves as a power
ON/OFF switch for the corresponding circuit. The required circuit breakers are included in the
installation parts kit supplied with the platforms, and therefore their current rating is in accordance
with the order requirements. The 5-pin high power connector supplies power to one platform. The
3-pin connector supplies power to three platforms.
The xRAP-100 is designed to support one high-powered platform and three regular platforms, or four regular platforms.
The circuit breakers are installed during the xRAP-100 installation. To prevent accidental changing of a
circuit breaker state, the circuit breakers can be reached only after opening the front cover of the
xRAP-100. The circuit breaker state (ON or OFF) can be seen through translucent covers.
Bay alarm indications: The xRAP-100 includes four alarm indicators, one for each alarm severity. When alarms of different severities are received simultaneously, the corresponding indicators light simultaneously.
A buzzer is activated whenever a Major or Critical alarm is present in the platforms installed in the rack (a short illustrative sketch of this behavior follows this list).
Connection of alarms from up to four platforms, each one with a maximum of four alarm inputs and
two alarm outputs.
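To make the indicator and buzzer behavior described above concrete, the following short sketch models it. The buzzer rule (Major or Critical) is taken from this section, while the severity names, the function, and the data structure are purely illustrative assumptions.

# Illustrative model of the xRAP-100 bay alarm behavior described above:
# one indicator per severity, all relevant indicators lit simultaneously,
# and a buzzer for Major or Critical alarms. Severity names are assumed.
SEVERITIES = ("Critical", "Major", "Minor", "Warning")

def panel_state(active_alarms):
    """Return the set of lit indicators and whether the buzzer sounds."""
    lit = {sev for sev in SEVERITIES if sev in active_alarms}
    buzzer = bool(lit & {"Critical", "Major"})
    return lit, buzzer

# Example: a Critical and a Minor alarm received simultaneously
lit, buzzer = panel_state({"Critical", "Minor"})
print(sorted(lit), "buzzer on" if buzzer else "buzzer off")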
The following figure shows the front panel of the xRAP-100, and the table lists the functions of the front
panel components as indicated by the figure callouts.
The xRAP-100 connectors are on its circuit board, as shown in the following figure. The table lists the
connector functions. The index numbers in the table correspond to those in the figure.
12.4 AC/DC-DPS850-48-3
The state-of-the-art power supply is designed to match the constant-power characteristics of modern telecom loads, and thus reduces the number of rectifiers required in battery-backed-up systems. The AC/DC-DPS850-48-3 features a modular architecture for easy system maintenance and repair.
Figure 12-6: AC/DC-DPS850-48-3 power system, general view
The AC/DC-DPS850-48-3 system has connectors for connecting the load, batteries, AC source, and the
system alarms, at the rear of the unit. It also includes circuit breakers that protect the power supply against
load overcurrent at the battery and rectifier outputs.
The CSU-502 module provides control and monitoring functions. It is supplied preconfigured, ready for immediate use. System voltage, load current, status, and alarm settings can be displayed and changed on the LCD display.
The AC/DC-DPS850-48-3 platform is preconfigured for fast installation and setup. All system settings are
fully software-configured and stored in transferable configuration files for repeated one-step system setup.
The AC/DC-DPS850-48-3 platform is supplied with two kits of brackets for installation in 19" or ETSI racks.
The following main features are supported by the AC/DC-DPS850-48-3 power system:
19"/ETSI power platform for 48 VDC @ 2250 W (max.) in non-redundant applications (see the illustrative calculation after this list)
Single phase 220 VAC input source
Three DPR-850B-48 rectifier units
Light weight plug-in modules for simple installation and maintenance
Hot swappable rectifier and control modules
Front access to the circuit breakers and control module for simplified operation and maintenance
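As an illustrative back-of-the-envelope check (not an ECI sizing rule), the figures above translate into the following full-load current and per-rectifier share, assuming the non-redundant 2250 W maximum is shared equally across the three DPR-850B-48 rectifiers:

# Illustrative check of the AC/DC-DPS850-48-3 figures quoted above.
# Assumes the 2250 W non-redundant maximum is shared equally by the three
# DPR-850B-48 rectifiers; this is a sketch, not a sizing rule.
MAX_OUTPUT_W = 2250.0   # maximum output power, non-redundant application
NOMINAL_VDC = 48.0      # nominal system voltage
NUM_RECTIFIERS = 3      # DPR-850B-48 rectifier units

full_load_current_a = MAX_OUTPUT_W / NOMINAL_VDC    # about 46.9 A at the load bus
per_rectifier_w = MAX_OUTPUT_W / NUM_RECTIFIERS     # about 750 W per rectifier
print(f"Full-load current: {full_load_current_a:.1f} A")
print(f"Per-rectifier share: {per_rectifier_w:.0f} W")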
To aid understanding, enlarged views of sections of the rear panel are provided in the following figures.
The AC/DC-DPS850-48-3 detailed battery and load connections are shown in the following figure.
The AC/DC-DPS850-48-3 AC source and alarm connections are shown in the following figure.
The following table describes the AC/DC-DPS850-48-3 rear panel component functions (connections).
12.5 ICP_MCP30
Due to limited space on the MCP30 or MCP1200 panel, there is a single connector on the front panel for the following auxiliary interfaces: External Alarms, RS-232, OW, and V.11. The ICP_MCP30 breaks out this concentrated Auxiliary connector into dedicated connectors for each function. If none of these interfaces is used in your application, there is no need to install the ICP_MCP30. If only an External Alarms interface is used, there is also no need to install the ICP_MCP30, since ECI provides a special alarm cable leading only to the External Alarms interface.
The ICP_MCP30 is connected to the MCP30 or MCP64 using a back-to-back cable.
J3 (SCSI-36): Connector for the special cable connecting the ICP to the applied SM_10E module
CH1-CH4 (RJ-45):
SM_FXO_8E: FXO interface, channel #1 to channel #4
SM_FXS_8E: FXS interface, channel #1 to channel #4
SM_EM_24W6E: 2/4-wire E&M interface, channel #1 to channel #4
NOTE: ICP-V24F supports connection to V.24 interfaces with standard female connectors.
12.7 AC_CONV_UNIT
The AC_CONV_UNIT is an AC power platform that can be mounted separately in the rack. It performs the
following functions:
Converts AC power to DC power
Filters input for the NPT-1600CB platform
Provides backup for AC power
12.8 AC_CONV_MODULE
The AC_CONV_MODULE is an AC power module that can be plugged into the AC_CONV_ UNIT. It performs
the following functions:
Converts AC power to DC power for the NPT-1600CB only
Filters input for the NPT-1600CB platform
Provides up to 130 W of power
All fiber connections are made on a swing-out tray that opens to the right at 90° and houses the splicing
trays, optical adapter panels, and the fiber support. Left-side tray opening is available on request. The
swing-out tray enables quick and easy access to all internal parts for connection or maintenance activities.
The fiber connections are protected by a front cover, which latches to the assembly and prevents
unintended disconnection of fibers.
Optical terminal fibers can enter the ODF from the right or left side and be connected to the optical
adapters from one side. Pigtails connect to the adapters from the other side. Excess length of pigtails and
patch cords is threaded on a fiber support that maintains the minimum bend radius to prevent fiber breaks.
A durable and robust tube leads the external fiber cables to the swing-out tray and protects them from
breaks. The adapters are arranged on panels in groups of four or two (depending on the total number of
ports). A large space between the adapters enables easy access to each individual fiber and quick
reconfiguration.
12.12 xDDF-21
The PME1_21 supports only balanced E1s directly from its connectors. For unbalanced E1s, an external DDF
with E1 balanced-to-unbalanced conversion must be configured.
When unbalanced 75Ω interfaces are required, the xDDF-21 patch panel enables connection and conversion of these interfaces to the balanced 120Ω interfaces of the PME1_21.
The xDDF-21 is 1U high and can be installed in ETSI A and ETSI B racks, as well as in 19” racks. It has a
capacity of 21 E1 lines.
The following figure shows a general view of the patch panel. The channel numbers of the various
connectors are marked on the patch panel, and the inside of the cover contains a label for cable
identification (illustrated in the following figure). The customer’s cables are connected to the connectors
inside the patch panel, while the cable leading from the PME1_21 connector is connected to the SCSI
connectors at the rear of the xDDF-21. A special split cable is available to convert the output from the
PME1_21 to SCSI connector pairs at the back of the xDDF-21.
The xDDF-21 can be supplied with BT43, DIN1.6/5.6, or BNC connectors for connecting to the customer’s
traffic cables.
12.14 Cables
The product line platforms are supplied with a number of cables, as described in the following table.
EN 60870-2-2 (1996) Telecontrol equipment and systems – Part 2: Operating conditions – Section 2: Environmental conditions (class 3k6).
EN 60950-1 Information technology equipment – Safety – Part 1: General requirements
EN 61000-4-2:1995 +A1:1998 +A2:2001 Electrostatic discharge (ESD) immunity test
EN 61000-4-3:2008 Electromagnetic compatibility (EMC), Section 3: Radiated, radio-frequency, electromagnetic field immunity test
EN 61000-4-4: 2008 Electromagnetic compatibility (EMC), Section 4: Electrical fast transient/burst
immunity test
EN 61000-4-5: 2006 Electromagnetic compatibility (EMC), Section 5: Surge immunity test
EN 61000-4-6: 2007 Electromagnetic compatibility (EMC), Section 6: Immunity to conducted disturbances, induced by radio-frequency fields
EN 61000-6-2: Electromagnetic compatibility (EMC) - Part 6-2: Generic standards - Immunity for
industrial environments
EN 61000-6-4: Electromagnetic compatibility (EMC) - Part 6-4: Generic standards - Emission standard
for industrial environments
EN 61000-6-5 (2001) Generic standards – Immunity for power station and substation environments
EN 61850-3 (2002) Communication network and systems in substations – Part 3: General
requirements
ETR 114: Functional Architecture of SDH Transport Networks.
ETR 275: Considerations on Transmission Delay and Transmission Delay value for components on
connections supporting speech communication over evolving digital networks.
FTZ 1TR9: Deutsche Telekom A.G. EMC Requirements.
FTZ 153 TL 1part 1: Synchronous Multiplexing Equipment (SM) for Synchronous Multiplex Hierarchy.
IEEE 1588: IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement
and Control Systems
IEEE 1613 (2003) Environmental and testing requirements for communication networking devices in
electric power station (class B).
RFC 3443: Time To Live (TTL) Processing in Multi-Protocol Label Switching (MPLS) Networks.
RFC 3584: Coexistence between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework
RFC 3644: Policy quality of service (QoS) Information model
RFC 3670: Information model for describing network device QoS datapath
RFC 3812: Multiprotocol Label Switching (MPLS) Traffic Engineering (TE) Management Information
Base (MIB).
RFC 3916: Requirements for Pseudo-Wire Emulation Edge-to-Edge (PWE3).
RFC 3985: Pseudo Wire Emulation Edge-to-Edge (PWE3) Architecture.
RFC 4125: Maximum Allocation Bandwidth Constraints Model for Diffserv-aware MPLS Traffic
Engineering.
RFC 4126: Max Allocation with Reservation Bandwidth Constraints Model for Diffserv-aware MPLS
Traffic Engineering & Performance Comparisons.
RFC 4250: The Secure Shell (SSH) Protocol Assigned Numbers
RFC 4251: The Secure Shell (SSH) Protocol Architecture
RFC 4252: The Secure Shell (SSH) Authentication Protocol
RFC 4253: The Secure Shell (SSH) Transport Layer Protocol
RFC 4254: The Secure Shell (SSH) Connection Protocol
RFC 4379: Detecting Multi-Protocol Label Switched (MPLS) Data Plane Failures.
RFC 4448: Encapsulation Methods for Transport of Ethernet over MPLS Networks.
RFC 4541: Considerations for IGMP and MLD Snooping Switches
RFC 4553: Structure-Agnostic Time Division Multiplexing (TDM) over Packet (SAToP)
RFC 4664: Framework for Layer 2 Virtual Private Networks (L2VPNs)
RFC 4665: Service Requirements for Layer 2 Provider-Provisioned Virtual Private Networks
RFC 4781: Graceful Restart Mechanism for BGP with MPLS
RFC 5086: Structure-Aware Time Division Multiplexed (TDM) Circuit Emulation Service over Packet
Switched Network (CESoPSN)
RFC 5087: Time Division Multiplexing over IP (TDMoIP)
RFC 5254: Requirements for Multi-Segment Pseudowire Emulation Edge-to-Edge (PWE3).
RFC 5462: Multiprotocol Label Switching (MPLS) Label Stack Entry: "EXP" Field Renamed to "Traffic
Class" Field.
RFC 5586: MPLS Generic Associated Channel
RFC 5654: Requirements of an MPLS Transport Profile
RFC 5659: An Architecture for Multi-Segment Pseudowire Emulation Edge-to-Edge.
RFC 5718: An In-Band Data Communication Network For the MPLS-TP
RFC 5860: Requirements for OAM in MPLS-TP Networks
RFC 5880: Bidirectional Forwarding Detection (BFD)
RFC 5884: Bidirectional Forwarding Detection (BFD) for MPLS Label Switched Paths (LSPs)