
Neptune (Hybrid)

Version 6.0

Reference Manual
Neptune (Hybrid) Reference Manual
V6.0
Catalog No: X92376
Drawing No: 417006-2710-063-A00
May 2017
Rev01

ECI's NPT-1800, NPT-1200, NPT-1050, NPT-1021, and NPT-1010 are CE2.0 certified.

ECI's qualification lab is accredited by A2LA for competence in electrical testing according to
the International Standard ISO/IEC 17025:2005, General Requirements for the Competence of
Testing and Calibration Laboratories.

ECI's management applications run on VMware virtualization hypervisors.

© Copyright by ECI, 2013-2017. All rights reserved worldwide.


This is a legal agreement between you, the end user, and ECI Ltd. (“ECI”). BY OPENING THE DOCUMENTATION AND/OR DISK PACKAGE, YOU ARE
AGREEING TO BE BOUND BY THE TERMS OF THIS AGREEMENT. IF YOU DO NOT AGREE TO THE TERMS OF THIS AGREEMENT, PROMPTLY RETURN
THE UNOPENED DOCUMENTATION AND/OR DISK PACKAGE AND THE ACCOMPANYING ITEMS (INCLUDING WRITTEN MATERIALS AND BINDERS OR
OTHER CONTAINERS), TO THE PLACE FROM WHICH YOU OBTAINED THEM.
All documentation and/or disk and all information and/or data contained in the documentation and/or disk ["ECI's Proprietary"] are ECI's proprietary
and are subject to all copyright, patent, and other laws protecting intellectual property, and any international treaty provisions, as well as any specific
agreement protecting ECI's rights in the aforesaid information. Any use of ECI's Proprietary for any purpose [including, but not limited to: publication,
reproduction, or disclosure to third parties, in whole or in part] other than those for which it was disclosed, without the express prior written
permission of ECI, is strictly forbidden.
ECI's Proprietary is provided "AS IS" and may contain flaws, omissions, or typesetting errors. No responsibility and/or liability whatsoever is
assumed by ECI for you or any other party, for the use thereof, nor for the rights of third parties, nor for any loss or damage whatsoever or
howsoever caused, arising directly or indirectly in connection with ECI's Proprietary, which may be affected in any way by the use and/or
dissemination thereof. ECI reserves the right, without prior notice or liability, to make changes in equipment design or specifications, including any
change in and to ECI's Proprietary.
Any representation(s) in ECI's Proprietary concerning performance of ECI's product(s) are for informational purposes only and are not warranties of
product performance or otherwise, either express or implied. No warranty is granted nor liability assumed in relation thereto, unless specifically
undertaken in ECI's sales contract or order confirmation. ECI's Proprietary is periodically updated, and changes will be incorporated in subsequent
editions. All graphics included in this document are for illustrative purposes only and might not correspond with your specific product version.
The documentation and/or disk and all information contained therein is owned by ECI and is protected by all relevant copyright, patent, and other
applicable laws and international treaty provisions. Therefore, you must treat the information contained in the documentation and disk as any
other copyrighted material (for example, a book or musical recording).
Other Restrictions. You may not rent, lease, sell, or otherwise dispose of ECI's Proprietary, as applicable.
YOU MAY NOT USE, COPY, MODIFY, OR TRANSFER THE DOCUMENTATION AND/OR DISK OR ANY COPY IN WHOLE OR PART, EXCEPT AS EXPRESSLY
PROVIDED IN THIS LICENSE. ALL RIGHTS NOT EXPRESSLY GRANTED ARE RESERVED BY ECI.
All trademarks mentioned herein are the property of their respective holders.
Notwithstanding the generality of the aforementioned, you expressly waive any claim and/or demand regarding liability for indirect, special,
incidental, or consequential loss or damage which may arise in respect of ECI's Proprietary contained therein, howsoever caused, even if advised of
the possibility of such damages.
The end user hereby undertakes and acknowledges that they have read the "Before You Start/Safety Guidelines" instructions (when provided by ECI) and
that such instructions were understood by them. ECI shall not be liable to you or to any other party for any loss or damage whatsoever or
howsoever caused, arising directly or indirectly in connection with your fulfillment of and/or failure to fulfill, in whole or in part, the "Before You
Start/Safety Guidelines" instructions.
Contents
Useful information ............................................................................................... xiii
Related documents ............................................................................................................................. xiii
Contact information ............................................................................................................................ xiii

1 Introducing Neptune ................................................................................... 1-1


1.1 Neptune product lines ........................................................................................................... 1-2
1.2 Expansion platform ................................................................................................................ 1-3
1.3 Features and functions .......................................................................................................... 1-3
1.4 Implementation principles..................................................................................................... 1-4
1.5 Cards and modules ................................................................................................................ 1-5

2 Neptune platform overview ........................................................................ 2-1


2.1 NPT-1200 platform ................................................................................................ 2-1
2.2 NPT-1050 platform ................................................................................................................ 2-2
2.3 NPT-1030 platform ................................................................................................................ 2-3
2.4 NPT-1020 platform ................................................................................................................ 2-3
2.5 NPT-1021 platform ................................................................................................................ 2-4
2.6 NPT-1010 platform ................................................................................................................ 2-6

3 NPT-1200 system architecture ..................................................................... 3-1


3.1 Control subsystem ................................................................................................................. 3-5
3.1.1 Internal control and processing .............................................................................................. 3-6
3.1.2 Software and configuration backup ........................................................................................ 3-6
3.1.3 Built-in test .............................................................................................................................. 3-6
3.2 Communications with external equipment and management.............................................. 3-7
3.3 Timing .................................................................................................................................... 3-7
3.4 Traffic and switching functionality......................................................................................... 3-8
3.5 Power feed subsystem ........................................................................................................... 3-9
3.6 NPT-1200 common cards ....................................................................................................... 3-9
3.6.1 INF_1200 ............................................................................................................................... 3-10
3.6.2 FCU_1200 .............................................................................................................................. 3-10
3.6.3 MCP1200 ............................................................................................................................... 3-11
3.7 NPT-1200 switching cards .................................................................................................... 3-13
3.7.1 CPTS100 ................................................................................................................................. 3-13
3.7.2 CPS100................................................................................................................................... 3-17

ECI Telecom Ltd. Proprietary iii



3.7.3 CPTS320 ................................................................................................................................. 3-19


3.7.4 HEoS_16 (CPTS internal connection) ..................................................................................... 3-22
3.7.5 CPS320................................................................................................................................... 3-23
3.7.6 XIO64 ..................................................................................................................................... 3-26
3.7.7 XIO16_4 ................................................................................................................................. 3-27
3.8 Engineering orderwire ......................................................................................................... 3-27
3.9 NPT-1200 Tslot I/O modules ................................................................................................ 3-28
3.10 Expansion platform .............................................................................................. 3-29

4 NPT-1050 system architecture ..................................................................... 4-1


4.1 Control subsystem ................................................................................................................. 4-3
4.1.1 Software and configuration backup ........................................................................................ 4-4
4.1.2 Built-in test .............................................................................................................................. 4-4
4.2 Communications with external equipment and management.............................................. 4-4
4.3 Timing .................................................................................................................................... 4-5
4.4 Traffic and switching functionality........................................................................ 4-6
4.5 Power feed subsystem ........................................................................................................... 4-6
4.6 NPT-1050 common cards ....................................................................................................... 4-7
4.6.1 INF_B1UH ................................................................................................................................ 4-7
4.6.2 FCU_1050 ................................................................................................................................ 4-8
4.7 NPT-1050 switching cards ..................................................................................... 4-8
4.7.1 MCPTS100 dual matrix cards.................................................................................. 4-9
4.7.2 MCPS100 switching card ....................................................................................................... 4-11
4.7.3 MCPS/MCPTS control functionality ....................................................................................... 4-14
4.7.4 AIM100 .................................................................................................................................. 4-14
4.8 NPT-1050 Tslot I/O modules................................................................................ 4-15
4.9 Expansion platform .............................................................................................. 4-16

5 NPT-1030 system architecture ..................................................................... 5-1


5.1 Modular architecture ............................................................................................ 5-4
5.2 Control subsystem ................................................................................................. 5-5
5.2.1 Internal control and processing .............................................................................. 5-6
5.2.2 Software and configuration backup ....................................................................... 5-6
5.2.3 Built-in test ............................................................................................................. 5-6
5.3 Communications with external equipment and management ............................. 5-7
5.4 Timing .................................................................................................................... 5-7
5.5 Traffic and switching functionality........................................................................ 5-8


5.6 NPT-1030 common cards ...................................................................................... 5-8


5.6.1 INF_B1U .................................................................................................................................. 5-8
5.6.2 AC_PS-B1U .............................................................................................................................. 5-9
5.6.3 FCU_1030 .............................................................................................................................. 5-10
5.6.4 MCP30B ................................................................................................................................. 5-10
5.6.5 XIO30 cards ........................................................................................................... 5-12
5.7 NPT-1030 Tslot I/O modules ................................................................................................ 5-14
5.8 Expansion platform .............................................................................................. 5-15

6 NPT-1021 system architecture ..................................................................... 6-1


6.1 Modular architecture ............................................................................................................. 6-2
6.2 CPS50 ..................................................................................................................................... 6-3
6.3 Control subsystem ................................................................................................. 6-4
6.3.1 Internal control and processing .............................................................................................. 6-5
6.3.2 Software and configuration backup ........................................................................................ 6-5
6.3.3 Built-in test .............................................................................................................................. 6-5
6.4 Communications with external equipment and management.............................................. 6-6
6.5 Timing .................................................................................................................................... 6-6
6.6 Traffic and switching functionality......................................................................................... 6-7
6.7 Ethernet configuration options ............................................................................................. 6-8
6.8 NPT-1021 Tslot modules ........................................................................................................ 6-8
6.9 Power feed subsystem ........................................................................................................... 6-8
6.9.1 INF-B1U ................................................................................................................................... 6-9
6.9.2 INF-B1U-24V ............................................................................................................................ 6-9
6.9.3 INF-B1U-D .............................................................................................................................. 6-10
6.9.4 AC_PS-B1U ............................................................................................................................ 6-10
6.10 Expansion platform .............................................................................................. 6-11

7 NPT-1020 system architecture ..................................................................... 7-1


7.1 Modular architecture ............................................................................................ 7-2
7.2 CPS50 ..................................................................................................................................... 7-3
7.3 Control subsystem ................................................................................................. 7-4
7.3.1 Internal control and processing .............................................................................................. 7-5
7.3.2 Software and configuration backup ........................................................................................ 7-5
7.3.3 Built-in test .............................................................................................................................. 7-5
7.4 Communications with external equipment and management.............................................. 7-6
7.5 Timing .................................................................................................................................... 7-6


7.6 Traffic and switching functionality......................................................................................... 7-7


7.7 Ethernet and TDM configuration options .............................................................................. 7-8
7.8 Power feed subsystem ........................................................................................................... 7-8
7.8.1 INF-B1U ................................................................................................................................... 7-8
7.8.2 INF-B1U-24V ............................................................................................................................ 7-9
7.8.3 INF-B1U-D ................................................................................................................................ 7-9
7.8.4 AC_PS-B1U ............................................................................................................................ 7-10
7.9 NPT-1020 Tslot modules ...................................................................................................... 7-10
7.10 Expansion platform .............................................................................................. 7-11

8 NPT-1010 system architecture ..................................................................... 8-1


8.1 NPT-1010 user interfaces....................................................................................................... 8-2
8.2 Communications with external equipment and management.............................................. 8-4
8.3 NPT-1010 Mslot modules ...................................................................................................... 8-4
8.3.1 TMSE1_8 ................................................................................................................................. 8-5
8.3.2 TM10 ....................................................................................................................................... 8-5

9 Tslot I/O modules ........................................................................................ 9-1


9.1 PDH cards ............................................................................................................................... 9-2
9.1.1 PME1_21 ................................................................................................................................. 9-2
9.1.2 PME1_21B ............................................................................................................................... 9-3
9.1.3 PME1_63 ................................................................................................................................. 9-5
9.1.4 PM345_3 ................................................................................................................................. 9-6
9.2 SDH cards ............................................................................................................................... 9-7
9.2.1 SMD1B ..................................................................................................................................... 9-7
9.2.2 SMQ1 ....................................................................................................................................... 9-8
9.2.3 SMQ1&4 .................................................................................................................................. 9-9
9.2.4 SMS4...................................................................................................................................... 9-10
9.2.5 SMD4 ..................................................................................................................................... 9-10
9.2.6 SMS16.................................................................................................................................... 9-11
9.3 Ethernet layer 1 (EoS) cards................................................................................................. 9-12
9.3.1 DMFE_4_L1 ........................................................................................................................... 9-12
9.3.2 DMFX_4_L1 ........................................................................................................................... 9-13
9.3.3 DMGE_1_L1 ........................................................................................................................... 9-14
9.3.4 DMGE_4_L1 ........................................................................................................................... 9-15
9.4 Ethernet layer 2 (EoS/MoT) cards........................................................................................ 9-16
9.4.1 DMFE_4_L2 ........................................................................................................................... 9-16
9.4.2 DMFX_4_L2 ........................................................................................................................... 9-17


9.4.3 DMGE_2_L2 ........................................................................................................................... 9-18


9.4.4 DMGE_4_L2 ........................................................................................................................... 9-19
9.4.5 DMGE_8_L2 ........................................................................................................................... 9-20
9.4.6 DMXE_22_L2 ......................................................................................................................... 9-21
9.4.7 DMXE_48_L2 ......................................................................................................................... 9-23
9.5 Multiservice cards (CES) ...................................................................................... 9-24
9.5.1 DMCES1_4 ............................................................................................................................. 9-24
9.5.2 MSE1_16 ............................................................................................................................... 9-25
9.5.3 MSC_2_8 ............................................................................................................................... 9-26
9.5.4 MS1_4 ................................................................................................................................... 9-27
9.5.5 MSE1_32 ............................................................................................................................... 9-29
9.6 Pure packet cards................................................................................................................. 9-30
9.6.1 DHGE_4E ............................................................................................................................... 9-30
9.6.2 DHGE_8 ................................................................................................................................. 9-32
9.6.3 DHGE_16 ............................................................................................................................... 9-33
9.6.4 DHGE_24 ............................................................................................................................... 9-35
9.6.5 DHXE_2 .................................................................................................................................. 9-36
9.6.6 DHXE_4 .................................................................................................................................. 9-37
9.6.7 DHXE_4O ............................................................................................................................... 9-38
9.7 NFV cards ............................................................................................................................. 9-39
9.7.1 NFVG_4 ................................................................................................................................. 9-40
9.7.2 NFVG_4 block diagram .......................................................................................................... 9-41
9.7.3 NFVG_4 applications ............................................................................................................. 9-43
9.7.4 Installation considerations for NFVG_4 in Neptune platforms ............................................. 9-44
9.8 Slot reassignment and product replacement ...................................................................... 9-45
9.8.1 Product replacement............................................................................................................. 9-45
9.8.2 Card reassignment ................................................................................................................ 9-45
9.8.3 Card move ............................................................................................................................. 9-46
9.9 Pluggable interfaces (CSFP/SFPs/SFP+/XFPs) ...................................................................... 9-46
9.9.1 SFP types ............................................................................................................................... 9-47
9.9.2 Compact SFP (CSFP) types ..................................................................................................... 9-47
9.9.3 SFP+ types ............................................................................................................................. 9-47
9.9.4 XFP types ............................................................................................................................... 9-48
9.9.5 ETR-1 SFP electrical transceiver ............................................................................................ 9-48
9.9.6 ETGbE SFP electrical transceiver ........................................................................................... 9-49

10 EXT-2U expansion platform ....................................................................... 10-1


10.1 EXT-2U common cards ......................................................................................................... 10-2


10.1.1 INF_E2U ................................................................................................................................. 10-2


10.1.2 AC_PS-E2U ............................................................................................................................. 10-3
10.1.3 FCU_E2U ................................................................................................................................ 10-4
10.2 EXT-2U traffic cards ............................................................................................ 10-4
10.2.1 PE1_63................................................................................................................................... 10-5
10.2.2 P345_3E................................................................................................................................. 10-6
10.2.3 S1_4 ....................................................................................................................................... 10-6
10.2.4 S4_1 ....................................................................................................................................... 10-7
10.2.5 Optical Base Card (OBC) ........................................................................................................ 10-8
10.2.6 MXP10 ................................................................................................................................. 10-12
10.2.7 DHFE_12 .............................................................................................................................. 10-17
10.2.8 DHFX_12 .............................................................................................................................. 10-18
10.2.9 MPS_2G_8F ......................................................................................................................... 10-19
10.2.10 MPoE_12G........................................................................................................................... 10-20
10.2.11 DMCE1_32 ........................................................................................................................... 10-21
10.2.12 SM_10E ............................................................................................................................... 10-22
10.2.13 EM_10E ............................................................................................................................... 10-23

11 Neptune MPLS-TP and Ethernet cards ....................................................... 11-1


11.1 Ethernet layer 2 switching cards.......................................................................................... 11-2
11.2 MPLS-TP Service Cards ......................................................................................................... 11-2
11.3 Layer 1 service cards ............................................................................................................ 11-3
11.4 Multiservice CES cards ......................................................................................................... 11-3
11.5 Layer 2 service cards functionality....................................................................................... 11-4
11.5.1 Generic Framing Procedure .................................................................................................. 11-4
11.5.2 Virtual Concatenation ........................................................................................................... 11-5
11.5.3 Link Capacity Assignment Scheme ........................................................................................ 11-5
11.5.4 Layer 2 switching capabilities ................................................................................................ 11-6
11.5.5 FDB quota provisioning ......................................................................................................... 11-6
11.5.6 Triggers for MSTP .................................................................................................................. 11-7
11.5.7 Port-based VLANs .................................................................................................................. 11-8
11.5.8 UNI on EoS ports ................................................................................................................... 11-8
11.5.9 NNI on ETY ports ................................................................................................................... 11-8
11.5.10 Additional features ................................................................................................................ 11-8
11.5.11 Access control list .................................................................................................................. 11-9
11.5.12 C-VLAN translation ................................................................................................................ 11-9
11.5.13 C-VLAN bundling ................................................................................................................. 11-10
11.5.14 Access-controlled management .......................................................................................... 11-10

ECI Telecom Ltd. Proprietary viii


Neptune (Hybrid) Reference Manual Contents

11.5.15 Port mirroring ...................................................................................................................... 11-11


11.5.16 Ingress/egress C-VLAN filtering ........................................................................................... 11-11
11.5.17 L2CP flooding protection ..................................................................................................... 11-11
11.5.18 Layer 2 card services ........................................................................................................... 11-12
11.6 Evolution: Ethernet to MPLS to MPLS-TP .......................................................................... 11-13
11.7 MEF CE2.0 based services .................................................................................................. 11-16
11.7.1 Comprehensive set of MEF CE2.0 services .......................................................................... 11-16
11.7.2 Ethernet Private Line (EPL)/Ethernet Virtual Private Line (EVPL) ....................................... 11-19
11.7.3 Ethernet Private LAN (EPLAN)/Ethernet Virtual Private LAN (EVPLAN) .............................. 11-20
11.7.4 Multicast optimized rooted-MP services ............................................................................ 11-20
11.7.5 IGMP-aware MP2MP VSI ..................................................................................................... 11-24
11.8 Quality of Service (QoS) ..................................................................................................... 11-27
11.8.1 Traffic management and performance ............................................................................... 11-28
11.8.2 Hierarchical QoS .................................................................................................................. 11-30
11.8.3 Shaping ................................................................................................................................ 11-30
11.8.4 WRED (Weighted Random Early Discard) ........................................................................... 11-31
11.8.5 Queuing and scheduling ...................................................................................................... 11-32
11.8.6 Connection Admission Control (CAC) .................................................................................. 11-32
11.8.7 Class of Service (CoS)........................................................................................................... 11-33
11.8.8 Policing ................................................................................................................................ 11-33
11.8.9 Ingress policers .................................................................................................................... 11-35
11.8.10 Classification and marking................................................................................................... 11-36
11.8.11 DiffServ per port based TM ................................................................................................. 11-36
11.9 OAM and Performance Monitoring ................................................................................... 11-39
11.9.1 Ethernet link OAM ............................................................................................................... 11-40
11.9.2 MPLS-TP tunnel OAM .......................................................................................................... 11-41
11.9.3 MPLS-TP fault management ................................................................................................ 11-42
11.9.4 Service OAM (CFM) ............................................................................................................. 11-43
11.9.5 CFM-PM (Y.1731) ................................................................................................................ 11-46
11.9.6 Throughput (RFC 2544) ....................................................................................................... 11-47
11.9.7 SLA (Y.1564) ........................................................................................................................ 11-48
11.10 Ethernet services built-in tester (Y.1564) .......................................................................... 11-48
11.10.1 Built-in tester (Y.1564) features .......................................................................................... 11-49
11.11 DMXE_22_L2 TM................................................................................................................ 11-50
11.12 Multiple Protection Schemes..................................................................................................... 1
11.13 MPLS protection schemes ......................................................................................................... 1
11.13.1 Facility backup FRR ..................................................................................................................... 2


11.13.2 FRR for P2P tunnels .................................................................................................................... 2


11.13.3 FRR for P2MP tunnels................................................................................................................. 2
11.13.4 Dual FRR protection ................................................................................................................... 3
11.13.5 Additional FRR capabilities ......................................................................................................... 5
11.13.6 MPLS-TP 1:1 linear protection.................................................................................................... 6
11.13.7 PW redundancy for H-VPLS DH topology ................................................................................... 7
11.13.8 Dual-homed device protection in H-VPLS networks .................................................................. 8
11.13.9 Multi-segment PW ..................................................................................................................... 9
11.13.10 Link Aggregation (LAG) ........................................................................................................ 10
11.13.11 Multi-chassis link aggregation (MC-LAG) ............................................................................ 11
11.13.12 LSP tunnel restoration ......................................................................................................... 12
11.13.13 Customer Change Notification (CCN) .................................................................................. 14
11.14 SDH protection schemes ..........................................................................................................15
11.14.1 SNCP ......................................................................................................................................... 16
11.14.2 SDH line protection .................................................................................................................. 16
11.14.3 Dual Ring Interconnection (DRI) ............................................................................. 19
11.14.4 Dual Node Interconnection (DNI) ............................................................................................. 20
11.15 Optical layer protection ...........................................................................................................20
11.15.1 Optical protection mechanisms ............................................................................................... 20
11.16 Equipment protection ..............................................................................................................22
11.16.1 Common units .......................................................................................................................... 22
11.16.2 Traffic unit (I/O card) hardware protection ............................................................................. 22
11.16.3 Fast IOP: 1:1 card protection.................................................................................................... 22
11.16.4 Enhanced IOP (eIOP) ................................................................................................................ 23
11.16.5 Tributary Protection (TP) .......................................................................................................... 23
11.16.6 Integrated protection for I/O cards with electrical interfaces ................................................. 28
11.17 Security ....................................................................................................................................28
11.17.1 Secured FTP and SSH ................................................................................................................ 29
11.17.2 Public key authentication ......................................................................................................... 29
11.17.3 Port authentication control (IEEE 802.1x based) ..................................................................... 29
11.17.4 OSPF encryption with HMAC-SHA256 ...................................................................................... 30

12 Accessories ................................................................................................ 12-1


12.1 RAP-4B ................................................................................................................................. 12-1
12.2 RAP-BG ................................................................................................................................. 12-4
12.3 xRAP-100 .............................................................................................................................. 12-5
12.4 AC/DC-DPS850-48-3 power system ..................................................................................... 12-7


12.4.1 AC/DC-DPS850-48-3 front view ............................................................................................. 12-9


12.4.2 AC/DC-DPS850-48-3 rear view ............................................................................................ 12-10
12.5 ICP_MCP30......................................................................................................................... 12-12
12.6 SM_10E ICPs ...................................................................................................................... 12-12
12.7 AC_CONV_UNIT ................................................................................................................. 12-17
12.8 AC_CONV_MODULE........................................................................................................... 12-18
12.9 Fiber storage tray ............................................................................................................... 12-19
12.10 Optical distribution frame.................................................................................................. 12-19
12.11 Optical patch panel ............................................................................................................ 12-21
12.12 xDDF-21.............................................................................................................................. 12-22
12.13 Cable guiding accessories .................................................................................................. 12-22
12.13.1 Cable guide frame ............................................................................................................... 12-22
12.13.2 PME1_63 cable guide and holder........................................................................................ 12-23
12.13.3 Fiber guide for ETSI A racks ................................................................................................. 12-23
12.13.4 Cable slack tray.................................................................................................................... 12-24
12.14 Cables ............................................................................................................... 12-24

13 Standards and references .......................................................................... 13-1


13.1 Environmental Standards .................................................................................................... 13-1
13.2 ETSI: European Telecommunications Standards Institute ................................................... 13-2
13.3 IEC: International Electrotechnical Commission .................................................................. 13-3
13.4 IEEE: Institute of Electrical and Electronic Engineers .......................................................... 13-4
13.5 IETF: Internet Engineering Task Force ................................................................................. 13-5
13.6 ISO: International Organization for Standardization ........................................................... 13-9
13.7 ITU-T: International Telecommunication Union .................................................................. 13-9
13.8 MEF: Metro Ethernet Forum ............................................................................................. 13-13
13.9 NIST: National Institute of Standards and Technology ...................................................... 13-14
13.10 North American Standards ................................................................................................ 13-14
13.11 OMG: Object Management Group .................................................................................... 13-15
13.12 TMF: TeleManagement Forum .......................................................................................... 13-15
13.13 Web Protocol Standards .................................................................................................... 13-15

Useful information
This Neptune Hybrid Reference Manual describes the key components of the Neptune, including cards,
modules, accessories, and related capabilities. It also provides detailed descriptions for interpreting
indicator functions, enabling field personnel to troubleshoot hardware-related problems.

Related documents
 Neptune General Description
 NPT-1200 Installation and Maintenance Manual
 NPT-1030 Installation and Maintenance Manual
 NPT-1050 Installation and Maintenance Manual
 NPT-1021 Installation and Maintenance Manual
 NPT-1020 Installation and Maintenance Manual
 NPT-1010 Installation and Maintenance Manual
 EMS-NPT User Manual
 LCT-NPT User Manual
 LightSoft® User Manual

Contact information

                          Telephone        Email
ECI Documentation Group   +972-3-9268145   [email protected]
ECI Customer Support      +972-3-9266000   [email protected]



1 Introducing Neptune
Neptune is a family of carrier-class, MPLS-based multiservice packet-optical transport platforms. It offers
best-in-class carrier Ethernet and packet transport solutions for the metro. NPT combines the gold standard
of transport network reliability and ease of management with packet efficiency. Through the convergence of
Ethernet/MPLS, WDM, OTN, and TDM, managed through a state-of-the-art, intuitive GUI-based NMS, NPT
offers a powerful, flexible packet transport solution with the lowest total cost of ownership (TCO).
The NPT has a flexible switching architecture that provides a cost-effective, risk-free means for operators to
tailor their configuration to any network scenario. This flexibility offers advantages during the network
transition from TDM to packet: operators can adapt their network configuration over time to
match changing network requirements, optimized per specific network location.
All NPT products (with the exception of the NPT-1010, NPT-1030, and NPT-1800) are available in two
variants:
 With a Centralized Packet and TDM Switch (CPTS)
 With a Centralized Packet Switch (CPS)
The first variant, Hybrid NPT, is equipped with a dual-core switching fabric that supports simultaneous
native handling of both packet and TDM. With native handling, Ethernet traffic is processed by a packet
switch and delivered over Ethernet interfaces without traversing any TDM components. Simultaneously,
TDM traffic is cross-connected over the TDM HO/LO matrix. Keeping the TDM and Ethernet matrices
independent enables complete separation between the different traffic types, preventing one type of
traffic load from adversely affecting the other.
Figure 1-1: Centralized switching options
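The traffic separation described above can be illustrated with a toy model. This is purely illustrative and is not ECI's implementation; the class, queue, and field names are hypothetical:

```python
# Toy model of a dual-core switching fabric: one independent queue per core,
# so a load burst on one traffic type cannot grow the other core's queue.
from collections import deque

class HybridFabric:
    """Sketch of a dual-core fabric with independent switching paths."""

    def __init__(self):
        self.packet_core = deque()   # Ethernet/MPLS frames
        self.tdm_matrix = deque()    # TDM (e.g. E1/STM-n) cross-connect units

    def ingress(self, unit):
        # Ethernet traffic never traverses the TDM matrix, and vice versa.
        if unit["type"] == "ethernet":
            self.packet_core.append(unit)
        elif unit["type"] == "tdm":
            self.tdm_matrix.append(unit)
        else:
            raise ValueError("unknown traffic type")

fabric = HybridFabric()
fabric.ingress({"type": "tdm", "payload": "E1 timeslot"})

# A burst of Ethernet frames grows only the packet queue:
for i in range(1000):
    fabric.ingress({"type": "ethernet", "payload": f"frame-{i}"})
print(len(fabric.packet_core), len(fabric.tdm_matrix))  # 1000 1
```

The point of the sketch is the isolation property: the TDM queue length is unaffected by the Ethernet burst, mirroring how the independent matrices prevent one traffic type from starving the other.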

The second variant, Packet NPT, is equipped with a central Ethernet/MPLS switch and supports TDM
services through Circuit Emulation Service (CES). NPT is equipped with a broad mix of Ethernet and TDM
interfaces, supporting both packet and TDM based services over a converged packet infrastructure.
Both Hybrid and Packet NPT comply with all MEF CE2.0 service standards, as well as offering extensive
synchronization, protection, and resiliency schemes. Whether the network traffic is transported over legacy
equipment, supporting only TDM, or over packet equipment, supporting only Ethernet, NPT can provide
the optimal solution. As the network evolves, there is no need for costly replacements of existing
infrastructure or cumbersome external adaptive boxes.



Neptune (Hybrid) Reference Manual Introducing Neptune

NPT's flexible traffic handling architecture offers the most cost-efficient traffic handling in a mixed TDM and
packet environment while supporting all transport attributes. The result is the lowest TCO throughout the
network life cycle and over the course of the network transition from TDM to packet. The same holds
when building new carrier Ethernet and packet-based transport networks.
The NPT's value propositions include:
 Lowest TCO
 Flexible multiservice (Packet, Optics, TDM)
 Cost-effective scalability through the modular architecture
 Dual-stack MPLS, offering seamless, service-optimized interworking
 Transport-grade service assurance:
   Performance: Predictable and guaranteed
   Availability: Carrier-grade redundancy and protection
   Security: Secure and transparent
 E2E control:
   Intuitive GUI: Easy point-and-click operation
   Unified multi-layer NMS: Enabling smooth, converged control
   Visibility: Providing extensive OAM for E2E SLA visibility

1.1 Neptune product lines


Neptune is a family of future-proof, carrier-class multiservice packet-optical transport platforms optimized
for metro carrier Ethernet and packet transport networks. The Neptune family provides a comprehensive
selection of platforms, covering a range of size, configuration, and service-level requirements. The broad
range of Neptune products enables you to find the best balance of cost and performance.
Neptune platforms are organized into three groups:
 Neptune metro core platforms:
   NPT-1800: A compact (8U) high-capacity multiservice packet-optical transport platform for the
    metro core, offering ultimate flexibility and cost-effective aggregation and delivery of
    Ethernet/MPLS-TP and CES services for metro and regional networks.
 Neptune metro aggregation platforms:
   NPT-1200: A compact (2U) and fully redundant multiservice packet-optical aggregation
    transport system, offering multidimensional flexibility together with high capacity and
    cost-effective aggregation and delivery of Ethernet/MPLS-TP, OTN, and TDM services.
   NPT-1050: A very compact (1U) fully redundant multiservice packet-optical aggregation
    transport system, offering multidimensional flexibility together with high capacity and
    cost-effective aggregation and delivery of Ethernet/MPLS-TP, OTN, and TDM services.
 Neptune access platforms:
   NPT-1030: A compact (1U) fully redundant multiservice multi-ADM 1/4/16 and GE/10GE access
    packet transport platform for access networks.


  NPT-1020/21: A compact (1U) multiservice packet-optical access system, offering
    multidimensional flexibility with high capacity and cost-effective aggregation and delivery of
    Ethernet/MPLS-TP, OTN, and TDM services.
  NPT-1010: A small, fully managed, packet-based demarcation system, offering cost-effective
    CPE/CLE for business customers and small 3G/LTE cell sites and delivery of Ethernet/MPLS-TP
    and TDM up to the customer premises.
Figure 1-2: NPT Product Line Platforms

1.2 Expansion platform


The expansion platform is a fully modular 2U unit that can be installed on top of the
NPT-1800/NPT-1200/NPT-1021/NPT-1050 platforms to expand their capabilities.
All traffic processing, cross-connect, packet switching, timing and synchronization, control and
communication, and main power supply functions are performed by the corresponding system in the base
unit. The expansion unit is installed on top of the base unit.
In addition, the expansion platform has separate sections, including:
 Power feed: Including local power supply circuits on each card and two INF_E2U units
 Cooling fans: Provided by the FCU_E2U unit on the right side of the expansion unit
 Three Eslots for installing traffic cards supporting PDH, SDH, PCM, and Ethernet

1.3 Features and functions


Neptune shelves support a multitude of features for today's bandwidth-hungry networks, including:
 Transparent support of transmission channels for SDH, PDH, Fast Ethernet, Gigabit Ethernet (GbE),
  MPLS, IP, SAN (FC, FICON, ESCON), digital video, and more.
 Support for any service: VPWS, VPLS, IPTV MPLS multicast, 3G mobile backhaul, bandwidth services,
  and Ethernet leased lines.
 Transport of Ethernet traffic and support of Ethernet services over SDH.
 Wavelength services for all bandwidth demands.
 Add and drop of any signal (SDH, PDH, data, wavelength) at any node using a single shelf.


 10 Gbps Add/Drop Multiplexer (ADM) service on a double card for GbE, 1GFC, 2GFC, OTU1, and
  STM-16 services. ADM on a Card (MXP10) benefits include the ability to route client signals to
  different locations along the optical ring, as well as per-service selectable protection and
  drop-and-continue features. The MXP10 can also be used as a multi-rate combiner up to OTU2. The
  MXP10 combines the cost efficiency of an optical platform with the granularity and flexibility
  previously available only in SDH networks.
 Up to 320 Gbps capacity, with 40 Gbps per slot, on Neptune Hybrid platforms.
 High Order (HO) and Low Order (LO) transmission paths available for both high-order and low-order
  subnetworks, with a high-capacity matrix that maintains LO connectivity.
 Comprehensive MPLS Carrier Ethernet capabilities, including use of MPLS technology to carry
  Ethernet services across the metro and core network.
 HO transmission paths for IP networks (for example, LAN-to-LAN connectivity at the GbE-to-GbE
  level).
 Multireach, for metro and regional applications spanning up to 800 km without electrical
  regeneration.
 Support for cost-effective access CWDM applications and core DWDM networks of up to 44/88
  channels.
 Transport of Ethernet traffic over WDM.
 Subrate traffic aggregation over optical cards.
 Channel-by-channel, non-traffic-affecting upgrade, starting from a single channel.
 Full compliance with applicable ITU-T and Telcordia standards for optical equipment and safety
  standards.
 Extremely powerful management that renders the system easy to control, monitor, and maintain.

1.4 Implementation principles


Neptune platforms have a fully modular design based on a redundant switching core, implemented as a
cross-connect matrix surrounded by I/O ports on plug-in I/O cards. The function of the I/O cards is to
provide interfaces to the various types of signals that can be transported by the platforms. Internally, all the
I/O cards exchange information with the routing core using a proprietary format that does not depend on
the characteristics of the external interfaces.
The information flow in the Neptune is based on a single proprietary format for all information types. This
results in a highly flexible system that supports a wide range of applications and can easily be expanded to
suit virtually any customer need. Moreover, support for new signal formats can be added simply by
developing new I/O cards. This protects customer investment in the Neptune platform against
obsolescence and ensures cost-effective upgrade paths.
All subsystems are modular and implemented as plug-in cards. Except for traffic interfaces, all subsystems
are fully redundant and/or implemented as distributed functions. This avoids a single point of failure, thereby
ensuring the high availability needed to meet the stringent requirements of telecom operators.
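The adapter pattern implied by the paragraphs above can be sketched as follows. This is a hypothetical illustration, not ECI's proprietary format: the names `InternalFrame` and `adapt` are invented for the example.

```python
# Sketch: a common internal representation decouples I/O card specifics
# from the switching core, so new interface types only require new adapters.
from dataclasses import dataclass

@dataclass
class InternalFrame:
    """Single internal unit; the core never sees interface specifics."""
    payload: bytes
    priority: int

def adapt(interface_type: str, raw: bytes) -> InternalFrame:
    # Each I/O card type contributes its own adapter; the core logic
    # stays unchanged when a new card (new interface type) is introduced.
    adapters = {
        "stm1": lambda b: InternalFrame(b, priority=0),
        "gbe": lambda b: InternalFrame(b, priority=1),
    }
    return adapters[interface_type](raw)

frame = adapt("gbe", b"client-signal")
print(frame.priority)  # 1
```

Supporting a new signal format in this sketch means adding one entry to `adapters`, which mirrors how new I/O cards extend the platform without touching the routing core.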
The Neptune platforms consist of the following main subsystems:
 Traffic processing
 Control and communications


 Timing and synchronization
 Switching core
 Power feed
 Cooling
Moreover, Neptune platforms are designed to permit live insertion and hot swapping of cards and
modules, and their software can be downloaded from a remote location. In addition to maintenance
efficiency, these characteristics enable non-traffic-affecting, in-service upgrading and expansion of system
capabilities.

1.5 Cards and modules


Neptune platforms deliver outstanding functionality and cost-effectiveness with a variety of cards and
modules, carefully designed for almost every application:
 PDH I/O (PIO) cards: 2 Mbps, 34 Mbps, 45 Mbps.
 SDH I/O (SIO) cards: STM-1e, STM-1o, STM-4, STM-16, STM-64.
 MPLS and Ethernet cards:
   For NPT-1200: Data I/O (MGE, ME, DMFE_x_L1, DMFX_x_L1, DMGE_x_L1) cards for Layer 1
    (GbE/FE) services; (DMFE_x_L2, DMFX_x_L2, DMGE_x_L2, DMXE_x_L2) for Layer 2 Ethernet
    services and full MPLS as well as Carrier Ethernet services.
   For NPT-1020/NPT-1021: Data I/O (DHGE_4E, DHGE_8, DMCES1_4, MSE1_16) for Layer 2
    Ethernet services and full MPLS as well as Carrier Ethernet services.
   For NPT-1010: Data I/O (TMSE1_8) for Layer 2 Ethernet services and full MPLS as well as Carrier
    Ethernet services.
 Optical accessories: A wide range of accessories (including splitters, couplers, splitter and coupler
  combination modules, variable optical attenuators, auxiliary cards, and filters), available either as
  compact standalone components or in a physical form that matches the physical form of other
  Neptune components.



2 Neptune platform overview
This chapter describes the shelf layout of each platform in the Neptune product line of Native Packet
Transport (NPT): packet-based transport using all-native handling of both Ethernet and TDM.
Neptune platforms have been designed to enable simple installation and easy maintenance. Hot insertion
of cards and modules is supported, allowing quick maintenance and repair activities without affecting traffic.
The cage design and mechanical practice of all platforms conform to international mechanical standards
and specifications.

NOTE: All installation instructions, technical specifications, restrictions, and safety warnings
are provided in the Neptune Installation and Maintenance Manuals. See these manuals for
specific instructions before beginning any Neptune platform installation.

2.1 NPT-1200 Platform


The NPT-1200 is a 2U, fully modular, redundant, converged multiservice packet transport platform optimized
for metro aggregation and access nodes, supporting up to 560 Gbps capacity (future version). The
main functional subsystems of the NPT-1200 are:
 Traffic processing: I/O cards, and HO/LO dual-matrix card cross-connect and packet switching
  (XIO/CPTS/CPS)
 Timing and synchronization: In the CPTS/CPS cards
 Control and communication: In the MCP1200
 Power feed: Including local power supply circuits on each card and INF_1200 units
 Cooling fans: In the FCU_1200 unit
Figure 2-1: NPT-1200 general view

Figure 2-2: NPT-1200 slot allocation



Neptune (Hybrid) Reference Manual Neptune platform overview

The NPT-1200 includes the following sections:
 Input power connection panel with two INF_1200 units.
 Module cage with seven I/O slots, designated TS1 through TS7, for installing I/O cards of any type,
  including PDH, SDH, Ethernet Layer 1, and Ethernet Layer 2/MPLS.
 Two slots designated for matrix cards.
 One slot designated MCP, for the MCP1200 card that provides the following interfaces:
   Alarm indications and monitoring
   Timing and synchronization interfaces (T3/T4)
   In-band management interfaces (10/100BaseT)
   Compact flash card (NVM)
   Connector on top for connecting an expansion unit

2.2 NPT-1050 platform


NPT-1050 is an extremely compact, high-capacity, MPLS-based multiservice packet platform. Only 1U in
height and fully redundant, it supports up to 300 Gbps capacity (in a future version) and is optimized for
high-capacity metro access nodes.
Used in many subnetwork topologies, NPT-1050 can handle a mixture of P2P, hub, and mesh traffic
patterns. This combined functionality means that operators benefit from improved network efficiency and
significant savings in terms of cost and footprint.
NPT-1050 platform:
 Increases the number of Ethernet interfaces, and upgrades from 10M to 100GE (in future version)
easily and smoothly.
 Allows you to start as small as necessary and attain ultrahigh expandability in a build-as-you-grow™
fashion by combining the standard Neptune unit with an expansion unit (EXT-2U).
 Aggregates traffic arriving over Ethernet, PCM low-bitrate interfaces, E1/T1 and STM-1 directly over
GbE/10GbE/100GbE.
 Is suitable for indoor and outdoor installations.
 Supports an extended operating temperature range up to 70°C.
NPT-1050 is a 1U base platform housed in a 243 mm deep, 465 mm wide, and 44 mm high equipment cage
with all interfaces accessible from the front of the unit.
Figure 2-3: NPT-1050 general view

The platform consists of:


 Two DC power-supply module slots
 Two MCPS/MCPTS card slots


 One fan unit slot


 Three traffic cards slots (Tslots)

2.3 NPT-1030 platform


The NPT-1030 delivers a cost-effective and affordable mix of Ethernet, SDH, PDH, and PCM services that
result in new revenue-generating opportunities. It offers a wide variety of features and benefits, including:
 High traffic survivability through main hardware duplication.
 Ultra-high scalability based on coupling the EXT-2U to the NPT-1030 to create a build-as-you-grow
solution.
 Gradual capacity expansion based on service provisioning needs from ADM-1 up to ADM-16.
 Ability to add interfaces easily while the network element (NE) is working by plugging in the
appropriate cards ranging from E1 cards for multiple ports to STM-16 cards.
 Carrier class Ethernet-over-WAN/MAN solution with SDH reliability, security, and management of
data services.
 Sublambda grooming for high utilization of existing fiber and top efficiency in transmission of
different types of services.
 PCM service interfaces and 1/0 digital cross-connect functions to enable the construction and
maintenance of various private networks.
 Multi-ADM and cross-connect functionality, ideal for deployment in flexible network topologies like
ring, mesh, and star.
 Compactness and resiliency, making it perfectly suited for both indoor and outdoor enclosures. Due
to its extended operating temperature range, it is also most suitable for harsh environmental
conditions.
For a detailed description of the NPT-1030 platform, see the Neptune Product Line General Description.
Figure 2-4: Typical NPT-1030 platform

2.4 NPT-1020 platform


The NPT-1020 is a multiservice packet transport platform for the metro access, offering an All-Native
solution that optimizes both TDM and packet handling. The NPT-1020 is a cost-effective choice for the first
aggregation stage, geared for cellular tail locations (3G and LTE). It provides a unique hybrid solution for
high-capacity access rings and is optimized for popular triple-play applications.
As a Packet Optical Access (POA) platform with enhanced MPLS-TP support, the NPT-1020 is designed
around a centralized hybrid matrix card. It supports any-to-any direct data card connectivity as well as
native TDM switching. The NPT-1020 offers a packet switching capacity of up to 10 Gbps or 60
Gbps (with a choice of two modes on the same unit). It also supports a TDM capacity of up to 2.5G (16 x
VC-4, fully low-order traffic).


The NPT-1020 offers enhanced MPLS-TP data network functionality, including the complete range of
Ethernet based services (CES, EoS, MoT, MoE, and PoE+), as described in the NPT General Description.
The NPT-1020 is a compact (1U) base platform housed in a 243 mm deep, 465 mm wide, and 44 mm high
equipment cage with all interfaces accessible from the front of the unit. The platform includes the following
components:
 Traffic processing through:
 21 built-in native E1s
 14 ports, divided between:
 2 x STM-1/STM-4 ports (native)
 8 x 10/100/1000BaseT electrical ports (with 4 x PoE+)
 4 x SFP – each can be configured as 100/1000Base-X or 10/100/1000Base-T (with ETGBE) or
GPON ONU interface (with GTGBE_L3BD)
 1 traffic card slot (Tslot)
 Compact flash card (NVM)
 Traffic connector to the (optional) EXT-2U expansion unit
 Timing module (T3/T4, ToD, and 1pps)
 Alarms connector
 Redundant or non-redundant power supply modules (INF)
Figure 2-5: NPT-1020 platform

The NPT-1020 can be fed by either -48 VDC or 110 VAC to 230 VAC. In DC power feeding, two INF modules
can be configured in two power supply module slots for redundant power supply. AC power feeding
requires the use of a conversion module to implement AC/DC conversion.
The NPT-1020 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform
design also makes this platform a good choice for street cabinet use, withstanding temperatures up to
70°C.

2.5 NPT-1021 platform


NPT-1021 is a packet transport platform for the access, offering a pure packet solution that optimizes
packet handling. NPT-1021 is a cost-effective choice for the first aggregation stage, geared for cellular tail
locations (3G and LTE). It provides a unique hybrid solution for high capacity access rings, and is optimized
for popular triple play applications.
As a Packet Optical Access (POA) platform with enhanced MPLS-TP support, NPT-1021 is designed around a
centralized packet switch. It supports any-to-any direct data card connectivity. NPT-1021 offers a packet
switching capacity of up to 10 Gbps or 60 Gbps (with a choice of two modes on the same unit).


The platform offers unique non-traffic-affecting upgrades from 1G-based configurations to 10GE-based
configurations (with up to 4 x 10GE interfaces). This is supported through the CPS50 card, a central packet
switch (CPS) Tslot card for the NPT-1021. This card provides the NPT-1021 with scalable upgrades to
high-capacity 10GE configurations. The CPS50 makes it possible to upgrade the system packet switching
capacity to 60 Gbps. It supports up to 2 × 10GE (SFP+) ports and 2 flexible SFP housings. Each of these can
support 1 × 10GE with SFP+, 1 × GE with SFP, or 2 × GE with CSFP.
NPT-1021 offers enhanced MPLS-TP data network functionality, including the complete range of
Ethernet-based services (CES, MoE, and PoE+), as described in MPLS-TP and Ethernet solutions.
NPT-1021 is a compact (1U) base platform housed in a 243 mm deep, 465 mm wide, and 44 mm high
equipment cage. All its interfaces are accessible from the front of the unit. The platform includes the
following components:
 Traffic processing modules:
 12 ports, divided between:
 4 x SFP – each can be configured as 100/1000Base-X or 10/100/1000Base-T (with ETGBE) or
GPON ONU interface (with GTGBE_L3BD)
 8 x RJ-45 – 10/100/1000Base-T, 4 of 8 support PoE
 One traffic card slot (Tslot)
 Compact flash card (NVM)
 Traffic connector to the (optional) EXT-2U expansion unit
 Timing module (T3/T4, ToD, and 1pps)
 Redundant or non-redundant power supply modules (INF)
Figure 2-6: NPT-1021 platform

NPT-1021 can be fed by 24 VDC, -48 VDC, or 110 to 230 VAC. In DC power feeding, two INF modules can be
configured in the two power module slots for redundancy. One double slot INF module with dual-feeding
can be configured as well. AC power feeding requires the use of a conversion module to implement AC/DC
conversion.
NPT-1021 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform design
also makes this platform suitable for street cabinet use, withstanding temperatures up to 70°C.
NPT-1021 can also be configured as an expanded platform when combined with the EXT-2U expansion unit,
as illustrated in the following figure.
Figure 2-7: NPT-1021 with EXT-2U expansion unit


Typical power consumption of the NPT-1021 is 40 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the Neptune
System Specifications.

2.6 NPT-1010 platform


NPT-1010 is a packet transport CPE/CLE platform for the access, offering a packet solution that optimizes
customer premises packet handling. NPT-1010 is a cost-effective choice for the CLE, geared for cellular tail
locations (3G and LTE). It provides a unique access solution for cost-effective CPE, optimized for business
applications and small cell sites.
As a CPE platform with enhanced MPLS-TP support, the NPT-1010 is designed around a centralized
MPLS-TP packet switch that supports any-to-any direct data card connectivity. NPT-1010 offers a packet
switching capacity of up to 5 Gbps.
NPT-1010 provides enhanced MPLS-TP data network functionality, including the complete range of
Ethernet based services (CES, MoE, and PoE+), as described in the NPT General Description.
NPT-1010 is a compact (1U) base platform housed in a 224 mm deep, 223.5 mm wide, and 44 mm high
equipment cage with all interfaces accessible from the front of the unit. The platform includes the following
components:
 Traffic processing through:
 8 ports, divided between:
 4 x 10/100/1000BaseT electrical ports (with 4 x PoE+)
 4 x SFP - each can be configured as 100/1000Base-X or 10/100/1000Base-T (with ETGBE) or
GPON ONU interface (with GTGBE_L3BD)
 1 mini card slot (Mini slot):
 1588 V2 ToD/1PPS
 8 x T1/E1 and ToD/1PPS
 Alarms connector
 Management port
 Dual feed DC power supply or single AC power feed
Figure 2-8: NPT-1010_DC general view

Figure 2-9: NPT-1010_AC general view

NPT-1010DC can be fed by -48 VDC and the NPT-1010AC by 110 VAC to 230 VAC. In DC power
(NPT-1010DC) feeding, dual DC feed is supported. AC power feeding (NPT-1010AC) requires the use of a
conversion module to implement AC/DC conversion.


NPT-1010 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform design
also makes this platform a good choice for street cabinet use, withstanding temperatures up to 70°C.



3 NPT-1200 system architecture
As a powerful Neptune platform for metro and access networks, NPT-1200 can deliver
a variety of services. Designed for installation in COs and second aggregation locations, NPT-1200 integrates
MPLS-TP and Ethernet with SDH, PCM, optics, and PDH capabilities in a 2U (88 mm) unit that can be coupled
with the standard Neptune unit.
NPT-1200 eliminates the boundaries between data and voice communication environments, and paves the
way for service provisioning without sacrificing equipment reliability, robustness, and hard QoS (H-QoS).
Thus, both operators and service providers benefit from the best of both worlds: the cost-effectiveness and
universality of Ethernet and H-QoS, and the scalability and survivability of TDM.
Figure 3-1: NPT-1200 traffic flow via centralized dual matrix

Used in many subnetwork topologies, NPT-1200 can handle a mixture of P2P, hub, and mesh traffic
patterns. This combined functionality means that operators benefit from improved network efficiency and
significant savings in terms of cost and footprint.
The NPT-1200 platform:
 Increases the number of Ethernet interfaces, and upgrades from 10M to 100GE (100GE in future
version) easily and smoothly.
 Increases the number of STM-1 interfaces, and upgrades from STM-1 to STM-4/STM-16/STM-64 easily
and smoothly.



Neptune (Hybrid) Reference Manual NPT-1200 system architecture

 Adds OTN capabilities for seamless integration and interconnection with optical-based networking.
 Allows you to start as small as necessary and attain ultrahigh expandability in a build-as-you-grow™
fashion by combining the standard Neptune unit with an expansion unit (EXT-2U).
 Aggregates traffic arriving over Ethernet, PCM low-bitrate interfaces, E1/T1, E3/DS-3, and STM-1
directly over STM-1/STM-4/STM-16/STM-64 and GbE/10GbE/100GbE.
 Is suitable for indoor and outdoor installations.
 Supports an extended operating temperature range up to 70°C (with CPS/CPTS100 only).
The NPT-1200 platform is housed in a 243 mm deep, 442.4 mm wide, and 88.9 mm high equipment cage
with all interfaces accessible from the front of the unit.
Figure 3-2: NPT-1200 general view

The platform consists of:


 Two DC power-supply module slots
 One MCP1200 slot + a CF memory
 Two CPTS/CPS/XIO card slots
 One fan unit slot
 Seven traffic cards slots (Tslots)
The following figure identifies the slot arrangement in the NPT-1200 platform.
Figure 3-3: NPT-1200 platform slot assignment

The following table lists the modules that can be configured in each NPT-1200 slot.

Table 3-1: NPT-1200 modules

Name | Applicable slots in NPT-1200
INF_1200 | DC PSA, DC PSB
FCU_1200 | FS
MCP1200 | MS
CPTS100 | XS A, XS B
CPS100 | XS A, XS B
CPTS320 | XS A, XS B
CPS320 | XS A, XS B
XIO64 | XS A, XS B
XIO16_4 | XS A, XS B
PME1_21 ¹ | TS1 to TS7
PME1_21B ¹ | TS1 to TS7
PME1_63 ² | TS1 to TS7
PM345_3 | TS1 to TS7
SMQ1 | TS1 to TS7
SMQ1&4 | TS1 to TS7
SMS16 | TS1 to TS7
DMFE_4_L1 | TS1 to TS7
DMFX_4_L1 | TS1 to TS7
DMGE_4_L1 | TS1-TS4, TS6, TS7
DMFE_4_L2 | TS1 to TS7
DMFX_4_L2 | TS1 to TS7
DMGE_2_L2 | TS1 to TS7
DMGE_4_L2 ³ | TS1-TS4, TS6, TS7
DMXE_22_L2 ⁴ | TS1-TS4, TS6, TS7
DMGE_8_L2 | TS1 + TS2, TS6 + TS7 (only)
DMXE_48_L2 | TS1 + TS2, TS6 + TS7 (only)
DMCES1_4 | TS1-TS4, TS6, TS7
MS1_4 | TS1-TS4, TS6, TS7
MSE1_16 | TS1-TS4, TS6, TS7
MS1_32 | TS1-TS4, TS6, TS7
DHGE_4E | TS1-TS4, TS6, TS7
DHGE_8 | TS1-TS4, TS6, TS7
DHGE_16 | TS1 + TS2, TS6 + TS7 (only)
DHGE_24 | TS1 + TS2, TS6 + TS7 (only)
DHXE_2 | TS1-TS4, TS6, TS7
DHXE_4/4O | TS1-TS4, TS6, TS7
NFVG_4 | TS1-TS4, TS6, TS7

¹ The PME1_21/PME1_21B module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced conversion unit, the xDDF-21.
² The PME1_63 module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced conversion unit, the xDDF-21.
³ Depends on the platform power consumption.
⁴ Depends on the platform power consumption.

All cards support live insertion. The NPT-1200 platform provides full 1+1 redundancy for power feeding,
cross-connections, and the TMU, as well as 1:N redundancy for the fans.

NOTE: Failure of the MCP1200 does not affect any existing TDM and Packet traffic on the
platform.

The NPT-1200 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks.

NOTES:
 The NPT-1200 platform with CPTS100/CPS100 supports a maximum of 48 x GbE or a maximum of
10 x 10GbE interfaces.
 The NPT-1200 platform with CPTS320/CPS320 supports a maximum of 64 x GbE or a maximum of
32 x 10GbE interfaces (requires MBP-1200 HW revision B01 or later).
 The NPT-1200 platform must be configured with identical switching card types.


3.1 Control subsystem


In the NPT-1200 platform, a single main controller card (MCP-1200) controls the entire NPT-1200 system
via a high-performance CPU, which also processes communication with the EMS/LCT and other equipment.
A large-capacity flash memory stores equipment configuration data and up to two software versions. Both
online and remote software upgrades are supported. NPT-1200 supports the processing of RS DCC channels
and MS DCC channels, plus up to two Clear Channels (DCC over framed or unframed E1). The NPT-1200 unit
can send network management information through third-party SDH or PDH networks using the Clear
Channels. In addition, in-band Management Control Channels (MCC) are supported as well.
Figure 3-4: Control system block diagram

The NPT-1200 main controller card (MCP-1200) is the most essential card of the system, virtually creating a
complete standalone native packet system. Moreover, it accommodates one service traffic slot for flexible
configuration of virtually any type of PDH, SDH, and Ethernet interfaces. This integrated flexible design
ensures a very compact equipment structure and reduces costs, making NPT an ideal native choice for the
access and metro access layers.
NPT-1200 control and communication functions include:
 Internal control and processing
 Communication with external equipment and management
 Network element (NE) software and configuration backup
 Built-in Test (BIT)


3.1.1 Internal control and processing


The NPT-1200 main controller card provides central control, alarm, maintenance, and communication functions
for Neptune NEs. If required, it can also communicate with the control processors of various cards in the
extension unit, using a master-slave control hierarchy.
The control subsystem is separate from the traffic subsystem. If the control card fails or is extracted,
traffic is not impaired.

3.1.2 Software and configuration backup


NPT-1200 contains a large-capacity onboard NVM that stores a complete backup of the system’s software
and node configuration, ensuring superior management and control availability.
The NPT-1200 main controller card enables easy software upgrades using a remote procedure
operated from the EMS-NPT management station or the LCT-NPT craft terminal. The card can store two
different software versions simultaneously and enables a quick switchover between versions
when required.
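The two-version arrangement described above can be sketched as a dual-bank store with a quick switchover. This is a minimal illustration of the behavior only; the class and method names are invented, not the MCP1200's real interface.

```python
class SoftwareBanks:
    """Sketch of a dual-bank software store: the NVM holds two
    versions, and the controller can switch the active one.
    Illustrative only -- not ECI's implementation."""

    def __init__(self, active, standby):
        self.active = active
        self.standby = standby

    def download(self, version):
        # A remote upgrade always overwrites the standby bank,
        # leaving the running version untouched.
        self.standby = version

    def switchover(self):
        # Quick switch between the two stored versions.
        self.active, self.standby = self.standby, self.active
        return self.active

banks = SoftwareBanks(active="V6.0", standby="V5.1")
banks.download("V6.1")        # standby bank now holds V6.1
running = banks.switchover()  # activate the new version
```

Because the previous version stays in the standby bank, a rollback is simply another `switchover()` call.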

3.1.3 Built-in test


The BIT hardware and its related software assist in the identification of any faulty card or module.
The BIT outputs provide:
 Management reports
 System reset
 Maintenance alarms
 Fault detection
 Protection switch for an XIO/CPTS/CPS card
Dedicated test circuits implement the BIT procedure under the control of an integrated software package.
After the NPT-1200 unit is switched on, a BIT program is automatically activated for both the initialization
and normal operation phases. Alarms are sent to the EMS-NPT if any failures are detected by the BIT.
BIT testing covers general tests, including module presence tests and periodic sanity checks of I/O module
processors. It performs traffic path tests, card environment tests, data tests, and detects traffic-affecting
failures, as well as failures in other system modules.
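The periodic BIT pass described above can be sketched as a simple health-check loop over the installed modules. The module fields and alarm names below are illustrative assumptions, not ECI's internal interfaces.

```python
def run_bit_cycle(modules):
    """One periodic BIT pass mirroring the checks listed above:
    module presence, processor sanity, and traffic path. Returns
    alarm records to report to the EMS. Field and alarm names are
    illustrative only."""
    alarms = []
    for m in modules:
        if not m["present"]:
            # Presence test failed: report and skip further checks.
            alarms.append((m["slot"], "MODULE_MISSING"))
            continue
        for check in ("processor_ok", "traffic_path_ok"):
            if not m[check]:
                alarms.append((m["slot"], check.upper() + "_FAIL"))
    return alarms

modules = [
    {"slot": "TS1", "present": True, "processor_ok": True, "traffic_path_ok": True},
    {"slot": "TS2", "present": True, "processor_ok": False, "traffic_path_ok": True},
    {"slot": "TS3", "present": False, "processor_ok": False, "traffic_path_ok": False},
]
alarms = run_bit_cycle(modules)
```

A real BIT implementation would run such a cycle continuously and forward the resulting alarm records to the management system.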


3.2 Communications with external equipment and management
In the Neptune metro access platform product line, the main controller card is responsible for
communicating with other NEs and management stations.
The main controller card communicates with the remote EMS/LCT systems and other SDH NEs via the DCC
or clear channel. It communicates with the local EMS and LCT systems via the Ethernet interface. The
communications between SDH NEs, or between SDH NEs and the EMS/LCT, can also be via the DCN. The
controller can connect to the DCN via Ethernet or V.35. In addition, the controller can connect to external
equipment via Ethernet or RS-232, using DCC channels of the SDH network to build a narrow bandwidth
DCN for the external equipment. In-band Management Control Channel (MCC) is supported in the
platforms as well, enabling management of pure packet NE configurations through in-band channels.

NOTE: The NPT-1200 supports in-band and DCN management connections for PB and MPLS:
 4 Mbps policer for the PB UNI connecting to an external DCN
 10 Mbps shaper for MCC packets to the MCP
 No rate limit for the MNG port (rate up to 100M full duplex)
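Rate limits like the 4 Mbps policer in the note above are commonly implemented as a token bucket. The following minimal sketch shows the idea; the rate, burst size, and class are illustrative assumptions, not ECI's implementation.

```python
class TokenBucketPolicer:
    """Minimal token-bucket policer of the kind the note describes
    (e.g. a 4 Mbps policer on the PB UNI toward the DCN). Rates in
    bits per second. Illustrative sketch only."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.burst = burst_bits
        self.tokens = burst_bits  # start with a full bucket
        self.last = 0.0

    def allow(self, packet_bits, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True   # conforming: forward the packet
        return False      # exceeding: drop the packet

policer = TokenBucketPolicer(rate_bps=4_000_000, burst_bits=12_000)
first = policer.allow(12_000, now=0.0)   # burst fits: forwarded
second = policer.allow(12_000, now=0.0)  # bucket empty: dropped
```

A shaper (like the 10 Mbps MCC shaper) uses the same accounting but queues and delays exceeding packets instead of dropping them.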

3.3 Timing
NPT-1200 provides high-quality system timing to all traffic modules and functions in compliance with
applicable ITU-T recommendations for functionality and performance.
The main component in the NPT-1200 synchronization subsystem is the timing and synchronization unit
(TMU). Timing is distributed redundantly from the TMUs to all traffic and matrix cards, minimizing unit
types and reducing operation and maintenance costs.
The TMU and the internal and external timing paths are fully redundant. The high-level distributed BIT
mechanism ensures top performance and availability of the synchronization subsystem. In case of
hardware failure, the redundant synchronization subsystem takes over the timing control with no traffic
disruption.
To support reliable timing, NPT-1200 provides multiple synchronization reference options. Up to four
timing references can be monitored simultaneously:
 1PPS and ToD interfaces, using external timing input sources
 2 x 2 MHz (T3) external timing input sources
 2 x 2 Mbps (T3) external timing input sources
 STM-n line timing from any SDH interface card
 E1 2M PDH line timing from any PDH interface card
 Local internal clock
 Holdover mode


 SyncE
 1588V2 – Master, Slave, transparent, and boundary clock
In the NPT, any timing signal can be selected as a reference source. The TMU provides direct control over
the source selection (received from the system software) and the frequency control loop. The definition of
the synchronization source depends on the source quality and synchronization mode of the network timing
topology (set by the EMS-NPT or LCT-NPT):
 Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. The NPT-1200 synchronization subsystem synchronizes to the best available
timing source using the SSM protocol. The TMU is frequency-locked to this source, providing internal
system and SDH line transmission timing. The platform is synchronized to this central timing source.
 NPT-1200 provides synchronization outputs for the synchronization of external equipment within the
exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
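The reference classification described above — failed sources excluded, then the best available quality, then configured priority — can be sketched as a simple selection rule. Field names and the QL encoding below are illustrative assumptions; in SSM, a lower quality-level value indicates a better clock.

```python
def select_timing_reference(references):
    """Pick the active timing reference as the text describes:
    filter out failed sources, then choose the best SSM quality
    level (lower value = better clock), breaking ties by configured
    priority (lower number = higher priority). Field names are
    illustrative, not an ECI API."""
    candidates = [r for r in references if not r["failed"]]
    if not candidates:
        return None  # fall back to holdover / internal clock
    return min(candidates, key=lambda r: (r["ql"], r["priority"]))

refs = [
    {"name": "T3 2MHz", "ql": 4, "priority": 1, "failed": False},
    {"name": "STM-16 line", "ql": 2, "priority": 2, "failed": False},
    {"name": "E1 line", "ql": 2, "priority": 3, "failed": True},
]
best = select_timing_reference(refs)  # STM-16 line: best QL wins
```

Note that quality outranks priority here: the T3 input has the highest configured priority but a worse quality level, so the STM-16 line reference is selected.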
NPT-1200 supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
The IEEE 1588 Precision Time Protocol (PTP) provides a standard method for high precision synchronization
of network connected clocks. PTP is a time transfer protocol enabling slave clocks to synchronize to a
known master clock, ensuring that multiple devices operate using the same time base. The protocol
operates in a master/slave configuration using UDP packets over IP or multicast packets over Ethernet.
IEEE 1588v2 is supported in the NPT-1200 to provide Ordinary Clock (OC) and Boundary Clock (BC)
capabilities.
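The master/slave time transfer above rests on the standard PTP offset and delay arithmetic over the four event timestamps of the delay request-response exchange. The following is a minimal sketch, assuming a symmetric path as basic PTP does.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute the slave clock offset and mean path delay from the
    four PTP timestamps: t1 = Sync sent by master, t2 = Sync received
    by slave, t3 = Delay_Req sent by slave, t4 = Delay_Req received
    by master. Assumes a symmetric path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example (microseconds): slave runs 5 us ahead over a 10 us path.
offset, delay = ptp_offset_and_delay(1000, 1015, 1040, 1045)
```

The slave steers its clock by subtracting the computed offset; a boundary clock terminates this exchange on one port and acts as master on its other ports.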

3.4 Traffic and switching functionality


The heart of NPT-1200 is a nonblocking HO/LO cross-connect matrix. This architecture enables the
platform's outstanding configuration flexibility.
NPT-1200 supports the following range of non-blocking HO/LO cross connect configurations:
 256 VC-4 x 256 VC-4 as ADM-64/MADM-16
 100G packet MPLS-TP switching and TDM, 256 VC-4 x 256 VC-4 as ADM-64/MADM-16
 320G packet MPLS-TP switching and TDM, 256 VC-4 x 256 VC-4 as ADM-64/MADM-16
NPT-1200 implements hardware-based subnetwork connection protection on all interfaces. Switchover to
protection takes less than 50 msec.
In the NPT-1200, the high-capacity, nonblocking 4/4/3/1 HO/LO cross-connect matrix is in the
XIO64/XIO16_4/CPTS100/CPTS320 redundant cards. Based on the type of
XIO64/XIO16_4/CPTS100/CPTS320 card, different matrix cores are used, as follows:
 In the XIO64, the matrix core uses 256 VC-4 equivalents (4/4/3/1) and provides an STM-64 optical
interface.
 In the XIO16_4, the matrix core uses 256 VC-4 equivalents (4/4/3/1) and provides four STM-1/4/16
optical interfaces.
 In the CPTS100, a central packet and TDM switch, the core provides 100G packet switching and 40G TDM,
and includes timing control with 1 x STM-64, 2 x STM-1/4/16, and 2 x 10GE interfaces.


 In the CPS100, a central packet switch, the core provides 100G packet switching and includes timing
control with 2 x 10GE interfaces.
 In the CPTS320, a central packet and TDM switch, the core provides 320G packet switching and 40G TDM,
and includes timing control with 1 x STM-64, 2 x STM-1/4/16, and 4 x 10GE interfaces.
 In the CPS320, a central packet switch, the core provides 320G packet switching and includes timing
control with 4 x 10GE interfaces.

3.5 Power feed subsystem


NPT-1200 features a distributed power feed subsystem. This distributed power concept supports system
upgrades and efficient heat distribution, and ensures maximum reliability of the power feed
interface.
In the NPT-1200 product, two INF_1200 power feed modules serve as a redundant interconnection device
between the NPT-1200 cards and –48 VDC power sources. The main purpose of these units is to decouple
noise generated or received on the DC power source lines. Each INF_1200 has an external power input. The
filter is connected to a power input to distribute the -48 VDC battery plant input to all cards via fully
redundant power buses. Each card of the NPT-1200 generates its own local voltage using high-quality
DC/DC converters.
Additional features of the power feed subsystem include:
 Reverse polarity protection
 Overvoltage alarm and protection
 Undervoltage alarm and protection
 Redundancy between INF units
 Hot swapping
 Power-fail detection and 10 msec holdup
 Lightning-strike protection
 Redundant fan power supply with adjustable voltages
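The under- and over-voltage detection listed above amounts to a window check on the -48 VDC input. The sketch below illustrates the idea; the thresholds are illustrative assumptions, and the real limits are given in the Neptune System Specifications.

```python
def check_input_voltage(volts, under=-40.0, over=-57.0):
    """Classify a nominal -48 VDC input as the feature list
    describes: alarm (and shut down) outside the window.
    Thresholds are illustrative assumptions. With negative battery
    voltages, 'under' is closer to 0 V and 'over' is more negative."""
    if volts > under:
        return "UNDERVOLTAGE"
    if volts < over:
        return "OVERVOLTAGE"
    return "OK"

status = check_input_voltage(-48.0)  # nominal input is within the window
```

On an UNDERVOLTAGE or OVERVOLTAGE result, the INF_1200 raises an alarm and shuts down the power supply, as listed above.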

3.6 NPT-1200 common cards


The following modules in the NPT-1200 provide the common functionality for a Neptune system:
 INF_1200: DC power feeding with input filtering. Usually, an NPT-1200 platform is configured with two
INF_1200s for redundancy.
 FCU_1200: Cooling for the NPT-1200 platform.
 MCP1200: Main control, communication, and overhead processing functionality.
 Switching cards: NPT-1200 supports various switching and matrix cards described in NPT-1200
switching cards

NOTE: A non-redundant (1+0) central switching configuration is also supported.


3.6.1 INF_1200
The INF_1200 is a DC power-filter module that can be plugged into the NPT-1200 platform. Two INF_1200
modules are needed for power feeding redundancy. It performs the following functions:
 Single DC power input and power supply for all modules in the NPT-1200
 Input filtering function for the entire NPT-1200 platform
 Adjustable output voltage for fans in the NPT-1200
 Indication of input power loss and detection of under-/over-voltage
 Shutting down of the power supply when under-/over-voltage is detected
 High-power INF supporting up to 550 W, or 650 W with INF_1200 HW revision D02 and above
Figure 3-5: INF_1200 front panel

3.6.2 FCU_1200
The FCU_1200 is a pluggable fan control module with eight fans for cooling the NPT-1200 platform. The
fans’ running speed can be set to 16 different levels. The speed is controlled by the MCP1200 according to
the installed cards’ temperatures.
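The 16-level speed control described above amounts to mapping the measured card temperature onto a discrete fan level. The sketch below shows one way to do this; the linear mapping and the temperature window are assumptions for illustration, not the MCP1200's actual control law.

```python
def fan_speed_level(temp_c, min_temp=25.0, max_temp=70.0, levels=16):
    """Map the hottest installed-card temperature onto one of the
    16 fan speed levels (0 = slowest, 15 = fastest). The linear
    mapping and the 25-70 C window are illustrative assumptions."""
    if temp_c <= min_temp:
        return 0
    if temp_c >= max_temp:
        return levels - 1
    # Divide the window into equal bands, one per level.
    band = (max_temp - min_temp) / levels
    return int((temp_c - min_temp) / band)

level = fan_speed_level(50.0)  # mid-range temperature, mid-range level
```

A real controller would also add hysteresis around level boundaries so the fan speed does not oscillate when the temperature hovers near a threshold.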
Figure 3-6: FCU_1200 front panel

Table 3-2: FCU_1200 front panel LEDs

Marking | Full name | Color | Function
ACT. | System active | Green | Normally on when the fan unit is powered on. Off indicates a power failure of the fan unit.
FAIL | System fail | Red | Normally off. Lights when a fan failure is detected.


3.6.3 MCP1200
The MCP1200 card is the main processing card of the NPT-1200. It integrates functions such as control,
communications and overhead processing. It provides:
 Control-related functions:
 Communications with and control of all other modules in the NPT-1200 and EXT-2U through the
backplane (by the CPU)
 Communications with the EMS-NPT, LCT-NPT, or other NEs through a management interface
(MNG), DCC, MCC, or VLAN
 Routing and handling of up to 32 x RS DCC and 32 x MS DCC channels (32 channels in total), plus two
clear channels
 Alarms and maintenance
 Fan control
 Overhead processing, including overhead byte cross connections, OW interface, and user channel
interface
 External timing reference interfaces (T3/T4), which provide the line interface unit for one 2 Mbps
T3/T4 interface and one 2 MHz T3/T4 interface
The MCP1200 supports the following interfaces:
 MNG and T3/T4 directly from its front panel
 RS-232, OW access, housekeeping alarms, and V.11 through a concentrated SCSI auxiliary I/F
connector (on the front panel)
In addition, the MCP1200 has LED indicators and one reset pushbutton. As the NPT-1200 is a front-access
platform, all its interfaces, LEDs, and pushbutton are located on the front panel of the MCP1200.

Figure 3-7: MCP1200 front panel

Table 3-3: MCP1200 front panel interfaces

Marking | Interface type | Function
AUXILIARY I/F | SCSI-36 | A concentrated auxiliary connector for the following interfaces: 1 x V.11 overhead interface; 1 x RS-232 interface for debugging or managing external ancillary equipment; 1 x OW interface connecting an external OW box; 1 x alarm input and output interface connecting to the RAP
T3/T4 | RJ-45 | T3 and T4 timing interfaces (one 2 Mbps and one 2 MHz)
MNG. | RJ-45 | 10/100BaseT Ethernet interface for management
AUX MNG. | RJ-45 | Auxiliary 10/100BaseT Ethernet interface for local management and debug

NOTE: An MCP30 ICP can be used to distribute the concentrated auxiliary connector into
dedicated connectors for each function.

Table 3-4: MCP1200 LED indicators and pushbutton

Marking | Full name | Color | Function
(left LED in MNG. and AUX MNG. ports) | Link | Green | Lights when the MNG link is up; off when the link is down. Blinks when packets are received or transmitted.
(right LED in MNG. and AUX MNG. ports) | Speed | Orange | Lights when the MNG link is 100 Mbps; off when the MNG link is 10 Mbps.
ACT. | System active | Green | Normally blinks at 0.5 Hz. Off, or on steadily, indicates the card is not running normally.
FAIL | System fail | Red | Normally off. Lights when a card failure is detected.
MJR. | System major alarm | Red | Lights when the system has a critical or major alarm.
MNR. | System minor alarm | Yellow | Lights when the highest severity of the NE’s current alarms is minor or warning. Off when the system has no alarms or the highest severity is higher than minor or warning.
CF ACT. | Compact Flash memory active | Green | Lights when the CF card is present and properly locked; off when the CF card is not present or not locked. Blinks when the CF card is being written or read.
CF FAIL | Compact Flash memory fail | Red | Normally off. Lights when the CF card is not present, not locked, or has a failure.

NOTE: The ACT, FAIL, MJR, and MNR LEDs are combined to indicate various failure reasons during
the system boot. For details, see the Troubleshooting Using Component Indicators section in
the NPT-1200 Installation, Operation, and Maintenance Manual.


3.7 NPT-1200 switching cards
The NPT-1200 platform can operate with different matrix cards according to its configuration:
 CPTS100: Dual matrix card that provides native TDM cross connect and native packet switching. Most convenient for applications that require a high volume of combined packet and TDM traffic. The card also supports 1 x STM-64 XFP based, 2 x STM-16/STM-4/STM-1 SFP based, and 2 x 10 GbE SFP+ based aggregation interfaces via corresponding ports.
 CPS100: Single switching card that provides packet switching. Most convenient for applications that require a high volume of pure packet handling. The card also supports 2 x 10 GbE SFP+ based aggregation interfaces via corresponding ports.
 CPTS320: Dual matrix card that provides native TDM cross connect and native packet switching. Most convenient for applications that require a very high volume of combined packet and TDM traffic. The card also supports 1 x STM-64 XFP based, 2 x STM-16/STM-4/STM-1 SFP based, and 4 x 10 GbE SFP+ based aggregation interfaces via corresponding ports.
 CPS320: Single switching card that provides packet switching. Most convenient for applications that require a very high volume of pure packet handling. The card also supports 4 x 10 GbE SFP+ based aggregation interfaces via corresponding ports.
 XIO64: TDM matrix card that provides pure TDM cross connect. Supports STM-64 aggregation through a single XFP port.
 XIO16_4: TDM matrix card that provides pure TDM cross connect. Supports STM-16/4/1 aggregation through four SFP ports.
The NPT-1200 must be equipped with a pair of cards of the same type to support system redundancy.
The following sections detail each card functionality.
3.7.1 CPTS100
The CPTS100 is a powerful, high capacity, non-blocking dual cross connect matrix. It includes a TDM matrix for native SDH switching and a packet switch that supports native packet-level switching.
In legacy TDM-level cross connects through the TDM matrix (as in the XIO cards), bandwidth allocation is static and consumes much of the available bandwidth. In the CPTS100, bandwidth is dynamically allocated, ensuring high flexibility and efficient utilization of this limited resource. Furthermore, when slots must be unassigned and reassigned, the matrix uses a sophisticated bandwidth rearrangement algorithm for best bandwidth utilization.
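The dynamic allocation described above can be illustrated with a toy sketch. The function name, the greedy largest-first policy, and the 40 Gbps pool size are illustrative assumptions, not ECI's actual rearrangement algorithm:

```python
def allocate(requests, total_gbps=40):
    """Grant bandwidth to slots, largest requests first; the sort acts as a
    crude 'rearrangement' step so the fixed pool is packed efficiently.
    Purely illustrative; not the CPTS100 implementation."""
    remaining = total_gbps
    grants = {}
    for slot, need in sorted(requests.items(), key=lambda kv: -kv[1]):
        grants[slot] = min(need, remaining)
        remaining -= grants[slot]
    return grants, remaining
```

The point of the sketch is only that a dynamic allocator can repack grants when demand changes, instead of pinning fixed bandwidth to each slot.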

NOTE: The CPTS100 supports 1 x STM-64 or 2 x STM-1/4/16 ports.


A functional diagram of the CPTS100 matrix is shown in the following figure.
Figure 3-8: CPTS100 functional diagram

The matrix includes the following main components:
 100 Gbps packet switch with 72 Gbps TM
 40 Gbps TDM matrix
 5G HEoS connectivity between the packet and TDM matrix
Each I/O card slot is connected to the cross-point switch by eight high-speed lanes (channels). The cross-point switch routes the eight channels of each I/O slot either to the TDM matrix or to the packet switch. Each channel carries 2.5 Gbps between SDH (and other TDM) cards and the TDM matrix; between data cards and the packet switch, a slot's channels carry up to 20 Gbps in total.
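As a back-of-envelope consistency check (an illustration, not a specification), eight lanes at 2.5 Gbps each account for the 20 Gbps figure quoted for data cards:

```python
LANES_PER_SLOT = 8    # high-speed channels routed per I/O slot
GBPS_PER_LANE = 2.5   # per-channel rate toward the TDM matrix

def slot_capacity_gbps(lanes=LANES_PER_SLOT, rate_gbps=GBPS_PER_LANE):
    """Aggregate capacity of one I/O slot across all of its lanes."""
    return lanes * rate_gbps  # 8 x 2.5 = 20 Gbps per slot
```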
The CPTS100 provides the following main functions:
 Packet switch with 100 Gbps capacity, providing:
 Management and internal control, in addition to user traffic switching
 Non-blocking data switch fabric
 P2P MPLS internal links via the packet switch
 Any slot to any slot connectivity
 Any card installed in any slot
 IEEE 1588v2 synchronization: Master, Slave, Transparent, and Boundary Clock
 Aggregate ports:
 1 x STM-64 XFP based line
 2 x STM-16/STM-4/STM-1 SFP based lines


 2 x 10 GbE SFP+ based lines
 Built-in OTN (OTU-1/OTU-2/2e) for STM-16/STM-64 ports and OTU-2/2e for 10 GbE ports

3.7.1.1 CPTS100 functional description
The CPTS100 card has three main subsystems:
 Central cross-connect matrix: Performs all the NPT-1200 TDM traffic cross-connect operations.
 Central packet switch: Performs all the NPT-1200 packet switching operations.
 TMU: Generates and distributes timing and clock signals to all the cards installed in the NPT-1200
platform. In addition to its internal timing reference, the TMU can use up to four user-specified
reference sources. See Timing for a description of the TMU capabilities.
The CPTS100 card is a critical NPT-1200 subsystem; therefore, for redundancy, two CPTS100 cards should be installed in any NPT-1200 platform, one on each side of the card cage. Both cards must be of the same type and option and must run the same NPT-1200 release.
When two identical cards are installed in the platform, the cards operate in a master-slave configuration:
 At any time, only one card is active and the other is in standby.
 Upon failure or removal of the active card, the standby card becomes active without any disruption in
system operation.
A CPTS100 card can be inserted and replaced without affecting traffic flow.
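The active/standby behavior described above can be sketched as a small state machine. Class and state names are illustrative assumptions, not the platform's internal arbitration logic:

```python
class MatrixPair:
    """Track which of two matrix cards is active; on failure or removal of
    the active card, the standby card takes over (hitless from the traffic's
    point of view). Illustrative sketch only."""

    def __init__(self):
        self.state = {"A": "active", "B": "standby"}

    def fail(self, card):
        """Mark a card failed/removed and promote the standby if needed."""
        self.state[card] = "failed"
        if "active" not in self.state.values():
            for name, st in self.state.items():
                if st == "standby":
                    self.state[name] = "active"
                    break
        return self.state
```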

NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic-affecting.

The following figure shows the front panel of the CPTS100.

Figure 3-9: CPTS100 front panel

The CPTS100 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.


Table 3-5: CPTS100 indicators and functions
Marking Full name Color Function
ACT. System active Green Off indicates no power supply.
On steadily indicates that the CPTS100 software did not download successfully or that the MCP1200 cannot control the CPTS100 normally.
Blinks (1 sec on, 1 sec off) when the CPTS100 card is running normally and is active.
FAIL System fail Red Normally off. Lights when a card failure is
detected.
STBY System standby Orange Lights when the card is in standby. Off when the
card is active.
LSR ON (separate Laser on indication Green Lights steadily when laser is on.
LED for each port)
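For scripting around such status tables, the ACT. LED states can be mapped to their meanings from Table 3-5. This helper is an illustrative assumption, not part of any ECI API; the LED itself is the authoritative indicator:

```python
def act_led_status(state):
    """Map an observed ACT. LED state to its meaning per Table 3-5."""
    meanings = {
        "off": "no power supply",
        "steady": "software not downloaded or card not controllable by the MCP1200",
        "blinking": "card running normally and active",
    }
    return meanings.get(state, "unknown state")
```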


3.7.2 CPS100
The CPS100 is a powerful, high capacity, non-blocking switching card. It includes a packet switch that supports native packet-level switching.
In the CPS100, bandwidth is dynamically allocated, ensuring high flexibility and efficient utilization of this limited resource. Furthermore, when slots must be unassigned and reassigned, the matrix uses a sophisticated bandwidth rearrangement algorithm for best bandwidth utilization.
A functional diagram of the CPS100 matrix is shown in the following figure.
Figure 3-10: CPS100 functional diagram

The matrix includes the following main components:
 100 Gbps packet switch with 72 Gbps TM
The CPS100 provides the following main functions:
 Packet switch with 100 Gbps capacity, providing:
 Management and internal control, in addition to user traffic switching
 Non-blocking data switch fabric
 Guaranteed CIR
 Eight CoS with differentiated services
 P2P MPLS internal links via the packet switch
 Any slot to any slot connectivity


 Any card installed in any slot
 IEEE 1588v2 synchronization: Master, Slave, Transparent, and Boundary Clock
 Aggregate ports:
 2 x 10 GbE SFP+ based lines with OTN framing option (OTU-2/2e FEC/EFEC)

3.7.2.1 CPS100 functional description
The CPS100 card has two main subsystems:
 Central packet switch: Performs all the NPT-1200 packet switching operations.
 TMU: Generates and distributes timing and clock signals to all cards installed in the NPT-1200 platform. In addition to its internal timing reference, the TMU can use up to four user-specified reference sources. See Timing for a description of the TMU capabilities.
The CPS100 card is a critical NPT-1200 subsystem, and therefore, for redundancy purposes, two CPS100
cards must be installed in any NPT-1200 platform, one on each side of the cards cage. Both cards must be
of the same type and option and running the same NPT-1200 release.
When two identical cards are installed in the platform, the cards operate in a master-slave configuration:
 At any time, only one card is active and the other is in standby mode.
 Upon a failure or removal of the active card, the standby card becomes active without any disruption
in the system operation.
A CPS100 card can be inserted and replaced without affecting the traffic flow.

NOTE: During an upgrade, a different card version or release can be installed in the platform. With appropriate planning, the upgrade can be non-traffic-affecting.

The following figure shows the front panel of the CPS100.

Figure 3-11: CPS100 front panel

The CPS100 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.


Table 3-6: CPS100 indicators and functions
Marking Full name Color Function
ACT. System active Green Off indicates no power supply.
On steadily indicates that the CPS100 software did not download successfully or that the MCP1200 cannot control the CPS100 normally.
Blinks (1 sec on, 1 sec off) when the CPS100 card is running normally and is active.
FAIL System fail Red Normally off. Lights when a card failure is
detected.
STBY System standby Orange Lights when the card is in standby. Off when the
card is active.
LSR ON (separate Laser on indication Green Lights steadily when laser is on.
LED for each port)

3.7.3 CPTS320
CPTS320 dual matrix cards are centralized packet and TDM switches that support any-to-any direct data card connectivity as well as native TDM switching capacity. These matrix cards, designed for use in the NPT-1200 metro access platform, offer a choice of capacity and configuration options, including:
 An all-native Ethernet packet switch, supporting native packet-level switching with a capacity of up to 320 Gbps with 240 Gbps TM, providing:
 Management and internal control, in addition to user traffic switching
 Non-blocking data switch fabric
 P2P MPLS internal links via the packet switch
 Any slot to any slot connectivity
 Any card installed in any slot
 HO/LO nonblocking TDM cross connections, enabling native SDH/SONET switching with a capacity of up to 40 Gbps (256 x VC-4, fully LO traffic)
 5G HEoS connectivity between the packet and TDM matrix
 Aggregate ports:
 1 x STM-64 XFP based interface
 2 x STM-16/STM-4/STM-1 SFP based configurable interfaces
 4 x 10 GbE SFP+ based interfaces
 Comprehensive range of timing and synchronization capabilities (IEEE 1588v2, SyncE)

NOTE: The CPTS320 supports 1 x STM-64 or 2 x STM-1/4/16 ports.


The following figure shows the traffic flow in an NPT-1200 configured with a CPTS320 matrix card.
Figure 3-12: CPTS320 traffic flow

NOTE: CPTS320 HEoS functionality is supported as of V6.0; make sure to use the version that includes the HEoS fix.


3.7.3.1 CPTS320 functional description
The CPTS320 card has three main subsystems:
 Central cross-connect matrix: Performs all the NPT-1200 TDM traffic cross-connect operations.
 Central packet switch: Performs all the NPT-1200 packet switching operations.
 TMU: Generates and distributes timing and clock signals to all the cards installed in the NPT-1200
platform. In addition to its internal timing reference, the TMU can use up to four user-specified
reference sources. See Timing for a description of the TMU capabilities.
The CPTS320 card is a critical NPT-1200 subsystem; therefore, two CPTS320 cards should be installed in any NPT-1200 platform for redundancy (a non-redundant configuration is supported as well). Both cards must be of the same type and option and must run the same NPT-1200 release.
When two identical cards are installed in the platform, the cards operate in a master-slave configuration:
 At any time, only one card is active and the other is in standby.
 Upon failure or removal of the active card, the standby card becomes active without any disruption in
system operation.
A CPTS320 card can be inserted and replaced without affecting traffic flow.

NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade is non-traffic-affecting.

The following figure shows the front panel of the CPTS320.

Figure 3-13: CPTS320 front panel

The CPTS320 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.

Table 3-7: CPTS320 indicators and functions
Marking Full name Color Function
ACT. System active Green Off indicates no power supply.
On steadily indicates that the CPTS320 software did not download successfully or that the MCP1200 cannot control the CPTS320 normally.
Blinks (1 sec on, 1 sec off) when the CPTS320 card is running normally and is active.
FAIL System fail Red Normally off. Lights when a card failure is
detected.


Marking Full name Color Function
STBY System standby Orange Lights when the card is in standby. Off when the
card is active.
LSR ON (separate Laser on indication Green Lights steadily when laser is on.
LED for each port)

3.7.4 HEoS_16 (CPTS internal connection)

The HEoS_16 is a high order EoS module located internally between the TDM matrix and the central packet switch within the CPTS100/320 central matrix card. The module has a total bandwidth of 5 Gbps and up to 16 EoS/MoT ports.
The HEoS_16 supports high order (i.e., VC-4 only) VCAT, LCAS, and GFP-F encapsulation. It has the following connections:
 On the WAN side, to the central TDM matrix, where flexible cross connects can be made to any SDH port or to any L1/L2 DMxx_x_L2 EoS/MoT based card.
 On the LAN side, to the central packet switch, through a channelized link with flow control functionality.
The following figure shows the HEoS_16 connectivity.
Figure 3-14: HEoS_16 connections


NOTE: The HEoS_16 supports up to 16 channels (HO only):

 8 x EoS/MoT channels (CH#1 to CH#8) with jumbo frames up to 9736 bytes
 8 x EoS/MoT channels (CH#9 to CH#16) with jumbo frames up to 2400 bytes (mainly for CES card connections)
 VCG1 can carry 0 to 16 VC-4s, or 32 VC-4s
 VCG2 to VCG8 can each carry 0 to 16 VC-4s
 VCG9 to VCG16 can each carry 0 to 16 VC-4s in the CPTS100 and 0 to 2 VC-4s in the CPTS320
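The per-VCG limits in the note can be expressed as a small validity check; the function names are illustrative assumptions:

```python
def max_vc4(vcg, card="CPTS320"):
    """Maximum VC-4s per VCG, per the note above (HO only)."""
    if vcg == 1:
        return 32                      # VCG1 may also be configured as 32 VC-4s
    if 2 <= vcg <= 8:
        return 16
    if 9 <= vcg <= 16:
        return 16 if card == "CPTS100" else 2
    raise ValueError("VCG index must be 1..16")

def vcg_ok(vcg, n_vc4, card="CPTS320"):
    """True if the requested VC-4 count fits the VCG's limit."""
    return 0 <= n_vc4 <= max_vc4(vcg, card)
```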

The HEoS_16 can be used for the following main applications:

 Supporting EoS and MoT ports, as in the DMXE and DMGE cards. In other words, a CPTS100/320 with the HEoS can replace "XIO64 + DMXE" for MoT based applications.
 Connecting the central packet switch in the CPTS100/320 to DMXE/DMGE/MPoE cards through the HEoS_16 within the same NE, sending traffic to any internal or DHxx_XX card.

3.7.5 CPS320
The CPS320 is a centralized packet switch that supports any-to-any direct data card connectivity. This switch card, designed for use in the NPT-1200 metro access platform, offers a choice of capacity and configuration options, including:
An all-native Ethernet packet switch, supporting native packet-level switching with a capacity of up to 320 Gbps with 240 Gbps TM, providing:
 Management and internal control, in addition to user traffic switching
 Non-blocking data switch fabric
 P2P MPLS internal links via the packet switch
 Traffic management including:
 Guaranteed CIR
 Eight CoS with differentiated services
 Two CoS (within the switch)
 E2E flow control
 Any card installed in any slot
 Any to any slot connectivity
 Aggregate ports:
 4 x 10 GbE SFP+ based interfaces
 Comprehensive range of timing and synchronization capabilities (ToD, 1pps)

NOTE: The CPS320 switch card does not support extended temperature operation.


The following figure shows the traffic flow in an NPT-1200 configured with a CPS320 matrix card.
Figure 3-15: CPS320 traffic flow

3.7.5.1 CPS320 functional description
The CPS320 card has two main subsystems:
 Central packet switch: Performs all the NPT-1200 packet switching operations.
 TMU: Generates and distributes timing and clock signals to all cards installed in the NPT-1200 platform. In addition to its internal timing reference, the TMU can use up to four user-specified reference sources. See Timing for a description of the TMU capabilities.
The CPS320 card is a critical NPT-1200 subsystem; therefore, two CPS320 cards should be installed in any NPT-1200 platform for redundancy (a non-redundant configuration is supported as well). Both cards must be of the same type and option and must run the same NPT-1200 release.
When two identical cards are installed in the platform, the cards operate in a master-slave configuration:
 At any time, only one card is active and the other is in standby mode.
 Upon a failure or removal of the active card, the standby card becomes active without any disruption
in the system operation.
A CPS320 card can be inserted and replaced without affecting the traffic flow.


NOTE: During an upgrade, a different card version or release can be installed in the platform. With appropriate planning, the upgrade can be non-traffic-affecting.

The following figure shows the front panel of the CPS320.

Figure 3-16: CPS320 front panel

The CPS320 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.

Table 3-8: CPS320 indicators and functions
Marking Full name Color Function
ACT. System active Green Off indicates no power supply.
On steadily indicates that the CPS320 software did not download successfully or that the MCP1200 cannot control the CPS320 normally.
Blinks (1 sec on, 1 sec off) when the CPS320 card is running normally and is active.
FAIL System fail Red Normally off. Lights when a card failure is
detected.
STBY System standby Orange Lights when the card is in standby. Off when the
card is active.
LSR ON (separate Laser on indication Green Lights steadily when laser is on.
LED for each port)


3.7.6 XIO64
The XIO64 is the cross-connect matrix card with one aggregation line interface for the NPT-1200. It also includes the TMU. The NPT-1200 should always be configured with two XIO64 cards for cross-connect matrix and TMU redundancy. The XIO64 has a cross-connect capacity of 40 Gbps.
In addition, the XIO64 provides one STM-64 aggregate line interface based on an XFP module. The XFP housing on the XIO64 panel supports STM-64 optical transceivers with a pair of LC optical connectors. The card also supports OTN (with FEC) via a dedicated XFP type, the OTRN_xx.

Figure 3-17: XIO64 front panel

Table 3-9: XIO64 front panel LED indicators
Marking Full name Color Function
ACT. System active Green Off indicates no power supply.
On steadily indicates that the XIO64 software did not download successfully or that the MCP1200 cannot control the XIO64 normally.
Blinks (1 sec on, 1 sec off) when the XIO64 card is running normally and is active.
FAIL System fail Red Normally off. Lights when card failure detected.
STANDBY System standby Orange Lights when the card is in standby. Off when the
card is active.
LSR ON Laser on indication Green Lights steadily when laser is on.


3.7.7 XIO16_4
The XIO16_4 is the cross-connect matrix card with four aggregation line interfaces for the NPT-1200. It also includes the TMU. The NPT-1200 should always be configured with two XIO16_4 cards for cross-connect matrix and TMU redundancy. The XIO16_4 has a cross-connect capacity of 40 Gbps.
In addition, the XIO16_4 provides four STM-16/4/1 aggregate line interfaces based on SFP modules. The SFP housings on the XIO16_4 panel support STM-1, STM-4, and STM-16 (colored, non-colored, and BD) optical transceivers, each with a pair of LC optical connectors. The interface type can be configured separately for each port through the management software.

Figure 3-18: XIO16_4 front panel

Table 3-10: XIO16_4 front panel LED indicators
Marking Full name Color Function
ACT. System active Green Off indicates no power supply.
On steadily indicates that the XIO16_4 software did not download successfully or that the MCP1200 cannot control the XIO16_4 normally.
Blinks (1 sec on, 1 sec off) when the XIO16_4 card is running normally and is active.
FAIL System fail Red Normally off. Lights when card failure detected.
STANDBY System standby Orange Lights when the card is in standby. Off when the
card is active.
LSR ON Laser on indication Green Lights steadily when laser is on.

3.8 Engineering orderwire

EOW provides 64 Kbps voice communication channels between NEs through the E1, E2, or F1 bytes in the STM-n interface overhead, or via clear channels (in framed E1 mode) provisioned between NEs.
The OW facility is based on a telephone "party line" concept, where all connected parties, typically technicians, can participate in concurrent voice-based service calls. As such, it enables one or more technicians to make calls simultaneously using dedicated OW channels instead of regular SDH lines.
OW lines are normally used between a remote site and a CO during initial installation of the system, or when no telephone line is available. All calls are bidirectional.
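The 64 Kbps channel rate follows from standard SDH framing: frames repeat 8000 times per second and each overhead byte carries 8 bits. A one-line check (illustrative helper name):

```python
FRAMES_PER_SECOND = 8000  # SDH frame repetition rate
BITS_PER_BYTE = 8

def overhead_byte_rate_kbps():
    """Channel rate carried by a single overhead byte (e.g., E1, E2, or F1)."""
    return FRAMES_PER_SECOND * BITS_PER_BYTE / 1000  # 64 kbps
```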


The interface between the OW and the NPT-1200 and NPT-1030 platforms is based on a framed E1 interface. A special cable provides the E1 connection between the host NPT-1200 or NPT-1030 unit and the OW unit. The framed E1 carries various information from/to the OW unit.
The OW module consists of an integrated DTMF handset, cable connections, and configuration interfaces.
No other ancillary equipment is required.

3.9 NPT-1200 Tslot I/O modules

The NPT-1200 has seven Tslots that accommodate I/O modules. It supports various PDH, SDH, CES, and Ethernet I/O modules, as listed in the following table. For a detailed description of these modules, see Tslot I/O modules.

Table 3-11: NPT-1200 Tslot modules
Type Designation
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21B
Electrical PDH E1 interface Tslot module with 63 interfaces PME1_63
Electrical PDH E3/DS-3 interface Tslot module PM345_3
4 x STM-1 ports SDH interface card SMQ1
4 x STM-1 or STM-4 ports SDH interface card SMQ1&4
1 x STM-16 port SDH interface card SMS16
Electrical Ethernet interface module with L1 functionality DMFE_4_L1
Optical Ethernet interface module with L1 functionality DMFX_4_L1
Electrical/optical GbE interface module with L1 functionality DMGE_4_L1
Electrical Ethernet interface module with L2 functionality DMFE_4_L2
Optical Ethernet interface module with L2 functionality DMFX_4_L2
Electrical/optical GbE interface module with L2 functionality DMGE_2_L2
Electrical/optical GbE interface module with L2 functionality DMGE_4_L2
Electrical/optical GbE interface module with L2 functionality DMGE_8_L2
Optical 10 GbE and GbE interface module with L2 functionality DMXE_22_L2
Optical 10 GbE and GbE interface module with L2 functionality DMXE_48_L2
CES services for STM-1/STM-4 interfaces module DMCES1_4
CES multi-service module with 16 x E1/T1 interfaces MSE1_16
CES multiservice card with 8 x E1/T1 and 2 x STM-1/OC-3 interfaces MSC_2_8
CES multi-service module for 4 x OC3/STM-1 or 1 x OC12/STM-4 interfaces MS1_4
CES multi-service module with 32 x E1/T1 interfaces MSE1_32
Electrical GbE interface module with direct connection to the packet switch DHGE_4E
Optical GbE interface module with direct connection to the packet switch DHGE_8
Electrical and optical GbE interface module with direct connection to the packet switch DHGE_16


Type Designation
Optical GbE interface module with direct connection to the packet switch DHGE_24
Optical 10GE interface module with direct connection to the packet switch DHXE_2
Optical 10GE interface module with direct connection to the packet switch DHXE_4
Optical 10GE interface module with direct connection to the packet switch, with OTN wrapping DHXE_4O
NFV module with 4 x GbE front panel ports for Virtual Network Functions NFVG_4

3.10 Expansion Platform

The traffic capabilities of the Neptune platform can be expanded by installing the EXT-2U expansion unit on top.
The EXT-2U platform is a high density modular expansion unit for the Neptune multiservice platforms. It supports the complete range of PDH, SDH, CES, PCM, optics, and Ethernet services. Integrating this add-on platform into your network configuration is not traffic-affecting.
The EXT-2U is compact and versatile and can be used with different base units from the NPT product line. The type of traffic delivered by the unit depends on the type of matrix (TDM and packet, or packet only) installed in the base unit; I/O expansion cards are supported accordingly.
The EXT-2U has three multipurpose slots (ES1 to ES3) for any combination of extractable traffic cards. PDH, SDH, optics, PCM, Ethernet, and CES traffic are all handled through cards in these traffic slots.
The following table lists the traffic cards supported in the EXT-2U when installed on the Neptune platform.
For a detailed description of the EXT-2U features, functionality, and supported traffic cards refer to the
chapter EXT-2U expansion platform.

Table 3-12: EXT-2U supported traffic cards for NPT-1200
Card type Designation
Electrical PDH E1 interface card. PE1_63
Electrical PDH E3/DS-3 interface card. P345_3E
Optical or electrical SDH STM-1 interface card. S1_4
Optical SDH STM-4 interface card. S4_1
Optical Base Card (OBC) for optical amplifiers and DCM modules. Optical Base Card (OBC)
Data cards with internal direct connection to the packet switch. DHFE_12
Data cards with internal direct connection to the packet switch. DHFX_12
CES multiservice card for 32 x E1 interfaces. DMCE1_32
Muxponder card with 12 client ports and a slot for installing an MO_AOC4 MXP10
optical module.
EoS processing and metro L2 switching card with GbE/FE interfaces and MPS_2G_8F
MPLS capabilities.


Card type Designation
EoS processing and metro L2 switching card with GbE/FE interfaces and MPLS capabilities; supports power over the Ethernet interface (PoE). MPoE_12G
E1 1:1 protection card for 63 ports. TP63_1
High rate (E3/DS3/STM-1e) 1:1 protection card for up to 4 ports. TPS1_1
Electrical Ethernet 1:1 protection card for up to 8 ports. TPEH8_1
Multiservice access card for N x 64 Kbps various PCM interfaces. SM_10E
Multiservice access card for N x 64 Kbps PCM interfaces. EM_10E

4 NPT-1050 system architecture
The NPT-1050 is a unique miniature 1U redundant native packet/TDM add/drop multiplexer optimized for metro and access networks, RAN cellular networks, and utilities. This converged multiservice packet transport
platform offers an all native solution that optimizes both TDM and packet handling. The NPT-1050 is a cost
effective choice for second and third level aggregation, geared for cellular hub locations (3G, LTE, and RNC),
providing a unique hybrid solution for high capacity access rings, and optimized for popular triple play
applications.

Figure 4-1: NPT-1050 general view

This fully redundant Packet Optical Access (POA) platform offers enhanced MPLS-TP data network
functionality, including full traffic and IOP protection and the complete range of Ethernet based services
(CES, EoS, MoT, MoE, and PoE), as described in MPLS-TP and Ethernet Solutions.
The NPT-1050 is designed around a centralized dual matrix card that supports any-to-any direct data card connectivity as well as native TDM switching capacity. The platform can be configured with the MCPTS100 matrix card (100G packet switch + 15G TDM switch), the MCPS100 switching card (100G packet switch), or the MCIPS300 switching card (300G packet switch, in a future version). MCPTS100 cards provide a TDM capacity of up to 15G (96 x VC-4, fully LO traffic).
The NPT-1050 is a 1U base platform housed in a 243 mm deep, 465 mm wide, and 44 mm high equipment
cage with all interfaces accessible from the front of the unit. The platform includes the following
components:
 Redundant dual matrix cards for robust provisioning of the following functionalities:
 All native packet switching (MCPTS) or pure packet switching (MCPS).
 HO/LO nonblocking TDM cross connections (MCPTS).
 Two SFP+ based 10 GbE interfaces (MCPTS and MCPS).
 Two SFP based GE interfaces (MCPTS and MCPS).
 One SFP based STM-16/STM-4/STM-1 interface (MCPTS).
 Comprehensive range of timing and synchronization capabilities (T3/T4, ToD, and 1pps) (MCPTS
and MCPS).
 In band management interfaces.
 Three I/O card slots (TS1 to TS3), for processing a comprehensive range of traffic interfaces, including
PDH/Async, SDH/SONET, Ethernet Layer 1, and Ethernet Layer 2/MPLS.
 The Tslots can be configured for 2.5G, 20GbE, or 40GbE service.
 Traffic connector for the (optional) EXT-2U expansion unit.
 Redundant DC power supply modules (INF_B1UH).
 Fan unit (FCU_1050) with alarm indications and monitoring.

Neptune (Hybrid) Reference Manual NPT-1050 system architecture

The following figure identifies the slot arrangement in the NPT-1050 platform.
Figure 4-2: NPT-1050 platform slots layout

The following table lists the modules that can be configured in each NPT-1050 slot.

Table 4-1: NPT-1050 supported modules
Name Applicable slots in NPT-1050

DC PSA DC PSB MXS A MXS B TS 1# TS 2# TS 3# FS

INF-B1UH  

FCU_1050 

MCPS100/MCPTS100  

AIM100  

PME1_21 (5)   

PME1_21B (6)   

PME1_63 (7)   

PM345_3   

SMQ1   

SMQ1&4   

SMS16   

DMCES1_4   

MSE1_16   

MS1_4   

MSE1_32   

DHGE_4E   

DHGE_8   

DHGE_16  

DHGE_24  

(5) The PME1_21/PME1_21B module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced conversion unit, the xDDF-21.
(6) The PME1_21/PME1_21B module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced conversion unit, the xDDF-21.
(7) The PME1_63 module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced conversion unit, the xDDF-21.


Name Applicable slots in NPT-1050

DC PSA DC PSB MXS A MXS B TS 1# TS 2# TS 3# FS

DHXE_2   

NFVG_4   

The NPT-1050 is fed from -48 VDC. Two INF_B1UH modules can be configured in two power supply module
slots for a redundant power supply.
The NPT-1050 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The NPT-1050 can also
be configured as an NPT-1050E, when combined with the EXT-2U expansion unit.
Typical power consumption for the NPT-1050 is less than 250 W. Power consumption is monitored through
the management software. For more information about power consumption requirements, see the
NPT-1050 Installation and Maintenance Manual and the NPT System Specifications.

4.1 Control subsystem


In the NPT-1050 platform, the main controller is incorporated in the matrix cards (MCPS or MCPTS). It controls the entire NPT-1050 system via a high-performance CPU, which also handles communication with the EMS/LCT and other equipment. Typically, two matrix cards are configured in the NPT-1050 for traffic redundancy, which also provides control system redundancy. A large-capacity flash memory stores equipment configuration data and software versions (up to two). Both online and remote software upgrades are supported. The NPT-1050 supports the processing of RS DCC channels and MS DCC channels, plus up to two Clear Channels (DCC over framed or unframed E1). The NPT-1050 unit can send network management information through third-party SDH or PDH networks using the Clear Channels. In-band Management Control Channels (MCC) are supported as well.
Figure 4-3: Control system block diagram

The NPT-1050 main controller card (MCPS or MCPTS) is the most essential card of the system, creating virtually a complete standalone native packet system. NPT-1050 control and communication functions include:
 Internal control and processing
 Communication with external equipment and management
 Network element (NE) software and configuration backup
 Built-in Test (BIT)


4.1.1 Software and configuration backup


The NPT-1050 features a large-capacity on-board NVM that stores a complete backup of the system's software and node configuration. This ensures high management and control availability.
The NPT-1050 main controller card enables easy software upgrades using a remote software procedure operated from the EMS-NPT management station or the LCT-NPT craft terminal. The card can store two different software versions simultaneously and enables a quick switchover between them when required.
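The dual-version storage and switchover behavior described above can be sketched as a simple two-bank model. The class and method names below are illustrative assumptions, not the actual NE software interface.

```python
# Illustrative two-bank software store: the NVM holds up to two software
# versions, new software is installed into the standby bank, and a quick
# switchover activates the other bank.
class SoftwareBanks:
    def __init__(self):
        self.banks = [None, None]   # up to two stored versions
        self.running = 0            # index of the currently active bank

    def install(self, version):
        """Install a version into the standby bank (remote upgrade)."""
        standby = 1 - self.running
        self.banks[standby] = version
        return standby

    def switchover(self):
        """Activate the standby bank if it holds a version."""
        standby = 1 - self.running
        if self.banks[standby] is not None:
            self.running = standby
        return self.banks[self.running]

store = SoftwareBanks()
store.banks[0] = "v6.0"          # currently running version
store.install("v6.1")            # upgrade goes into the standby bank
assert store.switchover() == "v6.1"
```

Because the upgrade lands in the standby bank, the running version is never overwritten, which is what allows a quick fallback to the previous version.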

4.1.2 Built-in test


The BIT hardware and its related software assist in the identification of any faulty card or module.
The BIT outputs provide:
 Management reports
 System reset
 Maintenance alarms
 Fault detection
 Protection switch for an MCPS/MCPTS card
Dedicated test circuits implement the BIT procedure under the control of an integrated software package.
After the NPT-1050 unit is switched on, a BIT program is automatically activated for both initialization and
normal operation phases. Alarms are sent to the EMS-NPT if any failures are detected by the BIT.
BIT testing covers general tests, including module presence tests and periodic sanity checks of I/O module
processors. It performs traffic path tests, card environment tests, data tests, and detects traffic-affecting
failures, as well as failures in other system modules.
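The BIT flow can be illustrated with a minimal check-cycle sketch. The check names and functions below are hypothetical, not the actual firmware interface.

```python
# Illustrative sketch of one built-in test (BIT) cycle: each registered check
# returns True when healthy; failures are collected for alarm reporting.
def run_bit_cycle(checks):
    """Run all BIT checks and return the list of failed check names."""
    failures = []
    for name, check in checks.items():
        try:
            ok = check()
        except Exception:
            ok = False  # a crashing check is treated as a failure
        if not ok:
            failures.append(name)
    return failures

# Example checks (hypothetical): module presence and I/O processor sanity.
checks = {
    "module_presence": lambda: True,
    "io_proc_sanity": lambda: False,   # simulate a faulty I/O processor
    "traffic_path": lambda: True,
}
alarms = run_bit_cycle(checks)  # -> ["io_proc_sanity"]
```

In the real system, each reported failure would be mapped to a maintenance alarm sent to the EMS-NPT, and a failure on the active matrix card would additionally trigger a protection switch.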

4.2 Communications with external equipment and management

In the Neptune metro access platform product line, the main controller card is responsible for
communicating with other NEs and management stations.
The main controller card communicates with the remote EMS/LCT systems and other SDH NEs via the DCC or clear channels. It communicates with the local EMS and LCT systems via the Ethernet interface. Communications between SDH NEs, or between SDH NEs and the EMS/LCT, can also pass through the DCN. The controller can connect to the DCN via Ethernet or V.35. In addition, the controller can connect to external equipment via Ethernet or RS-232, using DCC channels of the SDH network to build a narrow-bandwidth DCN for the external equipment. In-band Management Control Channels (MCC) are supported in the platforms as well, enabling management of pure packet NE configurations through in-band channels.


NOTE: NPT-1050 supports in-band and DCN management connections for PB and MPLS:
 4 Mbps policer for a PB UNI that connects to an external DCN
 10 Mbps shaper for MCC packets to the MCP
 No rate limit for the MNG port; rate up to 100M full duplex
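A policer like the 4 Mbps PB UNI policer above is typically implemented as a token bucket. The sketch below is a generic token-bucket policer with illustrative rate and burst values, not the device's actual parameters.

```python
class TokenBucketPolicer:
    """Generic token-bucket policer: packets that exceed the configured
    rate/burst are dropped. Rates are in bits per second."""
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.burst = burst_bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, pkt_bits, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bits <= self.tokens:
            self.tokens -= pkt_bits
            return True
        return False

# A 4 Mbps policer with a 100 kbit burst (values illustrative).
policer = TokenBucketPolicer(4_000_000, 100_000)
assert policer.allow(100_000, now=0.0)      # burst fits
assert not policer.allow(100_000, now=0.0)  # bucket empty, packet dropped
assert policer.allow(100_000, now=0.025)    # 25 ms refills 100 kbit at 4 Mbps
```

A shaper (such as the 10 Mbps MCC shaper) differs only in that excess packets are queued and delayed rather than dropped.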

4.3 Timing
NPT-1050 provides high-quality system timing to all traffic modules and functions in compliance with
applicable ITU-T recommendations for functionality and performance.
The main component in the NPT-1050 synchronization subsystem is the timing and synchronization unit
(TMU). Timing is distributed redundantly from the TMUs to all traffic and matrix cards, minimizing unit
types and reducing operation and maintenance costs.
The TMU and the internal and external timing paths are fully redundant. The high-level distributed BIT mechanism ensures top performance and availability of the synchronization subsystem. If there is a hardware failure, the redundant synchronization subsystem takes over timing control with no traffic disruption.
To support reliable timing, NPT-1050 provides multiple synchronization reference options. Up to four
timing references can be monitored simultaneously:
 1PPS and ToD interfaces, using external timing input sources
 2 x 2 MHz (T3) external timing input sources
 2 x 2 Mbps (T3) external timing input sources
 STM-n line timing from any SDH interface card
 E1 2M PDH line timing from any PDH interface card
 Local internal clock
 Holdover mode
 SyncE
 1588V2 – Master, Slave, transparent, and boundary clock
In the Neptune, any timing signal can be selected as a reference source. The TMU provides direct control
over the source selection (received from the system software) and the frequency control loop. The
definition of the synchronization source depends on the source quality and synchronization mode of the
network timing topology (set by the EMS-NPT or LCT-NPT):
 Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. NPT-1050 synchronization subsystem synchronizes to the best available
timing source using the SSM protocol. The TMU is frequency-locked to this source, providing internal
system and SDH line transmission timing. The platform is synchronized to this central timing source.
 NPT-1050 provides synchronization outputs for the synchronization of external equipment within the
exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
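The quality-and-priority selection logic described above can be sketched as follows. The reference records are illustrative; the QL numbering follows the G.781 convention, in which a numerically lower quality level is better.

```python
# Sketch of SSM-based reference selection: among references whose signal is
# valid, choose the best quality level (lower QL value = better, per G.781),
# breaking ties by the operator-assigned priority (lower = preferred).
def select_reference(refs):
    candidates = [r for r in refs if r["signal_ok"]]
    if not candidates:
        return "holdover"
    best = min(candidates, key=lambda r: (r["ql"], r["priority"]))
    return best["name"]

refs = [
    {"name": "T3-2MHz", "ql": 4, "priority": 1, "signal_ok": True},   # QL-SSU-A
    {"name": "STM-16",  "ql": 2, "priority": 2, "signal_ok": True},   # QL-PRC
    {"name": "E1-line", "ql": 2, "priority": 1, "signal_ok": False},  # failed
]
assert select_reference(refs) == "STM-16"   # best available quality wins
assert select_reference([]) == "holdover"   # no valid source: enter holdover
```

Note that the failed E1 reference is excluded even though it has the best quality and priority, which is exactly why the TMU continuously monitors all configured references.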


NPT-1050 supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
The IEEE 1588 Precision Time Protocol (PTP) provides a standard method for high precision synchronization
of network connected clocks. PTP is a time transfer protocol enabling slave clocks to synchronize to a
known master clock, ensuring that multiple devices operate using the same time base. The protocol
operates in master/slave configuration using UDP packets over IP or multicast packets over Ethernet. IEEE 1588v2 is supported in the NPT-1050 to provide Ordinary Clock (OC) and Boundary Clock (BC) capabilities.
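The master/slave time transfer can be illustrated by the standard IEEE 1588 delay request-response computation. The sketch assumes a symmetric path delay, as the protocol itself does.

```python
# Sketch of the IEEE 1588 offset/delay computation from the four standard
# timestamps: t1 (Sync sent by master), t2 (Sync received by slave),
# t3 (Delay_Req sent by slave), t4 (Delay_Req received by master).
def ptp_offset_and_delay(t1, t2, t3, t4):
    delay = ((t2 - t1) + (t4 - t3)) / 2    # mean one-way path delay
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    return offset, delay

# Example: slave clock runs 5 us ahead of the master; path delay is 10 us.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=15e-6, t3=100e-6, t4=105e-6)
assert round(offset * 1e6) == 5
assert round(delay * 1e6) == 10
```

The slave then steers its clock by the computed offset; a boundary clock terminates this exchange on one port and acts as master on its other ports.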

4.4 Traffic and Switching Functionality


The heart of the NPT-1050 is a non-blocking switching fabric. This architecture gives the NPT its outstanding configuration flexibility.
NPT-1050 supports the following range of non-blocking switching fabric configurations:
 100G packet MPLS-TP switching
 100G packet MPLS-TP switching and TDM, 96 VC-4 x 96 VC-4 as ADM-16/MADM-16
In the NPT-1050, the high-capacity, nonblocking 4/4/3/1 HO/LO cross-connect matrix is in the MCPTS100
redundant cards. Based on the type of MCPTS100/MCPS100 card, different matrix cores are used, as
follows:
 The MCPTS100 is a central packet and TDM switch: 100G packet switching and 15G TDM, including timing control, with 1 x STM-16, 4 x GE (CSFP), and 2 x 10GE interfaces.
 The MCPS100 is a central packet switch: 100G packet switching, including timing control, with 4 x GE (CSFP) and 2 x 10GE interfaces.

4.5 Power feed subsystem


The NPT-1050 features a distributed power feed subsystem. This distributed power concept supports system upgrades and efficient heat distribution, and ensures maximum reliability of the power feed interface.
In the NPT-1050 product, two INF_B1UH power feed modules serve as a redundant interconnection device
between the NPT-1050 cards and the –48 VDC power sources. The main purpose of this unit is to decouple
the noise generated/received from the DC power source lines. Each INF_B1UH has an external power input.
The filter is connected to a power input to distribute the -48 VDC battery plant input to all cards via fully
redundant power buses. Each card of the NPT-1050 generates its own local voltage using high-quality
DC/DC converters.
Additional features of the power feed subsystem include:
 Reverse polarity protection
 Overvoltage alarm and protection
 Undervoltage alarm and protection
 Redundancy between INF units
 Hot swapping


 Power-fail detection and 10 msec holdup


 Lightning-strike protection
 Redundant fan power supply with adjustable voltages

4.6 NPT-1050 common cards


The following modules in the NPT-1050 provide the common functionality for a Neptune system:
 INF_B1UH: DC power feeding with input filtering. Usually, an NPT-1050 platform is configured with two INF_B1UHs for redundancy.
 FCU_1050: Cooling fan and control unit for the NPT-1050 platform.
 MCPTS100: Main controller card (MCP) with TDM cross-connect, packet switching, timing, one aggregate STM-16 interface, four aggregate GbE interfaces, and two 10GbE interfaces. The NPT-1050 platform must be configured with two MCPTS100 cards for redundancy.
 MCPS100: Main controller card (MCP) with packet switching, timing, four aggregate GbE interfaces, and two 10GbE interfaces. The NPT-1050 platform must be configured with two MCPS100 cards for redundancy.

4.6.1 INF_B1UH
The INF_B1UH is a DC power-filter module that can be plugged into the NPT-1050 platform. Two INF_B1UH
modules are needed for power feeding redundancy. It performs the following functions:
 Single DC power input and power supply for all modules in the NPT-1050
 Input filtering function for the entire NPT-1050 platform
 Adjustable output voltage for fans in the NPT-1050
 Indication of input power loss and detection of under-/over-voltage
 Shutting down of the power supply when under-/over-voltage is detected
 High-power INF for up to 450 W
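The under-/over-voltage behavior listed above can be sketched as a simple supervision function. The thresholds below are illustrative assumptions, not the INF_B1UH's actual trip points.

```python
# Sketch of under-/over-voltage supervision for a nominal -48 VDC input.
# Thresholds are illustrative; a -48 VDC plant commonly tolerates a range
# of roughly -40 V to -57 V.
def supervise(v_in, v_under=-40.0, v_over=-57.0):
    """Return (alarm, shutdown) for a measured input voltage in volts."""
    if v_in > v_under:            # e.g. -35 V: input magnitude too low
        return "undervoltage", True
    if v_in < v_over:             # e.g. -60 V: input magnitude too high
        return "overvoltage", True
    return None, False

assert supervise(-48.0) == (None, False)
assert supervise(-35.0) == ("undervoltage", True)
assert supervise(-60.0) == ("overvoltage", True)
```

Note that with negative battery voltages, "undervoltage" means the magnitude is too small (the value is closer to zero), which is why the comparisons look inverted.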
Figure 4-4: INF_B1UH front panel


4.6.2 FCU_1050
The FCU_1050 is a pluggable fan control module with four fans for cooling the NPT-1050 platform. The fans' running speed can be set to 16 different levels. The speed is controlled by the MCPS/MCPTS according to the temperature of the installed cards.
In addition, the FCU_1050 includes the ALARM interface connector of the NPT-1050 platform.
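The 16-level speed control can be sketched as a temperature-to-level mapping. The temperature range and the linear curve below are illustrative assumptions, not the actual MCPS/MCPTS control law.

```python
# Sketch of mapping a measured card temperature to one of the FCU_1050's
# 16 fan speed levels (0..15). Thresholds are illustrative.
def fan_level(temp_c, t_min=25.0, t_max=70.0, levels=16):
    """Linear mapping of temperature to a discrete fan speed level."""
    if temp_c <= t_min:
        return 0
    if temp_c >= t_max:
        return levels - 1
    span = (t_max - t_min) / levels
    return int((temp_c - t_min) / span)

assert fan_level(20.0) == 0      # cool chassis: slowest speed
assert fan_level(75.0) == 15     # hot chassis: full speed
assert 0 < fan_level(47.5) < 15  # mid-range temperature, mid-range level
```

A real controller would typically add hysteresis around each threshold so the fans do not oscillate between adjacent levels.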

Figure 4-5: FCU_1050 front panel

Table 4-2: FCU_1050 front panel components


Marking Full name Type Color Function
ACT. System active LED Green Normally on when the fan unit is powered on. Off indicates a power failure of the fan unit.
FAIL System fail LED Red Normally off. Lights when a fan failure is detected.
ALARM Alarm connector SCSI 36-pin connector - Alarm input and output interface connecting to the RAP.

4.7 NPT-1050 Switching Cards


The NPT-1050 platform can operate with different matrix cards according to its configuration:
 MCPTS100: Main controller processor (MCP) with dual matrix card; provides system control and management, packet switching, TDM cross-connect, and timing. The card also supports one aggregate STM-16/4/1 SFP-based port, 4 x GE CSFP-based or 2 x GE SFP-based ports, and 2 x 10 GbE SFP+-based interfaces.
 MCPS100: Main controller processor (MCP) with packet switching card; provides system control and
management, packet switching, and timing. The card also supports 4 x GE CSFP based, or 2 x GE SFP
based ports, and 2 x 10 GbE SFP+ based interfaces.
The NPT-1050 must be equipped with a pair of these cards, of the same type, to support system redundancy. For a non-redundant configuration, the second matrix slot can hold an AIM100 card or remain empty (blank panel).
The following sections detail each card functionality.


4.7.1 MCPTS100 Dual Matrix Cards


MCPTS100 dual matrix cards are centralized packet and TDM switches that support any-to-any direct data card connectivity as well as native TDM switching capacity. These matrix cards, designed for use in the NPT-1050 metro access platform, offer a choice of capacity and configuration options, including:
 All Native Ethernet packet switch, supporting native packet level switching with a 100G switching
capacity with up to 72G traffic management (MPLS processing), providing:
 Management and internal control, in addition to user traffic switching
 Non-blocking data switch fabric
 HO/LO nonblocking TDM cross-connections, enabling native SDH switching with a capacity of up to 15G (96 x VC-4, fully LO traffic)
 Any card installed in any slot
 Any slot to any slot connectivity
 Aggregate ports:
 1 x STM-1/STM-4/STM-16 SFP based configurable interface
 2 x 10 GbE SFP+ based interfaces
 4 x GbE CSFP based interfaces
 Comprehensive range of timing and synchronization capabilities (T3/T4, ToD, 1pps)
The following figure shows the traffic flow in an NPT-1050 configured with an MCPTS100 switching card.

Figure 4-6: MCPTS100 traffic flow


4.7.1.1 MCPTS100 functional description


The MCPTS100 card has four main subsystems:
 Main processing and control: Performs all integrated functions like control, communication, and
overhead processing.
 Central cross-connect matrix: Performs all the NPT-1050 TDM traffic cross-connect operations.
 Central packet switch: Performs all the NPT-1050 packet switching operations.
 TMU: Generates and distributes timing and clock signals to all the cards installed in the NPT-1050
platform. In addition to its internal timing reference, the TMU can use up to four user-specified
reference sources. See Timing for a description of the TMU capabilities.
The MCPTS100 card is a critical NPT-1050 subsystem, and therefore, for redundancy purposes, two
MCPTS100 cards should be installed in any NPT-1050 platform, one on each side of the cards cage. Both
cards must be of the same type and option and running the same NPT-1050 release.
When two identical cards are installed in the platform, the cards operate in a master-slave configuration:
 At any time, only one card is active and the other is in standby.
 Upon failure or removal of the active card, the standby card becomes active without any disruption in
system operation.
An MCPTS100 card can be inserted and replaced without affecting traffic flow.
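The active/standby behavior described above can be sketched as a small state model. The slot names and the `fail` method are illustrative, not the actual card firmware interface.

```python
# Sketch of the 1+1 behavior of a redundant matrix card pair: one card is
# active, the other standby; on failure (or removal) of the active card,
# the standby card is promoted without disrupting system operation.
class RedundantPair:
    def __init__(self):
        self.cards = {"A": "active", "B": "standby"}

    def fail(self, slot):
        """Mark a card failed; promote the standby if the active one failed."""
        was_active = self.cards[slot] == "active"
        self.cards[slot] = "failed"
        if was_active:
            for other, state in self.cards.items():
                if state == "standby":
                    self.cards[other] = "active"
                    break
        return self.cards

pair = RedundantPair()
assert pair.fail("A") == {"A": "failed", "B": "active"}
```

Note the asymmetry: a standby card failing leaves the active card untouched, which is why a standby card can also be replaced without affecting traffic.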

NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic-affecting.

The following figure shows the front panel of the MCPTS100.

Figure 4-7: MCPTS100 front panel

Table 4-3: MCPTS100 front panel interfaces


Marking Interface type Function
1PPS/ToD RJ-45 1PPS and Time of Day input/output signals for supporting
Ethernet timing per IEEE 1588v2 standard.
T3/T4 RJ-45 T3 and T4 timing interfaces (one 2 Mbps and one 2 MHz).
MNG. RJ-45 10/100BaseT Ethernet interface for management.


Table 4-4: MCPTS100 indicators and functions


Marking Full name Color Function
ACT. System active Green Off indicates no power supply. On steadily indicates that the MCPTS100 software was not downloaded successfully or that the card cannot be controlled normally by the main controller (MCP). Blinking (1 sec ON, 1 sec OFF) indicates that the MCPTS100 card is running normally and is active.
FAIL System fail Red Normally off. Lights when a card failure is detected.
STBY System standby Orange Lights when the card is in standby. Off when the card is active.
MJR. System major alarm Red Lights when the system has a critical or major alarm.
MNR. System minor alarm Yellow Lights when the highest severity of the NE current alarms is minor or warning. Off when the system has no alarms or when the highest severity is higher than minor or warning.
LSR ON (separate LED for each 10GE and STM-1/4/16 port) Laser on indication Green Lights steadily when the laser is on.
ON (separate LED for each GE port, P1 to P4) Laser on indication Green Lights steadily when the laser is on.

4.7.2 MCPS100 switching card


The MCPS100 is a centralized packet switch that supports any-to-any direct data card connectivity. This matrix card, designed for use in the NPT-1050 metro access platform, offers a choice of capacity and configuration options, including:
 All Native Ethernet packet switch, supporting native packet-level switching with a 100G switching
capacity with up to 72G traffic management (MPLS processing), providing:
 Management and internal control, in addition to user traffic switching
 Aggregate ports:
 2 x 10 GbE SFP+ based interfaces
 4 x GbE CSFP based interfaces
 2 x GbE SFP based interfaces
 Comprehensive range of timing and synchronization capabilities (T3/T4, ToD, 1pps)


The following figure shows the traffic flow in an NPT-1050 configured with an MCPS100 switching card.
Figure 4-8: MCPS100 traffic flow

4.7.2.1 MCPS100 functional description


The MCPS100 card has three main subsystems:
 Main processing and control: Performs all integrated functions like control, communication, and
overhead processing.
 Central packet switch: Performs all the NPT-1050 packet switching operations.
 TMU: Generates and distributes timing and clock signals to all the cards installed in the NPT-1050
platform. In addition to its internal timing reference, the TMU can use up to four user-specified
reference sources. See Timing for a description of the TMU capabilities.
The MCPS100 card is a critical NPT-1050 subsystem, and therefore, for redundancy purposes, two MCPS100
cards should be installed in any NPT-1050 platform, one on each side of the cards cage. Both cards must be
of the same type and option and running the same NPT-1050 release.
When two identical cards are installed in the platform, the cards operate in a master-slave configuration:
 At any time, only one card is active and the other is in standby.
 Upon failure or removal of the active card, the standby card becomes active without any disruption in
system operation.
An MCPS100 card can be inserted and replaced without affecting traffic flow.


NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic-affecting.

The following figure shows the front panel of the MCPS100.

Figure 4-9: MCPS100 front panel

Table 4-5: MCPS100 front panel interfaces


Marking Interface type Function
1PPS/ToD RJ-45 1PPS and Time of Day input/output signals for supporting
Ethernet timing per IEEE 1588v2 standard.
T3/T4 RJ-45 T3 and T4 timing interfaces (one 2 Mbps and one 2 MHz).
MNG. RJ-45 10/100BaseT Ethernet interface for management.

Table 4-6: MCPS100 indicators and functions


Marking Full name Color Function
ACT. System active Green Off indicates no power supply. On steadily indicates that the MCPS100 software was not downloaded successfully or that the card cannot be controlled normally by the main controller (MCP). Blinking (1 sec ON, 1 sec OFF) indicates that the MCPS100 card is running normally and is active.
FAIL System fail Red Normally off. Lights when a card failure is detected.
STBY System standby Orange Lights when the card is in standby. Off when the card is active.
MJR. System major alarm Red Lights when the system has a critical or major alarm.
MNR. System minor alarm Yellow Lights when the highest severity of the NE current alarms is minor or warning. Off when the system has no alarms or when the highest severity is higher than minor or warning.
LSR ON (separate LED for each 10GE port) Laser on indication Green Lights steadily when the laser is on.
ON (separate LED for each GE port, P1 to P4) Laser on indication Green Lights steadily when the laser is on.


4.7.3 MCPS/MCPTS control functionality


The MCPS/MCPTS are the main processing cards of the NPT-1050. They integrate functions such as control, communications, and overhead processing, and provide:
 Control functions:
 Communications with and control of all other modules in the NPT-1050 and EXT-2U through the
backplane (by the CPU).
 Communications with the EMS-NPT, LCT-NPT, or other NEs through a management interface
(MNG) or DCC, or MCC, or VLAN.
 Routing and handling of up to 16 x RS DCC and 16 x MS DCC channels (32 in total), 24 MCC channels, and two clear channels.
 Alarms and maintenance.
 Fan control.
 External timing reference interfaces (T3/T4), which provide the line interface unit for a single 2 Mbps
T3/T4 interface and a single 2 MHz T3/T4 interface.
The MCPS/MCPTS support the following interfaces from its front panel:
 MNG for management
 T3/T4 for timing
 1PPS/ToD for timing

4.7.4 AIM100
The AIM100 is an aggregate interface module (AIM) for the aggregate (MCPS/MCPTS) slot in a non-redundant configuration. The card makes it possible to reach the maximum number of interfaces with a single MCPS/MCPTS card in a non-redundant installation. The AIM100, designed for use in the NPT-1050 metro access platform, offers a choice of configuration options, including:
 Aggregate ports:
 2 x 10 GbE SFP+ based interfaces
 4 x GbE CSFP based interfaces
 2 x GbE SFP based interfaces
 1 x STM-1/4/16 SFP based interface

Figure 4-10: AIM100 front panel


Table 4-7: AIM100 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks at a frequency of 0.5 Hz. Off or on steadily indicates the card is not running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is detected.
ON (P1 to P4) Laser on indication (GbE ports) Green Lights steadily when the corresponding laser is on.
LSR ON Laser on indication (10GE and STM-1/4/16 ports) Green Lights steadily when the corresponding laser is on.

4.8 NPT-1050 Tslot I/O Modules


The NPT-1050 has three slots for installing I/O (Tslot) modules. It supports various PDH, SDH, CES and
Ethernet I/O modules as listed in the following table. For a detailed description of the Tslot modules see
Tslot I/O modules.

Table 4-8: NPT-1050 Tslot modules


Type Designation
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21B
Electrical PDH E1 interface Tslot module with 63 interfaces PME1_63
Electrical PDH E3/DS-3 interface Tslot module PM345_3
4 x STM-1 ports SDH interface card SMQ1
4 x STM-1 or STM-4 ports SDH interface card SMQ1&4
1 x STM-16 port SDH interface card SMS16
CES services for STM-1/STM-4 interfaces module DMCES1_4
CES multi-service module with 16 x E1/T1 interfaces MSE1_16
CES multiservice card with 8 x E1/T1 and 2 x STM-1/OC-3 interfaces MSC_2_8
CES multi-service module for 4 x OC3/STM-1 or 1 x OC12/STM-4 interfaces MS1_4
CES multi-service module with 32 x E1/T1 interfaces MSE1_32
Electrical GbE interface module with direct connection to the packet switch DHGE_4E
Optical GbE interface module with direct connection to the packet switch DHGE_8
Electrical and Optical GbE interface module with direct connection to the packet switch DHGE_16
Optical GbE interface module with direct connection to the packet switch DHGE_24
Optical 10GE interface module with direct connection to the packet switch DHXE_2
NFV module with 4 x GbE front panel ports for Virtual Network Functions. NFVG_4


4.9 Expansion Platform


The traffic capabilities of the Neptune platform can be expanded by installing the EXT-2U expansion unit on
top.
The EXT-2U platform is a high density modular expansion unit for the Neptune multiservice platforms. It
supports the complete range of PDH, SDH, CES, PCM, optics and Ethernet services. Integrating this add-on
platform into your network configuration is not traffic-affecting.
The EXT-2U is compact and versatile and can be used with different base units from the NPT product line. The type of traffic delivered by the unit depends on the type of matrix (TDM and packet, or packet only) installed in the base unit; I/O expansion cards are supported accordingly.
The EXT-2U has three multipurpose slots (ES1 to ES3) for any combination of extractable traffic cards. PDH,
SDH, optics, PCM, Ethernet, and CES traffic are all handled through cards in these traffic slots.
The following table lists the traffic cards supported in the EXT-2U when installed on the Neptune platform.
For a detailed description of the EXT-2U features, functionality, and supported traffic cards refer to the
chapter EXT-2U expansion platform.

Table 4-9: EXT-2U supported traffic cards for NPT-1050


Card type Designation
Electrical PDH E1 interface card. PE1_63
Electrical PDH E3/DS-3 interface card. P345_3E
Optical or electrical SDH STM-1 interface card. S1_4
Optical Base Card (OBC) for optical amplifiers and DCM modules. Optical Base Card (OBC)
Data cards with internal direct connection to the packet switch. DHFE_12
Data cards with internal direct connection to the packet switch. DHFX_12
CES multiservice card for 32 x E1 interfaces. DMCE1_32
Muxponder card with 12 client ports and a slot for installing an MO_AOC4 optical module. MXP10
EoS processing and metro L2 switching card with GbE/FE interfaces and MPLS capabilities; provides power over the Ethernet interface (PoE). MPoE_12G
E1 1:1 protection card for 63 ports. TP63_1
High rate (E3/DS3/STM-1e) 1:1 protection card for up to 4 ports. TPS1_1
Multiservice access card for N x 64 Kbps various PCM interfaces. SM_10E
Multiservice access card for N x 64 Kbps PCM interfaces. EM_10E



5 NPT-1030 system architecture
As a powerful Neptune platform for metro access and access networks, the NPT-1030 can deliver a variety of services. Designed for installation in COs and second aggregation locations, the NPT-1030 integrates MPLS-TP and Ethernet with SDH, PCM, and PDH capabilities in a 1U (44 mm) unit that can be coupled with the standard Neptune unit.
NPT-1030 eliminates the boundaries between data and voice communication environments, and paves the
way for service provisioning without sacrificing equipment reliability, robustness, and hard QoS (H-QoS).
Thus, both operators and service providers benefit from the best of both worlds: the cost-effectiveness and
universality of Ethernet and H-QoS, and the scalability and survivability of TDM.
Used in many subnetwork topologies, NPT-1030 can handle a mixture of point-to-point, hub, and mesh
traffic patterns. This combined functionality means operators benefit from improved network efficiency
and significant savings in terms of cost and footprint. The NPT-1030 platform:
 Increases the number of STM-1 interfaces, or upgrades from STM-1 to STM-4/STM-16 easily and
smoothly.
 Allows you to start very small and attain ultra-high expandability in a build-as-you-grow® fashion by
combining the standard NPT-1030 unit with an expansion unit.
 Aggregates traffic arriving over Ethernet, PCM low-bitrate interfaces, E1, E3, and STM-1 directly over
STM-1/STM-4/STM-16 and GbE.
 Is suitable for indoor and outdoor installations.
 Supports an extended operating temperature range up to 70°C.
The NPT-1030 is a compact (1U) base platform housed in a 243 mm deep, 440 mm wide, and 44.4 mm high
equipment cage with all interfaces accessible from the front of the unit.
Figure 5-1: NPT-1030 front view

The platform consists of the following parts:


 Two DC power-supply module slots or one AC power-supply module slot
 One MCP slot + NVM

NOTE: The MCP slot can be equipped with the MCP30B and an NVM (CF).

 Two XIO30 card slots


 One fan unit slot
 Three traffic cards slots (Tslots)


The following figure identifies the slot arrangement in the NPT-1030 platform.
Figure 5-2: NPT-1030 platform slots layout

The following table lists the modules that can be configured in each NPT-1030 slot.

Table 5-1: NPT-1030 supported modules


Name                Applicable slots in NPT-1030

INF-B1U             DC PSA, DC PSB
AC_PS-B1U           AC PS
MCP30B              MS
XIO30_4             XS A, XS B
XIO30Q_1&4          XS A, XS B
XIO30_16            XS A, XS B
PME1_21 (8)         TS 1#, TS 2#, TS 3#
PME1_63 (9)         TS 1#, TS 2#, TS 3#
PM345_3             TS 1#, TS 2#, TS 3#
SMD4                TS 2#, TS 3#
SMS4                TS 1#, TS 2#, TS 3#
SMD1B               TS 1#, TS 2#, TS 3#
SMQ1                TS 1#, TS 2#, TS 3#
SMQ1&4              TS 1#, TS 2#, TS 3#
SMS16               TS 1#, TS 2#, TS 3#
DMFE_4_L1           TS 1#, TS 2#, TS 3#
DMFX_4_L1           TS 1#, TS 2#, TS 3#
DMGE_1_L1           TS 1#, TS 2#, TS 3#
DMGE_4_L1           TS 1#, TS 2#, TS 3#
DMFE_4_L2           TS 1#, TS 2#, TS 3#
DMFX_4_L2           TS 1#, TS 2#, TS 3#
DMGE_4_L2           TS 2#, TS 3#
DMXE_22_L2          TS 2#, TS 3#
DMCES1_4            TS 1#, TS 2#, TS 3#

8 The PME1_21 module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced conversion unit, the xDDF-21.
9 The PME1_63 module only supports balanced E1 interfaces. For unbalanced E1 interfaces, use an external balanced-to-unbalanced conversion unit, the xDDF-21.

All cards support live insertion. The NPT-1030 platform provides full 1+1 redundancy in power feeding,
cross connections, and the TMU, as well as 1:N redundancy in the fans. Failure of the MCP30B does not
affect any existing traffic on the platform.
All cards are connected using a backplane that supports one traffic connector for the connection between
the NPT-1030 and the EXT-2U.
The NPT-1030 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks.


5.1 Modular Architecture


The NPT-1030 is a miniature add/drop multiplexer optimized especially for metro access, access, and RAN
cellular networks.
With the NPT-1030 build-as-you-grow strategy, network operators can provide new services as needed,
expanding traffic capacity with minimum investment.
Figure 5-3: NPT-1030 system architecture

The NPT-1030 architecture allows expansion according to market demand, including:


 Adding or replacing plug-in modules while the system is in operation, without affecting traffic in any
way.
 Optimizing aggregate matrix capacity and transceiver module assignment. Several types of unified matrix/aggregate cards are available, each supporting a different bitrate, from STM-1 and STM-4 up to STM-16.
 Optimizing tributary I/O slot assignment. Three slots in the basic unit and three additional slots in the
expansion unit can accommodate different I/O modules of different fan-out and technology types.
 In-service scalability of SDH links. An optical connection operating at a specific bitrate can be
upgraded from STM-1 through STM-4 to STM-16.
 Ultra-high resiliency through matrix, timing unit, and power supply redundancy.
 Tributary protection for E1, E3 and STM-1e interfaces and data card IOP for optical ports.


5.2 Control Subsystem


In the NPT-1030 platform, a single main controller card (MCP30B) controls the entire NPT-1030 system via a
high-performance CPU, which also processes communication with the EMS/LCT and other equipment. A
large-capacity flash memory stores equipment configuration data and software versions (up to two). Both
online and remote software upgrades are supported. NPT-1030 supports the processing of RS DCC channels
and MS DCC channels, plus up to two Clear Channels (DCC over framed or unframed E1). The NPT-1030 unit
can send network management information through third-party SDH or PDH networks using the Clear
Channels.

Figure 5-4: NPT-1030 control system block diagram

The NPT-1030 main controller card (MCP30B) is the most essential card of the system, creating virtually a
complete standalone native packet system. Moreover, it accommodates one service traffic slot for flexible
configuration of virtually any type of PDH, SDH, and Ethernet interfaces. This integrated flexible design
ensures a very compact equipment structure and reduces costs, making NPT an ideal native choice for the access and metro access layers.
NPT-1030 control and communication functions include:
 Internal control and processing
 Communication with external equipment and management
 Network element (NE) software and configuration backup
 Built-in Test (BIT)


5.2.1 Internal Control and Processing


The NPT-1030 main controller card provides central control, alarm, maintenance, and communication
functions for Neptune NEs. If required, it can also communicate with the control processors of various cards
in the expansion unit, using a master-slave control hierarchy.
The control subsystem is separate from the traffic subsystem. If the control card fails or is extracted, traffic is not impaired.

5.2.2 Software and Configuration Backup


NPT-1030 contains a large-capacity onboard NVM that stores a complete backup of the system’s software and node configuration, ensuring superior management and control availability.
The NPT-1030 main controller card enables easy software upgrade using a remote software procedure
operated from the EMS-APT management station or LCT-APT craft terminal. The card can store two
different software versions simultaneously, and enables a quick switchover between the different versions
when required.

5.2.3 Built-In Test


The BIT hardware and its related software assist in the identification of any faulty card or module.
The BIT outputs provide:
 Management reports
 System reset
 Maintenance alarms
 Fault detection
 Protection switch for an XIO card
Dedicated test circuits implement the BIT procedure under the control of an integrated software package.
After the NPT-1030 unit is switched on, a BIT program is automatically activated for both the initialization
and normal operation phases. Alarms are sent to the EMS-APT if any failures are detected by the BIT.
BIT testing covers general tests, including module presence tests and periodic sanity checks of I/O module
processors. It performs traffic path tests, card environment tests, data tests, and detects traffic-affecting
failures, as well as failures in other system modules.
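The BIT flow described above — periodic presence and sanity checks that raise alarms toward the management system — can be sketched as follows. This is an illustrative model only, not ECI's firmware; the slot names and the probe interface are hypothetical:

```python
# Illustrative BIT-style polling cycle (hypothetical, not ECI's implementation):
# each slot is polled for module presence and sanity, and any failure is
# collected as an alarm record for the management system.
def run_bit_cycle(slots, probe):
    """slots: iterable of slot names; probe(slot) -> dict with
    'present' and 'sane' booleans. Returns a list of alarm strings."""
    alarms = []
    for slot in slots:
        status = probe(slot)
        if not status["present"]:
            alarms.append(f"{slot}: module missing")
        elif not status["sane"]:
            alarms.append(f"{slot}: sanity check failed")
    return alarms  # in the real system these would be reported to the EMS-APT

status_map = {"TS1": {"present": True, "sane": True},
              "TS2": {"present": True, "sane": False},
              "TS3": {"present": False, "sane": False}}
print(run_bit_cycle(status_map, status_map.get))
# ['TS2: sanity check failed', 'TS3: module missing']
```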


5.3 Communications with External Equipment and Management

In the Neptune metro access platform product line, the main controller card is responsible for
communicating with other NEs and management stations.
The main controller card communicates with the remote EMS/LCT systems and other SDH NEs via the DCC
or clear channel. It communicates with the local EMS and LCT systems via the Ethernet interface. The
communications between SDH NEs, or between SDH NEs and the EMS/LCT, can also be via the DCN. The
controller can connect to the DCN via Ethernet or V.35. In addition, the controller can connect to external
equipment via Ethernet or RS-232, using DCC channels of the SDH network to build a narrow bandwidth
DCN for the external equipment.

NOTE: The NPT-1030 supports in-band and DCN management connections for PB and MPLS:
 4 Mbps policer for PB UNI which connects to external DCN
 No rate limit for the MNG port, at rates up to 100 Mbps full duplex

5.4 Timing
NPT-1030 provides high-quality system timing to all traffic modules and functions in compliance with
applicable ITU-T recommendations for functionality and performance.
The main component in the NPT-1030 synchronization subsystem is the timing and synchronization unit
(TMU). Timing is distributed redundantly from the TMUs to all traffic and matrix cards, minimizing unit
types and reducing operation and maintenance costs.
The TMU and the internal and external timing paths are fully redundant. The high-level distributed BIT mechanism ensures top performance and availability of the synchronization subsystem. If a hardware
failure occurs, the redundant synchronization subsystem takes over the timing control with no traffic
disruption.
To support reliable timing, NPT-1030 provides multiple synchronization reference options. Up to four
timing references can be monitored simultaneously.
In NPT-1030, any timing signal can be selected as a reference source. The TMU provides direct control over
the source selection (received from the system software) and the frequency control loop. The definition of
the synchronization source depends on the source quality and synchronization mode of the network timing
topology (set by the EMS-APT or LCT-APT).
Synchronization references are classified at any given time according to a predefined priority and prevailing
signal quality. The NPT-1030 synchronization subsystem synchronizes to the best available timing source
using the Synchronization Status Marker (SSM) protocol. The TMU is frequency-locked to this source,
providing internal system and SDH line transmission timing. The shelf is synchronized to this central timing
source.
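The source-selection rule described above can be sketched as follows. This is an illustrative model only, not ECI's implementation; the SSM quality-level codes follow ITU-T G.781 (lower value means better quality, QL-DNU means "do not use"):

```python
# Illustrative SSM-based reference selection (not ECI's implementation):
# pick the best-quality usable source, breaking ties by configured priority.
QL_DNU = 15  # per ITU-T G.781: QL-PRC=2, QL-SSU-A=4, QL-SSU-B=8, QL-SEC=11

def select_reference(refs):
    """refs: list of (name, ssm_quality, priority), priority 1 = highest.
    Returns the name of the best usable reference, or None (holdover)."""
    usable = [r for r in refs if r[1] < QL_DNU]
    if not usable:
        return None  # no usable source: the TMU enters holdover mode
    # Best (lowest) quality code first; then lowest priority number wins.
    usable.sort(key=lambda r: (r[1], r[2]))
    return usable[0][0]

refs = [
    ("T3 external 2 MHz", 4, 1),   # QL-SSU-A
    ("STM-16 aggregate", 2, 2),    # QL-PRC
    ("E1 tributary", 15, 3),       # QL-DNU, excluded
]
print(select_reference(refs))  # the STM-16 line wins on quality
```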


NPT-1030 provides synchronization outputs for the synchronization of external equipment within the
exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to synchronize
any peripheral equipment or switch.
NPT-1030 supports synchronous Ethernet as per ITU-T G.8261.

5.5 Traffic and Switching Functionality


The heart of NPT-1030 is a nonblocking HO/LO cross-connect matrix. This architecture enables NPT’s outstanding configuration flexibility.
NPT-1030 supports the following range of nonblocking HO/LO cross connect configurations:
 16 VC-4 x 16 VC-4 in NPT-1030 MADM-1/4
 96 VC-4 x 96 VC-4 in NPT-1030 MADM-1/4/16
In the NPT-1030, the high-capacity, nonblocking 4/4/3/1 HO/LO cross-connect matrix is in the XIO30
redundant cards. Based on the type of XIO30 card, different matrix cores are used, as follows:
 In the XIO30_4 the matrix core uses 16 VC-4 equivalents (4/4/3/1) and provides STM-1 or STM-4
optical interfaces.
 In the XIO30_16, the matrix core uses 96 VC-4 equivalents (4/4/3/1) and provides an STM-4 or STM-16 optical interface.
 In the XIO30Q_1&4, the matrix core uses 96 VC-4 equivalents (4/4/3/1) and provides four STM-1 or
STM-4 optical interfaces.
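As a quick sanity check, the quoted matrix capacities follow directly from the VC-4 rate: a VC-4 carries an STM-1 payload, so N VC-4 equivalents correspond to roughly N × 155.52 Mbps. The arithmetic below is purely illustrative:

```python
# Verify the quoted matrix capacities from the VC-4 equivalent counts.
STM1_RATE_MBPS = 155.52  # one VC-4 equivalent carries an STM-1 payload

def matrix_capacity_gbps(vc4_equivalents):
    return vc4_equivalents * STM1_RATE_MBPS / 1000

print(round(matrix_capacity_gbps(16), 2))   # ~2.49, quoted as 2.5 Gbps
print(round(matrix_capacity_gbps(96), 2))   # ~14.93, quoted as 15 Gbps
```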

5.6 NPT-1030 Common Cards


The following modules in the NPT-1030 provide the common functionality for an MSPP system:
 INF_B1U: provides DC power feeding with input filtering. Usually, an NPT-1030 platform is configured
with two INF_B1Us for redundancy.
 MCP30B: provides main control, communication, and overhead processing functionality.
 XIO30_4/XIO30Q_1&4/XIO30_16: provide cross-connect, timing, and one aggregate STM-1/4/16
interface in all XIO30 cards, or four aggregate STM-1/4 interfaces in the XIO30Q_1&4.

NOTE: The NPT-1030 platform must be configured with two XIO cards. However, in a pure optical configuration (with OBC cards only), XIO cards are not required.

5.6.1 INF_B1U
The INF_B1U is a DC power-filter module for high-power applications that can be plugged into the
NPT-1030 platform. Two INF_B1U modules are needed for power feeding redundancy. It performs the
following functions:
 High-power INF for up to 200 W for more than one DMXE_22_L2, DMGE_4_L2, or DMCES1_4 module
 Single DC power input and power supply for all modules in the NPT-1030


 Input filtering function for the entire NPT-1030 platform


 Adjustable output voltage for fans in the NPT-1030
 Indication of input power loss and detection of under-/over-voltage
 Shutting down of the power supply when under-/over-voltage is detected

CAUTION: When more than one DMXE_22_L2, DMGE_4_L2, or DMCES1_4 card is installed in
the NPT-1030, an INF_B1U must be configured in the platform.

Figure 5-5: INF_B1U front panel

5.6.2 AC_PS-B1U
The AC_PS-B1U is an AC power module that can be plugged into the NPT-1030 platform. It performs the
following functions:
 Converts AC power to DC power for the NPT-1030.
 Filters input for the entire NPT-1030 platform.
 Supplies adjustable output voltage for fans in the NPT-1030.
 Supplies up to 180 W.
Figure 5-6: AC_PS-B1U front panel


5.6.3 FCU_1030
The FCU_1030 is a pluggable fan control module for high-power applications, with four fans for cooling the NPT-1030 platform. The FCU_1030 fans provide cooling air in an environment that dissipates up to 200 W, and are intended to work in conjunction with DMGE_4_L2 modules. The fans can run at low, normal, or turbo speed; the speed is controlled by the MCP30B according to the environmental temperature and fan failure status.
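The speed-control rule can be sketched as below. The temperature thresholds are invented for illustration; only the three speed levels and the MCP30B's use of temperature and fan-failure status come from this manual:

```python
# Hedged sketch of the fan speed rule described above. The thresholds are
# hypothetical; the low/normal/turbo levels and the failure handling mirror
# the behavior described in the text.
def fan_speed(temp_c, fan_failed):
    if fan_failed:
        return "turbo"   # compensate for a failed fan with the remaining ones
    if temp_c < 30:
        return "low"
    if temp_c < 45:
        return "normal"
    return "turbo"

print(fan_speed(25, False))  # low
print(fan_speed(40, False))  # normal
print(fan_speed(25, True))   # turbo
```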
The following figure shows the front panel of the FCU_1030.
Figure 5-7: FCU_1030 front panel

Table 5-2: FCU_1030 front panel LEDs


Marking Full name Color Function
ACT. System active Green Normally blinks at 0.5 Hz. Off or steadily lit when the card is not running normally.
FAIL System fail Red Normally off. Lights when a card failure is detected.

5.6.4 MCP30B
The MCP30B is the second generation of MCP30 cards and serves as the main processing card of the
NPT-1030. It integrates functions such as control and communication and overhead processing. It provides
the following functions:
 Control-related functions:
 Communications with and control of all other modules in the NPT-1030 and EXT-2U through the
backplane (by the CPU)
 Communications with the EMS-APT, LCT-APT, or other NEs through a management interface
(MNG) or DCC
 Routing and handling of up to 32 x RS DCC, 32 x MS DCC (total 32 channels), and two clear
channels
 Alarms and maintenance
 Fan control


 Accommodates the Compact Flash (CF) memory (NVM)


 Overhead processing, including overhead byte cross connections, OW interface, and user channel
interface
 External timing reference interfaces (T3/T4), which provide the line interface unit for one 2 Mbps
T3/T4 interface and one 2 MHz T3/T4 interface
The MCP30B supports the following interfaces:
 MNG and T3/T4 directly from its front panel
 RS-232, OW access, housekeeping alarms, and V.11 through a concentrated SCSI auxiliary I/F
connector (on the front panel)
In addition, the MCP30B has LED indicators and one reset push button. As the NPT-1030 is a front-access
platform, all its interfaces, LEDs, and push button are on the front panel of the MCP30B.
Figure 5-8: MCP30B front panel

Table 5-3: Interfaces on the MCP30B panel


Marking Interface type Function
AUXILIARY I/F SCSI-36 A concentrated auxiliary connector for the following
interfaces:
 1 x V.11 overhead interface
 1 x RS-232 interface for debugging or managing
external ancillary equipment
 1 x OW interface connecting an external OW box
 1 x alarm input and output interface connecting to
the RAP
T3/T4 RJ-45 T3 and T4 timing interfaces (one 2 Mbps and one 2 MHz)
MNG. RJ-45 10/100BaseT Ethernet interface for management

NOTE: An MCP30 ICP can be used to distribute the concentrated auxiliary connector into
dedicated connectors for each function.

Table 5-4: MCP30B front panel LEDs


Marking Full name Color Function
- (left LED in MNG port) Link Green Lights when MNG link is on. Off when MNG
link is off.
ACT. System active Green Normally blinks at 0.5 Hz. Off or steadily lit when the card is not running normally.

FAIL System fail Red Normally off. Lights when card failure
detected.
MJR. System major alarm Red Lights when the system has a critical or major alarm.
MNR. System minor alarm Yellow Lights when the highest severity of the NE current alarms is minor. Off when the system has no alarm or the highest severity of the NE current alarms is higher than minor.

NOTE: ACT, FAIL, MJR, and MNR LEDs are combined to show various failure reasons during
the system boot. For details, see the Troubleshooting Using Component Indicators section in
the NPT-1030 Installation, Operation, and Maintenance Manual.

5.6.5 XIO30 Cards


The XIO30 card is the cross-connect matrix card with one or four aggregation line interfaces for the
NPT-1030. It also includes the TMU. The NPT-1030 must always be configured with two XIO30 cards for the
cross-connect matrix and TMU redundancy. Three types of XIO30 cards are available:
 XIO30_4: In addition to 2.5 Gbps cross-connect matrix and TMU, this card provides one STM-1/4
aggregate line interface based on the SFP module. The SFP housing on the XIO30_4 panel supports
STM-1 and STM-4 optical transceivers with a pair of LC optical connectors (bidirectional STM-1 and
STM-4 Tx/Rx over a single fiber using two different lambdas). STM-1 electrical SFPs with coaxial
connectors are supported as well.

NOTE: The connectivity of the XIO30_4 to the Tslots (TS1, TS2, and TS3) is limited to 6 x VC-4s. There are no limitations for Hardware Rev. B00 and above.

The total NPT-1030 capacity accommodating two XIO30_4 cards is 2.5 Gbps. The capacity is
distributed as follows: 1 slot with 622 Mbps and 2 slots with 2 x 622 Mbps.
The slot capacity is depicted in the following figure.
Figure 5-9: NPT-1030 with two XIO30_4 slot capacity

 XIO30Q_1&4: In addition to a 15 Gbps cross-connect matrix and TMU, this card provides four STM-1/STM-4 compatible aggregate line interfaces based on SFP modules. The interface rate, STM-1 or STM-4, is configurable per port from the management. The SFP housings on the XIO30Q_1&4 panel support STM-1 and STM-4 optical transceivers with a pair of LC optical connectors (bidirectional STM-1 and STM-4 Tx/Rx over a single fiber using two different lambdas). STM-1 electrical SFPs with coaxial connectors are also supported.


 XIO30_16: In addition to 15 Gbps cross-connect matrix and TMU, this card provides one STM-4/16
aggregate line interface based on the SFP module. The SFP housing on the XIO30_16 panel supports
STM-4 or STM-16 optical transceivers with a pair of LC optical connectors (bidirectional STM-4 and
STM-16 Tx/Rx over a single fiber using two different lambdas).
The total NPT-1030 capacity accommodating two XIO30Q_1&4 or two XIO30_16 cards is 15 Gbps. The
capacity is evenly distributed between the three I/O slots and is 2.5 Gbps per slot.
The slot capacity is depicted in the following figure.
Figure 5-10: NPT-1030 with two XIO30Q_1&4 or two XIO30_16 slot capacity

The following figures show the front panel of the XIO30 cards.
Figure 5-11: XIO30_4 front panel

Figure 5-12: XIO30Q_1&4 front panel

Figure 5-13: XIO30_16 front panel

The panels of the XIO30_4, XIO30Q_1&4, and XIO30_16 include the LED indications described in the
following table.


Table 5-5: LEDs on the XIO30Q_1&4, XIO30_4, and XIO30_16 panels


Marking Full name Color Function
ACT. System active Green  Off indicates no power supply.
 On steadily indicates that the FPGA in the XIO30 was not downloaded successfully or that the XIO30 cannot be controlled normally by the MCP30B.
 Blinking with a higher frequency (1 sec ON
and 1 sec OFF) indicates the XIO30 card is
running normally and is active.
 Blinking with a lower frequency (5 sec ON
and 5 sec OFF) indicates the XIO30 card is
running normally and is in standby.
FAIL System fail Red Normally off. Lights when card failure detected.
LSR ON or ON1 to ON4 Laser on indication Green Lights steadily when the laser is on.

5.7 NPT-1030 Tslot I/O modules


The NPT-1030 has three slots for installing I/O (Tslot) modules. It supports various PDH, SDH, CES, and Ethernet I/O modules, as listed in the following table. For a detailed description of the Tslot modules, see Tslot I/O Modules.

Table 5-6: NPT-1030 Tslot modules


Type Designation
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21B
Electrical PDH E1 interface Tslot module with 63 interfaces PME1_63
Electrical PDH E3/DS-3 interface Tslot module PM345_3
2 x STM-1 ports SDH interface card SMD1B
2 x STM-4 ports SDH interface card SMD4
4 x STM-1 ports SDH interface card SMQ1
4 x STM-1 or STM-4 ports SDH interface card SMQ1&4
1 x STM-4 port SDH interface card SMS4
1 x STM-16 port SDH interface card SMS16
Electrical Ethernet interface module with L1 functionality DMFE_4_L1
Optical Ethernet interface module with L1 functionality DMFX_4_L1
Electrical/optical GbE interface module with L1 functionality DMGE_1_L1
Electrical/optical GbE interface module with L1 functionality DMGE_4_L1
Electrical Ethernet interface module with L2 functionality DMFE_4_L2
Optical Ethernet interface module with L2 functionality DMFX_4_L2

Electrical/optical GbE interface module with L2 functionality DMGE_4_L2
Optical 10 GbE and GbE interface module with L2 functionality DMXE_22_L2
CES services for STM-1/STM-4 interfaces module DMCES1_4

5.8 Expansion Platform


The traffic capabilities of the Neptune platform can be expanded by installing the EXT-2U expansion unit on
top.
The EXT-2U platform is a high-density modular expansion unit for the Neptune multiservice platforms. It supports the complete range of PDH, SDH, CES, PCM, optics, and Ethernet services. Integrating this add-on platform into your network configuration is not traffic-affecting.
The EXT-2U is compact and versatile and can be used with different base units from the NPT product line. The type of traffic delivered by the unit depends on the type of matrix (TDM and Packet, or Packet only) installed in the base unit. I/O expansion cards are supported accordingly.
The EXT-2U has three multipurpose slots (ES1 to ES3) for any combination of extractable traffic cards. PDH,
SDH, optics, PCM, Ethernet, and CES traffic are all handled through cards in these traffic slots.
The following table lists the traffic cards supported in the EXT-2U when installed on the Neptune platform.
For a detailed description of the EXT-2U features, functionality, and supported traffic cards refer to the
chapter EXT-2U expansion platform.
EXT-2U supported traffic cards for NPT-1030
Card type Designation
Electrical PDH E1 interface card PE1_63
Electrical PDH E3/DS-3 interface card P345_3E
Optical SDH STM-4 interface card S4_1
Optical or electrical SDH STM-1 interface card S1_4
Optical Base Card (OBC) for optical amplifiers and DCM modules Optical Base Card (OBC)
EoS processing and metro L2 switching card with GbE/FE interfaces and MPLS capabilities MPS_2G_8F
CES multiservice card for 32 x E1 interfaces DMCE1_32
EoS processing and metro L2 switching card with GbE/FE interfaces and MPLS capabilities; provides power over Ethernet (PoE) MPoE_12G
E1 1:1 protection card for 63 ports TP63_1
High rate (E3/DS3/STM-1e) 1:1 protection card for up to 4 ports TPS1_1
Electrical Ethernet 1:1 protection card for up to 8 ports TPEH8_1
Multiservice PCM and 1/0 XC card over TDM SM_10E


6 NPT-1021 system architecture
The NPT-1021 is a packet transport platform for the access layer, offering a pure packet solution that optimizes packet handling. The NPT-1021 is a cost-effective choice for the first aggregation stage, geared for cellular tail locations (3G and LTE). It provides a unique hybrid solution for high-capacity access rings and is optimized for popular triple-play applications.
As a Packet Optical Access (POA) platform with enhanced MPLS-TP support, the NPT-1021 is designed
around a centralized packet switch. It supports any-to-any direct data card connectivity. The NPT-1021
offers a packet switching capacity of up to 10 Gbps or 60 Gbps (with a choice of two modes on the same
unit).
The platform offers unique non-traffic-affecting upgrades from 1G-based configurations to 10GE-based (with up to 4 x 10GE interfaces). This is supported through the CPS50 card, a central packet switch (CPS) Tslot card for the NPT-1021. This card provides the NPT-1021 with scalable upgrades to high-capacity 10GE configurations. The CPS50 makes it possible to upgrade the system packet switching capacity to 60 Gbps. It supports up to 2 × 10GE (SFP+) ports and two flexible SFP housings, each of which can support 1 × 10GE with SFP+, 1 × GbE with SFP, or 2 × GbE with CSFP.
The NPT-1021 offers enhanced MPLS-TP data network functionality, including the complete range of
Ethernet-based services (CES, MoE, and PoE+), as described in MPLS-TP and Ethernet solutions.
The NPT-1021 is a compact (1U) base platform housed in a 243 mm deep, 465 mm wide, and 44 mm high
equipment cage. All its interfaces are accessible from the front of the unit. The platform includes the
following components:
 Traffic processing modules:
 12 ports, divided between:
 4 x SFP – each can be configured as 100/1000Base-X or 10/100/1000Base-T (with ETGBE) or
GPON ONU interface (with GTGBE_L3BD)
 8 x RJ-45 – 10/100/1000Base-T, 4 of 8 support PoE
 One traffic card slot (Tslot)
 Compact flash card (NVM)
 Traffic connector to the (optional) EXT 2U expansion unit
 Timing module (T3/T4, ToD, and 1pps)
 Redundant or non-redundant power supply modules (INF)
Figure 6-1: NPT-1021 platform

The NPT-1021 can be fed by 24 VDC, -48 VDC, or 110 to 230 VAC. In DC power feeding, two INF modules
can be configured in the two power module slots for redundancy. One double slot INF module with
dual-feeding can be configured as well. AC power feeding requires the use of a conversion module to
implement AC/DC conversion.
The NPT-1021 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform
design also makes this platform suitable for street cabinet use, withstanding temperatures up to 70°C.


The NPT-1021 can also be configured as an expanded platform when combined with the EXT-2U expansion
unit, as illustrated in the following figure.
Figure 6-2: NPT-1021 with EXT-2U expansion unit

Typical power consumption of the NPT-1021 is 40 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the NPT System
Specifications.

6.1 Modular architecture


NPT-1021 is a miniature pure packet multiplexer specially optimized for metro access and access, RAN
cellular networks, and utilities.
With the NPT-1021 build-as-you-grow strategy, network operators can provide new services as needed,
expanding traffic capacity with minimum investment.

Figure 6-3: NPT-1021 system architecture


NOTE: NPT-1021 supports up to 16 x GbE.

6.2 CPS50
CPS50 is a T-slot 60 Gbps central packet switch card for the NPT-1021 or NPT-1020 with up to 4 x 10 GE
aggregate ports or 2 x 10GE aggregate ports plus 4 x GbE ports. The card supports the following main
functions:
 60 Gbps packet switching capacity with MPLS-TP and PB functionality
 Flexible port type definition for front panel ports:
 Two SFP+ based 10GE ports, each can be configured as 10GBase-R or 10GBase-W with EDC
support
 Two SFP+/SFP/CSFP compatible cages, each one can be configured as:
 1 x 10 GE port with SFP+ (10GBase-R/10GBase-W with EDC support)
 1 x GbE port with SFP (1000Base-X)
 2 x GbE ports with CSFP (1000Base-X)
 Summary – supported port assignments in CPS50:
 4 x 10GE
 3 x 10GE + 2 x GbE
 2 x 10GE + 4 x GbE
 When the CPS50 is assigned and its switch engine is enabled, the 10G switch on the base card is disabled; the 12 built-in GbE ports of the base card and the Ethernet buses of the three E-slots are connected to the switch core of the CPS50.
A CPS50 card can be inserted and replaced without affecting the traffic flow.
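The supported port assignments above follow from simple cage arithmetic: two fixed SFP+ 10GE ports plus two flexible cages, each fitted with an SFP+ (1 × 10GE), an SFP (1 × GbE), or a CSFP (2 × GbE). The sketch below is illustrative, with hypothetical names:

```python
# Hypothetical sketch of the CPS50 front-panel port arithmetic: two fixed
# SFP+ 10GE ports plus two flexible cages, each holding one of three modules.
CAGE_OPTIONS = {"SFP+": (1, 0), "SFP": (0, 1), "CSFP": (0, 2)}  # (10GE, GbE)

def port_totals(cage1, cage2):
    t1, g1 = CAGE_OPTIONS[cage1]
    t2, g2 = CAGE_OPTIONS[cage2]
    return (2 + t1 + t2, g1 + g2)  # (total 10GE ports, total GbE ports)

# The maximum-capacity assignments listed in the summary above:
assert port_totals("SFP+", "SFP+") == (4, 0)   # 4 x 10GE
assert port_totals("SFP+", "CSFP") == (3, 2)   # 3 x 10GE + 2 x GbE
assert port_totals("CSFP", "CSFP") == (2, 4)   # 2 x 10GE + 4 x GbE
```

Mixed fits (for example an SFP in one cage) yield intermediate counts such as 3 × 10GE + 1 × GbE.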
The following figure shows the front panel of the CPS50.

Figure 6-4: CPS50 front panel


Table 6-1: CPS50 indicators and functions


Marking Full name Color Function
ACT. System active Green Off indicates no power supply.
On steadily indicates that the CPS50 was not downloaded successfully or that the CPS50 cannot be controlled normally by the corresponding platform.
Blinking (1 sec ON and 1 sec OFF) indicates that the CPS50 card is running normally and is active.
FAIL System fail Red Normally off. Lights when a card failure is
detected.
ON (separate LED Laser on indication Green Lights steadily when laser is on.
for each port, P1
to P6)

6.3 Control Subsystem


In the NPT-1020/NPT-1021 platform, the main controller is integrated on the MXC-1021 card together with core traffic processing. The MXC-1021 card controls the entire NPT-1020/NPT-1021 system via a high-performance CPU, which also processes communication with the EMS/LCT and other equipment. A large-capacity flash memory stores the equipment configuration data and software versions (up to two). Both online and remote software upgrades are supported.
NPT-1020/NPT-1021 supports 8 in-band MCC channels over MoE and PB ETY ports.
Figure 6-5: NPT-1021 control system block diagram

The platform control and communication main functions include:


 Internal control and processing
 Network element (NE) software and configuration backup
 Communication with external equipment and management
 Built-in Test (BIT)


6.3.1 Internal control and processing


The MXC-1021 main controller provides central control, configuration, maintenance, alarm, and communication functions for the NPT-1021 system. If required, it can also communicate with the control processors of various cards in the expansion unit, using a master-slave control hierarchy.
The MXC-1021 main controller also provides the NE management interface for management stations (EMS/LCT), and supports MCC channels and management VLAN processing.

6.3.2 Software and configuration backup


NPT-1021 contains a large-capacity onboard NVM that stores a complete backup of the system’s software and node configuration, ensuring superior management and control availability.
The MXC-1021 main controller enables easy software upgrade using a remote software procedure
operated from the EMS-NPT management station or LCT-NPT craft terminal. The card can store two
different software versions simultaneously, and enables a quick switchover between the different versions
when required.

6.3.3 Built-in test


The BIT hardware and its related software assist in the identification of any faulty card or module.
The BIT outputs provide:
 Management reports
 System reset
 Maintenance alarms
 Fault detection
Dedicated test circuits implement the BIT procedure under the control of an integrated software package.
After the NPT-1020/NPT-1021 unit is switched on, a BIT program is automatically activated for both the
initialization and normal operation phases. Alarms are sent to the EMS-NPT if any failures are detected by
the BIT.
BIT testing covers general tests, including module presence tests and periodic sanity checks of I/O module
processors. It performs traffic path tests, card environment tests, data tests, and detects traffic-affecting
failures, as well as failures in other system modules.


6.4 Communications with external equipment and management

In the NPT metro access platform product line, the main controller unit is responsible for communicating
with other NEs and management stations.
The main controller unit communicates with the remote EMS/LCT systems and other NEs via the MCC or
clear channel. It communicates with the local EMS and LCT systems via the Ethernet interface. The
communications between other NEs, or between the NEs and the EMS/LCT, can also be via the DCN. The
controller can connect to the DCN via Ethernet or V.35. In addition, the controller can connect to external
equipment via an Ethernet port, using management VLANs to build a narrow-bandwidth DCN over external equipment. NPT-1020/NPT-1021 can also support up to 12 in-band MCC channels over MoE interfaces, when the NE is connected by an ETY link to other MPLS-TP NEs.

NOTE: NPT-1021 does not support Order Wire (OW).

NOTE: NPT-1021 supports in-band and DCN management connections for PB and MPLS:
 4 Mbps policer for PB UNI which connects to external DCN
 10 Mbps shaper for MCC packets to the MCP
 No rate limit for the MNG port, at rates up to 100 Mbps full duplex

6.5 Timing
NPT-1021 provides high-quality system timing to all traffic modules and functions in compliance with
applicable ITU-T recommendations for functionality and performance. Timing functionality and
performance should comply with ITU-T G.781, G.783 and G.813.
The main component in the NPT-1021 synchronization subsystem is the timing and synchronization unit
(TMU). Timing is distributed from the TMUs to all traffic and matrix cards, to minimize unit types and
reduce operation and maintenance costs.
The timing system in NPT-1021 includes two clock domains, the System TMU and the PTP TMU. The System TMU clock sources can be T1/T2/T3, Sync-E, or the PTP slave clock; the PTP TMU clock sources can be T0, Sync-E, or an external 1PPS+ToD input.
To support reliable timing, the NPT-1021 provides multiple synchronization reference options:
 2 x 2 MHz (T3) external timing input sources
 2 x 2 Mbps (T3) external timing input sources
 E1/T1 interfaces
 STM-1 of CES cards
 Local internal clock
 Holdover mode


 SyncE
 1588V2 – Master, Slave, and transparent
 1PPS+ToD interface
In the NPT, any timing signal can be selected as a reference source. The TMU provides direct control over
the source selection (received from the system software) and the frequency control loop. The definition of
the synchronization source depends on the source quality and synchronization mode of the network timing
topology (set by the EMS-NPT or LCT-NPT):
 Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. NPT-1021 synchronization subsystem synchronizes to the best available
timing source using the SSM (ESMC) protocol. The TMU is frequency-locked to this source, providing
internal system timing. The platform is synchronized to this central timing source.
 NPT-1021 provides synchronization outputs for the synchronization of external equipment within the
exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
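The priority-and-quality selection behavior described above can be sketched in a few lines. The following Python fragment is purely illustrative; the source names, priorities, and data layout are assumptions, not NPT internals. It maps each candidate reference to its SSM quality level and chooses by quality first, then configured priority, as G.781 describes:

```python
# Standard SSM quality-level codes (lower value = better quality).
QL = {"PRC": 2, "SSU-A": 4, "SSU-B": 8, "SEC": 11, "DNU": 15}

def select_reference(sources):
    """sources: list of dicts with 'name', 'ql', 'priority', 'failed'.
    Returns the chosen source name, or 'holdover' if none is usable."""
    usable = [s for s in sources if not s["failed"] and QL[s["ql"]] < QL["DNU"]]
    if not usable:
        return "holdover"   # no valid reference: free-run/holdover mode
    best = min(usable, key=lambda s: (QL[s["ql"]], s["priority"]))
    return best["name"]

refs = [
    {"name": "T3-1",    "ql": "SSU-A", "priority": 1, "failed": False},
    {"name": "SyncE-5", "ql": "PRC",   "priority": 2, "failed": False},
    {"name": "E1-3",    "ql": "SEC",   "priority": 3, "failed": True},
]
assert select_reference(refs) == "SyncE-5"   # better quality wins over priority
```

Note how a failed or DNU-marked source is excluded before comparison, matching the fallback to holdover mode listed among the reference options above.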
NPT-1021 supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
The IEEE 1588 Precision Time Protocol (PTP) provides a standard method for high precision synchronization
of network connected clocks. PTP is a time transfer protocol enabling slave clocks to synchronize to a
known master clock, ensuring that multiple devices operate using the same time base. The protocol
operates in master/slave configuration using UDP packets over IP.

6.6 Traffic and switching functionality


NPT-1021 supports a centralized packet switch card that provides any-to-any direct data card connectivity.
NPT-1021 supports the following range of non-blocking cross connection configurations:
 10 Gbps packet for Ethernet and MPLS-TP switching
 60 Gbps packet for Ethernet and MPLS-TP switching after installing the CPS50 card in the Tslot

NOTE: NPT-1021 internal packet switch supports up to 16 x 1 GbE interfaces.


6.7 Ethernet configuration options


The NPT-1021 platform supports a wide range of legacy PCM, CES, and Ethernet services. The 8 x
10/100/1000BaseT ports can be configured as MoE interfaces.

6.8 NPT-1021 Tslot modules


The NPT-1021 has a single slot for installing I/O modules. It supports installation of several CES and
Ethernet I/O modules as described in the following table. For a detailed description of the Tslot modules
refer to Tslot I/O Cards.

Table 6-2: NPT-1021 Tslot modules


Type                                                                                            Designation
60G central packet switch card with up to 4 x 10GbE interfaces or 2 x 10GbE + 4 x GbE           CPS50
CES services for STM-1/STM-4 interfaces module                                                  DMCES1_4
CES multi-service card with 16 x E1/T1 interfaces                                               MSE1_16
CES multiservice card with 8 x E1/T1 and 2 x STM-1/OC-3 interfaces                              MSC_2_8
CES multi-service module for 4 x OC3/STM-1 or 1 x OC12/STM-4 interfaces                         MS1_4
CES multi-service module with 32 x E1/T1 interfaces                                             MSE1_32
Electrical GbE interface module with direct connection to the packet switch                     DHGE_4E
Optical GbE interface module with direct connection to the packet switch (up to 4 ports only)   DHGE_8
NFV module with 4 x GbE front panel ports for Virtual Network Functions                         NFVG_4

6.9 Power feed subsystem


NPT-1020/NPT-1021 features a distributed power feed subsystem. This distributed power concept supports
system upgrades and efficient heat distribution, and ensures maximum reliability of the power feed
interface.
NPT-1020/NPT-1021 supports four types of power modules:
 INF-B1U, single input -48 VDC power supply for NPT-1020/NPT-1021. Both PS slots can be assigned.
 INF-B1U-24V, single input 24V DC power supply for NPT-1020/NPT-1021. Both PS slots can be
assigned.
 INF-B1U-D, dual input DC power supply for NPT-1020/NPT-1021. Only PSA slot can be assigned.
 AC-PS-B1U, single input AC to DC converter for NPT-1020/NPT-1021. Only PSA slot can be assigned.
In DC power feeding, two INF modules can be configured in two power supply module slots for redundant
power supply. AC power feeding requires the use of a conversion module to implement AC/DC conversion.


Additional features of the power feed subsystem include:


 Reverse polarity protection
 Overvoltage alarm and protection
 Undervoltage alarm and protection
 Redundancy between INF-B1U units
 Hot swapping
 Power-fail detection and 10 msec holdup
 Lightning-strike protection
 Fan power supply with adjustable voltages

6.9.1 INF-B1U
The INF-B1U is a -48 VDC power-filter module for high-power applications that can be plugged into the
NPT-1020/NPT-1021 platforms. Two INF-B1U modules are needed for power feeding redundancy. It
performs the following functions:
 High-power INF for up to 200 W
 Single DC power input and power supply for all modules in the NPT-1020/NPT-1021
 Input filtering function for the entire NPT-1020/NPT-1021 platforms
 Adjustable output voltage for fans in the NPT-1020/NPT-1021
 Indication of input power loss and detection of under-/over-voltage
 Shutting down of the power supply when under-/over-voltage is detected

Figure 6-6: INF_B1U front panel

6.9.2 INF-B1U-24V
The INF-B1U-24V is a 24 VDC power-filter module for high-power applications that can be plugged into the
NPT-1020/NPT-1021 platforms. Two INF-B1U-24V modules are needed for power feeding redundancy. It
performs the following functions:
 Feed power supply for all modules in the NPT-1020/NPT-1021 products
 Input filtering function for the entire NPT-1020/NPT-1021 platforms
 Adjustable output voltage for fans in the NPT-1020/NPT-1021
 Support of fan power loss alarm and LED display
 Indication of input power loss and detection of under-/over-voltage
 Shutting down of the power supply in the event of under-/over-voltage


 Single DC power input: 18 VDC to 36 VDC


 Maximum power consumption of 85 W (the CPS50 card is not supported with this power module)
The front panel of the INF-B1U-24V is shown in the following figure.

Figure 6-7: INF-B1U-24V front panel

6.9.3 INF-B1U-D
The INF-B1U-D is a DC power-filter module that can be plugged into the NPT-1020/NPT-1021 platforms. It
performs the following functions:
 Dual DC power input and power supply for all modules in the NPT-1020/NPT-1021
 Input filtering for the entire NPT-1020/NPT-1021 platforms
 Adjustable output voltage for fans in the NPT-1020/NPT-1021
 Indication of input power loss and detection of under-/over-voltage
 Shutting down of the power supply when under-/over-voltage is detected
Figure 6-8: INF-B1U-D front panel

6.9.4 AC_PS-B1U
The AC_PS-B1U is an AC power module that can be plugged into the NPT-1020/NPT-1021 platforms. It
performs the following functions:
 Converts AC power to DC power for the NPT-1020/NPT-1021.
 Filters input for the entire NPT-1020/NPT-1021 platforms.
 Supplies adjustable output voltage for fans in the NPT-1020/NPT-1021.
 Supplies up to 180 W.


Figure 6-9: AC_PS-B1U front panel

6.10 Expansion Platform


The traffic capabilities of the Neptune platform can be expanded by installing the EXT-2U expansion unit on
top.
The EXT-2U platform is a high density modular expansion unit for the Neptune multiservice platforms. It
supports the complete range of CES, PCM, optics and Ethernet services. Integrating this add-on platform
into your network configuration is not traffic-affecting.
The EXT-2U is compact and versatile and can be used with different base units from the NPT Product Line.
The type of traffic delivered by the unit depends on the type of matrix (PCM or Packet only) installed in the
base unit. I/O expansion cards are supported accordingly.
The EXT-2U has three multipurpose slots (ES1 to ES3) for any combination of extractable traffic cards. PCM,
Ethernet, and CES traffic are all handled through cards in these traffic slots.
The following table lists the traffic cards supported in the EXT-2U when installed on the Neptune platform.
For a detailed description of the EXT-2U features, functionality, and supported traffic cards refer to the
chapter EXT-2U Expansion Platform.

Table 6-3: EXT-2U supported traffic cards for NPT-1021


Card type                                                                                       Designation
Optical amplifiers base card                                                                    Optical Base Card (OBC)
Muxponder card with 2 x 10G line ports and 12 client ports and a slot for installing an
MO_AOC4 optical module (additional 4 client ports)                                              MXP10
Data card with internal direct connection to the packet switch                                  DHFE_12
Data card with internal direct connection to the packet switch                                  DHFX_12
CES multiservice card for 32 x E1 interfaces                                                    DMCE1_32
Multiservice access card that introduces various 64 Kbps, N x 64 Kbps PCM interfaces,
and DXC1/0 functionality                                                                        EM_10E



7 NPT-1020 system architecture
The NPT-1020 is a multiservice packet transport platform for the access network, offering an All-Native solution that
optimizes both TDM and packet handling. The NPT-1020 is a cost-effective choice for the first aggregation
stage, geared for cellular tail locations (3G and LTE), providing a unique hybrid solution for high-capacity
access rings, and optimized for popular triple-play applications.
As a Packet Optical Access (POA) platform with enhanced MPLS-TP support, the NPT-1020 is designed
around a centralized hybrid matrix card that supports any-to-any direct data card connectivity as well as
native TDM switching capacity. The NPT-1020 offers a packet switching capacity of up to 10 Gbps or 60
Gbps (with a choice of two modes on the same unit) as well as a TDM capacity of up to 2.5G (16 x VC4 fully
low order traffic).
The NPT-1020 offers enhanced MPLS-TP data network functionality, including the complete range of
Ethernet-based services (CES, EoS, MoT, MoE, and PoE+).
The NPT-1020 is a compact (1U) base platform housed in a 243 mm deep, 465 mm wide, and 44 mm high
equipment cage with all interfaces accessible from the front of the unit. The platform includes the following
components:
 Traffic processing through:
 21 built-in native E1s
 14 ports, divided between:
 2 x STM-1/STM-4 ports (native)
 8 x 10/100/1000BaseT electrical ports (with 4 x PoE+)
 4 x SFP – each can be configured as 100/1000Base-X or 10/100/1000Base-T (with
ETGBE) or GPON ONU interface (with GTGBE_L3BD)
 1 traffic card slot (Tslot)
 Compact flash card (NVM)
 Traffic connector to the (optional) EXT-2U expansion unit
 Timing module (T3/T4, ToD, and 1pps)
 Synchronization 1588V2, Master, Slave, and Transparent
 Redundant or non-redundant power supply modules (INF)
Figure 7-1: NPT-1020 platform

The NPT-1020 can be fed by 24 VDC, -48 VDC, or 110 VAC to 230 VAC. In DC power feeding, two INF
modules can be configured in two power supply module slots for redundant power supply. AC power
feeding requires the use of a conversion module to implement AC/DC conversion.
The NPT-1020 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform
design also makes this platform suitable for street cabinet use, withstanding temperatures up to 70°C.


The NPT-1020 can also be configured as an NPT-1020E, when combined with the EXT-2U expansion unit, as
illustrated in the following figure.
Figure 7-2: NPT-1020 platform with expansion unit

Typical power consumption for the NPT-1020 is 50 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the Neptune
Installation and Maintenance Manual and the Neptune System Specifications.

7.1 Modular Architecture


The NPT-1020 is a miniature native packet and TDM add/drop multiplexer specially optimized for metro
access networks, RAN cellular networks, and utilities.
With the NPT-1020 build-as-you-grow strategy, network operators can provide new services as needed,
expanding traffic capacity with minimum investment.
Figure 7-3: NPT-1020 system architecture


NOTE: The NPT-1020 supports up to 16 x GbE.

7.2 CPS50
CPS50 is a T-slot 60 Gbps central packet switch card for the NPT-1021 or NPT-1020 with up to 4 x 10 GE
aggregate ports or 2 x 10GE aggregate ports plus 4 x GbE ports. The card supports the following main
functions:
 60 Gbps packet switching capacity with MPLS-TP and PB functionality
 Flexible port type definition for front panel ports:
 Two SFP+ based 10GE ports, each can be configured as 10GBase-R or 10GBase-W with EDC
support
 Two SFP+/SFP/CSFP compatible cages, each one can be configured as:
 1 x 10 GE port with SFP+ (10GBase-R/10GBase-W with EDC support)
 1 x GbE port with SFP (1000Base-X)
 2 x GbE ports with CSFP (1000Base-X)
 Summary – supported port assignments in CPS50:
 4 x 10GE
 3 x 10GE + 2 x GbE
 2 x 10GE + 4 x GbE
 When the CPS50 is assigned and its switch engine is enabled, the 10G switch on the base card is
disabled; the built-in 12 x GbE ports of the base card and the Ethernet buses of the three E-slots are
connected to the switch core of the CPS50.
A CPS50 card can be inserted and replaced without affecting the traffic flow.
The following figure shows the front panel of the CPS50.

Figure 7-4: CPS50 front panel


Table 7-1: CPS50 indicators and functions


Marking                    Full name            Color    Function
ACT.                       System active        Green    Off indicates no power supply. On steadily indicates
                                                         that the CPS50 did not download successfully or
                                                         cannot be controlled normally by the corresponding
                                                         platform. Blinking (1 sec ON, 1 sec OFF) indicates
                                                         that the CPS50 card is running normally and is active.
FAIL                       System fail          Red      Normally off. Lights when a card failure is detected.
ON (separate LED for
each port, P1 to P6)       Laser on indication  Green    Lights steadily when the laser is on.

7.3 Control Subsystem


In the NPT-1020/NPT-1021 platform, the main controller is integrated on the MXC-1020 card together with
core traffic processing. The MXC-1020 card controls the entire NPT-1020/NPT-1021 system via a
high-performance CPU, which also processes communication with the EMS/LCT and other equipment. A
large capacity flash memory stores the equipment configuration data and software versions (up to two).
Both online and remote software upgrades are supported.
NPT-1020/NPT-1021 supports the processing of RS DCC channels, MS DCC channels, plus up to three Clear
Channels (DCC over framed or unframed E1). The NPT-1020/NPT-1021 unit can send network management
information through third-party SDH or PDH networks using these Clear Channels. The NPT-1020/NPT-1021
can also support in-band MCC channels over MoE ports.
Figure 7-5: NPT-1020 control system block diagram

The platform control and communication main functions include:


 Internal control and processing
 Network element (NE) software and configuration backup
 Communication with external equipment and management
 Built-in Test (BIT)


7.3.1 Internal control and processing


The MXC-1020 main controller provides central control, configuration, maintenance, alarm and
communication functions for the NPT-1020 system. If required, it can also communicate with the control
processors of various cards in the expansion unit, using a master-slave control hierarchy.
The MXC-1020 main controller also provides the NE management interface for management stations
(EMS/LCT), and supports MCC, DCC, and clear channel processing.

7.3.2 Software and configuration backup


The NPT-1020 contains a large-capacity onboard NVM that stores a complete backup of the system’s
software and node configuration. This ensures superior management and control availability.
The MXC-1020 main controller enables easy software upgrade using a remote software procedure
operated from the EMS-NPT management station or LCT-NPT craft terminal. The card can store two
different software versions simultaneously, and enables a quick switchover between the different versions
when required.

7.3.3 Built-in test


The BIT hardware and its related software assist in the identification of any faulty card or module.
The BIT outputs provide:
 Management reports
 System reset
 Maintenance alarms
 Fault detection
Dedicated test circuits implement the BIT procedure under the control of an integrated software package.
After the NPT-1020/NPT-1021 unit is switched on, a BIT program is automatically activated for both the
initialization and normal operation phases. Alarms are sent to the EMS-NPT if any failures are detected by
the BIT.
BIT testing covers general tests, including module presence tests and periodic sanity checks of I/O module
processors. It performs traffic path tests, card environment tests, data tests, and detects traffic-affecting
failures, as well as failures in other system modules.
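The BIT flow described above can be sketched as a simple polling loop. The fragment below is an illustrative model only; the module names, probe callback, and alarm fields are hypothetical, not the NPT BIT implementation:

```python
# Illustrative BIT-style periodic sanity check: poll each module and produce
# an alarm record for any module that fails its check (hypothetical fields).
def run_bit_cycle(modules, probe):
    """modules: iterable of module names; probe(name) -> True if the module
    answers its sanity check. Returns a list of alarm dicts for the EMS."""
    alarms = []
    for name in modules:
        if not probe(name):
            alarms.append({"module": name, "alarm": "BIT_FAIL", "severity": "major"})
    return alarms

# Simulated poll in which the Tslot card does not respond:
status = {"MXC-1020": True, "Tslot-1": False, "PS-A": True}
assert run_bit_cycle(status, lambda m: status[m]) == [
    {"module": "Tslot-1", "alarm": "BIT_FAIL", "severity": "major"}
]
```

A healthy cycle returns an empty alarm list, matching the behavior described above where alarms are sent to the EMS-NPT only when the BIT detects a failure.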


7.4 Communications with external equipment and management
In the NPT metro access platform product line, the main controller unit is responsible for communicating
with other NEs and management stations.
The main controller unit communicates with the remote EMS/LCT systems and other SDH NEs via the DCC
or clear channel. It communicates with the local EMS and LCT systems via the Ethernet interface. The
communications between SDH NEs, or between SDH NEs and the EMS/LCT, can also be via the DCN. The
controller can connect to the DCN via Ethernet or V.35. In addition, the controller can connect to external
equipment via Ethernet, using DCC channels of the SDH network to build a narrow bandwidth DCN for the
external equipment. The NPT-1020/NPT-1021 can also support up to 12 in-band MCC channels over MoE
interfaces, when the NE is connected by an ETY link to other MPLS-TP NEs.

NOTE: The NPT-1020 does not support Order Wire (OW).

NOTE: The NPT-1020 supports in-band and DCN management connections for PB and MPLS:
 4 Mbps policer for the PB UNI that connects to the external DCN
 10 Mbps shaper for MCC packets toward the MCP
 No rate limit on the MNG port; the port rate is up to 100M full duplex

7.5 Timing
The NPT-1020 provides high-quality system timing to all traffic modules and functions in compliance with
applicable ITU-T recommendations for functionality and performance. Timing functionality and
performance comply with ITU-T G.781, G.783, and G.813.
The main component in the NPT-1020 synchronization subsystem is the timing and synchronization unit
(TMU). Timing is distributed from the TMUs to all traffic and matrix cards, to minimize unit types and
reduce operation and maintenance costs.
The timing system in the NPT-1020 includes two clock domains: the System TMU and the PTP TMU. The System TMU clock
sources can be T1/T2/T3, Sync-E, or the PTP slave clock; the PTP TMU clock sources can be T0, Sync-E,
or an external 1PPS+ToD input.
To support reliable timing, the NPT-1020 provides multiple synchronization reference options:
 2 x 2 MHz (T3) external timing input sources
 2 x 2 Mbps (T3) external timing input sources
 STM-n line timing from any SDH interface card
 E1 2M PDH line timing from any PDH interface card
 Local internal clock
 Holdover mode


 SyncE
 1588V2 – Master, Slave, and transparent
 1PPS+ToD interface
In the NPT, any timing signal can be selected as a reference source. The TMU provides direct control over
the source selection (received from the system software) and the frequency control loop. The definition of
the synchronization source depends on the source quality and synchronization mode of the network timing
topology (set by the EMS-NPT or LCT-NPT):
 Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. The NPT-1020 synchronization subsystem synchronizes to the best available
timing source using the SSM protocol. The TMU is frequency-locked to this source, providing internal
system and SDH line transmission timing. The platform is synchronized to this central timing source.
 The NPT-1020 provides synchronization outputs for the synchronization of external equipment within
the exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
The NPT-1020 supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
The IEEE 1588 Precision Time Protocol (PTP) provides a standard method for high precision synchronization
of network connected clocks. PTP is a time transfer protocol enabling slave clocks to synchronize to a
known master clock, ensuring that multiple devices operate using the same time base. The protocol
operates in master/slave configuration using UDP packets over IP.
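The master/slave time transfer described above rests on a small piece of arithmetic. The following sketch shows only the standard IEEE 1588 offset and mean-path-delay computation from the four exchanged timestamps; the timestamp values are invented for the example, and this is protocol math, not the NPT PTP stack:

```python
# Standard IEEE 1588 delay request-response arithmetic.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync; t2: slave receives Sync;
    t3: slave sends Delay_Req; t4: master receives Delay_Req.
    Assumes a symmetric path, as PTP itself does."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # mean one-way path delay
    return offset, delay

# Example: slave clock 5 us ahead of the master, 2 us one-way delay (times in us).
offset, delay = ptp_offset_and_delay(t1=0.0, t2=7.0, t3=10.0, t4=7.0)
assert offset == 5.0 and delay == 2.0
```

The slave subtracts the computed offset from its clock to align with the master; accuracy degrades when the forward and reverse paths are asymmetric, which is why transparent-clock support (listed among the 1588V2 modes above) matters in transport networks.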

7.6 Traffic and switching functionality


The NPT-1020 supports a centralized hybrid matrix card that provides any-to-any direct data card
connectivity as well as native TDM switching capacity.
The heart of the NPT-1020 is a non-blocking TDM matrix with 2.5 Gbps capacity (16 x VC-4, fully low-order
traffic), and a packet matrix with up to 10 Gbps switching and TM capability. The dual-plane architecture
enables outstanding configuration flexibility.
The NPT-1020 supports the following range of non-blocking cross connection configurations:
 16 VC-4 x 16 VC-4 as ADM-1/4 XC
 10 Gbps packet for Ethernet and MPLS-TP switching
 60 Gbps packet for Ethernet and MPLS-TP switching after installing the CPS50 card in Tslot

NOTE: The NPT-1020 internal packet switch supports up to 16 x 1 GbE interfaces.


7.7 Ethernet and TDM configuration options


The NPT-1020 platform supports a wide range of legacy PCM, TDM, and Ethernet services. In the TDM plane, it
supports two STM-1/4 compatible SDH ports, with 21 built-in E1s, built-in EoS (4 x VC-4), and internal buses
for additional I/O cards (Tslot + 3 x Eslots). The 8 x 10/100/1000BaseT ports can be configured as EoS/MoT
interfaces. The NPT-1020 TDM matrix supports a 16 x 16 VC-4 capacity (2.5 Gbps).

7.8 Power feed subsystem


NPT-1020/NPT-1021 features a distributed power feed subsystem. This distributed power concept supports
system upgrades and efficient heat distribution, and ensures maximum reliability of the power feed
interface.
NPT-1020/NPT-1021 supports four types of power modules:
 INF-B1U, single input -48 VDC power supply for NPT-1020/NPT-1021. Both PS slots can be assigned.
 INF-B1U-24V, single input 24V DC power supply for NPT-1020/NPT-1021. Both PS slots can be
assigned.
 INF-B1U-D, dual input DC power supply for NPT-1020/NPT-1021. Only PSA slot can be assigned.
 AC-PS-B1U, single input AC to DC converter for NPT-1020/NPT-1021. Only PSA slot can be assigned.
In DC power feeding, two INF modules can be configured in two power supply module slots for redundant
power supply. AC power feeding requires the use of a conversion module to implement AC/DC conversion.
Additional features of the power feed subsystem include:
 Reverse polarity protection
 Overvoltage alarm and protection
 Undervoltage alarm and protection
 Redundancy between INF-B1U units
 Hot swapping
 Power-fail detection and 10 msec holdup
 Lightning-strike protection
 Fan power supply with adjustable voltages

7.8.1 INF-B1U
The INF-B1U is a -48 VDC power-filter module for high-power applications that can be plugged into the
NPT-1020/NPT-1021 platforms. Two INF-B1U modules are needed for power feeding redundancy. It
performs the following functions:
 High-power INF for up to 200 W
 Single DC power input and power supply for all modules in the NPT-1020/NPT-1021
 Input filtering function for the entire NPT-1020/NPT-1021 platforms
 Adjustable output voltage for fans in the NPT-1020/NPT-1021


 Indication of input power loss and detection of under-/over-voltage


 Shutting down of the power supply when under-/over-voltage is detected

Figure 7-6: INF_B1U front panel

7.8.2 INF-B1U-24V
The INF-B1U-24V is a 24 VDC power-filter module for high-power applications that can be plugged into the
NPT-1020/NPT-1021 platforms. Two INF-B1U-24V modules are needed for power feeding redundancy. It
performs the following functions:
 Feed power supply for all modules in the NPT-1020/NPT-1021 products
 Input filtering function for the entire NPT-1020/NPT-1021 platforms
 Adjustable output voltage for fans in the NPT-1020/NPT-1021
 Support of fan power loss alarm and LED display
 Indication of input power loss and detection of under-/over-voltage
 Shutting down of the power supply in the event of under-/over-voltage
 Single DC power input: 18 VDC to 36 VDC
 Maximum power consumption of 85 W (the CPS50 card is not supported with this power module)
The front panel of the INF-B1U-24V is shown in the following figure.

Figure 7-7: INF-B1U-24V front panel

7.8.3 INF-B1U-D
The INF-B1U-D is a DC power-filter module that can be plugged into the NPT-1020/NPT-1021 platforms. It
performs the following functions:
 Dual DC power input and power supply for all modules in the NPT-1020/NPT-1021
 Input filtering for the entire NPT-1020/NPT-1021 platforms
 Adjustable output voltage for fans in the NPT-1020/NPT-1021


 Indication of input power loss and detection of under-/over-voltage


 Shutting down of the power supply when under-/over-voltage is detected
Figure 7-8: INF-B1U-D front panel

7.8.4 AC_PS-B1U
The AC_PS-B1U is an AC power module that can be plugged into the NPT-1020/NPT-1021 platforms. It
performs the following functions:
 Converts AC power to DC power for the NPT-1020/NPT-1021.
 Filters input for the entire NPT-1020/NPT-1021 platforms.
 Supplies adjustable output voltage for fans in the NPT-1020/NPT-1021.
 Supplies up to 180 W.

Figure 7-9: AC_PS-B1U front panel

7.9 NPT-1020 Tslot modules


The NPT-1020 has a single slot for installing I/O modules. It supports installation of several PDH, SDH, and
Ethernet I/O modules as described in the following table. For a detailed description of the Tslot modules,
see Tslot I/O modules.

Table 7-2: NPT-1020 Tslot modules


Type Designation
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21B
Electrical PDH E1 interface Tslot module with 63 interfaces PME1_63


Electrical PDH E3/DS-3 interface Tslot module                                 PM345_3
2 x STM-1 electrical or optical ports SDH interface card                      SMD1B
1 x STM-4 port SDH interface card                                             SMS4
CES services for STM-1/STM-4 interfaces module                                DMCES1_4
Electrical GbE interface module with direct connection to the packet switch   DHGE_4E
Optical GbE interface module with direct connection to the packet switch      DHGE_8
CES multi-service card with 16 x E1/T1 interfaces                             MSE1_16
CES multiservice card with 8 x E1/T1 and 2 x STM-1/OC-3 interfaces            MSC_2_8
CES multi-service module for 4 x OC3/STM-1 or 1 x OC12/STM-4 interfaces       MS1_4
CES multi-service module with 32 x E1/T1 interfaces                           MSE1_32
Central packet switching card                                                 CPS50
NFV module with 4 x GbE front panel ports for Virtual Network Functions       NFVG_4

7.10 Expansion Platform


The traffic capabilities of the Neptune platform can be expanded by installing the EXT-2U expansion unit on
top.
The EXT-2U platform is a high density modular expansion unit for the Neptune multiservice platforms. It
supports the complete range of PDH, SDH, CES, PCM, optics and Ethernet services. Integrating this add-on
platform into your network configuration is not traffic-affecting.
The EXT-2U is compact and versatile and can be used with different base units from the NPT Product Line.
The type of traffic delivered by the unit depends on the type of matrix (TDM and Packet or Packet only)
installed in the base unit. I/O expansion cards are supported accordingly.
The EXT-2U has three multipurpose slots (ES1 to ES3) for any combination of extractable traffic cards. PDH,
SDH, optics, PCM, Ethernet, and CES traffic are all handled through cards in these traffic slots.
The following table lists the traffic cards supported in the EXT-2U when installed on the Neptune platform.
For a detailed description of the EXT-2U features, functionality, and supported traffic cards refer to the
chapter EXT-2U expansion platform.

Table 7-3: EXT-2U supported traffic cards for NPT-1020


Card type                                                                     Designation
Electrical PDH E1 interface card                                              PE1_63
Electrical PDH E3/DS-3 interface card (note 10)                               P345_3E

10. The card is also supported in NPT-1600CE installed on NPT-1200.


Optical or electrical SDH STM-1 interface card (note 11)                      S1_4
Optical amplifiers base card                                                  Optical base card (OBC)
Muxponder card with 12 client ports and a slot for installing an
MO_AOC4 optical module                                                        MXP10
Data card with internal direct connection to the packet switch                DHFE_12
Data card with internal direct connection to the packet switch                DHFX_12
Data card with EoS/MoT/MoE support                                            MPoE_12G
CES multiservice card for 32 x E1 interfaces                                  DMCE1_32
Multiservice access card that introduces various 64 Kbps, N x 64 Kbps PCM
interfaces, and DXC1/0 functionality                                          EM_10E
Multiservice PCM and 1/0 XC card                                              SM_10E

11. The card is also supported in NPT-1600CE installed on NPT-1200.



8 NPT-1010 system architecture
NPT-1010 is a miniature platform, 224 mm deep, 223.5 mm wide, and 44 mm high. All platform interfaces
are accessible from the front of the unit. The NPT-1010 can be installed in 2,200 mm or 2,600 mm ETSI
racks, and in 19" or in 23" racks. Up to two NPT-1010 platforms can be installed in the width of an ETSI or
23" rack, using a dedicated mounting platform. One unit can be installed in the width of a 19" rack, using
mounting brackets.
Two models of NPT-1010 are available according to their power supply:
 NPT-1010_DC - operates from a -48 VDC power feed, provides two connectors for external power line
connection, and supports dual power feed for redundancy.
 NPT-1010_AC - fed from a 100-240 VAC power source, supports external power line connection
through a special power connector. A 2 A fuse, accessible from the front panel, is incorporated in the
AC POWER IN connector for protection.
Figure 8-1: NPT-1010_DC, general view

Figure 8-2: NPT-1010_AC, general view

The NPT-1010 is a single board system with all traffic interfaces housed on its front panel and an optional
mini slot for CES E1 and 1588V2. The interfaces are identical on both NPT-1010 options, including:
 4 x 10/100/1000BaseT interfaces (with PoE+)
 4 x 100/1000Base-X, SFP-based, interfaces
The NPT-1010 performs the following functions:
 Integrated switching, timing, system control, and in band management (MCC)
 Control-related functions:
 Communications and control


 Routing and handling of up to 4 MCC channels


 Alarms and maintenance
 Fan control
 Platform dual feeding power supply (in the NPT-1010_DC version only)

8.1 NPT-1010 user interfaces


NPT-1010 supports the following interfaces:
 4 x 10/100/1000BaseT with PoE+
 4 x 100/1000BaseX, L2 with MPLS-TP functionality
 MNG
In addition, the NPT-1010 includes LED indicators.
The following figures show the front panels of the NPT-1010_DC and NPT-1010_AC platforms. All
components and functionalities of the NPT-1010_AC are identical to those of the NPT-1010_DC, except for
the power feed interface, which is an AC line connector.
Figure 8-3: NPT-1010_DC front panel

Figure 8-4: NPT-1010_AC front panel

Table 8-1: NPT-1010 front panel interfaces


Marking            Interface type    Function                                        Notes
POWER IN A         D-type, 3-pin     DC power input connector, source A              On NPT-1010_DC only.
POWER IN B         D-type, 3-pin     DC power input connector, source B              On NPT-1010_DC only.
AC POWER IN        IEC C14, 3-pin    AC power input connector                        On NPT-1010_AC only.
ALARMS             D-type, 15-pin    Alarm input and output interface connector      ---
PORT 1 to PORT 4   RJ-45             10/100/1000BaseT L2 interfaces (MPLS ready)     ---
PORT 5 to PORT 8   SFP housing       SFP housing for 100/1000BaseX, L2 interface     ---
MNG                RJ-45             10/100BaseT Ethernet interface for management   ---

The NPT-1010_AC has an AC POWER IN connector for connecting the AC power source.

Table 8-2: NPT-1010 LED indicators and functions


Marking                         Full name           Color    Function
- (left LED in the FE ports)    Link and Active     Green    Lights when the link is OK. Blinks when packets are received or transmitted.
- (right LED in the FE ports)   Speed               Orange   Off when the speed is 10 Mbps. Lights when the speed is 100 Mbps.
ON5 to ON8                      Laser on            Green    Lights steadily when the corresponding laser is on.
ACT.                            System active       Green    Normally blinks with the frequency of 0.5 Hz. Off or steadily on when the platform is not running normally.
FAIL                            System fail         Red      Normally off. Lights when a card failure is detected.
MJR.                            System Major alarm  Red      Lights when the system has a Critical or Major alarm.
MNR.                            System Minor alarm  Orange   Lights when the system has a Minor or Warning alarm (and no Critical or Major alarm).

NOTE: ACT, FAIL, MJR, and MNR LEDs are combined to show various failure reasons during
the system boot. For details, see the Troubleshooting Using Component Indicators section in
the NPT-1010 Installation, Operation, and Maintenance Manual.

The four SFP housings on the NPT-1010 support four types of SFP modules:
 GE SFP optical transceivers with a pair of LC optical connectors
 GE SFP electrical transceivers with an RJ-45 connector
 Bidirectional GE SFP optical transceivers with one LC optical connector (bidirectional GE Tx/Rx over a
single fiber using two different lambdas)
 Colored GE SFP optical transceivers with a pair of LC optical connectors (colored C/DWDM SFP)


8.2 Communications with external equipment and management
In the NPT-1010 platform, the main board and controller unit is responsible for communicating with other
NEs and management stations.
The main controller unit communicates with the remote EMS/LCT systems and other packet NEs via the
MCC or the management VSI (PB). It communicates with the local EMS and LCT systems via the RJ-45
Ethernet-based interface. The communications between NEs, or between NEs and the EMS/LCT, can also
run over the DCN. The main board and controller can connect to the DCN via Ethernet. In addition, the
controller can connect to external equipment via Ethernet, using MCC channels of the MPLS network and
the management VSI for PB (Layer 2) to build a narrow-bandwidth DCN for the external equipment. The
NPT-1010 can also support up to 4 in-band MCCs over MoE interfaces when the NE is connected by an ETY
link to other MPLS-TP networks.

NOTE: NPT-1010 supports in-band and DCN management connections for PB and MPLS:
 4 Mbps policer for the PB UNI that connects to the external DCN
 3 Mbps shaper for MCC packets to the MCP
 No rate limit on the MNG port (rate up to 100 Mbps full duplex)
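The policing behavior described in the note can be illustrated with a generic token-bucket sketch. This is not the NPT-1010 firmware; the class name is invented for this example, and the burst size is an assumed illustrative value (only the 4 Mbps rate comes from the note above).

```python
# Illustrative token-bucket policer, similar in spirit to the 4 Mbps PB UNI
# policer mentioned above. The class name and the burst value are assumptions
# for this sketch, not taken from the NPT-1010 implementation.

class TokenBucketPolicer:
    """Single-rate, byte-based policer with integer-millisecond timestamps."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate_bytes_per_ms = rate_bps // 8000  # 4 Mbps -> 500 bytes/ms
        self.burst = burst_bytes                   # bucket depth in bytes
        self.tokens = burst_bytes
        self.last_ms = 0

    def allow(self, pkt_bytes, now_ms):
        # Refill tokens for the elapsed time, capped at the burst size.
        elapsed = now_ms - self.last_ms
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate_bytes_per_ms)
        self.last_ms = now_ms
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True    # conforming packet: forward
        return False       # out of profile: drop

# 1000-byte packets every 1 ms = 8 Mbps offered load against a 4 Mbps profile:
# the initial burst is admitted, then roughly every second packet conforms.
policer = TokenBucketPolicer(rate_bps=4_000_000, burst_bytes=16_000)
accepted = sum(policer.allow(1000, t) for t in range(1000))
```

Over the one-second window the sketch admits the configured burst plus roughly 4 Mbps of sustained traffic, which is the essential behavior of a single-rate policer.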

8.3 NPT-1010 Mslot modules


NPT-1010 has a single mini slot (Mslot) for installing I/O modules. It supports installation of CES E1 and
timing I/O modules as described in the following table.

Table 8-3: NPT-1010 Mslot modules


Type Designation

CES services for E1/T1 interfaces module with 1588V2 slave TMSE1_8

Timing module for 1588V2 slave support TM10


8.3.1 TMSE1_8
The TMSE1_8 is a CES and timing module that provides Circuit Emulation Services (CES) for up to 8 x E1/T1
interfaces. It supports the SAToP and CESoPSN standards and has a 36-pin SCSI connector for connecting
the customer E1/T1 signals. It also provides the Time of Day (ToD) and 1PPS signals for supporting Ethernet
timing per the IEEE 1588v2 standard.
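The packet rates that SAToP and CESoPSN emulation produce for an E1 follow directly from the E1 frame structure. As a back-of-the-envelope sketch (the payload sizes below are assumed, typical configuration values, not fixed characteristics of the TMSE1_8):

```python
# Rough CES packet-rate arithmetic for an E1 circuit. Payload sizes are
# assumed defaults for illustration; actual values are configurable.

E1_RATE_BPS = 2_048_000      # unframed E1 bit rate
E1_FRAMES_PER_SEC = 8000     # one 32-timeslot E1 frame every 125 microseconds

# SAToP: the whole E1 bit stream is sliced into fixed-size payloads.
def satop_pps(payload_bytes=256):
    return E1_RATE_BPS // (payload_bytes * 8)

# CESoPSN: only N timeslots are carried, F frames bundled per packet.
def cesopsn(n_timeslots, frames_per_packet=8):
    pps = E1_FRAMES_PER_SEC // frames_per_packet
    payload_bytes = n_timeslots * frames_per_packet
    return pps, payload_bytes

rate = satop_pps()            # 1000 packets/s for a 256-byte payload
pps, payload = cesopsn(10)    # 1000 packets/s, 80-byte payload for 10 timeslots
```

The sketch shows the basic trade-off: larger payloads (or more frames per packet) lower the packet rate and overhead but increase packetization delay.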
The front panel of the TMSE1_8 is shown in the following figure.
Figure 8-5: TMSE1_8 front panel

Table 8-4: TMSE1_8 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz. Off
or on steadily indicates the card is not running
normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.

8.3.2 TM10
The TM10 is an optional module for the NPT-1010 mini slot. It provides the Time of Day (ToD) and 1PPS
signals for supporting Ethernet timing per the IEEE 1588v2 standard.
The front panel of the TM10 is shown in the following figure.
Figure 8-6: TM10 front panel

Table 8-5: TM10 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz. Off
or on steadily indicates the card is not running
normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.



9 Tslot I/O modules
Neptune offers a wide range of I/O Tslot modules supporting PDH, SDH, PCM, CES, optics, and Ethernet
services. This section provides a detailed description of the Tslot modules available for the Neptune
platforms.
The following table lists the available Tslot modules.

Table 9-1: Neptune Tslot modules


Type Designation
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21B
Electrical PDH E1 interface Tslot module with 63 interfaces PME1_63
Electrical PDH E3/DS-3 interface Tslot module PM345_3
2 x STM-1 ports SDH electrical or optical interface card SMD1B
4 x STM-1 ports SDH interface card SMQ1
4 x STM-1 or STM-4 ports SDH interface card SMQ1&4
1 x STM-4 port SDH interface card SMS4
2 x STM-4 ports SDH interface card SMD4
1 x STM-16 port SDH interface card SMS16
Electrical Ethernet interface module with L1 functionality DMFE_4_L1
Optical Ethernet interface module with L1 functionality DMFX_4_L1
Electrical/optical GbE interface module with L1 functionality DMGE_1_L1
Electrical/optical GbE interface module with L1 functionality DMGE_4_L1
Electrical Ethernet interface module with L2 functionality DMFE_4_L2
Optical Ethernet interface module with L2 functionality DMFX_4_L2
Electrical/optical GbE interface module with L2 functionality DMGE_2_L2
Electrical/optical GbE interface module with L2 functionality DMGE_4_L2
Electrical/optical GbE interface module with L2 functionality DMGE_8_L2
Optical 10 GbE and GbE interface module with L2 functionality DMXE_22_L2
Optical 10 GbE and GbE interface module with L2 functionality DMXE_48_L2
CES services for STM-1/STM-4 interfaces module DMCES1_4
CES multi-service module with 16 x E1/T1 interfaces MSE1_16
CES multi-service module with 8 x E1/T1 and 2 x STM-1/OC-3 interfaces MSC_2_8
CES multi-service module for 4 x OC3/STM-1 or 1 x OC12/STM-4 interfaces MS1_4
CES multi-service module with 32 x E1/T1 interfaces MSE1_32
Electrical GbE interface module with direct connection to the packet switch and with PoE+ DHGE_4E
Optical GbE interface module with direct connection to the packet switch DHGE_8

Electrical and optical GbE interface module with direct connection to the packet switch DHGE_16
Optical GbE interface module with direct connection to the packet switch DHGE_24
Optical 10GE interface module with direct connection to the packet switch DHXE_2
Optical 10GE interface module with direct connection to the packet switch DHXE_4
Optical 10GE interface module with direct connection to the packet switch, with OTN wrapping DHXE_4O
NFV module with 4 x GbE front panel ports for Virtual Network Functions NFVG_4

9.1 PDH cards


9.1.1 PME1_21
The PME1_21 is a Tslot module with 21 x E1 (2.048 Mbps) balanced electrical interfaces. The PME1_21 can
be configured in any Tslot and supports retiming of up to 8 x E1s.
The cabling of the PME1_21 module is directly from the front panel with a 100-pin SCSI female connector.
PME1_63 enables easy expansion of I/O slots equipped with PME1_21 modules by an additional 42 x E1
interfaces, while significantly reducing the cost per E1 interface. This is done by removing the working
PME1_21, replacing it with a PME1_63, and connecting it with appropriate cables. The I/O slot must then
be reassigned through the management as a PME1_63. The attributes (including cross connects, trails, and
so on) of the first 21 x E1s are retained as they were in the replaced PME1_21. For a detailed description of
this procedure, see the corresponding IMM.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E1 interfaces are listed in the following table.

Table 9-2: PME1_21 modules and E1 interfaces per platform


Platform Max. PME1_21 modules Max. E1 interfaces
NPT-1020 1 21
NPT-1030 3 63
NPT-1050 3 63
NPT-1200 7 147
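The interface totals in the table are simply the module count multiplied by the 21 E1 ports each PME1_21 provides:

```python
# The E1 totals above are module count x 21 E1 interfaces per PME1_21.
E1_PER_MODULE = 21
max_modules = {"NPT-1020": 1, "NPT-1030": 3, "NPT-1050": 3, "NPT-1200": 7}
max_e1 = {platform: n * E1_PER_MODULE for platform, n in max_modules.items()}
# e.g. the NPT-1200 with 7 modules reaches 147 E1 interfaces
```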


Figure 9-1: PME1_21 module front panel

Table 9-3: PME1_21 front panel indicators


Marking Full name Color Function
ACT. Module active Green Off indicates no power supply.
On steadily indicates the module was not downloaded
successfully or the module cannot be controlled normally
by the MCP1200.
Blinking indicates the module is running normally.
FAIL Module fail Red Normally off. Lights when module failure detected.

NOTE: The PME1_21 supports only balanced E1s directly from its connectors. For unbalanced
E1s, configure an xDDF 21, an external DDF with E1 balanced-to-unbalanced conversion.

9.1.2 PME1_21B
The PME1_21B is a Tslot module with 21 x E1 (2.048 Mbps) balanced electrical interfaces. The PME1_21B
can be configured in any Tslot and supports retiming of up to 8 x E1s.
The cabling of the PME1_21B module is directly from the front panel with a 100-pin SCSI female connector.
PME1_63 enables easy expansion of I/O slots equipped with PME1_21B modules by an additional 42 x E1
interfaces, while significantly reducing the cost per E1 interface. This is done by removing the working
PME1_21B, replacing it with a PME1_63, and connecting it with appropriate cables. The I/O slot must then
be reassigned through the management as a PME1_63. The attributes (including cross connects, trails, and
so on) of the first 21 x E1s are retained as they were in the replaced PME1_21B. For a detailed description
of this procedure, see the corresponding IMM.


NOTES:
 The PME1_21B is backward compatible in the Neptune product line from the first version
(V1.2) and in the BG product line from V14.
 When the card is installed in a platform running one of the supported previous versions, it
emulates a PME1_63 card, but with 21 x E1s only.
 The management system does not display PME1_21B, but PME1_63.
 When trying to assign a PME1_21 you will see a "Card-Underutilized" warning. Ignore this
alarm.
 In the inventory information the card is displayed as PME1_63. This is normal for this card.

The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E1 interfaces are listed in the following table.

Table 9-4: PME1_21B modules and E1 interfaces per platform


Platform Max. PME1_21B modules Max. E1 interfaces
NPT-1020 1 21
NPT-1030 3 63
NPT-1050 3 63
NPT-1200 7 147

Figure 9-2: PME1_21B front panel

Table 9-5: PME1_21B front panel indicators


Marking Full name Color Function
ACT. Module active Green Off indicates no power supply.
On steadily indicates the module was not downloaded
successfully or the module cannot be controlled normally
by the MCP1200.
Blinking indicates the module is running normally.
FAIL Module fail Red Normally off. Lights when module failure detected.

NOTE: The PME1_21B supports only balanced E1s directly from its connectors. For
unbalanced E1s, configure an xDDF 21, an external DDF with E1 balanced-to-unbalanced
conversion.


9.1.3 PME1_63
The PME1_63 is a Tslot module with 63 x E1 (2.048 Mbps) balanced electrical interfaces. The PME1_63 can
be configured in any Tslot and supports retiming of up to 63 x E1s. It supports LOS inhibit functionality (very
low sensitivity signal detection), which means that the LOS alarm is masked for signals down to a level of
-20 dB.
The cabling of the PME1_63 module is directly from the front panel with one high-density 272-pin VHDCI
female connector.
PME1_63 enables easy expansion of I/O slots equipped with PME1_21 modules by an additional 42 x E1
interfaces, while significantly reducing the cost per E1 interface. This is done by removing the working
PME1_21, replacing it with a PME1_63, and connecting it with appropriate cables. The I/O slot must then
be reassigned through the management as a PME1_63. The attributes (including cross connects, trails, and
so on) of the first 21 x E1s are retained as they were in the replaced PME1_21. For a detailed description of
this procedure, see the corresponding IMM.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E1 interfaces are listed in the following table.

Table 9-6: PME1_63 modules and E1 interfaces per platform


Platform Max. PME1_63 modules Max. E1 interfaces
NPT-1020 1 63
NPT-1030 3 189
NPT-1050 3 189
NPT-1200 7 441

Figure 9-3: PME1_63 front panel

Table 9-7: PME1_63 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Off indicates no power supply.
On steadily indicates the module was not downloaded
successfully or the module cannot be controlled normally
by the MCP1200.
Blinking indicates the module is running normally.
FAIL Module fail Red Normally off. Lights when module failure detected.


NOTE: The PME1_63 supports only balanced E1s directly from its connectors. For unbalanced
E1s, configure an xDDF 21, an external DDF with E1 balanced-to-unbalanced conversion.

9.1.4 PM345_3
The PM345_3 is a Tslot module with 3 x E3/DS-3 (34 Mbps/45 Mbps) unchannelized electrical interfaces.
Each interface can be configured independently as E3 or DS-3 by the EMS-APT or the LCT-APT. The
PM345_3 can be configured in any Tslot.
The cabling of the PM345_3 module is directly from the front panel with six DIN 1.0/2.3 connectors.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E3/DS-3/STS-1 interfaces are listed in the following table.

Table 9-8: PM345_3 modules and E3/DS-3/STS-1 interfaces per platform


Platform Max. PM345_3 modules Max. E3/DS-3/STS-1 interfaces
NPT-1020 1 3
NPT-1030 3 9
NPT-1050 3 9
NPT-1200 7 21

Figure 9-4: PM345_3 module front panel

Table 9-9: PM345_3 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Off indicates no power supply.
On steadily indicates that the module was not
downloaded successfully or the module cannot be
controlled normally by the MCP1200.
Blinking indicates the module is running normally.
FAIL Module fail Red Normally off. Lights when module failure
detected.
E3/DS3 (for each channel)  DS-3 indication  Orange  On when the channel is working in DS-3 mode. Off when the channel is working in E3 mode.


9.2 SDH cards


9.2.1 SMD1B
The SMD1B is an SDH interface card used to expand ring closures and SDH tributaries. It provides two
STM-1 ports, which can be optical or electrical.
SMQ1 enables easy expansion of I/O slots equipped with SMD1B modules by an additional two STM-1
interfaces, while significantly reducing the cost per STM-1 interface. This is done by removing the working
SMD1B, replacing it with an SMQ1, and connecting it with appropriate fibers. The I/O slot must then be
reassigned through the management as an SMQ1. The attributes (including cross-connects, trails, and so on)
of the first two STM-1s are retained as they were in the replaced SMD1B. For a detailed description of this
procedure, see the corresponding IMM.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1 interfaces are listed in the following table.

Table 9-10: SMD1B modules and STM-1 interfaces per platform


Platform Max. SMD1B modules Max. STM-1 interfaces
NPT-1020 1 2
NPT-1030 3 6

Figure 9-5: SMD1B front panel

Table 9-11: SMD1B front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights when module failure
detected.
LSR1 ON Laser on indication Green Lights steadily when LSR1 is on.
LSR2 ON Laser on indication Green Lights steadily when LSR2 is on.


9.2.2 SMQ1
The SMQ1 is an SDH interface card used to expand ring closures and SDH tributaries. It provides four STM-1
ports, which can be optical or electrical.
SMQ1 enables easy expansion of I/O slots equipped with SMD1B modules by an additional two STM-1
interfaces, while significantly reducing the cost per STM-1 interface. This is done by removing the working
SMD1B, replacing it with an SMQ1, and connecting it with appropriate fibers. The I/O slot must then be
reassigned through the management as an SMQ1. The attributes (including cross-connects, trails, and so on)
of the first two STM-1s are retained as they were in the replaced SMD1B. For a detailed description of this
procedure, see the corresponding IMM.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1 interfaces are listed in the following table.

Table 9-12: SMQ1 modules and STM-1 interfaces per platform


Platform Max. SMQ1 modules Max. STM-1 interfaces
NPT-1030 3 12
NPT-1050 3 12
NPT-1200 7 28

Figure 9-6: SMQ1 front panel

Table 9-13: SMQ1 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights when module failure
detected.
ON1 Laser on indication Green Lights steadily when LSR1 is on.
ON2 Laser on indication Green Lights steadily when LSR2 is on.
ON3 Laser on indication Green Lights steadily when LSR3 is on.
ON4 Laser on indication Green Lights steadily when LSR4 is on.


9.2.3 SMQ1&4
The SMQ1&4 is an SDH interface card used to expand ring closures and SDH tributaries. It provides four
configurable STM-1 or STM-4 ports, which can be optical or electrical for STM-1 configuration. The interface
rate is configurable per port from the management.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1/STM-4 interfaces are listed in the following table.

Table 9-14: SMQ1&4 modules and STM-1/STM-4 interfaces per platform


Platform Max. SMQ1&4 modules Max. STM-1/STM-4 interfaces
NPT-1030 3 12
NPT-1050 3 12
NPT-1200 7 28

Figure 9-7: SMQ1&4 front panel

Table 9-15: SMQ1&4 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights when module failure
detected.
ON1 Laser on indication Green Lights steadily when LSR1 is on.
ON2 Laser on indication Green Lights steadily when LSR2 is on.
ON3 Laser on indication Green Lights steadily when LSR3 is on.
ON4 Laser on indication Green Lights steadily when LSR4 is on.


9.2.4 SMS4
The SMS4 is an SDH interface card used to expand ring closures and SDH tributaries. It provides one STM-4
port.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-4 interfaces are listed in the following table.

Table 9-16: SMS4 modules and STM-4 interfaces per platform


Platform Max. SMS4 modules Max. STM-4 interfaces
NPT-1020 1 1
NPT-1030 3 3

Figure 9-8: SMS4 front panel

Table 9-17: SMS4 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights when module failure
detected.
LSR ON Laser on indication Green Lights steadily when LSR is on.

9.2.5 SMD4
The SMD4 is an SDH interface card used to expand ring closures and SDH tributaries. It provides two STM-4
ports.

NOTE: The NPT-1030 supports SMD4 only in TS2 and TS3, and is only applicable in an ADM16
or QADM-1/4 (4 x ADM-1/4) system.

A maximum of two SMD4 modules can be installed in the NPT-1030, for a total of four STM-4 interfaces in
the platform.


Figure 9-9: SMD4 front panel

Table 9-18: SMD4 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights when module failure
detected.
LSR1 ON Laser on indication Green Lights steadily when LSR1 is on.
LSR2 ON Laser on indication Green Lights steadily when LSR2 is on.

9.2.6 SMS16
The SMS16 is an SDH interface card used to expand ring closures and SDH tributaries. It provides one
STM-16 SFP-based port.

NOTE: The SMS16 is supported by the NPT-1030 with XIO30_16 or XIO30Q_1&4 only.

The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-16 interfaces are listed in the following table.

Table 9-19: SMS16 modules and STM-16 interfaces per platform


Platform Max. SMS16 modules Max. STM-16 interfaces
NPT-1030 3 3
NPT-1050 3 3
NPT-1200 7 7


Figure 9-10: SMS16 front panel

Table 9-20: SMS16 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights when module failure
detected.
ON Laser on indication Green Lights steadily when LSR is on.

9.3 Ethernet layer 1 (EoS) cards


9.3.1 DMFE_4_L1
The DMFE_4_L1 is an EoS processing module with L1 functionality. It provides 4 x 10/100BaseT LAN
interfaces, and four EoS WAN interfaces. The total WAN bandwidth is up to 4 x VC-4. The DMFE_4_L1 can
be configured in any Tslot.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of 10/100BaseT interfaces are listed in the following table.

Table 9-21: DMFE_4_L1 modules and 10/100BaseT interfaces per platform


Platform Max. DMFE_4_L1 modules Max. 10/100BaseT interfaces
NPT-1030 3 12
NPT-1200 7 28

The cabling of the DMFE_4_L1 module is directly from the front panel with four RJ-45 connectors.


Figure 9-11: DMFE_4_L1 front panel

Table 9-22: DMFE_4_L1 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Off indicates no power supply.
On steadily indicates the module wasn't
downloaded successfully or the module
cannot normally be controlled by the
MCP1200.
Blinking indicates the module is running
normally.
FAIL Module fail Red Normally off. Lights when module failure
detected.
- (left LED in the PORT1 to PORT4 RJ-45)   Link/Active (PORT1 to PORT4 FE interface)   Green   Lights when the link is OK. Blinks when packets are received or transmitted.
- (right LED in the PORT1 to PORT4 RJ-45)  Speed (PORT1 to PORT4 FE interface)         Orange  Off when the speed is 10 Mbps. Lights steadily when the speed is 100 Mbps.

9.3.2 DMFX_4_L1
The DMFX_4_L1 is an EoS processing module with L1 functionality. It provides four optical FE (also referred
to as FX) LAN interfaces for the insertion of SFP transceivers, and four EoS WAN interfaces. The total WAN
bandwidth is up to 4 x VC-4. The DMFX_4_L1 can be configured in any Tslot.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of FX interfaces are listed in the following table.

Table 9-23: DMFX_4_L1 modules and FX interfaces per platform


Platform Max. DMFX_4_L1 modules Max. FX interfaces
NPT-1030 3 12
NPT-1200 7 28


Figure 9-12: DMFX_4_L1 front panel

Table 9-24: DMFX_4_L1 front panel LED indicators


Marking Full name Color Function
ACT Module active Green Lights steadily when the module is powered
and running normally.
FAIL Module fail Red Normally off. Lights when module failure is
detected or the module is not loaded.
ON1 to ON4 Laser on Green Lights steadily when the laser in the
corresponding port is on.

9.3.3 DMGE_1_L1
The DMGE_1_L1 is an L1 data Tslot module with one GbE interface on the LAN side and one EoS interface
on the WAN side. The total WAN bandwidth is 4 x VC-4. The DMGE_1_L1 supports electrical or optical
inputs (both inputs are internally connected to the GbE interface) as follows:
 RJ-45 connector for connecting electrical signals
 SFP housing for connecting optical signals
The DMGE_1_L1 can be configured in any Tslot.

NOTE: The DMGE_1_L1 is supported only by the NPT-1030 platform (with XIO30_4 only).

A maximum of three DMGE_1_L1 modules can be installed in the NPT-1030, for a total of three GbE
(electrical/optical) interfaces in the platform.

Figure 9-13: DMGE_1_L1 front panel


Table 9-25: DMGE_1_L1 front panel LED indicators


Marking Full name Color Function
ACT Module active Green Lights steadily when the module is powered
and running normally.
FAIL Module fail Red Normally off. Lights when module failure is
detected or the main component in the
module is not loaded.
- (left LED in the P1 RJ-45)   Link and Tx/Rx (FE interface)   Green   Lights when the link is OK. Blinks when packets are received or transmitted.
- (right LED in the P1 RJ-45)  Speed (FE interface)            Orange  Off when the speed is 10/100 Mbps. Lights steadily when the speed is 1000 Mbps.
LSR ON Laser on Green Lights steadily when the laser is working.

9.3.4 DMGE_4_L1
The DMGE_4_L1 is an EoS processing module with L1 functionality. It provides four GbE LAN interfaces for
the insertion of SFP transceivers, and four EoS WAN interfaces. Both electrical and optical GbE interfaces
are supported by insertion of different types of SFPs - copper SFP for electrical GbE with an RJ-45 connector,
and optical SFP for optical GbE with LC connectors. The total WAN bandwidth is up to 16 x VC-4.

NOTE: The DMGE_4_L1 is supported by the following platforms:


 NPT-1030 (with XIO30_16 or XIO30Q_1&4 only)
 NPT-1200 (the module can be configured in any Tslot, except TS5)

The maximum number of modules that can be installed in the supported platforms and the resulting total
number of GbE (electrical/optical) interfaces are listed in the following table.

Table 9-26: DMGE_4_L1 modules and GbE interfaces per platform


Platform Max. DMGE_4_L1 modules Max. GbE (electrical/optical) interfaces
NPT-1030 3 12
NPT-1200 6 24

Figure 9-14: DMGE_4_L1 front panel


Table 9-27: DMGE_4_L1 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ON1 Laser on indication Green Lights steadily when LSR1 is on.
ON2 Laser on indication Green Lights steadily when LSR2 is on.
ON3 Laser on indication Green Lights steadily when LSR3 is on.
ON4 Laser on indication Green Lights steadily when LSR4 is on.

9.4 Ethernet layer 2 (EoS/MoT) cards


9.4.1 DMFE_4_L2
The DMFE_4_L2 is an EoS/MoT processing module with L2 functionality (MPLS ready). It provides 4 x
10/100BaseT LAN interfaces and 8 x EoS WAN interfaces. The total WAN bandwidth is up to 4 x VC-4. The
DMFE_4_L2 can be configured in any Tslot.
The cabling of the DMFE_4_L2 module is directly from the front panel with four RJ-45 connectors.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of 10/100BaseT interfaces are listed in the following table.

Table 9-28: DMFE_4_L2 modules and 10/100BaseT interfaces per platform


Platform Max. DMFE_4_L2 modules Max. 10/100BaseT interfaces
NPT-1030 3 12
NPT-1200 7 28

Figure 9-15: DMFE_4_L2 front panel


Table 9-29: DMFE_4_L2 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Off indicates no power supply.
On steadily indicates the module wasn't
downloaded successfully or the module cannot
normally be controlled by the MCP1200.
Blinking indicates the module is running
normally.
FAIL Module fail Red Normally off. Lights when module failure is
detected.
- (left LED in the PORT1 to PORT4 RJ-45)   Link/Active (PORT1 to PORT4 FE interface)   Green   Lights when the link is OK. Blinks when packets are received or transmitted.
- (right LED in the PORT1 to PORT4 RJ-45)  Speed (PORT1 to PORT4 FE interface)         Orange  Off when the speed is 10 Mbps. Lights steadily when the speed is 100 Mbps.

9.4.2 DMFX_4_L2
The DMFX_4_L2 is an EoS/MoT processing module with L2 functionality (MPLS ready). It provides four
optical FX LAN interfaces for the insertion of SFP transceivers, and 8 x EoS WAN interfaces. The total WAN
bandwidth is up to 4 x VC-4. The DMFX_4_L2 can be configured in any Tslot.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of FX interfaces are listed in the following table.

Table 9-30: DMFX_4_L2 modules and FX interfaces per platform


Platform Max. DMFX_4_L2 modules Max. FX interfaces
NPT-1030 3 12
NPT-1200 7 28

Figure 9-16: DMFX_4_L2 front panel


Table 9-31: DMFX_4_L2 front panel LED indicators


Marking Full name Color Function
ACT Module active Green Lights steadily when the module is powered
and running normally.
FAIL Module fail Red Normally off. Lights when module failure is
detected or the module is not loaded.
ON1 to ON4 Laser on Green Lights steadily when the laser in the
corresponding port is on.

9.4.3 DMGE_2_L2
The DMGE_2_L2 is an L2 data Tslot module with two GbE interfaces on the LAN side and 64 x EoS interfaces
on the WAN side. The module supports MPLS by appropriate licensing. The total WAN bandwidth is 14 x
VC-4. The DMGE_2_L2 supports electrical or optical GbE by insertion of different types of SFP – copper or
optical.
The DMGE_2_L2 can be configured in any Tslot.
DMGE_4_L2 enables easy expansion of I/O slots equipped with DMGE_2_L2 modules by additional two GbE
interfaces, while significantly reducing the cost per GbE interface. This is done by removing the working
DMGE_2_L2, replacing it with a DMGE_4_L2, and connecting it with appropriate fibers. The I/O slot must
then be reassigned through the management as a DMGE_4_L2. The attributes of the first two GbE
interfaces are retained as they were in the replaced DMGE_2_L2. For a detailed description of this
procedure, refer to the corresponding IMM.
The maximum number of modules that can be installed in the NPT-1200 and the resulting total number of
GbE (electrical/optical) interfaces are listed in the following table.

Table 9-32: DMGE_2_L2 modules and GbE interfaces per platform


Platform Max. DMGE_2_L2 modules Max. GbE (electrical/optical) interfaces
NPT-1200 7 14

Figure 9-17: DMGE_2_L2 front panel


Table 9-33: DMGE_2_L2 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
LSR1 ON Laser on indication Green Lights steadily when LSR1 is on.
LSR2 ON Laser on indication Green Lights steadily when LSR2 is on.

9.4.4 DMGE_4_L2
The DMGE_4_L2 is an L2 data Tslot module with four GbE interfaces on the LAN side and 64 x EoS or up to
30 x MoT interfaces on the WAN side. The module supports MPLS by appropriate licensing. The total WAN
bandwidth can be configured to 16 x VC-4. The DMGE_4_L2 supports electrical or optical GbE by insertion
of different types of SFP – copper or optical.

NOTE: The DMGE_4_L2 can be configured in any Tslot in the NPT-1200, except for TS5.

DMGE_4_L2 enables easy expansion of I/O slots equipped with DMGE_2_L2 modules by additional two GbE
interfaces, while significantly reducing the cost per GbE interface. This is done by removing the working
DMGE_2_L2, replacing it with a DMGE_4_L2, and connecting it with appropriate fibers. The I/O slot must
then be reassigned through the management as a DMGE_4_L2. The attributes of the first two GbE
interfaces are retained as they were in the replaced DMGE_2_L2. For a detailed description of this
procedure see the corresponding IMM.

NOTE: The expansion of a slot capacity (accommodating a DMGE_2_L2 module) from two GbE
to four GbE interfaces by installing a DMGE_4_L2 is not relevant for the NPT-1030, as it
doesn't support the DMGE_2_L2.

The maximum number of modules that can be installed in the supported platforms and the resulting total
number of GbE (electrical/optical) interfaces are listed in the following table.

Table 9-34: DMGE_4_L2 modules and GbE interfaces per platform


Platform Max. DMGE_4_L2 modules Max. GbE (electrical/optical) interfaces
NPT-1030 3 12
NPT-1200 6 24

NOTE: It is highly recommended to install the DMGE_4_L2 close to the fan units (FCUs) in TS2
and TS3 of the NPT-1030 and TS2, TS3, TS4, and TS6 of the NPT-1200.


Figure 9-18: DMGE_4_L2 front panel

Table 9-35: DMGE_4_L2 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ON/LINK1 to Laser on indication Green Lights steadily when the corresponding laser is
ON/LINK4 on.

9.4.5 DMGE_8_L2
The DMGE_8_L2 is an L2 data Tslot module with 8 GbE interfaces on the LAN side and 96 x EoS or up to 60 x
MoT interfaces on the WAN side. The module occupies a double slot in the Tslot module space and can be
installed only in slot pairs TS1+TS2 and TS6+TS7 of the NPT-1200. A spacer between each of these slot pairs
must be removed to enable the installation of the DMGE_8_L2. The procedure for removing this spacer is
described in the NPT-1200 Installation, Operation, and Maintenance Manual.
The module supports MPLS by appropriate licensing. The total WAN bandwidth is 32 x VC-4. The
DMGE_8_L2 has two combo ports and six optical ports. The combo ports support direct connection of
electrical signals through dedicated RJ-45 connectors, or optical signals through SFP housings. The other six
ports are SFP-based and enable electrical or optical GbE by insertion of different types of SFP – copper or
optical.

NOTE: The DMGE_8_L2 is supported only by the NPT-1200 platform (up to two modules in
slots TS1+TS2 and TS6+TS7).

NOTE:
 It is highly recommended to offer and install one DMGE_8_L2 instead of two DMGE_4_L2
cards.
 The DMGE_8_L2 supports in-band management over MoE by the NPT-1200.


Because the DMGE_8_L2 occupies a double slot, it can be installed only in two adjacent horizontal slots
(TS1+TS2 and TS6+TS7) in the NPT-1200. Therefore, a maximum of two DMGE_8_L2 modules can be
installed in the NPT-1200, for a total of 16 GbE (electrical/optical) interfaces per platform.

Figure 9-19: DMGE_8_L2 front panel

Table 9-36: DMGE_8_L2 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
- (left LED in the P1, Link and Tx/Rx Green Lights when the link is OK. Blinks when packets
P2 RJ-45) (combo GbE interface) are received or transmitted.
- (right LED in the P1, Speed (combo GbE Orange Off when the speed is 10/100 Mbps. Lights
P2 RJ-45) interface) steadily when the speed is 1000 Mbps.
LSR1 ON Laser on indication Green Lights steadily when LSR1 is on.
LSR2 ON Laser on indication Green Lights steadily when LSR2 is on.
ON/LINK3 to Laser on indication Green Lights steadily when the corresponding laser is
ON/LINK8 on.

9.4.6 DMXE_22_L2
The DMXE_22_L2 is an L2 (MPLS-TP ready) data Tslot module with 2 x 10GbE and 2 x GbE interfaces on the
LAN side and 64 x EoS or up to 30 x MoT interfaces on the WAN side. The module occupies a single slot in
the Tslot module space. The module supports MPLS by appropriate licensing. The total WAN bandwidth is
16 x VC-4. The card supports 1588v2 master, slave, and transparent modes.

NOTE: The DMXE_22_L2 supports unique TM. Make sure to read about the card TM
functionalities and features before the implementation. See DMXE_22_L2 Traffic
Management (TM).

NOTE: The DMXE_22_L2 supports in-band management over MoE by the NPT-1200.


The 10 GbE ports use new SFP+ (OTP10_xx) transceivers that provide 10 GbE connectivity in a small form
factor, similar in size to the legacy SFPs. The SFP+ enables the design of modules with higher density and
lower power consumption.
The two GbE ports are SFP-based and enable electrical or optical GbE by insertion of different types of SFP
– copper or optical.

NOTE: The DMXE_22_L2 is supported in the NPT-1030 with up to two modules, and in the
NPT-1200 platform with up to four modules.

The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of GbE (electrical/optical) and 10 GbE interfaces are listed in the following table.

Table 9-37: DMXE_22_L2 modules, GbE, and 10 GbE interfaces per platform
Platform Max.DMXE_22_L2 Max. GbE (electrical/optical) Max. 10 GbE interfaces
modules interfaces
NPT-1030 2 4 4
NPT-1200 4 8 8

Figure 9-20: DMXE_22_L2 front panel

Table 9-38: DMXE_22_L2 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ON/LINK1 to Laser on indication Green Lights steadily when the corresponding laser is
ON/LINK2 (10 GbE ports) on.
ON/LINK3 to Laser on indication Green Lights steadily when the corresponding laser is
ON/LINK4 (GbE ports) on.


9.4.7 DMXE_48_L2
The DMXE_48_L2 is an L2 (MPLS-ready) data Tslot module with four 10GbE and 8 x GbE interfaces on the
LAN side and 96 x EoS/MoT interfaces on the WAN side. The module occupies a double slot in the Tslot
module space and can be installed only in slot pairs TS1+TS2 and TS6+TS7 of the NPT-1200. A spacer
between each of these slot pairs must be removed to enable the installation of the DMXE_48_L2. The
procedure for removing this spacer is described in the NPT-1200 Installation, Operation, and Maintenance
Manual. The module supports MPLS by appropriate licensing. The total WAN bandwidth is 32 x VC-4. The
card supports 1588v2 master, slave, and transparent modes.
The 10 GbE ports use new SFP+ (OTP10_xx) transceivers that provide 10 GbE connectivity in a small form
factor, similar in size to the legacy SFPs. The SFP+ enables the design of modules with higher density and
lower power consumption relative to XFP transceivers.
The eight GbE ports are SFP-based and enable electrical or optical GbE by insertion of different types of SFP
– copper or optical.

NOTE: The DMXE_48_L2 is supported only in the NPT-1200 platform with XIO matrix (up to
two modules in slots TS1+TS2 and TS6+TS7).

NOTE: The DMXE_48_L2 supports in-band management over MoE by the NPT-1200.

Because the DMXE_48_L2 occupies a double slot, it can be installed only in two adjacent horizontal slots
(TS1+TS2 and TS6+TS7) in the NPT-1200. Therefore, a maximum of two DMXE_48_L2 modules can be
installed in the NPT-1200, for a total of 8 x 10 GbE and 16 x GbE (electrical/optical) interfaces per platform.

Figure 9-21: DMXE_48_L2 front panel

Table 9-39: DMXE_48_L2 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.

ON/LINK1 to Laser on indication Green Lights steadily when the corresponding laser is
ON/LINK4 (10 GbE ports) on.
ON/LINK1 to Laser on indication Green Lights steadily when the corresponding laser is
ON/LINK8 (GbE ports) on.

9.5 Multiservice cards (CES)


9.5.1 DMCES1_4
The DMCES1_4 is a CES multiservice module that provides Circuit Emulation Services (CES) for up to 4 x
channelized STM-1 interfaces, or a single STM-4 interface. It supports the SAToP and CESoPSN standards
and has four SFP housings on the front panel for connecting STM-1 or STM-4 customer signals. In total, it
can support up to 252 E1 CES services.

NOTES:
 The DMCES1_4 can be installed in any Tslot in the NPT-1200, except for TS5.
 The DMCES1_4 can be installed in the NPT-1030 with MCP30B and XIO30_16 or
XIO30Q_1&4 only.

The maximum number of modules that can be installed in the supported platforms, and the
resulting total number of STM-1/STM-4 interfaces, are listed in the following table.

Table 9-40: DMCES1_4 modules and STM-1/STM-4 interfaces per platform


Platform Max. DMCES1_4 modules Max. STM-1/STM-4 interfaces
NPT-1020/NPT-1021 1 4/1
NPT-1030 3 12/3
NPT-1050 3 12/3
NPT-1200 6 24/4

Connectivity to the packet network is made through one of the following options:
 Direct 1.25G SGMII connection to the central packet switch on CPS cards through the backplane.
 Connection to a 3rd-party device (router/switch) through the SFP-based GbE port on the front panel,
working in standalone mode with CESoETH and CESoIP/UDP encapsulation.
Each client port can be configured to support an STM-1 interface. Port No. 1 can also be configured to
support channelized STM-4; in this case, the other three ports are disabled.

NOTE: The GbE port is not needed by the supported platforms; connection to this port is
made through the backplane.


Figure 9-22: DMCES1_4 front panel

Table 9-41: DMCES1_4 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
LSR ON/LINK1 to LSR Laser on indication Green Lights steadily when the corresponding laser is
ON/LINK4 on.
GbE Laser on indication Green Lights steadily when the GbE port laser is on.

9.5.2 MSE1_16
The MSE1_16 is a CES multiservice card that provides CES for up to 16 x E1/T1 interfaces. It supports the
SAToP and CESoPSN standards and has a SCSI 100-pin female connector on the front panel for connecting
the E1/T1 customer signals.
Connectivity to the packet network is made by direct 1.25G SGMII connection to the central packet switch
on the CPS card through the backplane.

NOTES:
 When the MSE1_16 is installed in the NPT-1200 the card can be configured in any Tslot,
except for TS5.
 The MSE1_16 card isn’t supported by the NPT-1800 and NPT-1200 with MCIPS320.

The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of E1/T1 interfaces are listed in the following table.


Table 9-42: MSE1_16 modules and E1/T1 interfaces per platform


Platform Max. MSE1_16 modules Max. E1/T1 interfaces
NPT-1020/NPT-1021 1 16
NPT-1050 3 48
NPT-1200 6 96

The cabling of the MSE1_16 module is directly from the front panel with a 100-pin SCSI female connector.
Figure 9-23: MSE1_16 front panel

Table 9-43: MSE1_16 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz. Off
or on steadily indicates the card is not running
normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.

9.5.3 MSC_2_8
The MSC_2_8 is a CES multiservice card that provides CES for up to 8 x E1/T1 and 2 x STM-1/OC-3
interfaces. It supports the SAToP and CESoPSN standards and has two SFP housings for connecting
STM-1/OC-3 customer signals and a 36-pin SCSI female connector for connecting E1/T1 customer signals on
the front panel.
Connectivity to the packet network is made by direct 1.25G SGMII connection to the central packet switch
on the CPS card through the backplane.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1 interfaces are listed in the following table.

Table 9-44: MSC_2_8 modules and STM-1/OC-3 and E1/T1 interfaces per platform
Platform Max. MSC_2_8 modules Max. STM-1/OC-3 Max. E1/T1 interfaces
interfaces
NPT-1020/NPT-1021 1 2 8
NPT-1050 3 6 24

NPT-1200 6 12 48
NPT-1800 23 46 184

Figure 9-24: MSC_2_8 front panel

Table 9-45: MSC_2_8 front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure
is detected.
ACT. Card active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
LSR ON (separate LED Laser on Green Lights steadily when laser is on.
for each port P1, P2)

9.5.4 MS1_4
The MS1_4 is a CES multiservice card that provides Circuit Emulation Services (CES) for up to 4 x STM-1
interfaces, or a single STM-4 interface. It supports the SAToP and CESoPSN standards and has four SFP
housings on the front panel for connecting STM-1 or STM-4 customer signals. In total, it can support up to
252 E1 CES services. In addition, it supports the CESoETH and CESoMPLS emulation formats.

NOTE: The STM-4 interface is supported only in the leftmost port (P1) of the MS1_4.

The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1 interfaces are listed in the following table.

Table 9-46: MS1_4 modules and STM-1/OC-3 interfaces per platform


Platform Max. MS1_4 modules Max. STM-1/OC-3 interfaces
NPT-1020/NPT-1021 1 4
NPT-1050 3 12
NPT-1200 6 24
NPT-1800 23 94


Figure 9-25: MS1_4 front panel

The MS1_4 provides the following main functions:


 4 x STM-1 or 1 x STM-4 interfaces
 CES Services:
 STM-1 channelized to 63 x VC-12 (E1) interfaces
 STM-4 channelized to 252 x VC-12 (E1) interfaces
 OC-3 channelized to 84 x VT-1.5 (DS1) interfaces
 OC-12 channelized to 336 x VT-1.5 (DS1) interfaces
 CESoETH and CESoMPLS modes
 SAToP and CESoPSN
 Clock recovery for CES Services:
 Adaptive and differential clock recovery as per ITU-T G.8261
 For 4 x STM-1/OC-3 interfaces, each E1/DS1 channel has an independent clock domain in
Differential or Adaptive clock recovery
 CEP Services:
 CEP service based on VC-3, VC-4, VC4-4c
 CEP service based on STS-1, STS-3c, STS-12c
 MSP 1+1 protection between two STM-1/OC-3 ports, intra-card
 MSP 1+1 protection between STM-1/OC-3 ports, cross-card
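The channelization figures above can be cross-checked with standard SDH/SONET arithmetic (63 VC-12 tributaries per channelized STM-1, 28 VT-1.5 tributaries per STS-1); note that 4 x 63 = 252, matching the 252-E1 total quoted for these cards. The constants below are the standard values, not taken from the card specification:

```python
# Standard SDH/SONET channelization arithmetic behind the service figures above.
VC12_PER_STM1 = 63    # E1 tributaries in a channelized STM-1
VT15_PER_STS1 = 28    # DS1 tributaries in a channelized STS-1

assert 1 * VC12_PER_STM1 == 63      # STM-1  -> 63 x VC-12 (E1)
assert 4 * VC12_PER_STM1 == 252     # STM-4  -> 4 x 63 = 252 x VC-12 (E1)
assert 3 * VT15_PER_STS1 == 84      # OC-3   -> 84 x VT-1.5 (DS1)
assert 12 * VT15_PER_STS1 == 336    # OC-12  -> 336 x VT-1.5 (DS1)
```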

Table 9-47: MS1_4 front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure
is detected.
ACT. Card active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
LSR ON (separate LED Laser on Green Lights steadily when laser is on.
for each port P1 to
P4)


9.5.5 MSE1_32
The MSE1_32 is a CES multiservice card that provides CES for up to 32 x E1/T1 balanced interfaces. It supports
the SAToP and CESoPSN standards and has two SCSI 100-pin female connectors on the front panel for
connecting the E1/T1 customer signals.
Connectivity to the packet network is made by direct 1.25G SGMII connection to the central packet switch
on the CPS card through the backplane.

NOTES:
 When the MSE1_32 is installed in the NPT-1200 the card can be configured in any Tslot,
except for TS5.
 When the MSE1_32 is installed in the NPT-1800 the card can be configured in any Tslot,
except for TS22.

NOTE: Two external xDDF-21 units are required for connecting 32 x E1/T1 unbalanced
interfaces to the MSE1_32.

The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of E1/T1 interfaces are listed in the following table.

Table 9-48: MSE1_32 modules and E1/T1 interfaces per platform


Platform Max. MSE1_32 modules Max. E1/T1 interfaces
NPT-1020 1 32
NPT-1050 3 96
NPT-1200 5 160
NPT-1800 23 736

Figure 9-26: MSE1_32 front panel

The MSE1_32 provides the following main functions:


 32 x E1/DS1 balanced interfaces; external xDDF-21 units are required for unbalanced interfaces.
 CES Services:
 CESoETH and CESoMPLS mode.
 SAToP and CESoPSN.
 Clock recovery for CES Services:
 Adaptive and differential clock recovery as per ITU-T G.8261.


 32 clock domains.
 PM counters support per channel.
 Alarm support per channel.
The cabling of the MSE1_32 module is directly from the front panel with two 100-pin SCSI female
connectors.

Table 9-49: MSE1_32 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz. Off
or on steadily indicates the card is not running
normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.

9.6 Pure packet cards


9.6.1 DHGE_4E
The DHGE_4E is a data hybrid card that supports up to 4 x 10/100/1000BaseT ports with connection to the
packet switching matrix, and with PoE+ functionality.

NOTES:
 The DHGE_4E can be configured in any Tslot in the NPT-1200, except for TS5.
 The DHGE_4E can be configured in Group I Tslots only in the NPT-1800, except for TS22.

The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of 10/100/1000BaseT interfaces are listed in the following table.

Table 9-50: DHGE_4E modules and 10/100/1000BaseT interfaces per platform


Platform Max. DHGE_4E modules Max. 10/100/1000BaseT interfaces
NPT-1020/NPT-1021 1 4
NPT-1050 3 12
NPT-1200 6 24
NPT-1800 11 44


The cabling of the DHGE_4E module is directly from the front panel with four RJ-45 connectors.
Figure 9-27: DHGE_4E front panel

NOTE: When the DHGE_4E is installed in the NPT-1021, only optics cards are supported by the
EXT-2U.

NOTE: PoE+
 When the DHGE_4E is configured with PoE, the main power feeding voltage must be less
than 58 VDC.
 The DHGE_4E card's maximum power consumption for PoE is 62 W; any mixture of PD devices is
allowed up to 62 W.
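As a quick way to check whether a given mix of powered devices (PDs) fits the 62 W budget, one can sum the per-class PSE power allocations defined by IEEE 802.3at (Class 0-4). This is a hedged sketch; the helper function and its name are illustrative, not part of any ECI tool:

```python
# Sketch: checking that a mix of powered devices (PDs) fits the card's 62 W PoE budget.
# Per-class maximum PSE power allocations (IEEE 802.3at, Class 0-4), in watts.
PSE_POWER_BY_CLASS = {0: 15.4, 1: 4.0, 2: 7.0, 3: 15.4, 4: 30.0}
CARD_POE_BUDGET_W = 62.0

def mix_fits_budget(pd_classes):
    """True if the summed per-class allocations stay within the card budget."""
    return sum(PSE_POWER_BY_CLASS[c] for c in pd_classes) <= CARD_POE_BUDGET_W

assert mix_fits_budget([4, 4])          # 60.0 W: two PoE+ Class 4 devices fit
assert not mix_fits_budget([4, 4, 2])   # 67.0 W exceeds the 62 W budget
```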

Table 9-51: DHGE_4E front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Off indicates no power supply.
On steadily indicates that the module was
not downloaded successfully or the module
cannot be controlled by the MCP1200
normally.
Blinking indicates the module is running
normally.
FAIL Module fail Red Normally off. Lights when module failure is
detected.
- (left LED in the Link/Active (P1 to P4 Green Lights when the link is OK. Blinks when
P1 to P4 RJ-45) 10/100/1000BaseT packets are received or transmitted.
interface)
- (right LED in the Speed (P1 to P4 FE Orange Off when the speed is 10/100 Mbps. Lights
P1 to P4 RJ-45) 10/100/1000BaseT steadily when the speed is 1000 Mbps.
interface)


9.6.2 DHGE_8
The DHGE_8 is a data hybrid card that supports up to 8 x GbE/FX ports with connection to the packet
switching matrix (CSFP for 8 ports, SFP for 4 ports).

NOTE:
 When the DHGE_8 is installed in the NPT-1200 it can be configured in any Tslot, except for
TS5.
 When installed in the NPT-1020/NPT-1021 the DHGE_8 supports up to four GbE ports, as
the NPT-1020/NPT-1021 doesn't support CSFPs.
 When the DHGE_8 is installed in the NPT-1020/NPT-1021, the EXT-2U packet based cards
are not supported (optics and TDM only).

The maximum number of modules that can be installed in the supported platforms, and the resulting total
number of GbE interfaces are listed in the following table.

Table 9-52: DHGE_8 modules and GbE interfaces per platform


Platform Max. DHGE_8 Max.1000Base-X interfaces Max. 10/100/1000BaseT
modules (electrical) and 100BaseX
interfaces
NPT-1020/NPT-1021 1 4 4
NPT-1050 3 24 12
NPT-1200 6 48 24
NPT-1800 23 119 92

The cabling of the DHGE_8 module is directly from the front panel with four SFP or CSFP transceivers. The
card has four positions for installing SFP or CSFP transceivers; the positions are gathered in pairs: P1~P5,
P2~P6, P3~P7, and P4~P8. Each pair can house one SFP or one CSFP. Each SFP supports one optical GbE/FX
port, totaling 4 x GbE/FX ports in a card. Each CSFP supports two GbE/FX ports, totaling 8 x GbE/FX ports in
a card. A mix of SFP and CSFP transceivers in the same card is also supported.
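The pairing scheme above (P1~P5, P2~P6, P3~P7, P4~P8) can be sketched as a simple cage-to-port mapping; the function below is illustrative only, assuming cage k serves port Pk with an SFP, or ports Pk and P(k+4) with a CSFP:

```python
# Sketch: SFP/CSFP cage-to-port mapping on a 4-cage, 8-port card.
def ports_for_cage(cage: int, transceiver: str):
    """Return the GbE/FX port numbers served by a cage (1-4)."""
    if transceiver == "SFP":
        return [cage]               # one optical port per SFP
    if transceiver == "CSFP":
        return [cage, cage + 4]     # two bidirectional ports per CSFP
    raise ValueError("unknown transceiver type")

assert ports_for_cage(1, "SFP") == [1]
assert ports_for_cage(3, "CSFP") == [3, 7]
# A mixed population of SFPs and CSFPs in one card:
mixed = [ports_for_cage(c, t) for c, t in [(1, "CSFP"), (2, "SFP"), (3, "SFP"), (4, "CSFP")]]
assert sum(len(p) for p in mixed) == 6  # 2 + 1 + 1 + 2 active ports
```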

NOTE:
 When DHGE_8 is installed in TS3 or TS4 of an NPT-1200, equipped with MCIPS320 it
supports only SFPs in these slots.
 When DHGE_8 is installed in TS7 to TS18 of an NPT-1800, it supports only SFPs in these
slots.


Figure 9-28: DHGE_8 front panel

NOTE: When the DHGE_8 or DHGE_4E are installed in the NPT-1020 Tslot, only TDM and
EoS/MoT cards are supported by the EXT-2U.

Table 9-53: DHGE_8 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ON (P1 to P8) Laser on indication Green Lights steadily when the corresponding laser is
(GbE ports) on.

9.6.3 DHGE_16
The DHGE_16 is a data hybrid card that supports up to 8 x 10/100/1000BaseT ports and 8 x GbE/FX ports
with connection to the packet switching matrix (CSFP support for 8 optical ports, SFP for 4 optical ports).
The module occupies a double slot in the Tslot module space and can be installed only in slot pairs TS1+TS2
and TS6+TS7 of the NPT-1200, or TS2+TS3 of the NPT-1050 and NPT-1800 (Group I). A spacer between each
of these slot pairs must be removed to enable the installation of the DHGE_16. The procedure for removing
this spacer is described in the NPT-1800, NPT-1200 and NPT-1050 Installation, Operation, and Maintenance
Manual.
The module supports MPLS by appropriate licensing. The card supports 1588v2 master, slave, and
transparent modes.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of 1000Base-X/100Base-FX and 10/100/1000BaseT (electrical) interfaces are listed in the following
table.


Table 9-54: DHGE_16 modules, 1000Base-X/100Base-FX and 10/100/1000BaseT interfaces per platform
Platform Max. DHGE_16 Max.1000Base-X Max 100BaseX Max.
modules interfaces interfaces 10/100/1000BaseT
(electrical) interfaces
NPT-1050 1 8 4 12
NPT-1200 2 16 8 24
NPT-1800 4 32 16 48

Ports P1 to P8 are RJ-45 connectors for 8 x 10/100/1000BaseT electrical interfaces. Ports P9 to P16 are
grouped in pairs: P9~P13, P10~P14, P11~P15, and P12~P16. Each pair position can house one SFP or one
CSFP transceiver, supporting one 1000Base-X/100Base-FX port (for SFP) or two bidirectional 1000Base-X
ports (for CSFP).
Figure 9-29: DHGE_16 front panel

Table 9-55: DHGE_16 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
- (left LED in the P1 to Link/Active (P1 to P8 Green Lights when the link is OK. Blinks when packets
P8 RJ-45) interface) are received or transmitted.
- (right LED in the P1 Speed (P1 to P8 Orange Off when the speed is 10/100 Mbps. Lights
to P8 RJ-45) interface) steadily when the speed is 1000 Mbps.
ON (P9 to P16) Laser on indication Green Lights steadily when the corresponding laser is
(GbE ports) on.


9.6.4 DHGE_24
The DHGE_24 is a data hybrid card that supports up to 24 x GbE/FX ports with connection to the packet
switching matrix (CSFP/SFP support).
The module occupies a double slot in the Tslot module space and can be installed in slot pairs TS1+TS2 and
TS6+TS7 of the NPT-1200 and TS2+TS3 of the NPT-1050. A spacer between each of these slot pairs must be
removed to enable the installation of the DHGE_24. The procedure for removing this spacer is described in
the NPT-1800, NPT-1200 and NPT-1050 Installation, Operation, and Maintenance Manual.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of 1000Base-X/100Base-FX and 10/100/1000BaseT (electrical) interfaces are listed in the following
table.

Table 9-56: DHGE_24 modules, 1000Base-X/100Base-FX and 10/100/1000BaseT interfaces per platform
Platform Max. DHGE_24 Max.1000Base-X interfaces Max. 10/100/1000BaseT
modules (electrical) & 100BaseX
interfaces
NPT-1050 1 24 12
NPT-1200 2 48 24
NPT-1800 4 72 36

The card ports are grouped in pairs: P1~P13, P2~P14, P3~P15, P4~P16, P5~P17, P6~P18, P7~P19, P8~P20,
P9~P21, P10~P22, P11~P23, and P12~P24. Each pair position can house one SFP or one CSFP transceiver,
supporting one 1000Base-X/100Base-FX port (for SFP) or two bidirectional 1000Base-X ports (for CSFP).

Figure 9-30: DHGE_24 front panel

Table 9-57: DHGE_24 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ON (P1 to P24) Laser on indication Green Lights steadily when the corresponding laser is
(GbE ports) on.


9.6.5 DHXE_2
The DHXE_2 is a data hybrid card that supports up to 2 x 10GbE ports with connection to the packet
switching matrix.

NOTES:
 Up to six cards are supported by the NPT-1200, except for TS5.
 With this card, the NPT-1200 (100G) and NPT-1050 support a maximum of 10 x 10GbE interfaces; only
three cards can be assigned when the CPS100 is used in the NPT-1200.
 The DHXE_2 card isn't supported by the NPT-1800.

The maximum number of modules that can be installed in the supported platforms and the resulting total
number of 10GbE interfaces are listed in the following table.

Table 9-58: DHXE_2 modules and 10GbE interfaces per platform


Platform Max. DHXE_2 modules Max. 10GbE interfaces
NPT-1050 3 6
NPT-1200 (100G) 3 6
NPT-1200 (320G) 6 12

The cabling of the DHXE_2 module is directly from the front panel with two SFP+ transceivers. The card has
two positions for installing SFP+ transceivers.

Figure 9-31: DHXE_2 front panel

Table 9-59: DHXE_2 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ON (P1 and P2) Laser on indication Green Lights steadily when the corresponding laser is
(10GbE ports) on.


9.6.6 DHXE_4
The DHXE_4 is a data hybrid card that supports up to 4 x 10GbE ports with connection to the packet
switching matrix.

NOTES:
 The DHXE_4 is supported in the NPT-1200 with CPS320/CPTS320 and in the NPT-1800.
 Up to six cards are supported in the NPT-1200, excluding TS5.
 Up to 18 cards are supported in the NPT-1800, excluding TS23.
 The NPT-1200 supports max. 32 x 10GbEs with the CPS320/CPTS320.

A maximum of six modules can be installed in the NPT-1200, resulting in a total of 24 x 10GbE interfaces
in the platform, and 32 x 10GbE including the 8 x 10GbE ports on the CPTS320/CPS320 matrix cards.
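The 32-port figure is straightforward arithmetic over the quoted limits (six DHXE_4 cards of four ports each, plus the eight 10GbE ports on the matrix cards); a minimal sketch:

```python
# 10GbE interface count for a fully populated NPT-1200 (320G) with DHXE_4 cards.
PORTS_PER_DHXE_4 = 4
MAX_DHXE_4_CARDS = 6      # any Tslot except TS5
MATRIX_10GE_PORTS = 8     # 10GbE ports on the CPTS320/CPS320 matrix cards

tslot_total = MAX_DHXE_4_CARDS * PORTS_PER_DHXE_4
assert tslot_total == 24                        # 10GbE on Tslot cards alone
assert tslot_total + MATRIX_10GE_PORTS == 32    # platform total incl. matrix ports
```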

Table 9-60: DHXE_4 modules and 10GbE interfaces per platform


Platform Max. DHXE_4 modules Max. 10GbE interfaces
NPT-1200 (320 G) 6 24
NPT-1800 18 71

The cabling of the DHXE_4 module is directly from the front panel with four SFP+ transceivers. The card has
four positions for installing SFP+ transceivers.
Figure 9-32: DHXE_4 front panel

Table 9-61: DHXE_4 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ON (P1 to P4) Laser on indication Green Lights steadily when the corresponding laser is
(10GbE ports) on.


9.6.7 DHXE_4O
The DHXE_4O is a data hybrid card that supports up to 4 x 10GbE/OTU2/OTU2e ports with OTN wrapping and
connection to the packet switching matrix.

NOTES:
 The DHXE_4O is supported in the NPT-1800 and NPT-1200 with CPS320/MCIPS320.
 NPT-1800 supports max. 71 x 10GbEs (up to 18 cards).

Table 9-62: DHXE_4O modules and 10GbE interfaces per platform


Platform Max. DHXE_4O modules Max. 10GbE interfaces
NPT-1200 6 24
NPT-1800 18 71

The cabling of the DHXE_4O module is directly from the front panel with four SFP+ transceivers. The card
has four positions for installing SFP+ transceivers.

Figure 9-33: DHXE_4O front panel

Table 9-63: DHXE_4O front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ON (P1 to P4) Laser on indication Green Lights steadily when the corresponding laser is
(10GbE ports) on.


9.7 NFV cards


Network Function Virtualization (NFV) can be used to simplify and automate networks. Legacy networks
provide assorted services, each running on a dedicated piece of hardware connected to the
organizational network. NFV allows customers to use a single platform to run all these applications and
services. This reduces the required hardware mix, energy consumption, labor, and more. In addition, it
provides an enhanced customer experience and increases overall satisfaction.
NFV is a mechanism for taking network functions that were implemented through dedicated hardware and
running them virtually, using more flexible, standards-based server equipment. The NFV line cards for the
Neptune product line introduced here are functionally equivalent to a standard server.
NFV cards help operators bring specialized services closer to customers, which means that complex services
no longer have to be carried across the entire backbone of the network. This move lets service providers
reconfigure service delivery models and further maximize efficiency across the environment as a whole.
The NFV line card can host multiple VNFs, such as vRouter, vSBC, vWAN optimization, vProbing, vSLA
monitoring, vRAN, vFirewall, and vEPC (evolved packet core). These VNFs can be hosted together in
different combinations on the same card, running on a virtual infrastructure. Service function chaining (SFC)
enables different connections between the VNFs themselves, as well as stitching the VNF connectivity to
the physical NE.
A number of NFV card types are defined to address different application needs (different throughput,
number of ports and port types, single or double slot, etc.). NFV cards can be installed in any Tslot
of Neptune platforms to provide VNFs inside the NEs, adding value to the Neptune solution: for example,
securing the management channel and/or the services provided by its platforms with strong security
functions implemented by VNFs.
The following NFV cards are offered for Neptune platforms in the current version:
 NFVG_4
The card's features and functions are described in the following section.


9.7.1 NFVG_4
The NFVG_4 card is a common Tslot card for Neptune platforms that can implement various VNFs
(embedded NFV solution). The NFVG_4 is a single-slot NFV card with four GbE ports. It can be installed in any
Neptune platform with Tslots. The maximum connection bandwidth to the central packet switch is 4 x 1GbE and may
be affected by the NP NIF resource limitation (due to the dynamic allocation mechanism) in some systems.
The following figure shows the NFVG_4 general view.
Figure 9-34: NFVG_4 general view

The NFVG_4 has the following main features and functions:


 Based on an Intel x86 (E3-1105Cv2) CPU
 Processing power of up to 4 x 1GE full duplex
 4 x 1GE ports (SFP based) on the front panel (3 x service ports and 1 x service/management port)
 Dual (A/B for redundancy) 4 x SGMII internal ports to the backplane (for connecting central packet
switch in CPS/MCPS/CIPS/MCIPS cards)
 Controls to/from MCP/MCPS/MCIPS cards
 Supports various traffic models, as described below
The following figure shows the NFVG_4 front panel.
Figure 9-35: NFVG_4 front panel


Table 9-64: NFVG_4 front panel interfaces


Marking Interface type Function
USB/MNG micro-USB 2.0 connector 10/100BaseT Ethernet interface for management
GE1 to GE4 SFP housing GbE traffic ports
CONSOLE mini-USB connector Serial RS-232 communication port for use by technical
support personnel (debug, maintenance, etc.).

Table 9-65: NFVG_4 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ON (GE1 to GE4) Laser on indication Green Lights steadily when the corresponding laser is
(GbE ports) on.

9.7.2 NFVG_4 block diagram


The following figure shows the NFVG_4 block diagram.

Figure 9-36: NFVG_4 simplified block diagram


The VNF traffic processing is based on an Intel x86 (E3-1105Cv2) CPU with a DH8903CC PCH (Platform Controller
Hub), 2 x 8GB DDR-3 memory, and 32GB of high-endurance storage. This system can process the packets from 4 x GE ports
arriving through the i350 Ethernet controller. The GE lanes can be from the front panel ports (SFP based) or
from internal backplane SGMII ports. To support flexible routing, all GE lanes from the panel and backplane
are connected to the matrix (X-point). This supports traffic path provisioning.
The FPGA block mainly implements the control interfaces that allow the MCP to manage the NFVG_4 card in a
Tslot of Neptune platforms, and the timing interfaces to/from the CPS/CIPS. The NFVG_4 includes an IDPROM so
that the MCP can read it via I2C to identify the card.
The NFVG_4 block diagram includes the following main parts:
 Power supply
 Traffic subsystem
 Control subsystem
 Timing
 Backplane and front panel Interfaces
Because the NFVG_4 has to be supported in all Neptune platforms with Tslots, and different Neptune platforms
have different control interfaces, the control interface of the card must be flexible and compatible with all
supported Neptune platforms.


9.7.3 NFVG_4 applications


When an NFV card is integrated in a Neptune platform, it can support various applications, including:
1. Local:
 Input and output traffic via the front panel ports
 No backplane connectivity to the NPT platform
 The NPT provides power and basic management
The following figure shows the traffic flow in each of the applications. The numbers in the figure
correspond to the numbering in this section.
Figure 9-37: Traffic flow in each service mode

2. Service port on NFVG_4:


 The panel port is a service port for the NPT platform
 Traffic flows to/from the NPT ports and the NFVG_4 panel ports
 Normal mode: possible NFV processing
 Transparent mode: no NFV processing is done
 UNI, NNI, MoE, CESoETH, CESoMPLS
 CES is a private case of MPLS/PB


 SCADA may be over ETH or TDM


 There is no possibility to analyze TDM traffic (GFP demapping is required)
3. Inline NFV:
 The Neptune service runs between the NPT ports
 The NFV is inline data path (using the NFVG_4 backplane connectivity)
 Per Port mode: All VSIs on the port are NFV enabled
 Per VSI mode: Some VSIs on a port are NFV enabled
4. Mirroring:
 The NPT service runs between the NPT ports
 Ports are mirrored to the NFVG_4 card (using the NFVG_4 backplane connectivity)

NOTE: The applications for Inline NFV and Mirroring will only be supported in later versions of
the NFVG_4 card.

9.7.4 Installation considerations for NFVG_4 in Neptune platforms


The NFVG_4 is a compute-intensive card that consumes much more power than regular Tslot cards. This
must be considered when planning the population of Tslot cards in the platforms. The type of matrix card in
the platform also affects the number of possible NFVG_4 cards. Another factor to be considered is the
connectivity from the platform's backplane to the Tslots.
The maximum power consumption of the NFVG_4 is 45 W. Accordingly, the following table lists the
installation possibilities in different Neptune platforms.

NOTE: In general it is recommended to install NFVG_4 cards as near as possible to the cooling
fans of the platform.

Table 9-66: NFVG_4 installation considerations


Platform                                  PS capability (W)  Number of Tslots  Power left for all Tslots (W)  Max. NFVG_4 cards
NPT-1020                                  250                1                 59                             1
NPT-1050                                  450                3                 162                            3
NPT-1200 with CPTS100                     550                6                 220                            4
NPT-1200 with CPTS320                     550                6                 100                            2
NPT-1200 with CPTS320 and new INF-1200    650                6                 200                            4
NPT-1800                                  1800               32                744                            16
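The "Max. NFVG_4 cards" column follows from dividing the remaining Tslot power budget by the card's 45 W maximum and capping at the number of Tslots. A minimal sketch of that planning arithmetic (the helper is illustrative, not an official provisioning tool; always verify against the platform power calculation):

```python
# Sketch: NFVG_4 population planning from the power budget in Table 9-66.
# The 45 W figure and the platform data are copied from this section.
NFVG_4_MAX_POWER_W = 45

def max_nfvg4_cards(power_left_w: float, tslots: int) -> int:
    """Cards that fit in the remaining Tslot power budget, capped by slot count."""
    return min(int(power_left_w // NFVG_4_MAX_POWER_W), tslots)

platforms = {
    "NPT-1020": (59, 1),
    "NPT-1050": (162, 3),
    "NPT-1200 with CPTS100": (220, 6),
    "NPT-1200 with CPTS320": (100, 6),
    "NPT-1200 with CPTS320 and new INF-1200": (200, 6),
    "NPT-1800": (744, 32),
}
for name, (power_left, tslots) in platforms.items():
    print(name, max_nfvg4_cards(power_left, tslots))
```

Running this reproduces the last column of the table for every platform.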


Backplane connectivity of Neptune platforms should be considered when planning a system with NFVG_4
cards. The connectivity of the platforms is as follows:
 NPT-1020 - 2 x 1 GbE
 NPT-1050 - 4 x 1 GbE in all Tslots
 NPT-1200 with CPS320:
 2 x 1 GbE in TS2/3/4/7
 4 x 1 GbE in TS1/6
 NPT-1200 with any other matrix card: 4 x 1 GbE in all Tslots
 NPT-1800 - 4 x 1 GbE in all Tslots
When installing NFVG_4 in an NPT-1050 it is recommended to install up to two cards in slots TS2/TS3.
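For capacity planning scripts, the lane counts above can be expressed as a simple lookup. This is an illustrative sketch only; the platform names and slot labels follow this section, but the function itself is not part of any ECI tool:

```python
# Sketch: backplane 1GbE lanes available to an NFVG_4, per this section.
def nfvg4_backplane_lanes(platform: str, slot: str = "") -> int:
    """1GbE backplane lanes toward the central packet switch for a given Tslot."""
    if platform == "NPT-1020":
        return 2
    if platform == "NPT-1200 with CPS320":
        # Only TS1 and TS6 get the full 4 lanes with the CPS320 matrix card.
        return 4 if slot in ("TS1", "TS6") else 2
    # NPT-1050, NPT-1800, and NPT-1200 with other matrix cards: 4 lanes in all Tslots.
    return 4

print(nfvg4_backplane_lanes("NPT-1200 with CPS320", "TS3"))  # 2
print(nfvg4_backplane_lanes("NPT-1800", "TS10"))             # 4
```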

9.8 Slot reassignment and product replacement


The Neptune Product Line supports unique optional procedures for simplifying network maintenance and
upgrade. These procedures are supported by the EMS-NPT with a user-friendly GUI.
The procedures include:
 Product replacement
 Slot reassignment
 Card move

9.8.1 Product replacement


The product replacement procedure is very important for network maintenance and upgrade. It allows the
customer to migrate the existing product to a new one with more capacity and functionalities. The Neptune
Product Line enables product replacement by a unified migration process.
The current version supports the following replacements:
 BG-20 to NPT-1020
 BG-30 to NPT-1200
 BG-64 to NPT-1200
 BG-30 to NPT-1030

9.8.2 Card reassignment


Reassignment is an “Edit” operation that logically changes the expected equipment type in a slot to a new,
compatible one. Unlike an “assign” or “unassign” procedure, reassignment can be
done without deleting existing traffic and configurations. All traffic is recovered automatically if the actual
equipment is compatible with the new equipment type after reassignment.
Reassignment is based on the expected card type and has no relationship to the actual card type in the slot
and its status.


The following card reassignments are supported:


 PME1_21 to PME1_63
 DMGE_2_L2 to DMGE_4_L2
The following matrix card reassignments are supported:
 XIO16_4/XIO64 to CPTS100
 CPS100 to CPS320
 CPTS100 to CPTS320

9.8.3 Card move


The Neptune Product Line supports a “move slot” operation that allows the customer to change the location of
a card between platform slots via the EMS. This operation allows the customer, for example, to free
slots for a double-slot card, or change a card location to enable cabling in the platform.

NOTE: For more information on the “move slot” operation, see the EMS-APT User’s Manual.

9.9 Pluggable interfaces (CSFP/SFPs/SFP+/XFPs)


SFPs are a variety of modular optical transceivers with a small footprint and low power consumption. SFP
transceivers operate at rates of up to 2.7 Gbps with either electrical or optical ports, including both colored
and noncolored interfaces (C/DWDM).
The SFP transceiver modules are used for the entire spectrum of interfaces, including intraoffice, short and
long ranges, and the interchangeable transceiver components that are used throughout the product line.
The standardized modular design of the transceiver components enables network maintenance and
upgrades. Instead of replacing an entire circuit board containing a number of soldered modules, a single
module can be removed/replaced for repair or upgrade, providing significant cost savings.
All transceivers provide power monitoring capabilities. The SFPs for STM-16 have the added capability of
using low-cost colored interfaces (C/DWDM), further reducing maintenance costs. Transceivers provide a
significant advantage for the SDH and data cards used in the Neptune platforms.
Figure 9-38: SFP transceiver


9.9.1 SFP types


The SFP transceivers support a variety of transmission rates for several wavelengths and distances,
including:
 Short haul and long haul 1310 nm transceivers for STM-1, STM-4, and STM-16
 Short haul and long haul 1550 nm transceivers for STM-1, STM-4, and STM-16
 Short haul and long haul (1310/1550 nm) BD transceivers for STM-1, STM-4, and STM-16
 Intraoffice 1310 nm STM-16 transceivers
 Short reach 850 nm optical GbE transceivers (SX)
 Short reach 1310 nm optical GbE transceivers (LX)
 Long reach 1310/1550 nm optical GbE transceivers (LX, EX, ZX)
 GbE BD transceivers for GbE (OTGBE_xxBD)
 Short reach 1550 nm CWDM STM-16/GE SFP transceivers
 Long haul 1550 nm C/DWDM STM-16/GE SFP transceivers

9.9.2 Compact SFP (CSFP) types


CSFP is a compact small form-factor pluggable transceiver that supports network systems, especially those
deploying single-fiber bidirectional transceivers in high-density applications. It integrates two
bidirectional transceivers in a package similar in size to a legacy SFP. The CSFP enables network system
vendors to double port density and data throughput, while reducing network equipment cost.
Supported CSFP transceivers include:
 Short and long haul 1310/1490 nm CSFP GbE transceivers for up to 10 km/20 km/40 km (CTGBE_xxxx)

9.9.3 SFP+ types


The 10GbE ports use new SFP+ transceivers that provide 10 GbE connectivity in a small form factor, similar
in size to the legacy SFPs. The SFP+ enables module designs with higher density and lower power
consumption relative to XFP transceivers.
SFP+ transceivers support:
 Short haul SFP+ 850 nm multimode transceiver for up to 300 m (OTP10_SR)
 Short haul SFP+ 1310 nm multimode transceiver for up to 220 m (OTP10_LMR)
 Short haul SFP+ 1310 nm single-mode transceiver for up to 10 km (OTP10_LR)
 Long haul SFP+ 1550 nm single-mode transceiver for up to 40 km (OTP10_ER)
 Long haul SFP+ 1550 nm single-mode transceiver for up to 80 km (OTP10_ZR)
 10GbE BD transceivers for 10GbE (short and long)
 Short reach 1550 nm CWDM 10 GE SFP+ transceivers for up to 40 km (OTP10C_xx)
 Long haul 1550 nm DWDM 10 GE SFP+ transceivers (OTP10D_xx)
 Long haul T-SFP+ (Tunable SFP+)


9.9.4 XFP types


XFP modules are 10 Gigabit small form factor pluggable modules used in XIO64 cards.
Figure 9-39: XFP transceiver

10 Gbps XFP transceivers support:


 Short haul and long haul 1310 nm transceivers for STM-64
 Short haul and long haul 1550 nm transceivers for STM-64
 Intraoffice 1310 nm STM-64 transceivers
 Short reach 1310 nm CWDM STM-64 XFP transceivers
 Long haul 1550 nm C/DWDM STM-64 XFP transceivers
 Long haul 1550 nm DWDM OTN XFP (OTRNxx)
 Long haul T-XFP

9.9.5 ETR-1 SFP electrical transceiver


Neptune now offers the ETR-1 SFP electrical transceiver that enables full duplex STM-1 electrical (155
Mbps) SDH transport over coaxial cables. The interface fully complies with ITU-T G.703 signal specifications.
The SFP electrical transceiver is interchangeable with STM-1 optical SFP modules and provides easy
migration to STM-1 electrical interfaces. With this SFP module, any system that already supports STM-1
optical SFPs can now also support STM-1 electrical. This results in increased flexibility of the equipment and
reduced inventory and spare parts. While the system is in operation, changing between the optical and
electrical SFPs can be done in the field to optimize the system port types to the application (hot insertion;
non-traffic-affecting).
The main features of the ETR-1 are:
 Compliant with industry standard MSA for SFP ports
 Standard coaxial cable connector with 75 Ohm impedance
 Low-power high-performance CMI encoder/decoder integrated in the module
 Fully interchangeable with STM-1 optical SFP modules
 Hot insertion without affecting traffic


9.9.6 ETGbE SFP electrical transceiver


Neptune offers the ETGbE SFP electrical transceiver that enables full duplex 1000BaseT electrical traffic
over CAT5E SFTP copper cables. The interface fully complies with 1000BaseT and GbE standards as per IEEE
802.3.
The SFP electrical transceiver is interchangeable with GbE optical SFP modules and provides easy migration
to 1000BaseT electrical interfaces. With this SFP module, any system that already supports GbE optical SFPs
can now also support 1000BaseT. This results in increased flexibility of the equipment and reduced
inventory and spare parts. Changing between optical and electrical SFPs can be done in the field while the
system is in operation to optimize the system port types for the application (hot insertion;
non-traffic-affecting).
The ETGbE supports the following modes of operation:
 1000BaseT with autonegotiation
 10/100/1000BaseT with autonegotiation
The ETGbE enables a mix of electrical and optical interfaces on the same module for DMGE_4_L1,
DMGE_2_L2, DMGE_4_L2, DMGE_8_L2, DMXE_22_L2, DMXE_48_L2, DHGE_8, DHGE_16, and DHGE_24
optical modules.
The main features of the ETGbE are:
 Compact RJ-45 connector assembly
 Low power dissipation
 Fully interchangeable with GbE optical SFP modules
 Hot insertion

10 EXT-2U expansion platform
The EXT-2U platform is a high density modular expansion unit for the multiservice Neptune platforms. It
supports the complete range of native and hybrid PCM, TDM, PDH, SDH, optics, and Ethernet interfaces.
Integrating this add-on platform into your network configuration is not traffic-affecting. All traffic
processing, cross-connect, packet switching, timing and synchronization, control and communication and
main power supply functions are performed by the corresponding system in the base unit on which the
expansion unit is installed.

NOTE: The EXT-2U expansion unit can be combined with the NPT-1020, NPT-1021, NPT-1030,
NPT-1050, NPT-1200, and NPT-1800 platforms. For easier reading, the shelf layout is not
repeated in the sections describing each of those platforms. The reader is simply referred back
to this shelf layout description.

Figure 10-1: EXT-2U platform

The EXT-2U expansion unit is housed in a 243 mm deep, 465 mm wide, and 88 mm high equipment cage
with all interfaces accessible from the front of the unit. The expansion unit includes its own independent
power supply and fan unit, for additional reliability and security. The platform includes the following
components:
 Three multipurpose slots (ES1 to ES3) for any combination of extractable traffic cards. PCM, TDM,
ADM, Ethernet, and CES traffic are all handled through cards in these traffic slots. All interfaces are
configured through convenient SFP modules, supporting up to 2.5G or 2GbE traffic per slot. Each slot
in the EXT-2U has a TDM capacity of up to 16 x VC-4s; the total capacity of the EXT-2U is 48 x VC-4s.
 Two slots for INF power supply units. There are two units for system redundancy. Note that the INF
modules are extractable in the EXT-2U.
 One FCU fan unit consisting of multiple separate fans to support cooling system redundancy.
The following figure shows the slot layout for the EXT-2U platform.

Figure 10-2: EXT-2U slot layout

Typical power consumption of the EXT-2U is less than 150 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the
corresponding Neptune platform Installation and Maintenance Manual and the Neptune System
Specifications.
The following table lists the modules supported in the EXT-2U.


Table 10-1: EXT-2U modules


Name Applicable slots in EXT-2U

PSA PSB ES 1# ES 2# ES 3# FS

INF_E2U ✓ ✓
FCU_E2U ✓
SM_10E ✓ ✓ ✓
EM_10E ✓ ✓ ✓
PE1_63 ✓ ✓ ✓
P345_3E ✓ ✓ ✓
S1_4 ✓ ✓ ✓
MPS_2G_8F ✓ ✓ ✓
TP63_1 ✓
DMCES1_32 ✓ ✓ ✓
DMPoE_12G ✓ ✓ ✓
DHFE_12 ✓ ✓ ✓
DHXE_12 ✓ ✓ ✓
MXP10 ✓ ✓ ✓
TPS1_1 ✓ ✓ ✓
TPEH8_1 ✓
OBC ✓ ✓ ✓

10.1 EXT-2U common cards


EXT-2U has its own power-feeding modules. One type of power module must be configured in the EXT-2U
unit for the platform to work, and the EXT-2U unit is always shipped with one type of power module
installed. Power modules can be replaced in the field. In addition, the EXT-2U features a fan unit, the
FCU_E2U that is shipped with the platform.

10.1.1 INF_E2U
The INF_E2U is a DC power-filter module that can be plugged into the EXT-2U platform. Two INF_E2U
modules are needed for power feeding redundancy. The module performs the following functions:
 Single DC power input and power supply for all modules in the EXT-2U
 Input filtering function for the entire EXT-2U platform
 Adjustable output voltage for fans in the EXT-2U
 Indication of input power loss and detection of under-/over-voltage


 Shutting down of the power supply when under-/over-voltage detected


 Supplies up to 503 W of power
Figure 10-3: INF_E2U front panel

10.1.2 AC_PS-E2U
The AC_PS-E2U is an AC power module that can be plugged into the EXT-2U platform. It performs the
following functions:
 Converts AC power to DC power for the EXT-2U.
 Filters input for the entire EXT-2U platform.
 Supplies adjustable output voltage for fans in the EXT-2U.
 Supplies up to 180 W of power.
Figure 10-4: AC_PS-E2U front panel

NOTE: When using the MPoE_12G with PoE+ functionality with AC_PS-E2U feeding, check the
power consumption calculation. Only one card of this type is allowed.


10.1.3 FCU_E2U
The FCU_E2U is a pluggable fan control module with four fans for cooling the EXT-2U platform. The fans’
running speed can be low, medium, or turbo and is controlled by the corresponding MCP card in the base
platform according to the environmental temperature and fan failure status.
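As a rough illustration of this control loop, fan speed selection can be sketched as below. The temperature thresholds are hypothetical examples chosen for the sketch; the actual thresholds used by the MCP are not specified in this manual:

```python
# Sketch: temperature-driven fan speed selection, as described for the FCU_E2U.
# The threshold values are HYPOTHETICAL; the manual does not specify them.
def fan_speed(temp_c: float, fan_failed: bool = False) -> str:
    """Pick low/medium/turbo from temperature and fan-failure status."""
    if fan_failed or temp_c >= 50.0:   # assumed threshold for illustration
        return "turbo"
    if temp_c >= 35.0:                 # assumed threshold for illustration
        return "medium"
    return "low"

print(fan_speed(25.0))                   # low
print(fan_speed(40.0))                   # medium
print(fan_speed(30.0, fan_failed=True))  # turbo
```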
Figure 10-5: FCU_E2U front panel

Table 10-2: FCU_E2U front panel LEDs


Marking Full name Color Function
ACT. System active Green Normally On. Off indicates the module is not
running normally.
FAIL System fail Red Normally off. Lights when module failure detected.

10.2 EXT-2U Traffic Cards


The EXT-2U platform has three expansion slots (ES1 to ES3) to accommodate the various types of traffic
cards. The following table lists the various traffic cards that can be installed in the EXT-2U.

Table 10-3: EXT-2U traffic cards


Card type                                                                                    Designation
Electrical PDH E1 interface card.                                                            PE1_63
Electrical PDH E3/DS-3 interface card.                                                       P345_3E
Optical or electrical SDH STM-1 interface card.                                              S1_4
Optical SDH STM-4 interface card.                                                            S4_1
Optical Base Card for optical amplifiers and DCM modules.                                    Optical base card (OBC)
Data card with internal direct connection to the packet switch.                              DHFE_12
Data card with internal direct connection to the packet switch.                              DHFX_12
CES multiservice card for 32 x E1 interfaces.                                                DMCE1_32
Muxponder card with 12 client ports and a slot for installing an MO_AOC4 optical module.     MXP10
EoS processing and metro L2 switching card with GbE/FE interfaces and MPLS capabilities.     MPS_2G_8F
EoS processing and metro L2 switching card with GbE/FE interfaces and MPLS capabilities;
enables power over the Ethernet interface (PoE).                                             MPoE_12G
E1 1:1 protection card for 63 ports.                                                         TP63_1
High rate (E3/DS3/STM-1e) 1:1 protection card for up to 4 ports.                             TPS1_1
Electrical Ethernet 1:1 protection card for up to 8 ports.                                   TPEH8_1
Multiservice access card for N x 64 Kbps various PCM interfaces.                             SM_10E
Multiservice access card for N x 64 Kbps PCM interfaces.                                     EM_10E

10.2.1 PE1_63
The PE1_63 is an electrical traffic card with 63 x E1 (2 Mbps) balanced electrical interfaces that supports
retiming of up to 63 x E1s. A maximum of three PE1_63 cards can be installed in one EXT-2U platform. The
PE1_63 supports LOS inhibit functionality (very low sensitivity signal detection), which means that
the LOS alarm is masked for signals down to a level of -20 dB.
The cabling of the PE1_63 card is directly from the front panel with three twin 68-pin VHDCI female
connectors.

Figure 10-6: PE1_63 front panel

Table 10-4: PE1_63 front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure is
detected.
ACT. Card active Green Normally blinks with the frequency of 0.5 Hz. Off
or on steadily indicates the card is not running
normally.
ALARM Alarm detected Red Normally off. Lights steadily when an alarm
relevant to the card is detected.


10.2.2 P345_3E
The P345_3E is an electrical traffic card with 3 x E3/DS-3 (34 Mbps/45 Mbps) electrical interfaces. A
maximum of three P345_3E cards can be installed in one EXT-2U platform.
The cabling of the P345_3E card is directly from the front panel with DIN 1.0/2.3 connectors.

Figure 10-7: P345_3E front panel

Table 10-5: P345_3E front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure is
detected.
ACT. Card active Green Normally blinks with the frequency of 0.5 Hz. Off or
on steadily indicates the card is not running
normally.
ALARM Alarm detected Red Normally off. Lights steadily when an alarm is
detected.
E3/DS3 (for each DS-3 indication Orange On when the channel is working in DS-3 mode. Off
channel) when the channel is working in E3 mode.

NOTE: ACTIVE, FAIL, and ALARM LEDs are combined to show various failure reasons during
the expansion card boot. For details, see the Troubleshooting Using Component Indicators
section in the NPT-1200 Installation, Operation, and Maintenance Manual.

10.2.3 S1_4
The S1_4 card is an SDH expansion card with four STM-1 (155 Mbps) interfaces (either optical or electrical).
Each SFP housing in the S1_4 supports three types of SFP modules, as follows:
 SFP STM-1 optical transceivers with a pair of LC optical connectors. Interfaces can be S1.1, L1.1, or
L1.2, depending on the SFP module.
 SFP STM-1 electrical transceivers with a pair of DIN 1.0/2.3 connectors.
 SFP STM-1 optical transceivers with one LC optical connector (bidirectional STM-1 TX/RX over a single
fiber using two different lambdas). The wavelength of the Tx laser can be 1310 nm (BD3) or
1550 nm (BD5).
The four STM-1 interfaces in the S1_4 can be assigned using these three SFP module types independently.


Figure 10-8: S1_4 front panel

Table 10-6: S1_4 front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure
is detected.
ACTIVE Card active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
ALARM Alarm detected Red Normally off. Lights steadily when an alarm is
detected.
ON (near the four SFP Laser on Green Lights steadily when the laser is on.
housings)

NOTE: ACTIVE, FAIL, and ALARM LEDs are combined to show various failure reasons during
the expansion card boot. For details, see the Troubleshooting Using Component Indicators
section in the NPT-1200 Installation, Operation, and Maintenance Manual.

10.2.4 S4_1
The S4_1 is an SDH expansion card with one STM-4 (622 Mbps) interface. The SFP housing in the S4_1
supports the following SFP modules:
 SFP STM-4 optical transceivers with a pair of LC optical connectors. Interfaces can be S4.1, L4.1, or
L4.2, depending on the SFP module.

Figure 10-9: S4_1 front panel


Table 10-7: S4_1 front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure
is detected.
ACTIVE Card active Green Normally blinks with the frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
ALARM Alarm detected Red Normally off. Lights steadily when an alarm is
detected.
ON (near the SFP Laser on Green Lights steadily when the laser is on.
housing)

NOTE: ACTIVE, FAIL, and ALARM LEDs are combined to show various failure reasons during
the expansion card boot. For details, see the Troubleshooting Using Component Indicators
section in the Neptune Installation, Operation, and Maintenance Manuals.

10.2.5 Optical Base Card (OBC)


Neptune offers an OBC that can be inserted in the Eslots of the EXT-2U. Up to three OBC cards can be
installed in each EXT-2U platform. The OBC is designed with high modularity for maximum configuration
flexibility.
Each OBC has three sub-slots: two for installing optical amplifier modules, and a smaller one for installing a
DCM module. The OBC and its modules support live insertion.
The following optical amplifier modules are available:
 OM_BA: single channel booster amplifier with constant output power for links up to 10 Gbps
 OM_PA: single channel pre-amplifier with constant output power of -14 dBm for links up to 10 Gbps
 OM_ILA: DWDM amplifier in the C-band range configurable as Booster, Preamplifier, and In-line
amplifier
 OM_LVM: DWDM amplifier in the C-band range for in line applications with midstage for DCF
Each of these amplifiers can be installed in any of the wider slots in the OBC without limitations. The
amplifiers support full management capabilities.
The smaller (left-most) slot of the OBC supports installation of an OM_DCMxx module (xx designates the
dispersion compensation distance in km). The module is available for distances of 40, 80, and 100 km.

Figure 10-10: OBC front panel


Table 10-8: OBC front panel LED indicators


Marking Full name Color Function
ACT. Card active Green Normally blinks with a frequency of 0.5 Hz.
Off or on steadily indicates the card is not
running normally.
FAIL Card fail Red Normally off. Lights steadily when a failure is
detected in the card.

10.2.5.1 OM_BA
The OM_BA is a single channel booster amplifier module with constant output power for links up to 10
Gbps. The OM_BA can be installed in the Optical base card (OBC) wide sub-slots. Up to two modules can be
installed in each OBC, totaling six modules in an EXT-2U platform.

Figure 10-11: OM_BA front panel

Table 10-9: OM_BA front panel LED indicators


Marking Full name Color Functions
Tx Active Transmit active Green Lights when the module's output power is at a normal
level.
LOS Loss of signal Red Normally off. The indicator lights red when the stage
input signal is missing or is too low for normal
operation.
ACT. Module active Green Lights when the module is powered and operating
normally.
FAIL Module fail Red Lights when a general fault condition is detected.

The module has two LC connectors, Rx (input) and Tx (output), protected by a spring-loaded cover.

10.2.5.2 OM_PA
The OM_PA is a single channel amplifier working in Channel 35 of the C-band for links up to 10 Gbps. The
amplifier works in a constant power mode and provides a power output of -15 dBm. The OM_PA can be
installed in the Optical base card (OBC) wide sub-slots. Up to two modules can be installed in each OBC,
totaling six modules in an EXT-2U platform.
The module can be connected in two link applications:


 Preamplifier only: the module receives optical signals from an SFP/XFP transmitter, with the
preamplifier connected before the receiver. In this mode the module can deliver signals over 80 to
120 km.
 With a booster: a booster amplifier is added after the SFP/XFP transmitter, with the preamplifier
connected before the receiver. In this option the total power budget enables the amplifier to deliver
signals over 120 to 180 km.
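The distances above follow from the optical power budget. The sketch below shows the first-order arithmetic; the transmitter power, receiver sensitivity, fiber loss (~0.25 dB/km) and margin are hypothetical illustration values, not OM_PA specifications.

```python
# Sketch: first-order reach estimate from an optical power budget
# (illustrative; ignores dispersion and other transmission penalties).
ATTENUATION = 0.25  # dB/km, assumed typical for G.652 fiber at 1550 nm

def reach_km(tx_power_dbm, rx_sensitivity_dbm, margin_db=3.0):
    """Max span length given the power budget and a safety margin."""
    budget = tx_power_dbm - rx_sensitivity_dbm - margin_db
    return budget / ATTENUATION

# Hypothetical numbers: 0 dBm transmitter, -28 dBm preamplified receiver
print(round(reach_km(0, -28)))  # -> 100
```

Adding a booster raises the launch power, which is how the second link option extends the reach toward 180 km.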

Figure 10-12: OM_PA front panel

Table 10-10: OM_PA front panel LED indicators


Indicator Functions
Tx Active Green indicator, lights when the module's output power is at a normal level.
LOS Loss of signal indicator, which is normally off. The indicator lights red when the
stage input signal is missing or is too low for normal operation.
AC Green indicator, lights when the card is powered and running normally.
FL Red indicator, lights when a general fault condition is detected.

The module has two LC connectors, Rx (input) and Tx (output), protected by a spring-loaded cover.

10.2.5.3 OM_ILA
The OM_ILA is a DWDM amplifier working in the C-band, supporting links with up to 44/88 channels. It is a
fixed-gain (21 dB) EDFA-based DWDM amplifier for links of up to 500 km with up to 80 channels. The OM_ILA can be
installed in the wide sub-slots. Up to two modules can be installed in each OBC, totaling six modules in an
EXT-2U platform.
The OM_ILA provides the following main functions:
 Operation as a preamplifier, booster, or inline amplifier
 Output power of 16 dBm with a gain of 21 dB
 Minimum input power of -24 dBm
 Monitoring and alarms
 Support for DWDM filters (Mux/DeMux or OADM) in a separate Artemis shelf

NOTE: The module is configured by AGC in the EMS-NPT.


Figure 10-13: OM_ILA front panel

Table 10-11: OM_ILA front panel LED indicators


Indicator Functions
Tx Active Green indicator, lights when the module's output power is at a normal level.
LOS Loss of signal indicator, which is normally off. The indicator lights red when the
stage input signal is missing or is too low for normal operation.
AC Green indicator, lights when the card is powered and running normally.
FL Red indicator, lights when a general fault condition is detected.

The module has two LC connectors, Rx (input) and Tx (output), protected by a spring-loaded cover.

10.2.5.4 OM_LVM
The OM_LVM is a DWDM two stage VGA amplifier working in the C-band for links up to 44/88 DWDM
channels. The module includes a 20.5 dBm variable gain EDFA with mid-stage access (MSA). The OM_LVM
can be installed in the Optical base card (OBC) wide sub-slots. Up to two modules can be installed in each
OBC, totaling six modules in an EXT-2U platform.
The OM_LVM provides the following main functions:
 Operation as a preamplifier, booster, or inline amplifier
 Output power of 20.5 dBm with a variable gain of 15 to 30 dB
 Minimum input power of -28 dBm
 Monitoring and alarms
 Support for DWDM filters (Mux/DeMux or OADM) in a separate Artemis shelf
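The figures above can be combined into a simple validity check for a planned gain setting. This is an illustrative sketch based only on the numbers quoted in this section (variable gain 15-30 dB, max output 20.5 dBm, minimum input -28 dBm); it does not model the actual amplifier behavior.

```python
# Sketch: sanity-checking an OM_LVM gain setting against its stated limits.
GAIN_MIN, GAIN_MAX = 15.0, 30.0   # dB, variable gain range
P_OUT_MAX = 20.5                  # dBm, maximum output power
P_IN_MIN = -28.0                  # dBm, minimum input power

def lvm_output(p_in_dbm, gain_db):
    """Expected output power, clamped to the amplifier's maximum."""
    if p_in_dbm < P_IN_MIN:
        raise ValueError("input below amplifier minimum")
    if not GAIN_MIN <= gain_db <= GAIN_MAX:
        raise ValueError("gain outside the 15-30 dB range")
    return min(p_in_dbm + gain_db, P_OUT_MAX)

print(lvm_output(-10, 25))  # -> 15.0 (dBm)
print(lvm_output(-5, 30))   # -> 20.5 (clamped to max output)
```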

Figure 10-14: OM_LVM front panel


Table 10-12: OM_LVM front panel LED indicators


Indicator Functions
OUT Green indicator, lights when the EDFA output power is at a normal level.
IN Red indicator, lights when the EDFA input power is missing or is too low for
normal operation.
ACT Green indicator, lights when the card is powered and running normally.
FAIL Red indicator, lights when a general fault condition is detected.

The module has four LC connectors, protected by a spring-loaded cover:


 AMP IN - input to the first amplifier stage
 IN (STAGE) - input to the mid-stage
 OUT (STAGE) - output from the mid-stage
 AMP OUT - output from the second amplifier stage

10.2.5.5 OM_DCMxx
The OM_DCMxx is a micro dispersion compensation module used to correct excessive dispersion on long
fibers. The OM_DCMxx is available for several distance ranges: 40, 80, and 100 km (xx in the module name
designates the distance in km). The OM_DCMxx can be installed in the Optical base card (OBC) narrow
sub-slot. One module can be installed in the OBC, totaling three modules in an EXT-2U platform.

Figure 10-15: Typical OM_DCMxx front panel

The module has two LC connectors, Rx (input) and Tx (output), protected by a spring-loaded cover.

10.2.6 MXP10
The MXP10 is a muxponder base card supporting up to 12 built-in (CSFP-based) client interfaces, which are
multiplexed into a G.709 multiplexing structure and sent via two OTU-2/2e line interfaces. It can be installed
in the Eslots of EXT-2U platforms; up to three MXP10 cards can be installed in an EXT-2U.
The MXP10 can also be configured to operate as a transponder, mapping any 10
GbE/STM-64/FC-800/FC-1200 signal into an OTU2/2e line.
In addition, the MXP10 has an optical module slot for installing an OM_AOC4. This module expands the
client interface capacity by 4 additional ports, totaling 16 client ports per MXP10.


Any of the client interfaces can be configured to accept an STM-1, STM-4, STM-16, GbE, FC/FC2/FC4,
OTU-1, or HD-SDI signal. The card has integrated cross-connect capabilities, providing more efficient
utilization of the lambda. Any of the signals can be added or dropped at each site, while the rest of the
traffic continues on to the next site. Broadcast TV services can be dropped and continued (duplicated),
eliminating the need for external equipment to provide this functionality.
Hardware protection is supported using a pair of MXP10 cards, configured in slots ES1 and ES2 of the
EXT-2U. In protection mode, each service is connected to both MXP10 cards by splitters/couplers. A traffic
or equipment failure triggers a switch to the protection card.
The MXP10 is a single Eslot card with the following main features:
 12 CSFP-based client ports, software configurable to support GbE, FC/FC2/FC4, STM-1, STM-4,
STM-16, and OTU-1 services
 Client interfaces can be expanded by 4, by installing an OM_AOC4 module in the card's Tslot
 Two independent SFP+ based OTU-2/2e line ports
 Can be used as a multi-rate combiner up to OTU-2/2e
 Can be used as a multi-OTU-1 transponder (up to 5)
 Can operate as two separate muxponders, with sets of eight clients multiplexed into one OTU-2 line
 Can operate as 5 separate 2.5G muxponders, with up to 5 clients multiplexed into an OTU-1 line
 Regeneration mode is supported for OTU-2 (single) and OTU-1 (up to 5)
 Any mix of functionality is supported, as long as the occupied resources do not exceed the MXP10
OTN capacity of 40G
 Per-port HW protection
 Supports G.709 FEC for OTU-1, and G.709 FEC, EFEC (I.4 and I.7), and ignore-FEC modes for OTU-2
towards the line
 Supports Subnetwork Connection Protection (SNCP) mechanisms
 Complies with ITU-T standards for 50 GHz and 100 GHz multichannel spacing (DWDM)
 Supports two GCC channels, one for each OTU-2 interface, to allow management over the OTN
interface
 Supports in-service module insertion and removal without any effect on other active ports
 Supports interoperability with Apollo AoC cards
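The 40G capacity rule above can be illustrated with a simple resource check. The line rates used below are approximate published OTUk rates; the check itself is an illustrative sketch, not the card's actual admission logic.

```python
# Sketch: checking that a mix of MXP10 functions fits the card's stated
# 40G OTN capacity (approximate line rates; illustrative only).
RATE_G = {"OTU1": 2.7, "OTU2": 10.7, "OTU2e": 11.1}  # approx. Gbps

def fits_mxp10(functions, capacity_g=40.0):
    """True if the summed approximate line rates fit the 40G budget."""
    return sum(RATE_G[f] for f in functions) <= capacity_g

print(fits_mxp10(["OTU2", "OTU2", "OTU1", "OTU1"]))  # -> True
print(fits_mxp10(["OTU2e"] * 4))                      # -> False
```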
Cabling of the MXP10 card is directly from the front panel. The panel includes 6 positions for installing
CSFP client transceivers; the positions are grouped in pairs: P1~P7, P2~P8, P3~P9, P4~P10, P5~P11, and
P6~P12. Each pair can house one CSFP, and each CSFP supports two configurable ports, totaling 12 client
ports on the base card. In addition, the MXP10 has two positions for installing SFP+ transceivers that serve
the line ports.
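The CSFP position-to-port pairing described above follows a simple pattern (port N pairs with port N+6), which can be expressed as a small lookup:

```python
# Sketch: the CSFP position-to-port pairing on the MXP10 front panel.
# Each physical CSFP position carries two client ports (P1~P7, P2~P8, ...).
CSFP_PAIRS = {pos: (pos, pos + 6) for pos in range(1, 7)}

def ports_for_position(pos):
    """Client ports served by CSFP position 1..6."""
    return CSFP_PAIRS[pos]

print(ports_for_position(3))  # -> (3, 9), i.e. the P3~P9 pair
```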


Figure 10-16: MXP10 front panel

Table 10-13: MXP10 front panel LED indicators


Marking Full name Color Function
ACT. Card active Green Normally blinks at a frequency of 0.5 Hz.
Off or steadily on indicates the card is not
running normally.
FAIL Card fail Red Normally off. Lights steadily when a failure is
detected in the card.
ALARM Card alarm Red Normally off. Lights steadily when an alarm is
detected in the card.
ON (P1 to P12) Laser on indication Green Lights steadily when the corresponding laser
(ports P1 to P12) is on.
LSR ON (LINE 1 and Laser on indication Green Lights steadily when the corresponding laser
LINE 2) (ports LINE 1 and LINE 2) is on.

10.2.6.1 OM_AOC4
The OM_AOC4 is an optical ADM-on-a-card module for installation in the MXP10 card. It expands the
MXP10 capacity by 4 client ports.
The OM_AOC4 module provides 4 client ports; each port can be configured to operate as one of the
following interfaces:
 STM-1/STM-4/STM-16
 GbE
 FC1/2/4/8
 HD-SDI
 ODU-1
When operating in the base card, each port supports the same functionality as the client ports incorporated
on the MXP10.
The following figure shows the front panel of the OM_AOC4.


Figure 10-17: OM_AOC4 front panel

Table 10-14: OM_AOC4 LED indicators


Marking Full name Color Functions
ACT. Module active Green Lights when the module is powered and
operating normally.
FAIL Module fail Red Lights when a general fault condition is detected.
LSR ON (P1 to Laser on indication Green Lights steadily when the corresponding laser is
P4) (ports P1 to P4) on.

10.2.6.2 MXP10 applications


The MXP10 supports the following applications:
 One 10 GbE transponder, which can be:
 STM64/OC192 to OTU2
 10 GbE to OTU2e or OTU2
 FC-1200 to OTU2e or OTU2f
 FC-800 to OTU2
 One 10 GbE regenerator, which can be:
 OTU2 regenerator
 OTU2e regenerator
 OTU2f regenerator
 Can support an AoC10 including:
 Two OTU2, SFP+ based, interfaces
 Up to 16 client interfaces with a max. capacity of 20 G, which can be:
 STM-1/OC-3
 STM-4/OC-12
 STM-16/OC-48


 OTU1
 GbE
 FC-100/200/400
 HDSDI1485/HDSDI3G
 Video270
 MXP10 can support TRP25/REG25/AoC25 applications:
 Up to 5 x OTU1 transponders/combiners
 Supported client interfaces:
 STM-1/OC-3
 STM-4/OC-12
 STM-16/OC-48
 GbE
 FC-100/200
 Video270

NOTE: The MXP10 is not supported in NPT-1800 and NPT-1200 with MCIPS320.


10.2.7 DHFE_12
The DHFE_12 is a data hybrid card that supports up to 12 x FE ports with connection to the packet
switching matrix.
The cabling of the DHFE_12 module is directly from the front panel with RJ-45 based connectors.

NOTE:
 With the NPT-1020/NPT-1021, the DHFE_12 supports up to 8 x FE ports.
 With the NPT-1020/NPT-1021 and CPS50, the DHFE_12 supports up to 12 x FE ports.
 With the NPT-1200, the DHFE_12 decreases the base unit's maximum GE fan-out by 16 ports.

Figure 10-18: DHFE_12 front panel


Table 10-15: DHFE_12 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks at a frequency of 0.5 Hz.
Off or steadily on indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ALARM Module alarm Red Normally off. Lights steadily when an alarm is
detected in the card.

10.2.8 DHFX_12
The DHFX_12 is a data hybrid card that supports up to 12 x 10/100 FX ports with connection to the packet
switching matrix.
The cabling of the DHFX_12 module is directly from the front panel with SFP based slots.

NOTE:
 With the NPT-1020/NPT-1021, the DHFX_12 supports up to 8 x FX ports.
 With the NPT-1020/NPT-1021 and CPS50, the DHFX_12 supports up to 12 x FX ports.
 With the NPT-1200, the DHFX_12 decreases the base unit's maximum GE fan-out by 16 ports.

Figure 10-19: DHFX_12 front panel

Table 10-16: DHFX_12 front panel LED indicators


Marking Full name Color Function
ACT. Module active Green Normally blinks at a frequency of 0.5 Hz.
Off or steadily on indicates the card is not
running normally.
FAIL Module fail Red Normally off. Lights steadily when a fault is
detected.
ALARM Module alarm Red Normally off. Lights steadily when an alarm is
detected in the card.
LSR ON (P1 to Laser on FX indication Green Lights steadily when the corresponding laser is
P12) (ports) on.


10.2.9 MPS_2G_8F
The MPS_2G_8F is an EoS metro Ethernet L2 switching card with MPLS capabilities. It includes 8 x
10/100BaseT LAN interfaces, 2 x GbE/FE combo LAN interfaces, and 64 EoS WAN interfaces. The total WAN
bandwidth is up to 4 x VC-4. A maximum of three MPS_2G_8F cards can be installed in one EXT-2U
platform.

Figure 10-20: MPS_2G_8F front panel

Table 10-17: MPS_2G_8F front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure
is detected.
ACTIVE Card active Green Normally blinks at a frequency of 0.5 Hz.
Off or steadily on indicates the card is not
running normally.
ALARM Alarm detected Red Normally off. Lights steadily when an alarm is
detected.
GE/FE PORT 1 and Link and Rx/Tx Green Lights when the link is OK. Blinks when
PORT2 (left LED in the packets are received or transmitted.
RJ-45 connectors)
GE/FE PORT 1 to Speed/laser on Orange Acts as speed indication when the port works
PORT 2 (right LED in in the electrical mode.
the RJ-45 connectors) Off when the speed is 10/100Mbps. Lights
steadily when the speed is 1000 Mbps.
Acts as laser-on indication when the port
works in the optical mode:
Lights when the laser is on.
Off when the laser is off.
FE PORT 1 to FE PORT Link and Rx/Tx (FE Green Lights when the link is OK. Blinks when
8 (left LED in the PORT 1 to FE PORT 8 packets are received or transmitted.
RJ-45 connectors) interface)
FE PORT 1 to FE PORT Speed (FE PORT 1 to Orange Off when the speed is 10 Mbps. Lights
8 (right LED in the FE PORT 8 interface) steadily when the speed is 100 Mbps.
RJ-45 connectors)


10.2.10 MPoE_12G
The MPoE_12G is a metro Ethernet L2 switching card with MPLS capabilities and Power over
Ethernet (PoE) support. It can be installed in the EXT-2U, providing four GbE/FX and eight
10/100/1000BaseT interfaces with Power over Ethernet functionality (IEEE 802.3af and IEEE 802.3at). It
provides Layer 1 and Layer 2 with MPLS-TP switch functionality (64 EoS WAN interfaces) over native
Ethernet (MoE) and SDH (MoT) virtual concatenated streams. It is suitable for feeding power to IP phones,
IP cameras, and RF all-outdoor units directly from the Ethernet port.
The card supports 1588v2 master, slave, and transparent modes. It provides up to 64 EoS WAN interfaces
and the total WAN bandwidth is up to 4 x VC-4. A maximum of three MPoE_12G cards can be installed in
one EXT-2U platform.
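The stated WAN limit of 4 x VC-4 translates into roughly 600 Mbps of EoS bandwidth. The arithmetic below is illustrative, taking the VC-4 payload as ~149.76 Mbps:

```python
# Sketch: total EoS WAN bandwidth from the stated limit of 4 x VC-4.
# VC-4 payload taken as ~149.76 Mbps (illustrative arithmetic only).
VC4_MBPS = 149.76

def eos_wan_capacity(vc4_count=4):
    """Approximate aggregate EoS WAN payload bandwidth in Mbps."""
    return vc4_count * VC4_MBPS

print(round(eos_wan_capacity()))  # -> 599 (Mbps)
```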

Figure 10-21: MPoE_12G front panel

Table 10-18: MPoE_12G front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure is
detected.
ACTIVE Card active Green Normally blinks at a frequency of 0.5 Hz. Off
or steadily on indicates the card is not running
normally.
ALARM Alarm detected Red Normally off. Lights steadily when an alarm is
detected.
GE/FE PORT 1 to Link and Rx/Tx Green Lights when the link is OK. Blinks when packets
PORT4 (left LED in the are received or transmitted.
RJ-45 connectors)
FE PORT 1 to FE PORT Link and Rx/Tx (FE Green Lights when the link is OK. Blinks when packets
12 (left LED in the PORT 1 to FE PORT are received or transmitted.
RJ-45 connectors) 8 interface)


10.2.11 DMCE1_32
The DMCE1_32 is a CES multiservice card that provides CES for up to 32 x E1 interfaces. It supports the
SAToP and CESoPSN standards and has two SCSI 68-pin connectors for connecting the E1 customer signals
on the front panel.
Connectivity to the packet network is made through one of the following options:
 Direct 1.25G SGMII connection to the central packet switch on CPS cards, through the backplane.
 Connection to a 3rd-party device (router/switch) through the combo GbE port on the front panel,
working in standalone mode with CESoETH and CESoIP/UDP encapsulation.

NOTE: The DMCE1_32 is not supported by the NPT-1800.
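To give a feel for the CES bandwidth arithmetic, the sketch below packetizes one unstructured E1 following the SAToP approach (RFC 4553). The payload size and the per-packet Ethernet overhead figure are illustrative assumptions, not DMCE1_32 internals.

```python
# Sketch: SAToP packetization arithmetic for one unstructured E1
# (illustrative; overhead figure is an assumption, not card behavior).
E1_BPS = 2_048_000  # E1 line rate in bit/s

def satop_pps(payload_bytes=256):
    """Packets per second needed to carry one unstructured E1."""
    return E1_BPS // (payload_bytes * 8)

def ethernet_bps(payload_bytes=256, overhead_bytes=46):
    """Approximate Ethernet-side rate incl. assumed per-packet overhead."""
    return satop_pps(payload_bytes) * (payload_bytes + overhead_bytes) * 8

print(satop_pps())     # -> 1000 packets/s
print(ethernet_bps())  # -> 2416000 bit/s
```

Larger payloads reduce the packet rate and overhead at the cost of added packetization delay, which is the usual CES tuning trade-off.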

Figure 10-22: DMCE1_32 front panel

Table 10-19: DMCE1_32 front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure
is detected.
ACTIVE Card active Green Normally blinks at a frequency of 0.5 Hz.
Off or steadily on indicates the card is not
running normally.
ALARM Alarm detected Red Normally off. Lights steadily when an alarm is
detected.
GE/FE port (left LED in Link and Rx/Tx Green Lights when the link is OK. Blinks when
the RJ-45 connector) packets are received or transmitted.

GE/FE port (right LED Speed/laser on Orange Acts as speed indication when the port works
in the RJ-45 in the electrical mode.
connector) Off when the speed is 10/100Mbps. Lights
steadily when the speed is 1000 Mbps.
Acts as laser-on indication when the port
works in the optical mode:
Lights when the laser is on.
Off when the laser is off.
LSR ON (slot) Laser on Green Lights steadily when the laser is on.

10.2.12 SM_10E
The SM_10E is a multiservice access card that provides various 64 Kbps and N x 64 Kbps PCM
interfaces, and DXC1/0 functionality. It includes mappers for up to 44 E1s, and a DXC1/0 with a total
capacity of 1,216 DS-0 x 1,216 DS-0. There are three module slots, each of which accommodates traffic
bandwidth of six E1s per slot. Through the configuration of different types of traffic modules, the SM_10E
can provide up to 24 channels of different types of PCM interfaces, such as FXO, FXS, 2W, 4W, 6W, E&M,
V.24, V.35, V.11, Omni, V.36, RS-422, RS-449, C37.94, EoP, and codirectional 64 Kbps interfaces. A maximum
of three SM_10E cards can be installed in one EXT-2U platform.
The SM_10E base card has no external interfaces. Each traffic module for the SM_10E has its own external
interfaces on its front panel.

Figure 10-23: SM_10E front panel

Table 10-20: SM_10E front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure
is detected.
ACTIVE Card active Green Normally blinks at a frequency of 0.5 Hz.
Off or steadily on indicates the card is not
running normally.
ALARM Alarm detected Red Normally off. Lights steadily when an alarm is
detected.


10.2.13 EM_10E
The EM_10E is a multiservice access card that provides various 64 Kbps and N x 64 Kbps PCM interfaces, and
DXC1/0 functionality. It includes mappers for up to 16 E1s, and a DXC1/0 with a total capacity of 589
DS-0 x 589 DS-0. There are three module slots, each of which accommodates traffic bandwidth of six E1s
per slot. Through the configuration of different types of traffic modules, the EM_10E can provide up to 24
channels of different types of PCM interfaces, such as FXO, FXS, 2W, 4W, 6W, E&M, V.24, V.35, V.11, Omni,
V.36, RS-422, RS-449, C37.94, and codirectional 64 Kbps interfaces. A maximum of three EM_10E cards can
be installed in one EXT-2U platform.

NOTE: The EM_10E is not supported by the NPT-1800 and NPT-1200 with MCIPS320.

The EM_10E base card has no external interfaces. Each traffic module for the EM_10E has its own external
interfaces on its front panel.

Figure 10-24: EM_10E front panel

Table 10-21: EM_10E front panel LED indicators


Marking Full name Color Function
FAIL Card fail Red Normally off. Lights steadily when card failure
is detected.
ACTIVE Card active Green Normally blinks at a frequency of 0.5 Hz.
Off or steadily on indicates the card is not
running normally.
ALARM Alarm detected Red Normally off. Lights steadily when an alarm is
detected.

10.2.13.1 EM_10E/SM_10E Traffic Modules


Three traffic module slots are available in an EM_10E/SM_10E card to accommodate the various types of
PCM traffic modules.
The following EM_10E/SM_10E traffic modules are supported:
 SM_FXO_8E: EM_10E/SM_10E traffic module for eight FXO interfaces.
 SM_FXS_8E: EM_10E/SM_10E traffic module for eight FXS or FXD interfaces. Each interface can be set
to FXS or FXD independently.


 SM_EM_24W_6E: EM_10E/SM_10E traffic module for six 24W E&M interfaces. Each interface can be
set to 2W, 4W, 6W, 2WE&M, or 4WE&M independently.
 SM_V24E: EM_10E/SM_10E traffic module for V.24 interfaces that supports three modes:
Transparent (eight channels), Asynchronous with controls (four channels), and Synchronous with
controls (two channels). Both point-to-point and point-to-multipoint services are supported.
 SM_V35_V11: EM_10E/SM_10E traffic module for two V.35/V.11/V.24/V.36/RS-422/RS-449 (64 Kbps
only) compatible interfaces with full controls. Each interface can independently be configured as V.35
or V.11/X.24 or V.24 64 Kbps.
 SM_CODIR_4E: EM_10E/SM_10E traffic module for four codirectional 64 Kbps (G.703) interfaces.
 SM_OMNI_E: EM_10E/SM_10E traffic module for one OmniBus 64 Kbps interface.
 SM_EOP: SM_10E traffic module for Ethernet data. It supports standard EoP functionality of E1 VCAT,
GFP and LCAS.
 SM_C37.94S: EM_10E/SM_10E traffic module for two teleprotection (IEEE C37.94) interfaces.
 SM_IO18: EM_10E/SM_10E traffic module for 18 input/output configurable ports (dry contacts) for
utilities teleprotection interfaces.
Additional types of EM_10E/SM_10E traffic modules will be supported in a later version. Each
EM_10E/SM_10E traffic module can be inserted into any of the three module slots in the EM_10E/SM_10E.
All EM_10E/SM_10E traffic modules support live insertion.
Each module provides corresponding traffic interfaces through a SCSI-36 connector on its front panel. The
cabling of these interfaces can be done directly via the SCSI-36 connector, or via the corresponding ICP,
which connects to the SCSI-36 connector through a special cable.

Figure 10-25: Example of an EM_10E/SM_10E traffic module

Table 10-22: EM_10E/SM_10E traffic module front panel LED indicators


Marking Full name Color Function
ACT Module active Green Normally on. Off indicates no power supply.
FAIL Module fail Red Normally off. Lights when module failure is
detected.


SM_FXO_8E
SM_FXO_8E is a traffic module with eight FXO interfaces for the SM_10E/EM_10E card. Up to three
modules can be configured in one SM_10E/EM_10E card, totaling 24 FXO interfaces. The SM_FXO_8E
provides telephone line interfaces for the central office side.

Figure 10-26: SM_FXO_8E front panel

SM_FXS_8E
SM_FXS_8E is a traffic module with eight FXS or FXD interfaces for the SM_10E/EM_10E card. Each
interface can be set to FXS or FXD independently. Up to three modules can be configured in one
SM_10E/EM_10E card, totaling 24 FXS or FXD interfaces. The SM_FXS_8E provides telephone line interfaces
for the remote side.

Figure 10-27: SM_FXS_8E front panel

SM_EM_24W_6E
SM_EM_24W_6E is a traffic module with six 2W/4W/6W E&M interfaces for the SM_10E/EM_10E card. It
provides two wire and four wire voice frequency interfaces, with ear and mouth signaling interfaces. Each
interface can be set to 2W, 4W, 6W, 2WE&M, or 4WE&M independently. Up to three modules can be
configured in one SM_10E/EM_10E card, totaling 18 2W, 4W, 6W, 2WE&M, or 4WE&M interfaces.

Figure 10-28: SM_EM_24W_6E front panel


SM_V24E
SM_V24E is a traffic module with V.24 (RS-232) interfaces for the SM_10E/EM_10E. V.24 is a low bit rate
data interface, also known as RS-232. The module supports three operating modes:
 Transparent mode, with eight channels
 Asynchronous mode with controls, with four channels
 Synchronous mode with controls, with two channels
The SM_V24E supports a wide range of bit rates in two grades (low and high) and three operating modes as
described in the following table.

Table 10-23: SM_V24E (RS-232) supported bit rates and modes


Rate grade  Mode                      TC mode   Baud rate (bps)  Operation mode  Rate adaptation
Low         Transparent (8 channels)  Sampling  ---              ---             ---
Low         Transparent (8 channels)  TC        50-19200         ---             ---
Low         Async. with control       ---       600-38400        Duplex          V110/HCM
Low         Sync with control         ---       600-38400        Duplex          V110/HCM
High        Transparent (8 channels)  Sampling  ---              ---             ---
High        Transparent (8 channels)  TC        0-19200          ---             ---
High        Async. with control       ---       57600            Duplex          HCM
High        Sync with control         ---       56000, 64000     Duplex          V110/HCM
All module types support P2P and P2MP services.
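A configuration tool might validate a requested port rate against Table 10-23. The sketch below covers only the async/sync "with control" modes and treats the quoted rate spans as continuous; the real module presumably supports discrete standard rates within those spans.

```python
# Sketch: validating an SM_V24E port rate against Table 10-23
# (simplified to the async/sync "with control" modes; illustrative only).
ALLOWED = {
    ("low", "async"):  range(600, 38401),   # 600-38400 bps
    ("low", "sync"):   range(600, 38401),   # 600-38400 bps
    ("high", "async"): (57600,),
    ("high", "sync"):  (56000, 64000),
}

def rate_ok(grade, mode, bps):
    """True if the requested rate falls in the table's span for the mode."""
    return bps in ALLOWED[(grade, mode)]

print(rate_ok("low", "async", 9600))   # -> True
print(rate_ok("high", "async", 9600))  # -> False
```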

Figure 10-29: SM_V24E front panel


SM_V35_V11
SM_V35_V11 is a traffic module with two V.35/V.11/V.24/V.36/RS-422/RS-449 64 Kbps compatible
interfaces with full controls. Each interface can independently be configured as V.35 or V.11/X.24 and can
be mapped to unframed E1 or N x 64K of framed E1 (the interface rate N is configurable). Up to three
modules can be configured in one SM_10E/EM_10E card, totaling six V.35/V.11/V.24 64 Kbps interfaces.

Figure 10-30: SM_V35_V11 front panel

SM_CODIR_4E
SM_CODIR_4E is a traffic module with four codirectional 64 Kbps (per ITU-T G.703) interfaces for the
SM_10E/EM_10E.

Figure 10-31: SM_CODIR_4E front panel

SM_EOP
The SM_EOP is a traffic module with two Ethernet interfaces for the SM_10E. It provides two 10/100BaseT
interfaces and supports EoP functionality, including N x E1 virtual concatenation, GFP-F encapsulation, and
LCAS. It also supports N x 64K HDLC encapsulation. The total bandwidth of the SM_EOP is four E1s.
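The four-E1 limit bounds the usable EoP bandwidth. The sketch below assumes ~1.984 Mbps of payload per framed E1 member (31 timeslots); GFP and LCAS overheads are ignored, so the figures are illustrative only.

```python
# Sketch: approximate EoP group bandwidth for the SM_EOP
# (assumes ~1.984 Mbps payload per framed E1 member; illustrative only).
E1_PAYLOAD_MBPS = 1.984
MAX_MEMBERS = 4

def eop_bandwidth(members):
    """Approximate VCAT group payload bandwidth in Mbps."""
    if not 1 <= members <= MAX_MEMBERS:
        raise ValueError("SM_EOP supports 1..4 E1 members")
    return members * E1_PAYLOAD_MBPS

print(round(eop_bandwidth(4), 3))  # -> 7.936 (Mbps)
```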

NOTE: The SM_EOP is supported by the SM_10E only.

Figure 10-32: SM_EOP front panel


SM_OMNI_E
SM_OMNI_E is a traffic module with OmniBus functionality and four 2W/4W interfaces for the
SM_10E/EM_10E. Each interface can be set to 2W or 4W mode via the management system.
OmniBus is a special interface for railway applications featuring P2MP communications. It is very similar
in nature to the SDH orderwire (OW).

Figure 10-33: SM_OMNI_E front panel

SM_C37.94
The SM_C37.94 module provides two teleprotection interfaces per IEEE C37.94 for the EM_10E/SM_10E.
The interfaces enable transparent communications between different vendors' teleprotection equipment
and multiplexer devices, using multimode optical fibers.
In general, teleprotection equipment is employed to control and protect different system elements in
electricity distribution lines.
Traditionally, the interface between teleprotection equipment and multiplexers in high-voltage
environments at electric utilities was copper-based. This media transfers the critical information to the
network operation center. These high-speed, low-energy signal interfaces are vulnerable to
intra-substation electromagnetic and frequency interference (EMI and RFI), signal ground loops, and
ground potential rise, which considerably reduce the reliability of communications during electrical faults.
The optimal solution is based on optical fibers. Optical fibers don't have ground paths and are immune to
noise interference, which eliminates data errors common to electrical connections.

Figure 10-34: SM_C37.94 front panel


Table 10-24: SM_C37.94 front panel LED indicators


Marking Full name Color Function
FAIL Module fail Red Normally off. Lights steadily when card failure is
detected.
ACT. Module active Green Normally blinks at a frequency of 0.5 Hz. Off
or steadily on indicates the card is not running
normally.
LSR ON Laser on Green Lights steadily when the corresponding laser is on.

SM_C37.94S
The SM_C37.94S sub-module provides two teleprotection interfaces per IEEE C37.94 for the SM_10E/EM_10E.
The interfaces enable transparent communications between different vendors' teleprotection equipment
and multiplexer devices, using multimode optical fibers.

NOTE: SM_C37.94S supports two SFP based C37.94 interfaces (OTR2M_MM and OTR2M_SM,
which should be ordered separately).

Figure 10-35: SM_C37.94S front panel


Table 10-25: SM_C37.94S front panel LED indicators


Marking Full name Color Function
FAIL Module fail Red Normally off. Lights steadily when card failure is
detected.
ACT. Module active Green Normally blinks at a frequency of 0.5 Hz. Off
or steadily on indicates the card is not running
normally.
LSR ON Laser on Green Lights steadily when the corresponding laser is on.

SM_IO18
The SM_IO18 is a sub-module of the SM_10E/EM_10E that provides 18 dry contact ports, used for
substation alarm monitoring and control. Each port can be defined as input or output by configuration:
 Input port:
 Port name and severity are configurable
 Monitor type is configurable between alarm and event
 Output port:
 Supports manual control
 Supports automatic control by associating an input port

Figure 10-36: SM_IO18 front panel

The main functions supported by the SM_IO18 include:


 18 input/output dry contact ports
 The dry contact port can be defined as input or output on per port basis
 Used for substation alarm monitoring and control
 Up to 4 input commands
 Up to 8 output commands
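The per-port input/output behavior described above can be sketched in a few lines. This is an illustrative model only; the class and attribute names (`SM_IO18`, `DryContactPort`, `associate`) are invented for the example and are not part of any real management API.

```python
from dataclasses import dataclass

@dataclass
class DryContactPort:
    index: int
    direction: str = "input"     # each port is configured as "input" or "output"
    name: str = ""               # configurable port name (input ports)
    severity: str = "minor"      # configurable alarm severity (input ports)
    monitor_type: str = "alarm"  # "alarm" or "event" (input ports)
    state: bool = False          # contact open (False) / closed (True)

class SM_IO18:
    """Toy model of an 18-port dry contact module."""
    def __init__(self):
        self.ports = [DryContactPort(i) for i in range(1, 19)]
        self.associations = {}   # output port index -> associated input port index

    def set_output(self, index, closed):
        port = self.ports[index - 1]
        if port.direction != "output":
            raise ValueError("manual control applies to output ports only")
        port.state = closed

    def associate(self, out_index, in_index):
        # automatic control: the output port follows an associated input port
        self.associations[out_index] = in_index

    def on_input_change(self, in_index, closed):
        self.ports[in_index - 1].state = closed
        for out_i, src_i in self.associations.items():
            if src_i == in_index:
                self.ports[out_i - 1].state = closed

module = SM_IO18()
module.ports[0].direction = "input"    # port 1 monitors a substation alarm contact
module.ports[1].direction = "output"   # port 2 drives a control relay
module.associate(out_index=2, in_index=1)
module.on_input_change(1, closed=True)
print(module.ports[1].state)           # True - output follows the associated input
```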



11 Neptune MPLS-TP and Ethernet cards
This section provides a detailed description of the data modules available for the Neptune Product Line.
Neptune offers the following types of data cards:
 MPLS-TP Service Cards support an advanced Ethernet-based metro-core layer. These are MPLS carrier
class switch cards that enable next-generation Ethernet applications such as triple play, VPLS business
connectivity, 3G Ethernet-based aggregation, and CoC bandwidth applications, supporting up to
2.5 Gbps. The MPLS cards provide complete Provider Bridge (QinQ) functionality and MPLS switching
functionality, offering scalability and smooth interoperability with IP/MPLS core routers. The MPLS
cards work with both electrical and optical interfaces.
 Ethernet Layer 2 Service Cards for adding Layer 2 Ethernet capabilities to the Neptune Product Line
platforms. Ethernet Layer 2 cards support FE and GbE services with both electrical and optical
interfaces.
 Ethernet Layer 1 Service Cards for FE and GbE services with both electrical and optical interfaces.
Enable efficient data transport of Ethernet traffic over SDH infrastructures.

NOTE: MPLS and Ethernet data cards, with optical ports, incorporate SFP transceivers with LC
connectors. Purchase these SFPs only through your local sales representative.

Table 11-1: MPLS-TP/Ethernet card types


Type Designation
MPLS cards
DMFE_4_L2
DMFX_4_L2
DMGE_2_L2
DMGE_4_L2
DMGE_8_L2
DMXE_22_L2
DMXE_48_L2
MPS_2G_8F
MPoE_12G
DHGE_4E
DHGE_8
DHGE_16
DHGE_24
DHXE_2
DHXE_4
DHXE_4O

DHFE_12
DHFX_12
Ethernet Layer 1 cards
DMFE_4_L1
DMFX_4_L1
DMFX_1_L1
DMGE_4_L1

NOTE: All modules have a handle to enable easy removal and insertion. The handle has been
removed from the illustrations in this section so as not to obscure the front panel markings.

11.1 Ethernet layer 2 switching cards


Ethernet Layer 2 cards are used to provide Layer 2 services with full VLAN support to end users over SDH
networks.
To support these services, Layer 2 cards installed in the platforms are interconnected by SDH trails that
serve as EoS links. Virtual Concatenation (VCAT) with Generic Framing Procedure (GFP) encapsulation at
VC-4, VC-3, and VC-12 levels is used to provide the required bandwidth. The resulting network appears as
an Ethernet switch to the end users.
Ethernet Layer 2 cards use link capacity adjustment scheme (LCAS) to provide dynamic bandwidth
allocation on trails carrying the EoS links. The number of SDH containers in the link is automatically
decreased if one or more VCs fail, and automatically increased when the fault is repaired.
Neptune product line Layer 2 service cards provide optical or electrical Ethernet interfaces and support
VC-12/3/4 granularity on their EoS side. They offer full interoperability with the data cards of the Neptune
product line and 3rd party equipment.

11.2 MPLS-TP Service Cards


The MPLS family cards are the MPLS carrier class switch cards for the Neptune Product Line. They enable
service providers to build a cost-effective carrier class Ethernet network over new and existing SDH
networks, supporting any Ethernet-based application and service, including business connectivity (VPLS),
triple play (IPTV drop-and-continue multicast), 3G mobile services, and wholesale/CoC Ethernet leased line
(LL) and bandwidth services, all with carrier grade capability.
The MPLS cards offer up to 3.5 Gbps Ethernet and/or MPLS switching capacity, and up to 2.5 Gbps EoS
bandwidth.
The MPLS cards incorporate the following functions:
 Ethernet Provider Bridge (QinQ) switch – a standard base 802.1d/q/ad bridge/switch.
 MPLS Layer 2 switch – a standard base IETF MPLS and ITU-T T-MPLS switch supporting Ethernet
PseudoWire (PW), Virtual Private LAN Service (VPLS), and P2MP multicast trees.


 SDH mapper – supporting standard Ethernet, and MPLS mapping to GFP/VCAT/LCAS with SDH
VC-12/3/4 granularity.
 Ethernet ports – incorporating GbE ports and FE ports.
 Up to 8/64 EoS/MoT (WAN) ports - with standard GFP/VCAT/LCAS mapping with SDH n x VC-12/3/4
(VCG) granularity.
Each MPLS card includes an Ethernet switch, an MPLS switch, and an SDH mapper. A powerful Network
Processor Unit (NPU) incorporated in each card fulfills the functions of Ethernet and MPLS switches. The
NPU is software programmable, allowing the cards to work as an Ethernet Provider Bridge (QinQ) switch
and/or as Ethernet Provider Bridge plus MPLS switch.

11.3 Layer 1 service cards


NPT-1200 Layer 1 cards offer Ethernet over SDH Layer 1 applications. Typical applications of these cards
include:
 EPL services - Layer 1 P2P interconnection between two Ethernet (10/100/1000 Mbps) UNIs over
dedicated EoS trails.
 Front-end to Layer 2 EoS networks based on Neptune Layer 2 cards (port extension services).

11.4 Multiservice CES cards


Even though Ethernet has a compelling value proposition and carriers roll out broadband packet based
services, not all companies can generate a business case for abandoning their investments in legacy
infrastructure such as PBXs, ATM switches, and radio networks. Operators cannot ignore profitable
revenues gained from traditional TDM services and equipment still dominating the current transport
networks. They would naturally prefer to extend the capabilities and profitability of those networks with
the ability to carry TDM traffic over PSNs such as Ethernet.
CESoPSN, or CES for short, is a method by which a TDM circuit (such as T1 or E1) is “tunneled” through a PSN
that is transparent to the end user. An internetworking function on each end of the PSN transforms TDM
data into packets on ingress and reverses the process on egress. As a result, the TDM equipment on either
end of the PSN perceives a direct connection to the opposite end and is unaware of the intermediary
network emulating the behavior of a TDM circuit. This is illustrated in the following figure.
Figure 11-1: CES over packet network


Neptune platforms enable E1/T1 CES emulation, providing TDM transport over PSNs for backhaul
applications offering a wide range of new broadband data services. These boost the advantages inherent in
packet based networks, including flexibility, simplicity, and cost effectiveness. Neptune platforms support
CESoPSN for E1/T1 interfaces with encapsulation support for CES over MPLS-TP (CESoMPLS).
For cellular operators managing 2G, 2.5G, and 3G base stations connected to the BSC/RNC via multiple
E1/T1 lines, Neptune enables lower cost transport between these locations, replacing more expensive
leased E1/T1 lines.
At the hub or BSC/RNC sites, the Neptune functions as a carrier class multiservice aggregator, optimizing
cellular backhaul by multiplexing various TDM services into a single ChSTM-n. STM-1 support includes
channelized STM-1 with up to 63 x VC-12 channels for SDH or 84 VT1.5 channels for SONET, and
channelized STM-4 with up to 252 x VC-12 channels for SDH or 336 VT1.5 channels for SONET.
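The channel counts quoted above follow directly from the standard SDH/SONET multiplexing structure, and can be verified with simple arithmetic. This is a back-of-the-envelope check, not vendor logic:

```python
# An STM-1 carries one VC-4, which holds 3 TUG-3 x 7 TUG-2 x 3 TU-12 = 63 VC-12
# (SDH mapping). In the SONET mapping, 3 STS-1s of 28 VT1.5 each give 84 VT1.5.
# A channelized STM-4 carries four such payloads.
VC12_PER_STM1 = 3 * 7 * 3      # 63
VT15_PER_STM1 = 3 * 28         # 84

print(VC12_PER_STM1)           # 63 VC-12 per channelized STM-1
print(4 * VC12_PER_STM1)       # 252 VC-12 per channelized STM-4
print(4 * VT15_PER_STM1)       # 336 VT1.5 per channelized STM-4
```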

11.5 Layer 2 service cards functionality


Ethernet Layer 2 cards perform the following functions:
 Layer 2 provider bridge switching
 Wire-speed MAC and VLAN ID based forwarding
 Provider bridge double tagging (QinQ)
 Support of four Classes of Service (CoS)
 RMON statistic counters on ETY and EoS ports
 Traffic protection based on, Ethernet, (MSTP), and ERP
The following sections provide a detailed description of the main functions.

11.5.1 Generic Framing Procedure


GFP, defined in ITU-T Rec. G.7041, is a protocol for mapping data packets into a synchronous transport
system like SDH. GFP requires a fixed amount of overhead for encapsulation that is independent of the data
packets. This allows deterministic matching of bandwidth between the Ethernet flow and the virtually
concatenated SDH stream.
To cater for all mapping requirements, two mapping modes are defined for GFP:
 Frame mapped - used for connections where efficiency and flexibility are essential and reasonable
latency can be tolerated
 Transparent mapped - enables the transport of block-coded client signals (like Fibre Channel, ESCON,
or FICON) that require very low transmission latency
Neptune Layer 2 service cards support the GFP-F (frame-based) mode.


11.5.2 Virtual Concatenation


VCAT, defined in ITU-T Rec. G.707, is a technique used to give SDH additional flexibility in transporting client
signals requiring a bandwidth that does not match the bandwidth granularity of SDH networks.
The approach used by VCAT is to combine the bandwidth available on an arbitrary number of SDH
containers (configured as a virtual concatenated group - VCG) in a way that creates a single logical channel
capable of carrying a single byte-synchronous data stream.
With virtual concatenation, the individual containers are transported over the SDH network independently
and then recombined to restore the original payload signal at the endpoint of the transmission path.
Differential delay due to the different path of each VC is compensated at the end of the path as part of
regrouping the VCs of a VCG.
Virtual concatenation has the following benefits:
 Scalability - allows bandwidth to be selected in VC-4, VC-3, or VC-12 increments, as required, to
match the required payload data rate.
 Efficiency - the resulting signals are easily routed through the SDH network, making more efficient use
of available bandwidth on existing networks.
 Compatibility - virtual concatenation requires only the end nodes to be aware of the containers being
virtually concatenated, making the signals transparent to the core NEs.
The fine bandwidth management made possible by VCAT is particularly effective for the efficient transport
of data services that inherently comprise variable bitrates. For example, consider the transport of a partially
filled GbE signal. Although the nominal bandwidth is 1 Gbps, often the instantaneous rate is only 200 Mbps
to 300 Mbps. Thus, continuous allocation to this GbE signal of a bandwidth equal to the peak value (1
Gbps), as done in pure transport applications, wastes on average 70% of the network bandwidth. With
VCAT, an optimal bandwidth close to the average bandwidth requirements is selected, for example, a
bandwidth of 300 Mbps. To handle the peak bandwidth requirements, ingress buffers are used to shape
the peak traffic to match the provisioned bandwidth.
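The VCG sizing logic in the GbE example above is a simple ceiling division over the container payload rates. The following sketch uses the standard SDH container capacities; the function name is illustrative:

```python
import math

# Approximate SDH container payload rates in Mbps (G.707 capacities).
PAYLOAD_MBPS = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}

def vcg_members(target_mbps, container):
    """Smallest number of containers whose combined payload covers target_mbps."""
    return math.ceil(target_mbps / PAYLOAD_MBPS[container])

# The 300 Mbps example from the text, vs. allocating for the full 1 Gbps peak:
print(vcg_members(300, "VC-3"))    # 7 x VC-3 (~338 Mbps of VCG bandwidth)
print(vcg_members(300, "VC-4"))    # 3 x VC-4, instead of 7 for a full 1 Gbps
```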

11.5.3 Link Capacity Adjustment Scheme


LCAS, defined in ITU-T Rec. G.7042, enables dynamic changes in the amount of bandwidth used for a virtual
concatenation channel. Signaling messages are exchanged within the SDH overhead to change the number
of members included in the VCG. The number of members can either be reduced or increased, and the
resulting bandwidth change is applied without loss of data.
Neptune Layer 2 cards support the dynamic bandwidth adjustment provided by LCAS functionality.
Dynamic bandwidth adjustment allows the increasing or decreasing of the bandwidth of a VCG link. Typical
scenarios where this capability is used include:
 Automatic removal of failed members temporarily from an active VCG and transferring traffic only via
the remaining operational members. When the failure condition is fixed, the Layer 2 card set adds the
members back into the group. This is also useful for traffic protection.
 Adjusting the link bandwidth to the bandwidth required by an application. If the bandwidth allocation
covers only the average amount of traffic rather than the full peak bandwidth, and the average
bandwidth usage changes over time, the allocation can be modified to reflect this change.


11.5.4 Layer 2 switching capabilities


The Neptune Layer 2 cards incorporate a Layer 2 Provider Bridge Ethernet switch that supports VLANs and
double tagging per IEEE 802.1Q and 802.1p. QoS is controlled by four CoS, together with strict priority
queuing and full buffer allocation per QoS.
To provide prioritized/differentiated services, the client's traffic is policed and classified as follows:
 The rate of the flow at the ingress is limited according to a configured Committed Information Rate
(CIR) and Committed Burst Size (CBS).
 Each packet is classified according to one of four CoS by marking it with a configured 802.1Q and a
user priority (802.1p).
Clients' flows are policed to conform to the specific Service Level Agreement (SLA). Each module provides
128 policers, which can be allocated among the module ports.
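A CIR/CBS policer as described above is conventionally modeled as a token bucket. The following is a minimal single-rate sketch; hardware policers typically also color-mark frames rather than simply dropping them:

```python
class Policer:
    """Token bucket with Committed Information Rate and Committed Burst Size."""
    def __init__(self, cir_bps, cbs_bytes):
        self.cir = cir_bps / 8.0       # refill rate in bytes per second
        self.cbs = cbs_bytes           # bucket depth (maximum burst)
        self.tokens = cbs_bytes        # bucket starts full
        self.last = 0.0

    def conform(self, now, frame_bytes):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.cbs, self.tokens + (now - self.last) * self.cir)
        self.last = now
        if frame_bytes <= self.tokens:
            self.tokens -= frame_bytes
            return True                # in profile: forward the frame
        return False                   # out of profile: discard

p = Policer(cir_bps=8_000, cbs_bytes=1_500)  # 1 kB/s, one full-size frame burst
print(p.conform(0.0, 1500))   # True  - initial burst allowed
print(p.conform(0.1, 1500))   # False - bucket not yet refilled
print(p.conform(2.0, 1500))   # True  - refilled after ~1.9 s at 1000 B/s
```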
The Weighted Random Early Discard (WRED) technique, used to smooth the traffic pattern under congestion
conditions, supports the policing procedure as follows:
 When traffic exceeds the module capabilities, the module must discard packets. Any TCP client
detecting discarded packets reduces its transmission rate by half. After packets are no longer
discarded, the rate is slowly increased again as long as there is no packet loss.
 If the module discarded packets only under congestion conditions, traffic volume would suffer
from the sawtooth syndrome: after a series of packets is discarded, all the clients drop their
rate by half, resulting in partial utilization of the network. Together they then increase their
rates until the network is congested again.
WRED prevents this behavior by discarding a small part of the packets before its buffers are full. As a result,
only a small number of TCP clients decrease their rate and traffic utilization does not drop.
Neptune Layer 2 cards support WRED, and its characteristics are separately configurable for each of the
four priority classes of the internal Ethernet switch.

11.5.5 FDB quota provisioning


Ethernet frames are forwarded according to their Destination MAC address and VLAN ID. The forwarding
information is stored in a filtering data base (FDB) (routing table). The size of this database is limited;
therefore, to free space for new addresses, the entries are automatically removed after a configurable
aging time. If the address and VID of a packet don't match any entry in the table, the packet is flooded (sent
to all output ports).
A MAC address storm from a VPN can occupy all free resources of the address table. In the absence of free
resources, packets with new addresses are not learned, causing those addresses to be flooded and
overloading the egress ports.
FDB quota provisioning minimizes this effect by letting the operator set a limited number of entries
(MAC addresses) per VPN (S-VLAN) and block any client port that exceeds the limit. In Neptune Layer 2
cards, when a client exceeds its quota, the card reduces the aging time to the minimum to free the FDB
of the new entries and blocks the offending client port to prevent it from continuing to overload the FDB.
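The learning/flooding interaction with a per-S-VLAN quota can be sketched as follows. This is a simplified illustration; real FDBs are hardware tables with aging timers, and the class name is invented:

```python
class FDB:
    """Toy filtering database with a per-S-VLAN learning quota."""
    def __init__(self, quota_per_svlan):
        self.quota = quota_per_svlan
        self.table = {}                       # (svlan, mac) -> output port

    def learn(self, svlan, mac, port):
        used = sum(1 for (s, _m) in self.table if s == svlan)
        if (svlan, mac) not in self.table and used >= self.quota:
            return False                      # quota exhausted: not learned
        self.table[(svlan, mac)] = port
        return True

    def lookup(self, svlan, mac, all_ports):
        # unknown destination address: flood to all output ports
        key = (svlan, mac)
        return [self.table[key]] if key in self.table else all_ports

fdb = FDB(quota_per_svlan=2)
fdb.learn(100, "aa:aa", 1)
fdb.learn(100, "bb:bb", 2)
print(fdb.learn(100, "cc:cc", 3))            # False - S-VLAN 100 quota exhausted
print(fdb.lookup(100, "cc:cc", [1, 2, 3]))   # [1, 2, 3] - unknown address flooded
print(fdb.lookup(100, "aa:aa", [1, 2, 3]))   # [1] - forwarded on the learned port
```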


11.5.6 Triggers for MSTP


MSTP prevents the creation of loops and enables the protection of Ethernet traffic by the ring topologies
used in SDH networks.
Link bandwidth reduction resulting from failures in VC members of a VCG can in turn cause service
performance degradation. In such cases MSTP is activated to change the network topology and
overcome the failure. Neptune Layer 2 cards have a set of link capacity-related parameters that are used as
triggers for MSTP:
 No members provisioned on the Tx direction
 No members provisioned on the Rx direction
 Partial Loss of Capacity (PLCr) on the receive side
 Partial Loss of Capacity (PLCt) on the transmit side
 Total Loss of Capacity (TLCr) on the receive side
 Total Loss of Capacity (TLCt) on the transmit side
 Link Fail Detection (LFD)
 Remote Defect Indication (RDI) on at least one member in the VCG (only for VCAT mode)
The threshold values of the TLCr and PLCr can be set by the user.


11.5.7 Port-based VLANs


Client frames can enter the provider's network tagged or untagged. A client that provides tagged frames
attaches its CVLAN (Client VLAN ID) and priority bits to the Ethernet frames. The provider uses this
information to identify the client and decide how to handle the traffic within the provider network.
In many cases the provider is not allowed to change the client tagging because the client needs it to
continue the traffic handling at the far end. To enable traffic handling, the provider attaches its SVLAN
(Service Provider VLAN), containing a VLAN ID (VID) and CoS bits, at the ingress port, and removes it at
the egress.

11.5.7.1 Attach/Detach VLAN


Neptune Layer 2 cards enable the provider to add a VLAN tag to incoming untagged frames. This VLAN is
named PVID and is maintained throughout the network. The PVID enables the operator to identify different
clients arriving from different ports, even after being multiplexed in point-to-multipoint (P2MP)
configurations. The PVID is detached from frames going out of the same port that was configured to
attach and detach the PVID.
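The attach/detach behavior can be sketched as two small functions operating on a frame; frames are modeled here as plain dictionaries for illustration only:

```python
def ingress(frame, pvid):
    """Attach the port's PVID to untagged client frames on ingress."""
    if "vlan" not in frame:                  # untagged client frame
        frame = dict(frame, vlan=pvid)       # attach the configured PVID
    return frame

def egress(frame, pvid):
    """Detach the PVID from frames leaving a PVID-configured port."""
    if frame.get("vlan") == pvid:
        frame = {k: v for k, v in frame.items() if k != "vlan"}
    return frame

f = ingress({"dst": "aa:aa", "payload": b"x"}, pvid=10)
print(f["vlan"])                  # 10 - identifies the client inside the network
print("vlan" in egress(f, 10))    # False - the client sees the frame untagged again
```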

11.5.8 UNI on EoS ports


The Neptune Layer 2 cards can serve client traffic arriving through EoS ports as if it were received
from ETY ports. This enables the provider to give the client “port extension” services. For example, a client
that is far from the provider's Neptune Layer 2 card location uses a Neptune Layer 1 card to map its
Ethernet traffic over SDH, and reaches the provider via the SDH network. This traffic is directed to a Layer 2
card EoS port. The traffic is demapped and enters the card. It is then handled as regular traffic that enters
the card via a regular ETY port.

11.5.9 NNI on ETY ports


Neptune Layer 2 cards enable configuring an ETY port as NNI. Traffic on such ports uses QinQ, where the
S-VLAN of the Neptune Layer 2 card network is maintained and transmitted towards the client equipment.

11.5.10 Additional features


Additional supported features include:
 Autonegotiation - supported by ETY ports to select common transmission parameters with the
connected customer equipment. A port capable of transmission at various rates (10/100 Mbps or
10/100/1000 Mbps) and mode (full duplex) exchanges capability data with the connected device. The
two devices then choose the best possible mode of operation shared by both, where higher speed
is preferred over lower speed.
 Store and Forward - mechanism used to check the integrity of the frame at the ingress port. The
frame's source and destination address and CRC are checked. Only error-free frames are forwarded,
and frames with errors are dropped.


 Layer 2 Control Protocol Handling - Ethernet ports handle some specific MAC addresses in a special
way to provide predictable, efficient network behavior. As opposed to a standard service frame
that is transported untouched from side to side, these special frames should be treated differently.
For example, PAUSE frames have their meaning only within the local link and therefore should be
discarded immediately upon reception. Other MAC addresses are configurable to be discarded or
forwarded transparently.

11.5.11 Access control list


The Access Control List (ACL) is a list of permissions attached to an object. The list specifies who or what is
allowed to access the object and what operations are allowed to be performed on the object. In a typical
ACL, each entry in the list specifies a subject and an operation. One of the most important implementations
is to protect routers from various risks, both accidental and malicious. Infrastructure protection ACLs
should be deployed at network ingress points.

11.5.12 C-VLAN translation


This unique feature enables the merging of two different customer VLANs from different locations. Due to
the VLAN translation at the edge of the provider network, two different VLANs that are in different places
can be merged into one VLAN, or an external S-VLAN of another provider can be mapped to an internal
S-VLAN.
Only one C-VLAN translation per UNI port per VSI is supported.
C-VLAN translation is bidirectional.
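The two constraints above — one translation per UNI port per VSI, applied in both directions — can be captured in a small sketch. The class and method names are invented for illustration:

```python
class CVlanTranslator:
    """Toy bidirectional C-VLAN translation at a UNI port, per VSI."""
    def __init__(self):
        self.rules = {}                   # (port, vsi) -> (from_cvlan, to_cvlan)

    def add(self, port, vsi, from_cvlan, to_cvlan):
        if (port, vsi) in self.rules:
            raise ValueError("only one C-VLAN translation per UNI port per VSI")
        self.rules[(port, vsi)] = (from_cvlan, to_cvlan)

    def to_network(self, port, vsi, cvlan):
        frm, to = self.rules.get((port, vsi), (None, None))
        return to if cvlan == frm else cvlan     # translate on ingress

    def to_client(self, port, vsi, cvlan):
        frm, to = self.rules.get((port, vsi), (None, None))
        return frm if cvlan == to else cvlan     # reverse on egress

t = CVlanTranslator()
t.add(port=1, vsi="vsiA", from_cvlan=100, to_cvlan=200)
print(t.to_network(1, "vsiA", 100))   # 200 - merged VLAN inside the network
print(t.to_client(1, "vsiA", 200))    # 100 - reverse direction restores it
```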


11.5.13 C-VLAN bundling


VLAN bundling carries the traffic of multiple VLANs. Multiple customer C-VLANs can map through a single
Ethernet service on the UNI. All-to-one bundling is a special case whereby all customer VLANs map to a
single Ethernet service at the UNI.

11.5.14 Access-controlled management


Intelligent management access controls are needed at the customer edge to keep unauthorized users from
accessing the provider’s network. Preventing denial of service attacks involves deciding whether to accept,
discard, or monitor certain traffic.
Figure 11-2: ACL description


11.5.15 Port mirroring


Port mirroring helps the supervisor monitor the network. Port mirroring copies traffic from a specific port
to a target port, namely, copying the packets entering or exiting a port, or entering a VLAN, to a local port
for local monitoring. This mechanism helps track network errors or abnormal packet transmission without
interrupting the flow of data. Moreover, the port mirroring feature enables a basic "lawful interception"
application for future development.
Figure 11-3: Port mirroring description

11.5.16 Ingress/egress C-VLAN filtering


Ingress/Egress C-VLAN filtering is a means of filtering out unwanted traffic on a port. When VLAN filtering
is enabled, packets are only accepted or transmitted into a port if they match the VLAN configuration of
that port. Based on whether the port is on the ingress list of the VLAN associated with a frame, the port
determines whether the frame can be processed.
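At its core this check is a membership test against the port's configured VLAN list, which a minimal sketch makes concrete:

```python
def vlan_filter(port_vlans, frame_vlan, filtering_enabled=True):
    """Accept a frame only if its VLAN matches the port's VLAN configuration."""
    if not filtering_enabled:
        return True                      # filtering disabled: accept everything
    return frame_vlan in port_vlans

print(vlan_filter({10, 20}, 10))         # True  - VLAN matches the port config
print(vlan_filter({10, 20}, 30))         # False - unwanted traffic filtered out
print(vlan_filter({10, 20}, 30, filtering_enabled=False))   # True
```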

11.5.17 L2CP flooding protection


The Neptune Product Line provides protection against Layer 2 Control Protocol (L2CP) flooding sent by
malicious users. Protection is implemented by limiting the number of L2CP frames which can be received
from Neptune ports through the combined application of the following techniques:
 BPDU Blocking
 CFM
 IGMP Policing
 Link OAM
 Tunnel OAM


11.5.17.1 L2CP processing overview


This section provides an overview of the L2CP processing.
 For a given L2 Control Protocol or OAM there are four possibilities for processing:
 Pass to an EVC for tunneling
 Peer at the UNI
 Peer and pass to an EVC for tunneling
 Discard at the UNI
 The requirements of L2CP processing on UNI port are defined in MEF20:
 Pass to EVC
 No pass to EVC (Filter)
 Filter means the L2CP or OAM frames could be either peered or discarded, depending on the service
type
Neptune's platform functionality covers L2CP processing requirements in MEF20.

11.5.18 Layer 2 card services


Neptune Layer 2 cards support the following services:
 Transparent LAN Service (TLS) - connects multiple ports belonging to the same customers over
shared SDH bandwidth, with user-defined grades of service.
 Virtual Leased Line (VLL) - connects two external ports over shared SDH bandwidth.
 Dedicated VLL - VLL service over SDH capacity dedicated only to a single customer. Provides zero
packet loss, virtually no delay, and zero delay variation.
 Guaranteed VLL - VLL with zero frame loss rate and bounded delay and delay variation.
 ISP connectivity - a single port configured as an ISP port connected to multiple customer ports (up to
a maximum of 4096).
 Ethernet Private Line (EPL) - support provisioning to provide EPL services point-to-point (P2P)
interconnection between two Ethernet UNI over dedicated EoS trails.
Multiple types of services can be provisioned on one port, where each service is policed independently at
the ingress. This imposes a strict limit on the input rate, separately for each service.
The customer can mark frames destined for different services based on port 802.1Q and 802.1p tag.


11.6 Evolution: Ethernet to MPLS to MPLS-TP


Ethernet service, the preeminent LAN technology, is now becoming the dominant service for the metro
domain (WAN) as well. Consumers require guaranteed service delivery of the appropriate quality, expecting
operators to provide differentiated services with comprehensive carrier class capabilities, from access to
core.
MultiProtocol Label Switching (MPLS) is a mechanism for transporting data using a connection-oriented
approach. Standardized by the IETF, MPLS is a scalable protocol-agnostic mechanism designed to carry both
circuit and packet traffic over virtual circuits known as LSPs. MPLS fits into the category of packet-switched
networks, falling in between the traditional OSI definitions of the Data Link Layer (Layer 2) and the Network
Layer (Layer 3). MPLS makes packet-forwarding decisions based on the contents of the label without
examining the packet itself.
MPLS provides a unified data-carrying service for circuit-like packet-switching client data. MPLS can be used
to carry many different kinds of traffic, including IP packets, native ATM, and Ethernet frames. MPLS has
gradually been replacing traditional transport technologies, such as frame relay and ATM, mostly because it
is better aligned with current and future technology needs.
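The label-based forwarding decision described above can be illustrated with a toy label-switching step: the node reads only the top label, consults its forwarding table, swaps or pops the label, and never inspects the payload. Table contents and names here are invented for the example:

```python
# Toy Label Forwarding Information Base: maps the incoming interface and top
# label to an operation, outgoing label, and outgoing interface.
LFIB = {
    (1, 17): ("swap", 42, 3),    # transit LSR: swap label 17 -> 42, out if 3
    (3, 42): ("pop", None, 5),   # egress LER: remove the label, out if 5
}

def forward(in_if, labels, payload):
    op, out_label, out_if = LFIB[(in_if, labels[0])]
    if op == "swap":
        labels = [out_label] + labels[1:]
    elif op == "pop":
        labels = labels[1:]
    return out_if, labels, payload       # payload is never examined

out_if, labels, _ = forward(1, [17], b"ip-packet")
print(out_if, labels)                        # 3 [42] - label swapped mid-path
print(forward(3, [42], b"ip-packet")[:2])    # (5, []) - label popped at egress
```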
MPLS Transport Profile (MPLS-TP) is a connection-oriented packet-switched (CO-PS) application for Layer 2
transport network technology that incorporates elements of both MPLS and PW architectures, such as the
MPLS forwarding paradigm and PW Emulation Edge to Edge (PWE3) client mapping. MPLS-TP is based on
the same architectural principles of layered networking used in transport network technologies like SDH,
SONET, and OTN. MPLS-TP extends IP/MPLS beyond the core network into the metro network, providing
reliable packet-switching transport between these networks. MPLS-TP simplifies MPLS by eliminating
elements of MPLS that are not necessary in a transport-oriented network.
The MPLS-TP protocol enables more affordable E2E MPLS deployments by streamlining operations models
and consolidating/simplifying network topologies. For example, one of the key elements is eliminating the
costs associated with distributed control plane functionality being integrated into each node across an
MPLS-based network. This is accomplished through the use of a more affordable transport-oriented static
configuration through a transport-grade NMS, helping operators reduce their OPEX significantly and get
networks ready to offer true NG service convergence.
MPLS-TP as a transport layer for metro Carrier Ethernet services, rather than using Ethernet as both
transport and service layers, enhances the Ethernet service, enabling it to meet a complete carrier class
standard. MPLS-TP addresses all key attributes defined by MEF for Carrier Ethernet:
 Hard Quality of Service (QoS), with guaranteed end-to-end (E2E) Service Level Agreements (SLAs) for
business, mobile, and residential users that enables efficient differentiated services, allowing service
providers (SPs) to tailor the level of service and performance to the requirements of their customers
(real-time, mission-critical, BE, etc.), as well as assuring the necessary network resources for
Committed Information Rate (CIR) and Excess Information Rate (EIR).
 Reliability, with a robust, resilient network that can provide uninterrupted service across each path.
This includes network protection of less than 50 msec using link/node Fast ReRoute (FRR) and
meeting a five 9s standard of E2E service availability.
 Scalability of both services and bandwidth, ranging from megabits to hundreds of gigabits with
variable granularity and hundreds of thousands of flows supporting controlled scalability for both the
number of elements and the number of services on the network.


 End to End Service Management through a single comprehensive Network Management System
(NMS) that provisions, monitors, and controls many network layers simultaneously. Advancement in
the management of converged networks takes advantage of the “condensed” transport layer for
provisioning and troubleshooting while presenting operators with tiered physical and technology
views that are familiar and easy to navigate. The comprehensive NMS simplifies operations by
allowing customers and member companies to monitor and/or control well-defined and secure
resource domains with partitioning down to the port.
 Security, with a safe environment that protects subscribers, servers, and network devices, blocking
malicious users, Denial of Service (DoS), and other types of attacks. Use of provider network
constraints, as well as complete traffic segregation, ensures the highest level of security and privacy
for even the most sensitive data transmissions.
Figure 11-4: Carrier class Ethernet requirements

MPLS-TP is both a subset and an extension of MPLS, already widely used in core networks. It bridges the
gap between packet and transport worlds by combining the efficiency of packet networks with the
reliability, carrier-grade features, and OAM tools traditionally found in SDH transport networks. MPLS-TP
builds upon existing MPLS forwarding and MPLS-based pseudowires, extending these features with in-band
active and reactive OAM enhancements, deterministic path protection, and a network management-based
static provisioning option. To strengthen transport and management functionality, MPLS-TP excludes
certain functions of IP/MPLS, such as label-switched path (LSP) merge, Penultimate Hop Popping (PHP) and
Equal Cost Multi Path (ECMP).


As the following figure illustrates, MPLS-TP is both a subset of MPLS and an extension of MPLS, tailored for
transport networks.
Figure 11-5: Relationship of MPLS-TP to IP/MPLS

As part of MPLS, MPLS-TP falls under the umbrella of the IETF standards. RFC 5317 outlined the general
approach for the MPLS-TP standard and has been followed by more than 10 additional requirement and
framework RFCs. There are also many more working group documents in the editor's queue or in late-stage
development. Although MPLS-TP is not yet fully standardized, operators are comfortable enough with its
status to have begun rolling out networks based on it.
MPLS-TP is supported across product lines, enabling E2E QoS assurance across network domains. As a
leader in MPLS-TP technology, we are participating in the standards development process as it unfolds. Our
MPLS-TP components are designed to be future proof, capable of incorporating and supporting new
standard requirements as they are defined.

11.7 MEF CE2.0-based services


MPLS-based data services are the basis for the profitable triple play, enterprise, wholesale, and mobile
customer services in high demand today. Neptune platforms provide these services using PB technology,
PW technology, or a mixture of both. Examples of these services, with brief explanations of the Neptune
implementation features, are provided in the following sections.

11.7.1 Comprehensive set of MEF CE2.0 services


The Metro Ethernet Forum (MEF) has defined carrier class transport solutions for emerging Ethernet-based
applications, including:
 Triple play
 Business connectivity (enterprise and SMB)
 Ethernet-based mobile aggregation
 DSLAM transport and aggregation
These services are technology-agnostic, and can be offered over IP/MPLS, MPLS-TP, Ethernet, or any
combination of technologies. The range of data-centric services defined by the MEF standards includes:
 Ethernet Line (E-Line) for P2P connectivity, used to create Ethernet private line services,
Ethernet-based internet access services, and P2P Ethernet VPNs. These include:
 Ethernet Private Line (EPL): P2P Ethernet connection that uses dedicated bandwidth, providing
a fully managed, highly transparent transport service for Ethernet. EPL provides an extremely
reliable and secure service, as would be expected from a private line.
 Ethernet Virtual Private Line (EVPL): P2P connectivity over shared bandwidth. Service can be
multiplexed at the user-to-network interface (UNI) level.
E-Line services may be implemented, for example, through an MPLS-based Virtual PseudoWire
Service (VPWS). This implementation provides P2P connectivity over MPLS PW, sharing the same
tunnel on the same locations and benefiting from MPLS E2E hard QoS (H-QoS) and carrier class
capabilities.
 Ethernet LAN (E-LAN) for multipoint-to-multipoint (MP2MP) (any-to-any) connectivity, designed for
multipoint Ethernet VPNs and native Ethernet Transparent LAN Services (TLS). These include:
 Ethernet Private LAN (EPLAN): Multipoint connectivity over dedicated bandwidth, where each
subscriber site is connected to multiple sites using dedicated resources (so different customers'
Ethernet frames are not multiplexed together).
 Ethernet Virtual Private LAN (EVPLAN): Multipoint connectivity over shared bandwidth, where
each subscriber site is connected to multiple sites using shared resources. This is a highly
cost-effective service, as it can leverage shared transmission bandwidth in the network.
E-LAN services may be implemented, for example, through an MPLS-based VPLS. This implementation
provides multipoint connectivity over MPLS PW, sharing the same tunnel, and enables delivery of
any-to-any connectivity that expands a business LAN across the WAN. VPLS enables SPs to expand
their L2 VPN service offerings to enterprise customers. VPLS provides the operational cost benefits of
Ethernet with the E2E QoS of MPLS.

Classic VPLS service creates a full mesh between all network nodes and, under certain circumstances,
this may not be the most efficient use of network resources. With H-VPLS, full mesh is created only
between hub nodes using Split Horizon Groups (SHGs). Spoke nodes are only connected to their hubs,
without SHGs. This efficient approach improves MP2MP service scaling and allows less powerful
devices such as access switches to be used as spoke nodes, since it removes the burden of
unnecessary connections.
 E-Tree (Rooted-Multipoint) for point-to-multipoint (P2MP) multicast tree connectivity, designed for
BTV/IPTV services. These include:
 Ethernet Private Tree (EP-Tree): In its simplest form, an E-Tree service type provides a single
root for multiple leaf UNIs. Each leaf UNI exchanges data only with the root UNI. This service
enables very efficient bandwidth use for BTV or IPTV applications, such as multicast/broadcast
packet video: separate copies of a packet need to be sent only on branches of the tree that do
not share the same path.
 Ethernet Virtual Private Tree (EVP-Tree): An EVP-Tree is an E-Tree service that provides
rooted-multipoint connectivity across a shared infrastructure supporting statistical multiplexing
and over-subscription. EVP-Tree is used for hub and spoke architectures in which multiple
remote offices require access to a single headquarters, or multiple customers require access to
an internet SP's point of presence (POP).
E-Tree services may be implemented, for example, through an MPLS Rooted-P2MP Multicast Tree
that provides an MPLS drop-and-continue multicast tree on a shared P2MP multicast tree tunnel,
supporting multiple Digital TV (DTV)/IPTV services as part of a full triple play solution. LightSOFT
provides full support for classic E-Tree functionality as of the current release.

 E-Access (Ethernet Access) for Ethernet services between UNI and E-NNI endpoints, based on
corresponding Operator Virtual Connection (OVC) associated endpoints. Ethernet services defined
within the scope of this specification use a P2P OVC which associates at least one OVC endpoint as an
E-NNI and at least one OVC endpoint as a UNI. These services are typically Ethernet access services
offered by an Ethernet Access Provider. The Ethernet Access Provider operates the access network
used to reach SP out-of-franchise subscriber locations as part of providing E2E service to subscribers.
Figure 11-6: MEF definitions for Ethernet services

The Neptune product line supports the full set of MEF services, including E2E QoS, C-VLAN translation, flow
control, and Differentiated Services Code Point (DSCP) classification (see MPLS-TP and Ethernet Solutions).

11.7.2 Ethernet Private Line (EPL)/Ethernet Virtual Private Line (EVPL)
VPWS forms a P2P Ethernet service between two sites belonging to the same customer. P2P services can
be dedicated per customer or shared through statistical multiplexing between customers.
VPWS uses P2P tunnels originating at the source PE devices, traveling through Transit Ps, and terminating
at the destination PE. The term PseudoWire (PW) encapsulation is used to refer to transporting P2P
Ethernet traffic over an MPLS tunnel.
As illustrated in the following figure, the Source PE pushes two MPLS labels into each customer's Ethernet
packet as it enters the tunnel. The inner MPLS label is the VC label, and represents the VPN to which the
packet belongs. The VC label serves as a demultiplexer field, allowing aggregation of multiple VPNs into a
single tunnel and thereby providing a scalable tunneling solution rather than a dedicated tunnel per VPN.
The outer MPLS label is the Tunnel label, and represents the tunnel to which the packet is mapped.
The Transit P provider devices simply swap the MPLS labels from the incoming port to the outgoing port.
The Destination PE terminates the tunnel and identifies the packet VPN based on the VC label. The
Destination PE then removes (pops) the two MPLS labels and forwards the packet to the customer
equipment (CE) port(s).
Figure 11-7: P2P MPLS tunnel example
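The push/swap/pop flow described above can be sketched in a few lines. This is a minimal illustration (our own function names, not the Neptune data plane): the source PE pushes an inner VC label and an outer tunnel label, a transit P swaps only the outer label, and the destination PE pops both and maps the VC label back to a VPN.

```python
# Minimal sketch of two-label VPWS forwarding; names are illustrative only.

def source_pe_push(frame, vc_label, tunnel_label):
    """Encapsulate a customer frame; index 0 of the stack is the outer label."""
    return ([tunnel_label, vc_label], frame)

def transit_p_swap(packet, label_map):
    """Swap the outer tunnel label per the node's incoming->outgoing table."""
    labels, frame = packet
    return ([label_map[labels[0]]] + labels[1:], frame)

def destination_pe_pop(packet, vpn_by_vc):
    """Terminate the tunnel: identify the VPN by the VC label, pop both labels."""
    (tunnel_label, vc_label), frame = packet
    return vpn_by_vc[vc_label], frame  # frame goes to this VPN's CE port(s)

# Example path PE1 -> P -> PE2, with VC label 42 carried over tunnel label 100:
pkt = source_pe_push(b"customer-frame", vc_label=42, tunnel_label=100)
pkt = transit_p_swap(pkt, {100: 200})            # transit P swaps 100 -> 200
vpn, frame = destination_pe_pop(pkt, {42: "VPN-A"})
```

Note how the VC label survives the transit hop untouched, which is what allows many VPNs to share one tunnel.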

11.7.3 Ethernet Private LAN (EPLAN)/Ethernet Virtual Private LAN (EVPLAN)
VPLSs and TLSs provide connectivity between geographically dispersed customer Ethernet sites across the
SP network, creating a virtual LAN network. The interconnected customer sites form a Layer 2 VPN.
VPLS service can be configured for MP2MP services (VPLS full mesh), hub and spoke services (VPLS partial
mesh), and statistical multiplexing between various virtual LAN customer VPNs.
VPLS uses the same tunnels and PWs as VPWS, but with MP2MP connectivity. In the following
figure, the three customer sites are connected via the provider's VPLS network, and can communicate
among themselves using standard Ethernet bridging and MAC learning as if they were all on a single LAN.
Figure 11-8: VPLS service example

Sites that belong to the same MPLS VPN expect their packets to be forwarded to the correct destinations.
This is accomplished through the following means:
 Establishing a full mesh of MPLS LSPs or tunnels between the PE sites.
 MAC address learning on a per-site basis at the PE devices.
 MPLS tunneling of customer Ethernet traffic over PWs while it is forwarded across the provider
network.
 Packet replication onto MPLS tunnels at the PE devices, for multicast-/broadcast-type traffic and for
flooding unknown unicast traffic.
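The MAC learning and flooding behavior listed above can be sketched as follows. This is an illustrative toy (hypothetical class name; split-horizon on the PW mesh is omitted for brevity): the PE learns each source MAC on ingress, forwards known unicast as a single copy, and replicates broadcast or unknown unicast to all other attachments.

```python
# Toy VPLS PE: per-site MAC learning plus flooding of unknown destinations.

class VplsPe:
    def __init__(self, attachments):
        self.attachments = set(attachments)  # local ACs plus PWs to remote PEs
        self.fib = {}                        # learned MAC -> attachment

    def forward(self, src_mac, dst_mac, in_attachment):
        self.fib[src_mac] = in_attachment    # learn the source on ingress
        if dst_mac in self.fib:
            return {self.fib[dst_mac]}       # known unicast: one copy
        # broadcast/unknown unicast: flood to all attachments except ingress
        return self.attachments - {in_attachment}

pe = VplsPe(["AC1", "PW-to-PE2", "PW-to-PE3"])
flooded = pe.forward("mac-A", "ff:ff:ff:ff:ff:ff", "AC1")  # broadcast floods
learned = pe.forward("mac-B", "mac-A", "PW-to-PE2")        # mac-A was learned
```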

11.7.4 Multicast optimized rooted-MP services


Neptune platforms provide E-Tree services with maximum efficiency at minimum cost. Metro network
optimization is achieved by an efficient MPLS P2MP multicast tree carrying IPTV services concurrently with
hub and spoke ("Star VPLS") connectivity for other triple play services such as VoD, VoIP, and HSI.
The P2MP tunnels carry multicast content such as IPTV in a triple play network, but P2MP tunnels are not
enough on their own. Two other functionalities complete the triple play solution:
 Star VPLS
 IGMP snooping

The triple play service delivery network architecture includes the following components:
 E2E MPLS carrier class capabilities. MPLS capabilities assure the QoS of IPTV service delivery over
dedicated P2MP tunnels (MPLS multicast tree).
 Multiple distributed PE service edges (leaf PE). Leaf PEs terminate the IPTV downstream traffic
arriving over P2MP tunnels and apply IGMP snooping, policing, and traffic engineering on upstream
traffic. This gives SPs the ability to scale their IPTV network.
 Efficient IPTV multicast distribution. IPTV distribution utilizes an efficient drop-and-continue
methodology, using an MPLS P2MP multicast tree to deliver IPTV content across the metro
aggregation network. This allows SPs to optimize bandwidth utilization over the metro aggregation
network. It also enables simple scaling capabilities as IPTV service demands increase.
 IGMP snooping at the PE leaf service edges allows the PE device to deliver only the IPTV channels
requested by the user, further improving bandwidth consumption over the Ethernet access ports and
enabling easy scalability as the number of IPTV channels grows.
 Star VPLS topology to carry the VoIP, VoD, and HSI P2P services. The star VPLS is built over the
aggregation network from the root PE (aggregator) device that connects the edge router/BRAS to the
leaf PE that connects the IPDSLAM/MSAN. This star VPLS also carries the bidirectional IPTV control
traffic that is either sent by the router downstream (IGMP query), or sent by the subscriber
set-top-box (STB) upstream at channel zapping events (IGMP join/leave requests).
 E2E interoperability with the DSLAM/MSAN and MSER, implemented either by the Ethernet or the
MPLS layer. The P2MP multicast tree continues from the PIM-SM multicast tree over the core
network.
A P2MP tunnel originates at the source PE and terminates at multiple destination PEs. This tunnel has a
tree-and-branch structure, where packet replication occurs only at branching points along the tree. This
scheme achieves high multicast efficiency, since only one copy of each packet traverses any given link of
the P2MP tunnel. The Neptune can act as both a transit P and a destination PE within the same P2MP
tunnel, in which case it can be referred to as a Transit PE rather than a Transit P.
The following figure illustrates a P2MP multicast tree with PE1 as the source PE (root), P1 as a transit P, PE2
as a transit PE (leaf PE), and PE3, PE4, and PE5 as the destination or leaf PEs. The link from PE1 to P1 is
shared by all transit and destination leaf PEs; therefore the data plane sends only one packet copy on that
link.
Figure 11-9: P2MP multicast tunnel example
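The efficiency property stated above can be checked with a short sketch: in a P2MP tree, replication happens only at branch points, so every link carries exactly one copy of each packet. The topology below mirrors the Figure 11-9 example (node names assumed from the text).

```python
# Count packet copies per link of a P2MP tree; the invariant is 1 per link.

def copies_per_link(tree, root):
    """tree: dict parent -> list of children. Returns {(parent, child): copies}."""
    copies, stack = {}, [root]
    while stack:
        node = stack.pop()
        for child in tree.get(node, []):
            # One copy per outgoing branch, replicated only here at this node.
            copies[(node, child)] = copies.get((node, child), 0) + 1
            stack.append(child)
    return copies

tree = {"PE1": ["P1"], "P1": ["PE2", "PE3"], "PE2": ["PE4", "PE5"]}
links = copies_per_link(tree, "PE1")
# The shared PE1->P1 link carries a single copy no matter how many leaves exist.
```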

The following figure illustrates a second example of a P2MP multicast tree arranged over a multi-ring
topology network. The multicast tunnel paths are illustrated in both a physical layout and a logical
presentation. In this example, PE1 is the source PE (root); P1 and P2 are transit Ps; PE2, PE3, PE5, and PE6
are transit leaf PEs; and PE4 and PE7 are destination leaf PEs.
Figure 11-10: P2MP multicast tunnel example - physical and logical networks

The full triple play solution, incorporating P2MP multicast tunnels, star VPLS, and IGMP snooping, is
illustrated in the following figure. The P2MP multicast tunnels carry IPTV content in an efficient
drop-and-continue manner from the TV channel source, headend router, and MSER, through the root PE
(PE1) to all endpoint leaf PEs. The VPLS star carries all other P2P triple play services, such as VoIP, VoD, and
HSI. The VPLS star also carries the IGMP messages both upstream (request/leave messages from the
customer) and downstream (query messages from the router). IGMP snooping is performed at the
endpoint leaf PEs to deliver only the IPTV channels requested by the user. This allows scalability in the
number of channels, as well as freeing up bandwidth for other triple play services.
Figure 11-11: Triple play network solution for IPTV VoD VoIP and HSI services

11.7.5 IGMP-aware MP2MP VSI


IPTV services are in demand, but they can be complicated to implement. This section describes a typical
IPTV network scenario, and highlights the efficient solutions provided by Neptune networks.
IPTV services typically combine elements of both unicast and multicast traffic. Unicast and non-routable
multicast or broadcast traffic (DHCP, VoD, etc.) is carried on the same interface. Limiting the service to a
source-specific multicast (SSM) configuration resolves privacy issues, since different content providers use
different source addresses. Subscribers are managed using IGMPv3, and distribution is limited to a single
IP/MPLS domain. IGP provides full mesh IP connectivity and external route distribution as needed, and LDP
provides full mesh MPLS connectivity between PE nodes. The data plane supports native IP forwarding for
SSM, with strict RPF to eliminate loops. The control plane supports native IP multicast through PIM-SM in
SSM mode.
This basic network configuration enables native IP multicast in the IP/MPLS domain, by enabling PIM-SM on
all interfaces within a single autonomous system (AS). Since only SSM mode is supported, there is no need
to discover and configure PIM rendezvous points (RPs). This network configuration can also enhance an
existing MP2MP L2VPN service by making it IGMP-aware. IGMP-aware VSI functionality provides a solution
for the problem of delivery of routable IP multicast traffic for IPTV applications. Regular unicast and
broadcast traffic can continue to be handled in the standard VPLS manner. Routable multicast traffic can be
forwarded natively along source-based MDTs created by PIM-SM within the domain. Traffic is filtered
based on subscriber requests at the subscriber-facing edge. The elements of this solution are illustrated in
the following figure.
Figure 11-12: IPTV solution

IGMP-aware MP2MP VSIs augment the network elements illustrated in the preceding figure by combining
multicast and unicast traffic on the same interfaces, and reducing multicast traffic towards subscribers at
the domain edge. This approach uses standard VPLS mechanisms for intra-domain delivery. Multicast
delivery is implemented through ingress replication across a full mesh of PWs, filtered based on subscriber
requests to eliminate unnecessary traffic. These elements are highlighted in the following figure.
On the management plane, this approach is implemented through an enhanced VSI configuration that
includes enabling IGMP proxy functionality. Upstream (host) and downstream (router) AC (link) and peer
(node) must be explicitly configured as IGMP-aware, and assigned their own IP addresses and subnet
masks. On the control plane, IGMP proxy is implemented through configuring one instance per VSI,
including the corresponding upstream and downstream node and interface parameters. IGMP queries and
responses are handled at the control plane level. On the data plane, traffic received from an IGMP-aware
AC or peer is separated and handled according to its type (IGMP traffic, non-IGMP routable IP multicast, or
other MP2MP VSI traffic).
For example, the following figure illustrates a network reference model for IGMP-aware VSI.
Figure 11-13: Simple network reference model for IGMP-aware VSI

This diagram shows an IP/MPLS domain representing a single AS with an IGP (IS-IS or OSPF) running on all
intra-AS links. An MP2MP L2VPN service (VPLS) is set up between some PEs, with a full mesh of PWs set up
between all VSIs representing this service in each of the affected NEs using tLDP.
An edge multicast router is connected to one of the PEs of an MP2MP L2VPN (VPLS) service. Multiple
subscribers to this content are connected to other PEs participating in this VPLS instance via access LANs.
Each subscriber indicates its interest in one or more IPTV channels using IGMPv3, with each IPTV channel
mapped to exactly one SSM Multicast Channel.

The VSI representing the VPLS service in each of the affected PEs is marked as IGMP-aware, and its
relevant ACs are marked as Upstream or Downstream. Each PW connecting the VSI that is directly
connected to the edge multicast router with a VSI that is directly connected to a subscriber LAN is
treated as a Downstream interface in the former and as an Upstream interface in the latter. An IGMP
Proxy instance is associated with each such VSI and treats the VSI's Upstream and Downstream ACs and
PWs as its own upstream and downstream interfaces, respectively.
When an Ethernet frame is received from the Upstream AC or PW associated with an IGMP-aware VSI, it is
checked to see whether it belongs to one of the following traffic types:
 IGMP packets. These are identified by Ethertype being IPv4 and IP protocol number being IGMP. The
IGMP packets are trapped to the IGMP Proxy instance for processing.
 Routable IP multicast packets. These are identified by Ethertype being IP, IP protocol number being
different from IGMP, and Destination IP address being a routable IP multicast address. The routable IP
multicast packets undergo normal VPLS flooding, subject to additional filtering based on the contents
of the Group Membership DB built by the corresponding IGMP Proxy instance.
 All other packets. These frames receive normal VSI forwarding in accordance with the L2 FIB of the VSI
created by the normal MAC Learning process.
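The three-way separation above can be sketched as a simple classifier. Function and field names here are our assumptions; the constants follow the standard numbering (Ethertype 0x0800 = IPv4, IP protocol 2 = IGMP), and 224.0.0.0/24 is treated as non-routable link-local multicast.

```python
# Classify a frame from an IGMP-aware AC/PW into one of three handling paths.

import ipaddress

ETHERTYPE_IPV4 = 0x0800
IGMP_PROTO = 2
LINK_LOCAL = ipaddress.ip_network("224.0.0.0/24")

def classify(ethertype, ip_proto, dst_ip):
    if ethertype != ETHERTYPE_IPV4:
        return "other"                 # normal VSI forwarding via the L2 FIB
    if ip_proto == IGMP_PROTO:
        return "igmp"                  # trapped to the IGMP Proxy instance
    dst = ipaddress.ip_address(dst_ip)
    if dst.is_multicast and dst not in LINK_LOCAL:
        return "routable-multicast"    # VPLS flooding, filtered by membership DB
    return "other"
```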
With this network model, these rules result in the following traffic handling:
 Unicast traffic is forwarded as in a normal MP2MP VSI. For example:
 Unicast traffic generated by triple play services (such as VoIP, internet access, or VoD traffic)
 Fast delivery of the baseline picture after selecting a new IPTV channel by the subscriber

 Each routable IP multicast packet received from the server by the directly connected PE is
forwarded (using ingress replication) to all PEs connected to subscriber LANs that have requested
the corresponding Multicast Channel.
 The PE that is directly connected to the subscriber LANs forwards each routable IP multicast packet
received from its single Upstream PW to all subscriber LANs where subscribers have requested this
channel. The packet is not sent to LANs where nobody has requested the channel.
Figure 11-14: IPTV solution - focus on IGMP awareness
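The filtered ingress replication above can be sketched as one set operation: the PE next to the content server copies a multicast packet only toward PWs whose remote subscribers joined the channel, per the IGMP Proxy's Group Membership DB (all names here are our assumptions).

```python
# Ingress replication filtered by SSM channel membership.

def replicate(channel, membership_db, peer_pws):
    """channel: (source, group) SSM pair. Returns the PWs that receive a copy."""
    return membership_db.get(channel, set()) & set(peer_pws)

db = {("10.0.0.1", "232.1.1.1"): {"PW-to-PE2", "PW-to-PE4"}}
targets = replicate(("10.0.0.1", "232.1.1.1"), db,
                    ["PW-to-PE2", "PW-to-PE3", "PW-to-PE4"])
# PE3, with no interested subscribers, receives no copy.
```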

11.8 Quality of Service (QoS)


MPLS, together with Connection Admission Control (CAC) and Traffic Engineering (MPLS-TE), supports
guaranteed end-to-end (E2E) SLAs for business, mobile, and residential users. This level of QoS enables
efficient differentiated services (DiffServ), allowing service providers to tailor the level of service and
performance to the requirements of their customers (real-time, mission-critical, best-effort, etc.), as well as
assuring the necessary network resources for CIR and EIR. Built-in TM capabilities support the following
QoS mechanisms:
 Hierarchical QoS enables fine tuning of traffic flow based on a structured approach and a finer
granularity of traffic categorization.
 Eight CoS levels per port used for service differentiation, maximizing SLA diversity and optimizing
packet handling throughout the network. Each CoS can be assigned a scheduling priority.
 Auto Queuing, with a 64K-queue traffic manager, for true E2E bandwidth guarantees per MPLS tunnel.

 Auto WRED mechanism for TCP-friendly congestion management. Optional manual WRED, where the
user can configure WRED curves and assign them per CoS on both MPLS and non-MPLS ports.
 Auto Shaping that provides rate limiting and burst smoothing. Optional manual shaping, where the
user can configure committed and excess rate limits per CoS on non-MPLS ports.
 Auto Weighted Fair Queuing (WFQ) scheduling mechanism, ensuring that bandwidth is distributed
fairly among individual queues. Optional manual scheduling, where the user can configure a weight
per CoS per switch.

11.8.1 Traffic management and performance


Intelligent TM enables reliable provision of different SLA levels. For example, policer profiles encapsulating
the bandwidth parameters defined for Ethernet services are one of the tools used by TM, allowing greater
flexibility when managing different customer scenarios. Bandwidth allocations and traffic priority can be
configured per ingress or egress UNI ports, as well as per port, per EVC, and per CoS.
The Neptune supports three queue scheduling modes, with each port handling up to eight CoS
queues:
 Strict Priority: Higher priority queues are entitled to utilize all the bandwidth allocated to that port.
Packets of lower priority are only transmitted when the higher priority queue is empty.
 Weighted Round Robin (WRR): Packets in all queues are sent in order based on the weighted value of
each queue.
 Enhanced (Strict Priority + WRR [SPQ]): The port's eight queues are organized into two groups, based
on the CoS delimiter configured by the user. Strict Priority mode is applied to scheduling decisions
between queues in the higher priority group and queues in the lower priority group. WRR mode is
applied to scheduling decisions between queues within the same priority group.
Enhanced mode is configurable on PB-based ports. MPLS ports are automatically set to Enhanced
mode.
This hierarchical approach is illustrated in the following figure.
Figure 11-15: Traffic management with policer profiles
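The Enhanced (SP + WRR) mode described above can be sketched as follows. This is a rough illustration, not the card's scheduler: strict priority decides between the two CoS groups formed by the delimiter, and weighted credits decide among non-empty queues inside the winning group.

```python
# One scheduling decision in Enhanced (SP between groups, WRR within) mode.

def next_queue(queues, weights, cos_delimiter, credits):
    """queues: dict CoS -> packet list. Returns the CoS to serve, or None."""
    high = [c for c in sorted(queues) if c >= cos_delimiter and queues[c]]
    low = [c for c in sorted(queues) if c < cos_delimiter and queues[c]]
    group = high or low                       # strict priority between groups
    if not group:
        return None
    eligible = [c for c in group if credits[c] > 0] or group
    chosen = max(eligible, key=lambda c: credits[c])  # WRR within the group
    credits[chosen] -= 1
    if all(credits[c] <= 0 for c in group):   # refill when the round ends
        for c in group:
            credits[c] = weights[c]
    return chosen

queues = {7: ["p"], 6: ["p"], 1: ["p"]}
weights = {7: 2, 6: 1, 1: 1}
served = next_queue(queues, weights, cos_delimiter=5, credits=dict(weights))
# CoS 1 is served only once both high-group queues (7 and 6) are empty.
```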

Some of the TM tools utilized by the Neptune platforms include:


 Classification: A method for categorizing network traffic by CoS at ingress and marking packets at
egress. Neptune platforms support classification based on C-VLAN as well as Differentiated Services
Code Point (DSCP, for IPv4 and IPv6), implemented at ingress and egress for both IP and
non-IP traffic. DSCP support enables TM to apply DSCP-based handling wherever DSCP is in use.

 Policing: TM in the Neptune uses two-rate three-color policing, combining efficiency and flexibility
while supporting CIR, EIR, Committed Burst Size (CBS), and Excess Burst Size (EBS) traffic
categories. Intelligent bandwidth management enables profile enhancement capabilities that
improve handling of bursty traffic as well. Bandwidth management profiles are extended based on
MEF5 standards. Policing is implemented on both the ingress and egress sides, allowing greater
flexibility when managing different customer scenarios.
 Strict TM: QoS is implemented on a per-flow basis, with SPQ between two CoS groups, high and low.
This service ensures that each traffic queue receives its guaranteed bandwidth and other resources
while simultaneously allocating extra available bandwidth fairly among the queues. The TE manager
implements buffer management (WRED), scheduling (WFQ), shaping, and counting on a three-level
hierarchy per port, per class, and per tunnel.
Figure 11-16: Network traffic management

 DiffServ TM: QoS is implemented on a per-port basis. This method bypasses the hierarchical approach
of Strict TM. DiffServ TM improves scalability by dividing traffic into a small number of classes, and
allocating resources on a per-class basis.
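The two-rate three-color policing described above can be sketched as a pair of token buckets. This is a minimal illustration with MEF-style CIR/CBS and EIR/EBS buckets; the class and parameter names are ours, not the product's.

```python
# Two-rate three-color policer: green within CIR/CBS, yellow within EIR/EBS,
# red (discard) otherwise. Rates are in bytes/second, bursts in bytes.

class TrTcmPolicer:
    def __init__(self, cir, cbs, eir, ebs):
        self.cir, self.cbs = cir, cbs
        self.eir, self.ebs = eir, ebs
        self.c_tokens, self.e_tokens = cbs, ebs   # buckets start full
        self.last = 0.0

    def color(self, size, now):
        elapsed = now - self.last
        self.last = now
        # Refill both buckets, capped at their burst sizes.
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir * elapsed)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir * elapsed)
        if size <= self.c_tokens:
            self.c_tokens -= size
            return "green"                   # within CIR/CBS: guaranteed
        if size <= self.e_tokens:
            self.e_tokens -= size
            return "yellow"                  # within EIR/EBS: drop-eligible
        return "red"                         # out of profile: discarded

p = TrTcmPolicer(cir=1000, cbs=1500, eir=1000, ebs=1500)
colors = [p.color(1000, t) for t in (0.0, 0.1, 0.2)]  # back-to-back 1000-B packets
```

The burst of three packets drains first the committed bucket and then the excess bucket, so consecutive packets are progressively demoted.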

TIP: Neptune platforms allow you to configure both TM models within a single port,
increasing the service options available to network operators. Some of the port LSPs can be
configured with Strict TM, and other LSPs in the same port can be configured with DiffServ
TM.

 Flow control with frame buffering (802.3x) reduces traffic congestion. When the input buffer
memory on an Ethernet port is nearly full, the data card sends a 'Pause' packet back to the traffic
source, requesting a halt in packet transmission for a specified time period. After the period has
passed, traffic transmission is resumed. This approach gives the overloaded input buffer a little
'breathing room' while the card clears out the input data and sends it on its way. The following figure
illustrates an NE sending a 'Pause' packet to the link partner.
Figure 11-17: Pause frame example
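The 802.3x exchange above reduces to a simple threshold check. This is a toy sketch: the threshold and quanta values are illustrative assumptions, not product parameters.

```python
# Emit a Pause request when the input buffer is nearly full, else nothing.

PAUSE_THRESHOLD = 0.9            # buffer fill fraction that triggers a Pause

def flow_control_check(buffer_used, buffer_size, pause_quanta=0xFFFF):
    """Return a Pause request when the input buffer is nearly full, else None."""
    if buffer_used / buffer_size >= PAUSE_THRESHOLD:
        return {"type": "pause", "quanta": pause_quanta}   # halt the sender
    return None

congested = flow_control_check(950, 1000)    # nearly full: ask partner to pause
relaxed = flow_control_check(100, 1000)      # plenty of room: no Pause sent
```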

11.8.2 Hierarchical QoS


Hierarchical QoS uses a layered model that enables extensive fine-tuning to meet network
requirements precisely. Hierarchical QoS categories are defined at the following levels:
 Per-tunnel BW, in the range of 1-10,000 Mbps, fully guaranteed.
 Per-tunnel CoS, supporting eight CoS levels, where a higher CoS value indicates better service. In
addition, an overbooking factor per CoS allows more efficient BW usage.
 Per-tunnel color, assigning two colors (green and yellow) based on service policing categories, where
green traffic is given priority during periods of traffic congestion.

Figure 11-18: Traffic Manager concept

11.8.3 Shaping
Dual-rate token bucket shaping provides both maximum BW limits and smoothing. Shaping is applied at the
port and CoS level with the following objectives:
 Rate limiting for high-CoS traffic, thereby avoiding starvation to low-CoS traffic.
 Marking excess traffic (in excess of the guaranteed quota). This marking serves as input to the WFQ
scheduler, allowing it to distinguish between guaranteed and excess bandwidth usage.
 Smoothing the output rate before transmission to the line.

Each element is assigned values for CIR/CBS and PIR/PBS to determine the element's committed and excess
rates and burst size limits.

11.8.4 WRED (Weighted Random Early Discard)


The statistical nature of Ethernet traffic calls for operation with oversubscription. This in turn means that
for short periods of time, traffic may exceed the allocated bandwidth and, unless something is done,
buffers may fill completely, preventing any additional frames from entering. The "tail drop" behavior in
such scenarios may result in a "sawtooth" syndrome: all TCP clients reduce their transmission rate in
response to packet loss (handled automatically by TCP), then slowly increase it in unison until the PACKET
card is overloaded and starts discarding packets again. Random Early Discard (RED) prevents this
phenomenon by discarding packets before an overload
condition. Thresholds for dropping packets are handled separately per entity, where entity refers to either
tunnel, CoS, or port. The buffering discard thresholds are set by default in proportion to the entity's
bandwidth. For example, a port of GbE gets approximately 10 times more buffering than a port of FE. This
scheme of setting different discard thresholds is called Weighted RED (WRED). Each element is assigned a
WRED drop profile that consists of two WRED curves, one for packets marked green and one for those
marked yellow. Each curve applies drop probability as a function of the average number of occupied packet
buffers. PACKET cards provide WRED curves for green and yellow traffic. By default, the yellow curve is set
to reach its maximum (when drop probability reaches 100%) before that of the green curve. This default
ensures that upon deep congestion the TM only queues green packets.
A WRED profile curve has the following configurable parameters:
 Minimum and maximum thresholds
 Maximum drop probability
 Weight factor that affects the buffer-averaging frequency (the higher the port rate, the higher the
weight factor)
Figure 11-19: WRED curves

In addition to automatic WRED, PACKET supports user-configurable (manual) WRED profiles. Each CoS
within every port can use any one of these profiles. PACKET WRED is hierarchical, meaning it is applied on
multiple levels (flow or tunnel, CoS, port). A packet is queued for transmission only if the WRED decision at
all three levels is Pass or when it is in guaranteed range. Otherwise the packet is dropped.
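The WRED curves described above can be sketched as a piecewise-linear function of average buffer occupancy, using the configurable parameters listed (minimum/maximum thresholds and maximum drop probability; the buffer-averaging weight factor is omitted). The example thresholds below are illustrative assumptions that show the default where the yellow curve saturates before the green one.

```python
# Piecewise-linear WRED curve over average occupied packet buffers.

def wred_drop_probability(avg_occupancy, min_th, max_th, max_p):
    """0 below min_th, linear ramp to max_p at max_th, always drop beyond."""
    if avg_occupancy < min_th:
        return 0.0
    if avg_occupancy >= max_th:
        return 1.0
    return max_p * (avg_occupancy - min_th) / (max_th - min_th)

green_curve = (200, 800, 0.1)        # illustrative thresholds, in buffers
yellow_curve = (100, 400, 0.5)       # reaches 100% drop before green does

# In deep congestion (e.g. 500 buffers) only green packets are still queued:
p_green = wred_drop_probability(500, *green_curve)
p_yellow = wred_drop_probability(500, *yellow_curve)
```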

11.8.5 Queuing and scheduling


PACKET cards implement advanced queuing and scheduling mechanisms that assure E2E bandwidth
guarantees, as well as fair allocation of excess bandwidth.
On MPLS ports, queuing starts at the tunnel level, where each tunnel gets its own dedicated queue. Tunnels
that share the same CoS and port values are aggregated into a virtual CoS queue.
On Non-MPLS ports (ETH UNI/NNI), queuing starts at the CoS level, where each CoS in each port gets its
own dedicated queue. The CoS queues in every port are in turn aggregated into the port queues.
Weighted Fair Queuing (WFQ) scheduling is applied at all three TM levels, with the following objectives:
 To provide high CoS traffic with a strict priority over low CoS, where high/low priority is configurable.
 To assure bandwidth guarantees per tunnel.
 To support overbooking factors per CoS.
 To distribute excess bandwidth (bandwidth in excess of that guaranteed) fairly in proportion to total
bandwidth, where higher bandwidth CoS or tunnels are entitled to enjoy more of the excess
bandwidth.
 To support 8-strict priority on ETH UNI/NNI ports.
With WFQ, each element is assigned a weight that determines the element's share of the available BW.
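The WFQ bandwidth split described above can be sketched as follows: each element first receives its guaranteed rate, and excess port bandwidth is then shared in proportion to weight (here weight equals the guaranteed rate, matching the "in proportion to total bandwidth" rule in the text; names are ours).

```python
# Guaranteed rate plus a proportional share of the excess, per element.

def wfq_allocate(port_rate, guarantees):
    """guarantees: dict element -> guaranteed Mbps. Returns total Mbps each gets."""
    total = sum(guarantees.values())
    excess = port_rate - total               # bandwidth beyond all guarantees
    return {elem: g + excess * g / total for elem, g in guarantees.items()}

alloc = wfq_allocate(1000, {"tunnel-A": 400, "tunnel-B": 100})
# tunnel-A takes 4/5 of the 500 Mbps excess, tunnel-B takes 1/5
```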

11.8.6 Connection Admission Control (CAC)


Tunnel CAC and Service CAC are mechanisms defined for MPLS networks and important factors in the
effective QoS delivery that these networks provide. Tunnel CAC for MPLS tunnels ensures that there are enough
network resources to accept a requested change. Changes that trigger tunnel CAC evaluations include
creating a new tunnel, editing parameters (e.g., bandwidth) of an existing tunnel, or editing parameters of
network resources such as link bandwidth (MoT). Service CAC ensures that a tunnel has sufficient
bandwidth to allow addition of a further specified service to the tunnel while still providing the required
QoS. When setting up a tunnel, CAC is performed on all outgoing ports of all equipment along the tunnel
path. As a result, the tunnel provides the declared E2E QoS parameters. CAC is applied at the following
levels:
 Port level: The port level sets separate limits for the bandwidth of tunnels and bypass tunnels that
can be configured per port. The sum of these individual limits must not exceed the port rate.
 CoS level: The CoS level sets separate limits for the bandwidth of tunnels and bypass tunnels that can
be configured per CoS per port. PACKET cards support overbooking per CoS. Overbooking provides
statistical multiplexing gain among the tunnels admitted per CoS per port. For example, with an
overbooking factor of 2, a PACKET card would reserve only 5 Mbps instead of 10 Mbps for 10 tunnels
of 1 Mbps. PACKET cards support bandwidth sharing among bypass tunnels that protect independent
SRLGs. For example, a PACKET card reserves only 64 Mbps instead of 96 Mbps on the CoS-port with
SRLG-independent bypass tunnels with bandwidths of 64 Mbps and 32 Mbps.
 Tunnel level: Tunnel-level CAC limits the sum of the bandwidth of protected tunnels to the bandwidth
of their protecting bypass tunnel, unless the CoS is configured for BE protection. Tunnel-level CAC also
limits the total P2P service bandwidth per working tunnel to the bandwidth of the carrying working
tunnel (applicable for P2P service dedicated tunnels only).
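The two numeric examples above can be reproduced with a short sketch (function names and units are illustrative, not the NMS's actual CAC implementation):

```python
def cos_reservation(tunnel_cirs_mbps, overbooking_factor):
    """CoS-level CAC reservation: the sum of tunnel CIRs divided by the
    configured overbooking factor (statistical multiplexing gain)."""
    return sum(tunnel_cirs_mbps) / overbooking_factor

def bypass_reservation(bypass_bws_mbps, independent_srlgs):
    """Bypass tunnels protecting independent SRLGs are assumed never to be
    active simultaneously, so only the largest one needs to be reserved."""
    return max(bypass_bws_mbps) if independent_srlgs else sum(bypass_bws_mbps)
```

With an overbooking factor of 2, ten 1 Mbps tunnels reserve 5 Mbps; bypass tunnels of 64 Mbps and 32 Mbps protecting independent SRLGs reserve 64 Mbps rather than 96 Mbps.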


11.8.6.1 Service CAC


Service CAC for P2P services is performed by segregating P2P services into tunnels which carry only P2P
services. LightSoft ensures that only P2P services are assigned to the dedicated P2P tunnels. LightSoft
ensures that the sum of the service policing CIR assigned to a dedicated P2P tunnel does not exceed the
tunnel bandwidth. This check is performed when:
 Adding a service to an existing tunnel
 Increasing the policing CIR for an existing Ethernet service
 Reducing the bandwidth of an existing tunnel
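The check itself reduces to a simple budget comparison; the sketch below is a schematic restatement of the rule, not LightSoft's implementation:

```python
def service_cac_ok(tunnel_bw_kbps, existing_service_cirs_kbps, new_cir_kbps=0):
    """Admit the change only if the summed policing CIR of all P2P services
    on the dedicated tunnel still fits within the tunnel bandwidth."""
    return sum(existing_service_cirs_kbps) + new_cir_kbps <= tunnel_bw_kbps
```

The same comparison covers all three triggers: adding a service, increasing a CIR (pass the increase as `new_cir_kbps`), or reducing the tunnel bandwidth (re-run the check with the new `tunnel_bw_kbps`).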

11.8.7 Class of Service (CoS)


For Ethernet service frames carried by VPLS over an MPLS network, CoS is implemented by carrying the
frame in a tunnel with this CoS. The supported CoS for the Ethernet services on such a network are the
supported tunnel CoS. MPLS networks based on PACKET cards support up to eight CoS. For Ethernet service
frames carried over a PB network, the CoS is implemented through S-VLAN priority assignment. The normal
mode of operation for services defined on an EAH-VPLS is to define the same CoS names for the MPLS
(core) network and for the PB access networks. By default, a gateway PACKET card maps frames arriving on
an I-NNI with S-VLAN priority to the corresponding MPLS CoS.

11.8.8 Policing
High granularity policing and priority marking (802.1p) per SLA enables the provider to control the amount
of bandwidth for each individual user and service. Two-rate three-color policing enhances the service
offering, combining high priority service with BE traffic for the same user. Policer profiles, encapsulating the
bandwidth parameters defined for Ethernet services, allow greater flexibility when managing different
customer scenarios. Bandwidth allocations and traffic priority can be configured per ingress or egress ports,
as well as per EVC and per CoS. This hierarchical approach is illustrated in the following figure.

Figure 11-20: Traffic management with policer profiles

These MPLS cards implement two-rate three-color dual token bucket policing that supports 1000 profiles
defining rate limitations, achieving a notable combination of efficiency and flexibility. Intelligent
bandwidth management improves handling of bursty traffic. Bandwidth management profiles are
extended based on MEF 5 standards. Traffic policing is configured in two stages, in this order:


1. Configure policing profiles.


2. Assign policing profiles to service flows.
Each policing profile can be assigned to multiple service flows, each of which is defined by service identifier,
ingress port, and CoS. Supported traffic categories include guaranteed and BE traffic. Users configure the
following parameters when defining policers:
 Guaranteed traffic is defined through two types of BW values:
 Committed Information Rate (CIR) in kilobits per second, defining the SLA's average guaranteed
transmission rate commitment.
 Committed Burst Size (CBS) in kilobytes, defining the maximum number of bytes that can be
carried in a single transmission burst of CIR traffic.
 Best Effort traffic is defined through two types of BW values:
 Excess Information Rate (EIR) in kilobits per second, defining the SLA's average best effort
transmission rate. EIR traffic is of lower priority and may be discarded in case of network
congestion.
 Excess Burst Size (EBS) in kilobytes, defining the maximum number of bytes that can be
carried in a single transmission burst of EIR traffic.
Within the traffic categories, packets are marked with one of three colors. Packet color is marked in the
MPLS EXP bits upon mapping into MPLS tunnels:
 Green packets meet the requirements for guaranteed CIR/CBS traffic. Green packets have the least
risk of being discarded in times of traffic congestion.
 Yellow packets meet the requirements for EIR/EBS traffic. Yellow frames have a greater risk of being
discarded in times of traffic congestion.
 Red packets do not meet the requirements for either traffic category and are discarded.
Note that yellow packets are not discarded automatically. Yellow packets simply have a slightly higher risk
of being discarded by one of the filtering mechanisms during periods of traffic congestion. Users can
change and redefine their green and yellow packet preferences via WRED profile configuration.
The policer also sets each flow's CoS by setting the priority bit to the appropriate value assigned by the
user. PACKET cards support eight classes of service, CoS0-CoS7, where CoS7 has the highest priority and
CoS0 has the lowest. The CoS value is attached to each packet in the flow and maintained as long as the
packet travels within the network.
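A color-blind sketch of the two-rate dual-token-bucket marking described above (the refill model and parameter units are illustrative assumptions; the cards' actual policer also handles color-aware mode and hardware granularity):

```python
class DualTokenBucketPolicer:
    """Two-rate three-color marker: a committed bucket (CIR/CBS) and an
    excess bucket (EIR/EBS). Rates in bytes/s, bucket sizes in bytes."""

    def __init__(self, cir, cbs, eir, ebs):
        self.cir, self.cbs, self.eir, self.ebs = cir, cbs, eir, ebs
        self.tc, self.te = cbs, ebs   # both buckets start full
        self.last = 0.0

    def color(self, size, now):
        # Refill both buckets according to the elapsed time, capped at
        # their burst sizes.
        dt = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.te = min(self.ebs, self.te + self.eir * dt)
        if size <= self.tc:           # within CIR/CBS: guaranteed
            self.tc -= size
            return "green"
        if size <= self.te:           # within EIR/EBS: best effort
            self.te -= size
            return "yellow"
        return "red"                  # out of profile: discarded
```

Three back-to-back 1000-byte packets against a policer with CBS = EBS = 1500 bytes are marked green, yellow, and red in turn, since each bucket can absorb only one of them before refilling.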


11.8.9 Ingress policers


Policer profiles are used to hierarchically define BW allocations and traffic priority, per port, per EVC, and
per CoS, for each ingress UNI port.
 Per CoS: A single ingress BW profile is applied to all ingress service frames that carry a specific CoS.
This BW profile attribute is associated with each VLAN (EVC) in the UNI port. The following figure
illustrates how the BW profiles are assigned per CoS.

Figure 11-21: Ingress policer

 Per VLAN (EVC): A single ingress BW profile is applied to all ingress service frames for a specific EVC.
This BW profile attribute is associated with each VLAN (EVC) in the UNI port. The following figure
illustrates how the BW profiles are assigned per EVC.

Figure 11-22: Traffic port/VLAN based

 Per Ingress UNI Port: A single ingress BW profile is applied to all ingress service frames for a specific
UNI port. This BW profile attribute is independent of the EVCs in the UNI port. The following figure
illustrates how the BW profiles are assigned per UNI.


Figure 11-23: Traffic policer port based

11.8.10 Classification and marking


Classification is a data plane component responsible for categorizing traffic CoS upon ingress and marking
packets upon egress. Neptune platforms support classification based on VLAN priority bits or Differentiated
Services Code Point (DSCP).
The DSCP implementation enables TM that skillfully incorporates DSCP capabilities wherever DSCP is in use,
for both the ingress and egress directions. DSCP is applied for IP traffic and is defined per VSI.
The following configuration options are available per service:
 ETH/EoS UNI Ingress: CoS mapping based on either VLAN priority bits (802.1p) or DSCP.
 ETH/EoS NNI Ingress: CoS mapping based on S-VLAN priority bits. The CoS level assigned to the traffic
affects subsequent TM behavior and MPLS tunnel mapping. For example, packets assigned CoS n can
only use tunnels of CoS n.
 ETH/EoS NNI Egress: Marking S-VLAN priority bits according to CoS.

11.8.11 DiffServ per port based TM


Until V4 of Neptune, QoS granularity in MPLS-TP networks was on a per-flow basis: SLA assurance was
provided by explicitly managing bandwidth and buffers per CoS of every MPLS-TP tunnel, and CAC was
performed by the NMS for each flow. The DiffServ QoS architecture provides better scalability by dividing
traffic into a small number of classes and allocating network resources per class rather than per flow.
The following figure shows the principle of this model.


Figure 11-24: QoS per flow basis model

 Port BW (CAC) was divided between:


 Reserved BW for working tunnel traffic
 Reserved shared BW for bypass tunnels
 MPLS-TP tunnels have per-CoS queues
 MPLS-TP bypass tunnels have per-CoS, per-port shared queues
 For MPLS-TP tunnels, the CAC allocation is usually 100:0
 Port BW was allocated (even if no traffic was present in the network) as the sum of the tunnels' CIR
 This model requires dedicated tunnel BW management by the user for each change in the network
when the port BW CAC reaches the maximum port capacity.
The DiffServ model is shown in the following figure.


Figure 11-25: DiffServ QoS per port model

The basic principles of this model include:


 Adding a DiffServ traffic management block
 A new tunnel type, named DiffServ tunnels
 Existing MPLS-TP tunnels/bypass use the legacy MPLS-TP TM
 DiffServ tunnels/bypass use the DiffServ block
 DiffServ tunnel BW is zero
 The user allocates the BW for each CoS in the DiffServ block
 All tunnels with the same CoS compete for the same resources
 The NMS manages both tunnel types and the DiffServ block in parallel:
 New PM counters monitor the utilization of the port per CoS
 The user can change the per-CoS allocation according to real utilization in the network


11.9 OAM and Performance Monitoring


Operations, Administration, and Maintenance (OAM) functions provide mechanisms for monitoring a
physical or logical connection. OAM provides network operators the ability to monitor the health of the
network and quickly determine the location of faults. Our platforms provide full E2E OAM for efficient fault
localization, including:
 Performance Monitoring tools and other internal card implementations, enabling efficient tracking,
storage, and analysis of the potentially huge amounts of historical PM data produced by large
numbers of large-scale SDH and data objects, a valuable capability for network operators monitoring
heavy traffic.
 Ethernet link OAM, based on IEEE 802.3-05 (formerly 802.3ah), featuring remote failure indication,
remote loopback control, and link monitoring that includes diagnostic information.
 MPLS service OAM and PM12
 Virtual Circuit Connectivity Verification (VCCV) PW OAM: Including PW ping and BFD failure
detection
 MPLS-TP tunnel OAM, based on RFC 5860 and ITU-T G.8113.2 using the Generic Associated
Channel Label (GAL), providing continuous E2E tunnel connectivity verification as well as
monitoring of endpoints and PWs running over the tunnel, through BFD support for MPLS-TP
in bidirectional tunnels.
 MPLS-TP fault management (FM), based on RFC 6427. Fault OAM messages are generated by
intermediate nodes where a client LSP is switched and sent downstream towards the end point
of the LSP.
 IP/MPLS VPN service OAM and PM13
 IP and VRF mechanisms, including:
 Ping, to check bi-directional reachability of a specified IP address
 Traceroute, to provide information about the actual path to a specified IP address
 BFD, providing a continuity check mechanism for failure detection
 Service OAM, including Connectivity Fault Management (CFM) based on IEEE 802.1ag, enabling E2E
network OAM for Ethernet networks.
 CFM-PM, based on Y.1731, enabling measurement and collection of Ethernet service performance
measurements that provide objective data regarding delay and synthetic loss.
 Throughput testing, based on RFC2544, including packet generator and analyzer which enable
RFC2544 testing between two access ports for any E2E service. This provides an on-demand service
OAM mechanism to measure service performance.
 Service Level Agreement (SLA)14, based on Y.1564, including its Ethernet-based service testing
method for QoS and network performance. This standard defines procedures to test service turn-up,
installation, and troubleshooting of Ethernet-based services, in order to achieve assured and verified
committed SLA performance.

12 For NPT-1200 with MCIPS320 and NPT-1800.


13 For NPT-1200 with MCIPS320 and NPT-1800.
14 For NPT-1200 with MCIPS320 and NPT-1800.


E2E OAM can be achieved by combining the various OAM techniques, as illustrated in the following figure.
Ethernet link OAM can be used to monitor and localize failure at the connection point between the
customer and the NE. MPLS tunnel OAM can be used to monitor the connections along the provider's MPLS
network. Service OAM provides E2E service monitoring.
Figure 11-26: E2E OAM model for a mobile backhaul network

11.9.1 Ethernet link OAM


OAM can be enabled on any full-duplex P2P or emulated P2P Ethernet link. OAM information is carried in
Slow Protocol frames called OAM Protocol Data Units (OAM PDUs). The maximum rate of OAM PDU frames
is 10 frames per second. OAM mechanisms are supported for all ETY UNI and NNI ports according to
IEEE 802.3-2005 (formerly 802.3ah).
When the card acts as a PE to a Customer Edge (CE), OAM is based on the IEEE 802.3-2005 standard
Ethernet Link OAM (formerly 802.3ah). It provides a connectivity check for link monitoring. Loopback
operates on peer remote equipment. Reports about link-down conditions are sent to peers. A discovery
process for peer capabilities is also supported.
Link OAM is required for ETY UNI and NNI ports.
Figure 11-27: Ethernet link OAM


11.9.2 MPLS-TP tunnel OAM


MPLS-TP tunnel OAM, based on RFC 5860 and ITU-T G.8113.2 using the Generic Associated Channel Label
(GAL), provides continuous E2E tunnel connectivity verification as well as monitoring of endpoints and PWs
running over the tunnel.
MPLS-TP tunnel OAM for bidirectional tunnels is based on Bidirectional Forwarding Detection (BFD), a
simple Hello protocol used to verify connectivity between systems. A pair of systems transmits BFD packets
periodically over each path between the two systems. If a system stops receiving BFD packets for some
preconfigured period of time, a component in that particular bidirectional path to the neighboring system is
assumed to have failed. Our equipment supports BFD for MPLS-TP in bidirectional tunnels, enabling tunnel
OAM that monitors endpoints and PWs running over the tunnel.
Figure 11-28: Tunnel OAM

BFD provides proactive E2E tunnel CC (Continuity Check), CV (Connectivity Verification), and Remote Defect
Indication (RDI):
 Continuity Check (CC): Continuously monitors the integrity of the continuity of the path. In addition to
failure indication, detection of Loss of Continuity may trigger the switch over to a backup LSP.
 Connectivity Verification (CV): Monitors the integrity of routing of the path between sink and source
for any connectivity issues, continuously or on-demand. Detection of unintended continuity blocks
the traffic received from the misconnected transport path.
 Remote Defect Indication (RDI): Enables an End Point to report to its peer a fault or defect condition
that it detects on a path.
NE platforms work with BFD according to IETF RFC 5880, using the CC mechanism for pro-active monitoring
of MPLS-TP LSPs. Similar to other transport technologies, AoC10_L2/Neptune provides sub-50 msec
protection switchover in case of forwarding path failure, triggered by BFD's consistent failure detection
method.
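The timeout logic at the heart of BFD amounts to declaring failure after a configurable number of missed intervals; the following is a simplified sketch with illustrative parameter names, not the RFC 5880 state machine:

```python
def bfd_session_down(last_rx_ms, now_ms, rx_interval_ms, detect_mult=3):
    """Declare the path failed once no BFD packet has arrived within the
    detection time (negotiated receive interval x detection multiplier)."""
    return (now_ms - last_rx_ms) > rx_interval_ms * detect_mult
```

With a 10 ms interval and the default multiplier of 3, a gap longer than 30 ms between received BFD packets triggers the failure declaration that in turn drives the sub-50 msec protection switchover.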


The following figure illustrates a typical OAM editing window, through which you could, for example,
enable or disable BFD or LDI on the main and protected LSPs.
Figure 11-29: Edit OAM

11.9.3 MPLS-TP fault management


MPLS-TP Fault Management (FM) is defined in RFC 6427 and RFC 6428. Fault OAM messages are generated
by intermediate nodes where a client LSP is switched. When a server (sub-) layer, e.g., a link or bidirectional
LSP, used by the client LSP fails, the intermediate node sends Fault Management messages downstream
towards the end point of the LSP.
The messages are sent to the client MEPs by inserting them into the affected client LSPs in the direction
downstream of the fault location. These messages are sent periodically until the condition is cleared.
Figure 11-30: MPLS-TP fault management


There are two MPLS Fault Management messages:


 MPLS Alarm Indication Signal (AIS), including Link Down Indication (LDI)
 MPLS Lock Report
The main advantage of using FM utilities in MPLS-TP networks is the significant speed-up of protection
time. MPLS-TP cards support MPLS AIS, in which LDI is transferred by setting the L-flag inside the AIS
message to 1. The LDI signal is generated by transit nodes detecting a failure in the server layer or a Loss of
Continuity (LOC) condition. LDI messages can reach the PE in 2-3 msec, enabling faster switch-over upon a
failure occurring along the client LSP.
For example, in the preceding figure, assume a fiber break occurs at the right PE. The right-most transit
node detects a link-down condition (or LOC), generates an LDI, and sends it within a very short time to the
right-most PE, where it can quickly be interpreted as a trigger to activate a protection mechanism (for
instance, linear protection).
By contrast, in general MPLS environments the CC OAM session runs at a relatively low rate (ranging from
100 msec to 1000 msec), which leads to a long detection time for a failure event, even when working with
fast BFD, due to the need to wait for several undelivered control messages. This method therefore cannot
match the faster detection achieved through use of the FM utilities.

11.9.4 Service OAM (CFM)


Connectivity Fault Management IEEE802.1ag (CFM) is the OAM mechanism for Ethernet services. CFM is
used to monitor connectivity in Ethernet networks that encompass multiple administrative domains. CFM is
a joint effort of IEEE, ITU-T, and MEF, designed to help SPs achieve E2E network OAM for multidomain
networks. CFM facilitates detection of continuity loss or incorrect network connections, connectivity
verification, and fault isolation.
CFM defines proactive and diagnostic fault localization procedures for P2P and MP services that span one
or more links E2E within an Ethernet network. CFM enables detection, verification, localization, and
notification of different defect conditions and enables SPs to manage each customer service instance on an
individual basis.


CFM relies on a functional model consisting of hierarchical Maintenance Domains (MDs). Each MD is an
administrative domain for the purpose of managing and administering a network. A typical domain is
illustrated in the following figure. The service network in this figure is partitioned into customer, provider,
and operator maintenance levels.
Figure 11-31: Multidomain Ethernet service OAM

As illustrated in the preceding figure, CFM descriptions utilize a specific terminology:


 Maintenance Entity (ME): an entity that requires management. May also be referred to as a
Maintenance Point (MP).
 Maintenance Association (MA): a set of MEs that satisfy the following conditions:
 MEs in a single MA exist in the same administrative domain and at the same ME level.
 MEs in a single MA belong to the same SP VLAN (S-VLAN).
 MA Endpoint (MEP): an ME located at the edges or ends of an MA. Each MA must include two MEPs,
one at each end, in the administrative domain boundaries. An MEP generates and receives OAM
frames.
 MA Intermediate Point (MIP): an ME located at intermediate points along the E2E path of an MA. A
MIP does not initiate OAM frames; it reacts and responds to OAM frames that were generated by the
MEPs.
Note that the more generic term MP may be used when a description refers to either a MEP or a MIP.
Ethernet service OAM includes the following fault management techniques:


 Continuity Check: A simple, reliable, and effective tool for fault detection. These multicast
transmissions are transmitted regularly and automatically by each MEP, providing a constant network
'heartbeat' that verifies transmission integrity. If a MEP misses three consecutive 'heartbeats' of
transmission from another MEP, the network is immediately alerted to a connectivity problem.
Continuity check functionality is illustrated in the following figure.
Figure 11-32: Continuity check functionality

 Loopback: A request/response protocol similar to the classic IP Ping tool. MEPs send Loopback
Messages (LBMs) to verify connectivity with another MP (MEP or MIP) within a specific MA. The
target MP generates a Loopback Reply Message (LBR) in response. LBMs and LBRs are used to verify
bidirectional connectivity, and are initiated by operator command. The path of a typical loopback
sequence is illustrated in the following figure.
Figure 11-33: Loopback protocol

 Link Trace: Another request/response protocol similar to the classic IP Traceroute tool. Link trace may
be used to trace the path to a target MP (MEP or MIP) and for fault isolation. MEPs send multicast
Link Trace Messages (LTMs) within a specific MA to identify adjacency relationships with remote MPs
at the same administrative level. When an MP receives an LTM, it completes one of the following
actions:
 If the NE is aware of the target MP destination MAC address in the LTM frame and associates
that address with a single egress port, the current MP generates a unicast Link Trace Reply (LTR)
to the initiating MEP and forwards the LTM to the target MEP destination MAC address.
 Otherwise the LTM frame is relayed unchanged to all egress ports associated with the MA
except for the port from which the message was received.
The path of a short link trace sequence is illustrated in the following figure.
Figure 11-34: Link trace

 CFM Alarm Management: Various types of CFM alarms can be received at the service level when
Alarms functionality is enabled for an MA.


11.9.5 CFM-PM (Y.1731)


The Y.1731 standard defines Ethernet PM mechanisms for measuring Ethernet service performance (P2P,
P2MP, and MP2MP services), performed between pairs of MEPs belonging to the same MA. Each pair
includes a Sender MEP that generates messages, and a Responder MEP that replies to them. On MP service
MEGs, CFM-PM can be applied to any subset of the pairs of MEPs.
The CFM-PM mechanism covers the following SLA parameters:
 Frame Delay (FD): The round trip time that a frame spends on the way to the remote endpoint and
back again (2-way trip). FD time includes travel time only; the time that the packet is delayed within
the remote MEP is excluded.
 Frame Delay Variation (FDV): The difference (delta) between the current FD value and the previous
FD value. This measurement also excludes the time that the packet is delayed within the remote MEP.
 Frame Loss (FL): The number of frames lost during transmission from the local MEP to the remote
MEP, or during the return transmission. FL measurements are based on synthetic traffic.
 Availability: The amount of time (in seconds) that service was available between a pair of MEPs
belonging to the same MA.
Neptune supports one-way single-ended forward and backward synthetic loss measurement. To measure
SLA parameters, synthetic frames are periodically generated by a local MEP towards a remote MEP along
the same path as the service frames. The remote MEP replies with synthetic loss reply frames, which are
then used by the local MEP to calculate performance.
CFM-PM quality standards can be tailored to the type of service through user-defined profiles. Settings for
transmission period, frame size, and minimum performance threshold can be configured by the user.
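Given the timestamps carried by Y.1731 delay measurement frames, FD and FDV reduce to simple arithmetic; the sketch below uses illustrative timestamp names and assumes a two-way (round-trip) measurement:

```python
def frame_delay(t_tx, t_rx, remote_rx, remote_tx):
    """Two-way FD: round-trip time minus the interval the frame spent
    inside the remote MEP (excluded by definition)."""
    return (t_rx - t_tx) - (remote_tx - remote_rx)

def frame_delay_variation(fd_samples):
    """FDV: the delta between each FD value and the previous one."""
    return [cur - prev for prev, cur in zip(fd_samples, fd_samples[1:])]
```

For example, a frame sent at t=0 and returned at t=10 that waited inside the remote MEP from t=4 to t=6 yields an FD of 8 time units.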


CFM-PM (Y.1731) performance management operations are configured through the Performance
Management windows. The selected service name appears at the top of the window. For example, you
would configure a DM session through the Set DM Session pane, used to define a new DM session or to
reconfigure an existing DM session.
Figure 11-35: Set DM Session pane

11.9.6 Throughput (RFC 2544)


Customer SLAs dictate performance criteria requirements, usually regarding verifiable network availability
and mean-time-to-repair values. Generally, Ethernet performance criteria can be difficult to prove;
demonstrating performance availability, transmission delay, link burstability, and service integrity cannot
be accomplished accurately through PING commands alone.
IETF's RFC 2544 standard (Benchmarking Methodology for Network Interconnect Devices) outlines the tests
required to measure and prove performance criteria for carrier Ethernet networks. The standard provides
an out-of-service benchmarking methodology to evaluate the performance of network devices using
throughput, back-to-back, frame loss and latency tests, with each test validating a specific part of an SLA.
The methodology defines the frame size, test duration and number of test iterations. Once completed,
these tests provide performance metrics of the Ethernet network under test.
The throughput test defines the maximum amount of data, measured in number of frames per second, that
can be transmitted from source to destination without any error. This test involves starting at a maximum
frame rate and then comparing the number of transmitted and received frames. Should frame loss occur,
the transmission rate is divided by two and the test is restarted. If during this trial there is no frame loss,
then the transmission rate is increased by half of the difference from the previous trial. This methodology is
known as the half/doubling method. This trial-and-error methodology is repeated until the highest rate at
which there is no frame loss is found.
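The half/doubling search described above is, in effect, a binary search on the transmission rate; here is a sketch, with `trial` standing in for a full RFC 2544 test iteration:

```python
def rfc2544_throughput(max_rate, trial, precision=1.0):
    """Find the highest rate with zero frame loss. `trial(rate)` must run
    one test iteration and return True when no frames were lost."""
    low, high = 0.0, max_rate
    best, rate = 0.0, max_rate
    while high - low > precision:
        if trial(rate):
            best = max(best, rate)
            low = rate        # no loss: move up by half the difference
        else:
            high = rate       # loss: halve toward the last good rate
        rate = (low + high) / 2
    return best
```

The `precision` parameter (an assumption of this sketch) controls when the trial-and-error loop stops; real testers iterate per frame size with at least 60-second trials, as noted below.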


The throughput test must be performed for each frame size. The test time during which frames are
transmitted must be at least 60 seconds. Each throughput test result is recorded in a report, using frames
per second (f/s or fps) or bits per second (bit/s or bps) as the measurement unit.

11.9.7 SLA (Y.1564)


ITU's Y.1564 standard (Ethernet service activation test methodology) defines a test methodology used to
assess the proper configuration and performance of an Ethernet network delivering Ethernet-based
services. This out-of-service test methodology was created to standardize Ethernet-based service
performance measurement, enabling verification of SLA compliance.
What makes this standard unique is that it allows for complete validation of Ethernet SLA in one test,
including:
 Ensuring that the network complies with SLA requirements by ensuring that a service meets its key
performance indicators (KPI) at different rates, within the committed range.
 Ensuring that all services carried by the network meet their KPI objectives at their maximum
committed rate, validating that under maximum load the network devices and paths are able to
service all the traffic as designed.
 Confirming that network elements can properly carry all services while under a significant load
extended over a significant period of time (sometimes referred to as a soaking test).
Y.1564 supports current service provider offerings, which typically consist of multiple services. Y.1564
allows providers to test all services simultaneously and measure whether they meet the committed SLA
attributes. It also validates the different QoS mechanisms provisioned in the network to prioritize the
different service types, allowing service providers faster deployment (as the need for repeated tests is
eliminated) and easier service and network troubleshooting.
Y.1564 allows very high flexibility in simulating test scenarios that closely match real network traffic. It
defines test streams (or "flows") with service attributes aligned with MEF 10.2 definitions. These test flows
can be classified using various mechanisms such as 802.1q VLAN, 802.1ad, DSCP, and CoS profiles. Services
are defined at the UNI level with different frame and bandwidth profiles, such as the service's MTU or
frame size, CIR, and EIR settings, with up to five different frame sizes in a single test.

11.10 Ethernet services built-in tester (Y.1564)


ITU's Y.1564 standard (Ethernet service activation test methodology) defines a test methodology used to
assess the proper configuration and performance of an Ethernet network delivering Ethernet-based
services. This out-of-service test methodology was created to standardize Ethernet-based service
performance measurement, enabling verification of SLA compliance.
Y.1564 supports current service provider offerings, which typically consist of multiple services. Y.1564
allows providers to test all services simultaneously and measure whether they meet the committed SLA
attributes. It also validates the different QoS mechanisms provisioned in the network to prioritize the
different service types, allowing service providers faster deployment (as the need for repeated tests is
eliminated) and easier service and network troubleshooting. A built-in tester based on the Y.1564 standard
is supported in all Neptune platforms.
Many services can run on each UNI, each qualified by its attributes, including:


 Connection type
 QoS (including VLAN information), traffic type (data vs management), etc.
 Bandwidth profile: CIR, CBS, EIR, EBS, CF, and CM
 Performance criteria: FTD, FDV, FLR, AVAIL, etc.
The service bandwidth is described by a bandwidth profile, and the SLA features are named Service
Acceptance Criteria (SAC). The bandwidth profile specifies the traffic volume allowed for the client and the
way in which frames are prioritized within the network. The following values describe the service
bandwidth profile:
 Committed Information Rate (CIR)
 Excess Information Rate (EIR)
 Committed Burst Size (CBS)
 Excess Burst Size (EBS)
 Color mode (CM)
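
The way these parameters interact can be sketched as a two-rate, two-bucket meter, in the spirit of an MEF-style bandwidth profile. The following Python sketch is illustrative only (a color-blind simplification; the class and parameter names are assumptions, not the tester's implementation):

```python
# Simplified color-blind two-rate meter: frames within CIR/CBS are marked
# green, frames within EIR/EBS are marked yellow, and the rest red.
import time

class TwoRateMeter:
    def __init__(self, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
        self.cir, self.cbs = cir_bps / 8.0, cbs_bytes   # committed bucket
        self.eir, self.ebs = eir_bps / 8.0, ebs_bytes   # excess bucket
        self.c_tokens, self.e_tokens = cbs_bytes, ebs_bytes
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        dt, self.last = now - self.last, now
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir * dt)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir * dt)

    def color(self, frame_bytes):
        self._refill()
        if frame_bytes <= self.c_tokens:
            self.c_tokens -= frame_bytes
            return "green"      # conforms to CIR/CBS
        if frame_bytes <= self.e_tokens:
            self.e_tokens -= frame_bytes
            return "yellow"     # conforms to EIR/EBS
        return "red"            # non-conforming, subject to discard
```

A burst larger than CBS + EBS sees its tail marked red, even when the long-term rate stays below CIR.
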
The service acceptance criteria are a set of performance objectives. These values define the minimum
requirements for ensuring that the service complies with the Service Level Agreement (SLA).
The service acceptance criteria include the following values:
 Frame Transfer Delay (FTD)
 Frame Delay Variation (FDV)
 Frame Loss Ratio (FLR)
 Availability (AVAIL)
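
Checking measured results against the SAC reduces to a handful of comparisons. This Python fragment is a sketch only; the field names and example thresholds are assumptions rather than the tester's actual report format:

```python
# Compare measured KPIs against the configured Service Acceptance Criteria.
def sac_pass(measured, sac):
    """measured/sac: dicts keyed ftd_ms, fdv_ms, flr, avail_pct."""
    return (measured["ftd_ms"] <= sac["ftd_ms"] and
            measured["fdv_ms"] <= sac["fdv_ms"] and
            measured["flr"] <= sac["flr"] and
            measured["avail_pct"] >= sac["avail_pct"])

# Example thresholds (illustrative values, not product defaults):
sac = {"ftd_ms": 25.0, "fdv_ms": 5.0, "flr": 1e-4, "avail_pct": 99.95}
ok = sac_pass({"ftd_ms": 12.3, "fdv_ms": 1.1, "flr": 0.0,
               "avail_pct": 100.0}, sac)   # → True: service accepted
```
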
The test methodology checks if the service is in accordance with the bandwidth profile and with the
acceptance criteria. It includes two phases:
 Service configuration test. The services running on the same line are tested one by one, to check the
correct provisioning of their profile.
 Service performance test. The services running on the same line are tested simultaneously for a
significant period of time, to check the robustness of the network.
The built-in tester is based on ITU-T Y.1564 and is supported in the NPT-1200 with MCIPS320 and in the
NPT-1800. The test performed by the tester is an out-of-service application that checks the SLA
performance of any connection before it is commissioned.

11.10.1 Built-in tester (Y.1564) features


The following features are supported by the Y.1564 built-in tester:
 Round-trip test
 Loop back with MAC swap
 Service configuration tests:
 CIR configuration test, color aware and non-color aware:
 Simple CIR validation

 Step load CIR test


 EIR configuration test:
 Non-color aware
 Traffic policing test:
 Non-color aware
 Service performance test:
 Information rate test (IR)
 Ethernet frame transfer delay test (FTD)
 Ethernet frame delay variation test (FDV)
 Ethernet frame loss ratio (FLR)
 Service availability (AVAIL)
 Per flow test frame header definitions (configuration):
 Destination MAC address
 Source MAC address
 0 or 1 VLAN ID/Priority tags, optional second VLAN ID for S-TAG
 Per flow test frame size definitions:
 Fixed one size
 EMIX
 Per flow test duration definitions
 Service acceptance criteria definitions per flow test
 Restore and view the test reports

11.11 DMXE_22_L2 TM
The DMXE_22_L2 card has a unique and intelligent Traffic Management (TM) function, which enables
reliable provisioning of different SLA levels. For example, policer profiles that encapsulate the bandwidth
parameters defined for Ethernet services are one of the tools used by the TM, allowing greater flexibility
when managing different customer scenarios.
Basically, the TM has a simple architecture that provides a high-capacity infrastructure (up to 10 GbE) for
access rings with a small amount of capacity per node.
The following are the basic building blocks for access applications, covering the egress 10 GbE traffic flow
and traffic management.

Ingress classification
On ingress all traffic is classified into two groups:
 High CoS - traffic is CIR only
 Low CoS

Egress scheduling
In general, strict priority is implemented between High CoS and Low CoS traffic.
Either High CoS or Low CoS traffic can reach the 10 Gbps line rate, with burst handling provided by a
tail-drop algorithm.
The 10 GbE port egress queue has a threshold as shown in the following figure.
Figure 11-36: 10 GbE port egress queue threshold

The TM algorithm is presented in the following figure.


Figure 11-37: TM with Token Bucket for 10 GbE egress port algorithm

Low CoS traffic is checked against the 10 GbE port egress queue threshold. If the threshold is reached, the
packet is discarded.
High CoS traffic is colored by a 10 Gbps token bucket. If a packet is colored green, it is admitted to the
egress queue. If a packet is colored red, the egress queue threshold is checked; if the threshold is reached,
the packet is discarded.
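
The admission logic just described can be summarized in a short sketch (illustrative Python; parameter names and units are assumptions):

```python
# Egress admission at the 10 GbE port (conceptual sketch).
def admit(packet_cos, token_color, queue_depth, threshold):
    if packet_cos == "low":
        # Low CoS: tail drop once the queue threshold is reached
        return queue_depth < threshold
    # High CoS: colored by the 10 Gbps token bucket
    if token_color == "green":
        return True                       # green packets always admitted
    return queue_depth < threshold        # red packets only below threshold
```
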

The Neptune provides a comprehensive set of protection and restoration mechanisms that supply
complete overall protection for every aspect of your network configuration. The Neptune supports
protection for all types of networks based on the complete range of technologies. Protection mechanisms
are provided through the Neptune's complete set of MPLS and Ethernet traffic protection schemes and fast
IOP (1:1 card protection). The Neptune supports full SDH/Ethernet/MPLS-TP path and line protection,
optical layer protection, equipment protection, and integrated protection for I/O cards with electrical
interfaces. These various protection capabilities are introduced in this section.

11.12 Multiple Protection Schemes


The Neptune control plane supports coexistence of multiple protection and restoration schemes. New
restoration schemes and combined protection/restoration solutions are offered.

11.13 MPLS protection schemes


The following figure shows an MPLS network that incorporates an E2E combination of protection schemes
to provide protection at every point. Protection mechanisms incorporated into the figure include
sub-50 msec FRR link and node protection described in the following sections, as well as Link aggregation
(LAG), Dual-homed device protection in H-VPLS networks, and Fast IOP: 1+1 card protection.
Figure 11-38: Comprehensive MPLS protection

11.13.1 Facility backup FRR


Robust networks must protect tunnels against failure of a link or node along the tunnel path. MPLS
supports a protection mechanism called Facility Backup FRR. FRR protects against link or node failure along
a tunnel path through the use of bypass tunnels. FRR protection can be triggered by many types of failures,
such as Bit Error Rate (BER) errors or other link or node failure conditions.
With FRR, a backup LSP called a bypass tunnel is pre-established by the NMS to bypass a network link or
node failure, reaching a downstream node where the alternative path merges with the path of the protected
tunnel. Switching to a bypass tunnel requires pushing a third MPLS label, called an FRR label, onto the packet.
The FRR label remains in the packet until the bypass tunnel merges with the path of the protected tunnel,
where it is popped off the packet. The main advantage of FRR over other protection schemes is
the speed of repair. Due to the pre-establishment of the bypass tunnels and the fast physical layer-based
failure detection, FRR can provide sub-50 msec switching time for both link and node protection,
comparable to SDH protection mechanisms.

11.13.2 FRR for P2P tunnels


The following figure shows a tunnel flowing from MCS1 through MCS2 to MCS3. The tunnel is configured
with node protection at MCS1 via Bypass 1 and with link protection at MCS2 via Bypass 2.
 If MCS1 detects that the node MCS2 has failed, MCS1 switches the tunnel traffic to the
node-protecting Bypass tunnel 1 while pushing an FRR label. Bypass tunnel 1 then merges with the
protected tunnel path at Next Next Hop (NNH) MCS3, where the FRR label is removed (pop).
 If MCS2 detects that the link between MCS2 and MCS3 has failed, MCS2 switches the tunnel traffic to
the link-protecting Bypass tunnel 2 while pushing an FRR label. When the packet traveling via Bypass
tunnel 2 arrives at the Next Hop (NH) MCS3, the FRR label is removed.
Figure 11-39: FRR for P2P tunnels
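
Conceptually, the FRR label operations amount to a push at the Point of Local Repair and a pop at the merge point. The following Python sketch models the label stack as a list (label names are invented for illustration):

```python
# MPLS label stack modeled as a list (innermost label first).
def push_frr(stack, frr_label):
    """At the Point of Local Repair: push the FRR (bypass) label."""
    return stack + [frr_label]

def pop_frr(stack):
    """At the merge point: pop the FRR label, exposing the protected
    tunnel label underneath."""
    return stack[:-1]

packet = ["pw-label", "tunnel-label"]        # traveling the protected tunnel
in_bypass = push_frr(packet, "frr-label")    # redirected at MCS1 (the PLR)
merged = pop_frr(in_bypass)                  # back on the protected path at MCS3
assert merged == packet
```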

11.13.3 FRR for P2MP tunnels


With facility backup FRR link protection for a P2MP tunnel, the node upstream from the failed link redirects
the traffic through a bypass tunnel whose destination is the NH. The bypass tunnel is an ordinary P2P
bypass tunnel that can be shared by both P2P and P2MP tunnels. As with FRR for a P2P tunnel, an FRR label
is pushed to the packets before they are directed to the bypass tunnel. The FRR label remains until the
bypass tunnel path merges with the protected tunnel, where the label is removed.
When multiple subtunnels share a bypass tunnel, the data plane forwards only one copy of the packet to
that tunnel.

The following figure shows a P2MP tunnel that flows from P1 to P2, where it branches towards destination
PEs (PE3 and PE4). If P1 detects that the link to P2 has failed, it switches the traffic to the bypass tunnel.
When the rerouted traffic merges at P2, the FRR label is removed.
Figure 11-40: P2MP link protection example

With facility backup FRR node protection for a P2MP tunnel, the node upstream from the failure redirects
the traffic through a bypass tunnel that merges with the original P2MP tree at the NNH node. If the NH is a
P2MP branching point to N links, N bypass tunnels are required for complete protection.
The following figure shows a P2MP tunnel that flows from P1 to P2, where the tunnel branches towards
destinations PE3 and PE4. If the P2 branching point fails, P1 switches all traffic meant for PE3 to go through
bypass tunnel 1 to PE3. P1 also switches all traffic meant for PE4 to go through bypass tunnel 2 to PE4.
Figure 11-41: P2MP node protection example

11.13.4 Dual FRR protection


FRR link and node protection is usually defined in terms of FRR link or node protection (as illustrated in FRR
for P2MP Tunnels). When either FRR link or FRR node protection has been triggered by a link failure, ECI's
proprietary solution provides concurrent FRR link and FRR node protection, thereby enabling fully
protected P2MP tunnels.
Traditionally, an FRR-guaranteed or fully protected P2P tunnel is a tunnel with full FRR protection for all
hops. However, in the case of P2MP tunnels, traditional FRR-guaranteed protection leaves open a
problematic loophole. This scenario and ECI's innovative solution are described here.

FRR protection provides alternative traffic routes. These routes are activated if a connection link or a
connecting node fails. The following figure shows a portion of a P2MP tunnel. Node PE2 connects to both
transit and tail subtunnels. The transit subtunnel leads to node PE3, and the tail subtunnel terminates at
the access port of PE2. To fully protect the tunnels leading from PE2, the preceding node PE1 has been
designated the PLR. Protection bypass tunnel B1 runs from PE1 to PE2, providing link protection in case the
link from PE1 to PE2 fails. Protection bypass tunnel B2 runs from PE1 to PE3, providing node protection in
case node PE2 fails. Note that both link and node protection is required for this network configuration,
since node protection alone does not provide a backup for the subtunnel that terminates at PE2.
Figure 11-42: FRR protection typical scenario

This scenario is a classic illustration of the traffic duplication problem which, when it occurs, invalidates all
the traffic of the P2MP tunnel. If link PE1-PE2 fails and triggers both link and node protection, protective
traffic can be sent via bypass tunnel B1 (to reach node PE2) as well as via bypass tunnel B2 (to reach nodes
PE3 and continue to node PE4). Because node PE2 is also a tail endpoint for B1, node PE2 forwards the
traffic it receives onward to PE3 along the P2MP tunnel. Therefore, PE3 receives two duplicate copies of
each packet (one from PE2 and one over B2), and the traffic is thus rendered useless.

To resolve this problem, the data cards implement a method called Dual FRR. A single bypass tunnel is
defined that provides both link and node protection simultaneously. A corresponding rule is defined to
avoid traffic duplication. The Dual FRR bypass tunnel originates at PE1, the point of local repair, then drops
node-protected traffic at PE3, the node protection merge point, and continues on to drop link-protected
traffic at PE2, the link protection merge point. The protective behavior at node PE3 can be referred to as
drop-and-continue. The traffic packets dropped at PE2 as part of Dual FRR are identified as such and
therefore are not transmitted back to PE3, thus avoiding the problem of traffic duplication. Dual FRR
enables concurrent link and node protection. In this example, Dual FRR works in the event of a failure of
the link between PE1 and PE2 and/or failure of the node PE2. This is illustrated in the following figure.
Figure 11-43: Dual FRR protection

11.13.5 Additional FRR capabilities


In Facility Backup FRR, multiple protected tunnels share a bypass tunnel through the addition of an FRR
label. Facility Backup FRR is scalable in terms of the number of bypass tunnels.
MPLS also supports both Shared and Nonshared Protection BW. In Shared Protection BW, multiple bypass
tunnels share their bandwidths, while in Nonshared Protection BW, each bypass tunnel gets its own
guaranteed bandwidth. Sharing protection bandwidth can only be applied if the bypass tunnels protect
against independent risks or SRLGs.
SRLGs refer to situations where links or nodes in a network share a common physical attribute, such as fiber
duct. If a link or node fails, other links and nodes in the group may fail too. Links and nodes in the group are
said to have a shared risk or shared fate.
Bypass tunnel path selections avoid links or nodes in the same SRLG as the link or node they are protecting.
Otherwise, if that link (or node) fails, the other SRLG members may fail too.
MPLS further supports Best Effort (BE) and BW-based protection per CoS. BE protection means the bypass
tunnel protects tunnels regardless of their bandwidth, while in BW-based protection the bandwidth sum of
the tunnels protected by a bypass tunnel cannot exceed the maximum bypass tunnel bandwidth.
MPLS also offers FRR timing options. To avoid switching to protection while the underlying physical-layer
protection is also switching, FRR switching can optionally be delayed through a per-port configurable
Hold-Off time. Similarly, to prevent switching too frequently to or from protection, the switch back from
the bypass tunnel to the protected tunnel after a failure is repaired can be delayed through a per-port
configurable Wait-to-Restore (WTR) time.
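
The combined effect of the Hold-Off and WTR timers can be sketched as a small state machine. This Python sketch is a conceptual illustration under assumed semantics (the names and the polling model are not from the product):

```python
import time

class FrrSwitchTimer:
    """Delays the switch to protection by hold_off seconds and the revert
    by wtr seconds (simplified sketch of Hold-Off and WTR behavior)."""
    def __init__(self, hold_off, wtr):
        self.hold_off, self.wtr = hold_off, wtr
        self.on_bypass = False
        self.fault_since = None
        self.repair_since = None

    def tick(self, fault_present, now=None):
        now = time.monotonic() if now is None else now
        if fault_present:
            self.repair_since = None
            if self.fault_since is None:
                self.fault_since = now
            # switch only after the fault persists past Hold-Off
            if not self.on_bypass and now - self.fault_since >= self.hold_off:
                self.on_bypass = True
        else:
            self.fault_since = None
            if self.on_bypass:
                if self.repair_since is None:
                    self.repair_since = now
                # revert only after the repair persists past WTR
                if now - self.repair_since >= self.wtr:
                    self.on_bypass = False
                    self.repair_since = None
        return self.on_bypass
```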

11.13.6 MPLS-TP 1:1 linear protection


LightSOFT provides E2E linear protection for bidirectional E-LSP tunnels, as described in the MPLS-TP
Survivability Framework (RFC 6372). The goal of 1:1 linear protection is to provide protection switching
triggered by data plane OAM, similar to SDH protection, without depending on signaling or the control
plane. With this bidirectional 1:1 protection, traffic is transmitted via only one LSP, main or protection, but
never both. If an LSP fails, the traffic is automatically switched to the standby LSP.
The following figure shows a 1:1 linear protection configuration for the P2P bidirectional tunnels. This
configuration can be used for HSI, DSLAM aggregation, or L2VPN for business connectivity. The solid purple
and green lines (representing MPLS-TP bidirectional LSPs) are protected by the dotted LSPs to ensure
1:1 traffic protection.
Figure 11-44: Linear protection for various services

ECI's 1:1 linear protection implementation on Neptune platforms includes:


 Protection for bidirectional co-routed E-LSPs
 Protection State Coordination (PSC) protocol to synchronize both ends of a tunnel
 Protection triggers include:
 Local faults (server indication), including:
 MoT link failures (VCAT and LCAS modes)

 MoE link failures


 LDI indication from intermediate points to the end-point
 BER degradation
 End to End Connectivity Check (CC)
 Using BFD OAM mechanism per LSP

11.13.7 PW redundancy for H-VPLS DH topology


The following figure is an example of dual-homing in an H-VPLS network. This topology connects a pivot PE
to two H-VPLS gateway PEs and connects to a peer VPLS domain through the GW PEs.
Figure 11-45: PE Dual-Homing to H-VPLS topology

In this H-VPLS network, the dual-homed PE has configured spoke PWs to H-VPLS gateways PE1 and PE2.
One of the PEs is currently active, linked to the PE via the primary PW. The primary PW is given priority by
the EMS and is responsible for forwarding traffic to the peer H-VPLS domain. Failure of an H-VPLS gateway
PE generates an OAM defect, which in turn triggers the dual-homed PE to select a new primary PW. A
hold-off timer can be used to mask temporary server layer faults.
Another option is to trigger PW redundancy using the PW status from the gateway PE. The end-to-end PW
traverses two H-VPLS domains, and tunnel OAM is maintained over each domain. Hence, if a failure in
Domain #2 is not recovered by the tunnel protection, the gateway PE marks the PW as down and generates
a defect status message towards the pivot node, which triggers a PWR switch.
A PW switchover requires an FDB flush at PE1, PE2, and the far H-VPLS domain. This is achieved by the
transmission of CCN messages between data cards that indicate for which PE(s) the FDB entries should be
deleted (see Configuring CCN).
PW Redundancy can also be used for load balancing between the H-VPLS gateways. By configuring some
PEs with the primary PWs toward PE1 (where PE1 becomes the default H-VPLS gateway), and other PEs
with primary PWs toward PE2, the traffic load can be reasonably balanced between the two gateway PEs.

NOTE: In dual-homing to H-VPLS topology, BFD must be used to monitor the status of the
remote PE and the status of the transport layer, in order for the pivot PE to select the
appropriate PW. BFD should therefore be enabled on the tunnel carrying the PW (see
Configuring MPLS-TP Linear Protection).

11.13.8 Dual-homed device protection in H-VPLS networks


Neptune data cards support dual-homed device protection for H-VPLS networks. Dual-homed protection
for H-VPLS networks enables dual homing for multiple MPLS access rings connected to a core ring. Typical
configurations include full mesh within each access ring and spokes reaching from each ring towards
gateway nodes in the core ring. The access rings may be either open or closed.
Intelligent use of CCN enhances network resiliency and enables more effective use of dual-homed device
protection in H-VPLS networks. In some H-VPLS dual homing topologies, when there is a need for CCN to
cross VPLS domains, CCN forwarding can be enabled on the relevant NEs.
Redundant connectivity is enabled through the use of ERP (see Ethernet ring protection switching (ERPS))
in the access gateway nodes. Configuring ERP between the two local gateways prevents the creation of
loops in the network.
Figure 11-46: H-VPLS with dual homing for access rings

11.13.9 Multi-segment PW
An L2VPN multisegment pseudowire (MS-PW) is a set of two or more PW segments that function as a single
PW, as illustrated in the following figure. The routers participating in the PW segments are identified as
switching provider edge (S-PE) routers, which are located at the switching points connecting the tunnels of
the participating PW segments, or terminating provider edge (T-PE) routers, which are located at the
MS-PW endpoints. The S-PE routers can switch the control and data planes of the preceding and
succeeding PW segments. MS-PWs can span multiple cores or autonomous systems of the same or
different carrier networks.
Figure 11-47: Stitching PE

MS-PW service enables a hierarchical network structure for data networks, similar to H-VPLS capabilities.
MS-PW functionality improves scalability, facilitates multi-operator deployments, and facilitates use of
different control plane techniques in different domains. These are valuable capabilities in network
configurations that must typically be able to integrate static PW segments in the access domains and
signaled PW segments in the IP/MPLS core.
Signaling gateways (SGW) are used to tie PW segments together into a single connection (stitching) at a
given point. This functionality is implemented within a single platform located at the border of two network
domains. The two domains may both be static, both dynamic, or one static and one dynamic. Network
interworking enables LSP and service stitching, interaction between the data planes, and E2E OAM.
Figure 11-48: Signaling gateway concept

MPLS-TP and IP/MPLS domains can be connected through SGWs. In PW-based backhaul, this is
implemented through multisegment PWs (MS-PWs), including:
 Static MPLS-TP segments
 Dynamic IP-MPLS segments
 Gateway interconnections or "stitches" of both types of segments

In the current Neptune Hybrid products, MS-PWs are used to stitch together static MPLS-TP segments.
With NPT-1800 and NPT-1200 with MCIPS320, MS-PWs can also be configured as SGWs, stitching together
static and dynamic segments. MS-PWs make it possible to offer a single E2E service that seamlessly spans
network domains, simplifying service management and OAM.
Figure 11-49: PW switching point

11.13.10 Link Aggregation (LAG)


Ethernet link aggregation protection is based on standard Ethernet link aggregation schemes (IEEE
802.3ad). Link aggregation is available for both Ethernet (UNI) and EoS/MoE WAN ports. In LAG protection
schemes, a single logical link is composed of up to eight physical links. When one (or more) physical link
fails, it is simply removed until it recovers. The network continues to function correctly without the failed
link, since the remaining links of the LAG are still functioning.
Network operators can configure a LAG Link Down threshold, defining how many links can go down with
the LAG as a whole still considered operational, and at what point a LAG is considered to have failed even if
a few links are still functional.
Neptune data cards support link aggregation based on either IP or MAC address hashing, depending on the
packet header data. This capability enables superior load balancing and enhanced IP TM efficiency.
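
As a rough illustration, hash-based member selection and the Link Down threshold check might look as follows (a sketch under assumed semantics; the actual hash function and threshold rule are not specified here, and CRC32 merely stands in for a generic header hash):

```python
import zlib

def lag_member(src_addr, dst_addr, active_links):
    """Pick a LAG member port by hashing the MAC or IP address pair."""
    key = (src_addr + dst_addr).encode()
    return active_links[zlib.crc32(key) % len(active_links)]

def lag_operational(failed_links, link_down_threshold):
    """The LAG stays operational while no more than
    link_down_threshold member links are down."""
    return failed_links <= link_down_threshold

links = ["port1", "port2", "port3", "port4"]
member = lag_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", links)
assert member in links            # the same flow always hashes to the same port
assert lag_operational(1, 2)      # 1 link down: LAG still up
assert not lag_operational(3, 2)  # 3 links down: LAG declared failed
```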

The following figure shows the link aggregation approach. Two variations are displayed: one for
Ethernet/MoE ports and one for EoS WAN ports.
Figure 11-50: LAG: link aggregation examples

Link members are added and removed through the NMS.

11.13.11 Multi-chassis link aggregation (MC-LAG)


Multi-chassis Link Aggregation Group (MC-LAG) is an extension to the LAG protection scheme that provides
not only link redundancy but also protection at the node level. The basic MC-LAG includes a CE node
connected to two redundant PE peer nodes as shown in the following figure.
Figure 11-51: MC-LAG protection scheme

MC-LAG improves the performance of data networks and provides higher network protection with
improved reliability. It extends the link-level redundancy capabilities of link aggregation and adds support
for device-level redundancy. This is achieved by allowing one end of the link-aggregated port group to be
dual-homed into two different devices.
In the MC-LAG protection scheme, the CE behaves as a normal LAG device from the perspective of hashing
and traffic distribution. The PE1 and PE2 devices communicate with each other, exchanging LAG messages
via multi-chassis LACP (mLACP) over the Inter-Chassis Communication Protocol (ICCP).
As a result of this communication, the group of member ports on a PE is either Active or Standby. When the
member ports are active, load sharing is applied locally between the ports. Since ports can be active on only
one PE, the two PEs exchange port status information, so PE1 knows whether the LAG on PE2 is up or down,
and vice versa. If the LAG contains multiple ports, the LAG Link Down threshold is used to decide whether
the LAG is up or down. Each PE makes a local decision whether to activate its local ports or keep them on
standby. Equipment failure of the peer PE is detected via OAM (MPLS-TP BFD) and triggers the local LAG to
activate its ports.
When a failure is detected, the system reacts by triggering a switchover from the Active PE to the Standby
PE.
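
The Active/Standby decision can be sketched as a simple local rule (illustrative Python; the real mLACP/ICCP negotiation is richer, and the "primary" flag is an assumption standing in for the configured priority):

```python
def mc_lag_role(local_lag_up, peer_lag_up, local_is_primary):
    """Decide whether this PE's member ports are Active or Standby
    (simplified: the primary PE is active whenever its LAG is up)."""
    if not local_lag_up:
        return "standby"
    if not peer_lag_up:
        return "active"            # peer LAG failed: take over
    return "active" if local_is_primary else "standby"
```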

11.13.12 LSP tunnel restoration


Automatic network restoration capabilities provide valuable protection against multiple failures, assuring
network availability over time with efficient hitless restoration. Dynamic restoration capabilities ensure
that an alternative route is always available as soon as it is needed, even if multiple failure cycles are
triggered.
In our equipment, dynamic restoration capabilities are supported for bidirectional tunnels. Both protected
and unprotected tunnels can be restored. When an automatic switch to protection is triggered, LightSOFT
restores the failed LSP and downloads the restoration route to the network. As soon as the failed link is
fixed, LightSOFT reverts the restored LSP back to the originally provisioned LSP and downloads the restored
(original) route to the network. The following sequence shows a typical example of this restoration process.
 Initially, the network is working normally, with traffic transmitted over LSP1, the original main traffic
path.
Figure 11-52: Automatic restoration: phase 1 (normal operation)

 A link failure along LSP1 automatically triggers 1:1 protection and traffic is redirected to LSP2, the
original protection path.
Figure 11-53: Automatic restoration: phase 2 (link failure)

 The NMS now recalculates and downloads a new path, restoring traffic to most of the original LSP1
route while bypassing the link failure.
Figure 11-54: Automatic restoration: phase 3 (NMS recalculates)

If multiple link failures are detected in the original LSP, LightSOFT dynamically restores the relevant tunnels
by configuring alternative routes, working link by link and taking all active failures into account when
performing restoration. As the participating links are repaired, LightSOFT reverts the tunnels where
possible to the original links. Network restoration is a dynamic, flexible feature that intelligently chooses
the most efficient route, based on the current network status, correlating all affected tunnels and
identifying the most efficient route for the current network functional topology. As link failures are fixed,
LightSOFT efficiently reverts the affected tunnels, correlating the tunnels and repaired links and completing
either full or partial reversions.
Automatic network restoration can be configured for protected and unprotected tunnels, for either one or
both main and protection paths. Operators can choose how they prefer to optimize resource usage, either
maximizing disjoint route selection or focusing on resource sharing to minimize resource utilization.
Network restoration provides protection from multiple network failures, since new LSP paths are
dynamically prepared and ready for use before they are needed.

You can view the tunnel status in the Tunnel List window. In the event of a failure, a dotted line indicates
the original path of the tunnel and a solid line of the same color indicates the active (restoration) path.
Figure 11-55: Tunnel restoration

In the Tunnel List:


 Restoration Status: Indicates whether the restoration attempt was successful.
 Restoration: Indicates whether restoration is enabled on this tunnel.
 Number of Retries: The maximum number of restoration attempts LightSOFT performed to find
available resources in the event of a failure.
You can exclude one or more links from LSP restoration. If a link is excluded, it is not used in LSP
restoration, unless the link's Ring ID is the same as that of the provisioned path of the failed main or
protection LSP.

11.13.13 Customer Change Notification (CCN)


Communication networks are dynamic entities. On a macro long-term level, networks are constantly
growing and evolving over time. On a micro immediate level, networks are constantly reconfiguring their
path and tunnel configurations in response to changing network traffic conditions and equipment status.
Dynamic networks must be agile, able to react in real time to changes in network status.
A common approach is to handle dynamic network status changes using an LDP MAC Withdraw
mechanism. Neptune data cards offer a more effective approach by providing CCN capabilities. Topology
changes, such as a temporary link down triggering an RSTP/MSTP recovery action, automatically trigger
messages notifying remote PEs of changes in the network topology. Change notification messages are
distributed to all VPLS peers.
Intelligent configuration rules ensure that notifications are transmitted responsibly, without confusion from
unnecessary multiple notification messages and without affecting uninvolved traffic. Neptune data cards
support selective FDB flush, whereby CCN messages trigger a flush of only the specific FDB entries whose
source was the PE that originally triggered the topology change.
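
Conceptually, a selective flush removes only the FDB entries learned from the PE that originated the topology change, leaving all other entries intact. A minimal Python sketch (the data structures and names are illustrative assumptions):

```python
# FDB modeled as: MAC address -> PE (or port) the address was learned from.
def selective_flush(fdb, originating_pe):
    """Flush only the entries learned from the PE that signaled the change."""
    return {mac: pe for mac, pe in fdb.items() if pe != originating_pe}

fdb = {"00:aa": "PE1", "00:bb": "PE2", "00:cc": "PE1"}
after = selective_flush(fdb, "PE1")
# entries learned from other PEs survive the flush: {"00:bb": "PE2"}
```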

Intelligent use of CCN enhances network resiliency and enables more effective use of dual-homed device
protection in dual homing scenarios as well as H-VPLS networks. In some H-VPLS dual homing topologies,
when there is a need for CCN to cross VPLS domains, CCN forwarding can be enabled on the relevant NEs.
Figure 11-56: CCN functionality

11.14 SDH protection schemes


The Neptune Product Line supports various high-reliability traffic protection mechanisms, including SDH
Path/Circuit Protection Schemes and MSP 1+1 at the SDH level. For Ethernet traffic, LCAS-based protection
is supported. Collectively, these proven redundancy mechanisms ensure the complete integrity of all traffic
transfers.
SDH path protection schemes are used to protect each individual VC or EoS trail, connecting every two VCs
or EoS (WAN) ports on any data card. These schemes range from unprotected trails, which use the
minimum network resources, through SNCP for 1+1 protection, and up to MS-SPRing, the most effective
means of ring protection. The Neptune features proven redundancy mechanisms to ensure the
complete integrity of all traffic transfers. System protection schemes offer highly reliable trail protection
arrangements and equipment duplication on all units. The platform supports protection schemes at the line
and service levels.
The Neptune provides complete protection for internal traffic paths. All traffic is fully redundant within the
platform and is routed via separate traffic paths and hardware units. In the event of equipment or line
failure, traffic protection switching takes place within 8-12 msec.

The Neptune supports mesh and ring traffic protection schemes through Dual Node Interconnection (DNI),
Dual Ring Interconnection (DRI), and restoration. The restoration mechanism ensures traffic rerouting in
the event of a major contingency. Telecom operators can define their own major contingencies based on
individual operating parameters. Traffic restoration time generally depends on network complexity and
traffic load.
For more information about the traffic restoration feature, see the LightSoft User Manual and the relevant
EMS user manuals.

11.14.1 SNCP
SNCP provides independent trail protection for individual subnetworks connected to the Neptune Product
Line platforms. Combined with the system’s drop-and-continue capability, SNCP is a powerful defense
against multifailure conditions in a mesh topology. By integrating SNCP into the Neptune Products,
operators achieve superior traffic availability figures. Therefore, SNCP is extremely important for leased
lines or other traffic requiring superior SLA availability.
SNCP/N and SNCP/I at any VC level (VC-4, VC-3, VC-12) are supported. The SNCP mode can be configured
through the EMS-APT/LCT-APT per VC. Automatic SNCP switching is enabled, without operator intervention
or path redefinition. The Neptune Product Line supports path protection on TDM-based matrices, such
as the XIOxx, CPTSxxx, and MCPTSxxx. The result is exceptionally fast protection switching in less than 30 msec,
with typical switching taking only a few milliseconds. Protection switching is performed via the
cross-connect matrix in the XIOxx, CPTSxxx, and MCPTSxxx cards.
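The SNCP/I vs. SNCP/N distinction described above can be sketched as a simple 1+1 selector. This is a hypothetical Python model for illustration only (the function name, defect names, and mode strings are invented, not product code): SNCP/I reacts only to server-layer defects, while SNCP/N also reacts to non-intrusive path-layer monitoring.

```python
# Hypothetical sketch of an SNCP 1+1 selector (illustration, not ECI code).
# Both paths carry the same VC; the selector picks the better one per the
# monitored defects.

SERVER_DEFECTS = {"LOS", "LOF", "MS-AIS", "AU-AIS"}   # server-layer (SNCP/I)
PATH_DEFECTS = {"SD", "TIM", "UNEQ"}                  # path-layer (SNCP/N only)

def sncp_select(main_defects, protection_defects, mode="SNCP/N"):
    """Return 'main' or 'protection' for a 1+1 selector."""
    relevant = SERVER_DEFECTS if mode == "SNCP/I" else SERVER_DEFECTS | PATH_DEFECTS
    main_bad = bool(set(main_defects) & relevant)
    prot_bad = bool(set(protection_defects) & relevant)
    # Stay on main unless it fails while protection is healthy.
    if main_bad and not prot_bad:
        return "protection"
    return "main"
```

The sketch shows why SNCP/N gives better coverage: a degraded-signal (SD) condition triggers a switch in SNCP/N mode but is ignored in SNCP/I mode.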

11.14.2 SDH line protection


The Neptune incorporates two independent MS protection mechanisms:
 Linear – Linear Multiplex Section Protection (MSP):
 MSP 1+1 unidirectional
 MSP 1+1 bidirectional
 Ring – MS-SPRing


11.14.2.1 MSP
MSP is designed to protect single optical links. This protection is most suitable for appendage TM/star links
or for 4-fiber links in chain topologies.
The Neptune supports MSP in all optical line cards (STM-1, STM-4, STM-16, and STM-64). MSP 1+1
unidirectional and bidirectional modes are supported. MSP 1+1 is implemented between two SDH
interfaces (working and protection) of the same bitrate that communicate with two interfaces on another
platform. As with SNCP and path protection, in MSP mode the Neptune provides protection for both fiber
and hardware faults.
The following figure shows a 4-fiber star Neptune with all links protected. This ensures uninterrupted
service even in the case of a double fault. The Neptune automatically performs MSP switching within
50 msec.
Figure 11-57: MSP protection modes
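The difference between the two MSP 1+1 modes can be illustrated with a small model (an assumption-laden sketch with invented names, not the actual firmware): in unidirectional mode each end selects its receive side independently, whereas in bidirectional mode a failure seen by either end drives both directions to the protection section.

```python
# Minimal sketch contrasting MSP 1+1 unidirectional and bidirectional modes
# (hypothetical model, not product code).

def msp_selectors(fail_a_to_b, fail_b_to_a, bidirectional=False):
    """Return (selector at A, selector at B): 'working' or 'protection'."""
    sel_b = "protection" if fail_a_to_b else "working"   # B receives A->B
    sel_a = "protection" if fail_b_to_a else "working"   # A receives B->A
    if bidirectional and "protection" in (sel_a, sel_b):
        sel_a = sel_b = "protection"                     # ends coordinate via K1/K2
    return sel_a, sel_b
```

A single-direction fiber fault therefore switches only one selector in unidirectional mode, but both selectors in bidirectional mode.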

11.14.2.2 MS-SPRing
In addition to SNCP protection, which may also be implemented in mesh topologies, the Neptune supports
MS-SPRing, which provides bandwidth advantages for selected ring-based traffic patterns.
Two-fiber MS-SPRing supports any 2.5 Gbps and/or 10 Gbps rings closed by the Neptune via
XIO30_16/XIO64/XIO16_4/CPTS100/CPTS320 cards, in compliance with applicable ITU-T standards. Protection
switching is fully automatic and performed in less than 50 msec.

NOTES:
 In the NPT-1030 and NPT-1200 products, MS-SPRing is supported by the following card
sets:
 XIO30_16
 XIO64
 XIO16_4
 CPTS100
 As explained in this section, MS-SPRing is a network protocol that runs on the ring
aggregate cards. The PDH, STM-1, STM-4, and data cards (electrical and optical) that serve
as drop cards connected to the client are not part of the MS-SPRing ring protocol.
However, all client services can be delivered via MS-SPRing on Neptune networks through
the drop cards and the SDH aggregate cards that create the MS-SPRing protection ring.

MS-SPRing can support LO traffic arriving at the nodes in the same way it does HO traffic.


In MS-SPRing modes, the STM-n signal is divided into working and protection capacity per MS. In case of a
failure in one MS of the ring, the protection capacity loops back the affected traffic at both ends of the
faulty MS. The platform supports the full squelching protocol to prevent traffic misconnections in cases of
failure at isolated nodes. Trails to be dropped at such nodes are muted to prevent their being delivered to
the wrong destination.
MS-SPRing is particularly beneficial in ring applications with uniform or adjacent traffic patterns, as it offers
significant capacity advantages compared to other protection schemes.
The following figure shows a Neptune in a 2-fiber MS-SPRing. In this configuration, two fibers are
connected between each site. Each fiber delivers 50% of the active and 50% of the shared protection
traffic. For example, in an STM-16 ring, 8 VC-4s are active and 8 VC-4s are reserved for shared protection.
In the event of a fiber cut between sites A and D, traffic is transported through sites B and C on the black
portion of the counterclockwise fiber. The switch in traffic is triggered by the APS protocol that transmits
control signals over the K1 and K2 bytes in the fiber from site D to site A.
Figure 11-58: Two-fiber protection
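The K1/K2 APS signaling mentioned above can be sketched as a byte decoder. This is a hedged illustration following the general ITU-T G.841 layout (the request table below is a simplified subset, and the field names are ours, not the manual's): K1 carries the bridge request and destination node ID; K2 carries the source node ID, the short/long path bit, and a status code.

```python
# Hedged sketch of decoding the MS-SPRing APS bytes K1/K2 (simplified subset
# of the ITU-T G.841 ring request codes; illustrative, not product code).

REQUESTS = {0b1011: "SF-Ring", 0b1000: "SD-Ring",
            0b0101: "Wait-to-Restore", 0b0000: "No Request"}

def decode_k1_k2(k1, k2):
    return {
        "request": REQUESTS.get(k1 >> 4, "other"),  # K1 bits 1-4
        "dest_node": k1 & 0x0F,                     # K1 bits 5-8
        "src_node": (k2 >> 4) & 0x0F,               # K2 bits 1-4
        "path": "long" if k2 & 0x08 else "short",   # K2 bit 5
        "status": k2 & 0x07,                        # K2 bits 6-8
    }
```

For example, K1 = 0xB4 encodes a Signal Fail (Ring) request destined for node 4, which is what site A would receive from site D after the fiber cut described above.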


Dual-node interconnection with MS-SPRing


When the working and protection fiber pairs travel in separate ducts, two rings can be connected via a dual
link over two different nodes. This enables the network to overcome multiple failures, such as fiber cuts or node
failures, thus improving traffic availability in the network.

Integration of LO SNCP and MS-SPRing


The Neptune can simultaneously close MS-SPRing-protected metro core rings and SNCP-protected
edge-access rings within the same NE. LO traffic can be transported directly from multiple edge-access rings
to the metro core ring transparently, without external XCs or mediation equipment. This reduces floor
space and costs, and improves site reliability.

11.14.3 Dual Ring Interconnection (DRI)


DRI protection uses a single NE to bridge two rings. In a typical example of DRI protection, a single
connection point is used to close two rings, providing protection for scenarios that include two fiber cuts.
DRI trails are created automatically by LightSoft.
Traditional protection schemes rely upon configuration of two paths, main and protection. Traffic is simply
switched from the main path to the protection path if there are any (one or more) fiber cuts in the main
path. DRI topologies improve upon the traditional model by protecting against fiber cuts in both the main
and protection paths.
DRI configures additional links between the main and protection paths to provide multiple alternate route
possibilities between the main and protection paths. This is illustrated in the following figure.
Figure 11-59: DRI classic protection model

The preceding figure portrays two endpoints linked by main and protection paths. Two links are configured
between the two paths, represented by the X shape link topology in the center of the figure. The first fiber
cut on the main path (labeled A), triggers a switch at both endpoints from the main path to the protection
path. A second fiber cut on the protection path (labeled B), triggers a switch at the appropriate points from
the protection path back to the main path. After each fiber cut, the optical equipment used at the DRI
configured nodes at either end of the DRI links must also switch their internal Rx/Tx settings accordingly.
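The survivability argument above can be made concrete with a tiny graph model (node names and topology are invented for illustration; this is not how the NE computes routes): with the X-shaped DRI links in place, a route between the endpoints survives one cut on the main path plus one cut on the protection path, which would isolate the endpoints in the traditional model.

```python
# Illustrative model (not product code) of why DRI survives two fiber cuts.
from collections import deque

def has_route(edges, cuts, src, dst):
    """BFS over the links that survive the given cuts."""
    alive = [e for e in edges if e not in cuts and (e[1], e[0]) not in cuts]
    adj = {}
    for a, b in alive:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, q = {src}, deque([src])
    while q:
        n = q.popleft()
        if n == dst:
            return True
        for m in adj.get(n, []):
            if m not in seen:
                seen.add(m)
                q.append(m)
    return False

# Main path E1-M1-M2-E2, protection path E1-P1-P2-E2,
# plus the two X-shaped DRI interconnect links M1-P2 and P1-M2.
DRI_EDGES = [("E1", "M1"), ("M1", "M2"), ("M2", "E2"),
             ("E1", "P1"), ("P1", "P2"), ("P2", "E2"),
             ("M1", "P2"), ("P1", "M2")]
```

Without the two DRI links, cutting one segment on each path disconnects the endpoints; with them, a route such as E1-M1-P2-E2 remains.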


11.14.4 Dual Node Interconnection (DNI)


DRI configurations require two fiber links between the main and protection paths. Under certain
circumstances, such as complex or distant topologies, it may be difficult to ensure two fiber connections
between the paths. Therefore, we have designed a DNI configuration that can be implemented with a single
fiber link between the two paths. This is illustrated in the following figure.
Figure 11-60: Optical DNI enhanced protection model

11.15 Optical layer protection


Protection is of the utmost importance for the high-capacity traffic transmitted through WDM systems.
Neptune features a variety of optical protection schemes; some are introduced in this section.

11.15.1 Optical protection mechanisms


The Neptune features a variety of optical protection options, enabling network operators to choose the
protection scheme most useful for their network configuration. Protection options include:
 Full equipment protection: The MXP10 as a multirate combiner for standard P2P service supports
OCH 1+1 with full equipment protection.
The MXP10 offers the option of arranging double line aggregates on separate cards with two clients
connected to a single client interface. This configuration requires card installation in adjacent slots
and a splitter/coupler or Y-fiber to connect the client interfaces.
 Network protection: The MXP10 as a multirate combiner for standard P2P service supports standard
line protection on a single card.
The Neptune provides OCH protection very similar to its path protection mechanism. By using double
transponder/combiner cards with built-in OCH units, a dual-traffic path goes around the ring and is
received by both the main and the protection transponder/combiner. Both perform continuous PM to
ensure channel integrity.


If PM on the main transponder/combiner does not indicate a problem, a message is sent through the
backplane to the protection transponder/combiner for it to shut down its laser to the client, thereby
ensuring transmission to the client from only one transponder/combiner (the main). Protection switching
to the protection transponder/combiner occurs automatically when a failure is detected by the main
transponder/combiner.
The protected channels in the following figure are user selected.
Figure 11-61: OCH 1+1 protection

OCH protection is currently the most popular optical protection method for the optical layer. The
mechanism transports each optical channel in two directions, clockwise and counterclockwise. The shortest
path is defined as the main or working channel; the longer path as the protection channel.
The main benefit of OCH protection is its ability to separately choose the shortest path as the working path
for each channel. There are no dedicated working and protection fibers. Each fiber carries traffic with both
working and protection signals in a single direction.
The OCH 1+1 protection scheme provides separate protection for each channel. For SDH, GbE, and 10G,
protection switching is based on PM parameters. Switching criteria can be Loss of Signal (LOS), Loss of
Frame (LOF), or Signal Degrade (SD). Switching to protection is automatic when a malfunction is
detected in a single channel. This is very convenient, as users can choose the channels to protect and
the main or protection paths. Switch-to-protection time in the OCH 1+1 protection scheme is less than
50 msec.
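The two per-channel decisions described above can be sketched in a few lines (an assumed model with invented names, not product logic): the shorter ring arc is chosen as the working path for each wavelength, and a PM trigger on the active channel causes the switch to protection.

```python
# Sketch (assumed logic, not product code) of per-channel OCH 1+1 behavior.

PM_TRIGGERS = {"LOS", "LOF", "SD"}   # switching criteria named above

def och_working(cw_hops, ccw_hops):
    """Pick the shorter ring arc as working, the other as protection."""
    return ("CW", "CCW") if cw_hops <= ccw_hops else ("CCW", "CW")

def och_select(active, standby, active_defects):
    """Switch to protection when a PM trigger fires on the active channel."""
    return standby if PM_TRIGGERS & set(active_defects) else active
```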
With the MXP10, you can choose any combination of protected network traffic, unprotected traffic, fully
protected traffic including client port protection, and so on. Dual homing from access to ring is also
supported.


11.16 Equipment protection


The Neptune's high reliability is achieved through comprehensive equipment redundancy on all units
(common units, traffic units, I/O cards, and network connections). Automatic protection switching is
initiated by a robust internal BIT diagnostic system.

11.16.1 Common units


The Neptune provides 1+1 and 1:1 protection of the power supply, central switches, and fan units.

11.16.2 Traffic unit (I/O card) hardware protection


Neptune PON Data cards provide 1:1 hardware protection. Optical interfaces are duplicated using
splitter/coupler devices (Y-fibers or dedicated splitter modules) and electrical interfaces are protected using
an external switch.

11.16.3 Fast IOP: 1:1 card protection


Fast IOP offers the reliability of 1:1 card protection. The protection card is kept on hot standby, ready to
step in immediately, with no delay required for card synchronization. All tables, including FIB, RSTP, etc.,
are kept updated between the active and standby cards. Fast IOP can be used in both revertive and
non-revertive mode. Card protection is based on BIT, card plug-out, and manual switching through the
management system. In Fast IOP for optical links, the links are connected with Y-fiber splitters and
couplers. In Fast IOP for electrical links, the links are connected through switches.
The following figure shows a simple IOP example for optical links.
Figure 11-62: Fast IOP protection
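The Fast IOP behavior described above can be modeled as a small state machine (a hypothetical sketch; the class, trigger names, and card labels are invented): the standby card mirrors the active card's tables, switchover fires on BIT failure, card plug-out, or a manual command, and the revertive flag controls whether traffic returns to the original card after the fault clears.

```python
# Hypothetical state sketch of Fast IOP 1:1 card protection (not firmware).

IOP_TRIGGERS = {"BIT_FAIL", "PLUG_OUT", "MANUAL"}

class FastIOP:
    def __init__(self, revertive=False):
        self.active, self.standby = "card1", "card2"
        self.revertive = revertive

    def event(self, trigger):
        if trigger in IOP_TRIGGERS:
            # Standby tables (FIB, RSTP, ...) are already synchronized,
            # so the swap needs no resynchronization delay.
            self.active, self.standby = self.standby, self.active
        elif trigger == "FAULT_CLEARED" and self.revertive:
            # Revertive mode: return to the original card once it recovers.
            if self.active == "card2":
                self.active, self.standby = "card1", "card2"
        return self.active
```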


11.16.4 Enhanced IOP (eIOP)


Neptune data cards (such as DMGE_4_L2/DMGE_8_L2/DMXE_48_L2 and DMXE_22_L2) support Enhanced
IOP (eIOP) functionality, with switchover triggered by link failures (LOS) in addition to the standard node
failure triggers. Adding LOS as an IOP trigger enhances IOP functionality, freeing up a port on each
participating card for carrying additional traffic. This is explained in the following example.
The following figure illustrates two DMxx cards used in a network gateway node, between a PB/MPLS
network cloud and the customer equipment. The DM cards are associated for IOP protection. The network
configuration is similar to that shown in the figure illustrating traditional Fast IOP 1:1 card protection.
Figure 11-63: Enhanced IOP example

With traditional Fast IOP, a link failure between DM #1 and the router would result in traffic loss, since DM
#2 remains designated as standby. This means that the router would not be able to find any available route
for the traffic. To prevent this loss of traffic, the links are configured over splitter/coupler cables that link both
cards to the router ports (as illustrated in the figure Fast IOP protection).
DM cards resolve this problem through the use of eIOP, by adding LOS as an IOP trigger on selected LAN
ports. With eIOP, a failure on the link to the active DM card triggers an IOP switchover. DM #2 becomes
active and activates transmissions on the LAN ports. The router detects this link is now up and
sets/advertises a new traffic route. Traffic is restored.
With eIOP, the splitter/coupler cable is no longer required. A regular fiber cable can be used between the
DM cards and the router, as illustrated in the preceding figure. This frees a port on each DM card to carry
additional traffic.
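The eIOP extension described above amounts to one extra condition in the switchover decision. The sketch below is illustrative (function and trigger names are invented, not firmware): besides the node-level triggers, a LOS on a LAN port that was selected as an IOP trigger port also causes the switchover.

```python
# Sketch (assumption-laden, not product code) of the eIOP trigger decision.

NODE_TRIGGERS = {"BIT_FAIL", "PLUG_OUT", "MANUAL"}

def eiop_should_switch(trigger, port=None, los_trigger_ports=()):
    if trigger in NODE_TRIGGERS:
        return True
    # eIOP extension: link failure (LOS) on a selected LAN port also triggers IOP.
    return trigger == "LOS" and port in los_trigger_ports
```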

11.16.5 Tributary Protection (TP)


NPT-1200, NPT-1050, and NPT-1030 platforms support Tributary Protection (TP) via protection cards
installed in the EXT-2U. This provides protection against tributary card failures, such as card power-off, card out,
BIT fail, and so on. The platforms should be configured with TDM-based central matrices as follows:
 NPT-1200 with XIO16_4, XIO_64, CPTS100, or CPTS320
 NPT-1050 with MCPTS100
 NPT-1030 with XIO30Q_1&4, XIO30_4, or XIO30_16
The protection scheme is 1:1. A TP scheme must be configured with the relevant
tributary cards, meaning the protecting and the protected cards. In the Neptune Product Line, this involves
defining a Protection Group (PG), as follows:
 Protecting card: Only one tributary card can be selected as the protecting card. This card should have
no existing trails. The protecting card can be located in any slot.


 Protected cards: One or two tributary card(s) (one for a 1:1 scheme) can be selected as protected
cards. A protected card can have existing trails. This means that TP can be performed for a card
carrying traffic, without removing existing traffic.
 Associate the protecting card and protected cards with a proper TP card.
The Neptune has three types of managed TP cards:
 TPEH8_1
 TPS1_1
 TP63_1
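The PG rules above can be summarized as a small validation sketch (the function, dictionary, and error strings are invented for illustration; the card-type-to-TP-card mapping follows Tables 11-2 through 11-4): the protecting card must carry no trails, a 1:1 group protects exactly one card of the same type, and the associated TP card must match the card type.

```python
# Hedged sketch of the Protection Group (PG) validation rules (not EMS code).

TP_CARD_FOR = {"PE1_63": "TP63_1", "PME1_63": "TP63_1",
               "S1_4": "TPS1_1", "P345_3E": "TPS1_1",
               "DMGE_4_L2": "TPEH8_1", "DMGE_8_L2": "TPEH8_1",
               "DMXE_22_L2": "TPEH8_1", "DMXE_48_L2": "TPEH8_1"}

def validate_pg(protecting, protected, tp_card):
    """protecting / each protected entry: {'type': str, 'trails': int}."""
    if protecting["trails"] != 0:
        return "protecting card must have no existing trails"
    if len(protected) != 1:                       # 1:1 scheme
        return "a 1:1 PG protects exactly one card"
    if protected[0]["type"] != protecting["type"]:
        return "protecting and protected cards must be the same type"
    if TP_CARD_FOR.get(protecting["type"]) != tp_card:
        return "wrong TP card for this card type"
    return "ok"
```

Note that only the protecting card must be trail-free; a protected card may already carry traffic, which is what allows TP to be added without removing existing trails.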
The following tables list the various tributary protection options for the platforms.

Table 11-2: NPT-1200 tributary PG options


TP type     | Protection scheme | Protected card | Protecting card | Associated TP card | Protected interfaces | Comments
63 x E1     | 1:1 | PE1_63     | PE1_63     | TP63_1  | 63 x E1     | Extension unit only
63 x E1     | 1:1 | PME1_63    | PME1_63    | TP63_1  | 63 x E1     | Base unit cards
4 x STM-1e  | 1:1 | S1_4       | S1_4       | TPS1_1  | 4 x STM-1e  | Extension unit only
3 x E3/DS-3 | 1:1 | P345_3E    | P345_3E    | TPS1_1  | 3 x E3/DS-3 | Extension unit only
4 x GE      | 1:1 | DMGE_4_L2  | DMGE_4_L2  | TPEH8_1 | 4 x GE      | Enhanced IOP (eIOP) by Y cable is supported as well
8 x GE      | 1:1 | DMGE_8_L2  | DMGE_8_L2  | TPEH8_1 | 8 x GE      | Enhanced IOP (eIOP) by Y cable is supported as well
2 x GE      | 1:1 | DMXE_22_L2 | DMXE_22_L2 | TPEH8_1 | 2 x GE      | Enhanced IOP (eIOP) by Y cable is supported as well
8 x GE      | 1:1 | DMXE_48_L2 | DMXE_48_L2 | TPEH8_1 | 8 x GE      | Enhanced IOP (eIOP) by Y cable is supported as well

Table 11-3: NPT-1050 tributary PG options


TP type     | Protection scheme | Protected card | Protecting card | Associated TP card | Protected interfaces | Comments
63 x E1     | 1:1 | PE1_63  | PE1_63  | TP63_1 | 63 x E1     | Extension unit only
63 x E1     | 1:1 | PME1_63 | PME1_63 | TP63_1 | 63 x E1     | Base unit cards
4 x STM-1e  | 1:1 | S1_4    | S1_4    | TPS1_1 | 4 x STM-1e  | Extension unit only
3 x E3/DS-3 | 1:1 | P345_3E | P345_3E | TPS1_1 | 3 x E3/DS-3 | Extension unit only


Table 11-4: NPT-1030 tributary PG options


TP type     | Protection scheme | Protected card | Protecting card | Associated TP card | Protected interfaces | Comments
63 x E1     | 1:1 | PE1_63  | PE1_63  | TP63_1 | 63 x E1     | Extension unit only
63 x E1     | 1:1 | PME1_63 | PME1_63 | TP63_1 | 63 x E1     | Base unit cards
2 x STM-1e  | 1:1 | SMD1B   | SMD1B   | TPS1_1 | 2 x STM-1e  | Base unit cards
4 x STM-1e  | 1:1 | SMQ1&4  | SMQ1&4  | TPS1_1 | 4 x STM-1e  | Base unit cards
4 x STM-1e  | 1:1 | S1_4    | S1_4    | TPS1_1 | 4 x STM-1e  | Extension unit only
3 x E3/DS-3 | 1:1 | P345_3E | P345_3E | TPS1_1 | 3 x E3/DS-3 | Extension unit only

11.16.5.1 TP63_1
The TP63_1 provides 1:1 protection for two PE1_63 cards installed in the EXT-2U platform and PME1_63
cards in the base unit. It is activated by the MCP1200, enabling a single I/O backup card to protect the main
(working) I/O card when a failure is detected.

The TP63_1 is connected as follows:


 The traffic connectors on the protection I/O module are connected to the PROTECTED CARD1 double
68-pin female VHDCI connector on the TP63_1.
 The traffic connectors on the active I/O module are connected to the PROTECTED CARD2 double
68-pin female VHDCI connectors on the TP63_1.
 The traffic cables from the DDF are connected to the CUSTOMER CONNECTION double 68-pin female
VHDCI connector on the TP63_1.

Figure 11-64: TP63_1 front panel


Table 11-5: TP63_1 front panel LED indicators


Marking        | Full name      | Color | Function
ACT.           | Card active    | Green | Normally blinks at 0.5 Hz. Off or steadily on indicates the card is not running normally.
FAIL           | Card fail      | Red   | Normally off. Lights steadily when a card failure is detected.
TRAFFIC ACTIVE | Traffic active | Green | Lights steadily when traffic is being transferred in the corresponding PE1_63 module.

11.16.5.2 TPS1_1
The TPS1_1 provides 1:1 protection for up to four high-rate interfaces. It is activated by the MCP1200
(or the corresponding controller of the platform it is installed in), enabling a single I/O backup module to protect
the main (working) card when a failure is detected.
The TPS1_1 is connected as follows:
 The traffic connectors on the protection I/O module are connected to the PROTECTING CARD1 coaxial
8W8 connector on the TPS1_1.
 The traffic connectors on the active I/O module are connected to the PROTECTED CARD2 coaxial 8W8
connector on the TPS1_1.
 The traffic cables from the DDF are connected to the CUSTOMER CONNECTION connectors on the
TPS1_1.

Figure 11-65: TPS1_1 front panel

Table 11-6: TPS1_1 front panel LED indicators


Marking        | Full name      | Color | Function
ACT.           | Card active    | Green | Normally blinks at 0.5 Hz. Off or steadily on indicates the card is not running normally.
FAIL           | Card fail      | Red   | Normally off. Lights steadily when a card failure is detected.
TRAFFIC ACTIVE | Traffic active | Green | Lights steadily when traffic is being transferred in the corresponding module.


11.16.5.3 TPEH8_1
The TPEH8_1 provides 1:1 protection for up to eight electrical Ethernet interfaces (10/100/1000BaseT). It is
activated by the MCP1200, enabling a single I/O backup module to protect the main (working) card when a
failure is detected.

The card design also supports the protection of two separate modules, each with up to four electrical
Ethernet ports. The markings on the TPEH8_1 are divided into two groups that indicate such an option.
The TPEH8_1 is connected as follows:
 The customer's Ethernet traffic is connected to the four RJ-45 connectors marked CUSTOMER
CONNECTION 1.
 The protected (operating) module is connected to the SCSI connector marked PROTECTED CARD 1.
 The protecting (standby) module is connected to the SCSI connector marked PROTECTING CARD 1.
The second group of connectors marked with the suffix 2 is connected similarly for protecting a
second set of four electrical Ethernet interfaces.

Figure 11-66: TPEH8_1 front panel

Table 11-7: TPEH8_1 front panel LED indicators


Marking        | Full name      | Color | Function
ACT.           | Card active    | Green | Normally blinks at 0.5 Hz. Off or steadily on indicates the card is not running normally.
FAIL           | Card fail      | Red   | Normally off. Lights steadily when a card failure is detected.
TRAFFIC ACTIVE | Traffic active | Green | Lights steadily when traffic is being transferred in the corresponding Ethernet module.


11.16.6 Integrated protection for I/O cards with electrical interfaces
NPT offers electrical protection modules. Two steps are required for using this feature:
1. Define one redundant I/O card of the same type as the card to be protected in one of the free I/O
slots.
2. Insert an electrical protection module in the EXT-2U shelf.

NOTE: For more details refer to Tributary Protection (TP).

The purpose of the protection module is to automatically replace a malfunctioning I/O card with the
redundant I/O card. When protection is activated, the protection module disconnects the external
ports of the malfunctioning I/O card and connects them to the redundant card. In parallel, the matrix
card switches the traffic from the malfunctioning card's slot to the protection slot (the slot of the
redundant I/O card).
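The two parallel actions just described can be sketched as follows (an illustrative sequence with invented names and data structures, not product code): the protection module re-routes the external ports to the redundant card while the matrix moves the traffic cross-connects to the redundant card's slot.

```python
# Illustrative sketch (names assumed) of integrated electrical protection.

def activate_protection(ports, matrix, failed_slot, redundant_slot):
    # Protection module: re-route external ports to the redundant card.
    for p in ports:
        if ports[p] == failed_slot:
            ports[p] = redundant_slot
    # Matrix card, in parallel: move the traffic to the protection slot.
    matrix[failed_slot], matrix[redundant_slot] = None, matrix[failed_slot]
    return ports, matrix
```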

11.17 Security
Comprehensive security mechanisms protect both the complete transport network and individual clients
within the network. ECI is committed to incorporating powerful, advanced security technology and
methodology across the full range of our product offering. The current Neptune release includes certain
new security features, with additional key security enhancements now in development, to be implemented
in upcoming releases.
EMS-APT can be upgraded to apply enhanced security settings to the EMS and to selected NEs managed by
the EMS. Communication channels between entities with enhanced security settings are secured, and
information is sent via the SSH-2 protocol.
The main security functions are implemented through the following functionality:
 RADIUS clients (authentication, and two levels of authorization: viewer and administrator)
 SSH V2.0 and SFTP
 SW integrity based on SHA-2
 Public key authentication for NEs


11.17.1 Secured FTP and SSH


The SSH File Transfer Protocol (also known as Secure FTP or SFTP) is a network protocol for
accessing and managing files on remote file systems. SFTP also allows file transfers between hosts. Unlike
standard FTP, SFTP encrypts both commands and data, preventing passwords and sensitive information
from being transmitted in the clear over a network. SFTP clients are usually programs that use SSH to
access, manage, and transfer files. SFTP clients are functionally similar to FTP clients, but they use different
protocols. Consequently, you cannot use standard FTP clients to connect to SFTP servers, nor can you use
clients that support only SFTP to connect to FTP servers.
These protocols are supported by the EMS-APT and Access NPT products to download files (usually
embedded files) in a highly secure manner.

11.17.2 Public key authentication


A unique public key protects each managed NE under the EMS-NPT.
SSH uses public-key cryptography to authenticate the EMS-NPT and to allow it to authenticate the NE. The
public key is placed on all NEs that must allow access to the EMS-NPT holding the matching private key (the
EMS-NPT keeps the private key secret). While authentication is based on the private key, the key itself is
never transferred through the network during authentication. SSH only verifies that the party offering the
public key also owns the matching private key.
In public key authentication, the NE holds a list of client user keys. Each user has its own key in the list, and
the SSH-2 protocol performs validation against this list.
The EMS-NPT also holds a list of all NEs with their keys and authenticates each NE using this list (authorized
users).
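The key-list check described above can be sketched conceptually (helper names and the fingerprint scheme are our own illustration; possession of the private key is proven by the SSH protocol itself, which is not modeled here): the NE stores one public key per authorized user, and a login is accepted only if the offered key matches the stored one for that user.

```python
# Conceptual sketch of the authorized-key-list lookup (not the SSH protocol).
import hashlib

def fingerprint(pubkey_bytes):
    """Short SHA-256 fingerprint of a public key blob."""
    return hashlib.sha256(pubkey_bytes).hexdigest()[:16]

def is_authorized(user, offered_key, authorized_keys):
    """authorized_keys: {user: fingerprint} held on the NE."""
    return authorized_keys.get(user) == fingerprint(offered_key)
```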

11.17.3 Port authentication control (IEEE 802.1x based)


Several Neptune platforms provide enhanced security by implementing port authentication based on
the IEEE 802.1x standard. This standard provides a standardized security authentication process for access
to Ethernet networks, including LANs and Wireless LANs (WLANs).
IEEE 802.1x provides almost unlimited scalability with minimal administration overhead. User access
authentication is performed at the network edge, at the port level. This guarantees that no unauthorized
access is made and that all user access goes through a centralized authentication server.
The following terms are used to describe 802.1x:
 Supplicant (client) – the network access device requesting LAN services.
 Authenticator – the network access point that has 802.1x authentication enabled. This includes LAN
switch ports and Wireless Access Points (WAPs).
 Authentication Server – the server that performs the authentication, allowing or denying access to
the network based on username/password. 802.1x defines a Remote Authentication Dial In User
Service (RADIUS) server as the required server.


 Extensible Authentication Protocol (EAP) – the protocol used between the client and the
authenticator. The 802.1x protocol specifies encapsulation methods for transmitting EAP messages so
they can be carried over different media types.
 Port Access Entity (PAE) – the 802.1x "logical" device of the client and authenticator that exchanges
EAP messages.
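The roles above can be tied together with a simplified port-control sketch (class and function names are invented, and real EAP exchanges carry considerably more state): the authenticator keeps the port unauthorized until the RADIUS server returns Access-Accept for the supplicant's credentials.

```python
# Simplified 802.1x port-control sketch (illustration, not a real EAP stack).

def radius_check(user, password, db):
    """Stand-in for the RADIUS server's decision."""
    return "Access-Accept" if db.get(user) == password else "Access-Reject"

class AuthenticatorPort:
    def __init__(self, radius_db):
        self.db = radius_db
        self.state = "unauthorized"   # port blocked except for EAPOL

    def eapol_start(self, user, password):
        # The authenticator relays the EAP credentials to the RADIUS server.
        if radius_check(user, password, self.db) == "Access-Accept":
            self.state = "authorized"
        return self.state
```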

11.17.4 OSPF encryption with HMAC-SHA256


The current version of the Neptune introduces an additional dimension to network security, using
encrypted OSPF between NEs.
Mechanisms that provide such an integrity check based on a secret key are usually called message
authentication codes (MACs). Typically, message authentication codes are used between two parties that
share a secret key to validate information transmitted between them.
Until the current version, the user could configure OSPF security by selecting one of the following
operation modes:
 None – no authentication of a party that joins the network. This is the default mode.
 Simple – a party that wants to join the network must first be authenticated by entering a password.
Starting with the current version, the user can configure OSPF security by selecting one of the following
modes:
 None – no authentication of a party that joins the network. This is the default mode.
 Simple – a party that wants to join the network must first be authenticated by entering a password.
 HMAC-SHA256 based encrypted OSPF – keyed-Hash Message Authentication Code used in conjunction
with the SHA-256 hashing function. The OSPF information is authenticated with a 256-bit key. These
algorithms are used as the basis for data origin authentication and integrity checking based on a secret
key. Two parameters are configurable in this mode:
 Key ID, configurable from the EMS-NPT, is a number from 0 to 255 that identifies the
authentication key.
 Key, configurable from the EMS-NPT; the length of the key is 32 characters.
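The HMAC-SHA256 check can be worked through with Python's standard hmac/hashlib modules. The packet layout below is illustrative (appending the Key ID and digest to the payload is our simplification, not the exact OSPF wire format): both parties share the 32-character key, and the Key ID tells the receiver which key to use.

```python
# Worked sketch of an HMAC-SHA256 integrity check (illustrative framing,
# not the actual OSPF authentication trailer format).
import hmac
import hashlib

def sign(packet: bytes, key: bytes, key_id: int) -> bytes:
    """Append key ID (1 byte) and HMAC-SHA256 digest (32 bytes)."""
    digest = hmac.new(key, packet, hashlib.sha256).digest()
    return packet + bytes([key_id]) + digest

def verify(message: bytes, keys: dict) -> bool:
    """Look up the key by its ID and recompute the digest."""
    packet, key_id, digest = message[:-33], message[-33], message[-32:]
    key = keys.get(key_id)
    return key is not None and hmac.compare_digest(
        hmac.new(key, packet, hashlib.sha256).digest(), digest)
```

A receiver configured with the wrong key, or a message whose payload has been altered in transit, fails verification.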



12 Accessories
The Neptune platforms use a wide range of accessories to enable installation, provide power, and route
fibers and cables. The following table lists the accessories used with these platforms.

Table 12-1: Neptune platform accessories


Type | Designation
E1 Digital Distribution Frame with balanced-to-unbalanced conversion | xDDF-21
Power distribution and alarm panel for Neptune platforms installed in racks | RAP-BG
Power distribution and alarm panel for Neptune platforms installed in racks, with alarm distribution support | RAP-4B
Power distribution and alarm panel for Neptune platforms installed in racks | xRAP-100
Fiber Storage Tray | FST
Optical Distribution Frame | ODF
Optical Patch Panel | OPP
ICPs for auxiliary interfaces in the MCP30 | ICP_MCP30
ICPs for the traffic modules in the SM_10E | SM_10E ICPs
AC power platform (for NPT-1050) | AC_CONV_UNIT
AC power module for the AC_CONV_UNIT (for NPT-1050) | AC_CONV_MODULE
AC/DC power converter up to 2550 W (for NPT-1200) | AC/DC-DPS850-48-3 power system
Cable guiding accessories | Cable guide frame, PME1_63 cable guide and holder, Fiber guide for ETSI A racks, and Cable stack tray
Cables | Cables

12.1 RAP-4B
The RAP-4B is a power distribution and alarm panel for ECI platforms installed in racks.

NOTE: The RAP-4B supports operation with BG, XDM (100, 300, 900), 9600 series, and
OPT9603 platforms.

The RAP-4B performs the following main functions:


Neptune (Hybrid) Reference Manual Accessories

 Power distribution for up to four protected platforms installed on the same rack. The nominal DC
power voltage is 48 VDC or 60 VDC. Since the supported platforms can use redundant power sources,
the RAP-4B supports connection to two separate DC power circuits.
Each DC power circuit of each platform is protected by a circuit breaker, which also serves as a power
on/off switch for the corresponding circuit. The required circuit breakers are included in the
installation parts kit supplied with the platforms, so their current rating matches the order
requirements. The maximum CB rating installed in the RAP-4B for feeding a single platform is
35 A. The total power that the RAP-4B can provide is max. 4 x 1.1 kW (4.4 kW).

NOTE: The power supplied by the RAP-4B to a single platform may not exceed 1.1 kW.

The circuit breakers are installed during the RAP-4B installation. To prevent accidentally changing a
circuit breaker state, the circuit breakers can be reached only after removing the RAP-4B front cover.
The circuit breaker state (ON/OFF) can be seen through translucent covers.
 Bay alarm indications: The RAP-4B includes three alarm indicators, one for each alarm severity. When
alarms of different severities are received simultaneously, all corresponding indicators light at the
same time.

NOTE: BG platforms support only two alarm indications, Major and Minor.

A buzzer is activated whenever a Major or Critical alarm is present in an XDM platform or a Major
alarm in a BG or 9600 series platform connected to the RAP-4B.
 Connection of alarms from up to four platforms, with max. four alarm inputs and two alarm outputs.
The following figure shows the front panel of the RAP-4B, and the table lists the functions of the front panel
components corresponding to the figure callout numbers.
Figure 12-1: RAP-4B front panel


Table 12-2: RAP-4B front panel component functions


No. Designation Function
1 SOURCE A Four circuit breakers (designated PLATFORM 1, PLATFORM 2, PLATFORM 3, and
PLATFORM 4 – one per platform installed in the rack). These circuit breakers are
used as ON/OFF power switches and overcurrent protection for the DC power
source A.
--- Buzzer (concealed Operates when at least one unacknowledged Major or Critical alarm is present
under cover) in the platform connected to the RAP-4B.
2 TEST Pushbutton. Press to activate the buzzer and turn the indicators on for test
purposes.
3 POWER Green indicator; lights when at least one DC power source is connected to the
RAP-4B.
4 CRITICAL Red indicator; lights when the severity of at least one of the alarms in the
platform connected to the RAP-4B is Critical.
5 MAJOR Orange indicator; lights when the severity of at least one of the alarms in the
platform connected to the RAP-4B is Major.
6 MINOR Yellow indicator; lights when the severity of at least one of the alarms in the
platform connected to the RAP-4B is Minor.
7 SOURCE B Same as Item 1 for DC source B.

The RAP-4B alarm connectors are on its circuit board, as shown in the following figure. The table lists the
connector functions. The index numbers in the table correspond to those in the figure.
Figure 12-2: RAP-4B alarm connectors


Table 12-3: RAP-4B connector functions


No. Designation Function
1, 2, 3, 4 Platform alarms Four 36-pin SCSI connectors designated PLATFORM 1, PLATFORM 2,
PLATFORM 3, and PLATFORM 4, for connecting alarm input/output lines
from the platforms to the RAP-4B.
5 ALARM IN/OUT 68-pin SCSI connector, for connecting alarm input lines from the customer's
equipment to the platforms, and alarm output lines from the platforms to
the customer alarm monitoring facility.

12.2 RAP-BG
The RAP-BG is a DC power distribution panel for BG and other telecommunication platforms installed in
racks. It distributes power for up to four NPT series platforms installed on the same rack. The nominal DC
power voltage is -48 VDC, -60 VDC, or 24 VDC. Since NPT series platforms can use redundant power
sources, the RAP-BG supports connection to two separate DC power circuits.
Each DC power circuit of each platform is protected by a circuit breaker, which also serves as a power
ON/OFF switch for the corresponding circuit. The required circuit breakers are included in the installation
parts kit supplied with the NPT series platforms, and therefore their current rating is in accordance with the
order requirements. The maximum current that can be supplied to a platform fed from the RAP-BG is 16 A.
The circuit breakers are installed during the RAP-BG installation. To prevent accidental changing of a circuit
breaker state, the circuit breakers can be reached only after opening the front cover of the RAP-BG. The
circuit breaker state (ON or OFF) can be seen through translucent covers.
The following figure shows the front panel of the RAP-BG, and the table lists the functions of the front
panel components as indicated by the figure callouts.

Figure 12-3: RAP-BG front panel

Table 12-4: RAP-BG front panel component functions


No. Designation Function
1 SOURCE A Translucent cover for the four circuit breakers (Shelf 1, Shelf 2, Shelf 3, Shelf 4 - one
per BG/XDM platform installed in the rack). These circuit breakers are used as
ON/OFF power switches and overcurrent protection for the DC power source A.
2 SOURCE B Same as Item 1 for DC source B.


12.3 xRAP-100
The xRAP-100 is a power distribution and alarm panel for different ECI communication platforms installed
in racks. The xRAP-100 performs the following main functions:
 Power distribution for up to four platforms: The nominal DC power voltage is -48 VDC or -60 VDC.
Since most ECI platforms can use redundant power sources, the xRAP-100 supports connection to two
separate DC power circuits. The internal circuits of the xRAP-100 are powered whenever at least one
power source is connected. The presence of DC power within the xRAP-100 is indicated by a POWER
ON indicator.
Each DC power circuit of each platform is protected by a circuit breaker, which also serves as a power
ON/OFF switch for the corresponding circuit. The required circuit breakers are included in the
installation parts kit supplied with the platforms, and therefore their current rating is in accordance
with the order requirements. The 5-pin high power connector supplies power to one platform. The
3-pin connector supplies power to three platforms.
The xRAP-100 is designed to support one high-power platform and three regular platforms, or four regular
platforms.
The circuit breakers are installed during the xRAP-100 installation. To prevent accidental changing of a
circuit breaker state, the circuit breakers can be reached only after opening the front cover of the
xRAP-100. The circuit breaker state (ON or OFF) can be seen through translucent covers.
 Bay alarm indications: The xRAP-100 includes four alarm indicators, one for each alarm severity.
When alarms of different severities are received simultaneously, the corresponding indicators light at
the same time.
A buzzer is activated whenever a Major or Critical alarm is present in the platforms installed in the
rack.
 Connection of alarms from up to four platforms, each one with a maximum of four alarm inputs and
two alarm outputs.
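The indication rules above can be summarized in a short model. The following Python sketch is purely illustrative (the severity-to-color mapping follows Table 12-5; none of this code ships with the product): each severity lights its own indicator, indicators for different severities light simultaneously, and any Major or Critical alarm sounds the buzzer.

```python
# Illustrative model of the xRAP-100 indication rules described above.
# The color mapping follows Table 12-5; the function name is our own.

SEVERITY_LED = {
    "Critical": "red",
    "Major": "orange",
    "Minor": "yellow",
    "Warning": "white",
}

def panel_state(active_alarms):
    """Return (lit indicator colors, buzzer state) for a set of
    unacknowledged alarm severities. Indicators for different
    severities light simultaneously; the buzzer sounds on any
    Major or Critical alarm."""
    lit = {SEVERITY_LED[s] for s in active_alarms}
    buzzer = bool({"Major", "Critical"} & set(active_alarms))
    return lit, buzzer

leds, buzzer = panel_state({"Minor", "Critical"})
print(sorted(leds), buzzer)  # ['red', 'yellow'] True
```

A Minor alarm alone would light only the yellow indicator and leave the buzzer silent.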
The following figure shows the front panel of the xRAP-100, and the table lists the functions of the front
panel components as indicated by the figure callouts.

Figure 12-4: xRAP-100 front panel


Table 12-5: xRAP-100 front panel component functions


No. Designation Function
1 SOURCE A Translucent cover for the four circuit breakers (designated Shelf 1, Shelf 2,
Shelf 3, Shelf 4 - one per platform installed in the rack). These circuit breakers
are used as ON/OFF power switches and overcurrent protection for the DC
power source A.
2 Buzzer Operates when at least one unacknowledged Major or Critical alarm is
present in the platforms connected to the xRAP-100.
3 POWER ON (TEST) Pushbutton with green indicator, which lights whenever at least one DC
power source is connected to the xRAP-100.
Pressing the pushbutton activates the buzzer and turns the indicators on for
test purposes.
4 CRITICAL Red indicator; lights when the severity of an unacknowledged alarm in the
platforms connected to the xRAP-100 is Critical.
5 MAJOR Orange indicator; lights when the severity of an unacknowledged alarm in the
platforms connected to the xRAP-100 is Major.
6 MINOR Yellow indicator; lights when the severity of an unacknowledged alarm in the
platforms connected to the xRAP-100 is Minor.
7 WARNING White indicator; lights when the severity of an unacknowledged alarm in the
platforms connected to the xRAP-100 is Warning.
8 SOURCE B Same as Item 1 for DC source B.

The xRAP-100 connectors are on its circuit board, as shown in the following figure. The table lists the
connector functions. The index numbers in the table correspond to those in the figure.

Figure 12-5: xRAP-100 connectors


Table 12-6: xRAP-100 connector functions


No. Designation Function
1, 2, 3, 4 Platform alarms Four 36-pin SCSI connectors designated Shelf 1, Shelf 2, Shelf 3,
and Shelf 4, for connecting alarm input and output lines to the
platforms and other equipment installed in the rack
5 ALARM IN/OUT 68-pin SCSI connector, for connecting alarm input and output
lines to the customer alarm monitoring facility
6, 7, 8, 9 Platform DC input power Four 3-pin D-type connectors designated Shelf 1, Shelf 2, Shelf
3, and Shelf 4, for connecting DC power to the platforms
10 Platform DC input, high Lines of the Shelf 4 connector also connected to a 5-pin D-type
power connector for connecting a high power platform

12.4 AC/DC-DPS850-48-3 power system


The AC/DC-DPS850-48-3 power system is a highly efficient, extremely compact system that provides an
optimal solution for space-critical, low-power telecommunication applications requiring flexible DC power
supplies.
The main components of the power system are three rectifier units, each capable of supplying up to 850 W
at a nominal voltage of 48 VDC.
Each power system includes three DPR-850B-48 rectifier units, a CSU-502 controller unit, two battery
circuit breakers, and six load circuit breakers, all enclosed in a 437.4 x 270 x 43.4 mm (W x D x H)
miniature platform. All connections to the system are made from the rear of the unit.
The rectifiers are connected in parallel in a redundant (n+1) load-sharing mode. The current-sharing mode
enables each rectifier to supply an equal load current, thus increasing system reliability. In case a failure
occurs in one of the units, the current drawn from the other rectifiers is increased to supply the required
load current. The DPR-850B-48 is designed for online replacement (hot-swappable) to support
non-traffic-affecting operation of telecommunication equipment.
In addition, to support high availability, the system has an option to connect backup batteries. In this case,
while the system is operating normally it also charges the batteries in addition to supplying power to the
load. If the AC source fails, the batteries, which are connected on the rectifier output bus, supply the load
current.


The state-of-the-art power supply is designed to match constant-power characteristics of modern telecom
loads, and thus reduces the number of rectifiers required in battery backed-up systems. The
AC/DC-DPS850-48-3 features a modular architecture for easy system maintenance and repair.
Figure 12-6: AC/DC-DPS850-48-3 power system, general view

The AC/DC-DPS850-48-3 system has connectors for connecting the load, batteries, AC source, and the
system alarms, at the rear of the unit. It also includes circuit breakers that protect the power supply against
load overcurrent at the battery and rectifier outputs.
The CSU-502 module provides control and monitoring functions. It is supplied preconfigured, ready for
immediate use. System voltage, load current, status, and alarm settings can be displayed and modified on
the LCD display.
The AC/DC-DPS850-48-3 platform is preconfigured for fast installation and setup. All system settings are
fully software-configured and stored in transferable configuration files for repeated one-step system setup.
The AC/DC-DPS850-48-3 platform is supplied with two kits of brackets for installation in 19" or ETSI racks.
The following main features are supported by the AC/DC-DPS850-48-3 power system:
 19"/ETSI power platform for 48 VDC @ 2250 W (max.) in non-redundant application
 Single phase 220 VAC input source
 Three DPR-850B-48 rectifier units
 Light weight plug-in modules for simple installation and maintenance
 Hot swappable rectifier and control modules
 Front access to the circuit breakers and control module for simplified operation and maintenance
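The redundancy arithmetic implied by the (n+1) load-sharing description above can be illustrated as follows. This sketch is our own (not part of the product); the rectifier rating and count come from this section, and 48 VDC is the nominal system voltage.

```python
# Illustrative n+1 budget check for the AC/DC-DPS850-48-3 described above.
# Rectifier count and rating are from this section; function names are ours.

RECTIFIER_W = 850
RECTIFIERS = 3

def max_load(redundant: bool) -> int:
    """Usable load: all rectifiers in non-redundant mode, one held
    in reserve when n+1 operation must survive a single failure."""
    n = RECTIFIERS - 1 if redundant else RECTIFIERS
    return n * RECTIFIER_W

print(max_load(redundant=False))  # 2550 W rectifier ceiling (system rated 2250 W)
print(max_load(redundant=True))   # 1700 W with one rectifier held in reserve

def per_rectifier_current(load_w: float, healthy: int, vdc: float = 48.0) -> float:
    """Equal current sharing across the healthy rectifiers."""
    return load_w / healthy / vdc

# A 1700 W load shared by all 3 units, then by 2 after one unit fails:
print(round(per_rectifier_current(1700, 3), 1))  # 11.8 A each
print(round(per_rectifier_current(1700, 2), 1))  # 17.7 A each
```

This shows why a failure is non-traffic-affecting: at a redundant load level, the two surviving rectifiers simply pick up a larger share of the same current.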


12.4.1 AC/DC-DPS850-48-3 front view


The AC/DC-DPS850-48-3 front view is shown in the following figure.
Figure 12-7: AC/DC-DPS850-48-3 front view

The following table describes the AC/DC-DPS850-48-3 front view components.

Table 12-7: AC/DC-DPS850-48-3 front component functions


Designation Component Function
LD1 to LD6 Load DC voltage circuit Act as ON/OFF switches for connecting loads LD1 to LD6 to
breakers. Ratings: the system DC voltage bus. Protect the system from load
LD1, LD2 = 12 A overcurrent.
LD3, LD4 = 20 A
LD5, LD6 = 30 A
BAT1, BAT2 Battery circuit breakers, 50 Act as ON/OFF switches for connecting two battery systems
A each to the DC voltage bus. Protect the system from battery
overload.
CSU-502 Controller unit Provides control, provisioning, configuration, and display for the system.
SNMP Remote control connector, Provides SNMP communication with the CSU-502 for
interface RJ-45 (optional per order) controlling the system remotely (via a modem).
DPR-850B-48 Rectifier unit 850 W
Provides AC to DC power conversion for the system. Up to
three units can be installed in a platform. The system max.
power is 2250 W.


12.4.2 AC/DC-DPS850-48-3 rear view


The AC/DC-DPS850-48-3 general rear view is shown in the following figure.
Figure 12-8: AC/DC-DPS850-48-3 general rear view

To aid understanding, enlarged views of sections of the rear panel are provided in the following figures.
The AC/DC-DPS850-48-3 detailed battery and load connections are shown in the following figure.


The AC/DC-DPS850-48-3 AC source and alarm connections are shown in the following figure.

The following table describes the AC/DC-DPS850-48-3 rear panel component functions (connections).

Table 12-8: AC/DC-DPS850-48-3 rear panel component functions


Designation Component Function
TA RJ-45 connector Connects the Ambient temperature sensor to the system. The
sensor with an appropriate cable is supplied, by default, with the
system. If not connected, Ambient Temperature will not affect the
power supply control system.
TB RJ-45 connector Connects the BAT1 temperature sensor to the system. The sensor
with an appropriate cable is supplied, by default, with the system.
It must be connected, in case backup batteries are used.
LOAD1 to Screw terminal Each terminal provides load connection to the system DC output
LOAD6 socket, 2-positions (6 bus. The recommended cable cross-sections for connecting the
places) loads are:
 LOAD1 and LOAD2, AWG 14 (2.5 mm2)
 LOAD3 and LOAD4, AWG 12 (4 mm2)
 LOAD5 and LOAD6, AWG 10 (10 mm2).
BAT 2 and Screw terminal Connects the corresponding BAT 2 and BAT 1 battery packs to the
BAT 1 socket, 2-positions (2 system DC output bus. The recommended cable cross-section for
places) connecting each battery pack is AWG 6 (16 mm2).
ALARM D-type, 26-pin, female Connects the Systems alarm output signals. The signals are
SIGNAL connector delivered by six sets of relay dry contacts.
PE, L, N, Terminal socket, Connects the AC source input voltage to the system:
3-positions  PE - Ground
 L - Line
 N - Neutral
The connections must be made with 3 x AWG 14 (2.5 mm2) cables.


12.5 ICP_MCP30
Due to limited space on the MCP30 or MCP1200 panel, there is a single connector on the front panel for
the following auxiliary interfaces: External Alarms, RS-232, OW, and V.11. The ICP_MCP30 is configured to
distribute the concentrated Auxiliary connector into dedicated connectors for each function. If none of
these interfaces is used in your application, there is no need to install the ICP_MCP30. If only an External
Alarms interface is used, the ICP_MCP30 is likewise not needed, since ECI provides a special alarm cable
leading only to the External Alarms interface.
The ICP_MCP30 is connected to the MCP30 or MCP64 using a back-to-back cable.

Figure 12-9: ICP_MCP30 general view

Table 12-9: ICP_MCP30 front panel interfaces


Marking Interface type Function
J11 SCSI-36 Connector for back-to-back cable connecting the ICP_MCP30
to the MCP1200
ALARMS HD DB-15 male Alarm inputs and outputs interface connection
V11 HD DB-15 female V.11 overhead interface
OW RJ-45 OW interface connecting an external OW box
RS232 RJ-45 RS-232 interface for debugging or managing external ancillary
equipment

12.6 SM_10E ICPs


To simplify installation and connection to customer termination equipment, various types of ICPs were
developed. The ICP is connected to the SM_10E module by a special cable that spreads the condensed
cable to connector types commonly used for each service. The ICP types are described in the following
table and illustrated in the figures that follow.

Table 12-10: SM_10E ICPs


ICP name Description
ICP-VF Supports up to eight voice interfaces; applicable for FXO, FXS, and 2/4W modules
ICP-V24 Supports Transparent, Asynchronous and Synchronous V24 interfaces; applicable for
the V.24 module
ICP-V24F Supports Transparent, Asynchronous and Synchronous V24 interfaces (with female
connectors); applicable for the V.24 module


ICP_V35 Supports V35 interfaces; applicable for the V.35 module
ICP_V11_V24 Supports V.11/V.24 64 Kbps interfaces; applicable for the SM_V35_V11 module working
in V.11/X.21 and V.24 64 Kbps modes.
ICP_DB37D Supports V.36, RS-422, RS-449 interfaces; applicable for the SM_V35_V11 module
working in V.36, RS-422 and RS-449 modes.

Figure 12-10: ICP-VF

Figure 12-11: ICP-V24

Figure 12-12: ICP-V24F

Figure 12-13: ICP-V35


Figure 12-14: ICP-V11

Figure 12-15: ICP_DB37D

The following figure shows the ICP-VF front panel.

Figure 12-16: ICP-VF front panel

Table 12-11: ICP-VF front panel interfaces


Marking Interface type Function
Applied module Function

J3 SCSI-36 Connector for special cable connecting ICP to applied SM_10E module
CH1-CH4 RJ-45 SM_FXO_8E FXO interface channel #1 to channel #4
SM_FXS_8E FXS interface channel #1 to channel #4
SM_EM_24W6E 2/4 wire E&M interface channel #1 to channel
#4


SM_CODIR_4E Codirectional 64 Kbps interface channel #1 to channel #4
CH5-CH6 RJ-45 SM_FXO_8E FXO interface channel #5 to channel #6
SM_FXS_8E FXS interface channel #5 to channel #6
SM_EM_24W6E 2/4 wire E&M interface channel #5 to channel
#6
SM_CODIR_4E Empty
CH7-CH8 RJ-45 SM_FXO_8E FXO interface channel #7 to channel #8
SM_FXS_8E FXS interface channel #7 to channel #8
SM_EM_24W6E Empty
SM_CODIR_4E Empty

Figure 12-17: ICP-V24 front panel

Table 12-12: ICP-V24 front panel interfaces


Marking Interface type Function
J1 SCSI-36 Connector for special cable connecting ICP to applied SM_V24E
module.
SYNC CH1 DB-15 male V.24 interface #1 when SM_V24E is working in synchronous with
control mode. Empty when SM_V24E is working in other modes.
SYNC CH2 DB-15 male V.24 interface #2 when SM_V24E is working in synchronous with
control mode. Empty when SM_V24E is working in other modes.
CH1-CH4 DB-9 male V.24 interfaces #1-4 when SM_V24E is working in transparent or
asynchronous with control mode. Empty when SM_V24E is working
in Synchronous with control mode.
CH5-CH8 DB-9 male V.24 interfaces #5-8 when SM_V24E is working in transparent or
asynchronous with control mode. Empty when SM_V24E is working
in other modes.

Figure 12-18: ICP-V24F front panel


NOTE: ICP-V24F supports connection to V.24 interfaces with standard female connectors.

Table 12-13: ICP-V24F front panel interfaces


Marking Interface type Function
J1 SCSI-36 Connector for special cable connecting ICP to applied SM_V24E
module.
SYNC CH1 DB-15 female V.24 interface #1 when SM_V24E is working in synchronous with
control mode. Empty when SM_V24E is working in other modes.
SYNC CH2 DB-15 female V.24 interface #2 when SM_V24E is working in synchronous with
control mode. Empty when SM_V24E is working in other modes.
CH1-CH4 DB-9 female V.24 interfaces #1-4 when SM_V24E is working in transparent or
asynchronous with control mode. Empty when SM_V24E is working
in Synchronous with control mode.
CH5-CH8 DB-9 female V.24 interfaces #5-8 when SM_V24E is working in transparent or
asynchronous with control mode. Empty when SM_V24E is working
in other modes.

Figure 12-19: ICP-V35 front panel

Table 12-14: ICP-V35 front panel interfaces


Marking Interface type Function
J1 SCSI-36 Connector for special cable connecting ICP to the applied V.35 module
CH1 M34 male V.35 interface #1
CH2 M34 male V.35 interface #2

Figure 12-20: ICP_V11_V24 front view

Table 12-15: ICP_V11_V24 front panel interfaces


Marking Interface type Function
J1 SCSI-36 Connector for special cable connecting ICP to the applied SM_V35_V11 module
CH1 DB15 female V.11 interface #1


CH1 DB25 female V.24 interface #1
CH2 DB15 female V.11 interface #2
CH2 DB25 female V.24 interface #2

Figure 12-21: ICP_DB37D front view

Table 12-16: ICP_DB37D front panel interfaces


Marking Interface type Function
J1 SCSI-36 Connector for special cable connecting ICP to applied SM_V35_V11
module
CH1 DB37 female V.36/RS-422/RS-449 or V.35/SYNC V.24 interface #1
CH2 DB37 female V.36/RS-422/RS-449 or V.35/SYNC V.24 interface #2

12.7 AC_CONV_UNIT
The AC_CONV_UNIT is an AC power platform that can be mounted separately in the rack. It performs the
following functions:
 Converts AC power to DC power
 Filters input for the NPT-1600CB platform
 Provides backup for AC power


Figure 12-22: AC_CONV_UNIT general view

12.8 AC_CONV_MODULE
The AC_CONV_MODULE is an AC power module that can be plugged into the AC_CONV_UNIT. It performs
the following functions:
 Converts AC power to DC power for the NPT-1600CB only
 Filters input for the NPT-1600CB platform
 Provides up to 130 W of power

Figure 12-23: AC_CONV_MODULE front panel


Table 12-17: AC_CONV_MODULE front panel interfaces


Marking Interface type Function
- IEC connector AC voltage input connector for connecting the AC source to the
converter.
POWER OUT 3W3 3-pin male connector Connects the DC output voltage to the load.
ACT. Green LED Lights when DC voltage exists at the output

12.9 Fiber storage tray


When optical modules are used, the BG can be supplied with an FST where a length of surplus optical fiber
is stored to enable hot module replacement without disconnecting fibers from other modules in the
platform. The FST can hold up to 48 x 3 m 2-mm fibers.
The FST can be opened in two positions:
 Halfway (first click when pulling the tray out)
 Completely (to enable threading of the fibers in the tray)

Figure 12-24: FST top view

12.10 Optical distribution frame


When there are a large number of optical fibers connected to a BG, additional connection arrangements
can be required.
ECI recommends using an ODF installed in the BG racks to terminate the external fibers with the proper
optical connectors.
The ODF provides a flexible and reliable solution for interfacing between outside plant optical fiber cables
and fiber optical terminal equipment. It is designed to handle termination, splicing, and storage for excess
length of pigtails and patch cords.
The unit is 1U high, and can be installed in ETSI A and ETSI B racks, as well as in 19” racks using its
configurable rack-mounting brackets. An additional set of brackets supplied with the installation kit enables
installation in 7’ bay, 23” racks. The ODF is available with 12 or 24 ports, and SC, FC, or LC connectors.


Figure 12-25: ODF internal view

All fiber connections are made on a swing-out tray that opens to the right at 90° and houses the splicing
trays, optical adapter panels, and the fiber support. Left-side tray opening is available on request. The
swing-out tray enables quick and easy access to all internal parts for connection or maintenance activities.
The fiber connections are protected by a front cover, which latches to the assembly and prevents
unintended disconnection of fibers.

Figure 12-26: ODF front panel

Optical terminal fibers can enter the ODF from the right or left side and be connected to the optical
adapters from one side. Pigtails connect to the adapters from the other side. Excess length of pigtails and
patch cords is threaded on a fiber support that maintains the minimum bend radius to prevent fiber breaks.
A durable and robust tube leads the external fiber cables to the swing-out tray and protects them from
breaks. The adapters are arranged on panels in groups of four or two (depending on the total number of
ports). A large space between the adapters enables easy access to each individual fiber and quick
reconfiguration.


12.11 Optical patch panel


When a large number of equipment optical ports have to be connected to the optical network and splicing
to external fibers (as in the ODF) is not required, ECI recommends the use of the OPP.
The OPP is designed for terminating and distributing optical fibers, and can work as a cross connection and
testing point between the optical network and the equipment. It provides a quick and flexible solution for
cable management.
The OPP is 1U high and can be installed in ETSI A, ETSI B, and 19” racks using its configurable rack-mounting
brackets. The OPP is available with 12 or 24 ports, and SC, LC, and FC connectors.
All connections are made on the internal connector panel, which accommodates duplex connectors. This
prevents unintended disconnection of fibers and improves connection reliability. The optical fibers are
connected to the connectors from both sides. The fibers enter the OPP from the left or right side and are
threaded on special supports that maintain the minimum bend radius to prevent fiber breaks.
A two-position moving tray provides easy access to the fibers for management or maintenance purposes.
Figure 12-27: OPP top view


12.12 xDDF-21
The PME1_21 supports only balanced E1s directly from its connectors. For unbalanced E1s, an external DDF
with E1 balanced-to-unbalanced conversion must be configured.

When unbalanced 75 Ω interfaces are required, the xDDF-21 patch panel enables connection and
conversion of these interfaces to the balanced 120 Ω interfaces of the PME1_21.
The xDDF-21 is 1U high and can be installed in ETSI A and ETSI B racks, as well as in 19” racks. It has a
capacity of 21 E1 lines.
The following figure shows a general view of the patch panel. The channel numbers of the various
connectors are marked on the patch panel, and the inside of the cover contains a label for cable
identification (illustrated in the following figure). The customer’s cables are connected to the connectors
inside the patch panel, while the cable leading from the PME1_21 connector is connected to the SCSI
connectors at the rear of the xDDF-21. A special split cable is available to convert the output from the
PME1_21 to SCSI connector pairs at the back of the xDDF-21.
The xDDF-21 can be supplied with BT43, DIN1.6/5.6, or BNC connectors for connecting to the customer’s
traffic cables.

Figure 12-28: xDDF-21 patch panel for unbalanced E1 interfaces

Figure 12-29: Label inside xDDF-21 patch panel door

12.13 Cable guiding accessories


Different accessories are available for guiding cables and fibers in rack installations.

12.13.1 Cable guide frame


The cable guide frame for the BG is used to obtain a tidy route of electrical interface cables connecting to
the panels of modules (or to the panels of the TPMs in I/O-protected installations).
The unit is fastened to the rack side rails just above the top of the BG platforms.

Figure 12-30: Cable guide


12.13.2 PME1_63 cable guide and holder


The PME1_63 cable guide and holder is a dedicated accessory for handling the special PME1_63 traffic
cable (see Traffic Cable for PME1_63). It keeps the cable in the right position so that it does not
interrupt the closing of the rack's door, and keeps the PME1_63 traffic cable neatly routed along its
short path between the PME1_63 module and the conversion box (part of the cable). The groups of three
holes in the upper part of the accessory are used to attach a cable tie that fastens the cable. The
groups of three grooves in the accessory are used to attach a Velcro strip (if necessary) that fastens
the installed cable. The accessory must always be installed above the BG platform.
Figure 12-31: Cable guide and holder - general view

12.13.3 Fiber guide for ETSI A racks


The fiber guide helps route fibers neatly in ETSI A racks. The guide also helps to keep the recommended
bend radius of the routed fibers.

Figure 12-32: Fiber guide for ETSI A racks


12.13.4 Cable slack tray


The cable slack tray is used to keep the surplus of cables running under the BG platform organized.
It is used only in horizontal BG platform installations.

Figure 12-33: Cable slack tray

12.14 Cables
The product line platforms are supplied with a number of cables, as described in the following table.

Table 12-18: BG cables


Cable type Description Interconnect Length
2 x 50-pin SCSI - 21 E1 E1 cable for BG-20B BG-20B to DDF From 5 m to 50 m
2 X 68-pin VHDCI - 21 E1 E1 cable for ME1_21 or BG-20B/BG-20E to DDF From 5 m to 50 m
ME1_42 (two cables)
SCSI 100-pin - 21 E1 E1 cable for PME1_21 NPT-1600CB/NPT-1200 to From 5 m to 50 m
DDF
2 x 136-pin VHDCI - 63 E1 E1 cable for PME1_63 NPT-1600CB/NPT-1200 to From 3 m to 150 m
DDF
DIN1.0/2.3 to DIN1.0/2.3 Single coaxial cable M345_3_E to DDF From 5 m to 60 m
PM345_3 to DDF
LC-to-LC fiber Single-fiber patch cord All optical interfaces to From 0.5 m to 10 m
ODF
RJ-45 to RJ-45 UTP CAT-5E Electrical Ethernet From 0.5 m to 5 m
interfaces to DDF
Alarm to client Alarm signals; DB-15 From BG-20B to client From 5 m to 15 m
female to open connection panel
Neptune alarm to client Alarm signals; 36-pin From From 5 m to 15 m
VHDCI connector to open NPT-1600CB/NPT-1200
direct to client
connection facility
T3/T4 (timing) Timing input/output; DB-9 From Neptune to From 5 m to 15 m
female to open customer


Exp. management RJ-45 to Ethernet hub From Neptune to other 2 m
DB-9 female NE
2 x SCSI-50 to SCSI 100-pin E1 IOP on Neptune PME1_21 to TP21_2 Jumpers
DIN1.0/2.3 to DIN1.0/2.3 High-rate IOP High-rate interfaces to Jumpers
TPS1_2
SCSI 36-pin to SCSI 36-pin Neptune ICP_MCP30 NPT-1600CB/NPT-1200 to Jumpers
ICP_MCP30 and other ICP
units
SCSI 36-pin to SCSI 36-pin BG-20 ICPs BG-20B to ICP-VF, Jumpers
ICP-V24, or
ICP-V35
SCSI 36-pin to SCSI 36-pin SM_10E BG-20 ICPs SM_10E to ICP-VF, Jumpers
ICP-V24, ICP-V35, or
ICP-V11
DB15 male to open ICP-V11 ICP-V11 From 5 m to 15 m



13 Standards and references
The following is a list of standards and reference documents that relate to the Neptune platform product
line. The standards are listed alphabetically by groups.
 Environmental standards
 ETSI: European Telecommunications Standards Institute
 IEC: International Electrotechnical Commission
 IEEE: Institute of Electrical and Electronic Engineers
 IETF: Internet Engineering Task Force
 ISO: International Organization for Standardization
 ITU-T: International Telecommunication Union
 MEF: Metro Ethernet Forum
 NIST: National Institute of Standards and Technology
 North American Standards
 OMG: Object Management Group
 TMF: TeleManagement Forum
 Web Protocol Standards

13.1 Environmental Standards

 EuP Directive 2005/32/EC: Ecodesign Requirements for Energy-Using Products.
 OHSAS 18001: Occupational Health and Safety Management Systems - Requirements.
 REACh Regulation (EC) No 1907/2006: Registration, Evaluation, Authorization, and Restriction of Chemicals.
 RoHS Directive 2002/95/EC: Restriction of the Use of Certain Hazardous Substances in Electrical and Electronic Equipment.
 WEEE Directive 2002/96/EC: Waste from Electrical and Electronic Equipment.


13.2 ETSI: European Telecommunications Standards Institute
 EN 300 019-1-1 Class 1.2: Environmental Engineering (EE); Environmental Conditions and
Environmental Tests for Telecommunications Equipment; Part 1-1: Classification of Environmental
Conditions; Storage.
 EN 300 019-1-2 Class 2.3: Environmental Engineering (EE); Environmental Conditions and
Environmental Tests for Telecommunications Equipment; Part 1-2: Classification of Environmental
Conditions; Transportation.
 EN 300 019-1-3 Classes 3.2 and 3.3: Environmental Engineering (EE); Environmental Conditions and
Environmental Tests for Telecommunications Equipment; Part 1-3: Classification of Environmental
Conditions; Stationary use at weather-protected locations.
 EN 300 019-2-4 Class 4.1: Environmental Engineering (EE); Environmental Conditions and
Environmental Tests for Telecommunications Equipment; Part 2-4: Specification of Environmental
Tests; Stationary use at non-weather-protected locations.
 EN 300 132-2: Environmental Engineering (EE); Power Supply Interface at the Input to Telecommunications Equipment.
 EN 300 166: Physical and electrical characteristics of hierarchical digital interfaces for equipment using the 2 048 kbit/s based plesiochronous or synchronous digital hierarchies.
 EN 300 253: Earthing and bonding of telecommunication equipment in telecommunication centers.
 EN 300 386: Electromagnetic compatibility and Radio spectrum Matters (ERM); Telecommunication network equipment; Electromagnetic Compatibility (EMC) requirements.
 EN 300 386-1: Electromagnetic Compatibility (EMC) requirements.
 EN 300 386-2: Electromagnetic Compatibility (EMC) requirements.
 EN 300 417-2-1: Transmission and Multiplexing (TM); Generic requirements of transport functionality of equipment.
 EN 300 417-5-1: Generic requirements of transport functionality of equipment.
 EN 300 462-5-1: Transmission and Multiplexing (TM); Generic requirements for synchronization networks.
 EN 300 689: 34 Mbit/s digital leased lines (D34U and D34S); Terminal equipment interface.
 EN 301 164: SDH leased lines connection characteristics.
 EN 301 165: SDH leased lines Network and Terminal interface presentation.
 EN 50121-4 (2006): Railway applications – Electromagnetic compatibility – Part 4: Emission and immunity of the signaling and telecommunications apparatus.
 EN 50125-3 (2003): Railway applications – Environmental conditions for equipment – Part 3: Equipment for signaling and telecommunications.
 EN 50581: Guiding standard for compliance with RoHS2 technical documentation requirements.
 EN 55022: Radio Disturbance Characteristics of Information Technology Equipment.
 EN 55024:1998 +A1:2001 +A2:2003: Immunity characteristics – Limits and methods of measurement.
 EN 60950-1 (2006 +A11 +A12 +A1): Equipment Safety – Part 1: General Requirements.

 EN 60870-2-2 (1996): Telecontrol equipment and systems – Part 2: Operating conditions – Section 2: Environmental conditions (3k6).
 EN 60950-1: Information Technology Equipment – Safety – Part 1: General Requirements.
 EN 61000-4-2:1995 +A1:1998 +A2:2001: Electrostatic Discharge (ESD) immunity test.
 EN 61000-4-3:2008: Electromagnetic compatibility (EMC) – Section 3: Radiated, radio-frequency, electromagnetic field immunity test.
 EN 61000-4-4:2008: Electromagnetic compatibility (EMC) – Section 4: Electrical fast transient/burst immunity test.
 EN 61000-4-5:2006: Electromagnetic compatibility (EMC) – Section 5: Surge immunity test.
 EN 61000-4-6:2007: Electromagnetic compatibility (EMC) – Section 6: Immunity to conducted disturbances, induced by radio-frequency fields.
 EN 61000-6-2: Electromagnetic compatibility (EMC) - Part 6-2: Generic standards - Immunity for
industrial environments
 EN 61000-6-4: Electromagnetic compatibility (EMC) - Part 6-4: Generic standards - Emission standard
for industrial environments
 EN 61000-6-5 (2001) Generic standards – Immunity for power station and substation environments
 EN 61850-3 (2002) Communication network and systems in substations – Part 3: General
requirements
 ETR 114: Functional Architecture of SDH Transport Networks.
 ETR 275: Considerations on Transmission Delay and Transmission Delay value for components on
connections supporting speech communication over evolving digital networks.
 FTZ 1TR9: Deutsche Telekom A.G. EMC Requirements.
 FTZ 153 TL 1 Part 1: Synchronous Multiplexing Equipment (SM) for Synchronous Multiplex Hierarchy.

13.3 IEC: International Electrotechnical Commission
 IEC 68: Environmental Testing.
 IEC 68-2-6: Vibration and Shock tests on a typical Current Transformer Set.
 IEC 870-2-1:1995: Telecontrol equipment and systems – Part 2: Operating conditions – Section 1: Power supply and electromagnetic compatibility.
 IEC 917: Modular Order for the Development of Mechanical Structures for Electronic Equipment
Practices.
 IEC 3309: Information Technology – Telecommunications and Information Exchange between Systems
– High-Level Data Link Control (HDLC) Procedures – Frame Structure.
 IEC 9314-3: Information Processing Systems - Fiber Distributed Data Interface (FDDI) Multiplex.
 IEC 9595, Information Technology: Open Systems Interconnection, Common Management
Information Services.

 IEC 9596, Information Technology: Open Systems Interconnection, Common Management Information Protocol.
 IEC 60068-1: Environmental testing Part 1: General and guidance (included in EN 300 019 class 3.3)
 IEC 60721-2: Classification of environmental conditions – Part 2 (included in EN 300 019)
 IEC 60825-1: Safety of Laser Products – Part 1: Equipment Classification and Requirements.
 IEC 60825-2 (AS/NZS 2211.2): Safety of Laser Products – Part 2: Safety of Optical Fiber Communication
System (OFCS).
 IEC 61000-4-2: Electromagnetic compatibility (EMC) – Part 4-2: Testing and measurement techniques.
 IEC 61000-4-3: Electromagnetic compatibility (EMC) – Part 4-3: Testing and measurement techniques
– Radiated, radio-frequency, electromagnetic field immunity test.
 IEC 61000-4-4: Electromagnetic compatibility (EMC) - Part 4-4 Testing and measurement techniques -
Electrical fast transient and burst immunity test. Basic EMC publication.
 IEC 61000-4-5: Electromagnetic compatibility (EMC) - Part 4-5: Testing and measurement techniques -
Surge immunity test.
 IEC 61000-4-6: Electromagnetic compatibility (EMC) – Part 4-6: Testing and measurement techniques – Immunity to conducted disturbances, induced by radio-frequency fields.
 IEC/EN/UL 60950-1: Information Technology Equipment - Safety - General Requirements.
 IS 1249-1: Safety of Laser products: Equipment classification requirements and users guide.

13.4 IEEE: Institute of Electrical and Electronics Engineers
 IEEE 802.1ad: Virtual Bridged Local Area Networks—Revision—Amendment 4: Provider Bridges.
 IEEE 802.1ag: Virtual Bridged Local Area Networks Amendment 5: Connectivity Fault Management.
 IEEE 802.1D: Media access control (MAC) Bridges (Incorporates IEEE 802.1t and IEEE 802.1w).
 IEEE 802.1P: Traffic Class Expediting and Dynamic Multicast Filtering.
 IEEE 802.1Q: Virtual Bridged Local Area Networks—Revision.
 IEEE 802.1w: Rapid Reconfiguration of Spanning Tree.
 IEEE 802.1s: Multiple Spanning Trees
 IEEE 802.3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and
Physical Layer Specifications.
 IEEE 802.3ad: Link Aggregation.
 IEEE 802.3ah: Ethernet in the First Mile (Link OAM).
 IEEE 802.3x: Full Duplex Operation and Flow Control Protocol.
 IEEE 802.1x: Port-based Network Access Control (PNAC)
 IEEE C37.94: IEEE Standard for N Times 64 Kilobit Per Second Optical Fiber Interfaces Between
Teleprotection and Multiplexer Equipment

 IEEE 1588: IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement
and Control Systems
 IEEE 1613 (2003) Environmental and testing requirements for communication networking devices in
electric power station (class B).

13.5 IETF: Internet Engineering Task Force

 RFC 791: Internet Protocol, DARPA Internet Protocol Specification
 RFC 826: An Ethernet Address Resolution Protocol
 RFC 854: Telnet Protocol Specification
 RFC 894: A Standard for the Transmission of IP Datagrams over Ethernet Networks
 RFC 1122: Requirements for Internet Hosts – Communication Layers, R. Braden, Ed., October 1989
 RFC 1157: SNMP - Simple Network Management Protocol.
 RFC 1212: Concise MIB Definitions
 RFC 1213: Management Information Base for Network Management of TCP/IP-based internets:
MIB-II, March 1991
 RFC 1215: The Convention for Defining Traps for use with the SNMP
 RFC 1229: Extensions to the generic-interface MIB
 RFC 1332: The PPP Internet Protocol Control Protocol (IPCP), May 1992
 RFC 1493: Definition of Managed Objects for Bridges.
 RFC 1570: PPP LCP Extensions, W. Simpson, January 1994
 RFC 1643: Ethernet-like Interfaces.
 RFC 1661: The Point-to-Point Protocol (PPP), July 1994
 RFC 1662: PPP in HDLC-like framing, July 1994
 RFC 1757: Remote Network Monitoring Management Information Base.
 RFC 1812: Requirements for IP version 4 Routers, June 1995
 RFC 1823: LDAP Application Program Interface (API).
 RFC 1850: OSPF Version 2 Management Information Base, F. Baker, R. Coltun, November 1995
 RFC 1901: Introduction to Community-based SNMPv2.
 RFC 1902: Structure of Management Information for Version 2 of SNMPv2
 RFC 1903: Textual Conventions for Version 2 of SNMPv2
 RFC 1904: Conformance Statements for Version 2 of SNMPv2
 RFC 1905: Protocol Operations for Version 2 of SNMPv2
 RFC 1907: Management Information Base (MIB) for SNMPv2
 RFC 1908: Coexistence between Version 1 and Version 2 of the Internet-standard Network Management Framework
 RFC 2058: RADIUS Authentication and Authorization
 RFC 2108: Definitions of Managed Objects for IEEE 802.3 Repeater Devices using SMIv2.

 RFC 2138: Remote Authentication Dial In User Service (RADIUS)
 RFC 2212: Specification of guaranteed quality of service
 RFC 2236: Internet Group Management Protocol, Version 2
 RFC 2251: Lightweight Directory Access Protocol (v3) [specification of the LDAP on-the-wire protocol].
 RFC 2252: Lightweight Directory Access Protocol (v3): Attribute Syntax Definitions.
 RFC 2253: Lightweight Directory Access Protocol (v3): UTF-8 String Representation of Distinguished
Names.
 RFC 2254: The String Representation of LDAP Search Filters.
 RFC 2255: The LDAP URL Format.
 RFC 2256: A Summary of the X.500(96) User Schema for use with LDAPv3.
 RFC 2328: OSPF Version 2, J. Moy, April 1998
 RFC 2401: Security Architecture for the Internet Protocol.
 RFC 2409: Internet Key Exchange Protocol (IKE).
 RFC 2464: Transmission of IPv6 Packets over Ethernet Networks.
 RFC 2474: Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers.
 RFC 2697: The single rate three color marker
 RFC 2698: A two rate three color marker
 RFC 2702: Requirements for Traffic Engineering Over MPLS.
 RFC 2737: Entity MIB (Version 2).
 RFC 2819: Remote Network Monitoring Management Information Base.
 RFC 2829: Authentication Methods for LDAP.
 RFC 2830: Lightweight Directory Access Protocol (v3): Extension for Transport Layer Security.
 RFC 2863: Interfaces Group MIB.
 RFC 2865: Remote Authentication Dial In User Server (RADIUS)
 RFC 3014: Notification Log MIB.
 RFC 3031: Multiprotocol Label Switching Architecture.
 RFC 3032: MPLS Label Stack Encoding.
 RFC 3140: Per hop behavior identification codes
 RFC 3246: An Expedited Forwarding PHB (Per-Hop Behavior).
 RFC 3270: Multi-Protocol Label Switching (MPLS) Support of Differentiated Services.
 RFC 3376: Internet Group Management Protocol, Version 3
 RFC 3377: Lightweight Directory Access Protocol (v3): Technical Specification.
 RFC 3411: SNMP Framework MIB.
 RFC 3414: SNMP User-based Security Model (USM) MIB.
 RFC 3415: SNMP View-based Access Control Model (VACM) MIB.

 RFC 3443: Time To Live (TTL) Processing in Multi-Protocol Label Switching (MPLS) Networks.
 RFC 3584: Coexistence between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework
 RFC 3644: Policy quality of service (QoS) Information model
 RFC 3670: Information model for describing network device QoS datapath
 RFC 3812: Multiprotocol Label Switching (MPLS) Traffic Engineering (TE) Management Information
Base (MIB).
 RFC 3916: Requirements for Pseudo-Wire Emulation Edge-to-Edge (PWE3).
 RFC 3985: Pseudo Wire Emulation Edge-to-Edge (PWE3) Architecture.
 RFC 4125: Maximum Allocation Bandwidth Constraints Model for Diffserv-aware MPLS Traffic
Engineering.
 RFC 4126: Max Allocation with Reservation Bandwidth Constraints Model for Diffserv-aware MPLS
Traffic Engineering & Performance Comparisons.
 RFC 4250: The Secure Shell (SSH) Protocol Assigned Numbers
 RFC 4251: The Secure Shell (SSH) Protocol Architecture
 RFC 4252: The Secure Shell (SSH) Authentication Protocol
 RFC 4253: The Secure Shell (SSH) Transport Layer Protocol
 RFC 4254: The Secure Shell (SSH) Connection Protocol
 RFC 4379: Detecting Multi-Protocol Label Switched (MPLS) Data Plane Failures.
 RFC 4448: Encapsulation Methods for Transport of Ethernet over MPLS Networks.
 RFC 4541: Considerations for IGMP and MLD Snooping Switches
 RFC 4553: Structure-Agnostic Time Division Multiplexing (TDM) over Packet (SAToP)
 RFC 4664: Framework for Layer 2 Virtual Private Networks (L2VPNs)
 RFC 4665: Service Requirements for Layer 2 Provider-Provisioned Virtual Private Networks
 RFC 4781: Graceful Restart Mechanism for BGP with MPLS
 RFC 5086: Structure-Aware Time Division Multiplexed (TDM) Circuit Emulation Service over Packet
Switched Network (CESoPSN)
 RFC 5087: Time Division Multiplexing over IP (TDMoIP)
 RFC 5254: Requirements for Multi-Segment Pseudowire Emulation Edge-to-Edge (PWE3).
 RFC 5462: Multiprotocol Label Switching (MPLS) Label Stack Entry: "EXP" Field Renamed to "Traffic
Class" Field.
 RFC 5586: MPLS Generic Associated Channel
 RFC 5654: Requirements of an MPLS Transport Profile
 RFC 5659: An Architecture for Multi-Segment Pseudowire Emulation Edge-to-Edge.
 RFC 5718: An In-Band Data Communication Network For the MPLS-TP
 RFC 5860: Requirements for OAM in MPLS-TP Networks
 RFC 5880: Bidirectional Forwarding Detection (BFD)
 RFC 5884: Bidirectional Forwarding Detection (BFD) for MPLS Label Switched Paths (LSPs)

 RFC 5921: A Framework for MPLS in Transport Networks
 RFC 5950: Network Management Framework for MPLS-TP Networks
 RFC 5951: Network Management Requirements for MPLS-TP Networks
 RFC 5960: MPLS-TP Data Plane Architecture
 RFC 6073: Segmented Pseudowire.
 RFC 6215: MPLS-TP User-to-Network (UNI) and Network-to-Network (NNI) Interfaces
 RFC 6370: MPLS-TP Identifiers
 RFC 6371: OAM Framework for MPLS-TP Networks
 RFC 6372: MPLS-TP Survivability Framework
 RFC 6378: MPLS Transport Profile (MPLS-TP) Linear Protection
 RFC 6423: Using the Generic Associated Channel Label (GAL) for Pseudowire in the MPLS-TP
 RFC 6426: MPLS On-Demand Connectivity Verification and Route Tracing
 RFC 6427: MPLS Fault Management Operations, Administration, and Maintenance (OAM)
 RFC 6428: Proactive Connectivity Verification, Continuity Check, and Remote Defect Indication for the
MPLS Transport Profile
 RFC 6718: Pseudowire Redundancy
 IETF Drafts:
 draft-ietf-l2vpn-vpls-ldp.
 draft-ietf-l2vpn-vpls-mcast-reqts.
 draft-ietf-magma-snoop.
 draft-ietf-mpls-tp-nm-framework.
 draft-ietf-mpls-tp-nm-req.
 draft-ietf-pwe3-dynamic-ms-pw-14.
 draft-ietf-pwe3-ethernet-encap.
 draft-ietf-pwe3-ldp-aii-reachability-04.
 draft-martini-l2circuit-encap-mpls.
 draft-sajassi-l2vpn-vpls-multicast-congruency.
 draft-vasseur-mpls-backup-computation.

13.6 ISO: International Organization for Standardization
 A2LA: Accredited Laboratory for Electrical and Mechanical Testing.
 ISO 9001: Quality Management System – Requirements.
 ISO 10589: Intermediate System to Intermediate System intra-domain routing information exchange
protocol
 ISO 14001: Environmental Management Systems – Requirements With Guidance for Use.
 ISO 27001: Information Security Management Systems – Requirements.
 TL 9000, QuEST Forum: Quality Management System – Requirements & Measurements Handbooks
for the Telecom Industry.

13.7 ITU-T: International Telecommunication Union
 G.650: Definition and Test Methods for the Relevant Parameters of Single-Mode Fibers.
 G.651: Characteristics of a 50/125 µm Multimode Graded Index Optical Fiber Cable.
 G.652: Characteristics of a Single-Mode Optical Fiber Cable.
 G.653: Characteristics of a Dispersion-Shifted Single-Mode Optical Fiber Cable.
 G.654: Characteristics of a Cut-off Shifted Single-Mode Optical Fiber Cable.
 G.655: Characteristics of a Non-Zero Dispersion Shifted Single-Mode Optical Fiber Cable.
 G.661: Definition and Test Methods for the Relevant Generic Parameters of Optical Amplifier Devices
and Subsystems.
 G.662: Generic Characteristics of Optical Fiber Amplifier Devices and Subsystems.
 G.663: Application Related Aspects of Optical Fiber Amplifier Devices and Subsystems.
 G.664: Optical Safety Procedures and Requirements for Optical Transport Systems.
 G.671: Transmission Characteristics of Passive Optical Components.
 G.691: Optical Interfaces for Single Channel SDH Systems with Optical Amplifiers and STM-64 Systems
(Draft).
 G.692: Optical Interfaces for Multi-Channel Systems with Optical Amplifiers.
 G.694.1: Spectral Grids for WDM Applications: DWDM Frequency Grid.
 G.694.2: Spectral Grids for WDM Applications: CWDM Wavelength Grid.
 G.695: Optical Interfaces for Coarse Wavelength Division Multiplexing Applications.
 G.703: Physical/Electrical Characteristics of Hierarchical Digital Interfaces.
 G.704: Synchronous Frame Structures Used at 1544, 6312, 2048, 8448 and 44 736 kbps Hierarchical
Levels.
 G.706: Frame Alignment and CRC Procedures Relating to Basic Frame Structure Defined in Rec G.704.

 G.707: Network Node Interface for the Synchronous Digital Hierarchy.
 G.709: Interfaces for the Optical Transport Network (OTN).
 G.752: Characteristics of digital multiplex equipment based on a second order bit rate of 6312 kbit/s
and using positive justification.
 G.772: Protected Monitoring Points Provided on Digital Transmission Systems.
 G.774 & G.774.n: SDH Information Model.
 G.775: Loss of Signal (LOS), Alarm Indication Signal (AIS) and Remote Defect Indication (RDI) defect
detection and clearance criteria for PDH signals.
 G.781: Synchronization Layer Functions.
 G.783: Characteristics of SDH Equipment Functional Blocks.
 G.784: Synchronous Digital Hierarchy (SDH) Management.
 G.798: Characteristics of OTN Hierarchy Equipment Functional Blocks.
 G.803: Architectures of Transport Networks based on the Synchronous Digital Hierarchy.
 G.805: Generic Functional Architecture of Transport Networks.
 G.806: Characteristics of Transport Equipment – Description Methodology and Generic Functionality.
 G.809: Functional Architecture of Connectionless Layer Networks.
 G.811: Timing Characteristics of Primary Reference Clocks.
 G.812: Timing Requirements of Slave Clocks Suitable for Use as Node Clocks in Synchronization
Networks.
 G.813: Timing Characteristics of SDH Equipment Slave Clocks (SEC).
 G.823: The Control of Jitter and Wander within Digital Networks Based on the 2048 kbit/s Hierarchy.
 G.825: The Control of Jitter and Wander within Digital Networks Based on the SDH (Draft).
 G.8251: The Control of Jitter and Wander within the Optical Transport Network (OTN).
 G.826: Error Performance Parameters and Objectives for International, Constant Bit Rate Digital Paths
at or above the Primary Rate.
 G.828: Error Performance Parameters and Objectives for International, Constant Bit Rate Synchronous
Digital Paths.
 G.829: Error Performance Events for SDH Multiplex and Regenerator Sections.
 G.831: Management Capabilities of Transport Networks Based on the Synchronous Digital Hierarchy
(SDH).
 G.841: Types and Characteristics of SDH Network Protection Architectures.
 G.842: Inter-Working of SDH Protection Architectures.
 G.872: Architecture of Optical Transport Networks.
 G.874: Management Aspects of the Optical Transport Network Element.
 G.874.1: Optical Transport Network (OTN): Protocol-Neutral Management Information Model for the
Network Element View.
 G.957: Optical Interfaces for Equipment and Systems relating to the Synchronous Digital Hierarchy.

 G.959.1: Optical Transport Network Physical Layer Interfaces.
 G.975: Forward Error Correction for Submarine Systems.
 G.985: 100 Mbit/s point-to-point Ethernet based optical access system.
 G.7041: Generic Framing Procedure (GFP).
 G.7042: Link Capacity Adjustment Scheme (LCAS) for Virtual Concatenated Signals.
 G.7710/Y.1701: Common equipment management function requirements
 G.7712: Architecture and specification of data communication network.
 G.7713: Distributed Connection Management.
 G.7714: Generalized Automatic Discovery Techniques.
 G.7714.1: Protocol for Automatic Discovery in SDH & OTN network.
 G.7715.1: ASON Routing Architecture and Requirements for Link State Protocols (based on PNNI, OSPF, or IS-IS).
 G.8001/Y.1354: Terms and definitions for Ethernet frames over transport.
 G.8010/Y.1306: Architecture of Ethernet Layer Networks.
 G.8011/Y.1307: Ethernet Services Framework.
 G.8011.1/Y.1307.1: Ethernet Private Line Service.
 G.8011.2/Y.1307.2: Ethernet Virtual Private Line Service.
 G.8012/Y.1308: Ethernet UNI and Ethernet NNI.
 G.8032 V1/Y.1344: Ethernet ring protection switching
 G.8112/Y.1371: Interfaces for the MPLS Transport Profile layer network
 G.8113.2/Y.1372.2: Operations, administration and maintenance mechanisms for MPLS-TP networks
using the tools defined for MPLS
 G.8201/Y.1354: Error performance parameters and objectives for multi-operator international paths
within the Optical Transport Network (OTN)
 G.8261/Y.1361: Timing and synchronization aspects in packet networks.
 G.8262/Y.1362: Timing characteristics of synchronous Ethernet equipment slave clock (EEC).
 G.8264: Distribution of timing information through packet networks
 G.8265: Architecture and requirements for packet-based frequency delivery
 I.610: ATM Operation and Maintenance Principles.
 M.2140: Transport Network Event Correlation.
 M.3010: Principles for a Telecommunications Management Network.
 M.3013: Considerations for a Telecommunications Management Network.
 M.3016.x: Security for the management plane.
 M.3017: Framework for the integrated management of hybrid circuit/packet networks.
 M.3100: Generic Network Information Model.
 M.3180: Catalogue of TMN Management Information.
 M.3200: TMN Management Services and Telecommunications Managed Areas: Overview.

 M.3300: TMN F Interface Requirements.
 M.3400: TMN Management Functions.
 Q.821: Alarm Surveillance.
 Q.822: Performance Monitoring.
 V.10: Electrical characteristics for unbalanced double-current interchange circuits operating at data
signaling rates nominally up to 100 kbit/s.
 V.11: Electrical characteristics for balanced double-current interchange circuits operating at data
signaling rates up to 10 Mbit/s.
 V.24: List of definitions for interchange circuits between data terminal equipment (DTE) and data
circuit-terminating equipment (DCE)
 V.28: Electrical characteristics for unbalanced double-current interchange circuits.
 V.110: Support by an ISDN of data terminal equipment with V-series type interfaces
 X.217: Open Systems Interconnection, Service Definition for the Association Control Service Element.
 X.219: Remote Operations - Model, Notation and Service Definition.
 X.227: Open Systems Interconnection, Connection-Oriented Protocol for the Association Control
Service Element - Protocol Specification.
 X.229: Remote Operations: Protocol Specification.
 X.710: Open Systems Interconnection, Common Management Information Service.
 X.720: Open Systems Interconnection, Structure of Management Information - Management
Information Model.
 X.721 Information Technology: Open Systems Interconnection, Structure of Management
Information - Definition of Management Information.
 X.722: Open Systems Interconnection, Structure of Management Information - Guidelines for the
Definition of Managed Objects.
 X.731: Open Systems Interconnection, Systems Management - State Management Function.
 X.733: Open Systems Interconnection, Systems Management - Alarm Reporting Function.
 X.743: Open Systems Interconnection, Systems Management - Time Management Function.
 X.744: Open Systems Interconnection, Systems Management - Software Management Function.
 Y.1311: Network-based VPNs - Generic architecture and service requirements.
 Y.1710: Requirements for Operation & Maintenance functionality for MPLS networks.
 Y.1711: Operation & Maintenance mechanism for MPLS networks.
 Y.1413: TDM-MPLS network interworking – User plane interworking
 Y.1731: OAM functions and mechanisms for Ethernet based networks
 Z.351: Data oriented human-machine interface specification technique – introduction.
 Z.352: Data oriented human-machine interface specification technique – scope, approach and usage.
 Z.361: Design guidelines for Human-Computer Interfaces (HCI) for the management of
telecommunications networks.

13.8 MEF: Metro Ethernet Forum
 CE 2.0: MEF services with multiple Classes of Service (Multi-CoS), Ethernet Network-to-Network Interconnect (E-NNI), E-Access, and manageability.
 MEF 3: Circuit Emulation Service Definitions, Framework and Requirements in Metro Ethernet Networks.
 MEF 4: Metro Ethernet Network Architecture Framework Part 1: Generic Framework.
 MEF 6: Metro Ethernet Services Definitions.
 MEF 6.1: Ethernet Services Definitions – Phase 2.
 MEF 7: EMS-NMS Information Model.
 MEF 8: Implementation Agreement for the Emulation of PDH Circuits over Metro Ethernet Networks.
 MEF 9: Ethernet Services at the UNI.
 MEF 10: Ethernet Service Attributes.
 MEF 10.1: Ethernet Service Attributes – Phase 2.
 MEF 11: User Network Interface (UNI) Requirements and Framework.
 MEF 12: Metro Ethernet Network Architecture Framework Part 2: Ethernet Services Layer.
 MEF 13: User to Network Interface (UNI) Type 1 Implementation Agreement.
 MEF 14: Test Suite for Ethernet Traffic Management.
 MEF 15: Requirements for Management of Metro Ethernet Phase 1 Network Elements.
 MEF 16: Ethernet Local Management Interface.
 MEF 18: Abstract Test Suite for Circuit Emulation Services over Ethernet based on MEF 8.
 MEF 21: Abstract Test Suite for UNI Type 2 Part 1: Link OAM.
 MEF 22.1: Mobile Backhaul Phase 2 Implementation Agreement.
 MEF 23.1: Class of Service Phase 2 Implementation Agreement.
 MEF 25: Abstract Test Suite for UNI Type 2 Part 3: Service OAM.
 MEF 26: External Network Network Interface Technical Specification.
 MEF 30: Service OAM Fault Management Implementation Agreement.
 MEF 31: Service OAM Fault Management Definition of Managed Objects.
 MEF 33: Ethernet Access Services Definition.
 MPLS-TP Network Management Requirements (draft-ietf-mpls-tp-nm-req).
 MPLS-TP NM Framework (draft-ietf-mpls-tp-nm-framework).
 MPLS-TP Network Management requirements (draft-ietf-mpls-tp-nm-req)
 MPLS-TP NM Framework (draft-ietf-mpls-tp-nm-framework)

13.9 NIST: National Institute of Standards and Technology
 FIPS PUB 197: Advanced Encryption Standard (AES).
 FIPS PUB 140-2: Security Requirements for Cryptographic Modules.

13.10 North American Standards
 ANSI FC–F1-2: Fiber Channel Physical Interface REV 10.
 ANSI T1.102: Digital hierarchy electrical interface.
 ANSI T1.105: Synchronous Optical Network (SONET) - Basic Description including Multiplex Structure,
Rates, and Formats.
 ANSI X3.296: Single-Byte Command code sets CONnection (SBCON) architecture.
 FCC CFR Title 47 Part 15: Radio Emission.
 FCC Part 101.147: Fixed microwave services (i8, i7, K7, 7).
 NEBS GR-1089 and GR-63 (temperature aspects).
 NEBS Level 3 support for North American platforms.
 Rural Utility Service (RUS) Certification.
 Telcordia GR-63-CORE: Network Equipment Building System (NEBS) Requirements: Physical
Protection.
 Telcordia GR-253-CORE: Synchronous Optical Network (SONET) Transport Systems: Common Generic
Criteria.
 Telcordia GR-383-CORE: COMMON LANGUAGE® Equipment Codes (CLEI™ Codes) – Generic
Requirements for Bar Code Labels.
 Telcordia GR-487-CORE: Generic Requirements for Electronic Equipment Cabinets.
 Telcordia GR-499-CORE: Transport System Generic Requirements (TSGR): Common Requirements.
 Telcordia GR-815-CORE: Generic Requirements for Network Element/Network system Security.
 Telcordia GR-1089-CORE: Electromagnetic Compatibility and Electrical Safety – Generic Criteria for
Network Telecommunications Equipment.
 Telcordia GR-1209-CORE: Generic Requirements for Passive Optical Components.
 Telcordia GR-1230-CORE: SONET Bidirectional Line-Switched Ring Equipment Generic Criteria.
 Telcordia GR-1244-CORE: Clocks for the Synchronization Network: Common Generic Criteria.
 Telcordia GR-1312-CORE: Generic Requirements for Optical Fiber Amplifiers and Proprietary Dense
Wavelength-Division Multiplexed Systems.
 Telcordia GR-1400-CORE: SONET Dual-Fed Unidirectional Path Switched Ring (UPSR) Equipment
Generic Criteria.
 Telcordia GR-2979-CORE: Common Generic Requirements for Optical Add-Drop Multiplexers (OADMs) and Optical Terminal Multiplexers (OTMs).

13.11 OMG: Object Management Group
 Notification Service Specification V 1.0.
 The Common Object Request Broker: Architecture and Specification V 2.6.

13.12 TMF: TeleManagement Forum

 TMF 513: MTNM Business Agreement Release 3.5.
 TMF 518: Framework Document Delivery Package (DDP) Business Agreement (BA) V1.1.
 TMF 518: Resource Provisioning DDP BA V1.0.
 TMF 608: Multi Technology Network Management Information Agreement V 2.1 and V 3.5.
 TMF 814: Multi Technology Network Management Solution Set V 2.1 and V 3.5.
 TMF 854: The MTOSI XML Solution Set Package Release 1.1.

13.13 Web Protocol Standards

W3C: World Wide Web Consortium
 SOAP 1.1: Simple Object Access Protocol.
 WSDL 1.1: Web Services Description Language.

WS-I: Web Services Interoperability Organization

 WS-I Basic Profile 1.1: Web Services Interoperability.