Transmission Network Design and Architecture Guidelines, Version 1.3
NSI Ireland
Reference: Transmission network design & architecture guidelines
Version: 1.3
Status: Draft
Date: 10 June 2013
Author(s): David Powders
Filed As:
Approved By:
Signature Date:
Document History

Version | Date | Comment
1.0 Draft | 27.01.2013 | First draft
1.1 Draft | 25.02.2013 | Incorporating changes requested from parent operators: resilience, routing, performance monitoring
1.2 Draft | 14.05.2013 | Updated BT TT routing section 2.3
1.3 Draft | 10.06.2013 | Section 2.3: E-Lines added to TT design; section 2.6: updated dimensioning rules; section 2.6.4: updated policing allocation per class; section 3.x added (site design)
Reference documents
1. (2012.12.27) OPTIMA BLUEPRINT V1.0 DRAFT FINAL.doc
2. Total Transmission IP design - DLD V2 2 (2)[1].pdf
Contents
Document History
Reference Documents
1.0 Introduction
    Background
    Scope of document
    Document structure
2.0 Proposed Network Architecture
    2.1 Transmission network
    2.1 Data centre solution
        2.1.1 Physical interconnection
    2.2 Self build backhaul network
        2.3.1 Self build fibre diversity
    2.3 Managed backhaul
        2.3.1 TT Network contract
        2.3.3 Backhaul network selection
    2.4 Backhaul routing
        2.4.1 Legacy mobile services
        2.4.2 Enterprise services
        2.4.3 IP services
            2.4.3.1 L3VPN structure
            2.4.3.2 IP service resilience
    2.5 Access Microwave network
        2.5.1 Baseband switching
        2.5.2 Microwave DCN
        2.5.3 Backhaul interconnections
    2.6 Network topology & traffic engineering
        2.6.1 Access Microwave topology & dimensioning
        2.6.2 Access MW resilience rules
        2.6.3 Backhaul & Core transmission network dimensioning rules
        2.6.4 Traffic engineering
    2.7 Network synchronisation
        2.7.1 Self built transmission network
        2.7.2 Ethernet managed services
        2.7.3 DWDM network
        2.7.4 Mobile network clock recovery
            2.7.4.1 Legacy RAN nodes
            2.7.4.2 Ericsson SRAN 2G
            2.7.4.3 Ericsson SRAN 3G & LTE
            2.7.4.4 NSN 3G
    2.8 Data Communications Network (DCN)
        2.8.1 NSN 3G RAN control plane routing
        2.8.2 NSN 3G RAN O&M routing
    2.9 Transmission network performance monitoring
3.0 Site configuration
    3.1 Core sites
    3.2 Backhaul sites
        3.2.1 BT TT locations
    3.3 Access locations
        3.3.1 Access sites (Portacabin installation)
        3.3.2 Access sites (Outdoor cabinet installation)
Figures
Figure 1: Proposed NSI transmission solution
Figure 2: Example data centre northbound physical interconnect
Figure 3: Dublin dark fibre IP/MPLS network
Figure 4: National North East area IP/MPLS microwave network
Figure 5a: BT Total Transmission network
Figure 5b: NSI logical and physical transmission across the BT network
Figure 7: Access Microwave topology
Figure 8: Example VSI grouping configuration
Figure 10: IP/MPLS traffic engineering
Figure 11: Enterprise traffic engineering
Figure 12: Downlink traffic control mechanism
Figure 13: Normal link operation
Figure 14: Self built synchronisation distribution
Figure 15: 1588v2 distribution over Ethernet managed service
Tables
Table 1: Self build fibre diversity
Table 2: TT Access fibre diversity
Table 3: List of L3VPNs required
Table 4: Radio configuration v air interface bandwidth
Table 5: Feeder link reference
Table 6: CIR per technology reference
Table 5: Sample Quality of Service mapping
Table 6: City area (max link capacity = 400Mb\s)
Table 7: Non-city area (max link capacity = 200Mb\s)
Table 7: Synchronisation source and distribution summary
Table 8: DCN network configuration per vendor
Table 9: NSI transmission network KPIs and reporting structure
Table 10: Core site build guidelines
Table 11: Backhaul site build guidelines
Table 12: Access site categories
Table 11: Access site consolidation (no 3PP services in place)
Table 12: Outdoor cabinet consolidation (existing 3PP CPE on site)
1.0 Introduction
1.1 Background
The aim of this document is to detail the design and architecture principles to be applied across the Netshare Ireland (NSI) transmission network. NSI, as detailed in the transition document, is tasked with collapsing the existing transmission networks inherited from both Vodafone Ireland and H3G Ireland onto one single network carrying all of each operator's enterprise and mobile services. As detailed in the transition document, it is NSI's responsibility to ensure that the network is future-proof, scalable and cost effective, with the capability to meet the short term requirements of network consolidation and the long term requirements of service expansion.
1.2 Scope of document
This document details the proposed solutions for the access and backhaul transmission networks and the steps required to migrate from the current separate network configuration to one consolidated network. While the required migration procedures are detailed within this document, the timescales required to complete these works are out of scope.
1.3 Document structure
Section 2 describes the desired end to end solution for the consolidated network and the criteria used to arrive at each design decision.
Service layer
o IP/MPLS (Tellabs / BT TT)
o L2 VPN (Tellabs / BT TT)
o E-Line (Ceragon / Siae)
o TDM access (Ceragon / Siae / Ericsson MiniLink)
By decoupling the physical media layer from the service layer, NSI gains the flexibility to modify one layer without impacting the other. Once established, routing changes throughout the network are independent of the physical layer and, in the same way, changes in the physical layer such as new nodes or bandwidth changes are independent of the service routing. This in turn ensures that transmission network changes requiring 3rd party involvement are restricted primarily to the physical layer and, once established, should be minimal. While seamless MPLS from access point through to the core network is possible, for demarcation purposes the NSI transmission network will terminate at the core switch (BSC / RNC / SGw / MME / Enterprise gateway).
Figure 1: Proposed NSI transmission solution (diagram). Key annotations recoverable from the figure:
- The BT TT network is configured for L2 point-to-point circuits to each of the CDC locations; dual nodes at the data centres may be used to load balance the traffic from the distributed BPoP locations.
- Netshare IP/MPLS network: L3VPNs are configured for each of the service types from each of the operators.
- Access cluster: each VLAN is a broadcast domain, MAC learning is enabled throughout the cluster to enable layer 2 switching, and no E-Lines are in use.
- Service VLANs and addressing at the access layer: UP VID 3170 (172.17.x.x), CP VID 3180 (172.18.x.x), O&M VID 3190 (172.19.x.x), TOP VID 3200 (172.20.x.x), CGN O&M VID 3210 (172.21.x.x).
- RNC-facing addressing: per-RNC blocks 172.30.213.0/24 to 172.30.218.0/24 and 10.196.0.0/20 to 10.196.96.0/20 (dn1rnc01 to dn1rnc08); a /29 network is allocated to the backend BTS interface, with static routes required to the OMU, DCN and RNC networks (to be confirmed).
- Interconnection: 10G LACP trunks between the VF Clonshaugh (DN680) and HPD SR12 nodes, Ethernet trunks across the BT TT network to the GPOP-SR12, and access 7210 nodes towards the Ceragon/Siae clusters.
The proposed architecture comprises the following elements, each described in the sections that follow:
- Data centre northbound interfaces
- Self build backhaul
- Managed backhaul
- Backhaul routing
- Access Microwave network
Figure 2 below details the possible northbound connections at each data centre.

customer traffic. The 8800 hardware will interface directly at 10Gb\s, 1Gb\s & STM-1 with the core switches, DCN and synchronisation networks for both operators. Each of the data centres will be interconnected using n x 10Gb\s rings. RSVP LSPs are not supported over interfaces in a Link Aggregation Group (LAG) in the current 8800 release, so multiple 10Gb\s rings will be used to transport traffic from both operators instead. In the first deployment 1 x 10Gb\s ring will be deployed, which can be upgraded as required. Consideration was given to a meshed MPLS core; however the n x 10Gb\s ring was deemed to be technically sufficient and more cost effective. This design may be revisited in the future based on capacity, resilience and expansion requirements.

Interfacing to the out of band DCN (mobile and transmission networks) and synchronisation networks will be realised through 1Gb\s interfaces. All interfaces to legacy TDM and ATM systems are achieved through the deployment of STM-1c and STM-1 ATM interfaces.

Physical and layer 3 monitoring is active on all trunk interfaces so that, in the event of a link failure, all traffic is routed to the diverse path and the required status messaging and resilience advertisements are propagated throughout the network. These will be explained in detail in each of the sections dealing with service provisioning.
Figure 3: Dublin dark fibre IP/MPLS network (diagram). The figure shows the Dublin core and PoC router topology (DN/DNx nodes) with /30 point-to-point link addressing from the 10.82.x.x range and router loopbacks from the 172.25.x.x range, split across ISIS areas 49.0031, 49.0032 and 49.0033. Legend: core Dublin ring STM-16 (future 10G/nx10G/40G), ISIS L2-only or L1-2 if between routers in the same location; PoC2 connections GE (future 10G), ISIS L1-2 intra-area links or L2-only inter-area links; PoC3 connections GE (future subrate 10G / line rate 10G), ISIS L1-only; synchronisation priorities 1, 2 and 3 are also indicated per link.
It is proposed to use a combination of strict and loose hop routing across the network. The working path should always be associated with a strict hop, with the protection path assigned to either a strict or loose hop. For LSPs routed over the microwave POS trunks, strict hops will be used to ensure efficient bandwidth management; for those routed across dark fibre or managed Ethernet, loose hops will be used. In a mesh network where multiple physical failures and multiple paths are possible, this approach offers a greater level of resilience.
Figure 4: National North East area IP/MPLS microwave network (diagram). The figure shows the North East collector and access routers (CN, LH, WH, KE and MH prefixed nodes) homed to the Dublin nodes DNBW1200, DN680200 and DN706201, with /30 link addressing from the 10.82.128.x range and router loopbacks from the 172.25.128.x range.
multiple hops to the data centres and all routers will be added to the Level 2 area. In order to ensure that traffic is correctly balanced across the SDH trunks, RSVP LSPs will be routed statically, giving NSI a greater level of control over the bandwidth utilisation. LSPs from each collector will be associated with a particular STM-1 and routed to the destination accordingly. Traffic aggregating at each collector is then associated with a particular LSP.

NOTE: The transition document states that the National SDH microwave network should be replaced by NSI with the BT TT network (see section 2.1.2) or a national DF network. However, as this will take time and consolidated sites are required nationally in the short term, the network described in Figure 4 will be utilised over the short to medium term.
Note that the above table details the desired physical separation. In some cases this separation may not be physically achievable, and a decision on the aggregation level will be made based on other factors such as location, security, landlord, antenna support structure and cost.
routers at the data centres due to the path resilience employed, it will be useful in terms of load balancing and future bandwidth requirements. As with the self build design, resilience will be achieved through the physical path diversity to diverse data centre locations from each of the BT GPoPs. Figure 5b illustrates the physical and logical connectivity across the BT TT.
Figure 5b: NSI logical and physical transmission across the BT network (diagram). Key elements: the 10Gb\s NSI MPLS network with primary and secondary LSPs between the Citadel 100 and Clonshaugh data centres; E-Lines on the BT TT network via the BT IP GPoPs (HPD 1, HPD 2 and Ballymount); 1G/10G ADVA XG210 demarcation devices at the handover points; and Symmetricom TP500 1588v2 slaves connected over 1Gbit/s GE interfaces.
Backhaul type selection (dark fibre v managed service) should consider the following:
- Bandwidth requirement (high/medium v low): for large bandwidth sites dark fibre may offer the more attractive cost per bit.
- Cost structure: to reduce the impact on operational expenditure, dark fibre CapEx deals may be more attractive.
- Surrounding network: the transmission network selection should take account of the surrounding backhaul type. This is to ensure that the interconnecting clusters are optimally routed through the hierarchical structure.
2.4 Backhaul routing

Backhaul routing can be split into legacy (TDM/ATM) services, enterprise services and IP services.
across the MPLS network. ATM services will be carried in ATM PWEs, with N:1 encapsulation used for the signalling VCs to reduce the number required. User plane VCs can be mapped into a single PWE. TDM services will be transported using SAToP PWEs.

At the core locations MSP 1+1 protected STM-1 interfaces will be deployed between the 8800 MSRs and the core switches (BSC / RNC). Note: the multichassis MSP feature is not available on the Tellabs 8800 MSRs, therefore the MSP 1+1 protecting ports will be on separate cards. At the access locations MSP protection for ingress TDM traffic will be configured in the same way on the 8600 nodes. PWEs for legacy services will be routed between the core and collector locations over physically diverse LSPs.
2.4.3 IP services
2.4.3.1 L3VPN structure
For IP services L3VPNs will be configured across the MPLS network. All routing information will be propagated throughout each L3VPN using BGP.
The IP/MPLS network will be configured in a hierarchical fashion with route reflectors used to advertise routing within each area. Route Reflectors (RRs) will be implemented in the core area with all level 2 routers peering to those RRs. The ABRs between the level 1 and level 2 areas will act as the route reflectors for the connected level 1 areas. This will reduce the size and complexity of the routing tables across the network.

For each service an L3VPN will be configured. Because H3G and VFIE use different vendors and have different requirements in the core, the number of L3VPNs required differs slightly. Table 3 details the L3VPNs to be configured across the NSI network.

Parent | L3VPN | Description | Comment
VFIE | 2G UP | User Plane | Separate L3VPNs are configured for each BSC
VFIE | SIU O&M | |
VFIE | RNC UP | 3G User Plane |
VFIE | MiniLink O&M | O&M for the MiniLink PDH network (SAU-IP) |
H3G | 3G UP | | A single L3VPN for all RNCs
H3G | 3G CP | | A single L3VPN for all RNCs
H3G | 3G O&M (RNC) | | A single L3VPN for all RNCs
H3G | 3G O&M RBS | |
H3G | Synchronisation | |
H3G | LTE | Tbc | Tbc
H3G | LTE | Tbc | Tbc
Table 3: List of L3VPNs required
As services are added to the network they will be added as endpoints to the respective L3VPN for that service and parent core node. This is achieved by adding the endpoint interface and subnet to the VPN. Any adjacent network routing required to connect to a network will also be redistributed into the VPN.

VFIE use /30 subnets to address the mobile services across the network. This results in a large number of endpoints within each L3VPN. For that reason the networks will be split based on the parent core switch, resulting in an L3VPN for each of the services routed to each of the RNCs/BSCs. For the H3G network, /26 networks are typically used at each of the endpoints. This summarisation significantly reduces the number of endpoints required within each VPN and consequently the number of VPNs, as illustrated below.

Sections 3 and 4 detail the impacts the proposed design has on each of the operators' existing solutions and the steps, if any, required to migrate to the proposed solution.
2.4.3.2 IP service resilience

Transport resilience
Within the backhaul network IP services will be carried resiliently between the core and collector locations over diversely routed LSPs. It is proposed to use a combination of strict and loose hop routing across the network. The working path should always be associated with a strict hop, with the protection assigned to a loose hop. Configuring the protection on a loose hop allows the IGP to route the protecting LSP between the source and destination. In the event of a failure all traffic will be switched to the protecting LSP, which has been routed between the source and destination via the IGP. In a mesh network where multiple physical failures and multiple paths are possible, this approach offers a greater level of resilience.

Note, as described in section 2.2, in the case where both the main and protecting paths are routed over microwave STM-1 trunks, strict hop routing will be employed for both paths to ensure optimum utilisation of the available capacity.
Router resilience

Within the level 2 area of the network dual routers are deployed to ensure resilience at locations aggregating large volumes of traffic. In this case resilient LSPs are routed from the collector nodes to both routers. In the event of a router failure traffic will route over the operating router until such time as the failed router is operational again, after which the routing will return to the initial configuration.
Core switch resilience - VRRP

For all connections to the mobile core, Virtual Router Redundancy Protocol (VRRP) should be used. While the VRRP implementation will differ slightly based on the mobile core vendor and function, the objective is to ensure that the transmission network to the core has full interface and router redundancy. 10Gb\s cross links (with LAG if required) between the 8800 nodes at each data centre location will be implemented to support the router redundancy.

For the 8800 nodes, during restart it is possible that the router will advertise the interface addresses to the core switch (BSC/RNC/SGw/MME) before the router forwarding function is re-established. This may result in temporary black-holing of traffic. To avoid this scenario a separate connection is required between the routers, with a default route added to each for all traffic. It is proposed that a 10Gb\s link should be used for this also.
2.5 Access Microwave network

The target access microwave network will be based on an Ethernet microwave solution utilising ACM to maximise the available bandwidth. In the existing networks H3G use Ceragon IPx microwave products while VFIE use the Siae Alc+2 and Alc+2e products. While it is envisaged that NSI will tender for one supplier, it is not planned to replace either of the existing networks. The access network solution must therefore be designed so as to ensure both vendors' products, and the services transported across them, interoperate without issue. Figure 7 details a possible configuration of the access network topology utilising both vendors' products.
Figure 7: Access Microwave topology (diagram showing a mixed access cluster of Siae and Ceragon (Cgn) radios interconnected over GigE trunks, with ELP-protected links within the cluster).
Note: Future developments may result in the deployment of all-outdoor MW radio products in the traditional MW bands and in the E-Band. In this case, at feeder locations a cell site router may be deployed to perform the baseband switching function using IP/MPLS routing functions. Should this solution be employed in the future, an additional design scenario will be described and added to this document.
Figure 8: Example VSI grouping configuration (diagram showing service VLANs, e.g. VID 3170, VID 3210 and VID 3000, mapped into VSIs on the Tellabs 86xx and switched towards the Ceragon and Siae access radios over GE interfaces).
2.6 Network topology & traffic engineering

The NSI transition document details the targets for network topology, traffic engineering and bandwidth allocation on a per site basis for each of the mobile networks. In summary they are:
- No more than 1 microwave hop to fibre (facilitated by providing fibre solutions to 190 towns)
- No contention for shared transmission resources (NSI are required to monitor utilisation and ensure upgrade prior to congestion on the transmission network)
- Traffic engineering (CoS, DSCP, PHB) will be assigned equally to each service type from each operator. At a minimum the following will be applied:
  o Voice (GBR)
  o Video/interactive (VBR-RT)
  o Enterprise (VBR-NRT)
  o Data (BE)
- Bandwidth allocation per site:
  o Dublin & other cities: 400Mb\s\site
  o Towns (5-10K): 300Mb\s\site
  o Rural: 200Mb\s\site
This chapter explains in detail the Access, Backhaul and Core transmission network dimensioning guidelines and traffic engineering rules required to achieve the targets set out in the transition document.
2.6.1 Access Microwave topology & dimensioning

Before creating a cluster plan, each site in the MW network must be classified under the following criteria:
- Equipment support capabilities
- Line of sight capabilities
- Proximity to existing fibre solution
- Existing frequency designations
- Site development opportunities
- Landlord agreements (number and type of equipment/services permitted under the existing agreements)
- Term of agreement
Creating a database as above will allow the MW network planning team to create cluster solutions where a number of sites are associated with a designated head of cluster. As per the transition document the target topology is one hop to a fibre access point. However this will not always be possible due to one or a combination of the following factors:
- Line of sight
- Channel restrictions
- Proximity of fibre solutions

Once the topology of the cluster is defined it is necessary to define the capacity of each link within the cluster. For tail links this is straightforward; the link must meet the capacity requirements of the transition document:
- Dublin & other cities: 400Mb\s\site
- Towns (5-10K): 300Mb\s\site
- Rural: 200Mb\s\site
For feeder links, statistical gain must be factored in while still meeting the capacity requirements for each of the individual sites. Table 4 gives examples of existing MW radio configurations and the average air interface speeds available.

Channel bandwidth | Configuration | Average air interface capacity
14MHz | Single channel | 85Mb\s
28MHz | Single channel | 170Mb\s
28MHz | 2 channel LAG | 340Mb\s
28MHz | 3 channel LAG | 500Mb\s
28MHz | 4 channel LAG | 680Mb\s
56MHz | Single channel | 340Mb\s
56MHz | 2 channel LAG | 680Mb\s
56MHz | 3 channel LAG | 1.02Gb\s
56MHz | 4 channel LAG | 1.34Gb\s
E-Band | 1GHz | 1Gb\s
Table 4: Radio configuration v air interface bandwidth
Table 5 provides a guide for feeder link configurations based on the number of physical sites aggregated across that link.

Physical sites aggregated | City | Urban | Rural | Comments
2 | P1: E-band, P2: 2 x 56MHz | P1: 1 x 56MHz | P1: 1 x 56MHz | 3:1 stat gain
3-5 | P1: E-band, P2: 2 x 56MHz | P1: 1-2 x 56MHz | P1: 1-2 x 56MHz |
6-7 | P1: E-band, P2: 3 x 56MHz | P1: 1-2 x 56MHz | P1: 1-2 x 56MHz |
8 | P1: E-band, P2: 4 x 56MHz | P1: 2 x 56MHz | P1: 2 x 56MHz |
Table 5: Feeder link reference
Note that no more than 8 physical sites should be aggregated on any one feeder link. For MW links utilising adaptive code modulation (ACM) it is important that the link, at the reference modulation (i.e. the modulation scheme for which ComReg have allocated the maximum EIRP), is dimensioned so as to meet the sum of the CIRs from each operator across that link. The total CIR per link is based on the product of the RAN technologies deployed and the CIR per RAN technology (see Table 6).

Table 6: CIR per technology reference
Should restrictions apply in terms of hardware, licensing or topology, with the effect that links cannot be dimensioned as per the tables above, then the following formula should be used to determine the minimum link bandwidth:

Min feeder link capacity = MAX (VFIE CIR + H3G CIR, Max tail link capacity)

where:
- CIR = total CIR across all links aggregated from each operator
- Max tail link capacity = maximum tail link capacity of all sites aggregated across the feeder link

The formula is designed to facilitate the required capacity for each site based on location while at the same time ensuring, where multiple sites are aggregated, that the minimum CIR is available to each site.
2.6.2 Access MW resilience rules

Note that while LAG can be considered as a protection mechanism, allowing the link to operate at a lower bandwidth in the event of a radio failure, NSI will protect the radios in a LAG group using 1+1 HSB to ensure the highest hardware availability for a physical link. NSI will consider LAG for capacity only and 1+1 HSB for protection. The target microwave topology, as described in the transition document, is for 1 microwave hop to fibre, which will result in minimal use of 1+1 HSB configurations. However, in the event that this topology is not possible, NSI will implement protection as described above.
2.6.3 Backhaul & Core transmission network dimensioning rules

The statistical gain will be based on the average throughputs per technology aggregated, and is calculated as follows:

Stat gain = (Total existing service capacity + Forecasted service capacity) / Backhaul capacity

For the backhaul and core networks the current utilisation will be monitored on a monthly basis, with the statistical gain forecasted over an annual basis. This will give rise to programmed capacity upgrades across the backhaul (managed and self build) and core networks. The time to upgrade trunks across these networks is typically between 6 and 24 months depending on the upgrade involved. To facilitate this process the parent companies must provide 12, 24 and 36 month rolling forecasts at least twice yearly. These forecasts must detail at a minimum:
- Volume deployment per service type per geographic area
- Average throughput per service type
- Max allowable latency per service type

A worked example of the statistical gain calculation is given below.
NSI will constantly monitor utilisation versus forecast and feed back to the parent companies. This will ensure that the capacity forecasting processes are optimised over time.
2.6.4 Traffic engineering

Quality of service is used to assign priority to certain services above others. Critical service signalling and GBR services will be assigned the highest priorities, with VBR services assigned lower priority based on the service and/or the technology. There are large variations in the bandwidth requirements for LTE, HSPA, R99 and GPRS. If all services were assigned equal priority, during periods of congestion the low bandwidth services would be disproportionately impacted, to such an extent that they may become unusable. For that reason, the low bandwidth data services will be assigned a higher priority than those presenting very high bandwidths.
QoS, along with the queue management function, should be designed to ensure that during periods of congestion equivalent services from the two operators have equal access to the available bandwidth. Table 5 details the proposed QoS mapping for all mobile RAN services.

Traffic type | DSCP | L2 p-bit | MPLS queue
Signalling, synchronisation, routing protocols | 24, 40, 48, 49, 56 | 7 | CS7 (Strict)
Speech | 46 | 6 | EF (Strict)
VBR streaming, GPRS data, gaming | 32, 34, 36, 38 | 3 | AF4 (WRED)
R99 data | | 2 |
HS data, premium Internet access | | 1 |
LTE data | | 0 |
Table 5: Sample Quality of Service mapping
Figure 10: IP/MPLS traffic engineering (diagram showing the per-hop behaviour applied to IP flows: traffic classification, shaping and scheduling onto the trunk interface, with the EF queue using tail drop, the G+E and BE queues managed by WRED, and queues serviced by WFQ).
passed without delay. In a congested environment, GBR services are passed directly to the egress interface and the VBR services are queued, with access to the egress interface controlled by the weighted fair queuing algorithm.

Weighted Random Early Discard (WRED) is used to ensure efficient queue management. Packets from data flows are discarded at a pre-determined rate as the queue fills up. By doing this the 3G flow control and TCP/IP flow control should slow down, resulting in reduced retransmissions and more efficient use of the available bandwidth. A minimal sketch of this discard behaviour is given below.

For enterprise services, policing on ingress will be implemented to ensure the enterprise customer is within the SLA. In such circumstances a CIR and PIR can be allocated to the customer services, with a CBS and PBS assigned also. In this case the two rate three colour marking (trTCM) mechanism will be used to control the flow of enterprise traffic through the network.
Figure 11: Enterprise traffic engineering (diagram of policing and marking of ingress enterprise traffic according to the standard two rate three colour marker, trTCM). Annotations from the figure: yellow marked traffic is the first to be discarded in case of network congestion; the CBS allows short bursts above CIR to be tolerated and marked green; the PBS allows short bursts above PIR to be tolerated rather than discarded.
Across the microwave network a combination of Shaping, CoS based policing, trTCM and WRED queue management should be used to ensure congestion control and fairness in terms of bandwidth contention.
For downlink traffic, the physical interface from the IP/MPLS network must be shaped to the maximum bandwidth of the radio interface. This ensures that egress buffer overflow is not experienced, in particular for large bursts of LTE traffic. For LTE traffic, shaping per VLAN should also be implemented to ensure that tail links, which may be connected to feeder links and be of lower capacity, do not experience buffer overflow.

Note: VLAN shaping for LTE must be taken into account when defining the layer 2 VLAN structure and layer 3 addressing towards the H3G LTE network.
Figure 12: Downlink traffic control mechanism (diagram showing H3G and VFIE LTE, 3G and 2G traffic shaped at the GE interfaces towards the radio baseband (BEP1.0/BEP2.0), with LTE traffic shaped per service and per port (VLAN group) in order to avoid buffer overflow).
As detailed in previous sections, the target bandwidth for RBS sites is 400Mb\s in the city areas, 300Mb\s in towns and 200Mb\s for all others. Tables 6 and 7 detail the proposed policing settings for the two areas.

Traffic | CIR (per operator) | PIR | Comments
GBR services | NA | | No policing; marked green
GPRS data | 1Mb\s | Not set | PIR will not be greater than max link capacity; out of policy = yellow
R99 data | 2Mb\s | Not set | PIR will not be greater than max link capacity; out of policy = yellow
HSDPA | 15Mb\s | Not set | PIR will not be greater than max link capacity; out of policy = yellow
LTE | 20Mb\s | 400Mb\s |
Table 6: City area (max link capacity = 400Mb\s)

Traffic | CIR (per operator) | PIR | Comments
GBR services | NA | | No policing; marked green
GPRS data | 1Mb\s | Not set | PIR will not be greater than max link capacity; out of policy = yellow
R99 data | 2Mb\s | Not set | PIR will not be greater than max link capacity; out of policy = yellow
HSDPA | 15Mb\s | Not set | PIR will not be greater than max link capacity; out of policy = yellow
LTE | 20Mb\s | 200Mb\s |
Table 7: Non-city area (max link capacity = 200Mb\s)
All packets within the CIR and CBS will be marked green. For 3G and HS services the PIR will not exceed the available link capacity, so out of profile packets will be marked yellow. For LTE traffic, out of policy traffic will be marked red and discarded. In some cases the sum of both operators' PIRs will be greater than the available link capacity, even at maximum modulation. In this case it will be possible for both operators to peak to the maximum available capacity, but not at the same time.
Figure 13: Normal link operation (diagram). Annotations from the figure: when Operator 1 plus Operator 2 traffic exceeds the transmit link capacity and queues start to fill up, the WRED (QoS) mechanism starts dropping yellow marked packets from data traffic; 3G flow control and TCP/IP (LTE) sessions then slow down the traffic of both operators, preserving the green (CIR) packets for both operators.
data sessions, minimising the number of retransmissions and optimising the use of the available bandwidth. This approach ensures that both operators' GBR traffic is always transmitted, while also ensuring that in a congested scenario both operators have fair access to the available bandwidth for each service provided.

Note that for the incumbent vendors of Ethernet microwave radio systems, the majority of the deployed links will not support the required hierarchical QoS features. During the consolidation of both networks it will be necessary to swap out that hardware for hardware supporting those functions. A tender process will be run to select one vendor to fulfil these requirements.
2.7 Network synchronisation
NSI are responsible for managing the quality and distribution of the synchronisation reference clock throughout the mobile network. Table 7 summarises the clock distribution methods that will be implemented for the transmission and mobile networks.
Network | Clock distribution | Comments
Source | PRC / SSU with Rubidium holdover (Symmetricom SSU2000) | Each SSU is configured with redundant source and supply modules. Redundant SSUs are distributed across the data centre locations
Self built backhaul (Ethernet) | Synchronous Ethernet | Synchronous Ethernet with SSM
Self built backhaul (SDH) | SDH trunks | SSM enabled
Self built backhaul (DWDM) | 1588v2 (IP VPN configured for 1588v2 distribution) | TP500 slaves used to recover clock and reference the southbound self built network
Ethernet managed service | 1588v2 (IP VPN configured for 1588v2 distribution) | TP500 slaves used to recover clock and reference the southbound self built network
Self built access microwave (Ethernet) | Synchronous Ethernet & radio interface | Synchronous Ethernet with SSM
Self built access microwave (PDH) | E1 connections and radio interface | For legacy RBS nodes
Ericsson DUW (3G network) | NTP phase synchronisation from NTP server in RNC | Parent RNC is referenced to the PRC and distributes clock via NTP carried over the Iub link
Ericsson SIU-02 | Synchronous Ethernet |
Ericsson DUG (2G) | Legacy E1 interfaces connected to SIU-02 |
Ericsson DUL (LTE) | NTP phase synchronisation from resilient NTP servers at data centre locations | NTP servers for LTE will be slaves of the SSU2000 nodes
Mixed mode remote radio units (U900 & GSM 900) | DUG synchronised from DUW directly | DUW is synchronised over the NTP network
Mixed mode remote radio units (LTE1800 & GSM 1800) | DUG synchronised from DUL directly | DUL is synchronised over the NTP network from standalone NTP servers
NSN 3G network | 1588v2 slaves (IP VPN 1588v2 packet distribution) | SSU2000 nodes act as servers for the NSN 1588v2 network
Table 7: Synchronisation source and distribution summary
The following sections provide additional details for each of the synchronisation solutions and their applications.
For Ethernet managed services it is assumed that the synchronisation source within the 3rd party's network is not a trusted source. NSI will configure an L3VPN to distribute a 1588v2 timing reference from the PRC to the provider edge and recover the reference from the PRC at that point. From there synchronisation will be distributed as described for the self built network. 1588v2 synchronisation is independent of the underlying physical network and will ensure that the clock recovered at the provider edge is referenced to the network PRC.
Figure 15: 1588v2 distribution over Ethernet managed service

2.7.3 DWDM network
For SDH wavelengths the distribution of SDH synchronisation remains valid and no change is required. However for Ethernet trunks, while the DWDM nodes do support SyncE, the current installed base does not. In this case 1588v2 will be implemented across the initial deployment and the scenario described in section 2.7.2 will be used, with 1588v2 slaves recovering the reference from the PRC. For future deployments of Ethernet trunks across the DWDM backbone, SyncE will be considered and, where implemented, no 1588v2 clock recovery will be required.
2.7.4 Mobile network clock recovery

2.7.4.1 Legacy RAN nodes

TDM will be used to synchronise the legacy RAN technologies, namely the legacy 2G systems and the 3G RAN connected via ATM. The legacy RAN technologies will use the E1 connections as their timing reference.
2.7.4.2 Ericsson SRAN 2G
Ericsson use the SIU-02 as the aggregation device for the baseband connections from their SRAN nodes (DUG, DUW and DUL). The SIU-02 converts the PDH signals from the 2G node (DUG) to a format suitable for transmission over Ethernet to the BSC. The SIU-02 supports synchronisation over synchronous Ethernet. In this configuration the SIU-02 will be connected to the transmission network via its WAN interface, either directly to a co-sited MPLS router or via Ethernet microwave over a GigE trunk. This connection will be used as the timing reference for the node.
2.7.4.3 Ericsson SRAN 3G & LTE

The Ericsson 3G (DUW) and LTE (DUL) nodes are synchronised using an NTP network. NTP is similar to 1588v2, with the SRAN core nodes for 3G (RNC) and LTE (SGw) using the Iub and S1 interfaces respectively to transmit the required synchronisation phase information for an accurate timing reference to the PRC. As the timing signals are carried within the respective user planes, separate VPNs for timing distribution are not required.

Note: Ericsson will support 1588v2 in future releases of DUW and DUL software. Once this is the case, a decision should be taken as to the benefit of replacing the existing NTP solution with 1588v2.
2.7.4.4 NSN 3G
The NSN 3G network nodes can act as 1588v2 slaves and recover the clock from a 1588v2 master. NSI will configure a 1588v2 L3VPN dedicated for the NSN 3G network. The network will be configured as described in section 2.7.2 with the NSN node B recovering the 1588v2 timing reference from the 1588v2 master clock.
2.8 Data Communications Network (DCN)

DCN refers to the distribution of O&M communications between the various management systems and their respective managed elements and networks. All network elements, namely RAN or transmission technologies, require connection to a network or element management platform for performance and configuration management. This section describes, by vendor, the transmission network configuration required to support such communications. Table 8 details the DCN for each of the vendors' networks.
Vendor | Technology | Comments
Tellabs | IP/MPLS network | CM & PM are carried in band and connected to the corporate DCN at the data centre locations
Siae | Ethernet microwave | In band management. MPLS network gateway; L3VPN for the Siae microwave network; access clusters are addressed in sub-networks based on the cluster size; interconnect to corporate DCN at the data centre locations
Ceragon | Ethernet microwave | In band management. MPLS network gateway; L3VPN for the Ceragon microwave network; access clusters are typically addressed in /26 sub-networks; interconnect to corporate DCN at the data centre locations
Ericsson SRAN | Mobile RAN | MPLS network gateway; L3VPN for each RAN technology (2G, 3G & LTE), split over multiple VPNs based on network size; each network element has a /30 allocation; interconnect to corporate DCN at the data centre locations
NSN 3G | Mobile RAN |
Table 8: DCN network configuration per vendor
For the most part the DCN will be configured in band, with direct connectivity to the OSS via the DCN at the data centre locations; where this is not possible, L3VPNs will be configured to connect the remote elements to their respective management systems via the IP/MPLS network. At the data centre locations routing information will be shared between the corporate DCN networks and the transmission network VPNs through OSPF. The exception to this is the NSN 3G RAN, where the CP and O&M networks require static routes via the RBS parent RNC to the respective ICSU and O&M network.
Northbound traffic will be routed to the parent RNC via the gateway router at the collector site. At the access clusters static routes are configured to the /29 networks, with the O&M IP address for the RBS as the next hop. At the core sites VRF filters are applied on the 8800 nodes to ensure correct routing of incoming packets to the correct RNC. Each RBS is allocated a /29 subnet from an overall /20 allocated to each RNC; the VRF filter inspects the source address and routes to the correct RNC. Static routes are required on the endpoints to the OMU and DCN networks via the parent RNC's O&M interface. A sketch of the per-RBS /29 allocation is shown below.
2.9 Transmission network performance monitoring

As detailed in the transition document, NSI are responsible for ensuring the transmission network meets the target performance KPIs described therein and for providing periodic reporting and backup data to prove adherence to those KPIs. Table 9 describes the KPIs which must be measured.
Network | KPI | Description | Target | Reporting period | Comment
Access MW network | MW link availability | | 99.99x% | |
Access MW network | | | 99.99x% | |
Access MW network | MW network availability | | 99.96% | |
Access MW network | | | 99.96% | |
Access MW network | MW link performance | | Tbc | |
Access MW network | Packet loss | % packet loss across each link | Tbc | | Requires export and post processing of RMON counters per link
Access MW network | Delay variation | Delay variation across each link | Tbc | | Not available in release 1 hardware; integration to a post processing tool necessary
IP/MPLS network | Latency | One way packet delay from collector switch to core MPLS routers | <15ms | |
IP/MPLS network | Jitter | One way packet delay variance from collector switch to core MPLS router | <3ms | |
IP/MPLS network | Packet loss | | <0.2% | |
Table 9: NSI transmission network KPIs and reporting structure
In order to ensure efficient collection, post processing and reporting against each of the KPIs described above, and those required in the future, NSI are required to export the performance and configuration management data of the transmission network elements to a post processing tool. This will require an evaluation of the tools available today and possible replacements. This section will be updated to reflect the selected system and its operation once it has been selected and designed. Until such time as a post processing tool is available, all KPIs will be measured using the available tools on the respective vendor management platforms.
3.0 Site configuration

3.1 Core sites
Core sites refer to those locations where the transmission network is directly connected to the mobile Core and/or enterprise core networks. The main features that categorise these locations are; Transmission network has direct connectivity within the same site to a mobile core node (BSC, RNC, EPC) Transmission network has direct physical access to the core enterprise network The following table details the minimum requirements which must be satisfied when designing such sites. Requirement
- Network resilience | External optical cabling: For diverse fibre routes a minimum of 5 m physical separation is required from the external network through to the ODF presentation in the NSI equipment room.
- Network resilience | Internal diverse optical cable management: Intra-ODF cabling and ODF-to-equipment-rack cabling will not at any point share the same section of the fibre management system (FMS). Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP, RPS, VRRP) will terminate on diverse ODFs and will at no point share sections of the FMS.
- Network resilience | Internal diverse electrical baseband cable management: Intra-DDF cabling and DDF-to-equipment-rack cabling will not at any point share the same section of the cable management infrastructure. Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP, RPS, VRRP) will terminate on diverse DDFs and will at no point share sections of the cable management infrastructure. (Note: these guidelines apply to both 120 Ohm and 75 Ohm systems; for clarity, 120 Ohm distribution frames may also be referred to as patch panels.)
- Network resilience | Power: All equipment within the core site will have diverse A & B DC power. The A & B supplies must be traced to separate DC rectifier systems within the core site. The DC rectifiers within the core site will be powered from a UPS AC supply which is backed up by generator power for a minimum of 24 hours.
- Network resilience | Power cabling: Cables for A and B power (AC and DC) will at no stage share sections of the cable management infrastructure.
- Network resilience | Rack layout: Core equipment (DWDM, IP/MPLS, ATM, SDH) operating in a resilient or load-sharing capacity should not be collocated within the same rack.
- Network dimensioning | Power: DC rectifiers should be dimensioned with consideration for a minimum of 2 x spare rectifier units within each cabinet. Once this limit is reached, additional rectifiers should be deployed to meet any additional requirements.
- Network dimensioning | Power: 3-phase AC supplies should be used in all cases.
- Network dimensioning | Power: AC power for each rectifier unit should be dimensioned with a minimum overhead of 20% to facilitate emergency expansions and inefficiencies within the rectifier units (a worked example follows this table).
- Network dimensioning | Power cable labelling: All power cables must be labelled indicating the remote-end equipment and location. All MCBs must be labelled indicating the remote equipment ID.
- Internal cabling | Optical cabling (standard): Single-mode fibre should be used in all cases.
- Internal cabling | Optical cabling (equipment interconnect): All equipment interconnects must be done via ODF. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling | Optical cabling (labelling): All cables must be labelled at the equipment and at the frame, indicating the next hop (e.g. ODF and position) and the final destination (equipment and port ID).
- Internal cabling | Structured cabling (standard): CAT6 should be used in all cases at a minimum.
- Internal cabling | Structured cabling (equipment interconnect): All equipment interconnects must be done via patch panel. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling | Structured cabling (labelling): All cables must be labelled at the equipment and at the frame, indicating the next hop (e.g. patch panel and position) and the final destination (equipment and port ID).
- Internal cabling | 75 Ohm cabling (standard): RA7000 should be used in all cases at a minimum.
- Internal cabling | 75 Ohm cabling (equipment interconnect): All equipment interconnects must be done via DDF. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling | 75 Ohm cabling (labelling): All cables must be labelled at the equipment and at the frame, indicating the next hop (e.g. DDF and position) and the final destination (equipment and port ID).
- MW Radio | Rack installation: Dedicated racks to house the MW Radio IDUs will be installed. A DC headrail should be installed in the transmission cabinet with facility for a minimum of 5 x A and 5 x B MCBs. 6 A MCBs should be fitted as standard. The A and B sides will be connected to the respective A & B sides of the DC rectifier unit.
- MW Radio | Baseband cabling (75 Ohm Type 43 to 75 Ohm Type 43): To facilitate cabling between MW IDU equipment within the same rack, a DDF will be installed within the MW equipment rack.
- MW Radio | Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack, a 24-port balun should be installed within the same equipment rack.
- MW Radio | Baseband cabling (120 Ohm to 120 Ohm RJ45, TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW Radio | Baseband cabling (120 Ohm to 120 Ohm RJ45, Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW Radio | Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack, a 24-port SC optical patch panel should be installed within the same equipment rack.
- MW Radio | IF cable: All IF cables from the antenna support structure will terminate on N-Type bulkhead connectors and panel to the rear of the MW transmission rack. IF fly leads from the IDU will terminate on the required N-Type bulkhead connecting to the system ODU.
- MW Radio | IDU labelling: Near-end ID; far-end ID; local IP address and subnet; remote IP address and subnet; commissioned Tx power; commissioned RSL; Tx frequency (MHz).
- MW Radio | IF labelling: All IF cable labels should be prefixed with NSI. Far-end ID on fly lead; far-end ID at bulkhead connector; far-end ID inside of Roxtec; far-end ID outside of Roxtec.
- MW Radio | ODU and antenna labelling: Far-end ID at the ODU; far-end site name and ID; Tx frequency (MHz); polarisation; commissioned Tx power; commissioned RSL.

Table 10: Core site design requirements
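To illustrate how the rectifier sparing and 20% AC overhead rules in Table 10 can be applied, the short sketch below works through a hypothetical core-site DC load. The load figure, rectifier module rating, efficiency and supply sizing are illustrative assumptions, not figures taken from this document.

```python
# Worked sketch of the core-site power dimensioning rules:
#   * a minimum of 2 spare rectifier modules per cabinet
#   * AC feed dimensioned with at least 20% overhead
# All numeric inputs below are illustrative assumptions, not NSI figures.
import math

def dimension_rectifiers(dc_load_w, module_rating_w, spare_modules=2):
    """Number of rectifier modules needed to carry the load plus the spare allowance."""
    working = math.ceil(dc_load_w / module_rating_w)
    return working + spare_modules

def dimension_ac_feed(dc_load_w, rectifier_efficiency=0.92, overhead=0.20):
    """AC power (W) to provision, allowing for rectifier inefficiency and 20% overhead."""
    ac_input = dc_load_w / rectifier_efficiency
    return ac_input * (1.0 + overhead)

if __name__ == "__main__":
    site_dc_load_w = 6000    # assumed total transmission DC load at the site
    module_rating_w = 2000   # assumed rating of one rectifier module
    modules = dimension_rectifiers(site_dc_load_w, module_rating_w)
    ac_w = dimension_ac_feed(site_dc_load_w)
    print(f"Rectifier modules to install (incl. 2 spares): {modules}")
    print(f"AC supply to provision: {ac_w / 1000:.1f} kW")
```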
3.2
Backhaul sites
Backhaul sites are those locations where the transmission network aggregates large amounts of customer traffic onto high-speed transmission links. For TDM traffic this refers to N+0 (where N > 1) SDH backhaul, and for the MPLS network this refers to the Level 2 routing area. In all of these cases the equipment must be housed in a building or portacabin. Table 11 details the minimum requirements which must be satisfied when designing such sites; entries follow the same Requirement | Category: Description layout as Table 10.
- Network resilience | External optical cabling: For diverse fibre routes a minimum of 5 m physical separation is required from the external network through to the ODF presentation in the NSI equipment room.
- Network resilience | Internal diverse optical cable management: Intra-ODF cabling and ODF-to-equipment-rack cabling will not at any point share the same section of the fibre management system (FMS). Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP) will terminate on diverse ODFs and will at no point share sections of the FMS.
- Network resilience | Internal diverse electrical baseband cable management: Intra-DDF cabling and DDF-to-equipment-rack cabling will not at any point share the same section of the cable management infrastructure. Diverse ports (e.g. East/West) and protecting ports (e.g. MSP, ELP) will terminate on diverse DDFs and will at no point share sections of the cable management infrastructure. (Note: these guidelines apply to both 120 Ohm and 75 Ohm systems; for clarity, 120 Ohm distribution frames may also be referred to as patch panels.)
- Network resilience | Power: All equipment within the backhaul site will have diverse A & B DC power.
- Network resilience | Power cabling: Cables for A and B power will at no stage share sections of the cable management infrastructure.
- Network resilience | Rack layout: Core equipment (DWDM, IP/MPLS, ATM, SDH) operating in a resilient or load-sharing capacity should not be collocated within the same rack.
- Network dimensioning | Power: DC rectifiers should be dimensioned with consideration for a minimum of 2 x spare rectifier units within each cabinet. Once this limit is reached, additional rectifiers should be deployed to meet any additional requirements.
- Network dimensioning | Power: 3-phase AC supplies should be used in all cases.
- Network dimensioning | Power: AC power for each rectifier unit should be dimensioned with a minimum overhead of 20% to facilitate emergency expansions and inefficiencies within the rectifier units.
- Network dimensioning | Power: Sufficient battery backup should be in place to power all Tx equipment on site for a minimum of 8 hours (a worked example follows this table).
- Network dimensioning | Power: For remote locations, diesel generators should be in place to facilitate full Tx site operation for a minimum of 24 hours.
- Network dimensioning | Power cable labelling: All power cables must be labelled indicating the remote-end equipment and location. All MCBs must be labelled indicating the remote equipment ID.
- Internal cabling (MPLS/SDH) | Optical cabling (standard): Single-mode fibre should be used in all cases.
- Internal cabling (MPLS/SDH) | Optical cabling (equipment interconnect): All equipment interconnects must be done via ODF. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling (MPLS/SDH) | Optical cabling (labelling): All cables must be labelled at the equipment and at the frame, indicating the next hop (e.g. ODF and position) and the final destination (equipment and port ID).
- Internal cabling (MPLS/SDH) | Structured cabling (standard): CAT6 should be used in all cases at a minimum.
- Internal cabling (MPLS/SDH) | Structured cabling (equipment interconnect): All equipment interconnects must be done via patch panel. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling (MPLS/SDH) | Structured cabling (labelling): All cables must be labelled at the equipment and at the frame, indicating the next hop (e.g. patch panel and position) and the final destination (equipment and port ID).
- Internal cabling (MPLS/SDH) | 75 Ohm cabling (standard): RA7000 should be used in all cases at a minimum.
- Internal cabling (MPLS/SDH) | 75 Ohm cabling (equipment interconnect): All equipment interconnects must be done via DDF. No direct cabling from equipment to equipment should be implemented at any stage.
- Internal cabling (MPLS/SDH) | 75 Ohm cabling (labelling): All cables must be labelled at the equipment and at the frame, indicating the next hop (e.g. DDF and position) and the final destination (equipment and port ID).
- MW Radio installation | Rack installation: Dedicated racks to house the MW Radio IDUs will be installed.
- MW Radio equipment rack installation (power distribution) | Transmission rack: A DC headrail should be installed in the transmission cabinet with facility for a minimum of 5 x A and 5 x B MCBs. 6 A MCBs should be fitted as standard. The A and B sides will be connected to the respective A & B sides of the DC rectifier unit.
- MW Radio installation | Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack, a 24-port balun should be installed within the same equipment rack.
- MW Radio installation | Baseband cabling (120 Ohm to 120 Ohm RJ45, TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW Radio installation | Baseband cabling (120 Ohm to 120 Ohm RJ45, Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW Radio installation | Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack, a 24-port SC optical patch panel should be installed within the same equipment rack.
- MW Radio | IF cable: All IF cables from the antenna support structure will terminate on N-Type bulkhead connectors and panel to the rear of the MW transmission rack. IF fly leads from the IDU will terminate on the required N-Type bulkhead connecting to the system ODU.
- MW Radio | IDU labelling: Near-end ID; far-end ID; local IP address and subnet; remote IP address and subnet; commissioned Tx power; commissioned RSL; Tx frequency (MHz).
- MW Radio | IF labelling: All IF cable labels should be prefixed with NSI. Far-end ID on fly lead; far-end ID at bulkhead connector; far-end ID inside of Roxtec; far-end ID outside of Roxtec.
- MW Radio | ODU and antenna labelling: Far-end ID at the ODU; far-end site name and ID; Tx frequency (MHz); polarisation; commissioned Tx power; commissioned RSL.

Table 11: Backhaul site design requirements
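The battery and generator autonomy figures in Table 11 (8 hours of battery backup for all Tx equipment, and 24 hours of generator operation at remote sites) can be sanity-checked with a simple capacity calculation. The sketch below is illustrative only; the site load, system voltage, usable-capacity allowance and fuel burn rate are assumed values.

```python
# Sanity-check sketch for the backhaul-site autonomy rules:
#   * battery backup for all Tx equipment for a minimum of 8 hours
#   * diesel generator for a minimum of 24 hours at remote sites
# All numeric inputs are illustrative assumptions.

def battery_capacity_ah(load_w, hours, system_voltage=48.0, usable_fraction=0.8):
    """Battery capacity (Ah) needed for the given load and autonomy,
    with an allowance for depth of discharge and ageing."""
    return (load_w * hours) / (system_voltage * usable_fraction)

def generator_fuel_litres(load_w, hours, litres_per_kwh=0.4):
    """Rough diesel volume for the required generator runtime."""
    return (load_w / 1000.0) * hours * litres_per_kwh

if __name__ == "__main__":
    tx_load_w = 1500  # assumed total Tx equipment load at a backhaul site
    print(f"Battery: {battery_capacity_ah(tx_load_w, 8):.0f} Ah at 48 V for 8 h autonomy")
    print(f"Generator fuel: {generator_fuel_litres(tx_load_w, 24):.0f} litres for 24 h runtime")
```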
3.2.1 BT TT locations
One specific type of backhaul site is that co-located with the BT TT network. In this case certain restrictions apply in terms of space and the presentation of managed circuits, which must adhere to BT co-location rules. Specifically:
- NSI transmission equipment will be housed within the same rack.
- BT will present all circuits on a single ODF patch panel within the NSI equipment rack.
- Inter-shelf cabling can be run directly between the NSI equipment within the same equipment rack.
Table 12: BT TT co-location requirements
Within this section each site category is described in terms of equipment installation, power and baseband interconnection. The requirements below are grouped into three sets: indoor 19-inch rack installations, outdoor cabinet sites served by a site support unit, and IP/MPLS equipment installation alongside existing 3PP CPE.

The following requirements apply to indoor rack installations:

- Power | Transmission rack: 19-inch racks should be installed as standard. A DC head rail should be installed in the transmission rack with facility for a minimum of 10 x A and 10 x B MCBs. 6 A MCBs should be fitted as standard. The A and B sides will be connected to the respective A & B sides of the DC rectifier unit.
- Power | Transmission equipment: 2 x 63 A connections should be fitted as standard from the rectifier A & B supply to the respective A & B connections on the DC headrail. The transmission equipment A & B power will be connected to the respective A & B side of the DC head rail within the Tx rack.
- Power | Battery configuration: Battery backup for the Tx equipment should be configured for a minimum of 4 hours.
- Power | Labelling: All power cables will be labelled with the remote termination ID. All MCBs will be labelled with the remote equipment ID.
- Indoor equipment | Hardware installation: All indoor transmission equipment should be housed within a 19-inch rack.
- 3PP presentation | Optical: 3PP services will be presented on a 19-inch SC patch panel within the Tx rack.
- 3PP CPE | Hardware: All 3PP CPE will be housed within the Tx rack.
- MW Radio installation | IDU installation: All MW Radio IDU hardware is to be installed in a 19-inch Tx rack.
- MW Radio installation | Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack, a 24-port balun should be installed within the same equipment rack.
- MW Radio installation | Baseband cabling (120 Ohm to 120 Ohm RJ45, TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW Radio installation | Baseband cabling (120 Ohm to 120 Ohm RJ45, Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW Radio installation | Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack, a 24-port SC optical patch panel should be installed within the same equipment rack.
- MW Radio | IF cable: All IF cables from the antenna support structure will terminate on N-Type bulkhead connectors and panel to the rear of the MW transmission rack. IF fly leads from the IDU will terminate on the required N-Type bulkhead connecting to the system ODU.
- MW Radio | IDU labelling: Near-end ID; far-end ID; local IP address and subnet; remote IP address and subnet; commissioned Tx power; commissioned RSL; Tx frequency (MHz).
- MW Radio | IF labelling: All IF cable labels should be prefixed with NSI. Far-end ID on fly lead; far-end ID at bulkhead connector; far-end ID inside of Roxtec; far-end ID outside of Roxtec.
- MW Radio | ODU and antenna labelling: Far-end ID at the ODU; far-end site name and ID; Tx frequency (MHz); polarisation; commissioned Tx power; commissioned RSL.
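Because the IDU, IF-cable and ODU/antenna labelling fields above recur for every site type, it may help to generate the label text from a single per-link record so that all labels carry consistent data. The sketch below is one possible approach only; the record fields and label layout are assumptions, not an NSI labelling standard.

```python
# Sketch: generate MW radio label text from one link record so that the IDU,
# IF-cable and ODU/antenna labels stay consistent. Field names and layout are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class MwLink:
    near_end_id: str
    far_end_id: str
    far_end_site: str
    local_ip: str          # address/prefix, e.g. "10.0.0.1/30"
    remote_ip: str
    tx_power_dbm: float    # commissioned Tx power
    rsl_dbm: float         # commissioned receive signal level
    tx_freq_mhz: float
    polarisation: str      # "V" or "H"

def idu_label(link: MwLink) -> str:
    return (f"{link.near_end_id} -> {link.far_end_id} | "
            f"IP {link.local_ip} -> {link.remote_ip} | "
            f"Tx {link.tx_power_dbm} dBm | RSL {link.rsl_dbm} dBm | "
            f"{link.tx_freq_mhz} MHz")

def if_label(link: MwLink) -> str:
    # IF cable labels are prefixed with "NSI" and carry the far-end ID.
    return f"NSI {link.far_end_id}"

def odu_label(link: MwLink) -> str:
    return (f"{link.far_end_id} | {link.far_end_site} | "
            f"{link.tx_freq_mhz} MHz | Pol {link.polarisation} | "
            f"Tx {link.tx_power_dbm} dBm | RSL {link.rsl_dbm} dBm")
```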
The following requirements apply to outdoor cabinet sites:

- Power | Transmission rack: A 2 m site support unit should be installed on all outdoor cabinet sites as standard to facilitate Tx consolidation. A DC head rail should be installed in the site support unit with facility for a minimum of 10 x A and 10 x B MCBs. 6 A MCBs should be fitted as standard. The A and B sides will be connected to the respective A & B sides of the DC rectifier unit.
- Power | Transmission equipment: 2 x 63 A connections should be fitted as standard from the rectifier A & B supply to the respective A & B connections on the DC headrail. The transmission equipment A & B power will be connected to the respective A & B side of the DC head rail within the Tx rack.
- Power | Battery configuration: Battery backup for the Tx equipment should be configured for a minimum of 4 hours.
- Power | Labelling: All power cables will be labelled with the remote termination ID. All MCBs will be labelled with the remote equipment ID.
- Indoor equipment | Hardware installation: All new hardware will be installed in the site support unit.
- 3PP presentation | Optical: All new 3PP services will be presented on a 1U ODF within the site support unit.
- 3PP CPE | Hardware: All new 3PP CPE will be housed within the site support unit.
- MW Radio installation | IDU installation: All MW Radio IDU hardware is to be installed in a 19-inch Tx rack.
- MW Radio installation | Baseband cabling (75 Ohm Type 43 to 120 Ohm RJ45): To facilitate cabling between MW IDU equipment within the same rack, a 24-port balun should be installed within the same equipment rack.
- MW Radio installation | Baseband cabling (120 Ohm to 120 Ohm RJ45, TDM services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW Radio installation | Baseband cabling (120 Ohm to 120 Ohm RJ45, Ethernet services): Direct cabling between MW IDU equipment should be implemented within the same rack.
- MW Radio installation | Baseband cabling (optical): To facilitate cabling between MW IDU equipment within the same rack, a 24-port SC optical patch panel should be installed within the same equipment rack.
- MW Radio | IF cable: All IF cables from the antenna support structure will terminate on N-Type bulkhead connectors and panel to the rear of the MW transmission rack. IF fly leads from the IDU will terminate on the required N-Type bulkhead connecting to the system ODU.
- MW Radio | IDU labelling: Near-end ID; far-end ID; local IP address and subnet; remote IP address and subnet; commissioned Tx power; commissioned RSL; Tx frequency (MHz).
- MW Radio | IF labelling: All IF cable labels should be prefixed with NSI. Far-end ID on fly lead; far-end ID at bulkhead connector; far-end ID inside of Roxtec; far-end ID outside of Roxtec.
- MW Radio | ODU and antenna labelling: Far-end ID at the ODU; far-end site name and ID; Tx frequency (MHz); polarisation; commissioned Tx power; commissioned RSL.
The following requirements apply to IP/MPLS equipment installation at outdoor cabinet sites:

- IP/MPLS | Equipment installation: IP/MPLS equipment should be installed within the same outdoor cabinet as the existing 3PP CPE.
- IP/MPLS | Equipment installation: Where space restricts the possibility of installing the IP/MPLS equipment within the same cabinet, the IP/MPLS equipment should be housed in the site support unit.
- IP/MPLS: Where the IP/MPLS equipment and 3PP CPE are in separate outdoor cabinets but on the same plinth, all cabling should be run directly via the cable management systems in place between the outdoor cabinets. Where the outdoor cabinets do not share the same plinth, structured cabling is required between the outdoor cabinets, and the following rules apply for each service (optical, Ethernet and TDM):
  - 12-pair SM fibre suitable for outdoor installation should be run and presented on a 1U splice/presentation tray within each cabinet.
  - 12-pair CAT6 suitable for outdoor installation should be run and presented on a 1U patch panel within each cabinet.
  - 16-core coax suitable for outdoor installation should be run and presented on a 2U DDF within each cabinet.
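As an aid to site surveys, the inter-cabinet cabling rules above can be captured as a simple lookup from service type to the required outdoor cable and presentation. The sketch below mirrors the three rules listed; mapping each rule to the service it most plausibly serves, and treating the rules as data, are assumptions of this example rather than stated NSI requirements.

```python
# Lookup sketch for inter-cabinet cabling when IP/MPLS and 3PP CPE cabinets do
# not share a plinth. The table mirrors the rules in the text above.
INTER_CABINET_CABLING = {
    "optical":  "12-pair SM fibre (outdoor grade) on a 1U splice/presentation tray in each cabinet",
    "ethernet": "12-pair CAT6 (outdoor grade) on a 1U patch panel in each cabinet",
    "tdm":      "16-core coax (outdoor grade) on a 2U DDF in each cabinet",
}

def required_cabling(service: str, shared_plinth: bool) -> str:
    """Return the cabling requirement for a service between two outdoor cabinets."""
    if shared_plinth:
        return "Direct cabling via the existing cable management between the cabinets"
    try:
        return INTER_CABINET_CABLING[service.lower()]
    except KeyError:
        raise ValueError(f"Unknown service type: {service}") from None

if __name__ == "__main__":
    print(required_cabling("Ethernet", shared_plinth=False))
```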