Network Design Rules
29 June 2018
Copyright
This document is subject to copyright and must not be used except as permitted below or under the Copyright
Act 1968 (Cth). You must not reproduce or publish this document in whole or in part for commercial gain
without the prior written consent of nbn co limited (nbn). You may reproduce and publish this document in
whole or in part for educational or non-commercial purposes as approved by nbn in writing.
Disclaimer
This document is provided for information purposes only. The recipient must not use this document other than
with the consent of nbn and must make its own inquiries as to the currency, accuracy and completeness of this
document and the information contained in it. The contents of this document should not be relied upon as
representing nbn’s final position on the subject matter of this document, except where stated otherwise. Any
requirements of nbn or views expressed by nbn in this document may change as a consequence of nbn
finalising formal technical specifications, or legislative and regulatory developments.
Environment
nbn asks that you consider the environment before printing this document.
Contents
1 Introduction
2 Network Design
2.2.2.1 Fibre
2.3.2 Fibre
2.4.4 DSLAM
2.5.5 RF Transmission
2.5.6 RF Combiner
2.7.4.3 Transport
2.8.2 Satellite
2.8.3 RF Sub-System
Appendix A Definitions
Figures
Figure 1 Target High Level Architecture Context Diagram
Figure 7 Traffic paths available when failure occurs on return spur
Figure 10 BJL Aggregation Utilisation example for FTTP and FTTC
Figure 36 Examples of redundancy in the DWDM network (using 1+1 Client Protection and 1+1 Service Protection)
Figure 45 E-NNI Mode A connection model from an Access Seeker Provider Edge (AS PE)
Tables
Table 1 Summary of fibre & connector types used in the network
1 Introduction
These Network Design Rules (NDRs) provide an overview of the current physical network architecture and high
level design of nbn co limited’s (nbn’s) network and network components. The NDRs are linked to nbn’s
Special Access Undertaking (SAU) – the initial version of the NDRs (dated September 2012) satisfied the
requirements of clause 1D.7.1 upon commencement of the SAU on 13 December 2013. This updated version of
the NDRs builds on previous updates provided to the ACCC, and satisfies the requirements of clauses 1D.6,
1D.7.2 and 1D.7.4 (which relates to how the NDRs will be updated over time).
On 24 August 2016, the Shareholder Ministers provided nbn with a new Statement of Expectations (24 August
SoE), which replaced all previous Statements and contained changes compared with the previous Statements
provided to nbn. One of those changes related to the requirement for nbn to be mindful of certain market-based
principles. In particular, the 24 August SoE included the following key statements:
“This statement provides guidance to nbn to help ensure its strategic direction aligns with the
Government’s objectives for the delivery of the network. This statement provides nbn with flexibility
and discretion in operational, technology and network design decisions, within the constraints of the
Equity Funding Agreement with the Commonwealth, and the Government’s broadband policy
objectives...
The Government is committed to completing the network and ensuring that all Australians have access
to very fast broadband as soon as possible, at affordable prices, and at least cost to taxpayers. The
Government expects the network will provide peak wholesale download data rates (and proportionate
upload rates) of at least 25 megabits per second to all premises, and at least 50 megabits per second
to 90 per cent of fixed line premises as soon as possible. nbn should ensure that its wholesale services
enable retail service providers to supply services that meet the needs of end users.
To achieve these objectives nbn should roll out a multi-technology mix network and build the network
in a cost effective way using the technology best matched to each area of Australia. nbn will ensure
upgrade paths are available as required.
nbn should pursue [the broadband policy objectives] and operate its business on a commercial basis.
In doing so nbn should be mindful of the following principles:
- Rolling out the network: When planning the rollout, nbn should prioritise locations that are
poorly served, to the extent commercially and operationally feasible. During the rollout, nbn should
be guided by the following goals: service quality and continuity for consumers; certainty for retail
service providers and construction partners; and achievement of rollout objectives as cost-
effectively and seamlessly as possible. nbn should apply the Government’s new developments
policy.
- Vehicle for market reform: The Government expects the network to be a wholesale only access
network, available to all access seekers, that operates at the lowest practical level in the network
stack. The completion of the network will enable the structural separation of Telstra and a more
competitive market for retail broadband and telephony services. nbn should retain optionality for
future restructuring or disaggregation.
- Market environment: nbn is a commercial entity operating in a market environment and can
compete and innovate like other companies in this environment in accordance with legal and policy
parameters.
- Funding: Taxpayers have made a substantial investment in nbn. The Equity Funding Agreement
imposes a cap on the maximum amount of equity funding that will be provided by the Government.
nbn needs to remain disciplined in its operations, proactively managing costs to minimise funding
requirements and working with the Government to optimise its capital structure.
- …”
In accordance with the 24 August SoE, nbn will continue to plan and design its network to pursue the
Government’s broadband policy objectives, while operating on a commercial basis and being mindful of the
market principles set out above.
The NDRs now include the network design for FTTx (including Fibre to the Premises (FTTP) (GPON), Fibre to the
Node (FTTN), Fibre to the Building (FTTB) and Fibre to the Curb (FTTC)), Hybrid Fibre Coax (HFC), TD-LTE
Wireless, Satellite Access, Transport and Aggregation network domains.
The NDRs have been updated in accordance with clause 1D.7.4 of the SAU to reflect the ongoing development
of, and changes to, nbn’s network architecture since nbn’s last update. This includes (amongst other things)
changes to the network design to account for the upcoming introduction of Business Satellite Services, which
nbn expects to release in the second quarter of 2019. nbn is currently consulting with its customers on the
Business Satellite Service and may make further changes to its network design as part of more detailed design
and implementation work, and customer feedback. If such developments entail further changes to nbn's
network, then nbn will include such changes in the next relevant version of the NDRs.
Going forward, nbn will continue to periodically update the NDRs to take account of variations, augmentations
or enhancements to the design, engineering and construction of nbn's network to reflect the development of
new products consistent with, or in connection with, the 24 August SoE (to the extent that those new products
give rise to a variation, change, augmentation or enhancement to the design, engineering or construction of
nbn's network). An itemised summary of the substantive changes made in this updated version of the NDRs
relative to the June 2017 version is provided in Appendix B. A list of non-substantive changes is also now
provided in Appendix B, which covers refinements, clarifications and corrections to this document.
2 Network Design
2.1 High Level Design Overview
The nbn™ network has been divided into domains to allow for communication of the overall solution.
The target high level architecture is shown in Figure 1. This view includes the IT platforms, which are out of
scope for this High Level Design but must be interfaced with to achieve a complete working solution. The target
high level architecture will not be achieved for a number of years, and will be implemented in steps determined
by the services introduced.
The focus of this document is captured in Figure 2. These are the domains for Fibre (GPON), Copper, HFC, LTE
Wireless and Satellite.
- Access: Various technology solutions to allow End Users to be connected to the nbn™ network. The long term technology domains covered by the NDRs are Fibre (GPON), Copper (VDSL), HFC, Wireless (LTE) and Satellite.
- Transport: Provides transparent connectivity between network elements.
- Aggregation: Takes many interfaces from network elements and aggregates them for presentation on fewer interfaces back into network elements and vice versa. Also provides the point of interconnect for access seekers, the customers of nbn™ services.
- Service Connectivity Network (SCN): The core IP/MPLS network; provides service connectivity for satellite access (RF Gateway to DPC and DPC to satellite central POI site) and flexibility of connectivity within the Aggregation network for support of HFC.
- National Connectivity Network: The network that does not carry Access Seeker to/from End User traffic. This network carries OA&M related traffic and signalling traffic.
- Network Management: Systems that support the network carrying the customer service traffic. Functions include element management, time information, Authentication, Authorisation and Accounting (AAA) for access to network elements, and connectivity between the equipment providing Network Management functions and the Network Elements themselves.
- Control: Functions required to operate in real time (or very near real time) to support the establishment of an End User connection through to the Access Seeker.
- Lawful Interception: Functions that support the delivery of replicated customer traffic to Law Enforcement Agencies as legally required by the Telecommunications (Interception and Access) Act 1979.
- IT Platforms: Platforms supporting systems required for providing a customer portal, order acceptance, enforcement of business rules (e.g. number of active data services per NTD), activation, fulfilment, assurance and billing. Also includes systems required to interact with Element Management systems to operate the network carrying the customer traffic.
[Figure 2 diagram: Access, SCN and Aggregation domains from End Users to Access Seekers: Wireless (LTE) CPE/NTD via eNodeB and PDN GW with Ethernet trunking; Satellite CPE/NTD via RF Gateway and DPC/BSS to the POI; Fibre CPE/NTD via FDH/FJL, OLT/AAS and DWDM transport to Access Seekers; HFC (DOCSIS) CPE/NTD via Optic Node, RF Tx/Combiner/Transport and CMTS; and Copper (xDSL) modem via DSLAM and AAS over the fibre network or a 3rd party managed service.]
The nbn™ physical network architecture is designed to provide the required connectivity for nbn™ services
whilst allowing for the required level of availability and resiliency in an efficient manner.
Each domain has a set of physical network elements that deliver the functionality and interfaces to support the
nbn™ services.
- GPON NTD (ONT): The Optical Network Terminal located at the end-user premises uses GPON technology to extend optical cable from an OLT shelf. It delivers UNI-V and UNI-D capabilities to a premises.
- Fibre Network (FN): Provides optical pathways between the GPON NTD and OLT. Sections of the Fibre Network are shared by other access domains.
- OLT: The Optical Line Terminal provides FAN site processing, switching, and control functions. The OLT aggregates GPON networks into a number of network-facing 10Gbps links.
The GPON NTD terminates the incoming physical fibre at the end-user premises and provides one or more User
to Network Interfaces (UNI). A number of GPON NTD varieties will be used to suit different circumstances, end-
user types and interface quantities. The variants included are Indoor and Outdoor Single Dwelling GPON NTDs;
both are described below.
Indoor GPON NTD variants should be wall mounted inside the nbn™ GPON NTD enclosure. Outdoor GPON NTD
variants will be permanently fixed to a surface (e.g. interior or exterior wall).
The GPON NTD enclosure combines mounting for a GPON Indoor NTD and a standard power supply with fibre,
power and End User service cable management.
There are two basic types of power supply available for each GPON NTD variant: an AC to DC standard power
supply, or a power supply with battery backup. The standard power supply accepts ~240 VAC input and provides
12 VDC towards the GPON NTD via a specialised cable with captive connectors (indoor GPON NTD).
The power supply with backup also accepts ~240 VAC input but includes a battery facility that continues to
provide 12 VDC towards the GPON NTD during an AC mains failure event, until the battery falls below its
minimum energy capacity. DC power is fed to the GPON NTD via a specialised cable with captive connectors
(indoor GPON NTD) or screw terminals (outdoor GPON NTD); the cable also allows simple power related alarms
to be forwarded from the UPS unit to the GPON NTD.
The UPS can be installed with or without a battery, allowing an Access Seeker or end-user to provide a battery
at a later date, and perform maintenance of the battery facility without requiring nbn involvement. When there
is no battery installed, the UPS operates like a regular AC to DC power supply.
Both the AC to DC power supply and UPS must be installed indoors. When installed with an Outdoor GPON NTD,
the power cabling must extend within the premises to the power supply or UPS.
The NBN Co Fibre Network provides the physical connectivity between the FAN site and the active equipment in
the street or building.
The passive network component of a fixed network build comprises a significant part of the overall fixed
network deployment. It is generally disruptive and expensive to augment or modify, so it is important that the
architecture, planning, design, and installation accommodate long term needs and future growth and capacity.
It is nbn's aspiration that these facilities will be suitable not only for the nbn™ network's initial technology
choice, but for fixed access technologies developed in the future, whatever they may be.
In order to provide a functional and operational network a high degree of uniformity of the passive
infrastructure is desirable. Uniformity of design and construction facilitates education and training, the
availability of competent staff, and economies of scale and efficiencies in the supply industry in general.
Regionalised or localised variations in design or construction practices will be minimised where practical as
these differences may disadvantage the affected communities due to the increased costs and complexity of
managing variations in technical and operational processes.
The NBN Co Fibre Network includes the Distribution Fibre Network (DFN), the Local Fibre Network (LFN) and
Premises Fibre Networks (PFN). The DFN provides the connectivity from the Fibre Aggregation Node (FAN) to
the first point where individual fibres can be accessed for a Distribution Area (refer to 2.2.4.1 for description of
a Distribution Area). The LFN provides the connectivity from this point towards the premises. The PFN provides
the connectivity from the street through to the premises. There are currently two DFN architectures: Loop and
Star. Star DFN deployments will be the default design used for all fixed access networks, providing a more cost
effective solution. There are also two LFN architectures: LFN and Skinny LFN.
The common denominator of this network is the actual physical fibre strands. nbn has selected ribbon fibre
technologies due to the cost and labour savings associated with the use of this technology where high fibre
numbers are required. The Skinny LFN uses lower fibre count cables with stranded fibres. All fibres are
accessible individually. This provides greater flexibility in relation to the deployment of the LFN.
2.2.2.1 Fibre
Ribbon Fibre
Ribbon fibre technology significantly increases the fibre cable core counts available, and also provides
significant time savings when joining fibres via fusion splicing compared to stranded or single fibre cables,
particularly where fibre counts are greater than 144. Ribbon fibre cables are available with fibre counts ranging
between 12 and 864 fibres, and these are virtually identical in size and handling characteristics to similarly
constructed single fibre cables.
The cables match the core counts required where all premises require a fibre as part of the access solution (12,
72, 144, 288, 432, 576, and 864) and the 12 fibre matrix suits the modularity of the Factory Installed
Termination Systems (FITS). The FITS is a pre-connectorised system which pre-terminates fibres in the Local
Fibre Network onto multi-fibre connectors in groups of 12 and multi-ports are then connected into these as
required.
Stranded Fibre
The stranded fibre cable permits access to individual fibres, which in turn allows for a greater degree of
flexibility within the LFN. For example, a geographic area may require an allocation of x fibres for an FTTN
node, plus x fibres for FTTB, plus x fibres for Fibre on Demand. This approach satisfies the initial fibre
allocation requirement and also provides flexibility for in-fill allocation, as fibres can be accessed and
manipulated at the individual strand level rather than in groups of 12 fibre ribbons.
The following table provides a summary view of what fibre types and connectors are used in the network:
DFN – Loop (ribbon fibre; all fibre optic cable):
  Splice: 36 fibres (3 ribbons) per FDH, 2 ribbons spliced, 1 spare
  Splice: 288 fibre to multiple TFANs
  Splice: 72 fibre to single TFAN
DFN – Star (ribbon fibre; all fibre optic cable):
  Splice: 12 fibres (1 ribbon) per Distribution Area. DFN trunk cable has >33% spare capacity (for in-fill growth, and to support future technology and product directions)
  Splice: input tail or individual fibres at a node (e.g. FTTN Cabinet)
LFN (ribbon fibre; all fibre optic cable):
  Splice: FDH (72) tail to factory installed patch panel
  Connector: FDH patching = SC/APC
  Splice: FDH tail (e.g. 576) to Multiport tail (12)
  Connector: Multiport tail = 12 fibre tether optical connector (proprietary format)
  Connector: Multiport single customer connection = single fibre optical connector (proprietary format)
LFN – Aerial (ribbon fibre; all fibre optic cable):
  Splice: FDH tail (e.g. 576) to aerial cable
  Splice: aerial cable to Multiport tail (12)
  Connector: Multiport tail = 12 fibre tether optical connector (proprietary format)
  Connector: Multiport single customer connection = single fibre optical connector (proprietary format)
LFN – Skinny (stranded fibre; all fibre optic cable):
  Splice: 12 fibre ribbon splice at DJL to FSD (Flexibility Sheath Distribution); FSD installed into FJL (Flexibility Joint Location) and fibre de-ribbonised
  Splice: individual FSD fibres spliced to 1, 4, or 12 fibre stranded cables
  Splitter: individual FSD fibres spliced to 1st stage passive splitter, outgoing splitter tails spliced to 1, 4, or 12 fibre stranded cables (FSL)
  Splitter: individual FSL fibres spliced to 2nd stage passive splitter, outgoing splitter tails spliced to 1 fibre SSS cables
  Connector: Multi-port tail or equipment tail (hardened), single fibre connector
  Splitter: connection to 3rd stage optical splitter (an example use of this is uplift from FTTC to FTTP)
  Connector: Multiport single customer connection = single fibre optical connector (proprietary format)
LFN – Skinny Aerial (ribbon fibre; all fibre optic cable):
  Splice: 12 fibre ribbon splice at DJL to FSD (Flexibility Sheath Distribution); FSD installed into FJL (Flexibility Joint Location) and fibre de-ribbonised
PFN – SDU (single fibre drop; all fibre optic cable, fibre optic single drop cable and Multiport feeder cable):
  Splice: single fibre fusion splice to PCD & FWO
PFN – MDU (ribbon fibre; all fibre optic cable and fibre optic aerial cable):
  Splice: PDH, FDT tail, FWO
  Connector: PDH, FDT, FWO, NTD = SC/APC
  Connector: FDT, FCD = MPO/APC
PFN – MDU (single fibre drop; all fibre optic cable and fibre optic single drop cable):
  Splice: single fibre fusion splice to PCD
Aerial cable standard: IEEE 1222 Standard for All Dielectric Self Supporting Aerial Fibre Optic Cable. Multiport: MPO/APC.
The Distribution Fibre Network (DFN) provides the underground fibre pathways between the FAN sites and the
first point where individual fibres can be accessed for each Access Distribution Area.
The Star DFN topology is the default for the Fixed Access build (post-Multi Technology Mix) to achieve cost
efficiencies in construction.
A Star DFN topology starts from a FAN site and extends outwards, with nodes connecting into it along its
length. The Star architecture results in the fibre path for nodes being unprotected.
The DFN is routed to pass near the likely node locations without requiring significant construction works.
Figure 5 illustrates the logical connectivity principles for DSS Aggregation, Trunk, Branch, Feeder and HSD
configurations.
[Figure 5 diagram: Star DFN logical connectivity from the FAN, showing a 576f DSS Aggregation cable, 288f DSS Trunk sections and a 72f Branch.]
DFN Connectivity
The DFN provides connectivity to the nodes located within the SAM modules (refer to Figure 5).
Multiple cables from different DFN cables can share the same duct-line, as well as sharing duct-lines with TFN
and LFN cables.
Nodes are connected to the DFN cables and each node is allocated multiples of 12 fibres towards the FAN site.
The Distribution Fibre Network (DFN) provides the underground fibre pathways between the FAN sites and the
first point where individual fibres can be accessed for an Access Distribution Area. The DFN has previously been
installed primarily for the support of GPON in a loop topology, starting from a FAN site and finishing at the
same site, with FDHs connecting into it along its length. In this configuration, the DFN cables typically have
higher fibre counts, ranging between 288 and 864 fibres. The DFN is also notionally allocated an A and B
direction to assist in the identification of upstream connections at the FDH, where A indicates a clockwise
direction and B an anticlockwise direction.
The FDHs are street side externally rated cabinets which are used to house the GPON splitters used to facilitate
connectivity between the DFN and the Local Fibre Network (LFN). The FDH also provides the ability to provide
direct connectivity between the DFN and LFN.
The DFN was also required to provide diverse pathways for point to point services from FDHs where required.
The DFN fibre link into each FDH is provided by a single cable that carries both the A and B directions within
the same cable sheath.
DFN Diversity
The FTTP network has been extensively modelled for availability percentages and expected downtime due to
faults, and this modelling has recognised that the DFN has a significant input into the availability calculations.
This is a direct effect of the high fibre counts and distances required for the DFN.
The availability target indicates that a link distance of 4500 metres can be applied to a single connection
pathway between the FAN and the farthest FDH or to a spur off the DFN without the need to provide diversity.
For practical purposes this distance is reduced to 4000 metres to account for unforeseen alterations to the
network in the construction phase and to provide flexibility for future maintenance.
This calculation allows the DFN to be installed in topologies other than fully diverse.
The default position for the DFN was to provide connection diversity to each FDH for the provision of diverse
services and to provide the capability for a quicker Mean Time to Repair (MTTR). The quicker MTTR is achieved
through the use of the diverse path available at the FDH to move affected services to the other “side” of the
DFN effectively bypassing the affected cable link.
[Figure diagram: diverse DFN loop from the FAN connecting FDHs (e.g. FDH 3, 4, 9 and 10) in both A and B directions.]
An alternate position is for a collapsible loop (referred to as a return spur) to be installed from the main diverse
DFN, up to 4000 metres, with a maximum of 2 FDHs connected into the return spur. This topology does not
permit service restoration patching for the two connected FDHs when the return spur is interrupted but does
allow for restoration patching for the remainder of the DFN.
[Figure diagram: return spur off the diverse DFN between FDH 3 and FDH 4, serving FDH 5.]
Another deployment option is for a spur in which only one pathway (either A or B and referred to as a single
spur) is presented to the FDHs.
DFN Connectivity
The DFN is preferably installed underground, and by exemption aerially, and in the past was designed in a ring
topology starting and finishing at the same FAN site. This effectively divided the DFN cable ring into an ‘A’ and
a ‘B’ side.
The DFN provides connectivity to the Fibre Distribution Hubs (FDH) located within the FSAM modules and
provides two diverse fibre pathways back into the FAN site for each FDH (refer to Figure 6).
Any individual DFN ring cable will be separated from any other portion of the same ring by the most practically
achievable distance. Multiple cables from different DFN ring cables can share the same duct-line, as well as
sharing duct-lines with TFN and LFN cables.
Fibre Distribution Hubs are connected to the DFN ring cables and each FDH is allocated fibres in both A and B
directions towards the FAN site. This ring allocation provides a diverse pathway for end-user connectivity and
allows for temporary service restoration when required. In the event of a serious service impacting event (e.g.
cable cut, etc.) the services that are connected to the affected side can be manually re-patched at the FDH site
and FAN site to the diverse pathway.
Therefore the DFN must be capable of servicing these connections in both directions within the optical
constraints.
Whilst the DFN provides diversity from each FDH this diversity is not transferred into the LFN (the LFN is a star
topology) and therefore the DFN should also be designed to align with potential users of point to point services
(e.g. banks, schools, universities, business parks) to allow the LFN to provide the diverse links.
The Flexibility Joint Location (FJL) is an aggregation and connection point between the Distribution Fibre
Network (DFN) and the Skinny Local Fibre Network (LFN). An FJL is used to permit individual fibre access and
management between the incoming DFN and the outgoing Skinny LFN.
Within the FJL are splice trays capable of single fibre management to permit direct splicing between the DFN
and Skinny LFN for all fibre connectivity requirements.
The FJL has a capacity of up to 144 fibre splices and permits multiple configurations of splitters as required for
the DA specific infrastructure. There is no patching facility within the FJL and all fibre connections are spliced.
Extra DFN capacity can be installed into the FJL as required, and an FJL can also be used to connect more FJLs
to the same incoming Flexibility Sheath Distribution (FSD) cable depending on availability of spare fibres.
Split ratios are generally kept to 1:32 with the optical budget being managed to permit greater degrees of
splitting if required for network growth. Splitters may also be reconfigured within the FJL, if required for
capacity.
[Figure diagram: Flexibility Joint Location connectivity, showing a 12F FSD spliced from the DJL into the FJL; 1:4 and 1:8 splitters feeding 1F SSS cables to MPTs; a 1:32 MDU splitter feeding 1F LSS cables; and 1 or 4 fibre LSS cables towards optical fibre/copper DSLAM and FoD connections.]
The Fibre Distribution Hub (FDH) is an aggregation and connection point between the Distribution Fibre
Network (DFN) and the Local Fibre Network (LFN). The FDH is available in 432, 576, and 864 LFN variants,
however 576 has been the preferred choice.
The FDH is an environmentally secure passive device installed on street frontages and serves as a centralised
splitter location. The splitter modules housed within the FDH provide a one-to-many relationship between the
in-coming DFN fibres and the out-going LFN fibres. In keeping with the requirements of the GPON equipment,
the splitters used are a 1:32 passive split.
The standard and most common connectivity type is a PON connection between an end-user and the output leg
of a splitter module.
The splitter module is in a 1:32 planar configuration and these modules are pre-terminated with 1 x DFN
SC/APC fibre lead and 32 x LFN SC/APC fibre leads and these connect into the network via the patch panel
array within the FDH.
The Skinny Local Fibre Network (LFN) is installed from the FJL to the end user premises in a star topology with
no inherent capacity for diversity.
The Skinny LFN architecture is the default for greenfield deployments, and is used to support the multi-
technology mix model to achieve cost efficiencies in construction.
The Skinny LFN comprises a 12 fibre ribbon cable for the FJL feeder sheath distribution (FSD) from the
Distribution Joint Location (DJL), and either a single fibre pre-connectorised cable, the Splitter Sheath
Segment (SSS), to connect to SMPs, or a Flexibility Sheath Local (FSL) cable containing 4 or 12 fibres to
connect to Breakout Joint Locations (BJLs).
The FSL cables are used to aggregate the individual SSS cables into either a 4 or 12 fibre stranded fibre cable
in conjunction with a Breakout Joint Location (BJL) to minimise the hauling of multiple SSS cables and provide
high utilisation of existing duct space (see Figure 10).
[Figure 10 diagram: FJL feeding 1 fibre SSS cables directly to SMPs, and 4 or 12 fibre FSL cables via a BJL to SMPs (FTTP) or DPUs (FTTC).]
Figure 10: BJL Aggregation Utilisation example for FTTP and FTTC
The FSL cables may also be used to connect multiple BJLs to the same sheath with up to 3 BJLs sharing the
same FSL cable and fibres allocated accordingly (see Figure 11).
[Figure 11 diagram: up to 3 BJLs sharing the same FSL cable, each breaking out up to 4 x 1 fibre SSS connections to SMPs.]
FTTP: The FJL provides a 1:4 or 1:8 way split, spliced through the BJL. Downstream the SSS cables are used
to connect the Splitter Multi-ports with integrated splitters factory-installed. The Splitter Multi-ports (SMP) used
are commonly 1:8, however 1:4 may be deployed and each variant has a single fibre environmentally hardened
connectorised input with each splitter output leg terminated to another single fibre environmentally hardened
connectorised output.
FTTC: The FJL provides a 1:2 way split, spliced into the BJL. Downstream the SSS cables are used to connect
to 4 port DPUs. The BJL contains a 1:4 optical split and has a single fibre environmentally hardened
connectorised DPU input.
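The splitter staging above reduces to simple multiplication. The following sketch (illustrative Python, not drawn from the NDRs; the labels are ours) composes the stage ratios and checks them against the 1:32 figure noted earlier for the FJL:

from math import prod

# Stage split ratios as described in this section (labels are ours)
DEPLOYMENTS = {
    "FTTP, 1:4 FJL splitter + 1:8 SMP": (4, 8),
    "FTTP, 1:8 FJL splitter + 1:4 SMP": (8, 4),
    "FTTC, 1:2 FJL splitter + 1:4 BJL": (2, 4),
}

for name, stages in DEPLOYMENTS.items():
    total = prod(stages)  # split ratios compose multiplicatively
    status = "within" if total <= 32 else "exceeds"
    print(f"{name}: cumulative split 1:{total} ({status} the 1:32 norm)")

The FTTC case composes to 1:8, which is consistent with the headroom held for the third stage optical splitter used for uplift from FTTC to FTTP.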
For aerial deployment, the LFN is a factory installed termination system (FITS) that utilises factory installed
splice closures, referred to as overmoulds, to present the multi-fibre connector at each required location. The
Splitter MPTs and DPUs are then connected in a similar manner as the underground method.
Each SDU is allocated one port at a SMP in the LFN. This port is utilised for the service connection. An allocation
of 1 spare port per 1:8 SMP is provided and, although allocated, is available for use at the MPT for any non-
addressable connections, extra connections per SDU, etc.
Non-addressable locations can be identified as locations without a physical address, e.g. power transformers,
traffic light controllers, etc.
Any future in-fill or augmentation will be provided on an as needed basis and the LFN expanded accordingly.
Extra ports or fibre may be allocated to provide future capacity. Typically, fibre capacity is held within the FJL
to permit its use anywhere within the DA and for any technology option.
The Local Fibre Network (LFN) is installed from the FDH to the end-user premises in a star topology with no
inherent capacity for diversity. Diversity is achieved by extending the LFN from another FDH, dependent on
that FDH being located in a separate geographic pathway.
LFN cables are typically smaller fibre count cables, ranging between 72 and 288 fibre counts, and are installed
in the aerial corridor and underground pathways typically alongside property boundary street frontages. The
LFN cables are presented at the FDH on a connector array to facilitate connection to the upstream DFN either
via a PON splitter or direct connection.
For an underground deployment, the individual LFN cables are extended from the FDH into the Local Fibre
Network to multiple centralised splice closures or Access Joints (AJL) where the cables are joined into smaller
fibre count “tether” cables via splicing. The tether cable is factory terminated on one end only with an
environmentally hardened multi-fibre connector and is installed between the AJL and local fibre pits where they
provide a connection point for the factory terminated Multi-Ports (MPT). Service fibre drops to End User
premises are connected into the MPTs as required.
For aerial deployment, the LFN is a factory installed termination system (FITS) that utilises factory installed
splice closures, referred to as overmoulds, to present the multi-fibre connector at each required location. The
MPTs are then connected in a similar manner as the underground method.
Each SDU is allocated one fibre in the LFN between the MPT and the FDH. This fibre is utilised for the service
connection. A second fibre is allocated to the same Multiport location that the SDU will connect to and, although
allocated, is available for any use at the MPT for any non-addressable connections, extra connections per SDU,
etc.
Non addressable locations can be identified as locations without a physical address e.g. power transformers,
traffic light controllers, etc.
Extra fibres shall be allocated to provide future capacity. It is recognised that there are different levels of
growth expected across Australia. In areas of low growth the effective allocation on average is one and a half
fibres per premises. In other areas, the effective allocation on average is three fibres per premises.
MDU layouts are varied and can be broadly divided into two configurations, horizontal and vertical MDUs. These
two configurations can co-exist on sites, however the horizontal MDUs will generally be serviced via an SDU-like
architecture and the vertical serviced via the MDU specific architecture.
The internal configuration of MDUs is based on a number of repeatable modules: the Premises Distribution Area
(PDA), the Horizontal Distribution Area (HDA), the Backbone Distribution Area (BDA) and the Fibre Distribution
Area (FDA).
The PDA, HDA, BDA, and FDA are repeatable modules used within an MDU to achieve fibre connectivity. These
modules are utilised as design tools to provide conceptual boundaries to clearly identify the location of various
devices and their functionality.
The physical topology is a star network radiating out from either the FDH or the Premises Distribution Hub
(PDH) which may be either external or internal.
The fibre configurations within MDUs are derived from the fibre allocations for the premises within the building.
Residential MDUs:
The fibre allocations within MDUs differ due to the minimal expansion work that is performed within MDU
premises. Realistically a residential MDU will maintain its size due to the relatively contained nature of these
types of buildings. Additionally most MDUs have already maximised the available land.
Residential premises within MDU sites are therefore allocated 1.5 fibres per premises with the first fibre
allocated for service and the ½ fibre allocated for future connectivity. In effect, the ½ fibre is combined with
another ½ fibre from an adjacent MDU premises and the ratio of 3 fibres per 2 premises is used.
Fibre counts are rounded up to the nearest whole number when dealing with the ½ fibre allocation.
Example: An MDU has 7 residential premises to be serviced, with each premises allocated 1.5 fibres for a total
fibre requirement of 10½ fibres. Therefore the total fibre count required is 11 fibres.
Commercial MDUs:
Commercial entities within premises are allocated 2 fibres. Due to the transient nature of some commercial
entities combined with changing floor space usage (for example a commercial floor in an MDU could
accommodate either one or multiple different entities) the minimum fibre count allocated per floor is 12 fibres.
This fibre count is sufficient to service up to 6 commercial entities per floor and if there are additional premises
over the initial 6 then the fibre allocations are added to the initial 12 fibre allocation.
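The allocation rules above reduce to simple arithmetic. The following Python sketch is illustrative only (the function names are ours); it reproduces the residential worked example and the commercial per-floor rule:

from math import ceil

def residential_mdu_fibres(premises: int) -> int:
    # 1.5 fibres per premises, rounded up to the nearest whole fibre
    return ceil(premises * 1.5)

def commercial_floor_fibres(entities: int) -> int:
    # 2 fibres per entity, minimum 12 fibres per floor (covers 6 entities);
    # entities beyond the first 6 add to the initial 12 fibre allocation
    return 12 + 2 * max(0, entities - 6)

print(residential_mdu_fibres(7))    # 11, matching the worked example above
print(commercial_floor_fibres(3))   # 12, the per-floor minimum
print(commercial_floor_fibres(8))   # 16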
The following configurations are for explanatory purposes to illustrate the hierarchy of component connectivity.
These examples use fibre demand as opposed to premises numbers as a method to illustrate the functions and
connectivity of the hardware.
This configuration will utilise FDTs fed via a Multiport Sheath Segment (MSS), a 12 fibre tether, from an AJL in
the general vicinity to the Cable Transition Location (CTL) which provides the transition point from external to
internal cable types.
[Figure diagram: MSS (12 fibre tether) from an AJL through the CTL to an internal FDT.]
The Fibre Distribution Terminal (FDT) provides the breakout point for the individual premises drop cables.
Configuration shall be either a single 24 Fibre FDT or two 12 Fibre FDTs.
This configuration will utilise FDTs fed via a Local Sheath Segment (LSS) either from an LJL or AJL.
[Figure diagram: LSS from an LJL or AJL through the CTL to internal FDTs.]
The FDT configuration shall be a combination of either 24 Fibre FDTs or 12 Fibre FDTs.
Fibre allocations are based on using the external FDH as the centralised splitter location, and using these
splitters and the fibres in the LFN to facilitate connection to the MDU. However, splitters connected to the LFN
may be used for network augmentation when the LFN is at, or near, full capacity.
This configuration will utilise either a Fibre Collector Distributor (FCD) connected to multiple FDTs, or multiple
FDTs, both fed via an LSS from a Local Joint Location (LJL) in the LFN.
[Figure diagram: LSS from the LJL through the CTL to an internal FCD feeding multiple FDTs, or directly to FDTs.]
Fibre allocations are based on using the external FDH as the centralised splitter location, and using these
splitters and the fibres in the LFN to facilitate connection to the MDU. However, splitters connected to the LFN
may be used for network augmentation when the LFN is at, or near, full capacity.
This configuration will utilise a Premises Distribution Hub (PDH) sized at either 144 or 288 fibres dependent on
the fibre demand and will be connected into a DJL via a Distribution Sheath Segment (DSS).
The local fibre network within the MDU will be fed via a combination of FCDs connected to the PDH and FDTs
connected directly to the PDH where required.
[Figure diagram: DSS from the DJL through the CTL to an internal PDH, feeding FDTs directly and via an FCD.]
The DFN connectivity will be the same construct as the external FDH. In the case of a ring DFN architecture the
A and B side of the DFN share the same sheath to the PDH.
For new developments, two additional interfaces to the fibre network may be provided to allow delivery of
customer content directly over a wavelength on the fibre to the end user premises.
- RF Receiver: Generally located at the end user premises. The RF Receiver splits out the wavelength carrying the nbn Ethernet broadband service to the NTD, and the wavelength carrying the Customer content signal to an RF UNI interface.
- RF Combiner: Generally located in or near an nbn FAN site, or in a pit appropriate for supporting connectivity at the last DJL before the new estate LFN. The RF Combiner is the NNI, providing the injection point for the wavelength carrying the Customer content signal.
[Figure diagrams: RF content injection arrangements, showing (1) an RF Combiner (UNI-RFo) in the RF Rack at the FAN site, injecting alongside the OLT via the ODF onto the Fibre Network, with an RF Receiver at the nbn™ Premises Connection Device presenting UNI-RFi to the nbn™ NTD, with a 12 km maximum reach; and (2) injection at a "meet-me" location from a CSP headend via TVL, WDM and NNI-RFo, with a TVR and TVD cable at the nbn™ Premises Connection Device, internal and external of the new development estate, with a 6 km maximum to the meet-me location and 12 km maximum overall.]
An option is also provided for content injection at MDUs, using an RF combiner with splitter in one unit.
The optical budget allocation provides the most significant design constraint of the DFN and LFN Network. The
combination of fibre strand attenuation, splice losses, passive optical splitters, connectors, and operating
headroom provide an optical limitation on the distance between the Optical Line Terminals (OLT) located in FAN
sites and the terminating device, for example, the GPON NTD located at end-user premises. Distance and fusion
splicing are generally the only variables in the external plant.
These limitations are based on the optical equipment transmit and receive parameters which are set via the
optical devices as referenced in the Gigabit Passive Optical Network (GPON) standard ITU-T G.984. This
equates to a maximum permissible optical loss of 28dB or a permissible distance of 15km, whichever is less,
between the OLT and GPON NTD. This optical loss is calculated for the worst case combination of OLT and
GPON NTD connection within an FDA and is applied to both the A and B direction. With Forward Error Correction
(FEC) enabled between the OLT and the NTD, this can be extended to a maximum permissible optical loss of
30dB (17.5km) where required. Where Customer RF injection points are supported this equates to a
permissible distance of 12km. The maximum optical losses used for detailed design optical budget calculations
are listed in the table below.
Single fibre strands (DFN and LFN): 0.35 dB/km @ 1310 nm; 0.21 dB/km @ 1550 nm
RF Combiner: 1.1 dB @ 1330 nm; 1.3 dB @ 1490 nm; 8.6 dB @ 1550 nm
Pit located RF Combiner: 3 dB @ 1330 nm; 3 dB @ 1490 nm; 10 dB @ 1550 nm
RF Receiver: 1.0 dB
The calculation of the optical loss is performed with the following equation:
(DFN Distance (in km) x 0.35dB) + (DFN planned splices x 0.20dB) + (FDH Optical Splitter Loss) + (2 x
mated connector (at FDH)) + (LFN Distance (in km) x 0.35dB) + (LFN splices x 0.20dB) + (multi-fibre
connector loss) + (mated environmentally hardened SC/APC connector) + (2 x mated SC/APC connectors)
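The equation and table above can be combined into a simple budget check. The Python sketch below is illustrative only: the fibre and splice losses are taken from this section, while the splitter and connector losses (marked ASSUMED) are placeholder values, since this section does not tabulate them:

# Per-element losses from the table and equation above
FIBRE_LOSS_DB_PER_KM = 0.35       # single fibre strands @ 1310 nm
SPLICE_LOSS_DB = 0.20             # per planned splice
# Placeholder values, ASSUMED: not tabulated in this section
FDH_SPLITTER_LOSS_DB = 17.5       # typical 1:32 planar splitter (ASSUMED)
MATED_SC_APC_DB = 0.5             # per mated SC/APC connector (ASSUMED)
MULTI_FIBRE_CONNECTOR_DB = 0.7    # multi-fibre connector (ASSUMED)

def gpon_path_loss(dfn_km, dfn_splices, lfn_km, lfn_splices):
    """Worst-case OLT to GPON NTD loss, term by term per the equation above."""
    return (dfn_km * FIBRE_LOSS_DB_PER_KM
            + dfn_splices * SPLICE_LOSS_DB
            + FDH_SPLITTER_LOSS_DB
            + 2 * MATED_SC_APC_DB              # 2 x mated connector at the FDH
            + lfn_km * FIBRE_LOSS_DB_PER_KM
            + lfn_splices * SPLICE_LOSS_DB
            + MULTI_FIBRE_CONNECTOR_DB
            + MATED_SC_APC_DB                  # mated hardened SC/APC connector
            + 2 * MATED_SC_APC_DB)             # 2 x mated SC/APC connectors

def within_budget(loss_db, fec_enabled=False):
    # 28 dB maximum permissible loss, extended to 30 dB where FEC is enabled
    return loss_db <= (30.0 if fec_enabled else 28.0)

loss = gpon_path_loss(dfn_km=4.0, dfn_splices=6, lfn_km=1.5, lfn_splices=4)
print(f"{loss:.2f} dB; within budget: {within_budget(loss)}")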
The GPON Optical Line Terminator (OLT) function terminates multiple individual GPON connections, each with a
number of ONTs or Distribution Point Units (DPUs) attached. The GPON OLT supports up to 128 GPON
interfaces per shelf, with each GPON interface extending to one or more Fibre Serving Areas for connectivity to
end-users. GPON interfaces are arranged in groups of 8 per Line Terminating (LT) card.
The GPON OLT system supports dual Network Termination (NT) cards for control and forwarding redundancy,
and 4 x 10 Gigabit Ethernet (SFP+) network uplinks per NT for network connectivity. The system is fed by -
48VDC power, and power module/feed redundancy is incorporated into the shelf.
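The stated shelf figures imply the following capacity arithmetic (our illustration; whether both NT cards' uplinks carry traffic simultaneously is an assumption, not stated here):

GPON_PORTS_PER_SHELF = 128
PORTS_PER_LT_CARD = 8
SPLIT_RATIO = 32               # 1:32 passive split (see the FDH section)
UPLINKS_PER_NT = 4             # 4 x 10 Gigabit Ethernet (SFP+) per NT card
NT_CARDS = 2                   # dual NT cards for redundancy

print("LT cards per shelf:", GPON_PORTS_PER_SHELF // PORTS_PER_LT_CARD)  # 16
print("Max ONTs/DPUs per shelf:", GPON_PORTS_PER_SHELF * SPLIT_RATIO)    # 4096
print("Aggregate uplink (Gbps):", NT_CARDS * UPLINKS_PER_NT * 10)        # 80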
The OLT is located in Fibre Access Node (FAN) sites. To support Greenfields sites where a FAN site has not yet
been established, a cabinetised OLT solution is deployed, known as a Temporary FAN (TFAN).
To optimise the use of multiple fixed technologies the planning hierarchy covers all fixed access types. Please
refer to section 2.6 for the MTM Fixed Access Network Planning Hierarchy.
For planning, design and construction purposes, the network is divided into hierarchical modules and network
entities.
These modules are used to provide the planning constructs needed to provide connectivity between the
individual end-user's premises through to the Access Seeker (AS) Point of Interconnection (POI).
The first identifiable connection point is an end-user premises. These are defined as physical address points.
Each individual dwelling unit is required to have a unique service location, and is identified as an end-user. If
the end-user premises is situated in an MDU environment then these are treated as individual connections.
The first module is the Fibre Distribution Area (FDA) which comprises an average of 200 end-users. The FDA is
the catchment area of an FDH which provides a passive optical aggregation point for the LFN into the DFN.
The second module is the combination of a maximum of 16 FDAs to create an FSAM. An FSAM with the
maximum of 16 FDAs will, as a result, have an average catchment of 3200 end-users.
[Figure diagram: an FSAM of 16 FDAs (each averaging 200 end-user premises) numbered 1 to 16.]
The modular size of the FSAM is dependent on the geography of the area to be served and can be constructed
with any number of FDHs.
The maximum design shall allow for the grouping of 16 FDAs onto a single DFN cable. This grouping can
contain between 13 and 16 FDAs and, for this, the DFN requires two loops, an inner and an outer loop, which
together provide connectivity for a maximum of 16 FDHs. The inner loop as shown in Figure 21 is used for
FDHs 1-8 and the outer loop for FDHs 9-16. The cable links from FDA 3 back to the FAN and from FDA 8 back
to the FAN are provided by a 576 fibre cable split evenly across the two rings at the splice closures located in
FDA 3 and FDA 8.
For an FSAM containing 12 FDAs or less the DFN is a single ring with the same size cable sheath installed
between FDAs back to the FAN.
[Figure diagram: a single DFN loop from the FAN connecting FDHs in FDAs 3, 4, 9 and 10.]
The DFN cable size also alters depending on the number of FDHs within the FSAM:
1 to 8 FDHs: 288 fibre DFN cable
9 to 12 FDHs: 432 fibre DFN cable
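Taken together with the dual-loop rule above, the cable sizing reduces to the following sketch (illustrative Python; the breakpoints are those stated in this section):

def dfn_loop_plan(fdh_count: int) -> str:
    if not 1 <= fdh_count <= 16:
        raise ValueError("an FSAM groups at most 16 FDAs onto a DFN cable")
    if fdh_count <= 8:
        return "single loop, 288 fibre cable"
    if fdh_count <= 12:
        return "single loop, 432 fibre cable"
    # 13 to 16 FDHs: inner loop for FDHs 1-8, outer loop for FDHs 9-16,
    # fed by a 576 fibre cable split evenly across the two rings
    return "dual loops (inner and outer), 576 fibre cable split evenly"

for n in (6, 10, 14):
    print(n, "FDHs:", dfn_loop_plan(n))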
[Figure diagram: FSAM arrangements of 16 FDAs (each averaging 200 end-user premises), numbered 1 to 16, connected back to the FAN on inner and outer DFN loops.]
The third module is the Fibre Serving Area (FSA) and is the combination of a number of FSAM modules located
centrally around a Fibre Access Node.
The FSA is linked to a FAN and comprises multiple FSAMs linked into the FAN via the DFN cables.
The placement of the FAN site in relation to the FSAMs is derived by identifying as central a location as
practical.
The FSA is increased by adding FSAM modules as required. The optical return path for the DFN needs to be
identified and monitored during the planning and design phase for each FSAM to keep the DFN ring within the
optical constraints.
- Ethernet NTD: The Ethernet Network Terminating Device located at the service location uses Ethernet technology to connect to the network. It delivers UNI capabilities to a service location.
- Fibre Network (FN): Provides optical pathways between the NTD and AAS. Sections of the Fibre Network are shared by other access domains.
- Access Aggregation Switch (AAS): Provides aggregation of multiple point to point services to a number of network facing 10Gbps links. This switch is shared with other access domains.
The Ethernet NTD terminates the incoming physical fibre at the service location and may provide one or more
User to Network Interfaces (UNI). A number of NTD varieties will be used to suit different circumstances,
service location types and interface quantities. The variants are expected to include Indoor and Outdoor NTDs.
Each variant has the following:
- 1 x fibre interface
- Up to 4 UNI interfaces
- Support for standard power supply or Uninterruptable Power Supply (UPS)
2.3.2 Fibre
The fibre network supports point to point services. Refer to the Fibre Access (GPON) Domain Fibre section.
The DFN is designed to support point to point services. Refer to Fibre Access (GPON) Domain Distribution Fibre
Network section.
Point to point connections use a different patching array to GPON services, with the FDH patching directly
between the LFN and DFN. This links an individual end-user directly to a DFN fibre. The Flexibility Joint
provides the same functionality but via direct fusion splicing.
[Figure 24 diagram: a point to point connection from the NTD at the service location over the Local Fibre Network, patched at the FDH via the bypass array (alongside the 1:32 GPON splitter output array) onto the Distribution Fibre Network to the FAN.]
Figure 24 Point to Point Connection using DFN
Point to point services make use of the existing Access Aggregation Switch (AAS) to provide an additional layer
of aggregation, combining multiple 1GE access interfaces from the NTDs into the 10Gbps interfaces preferred
by the core (Transit and Aggregation) networks. Refer to section 2.4.6.
- A VDSL2 modem (provided by the Access Seeker or end user) for FTTN/B. This is on the customer side of the nbn™ network boundary and is not discussed further in this document.
- The Network Connection Device, providing a modem and Reverse Power Unit, used only for support of FTTC deployments. This provides a power feed to the Distribution Point Unit (DPU) via the modem and copper plant, and the UNI to the end user.
- Centralised splitter, which is optionally installed in the end user's premises and is recommended to optimise VDSL performance. This is on the customer side of the nbn™ network boundary and is not discussed further in this document.
- The Copper Plant, which provides the UNI to the premises and a copper path between the UNI and DSLAM or DPU.
- Distribution Point Unit (DPU), which aggregates a small number of end user Ethernet over VDSL2 (ITU-T G.993.2) connections into a network facing GPON interface, used only for support of FTTC deployments.
- DSLAM, which provides processing, switching and control functions. The DSLAM aggregates end user Ethernet over VDSL2 (ITU-T G.993.2) connections into a number of network-facing Ethernet 1Gbps links.
- Fibre Network, which provides the optical pathway between the DSLAM and FAN site.
- Access Aggregation Switch (AAS), which provides aggregation of multiple DSLAMs to a number of network facing 10Gbps links. This switch is shared with other access domains.
The Network Connection Device (NCD) is used only for support of FTTC deployments. It is located in the end-
user premises and contains a modem and Reverse Power Unit (RPU) connected to the premises power supply.
The RPU is connected into the in-premises copper network, between the nbn network boundary and the
modem. The RPU is used for providing a reverse power feed to the DPU via the in-premises copper and copper
lead in.
The NCD modem terminates the DSL link and presents an Ethernet UNI-D to the end-user.
nbn is acquiring and utilising the existing Telstra copper plant to provide physical connectivity between the UNI
and the DSLAM.
For SDU deployments the UNI is at the first socket in the premises. For MDU deployments that have a Main
Distribution Frame (MDF), the UNI is located at the MDF.
Where growth in brownfields areas makes it appropriate for nbn to provide new copper lines, at least two
copper pairs will be available per serviceable location in the catchment area of the cable section, with a fill ratio
of up to 80%.
The copper network will be constructed and maintained on a like-for-like construction basis i.e. replacement of
aerial routes with aerial and underground with underground where cables are replaced.
Available copper distribution cable sizes for underground deployment are 2, 10, 30, 50, 100, 200 and 400
pairs, with conductor gauges of 0.4, 0.64 and 0.9 mm.
Available copper distribution cable sizes for aerial deployment are 2, 10, 50 and 100 pairs, with conductor
gauges of 0.4, 0.64 and 0.9 mm.
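One reading of the pair-provisioning rule above (at least two pairs per serviceable location, at no more than 80% fill) can be expressed as the following illustrative Python sketch; the interpretation of the fill ratio is our assumption:

from math import ceil

UNDERGROUND_PAIR_SIZES = [2, 10, 30, 50, 100, 200, 400]
AERIAL_PAIR_SIZES = [2, 10, 50, 100]

def copper_cable_size(locations: int, aerial: bool = False) -> int:
    # At least two pairs per serviceable location, filling at most 80%
    # of the cable (our reading of the fill ratio)
    required_pairs = ceil(2 * locations / 0.80)
    sizes = AERIAL_PAIR_SIZES if aerial else UNDERGROUND_PAIR_SIZES
    for size in sizes:
        if size >= required_pairs:
            return size
    raise ValueError("demand exceeds the largest listed cable size")

print(copper_cable_size(30))   # 100 pair cable (75 pairs required)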
The FTTC DPU is an nbn owned device which can be located in an underground pit, or mounted on a pole close
to the end user premises. The DPU terminates up to four individual copper pairs, each serving a single end
user. The DPU is reverse power fed from an end user premises over the copper leads. The DPU is “GPON-fed”
and as such the same physical architecture applies as for FTTP GPON from the DPU up to the OLT and on into
the rest of the nbn™ network.
2.4.4 DSLAM
The DSLAM function terminates multiple individual copper pairs, each serving a single end user. DSLAM sizes
vary, with the number of ports selected to most efficiently address the deployment scenario.
At the time of writing, nbn will deploy DSLAMs in the following deployment scenarios (further DSLAM sizes are
currently under investigation):
- FTTN: DSLAM deployed near a Distribution Area pillar; 48, 192 or 384 ports per DSLAM; serves SDU and MDU premises.
- FTTB: DSLAM deployed at the premises of an MDU; 48, 192 or 384 ports per DSLAM; serves MDU premises.
The interconnect of FTTN nodes can be performed at the following locations in order of preference:
- Directly into the Main Distribution Frame (MDF) within the building
For all DSLAM deployment options, the Fibre Network connectivity is provided by the DFN deployed in the area.
Where the DSLAM deployment is within the Telstra copper Distribution Area, or DPUs are deployed, LFN
connectivity is also required. Refer to the Fibre Access (GPON) Domain Fibre section for detailed information on
the DFN and LFN Fibre Networks.
Out of the 4 x Point to Point fibres, the equipment requires 2 x Uplink Fibres which are connected through to
the Aggregation switch in the NBN Co Network. The additional 2 x Point to Point fibres are spares, to allow
flexibility for future growth or migration activities.
DPUs will be connected with a single fibre connecting back to the multiport, splitters, DFN and GPON OLT.
The introduction of DSLAMs with lower customer density and higher node volumes created the need for an
additional layer of aggregation for combining multiple 1GE access interfaces from the DSLAMs into 10Gbps
interfaces preferred by the core (Transit and Aggregation) networks. This is provided by an Access Aggregation
Switch (AAS).
The AAS solution is positioned in the FAN, which can be physically located in a FAN site or a POI site.
[Figure diagram: access nodes connected to an AAS pair via N x 1GE (N ≤ 4), with each AAS dual-homed via N x 10GE to the EAS/ECS at the FAN/POI.]
From the diagram above, the AAS aggregates multiple 1GE uplinks from access nodes onto 10GE links connecting to the Ethernet Aggregation Switch or Ethernet Connectivity Switch of the Aggregation Domain. The AAS will connect to DSLAMs via N x 1GE (N ≤ 4) connections and will be dual-homed to an EAS/ECS pair, each via N x 10GE connections. The backhaul connection from FAN to POI can be either direct fibre, or DWDM and direct fibre.
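The arithmetic behind this aggregation step can be sketched as follows. The oversubscription ratio is an illustrative assumption; the text above fixes only the N ≤ 4 access-side bound and the N x 10GE uplinks towards each EAS/ECS.

```python
import math

def aas_uplinks_per_eas(dslams: int, ge_per_dslam: int,
                        oversubscription: float = 4.0) -> int:
    """10GE uplinks needed from an AAS towards each EAS/ECS of the pair.
    The 4:1 oversubscription ratio is an assumption for illustration."""
    if not 1 <= ge_per_dslam <= 4:
        raise ValueError("DSLAMs connect via N x 1GE with N <= 4")
    access_gbps = dslams * ge_per_dslam         # aggregate 1GE access capacity
    uplink_gbps = access_gbps / oversubscription
    return max(1, math.ceil(uplink_gbps / 10))  # round up to whole 10GE links

# Example: 40 DSLAMs at 2 x 1GE each present 80Gbps of access capacity;
# at 4:1 oversubscription the AAS needs 2 x 10GE towards each EAS/ECS.
print(aas_uplinks_per_eas(40, 2))  # -> 2
```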
The DPUs make use of the OLT to aggregate large numbers of DPUs efficiently over fibre infrastructure.
[Figure: HFC access chain – NTD (UNI) – Coax Plant – Optical Node – Fibre Network – RF Transmission – RF Combiner – CMTS – Transport (NNI)]
The HFC NTD terminates the incoming physical coax cable at the end-user premises and provides one User to
Network Interface (UNI). The HFC NTD will have the following:
1 x coax interface
1 x UNI-Data interface
nbn is progressively acquiring and is utilising existing coax plant to provide physical connectivity between the
NTD and the Optical Node.
Where growth or gaps in coverage in brownfields areas make it appropriate, nbn will provide new coax plant. The coax plant includes mainline trunk, tie and lead-in coax cables, amplifiers, line-extenders, taps, power supplies, and premises coax to connect to the NTD.
nbn is progressively acquiring and utilising existing HFC Optical Nodes. An HFC Optical Node converts between coax and optical signals and vice versa. It connects to the RF Combiner site via the fibre network and RF Transmission.
Where growth or gaps in coverage in brownfields areas make it appropriate, nbn will provide new HFC Optical Nodes.
The Fibre Network provides connectivity between a HFC Optical Node and the RF Transmission equipment and
RF Combiner location. nbn has leased the existing Telstra and Optus fibres.
Where growth or gaps in coverage in brownfields areas make it necessary, nbn will provide new fibre networks, in alignment with the fibre designs captured in section 2.2.2.
2.5.5 RF Transmission
The optical transmission equipment is co-located with the RF combiner at either the FAN site or the Aggregation
Node site. It connects the RF Combiner to the Fibre Network, converting from electrical to optical signals and
vice versa.
2.5.6 RF Combiner
The RF combiner is located in a FAN site or an Aggregation Node site. It connects HFC Optical nodes to
upstream sources via the RF Transmission, including:
nbn CMTS
Foxtel Pay TV head end
Telemetry and maintenance equipment
These systems connect electrically, and the RF Combiner combines their RF signals to allow signals to coexist for delivery over an HFC Optical Node and its associated coax plant.
Where the RF Combiner and Cable Modem Termination System (CMTS) equipment are not co-located, RF Transmission equipment is used in conjunction with the Transport Direct Fibre to provide connectivity.
The Cable Modem Termination System (CMTS) equipment is located at either a FAN site or an Aggregation
Node site. It connects to the RF Combiner via a coax interface and provides DOCSIS connectivity over RF to
the Cable Modems. RF Segments (or Service Groups) are defined as the group of downstream and upstream
RF channels that are seen by a Cable Modem.
Depending on the location of the CMTS, it may connect to the EAS via the following methods:
These modules provide the planning constructs needed to connect each individual end-user’s premises through to the Access Seeker (AS) Point of Interconnection (POI). The planning constructs are intended to be applied flexibly to reduce build costs.
The first identifiable connection point is an end-user premises. These are defined as physical address points. Each individual dwelling unit is required to have a unique service location and is identified as an end-user premises. If the end-user premises is situated in an MDU environment, each dwelling is treated as an individual premises.
The first module is the Access Distribution Area (ADA): the aggregate footprint of the set of premises served by an nbn™ node. A node is the first point of aggregation (passive or active) encountered moving from the FAN site towards the premises. A node may be:
the splitters of a Fibre Distribution Hub (FDH) or Flexibility Joint Location (FJL)
a DSLAM with an Ethernet backhaul architecture
a physical HFC Optical Node
The ADA is guided by the existing duct and pit infrastructure of the Telstra Copper Distribution Areas, and in the case of HFC, the existing Optical Node and coax plant. Note that copper and HFC access technologies must be taken into account during Fibre Network planning, to optimise the Fibre Network where multiple access technologies are to be supported.
The second module is the combination of a maximum of 24 ADAs to create a Serving Area Module (SAM). The
typical size of a SAM is expected to be 16 ADAs, but this is dependent on the access technology mix and
geography of the area to be served. A SAM may have:
The third module is the Fibre Serving Area (FSA) and is the combination of a number of SAM modules located
around a Fibre Access Node (FAN).
The placement of the FAN site in relation to the SAMs is derived by identifying as central a location as practical.
The FSA is increased by adding SAM modules as required. The optical path for the Fibre Network needs to be
identified and monitored during the planning and design phase for each SAM to keep the FN length within the
optical constraints.
The following example illustrates the planning hierarchy where the Fibre Network includes a single nbn™ DFN and no leased fibre, such as will be encountered in areas where HFC access technologies contribute to coverage:
[Figure: example planning hierarchy – eight nodes clustered in SAM 1 are fed by 72f and 144f DFN cables over selected conduit, aggregating via 288f and 576f cables towards the FAN; SAMs 1 to 8 are arranged around the aggregation route]
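A minimal sketch of the modular hierarchy described above, enforcing the stated 24-ADA ceiling per SAM (class and identifier names are illustrative only):

```python
from dataclasses import dataclass, field

MAX_ADAS_PER_SAM = 24      # stated maximum
TYPICAL_ADAS_PER_SAM = 16  # typical size, access-mix and geography permitting

@dataclass
class SAM:
    adas: list = field(default_factory=list)
    def add_ada(self, ada_id: str) -> None:
        if len(self.adas) >= MAX_ADAS_PER_SAM:
            raise ValueError("a SAM combines at most 24 ADAs")
        self.adas.append(ada_id)

@dataclass
class FSA:
    sams: list = field(default_factory=list)  # grown by adding SAMs as required

# Example: one SAM of the typical 16 ADAs, attached to an FSA.
sam = SAM()
for i in range(TYPICAL_ADAS_PER_SAM):
    sam.add_ada(f"ADA-{i:02d}")
fsa = FSA(sams=[sam])
print(len(fsa.sams[0].adas))  # -> 16
```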
Wireless Network Terminating Device (WNTD) - The Wireless Network Terminal located at the end-user
premises uses TD-LTE technology to connect back to the eNodeB. It delivers UNI-D capabilities to a
premises.
Radio Spectrum – nbn has radio spectrum to support the TD-LTE radio technology in line with the 3GPP specification.
eNodeB – provides the radio access interface for the WNTDs
Aggregation and Transport for Wireless –
o Microwave equipment is used to backhaul and where appropriate, aggregate traffic from
eNodeBs back to a FAN site
o Transport - provides the carriage between the FAN site, a further aggregation switch located in
a POI site and the PDN-GW. Common infrastructure used for all connectivity between FAN and
POI sites
o POI Aggregation Switch is used in a POI site to aggregate the traffic from the FANs before
transport to the PDN-GW. This is common infrastructure that provides the Aggregation Domain
EAS/ECS functions as well
The PDN-GW - provides policy and admission control for the WNTDs. It also provides the interface to the
Aggregation Domain for the aggregated Wireless Access traffic.
The WNTD terminates the incoming radio signal at the end-user premises and provides one or more User to
Network Interfaces (UNI).
The WNTD consists of the following two physical components: an Indoor Unit (IDU) and an Outdoor Unit (ODU), connected together with a Cat 5 cable. The IDU includes:
4 UNI-D ports
Switch
Interface to the ODU
A standard AC power supply is used to power the IDU and through that the ODU.
There are three variants of WNTD: Band 40, Band 42 and dual-band.
nbn will use its spectrum holdings in the E-UTRA Operating Band 40 (2.3 GHz to 2.4 GHz frequency range) and
E-UTRA Operating Band 42 (3.4 GHz to 3.6 GHz frequency range) in its deployment of Wireless Access
Services.
nbn’s spectrum holding is registered with the ACMA and can be found at:
http://web.acma.gov.au/pls/radcom/client_search.client_lookup?pCLIENT_NO=1104329
http://web.acma.gov.au/pls/radcom/client_search.client_lookup?pCLIENT_NO=8129031
Generally, the spectrum held by NBN Co Limited in the 2.3 GHz and 3.4 GHz bands was purchased in the multiband spectrum auction (including, most recently, the multiband residual lots auction conducted by the ACMA in late 2017), while the spectrum held by NBN Co Spectrum Pty Ltd was purchased from AUSTAR. Additional Band 42 spectrum was allocated to nbn through the ACMA in 2015 as a result of a ministerial directive.
A licence number in the table links to the licence details, including a Licence Image. The Licence Image
defines, among other technical requirements, the exact geographic area covered by the licence.
Each eNodeB has multiple sectors and is sited according to a site-specific radio coverage plan to provide optimised coverage; additional sectors may be added if there is capacity demand. The number of premises supported by each sector is determined by the radio conditions. Each sector supports on average 60 premises, adjusting to a typical maximum of 56 premises as capacity demands change. Management of capacity to meet customer expectations is an ongoing process.
Some sites have existing sheltered accommodation available, while other sites will have none. Where none is available, environmentally protected housing will be provided to house the equipment.
The eNodeB provides the Air (Radio) Interface which connects the eNodeB to the WNTD, and a backhaul
Interface which connects the eNodeB to the Microwave Backhaul equipment. A typical eNodeB comprises a
number of interconnected modules:
3 RF Antennae
GPS Antenna
Remote Radio Unit
Digital Baseband Unit
Power Supply
Battery Backup
Microwave Transport equipment is used to connect eNodeB sites into a FAN site. Microwave Transport
equipment is in most cases housed in the eNodeB cabinet. The Microwave Antenna is mounted on the radio
tower where the eNodeB RF Antennas are mounted.
Microwave Transport equipment is also used to connect an eNodeB spur using a Microwave End Terminal to a
Microwave Hub Site.
A Microwave Hub Site may have an eNodeB present. The Microwave Transport equipment at a hub site can
communicate with various microwave sites and provides an aggregate point for connectivity to a FAN site. The
maximum bandwidth planned was originally 900Mbps but is now moving to 4Gbps to support capacity growth,
allowing for the aggregation of up to 8 eNodeBs.
Microwave Hub sites can be connected to other Microwave Hub sites, as long as the final Hub to end terminal
bandwidth does not exceed an aggregate of eight eNodeBs.
The exact number of eNodeB spurs that are connected back to a Microwave Hub Site or Repeater Site, and the
location of the Microwave End Terminal to connect to the FAN site, is heavily dependent on local geography and
End User distribution.
Where a wireless base station site has access to direct fibre tails to connect to a FAN site, a switch function is housed in the eNodeB cabinet to enable connectivity from the eNodeB to the fibres.
2.7.4.3 Transport
Transport from the FAN to the PDN-GW site is provided by either 3rd party backhaul or nbn’s DWDM solution.
Refer to the Transport Domain section for further information.
The POI Aggregation Switch is deployed in pairs in POI sites to aggregate the Wireless Access traffic from FAN
sites.
The PDN-GW supports a total of up to 80,000 WNTDs by consolidating up to 16 AARs. It is provided as a pair
for resiliency, each with its own internal resiliency of controller, traffic and line cards. Consolidation of the AARs
is achieved via the POI Aggregation Switch.
The PDN-GW has two 4 x 10 Gigabit Ethernet (SFP+) interfaces to the Aggregation Domain for resilient network
connectivity, and up to eight 8 x 10 Gigabit Ethernet (SFP+) interfaces to the POI Aggregation Switch pair for
resilient network connectivity. The system is fed by -48VDC power, and power module/feed redundancy is
incorporated into the shelf.
For planning, design, and construction purposes the network is divided into hierarchical modules and network
entities.
These modules provide the planning constructs needed to connect each individual end-user’s premises through to the Access Seeker (AS) Point of Interconnection (POI).
The first identifiable connection point is an end-user premises. These are defined as physical address points.
Each individual dwelling unit is required to have a unique service location, and is identified as a premises.
The first module is the Wireless Serving Area Module. The eNodeB sectors provide adjacent or overlapping
coverage and are grouped in clusters of up to eight sites.
The planned maximum number of connected premises in a sector has typically been 110, but is now moving towards 56 premises per sector, driving the need for additional sectors to support capacity demand (this may vary depending on the exact positions and radio conditions of the served premises).
The maximum bandwidth planned for the Microwave Hub Site back to a FAN site has been 900Mbps, but is now moving to 4Gbps to support capacity growth, allowing for the aggregation of up to 8 eNodeBs with a maximum of 2640 End Users.
The second module is the Wireless Serving Area. All Wireless Serving Area Modules connected to a common
FAN site are grouped into a Wireless Serving Area. The largest WSA will have up to 24 WSAM connecting to a
FAN.
The third module is the Access Aggregation Region (AAR). The wireless area served by a single POI is an AAR.
The maximum number of WSAs in an AAR is determined by the number of FANs that connect back to the POI.
For the purposes of wireless dimensioning, the size of an AAR is based on the maximum number of End Users, rather than the maximum number of WSAs, with a maximum of 25,000 End Users in an AAR.
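The wireless planning limits quoted above can be collected into a simple validator. This is an illustrative sketch of the stated ceilings, not nbn's dimensioning process.

```python
MAX_ENODEBS_PER_HUB = 8          # hub-to-FAN aggregation limit
MAX_END_USERS_PER_HUB = 2640
MAX_WSAM_PER_WSA = 24
MAX_END_USERS_PER_AAR = 25_000

def check_hub(enodebs: int, end_users: int) -> list:
    """Return any planning-rule violations for a Microwave Hub spur."""
    issues = []
    if enodebs > MAX_ENODEBS_PER_HUB:
        issues.append(f"{enodebs} eNodeBs exceeds the hub limit of {MAX_ENODEBS_PER_HUB}")
    if end_users > MAX_END_USERS_PER_HUB:
        issues.append(f"{end_users} end users exceeds the hub limit of {MAX_END_USERS_PER_HUB}")
    return issues

def check_aar(end_users_per_wsa: list) -> list:
    """Return violations for an AAR given each WSA's end-user count."""
    return (["AAR exceeds 25,000 end users"]
            if sum(end_users_per_wsa) > MAX_END_USERS_PER_AAR else [])

# Example: a hub carrying 8 eNodeBs but 2700 end users breaches one rule.
print(check_hub(8, 2700))
```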
Satellite NTD (Very Small Aperture Terminals or VSATs) - The Satellite Network Terminal located at the
end-user premises or service location communicates over the satellite RF links back to the VSAT baseband
sub-system. It delivers UNI capabilities to a service location. Specific business service VSATs will be
available.
Satellite – two multi-spot beam, geostationary, “bent-pipe” telecommunications satellites to connect VSATs to the VSAT Baseband sub-system, located in geosynchronous equatorial orbit (GEO) overlooking Australia and operating in the Ka-band RF spectrum.
RFGW – RF Gateway Facility supporting the RF and VSAT Baseband Sub-Systems.
o RF Sub-System – Earth station RF antennas, transmission and reception equipment to provide trunk
links to and from the satellites.
o VSAT Baseband Sub-System – subscriber service termination equipment to manage and carry user
traffic to and from Satellite subscribers.
o Business Satellite Services Baseband Sub-System – subscriber service termination equipment to manage and carry user traffic to and from Business Satellite Services subscribers, using a dedicated subset of RF spectrum
Satellite Gateway Routing and Switching – provides the connectivity at the Gateway & DPC sites for local
equipment and to the Service Connectivity Network (SCN)
Business Satellite Services Gateway Routing and Switching – separate instance of Routing and Switching
that provides connectivity at the Gateway for local equipment to the SCN
Service Connectivity Network (SCN) - provides the connectivity between the RF GW site and the DPC site. Common infrastructure used for resilient carriage of traffic between nbn™ sites.
DPC – Data Processing Centre supporting the Service Control Sub-System (SCS).
o Service Control – Satellite-specific user application and traffic management equipment to enhance the user experience given the long latencies incurred over geostationary satellite links.
Business Satellite Services Point of Interconnect – two Points of Interconnect specifically for Business
Satellite Services, located at existing nbn POI sites
o BSS Routing, Switching & Security – provides connectivity between the SCN and the local equipment,
also securing an Internet connection
o BSS Hosting Infrastructure – provides infrastructure for the customer to host local applications
The VSAT is located in the user premises or service location, provides the User to Network Interfaces (UNI-D) and connects to the nbn™ network via the satellite over Ka-band RF spectrum.
A number of Satellite NTD varieties will be used to suit different circumstances, end-user types and interface
quantities. The variants included are Indoor and Outdoor NTDs for business and residential purposes.
The indoor VSAT consists of the following two physical components: an Indoor Unit (IDU) and an Outdoor Unit (ODU), connected together with a coax cable. The IDU includes:
Up to 4 UNI-D ports
Layer 2 Ethernet Switch
Embedded Transparent Performance Enhancing Proxy (TPEP) software client
Interface to the ODU
A standard AC power supply is used to power the IDU and through that the ODU.
The outdoor VSAT consists of the same components, hardened to deal with environmental requirements.
2.8.2 Satellite
nbn uses two multi-spot beam, geostationary, “bent-pipe”, Ka-band telecommunications satellites for the Satellite Access Solution with the following characteristics:
Two satellites to enable load balancing of users and limited service redundancy.
Multi-spot beam design on each satellite, providing a combined total of 107 Gbps forward path and 28 Gbps return path system capacity, to best support the regional and remote Australian population and deliver the optimum broadband user experience from the amount of RF spectrum available. The delivered solution has achieved an actual capability of 154 Gbps forward path and 31 Gbps return path.
Geostationary orbital locations to enable full coverage of the Australian mainland and territories with the
least number of satellites and at the lowest possible VSAT costs.
“Bent-pipe” design, meaning the satellite is only an RF transceiver for network-to-user and user-to-network traffic, providing broadband services to the most remote Australians with minimum spacecraft mass.
Ka-band operation to achieve the user experience speeds and the required system capacity from only 2
satellites.
The “satellite to user” spot beams are designed to achieve the system capacity requirements, by re-using the
Ka frequency band as efficiently as possible. This is achieved through the combination of:
The “RF Gateway to satellite” links are designed to carry the satellite traffic to the RF Gateways (RFGWs), which
contain the RF Subsystem and VSAT baseband subsystem.
nbn uses the Ka-band radio spectrum through the appropriate ACMA domestic licensing regimes, including class-licensed bands for “satellite to user” beams, and a combination of one nbn-held spectrum licence, privately negotiated agreements with other current spectrum licence holders, and apparatus licences issued by the ACMA for “RF Gateway to satellite” links.
nbn’s satellites are designed to operate at the orbital slots of 140 deg E and 145 deg E. nbn has gone through
the ITU international frequency coordination process.
2.8.3 RF Sub-System
In each RF Gateway, the RF Sub-System transmits and receives RF signals with the satellites, and connects to the VSAT baseband Sub-system. The RF Sub-system comprises the following major components:
The VSAT baseband system is located in the RFGWs and is the termination system for VSAT traffic and
management signals. It is composed of the following components:
The dimensioning of each RFGW’s VSAT Baseband sub-system is based on: the most efficient RF channel packing of the available Ka-band spectrum; the highest-order modulation code points possible for transmission and reception to and from the satellite from the specific RFGW location; and the equipment resiliency supported by the VSAT Baseband sub-system supplier to meet the business requirements of service availability.
The BSS VSAT baseband system is also located in the RFGWs and is the termination system for BSS VSAT
traffic and management signals. It is composed of the same components as the VSAT Baseband sub-system,
but used for a subset of RF channels that are dedicated for BSS.
The Satellite GW routing and switching provides the connectivity between the gateway satellite baseband
systems and the SCN for carriage through to the DPC. There is an additional instance at the DPC for
connecting the Service Control system to the SCN.
The BSS Satellite GW routing and switching provides the connectivity between the gateway satellite baseband
systems and the SCN for carriage through to the DPC.
The Service Connectivity Network provides the connectivity between the RF Gateway sites and the DPC sites. This is a shared network; refer to section 2.11.
The Service Control system is located in the DPC and is designed to enhance the user experience by intelligently accelerating and spoofing user traffic to counteract the latency effects of the very long earth-to-satellite and satellite-to-earth RF links. Multiple Transparent Performance Enhancing Proxy (TPEP) techniques are used in the Satellite Access solution design.
The TPEP system is implemented in servers located in the nbn™ DPCs, as well as in software embedded in the VSATs. No software or configuration is required from the Access Seeker or the user to enable the TPEP functionality.
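The latency that TPEP counteracts follows directly from geometry. The sketch below estimates the minimum propagation delay over a bent-pipe GEO path from physical constants only; these figures are not nbn-published latency targets.

```python
# Minimum propagation delay for a bent-pipe GEO path, from first principles.
GEO_ALTITUDE_KM = 35_786        # geostationary altitude above the equator
SPEED_OF_LIGHT_KM_S = 299_792

hop_ms = GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S * 1000  # one ground-satellite hop
one_way_ms = 2 * hop_ms         # user -> satellite -> RF Gateway
round_trip_ms = 2 * one_way_ms  # and back again

print(f"minimum one-way: {one_way_ms:.0f} ms, round trip: {round_trip_ms:.0f} ms")
# -> roughly 239 ms one-way and 477 ms round trip before any queuing or
#    processing; off-nadir slant ranges make real paths longer still.
```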
Subscriber and POI traffic will be switched between the remote RFGWs, the DPCs and the satellite central POI via a nation-wide Service Connectivity Network (SCN) that will reuse the existing nbn™ transport network (refer to 2.11).
The BSS Satellite POI routing, switching and security provides the connectivity and control between the SCN,
the hosting infrastructure, BSS customers and the internet for BSS customer traffic and management and
operations traffic.
The BSS Hosting Infrastructure provides computing infrastructure and storage capability for hosting service-related applications.
For planning, design, and construction purposes the network is divided into hierarchical modules and network
entities.
These modules provide the planning constructs needed to connect each individual end-user’s premises through to the Access Seeker (AS) Point of Interconnection (POI).
For a satellite access network, this is a somewhat different process than for other access technologies due to
the fixed nature of the satellite. The planning hierarchy is effectively set at the time of the system design, with
few areas for adjustment once the satellite, RF Gateway and DPC are established.
The first identifiable connection point for the Long Term Satellite Solution (LTSS) is the end-user premises. These are defined as physical address points and/or Lat/Long coordinates. Each individual dwelling unit is required to have a unique service location and is identified as a premises.
The first module is the spot beam. The spot beams have been planned to provide coverage as shown in section
2.8.2, taking into account the premises density and expected takeup of services. Where there is available
capacity, some RF spectrum within a beam will be reserved for Business Satellite Services.
The second planning module for LTSS is the RF Gateway. Nine (9) RFGWs are used to carry the aggregated traffic load, with an additional RFGW for disaster recovery protection of the service. To achieve the optimum RFGW trunk link performance, the following design criteria are optimised:
Using these criteria the following RF Gateway locations have been selected: Ceduna, Geeveston, Bourke,
Wolumla, Geraldton, Carnarvon, Roma, Broken Hill, Waroona and Kalgoorlie.
The VSAT Baseband sub-system in the RF Gateways is initially deployed with half of the total capacity of modulator and demodulator modules. As end user takeup grows, more modulators and demodulators will be added to the system as required.
The third module is the Data Processing Centre (DPC). LTSS has 2 DPCs: a primary and a backup working in
an active standby mode. All traffic is aggregated in the primary (or in case of a DPC failure, the backup) DPC.
The initial planned aggregated capacity in the DPC is 80Gbps. The DPC capacity can be augmented as required.
The fourth module is the Satellite Serving Area. This is the complete footprint of the 101 user beams from
each Satellite, as LTSS services are provided to Access Seekers through a single POI collocated with the
primary DPC.
The nbn™ Dense Wavelength Division Multiplexing (DWDM) fibre optic transport network is made up of a number of DWDM Nodes (DNs) situated mainly at POI and FAN sites, all interconnected by Optical Multiplex Section (OMS) links. A DN may also be used within a link for amplification.
The DWDM network predominantly provides physical connectivity and transit backhaul capacity between POI
and FAN sites. The network also provides connectivity to centralised depots, data centres and delivery points
for the specific transport services that require them.
The following diagram shows the basic connectivity scenarios for the DWDM network. Each DN has a metric called “degree of connectivity”: the number of fibre interfaces it has, and therefore the number of other DNs it can connect to. The DN at POI 1, for instance, has a degree of two: it connects to the DNs at two FAN sites. The DN at FAN 7 has a degree of one.
[Figure: DWDM connectivity scenarios – DNs at POI, FAN and Depot/Data Centre/Delivery Point sites interconnected by Optical Multiplex Section links; POI 1 connects to two FAN DNs, FAN 7 connects by a single link]
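Degree of connectivity is simply the node degree of each DN in the graph of OMS links, as this small sketch illustrates for the sites named in the figure:

```python
from collections import defaultdict

# OMS links between DNs, mirroring the figure: POI 1 connects to two FAN
# DNs, while FAN 7 hangs off a single link.
oms_links = [("POI1", "FAN1"), ("POI1", "FAN2"), ("POI3", "FAN7")]

degree = defaultdict(int)
for a, b in oms_links:
    degree[a] += 1
    degree[b] += 1

print(degree["POI1"])  # -> 2 (a two-degree DN)
print(degree["FAN7"])  # -> 1 (a one-degree DN)
```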
There are two types of DWDM Nodes: Reconfigurable Optical Add-Drop Multiplexers (ROADMs) and Optical Line
Repeaters (OLRs). Each of these two types of DN comprises:
Baseline elements: elements that have a fixed quantity per degree and do not expand with traffic growth
unless that growth involves an increase in degrees. Such elements include amplifiers, add/drop filters and
wavelength selective switches.
Growth elements: these are elements within a degree that can be added to as growth requires. Such
elements include channel cards, controller and chassis.
Reconfigurable Optical Add-Drop Multiplexers (ROADMs): these are one- (or more) degree nodes that can
provide the following functions:
Extract data from a wavelength on any of its degrees for presentation to a client interface
Inject data from the client interface into a wavelength for transport on any of its degrees
Transit wavelengths between degrees
The standard ROADM variant used allows transmission of up to 96 wavelengths, each wavelength being in effect a channel of 40Gbps, 100Gbps or 200Gbps. Standard ROADMs will be used at both POI and FAN sites.
A smaller, compact ROADM variant, for use where space is an issue, supports up to 8 wavelengths. The
standard and compact ROADMs can be used together: although the compact ROADM only supports 8
wavelengths, those wavelengths can be any of the 96 produced by the standard ROADM. Compact ROADMs will
be used only at FAN sites that require a maximum of two degrees and a maximum of 8 channels.
There is also a Long Span ROADM that is used to allow un-regenerated OMS links across Bass Strait. The Long
Span ROADM allows up to 40 wavelengths (channels).
Optical Line Repeaters (OLRs): these are two-degree wavelength-pass-through nodes that provide optical amplification only. OLRs may be required when transmission fibre distances between adjacent ROADMs exceed the optical reach of the equipment.
There are four network topologies that can be deployed. In most cases the overlapping physical ring topology
will be used as the majority of dark fibre is being sourced from Telstra and only a single pair is being provided
on most routes.
Each physical ring utilises one fibre pair. Wavelength routing and count within one ring has no dependencies on
the physical attributes or traffic requirements on any other ring. If there are any DNs common between rings,
they will require a degree of more than two. Any inter-ring wavelength connectivity will be performed via the
wavelength selective switches within the common DNs.
[Figure: overlapping physical ring topology – two rings of FAN DNs joined at common nodes]
The rings provide the 1+1 redundancy required between any two points in the network: if one link should fail,
any node past the breach can be reached by routing in the other direction using a different wavelength. The re-
routing can be achieved through either 1+1 Client Protection, where the client network recognises the break
and uses a second connection for traffic continuity, or 1+1 Service Protection, where the DWDM network itself
provides the rerouting capability.
Figure 36 Examples of redundancy in the DWDM network (using 1+1 Client Protection and 1+1
Service Protection)
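A minimal sketch of why a ring yields 1+1 redundancy: between any two DNs there are always two opposite-direction paths, so a failure on the working path leaves the other available, whichever layer (client or DWDM) performs the switch. Site names are illustrative.

```python
def ring_paths(ring, src, dst):
    """Return the two opposite-direction paths around a ring between two
    DNs: the working path and its 1+1 protection alternative."""
    i, j = ring.index(src), ring.index(dst)
    clockwise = ring[i:j + 1] if i <= j else ring[i:] + ring[:j + 1]
    anticlockwise = list(reversed(ring[j:] + ring[:i + 1] if i <= j
                                  else ring[j:i + 1]))
    return clockwise, anticlockwise

ring = ["POI", "FAN1", "FAN2", "FAN3", "FAN4"]
working, protection = ring_paths(ring, "POI", "FAN2")
print(working)     # -> ['POI', 'FAN1', 'FAN2']
print(protection)  # -> ['POI', 'FAN4', 'FAN3', 'FAN2']
```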
This is the preferred topology where there are adjacent DWDM rings with shared physical routes. In this scenario physical rings can share common fibre pairs, which means that wavelength routing and counts within the shared links have dependencies on the physical attributes and traffic requirements of both rings. As with the standalone deployment option, some DNs will require degrees of more than two, and again inter-ring wavelength connectivity will be performed via the wavelength selective switches within the common DNs.
[Figure: shared-route ring topology – adjacent rings of FAN DNs sharing common fibre links]
These are used to extend ring or POI connectivity to one or more isolated FAN sites. There is a preference to
use two fibre pairs within a spur link to introduce diversity, but some spur links may be limited to one fibre
pair. Standard and compact ROADMs can both be used as the far DN in spur links, but the DN that connects the
spur to the ring must have at least three degrees, and therefore must be a standard ROADM. OLR nodes may
be required within the spur link when transmission fibre distances between adjacent ROADM nodes exceed the
optical reach.
[Figure: spur links – ring DNs at FAN and POI sites extending via fibre pairs to far DNs at isolated FAN sites]
Used only in the Bass Strait deployment, two point-to-point standalone links are deployed between the Long
Span ROADM nodes.
[Figure: Bass Strait deployment – two point-to-point standalone links between Long Span ROADM nodes joining the FAN DN chains on either side]
Managed Services are required to supplement the direct fibre and DWDM transport solutions during the roll-out
of the transit network. They may be used to provide a complete end-to-end solution or form part of a transport
service in combination with DWDM and/or direct fibre.
Where multiple devices at a site require a managed service, a solution is available to aggregate the traffic so that one managed service can be used instead of one per device.
Transport is required between FAN and POI sites for the Fibre Access Service. The basic transport using
Managed Services is a pair of fully redundant (1+1) point-to-point Ethernet services running at subrate, 1Gbps
or 10Gbps speeds.
[Figure: fibre access transit via Managed Services – the OLT at the FAN connects through a pair of Managed Services to the EAS/ECS pair at the POI]
Transport is required between FAN and POI sites, much as with the standard fibre access transit. A single
unprotected managed service will be deployed.
[Figure: temporary FAN transit – the OLT at a temporary FAN connects via a fibre pair to the long-term FAN and onwards via Managed Service to the EAS/ECS at the POI]
Wireless access transit is required between FAN and POI sites. As with the fibre access transit, the basic
transport using Managed Services is a pair of fully redundant (1+1) point-to-point Ethernet services running at
subrate, 1Gbps or 10Gbps speeds.
[Figure: wireless access transit via Managed Services – Wireless Aggregation Equipment at the FAN connects through a pair of Managed Services to the POI Aggregation Switches at the POI]
Direct fibre is the underlying building block of the Transport network domain. The following connectivity
scenarios are either fully or partially supported by direct fibre:
Direct Fibre for fibre access transit is limited by the interfaces supported on the OLT. The OLT only supports 10GBase-LR interfaces today, which limits direct fibre reach for fibre access to 10km. This is being upgraded to support 10GBase-ER interfaces in 2012, extending the reach to 40km.
The following direct fibre transit scenarios are supported for fibre access transit:
Direct fibre for both paths from the OLT back to the POI. This can be used when both fibre paths are
≤10km today, or ≤40km from mid-2012.
Direct fibre on the short path from the OLT to the POI, and direct fibre on the long path from the OLT to the nearest DWDM node. This can be used when the short path is ≤10km (≤40km from mid-2012) and the path to the nearest DWDM node is ≤10km (≤70km from mid-2012, as both the OLT and the DWDM nodes will support 10GBase-ZR).
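The reach rules above reduce to a lookup. The sketch below uses the distances quoted in the text (10km for LR, 40km for ER, 70km for ZR paths to a DWDM node) rather than vendor datasheet figures.

```python
# Reach limits as quoted in the text above, not vendor datasheet values.
REACH_KM = {"10GBase-LR": 10, "10GBase-ER": 40, "10GBase-ZR": 70}

def viable_interfaces(path_km: float) -> list:
    """Interface types whose quoted reach covers a direct fibre path."""
    return [name for name, km in REACH_KM.items() if path_km <= km]

# Example: a 32km OLT-to-POI path rules out LR but not ER or ZR.
print(viable_interfaces(32))  # -> ['10GBase-ER', '10GBase-ZR']
```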
Direct fibre is used between the OLTs housed in temporary FAN sites and the Managed Services interface (see Figure 40 and Figure 41). Both 1000Base-LX and 1000Base-ZX interfaces are supported, allowing for fibre distances of up to 70km.
Direct fibre is supported between the Wireless Access network eNodeBs and Microwave Transport and the local FAN site. 1000Base-LX and 1000Base-ZX interfaces are supported, allowing for fibre distances of up to 70km.
Microwave Backhaul is used as a Transport solution for areas where other backhaul is unfeasible or cannot be
deployed. It consists of routing capability to connect the end systems, and then the microwave transport itself.
Transport is required between the OLT of the Fibre domain and the nearest DWDM location. The transport using Microwave Backhaul is a pair of fully redundant (1+1) point-to-point Ethernet services running at multiples of 1Gbps or 10Gbps speeds over up to four microwave hops.
Transport is required between the AAS of the copper access domain and the nearest DWDM location. Again, the transport using Microwave Backhaul is a pair of fully redundant (1+1) point-to-point Ethernet services running at multiples of 1Gbps or 10Gbps speeds over up to four microwave hops.
Microwave backhaul may, for example, be used in conjunction with other transport types to provide redundant paths where required.
The majority of metropolitan FAN sites will also be POI sites, where Access Seekers can connect their network equipment into the nbn™ network – via the External Network-to-Network Interface (E-NNI) – to service all end-users hosted off the FANs associated with that POI.
In regional areas where end-user densities are lower, it will be more common for FAN sites and POI sites to be
in separate physical locations.
In the very few locations where the selected POI site cannot host the nbn™ aggregation equipment, the aggregation equipment will be located at a nearby site and connected via direct fibre such that the POI site can still be serviced.
For the HFC Access service, the Aggregation Domain will where necessary provide any-to-any mapping between
end users and POIs to resolve Optical Node boundary and AAR overlaps, using the SCN to carry traffic between
the required EAS and EFS pairs. For the Wireless Access service, the Aggregation Domain connects locally at
the POI to the Wireless Access Domain.
For the Satellite Access service, the central POI will connect to the Satellite DPC via the SCN network. The SCN
network will also connect the DPC to the Satellite RFGW.
This platform is used to aggregate a number of Access Domain network interfaces and provide traffic
forwarding towards other Aggregation Domain network elements such as the EFS, or another EAS. In support
of the HFC access network it may also interface to the SCN where Optical Node boundary and AAR overlaps
need to be resolved. In this case a dedicated EAS pair will be used. To support Access Domain network
interface resiliency, EAS are deployed in pairs.
The EAS node is connected to its paired EAS node, to the EFS nodes and SCN using N × 10Gbps direct fibre
links, or transport solutions where not co-located, where N is 1 to 5.
[Figure: EAS to EFS connectivity – each EAS connects to its paired EAS and to the EFS nodes via N × 10Gbps links]
The EAS offers redundant connectivity to the access domains via 1Gbps, 10Gbps or sub-rate bit rates. The links
can use any of the Transport Domain solutions for connectivity to remote locations (e.g. FAN sites), or direct
fibre for any local connections within the POI (e.g. the PDN-GW for wireless access).
This platform is used to aggregate a number of Access Seeker interfaces (on the External Network-Network
Interface, E-NNI) and fan out the traffic towards other Aggregation Domain network elements such as the
Ethernet Aggregation Switch, or another EFS. In support of the HFC access network it may also interface to the
SCN where Optical Node boundary (HFC Access Distribution Area) and AAR overlaps need to be resolved. To
support E-NNI redundancy, EFS are deployed in pairs.
The EFS node is connected to its paired EFS node, to the EAS nodes and SCN using N × 10Gbps direct fibre
links, where N is 1 to 5. Where the SCN is not co-located Transport solutions will be used.
[Figure: EFS to EAS connectivity – each EFS connects to its paired EFS and to the EAS nodes via N × 10Gbps links]
Depending on Access Seeker (AS) demand, EFS nodes may be equipped with a combination of 1Gbps, 10Gbps
and 100Gbps (available for future use) Ethernet interfaces. The following connectivity models are supported for
the AS E-NNI on the EFS nodes, referred to as the “E-NNI Mode”:
E-NNI Mode A: provides an N × 1Gbps or N × 10Gbps uplink to a single EFS node, where N is between 1
and 8 links.
Figure 45 E-NNI Mode A connection model from an Access Seeker Provider Edge (AS PE)
E-NNI Mode B: provides an N × 1Gbps or N × 10Gbps dual uplink with 4+4 protection to an EFS pair.
[Figure: E-NNI Mode B connection models – AS PEs dual-homed to an EFS/ECS pair with active and standby N × 1Gbps or N × 10Gbps uplinks]
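A small validator for the two connection models, sketched under the assumption that Mode B uses the same 1-to-8 link range as Mode A (the text states that range explicitly only for Mode A):

```python
def validate_enni(mode: str, links: int, speed_gbps: int) -> None:
    """Check an E-NNI uplink request against the connection models above.
    Mode A: N x 1Gbps or N x 10Gbps to a single EFS/ECS node.
    Mode B: the same bundle dual-homed to an EFS/ECS pair (active/standby).
    The 1..8 bound for Mode B is assumed, mirroring Mode A."""
    if mode not in ("A", "B"):
        raise ValueError("unknown E-NNI mode")
    if speed_gbps not in (1, 10):
        raise ValueError("E-NNI uplinks run at 1Gbps or 10Gbps")
    if not 1 <= links <= 8:
        raise ValueError("N must be between 1 and 8 links")

validate_enni("B", 4, 10)       # a 4 x 10Gbps dual uplink: accepted
try:
    validate_enni("A", 9, 10)   # too many links for a single uplink bundle
except ValueError as err:
    print(err)                  # -> N must be between 1 and 8 links
```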
This platform takes the functions of the EAS and EFS and combines them into one physical platform. It is used in the smaller POIs.
The ECS node is connected to its paired ECS using N × 10Gbps direct fibre links, where N is 1 to 5.
[Figure: ECS pair interconnection – the paired ECS nodes connect via N × 10Gbps links]
Like the EAS, the ECS offers redundant connectivity to the access domains via 1Gbps, 10Gbps or sub-rate bit
rates. The links can use any of the Transport Domain solutions for connectivity to remote locations (e.g. FAN
sites), or direct fibre for any local connections within the POI (e.g. the PDN-GW for wireless access).
The ECS has the same interfaces to Access Seekers as those on the EFS. See the section above on EFS Access
Seeker interfaces.
To support E-NNI redundancy, a POI site will contain one pair of EFSs or ECSs. The EFSs attach downstream to
the EASs, which aggregate connectivity across all Access Domain nodes associated with the POI site. The ECSs
combine the functionality of the EFSs and EASs.
There are two physical architecture types for Points of Interconnect sites, based on whether the EAS/EFS is
used, or the ECS:
Two Tier: this architecture type uses at least two pairs of nodes, an EAS pair (which can grow to four EAS
pairs, depending on the size of the POI site) and an EFS pair.
[Figure: Two Tier POI architecture – Wireless and Fibre Access Nodes connect to EAS pairs 1 to 4 (access aggregation), which fan out to an EFS pair serving Access Seekers AS 1 to AS n via E-NNI Mode A and Mode B]
One Tier: this uses a pair of Ethernet Combined Switches (ECSs), which is a single platform that combines
the EAS and EFS functions.
[Figure: One Tier POI architecture – Wireless and Fibre Access Nodes connect to an ECS pair (combined fanout and access aggregation) serving Access Seekers AS 1 to AS n via E-NNI Mode A and Mode B]
Whether the one tier or two tier architecture is used depends on the size of the Customer Serving Area (CSA).
A CSA is comprised of a number of access service areas: Fibre Serving Areas (FSAs; each served by an OLT)
and Wireless Serving Areas (WSAs; each served by a pair of PDN-GWs).
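Since the text states only that the ECS serves smaller POIs, any concrete selection rule needs an assumed size threshold. The sketch below makes that assumption explicit.

```python
def poi_architecture(access_nodes: int, small_poi_threshold: int = 100) -> str:
    """Pick a POI architecture type. The threshold is purely illustrative:
    the rules above say only that one tier (ECS) suits smaller POIs and
    two tier (EAS pairs plus an EFS pair) suits larger ones."""
    if access_nodes <= small_poi_threshold:
        return "one tier (ECS pair)"
    return "two tier (EAS pairs + EFS pair)"

print(poi_architecture(40))   # -> one tier (ECS pair)
print(poi_architecture(400))  # -> two tier (EAS pairs + EFS pair)
```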
The main component of the SCN is the Ethernet Trunking Switch (ETS). The LTSS and the HFC access network will utilise the SCN. The ETS provides the following connectivity:
ETS to ETS
ETS to Satellite RFGW
ETS to DPC
ETS to ECS (Satellite Central POI)
ETS to EAS
ETS to EFS
The ETS to ETS connectivity is made via N x 10GE connections, where N = 8 for ETS located in NSW and VIC and N = 4 at all other sites. These nodes will be connected in pairs, forming a ladder topology, and will allow:
[Figure: ETS ladder topology – two ETS pairs interconnected by N x 10GE links]
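The state-dependent sizing of the ETS-to-ETS rungs reduces to a one-line rule, sketched here:

```python
def ets_rung_capacity_gbps(state: str) -> int:
    """Capacity of the N x 10GE bundle between paired ETS nodes:
    N = 8 for ETS located in NSW and VIC, N = 4 at all other sites."""
    n = 8 if state.upper() in ("NSW", "VIC") else 4
    return n * 10

print(ets_rung_capacity_gbps("NSW"))  # -> 80
print(ets_rung_capacity_gbps("SA"))   # -> 40
```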
The SCN acts as a transit network to allow the 10 Satellite gateway sites to reach either of the two DPC sites.
The ETS to Satellite RFGW connectivity will be made via up to 2 x 10GE connections. These nodes will be
connected in pairs, forming a ladder topology, and will allow:
[Figure: ETS to Satellite RF Gateway connectivity – ETS pair connected to the SAR pair at the Satellite Gateway via 8 x 10GE links]
The ETS to SCS connectivity will be made via 8 x 10GE connections. These nodes will be connected in pairs,
forming a ladder topology, and will allow:
[Figure: ETS to Service Control Sub-system connectivity – ETS pair connected to the Satellite Control Sub-system at the DPC site via 8 x 10GE links]
With the Centralised Satellite POI, the SCN acts as the transit network to allow each of the two DPC sites
access to the ECS, which is located in the Centralised Satellite POI. With the Centralised Satellite POI, each of
the two DPC sites connects to the Centralised Satellite POI ECS node.
In the Centralised Satellite POI, the ETS to POI connectivity is between the ETS and the ECS node using N x 10GE optical fibre links. Here N ≤ 8. These nodes are connected in pairs, forming a ladder topology, and allow:
[Figure: ETS to ECS connectivity – ETS pair connected to the ECS pair via N x 10GE links]
For the HFC Access service, the Aggregation Domain will where necessary provide any-to-any mapping between
end users and POIs to resolve HFC Optical Node boundary (HFC Access Distribution Area) and AAR overlaps.
This requires one EAS pair to be able to connect to multiple EFS pairs. The SCN acts as a transit network between the EAS and EFS pairs.
In the selected POI, the ETS to EAS connectivity is between the ETS and the EAS node using N x 10GE optical fibre links. Here N ≤ 8. These nodes are connected in pairs, forming a ladder topology, and allow:
[Figure: ETS to EAS connectivity – ETS pair connected to the EAS pair via N x 10GE links]
For the HFC Access service, the Aggregation Domain will where necessary provide any-to-any mapping between
end users and POIs to resolve Optical Node boundary (HFC Access Distribution Area) and AAR overlaps.
This requires one EAS pair to be able to connect to multiple EFS pairs. The SCN acts as a transit network between the EAS and EFS pairs at designated POIs to maintain the AAR to POI mapping.
In POIs where the ETS node and EFS pairs are co-located, the ETS to EFS connectivity is between the ETS and the EFS nodes using N x 10GE optical fibre links. Where the ETS node and EFS pairs are not co-located, a transport solution will be used to provide the N x 10GE connectivity. Here N ≤ 8. These nodes are connected in pairs, forming a ladder topology, and allow:
[Figure: ETS to EFS connectivity – ETS pair connected to the EFS pair via N x 10GE links]
Appendix A Definitions
Acronym / Term – Definition
AS – Access Seeker
DA – Distribution Area
DN – DWDM Node
FN – Fibre Network
MPT – Multiport
MR – Management Router
RF – Radio Frequency
SP – Service Provider
Access Seeker – A Customer of nbn, providing one or more public telecommunications services whose provision consists wholly or partly in the transmission and routing of signals on a telecommunications network. Access Seekers may be retail or wholesale Service Providers.
End User – A ‘User’ or ‘End User’ is the person or persons who subscribe to telecommunications services provided by Retail Service Providers.
Optical Distribution Network – In the PON context, a tree of optical fibres in the access network, supplemented with power or wavelength splitters, filters or other passive optical devices.
Point of Interconnect – Designated point within the nbn™ network for Access Seeker connection.
Retail Service Provider – Retail Service Providers are Access Seekers who purchase the Ethernet Bitstream service from nbn and on-sell the service to their End Users.
a. the Network Design Rules, in accordance with clauses 1D.7.1 and 1D.7.4;
c. an Endorsed Network Change, in accordance with the process described in clauses 1D.8 to 1D.12, or a
Network Change as otherwise determined or permitted by the ACCC, including in any Regulatory
Determination made by the ACCC.
ii) improves the performance or functionality of the Relevant Assets and results in the same or lower
Total Cost of Ownership; or
iv) is reasonably necessary to establish and maintain the quality, reliability and security of the Relevant
Assets or the supply of the Product Components; or
vi) is required in order to comply with the Statement of Expectations, or a legal, policy, regulatory or
administrative requirement, or any requirement of the Shareholder Ministers; or
vii) relates to the maintenance, replacement or re‐routing of assets that comprise the NBN Network
that has a substantial primary purpose other than the augmentation or extension to such network
(e.g. straight swap out of assets for assets as part of routine maintenance); or
viii) subject to clause 1D.7.3(a), is the subject of an assessment by NBN Co (made at the time NBN Co
becomes aware of the need for such variation, change, augmentation or enhancement) that the
estimated Capital Expenditure incurred in connection with the relevant variation, change,
augmentation or enhancement is likely to be less than the Minor Expenditure Limit; or
ix) is required to address an urgent and unforeseen network issue where it is necessary that the
variation, change, augmentation or enhancement is operational within 6 months of NBN Co
becoming aware of the urgent and unforeseen network issue and:
(A) the event or circumstance causing the required variation, change, augmentation or
enhancement was not reasonably foreseeable by, and was beyond the reasonable control of,
NBN Co; and
b. NBN Co must ensure that each Permitted Variation is designed, engineered and constructed with the
objective of achieving the lowest Total Cost of Ownership.
i. a Permitted Variation;
iv. any legal, policy, regulatory or administrative requirement, or any requirement of the Shareholder
Ministers, which has the effect of varying the design scope in clause 1D.7.1.
Number (old) – Number (new) – Section – Change – SAU clause
2.7.6 – 2.7.6 – Wireless Access Planning Hierarchy – Change to allow for capacity growth – 1D.7.2(a)(iv)
2.8.1 – 2.8.1 – Satellite NTD – Inclusion of new NTD types for Business Satellite Services – 1D.7.2(a)(i) and 1D.7.4(a)(iii)