Planning Guide
3HE-18161-AAAB-TQZZA
Issue 1
June 2022
© 2022 Nokia.
Use subject to Terms available at: www.nokia.com/terms
Legal notice
Nokia is committed to diversity and inclusion. We are continuously reviewing our customer documentation and consulting with standards
bodies to ensure that terminology is inclusive and aligned with the industry. Our future customer documentation will be updated accordingly.
This document includes Nokia proprietary and confidential information, which may not be distributed or disclosed to any third parties without
the prior written consent of Nokia.
This document is intended for use by Nokia’s customers (“You”/”Your”) in connection with a product purchased or licensed from any
company within Nokia Group of Companies. Use this document as agreed. You agree to notify Nokia of any errors you may find in this
document; however, should you elect to use this document for any purpose(s) for which it is not intended, You understand and warrant that
any determinations You may make or actions You may take will be based upon Your independent judgment and analysis of the content of
this document.
Nokia reserves the right to make changes to this document without notice. At all times, the controlling version is the one available on
Nokia’s site.
NO WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY OF
AVAILABILITY, ACCURACY, RELIABILITY, TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE, IS MADE IN RELATION TO THE CONTENT OF THIS DOCUMENT. IN NO EVENT WILL NOKIA BE LIABLE FOR ANY
DAMAGES, INCLUDING BUT NOT LIMITED TO SPECIAL, DIRECT, INDIRECT, INCIDENTAL OR CONSEQUENTIAL OR ANY LOSSES,
SUCH AS BUT NOT LIMITED TO LOSS OF PROFIT, REVENUE, BUSINESS INTERRUPTION, BUSINESS OPPORTUNITY OR DATA
THAT MAY ARISE FROM THE USE OF THIS DOCUMENT OR THE INFORMATION IN IT, EVEN IN THE CASE OF ERRORS IN OR
OMISSIONS FROM THIS DOCUMENT OR ITS CONTENT.
Copyright and trademark: Nokia is a registered trademark of Nokia Corporation. Other product names mentioned in this document may be
trademarks of their respective owners.
© 2022 Nokia.
Disclaimer
Open Source Software and Red Hat Enterprise Linux Operating System
In case:
• (i) any “Open Source Software and Red Hat Enterprise Linux Operating System” (“FOSS & RHEL”) is packaged separately or integrated
with any Nokia Software and to which third party license obligations apply; or,
• (ii) any FOSS & RHEL is directly licensed by Customer under a separate license or subscription agreement, and such FOSS & RHEL is
interacting or interoperating with any Nokia Software or Product:
information will be available either in the FOSS & RHEL itself or on the website from which the download is available indicating the license
under which such FOSS was released, and containing required acknowledgements, legends and/or notices.
It is hereby acknowledged and agreed by the Parties that any FOSS & RHEL is distributed on an “as is” basis under the respective FOSS &
RHEL license terms. Nokia will not warrant nor will be liable for, and will not defend, indemnify, or hold Customer harmless for any claims
arising out of, or in any case related to FOSS & RHEL and their use (or inability to use) by the Parties. This includes, but is not restricted to,
any and all claims for direct, indirect, incidental, special, exemplary, punitive or consequential damages in connection with FOSS & RHEL or
its components (whether included in the Nokia Software or not) and their use or inability to use. Also, this includes claims for or in
connection with the title in, the non-infringement of or interferences and damages caused to Customer or third parties by FOSS & RHEL.
CUSTOMER SHALL HAVE NO RIGHT TO RECEIVE FROM NOKIA ANY CARE (MAINTENANCE & SUPPORT SERVICES) ON FOSS &
RHEL LICENSED BY CUSTOMER UNDER A SEPARATE LICENSE AGREEMENT OR SUBSCRIPTION CONTRACT AND TO WHICH
THIRD PARTY LICENSE OBLIGATIONS APPLY WHETHER OR NOT IT INTERACTS WITH ANY NOKIA SOFTWARE OR PRODUCT.
The above shall also apply in case Customer requires - and Nokia accepts under the terms of this Disclaimer to use its reasonable
commercial effort to do so - certain installation services on FOSS & RHEL as directly licensed by Customer under a separate license or
subscription agreement; and, such FOSS & RHEL are interacting or interoperating with a Nokia Software or Product. For sake of clarity in
such a case the following shall also apply:
• Before starting any installation service, Customer must instruct Nokia to start such installation and must confirm in writing to Nokia that its
FOSS & RHEL license or subscription contract (for RHEL: Red Hat Enterprise Agreement) with Customer includes the right to use of the
specific FOSS & RHEL and the related support for all platforms running the FOSS & RHEL; that such subscription and support contract
is in force (not expired) and allows such installation activities.
• Nokia will not warrant nor will be liable for any cost, expense, damage, and will not defend, indemnify, or hold Customer or any third party
harmless for any claims arising out of, or in any case related to FOSS & RHEL (and in connection with the installation activities of such
FOSS & RHEL) and their use (or inability to use) by the Customer or by any third party, following the installation of such FOSS & RHEL.
This includes, but is not restricted to, any and all claims for direct, indirect, incidental, special, exemplary, punitive or consequential
damages in connection with FOSS & RHEL or its components and their use or inability to use.
• Nokia will not provide nor will have any liability or obligation to provide any support, maintenance, care service, warranty or indemnity
with respect to any (installed) FOSS & RHEL as licensed by the Customer under a separate license agreement or subscription contract.
Any Care service (maintenance and support service) on FOSS & RHEL licensed by Nokia as packaged separately or integrated with any
Nokia Software may be made available by Nokia under specific contractual terms to be agreed upon by the Parties.
Contents
6 Security .......................................................................................................................................................113
6.1 Introduction......................................................................................................................................113
6.2 Securing the NSP............................................................................................................................113
6.3 Operating system security for NSP stations ....................................................................................114
6.4 Communication between the NSP and external systems................................................................114
6.5 NSP Port Communications..............................................................................................................116
6.6 NSP Kubernetes Platform Communications ...................................................................................125
6.7 Securing the NFM-P........................................................................................................................127
6.8 NFM-P port information ...................................................................................................................130
6.9 FTP .................................................................................................................................................143
6.10 NFM-P firewall and NAT rules .........................................................................................................144
8 Appendix A .................................................................................................................................................179
8.1 Storage-layer I/O performance tests ...............................................................................................179
Scope
The scope of this document is limited to the planning of an NSP deployment. The guide provides an
overview of the NSP product and the components that comprise an NSP deployment, including the
NFM-P. The guide also describes operating-system specifications, and system resource and
network requirements for successful NSP system deployment. Scale limits and general information
about platform security recommendations are also included.
Safety information
For your safety, this document contains safety statements. Safety statements are given at points
where risks of injury to personnel, damage to equipment, or disruption of operation may exist. Failure
to follow the directions in a safety statement may result in serious consequences.
Document support
Customer documentation and product support URLs:
• Documentation Center
• Technical support
How to comment
Documentation feedback
• Documentation Feedback
software upgrades, service activation tests, service fulfillment with pre- and post-deployment
workflows, and mass migration of services from one tunnel to another.
Baseline Analytics
Baseline Analytics monitors network traffic to establish baselines and flag anomalous traffic
patterns. Traffic patterns can be saved for analysis and comparison, and for automated corrective
action by other NSP applications.
Simulation tool
The Simulation tool is a traffic-engineering function that network engineers can use for network
design or optimization. You can simulate failures in an existing network that you import to the tool
from the IPRC.
Auxiliary database
An auxiliary database provides scalable, high-throughput storage capacity for specific data
collection functions. An auxiliary database can be deployed on one station, or distributed in a
cluster of at least three member nodes.
Note: BIOS CPU frequency scaling must be disabled on the NFM-P auxiliary database
platform.
Note: The NFM-P main server and main database support collocated installation on one
station, in addition to distributed installation on separate stations.
Note: IPv6 connectivity is supported between NFM-P components, with the following
exceptions:
• an NFM-P system that manages CMM or eNodeB NEs
• an NFM-P system that includes PCMD auxiliary servers
• NFM-P integration with another EMS
Note: An NE can be managed by only one NFM-P system; multiple NFM-P systems
managing the same NE is not supported, and can cause unexpected behavior.
concurrently; see 1.5.2 “NFM-P system redundancy” (p. 23) for information about the NFM-P
redundancy model that includes statistics auxiliary servers.
A statistics auxiliary server can be designated as Preferred, Reserved, or Remote for a main
server, depending on the primary or standby role of the main server. The designations enable
geographically redundant fault tolerance.
For PM statistics collection from eNodeB NEs, NTP is recommended for synchronizing the
NE and the auxiliary server to ensure successful statistics retrieval.
Note: NFM-P statistics auxiliary server deployment is supported only in an NFM-P system
that has a distributed main server and main database.
− call-trace auxiliary server—collects call-trace files from CMM and CMG NEs; up to two
active call-trace auxiliary servers can be deployed to scale up the call-trace data collection
In a redundant NFM-P deployment, each active call-trace auxiliary server is assigned to a
distributed or collocated redundant main server; the main servers synchronize the collected
call-trace data. Up to two call-trace auxiliary servers in a redundant deployment can collect
call-trace data; fault tolerance is achieved by configuring each redundant NFM-P call-trace
auxiliary server as the Preferred auxiliary server of the local primary main server, and as the
Reserved auxiliary server of the standby main server. Only one call-trace auxiliary server in a
redundant pair collects call-trace data at any time; the auxiliary server synchronizes the
collected data with the peer auxiliary server.
See 1.5.2 “NFM-P system redundancy” (p. 23) for information about the NFM-P redundancy
model that includes call-trace auxiliary servers.
− PCMD auxiliary server—collects PCMD data streams from SROS-based mobile gateways
that are configured to stream per-call measurement data to the NFM-P. The auxiliary server
uses the data to generate CSV files for transfer to a third-party application for post-
processing. PCMD auxiliary server deployment is supported in a distributed or collocated
NFM-P system, and is supported in a redundant deployment.
1.3.2 Databases
An NSP deployment has multiple databases. The IPRC has a Neo4j database for network topology
information. The nspOS component has a PostgreSQL database for policy management and
common application data, and a Neo4j database for the Map Server topology data. Cross-domain
resource control contains a Neo4j database for network topology information.
A Neo4j database contains a graphical representation of the network topology and its elements in a
multi-layer graph. The installation of the Neo4j database is customized for, and must be dedicated
to, the NSP. Data redundancy and replication within an HA cluster are managed by the Neo4j
instances.
The PostgreSQL database contains non-topological NSP information, including policies, templates,
and nspOS common model data. PostgreSQL is an open source database application.
PostgreSQL database redundancy is managed by the role-manager. In a redundant configuration of
the NSP, the active NSP cluster hosts the primary PostgreSQL database. The standby NSP cluster
hosts the standby PostgreSQL database.
In an HA NSP cluster deployment, one PostgreSQL database in the NSP cluster is selected as the
primary database and the other in the NSP cluster is standby. If the active pod fails, then the
standby member is promoted to primary database. In a redundant HA configuration of NSP, the
standby datacenter databases are updated as standby databases.
In an NSP cluster deployment where the customer provides the orchestration layer, each NSP
cluster has one PostgreSQL database instance. Data replication for the PostgreSQL database must
be provided by the storage layer.
Note: Nokia does not support any PostgreSQL database configuration that deviates from the
NSP installation procedure.
Nokia does not support direct customer access to the Neo4j and PostgreSQL databases.
NOTICE
The NSP provides functionality using browser-based NSP applications. These applications require
that WebGL be enabled and use standard REST security mechanisms for authentication and
authorization. All NSP applications are HTML-5 based and are supported on the following web
browsers:
• Latest version of Google Chrome
• Latest version of Chromium Edge for most applications (see the NSP Release Notice for
restrictions)
• Latest version of Mozilla Firefox
Note: You cannot switch browsers between clients or applications. You must always use the
system default browser.
Note: In order for the Safari web browser to open the Analytics application, you must ensure
that the following Safari privacy settings are configured, if present in your browser version:
• Safari Preferences page, Cookies And Website Data—Always Allow
• Prevent cross-site tracking—disabled
Note: If you are using Chrome or Firefox on Windows 8.1 or Windows Server 2012, it is
recommended that you enable ClearType Text for optimal viewing of fonts. To enable, open
the Display settings in Windows Control Panel and enable the Turn on ClearType parameter
under the Adjust ClearType text settings.
Note: The NSP components support localized language settings using predefined strings, and
do not translate data to different languages.
1.3.5 APIs
The NSP applications provide northbound REST and RESTCONF APIs with Swagger-compliant
documentation. Each northbound API supports queries, service-creation requests, and other
functions. See the Network Developer Portal for information.
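For illustration only, the following sketch shows how an OSS client might query an NSP RESTCONF API over HTTPS with a bearer token; the resource path is a placeholder, and the authoritative resource paths and authentication details are published in the Swagger documentation on the Network Developer Portal:
# Hypothetical example; <NSP-server> and the resource path are placeholders, and
# $TOKEN holds an access token obtained as described in the API documentation.
curl -k -H "Authorization: Bearer $TOKEN" \
  "https://<NSP-server>/restconf/data/example:network-inventory"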
Production deployment
Figure 1-1, “NSP system, production deployment” (p. 17) shows a typical three-node NSP cluster
and a number of external RPM-based components. Depending on the specified NSP deployment
profile, a cluster may have additional nodes.
Figure 1-1 NSP system, production deployment
Lab deployment
A one-node NSP cluster is used in a lab deployment, as shown in Figure 1-2, “NSP system, lab
deployment” (p. 18).
Figure 1-2 NSP system, lab deployment
Note: A redundant NSP deployment supports classic HA and fast-HA NFM-T deployment; see
the NSP 22.6 Release Notice for compatibility information.
In a redundant deployment of NSP that includes the NFM-P and NFM-T, the primary NSP cluster
operates independently of the primary NFM-P and NFM-T; if the NSP cluster undergoes an activity
switch, the new primary cluster connects to the primary NFM-P and NFM-T. Similarly, if the NFM-P
or NFM-T undergoes an activity switch, the primary NSP cluster connects to the new primary
NFM-P or NFM-T.
Figure 1-3, “Redundant NSP deployment including NFM-P and NFM-T” (p. 18) shows a fully
redundant NSP deployment that includes the NFM-P and NFM-T.
Figure 1-3 Redundant NSP deployment including NFM-P and NFM-T
VSR-NRC redundancy
The VSR-NRC also supports redundant deployment. A standalone NSP can be deployed with a
standalone or redundant VSR-NRC. When NSP is installed with a redundant VSR-NRC, if the
communication channel between NSP and primary VSR-NRC fails, then the NSP switches
communication to the standby VSR-NRC.
In a DR deployment of the NSP, a redundant deployment of the VSR-NRC is required. The primary
NSP communicates with NEs through the primary VSR-NRC; if the communication channel to the
primary VSR-NRC fails, the primary NSP can switch to the standby VSR-NRC.
When an activity switch takes place between redundant NSP clusters, the new active NSP cluster
communicates with IP NEs through its corresponding VSR-NRC instance.
Figure 1-6, “Distributed standalone NFM-P deployment” (p. 21) is an example of a standalone
NFM-P deployment in which the NFM-P main server and main database are hosted on separate
stations.
Figure 1-6 Distributed standalone NFM-P deployment
The following illustrates a typical deployment of NFM-P in standalone mode when the NFM-P
server and NFM-P database functions are collocated and an NFM-P auxiliary call-trace server is
used. The NFM-P auxiliary statistics server is not supported in this configuration.
Figure 1-7 NFM-P standalone deployment – collocated NFM-P server and NFM-P database
configuration and NFM-P auxiliary call-trace server
The following illustrates a typical deployment of NFM-P in standalone mode when the NFM-P
server and NFM-P database functions are in a distributed configuration and NFM-P auxiliary
servers are used. In this configuration, there can be up to three active NFM-P auxiliary statistics
servers, or the statistics servers can be configured redundantly, and there can be one or two NFM-P
auxiliary call-trace servers collecting call-trace data from the network.
Figure 1-8 NFM-P standalone deployment - distributed NFM-P server and NFM-P database
configuration and NFM-P auxiliary servers
The following illustrates a deployment of NFM-P in standalone mode when the NFM-P server and
NFM-P database functions are in a distributed deployment and NFM-P auxiliary servers are
installed with statistics collection using the NFM-P auxiliary database. In this configuration, there
can be up to three preferred NFM-P auxiliary statistics servers. There can be one or two NFM-P
auxiliary call-trace servers collecting call-trace data from the network. The NFM-P auxiliary
database comprises either a single instance or a cluster of at least three instances.
Figure 1-9 NFM-P standalone deployment - distributed NFM-P server and NFM-P database
configuration and NFM-P auxiliary servers with statistics collection using the NFM-P
auxiliary database
For bare metal installations, the NFM-P server, NFM-P auxiliary server, NSP Flow server, NSP Flow
Collector Controller, NFM-P auxiliary database, NSP analytics server, and NFM-P database are
supported on specific Intel x86 based HP and Nokia AirFrame stations.
The NFM-P client and client delegate server software may be installed on stations running different
operating systems from the NFM-P server, NFM-P auxiliary, NFM-P auxiliary database, NSP Flow
Collector, NSP Flow Collector Controller, NSP analytics server, and NFM-P database. The NFM-P
client can be installed on RHEL 7 server x86-64, Windows, or Mac OS, whereas the NFM-P client
delegate server can be installed on RHEL 7 server x86-64, Windows Server 2012 R2, Windows
Server 2016, or Windows Server 2019. See 3.5 “Third party software for NFM-P single-user client
or client delegate server” (p. 62) for operating system specifics.
All NFM-P stations in the NFM-P management complex must maintain consistent and accurate
time. The RHEL chronyd service is strongly recommended as the time-synchronization mechanism.
Redundancy between the NFM-P server and database applications is used to ensure that visibility of
the managed network is maintained when one of the following failure scenarios occurs:
• Loss of physical network connectivity between NFM-P server and/or NFM-P database and the
managed network
• Hardware failure on station hosting the NFM-P server and/or NFM-P database software
component
NFM-P supports redundancy of the NFM-P server and NFM-P database components in the
following configurations:
• NFM-P server and NFM-P database collocated configuration
• NFM-P server and NFM-P database distributed configuration
The following illustrates an NFM-P redundant installation when the NFM-P server and NFM-P
database components are installed in a collocated configuration.
The following illustrates an NFM-P redundant installation when the NFM-P server and NFM-P
database components are located on different stations in a distributed configuration.
auxiliary statistics servers and up to two NFM-P auxiliary call-trace server stations configured in
each geographic location. The Preferred/Reserved/Remote role of an NFM-P auxiliary statistics
server depends on, and is configured according to, which NFM-P server is active. When more than
one auxiliary statistics server is active, local redundancy (Preferred/Reserved) of the auxiliary
statistics servers must be used in conjunction with geographic redundancy, with the same number
of auxiliary statistics servers deployed in each geographic site. The NFM-P auxiliary statistics
servers in the opposite geographic location are configured as Remote. In this scenario,
if one of the NFM-P auxiliary statistics servers for the active NFM-P server were no longer
available, the active NFM-P server would use the reserved NFM-P auxiliary statistics server in the
same geographic location to collect statistics. Figure 1-13, “NFM-P distributed server/database
redundant deployment with redundant NFM-P auxiliaries using the NFM-P auxiliary database for
statistics collection” (p. 28) shows the same redundant configuration but with statistics collection
using a geographically redundant NFM-P auxiliary database. Latency between geographic sites
must be less than 200 ms.
Figure 1-12 NFM-P distributed server/database redundant deployment with redundant NFM-P
auxiliaries that crosses geographic boundaries
Figure 1-13 NFM-P distributed server/database redundant deployment with redundant NFM-P
auxiliaries using the NFM-P auxiliary database for statistics collection
Further information about NFM-P redundancy can be found in the NSP NFM-P User Guide.
Requirements:
• The operating systems installed on the primary and standby NFM-P server/database must be of
the same versions and at the same patch levels.
• The layout and partitioning of the disks containing the NFM-P software, the Oracle software and
the database data must be identical on the active and standby NFM-P server/database.
• The machine which will be initially used as the active NFM-P server/database must be installed
or upgraded before the machine that will initially be used as the standby.
• The stations hosting the NFM-P software should be connected in a way to prevent a single
physical failure from isolating the two stations from each other.
• Stations that host the NFM-P server/database software must be configured to perform name
service lookups on the local station before reverting to a name service database located on the
network, such as NIS, NIS+, or DNS. A root user must inspect and modify the /etc/nsswitch.conf
file to ensure that “files” is the first entry specified for each database listed in the file; see the
example excerpt after this list.
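The following excerpt shows the intended form of /etc/nsswitch.conf; the databases shown are examples only, and the key point is that “files” appears first on each line:
hosts:      files dns
passwd:     files nis
group:      files nis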
Requirements:
• The operating systems installed on the primary and standby NFM-P server as well as the
primary and standby NFM-P database must be of the same versions and at the same patch
levels.
• The layout and partitioning of the disks containing the NFM-P software, the Oracle software and
the database data must be identical on the primary and standby NFM-P database.
• The machines which are intended to be used as primary NFM-P server and NFM-P database
should be installed on the same LAN as one another with high quality network connectivity.
• The machines which are intended to be used as standby NFM-P server and standby NFM-P
database should be installed on the same LAN as one another with high quality network
connectivity.
• The pair of stations to be used as active NFM-P server and NFM-P database should be
connected to the pair of stations to be used as standby NFM-P server and NFM-P database in a
way that will prevent a single physical failure from isolating the two station pairs from each other.
• Stations that host the NFM-P server and NFM-P database software must be configured to
perform name service database lookups on the local station before reverting to a name service
located on the network such as NIS, NIS+, or DNS. A root user must inspect and modify the /etc/
nsswitch.conf file to ensure that files is the first entry specified for each database listed in the file.
Redundancy with distributed NFM-P server and NFM-P database and NFM-P auxiliary
servers
In addition to the rules stated above for distributed NFM-P server and NFM-P database, the
following rules apply:
• The operating systems installed on the NFM-P auxiliary servers must be of the same versions
and patch levels as the NFM-P server and NFM-P database stations.
• If collecting statistics using the NFM-P auxiliary database, the operating systems installed on the
NFM-P auxiliary database stations must be of the same versions and patch levels as the NFM-P
server, NFM-P database, and NFM-P auxiliary statistics server stations.
• NFM-P auxiliary servers are intended to be on the same HA network as the NFM-P server and
NFM-P database. NFM-P auxiliary statistics, call-trace, and PCMD servers are intended to be
geographically collocated with the active and standby locations of the NFM-P server and NFM-P
database. The NSP Flow Collector typically resides in the managed network, closer to the
network elements.
• When the NFM-P auxiliary database is deployed in a cluster of at least three separate instances,
it can tolerate a single instance failure with no data loss. All NFM-P auxiliary database nodes in
the same cluster must be deployed in the same geographic site, with less than 1 ms of latency
between the nodes. A second cluster can be deployed to implement geographic redundancy and
must contain the same number of nodes as the primary cluster.
• When using more than one active NFM-P auxiliary statistics server in a geographic (greater than
1 ms latency) configuration, the active and reserved servers for a given NFM-P server must reside
in the same geographic site. The auxiliary statistics servers in the opposite geographic site
would be configured as Remote.
• Stations that host the NFM-P auxiliary server software must be configured to perform name
service database lookups on the local station before reverting to a name service database
located on the network such as NIS, NIS+, or DNS. A root user must inspect and modify the /etc/
nsswitch.conf file to ensure that files is the first entry specified for each database listed in the file.
KVM configuration
When you configure the KVM, set the parameters listed in the following table to the required values.
Parameter Value
Disk Controller type virtio
Storage format raw
Cache mode none
I/O mode native
NIC device model virtio
Hypervisor type kvm
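As a point of reference, the following virt-install sketch shows one way these values can be applied when a KVM guest is defined; the VM name, resource sizes, image path, bridge name, and OS variant are illustrative only, and the authoritative procedure is in the NSP Installation and Upgrade Guide:
virt-install --name nsp-node1 --virt-type kvm \
  --vcpus 24 --memory 65536 \
  --disk path=/var/lib/libvirt/images/nsp-node1.img,size=900,format=raw,bus=virtio,cache=none,io=native \
  --network bridge=br0,model=virtio \
  --import --os-variant rhel7.9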
Hypervisor
KVM is the only supported hypervisor for an OpenStack environment. For information about the
supported KVM hypervisor versions, see 2.1.3 “KVM virtualization” (p. 31).
The required CPU and memory resources must be reserved and dedicated to each guest OS, and
cannot be shared or oversubscribed. You must set the following parameters to 1.0 in the
OpenStack Nova configuration on the control node, or on each individual compute node that may
host an NSP VM (see the sample nova.conf excerpt after this list):
• cpu_allocation_ratio
• ram_allocation_ratio
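As an illustration, the settings could appear as follows in nova.conf on the affected nodes; the file location and configuration section can vary between OpenStack distributions:
[DEFAULT]
cpu_allocation_ratio = 1.0
ram_allocation_ratio = 1.0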
HyperThreading
The usage of CPUs with enabled HyperThreading must be consistent across all compute nodes. If
the CPUs do not support HyperThreading, you must disable HyperThreading at the hardware level
on each compute node that may host an NSP VM.
CPU pinning
CPU pinning is supported, but may restrict some OpenStack functions such as migration.
Availability zones/affinity/placement
Nokia does not provide recommendations for configuring VM placement in OpenStack.
Migration
The OpenStack environment supports only regular migration; live migration is not supported.
Networking
Basic Neutron functions using Open vSwitch with the ML2 plugin are supported in an NSP
deployment, as is the use of OpenStack floating IP addresses.
Storage
All storage must meet the throughput and latency performance criteria in the response to your NSP
Platform Sizing Request.
VM storage
The VM storage must be persistent block (Cinder) storage, and not ephemeral. In order to deploy
each VM, you must create a bootable Cinder volume. The volume size is indicated in the response
to your NSP Platform Sizing Request.
Flavors
Flavors must be created for each Station Type described in the response to your NSP Platform
Sizing Request.
Firewalls
You can enable firewalls using OpenStack Security Groups, or on the VMs using the firewalld
service, except as noted. If firewalld is enabled, an OpenStack Security Group that allows all
incoming and outgoing traffic must be used.
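For example, an allow-all Security Group of the kind described above could be created with the OpenStack CLI as sketched below; the group name is illustrative, and newly created groups already permit all egress traffic by default:
openstack security group create nsp-allow-all
openstack security group rule create --ingress --ethertype IPv4 --protocol any nsp-allow-all
openstack security group rule create --ingress --ethertype IPv6 --protocol any nsp-allow-all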
Note: The system clocks of the NSP components must always be closely synchronized. The
RHEL chronyd service is strongly recommended as the time-synchronization mechanism to
engage on each NSP component during deployment.
Only one time-synchronization mechanism can be active in an NSP system. Before you
enable a service such as chronyd on an NSP component, you must ensure that no other time-
synchronization mechanism, for example, the VMWare Tools synchronization utility, is
enabled.
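One way to confirm that chronyd is the active time source on a station is to check the service and its synchronization status with standard RHEL commands, for example:
systemctl status chronyd
chronyc tracking
chronyc sources -v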
Virtual Machine Version 11 or above must be used. The deployer host OVA file provided in the NSP
software bundle is built using Virtual Machine Version 14.
See the following table for additional Virtual Machine setting requirements:
The platform requirements for NSP cluster VMs and VMs that host additional components depend
on, but are not limited to, the following factors:
• number of managed LSPs and services
• number of managed NEs
• number of MDM learned services
• number of simultaneous operator and API sessions
• expected numbers of:
− flows
− monitored NEs
− AS
Note: The NSP cluster VMs require sufficient available user IDs to create system users for
NSP applications.
Note: The disk layout and partitioning of each VM in a multi-node NSP cluster, including DR
deployments, must be identical.
Note: In Release 22.6, the Skylake microarchitecture with a CPU speed of less than 2.4 GHz
is supported for NSP deployments that do not use the Multi-layer Discovery and Visualization
and Multi-layer Control Coordination installation options.
Defined CPU and Memory resources must be reserved for the individual Guest OSs and cannot be
shared or oversubscribed. This includes individual vCPUs which must be reserved for the VM.
Provisioned CPU resources are based on threaded CPUs. The NSP Platform Requirements specify
the minimum number of vCPUs to be assigned to each VM. Nokia recommends configuring all of
the vCPUs of a VM on one virtual socket.
A host system requires CPU, memory, and disk resources of its own, in addition to the resources
allocated to the NSP VMs. Contact the hypervisor provider for requirements and best practices
related to the hosting environment.
You must provide information about the provisioned VM to Nokia support. You can provide the
information through read-only hypervisor access, or make the information available upon request.
Failure to provide the information may adversely affect NSP support.
Table 2-5, “Minimum NSP cluster IOPS requirements” (p. 35); see Chapter 8, “Appendix A” for
information about how to determine the current storage-layer performance.
WARNING
The NSP Sizing Tool specifies the overall vCPU, memory and disk space requirements; typically,
however, a Kubernetes worker node is a VM with the following specifications:
• vCPU: 24
• Memory: 64 GB
• Disk: 900 GB
A Basic NSP cluster deployment requires 32 vCPU and 80 GB of memory and supports the NSP
Platform feature package plus one additional feature package.
A lab/trial NSP cluster deployer node requires a minimum of 2 vCPU, 4 GB memory and 250 GB
disk.
The Simulation Tool NSP deployment must be deployed as type “lab” or “ip-mpls-sim”. Type “ip-
mpls-sim” uses higher minimum resources for CPU and memory. The Simulation Tool NSP cluster
deployer requires a minimum of 2 vCPU, 4 GB memory and 250 GB disk.
Note: The Kubernetes node count represents the number of nodes in a single NSP cluster. A
redundant NSP cluster requires that number of nodes at each datacenter.
The following table lists the minimum and production platform requirements for deployment of VSR-
NRC as described in the NSP Installation and Upgrade Guide.
Note: Verify that the VSR-NRC platform specifications are consistent with the specifications
provided in the Virtualized 7750 SR and 7950 XRS Simulator (vSIM) Installation and Setup
Guide for this release.
2.2.5 VM memory
The virtual memory configuration of each NSP cluster VM requires a parameter change to support
Centralized Logging.
The following command, entered as the root user, displays the current setting:
sysctl -a | grep "vm.max_map_count"
If the setting is not at least 262144, you must enter the following command before you deploy the
NSP:
sysctl -w vm.max_map_count=262144
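The sysctl -w command applies the setting only until the next reboot. One way to make it persistent, assuming a file name of your choosing under /etc/sysctl.d, is:
echo "vm.max_map_count=262144" > /etc/sysctl.d/90-nsp.conf
sysctl -p /etc/sysctl.d/90-nsp.conf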
Note: Only one time-synchronization mechanism can be active in an NSP system. Before you
enable a service such as chronyd on an NSP component, you must ensure that no other time-
synchronization mechanism, for example, the VMWare Tools synchronization utility, is
enabled.
The hostname of an NSP component station must meet the following criteria:
• include only alphanumeric ASCII characters and hyphens
• not begin or end with a hyphen
• if an FQDN, FQDN components are delimited using periods
• hostname FQDN does not exceed 63 characters
Note: If you use hostnames or FQDNs to identify the NSP components in the NSP
configuration, the hostnames or FQDNs must be resolvable using DNS.
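A quick way to verify resolvability is to query each configured hostname or FQDN from the stations that will use it; the name below is a placeholder:
getent hosts nsp-node1.example.com
nslookup nsp-node1.example.com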
installations, VMware’s best practices should be followed regarding Virtual Machine Hardware
version changes. Virtual Machine versions 10, 14, and 19 have been tested with NFM-P; the
minimum supported Virtual Machine Hardware version is 10 and the latest supported version is 19.
See the following table for additional VM setting requirements.
Note: The system clocks of the NSP components must always be closely synchronized. The
RHEL chronyd service is strongly recommended as the time-synchronization mechanism to
engage on each NSP component during deployment.
Only one time-synchronization mechanism can be active in an NSP system. Before you
enable a service such as chronyd on an NSP component, you must ensure that no other time-
synchronization mechanism, for example, the VMWare Tools synchronization utility, is
enabled.
Reservation Must be set to the number of vCPUs * the CPU frequency. For
example, on a 2.4GHz 8 vCPU configuration, the reservation must be
set to (8*2400) 19200MHz
Limit Unlimited
Lab/trial 2000
vMotion
• Always follow VMware best practices
• Testing was performed with dedicated 10Gb connections between all hosts
High Availability
• Always follow VMware best practices
• Do not use Application Monitoring
• Use Host or VM Monitoring only
• Enable NFM-P database alignment feature to keep active servers in same Data Center
Snapshots
• Always follow VMware best practices
• Do not include memory snapshots
• Always reboot all NFM-P VMs after reverting to snapshots
• NFM-P performance can be degraded by as much as 30% when a snapshot exists and therefore
NFM-P performance and stability is not guaranteed
• Snapshots should be kept for the least amount of time possible
• Snapshot deletion can take many hours and will pause the VM several times
• NFM-P database failover will occur when VMs are reverted to snapshots, requiring a re-
instantiation of the standby database
• Supported on all components except for the NFM-P auxiliary database
2.4.5 OpenStack
NFM-P is tested on open-source OpenStack, and the application is supported on any OpenStack
distribution that is based on the tested versions. Any product issues deemed to be related to a
specific OpenStack distribution must be pursued by the customer with their OpenStack
vendor. Supported OpenStack versions include Newton, Queens, and Train.
To ensure NFM-P stability and compatibility with OpenStack, the following recommendations should
be noted:
Hypervisor
• KVM is the only hypervisor supported within an OpenStack environment. See the preceding
section for supported versions.
Hyperthreading
• Hyper-threaded CPU usage must be consistent across all compute nodes. If there are CPUs that
do not support hyper-threading, hyper-threading must be disabled on all compute nodes, at the
hardware level, where NFM-P components could be deployed.
CPU Pinning
• CPU pinning is supported but not recommended as it restricts the use of OpenStack migration.
See the 7701 CPAA Installation Guide for vCPAA CPU Pinning requirements
Migration
• Only Regular migration is supported. Live migration is not supported.
Networking
• Basic Neutron functionality using Open vSwitch with the ML2 plugin can be used in an NFM-P
deployment. OpenStack floating IP address functionality can be used on specific interfaces used
by NFM-P that support the use of NAT. This would require a Neutron router using the neutron L3
agent.
Storage
• All storage must meet the performance metrics provided with NFM-P Platform Responses.
Performance must meet the documented requirements for both throughput and latency.
VM Storage
• VM storage must be persistent block (Cinder) storage and not ephemeral. For each VM to be
deployed, a bootable Cinder volume must be created. The size of the volume is indicated in the
NFM-P Platform sizing response.
Flavors
• Flavors should be created for each “Station Type” indicated in the NFM-P platform sizing
response.
Firewalls
• Firewalls can be enabled using OpenStack Security Groups or on the VMs using firewalld. If
firewalld is used, an OpenStack Security Group that allows all incoming and outgoing traffic
should be used.
NFM-P server and database (Collocated) 6* x86 CPU Cores (12 Hyper-threads), minimum 2.0GHz 1
64 GB RAM recommended (55 GB RAM minimum)
4*10K RPM SAS disk drives of at least 300 GB in size are required for performance and
storage capacity
Notes:
1. 2.0GHz only supported on Skylake and newer CPU microarchitecture. Minimum speed on CPUs older than
Skylake is 2.4GHz.
The minimum resource requirements above are also applicable in situations where the NFM-P
application is installed in a redundant configuration.
Notes:
1. 2.0GHz only supported on Skylake and newer CPU microarchitecture. Minimum speed on CPUs older than
Skylake is 2.4GHz.
All of the minimum hardware platforms above are also applicable in situations where the NFM-P
application is installed in a redundant configuration.
TCAs 50 000
Bare Metal x86 statistics collector 4 * x86 CPU Cores (8 Hyper-threads), minimum 2.0GHz 1
12 GB RAM minimum. 16 GB RAM is recommended.
4*10K RPM SAS disk drives of at least 300 GB each in size
Bare Metal x86 call trace collector 4 * x86 CPU Cores (8 Hyper-threads), minimum 2.0GHz 1
32 GB RAM minimum - CMM call trace collection
42 GB RAM minimum - CMG or CMM+CMG call trace collection
4*10K RPM SAS disk drives of at least 300 GB each in size - CMM call
trace collection
7*10K RPM SAS disk drives of at least 300 GB each in size - CMM or
CMM+CMG call trace collection
Bare Metal x86 PCMD collector 12 * x86 CPU Cores (24 Hyper-threads), minimum 2.6GHz
64 GB RAM minimum.
2*10K RPM SAS disk drives of at least 300 GB each in size (RAID 1) +
6*10K RPM SAS disk drives of at least 300 GB each in size (RAID 0)
Minimum of two 1Gb network interfaces. One dedicated to PCMD data
collection.
Notes:
1. 2.0GHz only supported on Skylake and newer CPU microarchitecture. Minimum speed on CPUs older than
Skylake is 2.4GHz
2.5.8 Platform requirements for NSP Flow Collector and NSP Flow Collector
Controller
Table 2-16 NSP Flow Collector and NSP Flow Collector Controller platforms for labs
Bare Metal x86, NSP Flow Collector:
4 * x86 CPU Cores (8 Hyper-threads), minimum 2.0GHz 1
16 GB RAM minimum
1*10K RPM SAS disk drives of at least 300 GB each in size
Bare Metal x86, NSP Flow Collector Controller:
2 * x86 CPU Cores (4 Hyper-threads), minimum 2.0GHz 1
4 GB RAM minimum
1*10K RPM SAS disk drives of at least 300 GB each in size
Bare Metal x86, NSP Flow Collector and NSP Flow Collector Controller (collocated):
4 * x86 CPU Cores (8 Hyper-threads), minimum 2.0GHz 1
16 GB RAM minimum
1*10K RPM SAS disk drives of at least 300 GB each in size
VMware/KVM, NSP Flow Collector and NSP Flow Collector Controller (collocated):
8 vCPUs, minimum 2.0GHz 1
16 GB RAM minimum
300 GB disk space
I/O throughput and latency as provided in the NFM-P sizing response
Notes:
1. 2.0GHz only supported on Skylake and newer CPU microarchitecture. Minimum speed on CPUs older than
Skylake is 2.4GHz
Table 2-17 NSP Flow Collector and NSP Flow Collector Controller platforms for production deployments
Bare Metal x86, NSP Flow Collector:
12 * x86 CPU Cores (24 Hyper-threads), minimum 2.0GHz 1
64 GB RAM minimum
2*10K RPM SAS disk drives of at least 300 GB each in size
Bare Metal x86, NSP Flow Collector Controller:
2 * x86 CPU Cores (4 Hyper-threads), minimum 2.0GHz 1
4 GB RAM minimum
1*10K RPM SAS disk drives of at least 300 GB each in size
Bare Metal x86, NSP Flow Collector and NSP Flow Collector Controller (collocated):
12 * x86 CPU Cores (24 Hyper-threads), minimum 2.0GHz 1
64 GB RAM minimum
2*10K RPM SAS disk drives of at least 300 GB each in size
VMware/KVM, NSP Flow Collector and NSP Flow Collector Controller (collocated):
24 vCPUs, minimum 2.0GHz 1
64 GB RAM minimum
500 GB disk space
I/O throughput and latency as provided in the NFM-P sizing response
Notes:
1. 2.0GHz only supported on Skylake and newer CPU microarchitecture. Minimum speed on CPUs older than
Skylake is 2.4GHz
Architecture Configuration
Bare Metal x86 12 * x86 CPU Cores (24 Hyper-threads), minimum 2.6GHz
128 GB RAM minimum.
2 SAS 10K RPM disk drives of at least 300 GB each in size (RAID 1) + 12 SAS
10K RPM disk drives of at least 600 GB each in size (RAID 1+0) + 5 SAS 10K
RPM disks of at least 1.2TB each in size (RAID 5)
Minimum of two 1Gb network interfaces. One dedicated to inter-cluster
communication.
Notes:
1. Minimum of three nodes required in the cluster.
Architecture Configuration
Notes:
1. 2.0GHz only supported on Skylake and newer CPU microarchitecture. Minimum speed on CPUs older than
Skylake is 2.4GHz
Ad-hoc report design is a computationally intensive process. If it is expected that the ad-hoc feature
will be used on a regular basis, customers are strongly encouraged to meet the recommended
specifications.
The client delegate server platform provides the option to consolidate multiple installations of the
NFM-P client on a single station, or to install one instance of the NFM-P client that is run by
many users (each with a unique operating system account). Regardless of the client installation
method, the platform requirements per client are the same.
The amount of memory listed includes the requirement for the NFM-P java UI and web browser.
Additional memory for each NFM-P client will be required for management of the network elements
described in 4.12 “Network element specific requirements” (p. 80) .
Management of certain network elements may include the cross-launch of additional software that
may not be compatible with certain operating systems. The information in Table 2-23, “Element
manager operating system support summary” (p. 53) lists these element managers and their
operating system support. See the element-manager documentation to determine current operating
system support.
The NFM-P client delegate server configuration is only supported on specific HP or Nokia AirFrame
x86 stations running RHEL server x86-64, or specific versions of Windows. Additionally, the NFM-P
client delegate server installation is supported on a Guest Operating System of a VMware vSphere
ESXi or RHEL KVM installation. The Guest OS must be one of those supported for GUI clients
found in 3.1.3 “NFM-P RHEL support” (p. 60). Table 2-21, “Minimum NFM-P client delegate server
resource requirements” (p. 50) describes resource requirements for this type of station.
Architecture Configuration
The configurations in the preceding table will support up to 15 GUI clients. Additional GUI clients
can be hosted on the same platform provided that the appropriate additional resources found in
Table 2-22, “Additional client NFM-P client delegate server resource requirements” (p. 51) are
added to the platform.
Table 2-22 Additional client NFM-P client delegate server resource requirements
Bare Metal x86 1/4 * x86 CPU Core (1/2 Hyper-thread), minimum 2.0GHz
1.75 GB RAM, 2.25 GB for networks with greater than 15 000 NEs, 5 GB for networks
managing AirScale eNodeBs
1 GB Disk Space
For situations where more than 60 simultaneous GUI sessions are required, Nokia recommends
deploying multiple NFM-P client delegate servers.
Displaying GUI clients to computers running X-emulation software is not currently supported. In
cases where the GUI client is to be displayed to a PC computer running Windows, Nokia supports
installing the GUI client directly on the PC.
NFM-P supports using Citrix for remote display of NFM-P clients. Supporting Citrix on the delegate
platform requires extra system resources that will need to be added to those that are required by
the NFM-P delegate. See the Citrix documentation to determine the additional Citrix resource
requirements.
The following Citrix software has been tested with the Windows client delegate server:
• Windows Server 2012 R2 — Citrix Server - XenApp Version 7.18
• Windows Server 2016 — Citrix Server - XenApp Version 7.18
• Windows Server 2019 — Citrix Server - XenApp Version 7.18
• Windows Server 2019 — Citrix Client - Receiver Version 4.1.2.0.18020
• Windows 10 — Citrix Client - Receiver Version 4.1.2.0.18020
Due to an incompatibility between 64-bit versions of the Firefox web browser and Citrix Server
XenApp, the default web browser must be set to Google Chrome if using a version of
XenApp older than 7.14.
The client delegate server can be published in XenApp by installing the delegate server on your
delivery controller and then publishing the <delegate install directory>\nms\bin\nmsclient.bat file as
a manually published application.
All platforms used to display NFM-P applications must have a WebGL compatible video card and
the corresponding drivers installed.
Element manager Node type RHEL 7 server support Microsoft Windows support
The following table provides the minimum requirement for the hardware that will host NFM-P GUI
client software. Additional memory and disk resources will be required by the Operating System.
An NFM-P client installation is supported on a Guest Operating System of a VMware vSphere ESXi
or RHEL KVM installation. The Guest OS must be one of those supported for GUI clients found in
3.1.3 “NFM-P RHEL support” (p. 60).
The following table provides the dedicated NFM-P resource requirements for each Guest OS
running under VMware vSphere ESXi or RHEL KVM that will be used to host the NFM-P client GUI.
This does not include the specific operating system resource requirements which are in addition to
the hardware resources listed below. CPU and Memory resources must be reserved and dedicated
to the individual Guest OSs and cannot be shared or oversubscribed. Disk and network resources
should be managed appropriately to ensure that other Guest OSs on the same physical server do
not negatively impact the operation of the NFM-P GUI client.
VM resource requirements
NFM-P may require larger stations in order to successfully manage networks that exceed any of the
dimensions supported by the minimum hardware platforms. In order to determine station
requirements to successfully manage larger networks, the following information is required:
• Expected number and types of network elements to be managed
• Expected number of MDAs in the network to be managed
• Expected number of services and SAPs in the network to be managed
• Expected number of Dynamic LSPs to be deployed in the network
• Maximum expected number of NFM-P clients (GUI) simultaneously monitoring the network
• Expected number of XML-API applications that will connect as clients to the XML API interface
• Expected number of subscribers, specifically for triple-play network deployments
• Expected statistics collection and retention
• Expected number of STM tests
• Expected number of managed GNEs
• Whether NFM-P redundancy is to be utilized
• Whether NEBS compliance is required
• Whether CPAM is required
• Whether RAID 1 is required
The information above must then be sent to a Nokia representative who can provide the required
hardware specifications.
Ensure that any projected growth in the network is taken into account when specifying the expected
network dimensioning attributes. For existing NFM-P systems, the user may determine the number
of MDAs deployed in the network using the help button on the GUI. It is also possible to determine
the number of statistics being handled by the system by looking at the “Statistics Collection”
information window. Select the “Tools”, then “Statistics”, then “Server Performance Statistics” menu.
List the “Statistics Collection” objects. From this list window, check the “Scheduled Polling Stats
Processed Periodic” and the “Accounting Stats Processed Periodic” columns for the performance
and accounting statistics that your system is currently processing within the time interval defined by
the collection policy (15 minutes by default).
Nokia supports the use of RAID 1 (Mirroring). Deployments requiring increased resiliency are
encouraged to use NFM-P platform redundancy. If RAID 1 is required, a platform that provides
hardware RAID 1 and has a sufficient number of disks to meet the increased disk requirements
must be selected.
To reduce the chance of data loss or application down time, Nokia recommends using RAID 1, in a
RAID 1+0 configuration.
For specific applications, Nokia supports the use of RAID 5 to increase storage resiliency and
maximize available space. The NFM-P auxiliary database backup partition is supported with RAID
5.
Executing the utility with the -h flag will present the user with a help menu, explaining different
options and presenting execution examples. Each mount point must be tested and must meet the
throughput and latency requirements for the specific deployment. These throughput and latency
requirements must be obtained from Nokia as they are specific to each deployment. The throughput
and latency targets must be met, irrespective of any other activity on the underlying storage device
and the targets must be achievable concurrently. For this reason, it is important to understand the
underlying storage configuration to ensure that the output of the benchmarking utility is interpreted
correctly. For example, each of the listed targets may be achievable using a single 10K RPM SAS
disk but concurrently, the listed targets would not be achievable using the same single 10K RPM
SAS disk. The performance of NFM-P would be degraded using this configuration.
See the NSP Installation and Upgrade Guide for the recommended partition sizes.
NSP Release 22.6 supports the following RHEL Server x86-64 versions in an NSP cluster
deployment:
• RHEL server 7 x86-64 - Update 7 (7.7)
• RHEL server 7 x86-64 - Update 8 (7.8)
• RHEL server 7 x86-64 - Update 9 (7.9)
Note: Each NSP cluster VM requires RHEL kernel version kernel-3.10.0-1075.el7 or later, and
must have the following kernel setting:
--args=cgroup.memory=nokmem
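On RHEL 7, one way to apply and verify this kernel argument is with grubby, as sketched below; the authoritative procedure for NSP cluster VMs is in the NSP Installation and Upgrade Guide:
grubby --update-kernel=ALL --args="cgroup.memory=nokmem"
grubby --info=ALL | grep args
# reboot the VM for the change to take effect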
Previous releases, or other variants of Red Hat, and other Linux variants are not supported.
See the Host Environment Compatibility Reference for NSP and CLM for information about NSP
and RHEL OS compatibility.
The Nokia-provided NSP RHEL OS image is based on RHEL 7.9 and is available only for KVM
and OpenStack hypervisors. The NSP RHEL OS image can be used only for the deployment of
NSP software, and not for the deployment of any other Nokia or third-party product.
The RHEL operating system must be installed as English.
The NSP does not necessarily support all functionality provided in RHEL. Network Manager is not
supported in NSP deployments. SELinux is supported in passive mode only. The RHEL chronyd
service is strongly recommended as the time-synchronization mechanism. The NSP also requires
that the server hostname is configured in the /etc/hosts file. RHEL must be installed in 64-bit mode
on stations where the NSP will be installed.
Nokia recommends installing any OS, driver, or firmware updates that the hardware vendor advises
for RHEL.
With the exception of documented Operating System parameter changes for NSP, all other settings
must be left at the RHEL default configuration.
The NSP Installation and Upgrade Guide provides detailed instructions for the RHEL OS
installation.
When installing the NFM-P client on Windows, ensure that there is sufficient disk space as
identified in the NSP Deployment and Installation Guide for the software.
NFM-P auxiliary database 7.3 through 7.9 Not supported Not supported
NSP analytics server 7.3 through 7.9 Not supported Not supported
NSP Flow Collector 7.3 through 7.9 Not supported Not supported
NSP Flow Collector Controller 7.3 through 7.9 Not supported Not supported
4 Network requirements
Deploying NSP with IPv6 network communications has the following limitations and restrictions:
• The deployer host for an NSP cluster must have IPv4 connectivity to NSP cluster nodes. The
NSP cluster can be configured for IPv6 communications for NSP applications, but must have
IPv4 connectivity to the deployer node.
• Common web browser applications have security policies that may prevent the use of bracketed
IPv6 addresses in the URL browser bar. Customers that use IPv6 networking for client
communications to NSP must use hostname configuration for NSP components.
• All NSP components in an NSP deployment must use either IPv4 or IPv6 for inter-component
communications. Integrated components in an NSP deployment cannot mix IPv4 and IPv6 for
communication with each other (for example, if NSP is deployed with IPv6, then NFM-P must
also be deployed with IPv6).
• NFM-T does not support IPv6 deployment. An integrated deployment of NSP with NFM-T must
be deployed with IPv4 addressing.
Note: In Release 22.6, the VSR-NRC must be deployed with IPv4 addressing. An NSP
deployed with IPv6 can be deployed with a VSR-NRC using IPv4 addressing.
The NSP can be deployed with multiple network interfaces using IPv4 and IPv6 addressing.
Chapter 7 of this guide documents the requirements and limitations of a multiple network interface
NSP deployment.
resource control, cross-domain resource control, NSP cluster). An OSS client that performs
frequent query operations (for example, port or service inventory) must be provided additional
bandwidth.
The bandwidth requirements between NSP and NFM-P depend on the following factors:
• the number of NEs, LSPs, and services configured on the NFM-P
• the frequency of NE updates to the NSP
When an NSP system resynchronizes with NFM-P, optimal performance is achieved with 50 Mbps
of bandwidth between NSP and NFM-P. Nokia recommends providing a minimum of 25 Mbps of
bandwidth.
Network latency impacts the time it takes for the NSP to resynchronize a large amount of data from
the NFM-P. Nokia recommends limiting the round-trip network latency time to 100 ms.
Note: Call trace data can only be retrieved from CMM and CMG network elements with IPv4
connectivity.
NFM-P supports the use of multiple interfaces for network element management communication. If
a network element uses both an in-band and out-of-band address for management, these
interfaces must reside on the same server interface.
Available bandwidth required from primary NFM-P server/database station | Recommended bandwidth, excluding statistics bandwidth requirements
• XML API client (the bandwidth will depend on the XML-API application): 1 Mbps
• Between primary and standby NFM-P server/database station: 5-10 Mbps (sustained); 16-26 Mbps (during re-instantiation or database backup synchronization)
NOTE: When network element database backup synchronization is enabled, the bandwidth requirement between the NFM-P servers will vary significantly depending on the size of the network element backup files.
Available bandwidth requirements for NFM-P | Recommended bandwidth, excluding statistics and call trace bandwidth requirements
• NFM-P server to an XML API client (the bandwidth will depend on the XML-API application): 1 Mbps
Table 4-3 Additional bandwidth requirements for file accounting STM results collection
Bandwidth requirements for installations collecting file accounting STM results using the logToFile method only | Increased bandwidth per 50 000 file accounting STM records
• Between the NFM-P database stations (required for sufficient bandwidth for database re-instantiations): 2 Mbps (sustained); 12 Mbps (during re-instantiation or database backup synchronization)
NOTE: The higher bandwidth is required to handle re-instantiation during STM collection.
NFM-P auxiliary database: N/A | 2.2 Mbps | N/A | 0.8 Mbps per NFM-P auxiliary database node | N/A | N/A
Table 4-5 Additional bandwidth requirements for application assurance accounting statistics collection
NFM-P auxiliary database: N/A | 3.1 Mbps | N/A | 0.8 Mbps per NFM-P auxiliary database node | N/A | N/A
NFM-P auxiliary database: 5.4 Mbps | 5.4 Mbps | 14.4 Mbps | 0.8 Mbps per NFM-P auxiliary database node | 72 Mbps | N/A
Table 4-7 Additional bandwidth requirements for LTE performance management statistics collection
Bandwidth requirements for installations collecting LTE performance management statistics | Increased bandwidth per 200 000 LTE performance management statistics records
When an NFM-P auxiliary statistics collector is installed to collect statistics using the NFM-P
database, the bandwidth requirements between two geographic locations must account for the
case where an NFM-P auxiliary statistics collector in geographic location A sends information to
the active NFM-P server in geographic location B, which in turn sends information back to the
NFM-P database in geographic location A. For this reason, the bandwidth between geographic
locations A and B must be the sum of the bandwidth requirements from the NFM-P auxiliary
statistics collector to the NFM-P server and from the NFM-P server to the NFM-P database. It is
also a best practice to ensure that the NFM-P auxiliary statistics collector, NFM-P server, and
NFM-P database are all collocated in the same geographic site.
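A minimal sketch of this sizing rule follows; the per-link figures are hypothetical placeholders, and the real values come from the bandwidth tables in this chapter.

# Hypothetical figures for illustration only; use the values from the bandwidth
# tables in this chapter for a real deployment.
aux_collector_to_server_mbps = 5.4   # NFM-P auxiliary statistics collector -> NFM-P server
server_to_database_mbps = 14.4       # NFM-P server -> NFM-P database

# When the collector and database are in site A and the active server is in site B,
# statistics traffic crosses the inter-site link twice, so the requirements add up.
inter_site_mbps = aux_collector_to_server_mbps + server_to_database_mbps
print(f"Inter-site bandwidth required: {inter_site_mbps} Mbps")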
Table 4-8 Additional bandwidth requirements for the NSP analytics server
• Between the NSP analytics server and the end user (web browser): 2.5 Mbps minimum requirement, 10 Mbps optimal
• Between the NSP analytics server and the server hosting nspOS: 3 Mbps
• Between the NSP analytics server and an external FTP host (if used for sending results of scheduled reports): 3 Mbps
Bandwidth requirements for installations with call trace collection | Bandwidth usage characterization
• NFM-P server to an XML API client: Low bandwidth; XML-API requests and responses
• XML API client to NFM-P auxiliary call trace collector station: Higher bandwidth to retrieve the call trace files from the NFM-P auxiliary via FTP (a higher bandwidth may be desirable)
• NFM-P auxiliary call trace collector Preferred station to its Reserved redundant pair: Higher bandwidth to ensure timely synchronization of call trace files (a higher bandwidth may be desirable)
Bandwidth requirements for installations with NFM-P auxiliary database | Bandwidth usage characterization
• NFM-P auxiliary statistics collector to NFM-P auxiliary database cluster: Higher bandwidth to write statistics data into the NFM-P auxiliary database cluster
• NSP analytics server to NFM-P auxiliary database cluster: Higher bandwidth to generate reports based upon raw and aggregated data (a higher bandwidth may be desirable)
• NFM-P auxiliary database node to NFM-P auxiliary database node (intra-cluster): High — must use a dedicated, minimum 1 Gbps interface
There are two main factors affecting the bandwidth requirements between the NFM-P server and an
XML API client:
• Design and behavior of the application using the XML-API interface
• Rate of changes in the network
Applications which listen to network changes via the JMS interface provided by NFM-P XML API or
applications which retrieve large pieces of information via the API, such as statistics information or
network inventory information, require access to dedicated bandwidth from the machine hosting the
application to the NFM-P server according to the tables above. Applications which do not require
real time event and alarm notification may operate with acceptable performance when the
bandwidth between the machine hosting the application and the NFM-P server is less than the
quantity specified in the tables above.
It is a best practice to minimize event and alarm notifications using a JMS filter to reduce bandwidth
requirements and the possible effects of network latency.
In an environment where network changes are infrequent, it is possible to successfully operate an
application using the API when the bandwidth between the machine hosting this application and the
NFM-P server is less than the quantity specified in the tables above, possibly as little as 128 kbps.
However, in situations where the frequency of network changes increases, the performance or
responsiveness of the application will degrade.
The main factors impacting communication to and from the NFM-P auxiliary statistics collector are:
• Number of performance statistics being collected. The NFM-P server needs to tell the NFM-P
auxiliary statistics collector which statistics to collect every interval.
• Number of statistics collected from the network elements.
• Number of statistics written to the NFM-P database.
The more performance statistics are collected, the more significant the bandwidth utilization
between the NFM-P server and the NFM-P auxiliary statistics collector. Similarly, this requires more
significant bandwidth utilization between the NFM-P auxiliary statistics collector and the NFM-P
database stations. The bandwidth requirements are not dependent on network activity.
The main factors impacting communication to and from the NSP Flow Collector are:
• Size of the NFM-P managed network for the network extraction
• Size of generated IPDR files
• Number of network elements sending cflowd records
The main factors impacting communication to and from the NSP Flow Collector Controller are:
• Size of the NFM-P managed network for the network extraction
• Number of NSP Flow Collectors connected to the NSP Flow Collector Controller
Table 4-11 Additional bandwidth requirements for the NSP Flow Collector
• NSP Flow Collector Controller to an NSP Flow Collector: Bandwidth requirement will depend upon network size, which determines the network extraction file size, and the desired time to complete the file transfer from the NSP Flow Collector Controller to the NSP Flow Collector. This is for Network Snapshot Transfer (FTP/SFTP); by default this operation should only occur weekly if the NFM-P server and NSP Flow Collector Controller remain in sync. The amount of bandwidth required is dependent on network size.
• Managed network to NSP Flow Collector: 40 Mbps per 20 000 flows per second. In the case of redundant NSP Flow Collectors, this amount of dedicated bandwidth is required for each NSP Flow Collector.
• NSP Flow Collector to IPDR file storage server: Approximate number of statistics per 1 MB IPDR statistics file: 2,560 TCP PERF statistics (all counters), or 3,174 RTP statistics (all counters), or 9,318 Comprehensive statistics (all counters), or 9,830 Volume statistics (all counters). Use this information to calculate the amount of data generated for the expected statistics, and then the time to transfer it at a given bandwidth. The total time must be less than 50% of the collection interval. For example, if 1 GB of IPDR files is expected per interval and the collection interval is 5 minutes, a 45 Mbps connection will take 3 min 2 s to transfer, which is more than 50% of the interval, so a larger network connection is required. In the case of redundant NSP Flow Collectors, the calculated dedicated bandwidth is required from each NSP Flow Collector to the station where IPDR files are being transferred.
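The worked example above can be reproduced as follows; the inputs (1 GB of IPDR files, a 5-minute collection interval, a 45 Mbps link) are the figures quoted above, and the check is that the transfer time stays below 50% of the interval.

# Reproduces the IPDR transfer-time check described above.
ipdr_bytes_per_interval = 1_000_000_000   # 1 GB of IPDR files per collection interval
collection_interval_s = 5 * 60            # 5-minute collection interval
link_mbps = 45                            # available bandwidth to the file storage server

transfer_time_s = (ipdr_bytes_per_interval * 8) / (link_mbps * 1_000_000)
limit_s = 0.5 * collection_interval_s     # transfer must finish within 50% of the interval

print(f"Transfer time: {transfer_time_s:.0f} s, limit: {limit_s:.0f} s")  # roughly 3 minutes vs 150 s
if transfer_time_s > limit_s:
    print("A larger network connection is required.")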
Table 4-12 Additional bandwidth requirements for the NSP Flow Collector Controller
Bandwidth requirements for NSP Flow Collector Controller | Bandwidth usage characterization
• NFM-P server to an NSP Flow Collector Controller: Bandwidth requirement will depend upon network size, which determines the network extraction file size, and the desired time to complete the file transfer from the NFM-P server to the NSP Flow Collector Controller. This is for Network Snapshot Transfer (FTP/SFTP); by default this operation should only occur weekly if the NFM-P server and NSP Flow Collector remain in sync. The amount of bandwidth required is dependent on network size.
The main factors impacting communication to and from the NFM-P auxiliary PCMD collector
are:
• Number of bearers.
• Number of and size of events per bearer.
• Size of files being retrieved by the NFM-P XML-API client requesting the PCMD files.
On average, each bearer will generate 100 events per hour where each event is approximately 250
bytes in size.
7250 IXR-6 / 7250 IXR-R4 / 7250 IXR-R6 / 7250 IXR-x: 800-1000 kbps
7210 SAS-D, 7210 SAS-X, 7210 SAS-T, 7210 SAS-R, 7210 SAS-Mxp, 7210 SAS-Sx: 500-600 kbps
OmniSwitch 6250, 6350, 6400, 6450, 6465, 6560, 6850, 6855, 6865, 9000 Series: 300 kbps
The following are the main operations that result in significant amounts of information being
exchanged between NFM-P and the network elements. These factors are therefore the principal
contributors to the bandwidth requirements.
• Network element discovery: Upon first discovery of the network element, a significant amount of
data is exchanged between NFM-P and the network element.
• SNMP traps: SNMP traps do not directly result in significant data being sent from the network
element to the NFM-P. Several SNMP traps, however, do not contain all of the information
required for NFM-P to completely represent the new status of the network element. As a result,
NFM-P subsequently polls a certain number of the SNMP MIBs to obtain the required
information from the network element. Consequently, SNMP traps do result in a certain quantity
of data being transferred and therefore cause bandwidth utilization. The exact quantity of
bandwidth utilized will vary with the number and type of traps sent from the network element.
In the worst case, however, this bandwidth utilization will be less than that utilized during a
network element discovery.
• SNMP polling: It is possible to configure NFM-P to poll the SNMP MIBs on the network elements
at various intervals. By default, NFM-P will perform a complete poll of the SNMP MIBs every 24
hours on non-SR–OS based network elements. During the polling cycle, the amount of data
transferred between NFM-P and the network element is equivalent to the amount of data
transferred during the network element discovery.
• Statistics collection: It is possible to configure NFM-P to poll the SNMP MIBs on the network
elements that contain performance statistics information. During the polling cycle, the amount of
data transferred between NFM-P and the network element is less than the amount of data
transferred during the network element discovery. With the configuration of an NFM-P auxiliary
statistics collector, the communication from and to the network elements will be distributed
between the NFM-P server and an NFM-P auxiliary statistics collector.
• Network element backup: It is possible to configure NFM-P to request a backup of the network
element at a specified interval. During the NE backup cycle, the amount of data transferred
between NFM-P and the network element is less than half of the amount of data transferred
during the network element discovery.
• Provisioning of services and deployment of configuration changes: When network elements are
configured or when services are provisioned via the NFM-P GUI or via application using the API,
a small quantity of network bandwidth is utilized. The amount of data transferred is significantly
less than during the network element discovery.
• Initiation and collection of STM tests and their results: When STM tests are initiated, the NFM-P
server sends individual requests per elemental test to the network elements. Once the test is
complete, the network elements report back using a trap. The NFM-P server then requests the
information from the network element, and stores it in the database. This can result in a
significant increase in network traffic to the network elements.
• Software Downloads: The infrequent downloading of network element software loads is not
included in the bandwidth levels stated in Table 4-13, “NFM-P server to network bandwidth
requirements” (p. 74). Bandwidth requirements will depend upon the size of the network element
software load and the desired amount of time to successfully transfer the file to the NE.
For some network elements, management of the NE includes methods other than standard MIB/
SNMP management – for example web-based tools. These network elements may require
additional bandwidth above the bandwidth levels stated in Table 4-13, “NFM-P server to network
bandwidth requirements” (p. 74).
In situations where there is less than the recommended bandwidth between the NFM-P and the
network element, the following are possible consequences:
• The length of time required to perform a network element discovery will increase
• The length of time required to perform a SNMP poll of the network element will increase
• The length of time required to retrieve statistics from the network element will increase
• The proportion of SNMP traps that will not reach NFM-P because of congestion will increase.
This is significant since NFM-P will detect it has missed traps from the network element and will
result in NFM-P performing additional SNMP polling to retrieve the missing information. This will
result in additional data being transferred, which will increase the bandwidth requirements,
possibly exacerbating the situation.
In cases where the management traffic is carried over physical point-to-point links between the
NFM-P server and NFM-P auxiliary network and each of the network elements, sufficient bandwidth
must be reserved on the physical links. The NFM-P server complex can simultaneously
communicate to several NEs for the following functions:
• NE discovery, NE resync, resyncing for trap processing
• NE backups, NE software downloading, and sending configurations to NEs
• Collecting performance statistics
• Collecting accounting statistics
• Initiating STM tests on NEs
• Retrieve STM Test Results - also via (s)FTP
• NE reachability checks and NE trap gap checks
Rarely are all of the above performed simultaneously, so for link aggregation points it is
recommended to assume that NFM-P can communicate with a minimum of 20-30 NEs
simultaneously; this can increase to 60-70 NEs on a 16 CPU core NFM-P server station. For
networks of over 1000 NEs, or where an NFM-P auxiliary statistics collector is being used, that
number should be increased by 20-30 NEs. Higher bandwidth may be required in special cases
where above-average amounts of data are transferred between NFM-P and the network elements,
for example, large statistics files, NE backups, or software images.
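As a rough sizing sketch under the assumptions above, the aggregate bandwidth to reserve on a shared link is the number of simultaneously managed NEs multiplied by the per-NE figure from Table 4-13; the per-NE value used here (1000 kbps, the upper figure for the 7250 IXR class) is only an example and should be adjusted for the actual NE mix.

# Rough link-aggregation sizing sketch based on the guidance above.
simultaneous_nes = 30          # upper end of the 20-30 NE assumption
per_ne_kbps = 1000             # example per-NE bandwidth (7250 IXR class); adjust per NE mix

required_mbps = simultaneous_nes * per_ne_kbps / 1000
print(f"Reserve at least {required_mbps:.0f} Mbps on the aggregation link")
# Larger servers (16 CPU cores) or networks over 1000 NEs push the NE count
# to 60-70 or higher, and the reservation scales accordingly.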
Network latency can potentially impact the performance of NFM-P. The following are known impacts
of latency between the various NFM-P components:
• NFM-P server to NFM-P clients (GUI/XML-API): event notification rates of network changes
• NFM-P auxiliary statistics collector to the network elements: ftp connection for statistics
collection and SNMP stats collection
• NFM-P server to the network elements: resync times, provisioning, ftp connections for statistics
and network element backups, trap handling, and SNMP stats collection (See 5.7 “Scaling
guidelines for statistics collection” (p. 96) for more information about latency impact on SNMP
stats collection)
• NFM-P server and NFM-P auxiliary collectors to NFM-P database: NFM-P performance is
sensitive to latency in this area. The round-trip latency between the active NFM-P components
(server, database, auxiliary) must be no more than 1 ms; otherwise, overall NFM-P
performance will be significantly impacted. The NFM-P auxiliary database can tolerate up to 200
ms of latency between it and the rest of the NFM-P management complex.
Since SNMP communication to a single network element is synchronous, the impact of latency is
directly related to the number of SNMP gets and responses. Operations to a network element with a
round-trip latency of 50 ms will see the network transmission time increase by ten times compared
to a network element with a round-trip latency of only 5 ms. For example, if a specific operation
requires NFM-P to send 1000 SNMP gets to a single network element, NFM-P will spend a total of
5 seconds sending and receiving packets when the round-trip latency to the network element is 5
ms. The time that NFM-P spends sending and receiving the same packets would increase to 50
seconds if the round-trip latency were increased to 50 ms.
Network element re-sync can be especially sensitive to latency as the number of packets
exchanged can number in the hundreds of thousands. For example, if a re-sync consists of the
exchange of 100 000 packets (50 000 gets and 50 000 replies), 50 ms of round trip latency would
add almost 42 minutes to the overall re-sync time and 100 ms of round trip latency would add
almost 84 minutes to the overall re-sync time.
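The latency arithmetic above can be expressed directly: because SNMP is synchronous, the added time is simply the number of request/response round trips multiplied by the round-trip latency.

# Reproduces the latency examples above: one round trip per get/response pair,
# so added time = round trips x round-trip latency.
def snmp_time_s(round_trips, rtt_ms):
    return round_trips * rtt_ms / 1000.0

# 1000 gets (1000 round trips) at 5 ms and 50 ms round-trip latency.
print(snmp_time_s(1000, 5))    # 5.0 seconds
print(snmp_time_s(1000, 50))   # 50.0 seconds

# A resync of 100 000 packets is 50 000 gets and 50 000 replies, i.e. 50 000 round trips.
print(snmp_time_s(50_000, 50) / 60)    # ~41.7 minutes added at 50 ms
print(snmp_time_s(50_000, 100) / 60)   # ~83.3 minutes added at 100 ms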
NFM-P can use a proprietary TCP streaming mechanism to discover and resync specific node types
and versions, which can dramatically reduce resync and discovery times for network elements with
high network latency. TCP streaming is supported on the following network element types with a
release of 11.0R5 or later:
• 7950 XRS
• 7750 SR
• 7450 ESS
• 7250 IXR
BDP = 20 Mbps × 40 ms = 20,000,000 bit/s × 0.04 s = 800,000 bits = 100,000 bytes
Socket buffer size = BDP = 100,000 bytes
See the RHEL documentation for information about how to modify the TCP socket buffer size and
ensure that the change is persistent.
It is important to note that increasing the TCP socket buffer size directly affects the amount of
system memory consumed by each socket. When tuning the TCP socket buffer size at the
operating system level, it is imperative to ensure the current amount of system memory can support
the expected number of network connections with the new buffer size.
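The bandwidth-delay product calculation above generalizes as follows; the 20 Mbps and 40 ms inputs are the example values used above, and the result is the suggested TCP socket buffer size.

# Bandwidth-delay product (BDP): the amount of data in flight on the path,
# and therefore the suggested TCP socket buffer size.
def bdp_bytes(bandwidth_mbps, rtt_ms):
    return int(bandwidth_mbps * 1_000_000 * (rtt_ms / 1000.0) / 8)

# Example values from the text: 20 Mbps of bandwidth and 40 ms round-trip latency.
print(bdp_bytes(20, 40))   # 100000 bytes
# Remember that a larger buffer is allocated per socket, so verify that system
# memory can support the expected number of connections at the new size.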
The NFM-P requires reliable network communications between all the NFM-P components:
• NFM-P servers
• NFM-P databases
• NFM-P auxiliaries
• NSP Flow Collector
• NSP Flow Collector Controller
• NFM-P auxiliary databases
• NSP analytics server
• NFM-P clients and NFM-P client delegate server
• NFM-P XML API clients
The performance and operation of NFM-P can be significantly impacted if there is any measurable
packet loss between the NFM-P components. Significant packet loss can cause NFM-P reliability
issues.
Nokia supports the deployment of NFM-P using the RHEL IP Bonding feature. IP Bonding support
is intended only to provide network interface redundancy, configured in active-backup mode, on the
OS instance hosting the application software. No other IP Bonding modes are supported. See the
RHEL documentation for information about configuring IP Bonding.
NFM-P uses several mechanisms to maintain and display the current state of the network elements
it manages. These mechanisms can include:
• IP connectivity (ping) verification
• SNMP connectivity verification
• SNMP traps
• SNMP trap sequence verification
• Scheduled SNMP MIB polling
These mechanisms are built into the Nokia 7950 XRS, 7750 SR, 7450 ESS, 7450 SR, 7210 SAS,
and 7705 SAR network elements and the NFM-P network element interaction layers.
NFM-P can be configured to ping all network elements at a configurable interval to monitor IP
connectivity. If the network element is unreachable, an alarm will be raised against the network
element. Details of the alarm are the following:
• Severity: Critical
• Type: communicationsAlarm
• Name: StandbyCPMManagementConnectionDown, OutOfBandManagementConnectionDown or
InBandManagementConnectionDown
• Cause: managementConnectionDown.
Ping verification is disabled by default. IP connectivity checks using ping must be scheduled
through the default policy.
NFM-P performs an SNMP communication check every 4 minutes. If NFM-P cannot communicate
via SNMP with a network element, it will raise a communications alarm against that network
element. NFM-P will also color the network element red on the map to indicate the communication
problem. NFM-P will clear the alarm and color the network element green once it detects that
SNMP connectivity to the network element has been re-established. Details of the alarm are the following:
• Severity: Major
• Type: communicationsAlarm
• Name: SnmpReachabilityProblem
• Cause: SnmpReachabilityTestFailed
This behavior occurs by default and is not configurable.
Table 5-1 Dimensioning details for an NSP deployment with classic and model-driven IP
management
Notes:
1. Customers wishing to deploy more than 6000 PCEP IP NEs should contact Nokia for details on
timer configuration.
The following table presents key dimension details for a Simulation tool deployment.
Notes:
1. Please consult with Nokia on the performance aspects of brownfield stitching and association.
Note: Customers should use caution when invoking bulk actions at the ICM template level
with many thousands of configuration instances as this may take many hours to complete.
Bulk operations will be optimized in a future NSP release.
5.2.7 Cross-domain resource control scaling within NSP classic and model-driven
IP + optical deployment
The following table presents key dimension details for cross-domain resource control in an NSP
classic and model-driven IP + optical deployment as described in the NSP Installation and Upgrade
Guide.
Table 5-5 Dimensioning details for cross-domain resource control in an NSP classic and
model-driven IP + optical deployment
Note: The combined concurrent session limit for all applications is 125.
The NSP can support up to 100 concurrent NE sessions on NFM-P managed nodes, and 100
concurrent NE sessions on MDM managed nodes. Users must have execute level access control
for an NE to create an NE session.
The maximum number of rows uploaded to the database per minute is 90 000 per active MDM
instance, where one row equals one Telemetry record. This limit applies for Postgres and Auxiliary
database storage.
If NSP is deployed with multiple active MDM instances, the maximum collective upload rate to a
Postgres database is 180 000 rows per minute. When Telemetry data is stored in the Auxiliary
Database, the upload rate scales horizontally with more active MDM instances. Network activity,
database activity and latency can also affect database upload rates.
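A small sketch of these limits, assuming the figures above (90 000 rows per minute per active MDM instance, a 180 000 rows-per-minute collective cap toward Postgres, and horizontal scaling when the Auxiliary Database is used):

# Telemetry upload capacity sketch based on the limits stated above.
ROWS_PER_MIN_PER_MDM = 90_000
POSTGRES_CAP_ROWS_PER_MIN = 180_000   # collective cap for Postgres storage

def telemetry_capacity(active_mdm_instances, storage):
    total = active_mdm_instances * ROWS_PER_MIN_PER_MDM
    if storage == "postgres":
        return min(total, POSTGRES_CAP_ROWS_PER_MIN)
    # Auxiliary database storage scales horizontally with active MDM instances.
    return total

print(telemetry_capacity(3, "postgres"))   # 180000 rows/minute
print(telemetry_capacity(3, "auxdb"))      # 270000 rows/minute
# Actual rates are also affected by network activity, database activity, and latency.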
Note: Alarm limits describe the aggregate number of alarms that can be handled by the NSP
Fault Management application but do not supersede individual datasource limits.
The following table defines the performance limits for the Fault Management application:
Note: Alarm rate describes the aggregate volume that can be handled by the NSP Fault
Management application but does not supersede individual datasource limits.
The following table defines the squelching limits for the Fault Management application:
Note: Because the maximum size for a port group is currently 100k (100 000) ports, multiple
resource groups are needed to achieve the 250k squelching limit.
The size of the supervision group (number of NEs and links) may affect performance and topology
rendering time.
Multi-layer maps support a recommended maximum of 4000 objects. Users should expect the
following multi-layer map loading times with different numbers of NEs.
• For 250 NEs (125 physical links), approximately six seconds for the initial page loading and four
seconds to reload.
• For 500 NEs (250 physical links), approximately nine seconds for the initial page loading and six
seconds to reload.
• For 2000 NEs (1000 physical links), approximately 50 seconds for the initial page loading and 28
seconds to reload.
Where cross-domain resource control is deployed, the following scaling limits for Map Layout will
apply:
• Maximum number of nodes per region is 250.
• Maximum number of links per region is 1200.
Note: Numbers are based on using enhanced profile disk availability for File Service.
Note: Role Based Access Control will not apply to LSO app user operations in this release.
Service Fulfillment
no MDM learned services | 65k services learned from MDM
60k PCEP LSPs | 60k PCEP LSPs
3k NEs and 11k links | 3k NEs and 11k links
Down time for a switch of leader activity within an HA cluster: 5-20 minutes | 33-80 minutes
Down time for an activity switch from active to standby (HA cluster or single node) in a redundant deployment: 16-30 minutes | 45-90 minutes
Note: Performance assumes that NFM-P is running release 18.12 SP5 or later.
Users and client applications that need access to Launchpad and NSP applications will also
experience downtime during an activity switch and during pod re-selection in an enhanced
deployment. Once Launchpad has been restored to service, northbound clients can authenticate
and access applications.
Launchpad
Down time for pod reselection: < 1 s (see note)
Down time for an activity switch from active to standby (enhanced or single node) in a redundant deployment: 9 minutes
Note: The Launchpad will remain available to users during a pod re-selection in an enhanced
deployment; some applications on Launchpad may be temporarily unavailable due to pod
rescheduling.
Table 5-7, “NFM-P Release 22.6 scalability limits” (p. 93) represents the scalability limits for
Release 22.6. Note that:
• These limits require particular hardware specifications and a specific deployment architecture.
• Scale limits for all network elements assume a maximum sustained trap rate of 100 traps/
second for the entire network. NFM-P’s trap processing rate depends on many factors including
trap type, NE type, NE configuration, NE and network latency, network reliability as well as the
size and speed of the servers hosting the NFM-P application. NFM-P scalability testing runs at a
sustained trap rate exceeding 100 per second for the largest deployment and server
configurations.
2.3 “NFM-P Hardware platform requirements” (p. 38) contains information about identifying the
correct platform for a particular network configuration. To achieve these scale limits, a distributed
NFM-P configuration is required, and may also require an NFM-P auxiliary statistics collector and a
storage array for the NFM-P database station.
Contact Nokia to ensure that you have the correct platform and configuration for your network size.
Maximum number of simultaneous web UI client sessions: 4–250 (see Table 5-9, “NFM-P apps maximum number of concurrent sessions” (p. 94))
Notes:
1. The number of interfaces on a GNE and the traps that may arise from them is the key factor determining the
number of GNE devices that can be managed. As GNE devices are expected to be access devices the
sizing is based on an average of 10 interfaces of interest on each device (10 x 50 000 = 500 000 interfaces).
Processing of traps from interface types that are not of interest can be turned off in NFM-P. Under high trap
load, NFM-P may drop traps.
NFM-P uses the number of MDAs as the fundamental unit of network dimensioning. To determine
the current or eventual size of a network, the number of deployed or expected MDAs, as opposed
to the capacity of each router, must be calculated.
7250 IXR-6 / 7250 IXR-10 / 7250 IXR-R4 / 7250 IXR-R6: 50 000 | 1 MDA = 1 MDA
OMNISwitch 6250, 6400, 6450, 6850, 6855 (each shelf in the stackable chassis): 50 000 | 50 000
OMNISwitch 6350, 6465, 6560, 6865 (each shelf in the stackable chassis): 5000 | 5000
OMNISwitch 9600, 9700, 9700E, 9800, 9800E (each NI): 1000 | 1000
VSC: 1 | N/A
Notes:
1. The IMM card has an MDA equivalency of 2 MDAs per card.
2. The CMA card has an MDA equivalency of 1 MDA per card.
3. The CMM has an MDA equivalency of 1 MDA per blade / VM.
4. The 1830 VWM OSU Card Slot has an MDA equivalency of 1/4 MDA per card to a maximum MDA
equivalency of 30 000
Analytics: 10
Network Supervision: 50
Wireless Supervision: 50
Wireless NE Views: 50
Table 5-10, “NFM-P Release 22 performance targets” (p. 95) represents the performance targets
for the NFM-P. Factors that may result in fluctuations of these targets include:
• NFM-P server and NFM-P database system resources
• network activity
• user/XML-API activity
• database activity
• network size
• latency
Upgrade Performance
Notes:
1. The target includes the installation of the software on the existing servers and NFM-P database conversion.
Operating System installation/upgrades, patching, pre/post-upgrade testing and file transfers are excluded
from the target.
2. Provided proper planning and parallel execution procedures were followed.
JMS messaging — round-trip latency from the XML API client to the NFM-P server: 0 ms | 20 ms | 40 ms
Table 5-12 Maximum number of performance statistics records processed by an NFM-P server
Number of CPU cores on NFM-P server stations | Maximum number of performance statistics records per 15-minute interval (Collocated configuration | Distributed configuration)
Table 5-13 Maximum number of performance statistics records processed by an NFM-P statistics auxiliary
Number of active auxiliary statistics collectors | Maximum number of performance statistics records per 15-minute interval:
Statistics collection with NFM-P database (8 CPU cores, 32 GB RAM | 12 CPU cores, 32 GB RAM) | Statistics collection with single auxiliary database (8 CPU cores, 32 GB RAM) | Statistics collection with three+ auxiliary database cluster (12 CPU cores, 32 GB RAM) | Statistics collection logToFile only (12 CPU cores, 32 GB RAM)
1: 500 000 | 2 000 000 | 500 000 | 2 000 000 | 2 000 000
2: 500 000 | 2 000 000 | 500 000 | 4 000 000 | 4 000 000
3: 500 000 | 2 000 000 | 500 000 | 4 000 000 | 4 000 000
In situations where NFM-P is asked to collect more performance statistics than it can process in the
specified polling period, the PollerDeadlineMissed alarms will start appearing. These alarms
indicate to the user that the polling mechanisms within NFM-P cannot retrieve the requested
information within the specified polling period. Should this situation arise, the polling period for
statistics should be increased or the number of objects that are applied to Statistics Poller Policies
should be reduced.
An accounting statistic record is the statistic for one queue for one SAP. For example, if 2 ingress
and 2 egress queues are configured per SAP, the “Combined Ingress/Egress” statistic represents 4
NFM-P accounting statistic records.
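For planning, the record count per interval can be estimated from the number of SAPs and the queues configured per SAP, as in the sketch below; the SAP count is a hypothetical input, and the result should be compared against the limits in Tables 5-14 and 5-15.

# Estimate accounting statistics records per collection interval.
# One record = the statistic of one queue on one SAP, so a "Combined Ingress/Egress"
# statistic on a SAP with 2 ingress + 2 egress queues produces 4 records.
saps = 100_000                 # hypothetical number of SAPs with accounting enabled
queues_per_sap = 2 + 2         # 2 ingress + 2 egress queues

records_per_interval = saps * queues_per_sap
print(f"{records_per_interval} accounting records per interval")
# Compare this figure with the per-15-minute limits in Tables 5-14 and 5-15 to
# decide whether an NFM-P auxiliary statistics collector is needed.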
It is recommended that the Accounting Policy Interval and the File Policy Interval be aligned to the
same period. Misalignment of the policy periods can cause NFM-P resource contention for both
performance and accounting statistics processing.
The following tables provide the maximum number of accounting statistics records that can be
retrieved and processed by the NFM-P server or NFM-P auxiliary statistics collector in various
situations.
To reach the peak accounting statistics collection from the NFM-P auxiliary statistics collector
station, the NFM-P database station requires a customized configuration that can be obtained from
Nokia personnel.
Table 5-14 Maximum number of accounting statistics records processed by an NFM-P server
station
Number of CPU cores on NFM-P server stations | Maximum number of accounting statistics records per 15-minute interval (Collocated configuration | Distributed configuration)
Table 5-15 Maximum number of accounting statistics records processed by an NFM-P statistics auxiliary
Number of active auxiliary statistics collectors | Maximum number of accounting statistics records per 15-minute interval:
Statistics collection with NFM-P database (8 CPU cores, 32 GB RAM | 12 CPU cores, 32 GB RAM) | Statistics collection with single auxiliary database (8 CPU cores, 32 GB RAM) | Statistics collection with three+ auxiliary database cluster (12 CPU cores, 32 GB RAM) | Statistics logToFile only (12 CPU cores, 32 GB RAM)
1: 10 000 000 | 10 000 000 | 5 000 000 | 20 000 000 | 20 000 000
2: 10 000 000 | 10 000 000 | 5 000 000 | 40 000 000 | 40 000 000
3: 10 000 000 | 10 000 000 | 5 000 000 | 60 000 000 | 60 000 000
In situations where NFM-P is asked to collect more accounting statistics records than it can process
in the specified retrieval period, the extra statistics will not be retrieved from the network.
There are two methods to export accounting and performance statistics from NFM-P:
registerLogToFile and findToFile. The registerLogToFile method is the preferred method and is
required for situations where more than 400 000 accounting statistics records or more than
500 000 performance statistics records are retrieved in 15 minutes.
The quantity of resources which are allocated to the retrieval and processing of application
assurance accounting statistics within the NFM-P server are set at the installation time and depend
on the number of CPUs available to the NFM-P server software. The number of CPUs available to
the NFM-P server depends on the number of CPUs on the station and whether the NFM-P
database software is collocated with the NFM-P server software on the same station.
Scaling of application assurance collection is related to the number of objects configured for
collection as opposed to the number of records collected per interval.
The following tables provide the maximum number of application assurance objects that can be
configured for collection by the NFM-P server or NFM-P auxiliary statistics collector in various
situations.
Table 5-16 Maximum number of application assurance accounting objects configured for
collection by an NFM-P server station
Number of CPU cores on NFM-P server stations | Maximum number of application assurance accounting objects configured for collection per 15-minute interval
Table 5-17 Maximum number of application assurance accounting objects configured for
collection by an NFM-P statistics auxiliary
Number of active auxiliary statistics collectors | Maximum number of application assurance accounting objects configured for collection per 15-minute interval (Statistics collection with NFM-P database | Statistics collection with single auxiliary database | Statistics collection with three+ auxiliary database cluster)
In situations where NFM-P is asked to collect more application assurance accounting records than
it can process in the specified retrieval period, the extra statistics will not be retrieved from the
network.
Table 5-18 NFM-P database station hardware requirements for a distributed configuration
Maximum number of simultaneous statistics records per 15-minute interval (Accounting statistics records | Application assurance accounting objects configured for collection | Performance statistics records) | NFM-P auxiliary statistics collector(s) | Requires the following NFM-P database station setup
10 000 000 | 5 000 000 | 2 000 000 | Yes
0 | 15 000 000 | 0
NFM-P database station setup: 12 CPU cores, minimum 2.0 GHz (note 1); 6 disks (RAID 0); 64 GB RAM
Notes:
1. 2.0 GHz is supported only on Skylake and newer CPU microarchitectures. The minimum speed on CPUs older than
Skylake is 2.4 GHz.
to carefully assess the impact of creating and assigning statistics policies. Review the number of
objects that are assigned to statistics policies and ensure the polling and retrieval periods are set
such that the numbers will remain below the stated guidelines.
Using NFM-P server performance statistics, NFM-P can assist in determining how many polled and
accounting statistics are being collected.
NFM-P performance can be adversely affected by increasing the number of historical statistics
entries recorded by the NFM-P. NFM-P system impacts include increased time listing log records
from the GUI and XML API clients, increased database space, and increased database backup
times.
>40M 96
>40M 16
Performance 35,040
Accounting 35,040
When using the logToFile method only, for collection, the maximum retention of data on the file
system is 600 minutes (10 hours).
Distributed NFM-P configuration with minimum 8 CPU core NFM-P server: 15 000 | 1 500 000 (note 1) | 1 500 000 (note 1)
Notes:
1. May require a dedicated disk or striped disks for the xml_output partition
By default, NFM-P will only allow test suites with a combined weight of 80 000 to execute
concurrently. The test suite weights are identified in the NFM-P GUI’s Test Suites List window.
Starting too many tests at the same time will cause the system to exceed this limit, and the
excess tests will be skipped. Ensuring the successful execution of as many STM
tests as possible requires planning the schedules, the contents, and the configuration of the test
suites. The following guidelines will assist in maximizing the number of tests that can be executed
on your system:
• When configuring tests or test policies, do not configure more packets (probes) than necessary,
as they increase the weight of the test suite.
• Test suites with a smaller weight will typically complete more quickly, and allow other test suites
to execute concurrently. The weight of the test suite is determined by the number of tests in the
test suite, and the number of probes that are executed by each test. See Table 5-22, “OAM test
weight” (p. 105) for test weight per test type.
• Assign the time-out of the test suite in such a way that if one of the test results has not been
received it can be considered missed or failed without stopping other test suites from executing.
• Rather than scheduling a test suite to execute all tests on one network element, tests should be
executed on multiple network elements to allow for concurrent handling of the tests on the
network elements. This will allow the test suite results to be received from the network element
and processed by NFM-P more quickly freeing up available system weight more quickly.
• Rather than scheduling a test suite to run sequentially, consider duplicating the test suite and
running the test suites on alternating schedules. This allows each test suite time to complete or
time-out before the same test suite is executed again. Remember that this may cause double the
system weight to be consumed until the alternate test suite has completed.
• Create test suites that contain less than 200 elemental tests. This way you can initiate the tests
at different times by assigning the test suites to different schedules thereby having greater
control over how many tests are initiated or in progress at any given time.
• Prioritize which tests you wish to perform by manually executing the test suite to determine how
long it will take in your network. Use that duration with some added buffer time to help determine
how much time to leave between schedules or repetitions of a schedule and how to configure
the test suite time-out.
• A test suite time-out needs to be configured to take effect before the same test suite is
scheduled to run again, or it will not execute if it does not complete before the time-out.
• NFM-P database backups can impact the performance of STM tests.
5.8.5 Example 1
Assume there is a network with 400 LSPs and that the objective is to perform LSP pings on each
LSP as frequently as possible. The following steps are to be followed:
1. Create 4 test suites each containing 100 elemental LSP ping tests
2. One at a time, execute each test suite and record the time each one took to complete. Assume
that the longest time for executing one of the test suites is 5 minutes.
3. Create a schedule that is ongoing and has a frequency of 15 minutes. This provides three times
the duration of the longest test suite and ensures that each test suite completes before it is
executed again. Assign this schedule to the 4 test suites.
4. Monitor the test suite results to ensure that they are completing. If the tests are not completing
(for example getting marked as “skipped”), then increase the frequency time value of the
schedule.
5. In the above case, 400 elemental tests are configured to execute every 15 minutes.
5.8.6 Example 2
Assume there are eight test suites (T1, T2, T3, T4, T5, T6, T7 and T8), each containing 50
elemental tests. Assume the test suites individually take 5 minutes to run. Also, assume the
objective is to schedule them so that the guideline of having less than 200 concurrently running
elemental tests is respected.
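The document does not spell out a schedule for this example; one possible staggering, shown as a sketch below, starts at most three 50-test suites together so that no more than 150 elemental tests (below the 200 guideline) are in progress at once, given the 5-minute run time. Other staggerings are equally valid.

# One possible staggering for Example 2 (an illustration, not the only valid answer).
suite_size = 50
run_time_min = 5
schedule = {          # suite name -> start offset in minutes within a 15-minute cycle
    "T1": 0, "T2": 0, "T3": 0,
    "T4": 5, "T5": 5, "T6": 5,
    "T7": 10, "T8": 10,
}

def concurrent_tests(minute):
    running = [s for s, start in schedule.items()
               if start <= minute < start + run_time_min]
    return len(running) * suite_size

peak = max(concurrent_tests(m) for m in range(0, 15))
print(f"Peak concurrent elemental tests: {peak}")   # 150, below the 200 guideline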
5.8.7 Factors impacting the number of elemental tests that can be executed in a
given time frame
The following factors can impact the number of elemental tests that can be executed during a given
time frame:
• The type of tests being executed. Each type of elemental test takes varying quantities of time to
complete (for example, a simple LSP ping of an LSP that spans only two routers may take less
than 2 seconds; an MTU ping could take many minutes).
• The amount of data that is generated/updated by the test within the network elements. NFM-P
will have to obtain this information and store it in the NFM-P database. The quantity of data
depends on the type of tests being performed and the configuration of the objects on which the
tests are performed.
• The number of test suites scheduled at or around the same time
• The number of tests in a test suite
• The number of routers over which the tests are being executed; in general, a large number of
tests on a single router can be expected to take longer than the same number of tests distributed
over many routers.
• An NFM-P database backup may temporarily reduce the system’s ability to write test results into
the database.
• The station used to perform the tests will dictate how many physical resources NFM-P can
dedicate to executing elemental tests. On the minimum supported station (collocated NFM-P
server and NFM-P database on a single server), the number of concurrent tests must be limited
to 3 000.
use case is not, and there are no reports for this data. Additionally, even for residential, the only
reports available are for Comprehensive. Comprehensive statistics have a fixed 60min aggregation
interval. Due to the amount of data generated in a mobile deployment, Volume statistics require an
aggregation interval of 60 minutes. As an alternative, Volume Special Study statistics on specific
subscribers can be used. The only key factor of difference is whether or not additional counters are
enabled for Comprehensive statistics.
Table 5-23 cflowd statistics scaling limits for residential and mobile deployments
NSP Flow Collector processing rate in flows per second | Counter selection (note 1) | Maximum number of unique objects in memory (note 2) | Packet loss per hour (note 3)
Notes:
1. Default: two counters. Volume: total bytes/total packets. Comp-volume: total bytes StoC/CtoS sum unknown.
Only one counter exists. Vol SS: should be minuscule. All counters: Comp-volume has a total of ten counters
that can be enabled.
2. Number of aggregated output requests that are sent to the server every 60 minutes. Assumes transfer has
sufficient bandwidth to complete in a timely manner.
3. Packet loss may increase if communication between the NSP Flow Collector and target file server is
interrupted.
For business deployments, in addition to the statistics types with a small number of records
(Comprehensive, Volume, Unknown, and Volume Special Study), there are also statistics types
with a larger number of records (TCP Performance and RTP (Voice/Audio/Video)). The main
distinction is whether the TCP/RTP statistics types use the default enabled counters or whether
all counters have been enabled. Enabling all of the TCP/RTP counters increases the amount of
memory used by the NSP Flow Collector. Aside from the incoming FPS (Flows Per Second) that
the NSP Flow Collector can process, the other main factor putting pressure on the NSP Flow
Collector is the memory used by the number of unique objects/records (or unique routes, that is,
the number of output records the NSP Flow Collector produces in the IPDR files) held in NSP
Flow Collector memory at any one time. Finally, the interval size matters: the smaller the
aggregation interval, the greater the percentage of the next interval that overlaps with the transfer
time of the previous interval, and during this overlap the NSP Flow Collector must store objects in
memory from two different intervals. Comprehensive statistics types are fixed at 60-minute intervals.
A unique object/route for TCP/Volume records in the business context is:
SAP, App/AppGroup, Interval ID, Src Group ID, Source Interface ID, Dest Group ID, Dest Interface
ID
A Volume record will also have a direction field. Volume records coming from the router to the NSP
Flow Collector will result in two output records in the IPDR files (one for each direction). For TCP,
two incoming records (one for each direction) will be combined by the NSP Flow Collector into a
single output TCP record in the IPDR files.
A unique object/route for COMPREHENSIVE record in the business context is:
SAP, App/AppGroup, Interval ID, Src Group ID, Source Interface ID, Dest Group ID, Dest Interface
ID
and either a hostname field, or three device identification fields.
A unique object/route for RTP is defined as:
Every single flow into the NSP Flow Collector is a unique route and an equal number of flow
records are produced in the IPDR file. The expected number of RTP records sent from 7750 SR
Routers is expected to be a small percentage of the total flows (for example, <5% total flows TCP/
VOL/RTP)
NSP Flow Collector processing rate in flows per second | Statistic types used and counters used (note 1) | Maximum number of unique objects in memory (note 2) | Packet loss per hour (note 3)
Notes:
1. Comprehensive/Volume/Unknown/Volume SS: all counters. RTP/TCP/TCP SS counter selection — Default counters:
leaving the default enabled counters on; All counters: enabling all available counters for the given statistic type.
There are 40-60 total counters available for the TCP and RTP types.
2. Number of aggregated output requests that are sent to the server every 60 seconds. Assumes the transfer
has sufficient bandwidth to complete in a timely manner.
3. Packet loss may increase if communication between the NSP Flow Collector and target file server is
interrupted
Note: All CPAAs monitoring a contiguous network must belong to the same administrative
domain.
The scalability of 7701 CPAA Hardware revision 2 and vCPAA is described in the following table.
Maximum number of routers supported with both IGP and BGP turned on in the same 7701 CPAA, with 2 200 000 combined BGP routes (larger node counts must use two separate 7701 CPAAs for IGP and BGP): 500
Maximum number of IGP routers per 7701 CPAA if BGP is deployed on a separate 7701 CPAA (routers can all be in one or multiple areas; the count includes Nokia P/PE and 3rd party routers): 700
Maximum number of OSPF domains/areas per 7701 CPAA: one Administrative Domain per 7701 CPAA; up to 50 areas per 7701 CPAA
Maximum number of ISIS regions/area IDs per 7701 CPAA: one Administrative Domain per 7701 CPAA; up to 32 L1s + L2 per 7701 CPAA
Combined IP and LSP path monitors configured: 60 000 (the numbers below can be combined)
Maximum number of supported paths (IP/LSP) that can be imported into Simulated Impact Analysis simultaneously: 20 000
6 Security
6.1 Introduction
6.1.1 Overview
This chapter provides general information about platform security for a deployment of NSP and
NFM-P. Recommendations in this section apply to NSP and its optional components except where
indicated.
The NSP implements a number of safeguards to ensure the protection of private data. Additional
information can be found in the NSP Data Privacy section of the NSP System Architecture Guide.
Nokia recommends performing the following steps to achieve station security for the NSP:
• Install the latest recommended patch cluster from Red Hat (not supported on RHEL OS images)
• Isolate the NSP with properly configured firewalls; the NSP has no ingress or egress
requirements to access the public internet
• Implement firewall rules to control access to ports on NSP systems, as detailed in this section
• Use SSL certificates with strong hashing algorithms.
• Enforce minimum password requirements and password renewal policies on user accounts that
access the NSP applications.
• Configure a warning message in the Launchpad Security Statement.
• Configure login throttling to prevent denial of service attacks (see NSP Installation and Upgrade
Guide).
• Configure maximum session limits for administrators and users (see NSP Installation and
Upgrade Guide).
• Configure user lockout after a threshold of consecutive failed login attempts (see NSP
Installation and Upgrade Guide).
• When using custom TLS certificates for NSP deployment, ensure that the server private key file
is protected when not in use by nsp configurator.
• Optional: Revoke world permission on compiler executables (see NSP Installation and Upgrade
Guide).
See the NSP System Architecture Guide for NSP RHEL OS compliance with CIS Benchmarks. The
supported CIS Benchmark best practices are already implemented on NSP RHEL OS images.
The NSP supports the use of custom TLS certificates for client communications with NSP
applications. Internal communications between NSP components can be secured with the use of a
PKI server which can create, sign and distribute certificates. The NSP cluster software package
provides a PKI server that can be used to simplify the TLS certificate distribution to NSP
components.
An NSP cluster checks the expiry date of TLS certificates every 24 hours and raises an alarm in the
Fault Management application if a certificate is expired or nearing expiry. See the NSP System
Administrator Guide for further information.
See the NSP Installation and Upgrade Guide for instructions on the configuration of custom TLS
certificates and the provided PKI server application.
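The NSP performs the expiry check automatically; for reference, an equivalent manual spot check of a server certificate's expiry date can be sketched as follows. The hostname and port are placeholders for the address of the NSP cluster.

# Manual spot check of a TLS certificate's expiry date (the NSP cluster performs
# an equivalent check automatically every 24 hours).
import socket, ssl, time

host, port = "nsp.example.com", 443   # placeholder NSP cluster address

ctx = ssl.create_default_context()
with socket.create_connection((host, port), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

expires = ssl.cert_time_to_seconds(cert["notAfter"])
days_left = (expires - time.time()) / 86400
print(f"Certificate for {host} expires in {days_left:.0f} days")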
Figure: NSP deployment communications, showing the NSP cluster (Tomcat, Kafka, Zookeeper) and its numbered connections to the GUI/REST client, remote authentication server, email server, and external controller. The numbered connections are described in the following table.
Connection Usage
1 Web Client/REST API client connections.
REST over HTTPS secured with TLS
2 SSO authentication (secure), zookeeper
registration (secure), neo4j database
(non-secure), kafka (secure), NFM-P API
(secure), Data connection – CPROTO protocol
secured with TLS
3 NE mediation using SNMP and FTP/SCP
4 NE mediation using gRPC/gNMI, SNMP,
NETCONF, SSH
5 Data connection – CPROTO (non-secure)
6 BGP (supports GTSM), PCEP (secured by
TLS), OpenFlow communications (secured by
TLS) * Note
7 SSO authentication (secure), zookeeper
registration (secure), REST over HTTPS
secured with TLS, proprietary HTTP with
NFM-T
8 NE mediation with SNMP and TL-1
9 SSO authentication (secure), zookeeper
registration (secure), kafka (secure),
PostgreSQL (secure), gRPC (secure), REST
over HTTPS secured with TLS
10 syslog notifications secured with TLS
11 Mediator communications with external
controller, REST/RESTCONF secured with
TLS
12 SMTP, SMTPS, STARTTLS communication
with email server
13 LDAP, RADIUS, TACACS communications
with remote authentication servers
Note: VSR-NRC supports secure PCEP and OpenFlow communications in specific releases.
See the SR OS documentation for details.
Table notes:
• Each table identifies network communications based on the destination component.
• Each communication link defines traffic from the originating component and port to the
destination component and port. When firewall rules are applied in both directions of
communication, the return path must also be permitted.
• In a multi-node NSP cluster deployment, communications originating from NSP to a destination
must allow a firewall rule for each node of that NSP cluster to the destination component. Traffic
destined to a multi-node NSP cluster will require a firewall rule for the virtual IP address of the
NSP cluster.
• For NSP deployments with multiple network interfaces, the communications matrix will define on
which network interface the communications will be received.
• Where multiple components may be communicating with a destination component and port,
each source component with source port range is listed.
• A system administrator will require SSH access to components in the NSP deployment for
installation and maintenance purposes. For this purpose, tables will list a source component of
System administration server.
• Any ports that are optional, or are required only for unsecure communications, are identified at
the bottom of each table.
Note: The ephemeral port range of different server types may vary. Many Linux kernels use
the port range 32768 - 61000. To determine the ephemeral port range of a server, execute
cat /proc/sys/net/ipv4/ip_local_port_range
Note: Some NSP operations require idle TCP ports to remain open for long periods of time.
Therefore, when a customer firewall closes idle TCP connections, the OS TCP keep-alives
should be adjusted to ensure that the firewall does not close sockets that are in use by the NSP.
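As a convenience, the kernel settings referred to in the two notes above can be inspected programmatically; the sketch below only reads the corresponding /proc entries, and tuning them is done through sysctl as described in the RHEL documentation.

# Read the kernel settings mentioned in the notes above: the ephemeral port range
# and the TCP keep-alive idle time. This only inspects the current values.
def read_proc(path):
    with open(path) as f:
        return f.read().split()

low, high = read_proc("/proc/sys/net/ipv4/ip_local_port_range")
keepalive_s = read_proc("/proc/sys/net/ipv4/tcp_keepalive_time")[0]

print(f"Ephemeral port range: {low}-{high}")
print(f"TCP keep-alive idle time: {keepalive_s} seconds")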
Note: The use of firewalld is not supported on NSP cluster virtual machines. Nokia
recommends using Calico policies to control traffic to an NSP cluster deployment. (Kubernetes
networking relies on Calico rules added to iptables; using firewalld changes the order of those
Calico rules and can disrupt traffic flow in the NSP cluster.)
Refer to section 6.11 of this guide for a complete list of firewall rules for NFM-P and associated
components.
Nokia recommends performing the following steps to achieve NFM-P station security:
• Install a clean operating system environment with the minimum required packages documented
in the NSP Installation and Upgrade Guide.
• Install the latest Recommended Patch Cluster from Red Hat (not supported on NSP RHEL OS
images)
• Harden the RHEL operating system installation based upon the CIS Benchmarks best practices.
Reference the NSP System Architecture Guide for the Recommendations and Compliance
statements. The supported CIS Benchmark best practices are already implemented on the NSP
RHEL OS images.
• If installing RHEL, disable the mDNS service (see the sketch after this list).
• Implement firewall rules for NFM-P to control access to ports on NFM-P platforms as described
in 6.7.5 “Deploying NFM-P with firewalls” (p. 129) . NFM-P systems have no ingress or egress
requirements to access the public internet and should be isolated with properly configured
firewalls.
• If installing RHEL, enable the RHEL firewall filter rules. See 6.10 “NFM-P firewall and NAT rules” (p. 144) for more details.
• Install NFM-P with the secure configuration described in 6.7.3 “Installing the NFM-P components” (p. 128).
• Configure network element connections as described in 6.7.4 “NFM-P network element communication” (p. 128).
• Configure NFM-P to run at runlevel 3 as opposed to the default runlevel 5 (see the sketch after this list)
• Update the supported TLS versions and ciphers to remove older versions, if not required
• Consider using a Certificate Authority signed certificate instead of self-signed certificates
• Use TLS certificates signed with stronger hashing algorithms
• Enable SELinux in permissive/enforcing mode for the components that support it
• Enable Federal Information Processing Standards (FIPS) security. Note that using FIPS and SNMPv3 algorithms stronger than SHA1/AES128 adds CPU load, and NE response times may increase.
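A minimal sketch of two of the RHEL items above (disabling mDNS and running at runlevel 3) is shown below; it assumes the mDNS service is provided by avahi-daemon, as on a standard RHEL installation, and that the station uses systemd targets in place of runlevels:
# Disable the mDNS service (assumed to be provided by avahi-daemon).
systemctl disable --now avahi-daemon.socket avahi-daemon.service
# Boot to the multi-user target (runlevel 3) instead of the graphical target (runlevel 5).
systemctl set-default multi-user.target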
Nokia also recommends the configuration (as documented in the NSP NFM-P User Guide) of the
following options to secure communication with the NFM-P client UI and the NFM-P client XML API
interfaces:
• Password history count
• Password expiry periods
• Client time-outs
• Security statements
• Scope of command and span of control
• Client IP validation
[Figures: NFM-P communication overview. The diagrams show NFM-P clients connecting to the NFM-P server over EJB/HTTP and JMS; the NFM-P server and the active and standby NFM-P auxiliaries communicating with the managed network over SNMP (including SNMP traps), SSH/Telnet, and FTP/TFTP; and the NFM-P server and auxiliaries connecting to the NFM-P database over TCP (Oracle).]
NFM-P server and NFM-P auxiliary (statistics, call trace, and PCMD)
Port | Protocol | Encryption | Usage
69 | UDP | None | See SFTP for a secure alternative.
1099 | TCP | None | Internal system communications protocol (JBoss Naming Service - JNDI). This port is required to ensure the NFM-P GUI, XML API clients, auxiliaries, and standby NFM-P server properly initialize with the active NFM-P server. When initially logging in to the NFM-P server, NFM-P GUI and XML API clients use this port to find the various services that are available. This port is also used by the NFM-P GUI and XML API clients to register with the NFM-P server to receive notification of network changes.
3528 | TCP | None | JBoss jacorb LTE NWI3 Corba Service Port. This port is required to communicate with OMS for Flexi MR BTS management.
3529 | TCP | Dynamic Encryption (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | JBoss jacorb-ssl LTE NWI3 Corba Service Port. This port is required to communicate with OMS for Flexi MR BTS management.
7473 | TCP | Dynamic Encryption if TLS is configured (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | Neo4j https web server.
8093 | TCP | Dynamic Encryption (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | This port provides an HTTPS interface for Ne3s communication. NFM-P server only.
8094 | TCP | Dynamic Encryption (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | This port provides an HTTPS interface for Ne3s communication. NFM-P statistics auxiliary only.
9010 | TCP | Dynamic Encryption (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | This port is used for file synchronization between redundant NFM-P servers and redundant auxiliary collectors (statistics and call trace).
11800 | TCP | Static Encryption (AES cipher algorithm with 128 bit cipher strength) | Internal system communications protocol (JBoss Clustering). This port is required to ensure that redundant NFM-P servers can monitor each other.
12010 | TCP | Dynamic Encryption (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | This port is used for Warm standby Cache Sync communication between redundant NFM-P servers. This port is not used on the NFM-P auxiliary.
12300 - 12307 | TCP | None | These ports are used for detecting communication failures between NFM-P server clusters (primary / secondary / auxiliaries).
12800 | TCP | Static Encryption (AES cipher algorithm with 128 bit cipher strength) | Internal system communications protocol (JBoss clustering). During run-time operations, the NFM-P auxiliary uses this port to send and receive information to and from the NFM-P server. The number of required ports depends on the number of NFM-P auxiliary stations that are installed. Note that NFM-P can be configured to use a different port for this purpose; the procedure is available from Nokia personnel.
29780 | UDP | None | Used to stream UDP PCMD data from SGW and PGW network elements. Auxiliary PCMD collector only.
47100 - 47199 | TCP | Dynamic Encryption (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | Session-Manager ignite cache communication spi. Only required on the NFM-P server when hosting the nspOS components. Communication to external hosts is not required.
47500 - 47599 | TCP | Dynamic Encryption (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | Session-Manager ignite cache discovery spi. Only required on the NFM-P server when hosting the nspOS components. Communication to external hosts is not required.
48500 - 48599 | TCP | Dynamic Encryption (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | Session-Manager ignite cache communication spi. Only required on the NFM-P server when hosting the nspOS components. Communication to external hosts is not required.
48600 - 48699 | TCP | Dynamic Encryption (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | Session-Manager ignite cache discovery spi. Only required on the NFM-P server when hosting the nspOS components. Communication to external hosts is not required.
2205 | UDP | None | CGNAT / IPFIX cflowd records from 7750 SR routers to NSP Flow Collector.
4739 | UDP | None | cflowd records from 7750 SR routers to NSP Flow Collector.
8083 | TCP | None | JBoss socket for dynamic class and resource loading.
1090 | TCP | None | JBoss RMI/JRMP socket for connecting to the JMX MBeanServer. Used for NFM-P server to NSP Flow Collector Controller communication.
1098 | TCP | None | JBoss socket naming service used to receive RMI requests from client proxies. Used for NFM-P server to NSP Flow Collector Controller communication.
1099 | TCP | None | JBoss listening socket for the Naming service. Used for JBoss communication between NFM-P and the NSP Flow Collector Controller.
4444 | TCP | None | JBoss socket for the legacy RMI/JRMP invoker. Used for JBoss communication between NFM-P and the NSP Flow Collector Controller.
4445 | TCP | None | JBoss socket for the legacy Pooled invoker. Used for JBoss communication between NFM-P and the NSP Flow Collector Controller.
4446 | TCP | None | JBoss socket for the JBoss Remoting Connector used by the Unified Invoker. Used for JBoss communication between NFM-P and the NSP Flow Collector Controller.
4457 | TCP | Dynamic Encryption (provided by TLS; strong ciphers supported using various CBC and AES ciphers) | JBoss socket for JBoss Messaging 1.x.
8083 | TCP | None | JBoss socket for dynamic class and resource loading.
6.9 FTP
6.9.1 FTP between the NFM-P server and NFM-P auxiliary statistics collector and
the managed network
The NFM-P server and NFM-P auxiliary statistics collector may use FTP for several purposes.
The NFM-P server may use FTP, if configured, to receive backup images of managed devices, to
send new software images to the managed devices and to receive accounting statistics from the
managed devices.
If an NFM-P auxiliary statistics collector station is installed, FTP will be used, if configured, to
retrieve accounting statistics from managed devices.
If STM Accounting tests are being executed, the NFM-P server will retrieve the test results from the
managed devices by FTP, if configured.
The FTP communication is configured as an extended passive FTP connection, with the managed
devices serving as the FTP servers and the NFM-P server and NFM-P auxiliary acting as the FTP
client.
Extended passive FTP connections use dynamically allocated, ephemeral ports on both sides of the communication channel. As such, the data sent from the managed devices is sent from a port in the range 1024-65535 to a port on the NFM-P server in the range 1024-65535. Support for the EPSV/EPRT FTP commands (commands that can replace PASV/PORT) must be enabled for connections to the 7x50 family of routers.
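As an illustration of the passive FTP behavior described above, the following hedged iptables sketch (with a placeholder managed-network subnet) permits the NFM-P station to open data connections toward NE ephemeral ports and accept the return traffic; it is an example only and does not replace the complete rule sets in the tables that follow:
# Placeholder managed-network subnet; substitute the real value.
MANAGED_NET="192.0.2.0/24"
# Allow outbound passive FTP data connections to ephemeral ports on managed devices.
iptables -A OUTPUT -p tcp -d "$MANAGED_NET" --dport 1024:65535 -j ACCEPT
# Accept the return traffic for connections initiated by the NFM-P station.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT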
It is imperative that all rules are considered completely for the NFM-P systems to interoperate correctly. The following tables define the rules to apply to each NFM-P station. Conditions within this section indicate whether a particular table needs to be applied.
See 7.4 “NFM-P Network Address Translation” (p. 175) for supported NAT configurations.
Table 6-15 SNMP firewall rules for traffic between the NFM-P server(s) and the managed network
UDP | Any | Managed network | 162 | Server(s) | SNMP trap initiated from the NE
Note: Due to the size of SNMP packets, IP fragmentation may occur in the network. Ensure
the firewall will allow fragmented packets to reach the server(s).
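A hedged iptables sketch of the Table 6-15 rule is shown below, again with a placeholder subnet; the second rule illustrates accepting non-first fragments so that large, fragmented SNMP traps are not discarded before reassembly:
# Placeholder managed-network subnet; substitute the real value.
MANAGED_NET="192.0.2.0/24"
# Accept SNMP traps (UDP 162) from the managed network (Table 6-15).
iptables -A INPUT -p udp -s "$MANAGED_NET" --dport 162 -j ACCEPT
# Accept non-first fragments from the managed network.
iptables -A INPUT -f -s "$MANAGED_NET" -j ACCEPT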
Table 6-16 Telnet / FTP firewall rules for traffic between the NFM-P server(s) and the managed network
TCP | > 1023 | Managed network | > 1023 | Server(s) | Passive FTP ports for data transfer
Table 6-17 SSH / SFTP / SCP firewall rules for traffic between the NFM-P server(s) and the managed network
TCP | >15000 | Server(s) | 830 | Managed network | SSHv2 request for MME
Table 6-18 Other firewall rules for traffic between the NFM-P server(s) and the managed network
ICMP | N/A | Managed network | N/A | NFM-P server(s) | Only used if Ping Policy is enabled.
Table 6-19 Firewall rules for traffic between the NFM-P server(s) and OMS for Flexi MR BTS management
Table 6-20 Firewall rules for traffic between the NFM-P server(s) and the LTE BTS
Table 6-21 Firewall rules for traffic between the NFM-P server(s) and the 1830 SMS HSM
TCP | 758 | NFM-P server | 5552 | 1830 SMS HSM | Two-way communication
TCP/UDP | Any | NFM-P server | 389 | LDAP server | For LDAP authentication
TCP/UDP | Any | NFM-P server | 636 | LDAP server | For LDAP authentication (TLS)
When there is a firewall at the interface that reaches the NFM-P client(s) (NIC 3 on Figure 7-3,
“Distributed NFM-P server/database deployment with multiple network interfaces” (p. 170)) the
following rules need to be applied.
Table 6-23 Firewall rules for traffic coming into the NFM-P server(s) from the NFM-P client(s) (GUI/XML
API/JMS/Web/Kafka)
TCP | > 1023 | XML API client | > 1023 | Server(s) | If (passive) FTP is required
When there is a firewall configured, and there are redundant NFM-P auxiliary station(s), the
following rules need to be applied to the appropriate interface.
Table 6-24 Firewall rules for traffic coming into the NFM-P server(s) from the NFM-P auxiliary statistics / call
trace / PCMD collector(s)
Table 6-25 Firewall rules for traffic coming into the NFM-P server(s) from the NSP Flow Collector Controller(s)
Table 6-26 Firewall rules for traffic coming into the NFM-P server(s) from the NSP Flow Collector(s)
When a firewall and NAT are configured for the NFM-P server at the NFM-P client interface (NIC 3 on Figure 7-3, “Distributed NFM-P server/database deployment with multiple network interfaces” (p. 170)), the following rules need to be applied to allow the XML API clients to retrieve the logToFile accounting statistics information. Services require the use of public addresses.
Table 6-27 Additional firewall rules required to allow services on the NFM-P client(s) to communicate with the
NFM-P server if NAT is used
TCP | > 1023 | Server public address | > 1023 | Server private address
When there is a firewall at the interface that reaches the NFM-P management network (NIC 1 on
Figure 7-3, “Distributed NFM-P server/database deployment with multiple network interfaces”
(p. 170)), the following rules apply.
Table 6-28 Firewall rules for traffic coming into the NFM-P server(s) from the NFM-P database server(s)
When there is a firewall at the NFM-P management interface (NIC 1 on Figure 7-3, “Distributed
NFM-P server/database deployment with multiple network interfaces” (p. 170)) and NFM-P server
redundancy is configured, then the following rules need to be applied. Configuration needs to be in
both directions to handle an activity switch.
Table 6-29 Firewall rules for setups with redundant NFM-P servers
When there is a firewall at the NFM-P management interface (NIC 1 on Figure 7-3, “Distributed
NFM-P server/database deployment with multiple network interfaces” (p. 170)) and NFM-P auxiliary
statistics / call trace / PCMD collectors are configured, then the following rules need to be applied:
Table 6-30 Firewall rules for traffic coming into the NFM-P server(s) from the NFM-P auxiliary statistics / call
trace server(s)
If NFM-P is not deployed with NSP, the following rules need to be applied to the NFM-P server if
there is a firewall on the NFM-P management interface (NIC 1 on Figure 7-3, “Distributed NFM-P
server/database deployment with multiple network interfaces” (p. 170))
Table 6-31 Firewall rules for inter-process communication on the NFM-P server(s)
If NFM-P is deployed with NSP, the following rules need to be applied to the NFM-P server if there
is a firewall on the NFM-P management interface (NIC 1 on Figure 7-3, “Distributed NFM-P server/
database deployment with multiple network interfaces” (p. 170))
Table 6-32 Firewall rules for communication between the NFM-P server(s) and NSP
Table 6-33 Firewall rules for communication between the NFM-P statistics auxiliary and NSP
Table 6-34 Firewall rules for traffic coming into the NFM-P database server(s) from the NFM-P server(s),
NFM-P auxiliary statistics / call trace / PCMD collector(s), and NSP analytics server
When there is a firewall at the interface that reaches the NFM-P management network (NIC 1 on
Figure 7-3, “Distributed NFM-P server/database deployment with multiple network interfaces”
(p. 170)) and redundancy is configured, the following rules apply. Configuration needs to be in both
directions to handle an activity switch.
Table 6-35 Firewall rules for traffic between the NFM-P database servers (redundant only)
6.10.4 NFM-P auxiliary server and NSP Flow Collector firewall and NAT rules
When there is a firewall at the interface that reaches the managed network (NIC 2 on Figure 7-3,
“Distributed NFM-P server/database deployment with multiple network interfaces” (p. 170)), the
following rules apply.
Table 6-36 SNMP firewall rules for traffic coming into the NFM-P auxiliary statistics collector server(s) from the
managed network
Note: Due to the size of SNMP packets, IP fragmentation may occur in the network. Ensure
the firewall will allow fragmented packets to reach the server(s).
Table 6-37 SSH / Telnet firewall rules for traffic coming into the NFM-P auxiliary statistics collector server(s)
from the managed network
Table 6-38 FTP firewall rules for traffic coming into the NFM-P auxiliary statistics collector(s) from the
managed network
TCP | > 1023 | Managed network | > 1023 | Auxiliary server(s) | Passive FTP ports for data transfer (see 6.9 “FTP” (p. 143))
Table 6-39 Firewall rules for traffic coming into the NFM-P auxiliary statistics collector(s) from the LTE BTS
Note: FTP access is only required for the NFM-P auxiliary statistics collector.
Table 6-40 SNMP firewall rules for traffic coming into the NFM-P auxiliary call trace collector(s) from the
managed network
UDP | 161 | Managed network | > 32768 | Auxiliary server(s) | SNMP response
Note: Due to the size of SNMP packets, IP fragmentation may occur in the network. Ensure
the firewall will allow fragmented packets to reach the server(s).
Table 6-41 Firewall rules for traffic coming into the NFM-P auxiliary PCMD collector(s) from the managed
network
UDP | Any | Managed network | 29780 | Auxiliary server(s) | PCMD records from SGW / PGW to NFM-P PCMD auxiliary collector
Table 6-42 Firewall rules for traffic coming into the NSP Flow Collector(s) from the managed network
UDP | Any | Managed network | 2205 | NSP Flow Collector | CGNAT / IPFIX cflowd records
UDP | Any | Managed network | 4739 | NSP Flow Collector | cflowd records from 7750 SR routers to NSP Flow Collector
When there is a firewall at the interface that reaches the NFM-P client(s) (NIC 3 on Figure 7-3,
“Distributed NFM-P server/database deployment with multiple network interfaces” (p. 170)), the
following rules apply for FTP access to the NFM-P auxiliary by the XML-API client.
Table 6-43 Firewall rules for XML API client communication to the NFM-P auxiliary collector(s)
TCP | Any | XML API client | 21/22 | auxiliary collector(s) | (S)FTP requests (logToFile statistics and call trace information)
TCP | 21/22 | XML API client | Any | auxiliary collector(s) | (S)FTP responses
TCP | > 1023 | XML API client | Any | auxiliary collector(s) | Passive (S)FTP ports for data transfer (see 6.9 “FTP” (p. 143))
TCP | Any | XML API client | 8086 | auxiliary collector(s) | HTTP interface for WebDAV for WTA
TCP | Any | XML API client | 8445 | auxiliary collector(s) | HTTPS interface for WebDAV for WTA
Table 6-44 FTP/SFTP firewall rules for the NSP Flow Collector(s)
TCP | Any | NSP Flow Collector(s) | 21/22 | Target file server | (S)FTP requests
TCP | 21/22 | Target file server | Any | NSP Flow Collector(s) | (S)FTP responses
TCP | > 1023 | Target file server | Any | NSP Flow Collector(s) | Passive (S)FTP ports for data transfer (see 6.9 “FTP” (p. 143))
Table 6-45 FTP/SFTP firewall rules for the NSP Flow Collector Controllers(s)
TCP | Any | NSP Flow Collector(s) | 21/22 | NFM-P server | (S)FTP requests
TCP | 21/22 | NFM-P server | Any | NSP Flow Collector(s) | (S)FTP responses
TCP | > 1023 | NFM-P server | Any | NSP Flow Collector(s) | Passive (S)FTP ports for data transfer (see 6.9 “FTP” (p. 143))
TCP | Any | NSP Flow Collector Controller | 22222 | NSP Flow Collector | SFTP requests
TCP | 22222 | NSP Flow Collector | Any | NSP Flow Collector Controller | SFTP responses
When there is a firewall at the interface that communicates with the NFM-P servers, the following rules apply for inter-process communication.
Table 6-46 Firewall rules for inter-process communication on the NFM-P auxiliary statistics / call trace
collector(s)
Table 6-47 Firewall rules for inter-process communication on the NSP Flow Collector Controller(s)
Table 6-48 Firewall rules for inter-process communication on the NSP Flow Collector(s)
When there is a firewall at the interface that communicates with the NFM-P servers, the following
rules apply.
Table 6-49 Firewall rules for traffic coming into the NFM-P auxiliary statistics / call trace / PCMD collector(s)
from the NFM-P server(s)
Table 6-50 Firewall rules for traffic between the NSP Flow Collector Controller(s) and the NFM-P server(s) or NSP
Table 6-51 Firewall rules for traffic between the NSP Flow Collector(s) and the NFM-P server(s) or NSP
When there is a firewall at the interface that reaches the NFM-P client(s) (NIC 3 on Figure 7-3,
“Distributed NFM-P server/database deployment with multiple network interfaces” (p. 170)) and
NAT is used on the NFM-P auxiliary server(s), the following rules apply to allow the XML API clients
to collect the logToFile accounting statistics files. Services require the use of public addresses.
Table 6-52 Additional firewall rules required to allow services on the NFM-P client(s) to communicate with the
NFM-P auxiliary(s) if NAT is used on the auxiliary server(s)
TCP | > 1023 | Auxiliary server public address | > 1023 | Auxiliary server private address
When there is a firewall at the interface that reaches the NFM-P management network (NIC 1 on
Figure 7-3, “Distributed NFM-P server/database deployment with multiple network interfaces”
(p. 170)), the following rules apply.
Table 6-53 Firewall rules for traffic coming into the NFM-P auxiliary collector(s) from the NFM-P database(s)
When there is a firewall at the interface that reaches the NFM-P management network (NIC 1 on
Figure 7-3, “Distributed NFM-P server/database deployment with multiple network interfaces”
(p. 170)), the following rules apply.
Table 6-54 Firewall rules for traffic coming into the NFM-P auxiliary collector(s) from the NFM-P server(s)
Table 6-55 Firewall rules for traffic between redundant NFM-P auxiliary statistics collectors
Table 6-56 Firewall rules for traffic between redundant NFM-P auxiliary call trace collectors
Table 6-57 Firewall rules for traffic between the NFM-P server and the NFM-P auxiliary database
Table 6-58 Firewall rules for traffic between the NFM-P auxiliary database(s) and NSP
Table 6-59 Firewall rules for traffic between the NFM-P auxiliary statistics collector and the NFM-P auxiliary
database
Table 6-60 Firewall rules for traffic between the NSP Flow Collector and the NFM-P auxiliary database
Table 6-61 Firewall rules for traffic between the NSP analytics server and the NFM-P auxiliary database
Table 6-62 Firewall rules for traffic between NFM-P auxiliary database clusters
Table 6-63 Firewall rules for traffic between the NFM-P server(s) and NSP analytics server
Table 6-64 Firewall rules for traffic between the NFM-P client and NSP analytics server
Apply the following firewall rules to the connection between the NSP analytics server and NSP
server when NFM-P is integrated with NSP. Note that all connections are bi-directional.
Table 6-65 Firewall rules for traffic between the NSP analytics server and NSP
Table 6-66 Firewall rules for traffic between the NFM-P client and GNEs
TCP | Any | NFM-P client(s) | 80 | Managed network | HTTP (see GNE vendor for specifics)
TCP | Any | NFM-P client(s) | 443 | Managed network | HTTPS (see GNE vendor for specifics)
Table 6-67 Firewall rules for traffic between the NFM-P client (NEtO) and 9500 MPR / Wavence SM
(MSS-8/4/1)
Table 6-68 Firewall rules for traffic between the NFM-P client (NEtO) and 9500 MPR / Wavence SM (MSS-1C /
MPR-e / MSS-8)
UDP | Any | NFM-P client | 11500 | Managed network | Equipment View (GUI)
Table 6-69 Firewall rules for traffic between the NFM-P client (NEtO) and 9400 AWY
Table 6-70 Firewall rules for traffic between the NFM-P client and OmniSwitches
Table 6-71 Firewall rules for traffic between the NFM-P client and NSP
Table 6-72 Firewall rules for traffic between NFM-P and the server hosting the pki-server
NSP supports configuring different network interfaces to handle the following types of traffic in a
multi-homed system.
• A client network interface can be used to connect users to the NSP GUI and to connect external OSS systems to NSP.
• An internal network interface can be used to handle traffic between NSP systems that does not need to be accessible to external systems or to managed network elements. Internal traffic includes, but is not limited to, resync of network topology information, security communications, application registration, and data synchronization between redundant components.
• A mediation network interface can be used to communicate with network elements (provisioning, NE database backups, monitoring, operations, etc.).
[Figure: NSP multi-homed networking. The diagram shows clients reaching NSP over the client network, NSP-internal traffic on the internal network, and network elements reached over the managed network, each carried on a separate NSP network interface (nic1, nic2, nic3).]
The NSP cluster can be deployed with one interface for all client, network management, and NE traffic, or with one interface for client and internal network traffic and a second interface for NE traffic. All servers in an NSP deployment must support the same network configuration for client and internal networks. For example, an NSP cluster deployed with separate network interfaces for the client and internal networks cannot be deployed with an NFM-P server that is configured with one interface supporting both client and internal communications.
When installing NSP components on stations with multiple network interfaces, each interface must
reside on a separate subnet, with the exception of interfaces that are to be used in IP Bonding.
There is no requirement for the NSP cluster to use the first network interface (e.g., eth0, bge0) to communicate with client applications.
Additional network interfaces can be configured on the NSP cluster, at the customer's discretion, for
other operations such as archiving database backups or activity logs.
When an NSP cluster is deployed with NFM-T, the separation of client and internal network traffic is
not supported. NSP and NFM-T must use a single network for client and internal communications.
When using custom TLS certificates in a multi-network configuration, the NSP server certificate requires the IP address, hostname, or FQDN of the client network interface (or virtual IP) and the IP address, hostname, or FQDN of the internal network interface (or virtual IP) in the certificate SAN field (see the advertisedAddress and internalAdvertisedAddress parameters in nsp-config.yml).
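As a hedged sketch of this SAN requirement, the following OpenSSL command (OpenSSL 1.1.1 or later) builds a certificate signing request whose SAN lists both advertised addresses; the hostnames and IP addresses are placeholders and must be replaced with the values configured as advertisedAddress and internalAdvertisedAddress in nsp-config.yml, and the resulting CSR would then be signed by your CA:
# Placeholder names and addresses; substitute the values from nsp-config.yml.
openssl req -new -newkey rsa:4096 -nodes \
  -keyout nsp-server.key -out nsp-server.csr \
  -subj "/CN=nsp-client.example.com" \
  -addext "subjectAltName=DNS:nsp-client.example.com,DNS:nsp-internal.example.com,IP:192.0.2.10,IP:198.51.100.10"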
The NSP cluster can use IPv4 or IPv6 addressing on the client, internal and mediation network
interfaces. In addition to the limitations and restrictions documented in section 4.2.1, the following
conditions apply:
• The NSP cluster can use either IPv4 or IPv6 communications, but not both, on the client network interface and on the internal network interface. The underlying system network interfaces can have both IPv4 and IPv6 addresses assigned, but NSP communications on those interfaces use only one address family.
• The NSP cluster mediation interface supports IPv4 only, IPv6 only and IPv4 and IPv6
simultaneously. When NSP is configured with IPv4 and IPv6 mediation simultaneously, the NSP
must have a dedicated mediation interface not shared with client and internal network
communications.
• In an NSP deployment with separate network interfaces for client and internal communications, the client and internal networks must both use the same IP version. For example, client communications over IPv4 combined with internal communications over IPv6 is not supported.
The NSP cluster can communicate with some external components on any network interface,
including
• remote authentication servers (LDAP, RADIUS, TACACS)
• VSR-NRC
• syslog server
• email server
Each node in an NSP cluster must allow the same traffic on each network interface.
NSP supports the use of Network Address Translation (NAT) between the following components:
• NSP cluster and clients (web application users, REST API clients)
• NSP cluster and network elements
• NSP cluster and other components in the NSP deployment (e.g., NFM-P)
NSP does not support the use of NAT between nodes within an NSP cluster deployment, including
the deployer host.
Figure 7-2 Collocated NFM-P server/database deployment with multiple network interfaces
[Figure: the collocated NFM-P server/database uses one network interface (bge0) to reach the NFM-P clients and a second interface (bge1) to reach the managed network.]
Figure 7-3 Distributed NFM-P server/database deployment with multiple network interfaces
[Figure: the distributed, redundant deployment shows active and standby NFM-P servers, NFM-P databases, NFM-P auxiliaries, NSP Flow Collector Controllers, and NSP Flow Collectors. NIC 1 on each station carries the NFM-P management network, NIC 2 (and optionally NIC 4) carries the managed network, and NIC 3 carries client traffic; a NIC 5 is also shown on some stations.]
Figure 7-3, “Distributed NFM-P server/database deployment with multiple network interfaces”
(p. 170) illustrates a distributed, redundant NFM-P deployment where the NFM-P components are
configured to actively use more than one network interface.
Due to limitations with the inter-process and inter-station communication mechanisms, a specific
network topology and the use of hostnames is required (see 7.5 “Use of hostnames for the NFM-P
client” (p. 177)). Contact a Nokia representative to obtain further details.
The NFM-P server supports the configuration of different IP addresses for the following purposes:
• One or multiple network interfaces can be used to manage the network. (NIC 2 on Figure 7-3,
“Distributed NFM-P server/database deployment with multiple network interfaces” (p. 170)) This
network interface contains the IP address that the managed devices will use to communicate
with the NFM-P server and NFM-P auxiliary. If managing a network element with both an in-band
and out-of-band connection, the same network interface on the NFM-P server must be used for
both communication types.
• One network interface can be used to service the requirements of the NFM-P clients (GUIs and
XML API) (NIC 3 on Figure 7-3, “Distributed NFM-P server/database deployment with multiple
network interfaces” (p. 170)). This network interface contains the IP address that all clients (GUIs
and XML API) will use to communicate with the NFM-P server. All clients (GUIs and XML API)
must be configured to use the same IP address to communicate to the NFM-P server. This IP
address can be different from the one used by the managed devices to communicate with the
NFM-P server. Each client can use the hostname to communicate with the NFM-P server, where
the hostname could map to different IP addresses on the NFM-P server - for example, some
clients could connect over IPv4 and some over IPv6. In this scenario, the NFM-P server must be
configured for clients to use hostname and not IP.
• One network interface can be used to communicate with the NFM-P database, NFM-P auxiliary
database, and NFM-P auxiliary collectors as well as any redundant NFM-P components should
they be present (NIC 1 on Figure 7-3, “Distributed NFM-P server/database deployment with
multiple network interfaces” (p. 170)). This network interface contains the IP address that the
NFM-P database, NFM-P auxiliary database, and redundant NFM-P components will use to
communicate with the NFM-P server. This IP address can be different from the addresses used
by the NFM-P clients and the managed devices to communicate with the NFM-P server.
• In a redundant NFM-P installation, the NFM-P servers and NFM-P auxiliary collectors must have
IP connectivity to the NFM-P server peer.
• Additional network interfaces may be configured on the NFM-P server station, at the customer’s
discretion, to perform maintenance operations such as station backups.
• IPv4 and IPv6 network elements can be managed from the same interface or from separate
interfaces. (NIC2 and/or NIC4 on Figure 7-3, “Distributed NFM-P server/database deployment
with multiple network interfaces” (p. 170)).
The NFM-P auxiliary statistics collector supports the configuration of different IP addresses for the
following purposes:
• One or multiple network interfaces can be used to retrieve information from the managed
network. (NIC 2 on Figure 7-3, “Distributed NFM-P server/database deployment with multiple
network interfaces” (p. 170)) This network interface contains the IP address that the managed
devices will use to retrieve the accounting statistics files, and performance statistics from the
network elements.
• One network interface can be used to service the requirements of the XML API clients (NIC 3 on
Figure 7-3, “Distributed NFM-P server/database deployment with multiple network interfaces”
(p. 170)). This network interface contains the IP address that all XML API clients will use to
communicate with the NFM-P auxiliary statistics collector. XML API clients will use this IP
address to retrieve the logToFile statistics collection data from the NFM-P auxiliary statistics
collector.
• One network interface can be used to communicate with the NFM-P server, NFM-P database,
NFM-P auxiliary database cluster as well as any redundant NFM-P components should they be
present (NIC 1 on Figure 7-3, “Distributed NFM-P server/database deployment with multiple
network interfaces” (p. 170)). This network interface contains the IP address that the NFM-P
server, NFM-P database, NFM-P auxiliary database, and redundant NFM-P components will use
to communicate with the NFM-P auxiliary statistics collector. This IP address can be different
from the addresses used by the NFM-P XML API clients and the managed devices to
communicate with the NFM-P auxiliary statistics collector.
• In a redundant NFM-P installation, the NFM-P auxiliary statistics collector must have IP
connectivity to the NFM-P server peer.
• Additional network interfaces may be configured on the NFM-P auxiliary statistics collector
station, at the customer’s discretion, to perform maintenance operations such as station
backups.
• IPv4 and IPv6 network elements can be managed from the same interface or from separate
interfaces. (NIC2 and/or NIC4 on Figure 7-3, “Distributed NFM-P server/database deployment
with multiple network interfaces” (p. 170)).
with the NFM-P auxiliary call trace collector. 9958 WTA will use this IP address to retrieve the call
trace data from the NFM-P auxiliary call trace collector.
• One network interface can be used to communicate with the NFM-P management complex as
well as any redundant NFM-P components should they be present (NIC 1 on Figure 7-3,
“Distributed NFM-P server/database deployment with multiple network interfaces” (p. 170)). This
network interface contains the IP address that the NFM-P management complex components
will use to communicate with the NFM-P auxiliary call trace collector. If a redundant NFM-P
auxiliary call trace collector is present, this network interface will also be used to sync call trace
and debug trace data collected from the network, with the peer NFM-P auxiliary call trace
collector. This IP address can be different from the addresses used by the 9958 WTA clients and
the managed devices to communicate with the NFM-P server.
• In a redundant NFM-P installation, the NFM-P auxiliary call trace collector must have IP
connectivity to the NFM-P server peer.
• Additional network interfaces may be configured on the NFM-P auxiliary call trace collector
station, at the customer’s discretion, to perform maintenance operations such as station
backups.
address can be different from the addresses used by the clients and the managed devices to communicate with the NFM-P server. If the deployment includes NSP, this is the network interface that would be used for that communication.
• One network interface can be used to retrieve information from the managed network. (NIC 2
and/or NIC 4 on Figure 7-3, “Distributed NFM-P server/database deployment with multiple
network interfaces” (p. 170)). This network interface contains the IP address that the managed
devices will use to send the cflowd flow data from the network elements.
• One network interface can be used to communicate with the clients (NIC 3 on Figure 7-3,
“Distributed NFM-P server/database deployment with multiple network interfaces” (p. 170)) This
network interface contains the IP address that the user will connect to with the web management
interface.
• One network interface can be used to send the formatted IPDR files to the target file server (NIC
4 on Figure 7-3, “Distributed NFM-P server/database deployment with multiple network
interfaces” (p. 170)). This network interface contains the IP address that all clients will use to
communicate with the NSP Flow Collector.
• In a redundant NFM-P installation, the NSP Flow Collector must have IP connectivity to the
NFM-P server peer.
• Additional network interfaces may be configured on the NSP Flow Collector station, at the
customer’s discretion, to perform maintenance operations such as station backups.
The NFM-P auxiliary PCMD collector supports the configuration of different IP addresses for the following purposes; to meet scaling targets, a minimum of two separate interfaces must be used (one for management traffic and one for PCMD data collection):
• One network interface can be used to retrieve information from the managed network. (NIC 2 on
Figure 7-3, “Distributed NFM-P server/database deployment with multiple network interfaces”
(p. 170)) This network interface contains the IP address that the managed devices will use to
send the PCMD data from the network elements.
• One network interface can be used for retrieval of the formatted PCMD files by the target file
server (NIC 3 on Figure 7-3, “Distributed NFM-P server/database deployment with multiple
network interfaces” (p. 170)). This network interface contains the IP address that all clients will
use to communicate with the NFM-P auxiliary PCMD collector.
• One network interface can be used to communicate with the NFM-P management complex as
well as any redundant NFM-P components should they be present (NIC 1 on Figure 7-3,
“Distributed NFM-P server/database deployment with multiple network interfaces” (p. 170)). This
network interface contains the IP address that the NFM-P management complex components
will use to communicate with the NFM-P auxiliary PCMD collector. This IP address can be
different from the addresses used by the clients and the managed devices to communicate with
the NFM-P server.
• In a redundant NFM-P installation, the NFM-P auxiliary PCMD collector must have IP
connectivity to the NFM-P server peer.
• Additional network interfaces may be configured on the NFM-P auxiliary PCMD collector station,
at the customer’s discretion, to perform maintenance operations such as station backups.
NFM-P supports the use of Network Address Translation (NAT) between the following components:
• The NFM-P server and NFM-P clients (GUIs or XML API)
• The NFM-P auxiliary server and NFM-P XML API clients
• The NFM-P server and the managed network
• The NFM-P auxiliary statistics collector and the managed network
• The NFM-P auxiliary PCMD collector and the managed network
The figure below illustrates a deployment of NFM-P where NAT is used between the NFM-P server
and the managed network.
Figure 7-4 NFM-P server deployments with NAT between the server and the managed network
[Figure: the NFM-P server, NFM-P auxiliary, and NFM-P database reside on the private network, and a NAT-enabled firewall separates them from the managed network on the public network.]
Note: Network Address Translation is not supported between the NFM-P auxiliary call trace
collector and the managed network.
The following two figures illustrate a deployment of NFM-P where NAT is used between the NFM-P server and the NFM-P clients (GUIs, XML API, or client delegate servers). In Figure 7-5, “NFM-P server deployment using NAT with IP Address communication” (p. 176), NFM-P clients on the private side and public side of the NAT-Enabled Firewall must connect to the public IP address of the NFM-P server. A routing loopback from the NFM-P server private IP address to the NFM-P server public IP address must be configured in this scenario, because all NFM-P clients must communicate with the NFM-P server through the NFM-P server public IP address.
The NFM-P auxiliary will need to be able to connect to the public IP address of the NFM-P server.
Figure 7-5 NFM-P server deployment using NAT with IP Address communication
[Figure: a routing loopback is required for the NFM-P server. NFM-P clients on both the private and public networks must connect to the NFM-P server public IP address through the NAT-Enabled Firewall.]
Figure 7-6 NFM-P server deployment using NAT with name resolution based communication
[Figure: no routing loopback is required for the NFM-P server. NFM-P clients on both the private and public networks connect to the NFM-P server by hostname through the NAT-Enabled Firewall.]
In Figure 7-6, “NFM-P server deployment using NAT with name resolution based communication” (p. 176), a name resolution service on the public side of the NAT-Enabled Firewall is configured to resolve the NFM-P server hostname to the public IP address of the NFM-P server, and a name resolution service on the private side is configured to resolve the NFM-P server hostname to the private IP address of the NFM-P server. Clients on both sides of the NAT-Enabled Firewall are configured to communicate with the NFM-P server by hostname, and the NFM-P server hostname must be the same on both sides of the NAT-Enabled Firewall.
The figure below illustrates a deployment of NFM-P where NAT is used between the NFM-P
complex, NFM-P clients, and the managed network.
[Figure: NAT between the NFM-P complex (including the NFM-P auxiliaries), the NFM-P clients, and the managed network.]
For installations using NAT between the NFM-P server and NFM-P client, a reverse DNS look-up
mechanism must be used for the client, to allow proper startup.
NAT rules must be in place before NFM-P installation can occur, since the installation scripts will
access other systems for configuration purposes.
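As a minimal sketch, forward and reverse name resolution can be verified from a Linux station before installation; the hostname and address below are placeholders:
# Placeholder values; substitute the NFM-P server hostname and the address the client uses.
NFMP_HOST="nfmp-server.example.com"
NFMP_ADDR="203.0.113.20"
# Forward lookup: the hostname should resolve to the expected address.
getent hosts "$NFMP_HOST"
# Reverse lookup: the address should resolve back to the expected hostname.
getent hosts "$NFMP_ADDR"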
Note: Network Address Translation is not supported between the NFM-P auxiliary call trace
collector and the managed network.
The following scenarios identify situations where it is necessary for the NFM-P client to be
configured to use a hostname rather than a fixed IP address to reach the NFM-P server:
• When CA signed TLS certificates are used, the FQDN must be used for client communication.
• When NFM-P clients can connect to the NFM-P server over multiple interfaces on the NFM-P
server. For example, when clients can connect over both IPv4 and IPv6 interfaces.
• When NAT is used between NFM-P clients and the NFM-P server.
• For situations where the NFM-P client and the NFM-P auxiliary (and/or NFM-P peer server) are
using different network interfaces to the NFM-P server, the NFM-P client must use a hostname to
reach the NFM-P server.
8 Appendix A