NE9000
V800R023C00SPC500
Configuration Guide
18 System Monitor
Issue 01
Date 2023-09-30
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees
or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://www.huawei.com
Email: [email protected]
Contents
1 Configuration
1.1 System Monitor
1.1.1 About This Document
1.1.2 IP FPM Configuration
1.1.2.1 Overview of IP FPM
1.1.2.2 Feature Requirements for IP FPM
1.1.2.3 Configuring IP FPM End-to-End Performance Statistics Collection
1.1.2.3.1 Configuring an MCP
1.1.2.3.2 Configuring a DCP
1.1.2.3.3 Checking the Configurations
1.1.2.4 Configuring IP FPM Hop-by-Hop Performance Statistics Collection
1.1.2.4.1 Configuring an MCP
1.1.2.4.2 Configuring a DCP
1.1.2.4.3 Checking the Configurations
1.1.2.5 Maintaining IP FPM
1.1.2.5.1 Configuring Alarm and Clear Alarm Thresholds for IP FPM Performance Counters
1.1.2.5.2 Monitoring the IP FPM Running Status
1.1.2.6 IP FPM Configuration Examples
1.1.2.6.1 Example for Configuring IP FPM End-to-End Performance Statistics Collection
1.1.2.6.2 Example for Configuring IP FPM Hop-by-Hop Performance Statistics Collection
1.1.3 NetStream Configuration
1.1.3.1 Overview of NetStream
1.1.3.2 Feature Requirements for NetStream
1.1.3.3 Collecting Statistics About IPv4 Original Flows
1.1.3.3.1 Specifying a NetStream Service Processing Mode
1.1.3.3.2 Outputting Original Flow Packets
1.1.3.3.3 (Optional) Configuring NetStream Monitoring Services
1.1.3.3.4 (Optional) Adjusting the AS Field Mode and Interface Index Type
1.1.3.3.5 (Optional) Enabling Statistics Collection of TCP Flags
1.1.3.3.6 (Optional) Configuring NetStream Interface Option Packets and Setting Option Template Refreshing Parameters
1.1.3.3.7 Sampling IPv4 Flows
1.1.3.3.8 (Optional) Disabling MPLS Packet Sampling on an Interface
1.1.4.12.9 Example for Configuring an Ethernet Service Activation Test in a Layer 2 Scenario (Y.1564)
1.1.4.12.10 Example for Configuring an Ethernet Service Activation Test in a Layer 3 Scenario (Y.1564)
1.1.4.12.11 Example for Configuring an Ethernet Service Activation Test on an EVPN VXLAN (Y.1564)
1.1.4.12.12 Example for Configuring Test Results to Be Sent to the FTP Server
1.1.5 Ping and Tracert Configuration
1.1.5.1 Feature Requirements for Ping and Tracert
1.1.5.2 Using Ping/Tracert to Test an IP Network
1.1.5.2.1 Using Ping to Check Link Connectivity on an IPv4 or IPv6 Network
1.1.5.2.2 Using Ping to Monitor the Reachability of Layer 3 Trunk Member Interfaces
1.1.5.2.3 Using Ping to Check the TCP Reachability on an IPv4 Network
1.1.5.2.4 Using Tracert to Check a Path on an IPv4 or IPv6 Network
1.1.5.3 Using Ping/Tracert to Test an MPLS Network
1.1.5.3.1 Using Ping to Check Link Connectivity on an MPLS Network
1.1.5.3.2 Using Tracert to Check a Path on an MPLS Network
1.1.5.3.3 Using Ping to Check Link Connectivity on a P2MP MPLS Network
1.1.5.3.4 Using Tracert to Check the Forwarding Path on a P2MP MPLS Network
1.1.5.4 Using Ping/Tracert to Test a VPN
1.1.5.4.1 Using Ping to Check PW Connectivity on a VPLS Network
1.1.5.4.2 Using Tracert to Check a PW Path on a VPLS Network
1.1.5.4.3 Using Ping to Check VPWS PW Connectivity
1.1.5.4.4 Using Tracert to Check PWE3 Network Connectivity
1.1.5.4.5 Using Ping to Check VPLS MAC Connectivity
1.1.5.4.6 Using Ping to Check Tunnel Connectivity on an EVPN VPLS Network by Specifying a MAC Address
1.1.5.4.7 Using Tracert to Check a Path on an EVPN VPLS Network by Specifying a MAC Address
1.1.5.4.8 Using Ping to Check EVPN VPWS Network Connectivity
1.1.5.4.9 Using Tracert to Check a Path on an EVPN VPWS Network
1.1.5.4.10 Using CE Ping to Check the Connectivity Between a PE and a CE on a VPLS Network
1.1.5.4.11 Using CE Ping to Check the Connectivity Between a PE and a CE in an EVC Model
1.1.5.4.12 Using CE Ping to Check the Connectivity Between a PE and a CE on an EVPN Network
1.1.5.5 Using Ping/Tracert to Test a Layer 2 Network
1.1.5.5.1 Using GMAC Ping to Check Link Connectivity on a Layer 2 Network
1.1.5.5.2 Using GMAC Trace to Check a Path on a Layer 2 Network
1.1.5.5.3 Using 802.1ag MAC Ping to Check Link Connectivity on a Layer 2 Network
1.1.5.5.4 Using 802.1ag MAC Trace to Check a Path on a Layer 2 Network
1.1.5.6 Using Tracert to Check Path Information in an ECMP Scenario
1.1.5.7 Using Ping/Tracert to Test a Multicast Network
1.1.5.7.1 Checking Multicast Network Path Information Using MTracert
1.1.5.7.2 Using Ping to Check BIERv6 Network Connectivity
1.1.5.7.3 Using Tracert to Check a Path on a BIERv6 Network
1.1.5.8 Using Ping/Tracert to Test an SRv6 Network
1.1.5.8.1 Using Ping to Check the Connectivity of an SRv6 Network
1.1.5.8.2 Using Tracert to Check a Path on an SRv6 Network
1.1.12.5.6 Example for Configuring Peer Locator-based IFIT on an EVPN VPWS over SRv6 Network
1.1.12.5.7 Example for Configuring APN6-based IFIT on an L3VPN over SRv6 Network
1.1.12.5.8 Example for Configuring IFIT in Inter-AS VPN Option A Scenarios
1.1.12.5.9 Example for Configuring IFIT on a G-SRv6 Network
1.1.12.5.10 Example for Configuring IFIT on a BIERv6 Network
1.1.12.5.11 Example for Configuring Bidirectional Flow-based IFIT on an L3VPN
1.1.12.5.12 Example for Configuring IFIT in Public Network Traffic over SRv6 Scenarios
1.1.12.5.13 Example for Configuring IFIT Measurement Based on Dynamic Flow Learning
1.1.12.5.14 Example for Configuring IFIT on a Single Device
1.1.12.6 Configuration Examples for IFIT Tunnel-Level Quality Measurement
1.1.12.6.1 Example for Configuring Intelligent Traffic Steering Based on IFIT Tunnel-Level Quality Measurement
1.1.13 eMDI Configuration
1.1.13.1 eMDI Overview
1.1.13.2 Feature Requirements for eMDI
1.1.13.3 Configuring Basic eMDI Detection Functions
1.1.13.3.1 Configuring an eMDI Channel Group
1.1.13.3.2 Configuring an eMDI Board Group
1.1.13.3.3 Binding a Channel Group to a Board Group
1.1.13.3.4 (Optional) Configuring eMDI Jitter Detection
1.1.13.3.5 (Optional) Configuring eMDI Detection on Ps
1.1.13.4 Configuring eMDI Attributes
1.1.13.4.1 Configuring an eMDI Detection Period
1.1.13.4.2 Configuring eMDI Alarm Thresholds and the Number of Alarm Suppression Times
1.1.13.4.3 Configuring an eMDI Detection Rate
1.1.13.4.4 Configuring the Aging Period for eMDI Detection
1.1.13.5 Maintaining eMDI
1.1.13.6 Configuration Examples for eMDI
1.1.13.6.1 Example for Configuring eMDI Detection for a Common Layer 3 Multicast Service
1.1.13.6.2 Example for Configuring eMDI Detection on an Intra-AS NG MVPN with an mLDP P2MP LSP
1.1.13.6.3 Example for Configuring eMDI Detection for NG MVPN over BIER Services
1.1.14 ESQM Configuration
1.1.14.1 Overview of ESQM
1.1.14.2 Feature Requirements for ESQM
1.1.14.3 Configuring ESQM End-to-End Performance Measurement
1.1.14.4 Example for Configuring ESQM End-to-End Performance Measurement
1.1.15 Flow Recognition Configuration
1.1.15.1 Overview of Flow Recognition
1.1.15.2 Feature Requirements for Flow Recognition
1.1.15.3 Configuring Flow Recognition
1.1.15.4 Verifying the Flow Recognition Configuration
1.1.15.5 Configuration Examples for Flow Recognition
1 Configuration
Licensing Requirements
For details about the License, see the License Guide.
● Enterprise users: License Usage Guide
Related Version
The following table lists the product version related to this document.
Product Name: NE9000
Version: V800R023C00SPC500
Intended Audience
This document is intended for:
● Data configuration engineers
● Commissioning engineers
Security Declaration
● Notice on Limited Command Permission
This documentation describes the commands used when you deploy and maintain networks with Huawei devices. The interfaces and commands used for production, manufacturing, and repair of returned products are not described here.
If advanced commands or compatible commands intended for engineering or fault location are used incorrectly, exceptions may occur or services may be interrupted. It is recommended that advanced commands be used only by engineers with the required permissions. If necessary, you can apply to Huawei for the permission to use advanced commands.
● Encryption algorithm declaration
The encryption algorithms DES/3DES/RSA (with a key length of less than 3072 bits)/MD5 (in digital signature scenarios and password encryption)/SHA1 (in digital signature scenarios) are of low security and may pose security risks. If the protocols in use allow it, using more secure encryption algorithms, such as AES/RSA (with a key length of at least 3072 bits)/SHA2/HMAC-SHA2, is recommended.
For security purposes, the insecure protocols Telnet, FTP, and TFTP, as well as weak security algorithms in the BGP, LDP, PCEP, MSDP, DCN, TCP-AO, MSTP, VRRP, E-Trunk, AAA, IPsec, BFD, QX, port extension, SSH, SNMP, IS-IS, RIP, SSL, NTP, OSPF, and keychain features, are not recommended. To use such weak security algorithms, run the undo crypto weak-algorithm disable command to enable the weak security algorithm function. For details, see the Configuration Guide.
● Password configuration declaration
– When the password encryption mode is cipher, avoid setting both the start and end characters of a password to "%^%#", as this causes the password to be displayed directly in the configuration file.
– To further improve device security, periodically change the password.
● MAC Addresses and Public IP Addresses Declaration
– For purposes of introducing features and giving configuration examples, the MAC addresses and public IP addresses of real devices are used in the product documentation. Unless otherwise specified, these addresses are used as examples only.
– Open-source and third-party software may contain public addresses
(including public IP addresses, public URLs/domain names, and email
addresses), but this product does not use these public addresses. This
complies with industry practices and open-source software usage
specifications.
– For purposes of implementing functions and features, the device uses the
following public IP addresses:
Special Declaration
● This document package contains information about the NE9000. For details
about hardware, such as devices or boards sold in a specific country/region,
see Hardware Description.
● This document serves only as a guide. The content is written based on device
information gathered under lab conditions. The content provided by this
document is intended to be taken as general guidance, and does not cover all
scenarios. The content provided by this document may be different from the
information on user device interfaces due to factors such as version upgrades
and differences in device models, board restrictions, and configuration files.
The actual user device information takes precedence over the content
provided by this document. The preceding differences are beyond the scope of
this document.
● The maximum values provided in this document are obtained in specific lab
environments (for example, only a certain type of board or protocol is
configured on a tested device). The actually obtained maximum values may
be different from the maximum values provided in this document due to
factors such as differences in hardware configurations and carried services.
● Interface numbers used in this document are examples. Use the existing
interface numbers on devices for configuration.
● The pictures of hardware in this document are for reference only.
● The supported boards are described in the document. Whether a
customization requirement can be met is subject to the information provided
at the pre-sales interface.
● In this document, public IP addresses may be used in feature introduction and
configuration examples and are for reference only unless otherwise specified.
● The configuration precautions described in this document may not accurately
reflect all scenarios.
● Log Reference and Alarm Reference respectively describe the logs and alarms
for which a trigger mechanism is available. The actual logs and alarms that
the product can generate depend on the types of services it supports.
● All device dimensions described in this document are designed dimensions and do not include dimension tolerances. During component manufacturing, the actual dimensions may deviate from the designed dimensions due to factors such as processing and measurement.
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Command Conventions
The command conventions that may be found in this document are defined as
follows.
Convention Description
Change History
Changes between document issues are cumulative. The latest document issue
contains all the changes made in earlier issues.
Version V800R023C00SPC500, Issue 01, 2023-09-30
Background
As IP services are more widely used, fault diagnosis and end-to-end service quality analysis are becoming an increasingly pressing concern for carriers. However, in the absence of effective measurement methods, fault diagnosis takes longer and the maintenance workload increases. IP FPM was developed to help carriers collect statistics on and monitor end-to-end network performance.
Basic Concepts
The IP Flow Performance Measurement (FPM) model describes how service flows
are measured to obtain the packet loss rate and delay. Figure 1-1 shows the IP
FPM statistical model. The IP FPM model is composed of three objects: target
flows, a transit network, and the statistical system. The statistical system is further
classified into the Target Logical Port (TLP), Data Collecting Point (DCP), and
Measurement Control Point (MCP).
● Target flow
Target flows must be pre-defined.
One or more fields in IP headers can be specified to identify target flows. The
field can be the source IP address or prefix, destination IP address or prefix,
protocol type, source port number, destination port number, or type of service
(ToS). The more fields specified, the more accurately flows can be identified.
Specifying as many fields as possible is recommended to maximize
measurement accuracy.
● Transit network
The transit network only bears target flows. The target flows are not
generated or terminated on the transit network. The transit network can be a
Layer 2 (L2), Layer 3 (L3), or L2+L3 hybrid network. Each node on the transit
network must be reachable at the network layer.
● TLP
TLPs are interfaces on the edge nodes of the transit network. TLPs perform
the following actions:
– Compile statistics on the packet loss rate and delay.
– Generate statistics, such as the number of packets sent and received,
traffic bandwidth, and timestamp.
An In-Point-TLP collects statistics about service flows it receives. An Out-Point-
TLP collects statistics about service flows it sends.
● DCP
DCPs are edge nodes on the transit network. DCPs perform the following
actions:
– Manage and control TLPs.
– Collect statistics generated by TLPs.
– Report the statistics to an MCP.
● MCP
MCPs can be any nodes on the transit network. MCPs perform the following
actions:
– Collect statistics reported by DCPs.
Implementation
IP Flow Performance Measurement (FPM) measures multipoint-to-multipoint (MP2MP) service flows to obtain the packet loss rate and delay. In statistical terms, the statistical objects are the service flows, and statistical calculations determine the packet loss rate and delay of the service flows traveling across the transit network. Service flow statistical analysis is performed on the ingress and egress of the transit network. On the IP/MPLS network shown in Figure 1-2, the number of packets entering the network in the ingress direction on R(n) is PI(n), and the number of packets leaving the network in the egress direction on R(n) is PE(n).
The difference between the number of packets entering the network and the
number of packets leaving the network within a specified period is the packet loss.
● The number of packets entering the network is the sum of all packets moving
in the ingress direction: PI = PI(1) + PI(2) + PI(3)
● The number of packets leaving the network is the sum of all packets moving
in the egress direction: PE = PE(1) + PE(2) + PE(3)
The difference between the time a service flow enters the network and the time
the service flow leaves the network within a specified period is the delay.
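As a hypothetical illustration with three ingress and three egress statistical points, suppose that within one statistical period PI(1) = 400, PI(2) = 300, PI(3) = 300, PE(1) = 398, PE(2) = 299, and PE(3) = 298 (all values are assumed for illustration only). Then:
PI = 400 + 300 + 300 = 1000 packets
PE = 398 + 299 + 298 = 995 packets
Packet loss = PI - PE = 5 packets, which corresponds to a packet loss rate of 5/1000 = 0.5%.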
Benefits
IP FPM brings the following benefits to carriers:
NOTE
The following examples describe how to configure packet loss measurement and two-way
delay measurement in end-to-end proactive performance statistics and how to configure
packet loss measurement and one-way delay measurement in hop-by-hop on-demand
performance statistics.
IP FPM supports LPUF-480/LPUI-1T/LPUI-2T/LPUI-2T-CM/LPUF-2T/LPUI-4T/LPUI-4T-CM/
LPUF-480-L/LPUI-4T-L service boards.
In P2MP (MP referring to two or more points) and MP2P (MP referring to two or more points) delay measurement scenarios, all devices in the delay measurement area must support P2MP delay measurement. Otherwise, delay measurement fails.
Usage Scenario
The NE9000 supports proactive and on-demand IP FPM end-to-end performance
statistics. These functions apply to different scenarios:
● Proactive performance statistics apply when you want to monitor network performance in real time. After you configure this function, the system continuously collects packet loss or delay statistics.
● On-demand performance statistics apply when you want to diagnose network faults or monitor network performance over a specified period. After you configure this function, the system collects packet loss or delay statistics periodically.
These measurements serve as a reliable reference for network operation and
maintenance and fault diagnosis, improving network reliability and user
experience.
Pre-configuration Tasks
Before configuring IP FPM end-to-end performance statistics collection, complete
the following tasks:
● Configure a dynamic routing protocol or static routes so that devices are
reachable at the network layer.
● Configure the Network Time Protocol (NTP) or 1588v2 so that all device clocks can be synchronized.
Context
On the network shown in Figure 1-3, IP Flow Performance Measurement (FPM)
end-to-end performance statistics collection is implemented. The target flow
enters the transport network through Device A, travels across Device B, and leaves
the transport network through Device C. To monitor transport network
performance or diagnose faults, configure IP FPM end-to-end performance
statistics collection on both Device A and Device C.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa ipfpm mcp
MCP is enabled globally, and the IPFPM-MCP view is displayed.
Step 3 Run mcp id mcp-id
An MCP ID is configured.
Using the Router ID of a device that is configured as an MCP as its MCP ID is
recommended.
----End
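The following is a minimal sketch of the preceding steps; the device name, prompts, and the MCP ID (10.1.1.1) are illustrative assumptions rather than values from a specific deployment:
<HUAWEI> system-view
[~HUAWEI] nqa ipfpm mcp
[*HUAWEI-nqa-ipfpm-mcp] mcp id 10.1.1.1
[*HUAWEI-nqa-ipfpm-mcp] commit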
Follow-up Procedure
When DCP configurations are being changed, the MCP may receive incorrect statistics from the DCP. To prevent this, run the measure disable command to disable IP FPM performance statistics collection of a specified instance on the MCP. After the DCP configuration change is complete, run the undo measure disable or measure enable command to enable IP FPM performance statistics collection for the specified instance on the MCP. This ensures accurate measurement.
Context
On the network shown in Figure 1-4, IP Flow Performance Measurement (FPM)
end-to-end performance statistics collection is implemented. The target flow
enters the transport network through Device A, travels across Device B, and leaves
the transport network through Device C. To monitor transport network
performance or diagnose faults, configure IP FPM end-to-end performance
statistics collection on both Device A and Device C.
As shown in Figure 1-4, Device A and Device C function as DCPs to manage and
control TLP100 and TLP310, respectively. Device A and Device C collect statistics
generated by TLP100 and TLP310 and report the statistics to the MCP.
Perform the following steps on Device A and Device C:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa ipfpm dcp
DCP is enabled globally, and the IPFPM-DCP view is displayed.
Step 3 Run dcp id dcp-id
A DCP ID is configured.
Using the Router ID of a device that is configured as a DCP as its DCP ID is recommended.
The DCP ID configured on a DCP must be the same as that specified in the dcp dcp-id command run in the IP FPM instance view of the MCP associated with this DCP. Otherwise, the MCP cannot process the statistics reported by the DCP.
Step 4 (Optional) Run authentication-mode hmac-sha256 key-id key-id [ cipher ] [ password | password ]
The authentication mode and password are configured on the DCP.
The authentication mode and password configured on a DCP must be the same as those configured in the authentication-mode hmac-sha256 key-id key-id [ cipher ] [ password | password ] command run on the MCP associated with this DCP. Otherwise, the MCP cannot process the statistics reported by the DCP.
Step 5 (Optional) Run color-flag loss-measure { tos-bit tos-bit | flags-bit0 } delay-measure { tos-bit tos-bit | flags-bit0 }
IP FPM measurement flags are configured.
The loss and delay measurement flags cannot use the same bit, and the bits used for loss and delay measurement must not have been used in other measurement tasks.
Step 6 Run mcp mcp-id [ port port-number ] [ vpn-instance vpn-instance-name | net-manager-vpn ]
An MCP ID is specified for the DCP, and the UDP port number is configured for the DCP to communicate with the MCP.
The UDP port number configured on the DCP must be the same as that
configured in the protocol udp port port-number command run on the MCP
associated with this DCP. Otherwise, the DCP cannot report the statistics to the
MCP.
The VPN instance has been created on the DCP before you configure vpn-instance
vpn-instance-name or net-manager-vpn to allow the DCP to report the statistics
to the MCP through the specified VPN or management VPN.
Step 7 (Optional) Run period source ntp
The DCP is configured to select NTP as the clock source when calculating an IP FPM statistical period ID.
In P2MP (MP being two points) delay measurement scenarios, if the ingress of the
service traffic uses NTP as the clock source, but the egresses use a different clock
source, for example, NTP or 1588v2, you must configure the egresses to select
NTP as the clock source when calculating an IP FPM statistical period ID to ensure
consistent clock sources on the ingress and egresses.
Step 8 Run instance instance-id
An IP FPM instance is created, and the instance view is displayed.
instance-id must be unique on an MCP and all its associated DCPs. The MCP and all its associated DCPs must have the same IP FPM instance configured. Otherwise, statistics collection does not take effect.
Step 9 (Optional) Run description text
The description is configured for the IP FPM instance.
The description of an IP FPM instance can contain the functions of the instance,
facilitating applications.
Step 10 (Optional) Run interval interval
The statistical period is configured for the IP FPM instance.
Step 11 Perform either of the following operations to configure the target flow
characteristics in the IP FPM instance.
Configure the forward or backward target flow characteristics.
● When protocol is specified as TCP or UDP, run:
flow { forward | backward } { protocol { tcp | udp } { source-port src-port-
number1 [ to src-port-number2 ] | destination-port dest-port-number1 [ to
dest-port-number2 ] } * | dscp dscp-value | source src-ip-address [ src-mask-
length ] | destination dest-ip-address [ dest-mask-length ] } *
● When protocol is specified as any protocol other than TCP or UDP, run:
NOTE
● If the target flow in an IP FPM instance is unidirectional, only forward can be specified.
● If the target flow in an IP FPM instance is bidirectional, two situations are available:
– If the bidirectional target flow is asymmetrical, you must configure forward and
backward in two command instances to configure the forward and backward flow
characteristics.
– If the bidirectional target flow is symmetrical, you can specify bidirectional to
configure the bidirectional target flow characteristics. By default, the characteristics
specified are used for the forward flow, and the reverse of those are used for the
backward flow. Specifically, the source and destination IP addresses and port numbers
specified for the forward flow are used respectively as the destination and source IP
addresses and port numbers for the backward flow. If the target flow is symmetrical
bidirectional, set src-ip-address to specify a source IP address and dest-ip-address to
specify a destination IP address for the target flow.
Step 12 Run tlp tlp-id { in-point | out-point } { ingress | egress } [ vpn-label vpn-label
[ lsp-label lsp-label ] ] [ backward-vpn-label backward-vpn-label [ backward-
lsp-label backward-lsp-label ] ]
A TLP is configured and its role is specified.
A TLP compiles statistics and outputs data in the IP FPM model. A TLP can be
specified as an in-point or an out-point. The system sets the measurement flags of
target flows on an in-point, and clears the measurement flags of target flows on
an out-point. TLP100 and TLP310 in Figure 1-4 are the in-point and out-point,
respectively.
----End
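The following is a minimal sketch of the preceding steps on the in-point DCP; the device name, prompts, IDs, UDP port number, and flow definition are illustrative assumptions:
# Configure Device A (in-point DCP).
<DeviceA> system-view
[~DeviceA] nqa ipfpm dcp
[*DeviceA-nqa-ipfpm-dcp] dcp id 1.1.1.1
[*DeviceA-nqa-ipfpm-dcp] mcp 10.1.1.1 port 2048
[*DeviceA-nqa-ipfpm-dcp] instance 1
[*DeviceA-nqa-ipfpm-dcp-instance-1] flow forward source 192.168.1.0 24 destination 192.168.2.0 24
[*DeviceA-nqa-ipfpm-dcp-instance-1] tlp 100 in-point ingress
[*DeviceA-nqa-ipfpm-dcp-instance-1] commit
On the out-point DCP, the same flow would be defined and the TLP specified as an out-point (for example, tlp 310 out-point egress).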
Prerequisites
The IP FPM end-to-end performance statistics collection function has been
configured.
Procedure
● Run the display ipfpm mcp command to check MCP configurations.
● Run the display ipfpm dcp command to check DCP configurations.
● Run the display ipfpm statistic-type { loss | oneway-delay | twoway-
delay } instance instance-id [ verbose ] command to check the performance
statistics for a specified IP FPM instance.
----End
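For example, assuming an IP FPM instance with ID 1 (the instance ID is illustrative), the two-way delay statistics could be checked as follows:
<HUAWEI> display ipfpm statistic-type twoway-delay instance 1 verbose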
Usage Scenario
IP FPM hop-by-hop performance statistics collection helps locate faults hop by
hop from the source node that initiates traffic.
● When a target flow is unidirectional, you can directly implement hop-by-hop
performance statistics collection for the flow.
● When a target flow is bidirectional, two situations are available:
– If the target flow is symmetrical, you can implement hop-by-hop
performance statistics collection for the forward or backward flow, and
the measurement is the same either way.
– If the target flow is asymmetrical, you must implement hop-by-hop
performance statistics collection for both the forward and backward flows
to obtain their respective measurements.
These measurements serve as a reliable reference for network operation and
maintenance and fault diagnosis, improving network reliability and user
experience.
Pre-configuration Tasks
Before configuring IP FPM hop-by-hop performance statistics collection, complete
the following tasks:
● Configure a dynamic routing protocol or static routes so that devices are
reachable at the network layer.
● Configure the Network Time Protocol (NTP) or 1588v2 so that all device clocks can be synchronized.
Context
On the network shown in Figure 1-5, IP Flow Performance Measurement (FPM)
hop-by-hop performance statistics collection is implemented. The target flow
enters the transport network through Device A, travels across Device B, and leaves
the transport network through Device C. To locate faults when network
performance deteriorates, configure IP FPM hop-by-hop performance statistics
collection on Device A, Device B, and Device C to measure packet loss and delay
hop by hop.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa ipfpm mcp
MCP is enabled globally, and the IPFPM-MCP view is displayed.
Step 3 Run mcp id mcp-id
An MCP ID is configured.
Step 4 Run protocol udp port port-number
A UDP port number is specified for the MCP to communicate with DCPs.
The UDP port number configured on an MCP must be the same as that specified in the mcp mcp-id [ port port-number ] command run in the IP FPM instance view of all DCPs associated with this MCP. If a UDP port number is changed on an MCP, it must be changed for all DCPs associated with this MCP in an IP FPM instance. Otherwise, the MCP cannot process the statistics reported by the DCPs.
Step 5 (Optional) Run authentication-mode hmac-sha256 key-id key-id [ cipher ] [ password | password ]
The authentication mode and password are configured on the MCP.
The authentication mode and password configured on an MCP must be the same as those configured in the authentication-mode hmac-sha256 key-id key-id [ cipher ] [ password | password ] command run on all DCPs associated with this MCP. Otherwise, the MCP cannot process the statistics reported by the DCPs.
Step 6 Run instance instance-id
An IP FPM instance is created, and the instance view is displayed.
instance-id must be unique on an MCP and all its associated DCPs. The MCP and all its associated DCPs must have the same IP FPM instance configured. Otherwise, statistics collection does not take effect.
Step 7 (Optional) Run description text
The description is configured for the IP FPM instance.
The description of an IP FPM instance can contain the functions of the instance, facilitating applications.
Step 8 Run dcp dcp-id
A DCP associated with the MCP is specified in the IP FPM instance.
The DCP ID configured in an IP FPM instance must be the same as that specified in the dcp id dcp-id command run on a DCP. Otherwise, the MCP associated with this DCP cannot process the statistics reported by the DCP.
Step 9 Run the following commands to configure Atomic Closed Hops (ACHs).
1. Run the ach ach-id command to create an ACH and enter the ACH view.
2. Run the flow { forward | backward | bidirectional } command to specify the
direction in which hop-by-hop delay measurement is implemented for the
target flow.
3. Run the in-group dcp dcp-id tlp tlp-id command to configure the TLP in-
group.
4. Run the out-group dcp dcp-id tlp tlp-id command to configure the TLP out-
group.
----End
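The following is a minimal sketch of the preceding steps; the device name, prompts, IDs, UDP port number, and TLP assignments are illustrative assumptions. Here ACH 1 covers the hop from TLP100 (on the DCP with ID 1.1.1.1) to TLP200 (on the DCP with ID 2.2.2.2):
<MCP> system-view
[~MCP] nqa ipfpm mcp
[*MCP-nqa-ipfpm-mcp] mcp id 10.1.1.1
[*MCP-nqa-ipfpm-mcp] protocol udp port 2048
[*MCP-nqa-ipfpm-mcp] instance 1
[*MCP-nqa-ipfpm-mcp-instance-1] dcp 1.1.1.1
[*MCP-nqa-ipfpm-mcp-instance-1] dcp 2.2.2.2
[*MCP-nqa-ipfpm-mcp-instance-1] ach 1
[*MCP-nqa-ipfpm-mcp-instance-1-ach-1] flow forward
[*MCP-nqa-ipfpm-mcp-instance-1-ach-1] in-group dcp 1.1.1.1 tlp 100
[*MCP-nqa-ipfpm-mcp-instance-1-ach-1] out-group dcp 2.2.2.2 tlp 200
[*MCP-nqa-ipfpm-mcp-instance-1-ach-1] commit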
Follow-up Procedure
When DCP configurations are being changed, the MCP may receive incorrect
statistics from the DCP. To prevent this, run the measure disable command to
disable IP FPM performance statistics collection of a specified instance on the
MCP. After the DCP configuration change is complete, run the undo measure
disable or measure enable command to enable IP FPM performance statistics
collection for the specified instance on the MCP. This ensures accurate
measurement.
Context
On the network shown in Figure 1-6, IP Flow Performance Measurement (FPM)
hop-by-hop performance statistics collection is implemented. The target flow
enters the transport network through Device A, travels across Device B, and leaves
the transport network through Device C. To locate faults when network
performance deteriorates, configure IP FPM hop-by-hop performance statistics
collection on Device A, Device B, and Device C to measure packet loss and delay
hop by hop.
As shown in Figure 1-6, Device A, Device B, and Device C function as DCPs. Device A manages and controls TLP100, Device B manages and controls TLP200, and Device C manages and controls TLP300 and TLP310. Device A, Device B, and Device C collect statistics generated by these TLPs and report the statistics to the MCP.
Perform the following steps on Device A, Device B, and Device C:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa ipfpm dcp
DCP is enabled globally, and the IPFPM-DCP view is displayed.
Step 3 Run dcp id id-value
A DCP ID is configured.
Using the Router ID of a device that is configured as a DCP as its DCP ID is
recommended.
The DCP ID configured on a DCP must be the same as that specified in the dcp
dcp-id command run in the IP FPM instance view of the MCP associated with this
DCP. Otherwise, the MCP cannot process the statistics reported by the DCP.
Step 4 (Optional) Run authentication-mode hmac-sha256 key-id key-id [ cipher ]
[ password | password ]
The authentication mode and password are configured on the DCP.
The authentication mode and password configured on a DCP must be the same as
those configured in the authentication-mode hmac-sha256 key-id key-id
[ cipher ] [ password | password ] command run on the MCP associated with the
DCP. Otherwise, the MCP cannot process the statistics reported by the DCP.
Step 5 (Optional) Run color-flag loss-measure { tos-bit tos-bit | flags-bit0 } delay-
measure { tos-bit tos-bit | flags-bit0 }
IP FPM measurement flags are configured.
The loss and delay measurement flags cannot use the same bit, and the bits used
for loss and delay measurement must not have been used in other measurement
tasks.
Step 6 Run mcp mcp-id [ port port-number ] [ vpn-instance vpn-instance-name | net-
manager-vpn ]
An MCP ID is specified for the DCP, and the UDP port number is configured for the
DCP to communicate with the MCP.
The UDP port number configured on the DCP must be the same as that
configured in the protocol udp port port-number command run on the MCP
associated with this DCP. Otherwise, the DCP cannot report the statistics to the
MCP.
The VPN instance has been created on the DCP before you configure vpn-instance
vpn-instance-name or net-manager-vpn to allow the DCP to report the statistics
to the MCP through the specified VPN or management VPN.
Step 7 (Optional) Run period source ntp
The DCP is configured to select NTP as the clock source when calculating an IP
FPM statistical period ID.
In P2MP (MP being two points) delay measurement scenarios, if the ingress of the
service traffic uses NTP as the clock source, but the egresses use a different clock
source, for example, NTP or 1588v2, you must configure the egresses to select
NTP as the clock source when calculating an IP FPM statistical period ID to ensure
consistent clock sources on the ingress and egresses.
Step 8 Run instance instance-id
An IP FPM instance is created, and the instance view is displayed.
instance-id must be unique on an MCP and all its associated DCPs. The MCP and
all its associated DCPs must have the same IP FPM instance configured.
Otherwise, statistics collection does not take effect.
Step 9 (Optional) Run description text
The description is configured for the IP FPM instance.
The description of an IP FPM instance can contain the functions of the instance,
facilitating applications.
Step 10 (Optional) Run interval interval
The statistical period is configured for the IP FPM instance.
Step 11 Perform either of the following operations to configure the target flow
characteristics in the IP FPM instance.
Configure the forward or backward target flow characteristics.
NOTE
● If the target flow in an IP FPM instance is unidirectional, only forward can be specified.
● If the target flow in an IP FPM instance is bidirectional, two situations are available:
– If the bidirectional target flow is asymmetrical, you must configure forward and
backward in two command instances to configure the characteristics for the forward
and backward flows, respectively.
– If the bidirectional target flow is symmetrical, you can specify bidirectional to
configure the bidirectional target flow characteristics. By default, the characteristics
specified are used for the forward flow, and the reverse of those are used for the
backward flow. Specifically, the source and destination IP addresses and port numbers
specified for the forward flow are used respectively as the destination and source IP
addresses and port numbers for the backward flow. If the target flow is symmetrical
bidirectional, set src-ip-address to specify a source IP address and dest-ip-address to
specify a destination IP address for the target flow.
On the network shown in Figure 1-6, TLP200 and TLP300 are mid-points.
● Run the tlp tlp-id mid-point flow bidirectional { ingress | egress } [ forward
{ vpn-label vpn-label [ lsp-label lsp-label [ lsp-label2 lsp-label2 ] ] [ flow-
label ] [ control-word ] [ l2vpn [ tpid tpid ] ] } ] [ backward { vpn-label
vpn-label [ lsp-label lsp-label [ lsp-label2 lsp-label2 ] ] [ flow-label ]
[ control-word ] [ l2vpn [ tpid tpid ] ] } ] command to configure a TLP and
specify it as a mid-point for the bidirectional target flow. In a load-balancing
scenario where different paths share the same interface or path segment, run
the tlp tlp-id index index-id mid-point flow bidirectional { ingress | egress }
{ forward vpn-label vpn-label [ lsp-label lsp-label [ lsp-label2 lsp-label2 ] ]
[ flow-label ] [ control-word ] [ l2vpn [ tpid tpid ] ] | backward vpn-label
vpn-label [ lsp-label lsp-label [ lsp-label2 lsp-label2 ] ] [ flow-label ]
[ control-word ] [ l2vpn [ tpid tpid ] ] | forward vpn-label vpn-label [ lsp-
label lsp-label [ lsp-label2 lsp-label2 ] ] [ flow-label ] [ control-word ]
[ l2vpn [ tpid tpid ] ] backward vpn-label vpn-label [ lsp-label lsp-label
[ lsp-label2 lsp-label2 ] ] [ flow-label ] [ control-word ] [ l2vpn [ tpid
tpid ] ] } command to configure the mid-point of the bidirectional target flow
included in the IP FPM statistical instance and the role of the mid-point.
----End
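The following is a minimal sketch for a mid-point DCP such as Device B; the device name, prompts, IDs, and flow definition are illustrative assumptions, and the optional MPLS label parameters of the mid-point TLP command are omitted for simplicity:
# Configure Device B (mid-point DCP).
<DeviceB> system-view
[~DeviceB] nqa ipfpm dcp
[*DeviceB-nqa-ipfpm-dcp] dcp id 2.2.2.2
[*DeviceB-nqa-ipfpm-dcp] mcp 10.1.1.1 port 2048
[*DeviceB-nqa-ipfpm-dcp] instance 1
[*DeviceB-nqa-ipfpm-dcp-instance-1] flow bidirectional source 192.168.1.0 24 destination 192.168.2.0 24
[*DeviceB-nqa-ipfpm-dcp-instance-1] tlp 200 mid-point flow bidirectional ingress
[*DeviceB-nqa-ipfpm-dcp-instance-1] commit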
Prerequisites
The IP FPM hop-by-hop performance statistics collection function has been
configured.
Procedure
● Run the display ipfpm mcp command to check MCP configurations.
● Run the display ipfpm dcp command to check DCP configurations.
● Run the display ipfpm statistic-type { loss | oneway-delay } instance
instance-id ach ach-id [ verbose ] command to check the hop-by-hop
performance statistics for a specified ACH.
----End
1.1.2.5.1 Configuring Alarm and Clear Alarm Thresholds for IP FPM Performance
Counters
After you configure the alarm threshold and its clear alarm threshold for packet
loss or delay, the device generates an alarm when the packet loss rate or delay
reaches the alarm threshold and clears the alarm when the packet loss rate or
delay falls below the clear alarm threshold. The alarm functions help network
operation and maintenance.
Context
If a high packet loss rate or delay is detected on a network but left unattended, the packet loss rate or delay may increase and potentially affect user experience. To facilitate network operation and maintenance, configure the alarm threshold and its clear alarm threshold for packet loss or delay.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa ipfpm mcp
The IPFPM-MCP view is displayed.
Step 3 Run instance instance-id
The IP FPM instance view is displayed.
Step 4 Run loss-measure ratio-threshold upper-limit upper-limit lower-limit lower-
limit
The packet loss alarm threshold and its clear alarm threshold are configured.
Step 5 Run either of the following commands to configure the delay alarm threshold and
its clear alarm threshold.
● When the target flow is unidirectional, run the delay-measure one-way
delay-threshold upper-limit upper-limit lower-limit lower-limit command to
configure the one-way delay alarm threshold and its clear alarm threshold.
● When the target flow is bidirectional, run the delay-measure two-way
delay-threshold upper-limit upper-limit lower-limit lower-limit command to
configure the two-way delay alarm threshold and its clear alarm threshold.
----End
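Using the threshold values from the configuration example later in this chapter (a 10% packet loss alarm threshold with a 5% clear alarm threshold, and a 100 ms two-way delay alarm threshold with a 50 ms clear alarm threshold), a sketch might look as follows; the instance ID and prompts are illustrative, and the exact value formats should be checked against the command reference:
<HUAWEI> system-view
[~HUAWEI] nqa ipfpm mcp
[*HUAWEI-nqa-ipfpm-mcp] instance 1
[*HUAWEI-nqa-ipfpm-mcp-instance-1] loss-measure ratio-threshold upper-limit 10 lower-limit 5
[*HUAWEI-nqa-ipfpm-mcp-instance-1] delay-measure two-way delay-threshold upper-limit 100 lower-limit 50
[*HUAWEI-nqa-ipfpm-mcp-instance-1] commit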
Context
Run the display commands in any view to check the IP FPM performance statistics
and monitor the IP FPM running status in routine maintenance.
Procedure
● Run the display ipfpm statistic-type { loss | oneway-delay | twoway-
delay } instance instance-id [ verbose ] command to check the performance
statistics for a specified IP FPM instance.
● Run the display ipfpm statistic-type { loss | oneway-delay } instance
instance-id ach ach-id [ verbose ] command to check the hop-by-hop
performance statistics for a specified ACH.
----End
Networking Requirements
Various value-added services, such as IPTV, video conferencing, and Voice over Internet Protocol (VoIP), are widely used on networks. As these services rely heavily on high-speed and robust networks, link connectivity and network performance are essential to service transmission.
● When voice services are deployed, users will not detect any change in the
voice quality if the packet loss rate on links is lower than 5%. If the packet
loss rate is higher than 10%, the voice quality will deteriorate significantly.
● Real-time services, such as VoIP, online games, and video conferencing,
require a delay lower than 100 ms, or even 50 ms. As the delay increases, user
experience worsens.
To meet users' service quality requirements, carriers need to promptly measure the
packet loss rate and delay so that they can quickly respond to resolve network
issues if the service quality deteriorates.
The IPRAN network shown in Figure 1-7 transmits voice services. Voice flows are
symmetrical and bidirectional, and therefore one voice flow can be divided into
two unidirectional service flows. The forward service flow enters the network
through the UPE, travels across SPE1, and leaves the network through the NPE.
The backward service flow enters the network through the NPE, also travels across
SPE1, and leaves the network through the UPE.
To meet users' service quality requirements and take measures when service
quality deteriorates, configure IP FPM end-to-end performance statistics collection
to monitor the packet loss and delay of the links between the UPE and NPE in real
time.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each interface so that all
provider edge devices (PEs) can communicate at the network layer. This
example uses Open Shortest Path First (OSPF) as the routing protocol.
2. Configure Multiprotocol Label Switching (MPLS) functions and public network
tunnels. In this example, RSVP-TE tunnels are established between the UPE
and SPEs, and Label Distribution Protocol (LDP) LSPs are established between
the SPEs and between the NPE and SPEs.
3. Create a VPN instance on the UPE and NPE and import the local direct routes
on the UPE and NPE to their respective VPN instance routing tables.
4. Establish MP-IBGP peer relationships between the UPE and SPEs and between
the NPE and SPEs.
5. Configure the SPEs as route reflectors (RRs) and specify the UPE and NPE as
RR clients.
6. Configure VPN FRR on the UPE and NPE.
7. Configure the Network Time Protocol (NTP) to synchronize the clocks of the
UPE, SPE1, and the NPE.
8. Configure proactive packet loss and delay measurement on the UPE and NPE
to collect packet loss and delay statistics at intervals.
9. Configure the packet loss and two-way delay alarm thresholds and clear
alarm thresholds on the UPE.
Data Preparation
To complete the configuration, you need the following data:
Before you deploy IP FPM for packet loss and delay measurement, plan the measurement flag bits: if two or more bits in the IPv4 packet header have not been planned for other purposes, they can be used for packet loss and delay measurement at the same time. If only one bit in the IPv4 packet header has not been planned, it can be used for either packet loss or delay measurement in one IP FPM instance.
● Authentication mode (HMAC-SHA256), password (YsHsjx_202206), key ID (1),
and UDP port number (2048) on the UPE and NPE
● On-demand packet loss and delay measurement intervals (30 minutes)
● Packet loss alarm threshold and its clear alarm threshold (respectively 10%
and 5%); two-way delay alarm threshold and its clear alarm threshold
(respectively 100 ms and 50 ms)
Procedure
Step 1 Configure interface IP addresses.
Configure OSPF on each node to allow the nodes to communicate at the network
layer. For detailed configurations, see Configuration Files in this section.
● Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and Constraint
Shortest Path First (CSPF).
# Configure the UPE.
<UPE> system-view
[~UPE] mpls lsr-id 1.1.1.1
[*UPE] mpls
[*UPE-mpls] mpls te
[*UPE-mpls] mpls rsvp-te
[*UPE-mpls] mpls te cspf
[*UPE-mpls] quit
[*UPE] interface gigabitethernet 1/0/1
[*UPE-GigabitEthernet1/0/1] mpls
[*UPE-GigabitEthernet1/0/1] mpls te
[*UPE-GigabitEthernet1/0/1] mpls rsvp-te
[*UPE-GigabitEthernet1/0/1] quit
[*UPE] interface gigabitethernet 1/0/2
[*UPE-GigabitEthernet1/0/2] mpls
[*UPE-GigabitEthernet1/0/2] mpls te
[*UPE-GigabitEthernet1/0/2] mpls rsvp-te
[*UPE-GigabitEthernet1/0/2] quit
[*UPE] ospf 1
[*UPE-ospf-1] opaque-capability enable
[*UPE-ospf-1] area 0
[*UPE-ospf-1-area-0.0.0.0] mpls-te enable
[*UPE-ospf-1-area-0.0.0.0] quit
[*UPE-ospf-1] quit
[*UPE] commit
# Configure SPE1.
<SPE1> system-view
[~SPE1] mpls lsr-id 2.2.2.2
[*SPE1] mpls
[*SPE1-mpls] mpls te
[*SPE1-mpls] mpls rsvp-te
[*SPE1-mpls] mpls te cspf
[*SPE1-mpls] quit
[*SPE1] mpls ldp
[*SPE1-mpls-ldp] quit
[*SPE1] interface gigabitethernet 1/0/1
[*SPE1-GigabitEthernet1/0/1] mpls
[*SPE1-GigabitEthernet1/0/1] mpls te
[*SPE1-GigabitEthernet1/0/1] mpls rsvp-te
[*SPE1-GigabitEthernet1/0/1] quit
[*SPE1] interface gigabitethernet 1/0/3
[*SPE1-GigabitEthernet1/0/3] mpls
[*SPE1-GigabitEthernet1/0/3] mpls ldp
[*SPE1-GigabitEthernet1/0/3] quit
[*SPE1] ospf 1
[*SPE1-ospf-1] opaque-capability enable
[*SPE1-ospf-1] area 0
[*SPE1-ospf-1-area-0.0.0.0] mpls-te enable
[*SPE1-ospf-1-area-0.0.0.0] quit
[*SPE1-ospf-1] quit
[*SPE1] commit
# Configure SPE2.
<SPE2> system-view
[~SPE2] mpls lsr-id 3.3.3.3
[*SPE2] mpls
[*SPE2-mpls] mpls te
[*SPE2-mpls] mpls rsvp-te
[*SPE2-mpls] mpls te cspf
[*SPE2-mpls] quit
[*SPE2] mpls ldp
[*SPE2-mpls-ldp] quit
[*SPE2] interface gigabitethernet 1/0/2
[*SPE2-GigabitEthernet1/0/2] mpls
[*SPE2-GigabitEthernet1/0/2] mpls te
[*UPE] commit
# Configure SPE1.
[~SPE1] interface Tunnel 11
[*SPE1-Tunnel11] ip address unnumbered interface loopback 1
[*SPE1-Tunnel11] tunnel-protocol mpls te
[*SPE1-Tunnel11] destination 1.1.1.1
[*SPE1-Tunnel11] mpls te tunnel-id 100
[*SPE1-Tunnel11] mpls te signal-protocol rsvp-te
[*SPE1-Tunnel11] mpls te reserved-for-binding
[*SPE1-Tunnel11] quit
[*SPE1] commit
# Configure SPE2.
[~SPE2] interface Tunnel 12
[*SPE2-Tunnel12] ip address unnumbered interface loopback 1
[*SPE2-Tunnel12] tunnel-protocol mpls te
[*SPE2-Tunnel12] destination 1.1.1.1
[*SPE2-Tunnel12] mpls te tunnel-id 200
[*SPE2-Tunnel12] mpls te signal-protocol rsvp-te
[*SPE2-Tunnel12] mpls te reserved-for-binding
[*SPE2-Tunnel12] quit
[*SPE2] commit
● Configure tunnel policies.
# Configure the UPE.
[~UPE] tunnel-policy policy1
[*UPE-tunnel-policy-policy1] tunnel binding destination 2.2.2.2 te Tunnel 11
[*UPE-tunnel-policy-policy1] tunnel binding destination 3.3.3.3 te Tunnel 12
[*UPE-tunnel-policy-policy1] quit
[*UPE] commit
# Configure SPE1.
[~SPE1] tunnel-policy policy1
[*SPE1-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te Tunnel 11
[*SPE1-tunnel-policy-policy1] quit
[*SPE1] commit
# Configure SPE2.
[~SPE2] tunnel-policy policy1
[*SPE2-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te Tunnel 12
[*SPE2-tunnel-policy-policy1] quit
[*SPE2] commit
Step 4 Create a VPN instance on the UPE and NPE and import the local direct routes on
the UPE and NPE to their respective VPN instance routing tables.
# Configure the UPE.
[~UPE] ip vpn-instance vpna
[*UPE-vpn-instance-vpna] ipv4-family
[*UPE-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*UPE-vpn-instance-vpna-af-ipv4] vpn-target 1:1
[*UPE-vpn-instance-vpna-af-ipv4] quit
[*UPE-vpn-instance-vpna] quit
[*UPE] interface gigabitethernet 1/0/0
[*UPE-GigabitEthernet1/0/0] ip binding vpn-instance vpna
[*UPE-GigabitEthernet1/0/0] ip address 192.168.1.1 24
[*UPE-GigabitEthernet1/0/0] quit
[*UPE] bgp 100
[*UPE-bgp] ipv4-family vpn-instance vpna
[*UPE-bgp-vpna] import-route direct
[*UPE-bgp-vpna] quit
[*UPE-bgp] quit
[*UPE] commit
# Configure the NPE.
[~NPE] ip vpn-instance vpna
[*NPE-vpn-instance-vpna] ipv4-family
[*NPE-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*NPE-vpn-instance-vpna-af-ipv4] vpn-target 1:1
[*NPE-vpn-instance-vpna-af-ipv4] quit
[*NPE-vpn-instance-vpna] quit
[*NPE] interface gigabitethernet 1/0/3
[*NPE-GigabitEthernet1/0/3] ip binding vpn-instance vpna
[*NPE-GigabitEthernet1/0/3] ip address 192.168.2.1 24
[*NPE-GigabitEthernet1/0/3] quit
[*NPE] bgp 100
[*NPE-bgp] ipv4-family vpn-instance vpna
[*NPE-bgp-vpna] import-route direct
[*NPE-bgp-vpna] quit
[*NPE-bgp] quit
[*NPE] commit
Step 5 Establish MP-IBGP peer relationships between the UPE and SPEs and between the
NPE and SPEs.
# Configure the UPE.
[~UPE] bgp 100
[*UPE-bgp] router-id 1.1.1.1
[*UPE-bgp] peer 2.2.2.2 as-number 100
[*UPE-bgp] peer 2.2.2.2 connect-interface loopback 1
[*UPE-bgp] peer 3.3.3.3 as-number 100
[*UPE-bgp] peer 3.3.3.3 connect-interface loopback 1
[*UPE-bgp] ipv4-family vpnv4
[*UPE-bgp-af-vpnv4] peer 2.2.2.2 enable
[*UPE-bgp-af-vpnv4] peer 3.3.3.3 enable
[*UPE-bgp-af-vpnv4] quit
[*UPE-bgp] quit
[*UPE] commit
# Configure SPE1.
[~SPE1] bgp 100
[*SPE1-bgp] router-id 2.2.2.2
[*SPE1-bgp] peer 1.1.1.1 as-number 100
[*SPE1-bgp] peer 1.1.1.1 connect-interface loopback 1
[*SPE1-bgp] peer 3.3.3.3 as-number 100
[*SPE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*SPE1-bgp] peer 4.4.4.4 as-number 100
[*SPE1-bgp] peer 4.4.4.4 connect-interface loopback 1
[*SPE1-bgp] ipv4-family vpnv4
[*SPE1-bgp-af-vpnv4] undo policy vpn-target
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 enable
[*SPE1-bgp-af-vpnv4] peer 3.3.3.3 enable
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 enable
[*SPE1-bgp-af-vpnv4] quit
[*SPE1-bgp] quit
[*SPE1] commit
The configurations of SPE2 and the NPE are similar to those of SPE1 and the UPE, respectively. For configuration details, see Configuration Files in this section.
Step 6 Configure the SPEs as RRs and specify the UPE and NPE as RR clients.
[~SPE1] bgp 100
[*SPE1-bgp] ipv4-family vpnv4
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 reflect-client
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 next-hop-local
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 reflect-client
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 next-hop-local
[*SPE1-bgp-af-vpnv4] quit
[*SPE1-bgp] quit
[*SPE1] commit
The configuration of the NPE is similar to the configuration of the UPE. For
configuration details, see Configuration Files in this section. After completing the
configurations, run the display bgp vpnv4 vpn-instance vpna routing-table
command on the UPE and NPE to view detailed information about received
routes.
[~UPE] display bgp vpnv4 vpn-instance vpna routing-table
BGP Local router ID is 1.1.1.1
Status codes: * - valid, > - best, d - damped,
h - history, i - internal, s - suppressed, S - Stale
Origin : i - IGP, e - EGP, ? - incomplete
The command output shows that the UPE and NPE both preferentially select the
routes advertised by SPE1 and use UPE <-> SPE1 <-> NPE as the primary path.
Step 9 Configure NTP to synchronize the clocks of the UPE, SPE1, and the NPE.
# Configure SPE1.
[~SPE1] ntp-service sync-interval 180
[*SPE1] ntp-service unicast-server 172.16.1.1
[*SPE1] commit
The UPE is configured as the NTP reference clock, and the NPE synchronizes its clock with SPE1. For configuration details, see Configuration Files in this section.
After completing the configuration, the UPE, SPE1, and the NPE have synchronized
their clocks.
Run the display ntp-service status command on the UPE to check its NTP status.
The command output shows that the clock status is synchronized, which means
that synchronization is complete.
[~UPE] display ntp-service status
clock status: synchronized
clock stratum: 1
reference clock ID: LOCAL(0)
nominal frequency: 64.0000 Hz
actual frequency: 64.0000 Hz
clock precision: 2^7
clock offset: 0.0000 ms
root delay: 0.00 ms
root dispersion: 26.49 ms
peer dispersion: 10.00 ms
Run the display ntp-service status command on SPE1 to check its NTP status.
The command output shows that the clock status is synchronized and the clock
stratum is 2, one level lower than that of the UPE.
[~SPE1] display ntp-service status
clock status: synchronized
clock stratum: 2
reference clock ID: 172.16.1.1
nominal frequency: 64.0000 Hz
actual frequency: 64.0000 Hz
clock precision: 2^7
clock offset: -0.0099 ms
root delay: 0.08 ms
root dispersion: 51.00 ms
peer dispersion: 34.30 ms
reference time: 08:56:45.000 UTC Apr 2 2013(D5051BCD.00346DC5)
synchronization state: clock synchronized
Run the display ntp-service status command on the NPE to check its NTP status.
The command output shows that the clock status is synchronized and the clock
stratum is 3, one level lower than that of SPE1.
[~NPE] display ntp-service status
clock status: synchronized
clock stratum: 3
reference clock ID: 172.16.4.1
nominal frequency: 64.0000 Hz
actual frequency: 64.0000 Hz
clock precision: 2^7
clock offset: -0.0192 ms
root delay: 0.18 ms
root dispersion: 201.41 ms
peer dispersion: 58.64 ms
reference time: 08:56:47.000 UTC Apr 2 2013(D5051BCF.001E2584)
synchronization state: clock synchronized
Step 10 Configure proactive packet loss and delay measurement on the UPE and NPE;
configure the UPE as the MCP and also a DCP and configure TLP100 on the UPE;
configure the NPE as a DCP and configure TLP310 on the NPE.
After completing the configuration, run the display ipfpm mcp command on
the UPE. The command output shows MCP configurations on the UPE.
[~UPE] display ipfpm mcp
Specification Information:
Max Instance Number :64
Max DCP Number Per Instance :256
Max ACH Number Per Instance :16
Max TLP Number Per ACH :16
Configuration Information:
MCP ID :1.1.1.1
Status :Active
Run the display ipfpm dcp command on the UPE and NPE to view DCP
configurations on each device.
[~UPE] display ipfpm dcp
Configuration Information:
DCP ID : 1.1.1.1
Loss-measure Flag : tos-bit3
Delay-measure Flag : tos-bit4
[~NPE] display ipfpm dcp
Configuration Information:
DCP ID : 4.4.4.4
Loss-measure Flag : tos-bit3
Delay-measure Flag : tos-bit4
Step 11 Configure alarm thresholds and clear alarm thresholds for IP FPM performance
counters on the UPE.
# Configure the packet loss alarm threshold and its clear alarm threshold.
[~UPE] nqa ipfpm mcp
[*UPE-nqa-ipfpm-mcp] instance 1
[*UPE-nqa-ipfpm-mcp-instance-1] loss-measure ratio-threshold upper-limit 10 lower-limit 5
[*UPE-nqa-ipfpm-mcp-instance-1] commit
# Configure the two-way delay alarm threshold and its clear alarm threshold.
[~UPE-nqa-ipfpm-mcp-instance-1] delay-measure two-way delay-threshold upper-limit 100000 lower-limit 50000
[*UPE-nqa-ipfpm-mcp-instance-1] commit
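Note that the delay thresholds are specified in microseconds: the upper limit of 100000 and the lower limit of 50000 correspond to the 100 ms two-way delay alarm threshold and the 50 ms clear alarm threshold, respectively.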
● View the two-way delay statistics of IP FPM instance 1.
[~UPE] display ipfpm statistic-type twoway-delay instance 1
Latest two-way delay statistics:
--------------------------------------------------
Period Delay(usec) Delay
Variation(usec)
--------------------------------------------------
136118757 800 0
136118756 800 0
136118755 800 0
136118753 800 0
136118752 800 0
136118751 800 0
136118750 800 0
136118749 800 0
136118748 800 0
136118747 800 0
136118746 800 0
136118745 800 0
Latest one-way delay statistics of bidirectional flow:
--------------------------------------------------------------------------------
Period Forward ForwardDelay Backward BackwardDelay
Delay(usec) Variation(usec) Delay(usec) Variation(usec)
--------------------------------------------------------------------------------
136118757 400 0 400 0
136118756 400 0 400 0
136118755 400 0 400 0
136118753 400 0 400 0
136118752 400 0 400 0
136118751 400 0 400 0
136118750 400 0 400 0
136118749 400 0 400 0
136118748 400 0 400 0
136118747 400 0 400 0
136118746 400 0 400 0
136118745 400 0 400 0
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy policy1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
mpls
mpls te
label advertise non-null
mpls rsvp-te
mpls te cspf
#
ntp-service sync-interval 180
ntp-service refclock-master 1
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.1.1 255.255.255.0
ipfpm tlp 100
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.2.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Tunnel11
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 2.2.2.2
mpls te tunnel-id 100
mpls te reserved-for-binding
#
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.3
mpls te tunnel-id 200
mpls te reserved-for-binding
#
bgp 100
router-id 1.1.1.1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
auto-frr
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 172.16.1.0 0.0.0.255
network 172.16.2.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 2.2.2.2 te Tunnel11
tunnel binding destination 3.3.3.3 te Tunnel12
#
nqa ipfpm dcp
dcp id 1.1.1.1
mcp 1.1.1.1 port 2048
authentication-mode hmac-sha256 key-id 1 cipher #%#%c^)+6\&Xmec@('3&m,d%1C,d%1C<#%#%
color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
instance 1
flow bidirectional source 10.1.1.1 destination 10.2.1.1
tlp 100 in-point ingress
loss-measure enable continual
delay-measure enable two-way tlp 100 continual
#
nqa ipfpm mcp
mcp id 1.1.1.1
protocol udp port 2048
authentication-mode hmac-sha256 key-id 1 cipher #%#%\8u;Ufa-'-+mtJG0r#:00dV[#%#%
instance 1
dcp 1.1.1.1
dcp 4.4.4.4
loss-measure ratio-threshold upper-limit 10.000000 lower-limit 5.000000
delay-measure two-way delay-threshold upper-limit 100000 lower-limit 50000
#
return
● SPE1 configuration file
#
sysname SPE1
#
tunnel-selector bindTE permit node 10
apply tunnel-policy policy1
#
mpls lsr-id 2.2.2.2
mpls
mpls te
label advertise non-null
mpls rsvp-te
mpls te cspf
#
mpls ldp
#
ntp-service sync-interval 180
ntp-service unicast-server 172.16.1.1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.4.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 172.16.3.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
interface Tunnel11
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 100
mpls te reserved-for-binding
#
bgp 100
router-id 2.2.2.2
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
ipv4-family vpnv4
undo policy vpn-target
tunnel-selector bindTE
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 3.3.3.3 enable
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 next-hop-local
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 172.16.1.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 172.16.4.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel11
#
return
● SPE2 configuration file
#
sysname SPE2
#
tunnel-selector bindTE permit node 10
apply tunnel-policy policy1
#
mpls lsr-id 3.3.3.3
mpls
mpls te
label advertise non-null
mpls rsvp-te
mpls te cspf
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.5.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.2.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 172.16.3.2 255.255.255.0
mpls
mpls te
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 200
mpls te reserved-for-binding
#
bgp 100
router-id 3.3.3.3
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
ipv4-family vpnv4
undo policy vpn-target
tunnel-selector bindTE
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 2.2.2.2 enable
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 next-hop-local
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 172.16.2.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 172.16.5.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel12
#
return
● NPE configuration file
#
sysname NPE
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 4.4.4.4
mpls
#
mpls ldp
#
ntp-service sync-interval 180
ntp-service unicast-server 172.16.4.1
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.5.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.4.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/3
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.1 255.255.255.0
ipfpm tlp 310
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
bgp 100
router-id 4.4.4.4
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
auto-frr
#
ospf 1
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 172.16.4.0 0.0.0.255
network 172.16.5.0 0.0.0.255
#
return
Networking Requirements
Various value-added services, such as IPTV, video conferencing, and Voice over
Internet Protocol (VoIP), are widely used on networks. Because these services rely
heavily on high-speed, robust networks, link connectivity and network performance
are essential to service transmission. The performance measurement function can
be used to verify the performance of the links that transmit these services.
● When voice services are deployed, users will not detect any change in the
voice quality if the packet loss rate on links is lower than 5%. If the packet
loss rate is higher than 10%, the voice quality will deteriorate significantly.
● Real-time services, such as VoIP, online games, and video conferencing,
require a delay lower than 100 ms, or even 50 ms. As the delay increases, user
experience worsens.
To locate faults when network performance deteriorates, configure IP FPM hop-
by-hop performance statistics collection.
The IPRAN network shown in Figure 1-8 transmits video services. A unidirectional
service flow enters the network through the UPE, travels across SPE1, and leaves
the network through the NPE.
To locate faults when network performance deteriorates, configure hop-by-hop
packet loss and delay measurement on the UPE and NPE to locate faults segment
by segment.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each interface so that all
provider edge devices (PEs) can communicate at the network layer. This
example uses Open Shortest Path First (OSPF) as the routing protocol.
2. Configure Multiprotocol Label Switching (MPLS) functions and public network
tunnels. In this example, RSVP-TE tunnels are established between the UPE
and SPEs, and Label Distribution Protocol (LDP) LSPs are established between
the SPEs and between the NPE and SPEs.
3. Create a VPN instance on the UPE and NPE and import the local direct routes
on the UPE and NPE to their respective VPN instance routing tables.
4. Establish MP-IBGP peer relationships between the UPE and SPEs and between
the NPE and SPEs.
5. Configure the SPEs as route reflectors (RRs) and specify the UPE and NPE as
RR clients.
6. Configure VPN FRR on the UPE and NPE.
7. Configure 1588v2 (Precision Time Protocol) to synchronize the clocks of
the UPE, SPE1, and the NPE.
8. Configure hop-by-hop packet loss and delay measurement on the UPE and
NPE to locate faults segment by segment.
9. Configure the packet loss and two-way delay alarm thresholds and clear
alarm thresholds on the UPE.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface listed in Table 1-3
● Interior Gateway Protocol (IGP) protocol type (OSPF), process ID (1), and area
ID (0)
● Label switching router (LSR) IDs of the UPE (1.1.1.1), SPE1 (2.2.2.2), SPE2
(3.3.3.3), and the NPE (4.4.4.4)
● Tunnel interface names, tunnel IDs, and tunnel interface addresses for the
bidirectional tunnels between the UPE and SPE1 (Tunnel11, 100, and
loopback interface address) and between the UPE and SPE2 (Tunnel12, 200,
and loopback interface address); tunnel policy names for the bidirectional
tunnels between the UPE and SPEs (policy1); tunnel selector names on the
SPEs (BindTE)
● Names, RDs, and VPN targets of the VPN instances on the UPE and NPE:
vpna, 100:1, and 1:1
● UPE's DCP ID and MCP ID (both 1.1.1.1); SPE1's DCP ID (2.2.2.2); NPE's DCP
ID (4.4.4.4)
● IP FPM instance ID (1) and statistical period (10s)
● Forward target flow's source IP address (10.1.1.1) and destination IP address
(10.2.1.1)
● Measurement points (TLP100 and TLP310)
● Loss and delay measurement flags (respectively the third and fourth bits in
the ToS field of the IPv4 packet header)
NOTE
Before you deploy IP FPM for packet loss and delay measurement, plan the color bits.
If two or more bits in the IPv4 packet header have not been planned for other
purposes, they can be used for packet loss and delay measurement at the same time.
If only one bit in the IPv4 packet header has not been planned, it can be used for
either packet loss or delay measurement in an IP FPM instance.
● Authentication mode (HMAC-SHA256), password (YsHsjx_202206), key ID (1),
and UDP port number (2048) on the UPE, SPE1, and NPE
● On-demand packet loss and delay measurement intervals (30 minutes)
● Packet loss alarm threshold and its clear alarm threshold (respectively 10%
and 5%); two-way delay alarm threshold and its clear alarm threshold
(respectively 100 ms and 50 ms)
Procedure
Step 1 Configure interface IP addresses.
Assign an IP address to each interface according to Table 1-3 and create a
loopback interface on each node. For configuration details, see Configuration
Files in this section.
Step 2 Configure OSPF.
Configure OSPF on each node to allow the nodes to communicate at the network
layer. For detailed configurations, see Configuration Files in this section.
Step 3 Configure basic MPLS functions and public network tunnels.
● Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and Constraint
Shortest Path First (CSPF).
# Configure the UPE.
<UPE> system-view
[~UPE] mpls lsr-id 1.1.1.1
[*UPE] mpls
[*UPE-mpls] mpls te
[*UPE-mpls] mpls rsvp-te
[*UPE-mpls] mpls te cspf
[*UPE-mpls] quit
[*UPE] interface gigabitethernet 1/0/1
[*UPE-GigabitEthernet1/0/1] mpls
[*UPE-GigabitEthernet1/0/1] mpls te
[*UPE-GigabitEthernet1/0/1] mpls rsvp-te
[*UPE-GigabitEthernet1/0/1] quit
[*UPE] interface gigabitethernet 1/0/2
[*UPE-GigabitEthernet1/0/2] mpls
[*UPE-GigabitEthernet1/0/2] mpls te
[*UPE-GigabitEthernet1/0/2] mpls rsvp-te
[*UPE-GigabitEthernet1/0/2] quit
[*UPE] commit
The configurations of SPE1 and SPE2 are similar to the configuration of the UPE. For configuration details, see Configuration Files in this section.
# Configure the NPE.
<NPE> system-view
[~NPE] mpls lsr-id 4.4.4.4
[*NPE] mpls
[*NPE-mpls] quit
[*NPE] mpls ldp
[*NPE-mpls-ldp] quit
[*NPE] interface gigabitethernet 1/0/1
[*NPE-GigabitEthernet1/0/1] mpls
[*NPE-GigabitEthernet1/0/1] mpls ldp
[*NPE-GigabitEthernet1/0/1] quit
[*NPE] interface gigabitethernet 1/0/2
[*NPE-GigabitEthernet1/0/2] mpls
[*NPE-GigabitEthernet1/0/2] mpls ldp
[*NPE-GigabitEthernet1/0/2] quit
[*NPE] commit
● Enable the egress of each unidirectional tunnel to be created to assign a non-null label to the penultimate hop.
# Configure the UPE.
[~UPE] mpls
[*UPE-mpls] label advertise non-null
[*UPE-mpls] quit
[*UPE] commit
# Configure SPE1.
[~SPE1] mpls
[*SPE1-mpls] label advertise non-null
[*SPE1-mpls] quit
[*SPE1] commit
# Configure SPE2.
[~SPE2] mpls
[*SPE2-mpls] label advertise non-null
[*SPE2-mpls] quit
[*SPE2] commit
● Configure RSVP-TE tunnel interfaces.
# Configure the UPE.
[~UPE] interface Tunnel 11
[*UPE-Tunnel11] ip address unnumbered interface loopback 1
[*UPE-Tunnel11] tunnel-protocol mpls te
[*UPE-Tunnel11] destination 2.2.2.2
[*UPE-Tunnel11] mpls te tunnel-id 100
[*UPE-Tunnel11] mpls te signal-protocol rsvp-te
[*UPE-Tunnel11] mpls te reserved-for-binding
[*UPE-Tunnel11] quit
[*UPE] interface Tunnel 12
[*UPE-Tunnel12] ip address unnumbered interface loopback 1
[*UPE-Tunnel12] tunnel-protocol mpls te
[*UPE-Tunnel12] destination 3.3.3.3
[*UPE-Tunnel12] mpls te tunnel-id 200
[*UPE-Tunnel12] mpls te signal-protocol rsvp-te
[*UPE-Tunnel12] mpls te reserved-for-binding
[*UPE-Tunnel12] quit
[*UPE] commit
# Configure SPE1.
[~SPE1] interface Tunnel 11
[*SPE1-Tunnel11] ip address unnumbered interface loopback 1
[*SPE1-Tunnel11] tunnel-protocol mpls te
[*SPE1-Tunnel11] destination 1.1.1.1
[*SPE1-Tunnel11] mpls te tunnel-id 100
[*SPE1-Tunnel11] mpls te signal-protocol rsvp-te
[*SPE1-Tunnel11] mpls te reserved-for-binding
[*SPE1-Tunnel11] quit
[*SPE1] commit
# Configure SPE2.
[~SPE2] interface Tunnel 12
[*SPE2-Tunnel12] ip address unnumbered interface loopback 1
[*SPE2-Tunnel12] tunnel-protocol mpls te
[*SPE2-Tunnel12] destination 1.1.1.1
[*SPE2-Tunnel12] mpls te tunnel-id 200
[*SPE2-Tunnel12] mpls te signal-protocol rsvp-te
[*SPE2-Tunnel12] mpls te reserved-for-binding
[*SPE2-Tunnel12] quit
[*SPE2] commit
● Configure tunnel policies.
# Configure the UPE.
[~UPE] tunnel-policy policy1
[*UPE-tunnel-policy-policy1] tunnel binding destination 2.2.2.2 te Tunnel 11
[*UPE-tunnel-policy-policy1] tunnel binding destination 3.3.3.3 te Tunnel 12
[*UPE-tunnel-policy-policy1] quit
[*UPE] commit
# Configure SPE1.
[~SPE1] tunnel-policy policy1
[*SPE1-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te Tunnel 11
[*SPE1-tunnel-policy-policy1] quit
[*SPE1] commit
# Configure SPE2.
[~SPE2] tunnel-policy policy1
[*SPE2-tunnel-policy-policy1] tunnel binding destination 1.1.1.1 te Tunnel 12
[*SPE2-tunnel-policy-policy1] quit
[*SPE2] commit
Step 4 Create a VPN instance on the UPE and NPE and import the local direct routes on
the UPE and NPE to their respective VPN instance routing tables.
For configuration details, see Configuration Files in this section.
Step 5 Establish MP-IBGP peer relationships between the UPE and SPEs and between the
NPE and SPEs.
# Configure the UPE.
[~UPE] bgp 100
[*UPE-bgp] router-id 1.1.1.1
[*UPE-bgp] peer 2.2.2.2 as-number 100
[*UPE-bgp] peer 2.2.2.2 connect-interface loopback 1
[*UPE-bgp] peer 3.3.3.3 as-number 100
[*UPE-bgp] peer 3.3.3.3 connect-interface loopback 1
[*UPE-bgp] ipv4-family vpnv4
[*UPE-bgp-af-vpnv4] peer 2.2.2.2 enable
[*UPE-bgp-af-vpnv4] peer 3.3.3.3 enable
[*UPE-bgp-af-vpnv4] quit
[*UPE-bgp] quit
[*UPE] commit
# Configure SPE1.
[~SPE1] bgp 100
[*SPE1-bgp] router-id 2.2.2.2
[*SPE1-bgp] peer 1.1.1.1 as-number 100
[*SPE1-bgp] peer 1.1.1.1 connect-interface loopback 1
[*SPE1-bgp] peer 3.3.3.3 as-number 100
[*SPE1-bgp] peer 3.3.3.3 connect-interface loopback 1
[*SPE1-bgp] peer 4.4.4.4 as-number 100
[*SPE1-bgp] peer 4.4.4.4 connect-interface loopback 1
[*SPE1-bgp] ipv4-family vpnv4
[*SPE1-bgp-af-vpnv4] undo policy vpn-target
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 enable
[*SPE1-bgp-af-vpnv4] peer 3.3.3.3 enable
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 enable
[*SPE1-bgp-af-vpnv4] quit
[*SPE1-bgp] quit
[*SPE1] commit
The configurations of SPE2 and the NPE are similar to those of SPE1 and the UPE, respectively. For configuration details, see Configuration Files in this section.
Step 6 Configure the SPEs as RRs and specify the UPE and NPE as RR clients.
[~SPE1] bgp 100
[*SPE1-bgp] ipv4-family vpnv4
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 reflect-client
[*SPE1-bgp-af-vpnv4] peer 1.1.1.1 next-hop-local
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 reflect-client
[*SPE1-bgp-af-vpnv4] peer 4.4.4.4 next-hop-local
[*SPE1-bgp-af-vpnv4] quit
[*SPE1-bgp] quit
[*SPE1] commit
Step 7 Apply the tunnel policy on the UPE, and configure a tunnel selector on each SPE
(the SPEs do not have VPN instances), so that the UPE and SPEs use RSVP-TE
tunnels to transmit traffic.
The configuration of the NPE is similar to the configuration of the UPE. For
configuration details, see Configuration Files in this section. After completing the
configurations, run the display bgp vpnv4 vpn-instance vpna routing-table
command on the UPE and NPE to view detailed information about received
routes.
[~UPE] display bgp vpnv4 vpn-instance vpna routing-table
BGP Local router ID is 1.1.1.1
Status codes: * - valid, > - best, d - damped,
h - history, i - internal, s - suppressed, S - Stale
Origin : i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V - valid, I - invalid, N - not-found
The command output shows that the UPE and NPE both preferentially select the
routes advertised by SPE1 and use UPE <-> SPE1 <-> NPE as the primary path.
Step 9 Configure 1588v2 to synchronize the clocks of the UPE, SPE1, and the NPE.
1. # Import BITS0 signals to SPE1.
[~SPE1] clock bits-type bits0 2mhz
[*SPE1] clock source bits0 synchronization enable
[*SPE1] clock source bits0 priority 1
[*SPE1] commit
# Configure UPE.
[~UPE] ptp enable
[*UPE] ptp domain 1
[*UPE] ptp device-type bc
[*UPE] ptp clock-source local clock-class 185
[*UPE] clock source ptp synchronization enable
[*UPE] clock source ptp priority 1
[*UPE] commit
# Configure NPE.
[~NPE] ptp enable
[*NPE] ptp domain 1
[*NPE] ptp device-type bc
[*NPE] ptp clock-source local clock-class 185
[*NPE] clock source ptp synchronization enable
[*NPE] clock source ptp priority 1
[*NPE] commit
# Configure UPE.
[~UPE] interface gigabitethernet 1/0/0
[~UPE-GigabitEthernet1/0/0] ptp enable
[*UPE-GigabitEthernet1/0/0] commit
[~UPE-GigabitEthernet1/0/0] quit
[~UPE] interface gigabitethernet 1/0/1
[~UPE-GigabitEthernet1/0/1] ptp enable
[*UPE-GigabitEthernet1/0/1] commit
[~UPE-GigabitEthernet1/0/1] quit
# Configure NPE.
[~NPE] interface gigabitethernet 1/0/2
[~NPE-GigabitEthernet1/0/2] ptp enable
[*NPE-GigabitEthernet1/0/2] commit
[~NPE-GigabitEthernet1/0/2] quit
[~NPE] interface gigabitethernet 1/0/3
[~NPE-GigabitEthernet1/0/3] ptp enable
[*NPE-GigabitEthernet1/0/3] commit
[~NPE-GigabitEthernet1/0/3] quit
Step 10 Configure hop-by-hop packet loss and delay measurement on the UPE, SPE1, and
the NPE; configure two ACHs on the link between the UPE and NPE: ACH1
{TLP100, TLP200} and ACH2 {TLP200, TLP310}.
# Configure UPE.
● Configure the MCP.
[~UPE] nqa ipfpm mcp
[*UPE-nqa-ipfpm-mcp] mcp id 1.1.1.1
[*UPE-nqa-ipfpm-mcp] protocol udp port 2048
[*UPE-nqa-ipfpm-mcp] authentication-mode hmac-sha256 key-id 1 cipher YsHsjx_202206
[*UPE-nqa-ipfpm-mcp] instance 1
[*UPE-nqa-ipfpm-mcp-instance-1] description Instance for point-by-point test
[*UPE-nqa-ipfpm-mcp-instance-1] dcp 1.1.1.1
[*UPE-nqa-ipfpm-mcp-instance-1] dcp 2.2.2.2
[*UPE-nqa-ipfpm-mcp-instance-1] dcp 4.4.4.4
[*UPE-nqa-ipfpm-mcp-instance-1] ach 1
[*UPE-nqa-ipfpm-mcp-instance-1-ach-1] flow forward
[*UPE-nqa-ipfpm-mcp-instance-1-ach-1] in-group dcp 1.1.1.1 tlp 100
[*UPE-nqa-ipfpm-mcp-instance-1-ach-1] out-group dcp 2.2.2.2 tlp 200
[*UPE-nqa-ipfpm-mcp-instance-1-ach-1] quit
[*UPE-nqa-ipfpm-mcp-instance-1] ach 2
[*UPE-nqa-ipfpm-mcp-instance-1-ach-2] flow forward
[*UPE-nqa-ipfpm-mcp-instance-1-ach-2] in-group dcp 2.2.2.2 tlp 200
[*UPE-nqa-ipfpm-mcp-instance-1-ach-2] out-group dcp 4.4.4.4 tlp 310
[*UPE-nqa-ipfpm-mcp-instance-1-ach-2] quit
[*UPE-nqa-ipfpm-mcp-instance-1] quit
[*UPE-nqa-ipfpm-mcp] quit
[*UPE] commit
After completing the configuration, run the display ipfpm mcp command on
the UPE. The command output shows MCP configurations on the UPE.
[~UPE] display ipfpm mcp
Specification Information:
Max Instance Number :64
Max DCP Number Per Instance :256
Max ACH Number Per Instance :16
Max TLP Number Per ACH :16
Configuration Information:
MCP ID :1.1.1.1
Status :Active
Protocol Port :2048
Current Instance Number :1
● Configure a DCP.
[~UPE] nqa ipfpm dcp
[*UPE-nqa-ipfpm-dcp] dcp id 1.1.1.1
[*UPE-nqa-ipfpm-dcp] mcp 1.1.1.1 port 2048
[*UPE-nqa-ipfpm-dcp] authentication-mode hmac-sha256 key-id 1 cipher YsHsjx_202206
[*UPE-nqa-ipfpm-dcp] color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
[*UPE-nqa-ipfpm-dcp] instance 1
[*UPE-nqa-ipfpm-dcp-instance-1] description Instance for point-by-point test
[*UPE-nqa-ipfpm-dcp-instance-1] interval 10
[*UPE-nqa-ipfpm-dcp-instance-1] flow forward source 10.1.1.1 destination 10.2.1.1
[*UPE-nqa-ipfpm-dcp-instance-1] tlp 100 in-point ingress
[*UPE-nqa-ipfpm-dcp-instance-1] quit
[*UPE-nqa-ipfpm-dcp] quit
[*UPE] commit
After completing the configuration, run the display ipfpm dcp command on
the UPE. The command output shows DCP configurations on the UPE.
[~UPE] display ipfpm dcp
Specification Information(Main Board):
Max Instance Number :64
Max 10s Instance Number :64
Max 1s Instance Number :--
Max TLP Number :512
Max TLP Number Per Instance :8
Configuration Information:
DCP ID : 1.1.1.1
Loss-measure Flag : tos-bit3
Delay-measure Flag : tos-bit4
Authentication Mode : hmac-sha256
Test Instances MCP ID : 1.1.1.1
Test Instances MCP Port : 2048
Current Instance Number :1
# Configure SPE1.
● Configure a DCP.
[~SPE1] nqa ipfpm dcp
[*SPE1-nqa-ipfpm-dcp] dcp id 2.2.2.2
[*SPE1-nqa-ipfpm-dcp] authentication-mode hmac-sha256 key-id 1 cipher YsHsjx_202206
[*SPE1-nqa-ipfpm-dcp] color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
[*SPE1-nqa-ipfpm-dcp] mcp 1.1.1.1 port 2048
[*SPE1-nqa-ipfpm-dcp] instance 1
[*SPE1-nqa-ipfpm-dcp-instance-1] description Instance for point-by-point test
[*SPE1-nqa-ipfpm-dcp-instance-1] interval 10
[*SPE1-nqa-ipfpm-dcp-instance-1] flow forward source 10.1.1.1 destination 10.2.1.1
[*SPE1-nqa-ipfpm-dcp-instance-1] tlp 200 mid-point flow forward ingress vpn-label 17 lsp-label 18
[*SPE1-nqa-ipfpm-dcp-instance-1] quit
[*SPE1-nqa-ipfpm-dcp] quit
[*SPE1] commit
After completing the configuration, run the display ipfpm dcp command on
SPE1. The command output shows DCP configurations on SPE1.
[~SPE1] display ipfpm dcp
Specification Information(Main Board):
Max Instance Number :64
Max 10s Instance Number :64
Max 1s Instance Number :--
Max TLP Number :512
Max TLP Number Per Instance :8
Configuration Information:
DCP ID : 2.2.2.2
Loss-measure Flag : tos-bit3
Delay-measure Flag : tos-bit4
Authentication Mode : hmac-sha256
Test Instances MCP ID : 1.1.1.1
Test Instances MCP Port : 2048
Current Instance Number :1
# Configure the NPE.
The DCP configuration on the NPE is similar to that on SPE1. For configuration
details, see Configuration Files in this section.
After completing the configuration, run the display ipfpm dcp command on
the NPE. The command output shows DCP configurations on the NPE.
[~NPE] display ipfpm dcp
Specification Information(Main Board):
Max Instance Number :64
Max 10s Instance Number :64
Max 1s Instance Number :--
Max TLP Number :512
Max TLP Number Per Instance :8
Configuration Information:
DCP ID : 4.4.4.4
Loss-measure Flag : tos-bit3
Delay-measure Flag : tos-bit4
Authentication Mode : hmac-sha256
Test Instances MCP ID : 1.1.1.1
Test Instances MCP Port : 2048
Current Instance Number :1
Step 11 Configure alarm thresholds and clear alarm thresholds for IP FPM performance
counters on the UPE.
# Configure the packet loss alarm threshold and its clear alarm threshold.
[~UPE] nqa ipfpm mcp
[*UPE-nqa-ipfpm-mcp] instance 1
[*UPE-nqa-ipfpm-mcp-instance-1] loss-measure ratio-threshold upper-limit 10 lower-limit 5
[*UPE-nqa-ipfpm-mcp-instance-1] commit
# Configure the two-way delay alarm threshold and its clear alarm threshold.
[~UPE-nqa-ipfpm-mcp-instance-1] delay-measure two-way delay-threshold upper-limit 100000 lower-limit 50000
[*UPE-nqa-ipfpm-mcp-instance-1] commit
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ptp enable
ptp domain 1
ptp device-type bc
ptp clock-source local clock-class 185
ptp clock-source bits0 on
#
return
● SPE1 configuration file
#
sysname SPE1
#
ipv4-family vpnv4
undo policy vpn-target
tunnel-selector bindTE
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 3.3.3.3 enable
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 next-hop-local
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 172.16.1.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 172.16.4.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel11
#
nqa ipfpm dcp
dcp id 2.2.2.2
mcp 1.1.1.1 port 2048
authentication-mode hmac-sha256 key-id 1 cipher #%#%/#(8ARUz1+=(sUrXdsM1P.x#%#%
color-flag loss-measure tos-bit 3 delay-measure tos-bit 4
instance 1
description Instance for point-by-point test
flow forward source 10.1.1.1 destination 10.2.1.1
tlp 200 mid-point flow forward ingress vpn-label 17 lsp-label 18
#
return
● SPE2 configuration file
#
sysname SPE2
#
tunnel-selector bindTE permit node 10
apply tunnel-policy policy1
#
mpls lsr-id 3.3.3.3
mpls
mpls te
label advertise non-null
mpls rsvp-te
mpls te cspf
#
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.5.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.2.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet1/0/3
undo shutdown
ip address 172.16.3.2 255.255.255.0
mpls
mpls te
mpls ldp
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.1
mpls te tunnel-id 200
mpls te reserved-for-binding
#
bgp 100
router-id 3.3.3.3
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
ipv4-family vpnv4
undo policy vpn-target
tunnel-selector bindTE
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 2.2.2.2 enable
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 next-hop-local
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 172.16.2.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 172.16.5.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel12
#
return
● NPE configuration file
#
sysname NPE
#
ptp enable
ptp domain 1
ptp device-type bc
ptp clock-source local clock-class 185
ptp clock-source bits0 on
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
#
return
Context
NOTE
The NetStream feature may be used to analyze the communication information of terminal
customers for network traffic statistics and management purposes. Before enabling the
NetStream feature, ensure that its use is within the boundaries permitted by applicable
laws and regulations. Effective measures must be taken to ensure that information is
securely protected.
● Accounting statistics
NetStream provides detailed accounting statistics, including IP addresses,
number of packets, number of bytes, time, type of service (ToS), and
application types. Based on the collected statistics, the Internet service
provider (ISP) can charge users flexibly based on resource information, such
as time periods, bandwidth, applications, or service quality, and enterprises
can estimate their expenses and assign costs to efficiently use resources.
● Network planning and analysis
NetStream provides key information for advanced network management tools
to optimize the network design and plan. This helps achieve the best network
performance and reliability with the lowest network operation cost.
● Network monitoring
NetStream monitors network traffic in real time.
● Application monitoring and analysis
NetStream provides detailed network application information. For example, it
allows a network administrator to view the proportion of each application,
such as web, the File Transfer Protocol (FTP), Telnet, and other TCP/IP
applications, to communication traffic. Based on the information, the Internet
Content Provider (ICP) and ISP can properly plan and allocate network
application resources.
● Abnormal traffic detection
By analyzing NetStream flows, the NMS can detect abnormal traffic, such as
different types of attacks on networks in real time. The NMS uses alarm
information reported by NetStream to monitor devices to secure network
operation.
The NetStream function involves three devices: the NetStream Data Exporter (NDE),
the NetStream Collector (NSC), and the NetStream Data Analyzer (NDA). Figure 1-9
shows the relationships between the three devices.
Usage Scenario
On the network shown in Figure 1-10, a carrier enables NetStream on the router
functioning as a NetStream Data Exporter (NDE) to obtain detailed network
application information. The carrier can use the information to monitor abnormal
network traffic, analyze users' operation modes, and plan networks between ASs.
Statistics about original flows are collected based on the 7-tuple information. The
NDE samples IPv4 flows passing through it, collects statistics about the sampled
flows, encapsulates the aged NetStream original flows into UDP packets, and sends
the packets to the NetStream Collector (NSC) for processing. Compared with
collecting statistics about aggregated flows, collecting statistics about original flows
has less impact on NDE performance. However, original flows consume more storage
space and network bandwidth resources because the volume of original flows is
greater than that of aggregated flows.
Pre-configuration Tasks
Before collecting the statistics about IPv4 original flows, configure static routes or
enable an IGP to implement network connectivity.
Context
NetStream services can be processed in the following modes:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
The ip netstream sampler to slot command has the same function as the ipv6
netstream sampler to slot command. Running either command takes effect on
all packets, so there is no need to configure both of them. If both are configured,
ensure that the NetStream service processing modes are the same; a mode
inconsistency causes an error.
Procedure
● Specify the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ip netstream sampler to slot self
The distributed NetStream service processing mode is specified.
d. Run commit
The configuration is committed.
----End
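For example, the following minimal sketch (the device name HUAWEI and slot 1
are assumed values) specifies the distributed processing mode for the interface
board in slot 1:
<HUAWEI> system-view
[~HUAWEI] slot 1
[~HUAWEI-slot-1] ip netstream sampler to slot self
[*HUAWEI-slot-1] commit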
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 (Optional) Run ip netstream export version { 5 [ origin-as | peer-as ] | 9
[ origin-as | peer-as ] [ bgp-nexthop ] [ ttl ] [ route-distinguisher ] | ipfix
[ origin-as | peer-as ] [ bgp-nexthop ] [ ttl ] [ route-distinguisher ] }
The format of output packets is configured.
NetStream original flow packets can be output in V5, V9, or IPFIX format. V5, V9,
and IPFIX formats are mutually exclusive.
Because the V9 format uses templates to output original flow packets, it allows
statistics to be output more flexibly and makes it easier to extend newly defined
flow elements and generate new records.
Compared with the V9 format, the IPFIX format improves packet extensibility,
compatibility, security, and reliability. In addition, the IPFIX format has an
enterprise identifier field. If this field needs to be set, NetStream IPv4 original
flows must be output in IPFIX format.
The V5 format is fixed, and the system cost is low. In most cases, NetStream
original flow packets can be output in V5 format. In the following cases, however,
NetStream original flow packets must be output in V9 or IPFIX format.
● The output NetStream packets need to carry BGP next-hop information.
----End
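For example, the following sketch (run in the system view; choosing V9 with the
bgp-nexthop option is an assumption made for illustration) outputs original flows
in V9 format carrying BGP next-hop information:
[~HUAWEI] ip netstream export version 9 bgp-nexthop
[*HUAWEI] commit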
Context
Increasing types of services and applications on networks urge carriers to provide
more refined management and accounting services.
If NetStream is configured on multiple interfaces on an NDE, all interfaces send
traffic statistics to a single NetStream Collector (NSC). The NSC cannot distinguish
between interfaces and therefore cannot manage or analyze traffic statistics on a
per-interface basis. In addition, the NSC will be overloaded by the large amount of
information.
NetStream monitoring configured on an NDE allows the NDE to send traffic
statistics collected on specified interfaces to specified NSCs for analysis, achieving
interface-specific service monitoring. Traffic statistics can be balanced among
these NSCs to reduce the load on a single NSC.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream monitor monitor-name
A NetStream monitoring service view is created and displayed, or an existing
NetStream monitoring service view is directly displayed.
Step 3 (Optional) Run ip netstream export source { ip-address | ipv6 ipv6-address }
[ port ]
A source IP address and a source port are configured for outputting NetStream
flows.
Step 4 Run ip netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ version { 5 | 9 | ipfix } ] [ dscp dscp-value ]
The destination IP address for outputting statistics and the UDP port number of
the peer NSC are configured.
Step 5 Run quit
Return to the system view.
NOTE
When configuring NetStream monitoring services, you need to run the ip netstream { inbound
| outbound } command in the interface view. Otherwise, the ip netstream monitor monitor-
name { inbound | outbound } command does not take effect.
If NetStream monitoring services have been configured on the interface, statistics about original
flows are sent to the destination IP address specified in the NetStream monitoring service view,
not that specified in the system view. Similarly, the source address and source port configured in
the NetStream monitoring service view are used for outputting statistics.
----End
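The following minimal sketch ties these steps together; the monitor name m1, the
source address 10.1.1.1, the NSC address 10.1.2.2 with port 9996, and the interface
are assumptions, and the view prompts are illustrative:
<HUAWEI> system-view
[~HUAWEI] ip netstream monitor m1
[*HUAWEI-netstream-monitor-m1] ip netstream export source 10.1.1.1
[*HUAWEI-netstream-monitor-m1] ip netstream export host 10.1.2.2 9996
[*HUAWEI-netstream-monitor-m1] quit
[*HUAWEI] interface gigabitethernet 1/0/1
[*HUAWEI-GigabitEthernet1/0/1] ip netstream inbound
[*HUAWEI-GigabitEthernet1/0/1] ip netstream monitor m1 inbound
[*HUAWEI-GigabitEthernet1/0/1] commit
As noted above, ip netstream inbound must be configured on the interface for
the monitoring command to take effect.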
1.1.3.3.4 (Optional) Adjusting the AS Field Mode and Interface Index Type
For the NetStream Collector (NSC) to properly receive and parse NetStream
packets output by the NetStream Data Exporter (NDE), ensure that the AS field
modes and interface index types configured on the NDE and the NSC are the
same.
Context
Before you enable the NSC to properly receive and parse NetStream packets
output by the NDE, specify the same AS field mode and interface index type on
the NDE and NSC.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTE
If the 32-bit AS field mode is used, the NMS must be able to identify the 32-bit AS
field. Otherwise, the NMS fails to identify inter-AS traffic sent by devices.
● Interface index: The NMS uses an interface index carried in a NetStream
packet output by the NDE to query information about the interface that sends
the packet. The interface index can be 16 or 32 bits long. The index length is
determined by the NMSs of different vendors. Therefore, the NDE must use a
proper interface index type that is also supported by the NMS. For example, if
the NMS can parse 32-bit interface indexes, set the format of the interface
indexes in the output NetStream packets to 32-bit.
NOTE
Compared with the default 16-bit interface index, the 32-bit interface index can be
identified by more third-party NMSs.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream export index-switch { 16 | 32 }
The length type of the interface index carried in NetStream packets output by the
device is set.
When converting the interface index from a 16-bit value to a 32-bit value, ensure
that the following conditions are met:
● Original flows are output in V9 or IPFIX format.
● All aggregation flows are output in V9 or IPFIX format.
----End
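For example, if the NMS can parse 32-bit interface indexes, the following sketch
(the device name HUAWEI is assumed) sets the interface index type to 32-bit:
[~HUAWEI] ip netstream export index-switch 32
[*HUAWEI] commit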
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream tcp-flag enable
Statistics collection for TCP flags is enabled, and an original flow is created for
each flag value. If statistics collection for TCP flags is enabled, the number of
original flows will greatly increase.
----End
Context
Regardless of the flow format in which the traffic statistics are output, option
packet data is exported to the NetStream Collector (NSC) as a supplement. In this
way, the NetStream Data Exporter (NDE) can obtain information, such as the
sampling ratio and whether the sampling function is enabled, to reflect the actual
network traffic.
Option packets, which are independent of statistics packets, are exported to the
NSC in V9 or IPFIX format. Therefore, the required option template is sent to the
NMS for parsing option packets. You can set option template refreshing
parameters as needed to regularly refresh the template to notify the NSC of the
latest option template format.
Procedure
● Run system-view
The system view is displayed.
● Run the ip netstream export template option sampler command to enable
the function of exporting statistics about interface option packets.
● Run the ip netstream export template option { refresh-rate packet-number |
timeout-rate timeout-interval } command to set the packet sending interval
and timeout interval for option template refreshing.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Configure a sampling mode and sampling ratio by performing at least one of the
following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ip netstream sampler { fix-packets packet-interval | random-
packets packet-interval | fix-time time-interval } { inbound | outbound }
A global sampling mode and sampling ratio are configured.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure a sampling mode and sampling ratio on an interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ip netstream sampler { fix-packets packet-interval | random-
packets packet-interval | fix-time time-interval } { inbound | outbound }
A sampling mode and sampling ratio are configured on the interface.
NOTE
The sampling mode and sampling ratio configured in the system view are
applicable to all interfaces on the device. The sampling mode and sampling ratio
configured in the interface view take precedence over those configured in the
system view.
Step 3 (Optional) Run ip netstream sampler except deny-action
NetStream is not applied to traffic matching an ACL rule or traffic behavior that
contains deny.
NOTE
The traffic behavior view must be displayed before you run this command.
----End
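For example, the following sketch (the interface and sampling ratio are
assumptions) randomly samples one of every 1000 incoming packets on an
interface, overriding any globally configured sampling:
<HUAWEI> system-view
[~HUAWEI] interface gigabitethernet 1/0/1
[*HUAWEI-GigabitEthernet1/0/1] ip netstream sampler random-packets 1000 inbound
[*HUAWEI-GigabitEthernet1/0/1] commit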
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The interface view is displayed.
Step 3 Run ip netstream mpls exclude
MPLS packet sampling is disabled on the interface.
Step 4 Run commit
The configuration is committed.
----End
Procedure
● Run the display ip netstream cache origin [ source-ip source-ip ] [ source-
port source-port ] [ destination-ip destination-ip ] [ destination-port
destination-port ] [ protocol { udp | tcp | protocol-number } ] [ time-range
from start-time to end-time ] [ source-interface { source-interface-type
source-interface-num | source-interface-name } ] [ destination-interface
{ destination-interface-type destination-interface-num | destination-interface-
name } ] slot slot-id command to check information about the NetStream
buffer.
Usage Scenario
On the network shown in Figure 1-11, a carrier enables NetStream on the router
functioning as a NetStream Data Exporter (NDE) to obtain detailed network
application information. The carrier can use the information to monitor abnormal
network traffic, analyze users' operation modes, and plan networks between ASs.
Statistics about NetStream aggregated flows contain information about original
flows with the same attributes, whereas statistics about NetStream original flows
contain information about individual sampled flows. The volume of aggregated
flow statistics is therefore smaller than that of original flow statistics.
Pre-configuration Tasks
Before collecting statistics about IPv4 aggregated flows, complete the following
tasks:
Context
NetStream services can be processed in the following modes:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
The ip netstream sampler to slot command has the same function as the ipv6
netstream sampler to slot command. Running either command takes effect on
all packets, so there is no need to configure both of them. If both are configured,
ensure that the NetStream service processing modes are the same; a mode
inconsistency causes an error.
Procedure
● Specify the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ip netstream sampler to slot self
The distributed NetStream service processing mode is specified.
d. Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
If the NetStream flow aggregation function is enabled on a device, the device classifies and
aggregates original flows based on specified rules and sends the aggregated flows to the
NetStream Data Analyzer (NDA) for analysis. Aggregating original flows minimizes the
consumption of network bandwidth, CPU resources, and memory resources. The flow
attributes based on which flows are aggregated vary according to the flow aggregation mode.
The length of the aggregate mask is set. The effective mask is the greater of the
mask in the FIB table and the configured mask. If no aggregate mask is set, the
system uses the mask in the FIB table for flow aggregation.
NOTE
The aggregate mask takes effect only on flows aggregated in the following modes:
destination-prefix, destination-prefix-tos, prefix, prefix-tos, source-prefix, and source-prefix-
tos.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ dscp dscp-value ]
The destination IP address and UDP port number of the peer NSC are specified for
NetStream original flows to be output.
If the destination IP addresses are specified in both the system and the
aggregation views, the configuration in the aggregation view takes effect.
Step 3 Run ip netstream aggregation { as | as-tos | bgp-nexthop-tos | destination-
prefix | destination-prefix-tos | index-tos | mpls-label | prefix | prefix-tos |
protocol-port | protocol-port-tos | source-prefix | source-prefix-tos | source-
index-tos | vlan-id | bgp-community | vni-sip-dip }
The IPv4 NetStream aggregation view is displayed.
Step 4 Run enable
The NetStream aggregation mode is enabled.
Step 5 (Optional) Run export version { 8 | 9 | ipfix }
The output format is specified for the aggregated flows. Flows aggregated in as,
as-tos, destination-prefix, destination-prefix-tos, prefix, prefix-tos, protocol-
port, protocol-port-tos, source-prefix, or source-prefix-tos mode are output in
V8 format by default. You can specify the output format for aggregated flows as
needed.
NOTE
For the vlan-id, bgp-nexthop-tos, vni-sip-dip, and index-tos aggregation modes, aggregated
packets can be encapsulated only in the default V9 format. You can change the format to
IPFIX using the export version command.
If no source IP address or source port is specified in the aggregation view, the
source IP address and source port specified in the system view take effect.
Step 8 Run ip netstream export host { ip-address | ipv6 ipv6-address } port [ vpn-
instance vpn-instance-name ] [ dscp dscp-value ]
The destination IP address and UDP port number of the peer NSC are specified for
the aggregated flows to be output.
NOTE
The destination IP address specified in the NetStream aggregation view takes precedence
over that specified in the system view.
Step 10 (Optional) Configure NetStream packets to carry the flow sequence field.
1. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
2. Run ip netstream export sequence-mode flow
The NetStream export sequence mode is set to flow.
Step 12 (Optional) Exit the IPv4 aggregation view. Then, in the system view,
run ip netstream export template sequence-number fixed
The sequence numbers of template packets and option template packets in IPFIX
format remain unchanged, but data packets and option data packets in IPFIX
format are still numbered consecutively.
----End
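The following minimal sketch strings the key steps together; the as aggregation
mode, the V9 output format, and the NSC address 10.1.2.2 with port 9996 are
assumptions, and the view prompts are illustrative:
<HUAWEI> system-view
[~HUAWEI] ip netstream aggregation as
[*HUAWEI-aggregation-as] enable
[*HUAWEI-aggregation-as] export version 9
[*HUAWEI-aggregation-as] ip netstream export host 10.1.2.2 9996
[*HUAWEI-aggregation-as] commit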
1.1.3.4.4 (Optional) Adjusting the AS Field Mode and Interface Index Type
For the NetStream Collector (NSC) to properly receive and parse NetStream
packets output by the NetStream Data Exporter (NDE), ensure that the AS field
modes and interface index types configured on the NDE and the NSC are the
same.
Context
Before you enable the NSC to properly receive and parse NetStream packets
output by the NDE, specify the same AS field mode and interface index type on
the NDE and NSC.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTE
If the 32-bit AS field mode is used, the NMS must be able to identify the 32-bit AS
field. Otherwise, the NMS fails to identify inter-AS traffic sent by devices.
● Interface index: The NMS uses an interface index carried in a NetStream
packet output by the NDE to query information about the interface that sends
the packet. The interface index can be 16 or 32 bits long. The index length is
determined by the NMSs of different vendors. Therefore, the NDE must use a
proper interface index type that is also supported by the NMS. For example, if
the NMS can parse 32-bit interface indexes, set the format of the interface
indexes in the output NetStream packets to 32-bit.
NOTE
Compared with the default 16-bit interface index, the 32-bit interface index can be
identified by more third-party NMSs.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream export index-switch { 16 | 32 }
The length type of the interface index carried in NetStream packets output by the
device is set.
When converting the interface index from a 16-bit value to a 32-bit value, ensure
that the following conditions are met:
● Original flows are output in V9 or IPFIX format.
● All aggregation flows are output in V9 or IPFIX format.
----End
Context
Regardless of the flow format in which the traffic statistics are output, option
packet data is exported to the NetStream Collector (NSC) as a supplement. In this
way, the NetStream Data Exporter (NDE) can obtain information, such as the
sampling ratio and whether the sampling function is enabled, to reflect the actual
network traffic.
At present, the following option packets are supported on IPv4 networks:
● Interface option packets: These packets are used to send the NetStream
configurations of all the boards on the NDE to the NSC in a scheduled
manner. The configurations cover the interface index, statistics collection
direction, and sampling value in the inbound/outbound direction.
● Time application label (TAL) option packets: These packets are used to send
application label data to the NSC. The application label option function
provides data, such as the application type of system labels, for users to
collect L3VPN NetStream statistics. For details, see 1.1.3.11 Collecting
Statistics About BGP/MPLS VPN Flows.
Option packets, which are independent of statistics packets, are exported to the
NSC in V9 or IPFIX format. Therefore, the required option template is sent to the
NMS for parsing option packets. You can set option template refreshing
parameters as needed to regularly refresh the template to notify the NSC of the
latest option template format.
Procedure
● Run system-view
The system view is displayed.
● Run the following commands as required to configure functions related to
interface option packets.
– Run the ip netstream export template option sampler command to
enable the function of exporting statistics about interface option packets.
– Run the ip netstream export template option { refresh-rate packet-
number | timeout-rate timeout-interval } command to set the packet
sending interval and timeout interval for option template refreshing.
The packet sending interval and timeout interval are set for option
template refreshing. An option template can be refreshed at a fixed
packet sending interval or timeout interval; the two intervals can both
take effect.
----End
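For example (the refresh values are assumptions; the commands run in the
system view):
[~HUAWEI] ip netstream export template option sampler
[*HUAWEI] ip netstream export template option refresh-rate 20
[*HUAWEI] ip netstream export template option timeout-rate 30
[*HUAWEI] commit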
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Configure a sampling mode and sampling ratio by performing at least one of the
following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ip netstream sampler { fix-packets packet-interval | random-
packets packet-interval | fix-time time-interval } { inbound | outbound }
A global sampling mode and sampling ratio are configured.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure a sampling mode and sampling ratio on an interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ip netstream sampler { fix-packets packet-interval | random-
packets packet-interval | fix-time time-interval } { inbound | outbound }
A sampling mode and sampling ratio are configured on the interface.
NOTE
The sampling mode and sampling ratio configured in the system view are
applicable to all interfaces on the device. The sampling mode and sampling ratio
configured in the interface view take precedence over those configured in the
system view.
Statistics about packets' BGP next-hop information can also be collected. Original
flows output in V5 format, however, cannot carry the BGP next-hop information.
The traffic statistics diagnosis function is enabled so that you can compare the
traffic statistics collected by the device with those restored by the NMS to
determine the cause of inaccurate sampling.
Step 5 (Optional) Run ip netstream sampler except deny-action
NetStream is not applied to traffic matching the ACL rule or traffic behavior that
contains deny.
NOTE
The traffic behavior view must be displayed before you run this command.
----End
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The interface view is displayed.
Step 3 Run ip netstream mpls exclude
MPLS packet sampling is disabled on the interface.
Step 4 Run commit
The configuration is committed.
----End
Procedure
● Run the display ip netstream cache { as | as-tos | bgp-nexthop-tos | bgp-community | destination-prefix | destination-prefix-tos | index-tos | mpls-label | prefix | prefix-tos | protocol-port | protocol-port-tos | source-prefix | source-prefix-tos | vlan-id } slot slot-id command to check information about various aggregated flows in the buffer.
----End
Usage Scenario
On the network shown in Figure 1-12, a carrier enables NetStream on the router
to obtain detailed network application information. The carrier can use the
information to monitor abnormal network traffic, analyze users' operation modes,
and plan networks between ASs.
Statistics about original flows are collected based on the 7-tuple information. The
NetStream data exporter (NDE) samples IPv6 flows passing through it,
encapsulates information about the post-aging NetStream original flows into UDP
packets, and sends the packets to the NetStream Collector (NSC) for further
processing. Compared with collecting statistics about aggregated flows, collecting
statistics about original flows has less impact on NDE performance. However,
original flows consume more storage space and network bandwidth on the NSC
because the volume of original flows is greater than that of aggregated flows.
Pre-configuration Tasks
Before collecting the statistics about IPv6 original flows, complete the following
tasks:
● Configure parameters of the link layer protocol and IP addresses for interfaces
so that the link layer protocol on the interfaces can go Up.
● Configure static routes or enable an IGP to implement network connectivity.
Context
NetStream services can be processed in the following modes:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
The ip netstream sampler to slot command has the same function as the ipv6
netstream sampler to slot command. Either command takes effect on all packets,
so there is no need to configure both of them. If both are configured, ensure that
the NetStream service processing modes are the same; a mode inconsistency
causes an error.
Procedure
● Specify the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ipv6 netstream sampler to slot self
The distributed NetStream service processing mode is specified.
----End
Context
IPv6 original flows can be output only in V9 or IPFIX format.
Procedure
Step 1 Run system-view
Step 3 (Optional) Configure NetStream packets to carry the flow sequence field.
1. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
2. Run ip netstream export sequence-mode flow
NetStream packets are configured to carry the flow sequence field.
NOTE
The device is configured to keep the sequence numbers of template packets and
option template packets in IPFIX format unchanged and to consecutively number
data packets and option data packets in IPFIX format.
The interval at which the template is updated when original flows are output in
V9 or IPFIX format is set.
Step 6 Run ipv6 netstream export source { ip-address | ipv6 ipv6-address } [ port ]
The source address and source port for outputting statistics are configured.
Step 7 In the system or slot view, specify the destination address and UDP port number
of the peer NSC for original flows to be output.
● In the system view:
Run ipv6 netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ dscp dscp-value ]
The destination address for outputting statistics and the UDP port number for
the peer NSC are configured.
● In the slot view:
a. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
b. Run ipv6 netstream export host [ ipv6 ] ip-address port [ vpn-instance
vpn-instance-name ] [ dscp dscp-value ]
The destination address for outputting statistics and the UDP port
number for the peer NSC are configured.
c. Run quit
Return to the system view.
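For example, the following sketch (addresses and port are hypothetical) specifies the source and destination for exported IPv6 original flow statistics in the system view:
[*NDE] ipv6 netstream export source ipv6 2001:db8::1
[*NDE] ipv6 netstream export host ipv6 2001:db8::2 9001
[*NDE] commit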
----End
Context
Increasing types of services and applications on networks urge carriers to provide
more fine-grained management and accounting services.
If NetStream is configured on multiple interfaces on an NDE, all interfaces send
traffic statistics to a single NetStream Collector (NSC). The NSC cannot distinguish
interfaces, and therefore, cannot manage or analyze traffic statistics based on
interfaces. In addition, the NSC will be overloaded due to a great amount of
information.
NetStream monitoring configured on an NDE allows the NDE to send traffic
statistics collected on specified interfaces to specified NSCs for analysis, achieving
interface-specific service monitoring. Traffic statistics can be balanced among
these NSCs to reduce the load on a single NSC.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream monitor monitor-name
A NetStream monitoring service view is created and displayed, or an existing
NetStream monitoring service view is directly displayed.
Step 3 Run ipv6 netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ version { 9 | ipfix } ] [ dscp dscp-value ]
The destination address for outputting statistics and the UDP port number of the
peer NSC are configured.
Step 4 (Optional) Run ipv6 netstream export source { ip-address | ipv6 ipv6-address }
[ port ]
The source address and source port for outputting statistics are configured.
Step 5 Run quit
Return to the system view.
Step 6 Run interface interface-type interface-number
The interface view is displayed.
Step 7 Run ipv6 netstream monitor monitor-name { inbound | outbound }
NetStream monitoring services are deployed in the inbound or outbound direction
of the interface.
NOTE
When configuring NetStream monitoring services, you need to run the ipv6 netstream
{ inbound | outbound } command in the interface view. Otherwise, the ipv6 netstream
monitor monitor-name { inbound | outbound } command does not take effect.
If NetStream monitoring services have been configured on the interface, statistics about original
flows are sent to the destination IP address specified in the NetStream monitoring service view,
not that specified in the system view. Similarly, the source address and source port configured in
the NetStream monitoring service view are used for outputting statistics.
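For example, the following sketch (the monitor name, addresses, interface, and view prompts are hypothetical) creates a monitoring service and deploys it on an interface:
[*NDE] ipv6 netstream monitor nsm1
[*NDE-netstream-monitor-nsm1] ipv6 netstream export host ipv6 2001:db8::2 9001
[*NDE-netstream-monitor-nsm1] quit
[*NDE] interface GigabitEthernet 1/0/0
[*NDE-GigabitEthernet1/0/0] ipv6 netstream inbound
[*NDE-GigabitEthernet1/0/0] ipv6 netstream monitor nsm1 inbound
[*NDE-GigabitEthernet1/0/0] commit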
----End
1.1.3.5.4 (Optional) Adjusting the AS Field Mode and Interface Index Type
For the NetStream Collector (NSC) to properly receive and parse NetStream
packets output by the NetStream Data Exporter (NDE), ensure that the AS field
modes and interface index types configured on the NDE and the NSC are the
same.
Context
Before you enable the NSC to properly receive and parse NetStream packets
output by the NDE, specify the same AS field mode and interface index type on
the NDE and NSC.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTE
If the 32-bit AS field mode is used, the NMS must identify the 32-bit AS field. If the
NMS cannot identify the 32-bit AS field, the NMS fails to identify inter-AS traffic sent
by devices.
● Interface index: The NMS uses an interface index carried in a NetStream
packet to query information about the interface that sends the packet. The
interface index can be 16 or 32 bits long. The index length is determined by
the NMSs of different vendors. Therefore, the NDE must use a proper
interface index type that is also supported by the NMS. For example, if the
NMS can parse 32-bit interface indexes, set the format of the interface
indexes in the output NetStream packets to 32-bit.
NOTE
Compared with the default 16-bit interface index, the 32-bit interface index can be
identified by more third-party NMSs.
Procedure
Step 1 Run system-view
The length type of the interface index carried in the NetStream packets output by
the device is configured.
An interface index can be changed from 16 bits to 32 bits only after the following
conditions are met:
● Original flows are output in V9 or IPFIX format.
● All aggregation flows are output in V9 or IPFIX format.
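For example, the following sketch assumes that the as-mode and export index-switch commands provide these settings; verify the exact command names in your software version:
[*NDE] ip netstream as-mode 32
[*NDE] ipv6 netstream export index-switch 32
[*NDE] commit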
----End
Context
Perform the following steps on the router on which TCP flag statistics are to be
collected.
Enabling statistics collection of TCP flags allows the device to extract TCP flag
information from network packets and send it to the NMS, which can then
determine whether the network is under flood attacks.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream tcp-flag enable
Statistics collection of TCP flags in original flows is enabled.
Step 3 Run commit
The configuration is committed.
----End
Context
Regardless of the flow format in which the traffic statistics are output, option
packet data is exported to the NetStream Collector (NSC) as a supplement. In this
way, the NetStream Data Exporter (NDE) can obtain information, such as the
sampling ratio and whether the sampling function is enabled, to reflect the actual
network traffic.
Currently, the option packets supported by IPv6 networks are interface option
packets, which are used to send the NetStream configurations of all the boards on
the NDE to the NSC in a scheduled manner. The configurations cover the interface
index, statistics collection direction, and sampling value in the inbound/outbound
direction.
Option packets, which are independent of statistics packets, are exported to the
NSC in V9 or IPFIX format. Therefore, the corresponding option template is sent to
the NMS for parsing option packets. You can set option template refreshing
parameters as needed for the device to regularly refresh the template to notify the
NSC of the latest option template format.
Procedure
● Run system-view
----End
Context
NOTE
Procedure
Step 1 Run system-view
Step 2 Configure a sampling mode and sampling ratio by performing at least one of the
following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
A sampling mode and sampling ratio are configured globally.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure a sampling mode and sampling ratio on an interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
A sampling mode and sampling ratio are configured on the interface.
NOTE
The sampling mode and sampling ratio configured in the system view apply to
all interfaces on the device. The sampling mode and sampling ratio configured in
the interface view take precedence over those configured in the system view.
The ipv6 netstream sampler command run in the system view has the
same function as that run in the interface view.
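For example, the following sketch (the device name, interface, and sampling values are hypothetical) configures random sampling globally and fixed-packet sampling on one interface:
[*NDE] ipv6 netstream sampler random-packets 1000 inbound
[*NDE] interface GigabitEthernet 2/0/0
[*NDE-GigabitEthernet2/0/0] ipv6 netstream sampler fix-packets 100 outbound
[*NDE-GigabitEthernet2/0/0] commit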
The traffic statistics diagnosis function is enabled so that you can compare the
traffic statistics collected by the device with those restored by the NMS to
determine the cause of inaccurate sampling.
----End
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
The interface view is displayed.
Step 3 Run ip netstream mpls exclude
MPLS packet sampling is disabled on the interface.
Step 4 Run commit
The configuration is committed.
----End
Prerequisites
NetStream statistics collection configurations are complete.
Procedure
● Run the display ipv6 netstream cache origin [ source-ipv6 source-ip ]
[ source-port source-port ] [ destination-ipv6 destination-ip ] [ destination-
port destination-port ] [ protocol { udp | tcp | protocol-number } ] [ time-
range from start-time to end-time ] [ source-interface { source-interface-
type source-interface-num | source-interface-name } ] [ destination-
interface { destination-interface-type destination-interface-num | destination-
interface-name } ] slot slot-id command to check information about the
NetStream buffer.
● Run the display ipv6 netstream statistics slot slot-id command to check
statistics about NetStream packets.
Usage Scenario
On the network shown in Figure 1-13, a carrier enables NetStream on the router
to obtain detailed network application information. The carrier can use the
information to monitor abnormal network traffic, analyze users' operation modes,
and plan networks between ASs.
Statistics about NetStream aggregated flows combine information about original
flows with the same attributes, whereas statistics about NetStream original flows
record each sampled packet. The volume of aggregated flow statistics is therefore
smaller than that of original flow statistics.
Pre-configuration Tasks
Before collecting the statistics about IPv6 aggregated flows, complete the
following tasks:
● Configure parameters of the link layer protocol and IP addresses for interfaces
so that the link layer protocol on the interfaces can go Up.
● Configure static routes or enable an IGP to implement network connectivity.
● Enable statistics collection for NetStream original flows.
Context
NetStream services can be processed in the following modes:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
The ip netstream sampler to slot command has the same function as the ipv6
netstream sampler to slot command. Either command takes effect on all packets,
so there is no need to configure both of them. If both are configured, ensure that
the NetStream service processing modes are the same; a mode inconsistency
causes an error.
Procedure
● Specify the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ipv6 netstream sampler to slot self
The distributed NetStream service processing mode is specified.
----End
Procedure
Step 1 Run system-view
After collecting statistics about NetStream original flows, the router aggregates original
flows into aggregated flows based on specified rules, encapsulates aggregated flows into
UDP packets, and sends UDP packets after the aging timer expires. Aggregating original
flows minimizes the consumption of network bandwidths, CPU resources, and memory
resources. Attributes based on which flows are aggregated vary according to aggregation
modes.
The length of the aggregate mask is set. The mask used by the system is the
greater one between the mask in the FIB table and the configured mask. If no
aggregate mask is set, the system uses the mask in the FIB table for flow
aggregation.
NOTE
The aggregate mask takes effect only on flows aggregated in the following modes:
destination-prefix, destination-prefix-tos, prefix, prefix-tos, source-prefix, and source-prefix-
tos.
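For example, the following sketch (the aggregation mode, mask length, and view prompt are illustrative and may differ by software version) enables destination-prefix aggregation and sets a 24-bit aggregate mask:
[*NDE] ipv6 netstream aggregation destination-prefix
[*NDE-aggregation-dstpre] enable
[*NDE-aggregation-dstpre] mask 24
[*NDE-aggregation-dstpre] commit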
----End
Context
IPv6 aggregated flows can be exported only in V9 or IPFIX format.
Procedure
Step 1 Run system-view
Step 2 Run ipv6 netstream export host ip-address port [ vpn-instance vpn-instance-
name ] [ dscp dscp-value ]
The destination IP address of the exported packets carrying statistics is configured.
The destination IP address specified in the system view takes precedence over that
specified in the aggregation view.
The interval at which the template is refreshed when aggregated flows are
exported in the V9 or IPFIX format is set.
Step 7 Run ipv6 netstream export source { ip-address | ipv6 ipv6-address } [ port ]
The source address and source port for exporting statistics are configured.
The source IP address and source port configured in the aggregation view take
precedence over those configured in the system view. If no source IP address and
source port are configured in the aggregation view, those configured in the
system view are used.
Step 8 Run ipv6 netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ dscp dscp-value ]
A destination IP address for exporting statistics and a UDP port number for the
peer NSC are configured.
NOTE
● You can specify a maximum of eight destination IP addresses in the system view,
IPv4 aggregation view, and IPv6 aggregation view.
● The destination IP address specified in the system view takes precedence over that
specified in the aggregation view.
----End
1.1.3.6.4 (Optional) Adjusting the AS Field Mode and Interface Index Type
For the NetStream Collector (NSC) to properly receive and parse NetStream
packets output by the NetStream Data Exporter (NDE), ensure that the AS field
modes and interface index types configured on the NDE and the NSC are the
same.
Context
Before you enable the NSC to properly receive and parse NetStream packets
output by the NDE, specify the same AS field mode and interface index type on
the NDE and NSC.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTE
If the 32-bit AS field mode is used, the NMS must identify the 32-bit AS field. If the
NMS cannot identify the 32-bit AS field, the NMS fails to identify inter-AS traffic sent
by devices.
● Interface index: The NMS uses an interface index carried in a NetStream
packet to query information about the interface that sends the packet. The
interface index can be 16 or 32 bits long. The index length is determined by
the NMSs of different vendors. Therefore, the NDE must use a proper
interface index type that is also supported by the NMS. For example, if the
NMS can parse 32-bit interface indexes, set the format of the interface
indexes in the output NetStream packets to 32-bit.
NOTE
Compared with the default 16-bit interface index, the 32-bit interface index can be
identified by more third-party NMSs.
Procedure
Step 1 Run system-view
The system view is displayed.
The length type of the interface index carried in the NetStream packets output by
the device is configured.
An interface index can be changed from 16 bits to 32 bits only after the following
conditions are met:
● Original flows are output in V9 or IPFIX format.
● All aggregation flows are output in V9 or IPFIX format.
----End
Context
Regardless of the flow format in which the traffic statistics are output, option
packet data is exported to the NetStream Collector (NSC) as a supplement. In this
way, the NetStream Data Exporter (NDE) can obtain information, such as the
sampling ratio and whether the sampling function is enabled, to reflect the actual
network traffic.
Currently, the option packets supported by IPv6 networks are interface option
packets, which are used to send the NetStream configurations of all the boards on
the NDE to the NSC in a scheduled manner. The configurations cover the interface
index, statistics collection direction, and sampling value in the inbound/outbound
direction.
Option packets, which are independent of statistics packets, are exported to the
NSC in V9 or IPFIX format. Therefore, the corresponding option template is sent to
the NMS for parsing option packets. You can set option template refreshing
parameters as needed for the device to regularly refresh the template to notify the
NSC of the latest option template format.
Procedure
● Run system-view
The packet sending interval and timeout interval are set for option
template refreshing. An option template can be refreshed at a fixed
packet sending interval or timeout interval, and the two intervals can
take effect at the same time.
----End
Context
NOTE
Procedure
Step 1 Run system-view
Step 2 Configure a sampling mode and sampling ratio by performing at least one of the
following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
A sampling mode and sampling ratio are configured globally.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure a sampling mode and sampling ratio on an interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
A sampling mode and sampling ratio are configured on the interface.
NOTE
The sampling mode and sampling ratio configured in the system view apply to
all interfaces on the device. The sampling mode and sampling ratio configured in
the interface view take precedence over those configured in the system view.
The ipv6 netstream sampler command run in the system view has the
same function as that run in the interface view.
The traffic statistics diagnosis function is enabled so that you can compare the
traffic statistics collected by the device with those restored by the NMS to
determine the cause of inaccurate sampling.
----End
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
----End
Context
Run the following commands to check the previous configurations.
Procedure
● Run the display ipv6 netstream cache { as | as-tos | bgp-nexthop-tos |
destination-prefix | destination-prefix-tos | index-tos | prefix | prefix-tos |
protocol-port | protocol-port-tos | source-prefix | source-prefix-tos | mpls-
label | vlan-id | flexflowtpl record-name } slot slot-id command to view
various aggregated flows in the buffer.
● Run the display ipv6 netstream statistics slot slot-id command to check
statistics about NetStream packets.
● Run the display ip netstream statistics interface { interface-name |
interface-type interface-number } command to check statistics about sampled
packets on an interface.
● Run the display netstream { all | global | interface interface-type interface-
number } command to check NetStream configurations in different views.
● Run the display ip netstream cache aggregation statistics slot slot-id
command to check aggregation flow table specifications and the number of
current flows of a specific board.
----End
Usage Scenario
On the network shown in Figure 1-14, a carrier enables NetStream on the router
functioning as an NDE to obtain detailed network application information. The
user can use the information to monitor abnormal network traffic, analyze users'
operation modes, and plan networks between ASs.
Flexible flow packets provide user-defined templates for users to customize
matching and collected fields as required. The user-defined template improves
traffic analysis accuracy and reduces network bandwidth occupation, CPU usage,
and storage space usage.
Pre-configuration Tasks
Before collecting the statistics about IPv4 flexible flows, configure static routes or
enable an IGP to implement network connectivity.
Context
NetStream services can be processed in the following mode:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
Procedure
● Configure the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ip netstream sampler to slot self
The distributed NetStream service processing mode is specified.
d. Run commit
The configuration is committed.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream record record-name
An IPv4 flexible flow statistics template is created, and its recording view is
displayed.
Step 3 Run match { { source | destination } { vlan | as | port | address | mask } | mpls
top-label ip-address | mpls label position | { protocol | tos | direction | tcp-
flag } | { input | output } interface | next-hop [ bgp ] }
Aggregation keywords of the flexible flow statistics template are configured.
Step 4 (Optional) Run collect {{ first | last } switched | input { packets | bytes } length |
flow-end-reason }
The device is configured to add the number of packets, number of bytes, flow
aging reasons, and first and last forwarding time to the flexible flow statistics sent
to the NetStream Collector (NSC).
Step 5 Run commit
The configuration is committed.
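For example, the following sketch (the template name, field choices, and view prompt are hypothetical) defines a template that aggregates flows by address and protocol and also reports the flow aging reason:
[*NDE] ip netstream record r1
[*NDE-record-r1] match source address
[*NDE-record-r1] match destination address
[*NDE-record-r1] match protocol
[*NDE-record-r1] collect flow-end-reason
[*NDE-record-r1] commit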
----End
Procedure
Step 1 Run system-view
The system view is displayed.
c. Run quit
The system view is displayed.
----End
Context
Increasing types of services and applications on networks urge carriers to provide
more fine-grained management and accounting services.
Procedure
Step 1 Run system-view
A source IP address and a source port are configured for outputting NetStream
flow statistics.
Step 4 Run ip netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ version { 5 | 9 | ipfix } ] [ dscp dscp-value ]
The destination IP address for outputting statistics and the UDP port number of
the peer NSC are configured.
Step 5 Run apply record record-name
Flexible flows are applied to monitoring services.
Step 6 Run quit
Return to the system view.
Step 7 Run interface interface-type interface-number
The interface view is displayed.
Step 8 Run ip netstream monitor monitor-name { inbound | outbound }
NetStream monitoring services are deployed in the inbound or outbound direction
of the interface.
NOTE
When configuring NetStream monitoring services, you need to run the ip netstream { inbound
| outbound } command in the interface view. Otherwise, the ip netstream monitor monitor-
name { inbound | outbound } command does not take effect.
If flexible flows are applied to both the NetStream monitoring service view and system view,
statistics about flexible flows are sent to the destination IP address specified in the NetStream
monitoring service view, not that specified in the system view. Similarly, the source address and
source port configured in the NetStream monitoring service view are used for outputting
NetStream flow statistics.
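For example, the following sketch (the monitor and template names, address, and view prompts are hypothetical) applies a flexible flow template to a monitoring service and deploys it on an interface:
[*NDE] ip netstream monitor nsm1
[*NDE-netstream-monitor-nsm1] ip netstream export host 192.168.2.2 9001 version 9
[*NDE-netstream-monitor-nsm1] apply record r1
[*NDE-netstream-monitor-nsm1] quit
[*NDE] interface GigabitEthernet 1/0/0
[*NDE-GigabitEthernet1/0/0] ip netstream inbound
[*NDE-GigabitEthernet1/0/0] ip netstream monitor nsm1 inbound
[*NDE-GigabitEthernet1/0/0] commit
----End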
1.1.3.7.5 (Optional) Adjusting the AS Field Mode and Interface Index Type
For the NetStream Collector (NSC) to properly receive and parse NetStream
packets output by the NetStream Data Exporter (NDE), ensure that the AS field
modes and interface index types configured on the NDE and the NSC are the
same.
Context
The NSC can properly receive and parse NetStream packets output by the NDE
only when the AS field modes and interface index types on the NDE and NSC are
the same.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTE
If the 32-bit AS field mode is used, the NMS must identify the 32-bit AS field. If the
NMS cannot identify the 32-bit AS field, the NMS fails to identify inter-AS traffic sent
by devices.
● Interface index: The NMS uses an interface index carried in a NetStream
packet output by the NDE to query information about the interface that sends
the packet. The interface index can be 16 or 32 bits long. The NMSs of
different vendors may support different interface index lengths. As such, the
NDE must use an interface index length that is supported by the NMS. For
example, if the NMS can parse 32-bit interface indexes, set the length of the
interface indexes carried in the output NetStream packets to 32-bit.
NOTE
Compared with the default 16-bit interface index, the 32-bit interface index can be
identified by more third-party NMSs.
Procedure
Step 1 Run system-view
The length type of the interface index carried in NetStream packets output by the
device is set.
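For example, assuming the device provides the ip netstream export index-switch command for this setting, the 32-bit index length can be configured as follows:
[*NDE] ip netstream export index-switch 32
[*NDE] commit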
----End
Context
Regardless of the flow format in which the traffic statistics are output, option
packet data is exported to the NetStream Collector (NSC) as a supplement. In this
way, the NetStream Data Exporter (NDE) can obtain information, such as the
sampling ratio and whether the sampling function is enabled, to reflect the actual
network traffic.
Option packets, which are independent of statistics packets, are exported to the
NSC in V9 or IPFIX format. Therefore, the required option template is sent to the
NMS for parsing option packets. You can set option template refreshing
parameters as needed to regularly refresh the template to notify the NSC of the
latest option template format.
Procedure
● Run system-view
----End
Procedure
Step 1 Run system-view
Step 2 Configure a sampling mode and sampling ratio by performing at least one of the
following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ip netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
A sampling mode and sampling ratio are configured globally.
b. Run interface interface-type interface-number
The interface view is displayed.
NOTE
The sampling mode and sampling ratio configured in the system view apply to
all interfaces on the device. The sampling mode and sampling ratio configured in
the interface view take precedence over those configured in the system view.
NOTE
You need to enter the traffic behavior view before running this command.
----End
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-number
----End
Procedure
● Run the display ip netstream statistics slot slot-id command to check
NetStream packet statistics.
● Run the display ip netstream statistics interface { interface-name |
interface-type interface-number } command to check statistics about sampled
packets on an interface.
● Run the display netstream { all | global | interface interface-type interface-
number } command to check NetStream configurations in different views.
● Run the display ip netstream monitor { all | monitor-name } command to
check monitoring information about IPv4 flexible flows.
----End
Usage Scenario
On the network shown in Figure 1-15, a carrier enables NetStream on the router
functioning as an NDE to obtain detailed network application information. The
user can use the information to monitor abnormal network traffic, analyze users'
operation modes, and plan networks between ASs.
Flexible flow packets provide user-defined templates for users to customize
matching and collected fields as required. The user-defined template improves
traffic analysis accuracy and reduces network bandwidth occupation, CPU usage,
and storage space usage.
Pre-configuration Tasks
Before collecting the statistics about IPv6 flexible flows, configure static routes or
enable an IGP to implement network connectivity.
Context
NetStream services can be processed in the following mode:
● Distributed mode
An interface board samples packets, aggregates flows, and outputs flows.
Procedure
● Configure the distributed NetStream service processing mode.
a. Run system-view
The system view is displayed.
b. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
c. Run ipv6 netstream sampler to slot self
The distributed NetStream service processing mode is specified.
d. Run commit
The configuration is committed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream record record-name
An IPv6 flexible flow statistics template is created, and its recording view is
displayed.
Step 3 Run match { { source | destination } { vlan | as | port | address | mask } | mpls
top-label ip-address | mpls label position | { protocol | tos | direction | tcp-
flag } | { input | output } interface | next-hop [ bgp ] }
Aggregation keywords of the flexible flow statistics template are configured.
Step 4 (Optional) Run collect { { first | last } switched | input { packets | bytes } length
| flow-end-reason }
The device is configured to add the number of packets, number of bytes, flow
aging reasons, and first and last forwarding time to the flexible flow statistics sent
to the NetStream Collector (NSC).
Step 5 Run commit
The configuration is committed.
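For example, the following sketch (the template name, field choices, and view prompt are hypothetical) defines an IPv6 template keyed on ports and TCP flags that also records the first forwarding time:
[*NDE] ipv6 netstream record r6
[*NDE-record-r6] match source port
[*NDE-record-r6] match destination port
[*NDE-record-r6] match tcp-flag
[*NDE-record-r6] collect first switched
[*NDE-record-r6] commit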
----End
Context
IPv6 flexible flow packets can be output only in the V9 format.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream export version 9 [ origin-as | peer-as ] [ bgp-nexthop ]
The output version number and AS option of flexible flow packets are specified.
Step 3 (Optional) Configure NetStream packets to carry the flow sequence field.
1. Run slot slot-id
The view of the slot in which the interface board for NetStream sampling
resides is displayed.
2. Run ip netstream export sequence-mode flow
The NetStream export sequence mode is set to flow.
NOTE
----End
Context
Increasing types of services and applications on networks urge carriers to provide
more fine-grained management and accounting services.
Procedure
Step 1 Run system-view
Step 3 Run ipv6 netstream export host [ ipv6 ] ip-address port [ vpn-instance vpn-
instance-name ] [ version { 9 | ipfix } ] [ dscp dscp-value ]
The destination address for outputting statistics and the UDP port number of the
peer NSC are configured.
Step 4 (Optional) Run ipv6 netstream export source { ip-address | ipv6 ipv6-address }
[ port ]
A source IP address and a source port are configured for outputting NetStream
flow statistics.
NOTE
When configuring NetStream monitoring services, you need to run the ipv6 netstream
{ inbound | outbound } command in the interface view. Otherwise, the ipv6 netstream
monitor monitor-name { inbound | outbound } command does not take effect.
If flexible flows are applied to both the NetStream monitoring service view and system view,
statistics about flexible flows are sent to the destination IP address specified in the NetStream
monitoring service view, not that specified in the system view. Similarly, the source address and
source port configured in the NetStream monitoring service view are used for outputting
NetStream flow statistics.
----End
1.1.3.8.5 (Optional) Adjusting the AS Field Mode and Interface Index Type
For the NSC to properly receive and parse NetStream packets output by the NDE,
ensure that the AS field modes and interface index types configured on the NDE
and the NSC are the same.
Context
The NSC can properly receive and parse NetStream packets output by the NDE
only when the AS field modes and interface index types on the NDE and NSC are
the same.
● AS field mode: The length of the AS field in IP packets can be set to 16 bits
or 32 bits. Devices on a network must use the same AS field mode. An AS
field mode inconsistency causes NetStream to fail to sample inter-AS traffic.
NOTE
If the 32-bit AS field mode is used, the NMS must identify the 32-bit AS field. If the
NMS cannot identify the 32-bit AS field, the NMS fails to identify inter-AS traffic sent
by devices.
● Interface index: The NMS uses an interface index carried in a NetStream
packet output by the NDE to query information about the interface that sends
the packet. The interface index can be 16 or 32 bits long. The NMSs of
different vendors may support different interface index lengths. As such, the
NDE must use an interface index length that is supported by the NMS. For
example, if the NMS can parse 32-bit interface indexes, set the length of the
interface indexes carried in the output NetStream packets to 32-bit.
NOTE
Compared with the default 16-bit interface index, the 32-bit interface index can be
identified by more third-party NMSs.
Procedure
Step 1 Run system-view
The length type of the interface index carried in NetStream packets output by the
device is set.
----End
Context
Regardless of the flow format in which the traffic statistics are output, option
packet data is exported to the NetStream Collector (NSC) as a supplement. In this
way, the NetStream Data Exporter (NDE) can obtain information, such as the
sampling ratio and whether the sampling function is enabled, to reflect the actual
network traffic.
Currently, the option packets supported by IPv6 networks are interface option
packets, which are used to send the NetStream configurations of all the boards on
the NDE to the NSC in a scheduled manner. The configurations cover the interface
index, statistics collection direction, and sampling value in the inbound/outbound
direction.
Option packets, which are independent of statistics packets, are exported to the
NSC in V9 or IPFIX format. Therefore, the corresponding option template is sent to
the NMS for parsing option packets. You can set option template refreshing
parameters as needed for the device to regularly refresh the template to notify the
NSC of the latest option template format.
Procedure
● Run system-view
----End
Procedure
Step 1 Run system-view
Step 2 Configure a sampling mode and sampling ratio by performing at least one of the
following steps:
● Configure a sampling mode and sampling ratio globally.
a. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
A sampling mode and sampling ratio are configured globally.
b. Run interface interface-type interface-number
The interface view is displayed.
● Configure a sampling mode and sampling ratio on an interface.
a. Run interface interface-type interface-number
The interface view is displayed.
b. Run ipv6 netstream sampler { fix-packets fix-packet-number | random-
packets random-packet-number | fix-time fix-time-number } { inbound |
outbound }
A sampling mode and sampling ratio are configured on the interface.
NOTE
The sampling mode and sampling ratio configured in the system view apply to
all interfaces on the device. The sampling mode and sampling ratio configured in
the interface view take precedence over those configured in the system view.
The ipv6 netstream sampler command run in the system view has the
same function as that run in the interface view.
NOTE
For an interface bound to a VPN instance, NetStream applies to all packets of the VPN
instance.
The traffic statistics diagnosis function is enabled so that you can compare the
traffic statistics collected by the device with those restored by the NMS to
determine the cause of inaccurate sampling.
----End
Prerequisites
NetStream has been enabled on an interface.
Procedure
Step 1 Run system-view
----End
Prerequisites
NetStream IPv6 flow statistics have been collected.
Procedure
● Run the display ipv6 netstream statistics slot slot-id command to check
statistics about NetStream flows.
● Run the display ip netstream statistics interface { interface-name |
interface-type interface-number } command to check statistics about sampled
packets on an interface.
● Run the display netstream { all | global | interface interface-type interface-
number } command to check NetStream configurations in different views.
● Run the display ipv6 netstream monitor { all | monitor-name } command to
check monitoring information about IPv6 flexible flows.
----End
Usage Scenario
On the network shown in Figure 1-16, a carrier enables NetStream on the router
functioning as a NetStream Data Exporter (NDE) to obtain detailed network
application information. The carrier can use the information to monitor abnormal
network traffic, analyze users' operation modes, and plan networks between ASs.
If statistics about MPLS packets are collected on the P, the P sends statistics to
inform the NetStream Collector (NSC) of the MPLS label-specific traffic volume.
Context
Before collecting statistics about MPLS IPv4 packets, enable MPLS on the device
and interfaces and configure the MPLS network.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip netstream mpls-aware { label-only | ip-only | label-and-ip }
Statistics collection for MPLS packets is enabled.
One of the following parameters can be configured to sample MPLS packets:
● label-only: The device samples only MPLS labels, not inner IP packets.
● ip-only: The device samples only inner IP packets, not MPLS labels.
● label-and-ip: The device samples both MPLS labels and inner IP packets.
Step 3 Output statistics about MPLS IPv4 packets in the form of original or aggregated
flows. See 1.1.3.3 Collecting Statistics About IPv4 Original Flows and 1.1.3.4
Collecting Statistics About IPv4 Aggregated Flows as required.
----End
Usage Scenario
On the network shown in Figure 1-17, a carrier enables NetStream on the router
to obtain detailed network application information. The carrier can use the
information to monitor abnormal network traffic, analyze users' operation modes,
and plan networks between ASs.
If statistics about MPLS packets are collected on the P (NDE), the P sends statistics
to inform the NetStream Collector (NSC) of the MPLS label-specific traffic volume.
Context
Before collecting statistics about MPLS IPv6 packets, enable MPLS on the device
and interfaces and configure the MPLS network.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ipv6 netstream mpls-aware { label-only | ip-only | label-and-ip }
Statistics collection for MPLS packets is enabled.
One of the following parameters can be configured to sample MPLS packets:
● label-only: The device samples only MPLS labels, not inner IP packets.
● ip-only: The device samples only inner IP packets, not MPLS labels.
● label-and-ip: The device samples both MPLS labels and inner IP packets.
Step 3 Output statistics about MPLS IPv6 packets in the form of original or aggregated
flows. See 1.1.3.5 Collecting Statistics About IPv6 Original Flows and 1.1.3.6
Collecting Statistics About IPv6 Aggregated Flows as required.
NOTE
Statistics about MPLS original flows and aggregated flows can be collected in V9 or IPFIX
format.
----End
Usage Scenario
In Figure 1-18, statistics about MPLS flows sent by the P to the NetStream
Collector (NSC) inform the NSC of the traffic volume and traffic type
corresponding to each label. Such statistics, however, cannot indicate to which VPN
each traffic flow belongs. In this case, the PE sends the meaning of each label (1024 in
the figure) to the NSC so that the NSC can determine to which VPN the received
traffic belongs. The NSC can analyze the traffic data of each VPN and display the
result.
Figure 1-18 Networking diagram for collecting statistics about BGP/MPLS VPN
flows
Context
Before collecting statistics about BGP/MPLS VPN flows, deploy the BGP/MPLS VPN
network.
Procedure
● Enable the P to collect statistics about MPLS flows.
----End
Context
On the network shown in Figure 1-19, you can deploy NetStream on an SRv6
network to obtain detailed network application information. When packets reach
the NDE, the NDE can collect both outer IPv6 information and inner IPv4
information. After the NDE sends flow statistics to the NSC, the NSC collects the
statistics and sends them to the NetStream Data Analyzer (NDA) for analysis.
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 To sample outer IPv6 packets, configure IPv6 flow statistics collection as required.
For details, see 1.1.3.5 Collecting Statistics About IPv6 Original Flows, 1.1.3.8
Collecting Statistics About IPv6 Flexible Flows, or 1.1.3.6 Collecting Statistics
About IPv6 Aggregated Flows.
Step 3 To sample inner packet information carried by SRv6, perform the following
operations as required:
● In an IPv6 over SRv6 scenario, run the ipv6 netstream srv6-aware inner-
header command to enable NetStream for SRv6 inner packet information to
sample inner IPv6 packets.
● In an IPv4 over SRv6 scenario:
a. Configure IPv4 flow statistics collection as required. For details, see
1.1.3.3 Collecting Statistics About IPv4 Original Flows, 1.1.3.7
Collecting Statistics About IPv4 Flexible Flows, or 1.1.3.4 Collecting
Statistics About IPv4 Aggregated Flows.
b. Run the ipv6 netstream srv6-aware inner-header command to enable
NetStream for SRv6 inner packet information to sample inner IPv4
packets.
Step 4 Run the commit command to commit the configuration.
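For example, in an IPv4 over SRv6 scenario, the following sketch (device name hypothetical) enables sampling of the inner IPv4 header after IPv4 flow statistics collection has been configured:
[*NDE] ipv6 netstream srv6-aware inner-header
[*NDE] commit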
----End
Procedure
● Run the display ip netstream cache origin slot slot-id command to check
information about the NetStream flow buffer.
● Run the display ip netstream statistics slot slot-id command to check
statistics about NetStream packets.
● Run the display ip netstream statistics interface interface-type interface-
number command to check statistics about the sampled packets on an
interface.
● Run the display netstream { all | global | interface interface-type interface-
number } command to check NetStream configurations in different views.
● Run the display ip netstream cache { as | as-tos | bgp-nexthop-tos |
destination-prefix | destination-prefix-tos | index-tos | mpls-label | prefix |
prefix-tos | protocol-port | protocol-port-tos | source-prefix | source-prefix-
tos | source-index-tos | vni-sip-dip | vlan-id } slot slot-id command to check
information about various aggregated flows in the buffer.
● Run the display ip netstream export option command to check information
about the output option template.
● Run the display ipv6 netstream cache { origin | as | as-tos | bgp-nexthop-
tos | destination-prefix | destination-prefix-tos | index-tos | prefix | prefix-
tos | protocol-port | protocol-port-tos | source-prefix | source-prefix-tos |
mpls-label | vlan-id } slot slot-id command to check information about
various aggregated flows in the buffer.
● Run the display ipv6 netstream statistics slot slot-id command to check
statistics about NetStream packets.
● Run the display ip netstream sampler-id allocated-info [ slot slot-id ]
command to check the sampling ID allocation information on a specified
interface board.
----End
Procedure
● Run the reset ip netstream statistics command to delete IPv4 NetStream
template statistics.
● Run the reset ipv6 netstream statistics command to delete IPv6 NetStream
template statistics.
----End
Networking Requirements
On the network shown in Figure 1-20, NetStream is configured to collect statistics
about the source IP address, destination IP address, port, and protocol information
of network packets on the user side. Such statistics help analyze users' behaviors
and detect virus-infected terminals, sources and destinations of denial of service
(DoS) and distributed denial of service (DDoS) attacks, sources of spam, and
unauthorized websites. Based on other characteristics of NetStream data flows,
other network devices can filter out virus-infected traffic and restrict its spread.
● In this example, interface1 and interface2 represent GE 1/0/0 and GE 2/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure the PE and CE to communicate.
Assign an IP address and a mask to each interface according to Figure 1-20. The
configuration details are not provided.
Step 2 Enable NetStream statistics collection on GE 1/0/0 of the PE.
# Configure the board to process NetStream services in distributed mode.
[*PE] slot 1
[*PE-slot-1] ip netstream sampler to slot self
[*PE-slot-1] quit
# Set the version for outputting NetStream flows to V5, and specify the source
and destination addresses and destination port number for the output flows.
[*PE] ip netstream export host 192.168.2.2 9001
[*PE] ip netstream export source 192.168.2.1
# Enable NetStream sampling and configure the fixed packet sampling mode.
[*PE] ip netstream sampler fix-packets 10000 inbound
[*PE] ip netstream sampler fix-packets 10000 outbound
[*PE] commit
NOTE
NetStream enabled on a main interface cannot collect traffic statistics about its sub-
interface.
# Run the display ip netstream cache origin slot 1 command to check the collected flow information. The following output is an example:
DstIf                          SrcIf
GigabitEthernet2/0/0           GigabitEthernet1/0/0
DstP   Msk  Pro  Tos           SrcP   Msk  Flags  Ttl
0      24   253  0             0      24   0      60
Packets                        Bytes
3                              384
NextHop                        Direction
192.168.2.1                    in
DstIP                          DstAs
192.168.1.3                    0
SrcIP                          SrcAs
192.168.1.4                    0
BGP: BGP NextHop               TopLabelType
0.0.0.0                        UNKNOWN
Label1  Exp1  Bottom1          Label2  Exp2  Bottom2
0       0     0                0       0     0
Label3  Exp3  Bottom3          TopLabelIpAddress  VlanId  VniId
0       0     0                0.0.0.0            0       0
CreateFlowTime                 LastRefreshTime                VPN(direct)
2018-05-09 11:38:07            2018-05-09 11:40:30            --
FlowLabel                      Rdvalue
--                             -:-
ForwardStatus
66(Forwarded Not Fragmented)
----End
Configuration Files
● CE configuration file
#
sysname CE
#
interface GigabitEthernet 1/0/0
ip address 192.168.1.2 255.255.255.0
#
return
● PE configuration file
#
slot 1
ip netstream sampler to slot self
#
sysname PE
#
ip netstream tcp-flag enable
ip netstream sampler fix-packets 10000 inbound
ip netstream sampler fix-packets 10000 outbound
ip netstream export source 192.168.2.1
ip netstream export host 192.168.2.2 9001
#
interface gigabitethernet 2/0/0
ip address 192.168.2.1 255.255.255.0
#
interface GigabitEthernet 1/0/0
ip address 192.168.1.1 255.255.255.0
ip netstream inbound
ip netstream outbound
#
return
Networking Requirements
On the network shown in Figure 1-21, DeviceD connects network A and network
B to the wide area network (WAN). DeviceD samples and aggregates flows before
sending them to the NetStream Collector (NSC).
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure reachable routes between the egress router of the LAN and the
WAN.
2. Configure reachable routes between the ingress router of the LAN and the
NSC.
3. Configure the ingress router of the LAN to send traffic statistics to the
specified NSC.
4. Configure the ingress router of the LAN to send traffic statistics to the
inbound interface on the NSC.
5. Aggregate sampled flows to reduce the data sent to the NSC.
6. Enable NetStream on the inbound interface of the ingress router.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface
● Address of the NSC
● Version for outputting NetStream flows
● NetStream sampling ratio
● ID of the slot in which the NetStream service processing board resides (In this
example, the NetStream service processing board is in slot 1.)
Procedure
Step 1 Configure IP addresses for each router. The configuration details are not provided
here.
Step 2 Configure reachable routes between the WAN, DeviceA, and DeviceB.
# Configure reachable routes between DeviceA and DeviceD.
[*DeviceA] ip route-static 1.1.1.1 24 GigabitEthernet 1/0/0
[*DeviceA] commit
NOTE
NetStream enabled on a main interface cannot collect traffic statistics about its sub-
interface.
[*DeviceD] commit
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/0
ip address 172.16.0.1 255.255.255.0
#
ip route-static 1.1.1.1 255.255.255.0 GigabitEthernet1/0/0
#
return
● DeviceD configuration file
#
sysname DeviceD
#
interface GigabitEthernet3/0/1
ip address 3.3.3.1 255.255.255.0
#
ip netstream aggregation as
enable
export version 9
ip netstream export source 3.3.3.1
ip netstream export host 2.2.2.1 3000
#
return
Networking Requirements
On the network shown in Figure 1-22, DeviceA, DeviceB, and DeviceC support
MPLS and use OSPF as IGP on the MPLS backbone network.
Local Label Distribution Protocol (LDP) sessions are established between DeviceA
and DeviceB and between DeviceB and DeviceC. A remote LDP session is
established between DeviceA and DeviceC. NetStream is enabled on DeviceB to
collect statistics about MPLS flows.
● In this example, interface1 and interface2 represent GE 1/0/0 and GE 2/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces on each router as shown in Figure 1-22, OSPF
process ID (1), and area (Area0)
● DeviceA's remote peer (DeviceC) with name Devicec and IP address 3.3.3.9
● DeviceC's remote peer (DeviceA) with name Devicea and IP address 1.1.1.9
● ID of the slot in which the NetStream service processing board resides (In this
example, the NetStream service processing board is in slot 1.)
Procedure
Step 1 Assign an IP address to each involved interface.
# Assign an IP address and a mask to each interface (including loopback
interfaces) according to Figure 1-22. The configuration details are not provided
here.
Step 2 Configure an LDP session between every two routers.
# Configure OSPF to advertise the host route of each LSR ID and the routes of the
network segments to which the router interfaces are connected. Enable basic
MPLS functions and LDP on each router and its interfaces.
For configurations of the static MPLS TE tunnel, see "Basic MPLS Configurations"
in NE9000 Configuration Guide > MPLS.
Step 3 Enable NetStream statistics collection on GigabitEthernet 1/0/0 of DeviceB.
# Specify the distributed NetStream sampling mode on a board.
[*DeviceB] slot 1
[*DeviceB-slot-1] ip netstream sampler to slot self
[*DeviceB-slot-1] quit
NOTE
NetStream enabled on a main interface cannot collect traffic statistics about its sub-
interface.
# Specify the destination address, destination port number, and source address for
the output flows.
[*DeviceB] ip netstream export host 192.168.1.2 2100
[*DeviceB] ip netstream export source 10.1.2.1
# Enable NetStream sampling and configure the fixed packet sampling mode.
[*DeviceB] ip netstream sampler fix-packets 10000 inbound
[*DeviceB] ip netstream sampler fix-packets 10000 outbound
[*DeviceB] commit
# Run the display ip netstream cache origin slot 1 command to check the collected flow information. The following output is an example:
--------------------------------------------------------------------------
DstIf                          SrcIf
GigabitEthernet2/0/0           GigabitEthernet1/0/0
DstP   Msk  Pro  Tos           SrcP   Msk  Flags  Ttl
0      24   253  0             0      24   0      60
Packets                        Bytes
3                              384
NextHop                        Direction
10.1.2.1                       in
DstIP                          DstAs
10.1.1.5                       0
SrcIP                          SrcAs
192.168.1.4                    0
BGP: BGP NextHop               TopLabelType
0.0.0.0                        UNKNOWN
Label1  Exp1  Bottom1          Label2  Exp2  Bottom2
0       0     0                0       0     0
Label3  Exp3  Bottom3          TopLabelIpAddress  VlanId  VniId
0       0     0                0.0.0.0            0       0
CreateFlowTime                 LastRefreshTime                VPN(direct)
2018-05-09 11:38:07            2018-05-09 11:40:30            --
FlowLabel                      Rdvalue
--                             -:-
ForwardStatus
66(Forwarded Not Fragmented)
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
mpls lsr-id 1.1.1.9
#
mpls
lsp-trigger all
#
mpls ldp
#
mpls ldp remote-peer Devicec
remote-ip 3.3.3.9
#
interface GigabitEthernet1/0/0
undo shutdown
#
return
● DeviceC configuration file
#
mpls ldp remote-peer Devicea
remote-ip 1.1.1.9
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.2.2 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 4.1.1.0 0.0.0.255
network 3.3.3.9 0.0.0.0
#
return
Networking Requirements
As Layer 3 virtual private network (L3VPN) services develop, carriers place
increasingly higher requirements on VPN traffic statistics collection. After
conventional IP networks carry voice and video services, it has become
commonplace for carriers and their customers to sign Service Level Agreements
(SLAs). Deploying NetStream on a BGP/MPLS IP VPN network allows users to
analyze LSP traffic between PEs and adjust the network to better meet service
requirements.
On the IPv4 BGP/MPLS IP VPN network shown in Figure 1-23:
● Packets with specified application labels are sampled on PE2 and sent to the
NetStream Collector (NSC) and NetStream Data Analyzer (NDA).
● Statistics collection of incoming and outgoing packets with specified
application labels is enabled on the P. Packets with specified application labels
sent by the CE are sampled and sent to the NSC and NDA.
● Traffic statistics are analyzed on the NSC and NDA to obtain users' traffic
volume between PEs.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each involved interface.
2. Configure the BGP/MPLS IP VPN.
3. Enable NetStream to sample packets with specified application labels on PE2.
4. Enable NetStream to collect statistics about incoming and outgoing packets
with specified labels on the P.
Data Preparation
To complete the configuration, you need the following data:
● Version for outputting NetStream packets and sampling interval
● Destination address, destination port number, and source address of the
output NetStream flows
● ID of the slot in which the NetStream service processing board resides (In this
example, the NetStream service processing board is in slot 1.)
Procedure
Step 1 Assign an IP address to each involved interface.
Assign an IP address and a mask to each interface (including loopback interfaces)
according to Figure 1-23. The configuration details are not provided here.
Step 2 Configure the BGP/MPLS IP VPN.
For configuration details, see "BGP/MPLS IP VPN Configuration" in NE9000
Configuration Guide > VPN.
Step 3 Enable NetStream to sample packets with specified application labels on PE2.
# Configure the board on PE2 to process NetStream services in distributed mode.
[*PE2] slot 1
[*PE2-slot-1] ip netstream sampler to slot self
[*PE2-slot-1] quit
# Configure PE2 to send information about L3VPN application labels to the NSC.
[*PE2] ip netstream export template option application-label
# Set the version for outputting NetStream flows to V9, and specify the source
and destination addresses and destination port number for the output flows.
[*PE2] ip netstream export version 9
[*PE2] ip netstream export host 192.168.2.2 9000
[*PE2] ip netstream export source 192.168.2.1
Step 4 Enable NetStream to collect statistics about incoming and outgoing packets with
specified labels on the P.
# Configure the board on the P to process NetStream services in distributed mode.
[*P] slot 1
[*P-slot-1] ip netstream sampler to slot self
[*P-slot-1] quit
NOTE
NetStream enabled on a main interface cannot collect traffic statistics about its sub-
interface.
# Set the version for outputting NetStream flows to V9, and specify the source
and destination addresses and destination port number for the output flows.
[*P] ip netstream export version 9
[*P] ip netstream export host 192.168.2.2 9001
[*P] ip netstream export source 172.16.2.1
# Enable NetStream sampling and configure the fixed packet sampling mode.
[*P] ip netstream sampler fix-packets 10000 inbound
[*P] ip netstream sampler fix-packets 10000 outbound
[*P] commit
# Run the display ip netstream cache origin slot 1 command to check
information about original flows in the NetStream flow buffer.
[~P] display ip netstream cache origin slot 1
--------------------------------------------------------------------------
DstIf                                      : GigabitEthernet2/0/0
SrcIf                                      : GigabitEthernet1/0/0
DstP Msk Pro Tos                           : 0 24 253 0
SrcP Msk Flags Ttl                         : 0 24 0 60
Packets Bytes                              : 3 384
NextHop Direction                          : 172.16.3.1 in
DstIP DstAs                                : 10.2.1.5 0
SrcIP SrcAs                                : 10.4.1.5 0
BGP: BGP NextHop TopLabelType              : 0.0.0.0 UNKNOWN
Label1 Exp1 Bottom1                        : 0 0 0
Label2 Exp2 Bottom2                        : 0 0 0
Label3 Exp3 Bottom3                        : 0 0 0
TopLabelIpAddress VlanId VniId             : 0.0.0.0 0 0
CreateFlowTime LastRefreshTime VPN(direct) : 2018-05-09 11:38:07  2018-05-09 11:40:30  --
FlowLabel Rdvalue                          : -- -:-
ForwardStatus                              : 66(Forwarded Not Fragmented)
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
route-distinguisher 100:1
apply-label per-instance
vpn-target 100:1 export-extcommunity
vpn-target 100:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
#
interface GigabitEthernet1/0/0
ip binding vpn-instance vpna
ip address 10.2.1.2 255.255.255.0
#
interface GigabitEthernet3/0/0
ip address 172.16.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
peer 3.3.3.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.9 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 10.1.1.1 as-number 65440
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.16.1.0 0.0.0.255
#
return
● P configuration file
#
slot 1
ip netstream sampler to slot self
#
sysname P
#
ip netstream mpls-aware label-and-ip
ip netstream export version 9
ip netstream sampler fix-packets 10000 inbound
ip netstream sampler fix-packets 10000 outbound
ip netstream export source 172.16.2.1
ip netstream export host 172.16.2.2 9001
#
mpls lsr-id 2.2.2.9
#
mpls
lsp-trigger all
#
mpls ldp
#
interface GigabitEthernet1/0/0
ip address 172.16.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
ip address 172.16.3.1 255.255.255.0
ip netstream inbound
ip netstream outbound
mpls
mpls ldp
#
interface GigabitEthernet3/0/0
ip address 172.16.2.1 255.255.255.0
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 172.16.1.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 2.2.2.9 0.0.0.0
#
return
● PE2 configuration file
#
slot 1
ip netstream sampler to slot self
#
sysname PE2
#
● CE4 configuration file
#
sysname CE4
#
interface GigabitEthernet1/0/0
ip address 10.4.1.1 255.255.255.0
#
bgp 65440
peer 10.4.1.2 as-number 100
#
ipv4-family unicast
import-route direct
peer 10.4.1.2 enable
#
return
Networking Requirements
As the Internet continues to develop rapidly, carrier networks support higher
bandwidth and predictable QoS parameters. As such, carriers need to provide
finer-grained management and accounting services. To implement classified
monitoring over networks more effectively, you can configure NetStream
monitoring services to output traffic statistics collected on specified interfaces to
specified NSCs and NDAs for analysis. This enables collected statistics to be output
to multiple addresses.
As shown in Figure 1-24, GE1/0/0 and GE2/0/0 on DeviceC are connected to two
IPv6 networks through DeviceA and DeviceB, respectively. DeviceC collects traffic
statistics, aggregates the statistics, and sends them to NMS1 and NMS2.
To collect flow-specific statistics, configure NetStream monitoring services in the
inbound direction of GE 1/0/0 and GE 2/0/0 on DeviceC. Traffic statistics collected
on GE 1/0/0 are sent to NMS1 with an IPv4 address and traffic statistics collected
on GE 2/0/0 are sent to NMS2 with an IPv6 address.
● In this example, interface1 and interface2 represent GE 1/0/0 and GE 2/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure IP addresses for each router. The configuration details are not provided
here.
# Enable NetStream sampling and configure the fixed packet sampling mode.
[*DeviceC] ipv6 netstream sampler fix-packets 10000 inbound
# Set the version number and source address of the output packets carrying
original flow statistics.
[*DeviceC] ipv6 netstream export version 9
[*DeviceC] ipv6 netstream export source ipv6 2001:DB8:100::1
NOTE
NetStream enabled on a main interface cannot collect traffic statistics about its sub-
interface.
Address                                  Port
192.168.0.2                              6000
------------------------------------------------------------
Monitor monitor2
ID       : 2
AppCount : 1
Address                                  Port
2001:DB8:100::1                          6000
------------------------------------------------------------
# Run the display ipv6 netstream cache origin slot 1 command to check
information about various original flows in the NetStream flow buffer.
[~DeviceC] display ipv6 netstream cache origin slot 1
--------------------------------------------------------------------------
DstIf                                      : GigabitEthernet2/0/0
SrcIf                                      : GigabitEthernet1/0/0
DstP Msk Pro Tos                           : 0 0 59 0
SrcP Msk Flags Ttl                         : 0 0 0 100
Packets Bytes                              : 443426 56758528
NextHop Direction                          : :: in
DstIP DstAs                                : 2001:DB8:20::1 0
SrcIP SrcAs                                : 2001:DB8:80::1 0
BGP: BGP NextHop TopLabelType              : :: UNKNOWN
Label1 Exp1 Bottom1                        : 0 0 0
Label2 Exp2 Bottom2                        : 0 0 0
Label3 Exp3 Bottom3                        : 0 0 0
TopLabelIpAddress VlanId VniId             : 0.0.0.0 0 0
CreateFlowTime LastRefreshTime VPN(direct) : 2018-05-09 11:38:07  2018-05-09 11:40:30  --
FlowLabel Rdvalue                          : 112706 -:-
ForwardStatus                              : 64(Forwarded Unknown)
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:200::2/96
#
return
● DeviceC configuration file
#
slot 1
ipv6 netstream sampler to slot self
#
return
Networking Requirements
On the network shown in Figure 1-25, DeviceD connects network A and network
B to the wide area network (WAN). DeviceD samples and aggregates flows before
sending them to the NetStream Collector (NSC).
Figure 1-25 Networking diagram of collecting statistics about IPv4 flexible flows
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure reachable routes between DeviceA and DeviceB of the LAN and the
WAN.
2. Configure reachable routes between DeviceD and the NSC.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure IP addresses for each router. The configuration details are not provided
here.
Step 2 Configure reachable routes between the WAN, DeviceA, and DeviceB.
NOTE
NetStream enabled on a main interface cannot collect traffic statistics about its sub-
interface.
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/0
ip address 172.16.0.1 255.255.255.0
#
ip route-static 192.168.1.0 255.255.255.0 GigabitEthernet1/0/0
#
return
● DeviceB configuration file
#
sysname DeviceB
#
interface GigabitEthernet1/0/0
ip address 172.17.1.1 255.255.255.0
#
ip route-static 192.168.1.0 255.255.255.0 GigabitEthernet1/0/0
#
return
● DeviceC configuration file
#
sysname DeviceC
#
interface GigabitEthernet1/0/0
ip address 192.168.2.2 255.255.255.0
#
return
Networking Requirements
NetStream can be deployed in an SRv6 private network scenario to provide traffic
analysis for forwarding paths between PEs and collect private network
information. This helps users adjust network parameters to better meet service
requirements.
● Analyze traffic on the NSC and NDA to obtain user traffic between PEs and
collect private network information.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Assign an IP address to each involved interface.
Step 2 Configure the SRv6 network. For the configuration roadmap, see Segment
Routing IPv6 Configuration. For configuration details, see Configuration Files.
Step 3 Configure NetStream on the P to collect statistics about inner IPv4 packets in IPv6
original flows.
# Configure the board on the P to process NetStream services in distributed mode.
[*P] slot 1
[*P-slot-1] ipv6 netstream sampler to slot self
[*P-slot-1] quit
NOTE
NetStream enabled on a main interface cannot collect traffic statistics about its sub-
interface.
# Configure the output format of IPv6 packets, and the source address,
destination address, and destination port of the output packets.
[*P] ipv6 netstream export version 9
[*P] ipv6 netstream export host ipv6 2001:DB8:111::1 9001
[*P] ipv6 netstream export source ipv6 2001:DB8:30::1
# Configure NetStream to sample the outer IPv6 packets and set the mode to
fixed packet sampling.
[*P] ipv6 netstream sampler fix-packets 10000 inbound
[*P] ipv6 netstream sampler fix-packets 10000 outbound
[*P] quit
NOTE
After completing the preceding configuration, the device samples outer IPv6 packets. You
can run the display ipv6 netstream cache origin slot slot-id command to check sampling
information about outer packets.
To sample inner IPv4 packets, you need to configure NetStream IPv4.
# Configure the output format of IPv4 packets, and the source address,
destination address, and destination port of the output packets.
[*P] ip netstream export version 9
[*P] ip netstream export host ipv6 2001:DB8:111::1 9001
[*P] ip netstream export source ipv6 2001:DB8:30::1
# Run the display ip netstream cache origin slot 1 command to check
information about original flows in the NetStream flow buffer.
[~P] display ip netstream cache origin slot 1
--------------------------------------------------------------------------
DstIf                                      : GigabitEthernet2/0/0
SrcIf                                      : GigabitEthernet1/0/0
DstP Msk Pro Tos                           : 0 64 253 0
SrcP Msk Flags Ttl                         : 0 128 0 60
Packets Bytes                              : 3 384
NextHop Direction                          : 2001:DB8:20::2 in
DstIP DstAs                                : 10.1.1.2 0
SrcIP SrcAs                                : 10.2.1.2 0
BGP: BGP NextHop TopLabelType              : :: UNKNOWN
Label1 Exp1 Bottom1                        : 0 0 0
Label2 Exp2 Bottom2                        : 0 0 0
Label3 Exp3 Bottom3                        : 0 0 0
TopLabelIpAddress VlanId VniId             : 0.0.0.0 0 0
CreateFlowTime LastRefreshTime VPN(direct) : 2020-05-09 11:38:07  2020-05-09 11:40:30  --
FlowLabel Rdvalue                          : -- -:-
ForwardStatus                              : 66(Forwarded Not Fragmented)
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator as1 ipv6-prefix 2001:DB8:100:: 64 static 32
opcode ::111 end
srv6-te-policy locator as1
segment-list list1
index 5 sid ipv6 2001:DB8:200::222
index 10 sid ipv6 2001:DB8:300::333
srv6-te policy policy1 endpoint 2001:DB8:3::3 color 101
binding-sid 2001:DB8:100::100
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::1/96
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:3::3 as-number 100
peer 2001:DB8:3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 2001:DB8:3::3 enable
peer 2001:DB8:3::3 route-policy p1 import
peer 2001:DB8:3::3 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer best-effort
peer 10.1.1.2 as-number 65410
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
return
● P configuration file
#
sysname P
#
segment-routing ipv6
encapsulation source-address 2001:DB8:2::2
locator as1 ipv6-prefix 2001:DB8:200:: 64 static 32
opcode ::222 end
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.1 255.255.255.0
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:3::3/64
isis ipv6 enable 1
#
bgp 100
router-id 2.2.2.2
peer 2001:DB8:1::1 as-number 100
peer 2001:DB8:1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 2001:DB8:1::1 enable
peer 2001:DB8:1::1 route-policy p1 import
peer 2001:DB8:1::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer best-effort
peer 10.2.1.2 as-number 65420
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
return
As carriers' value-added services develop, both carriers and users alike place
higher requirements on quality of service (QoS). With conventional IP networks
now carrying voice and video services, it has become commonplace for carriers
and their customers to sign Service Level Agreements (SLAs).
NQA measures the performance of each protocol running on a network and helps
carriers collect network operation indicators, such as the delay time of a TCP
connection, packet loss ratio, and path maximum transmission unit (MTU).
Carriers provide users with differentiated services and charge users differently
based on these indicators. NQA is also an effective tool to diagnose and locate
faults in a network.
Usage Scenario
Pre-configuration Tasks
Before configuring NQA to monitor an IP network, configure static routes or an
Interior Gateway Protocol (IGP) to implement network connectivity.
Context
A DNS test is based on UDP packets. Only one probe packet is sent in one DNS
test to detect the speed at which a DNS name is resolved to an IP address. The
test result clearly reflects the performance of the DNS protocol on the network.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 5 Create an NQA test instance and set the test instance type to DNS.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created and the view of the test instance is displayed.
2. Run test-type dns
The test instance type is set to DNS.
Step 8 (Optional) Set optional parameters for the test instance and simulate packets
transmitted on an actual network.
1. Run agetime ageTimeValue
The aging time of the NQA test instance is configured.
2. Run records { history number | result number }
The maximum numbers of historical records and result records are set for the
test instance.
3. Run test-failtimes failTimes
The device is enabled to send traps to the NMS after the number of continuous
test failures reaches the specified value.
4. Run threshold rtd thresholdRtd
An RTD threshold is configured.
Step 9 Schedule the test instance.
You can start an NQA test instance immediately, at a specified time, after a
delay, or periodically.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time, run the start at
[ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance after a specified delay, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time every day, run the start
daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ]
command.
----End
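For reference, the preceding steps can be strung together as follows. This is a
minimal sketch: the instance name dns1, the DNS server address 192.168.10.10, and
the domain name www.example.com are placeholders, and the dns-server and
destination-address url command forms are assumptions that may vary by software
version.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin dns1
[*HUAWEI-nqa-admin-dns1] test-type dns
# Specify the domain name to be resolved and the DNS server that resolves it.
[*HUAWEI-nqa-admin-dns1] destination-address url www.example.com
[*HUAWEI-nqa-admin-dns1] dns-server ipv4 192.168.10.10
# Start the test immediately and commit the configuration.
[*HUAWEI-nqa-admin-dns1] start now
[*HUAWEI-nqa-admin-dns1] commit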
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Create an NQA test instance and set the test instance type to ICMP.
1. Run the nqa test-instance admin-name test-name command to create an
NQA test instance and enter the test instance view.
2. Run the test-type icmp command to set the test type to ICMP.
3. (Optional) Run the description description command to configure the test
instance description.
Step 4 (Optional) Set optional parameters for the test instance and simulate packets
transmitted on an actual network.
1. Run the agetime ageTimeValue command to configure the aging time of an
NQA test instance.
2. Run the datafill fill-string command to configure padding characters in NQA
test packets.
3. Run the datasize datasizeValue command to set the size of the data field in
an NQA test packet.
4. Run the probe-count number command to configure the number of probes in
an NQA test instance.
5. Run the interval seconds interval command to configure the interval for
sending NQA test packets.
6. Run the sendpacket passroute command to configure the NQA test instance
to send packets without searching the routing table.
7. Run the source-address { ipv4 srcAddress | ipv6 srcAddr6 } command to set
the source IP address of NQA test packets.
8. Run the source-interface ifType ifNum command to configure the source
interface of NQA test packets.
9. Run the tos tos-value [ dscp ] command to set the ToS value of NQA test
packets.
10. Run the ttl ttlValue command to configure the TTL value of NQA test packets.
11. Run the nexthop { ipv4 ipv4Address | ipv6 ipv6Address } command to
configure the next-hop address for the test instance.
Step 5 (Optional) Configure probe failure conditions.
1. Run the timeout time command to configure the timeout period of response
packets.
2. Run the fail-percent percent command to configure the failure percentage
for the NQA test instance.
Step 8 (Optional) Configure the NQA statistics function. Run the records { history
number | result number } command to configure the maximum number of history
records and the maximum number of result records for the NQA test instance.
Step 11 Schedule the test instance.
1. (Optional) Run the frequency frequencyValue command to configure the test
period for the NQA test instance.
If the following condition is met, the Completion field in the test results
may be displayed as no result:
– frequency < (probe-count – 1) x interval + timeout + 1
2. Run the start command to start an NQA test.
Step 12 (Optional) In the system view, run the whitelist session-car { nqa-icmp { cir cir-value
| cbs cbs-value | pir pir-value | pbs pbs-value } * | nqa-icmpv6 { cir cir-value | cbs
cbs-value | pir pir-value | pbs pbs-value } * } command to adjust the session CAR
values for ICMP test instances.
The session CAR function is enabled by default. If the session CAR function is
abnormal, you can run the whitelist session-car { nqa-icmp | nqa-icmpv6 }
disable command to disable it.
----End
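For reference, a minimal sketch that strings the preceding steps together is shown
below. The instance name icmp1, the destination address 10.1.1.2, and the probe
parameters are placeholders rather than part of the original example.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin icmp1
[*HUAWEI-nqa-admin-icmp1] test-type icmp
[*HUAWEI-nqa-admin-icmp1] destination-address ipv4 10.1.1.2
# Send three probes at 5-second intervals.
[*HUAWEI-nqa-admin-icmp1] probe-count 3
[*HUAWEI-nqa-admin-icmp1] interval seconds 5
[*HUAWEI-nqa-admin-icmp1] start now
[*HUAWEI-nqa-admin-icmp1] commit
After the test completes, run the display nqa results test-instance admin icmp1
command to check whether the probes succeeded.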
Procedure
● Configure the NQA server for the TCP test.
a. Run system-view
The system view is displayed.
b. Run nqa-server tcpconnect [ vpn-instance vpn-instance-name ] ip-
address port-number
The IP address and number of the port used to monitor TCP services are
specified on the NQA server.
c. Run commit
The configuration is committed.
● Configure the NQA client for the TCP test.
a. Run system-view
The system view is displayed.
b. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the test instance view is displayed.
c. Run test-type tcp
The test instance type is set to TCP.
d. Run destination-address ipv4 destAddress
e. Run destination-port port-number
The destination address and destination port number specified in this step
must be the same as ip-address and port-number specified for the NQA
server.
f. (Optional) Run vpn-instance vpn-instance-name
The VPN instance name is configured for the NQA test instance.
i. Schedule the NQA test instance.
i. (Optional) Run frequency frequencyValue
The test period is set for the NQA test instance.
ii. Run start
An NQA test is started.
The start command has multiple formats. Choose one of the
following formats as needed.
○ To start an NQA test instance immediately, run the start now
[ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second |
hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
○ To start an NQA test instance at a specified time, run the start
at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
○ To start an NQA test instance after a specified delay, run the
start delay { seconds second | hh:mm:ss } [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss }
| lifetime { seconds second | hh:mm:ss } } ] command.
○ To start an NQA test instance at a specified time every day, run
the start daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ]
[ end yyyy/mm/dd ] command.
j. Run commit
----End
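A minimal two-device sketch of the preceding server and client configurations
follows. The device names, the address 10.1.1.2, and port 9000 are placeholders;
the client's destination address and port must match the values specified in the
nqa-server tcpconnect command.
# On the NQA server, listen for TCP connections at 10.1.1.2:9000.
<DeviceB> system-view
[~DeviceB] nqa-server tcpconnect 10.1.1.2 9000
[*DeviceB] commit
# On the NQA client, create a TCP test instance toward the server.
<DeviceA> system-view
[~DeviceA] nqa test-instance admin tcp1
[*DeviceA-nqa-admin-tcp1] test-type tcp
[*DeviceA-nqa-admin-tcp1] destination-address ipv4 10.1.1.2
[*DeviceA-nqa-admin-tcp1] destination-port 9000
[*DeviceA-nqa-admin-tcp1] start now
[*DeviceA-nqa-admin-tcp1] commit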
Procedure
● Configure an NQA server.
a. Run system-view
The system view is displayed.
b. Run nqa-server udpecho [ vpn-instance vpn-instance-name ] ip-address
port-number
The IP address and port number of the NQA server for monitoring UDP
services are specified.
c. Run commit
The configuration is committed.
----End
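The server side can be summarized in the following minimal sketch; the address
10.1.1.2 and port 6000 are placeholders. On the client, a UDP test instance is
created in the same way as in the TCP example above, with the destination address
and port matching these values (the client-side test type, test-type udp, is an
assumption that may vary by software version).
<DeviceB> system-view
[~DeviceB] nqa-server udpecho 10.1.1.2 6000
[*DeviceB] commit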
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Create an NQA test instance and set the test instance type to SNMP.
NOTE
Before configuring an NQA SNMP test instance, configure SNMP. The NQA SNMP test
instance supports SNMPv1, SNMPv2c, and SNMPv3.
1. Run the nqa test-instance admin-name test-name command to create an
NQA test instance and enter the test instance view.
2. Run the test-type snmp command to set the test instance type to SNMP.
3. (Optional) Run the description description command to configure the test
instance description.
Step 3 Run the destination-address ipv4 destAddress command to specify the
destination address (that is, the NQA server address) of the client.
If a target SNMP agent runs SNMPv1 or SNMPv2c, the read community name
specified in the community read cipher command must be the same as the read
community name configured on the SNMP agent. Otherwise, the SNMP test will
fail.
Step 5 (Optional) Set parameters for the test instance and simulate packets.
1. Run the probe-count number command to configure the number of probes in
an NQA test instance.
2. Run the interval seconds interval command to configure the interval for
sending NQA test packets.
3. Run the sendpacket passroute command to configure the NQA test instance
to send packets without searching the routing table.
4. Run the source-address ipv4 srcAddress command to configure a source IP
address for NQA test packets.
5. Run the source-port portValue command to configure a source port number
for the NQA test.
6. Run the tos tos-value command to configure the ToS value in NQA test
packets.
7. Run the ttl ttlValue command to configure the TTL value of NQA test packets.
Step 6 (Optional) Configure probe failure conditions.
1. Run the timeout time command to configure the timeout period of response
packets.
2. Run the fail-percent percent command to configure the failure percentage
for the NQA test instance.
Step 7 (Optional) Configure the NQA statistics function. Run the records { history
number | result number } command to configure the maximum number of history
records and the maximum number of result records for the NQA test instance.
Step 8 (Optional) Enable the device to send traps to the NMS.
1. Run the probe-failtimes failTimes command to enable the device to send
traps to the NMS after the number of consecutive probe failures reaches the
specified threshold.
2. Run the test-failtimes failTimes command to enable the device to send traps
to the NMS when the number of continuous test failures reaches the specified
value.
3. Run the threshold rtd thresholdRtd command to configure an RTD threshold.
4. Run the send-trap { all | { rtd | testfailure | probefailure | testcomplete |
testresult-change }* } command to configure the conditions for triggering a trap.
Step 9 (Optional) Run the vpn-instance vpn-instance-name command to configure the
VPN instance name for the NQA test instance.
Step 10 Schedule the test instance.
1. (Optional) Run the frequency frequencyValue command to configure the test
period for an NQA test instance.
2. Run the start command to start an NQA test.
An NQA test instance can be started immediately, at a specified time, or after
a specified delay.
– Run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds
second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command to start an NQA test instance immediately.
– Run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds
second | hh:mm:ss } } ] command to start an NQA test instance at a
specified time.
– Run the start delay { seconds second | hh:mm:ss } [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command to start an NQA test instance
after a specified delay.
----End
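For reference, a minimal sketch follows, assuming SNMP has already been configured
on the target agent and that 10.1.1.2 is its (placeholder) address.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin snmp1
[*HUAWEI-nqa-admin-snmp1] test-type snmp
[*HUAWEI-nqa-admin-snmp1] destination-address ipv4 10.1.1.2
[*HUAWEI-nqa-admin-snmp1] start now
[*HUAWEI-nqa-admin-snmp1] commit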
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and set the test instance type to trace.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
2. Run test-type trace
The test instance type is set to trace.
3. (Optional) Run description description
A description is configured for the NQA test instance.
Step 3 Specify the destination address and destination port number for the test instance.
1. Run destination-address { ipv4 destAddress | ipv6 destAddress6 }
The destination address (that is, the NQA server address) of the client is
specified.
2. (Optional) Run destination-port port-number
The destination port number is specified for the NQA test instance.
Step 4 (Optional) Set parameters for the test instance to simulate packets.
1. Run agetime ageTimeValue
The aging time of an NQA test instance is configured.
2. Run datafill fill-string
Padding characters in NQA test packets are configured.
3. Run datasize datasizeValue
The size of the data field in an NQA test packet is set.
4. Run probe-count number
The number of probes in a test is set for the NQA test instance.
5. Run sendpacket passroute
The NQA test instance is configured to send packets without searching the
routing table.
----End
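A minimal sketch of a trace test instance follows; the instance name and the
destination address 10.1.1.2 are placeholders.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin trace1
[*HUAWEI-nqa-admin-trace1] test-type trace
[*HUAWEI-nqa-admin-trace1] destination-address ipv4 10.1.1.2
# Send three probes for each hop.
[*HUAWEI-nqa-admin-trace1] probe-count 3
[*HUAWEI-nqa-admin-trace1] start now
[*HUAWEI-nqa-admin-trace1] commit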
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and set the test instance type to ICMP Jitter.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
2. Run test-type icmpjitter
The test instance type is set to ICMP Jitter.
NOTE
● You are advised to configure hardware-based packet sending on the interface board to
implement more accurate delay and jitter calculation, facilitating high-precision network
monitoring.
● After enabling packet sending on the interface board of the client, you need to run the
nqa-server icmp-server [ vpn-instance vpn-instance-name ] ip-address command on
the NQA server to specify the IP address of the ICMP services monitored by the NQA
server.
Step 5 (Optional) Set timestamp units for the NQA test instance.
NOTE
The timestamp units need to be configured only after the hardware-based enable
command is run.
1. Run timestamp-unit { millisecond | microsecond }
A timestamp unit is configured for the source in the NQA test instance.
2. Run receive-timestamp-unit { millisecond | microsecond }
A timestamp unit is configured for the destination in the NQA test instance.
In a scenario where a Huawei device is connected to a non-Huawei device, an
ICMP jitter test in which the Huawei device functions as the source (client) is
configured to detect the delay, jitter, and packet loss on the network. To set
the timestamp unit of the ICMP timestamp packet returned by the
destination, run the receive-timestamp-unit command.
The source's timestamp unit configured using the timestamp-unit
{ millisecond | microsecond } command must be the same as the
destination's timestamp unit configured using the receive-timestamp-unit
command. If the timestamp unit is set to microseconds but the interface board
supports only millisecond precision, the device uses milliseconds as the
timestamp unit.
Step 6 Set parameters for the test instance to simulate packets.
1. Run agetime ageTimeValue
The aging time of an NQA test is configured.
2. Run icmp-jitter-mode { icmp-echo | icmp-timestamp }
The mode for an ICMP jitter test is set.
----End
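A minimal sketch follows; the instance name, the destination address 10.1.1.2,
and the choice of the icmp-timestamp mode are placeholders rather than part of
the original example.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin ijitter1
[*HUAWEI-nqa-admin-ijitter1] test-type icmpjitter
[*HUAWEI-nqa-admin-ijitter1] destination-address ipv4 10.1.1.2
# Use ICMP timestamp packets for the jitter calculation.
[*HUAWEI-nqa-admin-ijitter1] icmp-jitter-mode icmp-timestamp
[*HUAWEI-nqa-admin-ijitter1] start now
[*HUAWEI-nqa-admin-ijitter1] commit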
Procedure
● Configure the NQA server for the UDP jitter test.
a. Run system-view
The system view is displayed.
b. Run nqa-server udpecho [ vpn-instance vpn-instance-name ] ip-address
port-number
The IP address and number of the port used to monitor UDP jitter
services are specified on the NQA server.
c. Run commit
The configuration is committed.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run test-type pathjitter
The type of the test instance is configured as path jitter.
Step 4 Run destination-address ipv4 destAddress
The destination IP address is configured.
Step 5 (Optional) Run the following commands to configure other parameters for the
path jitter test:
● Run icmp-jitter-mode { icmp-echo | icmp-timestamp }
The mode of the path jitter test is configured.
● Run vpn-instance vpn-instance-name
The VPN instance to be tested is configured.
● Run source-address ipv4 srcAddress
The source IP address is configured.
● Run probe-count number
The number of test probes to be sent each time is set.
● Run jitter-packetnum packetNum
The number of test packets to be sent during each test is set.
NOTE
The probe-count command sets the number of probes in the jitter test, and the
jitter-packetnum command sets the number of test packets sent in each probe.
The product of these two values must be less than 3000.
● Run interval seconds interval
The interval for sending jitter test packets is set.
The shorter the interval is, the sooner the test is complete. However, delays
arise when the processor sends and receives test packets. Therefore, if the
interval for sending test packets is set to a small value, a relatively greater
error may occur in the statistics of the jitter test.
● Run fail-percent percent
The percentage of the failed NQA tests is set.
Step 6 Run start
The NQA test is started.
Select the start mode as required because the start command has several forms.
● To perform the NQA test immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
The test instance is started immediately.
● To perform the NQA test at the specified time, run the start at [ yyyy/mm/
dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second |
hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ] command.
The test instance is started at a specified time.
● To perform the NQA test after a certain delay period, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command.
The test instance is started after a certain delay.
----End
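A minimal sketch follows; all values are placeholders. The probe-count and
jitter-packetnum values are chosen so that their product (3 x 20 = 60) stays well
under the 3000 limit described in the note above.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin pjitter1
[*HUAWEI-nqa-admin-pjitter1] test-type pathjitter
[*HUAWEI-nqa-admin-pjitter1] destination-address ipv4 10.1.1.2
# 3 probes x 20 packets per probe = 60 packets in total.
[*HUAWEI-nqa-admin-pjitter1] probe-count 3
[*HUAWEI-nqa-admin-pjitter1] jitter-packetnum 20
[*HUAWEI-nqa-admin-pjitter1] start now
[*HUAWEI-nqa-admin-pjitter1] commit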
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa test-instance admin-name test-name
An NQA test instance is created and the test instance view is displayed.
Step 3 Run test-type pathmtu
The type of the test instance is configured as path MTU.
Step 4 Run destination-address ipv4 destAddress
The destination IP address is configured.
Step 5 (Optional) Run the following commands to configure other parameters for the
path MTU test.
● Run discovery-pmtu-max pmtu-max
The maximum value of the path MTU test range is set.
● Run step step
The value of the incremental step is set for the packet length in the path MTU
test.
● Run vpn-instance vpn-instance-name
The VPN instance to be tested is configured.
● Run source-address ipv4 srcAddress
The source IP address is configured.
● Run probe-count number
The maximum number of probe packets that are allowed to time out
consecutively is configured.
Step 6 Run start
The NQA test is started.
Select the start mode as required because the start command has several forms.
● To perform the NQA test immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime
{ seconds second | hh:mm:ss } } ] command.
----End
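A minimal sketch follows; the destination address and the test range values are
placeholders.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin pmtu1
[*HUAWEI-nqa-admin-pmtu1] test-type pathmtu
[*HUAWEI-nqa-admin-pmtu1] destination-address ipv4 10.1.1.2
# Probe up to 9000 bytes, increasing the packet length by 100 bytes per step.
[*HUAWEI-nqa-admin-pmtu1] discovery-pmtu-max 9000
[*HUAWEI-nqa-admin-pmtu1] step 100
[*HUAWEI-nqa-admin-pmtu1] start now
[*HUAWEI-nqa-admin-pmtu1] commit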
Prerequisites
NOTE
NQA test results are not displayed automatically on the terminal. To view test results, run
the display nqa results command.
Procedure
● Run the display nqa results [ collection ] [ test-instance adminName
testName ] command to check NQA test results.
● Run the display nqa results [ collection ] this command to check NQA test
results in a specified NQA test instance view.
● Run the display nqa history [ test-instance adminName testName ]
command to check historical NQA test records.
● Run the display nqa history [ this ] command to check historical statistics on
NQA tests in a specified NQA test instance view.
● Run the display nqa-server command to check the NQA server status.
----End
Usage Scenario
Pre-configuration Tasks
Before configuring NQA to monitor an MPLS network, configure basic MPLS
functions.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and set the test instance type to LSP ping.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and its view is displayed.
2. Run test-type lspping
The test instance type is set to LSP ping.
3. (Optional) Run description description
A description is configured for the NQA test instance.
Step 3 (Optional) Run fragment enable
MPLS packet fragmentation is enabled for the NQA test instance.
Step 4 Run lsp-type { ipv4 | te | bgp | srte | srbe | srte-policy }
An LSP test type is specified for the NQA test instance.
If the LSP test type of the NQA test instance is set to srbe, run the following
commands as required:
Step 5 Configure the destination address or tunnel interface based on the type of the
checked LSP.
● To configure the destination address for a checked LDP LSP, run the
destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-
loopback loopbackAddress } ] * command.
● To configure the tunnel interface for a checked TE tunnel, run the lsp-
tetunnel { tunnelName | ifType ifNum } [ hot-standby | primary ] command.
● To configure the destination address for a BGP tunnel, run the destination-
address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-loopback
loopbackAddress } ] * command.
● To configure an SR-MPLS TE tunnel, run the lsp-tetunnel { tunnelName |
ifType ifNum } [ hot-standby | primary ] command.
● To configure the destination address for an SR-MPLS BE tunnel, run the
destination-address ipv4 destAddress lsp-masklen maskLen command.
● To configure the name, binding segment ID, endpoint IP address, and color ID
of an SR-MPLS TE Policy, run the policy { policy-name policyname | binding-
sid bsid | endpoint-ip endpointip color colorid } command.
Step 6 (Optional) Set optional parameters for the NQA test instance and simulate
packets transmitted on an actual network.
1. Run nexthop ipv4 ipv4Address
An IP address is configured for the next hop when load balancing is enabled.
2. Run lsp-exp exp
The LSP EXP value is set for the NQA test instance.
3. Run lsp-replymode { level-control-channel | no-reply | udp }
The LSP packet return mode is configured for the NQA test instance.
4. Run probe-count number
The number of probes in a test is set for the NQA test instance.
5. Run interval seconds interval
The interval at which NQA test packets are sent is set for the NQA test
instance.
6. Run source-address ipv4 srcAddress
The source IP address is set for NQA test packets.
Step 7 (Optional) Configure probe failure conditions.
1. Run timeout time
If no response packets are received before the set period expires, the probe is
regarded as a failure.
2. Run fail-percent percent
The failure percentage is set for the NQA test instance.
Step 8 (Optional) Configure the NQA statistics function. Run records { history
number | result number }
The maximum number of history records and the maximum number of result
records that can be saved for the NQA test instance are set.
Step 9 Schedule the test instance and start the test by running the start command.
The start command has multiple formats. Choose one of the following as
needed.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time, run the start at
[ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance after a specified delay, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time every day, run the start
daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ]
command.
----End
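For reference, a minimal sketch for checking an LDP LSP follows; the instance
name and the peer LSR ID 3.3.3.9 are placeholders.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin lspping1
[*HUAWEI-nqa-admin-lspping1] test-type lspping
# Check the LDP LSP destined for the host route 3.3.3.9/32.
[*HUAWEI-nqa-admin-lspping1] lsp-type ipv4
[*HUAWEI-nqa-admin-lspping1] destination-address ipv4 3.3.3.9 lsp-masklen 32
[*HUAWEI-nqa-admin-lspping1] start now
[*HUAWEI-nqa-admin-lspping1] commit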
Procedure
Step 1 Run system-view
Step 2 Create an NQA test instance and set the test instance type to LSP trace.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the test instance view is displayed.
2. Run test-type lsptrace
The test instance type is set to LSP trace.
If the LSP test type of the NQA test instance is set to srbe, run the following
commands as required:
Step 5 Configure the destination address or tunnel interface based on the type of the
checked LSP.
● To configure the destination address for a checked LDP LSP, run the
destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-
loopback loopbackAddress } ] *command.
● To configure the tunnel interface for a checked TE tunnel, run the lsp-
tetunnel { tunnelName | ifType ifNum } [ hot-standby | primary ] command.
● To configure the destination address for a BGP tunnel, run the destination-
address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-loopback
loopbackAddress } ] * command.
● To configure the tunnel interface for an SR-MPLS TE tunnel, run the lsp-
tetunnel { tunnelName | ifType ifNum } [ hot-standby | primary ] command.
● To configure the destination address for an SR-MPLS BE tunnel, run the
destination-address ipv4 destAddress lsp-masklen maskLen command.
● To configure the name, binding segment ID, endpoint IP address, and color ID
of an SR-MPLS TE policy, run the policy { policy-name policyname | binding-
sid bsid | endpoint-ip endpointip color colorid } command.
Step 6 (Optional) Set optional parameters for the NQA test instance and simulate
packets transmitted on an actual network.
1. Run lsp-exp exp
The LSP EXP value is set for the NQA test instance.
2. Run lsp-replymode { level-control-channel | no-reply | udp }
The LSP packet return mode is configured for the NQA test instance.
3. Run probe-count number
The number of probes in a test is set for the NQA test instance.
Step 9 Schedule the test instance.
You can start an NQA test instance immediately, at a specified time, after a
delay, or periodically.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time, run the start at
[ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance after a specified delay, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time every day, run the start
daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ]
command.
Step 10 Run commit
The configuration is committed.
----End
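For reference, a minimal sketch follows, using the same placeholder peer LSR ID
as the LSP ping example above.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin lsptrace1
[*HUAWEI-nqa-admin-lsptrace1] test-type lsptrace
[*HUAWEI-nqa-admin-lsptrace1] lsp-type ipv4
[*HUAWEI-nqa-admin-lsptrace1] destination-address ipv4 3.3.3.9 lsp-masklen 32
[*HUAWEI-nqa-admin-lsptrace1] start now
[*HUAWEI-nqa-admin-lsptrace1] commit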
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Create an NQA test instance and set the test instance type to LSP jitter.
1. Run the nqa test-instance admin-name test-name command to create an
NQA test instance and enter the test instance view.
2. Run test-type lspjitter
The test instance type is set to LSP jitter.
3. (Optional) Run the description description command to configure the test
instance description.
Step 3 (Optional) Run fragment enable
MPLS packet fragmentation is enabled for the NQA test instance.
Step 4 Run lsp-type { ipv4 | te }
The LSP type is specified for the NQA test instance.
Step 5 Configure the destination address or tunnel interface based on the type of the
checked LSP.
● To configure the destination address for a checked LDP LSP, run the
destination-address ipv4 destAddress [ { lsp-masklen maskLen } | { lsp-
loopback loopbackAddress } ] * command.
● To configure the tunnel interface for a checked TE tunnel, run:
lsp-tetunnel { tunnelName | ifType ifNum } [ hot-standby | primary ]
Step 6 (Optional) Set parameters for the test instance to simulate packets.
1. Run the lsp-exp exp command to set the LSP EXP value for the NQA test
instance.
2. Run the lsp-replymode { level-control-channel | no-reply | udp } command
to configure the LSP packet return mode for the NQA test instance.
3. Run the datafill fill-string command to configure padding characters in NQA
test packets.
4. Run the datasize datasizeValue command to set the size of the data field in
an NQA test packet.
5. Run the jitter-packetnum packetNum command to configure the number of
packets sent each time in a probe.
6. Run the probe-count number command to configure the number of probes in
an NQA test instance.
7. Run interval seconds interval
The interval at which NQA test packets are sent is set for the NQA test
instance.
8. Run source-address ipv4 srcAddress
The source IP address of NQA test packets is set.
9. Run the ttl ttlValue command to configure the TTL value of NQA test packets.
Step 7 (Optional) Configure test failure conditions.
1. Run the timeout time command to configure the timeout period of response
packets.
2. Run the fail-percent percent command to configure the failure percentage
for the NQA test instance.
Step 8 (Optional) Configure the NQA statistics function. Run the records { history
number | result number } command to configure the maximum number of history
records and the maximum number of result records for the NQA test instance.
Step 9 Schedule the test instance.
1. (Optional) Run the frequency frequencyValue command to configure the test
period for an NQA test instance.
2. Run the start command to start an NQA test.
An NQA test instance can be started immediately, at a specified time, or after
a specified delay.
– Run the start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds
second | hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
command to start an NQA test instance immediately.
– Run the start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds
second | hh:mm:ss } } ] command to start an NQA test instance at a
specified time.
– Run the start delay { seconds second | hh:mm:ss } [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command to start an NQA test
instance after a specified delay.
----End
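A minimal sketch follows; the instance name, the peer LSR ID 3.3.3.9, and the
packet count are placeholders.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin lspjitter1
[*HUAWEI-nqa-admin-lspjitter1] test-type lspjitter
[*HUAWEI-nqa-admin-lspjitter1] lsp-type ipv4
[*HUAWEI-nqa-admin-lspjitter1] destination-address ipv4 3.3.3.9 lsp-masklen 32
# Send 20 packets in each probe.
[*HUAWEI-nqa-admin-lspjitter1] jitter-packetnum 20
[*HUAWEI-nqa-admin-lspjitter1] start now
[*HUAWEI-nqa-admin-lspjitter1] commit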
Prerequisites
NOTE
NQA test results are not displayed automatically on the terminal. To view test results, run
the display nqa results command.
Procedure
● Run the display nqa results [ collection ] [ test-instance adminName
testName ] command to check NQA test results.
● Run the display nqa results [ collection ] this command to check NQA test
results in a specified NQA test instance view.
● Run the display nqa history [ test-instance adminName testName ]
command to check historical NQA test records.
● Run the display nqa history [ this ] command to check historical statistics on
NQA tests in a specified NQA test instance view.
----End
Usage Scenario
Pre-configuration Tasks
Before you configure NQA to check VPNs, configure basic VPN functions.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and configure the test instance type as PWE3 ping.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
2. Run test-type pwe3ping
The test instance type is configured as PWE3 ping.
3. Run description description
A description is configured for the NQA test instance.
Step 3 (Optional) Run fragment enable
MPLS packet fragmentation is enabled for the NQA test instance.
Step 4 Set parameters for the L2VPN network to be tested.
1. Run local-pw-type pwTypeValue
An encapsulation type is configured for the local PW.
2. Run label-type { control-word | { { label-alert | normal } [ no-control-
word ] } }
A packet encapsulation type is configured.
3. Run local-pw-id pwIdValue
A PW ID is set for the local PW.
Step 6 Set optional parameters for the test instance and simulate packets transmitted on
an actual network.
1. Run lsp-exp exp
The LSP EXP value is set for the NQA test instance.
2. Run lsp-replymode { level-control-channel | no-reply | udp }
The LSP packet return mode is configured for the NQA test instance.
3. Run probe-count number
The number of probes in a test is set for the NQA test instance.
4. Run interval seconds interval
The interval at which NQA test packets are sent is set for the NQA test
instance.
5. Run ttl ttlValue
The TTL value of NQA test packets is configured.
Step 7 (Optional) Configure probe failure conditions.
1. Run timeout time
If no response packets are received before the set period expires, the probe is
regarded as a failure.
2. Run fail-percent percent
The failure percentage is set for the NQA test instance.
Step 8 (Optional) Configure the NQA statistics function. Run records { history
number | result number }
The maximum number of history records and the maximum number of result
records that can be saved for the NQA test instance are set.
----End
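For reference, a minimal sketch follows. The remote PE address 3.3.3.9, the PW ID
100, and the ethernet PW type are placeholders and must match the actual PW
configuration.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin pwping1
[*HUAWEI-nqa-admin-pwping1] test-type pwe3ping
[*HUAWEI-nqa-admin-pwping1] destination-address ipv4 3.3.3.9
# The PW ID and type must match the PW configured toward the remote PE.
[*HUAWEI-nqa-admin-pwping1] local-pw-id 100
[*HUAWEI-nqa-admin-pwping1] local-pw-type ethernet
[*HUAWEI-nqa-admin-pwping1] start now
[*HUAWEI-nqa-admin-pwping1] commit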
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and configure the test instance type as VPLS MAC
ping.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
Step 5 (Optional) Set optional parameters for the NQA test instance and simulate
packets transmitted on an actual network.
1. Run lsp-exp exp
The LSP EXP value is set for the NQA test instance.
2. Run lsp-replymode { no-reply | udp | udp-via-vpls }
The LSP packet return mode is configured for the NQA test instance.
3. Run probe-count number
The number of probes in a test is set for the NQA test instance.
4. Run interval seconds interval
The interval at which NQA test packets are sent is set for the NQA test
instance.
5. Run ttl ttlValue
The TTL value of NQA test packets is configured.
Step 6 (Optional) Configure detection failure conditions and enable the function to send
traps to the NMS upon detection failures.
1. Run timeout time
The response timeout period is set.
If no response packets are received before the set period expires, the probe is
regarded as a failure.
2. Run fail-percent percent
The failure percentage is set for the NQA test instance.
3. Run probe-failtimes failTimes
The system is enabled to send traps to the NMS after the number of
consecutive probe failures reaches the specified threshold.
4. Run test-failtimes failTimes
The system is enabled to send traps to the NMS after the number of
consecutive failures of the NQA test instance reaches the specified threshold.
5. Run threshold rtd thresholdRtd
The RTD threshold is configured.
Step 7 Schedule the test instance and start the test by running the start command.
The start command has multiple formats. Choose one of the following as
needed.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time, run the start at
[ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance after a specified delay, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time every day, run the start
daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ]
command.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and set the test instance type to VPLS PW ping.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the test instance view is displayed.
2. Run test-type vplspwping
The test instance type is set to VPLS PW ping.
3. (Optional) Run description description
A description is configured for the NQA test instance.
Step 3 (Optional) Run fragment enable
MPLS packet fragmentation is enabled for the NQA test instance.
Step 4 Set parameters for the VPLS network to be monitored.
1. Run vsi vsi-name
The name of a virtual switching instance (VSI) to be monitored is specified.
2. Run destination-address ipv4 destAddress
An IP address of the remote PE is specified.
3. (Optional) Run local-pw-id pwIdValue
A PW ID is set on the local PE.
NOTE
If the VSI configured using the vsi vsi-name command has a specified negotiation-vc-
id, the local-pw-id local-pw-id command must be run.
Step 5 (Optional) Set optional parameters for the test instance to simulate packet
transmission.
1. Run lsp-exp exp
An LSP EXP value is set for the NQA test instance.
2. Run lsp-replymode { level-control-channel | no-reply | udp }
The mode in which LSPs are returned is set for the NQA test instance.
3. Run datafill fill-string
Padding characters in NQA test packets are configured.
4. Run datasize datasizeValue
The size of the data field in the NQA test packet is set.
5. Run probe-count number
The number of probes in a test is set for the NQA test instance.
Step 9 Schedule the test instance.
You can start an NQA test instance immediately, at a specified time, after a
delay, or periodically.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time, run the start at
[ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance after a specified delay, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time every day, run the start
daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ]
command.
Step 10 Run commit
The configuration is committed.
----End
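A minimal sketch follows; the VSI name company1 and the remote PE address
3.3.3.9 are placeholders.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin vplsping1
[*HUAWEI-nqa-admin-vplsping1] test-type vplspwping
# Monitor the PW of VSI company1 toward the remote PE at 3.3.3.9.
[*HUAWEI-nqa-admin-vplsping1] vsi company1
[*HUAWEI-nqa-admin-vplsping1] destination-address ipv4 3.3.3.9
[*HUAWEI-nqa-admin-vplsping1] start now
[*HUAWEI-nqa-admin-vplsping1] commit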
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Create an NQA test instance and set the test instance type to PWE3 trace.
1. Run the nqa test-instance admin-name test-name command to create an
NQA test instance and enter the test instance view.
2. Run test-type pwe3trace
The test instance type is set to PWE3 trace.
3. (Optional) Run the description description command to configure the test
instance description.
Step 3 (Optional) Run fragment enable
MPLS packet fragmentation is enabled for the NQA test instance.
Step 4 Set parameters for the Layer 2 virtual private network (L2VPN) to be monitored.
1. Run destination-address ipv4 destAddress
An IP address is configured for the remote PE.
2. Run local-pw-type pwTypeValue
An encapsulation type is set for the PW on the local PE.
3. Run label-type { control-word | { { label-alert | normal } [ no-control-
word ] } }
An encapsulation type is set for packets.
Step 6 (Optional) Set parameters for the test instance and simulate packets.
1. Run the lsp-exp exp command to set the LSP EXP value for the NQA test
instance.
2. Run lsp-replymode { level-control-channel | no-reply | udp }
The reply mode of LSPs is configured for the NQA test instance.
3. Run the probe-count number command to configure the number of probes in
an NQA test instance.
4. Run tracert-livetime first-ttl first-ttl max-ttl max-ttl
The first and maximum TTL values are set for the NQA test instance.
Step 8 (Optional) Configure the NQA statistics function. Run the records { history
number | result number } command to configure the maximum number of history
records and the maximum number of result records for the NQA test instance.
----End
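A minimal sketch follows, using the same placeholder PW values as the PWE3 ping
example above.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin pwtrace1
[*HUAWEI-nqa-admin-pwtrace1] test-type pwe3trace
[*HUAWEI-nqa-admin-pwtrace1] destination-address ipv4 3.3.3.9
[*HUAWEI-nqa-admin-pwtrace1] local-pw-id 100
[*HUAWEI-nqa-admin-pwtrace1] local-pw-type ethernet
[*HUAWEI-nqa-admin-pwtrace1] start now
[*HUAWEI-nqa-admin-pwtrace1] commit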
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Create an NQA test instance and set the test instance type to VPLS PW trace.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the test instance view is displayed.
2. Run test-type vplspwtrace
The test instance type is set to VPLS PW trace.
3. (Optional) Run description description
A description is configured for the NQA test instance.
Step 3 (Optional) Run fragment enable
MPLS packet fragmentation is enabled for the NQA test instance.
Step 4 Set parameters for the VPLS network to be monitored.
1. Run vsi vsi-name
The name of a virtual switching instance (VSI) to be monitored is specified.
2. Run destination-address ipv4 destAddress
An IP address of the remote PE is specified.
Step 10 Schedule the test instance.
You can start an NQA test instance immediately, at a specified time, after a
delay, or periodically.
– To start an NQA test instance immediately, run the start now [ end { at
[ yyyy/mm/dd ] hh:mm:ss | delay { seconds second | hh:mm:ss } |
lifetime { seconds second | hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time, run the start at
[ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay
{ seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance after a specified delay, run the start delay
{ seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ] command.
– To start an NQA test instance at a specified time every day, run the start
daily hh:mm:ss to hh:mm:ss [ begin yyyy/mm/dd ] [ end yyyy/mm/dd ]
command.
Step 11 Run commit
The configuration is committed.
----End
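A minimal sketch follows, using the same placeholder VSI name and remote PE
address as the VPLS PW ping example above.
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin vplstrace1
[*HUAWEI-nqa-admin-vplstrace1] test-type vplspwtrace
[*HUAWEI-nqa-admin-vplstrace1] vsi company1
[*HUAWEI-nqa-admin-vplstrace1] destination-address ipv4 3.3.3.9
[*HUAWEI-nqa-admin-vplstrace1] start now
[*HUAWEI-nqa-admin-vplstrace1] commit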
Prerequisites
NOTE
NQA test results are not displayed automatically on the terminal. To view test results, run
the display nqa results command.
Procedure
● Run the display nqa results [ collection ] [ test-instance adminName
testName ] command to check NQA test results.
----End
Usage Scenario
Table 1-9 Usage scenario for checking a Layer 2 network using NQA
Pre-configuration Tasks
Before configuring NQA to check a Layer 2 network, complete the following tasks:
Procedure
Step 1 Run system-view
Step 3 Create an NQA test instance and specify the test instance type as MAC ping.
1. Run nqa test-instance admin-name test-name
An NQA test instance is created, and the view of the test instance is displayed.
2. Run test-type macping
The test instance type is set to MAC ping.
Step 4 Configure the MEP ID, MD name, and MA name based on the MAC ping type.
1. Run mep mep-id mep-id
The MEP ID for sending NQA test packets is configured.
Step 5 Perform either of the following steps to configure the destination address for the
MAC ping test:
To query a destination MAC address, run the display cfm remote-mep command.
● Run destination-address remote-mep mep-id remoteMepID
The peer MEP ID is configured.
NOTE
If the destination address type is remote-mep, you must configure the mapping between
the remote MEP and MAC address first.
Step 6 (Optional) Set optional parameters for the NQA test instance and simulate
packets transmitted on an actual network.
1. Run datasize datasizeValue
The size of the data field in an NQA test packet is set.
2. Run probe-count number
The number of probes in a test is set for the NQA test instance.
3. Run interval seconds interval
The interval at which NQA test packets are sent is set for the NQA test
instance.
Step 7 (Optional) Configure detection failure conditions and enable the function to send
traps to the NMS upon detection failures.
1. Run timeout time
The response timeout period is set.
If no response packets are received before the set period expires, the probe is
regarded as a failure.
2. Run fail-percent percent
The failure percentage is set for the NQA test instance.
If the percentage of failed probes is greater than or equal to the failure
percentage, the test is regarded as a failure.
3. Run probe-failtimes failTimes
The system is enabled to send traps to the NMS after the number of
consecutive probe failures reaches the specified threshold.
4. Run test-failtimes failTimes
The system is enabled to send traps to the NMS after the number of
consecutive failures of the NQA test instance reaches the specified threshold.
5. Run threshold rtd thresholdRtd
The RTD threshold is configured.
6. Run send-trap { all | { rtd | testfailure | probefailure | testcomplete |
testresult-change }* }
The condition for triggering a trap is configured.
----End
Prerequisites
NOTE
NQA test results are not displayed automatically on the terminal. To view test results, run
the display nqa results command.
Procedure
● Run the display nqa results [ collection ] [ test-instance adminName
testName ] command to check NQA test results.
● Run the display nqa results [ collection ] this command to view NQA test
results in a specified NQA test instance view.
● Run the display nqa history [ test-instance adminName testName ]
command to check historical NQA test records.
● Run the display nqa history [ this ] command to check historical statistics on
NQA tests in a specified NQA test instance view.
----End
Prerequisites
NQA test instances have been configured. Currently, an NQA test group can be
bound to only ICMP and TCP test instances.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa group group-name
An NQA test group is created and its view is displayed.
Step 3 Run operator { and | or }
The operation type between test instances in the NQA test group is set to AND or
OR.
By default, the operation type between test instances is or.
Step 4 (Optional) Run description string
A description is configured for the NQA test group.
Step 5 Run nqa test-instance admin-name test-name
The NQA test group is bound to a test instance.
Step 6 Run commit
The configuration is committed.
----End
Result
Run the display nqa group [ group-name ] command to check the test result of
the NQA test group.
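For reference, the following minimal sketch consolidates the preceding steps; the group name, description, and bound instances are illustrative, and the two ICMP test instances are assumed to already exist:
<HUAWEI> system-view
[~HUAWEI] nqa group group1
[*HUAWEI-nqa-group-group1] operator and
[*HUAWEI-nqa-group-group1] description monitor-uplinks
[*HUAWEI-nqa-group-group1] nqa test-instance admin1 test1
[*HUAWEI-nqa-group-group1] nqa test-instance admin2 test2
[*HUAWEI-nqa-group-group1] commit
[~HUAWEI-nqa-group-group1] display nqa group group1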
Follow-up Procedure
After completing the preceding configurations, you can report the NQA test group
detection result to the static route module to control the advertisement of static
routes by Configuring NQA Group for IPv4 Static Route or Configuring NQA Group
for IPv6 Static Route.
Context
An NQA generalflow test is a standard traffic-based method for evaluating
network performance, in compliance with RFC 2544. Implemented based on UDP,
it can be used in various networking scenarios that have different packet
formats.
Before a customer performs a service cutover, an NQA generalflow test helps the
customer evaluate whether the network performance counters meet the
requirements in the design. An NQA generalflow test has the following
advantages:
● Enables a device to send simulated service packets to itself before services are
deployed on the device.
Existing methods, unlike generalflow tests, can only be used when services
have been deployed on networks. If no services are deployed, testers must be
used to send and receive test packets.
● Uses standard methods and procedures that comply with RFC 2544 so that
NQA generalflow tests can be conducted on a network on which both Huawei
and non-Huawei devices are deployed.
A generalflow test measures the following counters:
● Throughput: maximum rate at which packets are sent without loss.
● Packet loss rate: percentage of discarded packets to all sent packets.
● Delay: consists of the bidirectional delay time and jitter calculated based on
the transmission and receipt timestamps carried in test packets. The
transmission time in each direction includes the time the forwarding devices
process the test packet.
A generalflow test can be used in the following scenarios:
● Layer 2: native Ethernet, EVPN, and L2VPN
On the network shown in Figure 1-28, an initiator and a reflector perform a
generalflow test to monitor the forwarding performance for end-to-end
services exchanged between two user-to-network interfaces (UNIs).
In the L2VPN accessing L3VPN networking shown in Figure 1-29, the initiator
and reflector can reside in different locations to represent different scenarios.
– If the initiator and reflector reside in locations 1 and 5 (or 5 and 1),
respectively, or the initiator and reflector reside in locations 4 and 6 (or 6
and 4), respectively, it is a native Ethernet scenario.
– If the initiator and reflector reside in locations 2 and 3 (or 3 and 2),
respectively, it is a native IP scenario.
Figure 1-30 General flow test in the scenario in which a Layer 2 interface
accesses a Layer 3 device
Pre-configuration Tasks
Before configuring an NQA generalflow test, complete the following tasks:
● Layer 2:
– In a native Ethernet scenario, configure reachable Layer 2 links between
the initiator and reflector.
– In an L2VPN scenario, configure reachable links between CEs on both
ends of an L2VPN connection.
– In an EVPN scenario, configure reachable links between CEs on both ends
of an EVPN connection.
● Layer 3: In a native IP scenario, configure reachable IP links between the
initiator and reflector.
Context
On the network shown in Figure 1-28 of "Configuring an RFC 2544 Generalflow
Test Instance", the following two roles are involved in a generalflow test:
● Initiator: sends simulated service traffic to a reflector.
● Reflector: loops the service traffic to the initiator.
The reflector can loop all packets on a reflector interface or the packets matching
filter criteria to the initiator. The filter criteria include a destination unicast MAC
address or a port number.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Configure the reflector. The reflector settings vary according to usage
scenarios:
● In a Layer 2 scenario, run the nqa reflector reflector-id interface interface-type
interface-number [ mac mac-address ] [ vlan vlan-id ] command.
● In a Layer 3 scenario, run the nqa reflector reflector-id interface interface-type
interface-number ipv4 ip-address [ vlan vlan-id ] command.
----End
Context
On the network shown in Figure 1-28 of "Configuring an RFC 2544 Generalflow
Test Instance", the following two roles are involved in a generalflow test:
● Initiator: sends simulated service traffic to a reflector.
● Reflector: loops the service traffic to the initiator.
The process of configuring the initiator is as follows:
1. Create a generalflow test instance.
2. Set basic simulated service parameters.
3. Set key test parameters based on counters.
4. Set generalflow test parameters.
5. Start the generalflow test instance.
Procedure
Step 1 Create a generalflow test instance.
1. Run system-view
The system view is displayed.
Step 2 Set basic simulated service parameters.
The basic simulated service parameters on the initiator must be the same as those
configured on the reflector.
Step 3 Set key test parameters based on the counter to be measured:
● Throughput:
1. Run the rate rateL rateH command to set the upper and lower rate
thresholds.
2. Run the interval seconds interval command to set the interval at
which test packets are transmitted at a specific rate.
3. Run the precision precision-value command to set the throughput
precision.
4. Run the fail-ratio fail-ratio-value command to set the allowed packet
loss rate during a throughput test. The value is expressed in 1/10000. If
the actual packet loss rate is less than the configured ratio, the test is
considered successful and continues.
● Latency:
1. Run the rate rateL command to set the rate at which test packets
are sent.
2. Run the interval seconds interval command to set the interval at
which test packets are sent.
● Packet loss rate:
1. Run the rate rateL command to set the rate at which test packets
are sent.
Step 4 Set generalflow test parameters.
1. Run datasize datasizeValue
The size of the data field in a test packet is set.
2. Run duration duration
The test duration is set.
NOTE
The duration value must be greater than twice the interval value in throughput and
delay tests.
3. Run records result number
The maximum number of results that can be recorded is set.
4. Run priority 8021p priority-value
The 802.1p priority is set for generalflow test packets in an Ethernet scenario.
5. Run tos tos-value
The IP packet priority is set.
6. (Optional) Run exchange-port enable
The device is enabled to exchange the UDP source and destination port
numbers in test packets.
Step 5 Run start now
The NQA test is started.
NOTICE
Currently, an RFC 2544 generalflow test can be started only by running the start
now command. Note that user services will be interrupted during the test.
----End
Prerequisites
All generalflow test configurations are complete.
NOTE
NQA test results are not displayed automatically on the terminal. To view test results, run
the display nqa results command.
Procedure
● Run the display nqa results [ test-instance adminName testName ]
command on the initiator to view the test results.
● Run the display nqa reflector [ reflector-id ] command on the reflector to
view reflector information.
----End
Context
An Ethernet service activation test is a method defined in Y.1564. This test helps
carriers rapidly and accurately verify whether network performance meets SLA
requirements before service rollouts.
Pre-configuration Tasks
Before configuring an Ethernet service activation test, complete the following
tasks:
● Layer 2 scenarios:
– In a native Ethernet scenario, configure reachable Layer 2 links between
the initiator and reflector.
– In an L2VPN/EVPN L2VPN scenario, configure reachable links between
CEs on both ends of an L2VPN/EVPN L2VPN connection.
– In an EVPN VXLAN scenario, configure reachable links between devices
on both ends of an EVPN VXLAN connection.
– In an HVPN scenario, configure reachable links between CEs on both ends
of an HVPN connection.
● Layer 3 scenarios:
– In a native IP scenario, configure reachable IP links between the initiator
and reflector.
– In an L3VPN/EVPN L3VPN scenario, configure reachable links between
CEs on both ends of an L3VPN/EVPN L3VPN connection.
– In an EVPN VXLAN scenario, configure reachable links between devices
on both ends of an EVPN VXLAN connection.
Context
Devices performing an Ethernet service activation test play two roles: initiator and
reflector. An initiator sends simulated service traffic to a reflector, and the
reflector reflects the service traffic.
● Interface-based mode: A reflector loops all traffic that its interface receives.
● Flow-based mode: A reflector loops only traffic meeting specified conditions.
In flow-based mode, a test flow must have been configured.
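As a minimal sketch of the two modes (the reflector IDs, interface names, and flow ID are placeholders; the concrete forms follow the configuration examples later in this chapter):
# Interface-based mode: loop all traffic received on the reflector interface.
[~HUAWEI] nqa reflector 1 interface gigabitethernet 1/0/1
# Flow-based mode: loop only traffic matching test flow 1, which must already exist.
[*HUAWEI] nqa reflector 2 interface gigabitethernet 1/0/2 test-flow 1 agetime 14400
[*HUAWEI] commit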
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa reflector reflector-id interface interface-type interface-number
[ test-flow flow-id &<1-16> ] [ exchange-port ] [ agetime age-time ]
The device is configured as a reflector.
NOTE
Services may be forwarded based on the MAC or IP address or using MPLS based on
the service type, such as Ethernet, IP, L2VPN, L2VPN accessing L3VPN, and L3VPN.
When testing a service network, you must determine the specific service forwarding
mode before configuring a traffic type for the service flow to be tested.
– For Ethernet Layer 2 switching and L2VPN services, a MAC address must be
specified, and an IP address is optional.
– For IP routing and L3VPN services, an IP address and a MAC address must be
specified. If no IP address or MAC address is specified, the reflector will reflect all
the traffic, which affects other service functions.
– For L2VPN accessing L3VPN, both MAC and IP addresses must be specified.
▪ For the same test flow, a range can be specified only in one of the traffic-
type, vlan, pe-vid, udp destination-port, and udp source-port commands.
In addition, the difference between the start and end values cannot be more
than 127, and the end value must be greater than the start value.
▪ In the traffic-type command, the start MAC or IP address has only one
different octet from the end MAC or IP address. For example, the start IP
address is set to 1.1.1.1, and the end IP address can only be set to an IP
address in the network segment 1.1.1.0.
If the test-flow flow-id & <1-16> parameter is configured, the reflector loops
traffic based on a specified flow. If this parameter is not configured, the reflector
loops all traffic that its interface receives. The agetime age-time parameter is
optional, and the default value is 14400s.
----End
Context
Devices performing an Ethernet service activation test play two roles: initiator and
reflector. An initiator sends simulated service traffic to a reflector, and the
reflector reflects the service traffic.
The process of configuring the initiator is as follows:
1. Configure a test flow.
2. Configure a test instance.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa test-flow flow-id
A test flow is configured, and the test flow view is displayed.
Step 3 Specify service flow characteristics.
1. Run:
– traffic-type mac { destination destination-mac [ end-destination-mac ] |
source source-mac [ end-source-mac ] }
– traffic-type ipv4 { destination destination-ip [ end-destination-ip ] |
source source-ip [ end-source-ip ] }
A traffic type is configured, and the MAC or IP address of test packets is
specified. An address range can be specified here.
NOTE
Services may be forwarded based on the MAC or IP address or using MPLS based on
the service type, such as Ethernet, IP, L2VPN, L2VPN accessing L3VPN, and L3VPN.
When testing a service network, you must determine the specific service forwarding
mode before configuring a traffic type for the service flow to be tested.
– For Ethernet Layer 2 switching and L2VPN services, a MAC address must be
specified, and an IP address is optional.
– For IP routing and L3VPN services, an IP address and a MAC address must be
specified. If no IP address or MAC address is specified, the reflector will reflect all
the traffic, which affects other service functions.
– For L2VPN accessing L3VPN, both MAC and IP addresses must be specified.
2. Configure the following parameters as needed:
– Run vlan vlan-id [ end-vlan-vid ]
A single VLAN ID is specified for Ethernet packets in the NQA test flow
view.
– Run pe-vid pe-vid ce-vid ce-vid [ ce-vid-end ]
Double VLAN IDs are specified for Ethernet packets in the NQA test flow
view.
– Run udp destination-port destination-port [ end-destination-port ]
A destination UDP port number or range is specified.
– Run udp source-port source-port [ end-source-port ]
– For the same test flow, a range can be specified only in one of the traffic-type,
vlan, pe-vid, udp destination-port, and udp source-port commands. In addition,
the difference between the start and end values cannot be more than 127, and the
end value must be greater than the start value.
– In the traffic-type command, the start MAC or IP address has only one different
octet from the end MAC or IP address. For example, the start IP address is set to
1.1.1.1, and the end IP address can only be set to an IP address in the network
segment 1.1.1.0.
----End
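To illustrate the range restrictions above, the following minimal sketch defines a hypothetical test flow in which only the traffic-type command carries a range; the start and end addresses differ only in the last octet, and the difference (99) is within 127:
[~HUAWEI] nqa test-flow 1
[*HUAWEI-nqa-testflow-1] traffic-type ipv4 destination 10.1.1.1 10.1.1.100
[*HUAWEI-nqa-testflow-1] traffic-type ipv4 source 10.1.2.1
[*HUAWEI-nqa-testflow-1] vlan 10
[*HUAWEI-nqa-testflow-1] udp destination-port 1234
[*HUAWEI-nqa-testflow-1] udp source-port 5678
[*HUAWEI-nqa-testflow-1] commit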
Prerequisites
All configurations related to the Ethernet service activation test are complete.
NOTE
NQA test results are not displayed automatically on the terminal. To view test results, run
the display nqa results command.
Procedure
● Run the display nqa results [ test-instance adminName testName ]
command on the initiator to view the test results.
● Run the display nqa reflector [ reflector-id ] command on the reflector to
check reflector information.
----End
Usage Scenario
The result table of NQA test instances records results of each test type. A
maximum of 5000 test result records are supported in total. If the number of
records reaches 5000, test results are uploaded and the new test result overwrites
the earliest one. If the NMS cannot poll test results in time, test results are lost.
You can send the statistics on the test results that reach the capacity of the local
storage or periodically send the statistics to the FTP server for storage through
FTP. This can effectively prevent the loss of test results and facilitate network
management based on the analysis of test results at different times.
Pre-configuration Tasks
Before configuring test results to be sent to the FTP server, complete the following
tasks:
● Configure the FTP server.
● Configure a reachable route between the NQA client and the NMS.
● Configure a test instance.
Data Preparation
Before configuring test results to be sent to the FTP server, you need the following
data.
No. Data
1 IP address of the FTP server that receives test results
2 User name and password used for logging in to the FTP server
3 Name of the file in which test results are saved, upload interval, and number
of retransmissions
1.1.4.10.1 Setting Parameters for Configuring Test Results to Be Sent to the FTP
Server
Before starting a test instance, set the IP address of the FTP server that receives
test results, user name and password for logging in to the FTP server, name of the
file in which test results are saved, interval at which test results are uploaded, and
retransmission times.
Context
Perform the following operations on the NQA client.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa upload test-type { icmp | icmpjitter | jitter | udp } ftp ipv4 ipv4-address
file-name file-name [ vpn-instance vpn-instance-name ] [ port port-number ]
username user-name password password [ interval upload-interval ] [ retry
retry-times ]
A device is enabled to upload test result files onto a specified server.
Step 3 Run commit
The configuration is committed.
----End
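For example, the following minimal sketch uploads UDP jitter test results every 600 seconds with up to three retransmissions; the server address, file name, and credentials are placeholders:
<HUAWEI> system-view
[~HUAWEI] nqa upload test-type jitter ftp ipv4 10.10.10.10 file-name nqaresult.txt username ftpuser password Ftp@123 interval 600 retry 3
[*HUAWEI] commit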
Context
Perform the following operations on the NQA client.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa test-instance admin-name test-name
The NQA view is displayed.
Step 3 Run test-type { icmp | icmpjitter | jitter | udp }
A test instance type is set.
Step 4 Run destination-address ipv4 destAddress
A destination address is configured.
Step 5 (Optional) Run destination-port port-number
A destination port number is configured.
Step 6 Run start
An NQA test instance is started.
An NQA test instance can be started immediately, at a specified time, or after a
specified delay.
● Run start now [ end { at [ yyyy/mm/dd ] hh:mm:ss | delay { seconds second
| hh:mm:ss } | lifetime { seconds second | hh:mm:ss } } ]
The test instance is started immediately.
● Run start at [ yyyy/mm/dd ] hh:mm:ss [ end { at [ yyyy/mm/dd ] hh:mm:ss |
delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ]
The test instance is started at a specified time.
● Run start delay { seconds second | hh:mm:ss } [ end { at [ yyyy/mm/dd ]
hh:mm:ss | delay { seconds second | hh:mm:ss } | lifetime { seconds second |
hh:mm:ss } } ]
The test instance is started after a specified delay.
● Run start daily hh:mm:ss to hh:mm:ss [ begin { yyyy/mm/dd | yyyy-mm-
dd } ] [ begin { yyyy/mm/dd | yyyy-mm-dd } ]
The test instance is started at a specified time every day.
Step 7 Run commit
The configuration is committed.
----End
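For example, the following minimal sketch starts a hypothetical ICMP test instance at a fixed time with a one-hour lifetime; the date, time, and destination address are illustrative:
<HUAWEI> system-view
[~HUAWEI] nqa test-instance admin icmp
[*HUAWEI-nqa-admin-icmp] test-type icmp
[*HUAWEI-nqa-admin-icmp] destination-address ipv4 10.1.1.1
[*HUAWEI-nqa-admin-icmp] start at 2023/10/01 02:00:00 end lifetime seconds 3600
[*HUAWEI-nqa-admin-icmp] commit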
1.1.4.10.3 Verifying the Configuration of Test Results to Be Sent to the FTP Server
After configuring test results to be sent to the FTP server, verify the configuration.
Prerequisites
Test results have been configured to be sent to the FTP server.
Procedure
Step 1 Run the display nqa upload file-info command to check information about files
that a device is uploading and has attempted to upload onto a server.
----End
Procedure
● Run the display nqa support-test-type command to check the supported test
types.
● Run the display nqa support-server-type command to check the supported
server types.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Prerequisites
The NQA test instance has been stopped.
Context
NOTICE
Test records cannot be restored after being deleted. Exercise caution when running
the clear-records command.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa test-instance admin-name test-name
The NQA test instance view is displayed.
Step 3 Run clear-records
Historical records and result records of the NQA test instance are deleted.
----End
1.1.4.12.1 Example for Configuring an NQA Test to Detect the DNS Resolution
Speed on an IP Network
This section provides an example for configuring an NQA test to measure the
performance of interaction between a client and the DNS server.
Networking Requirements
On the network shown in Figure 1-31, Device A needs to access host A using the
domain name Server.com. A DNS test instance can be configured on Device A to
measure the performance of interaction between Device A and the DNS server.
Figure 1-31 Networking diagram for detecting the DNS resolution speed
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure reachable routes between Device A, the DNS server, and host A at
the network layer.
2. Configure a DNS test instance on Device A and start the test instance to
detect the DNS resolution speed on an IP network.
Data Preparation
To complete the configuration, you need the following data:
● IP address of the DNS server
● Domain name and IP address of host A
Procedure
Step 1 Configure reachable routes between Device A, the DNS server, and host A at the
network layer. (Omitted)
Step 2 Configure a DNS test instance and start it.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] dns resolve
[*DeviceA] dns server 10.3.1.1
[*DeviceA] dns server source-ip 10.1.1.1
[*DeviceA] nqa test-instance admin dns
[*DeviceA-nqa-admin-dns] test-type dns
[*DeviceA-nqa-admin-dns] dns-server ipv4 10.3.1.1
[*DeviceA-nqa-admin-dns] destination-address url Server.com
[*DeviceA-nqa-admin-dns] commit
[~DeviceA-nqa-admin-dns] start now
[*DeviceA-nqa-admin-dns] commit
Step 3 Verify the test result. Min/Max/Average Completion Time indicates the delay
between the time when a DNS request packet is sent and the time when a DNS
response packet is received. In this example, the delay is 208 ms.
[~DeviceA-nqa-admin-dns] display nqa results test-instance admin dns
NQA entry(admin, dns) :testflag is inactive ,testtype is dns
1 . Test 1 result The test is finished
Send operation times: 1 Receive response times: 1
Completion:success RTD OverThresholds number:0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Status errors number:0
Destination ip address:10.3.1.1
Min/Max/Average Completion Time: 208/208/208
Sum/Square-Sum Completion Time: 208/43264
Last Good Probe Time: 2018-01-25 09:18:22.6
Lost packet ratio: 0 %
----End
Configuration Files
Device A configuration file
#
sysname DeviceA
#
dns resolve
dns server 10.3.1.1
dns server source-ip 10.1.1.1
#
nqa test-instance admin dns
test-type dns
destination-address url Server.com
dns-server ipv4 10.3.1.1
#
return
1.1.4.12.2 Example for Configuring an NQA TCP Test to Measure the Response
Time on an IP Network
This section provides an example for configuring an NQA TCP test to measure the
response time on an IP network.
Networking Requirements
On the network shown in Figure 1-32, the headquarters and a subsidiary of a
company often need to use TCP to exchange files with each other. The time taken
to respond to a TCP transmission request must be less than 800 ms. The NQA TCP
test can be configured to measure the TCP response time between Device A
and Device D that are connected to the IP backbone network.
Figure 1-32 Networking diagram for an NQA TCP test to measure the response
time on an IP network
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Device D as the NQA client and Device A as the NQA server, and
create a TCP test instance.
2. Configure the test instance to start at 10:00 every day, and then start it.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of Device A and Device D
● Number of the TCP port monitored by the NQA server
Procedure
Step 1 Configure the NQA server Device A.
<DeviceA> system-view
[~DeviceA] nqa-server tcpconnect 10.1.1.1 4000
[*DeviceA] commit
Step 2 Configure the NQA client Device D. Create a TCP test instance. Set the destination
IP address to the IP address of Device A.
<DeviceD> system-view
[~DeviceD] nqa test-instance admin tcp
[*DeviceD-nqa-admin-tcp] test-type tcp
[*DeviceD-nqa-admin-tcp] destination-address ipv4 10.1.1.1
[*DeviceD-nqa-admin-tcp] destination-port 4000
[*DeviceD-nqa-admin-tcp] commit
Step 3 Start the test instance.
[*DeviceD-nqa-admin-tcp] start now
[*DeviceD-nqa-admin-tcp] commit
Step 4 Verify the test result. Run the display nqa results test-instance admin tcp
command on Device D. The command output shows that the TCP response time is
less than 800 ms.
[~DeviceD-nqa-admin-tcp] display nqa results test-instance admin tcp
NQA entry(admin, tcp) :testflag is active ,testtype is tcp
1 . Test 1 result The test is finished
Send operation times: 3 Receive response times: 3
Completion:success RTD OverThresholds number:0
Attempts number:1 Drop operation number:0
Disconnect operation number:0 Operation timeout number:0
System busy operation number:0 Connection fail number:0
Operation sequence errors number:0 RTT Stats errors number:0
Destination ip address:10.1.1.1
Min/Max/Average Completion Time: 600/610/603
Sum/Square-Sum Completion Time: 1810/1092100
Last Good Probe Time: 2011-01-16 02:59:41.6
Lost packet ratio: 0 %
Step 5 Configure the test instance to start at 10:00 every day.
[~DeviceD-nqa-admin-tcp] stop
[*DeviceD-nqa-admin-tcp] start daily 10:00:00 to 10:30:00
[*DeviceD-nqa-admin-tcp] commit
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
nqa-server tcpconnect 10.1.1.1 4000
#
isis 1
network-entity 00.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
#
return
● Device D configuration file
#
sysname DeviceD
#
isis 1
network-entity 00.0000.0000.0002.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.2.1 255.255.255.0
isis enable 1
#
nqa test-instance admin tcp
test-type tcp
destination-address ipv4 10.1.1.1
destination-port 4000
start daily 10:00:00 to 10:30:00
#
return
1.1.4.12.3 Example for Configuring an NQA UDP Jitter Test to Monitor the VoIP
Service Jitter Time
This section provides an example for configuring an NQA UDP jitter test to
monitor the jitter time for Voice over Internet Protocol (VoIP) services.
Networking Requirements
On the network shown in Figure 1-33, the headquarters and a subsidiary of a
company often need to use VoIP to hold teleconferences. The round-trip delay
time must be less than 250 ms, and the jitter time must be less than 20 ms. The
UDP jitter test can be configured to simulate VoIP services.
Figure 1-33 Networking diagram for an NQA UDP jitter test to monitor the VoIP
service jitter time
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure Device D to function as an NQA client and Device A to function as
an NQA server. Configure a UDP jitter test instance on Device D.
2. Start the test instance.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of Device A and Device D that are connected to the IP backbone
network
● Code type for simulated VoIP services
Procedure
Step 1 Configure the NQA server Device A.
<DeviceA> system-view
[~DeviceA] nqa-server udpecho 10.1.1.1 4000
[*DeviceA] commit
Step 2 Configure the NQA client Device D. Create a UDP jitter test instance and set
the destination IP address to the IP address of Device A.
<DeviceD> system-view
[~DeviceD] nqa test-instance admin udpjitter
[*DeviceD-nqa-admin-udpjitter] test-type jitter
[*DeviceD-nqa-admin-udpjitter] destination-address ipv4 10.1.1.1
[*DeviceD-nqa-admin-udpjitter] destination-port 4000
Step 3 Start the test instance.
[*DeviceD-nqa-admin-udpjitter] start now
[*DeviceD-nqa-admin-udpjitter] commit
Step 4 Verify the test result. Run the display nqa results test-instance admin udpjitter
command on Device D. The command output shows that the round-trip delay
time is less than 250 ms, and the jitter time is less than 20 ms.
[~DeviceD-nqa-admin-udpjitter] display nqa results test-instance admin udpjitter
NQA entry(admin, udpjitter) :testflag is active ,testtype is jitter
1 . Test 1 result The test is finished
SendProbe:1000 ResponseProbe:919
Completion:success RTD OverThresholds number:0
OWD OverThresholds SD number:0 OWD OverThresholds DS number:0
Min/Max/Avg/Sum RTT:1/408/5/4601 RTT Square Sum:1032361
NumOfRTT:919 Drop operation number:0
Operation sequence errors number:0 RTT Stats errors number:0
System busy operation number:0 Operation timeout number:81
Min Positive SD:1 Min Positive DS:1
Max Positive SD:2 Max Positive DS:9
Positive SD Number:67 Positive DS Number:70
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
nqa-server udpecho 10.1.1.1 4000
#
isis 1
network-entity 00.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
isis enable 1
#
return
1.1.4.12.4 Example for Configuring an NQA LSP Ping Test to Monitor MPLS
Network Connectivity
This section provides an example for configuring an NQA LSP ping test to monitor
MPLS network connectivity.
Networking Requirements
On the MPLS network shown in Figure 1-34, Device A and Device C are PEs. An
NQA LSP ping test can be configured to periodically monitor the connectivity
between these two PEs.
Figure 1-34 Networking diagram for an NQA LSP ping test to monitor MPLS
network connectivity
Configuration Roadmap
The configuration roadmap is as follows:
1. Create an LSP ping test instance on Device A.
2. Start the test instance.
Data Preparation
To complete the configuration, you need the IP addresses of Device A and Device
C.
Procedure
Step 1 Create an LSP ping test instance.
<DeviceA> system-view
[~DeviceA] nqa test-instance admin lspping
[*DeviceA-nqa-admin-lspping] test-type lspping
[*DeviceA-nqa-admin-lspping] lsp-type ipv4
[*DeviceA-nqa-admin-lspping] destination-address ipv4 3.3.3.9 lsp-masklen 32
Step 4 Configure the test instance to start at 10:00 every day.
[*DeviceA-nqa-admin-lspping] stop
[*DeviceA-nqa-admin-lspping] start daily 10:00:00 to 10:30:00
[*DeviceA-nqa-admin-lspping] commit
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
mpls lsr-id 1.1.1.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.9 0.0.0.0
#
nqa test-instance admin lspping
test-type lspping
destination-address ipv4 3.3.3.9 lsp-masklen 32
start daily 10:00:00 to 10:30:00
#
return
● Device B configuration file
#
sysname DeviceB
#
mpls lsr-id 2.2.2.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
return
Networking Requirements
Figure 1-35 illustrates the networking of monitoring PW connectivity between U-
PE1 and U-PE2. CE-A and CE-B run PPP to access U-PE1 and U-PE2, respectively.
U-PE1 and U-PE2 are connected on an MPLS backbone network. A dynamic multi-
segment PW between U-PE1 and U-PE2 is established over a label switched path
(LSP), with an S-PE functioning as the transit node.
The PWE3 ping function can be configured to monitor the connectivity of the
multi-segment PW between U-PE1 and U-PE2.
Configuration Roadmap
The configuration roadmap is as follows:
1. Run an IGP on the backbone network to implement the connectivity of
routers on the backbone network.
2. Enable basic MPLS functions over the backbone and set up LSP tunnels.
Establish remote MPLS Label Distribution Protocol (LDP) peer relationship
between U-PE1 and S-PE, and between U-PE2 and S-PE.
3. Set up an MPLS Layer 2 virtual circuit (L2VC) connection between U-PEs.
4. Set up a switched PW on the switching node S-PE.
5. Configure the PWE3 Ping test on the multi-segment PW on U-PE1.
Data Preparation
To complete the configuration, you need the following data:
● Different L2VC IDs of U-PE1 and U-PE2
● MPLS LSR IDs of U-PE1, S-PE, and U-PE2
● IP address of the peer
● Encapsulation type of the switched PW
● Name of the PW template configured on the U-PEs and parameters of the
PW template
Procedure
Step 1 Configure a dynamic multi-segment PW.
# Configure U-PE1.
[*U-PE1] nqa test-instance test pwe3ping
[*U-PE1-nqa-test-pwe3ping] test-type pwe3ping
[*U-PE1-nqa-test-pwe3ping] local-pw-id 100
[*U-PE1-nqa-test-pwe3ping] local-pw-type ppp
[*U-PE1-nqa-test-pwe3ping] label-type control-word
[*U-PE1-nqa-test-pwe3ping] remote-pw-id 200
[*U-PE1-nqa-test-pwe3ping] start now
[*U-PE1-nqa-test-pwe3ping] commit
Run the display nqa results command on PEs. The command output shows that
the test is successful.
[*U-PE1-nqa-test-pwe3ping] display nqa results
NQA entry(test, pwe3ping) :testflag is inactive ,testtype is pwe3ping
1 . Test 1 result The test is finished
SendProbe:3 ResponseProbe:3
Completion:success RTD OverThresholds number:0
OWD OverThresholds SD number:0 OWD OverThresholds DS number:0
Min/Max/Avg/Sum RTT:3/5/4/11 RTT Square Sum:43
NumOfRTT:3 Drop operation number:0
Operation sequence errors number:0 RTT Status errors number:0
System busy operation number:0 Operation timeout number:0
Min Positive SD:0 Min Positive DS:1
Max Positive SD:0 Max Positive DS:1
Positive SD Number:0 Positive DS Number:1
Positive SD Sum:0 Positive DS Sum:1
Positive SD Square Sum:0 Positive DS Square Sum:1
Min Negative SD:1 Min Negative DS:1
Max Negative SD:2 Max Negative DS:1
Negative SD Number:2 Negative DS Number:1
Negative SD Sum:3 Negative DS Sum:1
Negative SD Square Sum:5 Negative DS Square Sum:1
Min Delay SD:0 Min Delay DS:0
Max Delay SD:0 Max Delay DS:0
Delay SD Square Sum:0 Delay DS Square Sum:0
Packet Loss SD:0 Packet Loss DS:0
Packet Loss Unknown:0 Average of Jitter:1
Average of Jitter SD:1 Average of Jitter DS:1
Jitter out value:0.1015625 Jitter in value:0.0611979
NumberOfOWD:0 Packet Loss Ratio:0 %
OWD SD Sum:0 OWD DS Sum:0
ICPIF value:0 MOS-CQ value:0
Attempts number:1 Disconnect operation number:0
Connection fail number:0 Destination ip address:10.4.1.2
Last Good Probe Time: 2016-11-15 20:33:43.8
----End
Configuration Files
● CE-A configuration file
#
sysname CE-A
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.10.1.1 255.255.255.0
#
return
● U-PE1 configuration file
#
sysname U-PE1
#
mpls lsr-id 1.1.1.9
mpls
#
mpls l2vpn
#
mpls ldp
#
mpls ldp remote-peer 3.3.3.9
remote-ip 3.3.3.9
#
pw-template wwt
peer-address 3.3.3.9
control-word
#
interface GigabitEthernet1/0/0
undo shutdown
mpls l2vc 3.3.3.9 pw-template wwt 100
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.9 255.255.255.255
#
nqa test-instance test pwe3ping
test-type pwe3ping
local-pw-id 100
local-pw-type ppp
remote-pw-id 200
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 1.1.1.9 0.0.0.0
#
return
● P1 configuration file
#
sysname P1
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
undo shutdown
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
return
● S-PE configuration file
#
sysname S-PE
#
mpls lsr-id 3.3.3.9
mpls
#
mpls l2vpn
#
mpls switch-l2vc 5.5.5.9 200 between 1.1.1.9 100 encapsulation ethernet
#
mpls ldp
#
mpls ldp remote-peer 1.1.1.9
remote-ip 1.1.1.9
#
mpls ldp remote-peer 5.5.5.9
remote-ip 5.5.5.9
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
undo shutdown
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
return
● P2 configuration file
#
sysname P2
#
mpls lsr-id 4.4.4.9
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 10.4.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
undo shutdown
ip address 4.4.4.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 10.3.1.0 0.0.0.255
network 10.4.1.0 0.0.0.255
#
return
● U-PE2 configuration file
#
sysname U-PE2
#
mpls lsr-id 5.5.5.9
mpls
#
mpls l2vpn
#
mpls ldp
#
mpls ldp remote-peer 3.3.3.9
remote-ip 3.3.3.9
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.4.1.2 255.255.255.0
mpls
mpls ldp
#
pw-template wwt
peer-address 3.3.3.9
control-word
#
interface GigabitEthernet2/0/0
undo shutdown
mpls l2vc 3.3.3.9 pw-template wwt 200
#
interface LoopBack0
ip address 5.5.5.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 5.5.5.9 0.0.0.0
network 10.4.1.0 0.0.0.255
#
return
● CE-B configuration file
#
sysname CE-B
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.10.1.2 255.255.255.0
#
return
Networking Requirements
As shown in Figure 1-36, default routes are configured on DeviceA to import
traffic from DeviceC to DeviceB1 and DeviceB2. The default routes are associated
with an NQA test group that is bound to ICMP test instances test1 and test2 on
DeviceA.
In this example, interface1 and interface2 represent GE 1/0/1 and GE 1/0/2, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure NQA test instances and start the tests.
2. Configure an NQA test group.
3. Bind the NQA test group to test instances.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of DeviceB1 and DeviceB2.
Procedure
Step 1 Configure NQA test instances and start the tests.
<DeviceA> system-view
[~DeviceA] nqa test-instance admin1 test1
[~DeviceA-nqa-admin1-test1] test-type icmp
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/1
ip address 192.168.1.1 255.255.255.0
#
interface GigabitEthernet1/0/2
ip address 192.168.2.1 255.255.255.0
#
nqa test-instance admin1 test1
test-type icmp
destination-address ipv4 10.1.1.1
frequency 15
start now
#
nqa test-instance admin2 test2
test-type icmp
destination-address ipv4 10.2.2.2
frequency 15
start now
#
nqa group group1
description this is an nqa group
operator and
nqa test-instance admin1 test1
nqa test-instance admin2 test2
#
return
Networking Requirements
On the network shown in Figure 1-37, the performance of the Ethernet virtual
connection (EVC) between DeviceA and DeviceB is monitored.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure DeviceB (reflector) to loop traffic with a specified destination MAC
address through GE1/0/1 (reflector interface) to the initiator.
2. Configure initiator DeviceA and test the throughput, delay, and packet loss
rate.
3. Configure DeviceC's interface to join the specified VLAN.
Data Preparation
To complete the configuration, you need the following data:
● Configurations on DeviceB (reflector): MAC address (00e0-fc12-3456) of the
peer interface connected to UNI B.
● Configurations on DeviceA (initiator):
– Destination MAC address: 00e0-fc12-3456 (MAC address of the peer
interface connected to UNI B)
– Throughput test parameters: upper and lower thresholds of the packet
sending rate (100000 kbit/s and 10000 kbit/s, respectively), throughput
precision (1000 kbit/s), packet loss precision (81/10000), timeout period
of each packet sending rate (5s), packet sending size (70 bytes), and test
instance execution duration (100s)
– Delay test parameters: packet rate (99000 kbit/s), test duration (100s),
and interval (5s) at which the initiator sends test packets
– Packet loss rate test parameters: packet rate (99000 kbit/s) and test
duration (100s)
Procedure
Step 1 Configure reachable Layer 2 links between the initiator and reflector and add
Layer 2 interfaces to VLAN 10.
Step 2 Configure the reflector.
<DeviceB> system-view
[~DeviceB] nqa reflector 1 interface gigabitethernet 1/0/1 mac 00e0-fc12-3456 vlan 10
Step 3 Configure the initiator to conduct a throughput test and check the test results.
<DeviceA> system-view
[~DeviceA] nqa test-instance admin throughput
[*DeviceA-nqa-admin-throughput] test-type generalflow
[*DeviceA-nqa-admin-throughput] measure throughput
[*DeviceA-nqa-admin-throughput] destination-address mac 00e0-fc12-3456
[*DeviceA-nqa-admin-throughput] forwarding-simulation inbound-interface gigabitethernet 1/0/1
[*DeviceA-nqa-admin-throughput] rate 10000 100000
[*DeviceA-nqa-admin-throughput] interval seconds 5
[*DeviceA-nqa-admin-throughput] precision 1000
[*DeviceA-nqa-admin-throughput] fail-ratio 81
[*DeviceA-nqa-admin-throughput] datasize 70
[*DeviceA-nqa-admin-throughput] duration 100
[*DeviceA-nqa-admin-throughput] vlan 10
[*DeviceA-nqa-admin-throughput] start now
[*DeviceA-nqa-admin-throughput] commit
[~DeviceA-nqa-admin-throughput] display nqa results test-instance admin throughput
NQA entry(admin, throughput) :testflag is inactive ,testtype is generalflow
1 . Test 1 result: The test is finished, test mode is throughput
ID Size Throughput(Kbps) Precision(Kbps) LossRatio Completion
1 70 100000 1000 0.00% success
Step 4 Configure the initiator to conduct a latency test and check the test results.
[*DeviceA] nqa test-instance admin delay
[*DeviceA-nqa-admin-delay] test-type generalflow
Step 5 Configure the initiator to conduct a packet loss rate test and check the test results.
[*DeviceA] nqa test-instance admin loss
[*DeviceA-nqa-admin-loss] test-type generalflow
[*DeviceA-nqa-admin-loss] measure loss
[*DeviceA-nqa-admin-loss] destination-address mac 00e0-fc12-3456
[*DeviceA-nqa-admin-loss] forwarding-simulation inbound-interface gigabitethernet 1/0/1
[*DeviceA-nqa-admin-loss] datasize 64
[*DeviceA-nqa-admin-loss] rate 99000
[*DeviceA-nqa-admin-loss] duration 100
[*DeviceA-nqa-admin-loss] vlan 10
[*DeviceA-nqa-admin-loss] start now
[*DeviceA-nqa-admin-loss] commit
[~DeviceA-nqa-admin-loss] display nqa results test-instance admin loss
NQA entry(admin, loss) :testflag is inactive ,testtype is generalflow
1 . Test 1 result: The test is finished, test mode is loss
ID Size TxRate/RxRate(Kbps) TxCount/RxCount LossRatio Completion
1 64 99000/99000 653265345/653265345 0.00% finished
----End
Configuration Files
● Configuration file of DeviceA
#
sysname DeviceA
#
vlan 10
#
interface GigabitEthernet 1/0/1
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet 1/0/2
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
nqa test-instance admin throughput
test-type generalflow
duration 100
measure throughput
fail-ratio 81
destination-address mac 00e0-fc12-3456
datasize 70
rate 10000 100000
precision 1000
forwarding-simulation inbound-interface GigabitEthernet1/0/1
vlan 10
Usage Scenario
A generalflow test needs to be configured to monitor the performance of the
Ethernet network shown in Figure 1-38 between DeviceA and IP gateway DeviceB.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure DeviceA (reflector).
2. Configure DeviceB (initiator) and measure the delay.
Data Preparation
To complete the configuration, you need the following data:
● Configurations on DeviceA (reflector): simulated IP address 10.1.1.1 (CE's IP
address) and reflector interface (GE 1/0/1).
● Configurations on DeviceB (initiator):
– Destination IP address: 10.1.1.1 (IP address of the CE interface connected
to the reflector's GE 1/0/1)
– Source IP address: an address that resides on the same network segment
as the IP address of the initiator
– Delay test parameters: packet rate (99000 kbit/s), test duration (100s),
and interval (5s) for sending test packets
Procedure
Step 1 Configure Layer 2 devices so that Layer 3 routes between the CE and DeviceB are
reachable. For configuration details, see "Configuration Files" in this section.
Step 2 Configure the reflector.
<DeviceA> system-view
[~DeviceA] nqa reflector 1 interface gigabitethernet 1/0/1 ipv4 10.1.1.1 vlan 10
[*DeviceA] commit
Step 3 Configure the initiator to conduct a latency test and view test results.
<DeviceB> system-view
[~DeviceB] vlan 10
[*DeviceB-vlan10] commit
[~DeviceB-vlan10] quit
[~DeviceB] interface gigabitethernet 1/0/2.1
[*DeviceB-GigabitEthernet1/0/2.1] vlan-type dot1q 10
[*DeviceB-GigabitEthernet1/0/2.1] ip address 10.1.1.2 24
[*DeviceB-GigabitEthernet1/0/2.1] quit
[*DeviceB] arp static 10.1.1.1 00e0-fc12-3456 vid 10 interface GigabitEthernet 1/0/2.1
[*DeviceB] nqa test-instance admin delay
[*DeviceB-nqa-admin-delay] test-type generalflow
[*DeviceB-nqa-admin-delay] measure delay
[*DeviceB-nqa-admin-delay] destination-address ipv4 10.1.1.1
[*DeviceB-nqa-admin-delay] source-address ipv4 10.1.1.2
[*DeviceB-nqa-admin-delay] source-interface gigabitethernet 1/0/2.1
[*DeviceB-nqa-admin-delay] rate 99000
[*DeviceB-nqa-admin-delay] interval seconds 5
[*DeviceB-nqa-admin-delay] datasize 64
[*DeviceB-nqa-admin-delay] duration 100
[*DeviceB-nqa-admin-delay] start now
[*DeviceB-nqa-admin-delay] commit
[~DeviceB-nqa-admin-delay] display nqa results test-instance admin delay
NQA entry(admin, delay) :testflag is inactive ,testtype is generalflow
1 . Test 1 result: The test is finished, test mode is delay
ID Size Min/Max/Avg RTT(us) Min/Max/Avg Jitter(us) Completion
1 64 1/12/5 2/15/8 finished
----End
Configuration Files
● Configuration file of DeviceA
#
sysname DeviceA
#
vlan 10
#
nqa reflector 1 interface GigabitEthernet 1/0/1 ipv4 10.1.1.1 vlan 10
#
interface GigabitEthernet 1/0/1
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet 1/0/2
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
return
● Configuration file of DeviceB
#
sysname DeviceB
#
vlan 10
#
interface GigabitEthernet 1/0/1
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet 1/0/2.1
vlan-type dot1q 10
ip address 10.1.1.2 255.255.255.0
#
arp static 10.1.1.1 00e0-fc12-3456 vid 10 interface GigabitEthernet 1/0/2.1
#
nqa test-instance admin delay
test-type generalflow
destination-address ipv4 10.1.1.1
source-address ipv4 10.1.1.2
duration 100
measure delay
interval seconds 5
datasize 64
rate 99000
source-interface GigabitEthernet 1/0/2.1
#
return
Networking Requirements
On the network shown in Figure 1-39, it is required that Ethernet frame
transmission between DeviceB and DeviceC be checked to determine whether the
performance parameters meet SLAs.
Configuration Roadmap
1. Configure a reflector (DeviceC) and flow-based traffic filtering with the
reflection port set to GE1/0/1.
2. Configure DeviceB as the initiator and execute configuration and performance
tests.
Data Preparation
To complete the configuration, you need the following data:
● Service flow characteristics: destination and source MAC addresses, VLAN ID,
and UDP port numbers
● Bandwidth profile and service acceptance criteria (SAC)
Procedure
Step 1 Configure a reachable link between the initiator and reflector and add Layer 2
interfaces to VLAN 10.
Step 2 Configure the reflector.
[*DeviceC] nqa test-flow 1
[*DeviceC-nqa-testflow-1] vlan 10
[*DeviceC-nqa-testflow-1] udp destination-port 1234
[*DeviceC-nqa-testflow-1] udp source-port 5678
[*DeviceC-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3457
[*DeviceC-nqa-testflow-1] traffic-type mac source 00e0-fc12-3456
[*DeviceC-nqa-testflow-1] quit
[*DeviceC] nqa reflector 1 interface GigabitEthernet 1/0/1 test-flow 1 exchange-port agetime 0
[*DeviceC] commit
Step 3 Configure the initiator to perform configuration and performance tests and view
test results.
[*DeviceB] nqa test-flow 1
[*DeviceB-nqa-testflow-1] vlan 10
[*DeviceB-nqa-testflow-1] udp destination-port 1234
[*DeviceB-nqa-testflow-1] udp source-port 5678
[*DeviceB-nqa-testflow-1] cir simple-test enable
[*DeviceB-nqa-testflow-1] bandwidth cir 10000 eir 10000
[*DeviceB-nqa-testflow-1] sac flr 1000 ftd 1000 fdv 1000
[*DeviceB-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3457
[*DeviceB-nqa-testflow-1] traffic-type mac source 00e0-fc12-3456
[*DeviceB-nqa-testflow-1] traffic-policing test enable
[*DeviceB-nqa-testflow-1] color-mode 8021p green 0 7 yellow 0 7
[*DeviceB-nqa-testflow-1] quit
[*DeviceB] nqa test-instance admin ethernet
[*DeviceB-nqa-admin-ethernet] test-type ethernet-service
[*DeviceB-nqa-admin-ethernet] forwarding-simulation inbound-interface GigabitEthernet 1/0/1
[*DeviceB-nqa-admin-ethernet] test-flow 1
[*DeviceB-nqa-admin-ethernet] start now
[*DeviceB-nqa-admin-ethernet] commit
[~DeviceB-nqa-admin-ethernet] display nqa results test-instance admin ethernet
NQA entry(admin, ethernet) :testflag is inactive ,testtype is ethernet-service
1 . Test 1 result The test is finished
Status : Pass
Test-flow number : 1
Mode : Round-trip
Last step : Performance-test
Estimated total time :6
Real test time :6
1 . Configuration-test
Test-flow 1, CIR simple test
Begin : 2014-06-25 16:22:45.8
End : 2014-06-25 16:22:48.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9961/10075/10012
Min/Max/Mean FTD(us) : 99/111/104
Min/Max/Mean FDV(us) : 0/7/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Green
Begin : 2014-06-25 16:23:15.8
End : 2014-06-25 16:23:18.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9979/10054/10012
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 0/10/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Yellow
Begin : 2014-06-25 16:23:15.8
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
vlan batch 10
#
interface GigabitEthernet 1/0/1
portswitch
undo shutdown
port link-type trunk
port trunk allow-pass vlan 10
#
return
Networking Requirements
On the network shown in Figure 1-40, it is required that Ethernet frame
transmission between DeviceB and DeviceC be checked to determine whether the
performance parameters meet SLAs.
Configuration Roadmap
1. Configure DeviceC as the reflector and set filter criteria based on flows.
2. Configure DeviceB as the initiator and execute configuration and performance
tests.
Data Preparation
To complete the configuration, you need the following data:
● Service flow characteristics: destination and source MAC and IP addresses
and VLAN ID
● Bandwidth profile and service acceptance criteria (SAC)
NOTE
The link between the two user networks must be reachable. Otherwise, static ARP entries
must be configured.
Procedure
Step 1 Configure Layer 3 link reachability between the initiator and reflector.
Step 2 Configure the reflector.
[*DeviceC] nqa test-flow 1
[*DeviceC-nqa-testflow-1] vlan 10
[*DeviceC-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3459
[*DeviceC-nqa-testflow-1] traffic-type mac source 00e0-fc12-3458
[*DeviceC-nqa-testflow-1] traffic-type ipv4 destination 10.1.3.2
[*DeviceC-nqa-testflow-1] traffic-type ipv4 source 10.1.1.1
[*DeviceC-nqa-testflow-1] traffic-policing test enable
[*DeviceC-nqa-testflow-1] quit
[*DeviceC] nqa reflector 1 interface GigabitEthernet 1/0/1.1 test-flow 1 exchange-port agetime 0
[*DeviceC] commit
Step 3 Configure the initiator to perform configuration and performance tests and view
test results.
[*DeviceB] nqa test-flow 1
[*DeviceB-nqa-testflow-1] vlan 10
[*DeviceB-nqa-testflow-1] bandwidth cir 500000 eir 20000
[*DeviceB-nqa-testflow-1] sac flr 1000 ftd 10000 fdv 10000000
[*DeviceB-nqa-testflow-1] traffic-type mac destination 00e0-fc12-3457
[*DeviceB-nqa-testflow-1] traffic-type mac source 00e0-fc12-3456
[*DeviceB-nqa-testflow-1] traffic-type ipv4 destination 10.1.3.2
[*DeviceB-nqa-testflow-1] traffic-type ipv4 source 10.1.1.1
[*DeviceB-nqa-testflow-1] traffic-policing test enable
[*DeviceB-nqa-testflow-1] color-mode 8021p green 0 7 yellow 0 7
[*DeviceB-nqa-testflow-1] quit
[*DeviceB] nqa test-instance admin ethernet
[*DeviceB-nqa-admin-ethernet] test-type ethernet-service
[*DeviceB-nqa-admin-ethernet] forwarding-simulation inbound-interface GigabitEthernet 1/0/1.1
[*DeviceB-nqa-admin-ethernet] test-flow 1
[*DeviceB-nqa-admin-ethernet] start now
[*DeviceB-nqa-admin-ethernet] commit
[~DeviceB-nqa-admin-ethernet] display nqa results test-instance admin ethernet
NQA entry(admin, ethernet) :testflag is inactive ,testtype is ethernet-service
1 . Test 1 result The test is finished
Status : Pass
Test-flow number : 1
Mode : Round-trip
Last step : Performance-test
Estimated total time :6
Real test time :6
1 . Configuration-test
Test-flow 1, CIR simple test
Begin : 2014-06-25 16:22:45.8
End : 2014-06-25 16:22:48.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9961/10075/10012
Min/Max/Mean FTD(us) : 99/111/104
Min/Max/Mean FDV(us) : 0/7/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Green
Begin : 2014-06-25 16:23:15.8
End : 2014-06-25 16:23:18.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9979/10054/10012
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 0/10/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Yellow
Begin : 2014-06-25 16:23:15.8
End : 2014-06-25 16:23:18.8
Status : --
Min/Max/Mean IR(kbit/s) : 9979/10057/10013
Min/Max/Mean FTD(us) : 98/111/104
Min/Max/Mean FDV(us) : 1/11/5
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, Traffic policing test, Green
Begin : 2014-06-25 16:23:45.8
End : 2014-06-25 16:23:48.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 10039/10054/10045
Min/Max/Mean FTD(us) : 96/110/104
Min/Max/Mean FDV(us) : 1/9/4
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, Traffic policing test, Yellow
Begin : 2014-06-25 16:23:45.8
End : 2014-06-25 16:23:48.8
Status : --
Min/Max/Mean IR(kbit/s) : 12544/12566/12554
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 1/8/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
2 . Performance-test
Test-flow 1, Performance-test
Begin : 2014-06-25 16:24:15.8
End : 2014-06-25 16:39:15.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9888/10132/10004
----End
Configuration Files
● DeviceB configuration file
#
sysname DeviceB
#
interface GigabitEthernet 1/0/1
undo shutdown
#
interface GigabitEthernet 1/0/1.1
vlan-type dot1q 10
ip address 10.1.1.2 255.255.255.0
#
interface GigabitEthernet 1/0/2
undo shutdown
ip address 10.1.2.1 255.255.255.0
#
nqa test-flow 1
vlan 10
bandwidth cir 500000 eir 20000
sac flr 1000 ftd 10000 fdv 10000000
traffic-type mac destination 00e0-fc12-3457
traffic-type mac source 00e0-fc12-3456
traffic-type ipv4 destination 10.1.3.2
traffic-type ipv4 source 10.1.1.1
traffic-policing test enable
color-mode 8021p green 0 7 yellow 0 7
#
nqa test-instance admin ethernet
test-type ethernet-service
forwarding-simulation inbound-interface GigabitEthernet 1/0/1.1
test-flow 1
#
return
● DeviceC configuration file
#
sysname DeviceC
#
interface GigabitEthernet 1/0/1
undo shutdown
#
interface GigabitEthernet 1/0/1.1
vlan-type dot1q 10
ip address 10.1.3.1 255.255.255.0
#
interface GigabitEthernet 1/0/2
undo shutdown
ip address 10.1.2.2 255.255.255.0
#
nqa test-flow 1
vlan 10
traffic-type mac destination 00e0-fc12-3459
traffic-type mac source 00e0-fc12-3458
traffic-type ipv4 destination 10.1.3.2
traffic-type ipv4 source 10.1.1.1
traffic-policing test enable
#
nqa reflector 1 interface GigabitEthernet 1/0/1.1 test-flow 1 exchange-port agetime 0
#
return
Networking Requirements
After network deployment is complete and before services are provisioned, you
can configure an Ethernet service activation test to assess the network
performance, which is necessary for business planning and service promotion.
In this example, the destination MAC address is specified in a test instance to check the
network performance between CEs on both ends of a Layer 2 EVPN. In a Layer 3 scenario,
the destination IP address must be specified.
Interface1 and interface2 in this example represent GE1/0/0 and GE2/0/0, respectively.
Configuration Roadmap
The roadmap is as follows:
1. Configure a VXLAN tunnel between DeviceB and DeviceC.
2. Configure DeviceC as the reflector.
3. Configure DeviceB as the initiator and perform configuration and
performance tests.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interconnected interfaces of devices
● Service flow characteristics:
– Destination MAC address: 00e0-fc12-3467 (MAC address of GE2/0/0 on
DeviceD)
– Source MAC address: 00e0-fc12-3465 (MAC address of GE2/0/0 on
DeviceA)
– VLAN ID carried in Ethernet frames: 10
– UDP destination port number: 1234
– UDP source port number: 5678
● Bandwidth profile: 10000 kbit/s for both the CIR and EIR
● Service acceptance criteria: 1000/100000 for FLR and 1000 microseconds for
both FTD and FDV
Procedure
Step 1 Assign an IP address and a loopback address to each interface.
For configuration details, see Configuration Files in this section.
Step 2 Configure an IGP on the backbone network. In this example, OSPF is used.
For configuration details, see Configuration Files in this section.
Step 3 Configure a VXLAN tunnel between DeviceB and DeviceC.
For details about the configuration roadmap, see VXLAN Configuration. For
configuration details, see Configuration Files.
After a VXLAN tunnel is established, you can run the display vxlan tunnel
command on DeviceB or DeviceC to view VXLAN tunnel information. The
command output on DeviceB is used as an example.
[~DeviceB] display vxlan tunnel
Number of vxlan tunnel : 1
Tunnel ID Source Destination State Type Uptime
-----------------------------------------------------------------------------------
4026531841 1.1.1.1 2.2.2.2 up dynamic 00:12:56
Step 4 Configure communication between DeviceA and DeviceB, and between DeviceC
and DeviceD.
# Configure DeviceB.
[~DeviceB] interface gigabitethernet 2/0/0.1 mode l2
[*DeviceB-GigabitEthernet2/0/0.1] encapsulation dot1q vid 10
[*DeviceB-GigabitEthernet2/0/0.1] bridge-domain 10
[*DeviceB-GigabitEthernet2/0/0.1] commit
[~DeviceB-GigabitEthernet2/0/0.1] quit
# Configure DeviceA.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] interface gigabitethernet 2/0/0.1
[*DeviceA-GigabitEthernet2/0/0.1] ip address 10.100.0.1 24
[*DeviceA-GigabitEthernet2/0/0.1] vlan-type dot1q 10
[*DeviceA-GigabitEthernet2/0/0.1] commit
[~DeviceA-GigabitEthernet2/0/0.1] quit
Status : Pass
Min/Max/Mean IR(kbit/s) : 9961/10075/10012
Min/Max/Mean FTD(us) : 99/111/104
Min/Max/Mean FDV(us) : 0/7/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Green
Begin : 2014-06-25 16:23:15.8
End : 2014-06-25 16:23:18.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9979/10054/10012
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 0/10/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, CIR/EIR test, Yellow
Begin : 2014-06-25 16:23:15.8
End : 2014-06-25 16:23:18.8
Status : --
Min/Max/Mean IR(kbit/s) : 9979/10057/10013
Min/Max/Mean FTD(us) : 98/111/104
Min/Max/Mean FDV(us) : 1/11/5
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, Traffic policing test, Green
Begin : 2014-06-25 16:23:45.8
End : 2014-06-25 16:23:48.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 10039/10054/10045
Min/Max/Mean FTD(us) : 96/110/104
Min/Max/Mean FDV(us) : 1/9/4
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
Test-flow 1, Traffic policing test, Yellow
Begin : 2014-06-25 16:23:45.8
End : 2014-06-25 16:23:48.8
Status : --
Min/Max/Mean IR(kbit/s) : 12544/12566/12554
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 1/8/3
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
2 . Performance-test
Test-flow 1, Performance-test
Begin : 2014-06-25 16:24:15.8
End : 2014-06-25 16:39:15.8
Status : Pass
Min/Max/Mean IR(kbit/s) : 9888/10132/10004
Min/Max/Mean FTD(us) : 101/111/105
Min/Max/Mean FDV(us) : 0/8/2
FL Count/FLR : 0/0.000%
Disorder packets :0
Unavail Count/AVAIL : 0/0.000%
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1
vlan-type dot1q 10
ip address 10.100.0.1 255.255.255.0
#
ospf 1
import-route direct
area 0.0.0.0
network 10.100.0.0 0.0.0.255
#
return
● DeviceB configuration file
#
sysname DeviceB
#
evpn vpn-instance evpna bd-mode
route-distinguisher 1:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
ip vpn-instance evpna
ipv4-family
route-distinguisher 1:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
vxlan vni 100
#
bridge-domain 10
vxlan vni 1 split-horizon-mode
evpn binding vpn-instance evpna
#
interface Vbdif10
ip binding vpn-instance evpna
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.0.1 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Nve1
source 1.1.1.1
vni 1 head-end peer-list protocol bgp
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.2 enable
peer 2.2.2.2 advertise irb
peer 2.2.2.2 advertise encap-type vxlan
#
ospf 1
import-route direct
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.0.0.0 0.0.0.255
#
nqa test-flow 1
vlan 10
udp destination-port 1234
udp source-port 5678
cir simple-test enable
bandwidth cir 10000 eir 10000
sac flr 1000 ftd 1000 fdv 1000
traffic-type mac destination 00e0-fc12-3457
traffic-type mac source 00e0-fc12-3456
traffic-policing test enable
color-mode 8021p green 0 7 yellow 0 7
#
nqa test-instance admin ethernet
test-type ethernet-service
forwarding-simulation inbound-interface GigabitEthernet 2/0/0.1
test-flow 1
#
return
● DeviceC configuration file
#
sysname DeviceC
#
evpn vpn-instance evpna bd-mode
route-distinguisher 1:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
ip vpn-instance evpna
ipv4-family
route-distinguisher 1:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
vxlan vni 100
#
bridge-domain 10
vxlan vni 1 split-horizon-mode
evpn binding vpn-instance evpna
#
interface Vbdif10
ip binding vpn-instance evpna
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.0.0.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
interface Nve1
source 2.2.2.2
vni 1 head-end peer-list protocol bgp
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 advertise irb
peer 1.1.1.1 advertise encap-type vxlan
#
ospf 1
import-route direct
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.0.0.0 0.0.0.255
#
nqa test-flow 1
vlan 10
udp destination-port 1234
udp source-port 5678
traffic-type mac destination 00e0-fc12-3457
traffic-type mac source 00e0-fc12-3456
#
nqa reflector 1 interface GigabitEthernet 2/0/0.1 test-flow 1 exchange-port agetime 0
#
return
1.1.4.12.12 Example for Configuring Test Results to Be Sent to the FTP Server
Sending test results to an FTP server ensures that as many test results as
possible are preserved.
Networking Requirements
In Figure 1-42, DeviceA serves as the client to perform an ICMP test and send test
results to the FTP server through FTP.
Figure 1-42 Networking diagram of sending test results to the FTP server
Configuration Roadmap
The configuration roadmap is as follows:
1. Set parameters for configuring test results to be sent to the FTP server.
2. Start a test instance.
3. Verify the configurations.
Data Preparation
To complete the configuration, you need the following data:
● IP address of the FTP server
● User name and password used for logging in to the FTP server
● Name of a file in which test results are saved through FTP
● Interval at which test results are uploaded through FTP
Procedure
Step 1 Set parameters for configuring test results to be sent to the FTP server.
<DeviceA> system-view
[~DeviceA] nqa upload test-type icmp ftp ipv4 10.1.2.8 file-name test1 port 21 username ftp password
YsHsjx_202206 interval 600 retry 3
[*DeviceA] commit
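Step 2 Start the test instance. The commands below are a minimal sketch reconstructed
from the configuration file at the end of this example; the instance name admin and
the destination address 10.1.1.10 are taken from that file.
[~DeviceA] nqa test-instance admin icmp
[*DeviceA-nqa-admin-icmp] test-type icmp
[*DeviceA-nqa-admin-icmp] destination-address ipv4 10.1.1.10
[*DeviceA-nqa-admin-icmp] start now
[*DeviceA-nqa-admin-icmp] commit
Step 3 Verify the configurations. After the test runs, the upload status of the result
files can be displayed; in the output below, one file has been uploaded successfully
and another is still being uploaded.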
FileName : NQA_38ba47987301_icmp_20171014112319701_test1.xml
Status : Upload success
RetryTimes : 3
UploadTime : 2017-10-14 11:23:21.697
---------------------------------------------------------------
FileName : NQA_38ba47987301_icmp_20171014112421710_test1.xml
Status : Uploading
RetryTimes : 3
UploadTime : --
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet 1/0/0
ip address 10.1.1.11 255.255.255.0
#
interface GigabitEthernet 2/0/0
ip address 10.1.2.1 255.255.255.0
#
nqa upload test-type icmp ftp ipv4 10.1.2.8 file-name test1 port 21 username ftp password %^%#`P'|
9L1x62lN*b+C~wMTT|$EA7+z0XOFC_,B$M+"%^%# interval 600 retry 3
nqa test-instance admin icmp
test-type icmp
destination-address ipv4 10.1.1.10
start now
#
return
Context
The ping command is the most common debugging tool for testing device
reachability. It uses ICMP Echo messages to determine the following:
● Whether the remote device is available.
● The round-trip delay in communication with the remote device.
● Whether packet loss occurs.
The ping command labels each ICMP Echo message with a sequence ID that starts
at 1 and increments by 1. The number of ICMP Echo messages to be sent is
determined by the device, and the default number is 5. You can also set the
number of ICMP Echo messages to be sent using a command. If the destination is
reachable, it returns one ICMP Echo Reply message for each ICMP Echo message,
with sequence numbers identical to those of the ICMP Echo messages.
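For example, to send 10 ICMP Echo messages instead of the default five, specify
the -c option (the destination address 10.1.1.2 is illustrative):
<HUAWEI> ping -c 10 10.1.1.2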
Perform the following steps in any view on the client:
NOTE
The command format is for reference only. For details about the formats of the ping and
ping ipv6 commands, see Command Reference.
Procedure
Step 1 Run the ping command to check whether network connection is normal. You can
run either command to view detailed or brief information.
● To view detailed information, run the ping [ ip ] { [ -c count | { [ -i
{ interface-name | interface-type interface-number } | -nexthop nexthop-
address ] * | -si { source-interface-name | source-interface-type source-
interface-number } [ -ei { evcSubIfName | evcSubIfType evcSubIfNum } ] } | { -
s packetsize | -range [ [ min min-value | max max-value | step step-value ]
* ] } | -t timeout | -m time | -a source-ip-address | -h ttl-value | -p pattern | { -
On an IPv6 network, you need to run the ping ipv6 command. For details, see
Command Reference.
Note that on a network that carries APN6 services, you need to run the ping ipv6
{ -apn-id-ipv6 instance instName [ -a source-ipv6-address | -c echo-number | { -s
byte-number | -range [ [ min min-value | max max-value | step step-value ] * ] } |
-t timeout | { -tc traffic-class-value | -dscp dscp } | -vpn-instance vpn-instance-
name | -m wait-time | -name | -si { source-interface-name | source-interface-type
source-interface-number } | { -brief | [ -system-time | -ri | -detail ] * } | -p
pattern ] * destination-ipv6-address } command to check whether the network
connection is normal. The following is an example:
<HUAWEI> ping ipv6 -apn-id-ipv6 instance inst1 vpn-instance vpna -si Gigabitethernet 1/0/0
2001:DB8:22::1
PING 2001:DB8:22::1 : 56 data bytes, press CTRL_C to break
Reply from 2001:DB8:22::1
bytes=56 Sequence=1 hop limit=64 time=7 ms
Reply from 2001:DB8:22::1
bytes=56 Sequence=2 hop limit=64 time=2 ms
Reply from 2001:DB8:22::1
bytes=56 Sequence=3 hop limit=64 time=2 ms
Reply from 2001:DB8:22::1
bytes=56 Sequence=4 hop limit=64 time=2 ms
Reply from 2001:DB8:22::1
bytes=56 Sequence=5 hop limit=64 time=2 ms
Step 2 (Optional) Run the icmp-reply fast command to enable ICMP fast reply.
NOTE
The jitter and delay in ping processes can be large because the ICMP messages
used in ping operations must be processed by the CPUs of devices, which
introduces long delays. The details are as follows:
● To minimize the impact of ping attacks on itself, the NE9000 reduces the
processing priority of ICMP messages to the lowest level.
● The NE9000 uses a distributed processing system. ARP, ICMP, routing, and
other information is processed on the main control board. In a ping operation,
an interface board sends ICMP messages to the main control board for
processing. Then, the main control board returns the processed messages to
the interface board. Due to their low processing priority, ICMP messages are
always transmitted and processed after other packets, resulting in a long
delay.
To resolve ping delay and jitter issues, devices provide the ICMP fast reply function.
After this function is enabled, received ICMP request messages are not sent to the
CPU for processing. Instead, the packet forwarding engine (PFE) of an interface
board responds to the client with ICMP reply messages, greatly shortening the
ping delay.
NOTE
After the undo icmp-reply fast command is run in either the system view or the slot
view, ICMP fast reply is disabled on the corresponding interface board. To make the
function take effect on that interface board again, run the icmp-reply fast command
in both the system and slot views.
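The following is a minimal sketch of enabling ICMP fast reply in the system view
(the slot view variant is analogous):
<HUAWEI> system-view
[~HUAWEI] icmp-reply fast
[*HUAWEI] commit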
----End
Context
Multiple physical interfaces can be bundled into a logical trunk interface, and
these physical interfaces are trunk member interfaces. A specific transmission path
is used by each member interface. Path-specific service parameters, such as the
delay, jitter, and packet loss rate, are also different. Therefore, you cannot
determine which member interface is faulty when the quality of services on a
trunk interface deteriorates. To resolve this problem, perform a ping test to detect
each physical link to help locate the faulty link.
NOTE
The ping test applies when two devices are directly connected through trunk interfaces or Eth-
Trunk sub-interfaces.
Procedure
Step 1 Enable the receive end to monitor Layer 3 trunk member interfaces.
1. Run system-view
NOTE
This command takes effect on all Layer 3 trunk interfaces in a virtual system (VS).
Therefore, if you only need to test the connectivity of trunk links, disable this function
after the monitoring process is complete. Otherwise, the system keeps monitoring the
trunk member interfaces, consuming a lot of system resources.
3. Run commit
Step 2 Run either of the following commands to ping Layer 3 trunk member interfaces
from the transmit end:
NOTE
Fragmented packets do not support the fast reply function. In this case, configuring
the -fri keyword to implement trunk member interface-based fast reply does not take
effect.
● Final statistics: include the number of sent and received packets, percentage
of failure response packets, and minimum, maximum, and average response
time.
<HUAWEI> ping -a 192.168.1.1 -i Eth-Trunk 1 10.1.1.2
PING 10.1.1.2: 56 data bytes, press CTRL_C to break
Reply from 10.1.1.2: bytes=56 Sequence=1 ttl=255 time=170 ms
Reply from 10.1.1.2: bytes=56 Sequence=2 ttl=255 time=30 ms
Reply from 10.1.1.2: bytes=56 Sequence=3 ttl=255 time=30 ms
Reply from 10.1.1.2: bytes=56 Sequence=4 ttl=255 time=50 ms
Reply from 10.1.1.2: bytes=56 Sequence=5 ttl=255 time=50 ms
----End
Prerequisites
Before using the ping tcp command to check the time taken to set up a TCP
connection on an IPv4 network, ensure that the TCP server is enabled on the peer
end or the peer end is specified as the TCP server using the nqa-server
tcpconnect command.
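For example, the following sketch designates the peer end as the TCP server; the
listening address 10.1.1.1 and port 3000 are illustrative and match the ping tcp
example below:
<HUAWEI> system-view
[~HUAWEI] nqa-server tcpconnect 10.1.1.1 3000
[*HUAWEI] commit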
Procedure
Step 1 Run ping tcp [ -c count | -t timeout | -m interval | -h ttl | -vpn-instance vrfName |
-passroute | -a srcAddress ] * destAddress [ destPort ]
The time taken to set up a TCP connection on the IPv4 network is displayed.
For example:
<HUAWEI> ping tcp 10.1.1.1 3000
PING TCP 10.1.1.1: 3000, press CTRL_C to break
Reply from 10.1.1.1: Sequence=1 time=3 ms
Reply from 10.1.1.1: Sequence=2 time=3 ms
Reply from 10.1.1.1: Sequence=3 time=3 ms
Reply from 10.1.1.1: Sequence=4 time=3 ms
Reply from 10.1.1.1: Sequence=5 time=4 ms
--- TCP ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 3/3/4 ms
----End
Context
The tracert and tracert ipv6 commands are used to trace the gateways through
which a packet passes from the source to the destination. The maximum TTL
value that can be set for packets using the tracert or tracert ipv6 command is
255. Each time the source does not receive a reply after the configured period of
time elapses, it displays timeout information and sends another packet with the
TTL value incremented by 1. If timeout information is still displayed when the TTL
value is 255, the source considers that the destination is unreachable and the test
fails.
To prevent malicious users from forging ICMP Port Unreachable or Time Exceeded
messages in order to detect the IPv4 or IPv6 addresses of device interfaces, you
can specify the source IPv4 or IPv6 address of Port Unreachable or Time Exceeded
messages in the loopback interface view. If the tracert or tracert ipv6 command
is run to detect a remote address, the device uses the address of the loopback
interface as the source address of Port Unreachable or Time Exceeded messages.
NOTE
The command format is for reference only. For details about the formats of the tracert or
tracert ipv6 commands, see Command Reference.
Procedure
● On an IPv4 network:
a. (Optional) Configure the address of a loopback interface as the source IP
address of ICMP Port Unreachable or Time Exceeded messages.
i. Run system-view
The system view is displayed.
ii. Run interface loopback loopback-number
A loopback interface is created, and the loopback interface view is
displayed.
iii. (Optional) Run ip binding vpn-instance vpn-instance-name
The interface is bound to a VPN instance.
iv. Run ip icmp { ttl-exceeded | port-unreachable } source-address
The address of the loopback interface is configured as the source IP
address of ICMP Port Unreachable or Time Exceeded messages.
v. Run commit
The configuration is committed.
b. Run tracert [ -a source-ip-address | -f initTtl | -m maxTtl | -p destPort | -q
nqueries | { -vpn-instance vpn-instance-name [ peer peerIpv6 ] | -as } | -
w timeout | -v | -s size | { { -i { interface-name | interface-type interface-
number } | -nexthop nexthop-address | -passroute | -service-class
classValue | -pipe } * | -si { source-interface-name | source-interface-type
----End
Prerequisites
Before you start a test, run the lspv mpls-lsp-ping echo enable/lspv mpls-lsp-
ping echo enable ipv6 command to enable the device to respond to MPLS echo
request/MPLS echo request IPv6 packets.
NQA is a detection method deployed on the main control board. Both the initiator
and responder send LSP packets to the main control board for processing. If a
large number of packets are sent to the main control board, the CPU usage of the
main control board becomes high, affecting normal device running. To prevent this
problem, you can run the lspv mpls-lsp-ping cpu-defend cpu-defend command
to limit the rate at which MPLS echo request packets are sent to the main control
board.
If the MPLS packet length of an NQA test instance is greater than the MTU of a
specified MPLS tunnel, MPLS packets fail to pass through the tunnel. To allow the
packets to pass through the tunnel, run the fragment enable command to enable
MPLS packet fragmentation.
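The following sketch combines the first two prerequisites on one device; the rate
limit value 1000 is illustrative, and the fragment enable command can be added in
the same way if fragmentation is required:
<HUAWEI> system-view
[~HUAWEI] lspv mpls-lsp-ping echo enable
[*HUAWEI] lspv mpls-lsp-ping cpu-defend 1000
[*HUAWEI] commit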
Context
Perform the following steps in any view on the client.
Procedure
● Check the connectivity of an LDP LSP that carries IPv4 packets.
To check the connectivity of an LDP LSP that carries IPv4 packets, run the
ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval | -
r reply-mode | -s packet-size | -t time-out | -v | -g ] * ip destination-iphost
mask-length [ ip-address ] [ nexthop nexthop-address ] [ remote remote-
address ] command.
<HUAWEI> ping lsp -v ip 3.3.3.3 32
LSP PING FEC: IPV4 PREFIX 3.3.3.3/32 : 100 data bytes, press CTRL_C to break
Reply from 3.3.3.3: bytes=100 Sequence=1 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=2 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=3 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=4 time = 4 ms Return Code 3, Subcode 1
Reply from 3.3.3.3: bytes=100 Sequence=5 time = 5 ms Return Code 3, Subcode 1
--- FEC: IPV4 PREFIX 3.3.3.3/32 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 4/4/5 ms
--- FEC: RSVP IPV4 SESSION QUERY Tunnel1 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 2/2/4 ms
--- FEC: AUTO TE TUNNEL IPV4 SESSION QUERY Tunnel10 ping statistics ---
3 packet(s) transmitted
3 packet(s) received
0.00% packet loss
round-trip min/avg/max = 6/8/11 ms
--- FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel10 ping statistics ---
3 packet(s) transmitted
3 packet(s) received
0.00% packet loss
round-trip min/avg/max = 6/8/11 ms
--- FEC: SEGMENT ROUTING IPV4 PREFIX 3.3.3.9/32 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 2/6/13 ms
To check the connectivity of a BGP LSP that carries IPv4 packets, run the ping
lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval | -r
reply-mode | -s packet-size | -t time-out | -v | -g ] * bgp destination-iphost
mask-length [ vpn-instance vpn-name ] [ ip-address ] [ nexthop nexthop-
address [ out-label mplsLabel ] ] command.
<HUAWEI> ping lsp -c 2 bgp 4.4.4.4 32
LSP PING FEC: BGP LABELED IPV4 PREFIX 4.4.4.4/32/ : 100 data bytes, press CTRL_C to break
Reply from 4.4.4.4: bytes=100 Sequence=1 time=46 ms
Reply from 4.4.4.4: bytes=100 Sequence=2 time=2 ms
--- FEC: BGP LABELED IPV4 PREFIX 4.4.4.4/32 ping statistics ---
2 packet(s) transmitted
2 packet(s) received
0.00% packet loss
round-trip min/avg/max = 2/24/46 ms
● Check the connectivity of an LDP LSP interworking with an SR-MPLS BE tunnel.
To check the connectivity of an LDP LSP interworking with an SR-MPLS BE
tunnel, run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value
| -m interval | -r reply-mode | -s packet-size | -t time-out | -v | -g ] * ip
destination-iphost mask-length [ ip-address ] [ nexthop nexthop-address ]
[ remote remote-address ] command on the ingress to initiate a ping test to
the egress of the SR-MPLS BE tunnel.
<HUAWEI> ping lsp -c 3 ip 5.5.5.9 32 remote 5.5.5.9
LSP PING FEC: IPV4 PREFIX 5.5.5.9/32/ : 100 data bytes, press CTRL_C to break
Reply from 5.5.5.9: bytes=100 Sequence=1 time=3 ms
Reply from 5.5.5.9: bytes=100 Sequence=2 time=3 ms
Reply from 5.5.5.9: bytes=100 Sequence=3 time=3 ms
NOTE
You must run the lspv echo-reply fec-validation ldp disable command on the SR-
MPLS BE side to disable the LSPV response end from checking the LDP FEC.
● Check the connectivity of an SR-MPLS BE tunnel interworking with an LDP
LSP (the LDP end does not support interworking).
To check the connectivity of an SR-MPLS BE tunnel interworking with an LDP
LSP, run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -
m interval | -s packet-size | -t time-out | -v | -g ] * segment-routing ip
destination-address mask-length [ flex-algo flex-algo-id ] [ version draft2 ]
[ bypass ] remote-fec { ldp remoteipaddr remotemasklen | nil } command on
the ingress to initiate a ping test to the egress of the LDP LSP.
<HUAWEI> ping lsp -c 3 segment-routing ip 5.5.5.9 32 version draft2 remote-fec ldp 5.5.5.9 32
LSP PING FEC: IPV4 PREFIX 5.5.5.9/32 : 100 data bytes, press CTRL_C to break
Reply from 5.5.5.9: bytes=100 Sequence=1 time=9 ms
Reply from 5.5.5.9: bytes=100 Sequence=2 time=2 ms
Reply from 5.5.5.9: bytes=100 Sequence=3 time=3 ms
To check the connectivity of an inter-AS E2E SR-MPLS TE tunnel, run the ping
lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval | -s
packet-size | -t timeout | -v | -g | -r reply-mode ] * segment-routing { { auto-
tunnel srAutoTnlName version { draft2 | draft4 } } | te { tunnelName | ifType
ifNum } [ draft2 ] } [ remote remoteAddress ] [ hot-standby ] command.
<HUAWEI> ping lsp segment-routing te Tunnel 11 draft2
LSP PING FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel11 : 100 data bytes,
press CTRL_C to break
Reply from 5.5.5.9: bytes=100 Sequence=1 time=14 ms
Reply from 5.5.5.9: bytes=100 Sequence=2 time=12 ms
Reply from 5.5.5.9: bytes=100 Sequence=3 time=9 ms
Reply from 5.5.5.9: bytes=100 Sequence=4 time=11 ms
Reply from 5.5.5.9: bytes=100 Sequence=5 time=8 ms
--- FEC: SEGMENT ROUTING TE TUNNEL IPV4 SESSION QUERY Tunnel11 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 8/10/14 ms
● (Optional) Run the display lspv statistics command to check LSPV packet
statistics.
If the test using the ping lsp command fails, you can run this command to
check whether the fault occurs on the LSP or the device.
● (Optional) Run the reset lspv statistics command to clear LSPV packet
statistics.
----End
Follow-up Procedure
After the test is completed, you are advised to run the undo lspv mpls-lsp-ping
echo enable/undo lspv mpls-lsp-ping echo enable ipv6 command to disable the
device from responding to MPLS echo request/MPLS echo request IPv6 packets to
prevent system resource occupation.
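A minimal sketch of disabling the response function after the test:
<HUAWEI> system-view
[~HUAWEI] undo lspv mpls-lsp-ping echo enable
[*HUAWEI] commit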
Prerequisites
Before you start a test, run the lspv mpls-lsp-ping echo enable/lspv mpls-lsp-
ping echo enable ipv6 command to enable the device to respond to MPLS echo
request/MPLS echo request IPv6 packets.
NOTE
If the device interworks with a non-Huawei device, run the lspv echo-reply compatible fec
enable command to enable the device to respond to MPLS echo request packets with MPLS
echo reply packets that do not carry FEC information.
NQA is a detection method deployed on the main control board. Both the initiator
and responder send LSP packets to the main control board for processing. If a
large number of packets are sent to the main control board, the CPU usage of the
main control board becomes high, affecting normal device running. To prevent this
problem, you can run the lspv mpls-lsp-ping cpu-defend cpu-defend command
to limit the rate at which MPLS echo request packets are sent to the main control
board.
Context
Perform the following steps in any view on the client.
Procedure
● Check the path over which an LDP LSP that carries IPv4 packets is established
or locate the failure point on the path.
To check the path over which an LDP LSP that carries IPv4 packets is
established or locate the failure point on the path, run the tracert lsp [ -a
source-ip | -exp exp-value | -h ttl-value | -r reply-mode | -t time-out | -s size |
-g ] * ip destination-iphost mask-length [ ip-address ] [ nexthop nexthop-
address ] [ detail ] command.
<HUAWEI> tracert lsp ip 1.1.1.1 32
LSP Trace Route FEC: IPV4 PREFIX 1.1.1.1/32 , press CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 10.1.1.1/[3 ]
1 1.1.1.1 5 Egress
● Check the path over which a TE tunnel (RSVP-TE tunnel, static TE tunnel, or
dynamic TE tunnel) that carries IPv4 packets is established or locate the
failure point on the path.
To check the path over which a TE tunnel (RSVP-TE tunnel, static TE tunnel, or
dynamic TE tunnel) that carries IPv4 packets is established or locate the
failure point on the path, run the tracert lsp [ -a source-ip | -exp exp-value | -
h ttl-value | -r reply-mode | -t time-out | -s size | -g ] * te { tunnelName |
ifType ifNum } [ hot-standby | primary ] [ compatible-mode ] | auto-tunnel
auto-tunnelname [ detail ] command.
NOTE
You must run the lspv echo-reply fec-validation ldp disable command on the SR-
MPLS BE side to disable the LSPV response end from checking the LDP FEC.
● (Optional) Run the display lspv statistics command to check LSPV packet
statistics.
If the test performed using the tracert lsp command fails, you can run this
command to check whether the fault occurs on the LSP or the device.
● (Optional) Run the reset lspv statistics command to clear LSPV packet
statistics.
----End
Follow-up Procedure
After the test is completed, you are advised to run the undo lspv mpls-lsp-ping
echo enable/undo lspv mpls-lsp-ping echo enable ipv6 command to disable the
device from responding to MPLS echo request/MPLS echo request IPv6 packets to
prevent system resource occupation.
Usage Scenario
On a VPLS over P2MP network, ping can be used to check the following tunnels:
● P2MP LDP LSPs
● P2MP TE tunnels that are automatically generated
Pre-configuration Tasks
Before using ping to check the P2MP network connectivity, ensure that P2MP is
correctly configured.
Procedure
Step 1 Run the ping multicast-lsp command to check the connectivity of the following
tunnels on a P2MP network:
● P2MP LDP LSPs
ping multicast-lsp [ -a source-ip | -c count | -exp exp-value | -j jitter-value | -
m interval | -r reply-mode | -s packet-size | -t time-out | -v ] * mldp p2mp
root-ip root-ip-address { lsp lsp-id | opaque-value opaque-value } [ leaf-
destination leaf-destination ]
● P2MP TE tunnels that are automatically generated
ping multicast-lsp [ -a source-ip | -c count | -exp exp-value | -j jitter-value | -
m interval | -r reply-mode | -s packet-size | -t time-out | -v ] * te-auto-tunnel
auto-tunnel-name [ leaf-destination leaf-destination ]
# Use ping to check the P2MP LDP LSP connectivity.
<HUAWEI> ping multicast-lsp mldp p2mp root-ip 1.1.1.1 lsp-id 1
LSP PING FEC: Multicast P2MP LDP root-ip 1.1.1.1 opaque-value 01000400014497 : 100 data bytes, press
CTRL_C to break
Reply from 10.10.10.10: bytes=100 Sequence=1 time=60 ms
Reply from 10.10.10.10: bytes=100 Sequence=2 time=50 ms
Reply from 10.10.10.10: bytes=100 Sequence=3 time=30 ms
Reply from 10.10.10.10: bytes=100 Sequence=4 time=100 ms
Reply from 10.10.10.10: bytes=100 Sequence=5 time=80 ms
Reply from 6.6.6.6: bytes=100 Sequence=4 time=50 ms
Reply from 5.5.5.5: bytes=100 Sequence=4 time=60 ms
Reply from 3.3.3.3: bytes=100 Sequence=4 time=60 ms
Reply from 2.2.2.2: bytes=100 Sequence=5 time=30 ms
Reply from 6.6.6.6: bytes=100 Sequence=5 time=40 ms
Reply from 5.5.5.5: bytes=100 Sequence=5 time=80 ms
Reply from 3.3.3.3: bytes=100 Sequence=5 time=80 ms
----End
1.1.5.3.4 Using Tracert to Check the Forwarding Path on a P2MP MPLS Network
Run the tracert commands to check path information on a Point-to-Multipoint
(P2MP) MPLS network so that faults can be located.
Usage Scenario
On a VPLS over P2MP network, tracert can be used to check the following tunnels:
● P2MP label distribution protocol (LDP) label switched paths (LSPs)
● P2MP TE tunnels that are automatically generated
Pre-configuration Tasks
Before using tracert to check the P2MP network connectivity, ensure that P2MP is
correctly configured.
Procedure
Step 1 Run the tracert multicast-lsp command to check path information about the
following tunnels on a VPLS over P2MP network:
● P2MP LDP LSPs
● P2MP TE tunnels that are automatically generated
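The following is a sketch of tracing a P2MP LDP LSP, modeled on the ping
multicast-lsp example in the previous section; the root IP address and LSP ID are
illustrative, and the full option list is described in the command reference:
<HUAWEI> tracert multicast-lsp mldp p2mp root-ip 1.1.1.1 lsp-id 1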
----End
Prerequisites
Before you run the ping vpls command to check PW connectivity, ensure that the
VPLS network has been configured correctly.
Context
To check whether a PW on the VPLS network is faulty, run the ping vpls
command.
Procedure
Step 1 To locate the faulty node on the VPLS network, run either of the following
commands as required:
● In BGP mode, run the ping vpls [ -c echo-number | -m time-value | -s data-
bytes | -t timeout-value | -exp exp-value | -r reply-mode | -v | -g ] * vsi vsi-
name local-site-id remote-site-id [ bypass -si interface-type interface-
number ] command.
● In LDP mode, run the ping vpls [ -c echo-number | -m time-value | -s data-
bytes | -t timeout-value | -exp exp-value | -r reply-mode | -v | -g ] * vsi vsi-
name peer peer-address [ negotiate-vc-id vc-id ] [ control-word [ remote
remote-address remote-pw-id [ sender sender-address ] ] ] [ bypass -si
interface-type interface-number ] command.
The ping vpls command output contains the following information:
● Response to each ping VPLS packet. If no response packet is received after the
corresponding timer expires, the message reading "Request time out" is
displayed. If a response packet is received, the number of data bytes, packet
sequence number, TTL, and response time are displayed.
● Final statistics: include the number of sent packets, number of received
packets, percentage of sent packets with failed responses, and minimum,
maximum, and average response times.
For example:
<HUAWEI> ping vpls vsi a2 peer 10.1.1.1
PW PING : FEC 128 PSEUDOWIRE (NEW). Type = vlan, ID = 2 : 100 data bytes, press CTRL_C to break
Reply from 10.1.1.1: bytes=100 Sequence=1 time=60 ms
Reply from 10.1.1.1: bytes=100 Sequence=2 time=50 ms
Reply from 10.1.1.1: bytes=100 Sequence=3 time=60 ms
Reply from 10.1.1.1: bytes=100 Sequence=4 time=60 ms
Reply from 10.1.1.1: bytes=100 Sequence=5 time=60 ms
--- FEC: FEC 128 PSEUDOWIRE (NEW). Type = vlan, ID = 2 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 50/58/60 ms
----End
Prerequisites
Before you run the tracert vpls command to check PW connectivity, ensure that
the VPLS network has been configured correctly.
Context
Perform the following steps on a PE of a VPLS network:
Procedure
Step 1 To locate the faulty node on the VPLS network, run either of the following
commands as required:
● In BGP mode, run the tracert vpls [ -exp exp-value | -f first-ttl | -m max-ttl | -
r reply-mode | -t timeout-value | -g ] * vsi vsi-name local-site-id remote-site-id
[ full-lsp-path ] [ detail ] [ bypass -si interface-type interface-number ]
command.
● In LDP mode, run the tracert vpls [ -exp exp-value | -f first-ttl | -m max-ttl | -
r reply-mode | -t timeout-value | -g ] * vsi vsi-name peer peer-address
[ negotiate-vc-id vc-id ] [ full-lsp-path ] [ control-word ] [ pipe | uniform ]
[ detail ] [ bypass -si interface-type interface-number ] command.
Run the tracert vpls command to locate VPLS network faults.
<HUAWEI> tracert vpls vsi test 10 10 full-lsp-path
PW Trace Route FEC: L2 VPN ENDPOINT. Sender VEID = 10, Remote VEID = 20, press CTRL_C to break
TTL Replier Time Type Downstream
0 Ingress 10.1.1.2/[294929 32894 32888 ]
1 10.1.1.2 93 ms Transit 10.2.1.2/[32925 3 ]
2 10.2.1.2 1 ms Transit 10.3.1.2/[32881 ]
3 4.4.4.4 2 ms Egress
The preceding command output contains information about each node along the
PW and the response time of each hop.
----End
Prerequisites
Before testing pseudo wire (PW) connectivity using the ping vc command, ensure
that the VPWS network has been configured correctly.
Context
To check whether a PW on the VPWS network is faulty, run the ping vc command.
When the PW is Up, you can locate faults, such as forwarding entry loss or errors.
Table 1-10 describes check modes and usage scenarios.
Procedure
● Control-word mode:
To monitor PW connectivity using the Control-word mode, run the ping vc vc-
type pw-id [ peer-address ] [ -c echo-number | -m time-value | -s data-bytes |
-t timeout-value | -exp exp-value | -r reply-mode | -v | -g ]* control-word [ ttl
ttl-value ] [ pipe | uniform ] [ bypass -si interface-name | interface-type
interface-number ] command.
● Label-alert mode:
To monitor PW connectivity using the Label-alert mode, run the ping vc vc-
type pw-id [ peer-address ] [ -c echo-number | -m time-value | -s data-bytes |
-t timeout-value | -exp exp-value | -r reply-mode | -v | -g ] * label-alert [ no-
control-word ] [ bypass -si interface-name | interface-type interface-
number ] command.
● TTL mode:
To monitor PW connectivity using the TTL mode, run the ping vc vc-type pw-
id [ peer-address ] [ -c echo-number | -m time-value | -s data-bytes | -t
timeout-value | -exp exp-value | -r reply-mode | -v | -g ] * normal [ no-
control-word ] [ ttl ttl-value ] [ pipe | uniform ] command.
For example:
<HUAWEI> ping vc ethernet 100 control-word
PW PING : FEC 128 PSEUDOWIRE (NEW). Type = ethernet, ID = 100 : 100 data bytes, press CTRL_C
to break
Reply from 10.10.10.10: bytes=100 Sequence=1 time = 140 ms
Reply from 10.10.10.10: bytes=100 Sequence=2 time = 40 ms
Reply from 10.10.10.10: bytes=100 Sequence=3 time = 30 ms
Reply from 10.10.10.10: bytes=100 Sequence=4 time = 50 ms
Reply from 10.10.10.10: bytes=100 Sequence=5 time = 50 ms
--- FEC: FEC 128 PSEUDOWIRE (NEW). Type = ethernet, ID = 100 ping statistics---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 30/62/140 ms
----End
Context
Perform the following steps on the PE of a PWE3 network:
Procedure
Step 1 To locate the faulty node on a PWE3 network, run any of the following commands
as required:
● To monitor connectivity of the PWE3 network through the control word
channel, run:
tracert vc vc-type pw-id [ peer-address ] [ -exp exp-value | -f first-ttl | -m
max-ttl | -r reply-mode | -t timeout-value | -g ] * control-word [ ptn-mode |
full-lsp-path ] [ pipe | uniform ] [ detail ]
tracert vc vc-type pw-id [ peer-address ] [ -exp exp-value | -f first-ttl | -m
max-ttl | -r reply-mode | -t timeout-value | -g ] * control-word remote
remote-ip-address [ ptn-mode | full-lsp-path ] [ pipe | uniform ] [ detail ]
[ bypass -si interface-name | interface-type interface-number ]
● To monitor connectivity of the PWE3 network through the label alert channel,
run:
tracert vc vc-type pw-id [ peer-address ] [ -exp exp-value | -f first-ttl | -m
max-ttl | -r reply-mode | -t timeout-value | -g ] * label-alert [ no-control-
word ] [ full-lsp-path ] [ pipe | uniform ] [ detail ] [ bypass -si interface-
name | interface-type interface-number ]
● To monitor connectivity of the PWE3 network in normal mode, run:
tracert vc vc-type pw-id [ peer-address ] [ -exp exp-value | -f first-ttl | -m
max-ttl | -r reply-mode | -t timeout-value | -g ] * normal [ remote remote-ip-
address ] [ full-lsp-path ] [ pipe | uniform ] [ detail ] [ bypass -si
interface-name | interface-type interface-number ]
Before using the tracert vc command to monitor PWE3 network connectivity,
perform the following operations:
● Configure the PWE3 network correctly.
● If the control word channel is used, run the control-word command in the
PW template view to enable the control word function.
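For example, a minimal sketch of enabling the control word in a PW template (the
template name pwt1 is illustrative):
<HUAWEI> system-view
[~HUAWEI] pw-template pwt1
[*HUAWEI-pw-template-pwt1] control-word
[*HUAWEI-pw-template-pwt1] commit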
<HUAWEI> tracert vc vlan 100 control-word remote 4.1.1.2 full-lsp-path
TTL Replier Time Type Downstream
0 Ingress 1.1.1.2/[1025 ]
1 1.1.1.2 230 ms Transit 2.1.1.2/[1301 ]
2 2.1.1.2 230 ms Transit 3.1.1.2/[1208 ]
3 3.1.1.2 100 ms Transit 4.1.1.2/[3 ]
4 4.1.1.2 150 ms Egress
The preceding command output contains information about each node along the
PW and the response time of each hop.
----End
Procedure
● Run the ping vpls mac mac-address vsi vsi-name [ vlan vlan-id | -c count | -
m time-value | -s packetsize | -t timeout | -exp exp | -r replymode | -h ttl | -a
source-ip-address | -g ] * command to perform a VPLS MAC ping test in
common packet sending mode.
<HUAWEI> ping vpls mac 00e0-fc12-3456 vsi a1
Ping mac 00e0-fc12-3456 vsi a1: 142 data bytes, press CTRL_C to break
Reply from 10.1.1.2: bytes=142 Sequence=1 time=16 ms
Reply from 10.1.1.2: bytes=142 Sequence=2 time=4 ms
Reply from 10.1.1.2: bytes=142 Sequence=3 time=7 ms
Reply from 10.1.1.2: bytes=142 Sequence=4 time=8 ms
Reply from 10.1.1.2: bytes=142 Sequence=5 time=8 ms
● Run the ping vpls mac mac-address vsi vsi-name rapid [ vlan vlan-id | -c
rapidCount | -s packetsize | -t timeout | -exp exp | -r replymode | -h ttl | -a
source-ip-address | -g ] * command to perform a VPLS MAC ping test in
rapid packet sending mode.
<HUAWEI> ping vpls mac 00e0-fc12-3456 vsi a1 rapid
Ping mac 00e0-fc12-3456 vsi a1: 142 data bytes, press CTRL_C to break
!!!!!
NOTE
! indicates that the path is reachable, and . indicates that the path is unreachable.
● (Optional) Run the display vpls-ping statistics command to check VPLS MAC
ping packet statistics.
If the test using the ping vpls mac command fails, you can run this command
to check whether a VPLS fault or a device fault occurs.
● (Optional) Run the reset vpls-ping statistics command to clear VPLS MAC
ping packet statistics.
----End
Prerequisites
Before using the ping evpn command to check EVPN VPLS network connectivity,
ensure that the EVPN VPLS network has been correctly configured.
Context
Perform the following steps in any view.
Procedure
● Check the connectivity of the tunnel whose public network type is EVPN VPLS
over LDP/TE/BGP LSP/SR-MPLS BE/SR-MPLS TE/SR-MPLS TE Policy.
Run the ping evpn bridge-domain bd-id [ vlan vlan-id ] mac mac-address [ -
a source-ip | -c count | -m interval | -s packet-size | -t time-out | -r reply-
mode | -nexthop nexthop-address ] * command to check the EVPN VPLS
status and roughly locate EVPN VPLS exceptions.
<HUAWEI> ping evpn bridge-domain 100 mac 00e0-fc12-3456
Ping bridge-domain 100 mac 00e0-fc12-3456 : 110 data bytes, press CTRL_C to break
Tunnel-Type: VXLAN; Peer-Address: 1.1.1.1
Reply from 1.1.1.1: bytes=110 sequence=1 time < 1ms
Reply from 1.1.1.1: bytes=110 sequence=2 time < 1ms
Reply from 1.1.1.1: bytes=110 sequence=3 time < 1ms
Reply from 1.1.1.1: bytes=110 sequence=4 time < 1ms
Reply from 1.1.1.1: bytes=110 sequence=5 time < 1ms
--- bridge-domain: 100 00e0-fc12-3456 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/1/1 ms
● Check the connectivity of the tunnel whose public network type is EVPN VPLS
over SRv6 BE/SRv6 TE Policy/SRv6 TE flow group.
– Run the ping evpn vpn-instance evpn-name mac mac-address [ -a
source-ip | -c count | -m interval | -s packet-size | -t time-out | -r reply-
mode | -nexthop nexthop-address ] * command to check the EVPN VPLS
status and roughly locate EVPN VPLS exceptions.
<HUAWEI> ping evpn vpn-instance evpna mac 00e0-fc12-3456 -c 3 -s 200
Ping vpn-instance evpna mac 00e0-fc12-3456 : 200 data bytes, press CTRL_C to break
Tunnel-Type: SRv6 TE Policy; Peer-Address: 2001:DB8:100::1:0:5F
Reply from 2001:DB8:100::1:0:5F: bytes=200 sequence=1 time = 11ms
Reply from 2001:DB8:100::1:0:5F: bytes=200 sequence=2 time = 10ms
Reply from 2001:DB8:100::1:0:5F: bytes=200 sequence=3 time = 10ms
--- vpn-instance: evpna 00e0-fc12-3456 ping statistics ---
3 packet(s) transmitted
3 packet(s) received
0.00% packet loss
round-trip min/avg/max = 10/10/11 ms
– Run the ping evpn bridge-domain bd-id [ vlan vlan-id ] mac mac-
address [ -a source-ip | -c count | -m interval | -s packet-size | -t time-out
| -r reply-mode | -nexthop nexthop-address | { -service-class classValue |
-te-class teClassValue } ] * command to check the EVPN VPLS status and
roughly locate EVPN VPLS exceptions.
NOTE
You can specify a VLAN only if the MAC address learning mode is set to qualify.
<HUAWEI> ping evpn bridge-domain 100 mac 00e0-fc12-3456
Ping bridge-domain 100 mac 00e0-fc12-3456 : 110 data bytes, press CTRL_C to break
Tunnel-Type: SRv6 TE Policy; Peer-Address: 2001:DB8:100::1:0:5F
Reply from 2001:DB8:100::1:0:5F: bytes=110 sequence=1 time < 1ms
Reply from 2001:DB8:100::1:0:5F: bytes=110 sequence=2 time < 1ms
Reply from 2001:DB8:100::1:0:5F: bytes=110 sequence=3 time < 1ms
Reply from 2001:DB8:100::1:0:5F: bytes=110 sequence=4 time < 1ms
Reply from 2001:DB8:100::1:0:5F: bytes=110 sequence=5 time < 1ms
--- bridge-domain: 100 00e0-fc12-3456 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 1/1/1 ms
----End
Prerequisites
Before running the tracert evpn command to locate a forwarding fault on an
EVPN VPLS network, ensure that the EVPN VPLS network has been correctly
configured.
Context
Perform the following steps in any view.
Procedure
● Locate a forwarding fault on a tunnel whose public network type is EVPN
VPLS over LDP/TE/BGP LSP/SR-MPLS BE/SR-MPLS TE/SR-MPLS TE Policy.
Run the tracert evpn vpn-instance evpn-name mac mac-addr [ -a source-ip |
-s pkt-size | -t timeout | -h max-ttl | -r reply-mode | -nexthop next-hop ]*
[ pipe | uniform ] [ detail ] command to check the EVPN VPLS status and
locate exceptions on a node along the EVPN VPLS path.
<HUAWEI> tracert evpn vpn-instance evrf1 mac 00e0-fc12-3456
Tracert vpn-instance evrf1 mac 00e0-fc12-3456 : 30 hops max, press CTRL_C to break
Tunnel-Type: MPLS; Peer-Address: 3.3.3.9
TTL Replier Time Type Hit Downstream
0 Ingress N 10.1.1.2/[48061 48060 ]
1 10.1.1.2 5 ms Transit N 10.3.1.2/[3 ]
2 3.3.3.9 3 ms Egress Y
● Locate a forwarding fault on a tunnel whose public network type is EVPN
VPLS over SRv6 BE/SRv6 TE Policy/SRv6 TE flow group.
– Run the tracert evpn bridge-domain bd-id [ vlan vlan-id ] mac mac-
address [ -a source-ip | -s pkt-size | -t timeout | -h max-ttl | -r reply-mode
| -nexthop next-hop | { -service-class classValue | -te-class
teClassValue } ] * [ pipe | uniform ] [ detail ] command to check the
EVPN VPLS status and locate a faulty node along the EVPN VPLS path.
<HUAWEI> tracert evpn bridge-domain 10 mac 00e0-fc12-3456
Tracert bridge-domain 10 mac 00e0-fc12-3456 : 30 hops max, press CTRL_C to break
Tunnel-Type: SRv6 TE Policy; Peer-Address: 2001:DB8:3::3
TTL Replier Time Type Hit Downstream
0 Ingress N 2001:DB8:21::1:0:0
1 2001:DB8:20::2 6 ms Transit N 2001:DB8:31::1:0:0
2 2001:DB8:31::1:0:22 4 ms Egress Y
----End
Prerequisites
Before using the ping evpn vpws command to check EVPN VPWS network
connectivity, ensure that the EVPN VPWS network has been correctly configured.
Context
Perform the following steps in any view.
Procedure
● Check the EVPN VPWS network connectivity.
– Check the connectivity of the tunnel whose public network type is EVPN
VPWS over LDP/TE/BGP LSP/SR-MPLS BE/SR-MPLS TE/SR-MPLS TE Policy.
Run the ping evpn vpws local-ce-id remote-ce-id [ vpn-instance evpn-
name ] [ control-word ] [ -a source-ip | -c count | -exp exp-value | -m
interval | -s packet-size | -t time-out | -r reply-mode | -tc tc | backup ] *
command to check the EVPN VPWS status and roughly locate the EVPN
VPWS exceptions.
NOTE
If the local service ID is not globally unique, the vpn-instance evpn-name parameter
must be specified.
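For example, the following sketch checks the connectivity between the local CE
with ID 1 and the remote CE with ID 2; the CE IDs are illustrative:
<HUAWEI> ping evpn vpws 1 2 -c 3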
----End
Prerequisites
Before running the tracert evpn vpws command to locate a forwarding fault on
an EVPN VPWS network, ensure that the EVPN VPWS network has been correctly
configured.
Context
Perform the following steps in any view.
Procedure
● Locate a forwarding fault on an EVPN VPWS network.
– Locate a forwarding fault on a tunnel whose public network type is EVPN
VPWS over LDP/TE/BGP LSP/BGP Localifnet/SR-MPLS BE/SR-MPLS TE/SR-
MPLS TE Policy.
Run the tracert evpn vpws local-ce-id remote-ce-id [ vpn-instance
evpn-name ] [ control-word ] [ -a source-ip | -exp exp-value | -s packet-
size | -t timeout | -h max-ttl | -r reply-mode | -tc tc | backup ] * [ pipe |
uniform ] command to check the EVPN VPWS status and locate the
faulty node on the EVPN VPWS path.
NOTE
If the local service ID is not globally unique, the vpn-instance evpn-name parameter
must be specified.
----End
Prerequisites
The VPLS network has been correctly configured, and the specified virtual service
instance (VSI) is Up.
Context
To check the link reachability between a PE and a CE on a VPLS network, run the
ce-ping command.
NOTICE
The source IP address specified in the command must be an unused address on the
same network segment as the CE. If an address in use is specified, data
transmission on the VPLS network may be affected.
Procedure
Step 1 Run the following command to monitor the connectivity between a PE and a CE:
ce-ping ip-address vsi vsi-name source-ip source-ip-address [ mac mac-address ]
[ interval interval | count count ] *
For example:
<HUAWEI> ce-ping 10.1.1.1 vsi abc source-ip 10.1.1.2 mac e024-7fa4-d2cb interval 2 count 5
Info: If the designated source IP address is in use, it could cause the abnormal data transmission in VPLS
network. Are you sure the source-ip is unused in this VPLS? [Y/N]:y
Ce-ping is in process...
10.1.1.1 is used by 00e0-fc12-3456
----End
Prerequisites
The network has been correctly configured, and the specified BD is Up.
Context
An EVC model unifies the Layer 2 bearer service model and configuration model.
In an EVC model, you can use CE ping to check the link reachability between a PE
and a CE in a specified BD. For details about EVCs, see HUAWEI NetEngine9000
Feature Description > Local Area Network. For configuration details, see EVC
Configuration.
When using CE ping to check the link reachability between a PE and a CE, you
must specify a source IP address that meets the following conditions:
● The source IP address must be on the same network segment as the CE's IP
address. If they are on different network segments, the CE considers received
CE Ping packets invalid and discards them.
● The source IP address must be an unused IP address in the specified BD. If you
specify an IP address that is in use, CE Ping packets cannot be properly
forwarded, and the user using that IP address cannot access the Internet. If
you specify a gateway IP address as the source IP address, no users can access
the Internet.
To avoid this problem, do not specify a used IP address as the source IP address.
Procedure
Step 1 Run the ce-ping ip-address bd bd-id source-ip source-ip-address [ mac mac-
address ] [ interval interval | count count ] * command in any view to check the
link reachability between a PE and a CE.
<HUAWEI> ce-ping 10.1.1.1 bd 123 source-ip 10.1.1.2 mac e024-7fa4-d2cb interval 2 count 5
Info: If the designated source IP address is in use, it could cause the abnormal data transmission in EVC
network. Are you sure the source-ip is unused in this EVC? [Y/N]:y
Ce-ping is in process...
10.1.1.1 is used by 00e0-fc12-3456
----End
Prerequisites
The EVPN has been correctly configured.
Context
To check the connectivity from a PE to a CE on an EVPN, run the ce-ping
command.
NOTICE
The source IP address specified in the command must be an unused address on the
same network segment as the CE. If an address in use is specified, data
transmission on the EVPN may be affected.
Procedure
Step 1 Run the ce-ping ip-address evpn evpn-name source-ip source-ip-address [ mac
mac-address ] [ count count | interval interval ] * command in any view to check
the connectivity from the PE to the CE.
<HUAWEI> ce-ping 10.1.1.1 evpn huawei123 source-ip 10.1.1.12 mac e024-7fa4-d2cb interval 2 count 5
Info: If the designated source IP address is in use, it could cause the abnormal data transmission in EVPN
network. Are you sure the source-ip is unused in this EVPN? [Y/N]:y
Ce-ping is in process...
10.1.1.1 is used by 00-e0-fc-12-34-56
----End
Context
To manually monitor the connectivity between two devices, you can send test
packets and wait for a reply to test whether the destination device is reachable.
● To check the connectivity of the link between two devices on a network
without MDs, MAs, and MEPs configured, use GMAC ping.
● To check the connectivity of the link between MEPs or between a MEP and a
MIP in the same MA on a network with MDs, MAs, and MEPs configured, use
802.1ag MAC ping.
Pre-configuration Tasks
No pre-configuration task is required for GMAC ping.
Before performing 802.1ag MAC ping, complete the following task:
Context
GMAC ping has principles similar to those of 802.1ag MAC ping. The difference is
that a source device does not need to be a MEP, and a destination device does not
need to be a MEP or maintenance association intermediate point (MIP). In other
words, GMAC ping can be implemented without the need to configure an MD,
MA, or MEP on the source, intermediate, or destination device.
Enable the GMAC ping function on the source and destination devices. The
intermediate devices must have the bridge function to directly forward messages.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ping mac enable
The GMAC ping function is enabled globally.
If the GMAC ping function is enabled:
● A source device starts the GMAC ping function by sending a loopback
message (LBM) to a destination device.
● After receiving the LBM, the destination device replies to the source device
with a loopback reply (LBR).
Step 3 Run commit
The configuration is committed.
Step 4 (Optional) In a VLAN scenario: Run ping mac mac-address vlan vlan-id
[ interface interface-type interface-number | -c count | -s packetsize | -t timeout |
-p priority-value ] *
The VLAN network connectivity is checked.
The following shows an example:
<HUAWEI> system-view
[~HUAWEI] ping mac enable
[*HUAWEI] commit
[~HUAWEI] ping mac 00e0-fc12-3456 vlan 10 -c 2 -s 112
Reply from 00e0-fc12-3456: bytes = 112 time < 1ms
Reply from 00e0-fc12-3456: bytes = 112 time < 1ms
Packets: Sent = 2, Received = 2, loss = 0 (0.00% loss)
Minimum = 1ms, Maximum = 1ms, Average = 1ms
Step 5 (Optional) In a VLL scenario: Run ping mac mac-address l2vc l2vc-id { raw |
tagged } [ interface interface-type interface-number | { pe-vid pe-vid ce-vid ce-
vid | dot1q-vlan vlan-id } | -c count | -s packetsize | -t timeout | -p priority-value ] *
Step 6 (Optional) In a VPLS scenario: Run ping mac mac-address vsi vsi-name
[ interface interface-type interface-number | { pe-vid pe-vid ce-vid ce-vid |
dot1q-vlan vlan-id } | -c count | -s packetsize | -t timeout | -p priority-value ] *
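For example, a sketch for the VPLS scenario, modeled on the VLAN example above
(the VSI name a1 is illustrative):
[~HUAWEI] ping mac 00e0-fc12-3456 vsi a1 -c 2 -s 112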
----End
Context
GMAC trace has principles similar to those of 802.1ag MAC trace. The difference is
that a source device does not need to be a MEP, and a destination device does not
need to be a MEP or MIP. In other words, GMAC trace can be implemented
without the need to configure an MD, MA, or MEP on the source, intermediate, or
destination device.
Enable the GMAC trace function on the source, intermediate, and destination
devices.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run trace mac enable
The GMAC trace function is enabled globally.
If the GMAC trace function is enabled:
● A source device starts the GMAC trace function by sending a linktrace
message (LTM) to a destination device.
● After receiving the LTM, the destination device replies to the source device
with a linktrace reply (LTR).
Step 3 Run commit
The configuration is committed.
Step 4 (Optional) In a VLAN scenario: Run trace mac mac-address vlan vlan-id
[ interface interface-type interface-number | -t timeout ] *
Paths on a VLAN are checked.
The following is an example:
<HUAWEI> system-view
[~HUAWEI] trace mac enable
[*HUAWEI] commit
[~HUAWEI] trace mac 00e0-fc23-3459 vlan 2
Tracing the route to 00e0-fc23-3459 over a maximum of 255 hops:
-------------------------------------------------------------------------------
Hops Mac Ingress Ingress Action Relay Action
Forwarded Egress Egress Action
--------------------------------------------------------------------------------
1 00e0-fc12-3459 gigabitethernet2/0/1 IngOK RlyFDB
Forwarded gigabitethernet1/0/1 EgrOK
2 00e0-fc12-3457 gigabitethernet1/0/1 IngOK RlyFDB
Forwarded gigabitethernet1/0/0 EgrOK
3 00e0-fc23-3459 gigabitethernet1/0/0 IngOK RlyHit
Not Forwarded
Info: Succeed in tracing the destination address 00e0-fc23-3459.
Step 5 (Optional) In a VLL scenario: Run trace mac mac-address l2vc l2vc-id { raw |
tagged } [ interface interface-type interface-number | { [ pe-vid pe-vid ce-vid ce-
vid ] | [ dot1q-vlan vlan-id ] } | -t timeout | -h ] *
The VLL network connectivity is checked.
The following shows an example:
<HUAWEI> system-view
[~HUAWEI] trace mac enable
[*HUAWEI] commit
[~HUAWEI] trace mac 00e0-fc12-3458 l2vc 1 raw
Tracing the route to 00e0-fc12-3458 over a maximum of 255 hops:
Hops Host Name (IP Address)
Mac Ingress Ingress Action Relay Action
Forwarded Egress Egress Action
1 HUAWEIA (10.10.10.16)
00e0-fc22-3459 GigabitEthernet2/0/1 IngOK RlyFDB
Forwarded GigabitEthernet1/0/1.1 EgrOK
2 HUAWEIB (10.10.10.13)
00e0-fc12-3458 GigabitEthernet3/0/1 IngOK RlyHit
Not Forwarded
Info: Succeeded in tracing the destination address 00e0-fc12-3458.
Step 6 (Optional) In a VPLS scenario: Run trace mac mac-address vsi vsi-name
[ interface interface-type interface-number | { [ pe-vid pe-vid ce-vid ce-vid ] |
[ dot1q-vlan vlan-id ] } | -t timeout | -h ] *
The VPLS network connectivity is checked.
The following shows an example:
<HUAWEI> system-view
[~HUAWEI] trace mac enable
[*HUAWEI] commit
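The VPLS trace would then be started as follows; the VSI name a1 is illustrative,
and the hop-by-hop output has the same format as the VLL example above:
[~HUAWEI] trace mac 00e0-fc12-3456 vsi a1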
----End
1.1.5.5.3 Using 802.1ag MAC Ping to Check Link Connectivity on a Layer 2 Network
802.1ag MAC ping monitors connectivity between MEPs or between MEPs and
MIPs within an MA.
Context
Similar to the ping operation, 802.1ag MAC ping checks whether the destination
device is reachable by sending test packets and receiving response packets. In
addition, the ping operation time can be calculated at the transmit end for
network performance analysis.
Before performing 802.1ag MAC ping, ensure that 802.1ag has been configured.
For more information, see Configuring Basic Ethernet CFM Functions.
Procedure
Step 1 A device is usually configured with multiple MDs and MAs. To monitor the
connectivity of a link between two or more devices, perform either of the
following steps on the NE9000 with a MEP on one end of the link to be
monitored.
● In the MA view:
a. Run system-view
The system view is displayed.
b. Run cfm enable
CFM is globally enabled on the device.
c. Run cfm md md-name
The MD view is displayed.
d. Run ma ma-name
The MA view is displayed.
e. Run ping mac-8021ag mep mep-id mep-id [ md md-name ma ma-
name ] { mac mac-address | remote-mep mep-id mep-id } [ -c count | -s
packetsize | -t timeout | -p priority-value ] *
The connectivity between a MEP and an RMEP or between a MEP and a
MIP on other devices is monitored.
The following shows an example:
<HUAWEI> system-view
[~HUAWEI] cfm enable
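The remainder of the example is a sketch; the MD name md1, the MA name ma1, and
the MEP IDs are illustrative:
[*HUAWEI] cfm md md1
[*HUAWEI-md-md1] ma ma1
[*HUAWEI-md-md1-ma-ma1] commit
[~HUAWEI-md-md1-ma-ma1] ping mac-8021ag mep mep-id 1 remote-mep mep-id 2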
Context
Similar to traceroute or tracert, 802.1ag MAC trace tests the path between the
local device and a destination device or locates failure points by sending test
packets and receiving reply packets.
Before performing 802.1ag MAC trace, ensure that 802.1ag has been configured.
For more information, see Configuring Basic Ethernet CFM Functions.
Procedure
Step 1 A device is usually configured with multiple MDs and MAs. To determine the
forwarding path for sending packets from a MEP to another MEP or a MIP in an
MA or failure points, perform either of the following operations on the router with
a MEP at one end of the link to be tested.
● In the MA view:
a. Run system-view
The system view is displayed.
b. (Optional) Run cfm portid-tlv type { interface-name | local }
The portid-tlv type for trace packets is set.
c. Run the cfm enable command to globally enable the CFM function on
the device.
d. Run cfm md md-name
The MD view is displayed.
e. Run ma ma-name
The MA view is displayed.
f. Run the trace mac-8021ag mep mep-id mep-id [ md md-name ma ma-
name ] { mac mac-address | remote-mep mep-id mep-id } [ -t timeout |
ttl ttl ] * command to locate the connectivity fault between the local and
peer devices.
The connectivity fault between the local and the remote devices is
located.
When implementing 802.1ag MAC trace, ensure that:
● The MEP is configured in the MA.
● If the outbound interface is specified, no inward-facing MEP is configured on
it. The interface must be added to the VLAN associated with the MA.
● If the destination node is an RMEP, either mac mac-address or remote-mep
mep-id mep-id can be selected. If remote-mep mep-id mep-id is selected,
the RMEP must already be created using the remote-mep command.
● If the destination node is a MIP, select mac mac-address.
● If the forwarding entry of the destination node does not exist in the MAC
address table, interface interface-type interface-number must be specified.
Step 2 (Optional) Run the display cfm statistics lblt command to check 802.1ag
protocol packet statistics.
If the test using the trace mac-8021ag command fails, you can run this command
to check whether a link fault or a device fault occurs.
Step 3 (Optional) Run the reset cfm statistics lblt command to clear the 802.1ag
protocol packet statistics.
----End
Context
By traversing a specified destination port number range, ECMP tracert can detect
the quality of all possible equal-cost links used for load balancing. If a faulty
link is detected, traffic can be quickly switched to another link, ensuring
service continuity and stability.
Procedure
Step 1 Run either of the following commands based on the service scenario:
● To check links that carry IPv4 services in ECMP scenarios, run the tracert
multipath [ -vpn-instance vrfName ] destAddress [ -a sourceAddress | -f
initTtl | -m maxTtl | -w timeout | -s pktSize | -q count | -no-fragment | { -tos
tos | -dscp dscp } | -detail | destination-port begin-port [ end-port ] | -si
{ sourceIfName | sourceIfType sourceIfNum } ] * command. The following is an
example:
<HUAWEI> tracert multipath 10.1.3.1 -si GigabitEthernet 1/0/1 -detail destination-port 12345
12348
traceroute to 10.1.3.1, max hops: 64, packet length: 40, press CTRL_C to break
destination-port: 12345
1 10.1.1.2 3 ms
2 10.1.3.1 3 ms
destination-port: 12346
1 10.2.1.2 3 ms
2 10.1.3.1 2 ms
destination-port: 12347
1 10.1.1.2 1 ms
2 10.1.3.1 2 ms
destination-port: 12348
1 10.2.1.2 2 ms
2 10.1.3.1 2 ms
----End
Prerequisites
Before configuring an MTrace test instance, run the undo mtrace echo disable
command on each device along the multicast or RPF path to be detected to
enable the devices to respond to MTrace request and query messages.
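For example, the following sketch restores the default response behavior on a device along the path so that it responds to MTrace request and query messages:
<HUAWEI> system-view
[~HUAWEI] undo mtrace echo disable
[*HUAWEI] commit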
Context
MTrace mainly has the following uses:
● The mtrace command can be used in multicast troubleshooting and routine
maintenance to locate a faulty device and reduce configuration errors.
● The mtrace command can be used to collect traffic statistics in path tracing
and calculate the multicast traffic rate in cyclic path tracing.
● The NMS analyzes faulty device information displayed in the mtrace
command output and generates alarms.
Perform the following steps in any view on the client.
Procedure
Step 1 (Optional) Run reset mtrace statistics
MTrace message statistics are cleared.
NOTICE
After the reset mtrace statistics command is run, the statistics cleared cannot be
restored.
To ensure that the mtrace command is run successfully, the current device must have the
(S, G) entries and meet one of the following conditions:
● The current device is directly connected to the destination host.
● A ping test initiated from the current device to the last-hop device or destination host
succeeds.
● The current device is on the multicast path from the multicast source to the
destination host.
The following examples show some parameters. For detailed options and parameter
descriptions, see mtrace.
● Run the mtrace source source-address command to trace the RPF path from
a multicast source to the current device.
<HUAWEI> mtrace source 10.1.0.1
Press Ctrl+C to break multicast traceroute facility
From the receiver(10.1.5.1), trace reverse path to source (10.1.0.1) according to RPF rules
- 1 10.1.5.1
Incoming Interface Address: 10.1.5.1 Input packets rate: 0xffffffff
Outgoing Interface Address: 0.0.0.0 Output packets rate: 0xffffffff
Forwarding Cache (10.1.0.1, 225.0.0.1) Forwarding packets rate: 1500
Last forwarded packets: 19121341 Current forwarded packets: 19136201
The packet loss rate of (10.1.0.1, 225.0.0.1) is 0.00%
- 2 10.1.2.1
Incoming Interface Address: 10.1.2.1 Input packets rate: 0xffffffff
Outgoing Interface Address: 10.1.5.2 Output packets rate: 0xffffffff
Forwarding Cache (10.1.0.1, 225.0.0.1) Forwarding packets rate: 1500
Last forwarded packets: 149378592 Current forwarded packets: 149393372
The packet loss rate of (10.1.0.1, 225.0.0.1) is 0.00%
- 3 10.1.0.1
Incoming Interface Address: 10.1.0.1 Input packets rate: 0xffffffff
Outgoing Interface Address: 10.1.2.2 Output packets rate: 0xffffffff
Forwarding Cache (10.1.0.1, 225.0.0.1) Forwarding packets rate: 1500
Last forwarded packets: 149454710 Current forwarded packets: 149469582
The packet loss rate of (10.1.0.1, 225.0.0.1) is 0.00%
********************************************************
In calculating-rate mode, reach the demanded number of statistic,and multicast traceroute finished.
-2 10.1.2.1 1 PIM
-3 10.1.0.1 1 PIM
In maximum-hop mode, received the response message, and multicast traceroute finished.
NOTICE
After the mtrace echo disable command is run, a device discards MTrace request
and query messages. As a result, MTrace detection is terminated on this device.
----End
Prerequisites
Before using the ping bier ipv6 command to check connectivity of a BIERv6
network, ensure that a BIERv6 tunnel has been correctly configured.
Procedure
Step 1 Run the ping bier ipv6 sub-domain subDomainId bsl { 64 | 128 | 256 } { bfr-id
bfrID | bfr-id-start bfrIdStartVal bfr-id-end bfrIdEndVal } [ -a source-ip-address | -
c count | -h ttl-value | -m interval | -t timeout | udp-port dstPort6 ] * command to
check the connectivity, BFR reachability, packet loss rate, and delay of the BIERv6
network.
<HUAWEI> ping bier ipv6 sub-domain 1 bsl 256 bfr-id 2
Ping BIER IPv6: Subdomain ID: 1, BSL: 256, BFRID: 2, press CTRL_C to break
Reply from BFRID: 2 (2001:DB8:1::1)
bytes=72 Sequence=1 time=10 ms
Reply from BFRID: 2 (2001:DB8:1::1)
bytes=72 Sequence=2 time=4 ms
Reply from BFRID: 2 (2001:DB8:1::1)
bytes=72 Sequence=3 time=4 ms
Reply from BFRID: 2 (2001:DB8:1::1)
bytes=72 Sequence=4 time=4 ms
Reply from BFRID: 2 (2001:DB8:1::1)
bytes=72 Sequence=5 time=4 ms
In network slicing scenarios, run the ping bier ipv6 sub-domain subDomainId bsl
{ 64 | 128 | 256 } { bfr-id bfrID | bfr-id-start bfrIdStartVal bfr-id-end
bfrIdEndVal } [ network-slice sliceid [ force-match-slice ]] [ -a source-ip-address
| -c count | -h ttl-value | -m interval | -t timeout | udp-port dstPort6 ]* command
to check BIERv6 network connectivity, BFR reachability, and performance
indicators such as the packet loss rate and delay.
NOTE
The force-match-slice keyword is used to forcibly match network slices and takes effect
only for segment lists with slice attributes.
<HUAWEI> ping bier ipv6 sub-domain 0 bsl 256 bfr-id 2 network-slice 20 force-match-slice
Ping BIER IPv6: Subdomain ID: 0, BSL: 256, BFRID: 2, network-slice: 20, press CTRL_C to break
Reply from BFRID: 2 (2001:DB8:20::1) Slice-ID:20
bytes=116 Sequence=1 time=5 ms
Reply from BFRID: 2 (2001:DB8:20::1) Slice-ID:20
bytes=116 Sequence=2 time=3 ms
Reply from BFRID: 2 (2001:DB8:20::1) Slice-ID:20
bytes=116 Sequence=3 time=3 ms
Reply from BFRID: 2 (2001:DB8:20::1) Slice-ID:20
bytes=116 Sequence=4 time=5 ms
Reply from BFRID: 2 (2001:DB8:20::1) Slice-ID:20
bytes=116 Sequence=5 time=3 ms
Step 2 (Optional) Run the following commands to configure functions related to BIERv6
Echo Request messages.
● Run the system-view command to enter the system view.
● Run the nqa bier ipv6 receive rate-limit bierCpuLimit6 command to limit the
rate at which BIERv6 Echo Request messages are sent to the main control
board.
● Run the nqa bier ipv6 udp-port udpPort6 command to set the UDP port
number for receiving BIERv6 Echo Request messages.
● Run the nqa bier ipv6 echo-reply disable command to disable the function
of responding to BIERv6 Echo Request messages.
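A minimal sketch of these optional settings, using hypothetical values (a rate limit of 100 and UDP port 6000):
<HUAWEI> system-view
[~HUAWEI] nqa bier ipv6 receive rate-limit 100
[*HUAWEI] nqa bier ipv6 udp-port 6000
[*HUAWEI] commit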
----End
Prerequisites
Before using the tracert bier ipv6 command to locate the failure point on a
BIERv6 network, ensure that a BIERv6 tunnel has been correctly configured.
Procedure
Step 1 Run the tracert bier ipv6 sub-domain subDomainId bsl { 64 | 128 | 256 } { bfr-id
bfrID | bfr-id-start bfrIdStartVal bfr-id-end bfrIdEndVal } [ -a source-ip-address | -f
first-ttl-val | -m max-ttl-val | -w timeout | udp-port dstPort6 | entropy entropy-
val ] * command to discover the gateways through which packets to the
destination BIERv6 device pass and to locate the failure point on the BIERv6
network.
In network slicing scenarios, run the tracert bier ipv6 sub-domain subDomainId
bsl { 64 | 128 | 256 } { bfr-id bfrID | bfr-id-start bfrIdStartVal bfr-id-end
bfrIdEndVal } [ network-slice sliceid [ force-match-slice ]] [ -a source-ip-address
| -f first-ttl-val | -m max-ttl-val | -w timeout | udp-port dstPort6 | entropy
entropy-val ] * command to discover the gateways through which packets to the
destination BIERv6 device pass and to locate the failure point on the BIERv6
network.
NOTE
The force-match-slice keyword is used to forcibly match network slices and takes effect
only for segment lists with slice attributes.
<HUAWEI> tracert bier ipv6 sub-domain 0 bsl 256 bfr-id 2 network-slice 20 force-match-slice
Tracert BIER IPv6: Subdomain ID: 0, BSL: 256, BFRID: 2, network-slice: 20, press CTRL_C to break
TTL Replier(BFR-ID)(Slice-ID) Time Type
0 Ingress
1 2001:DB8:40::1(-)(20) 5 ms Transit
2 2001:DB8:20::1(2)(20) 4 ms Egress
Step 2 (Optional) Run the following commands to configure functions related to BIERv6
Echo Request messages.
● Run the system-view command to enter the system view.
● Run the nqa bier ipv6 receive rate-limit bierCpuLimit6 command to limit the
rate at which BIERv6 Echo Request messages are sent to the main control
board.
● Run the nqa bier ipv6 udp-port udpPort6 command to set the UDP port
number for receiving BIERv6 Echo Request packets.
● Run the nqa bier ipv6 echo-reply disable command to disable the function
of responding to BIERv6 Echo Request packets.
----End
Context
After SRv6 configurations are complete, you can perform the following operations
in any view of the client.
Procedure
● Specify SIDs to check the connectivity of an SRv6 network.
NOTE
The force-match-slice keyword is used to forcibly match network slices and takes
effect only for segment lists with slice attributes.
<HUAWEI> ping ipv6-sid 2001:DB8:10::1 2001:DB8:20::2 2001:DB8:30::3 network-slice 100 force-
match-slice
PING ipv6-sid 2001:DB8:10::1 2001:DB8:20::2 2001:DB8:30::3 : 56 data bytes, press CTRL_C to break
Reply from 2001:DB8:30::3
bytes=56 Sequence=1 hop limit=64 time=2 ms
Reply from 2001:DB8:30::3
bytes=56 Sequence=2 hop limit=64 time=1 ms
Reply from 2001:DB8:30::3
bytes=56 Sequence=3 hop limit=64 time=1 ms
Reply from 2001:DB8:30::3
bytes=56 Sequence=4 hop limit=64 time=1 ms
Reply from 2001:DB8:30::3
bytes=56 Sequence=5 hop limit=64 time=1 ms
--- ipv6-sid ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max=1/1/2 ms
NOTE
If an End SID is used to implement a test, you can initiate a ping operation
without the need to run the remote end-op command or specify an End.OP SID.
Note that if the last SID of an SRv6 TE Policy segment list is an End.X SID or
binding SID, you need to specify the destination parameter.
b. To check the connectivity of the SRv6 TE Policy, run the ping srv6-te
policy { policy-name policyName | endpoint-ip endpointIpv6 color
colorId | binding-sid bsid } [ end-op endOp | destination dest ]
[ segment-list slid ] [ -a sourceAddr6 | -c count | -m interval | -s
packetSize | -t timeout | { -tc tc | -dscp dscp } | -h hopLimit | -ri | -p
pattern | ignore-mtu | -i { ifName | ifType ifNum } | -nexthop
nextHopAddr ] * command with the policy-name policyName, endpoint-
ip endpointIpv6 color colorId, or binding-sid bsid parameter specified on
the headend of the SRv6 TE Policy to initiate a ping test.
<HUAWEI> ping srv6-te policy policy-name test end-op 2001:DB8:2::1 -a 2001:DB8:1::1 -c 5 -
m 2000 -t 2000 -s 100 -tc 0 -h 255
PING srv6-te policy : 100 data bytes, press CTRL_C to break
srv6-te policy's segment list:
Preference: 200; Path Type: primary; Protocol-Origin: local; Originator: 0, 0.0.0.0; Discriminator:
200; Segment-List ID: 1; Xcindex: 1; end-op: 2001:DB8:2::1
Reply from 2001:DB8:2::1
bytes=100 Sequence=1 time=8 ms
Reply from 2001:DB8:2::1
bytes=100 Sequence=2 time=2 ms
Reply from 2001:DB8:2::1
bytes=100 Sequence=3 time=3 ms
Reply from 2001:DB8:2::1
bytes=100 Sequence=4 time=3 ms
Reply from 2001:DB8:2::1
bytes=100 Sequence=5 time=3 ms
----End
Context
After configuring SRv6, you can perform the following configurations in any view
of a client.
Procedure
● Specify SIDs to check a path on an SRv6 network or locate the failure point
on the path.
To locate the failure point on an SRv6 network, run the tracert ipv6-sid [ -f
first-hop-limit | -m max-hop-limit | -p port-number | -fixedPort | -q probes | -
w timeout | -s packetsize | -a source-ipv6-address | ignore-mtu | { -tc tc | -dscp
dscp } ] * command with the SIDs specified. In network slicing scenarios, also
specify network-slice sliceid [ force-match-slice ].
NOTE
The force-match-slice keyword is used to forcibly match network slices and takes
effect only for segment lists with slice attributes.
<HUAWEI> tracert ipv6-sid 2001:DB8:10::1 2001:DB8:20::2 2001:DB8:30::3 network-slice 100 force-
match-slice
traceroute ipv6-sid 2001:DB8:10::1 2001:DB8:20::2 2001:DB8:30::3 64 hops max,60 bytes packet
1 2001:DB8:1:2::21(SRH: 2001:DB8:30::3, 2001:DB8:20::2, 2001:DB8:10::1, SL=2, Slice-ID:100) 5 ms 3
ms 2 ms
2 2001:DB8:2:3::31(SRH: 2001:DB8:30::3, 2001:DB8:20::2, 2001:DB8:10::1, SL=1, Slice-ID:100) 5 ms 2
ms 2 ms
3 2001:DB8:30::3(SRH: 2001:DB8:30::3, 2001:DB8:20::2, 2001:DB8:10::1, SL=1, Slice-ID:100) 5 ms 10
ms 0.759 ms
● Check the path over which an SRv6 TE Policy is established or locate the
failure point on the path.
a. (Optional) Configure an End.OP SID on the remote endpoint of the SRv6
TE Policy.
An End.OP SID (OAM endpoint with punt) is an OAM SID that specifies
the punt behavior to be implemented for OAM packets. You can run the
remote end-op command or specify an End.OP SID to enable the device
to initiate a tracert test. Note that if the last SID of an SRv6 TE Policy
segment list is an End.X SID or binding SID, the remote end-op
command does not take effect. In this case, you need to specify an
End.OP SID when running the ping srv6-te policy command. An End.OP
SID must have been configured before you specify the end-op parameter.
i. Run system-view
The system view is displayed.
ii. Run segment-routing ipv6
The SRv6 view is displayed.
iii. Run locator locator-name
The locator view is displayed.
Ensure that the locator has been created and advertised through IS-
IS. The locator is also used by the created SRv6 TE Policy.
iv. Run opcode func-opcode end-op
An opcode is configured for an End.OP SID.
v. Run commit
The configuration is committed.
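As a sketch, assuming a locator named test_locator that has already been created and advertised through IS-IS, and an illustrative opcode value of ::100 (prompts and value hypothetical):
<HUAWEI> system-view
[~HUAWEI] segment-routing ipv6
[~HUAWEI-segment-routing-ipv6] locator test_locator
[~HUAWEI-segment-routing-ipv6-locator] opcode ::100 end-op
[*HUAWEI-segment-routing-ipv6-locator] commit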
NOTE
If an End SID is used to implement a test, you can initiate a tracert operation
without the need to run the remote end-op command or specify an End.OP SID.
Note that if the last SID of an SRv6 TE Policy segment list is an End.X SID or
binding SID, you need to specify the destination parameter.
b. On the headend of the SRv6 TE Policy, run the tracert srv6-te policy
{ policy-name policyName | endpoint-ip endpointIpv6 color colorId |
binding-sid bsid } [ end-op endOp | destination dest ] [ segment-list
slid ] [ -a sourceAddr6 | -f initHl | -m maxHl | -s packetSize | -w timeout |
-p destPort | -fixedPort | { -tc tc | -dscp dscp } | ignore-mtu | -i { ifName
| ifType ifNum } | -nexthop nextHopAddr ] * command with the policy-
name policyName, endpoint-ip endpointIpv6 color colorId, or binding-
sid bsid parameter specified to initiate a tracert test to check all transit
nodes through which the SRv6 TE Policy passes.
<HUAWEI> tracert srv6-te policy policy-name test end-op 2001:DB8:2::1 -a 2001:DB8:1::1 -p
5 -m 20 -tc 0
Trace Route srv6-te policy : 100 data bytes, press CTRL_C to break
srv6-te policy's segment list:
Preference: 200; Path Type: primary; Protocol-Origin: local; Originator: 0, 0.0.0.0; Discriminator:
200; Segment-List ID: 1; Xcindex: 1; end-op: 2001:DB8:2::1
TTL Replier Time Type SRH(SID[n], ..., SID[0](the last SID to be
processed))
0 Ingress (SRH: 2001:DB8:1::F:1, 2001:DB8:2::F:1,
2001:DB8:2::1, SL=2)
1 2001:DB8:A::192:168:103:2 22 ms Transit (SRH: 2001:DB8:1::F:1, 2001:DB8:2::F:
1, 2001:DB8:2::1, SL=2)
2 2001:DB8:A::192:168:106:2 10 ms Transit (SRH: 2001:DB8:1::F:1, 2001:DB8:2::F:
1, 2001:DB8:2::1, SL=1)
3 2001:DB8:2::1 4 ms Egress
On the headend of the SRv6 TE Policy, run the tracert srv6-te policy
{ policy-name policyName | endpoint-ip endpointIpv6 color colorId |
binding-sid bsid } [ network-slice sliceid ] [ force-match-slice ] [ end-
op endOp | destination dest ] [ segment-list slid ] [ -a sourceAddr6 | -f
initHl | -m maxHl | -s packetSize | -w timeout | -p destPort | -fixedPort |
{ -tc tc | -dscp dscp } | ignore-mtu | -i { ifName | ifType ifNum } | -
nexthop nextHopAddr ] * command with the policy-name policyName,
endpoint-ip endpointIpv6 color colorId, or binding-sid bsid parameter
specified to initiate a tracert test to check all transit nodes through which
the SRv6 TE Policy passes in network slicing scenarios.
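For instance, assuming the hypothetical policy name test, End.OP SID 2001:DB8:2::1, and slice ID 20 from the preceding examples, such a test might be initiated as follows:
<HUAWEI> tracert srv6-te policy policy-name test network-slice 20 force-match-slice end-op 2001:DB8:2::1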
----End
NOTE
As shown in Table 1-11, although SNMP trap and Syslog use the push mode, only alarms
or events are pushed. Monitoring data such as the interface traffic cannot be collected or
sent.
Prerequisites
● The route between the device and NMS is reachable.
● The user configuration is correct, the user has been added to the
administrator group, and the service type is HTTP.
● An ACL has been created if it is needed for the gRPC service to control which
clients can connect to the server. For details about how to create an ACL, see
"ACL Configuration" in Configuration Guide > IP Services.
● An SSL policy has been created and bound to the gRPC service so that a
secure SSL connection can be established between the server and client. For
details about how to create an SSL policy, see "Configuring and Binding an
SSL Policy" in Configuration Guide > Basic Configuration.
Context
In dial-in mode where the device functions as the gRPC server and the collector
functions as the gRPC client, you can configure and query data through gRPC.
For details about how to subscribe to data using gRPC, see telemetry subscription
sections.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run grpc
The gRPC view is displayed.
Step 3 Run either of the following commands as required to enter the server view:
● On an IPv4 network, run the grpc server command to enter the gRPC server
view.
● On an IPv6 network, run the grpc server ipv6 command to enter the gRPC
IPv6 server view.
NOTE
● If an SSL policy has been configured on the gRPC server, services can run only on an
encrypted gRPC channel.
● If no SSL policy is configured on the gRPC server, the connections that are established
for services after the permit no-tls command is run will be disconnected after the undo
permit no-tls command is run.
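For reference, a minimal sketch of entering the server view and binding a hypothetical SSL policy named policy1 (which must already exist); the server enable command mirrors the configuration files shown later in this chapter:
<HUAWEI> system-view
[~HUAWEI] grpc
[*HUAWEI-grpc] grpc server
[*HUAWEI-grpc-server] ssl-policy policy1
[*HUAWEI-grpc-server] server enable
[*HUAWEI-grpc-server] commit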
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run grpc
The gRPC view is displayed.
Step 3 Run whitelist session-car grpc { cir sessionCarCir | cbs sessionCarCbs | pir
sessionCarPir | pbs sessionCarPbs } *
Whitelist session-CAR parameters are set for gRPC protocol packets.
Step 4 Run commit
The configuration is committed.
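A minimal sketch with hypothetical CAR values:
<HUAWEI> system-view
[~HUAWEI] grpc
[*HUAWEI-grpc] whitelist session-car grpc cir 10000 pir 20000
[*HUAWEI-grpc] commit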
----End
Context
The controller uses commands to configure telemetry-capable devices, subscribe
to data sources, and collect data. The protocol used to send data can be gRPC or
UDP.
● If the connection is interrupted, the device reconnects to the collector and
resumes sending data. However, the data sampled while the connection is
being reestablished is lost.
● After an active/standby main control board switchover is performed or the
device saves telemetry service configurations and restarts, the device reloads
telemetry service configurations so that the service can run properly. However,
the data sampled during the restart or switchover is lost.
Pre-configuration Tasks
Before configuring static telemetry subscription, configure a static or dynamic
routing protocol so that devices can communicate at the network layer.
Context
A device functions as a client and a collector functions as a server. To statically
subscribe to the data sampled or a customized event, you need to configure an IP
address and port number for a destination collector, and configure a protocol and
encryption mode for data sending to the destination collector.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run telemetry [ openconfig ]
The telemetry view is displayed.
Step 3 Run destination-group destination-name
A destination group to which the data sampled is sent is created, and the
destination group view is displayed.
Step 4 Based on the collector type, run either of the following commands to configure an
IP address and port number for the destination collector, and configure a protocol
and encryption mode for data sending to the destination collector:
● For an IPv4 collector, run the ipv4-address ip-address-ipv4 port port [ vpn-
instance vpn-instance ] [ protocol grpc [ no-tls ] [ compression gzip ]]
command.
● For an IPv6 collector, run the ipv6-address ip-address-ipv6 port port [ vpn-
instance vpn-instance ] [ protocol grpc [ no-tls ] [ compression gzip ]]
command.
NOTE
● This command can be run no more than five times for each destination group.
● Both this command and the protocol command in the subscription view can configure a
protocol and encryption mode for data sending to the destination collector. If the
destination collector is associated with the subscription, command configurations take
effect based on the following rules:
– If the protocol command has been run in the subscription view, the protocol and
encryption mode configured in the subscription view take effect.
– If the protocol command is not run in the subscription view, the protocol and
encryption mode configured in the destination group view take effect.
– Configuring no-tls may pose security risks.
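As a sketch, mirroring the destination-group configuration used in the examples later in this chapter (address and port illustrative):
<HUAWEI> system-view
[~HUAWEI] telemetry
[*HUAWEI-telemetry] destination-group destination1
[*HUAWEI-telemetry-destination-group-destination1] ipv4-address 10.20.2.1 port 10001 protocol grpc
[*HUAWEI-telemetry-destination-group-destination1] commit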
----End
Context
A device functions as a client, and a collector functions as a server. To statically
subscribe to the data sampled or a customized event, you need to configure a
source from which to sample the data.
You can configure a telemetry customized event. If a performance indicator of a
resource object that telemetry monitors exceeds the user-defined threshold, the
customized event is reported to the collector in time for service policy
determination.
Procedure
● Configure the data to be sampled.
a. Run system-view
The system view is displayed.
b. Run telemetry [ openconfig ]
The telemetry view is displayed.
c. Run sensor-group sensor-name
A sampling sensor group is created, and its view is displayed.
d. Run sensor-path path
A sampling path is configured for a telemetry sensor.
NOTE
After the sampling mode is switched, some sampling content will change, and
the sampling interval may change to an integer multiple of the minimum
sampling interval in the new sampling mode.
f. (Optional) Run depth depth-value
A data sampling depth is configured for the sampling path.
g. (Optional) Run policy reset-when-start
The sampling path is cleared during initial configuration.
h. Run filter filter-name
A filter is configured for the sampling path, and its view is displayed.
NOTE
After the sampling mode is switched, some sampling content changes, and the
sampling interval may change to an integer multiple of the minimum sampling
interval under the new sampling mode.
f. (Optional) Run description event-description
A description is configured for the customized telemetry event.
g. (Optional) Run suppress-period period
A suppression period is configured for the customized telemetry event.
h. (Optional) Run level level-value
The level of the customized telemetry event is set.
i. (Optional) Run depth depth-value
A data sampling depth is configured for the sampling path.
j. Run filter filter-name
A filter is configured for the sampling path, and its view is displayed.
Creating a Subscription
When configuring static telemetry subscription to the sampled data, you need to
create a subscription to associate the configured destination group with the
configured sampling sensor group so that data can be sent to the collector.
Context
A device functions as a client and a collector functions as a server. To statically
subscribe to the data sampled, you need to create a subscription to set up a data
sending channel. The protocol used to send data can be gRPC or UDP. The
following uses gRPC as an example.
Before configuring an SSL policy on the client to establish a secure SSL connection
between the client and server, ensure that the SSL policy has been created. For
details about how to create an SSL policy, see "Configuring and Binding an SSL
Policy" in HUAWEI NetEngine9000 Product Documentation > Configuration >
Basic Configuration > Accessing Other Devices Configuration.
Procedure
Step 1 Run system-view
Generally, a small sampling interval is set for an analyzer to obtain more accurate
data for analysis. However, a large amount of redundant data is generated when a
small sampling interval is used. The data requires a large amount of storage space
and is inconvenient for data management. If adaptive sampling is configured,
telemetry dynamically adjusts the sampling interval based on preset conditions.
When the monitoring indicators are normal, telemetry samples at a longer
interval. When the monitoring indicators reach the threshold, telemetry
automatically shortens the sampling interval based on the configuration to report
collected data at a higher frequency, reducing the overall amount of redundant
data on the analyzer.
A protocol and encryption mode are configured for data reporting to the
destination collector associated with this subscription.
NOTE
Both this command and the ipv4-address port/ipv6-address port command in the
destination group view can configure a protocol and encryption mode for data sending to
the destination collector. If the destination collector is associated with the subscription,
command configurations take effect based on the following rules:
● If this command has been run, the protocol and encryption mode configured using
this command in the subscription view take effect.
● If this command is not run, the protocol and encryption mode configured using the
ipv4-address port/ipv6-address port command in the destination group view take
effect.
NOTE
The dampening interval and full data reporting interval apply only to sampling of the
OnChange+ type. If a non-zero sampling interval is configured using the sample-interval
command, the dampening interval and full data reporting interval cannot be configured.
Step 10 (Optional) Based on the collector type, run either of the following commands to
configure a source IP address for gRPC-based data sending:
● For an IPv4 collector, run the local-source-address ipv4 ipv4-address
command.
● For an IPv6 collector, run the local-source-address ipv6 ipv6-address
command.
NOTE
In the same subscription view, either the source interface or the source IP address can be
configured for the packets to be sent.
The anchor time for periodically sampling data packets to be sent is configured.
A maximum usage is configured for the amount of CPU resources the main
control board occupies when telemetry collects data.
Step 17 (Optional) Configure an SSL policy for the client or enable the client to perform
SSL verification on the server.
NOTE
The certificate to be loaded must be supported by both the client and server.
Unlike the dscp command configured in the subscription view, this configuration
takes effect on all connections of the gRPC client. If the dscp command is run in
both the gRPC client view and the subscription view, the DSCP value configured in
the subscription view takes effect.
----End
Context
A device functions as a client and a collector functions as a server. To statically
subscribe to the data sampled or a customized event, you need to configure an IP
address and port number for a destination collector, and configure a protocol and
encryption mode for data sending to the destination collector.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run telemetry [ openconfig ]
The telemetry view is displayed.
Step 3 Run destination-group destination-name
A destination group to which the data sampled is sent is created, and the
destination group view is displayed.
Step 4 Based on the collector type, run either of the following commands to configure an
IP address and port number for the destination collector, and configure a protocol
and encryption mode for data sending to the destination collector:
● For an IPv4 collector, run the ipv4-address ip-address-ipv4 port port [ vpn-
instance vpn-instance ] [ protocol udp ] command.
● For an IPv6 collector, run the ipv6-address ip-address-ipv6 port port [ vpn-
instance vpn-instance ] [ protocol udp ] command.
NOTE
● This command can be run no more than five times for each destination group.
● Both this command and the protocol command in the subscription view can configure a
protocol and encryption mode for data sending to the destination collector. If the
destination collector is associated with the subscription, command configurations take
effect based on the following rules:
– If the protocol command has been run in the subscription view, the protocol and
encryption mode configured in the subscription view take effect.
– If the protocol command is not run in the subscription view, the protocol and
encryption mode configured in the destination group view take effect.
----End
Context
A device functions as a client, and a collector functions as a server. To statically
subscribe to the data sampled or a customized event, you need to configure a
source from which to sample the data.
You can configure a telemetry customized event. If a performance indicator of a
resource object that telemetry monitors exceeds the user-defined threshold, the
customized event is reported to the collector in time for service policy
determination.
Procedure
● Configure the data to be sampled.
a. Run system-view
The system view is displayed.
b. Run telemetry [ openconfig ]
The telemetry view is displayed.
c. Run sensor-group sensor-name
A sampling sensor group is created, and its view is displayed.
d. Run sensor-path path
A sampling path is configured for a telemetry sensor.
NOTE
After the sampling mode is switched, some sampling content will change, and
the sampling interval may change to an integer multiple of the minimum
sampling interval in the new sampling mode.
NOTE
After the sampling mode is switched, some sampling content changes, and the
sampling interval may change to an integer multiple of the minimum sampling
interval under the new sampling mode.
f. (Optional) Run description event-description
A description is configured for the customized telemetry event.
g. (Optional) Run suppress-period period
A suppression period is configured for the customized telemetry event.
h. (Optional) Run level level-value
The level of the customized telemetry event is set.
i. (Optional) Run depth depth-value
A data sampling depth is configured for the sampling path.
j. Run filter filter-name
A filter is configured for the sampling path, and its view is displayed.
Creating a Subscription
When configuring static telemetry subscription to the sampled data, you need to
create a subscription to associate the configured destination group with the
configured sampling sensor group so that data can be sent to the collector.
Context
A device functions as a client and a collector functions as a server. To statically
subscribe to the data sampled, you need to create a subscription to set up a data
sending channel. The protocol used to send data can be gRPC or UDP. The
following uses UDP as an example.
Before configuring an SSL policy on the client to establish a secure SSL connection
between the client and server, ensure that the SSL policy has been created. For
details about how to create an SSL policy, see "Configuring and Binding an SSL
Policy" in HUAWEI NetEngine9000 Product Documentation > Configuration >
Basic Configuration > Accessing Other Devices Configuration.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run telemetry [ openconfig ]
The telemetry view is displayed.
Step 3 (Optional) Run protocol udp message-header ietf-netconf-udp-notif
The UDP header in the draft-ietf-netconf-udp-notif-08 format is used.
Step 4 Run subscription subscription-name
A subscription is created to associate a destination group with a sampling sensor
group, and the subscription view is displayed.
Step 5 Run sensor-group sensor-name [ sample-interval sample-interval { [ suppress-
redundant ] | [ heartbeat-interval heartbeat-interval ] } * ]
A sampling sensor group is associated with the subscription, and a sampling
interval, a heartbeat interval, and redundancy suppression are configured for the
sampling sensor group.
Step 6 Run destination-group destination-name
A destination group is associated with the subscription.
Step 7 (Optional) Run the following commands to configure telemetry adaptive
sampling:
1. Run the sensor-group sensor-group-name sample-adaptive command to
configure a sampling sensor group that requires adaptive sampling, and enter
the sample-adaptive view.
2. Run the sample-interval interval op-field field op-type { eq | gt | ge | lt | le }
op-value value command to configure an interval and conditions for adaptive
sampling.
Generally, a small sampling interval is set for an analyzer to obtain more accurate
data for analysis. However, a large amount of redundant data is generated when a
small sampling interval is used. The data requires a large amount of storage space
and is inconvenient for data management. If adaptive sampling is configured,
telemetry dynamically adjusts the sampling interval based on preset conditions.
When the monitoring indicators are normal, telemetry samples at a longer
interval. When the monitoring indicators reach the threshold, telemetry
automatically shortens the sampling interval based on the configuration to report
collected data at a higher frequency, reducing the overall amount of redundant
data on the analyzer.
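A minimal sketch, assuming a subscription subscription1 and sensor group sensor1 already exist, with a hypothetical CPU-usage condition (prompts, interval, and threshold illustrative):
[*HUAWEI-telemetry-subscription-subscription1] sensor-group sensor1 sample-adaptive
[*HUAWEI-telemetry-subscription-subscription1-sample-adaptive] sample-interval 1000 op-field system-cpu-usage op-type gt op-value 80
[*HUAWEI-telemetry-subscription-subscription1-sample-adaptive] commit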
A protocol and encryption mode are configured for data sending to the
destination collector that is associated with this subscription.
NOTE
Both this command and the ipv4-address port/ipv6-address port command in the
destination group view can configure a protocol and encryption mode for data sending to
the destination collector. If the destination collector is associated with the subscription,
command configurations take effect based on the following rules:
● If this command has been run, the protocol and encryption mode configured using
this command in the subscription view take effect.
● If this command is not run, the protocol and encryption mode configured using the
ipv4-address port/ipv6-address port command in the destination group view take
effect.
NOTE
The dampening interval and full data reporting interval apply only to sampling of the
OnChange+ type. If a non-zero sampling interval is configured using the sample-interval
command, the dampening interval and full data reporting interval cannot be configured.
Step 11 (Optional) Based on the collector type, run either of the following commands to
configure a source IP address for UDP-based data sending:
● For an IPv4 collector, run the local-source-address ipv4 ipv4-address [ port
port-value ] command.
● For an IPv6 collector, run the local-source-address ipv6 ipv6-address [ port
port-value6 ] command.
Step 12 (Optional) Run local-source-interface { if-name | if-type if-number } [ port port-
value ]
A source interface and source port are configured for UDP-based data sending.
NOTE
In the same subscription view, either the source interface or the source IP address can be
configured for the packets to be sent.
----End
Prerequisites
All configurations of static telemetry subscription are complete.
Procedure
● Run the display telemetry sensor [ sensor-name ] command to check the
sampling sensor information.
● Run the display telemetry destination [ dest-name ] command to check
information about the destination group.
● Run the display telemetry subscription [ subscription-name ] command to
check subscription information.
● Run the display telemetry sensor-path command to check the sampling
path of a telemetry sensor.
----End
Context
The controller uses commands to configure telemetry-capable devices, subscribe
to data sources, and collect data. The protocol used to send data can be UDP only.
● If the connection is interrupted, the device reconnects to the collector and
resumes sending data. However, the data sampled while the connection is
being reestablished is lost.
● After an active/standby main control board switchover is performed or the
device saves telemetry service configurations and restarts, the device reloads
telemetry service configurations so that the service can run properly. However,
the data sampled during the restart or switchover is lost.
Pre-configuration Tasks
Before configuring static telemetry subscription, configure a static or dynamic
routing protocol so that devices can communicate at the network layer.
Context
A device functions as a client, and a collector functions as a server. To statically
subscribe to the sampled data through the YANG-Push model, you need to
configure the IP address and port number of the receiver for the sampled data
and configure the fragmentation capability for the receiver.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run telemetry ietf
The telemetry IETF view is displayed.
Step 3 Run receiver receiver-name
A receiver is created for the sampled data, and the receiver view is displayed.
NOTE
A maximum of five receivers can be associated with each VS using this command.
Step 4 Based on the collector type, run either of the following commands to configure an
IP address and a port number for data sending to the destination collector:
● For an IPv4 collector, run the ipv4-address ipv4-addr port port-number
command.
● For an IPv6 collector, run the ipv6-address ipv6-addr port port-number
command.
NOTE
● Each receiver can be configured with only one IP address and one port number, and the
latest configuration overrides the previous one.
● Any two receivers must differ in IP address, port number, or both.
----End
Context
A device functions as a client and a collector functions as a server. To statically
subscribe to the sampled data through the YANG-Push model, you need to
configure a source from which to sample the data.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run telemetry ietf
The telemetry IETF view is displayed.
Step 3 Run filter filter-name type datastore
A sampling filter is created, and the telemetry IETF sampling filter view is
displayed.
----End
Context
A device functions as a client and a collector functions as a server. To statically
subscribe to the sampled data through the YANG-Push model, you need to create
a subscription to set up a data sending channel. Only UDP can be used to send
data to the collector.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run telemetry ietf
The telemetry IETF view is displayed.
Step 3 Run subscription subscription-name
A subscription is created, and the telemetry IETF subscription view is displayed.
Step 4 Run transport udp-notif
The transport protocol is set to UDP-NOTIF for the receiver associated with the
subscription.
Step 5 Run encoding json
The encoding format is set to JSON_IETF for the data packets to be sent.
Step 6 (Optional) Run distribute enable
The data reporting mode is set to distributed for the telemetry IETF subscription.
Step 7 (Optional) Run collect-depth depth
A sampling depth is configured for the subscription.
NOTE
● When the data reporting mode is set to distributed, the default sampling depth is 1. The
sampling depth can be configured in the range from 1 to 3. If the configured sampling
depth differs from the maximum value supported by services, the smaller of the two
values takes effect.
● If the data reporting mode is not set to distributed, data of all nodes along the specified
path is collected by default. The sampling depth can be configured in the range from 1
to 3.
Step 9 (Optional) Configure a source address for the packets to be sent using either of
the following methods:
● Configure a source IP address and bound VPN instance for the packets to be
sent.
– If the source IP address of the packets to be sent is an IPv4 address, run
the local-source-address { ipv4-address | vpn-instance vpn-value }
command.
– If the source IP address of the packets to be sent is an IPv6 address, run
the local-source-address { ipv6 ipv6-address | vpn-instance vpn-value }
command.
● Run the local-source-interface { if-name | if-type if-number } command to
configure a source interface for the packets to be sent.
NOTE
This command can be run to associate the subscription with only one receiver.
----End
Prerequisites
All configurations of static telemetry subscription are complete.
Procedure
● Run the display telemetry sensor-path command to check the sampling
path of a telemetry sensor.
----End
Context
Pre-configuration Tasks
Before configuring dynamic telemetry subscription, complete the following tasks:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run grpc
The gRPC view is displayed.
Step 3 (Optional) Based on the collector type, run either of the following commands to
enter the corresponding server view:
● For an IPv4 collector, run the grpc server command to enter the gRPC server
view.
● For an IPv6 collector, run the grpc server ipv6 command to enter the gRPC
IPv6 server view.
The number of the port to be listened for during dynamic telemetry subscription is
set.
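A minimal sketch, mirroring the gRPC server settings shown in the configuration files of the examples that follow (listening address and port illustrative):
<HUAWEI> system-view
[~HUAWEI] grpc
[*HUAWEI-grpc] grpc server
[*HUAWEI-grpc-server] source-ip 192.168.1.1
[*HUAWEI-grpc-server] server-port 20000
[*HUAWEI-grpc-server] server enable
[*HUAWEI-grpc-server] commit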
----End
Networking Requirements
As the network scale increases, users need to optimize networks and rectify faults
based on device information. For example, if the CPU usage of a device exceeds a
specified threshold, the device reports data to a collector so that network traffic
can be monitored and optimized in a timely manner.
As shown in Figure 1-43, DeviceA supports telemetry and establishes a gRPC
connection with the collector. When the CPU usage of DeviceA exceeds 40%, data
needs to be sent to the collector. When the system memory usage of DeviceA
exceeds 50%, a customized event needs to be sent to the collector.
In this example, Interface1 and Interface2 represent GE 1/0/1 and GE 1/0/2, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● Collector's IP address 10.20.2.1 and port number 10001 (DeviceA and the
collector must be routable.)
● Destination group name destination1
● Sampling sensor group name sensor1
● Subscription name subscription1
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all
devices can communicate at the network layer. For details, see configuration files.
Step 2 Configure a destination collector.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] telemetry
[~DeviceA-telemetry] destination-group destination1
[*DeviceA-telemetry-destination-group-destination1] ipv4-address 10.20.2.1 port 10001 protocol grpc
NOTE
● If the device connects to the destination collector using an IPv6 address, you must run
the ipv6-address ip-address-ipv6 port port [ vpn-instance vpn-instance ] [ protocol
grpc [ no-tls ] ] command to configure the IPv6 address and port number of the
destination collector.
● For details about how to configure TLS encryption for gRPC, see Configuration Files.
[*DeviceA-telemetry-destination-group-destination1] quit
Step 3 Configure the data to be sampled and a customized event. When the value of os-
memory-usage in the sampling path huawei-debug:debug/memory-infos/
memory-info is greater than 50, a customized event is reported.
[*DeviceA-telemetry] sensor-group sensor1
[*DeviceA-telemetry-sensor-group-sensor1] sensor-path huawei-debug:debug/cpu-infos/cpu-info
[*DeviceA-telemetry-sensor-group-sensor1-path] filter cpuinfo
[*DeviceA-telemetry-sensor-group-sensor1-path-filter-cpuinfo] op-field system-cpu-usage op-type gt op-
value 40
[*DeviceA-telemetry-sensor-group-sensor1-path-filter-cpuinfo] quit
[*DeviceA-telemetry-sensor-group-sensor1-path] quit
[*DeviceA-telemetry-sensor-group-sensor1] sensor-path huawei-debug:debug/memory-infos/memory-
info self-defined-event
[*DeviceA-telemetry-sensor-group-sensor1-self-defined-event-path] filter meminfo
[*DeviceA-telemetry-sensor-group-sensor1-self-defined-event-path-filter-meminfo] op-field os-memory-
usage op-type gt op-value 50
[*DeviceA-telemetry-sensor-group-sensor1-self-defined-event-path-filter-meminfo] quit
[*DeviceA-telemetry-sensor-group-sensor1-self-defined-event-path] quit
[*DeviceA-telemetry-sensor-group-sensor1] quit
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
ssl policy policy1
pki-domain domain1
#
grpc
#
grpc client
ssl-policy policy1
ssl-verify peer
#
telemetry
#
sensor-group sensor1
sensor-path huawei-debug:debug/cpu-infos/cpu-info
filter cpuinfo
op-field system-cpu-usage op-type gt op-value 40
sensor-path huawei-debug:debug/memory-infos/memory-info self-defined-event
filter meminfo
op-field os-memory-usage op-type gt op-value 50
#
destination-group destination1
ipv4-address 10.20.2.1 port 10001 protocol grpc
#
subscription subscription1
sensor-group sensor1
destination-group destination1
#
pki domain domain1
#
return
Networking Requirements
As the network scale increases, users need to optimize networks and rectify faults
based on device information. For example, if the CPU usage of a device exceeds a
specified threshold, the device reports data to a collector so that network traffic
can be monitored and optimized in a timely manner.
In this example, interface1 and interface2 represent GE 1/0/0 and GE 2/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all
devices can communicate at the network layer.
Step 2 Configure a destination collector.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] telemetry
[~DeviceA-telemetry] destination-group destination1
[*DeviceA-telemetry-destination-group-destination1] ipv4-address 10.20.2.1 port 10001 protocol udp
NOTE
If the device connects to the destination collector using an IPv6 address, you need to run
the ipv6-address ip-address port port [ vpn-instance vpn-instance ] [ protocol udp ]
command to configure an IPv6 address and port number for the destination collector.
[*DeviceA-telemetry-destination-group-destination1] quit
NOTE
If the device connects to the destination collector using an IPv6 address, you need to run
the local-source-address ipv6 ip-address port port command to configure a source IPv6
address and a source port number.
[*DeviceA-telemetry-subscription-subscription1] commit
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
telemetry
#
sensor-group sensor1
sensor-path huawei-debug:debug/cpu-infos/cpu-info
filter cpuinfo
Networking Requirements
As the network scale increases, it is required that networks be optimized or faults
rectified in a timely manner based on device information. On the network shown
in Figure 1-45, telemetry-capable DeviceA establishes a UDP connection with the
collector.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a destination collector.
2. Configure a sampling path.
3. Create a subscription.
Data Preparation
To complete the configuration, you need the following data:
● Collector's IP address (10.20.2.1) and port number (3600) (DeviceA and the
collector must be routable.)
● Name of the receiver for the sampled data (r1)
● Name of the sampling filter (f1)
● Name of the subscription (s1)
● Name of the receiver in the subscription (r2)
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all
devices can communicate at the network layer.
Step 2 Configure a destination collector.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] telemetry ietf
[~DeviceA-telemetry-ietf] receiver r1
[*DeviceA-telemetry-ietf-receiver-r1] ipv4-address 10.20.2.1 port 3600
NOTE
If the device connects to the collector using an IPv6 address, you need to run the ipv6-
address ipv6-addr port port-number command to configure an IPv6 address and a port
number for the destination collector.
[*DeviceA-telemetry-ietf-receiver-r1] quit
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
telemetry ietf
#
filter f1 type datastore
xpath /huawei-debug:debug/cpu-infos/cpu-info
#
receiver r1
ipv4-address 10.20.2.1 port 3600
#
subscription s1
transport udp-notif
encoding json
distribute enable
update-trigger period 100
filter f1 type datastore
#
receiver r2
bind-receiver r1
#
return
Networking Requirements
As the network scale increases, users need to optimize networks and rectify faults
based on device information. For example, if a user wants to monitor an interface
for a period of time, dynamic telemetry subscription can be configured. To stop
monitoring, tear down the connection. The subscription is automatically canceled
and cannot be restored. This avoids placing a long-term load on devices and simplifies the
interaction between users and devices.
As shown in Figure 1-46, telemetry-capable DeviceA establishes a gRPC
connection with the collector. It is required that interface1 of DeviceA be
monitored and data be sent to the collector as required.
In this example, interface1 and interface2 represent GE 1/0/1 and GE 1/0/2, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a local user, and add the user to the administrator group. Configure
the service type of the user.
2. Configure an SSL policy.
3. Configure the gRPC server function.
Data Preparation
To complete the configuration, you need the following data:
● IP address of interface1 to be listened for: 192.168.1.1 (interface1 on DeviceA
and the collector must be routable.)
● Number of the port to be listened for: 20000
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all
devices can communicate at the network layer.
Step 2 Create a local user, and add the user to the administrator group. Configure the
service type of the user.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] aaa
NOTE
The CA certificate is used as an example. During the actual configuration, you need to
replace ca and test.crt with the existing certificate type and name on the device. You can
directly upload the certificate to the device for installation, or apply for and download the
certificate for installation. For details, see "Obtaining a Certificate" in PKI Configuration.
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
aaa
local-user hhww123 password irreversible-cipher $1c$%L%X:Jn3hY$5^%T5I\4HG>j|i~s,{.@FpH*2XGM\R;7#
$"\i!L0$
local-user hhww123 service-type http
local-user hhww123 user-group manage-ug
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.1.1 255.255.255.0
#
ssl policy policy1
pki-domain domain1
#
grpc
#
grpc server
source-ip 192.168.1.1
server-port 20000
ssl-policy policy1
ssl-verify peer
server enable
#
pki domain domain1
#
return
Networking Requirements
As the network scale increases, users need to optimize networks and rectify faults
based on device information. For example, if a user wants to monitor an interface
for a period of time, dynamic telemetry subscription can be configured. To stop
monitoring, tear down the connection. The subscription is automatically canceled
and cannot be restored. This avoids long-term loads on devices and simplifies the
interaction between users and devices.
As shown in Figure 1-47, telemetry-capable DeviceA establishes a gRPC
connection with the collector. It is required that interface1 of DeviceA be
monitored and data be sent to the collector as required.
DeviceA communicates with the collector using an IPv6 address.
In this example, interface1 and interface2 represent GE 1/0/1 and GE 1/0/2, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Create a local user, and add the user to the administrator group. Configure
the service type of the user.
2. Configure an SSL policy.
3. Configure the gRPC IPv6 server function.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of interface1 to be listened for: 2001:db8:4::1 (interface1 on
DeviceA and the collector must be routable.)
● Number of the port to be listened for: 20000
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all
devices can communicate at the network layer.
Step 2 Create a local user, and add the user to the administrator group. Configure the
service type of the user.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] aaa
[~DeviceA-aaa] local-user hhww123 password
Please configure the password (8-128)
Enter Password:
Confirm Password:
[*DeviceA-aaa] local-user hhww123 service-type http
NOTE
The CA certificate is used as an example. During the actual configuration, you need to
replace ca and test.crt with the existing certificate type and name on the device. You can
directly upload the certificate to the device for installation, or apply for and download the
certificate for installation. For details, see "Obtaining a Certificate" in PKI Configuration.
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
aaa
local-user hhww123 password irreversible-cipher $1c$%L%X:Jn3hY$5^%T5I\4HG>j|i~s,{.@FpH*2XGM\R;7#
$"\i!L0$
local-user hhww123 service-type http
local-user hhww123 user-group manage-ug
#
interface GigabitEthernet1/0/1
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:4::1/64
#
ipv6 route-static 2001:DB8:3:: 64 GigabitEthernet1/0/1 2001:DB8:4::2
#
ssl policy policy1
pki-domain domain1
#
grpc
#
grpc server ipv6
source-ip 2001:DB8:4::1
server-port 20000
ssl-policy policy1
ssl-verify peer
server enable
#
pki domain domain1
#
return
Background
As networks develop rapidly and applications become widespread, various
services are deployed to meet requirements in different scenarios. As a result,
networks face increasingly high requirements for statistics collection, and a tool
that can rapidly provide statistics about IP network performance is urgently
needed.
Advantages
TWAMP has the following advantages over the traditional tools that collect
statistics about IP network performance:
● TWAMP is a standard protocol that has a unified measurement model and
packet format, facilitating deployment.
● Multiprotocol Label Switching Transport Profile (MPLS-TP) Operation,
Administration and Maintenance (OAM) can be deployed only on MPLS-TP
networks, whereas TWAMP can be deployed on IP networks, MPLS networks,
and Layer 3 virtual private networks (L3VPNs).
Models
TWAMP uses the client/server mode and defines four logical entities, as shown in
Figure 1-48.
● Control-client: establishes, starts, and stops a test session and collects
statistics.
● Session-sender: proactively sends probes for performance statistics after being
notified by the control-client.
● Server: responds to the control-client's request for establishing, starting, or
stopping a test session.
● Session-reflector: replies to the probes sent by the session-sender with
response probes after being notified by the server.
In TWAMP, TCP packets are used as control signals, and UDP packets are used as
probes.
Context
TWAMP applies to scenarios where statistics on IP network performance, such as
the packet loss rate, jitter, and delay, need to be obtained quickly but do not need
to be highly accurate.
Pre-configuration Tasks
Before configuring TWAMP, complete the following tasks:
● Ensure that some devices on the live network can function as the control-
client and session-sender and comply with relevant standards.
● Ensure that the control-client and server are routable and the IP link between
them works properly.
Data Preparation
To configure TWAMP, you need the following data:
● (Optional) TCP port number and inactive interval for a control session
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa twamp
The TWAMP view is displayed.
Step 3 Run server
The server function is enabled, and the server view is displayed.
Step 4 Run either of the following commands to set the TCP listening mode for the
TWAMP server:
● Run the tcp listen-mode any-ip command to set the TCP listening mode of
the TWAMP server to any IP.
● Run the tcp listen-mode assign-ip command to set the TCP listening mode
of the TWAMP server to assigned IP. In this mode, you need to run the tcp
listen-address ip-address command to set a TCP listening address.
Step 5 (Optional) Run tcp port port-number [ all | vpn-instance vpn-instance-name ]
A TCP port is specified.
Step 6 (Optional) Run control-session inactive time-out
An inactive interval is configured for a control session.
Step 7 (Optional) Run client acl { aclnumBasic | aclnumAdv | aclname }
The ACL rule to be referenced is configured.
Step 8 Run commit
The configuration is committed.
----End
Context
After the session-reflector is configured, the session-reflector can reply to the
session-sender with timestamps and serial numbers to help collect statistics about
the delay, jitter, and packet loss rate.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa twamp
The TWAMP view is displayed.
Step 3 Run reflector
The session-reflector function is enabled, and the session-reflector view is
displayed.
Step 4 (Optional) Run test-session inactive timeout
An inactive interval is configured for a test session.
Step 5 Run commit
The configuration is committed.
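Mirroring the configuration file in the example that follows, a session-reflector with a 600-second inactive interval might be configured as follows (prompts illustrative):
<HUAWEI> system-view
[~HUAWEI] nqa twamp
[*HUAWEI-twamp] reflector
[*HUAWEI-twamp-reflector] test-session inactive 600
[*HUAWEI-twamp-reflector] commit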
----End
Prerequisites
TWAMP has been configured.
Procedure
● Run the display twamp global-info command to check global information
about TWAMP.
● Run the display twamp control-session [ verbose | client-ip client-ip-
address client-port client-port-number [ vpn-instance vpn-instance-name ] ]
command to check information about control sessions on the server.
● Run the display twamp test-session [ verbose | reflector-ip reflector-ip-
address reflector-port reflector-port-number [ vpn-instance vpn-instance-
name ] ] command to check information about test sessions on the session-
reflector.
----End
Networking Requirements
On the IP network shown in Figure 1-49, DeviceA functions as the Server
(supports only passive measurement) in a TWAMP test. DeviceB functions as the
Control-Client. It initiates statistics collection by specifying the IP address of
DeviceA. DeviceB then sends collected statistics to the performance management
system.
NOTE
DeviceB must be able to function as the Controller. Data is sent using data collection
technology such as telemetry. For details about the configuration procedure, see the
corresponding third-party product manual.
Configuration Roadmap
The configuration roadmap for DeviceA is as follows:
1. Configure the Server.
2. Configure the Session-Reflector.
Data Preparation
To complete the configuration, you need the following data:
● IP address of DeviceA
● TCP port number
● Inactive interval for a control session
● Inactive interval for a test session
Procedure
Step 1 Configure DeviceA, DeviceB, and the performance management system to be
routable. The configuration details are not provided here.
Step 2 Configure the Server.
<DeviceA> system-view
[~DeviceA] nqa twamp
[~DeviceA-twamp] server
[~DeviceA-twamp-srv] tcp listen-mode any-ip
[*DeviceA-twamp-srv] tcp port 65530
[*DeviceA-twamp-srv] control-session inactive 600
[*DeviceA-twamp-srv] quit
Step 3 Configure the Session-Reflector.
[*DeviceA-twamp] reflector
[*DeviceA-twamp-reflector] test-session inactive 600
[*DeviceA-twamp-reflector] quit
[*DeviceA-twamp] quit
[*DeviceA] commit
Step 4 Verify the configuration. After a control session is set up, run the display
twamp control-session verbose command on DeviceA. The following is partial
output:
Inactivity Time(s) : -
Test Session Number : 10
Created Time : 2019-08-05 16:47:55
Normal Stop : 100
Abort Stop : 10
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
nqa twamp
server
tcp listen-mode any-ip
tcp port 65530
control-session inactive 600
reflector
test-session inactive 600
#
return
Networking Requirements
On the L3 VXLAN shown in Figure 1-50, DeviceB functions as the Server (supports
only passive measurement) in a TWAMP test. DeviceA functions as the Control-
Client. It initiates statistics collection by specifying the IP address of DeviceB.
DeviceA then sends collected statistics to the performance management system.
NOTE
DeviceA must be able to function as the Controller. Data is sent using data collection
technology such as telemetry. For details about the configuration procedure, see the
corresponding third-party product manual.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a VXLAN tunnel between DeviceA and DeviceB.
2. Configure the Server on DeviceB.
3. Configure the Session-Reflector on DeviceB.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces connecting devices
● TCP port number
Procedure
Step 1 Assign an IP address to each node interface, including the loopback interface.
For configuration details, see Configuration Files.
Step 2 Configure an IGP (IS-IS in this example) on the backbone network.
For configuration details, see Configuration Files.
Step 3 Configure a VXLAN tunnel between DeviceA and DeviceB.
For details about the configuration roadmap, see VXLAN Configuration. For
configuration details, see Configuration Files.
After a VXLAN tunnel is established, you can run the display vxlan tunnel
command on DeviceA to view VXLAN tunnel information.
Step 4 Set the forwarding mode of the VXLAN tunnel to hardware loopback.
# Configure DeviceA.
[~DeviceA] global-gre forward-mode loopback
# Configure DeviceB.
[~DeviceB] global-gre forward-mode loopback
Step 5 Configure the Server and the Session-Reflector on DeviceB. For
configuration details, see Configuration Files.
Step 6 Verify the configuration. After a control session is set up, run the display
twamp control-session verbose command on DeviceB. The following is partial
output:
Control Session ID : 1
Mode : unauthenticated
DSCP : 03
Padding Length : 128
VPN Instance : vpn1
Create Time : 2019-08-05 16:47:55
Last Start Time : 2019-08-05 16:47:55
Last Stop Time : never
Sequence Number : 2000
Test Tx Numbers : 100
Test Rx Numbers : 100
Test Discard Numbers : 0
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 10:1
apply-label per-instance
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 11:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity
vpn-target 11:1 import-extcommunity evpn
vxlan vni 5010
#
bridge-domain 10
vxlan vni 10 split-horizon-mode
evpn binding vpn-instance evrf3
#
isis 1
network-entity 10.0000.0000.0001.00
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp distribute-gateway enable
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.1 255.255.255.0
isis enable 1
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface Nve1
source 1.1.1.1
vni 10 head-end peer-list protocol bgp
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.2 enable
peer 2.2.2.2 advertise irb
peer 2.2.2.2 advertise encap-type vxlan
#
global-gre forward-mode loopback
#
return
● DeviceB configuration file
#
sysname DeviceB
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 20:1
apply-label per-instance
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 22:22
apply-label per-instance
vpn-target 2:2 export-extcommunity
vpn-target 11:1 export-extcommunity evpn
vpn-target 2:2 import-extcommunity
vpn-target 11:1 import-extcommunity evpn
vxlan vni 5010
#
bridge-domain 20
vxlan vni 20 split-horizon-mode
evpn binding vpn-instance evrf3
#
isis 1
network-entity 10.0000.0000.0002.00
#
interface Vbdif20
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp distribute-gateway enable
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.2.2 255.255.255.0
isis enable 1
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
interface Nve1
source 2.2.2.2
vni 20 head-end peer-list protocol bgp
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 advertise irb
peer 1.1.1.1 advertise encap-type vxlan
#
nqa twamp
server
tcp listen-mode any-ip
reflector
#
global-gre forward-mode loopback
#
return
Networking Requirements
On the EVPN L3VPN shown in Figure 1-51, DeviceB functions as the Server
(supports only passive measurement) in a TWAMP test. DeviceA functions as the
Control-Client. It initiates statistics collection by specifying the IP address of
DeviceB. DeviceA then sends collected statistics to the performance management
system.
NOTE
DeviceA must be able to function as the Controller. Data is sent using data collection
technology such as telemetry. For details about the configuration procedure, see the
corresponding third-party product manual.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an EVPN L3VPN.
2. Configure the Server.
3. Configure the Session-Reflector.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces connecting devices
● TCP port number
Procedure
Step 1 Assign an IP address to each node interface, including the loopback interface.
For configuration details, see Configuration Files.
Step 2 Configure an IGP (IS-IS in this example) on the backbone network.
For configuration details, see Configuration Files.
Step 3 Configure an IS-IS SR-MPLS BE tunnel between DeviceA and DeviceB.
For details about the configuration roadmap, see Configuring an IS-IS SR-MPLS BE
Tunnel. For configuration details, see Configuration Files.
Step 4 Configure an EVPN L3VPN between DeviceA and DeviceB.
For details about the configuration roadmap, see Configuring an EVPN to Carry
Layer 3 Services. For configuration details, see Configuration Files.
Step 5 Configure the Server.
<DeviceB> system-view
[~DeviceB] nqa twamp
[~DeviceB-twamp] server
[*DeviceB-twamp-srv] tcp listen-mode any-ip
[*DeviceB-twamp-srv] tcp port 65530 vpn-instance vpna
[*DeviceB-twamp-srv] quit
Step 6 Configure the Session-Reflector.
[*DeviceB-twamp] reflector
[*DeviceB-twamp-reflector] quit
[*DeviceB-twamp] quit
[*DeviceB] commit
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
apply-label per-instance
tnl-policy SR-MPLS-BE
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.3
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0012.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 200000 201000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
mpls
#
interface LoopBack0
ip address 1.1.1.3 255.255.255.255
isis enable 1
isis prefix-sid index 20
#
bgp 100
peer 1.1.1.2 as-number 100
peer 1.1.1.2 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
import-route direct
peer 1.1.1.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.2 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 1.1.1.2 as-number 100
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.2 enable
#
tunnel-policy SR-MPLS-BE
tunnel select-seq lsp load-balance-number 1
#
return
● DeviceB configuration file
#
sysname DeviceB
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy SR-MPLS-BE
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.2
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0010.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 200000 201000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.1.1 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
mpls
#
interface LoopBack0
ip address 1.1.1.2 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
bgp 100
peer 1.1.1.3 as-number 100
peer 1.1.1.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
import-route direct
peer 1.1.1.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 1.1.1.3 as-number 100
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.3 enable
#
nqa twamp
server
tcp listen-mode any-ip
tcp port 65530 vpn-instance vpna
reflector
#
tunnel-policy SR-MPLS-BE
tunnel select-seq lsp load-balance-number 1
#
return
Background
TWAMP is an IP performance monitoring (IPPM) protocol and has two versions:
standard version and light version. Different from standard TWAMP, TWAMP Light
moves the control plane from the Responder to the Controller, so TWAMP
control modules need to be deployed only on the Controller. This greatly relaxes
the performance requirements on the Responder, allowing the Responder to be
deployed rapidly.
Characteristic
TWAMP Light integrates the Control-Client and Session-Sender on the Controller.
The Responder functions merely as the Session-Reflector.
The Controller creates test sessions, collects performance statistics, and reports
statistics to the NMS using Performance Management (PM) or MIBs. After that,
the Controller parses NMS information and sends the results to the Responder
through private channels. The Responder merely responds to TWAMP-Test packets
received over test sessions.
Models
In Figure 1-52, TWAMP-Test packets function as probes and carry the IP address,
UDP port number, and fixed TTL value 255 that are predefined for the test
session between the Controller and Responder. The Controller sends a TWAMP-
Test packet to the Responder, and the Responder replies to it. The Controller
collects TWAMP statistics.
TWAMP Light defines two types of TWAMP-Test packets: Test-request packets and
Test-response packets.
● Session-Sender test packets are sent from the Controller to the Responder.
● Session-Reflector test packets are replied by the Responder to the Controller.
The Controller collects performance statistics based on TWAMP-Test packets and
reports the results to the NMS, which provides the statistics to users.
Usage Scenario
As TWAMP Light simplifies deployment and supports plug-and-play, you can use
TWAMP Light to rapidly and flexibly measure the round-trip performance of an IP
network, such as the two-way packet loss rate, jitter, and delay.
Pre-configuration Tasks
Before configuring TWAMP Light functions, complete the following tasks:
● Ensure that devices on the live network support TWAMP Light and comply
with standard protocols.
● Ensure that the Controller and Responder are routable and IP links between
them work properly.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa twamp-light
The TWAMP Light view is displayed.
Step 3 Run responder
The TWAMP Light Responder is enabled, and its view is displayed.
Step 4 Run either of the following commands to create a test session on the Responder
as required.
● To create a test session in Eth-Trunk member interface-based test scenarios,
run the test-session session-id { local-ip local-ip-address remote-ip remote-
ip-address | local-ipv6 local-ipv6-address remote-ipv6 remote-ipv6-address }
local-port local-port remote-port remote-port [ vpn-instance vpn-instance-
name ] link-bundle-interface { link-bundle-interface-type link-bundle-
interface-number | link-bundle-interface-name } [ anti-loop-on ]
[ description description ] command.
● To create a test session in other test scenarios, run the test-session session-id
{ local-ip local-ip-address remote-ip remote-ip-address | local-ipv6 local-
ipv6-address remote-ipv6 remote-ipv6-address } local-port local-port
remote-port remote-port [ interface { interface-type interface-number |
interface-name } | vpn-instance vpn-instance-name ] [ anti-loop-on ]
[ description description ] command.
NOTE
● After a test session is configured, its parameters cannot be modified. To modify parameters
of a test session, delete the session and reconfigure it.
● The IP address configured for a test session must be a unicast address.
● The UDP port number of the Responder must be a port number not in use.
● The VPN instance configured for a TWAMP Light test session must exist. This instance
cannot be deleted after it is bound to a test session (the system displays a prompt if an
attempt is made to delete the VPN instance).
● In Layer 2 and Layer 3 hybrid networking scenarios where the base station is offline, you
need to configure static ARP on the Layer 3 virtual interface of the network device that
connects Layer 3 to Layer 2.
Step 5 Run commit
The configuration is committed.
----End
Context
The TWAMP Light Client creates a test session. The TWAMP Light Sender starts
performance tests and sends test packets to the Responder, or stops
performance tests.
Procedure
Step 1 Configure a TWAMP Light Client and create a test session.
1. Run system-view
The system view is displayed.
2. Run nqa twamp-light
The TWAMP Light view is displayed.
3. Run client
The TWAMP Light Client function is enabled, and the TWAMP Light Client
view is displayed.
4. Create a test session on the Controller:
– To create a test session in Eth-Trunk member interface-based test
scenarios, run the test-session session-id { sender-ip sender-ip-address
reflector-ip reflector-ip-address | sender-ipv6 sender-address-v6
reflector-ipv6 reflector-address-v6 } sender-port sender-port reflector-
port reflector-port [ vpn-instance vpn-instance-name ] link-bundle-
interface { link-bundle-interface-type link-bundle-interface-number | link-
bundle-interface-name } [ dscp dscp-value | padding padding-length |
padding-type padding-type | description description ] * command.
– To create a session in other test scenarios, run the test-session session-id
{ sender-ip sender-ip-address reflector-ip reflector-ip-address | sender-
ipv6 sender-address-v6 reflector-ipv6 reflector-address-v6 } sender-port
sender-port reflector-port reflector-port [ vpn-instance vpn-instance-
name ] [ dscp dscp-value | padding padding-length | padding-type
padding-type | description description ] * command.
NOTE
When configuring a TWAMP Light client to send IPv6 packets, ensure that
the length of the IPv6 packets to be sent is smaller than the smallest MTU
configured on interfaces along the path. Otherwise, packets are discarded.
Before the configuration, perform a ping test, ensuring that the source
address, destination address, and packet length of the ping packets are the
same as those of the TWAMP Light IPv6 packets. Then run the display ipv6
pathmtu command to check the PMTU value of each interface along the
path. For details, see Path MTU Test.
5. (Optional) Run test-session session-id ptp-compatible enable
The PTP compatible mode is enabled for the TWAMP Light test session.
In the scenario where a Huawei device interworks with a non-Huawei device,
if the non-Huawei device uses a PTP server as the clock source but the
Huawei device uses an NTP server as the clock source, you can run this
command to enable the PTP compatible mode on the Huawei device. This
prevents inaccurate measurement data caused by asynchronous clocks
(different primary reference clocks).
6. (Optional) Run test-session session-id bind interface { interface-type
interface-number | interface-name }
An interface is bound to a TWAMP Light test session.
After an interface is bound to a TWAMP Light test session, valid statistics are
reported to the bound interface. Other functional modules can obtain the
statistics from this interface.
Step 2 Configure the TWAMP Light Sender and start the TWAMP Light performance test.
1. Run system-view
The system view is displayed.
2. Run nqa twamp-light
The TWAMP Light view is displayed.
3. (Optional) Run one-way delay-measure enable
TWAMP Light one-way delay measurement is enabled.
NOTE
Before performing TWAMP Light one-way delay measurement, you must configure
1588v2 for clock synchronization among devices.
4. Run sender
The TWAMP Light Sender function is enabled, and the TWAMP Light Sender
view is displayed.
5. Run commit
The configuration is committed.
6. Start TWAMP Light performance measurement.
– To perform one-off performance measurement, run the test start test-
session session-id { duration duration | packet-count packet-count }
[ period { 10 | 100 | 1000 | 30000 } ] [ time-out time-out ] command.
– To perform continual performance measurement, run the test start-
continual test-session session-id [ period { 10 | 100 | 1000 | 30000 } ]
[ time-out time-out ] command.
Statistics collection automatically stops when the specified duration elapses or the
specified number of packets has been sent. You can also run the test stop { all | test-
session session-id } command to stop statistics collection.
----End
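For reference, a minimal sketch that strings the Client, Sender, and measurement-start steps together. The sysname, addresses, and port numbers are illustrative assumptions:
<HUAWEI> system-view
[~HUAWEI] nqa twamp-light
[*HUAWEI-twamp-light] client
[*HUAWEI-twamp-light-client] test-session 1 sender-ip 10.1.1.1 reflector-ip 10.2.2.2 sender-port 2001 reflector-port 2010
[*HUAWEI-twamp-light-client] commit
[~HUAWEI-twamp-light-client] quit
[~HUAWEI-twamp-light] sender
[~HUAWEI-twamp-light-sender] test start-continual test-session 1 period 10
[*HUAWEI-twamp-light-sender] commit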
Prerequisites
You have configured the TWAMP Light statistics collection function.
Procedure
● Run the display twamp-light [ link-bundle ] test-session [ verbose |
session-id ] command to check the real-time statistics about a specified
TWAMP Light test session.
● Run the display twamp-light statistic-type { twoway-delay | twoway-loss }
test-session session-id [ link-bundle-member { ifType ifNum | ifName } ]
[ summary ] command to check two-way delay or two-way packet loss
statistics about a specified TWAMP Light test session.
● Run the display twamp-light statistic-type oneway-delay test-session
session-id command to check one-way delay statistics about a specified
TWAMP Light test session.
● Run the display twamp-light responder [ link-bundle ] test-session
[ verbose | session-id ] command to check real-time session information on
the TWAMP Light Responder.
----End
Context
NOTICE
TWAMP Light session statistics cannot be restored after being cleared. Exercise
caution when clearing the statistics.
Procedure
● Run the reset twamp-light statistics { all | test-session session-id }
command to clear TWAMP Light session statistics.
----End
Networking Requirements
On the IP network shown in Figure 1-53, DeviceA functions as the Controller, and
DeviceB functions as the Responder.
● DeviceA: sends and receives packets over a test session, collects and calculates
performance statistics, and reports the statistics to the performance
management system.
● DeviceB: responds to the packets received over a test session.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each involved interface so
that all devices can communicate at the network layer.
2. Configure the TWAMP Light Responder on DeviceB.
3. Configure the TWAMP Light Controller on DeviceA.
4. Configure the Controller to send statistics to the performance management
system through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● Responder DeviceB
– IP address: 10.2.2.2
– UDP port number: 2010
● Controller DeviceA
– IP address: 10.1.1.1
– UDP port number: 2001
● Performance management system
– IP address: 192.168.100.100
– gRPC port number: 10001
Procedure
Step 1 Configure an IP address and a routing protocol for each involved interface so that
all devices can communicate at the network layer. The configuration details are
not provided here.
Step 2 Configure the TWAMP Light Responder.
<DeviceB> system-view
[~DeviceB] nqa twamp-light
[*DeviceB-twamp-light] responder
[*DeviceB-twamp-light-responder] test-session 1 local-ip 10.2.2.2 remote-ip 10.1.1.1 local-port 2010
remote-port 2001
[*DeviceB-twamp-light-responder] commit
[~DeviceB-twamp-light-responder] quit
[~DeviceB-twamp-light] quit
Step 3 Configure the TWAMP Light Controller on DeviceA and start continual
measurement. For configuration details, see Configuration Files.
Step 4 Configure the Controller to send statistics to the performance management
system through telemetry. For configuration details, see Configuration Files.
Step 5 Verify the configuration. Run the display twamp-light test-session verbose
command on DeviceA. The following is partial output:
Type : continual
Sender IP : 10.1.1.1
Sender Port : 2001
Reflector IP : 10.2.2.2
Reflector Port : 2010
Mode : unauthenticated
DSCP :0
Padding Length : 128
Padding Type : 00
VPN Instance :-
Link-Bundle Interface :-
Last Start Time : 2017-04-13 15:33:52
Last Stop Time : never
Regular Time(in minute) :-
Period Time(in millisecond) : 10
Time Out(in second) :5
Duration Time(in second) :-
Packet Count :-
# Display the two-way packet loss statistics of the TWAMP Light test session on DeviceA.
[~DeviceA] display twamp-light statistic-type twoway-loss test-session 1
Latest two-way loss statistics:
--------------------------------------------------------------------------------
Index Loss count Loss ratio Error count Error ratio
--------------------------------------------------------------------------------
108196 0 0.0000% 0 0.0000%
108197 0 0.0000% 0 0.0000%
108198 0 0.0000% 0 0.0000%
108199 0 0.0000% 0 0.0000%
108200 0 0.0000% 0 0.0000%
108201 0 0.0000% 0 0.0000%
108202 0 0.0000% 0 0.0000%
108203 0 0.0000% 0 0.0000%
108204 0 0.0000% 0 0.0000%
108205 0 0.0000% 0 0.0000%
108206 0 0.0000% 0 0.0000%
108207 0 0.0000% 0 0.0000%
108208 0 0.0000% 0 0.0000%
108209 0 0.0000% 0 0.0000%
108210 0 0.0000% 0 0.0000%
108211 0 0.0000% 0 0.0000%
108212 0 0.0000% 0 0.0000%
108213 0 0.0000% 0 0.0000%
108214 0 0.0000% 0 0.0000%
108215 0 0.0000% 0 0.0000%
108216 0 0.0000% 0 0.0000%
108217 0 0.0000% 0 0.0000%
108218 0 0.0000% 0 0.0000%
108219 0 0.0000% 0 0.0000%
108220 0 0.0000% 0 0.0000%
108221 0 0.0000% 0 0.0000%
108222 0 0.0000% 0 0.0000%
108223 0 0.0000% 0 0.0000%
108224 0 0.0000% 0 0.0000%
108225 0 0.0000% 0 0.0000%
--------------------------------------------------------------------------------
Average Loss Count : 0 Average Loss Ratio : 0.0000%
Maximum Loss Count : 0 Maximum Loss Ratio : 0.0000%
Minimum Loss Count : 0 Minimum Loss Ratio : 0.0000%
Average RxError Count: 0 Average RxError Ratio: 0.0000%
Maximum RxError Count: 0 Maximum RxError Ratio: 0.0000%
Minimum RxError Count: 0 Minimum RxError Ratio: 0.0000%
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
nqa twamp-light
client
test-session 1 sender-ip 10.1.1.1 reflector-ip 10.2.2.2 sender-port 2001 reflector-port 2010
sender
test start-continual test-session 1 period 10
#
telemetry
#
sensor-group twamp
sensor-path huawei-twamp-controller:twamp-controller/client/sessions/session/huawei-twamp-
statistics:statistics
#
destination-group twamp
ipv4-address 192.168.100.100 port 10001 protocol grpc no-tls
#
subscription twamp
sensor-group twamp sample-interval 5000
destination-group twamp
#
return
Networking Requirements
On the L3 VXLAN shown in Figure 1-54, DeviceA functions as the Responder and
DeviceB functions as the Controller.
● DeviceA: responds to the packets received over a test session.
● DeviceB: sends and receives packets over a test session, collects and calculates
performance statistics, and reports the statistics to the performance
management system.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a VXLAN tunnel between DeviceA and DeviceB.
2. Configure the TWAMP Light Responder on DeviceA.
3. Configure the TWAMP Light Controller on DeviceB.
4. Configure the Controller to send statistics to the performance management
system through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of the interfaces connecting devices
● IP addresses and UDP port numbers of the Responder and Controller and IP
address and gRPC port number of the performance management system.
Procedure
Step 1 Assign IP addresses to node interfaces, including loopback interfaces.
For configuration details, see Configuration Files.
Step 2 Configure an IGP (IS-IS in this example) on the backbone network.
For configuration details, see Configuration Files.
Step 3 Configure a VXLAN tunnel between DeviceA and DeviceB.
For the configuration roadmap, see VXLAN Configuration. For configuration
details, see Configuration Files.
After a VXLAN tunnel is established, you can run the display vxlan tunnel
command on DeviceA to display VXLAN tunnel information. The following
example uses the command output on DeviceA.
[~DeviceA] display vxlan tunnel
Number of vxlan tunnel : 1
Tunnel ID Source Destination State Type Uptime
-----------------------------------------------------------------------------------
4026531841 1.1.1.1 2.2.2.2 up dynamic 00:12:56
Step 4 Set the forwarding mode of the VXLAN tunnel to hardware loopback.
# Configure DeviceA.
[~DeviceA] global-gre forward-mode loopback
# Configure DeviceB.
[~DeviceB] global-gre forward-mode loopback
Step 5 Configure the TWAMP Light Responder on DeviceA and the TWAMP Light
Controller on DeviceB, and configure the Controller to send statistics to the
performance management system through telemetry. For configuration details,
see Configuration Files.
Step 6 Verify the configuration. Run the display twamp-light statistic-type
twoway-delay test-session 1 command on DeviceB. The following is partial
output:
--------------------------------------------------------------------------------
Index Delay Jitter TxJitter RxJitter
--------------------------------------------------------------------------------
11027 345 5 3 4
11028 345 5 3 4
11029 345 5 4 4
11030 347 5 3 4
11031 347 4 3 4
11032 347 4 3 4
11033 347 4 3 4
11034 346 4 3 4
11035 346 5 3 4
11036 346 5 3 4
11037 346 5 3 4
11038 346 4 4 3
11039 347 4 4 3
11040 347 4 4 3
11041 347 4 4 3
11042 347 4 4 3
11043 347 5 3 4
11044 346 5 3 4
11045 346 5 3 4
11046 346 5 3 4
11047 346 5 3 4
11048 346 5 3 4
11049 346 4 3 4
11050 346 4 3 4
11051 345 4 3 3
11052 345 4 3 3
11053 345 5 4 3
11054 345 5 4 3
11055 345 4 4 3
11056 345 4 3 3
--------------------------------------------------------------------------------
Average Delay : 346 Average Jitter : 5
Maximum Delay : 370 Maximum Jitter : 32
Minimum Delay : 328 Minimum Jitter : 0
Average TxJitter : 3 Average RxJitter : 4
Maximum TxJitter : 29 Maximum RxJitter : 23
Minimum TxJitter : 0 Minimum RxJitter : 0
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 10:1
apply-label per-instance
vpn-target 11:1 export-extcommunity
vpn-target 11:1 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 11:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity
vpn-target 11:1 import-extcommunity evpn
vxlan vni 5010
#
bridge-domain 10
vxlan vni 10 split-horizon-mode
evpn binding vpn-instance evrf3
#
isis 1
network-entity 10.0000.0000.0001.00
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp distribute-gateway enable
arp collect host enable
#
interface GigabitEthernet1/0/0
undo shutdown
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
interface Nve1
source 2.2.2.2
vni 20 head-end peer-list protocol bgp
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.1 enable
peer 1.1.1.1 advertise irb
peer 1.1.1.1 advertise encap-type vxlan
#
nqa twamp-light
client
test-session 1 sender-ip 192.168.1.1 reflector-ip 192.168.2.2 sender-port 2001 reflector-port 2010
sender
test start-continual test-session 1 period 10
#
telemetry
#
sensor-group twamp
sensor-path huawei-twamp-controller:twamp-controller/client/sessions/session/huawei-twamp-
statistics:statistics
#
destination-group twamp
ipv4-address 192.168.100.100 port 10001 protocol grpc no-tls
#
subscription twamp
sensor-group twamp sample-interval 5000
destination-group twamp
#
global-gre forward-mode loopback
#
return
Networking Requirements
On the EVPN L3VPN shown in Figure 1-55, DeviceB functions as the Responder
and DeviceA functions as the Controller.
● DeviceA: sends and receives packets over a test session, collects and calculates
performance statistics, and reports the statistics to the performance
management system.
● DeviceB: responds to the packets received over a test session.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an EVPN L3VPN.
2. Configure the TWAMP Light Controller.
3. Configure the TWAMP Light Responder.
4. Configure the Controller to send statistics to the performance management
system through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of the interfaces connecting devices
● IP addresses and UDP port numbers of the Responder and Controller and IP
address and gRPC port number of the performance management system
Procedure
Step 1 Assign IP addresses to node interfaces, including loopback interfaces.
For configuration details, see Configuration Files.
Step 2 Configure an IGP (IS-IS in this example) on the backbone network.
For configuration details, see Configuration Files.
Step 3 Configure an IS-IS SR-MPLS BE tunnel between DeviceA and DeviceB.
For the configuration roadmap, see Configuring an IS-IS SR-MPLS BE Tunnel. For
configuration details, see Configuration Files.
Step 4 Configure an EVPN L3VPN between DeviceA and DeviceB.
For details about the configuration roadmap, see Configuring an EVPN to Carry
Layer 3 Services. For configuration details, see Configuration Files.
Step 5 Create a TWAMP Light test session on DeviceB (Responder).
<DeviceB> system-view
[~DeviceB] nqa twamp-light
[*DeviceB-twamp-light] responder
[*DeviceB-twamp-light-responder] test-session 1 local-ip 192.168.1.1 remote-ip 192.168.2.2 local-port
3000 remote-port 2000 vpn-instance vpna
[*DeviceB-twamp-light-responder] commit
[~DeviceB-twamp-light-responder] quit
[~DeviceB-twamp-light] quit
Step 6 Configure the TWAMP Light Controller on DeviceA and start continual
measurement, and configure the Controller to send statistics to the performance
management system through telemetry. For configuration details, see
Configuration Files.
Step 7 Verify the configuration. Run the display twamp-light statistic-type
twoway-delay test-session 1 command on DeviceA. The following is partial
output:
--------------------------------------------------------------------------------
Index Delay Jitter TxJitter RxJitter
--------------------------------------------------------------------------------
11029 345 5 4 4
11030 347 5 3 4
11031 347 4 3 4
11032 347 4 3 4
11033 347 4 3 4
11034 346 4 3 4
11035 346 5 3 4
11036 346 5 3 4
11037 346 5 3 4
11038 346 4 4 3
11039 347 4 4 3
11040 347 4 4 3
11041 347 4 4 3
11042 347 4 4 3
11043 347 5 3 4
11044 346 5 3 4
11045 346 5 3 4
11046 346 5 3 4
11047 346 5 3 4
11048 346 5 3 4
11049 346 4 3 4
11050 346 4 3 4
11051 345 4 3 3
11052 345 4 3 3
11053 345 5 4 3
11054 345 5 4 3
11055 345 4 4 3
11056 345 4 3 3
--------------------------------------------------------------------------------
Average Delay : 346 Average Jitter : 5
Maximum Delay : 370 Maximum Jitter : 32
Minimum Delay : 328 Minimum Jitter : 0
Average TxJitter : 3 Average RxJitter : 4
Maximum TxJitter : 29 Maximum RxJitter : 23
Minimum TxJitter : 0 Minimum RxJitter : 0
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
tnl-policy SR-MPLS-BE evpn
vpn-target 111:1 export-extcommunity evpn
vpn-target 111:1 import-extcommunity evpn
#
mpls lsr-id 1.1.1.3
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0012.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 200000 201000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.2 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
mpls
#
interface LoopBack0
ip address 1.1.1.3 255.255.255.255
isis enable 1
isis prefix-sid index 20
#
bgp 100
peer 1.1.1.2 as-number 100
peer 1.1.1.2 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
import-route direct
peer 1.1.1.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.2 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 1.1.1.2 as-number 100
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.2 enable
#
nqa twamp-light
client
test-session 1 sender-ip 192.168.2.2 reflector-ip 192.168.1.1 sender-port 2000 reflector-port 3000
vpn-instance vpna padding 1454
sender
test start-continual test-session 1 period 10
#
telemetry
#
sensor-group twamp
sensor-path huawei-twamp-controller:twamp-controller/client/sessions/session/huawei-twamp-
statistics:statistics
#
destination-group twamp
ipv4-address 192.168.100.100 port 10001 protocol grpc no-tls
#
subscription twamp
sensor-group twamp sample-interval 5000
destination-group twamp
#
tunnel-policy SR-MPLS-BE
tunnel select-seq sr-lsp load-balance-number 1
#
return
● DeviceB configuration file
#
sysname DeviceB
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy SR-MPLS-BE evpn
vpn-target 111:1 export-extcommunity evpn
vpn-target 111:1 import-extcommunity evpn
#
mpls lsr-id 1.1.1.2
#
mpls
#
segment-routing
tunnel-prefer segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0010.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 200000 201000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.1.1 255.255.255.0
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
mpls
#
interface LoopBack0
ip address 1.1.1.2 255.255.255.255
isis enable 1
isis prefix-sid index 10
#
bgp 100
peer 1.1.1.3 as-number 100
peer 1.1.1.3 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
import-route direct
peer 1.1.1.3 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 1.1.1.3 as-number 100
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.3 enable
#
nqa twamp-light
responder
test-session 1 local-ip 192.168.1.1 remote-ip 192.168.2.2 local-port 3000 remote-port 2000 vpn-
instance vpna
#
tunnel-policy SR-MPLS-BE
tunnel select-seq sr-lsp load-balance-number 1
#
return
Networking Requirements
On the IP network shown in Figure 1-56, DeviceA functions as the Controller, and
DeviceB functions as the Responder.
● DeviceA: sends and receives packets over a test session, collects and calculates
performance statistics, and reports the statistics to the performance
management system.
● DeviceB: responds to the packets received over a test session.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each interface so that all
the devices can communicate at the network layer.
2. Configure the TWAMP Light Responder on DeviceB.
3. Configure the TWAMP Light Controller on DeviceA.
4. Configure the Controller to send statistics to the performance management
system through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● Responder DeviceB
– IP address: 2001:DB8:2::2
– UDP port number: 2010
● Controller DeviceA
– IP address: 2001:DB8:1::1
– UDP port number: 2001
● Performance management system
– IP address: 2001:DB8:100::100
– gRPC port number: 10001
Procedure
Step 1 Configure an IP address and a routing protocol for each involved interface so that
all the devices can communicate at the network layer. The configuration
procedure is not provided here.
Step 2 Configure the TWAMP Light Responder on DeviceB and the TWAMP Light
Controller on DeviceA, and configure the Controller to send statistics to the
performance management system through telemetry. For configuration details,
see Configuration Files.
Step 3 Verify the configuration. Run the display twamp-light statistic-type
twoway-delay test-session 1 command on DeviceA. The following is partial
output:
--------------------------------------------------------------------------------
Index Delay Jitter TxJitter RxJitter
--------------------------------------------------------------------------------
11038 346 4 4 3
11039 347 4 4 3
11040 347 4 4 3
11041 347 4 4 3
11042 347 4 4 3
11043 347 5 3 4
11044 346 5 3 4
11045 346 5 3 4
11046 346 5 3 4
11047 346 5 3 4
11048 346 5 3 4
11049 346 4 3 4
11050 346 4 3 4
11051 345 4 3 3
11052 345 4 3 3
11053 345 5 4 3
11054 345 5 4 3
11055 345 4 4 3
11056 345 4 3 3
--------------------------------------------------------------------------------
Average Delay : 346 Average Jitter : 5
Maximum Delay : 370 Maximum Jitter : 32
Minimum Delay : 328 Minimum Jitter : 0
Average TxJitter : 3 Average RxJitter : 4
Maximum TxJitter : 29 Maximum RxJitter : 23
Minimum TxJitter : 0 Minimum RxJitter : 0
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::1/64
#
nqa twamp-light
client
test-session 1 sender-ipv6 2001:DB8:1::1 reflector-ipv6 2001:DB8:2::2 sender-port 2001 reflector-port 2010
sender
test start-continual test-session 1 period 10
#
telemetry
#
sensor-group twamp
sensor-path huawei-twamp-controller:twamp-controller/client/sessions/session/huawei-twamp-
statistics:statistics
#
destination-group twamp
ipv6-address 2001:DB8:100::100 port 10001 protocol grpc no-tls
#
subscription twamp
sensor-group twamp sample-interval 5000
destination-group twamp
#
return
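A corresponding sketch of the Responder-side configuration on DeviceB, using the addresses and port numbers from Data Preparation. The session syntax follows the responder command described earlier; the interface details are assumptions:
#
sysname DeviceB
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:2::2/64
#
nqa twamp-light
responder
test-session 1 local-ipv6 2001:DB8:2::2 remote-ipv6 2001:DB8:1::1 local-port 2010 remote-port 2001
#
return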
Networking Requirements
On the VLL+L3VPN networks shown in Figure 1-57, DeviceA functions as the
Responder and is deployed on the last hop of the link connecting to a base
station. DeviceB functions as the Controller and is deployed on the aggregation
node.
● DeviceA: responds to the packets received over a test session.
● DeviceB: sends and receives packets over a test session and collects and
calculates performance statistics on the Layer 3 network, and reports the
statistics to the performance management system.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure VLL and L3VPN networks.
2. Configure the TWAMP Light Responder.
3. Configure devices at the edge of Layer 2 and Layer 3 networks.
4. Configure the TWAMP Light Controller.
5. Configure the Controller to send statistics to the performance management
system through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of the interfaces connecting devices
● IP addresses and UDP port numbers of the Responder and Controller and IP
address and gRPC port number of the performance management system.
Procedure
Step 1 Assign IP addresses to node interfaces, including loopback interfaces.
For configuration details, see Configuration Files.
Step 2 Configure an IGP on the backbone network. OSPF is used in this example.
For configuration details, see Configuration Files.
Step 3 Configure an MPLS tunnel between DeviceA and DeviceC, and between DeviceC
and DeviceB.
For configuration details, see Configuration Files.
After an MPLS tunnel is established, you can run the display mpls ldp command
to check LDP information. The following uses the command output on DeviceA
as an example:
[~DeviceA] display mpls ldp
LDP Global Information
------------------------------------------------------------------------------
Protocol Version : V1 Neighbor Liveness : 600 Sec
Graceful Restart : Off FT Reconnect Timer : 300 Sec
MTU Signaling : On Recovery Timer : 300 Sec
Capability-Announcement : On Longest-match : Off
mLDP P2MP Capability : Off mLDP MBB Capability : Off
mLDP MP2MP Capability : Off mLDP Recursive-fec : Off
NOTE
When DeviceA functions as the reflector, the local IP address in the command for creating a
session is the IP address of the base station, and the remote IP address is the IP address of
DeviceB.
Step 4 Configure the TWAMP Light Responder on DeviceA.
<DeviceA> system-view
[~DeviceA] nqa twamp-light
[*DeviceA-twamp-light] responder
[*DeviceA-twamp-light-responder] test-session 1 local-ip 192.168.1.1 remote-ip 192.168.2.2 local-port
6000 remote-port 6000 interface GigabitEthernet1/0/0.1
[*DeviceA-twamp-light-responder] commit
[~DeviceA-twamp-light-responder] quit
[~DeviceA-twamp-light] quit
Step 5 Configure the TWAMP Light Controller on DeviceB.
<DeviceB> system-view
[~DeviceB] nqa twamp-light
[*DeviceB-twamp-light] client
[*DeviceB-twamp-light-client] test-session 1 sender-ip 192.168.2.2 reflector-ip 192.168.1.1 sender-port
6000 reflector-port 6000 vpn-instance vpna
[*DeviceB-twamp-light-client] commit
[~DeviceB-twamp-light-client] quit
[~DeviceB-twamp-light] sender
[*DeviceB-twamp-light-sender] commit
[~DeviceB-twamp-light-sender] test start-continual test-session 1 period 10
[*DeviceB-twamp-light-sender] commit
[~DeviceB-twamp-light-sender] quit
[~DeviceB-twamp-light] quit
Step 6 When the base station is offline, you need to configure static ARP on DeviceC to
specify the mapping between the IP address and the MAC address of the base
station.
<DeviceC> system-view
[~DeviceC] arp static 192.168.1.1 00e0-fc12-3456 vid 26 interface Virtual-Ethernet1/0/1.31
[*DeviceC] commit
Step 7 Verify the configuration. Run the display twamp-light statistic-type
twoway-delay test-session 1 command on DeviceB. The following is partial
output:
--------------------------------------------------------------------------------
Index Delay Jitter TxJitter RxJitter
--------------------------------------------------------------------------------
11048 346 5 3 4
11049 346 4 3 4
11050 346 4 3 4
11051 345 4 3 3
11052 345 4 3 3
11053 345 5 4 3
11054 345 5 4 3
11055 345 4 4 3
11056 345 4 3 3
--------------------------------------------------------------------------------
Average Delay : 346 Average Jitter : 5
Maximum Delay : 370 Maximum Jitter : 32
Minimum Delay : 328 Minimum Jitter : 0
Average TxJitter : 3 Average RxJitter : 4
Maximum TxJitter : 29 Maximum RxJitter : 23
Minimum TxJitter : 0 Minimum RxJitter : 0
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
mpls lsr-id 10.0.0.1
#
mpls
#
mpls ldp
outbound peer all split-horizon
accept target-hello all
#
ipv4-family
#
mpls ldp remote-peer 10.0.0.2
mpls ldp timer hello-hold 45
mpls ldp timer keepalive-hold 45
remote-ip 10.0.0.2
#
ospf 1 router-id 10.0.0.1
area 0.0.0.1
network 3.0.0.0 0.0.0.3
network 10.0.0.1 0.0.0.0
#
interface loopback0
ip address 10.0.0.1 255.255.255.255
#
interface GigabitEthernet1/0/1.31
vlan-type dot1q 31
mtu 9500
ip address 192.168.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/0.1
vlan-type dot1q 1
mtu 9500
mpls l2vc 10.0.0.2 26 control-word raw
#
nqa twamp-light
responder
test-session 1 local-ip 192.168.1.1 remote-ip 192.168.2.2 local-port 6000 remote-port 6000 interface
GigabitEthernet1/0/0.1
#
return
● DeviceB configuration file
#
sysname DeviceB
#
mpls
#
mpls ldp
#
ipv4-family
#
isis 1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 4134:3060
apply-label per-instance
arp vlink-direct-route advertise
vpn-target 4134:306000 export-extcommunity
vpn-target 4134:306000 import-extcommunity
#
interface loopback0
ip address 10.0.0.3 255.255.255.255
isis enable 1
#
interface GigabitEthernet1/0/0.31
vlan-type dot1q 31
mtu 9500
ip address 3.0.0.6 255.255.255.252
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1.31
vlan-type dot1q 1
ip binding vpn-instance vpna
ip address 192.168.2.2 255.255.255.0
#
nqa twamp-light
client
test-session 1 sender-ip 192.168.2.2 reflector-ip 192.168.1.1 sender-port 6000 reflector-port 6000
vpn-instance vpna
sender
test start-continual test-session 1 period 10
#
telemetry
#
sensor-group twamp
sensor-path huawei-twamp-controller:twamp-controller/client/sessions/session/huawei-twamp-
statistics:statistics
#
destination-group twamp
ipv4-address 192.168.100.100 port 10001 protocol grpc no-tls
#
subscription twamp
sensor-group twamp sample-interval 5000
destination-group twamp
#
return
● DeviceC configuration file
#
sysname DeviceC
#
mpls lsr-id 10.0.0.2
#
mpls
#
mpls ldp remote-peer 10.0.0.1
mpls ldp timer hello-hold 45
Networking Requirements
On the IP network shown in Figure 1-58, DeviceA functions as the Controller, and
DeviceB functions as the Responder. Multiple member links of an Eth-Trunk are
bundled between DeviceA and DeviceB.
● DeviceA: sends and receives packets over a test session, collects and calculates
performance statistics, and reports the statistics to the performance
management system.
● DeviceB: responds to the packets received over a test session.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each interface so that all
the devices can communicate at the network layer.
2. Create an Eth-Trunk interface on DeviceA and DeviceB, and then add Ethernet
physical interfaces to each Eth-Trunk interface.
3. Configure the TWAMP Light Responder on DeviceB.
4. Configure the TWAMP Light Controller on DeviceA.
5. Configure the Controller to send statistics to the performance management
system through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● Responder DeviceB
– IP address: 10.2.2.2
– UDP port number: 2010
● Controller DeviceA
– IP address: 10.1.1.1
– UDP port number: 2001
● Performance management system
– IP address: 192.168.100.100
– gRPC port number: 10001
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all the
devices can communicate at the network layer. The configuration procedure is not
provided here.
Step 2 Create an Eth-Trunk interface and add Ethernet physical interfaces to the Eth-
Trunk interface. The following example uses the configuration on DeviceB. The
configuration roadmap on DeviceA is the same as that on DeviceB.
<DeviceB> system-view
[~DeviceB] interface Eth-Trunk 1
[*DeviceB-Eth-Trunk1] ip address 10.2.2.2 8
[*DeviceB-Eth-Trunk1] commit
[~DeviceB-Eth-Trunk1] quit
[~DeviceB] interface gigabitethernet1/0/0
[~DeviceB-Gigabitethernet1/0/0] undo shutdown
[*DeviceB-Gigabitethernet1/0/0] eth-trunk 1
[*DeviceB-Gigabitethernet1/0/0] commit
[~DeviceB-Gigabitethernet1/0/0] quit
[~DeviceB] interface gigabitethernet2/0/0
[~DeviceB-Gigabitethernet2/0/0] undo shutdown
[*DeviceB-Gigabitethernet2/0/0] eth-trunk 1
[*DeviceB-Gigabitethernet2/0/0] commit
[~DeviceB-Gigabitethernet2/0/0] quit
Step 3 Configure the TWAMP Light Responder on DeviceB and the TWAMP Light
Controller on DeviceA, binding the test session to Eth-Trunk 1, and configure the
Controller to send statistics to the performance management system through
telemetry. For configuration details, see Configuration Files.
Step 4 Verify the configuration. Run the display twamp-light link-bundle
test-session verbose command on DeviceA. The following is partial output:
Member-If : Gigabitethernet1/0/0
State : active
Last Start Time : 2020-12-13 15:33:52
Last Stop Time : never
Member-If : Gigabitethernet2/0/0
State : active
Last Start Time : 2020-12-13 15:33:52
Last Stop Time : never
# Display the two-way delay statistics of the TWAMP Light session on DeviceA.
The following is partial output:
[~DeviceA] display twamp-light statistic-type twoway-delay test-session 1
--------------------------------------------------------------------------------
Index Delay Jitter TxJitter RxJitter
--------------------------------------------------------------------------------
11056 345 4 3 3
--------------------------------------------------------------------------------
Average Delay : 346 Average Jitter : 5
Maximum Delay : 370 Maximum Jitter : 32
Minimum Delay : 328 Minimum Jitter : 0
Average TxJitter : 3 Average RxJitter : 4
Maximum TxJitter : 29 Maximum RxJitter : 23
Minimum TxJitter : 0 Minimum RxJitter : 0
# Display the two-way packet loss statistics of a TWAMP Light session based on
member interfaces on DeviceA.
[~DeviceA] display twamp-light statistic-type twoway-loss test-session 1 link-bundle-member
Gigabitethernet 1/0/0
Latest two-way loss statistics:
--------------------------------------------------------------------------------
Index Loss count Loss ratio Error count Error ratio
--------------------------------------------------------------------------------
108196 0 0.0000% 0 0.0000%
108197 0 0.0000% 0 0.0000%
108198 0 0.0000% 0 0.0000%
108199 0 0.0000% 0 0.0000%
108200 0 0.0000% 0 0.0000%
108201 0 0.0000% 0 0.0000%
108202 0 0.0000% 0 0.0000%
108203 0 0.0000% 0 0.0000%
108204 0 0.0000% 0 0.0000%
108205 0 0.0000% 0 0.0000%
108206 0 0.0000% 0 0.0000%
108207 0 0.0000% 0 0.0000%
108208 0 0.0000% 0 0.0000%
108209 0 0.0000% 0 0.0000%
108210 0 0.0000% 0 0.0000%
108211 0 0.0000% 0 0.0000%
108212 0 0.0000% 0 0.0000%
108213 0 0.0000% 0 0.0000%
108214 0 0.0000% 0 0.0000%
108215 0 0.0000% 0 0.0000%
108216 0 0.0000% 0 0.0000%
108217 0 0.0000% 0 0.0000%
108218 0 0.0000% 0 0.0000%
108219 0 0.0000% 0 0.0000%
108220 0 0.0000% 0 0.0000%
108221 0 0.0000% 0 0.0000%
108222 0 0.0000% 0 0.0000%
108223 0 0.0000% 0 0.0000%
108224 0 0.0000% 0 0.0000%
108225 0 0.0000% 0 0.0000%
--------------------------------------------------------------------------------
Average Loss Count : 0 Average Loss Ratio : 0.0000%
Maximum Loss Count : 0 Maximum Loss Ratio : 0.0000%
Minimum Loss Count : 0 Minimum Loss Ratio : 0.0000%
Average RxError Count: 0 Average RxError Ratio: 0.0000%
Maximum RxError Count: 0 Maximum RxError Ratio: 0.0000%
Minimum RxError Count: 0 Minimum RxError Ratio: 0.0000%
----End
Configuration Files
DeviceA configuration file
#
sysname DeviceA
#
interface Eth-Trunk 1
ip address 10.1.1.1 8
#
interface gigabitethernet1/0/0
undo shutdown
eth-trunk 1
#
interface gigabitethernet2/0/0
undo shutdown
eth-trunk 1
#
nqa twamp-light
client
test-session 1 sender-ip 10.1.1.1 reflector-ip 10.2.2.2 sender-port 2001 reflector-port 2010 link-bundle-
interface Eth-Trunk 1
sender
test start-continual test-session 1 period 10
#
telemetry
#
sensor-group twamp
sensor-path huawei-twamp-controller:twamp-controller/client/sessions/session/huawei-twamp-
statistics:statistics
sensor-path huawei-twamp-controller:twamp-controller/client/sessions/session/huawei-twamp-
statistics:link-bundle-statistics/link-bundle-statistic
#
destination-group twamp
ipv4-address 192.168.100.100 port 10001 protocol grpc no-tls
#
subscription twamp
sensor-group twamp sample-interval 5000
destination-group twamp
#
return
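A corresponding sketch of the Responder-side configuration on DeviceB. The values mirror the Step 2 transcript and the DeviceA file above, and the link-bundle binding follows the responder command syntax described earlier; treat it as an assumption-based sketch rather than the original file:
#
sysname DeviceB
#
interface Eth-Trunk 1
ip address 10.2.2.2 8
#
interface gigabitethernet1/0/0
undo shutdown
eth-trunk 1
#
interface gigabitethernet2/0/0
undo shutdown
eth-trunk 1
#
nqa twamp-light
responder
test-session 1 local-ip 10.2.2.2 remote-ip 10.1.1.1 local-port 2010 remote-port 2001 link-bundle-interface Eth-Trunk 1
#
return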
In the STAMP architecture, the Controller covers the Client and Sender roles in the
standard model, and the Responder covers the Reflector role in the standard
TWAMP model. The Controller creates test sessions, collects performance statistics,
and reports statistics to the NMS using Performance Management (PM). In
addition, the Controller parses NMS information and sends the results to the
Responder through private channels. The Responder is responsible for reflecting
STAMP-Test packets based on the test session. In Figure 1, such packets function
as probes for measuring receive and transmit performance, and they carry the IP
address, UDP port number, and fixed TTL value 255 predefined for the test session
between the Controller and Responder. The Controller sends a STAMP-Test packet
to the Responder, and the Responder reflects it to the Controller. The Controller
collects STAMP statistics.
Context
STAMP uses a simplified configuration model. In other words, only the Session-
Sender needs to be configured for link monitoring.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run nqa stamp
STAMP is enabled globally.
Step 3 Run quit
The system view is displayed.
Step 4 Run interface { interface-type interface-number | interface-name }
The interface view is displayed.
Step 5 Run stamp { ipv4 | ipv6 } enable [ period periodValue | time-out time-outValue |
dscp dscp-value | nexthop-ip ip-addr | dest-port udp-port ]*
STAMP is enabled on the interface.
Step 6 Run commit
The configuration is committed.
----End
Prerequisites
STAMP has been configured.
Procedure
● Run the display stamp [ ipv4 | ipv6 ] test-session [ verbose | interface
{ ifName | ifType ifNum }] command to check real-time information about
STAMP sessions.
● Run the display stamp [ ipv4 | ipv6 ] responder test-session [ verbose |
interface { ifName | ifType ifNum }] command to check real-time session
information on the Session-Reflector.
● Run the display stamp [ ipv4 | ipv6 ] interface { { interface-type interface-
number | interface-name } | all } command to check brief information about
STAMP sessions on an interface.
● Run the display stamp { ipv4 | ipv6 } statistic-type twoway-loss interface
{ interface-type interface-number | interface-name } command to check two-
way packet loss statistics of STAMP sessions on an interface.
● Run the display stamp { ipv4 | ipv6 } statistic-type twoway-delay interface
{ interface-type interface-number | interface-name } command to check two-
way delay statistics of STAMP sessions on an interface.
----End
Context
If the existing STAMP session statistics are no longer applicable, you can clear
them before re-collecting statistics.
NOTICE
STAMP session statistics cannot be restored after being cleared. Exercise caution
when clearing the statistics.
Procedure
● Run the reset stamp [ ipv4 | ipv6 ] statistics { interface { interface-type
interface-number | interface-name } | all } command to clear STAMP session
statistics.
----End
Networking Requirements
DeviceA, DeviceB, and DeviceC are three devices on the network shown in Figure
1-60. It is required that link quality be monitored through simple configurations.
By deploying STAMP on DeviceA and DeviceC, you can configure each of them to
function as both the Session-Sender and Session-Reflector to collect packet loss
and delay statistics.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each involved interface so
that all devices can communicate at the network layer.
2. Configure STAMP on DeviceA and DeviceC.
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces on DeviceA: 10.3.3.3 and 10.1.1.1
● IP addresses of interfaces on DeviceB: 10.1.1.2 and 10.2.2.1
● IP addresses of interfaces on DeviceC: 10.4.4.4 and 10.2.2.2
Procedure
Step 1 Configure DeviceA, DeviceB, and DeviceC to be reachable at the network layer. For
configuration details, see the configuration files.
Step 2 Configure STAMP on DeviceA.
<DeviceA> system-view
[~DeviceA] nqa stamp
[*DeviceA-stamp] quit
[*DeviceA] interface Gigabitethernet1/0/0
[*DeviceA-Gigabitethernet1/0/0] stamp ipv4 enable nexthop-ip 10.2.2.2
[*DeviceA-Gigabitethernet1/0/0] commit
Step 3 Configure STAMP on DeviceC in the same way, setting the next-hop IP
address to 10.1.1.1.
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface Gigabitethernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
stamp ipv4 enable nexthop-ip 10.2.2.2
#
interface Gigabitethernet2/0/0
undo shutdown
ip address 10.3.3.3 255.255.255.0
#
ip route-static 0.0.0.0 0.0.0.0 10.1.1.2
#
nqa stamp
#
return
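A corresponding sketch for DeviceC, mirroring DeviceA's file with the roles reversed. The next-hop address and the static route are assumptions:
#
sysname DeviceC
#
interface Gigabitethernet1/0/0
undo shutdown
ip address 10.2.2.2 255.255.255.0
stamp ipv4 enable nexthop-ip 10.1.1.1
#
interface Gigabitethernet2/0/0
undo shutdown
ip address 10.4.4.4 255.255.255.0
#
ip route-static 0.0.0.0 0.0.0.0 10.2.2.1
#
nqa stamp
#
return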
Definition
Sampled Flow (sFlow) is a traffic monitoring technology that collects and analyzes
traffic statistics based on packet sampling.
Purpose
Enterprise networks are generally smaller and more flexible than carrier networks.
However, they are often prone to attacks and service exceptions. To help ensure
network stability, enterprises require a traffic monitoring technique that can
promptly identify traffic anomalies and the source of attack traffic, allowing them
to quickly rectify faults.
sFlow is developed to meet the preceding requirement. sFlow is an interface-based
traffic analysis technique that samples packets on an interface at a configured
sampling rate. In flow sampling, an sFlow agent analyzes sampled packets,
including the packet content and the applied forwarding rules, and encapsulates
the original packets and the parsing results into sFlow packets. The sFlow agent
then sends the sFlow packets to an sFlow collector. In counter sampling, an sFlow
agent periodically collects interface traffic statistics, CPU usage, and memory usage.
sFlow focuses on interface traffic, traffic forwarding, and the device running
status. Therefore, sFlow can be used to monitor and diagnose network exceptions.
Benefits
sFlow is comparable to NetStream. In NetStream, network devices collect and
analyze traffic statistics. The devices save these statistics to a buffer and export
them when they expire or when the buffer overflows. sFlow does not require a
flow table. In sFlow, network devices only sample packets, and a remote collector
collects and analyzes traffic statistics.
sFlow has the following advantages over NetStream:
● Fewer resources and lower costs: sFlow requires no flow table and consumes
only a small amount of device resources, lowering costs.
● Flexible collector deployment: collectors can be deployed flexibly, enabling
traffic statistics to be collected and analyzed according to various traffic
characteristics.
Prerequisites
Before configuring sFlow, complete the following tasks:
● Ensure that there are reachable routes between the sFlow agent and collector.
● Create a VPN instance if the sFlow agent and collector are deployed on a
private network.
Context
During sFlow configuration, you must create an sFlow collector and specify its
address as the destination address for sFlow packets.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run sflow
The sFlow view is displayed.
NOTE
The address specified by ip-address must be a valid unicast address that has been
configured on an interface of the device. The address specified by ipv6-address must be a
global unicast address (it cannot be a link-local address).
----End
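As a reference, a minimal sketch of the resulting sFlow configuration, consistent with the example configuration file later in this section (the addresses and collector ID are illustrative):
#
sflow
sflow agent ip 10.1.10.1
sflow collector 2
sflow server ip 10.1.10.2
#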
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run slot slot-id
The slot view is displayed.
Step 3 Run sflow enable
sFlow is enabled on the board in the slot.
Step 4 Run commit
The configuration is committed.
----End
Prerequisites
sFlow has been configured.
Procedure
Step 1 Run the display sflow configuration command to check global sFlow
configurations.
Step 3 Run the display sflow packet statistics [ interface interface-type interface-
number ] slot slot-id command to check statistics about sFlow packets sent on a
specified interface or sFlow packets sent and received on a specified board.
----End
Networking Requirements
As shown in Figure 1-61, traffic between network 1 and network 2 is exchanged
through device A. Maintenance personnel need to monitor the traffic on interface
2 and the device status to identify traffic anomalies and ensure normal operation
on network 1.
NOTE
In this example:
● Interface 1 is GE 1/0/1.
● Interface 2 is GE 1/0/2.
● Interface 3 is GE 1/0/3.
Configuration Roadmap
To configure sFlow, configure device A as an sFlow agent and enable flow
sampling on interface 2 so that the agent collects traffic statistics. The agent
encapsulates traffic statistics into sFlow packets and sends the sFlow packets from
interface 1 to the sFlow collector. The collector displays the traffic statistics based
on information in the received sFlow packets.
The configuration roadmap is as follows:
1. Assign an IP address to each interface.
2. Configure sFlow agent and collector information on the device.
3. Configure flow sampling on interface 2.
Procedure
Step 1 Assign an IP address to each interface of device A.
<DeviceA> system-view
[~DeviceA] interface GigabitEthernet 1/0/1
[~DeviceA-GigabitEthernet1/0/1] ip address 10.1.10.1 24
[*DeviceA-GigabitEthernet1/0/1] commit
[~DeviceA-GigabitEthernet1/0/1] quit
[~DeviceA] interface GigabitEthernet 1/0/2
[~DeviceA-GigabitEthernet1/0/2] ip address 10.1.20.1 24
[*DeviceA-GigabitEthernet1/0/2] commit
[~DeviceA-GigabitEthernet1/0/2] quit
[~DeviceA] interface GigabitEthernet 1/0/3
[~DeviceA-GigabitEthernet1/0/3] ip address 10.1.30.1 24
[*DeviceA-GigabitEthernet1/0/3] commit
[~DeviceA-GigabitEthernet1/0/3] quit
----End
Configuration Files
Device A configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/1
ip address 10.1.10.1 255.255.255.0
#
interface GigabitEthernet1/0/2
ip address 10.1.20.1 255.255.255.0
sflow flow-sampling collector 2 inbound
sflow flow-sampling rate 4000 inbound
#
interface GigabitEthernet1/0/3
ip address 10.1.30.1 255.255.255.0
#
sflow
sflow agent ip 10.1.10.1
sflow collector 2
sflow server ip 10.1.10.2
#
slot 1
sflow enable
#
return
Context
On an IP RAN, if a fault occurs (for example, a base station is disconnected or IP
service traffic is interrupted), the fault has to be diagnosed based on collaborative
information between routers and wireless controllers or base stations. IP traffic
monitoring helps quickly diagnose faults, improving O&M efficiency.
IP traffic monitoring implements automatic traffic statistics based on IP addresses.
After this function is enabled, you can create an IP service flow table, collect
statistics on the number of IP packets, and store the statistics to a CF card
periodically. You can use a tool to parse a stored file and display its content in
graphics, quickly locating faulty nodes.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run service-stream table sip source-ip dip destination-ip [ vpn-instance vpn-
name ]
IP traffic monitoring is enabled.
Step 3 Run commit
The configuration is committed.
----End
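For example, to monitor IP traffic between a base station at 10.10.10.1 and a controller at 10.20.20.1 in VPN instance vpna (all values are hypothetical), the command sequence above becomes:
<HUAWEI> system-view
[~HUAWEI] service-stream table sip 10.10.10.1 dip 10.20.20.1 vpn-instance vpna
[*HUAWEI] commit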
Definition
In-situ Flow Information Telemetry (IFIT) is an in-band operations, administration
and maintenance (OAM) technology that uses service packets to measure
performance indicators of an IP network such as the packet loss rate and delay.
Purpose
In the 5G era, the Enhanced Mobile Broadband (eMBB), ultra-reliable low-latency
communication (URLLC), and Massive Machine-Type Communications (mMTC)
scenarios pose higher requirements on bearer networks. To meet these
requirements in terms of network O&M and performance measurement, 5G
networks need the following:
● Effective troubleshooting methods that can improve O&M efficiency, because
network performance deterioration is otherwise difficult to diagnose.
● A service flow-based performance measurement mechanism that can
accurately reflect actual user traffic in real time. Existing performance
measurement mechanisms support only coarse OAM granularities (such as
ports, tunnels, and pseudo wires), which is insufficient in the 5G era.
● Functions such as network-wide delay visualization, delay abnormality
monitoring, and delay-based routing are needed to improve 5G user
experience for delay-sensitive 5G services.
Currently, OAM performance measurement can be classified into out-of-band
measurement and in-band measurement by measurement type. Out-of-band
measurement includes NQA and TWAMP, and in-band measurement includes IP
FPM. These methods have the following advantages and disadvantages:
● NQA can monitor the performance of multiple protocols running on a
network. However, it measures the performance using simulated packets
(constructed based on the types of measurement instances) rather than real
service packets transmitted on the network. As such, the performance
indicators collected by NQA may not represent the actual service quality and
should be used as reference only.
● TWAMP can insert probe packets into service flows, achieving fast and flexible
deployment. However, it sends packets at intervals for measurement, resulting
in low-precision measurement results. In addition, TWAMP does not support
hop-by-hop measurement.
● IP FPM is also an in-band flow measurement method, but it has high
requirements on network devices, applies to limited scenarios, and is complex
to deploy.
Because current OAM detection technologies cannot adequately meet the
performance measurement requirements of 5G bearer networks, Huawei has
launched the IFIT performance measurement solution with the following
highlights:
● Extensibility: IFIT features high measurement precision and easy deployment
and can be easily extended in the future.
● Fast fault locating: IFIT provides in-band flow measurement to help measure
the delay and packet loss of service flows in real time.
Benefits
IFIT measures packet loss and delay on real service flows in real time, helping
carriers monitor service quality and quickly locate faults.
Pre-configuration Tasks
Before configuring performance measurement based on static IFIT flows, complete
the following tasks:
● Configure a dynamic routing protocol or static routes so that devices are
reachable at the network layer.
● Configure the network time protocol 1588v2 or G.8275.1 to implement clock
synchronization for all devices that have clocks on the network.
NOTE
On a network where the preceding time protocol is not deployed or supported, NTP
can be used to implement clock synchronization. However, if non-high-precision NTP is
deployed, delay measurement is not supported.
Context
A static IFIT flow can be uniquely identified based on 5-tuple or specified by
setting the peer address. After creating a static IFIT flow, you can select end-to-
end or hop-by-hop measurement as required.
Figure 1-62 shows the typical networking of performance measurement based on
a static IFIT flow. The target flow enters the transit network through DeviceA,
traverses DeviceB, and leaves the transit network through DeviceC.
● End-to-end measurement: To monitor transit network performance in real
time or diagnose faults, configure IFIT end-to-end measurement on both
DeviceA and DeviceC.
● Hop-by-hop measurement: To measure the packet loss and delay hop by hop
for fault locating when a performance fault occurs on the transit network,
configure hop-by-hop measurement on DeviceA, DeviceB, and DeviceC.
NOTE
In this scenario, data can be reported only through telemetry of the OnChange type.
Procedure
In a non-inter-AS Option A scenario, perform the following steps to configure
performance measurement based on static IFIT flows:
1. Run system-view
The system view is displayed.
2. Run ifit
IFIT is enabled globally and its view is displayed.
Generally, only Steps 1 and 2 need to be performed on DeviceC when it
functions as the egress of an IFIT flow. Note that in bidirectional flow
measurement scenarios, Steps 1 to 4 need to be performed on DeviceC.
3. Run node-id node-id
A node ID is configured.
4. Run encapsulation nexthop ip-address
The device is enabled to encapsulate the IFIT header into packets destined for
a specified next hop IP address.
In a hop-by-hop measurement scenario, a next hop needs to be specified for
the flow to be measured, and Steps 1 to 4 need to be configured on DeviceB if
segment routing is configured. In other cases, only Steps 1 and 2 need to be
configured on DeviceB.
5. (Optional) Run measure disable
IFIT measurement is disabled.
6. (Optional) Run clock-source { ntp | auto }
The clock source for IFIT measurement is set to NTP or is automatically
selected.
By default, the clock source is automatically selected. In this mode, the high-
precision clock source is preferentially selected. This step allows you to
manually select a clock type in order to cope with measurement failures
caused by asynchronous clocks in cross-domain interconnection scenarios
where multiple clock protocols are deployed. In addition, you can run the
period-clock-mode { ptp current-leap current-leap-value [ { leap59 |
leap61 } date utc-date ] } command to configure a timing mode for the IFIT
measurement interval and timestamp sampling.
7. Run instance instance-id
An IFIT instance is created, and its view is displayed.
8. Run measure-mode { e2e | trace }
The IFIT measurement mode is set to end-to-end or hop-by-hop.
By default, end-to-end measurement is used.
9. (Optional) Run interval interval-value
A measurement interval is set for the IFIT instance.
10. Run different commands to create static IFIT flows based on service scenarios,
as described in Table 1-12.
Table 1-12 (excerpt)
EVPN VPWS: Run flow unidirectional evpl-instance evpl-instance-value peer-ip
peer-ip-address to create a static IFIT flow based on the peer IP address.
NOTE
In VPN scenarios, when a static flow is configured based on 5-tuple, a VPN instance
must be configured unless IFIT is performed for downstream traffic in HoVPN
scenarios or upstream/downstream traffic in H-VPN scenarios.
11. Run binding interface { interface-type interface-number | interface-name }
The IFIT flow is bound to an interface.
Different IFIT instances can be configured with the same flow characteristics,
but cannot be bound to the same interface.
12. Run commit
The configuration is committed.
In an inter-AS Option A scenario, in addition to the preceding configurations, you
also need to enable IFIT mapping on the interfaces of the devices (a pair of
ASBRs) that connect different ASs. You are advised to configure IFIT mapping in
the inbound direction and then in the outbound direction. Otherwise, traffic may
be interrupted. Then run commit to commit the configuration.
If only a single device on the network supports IFIT, you can run the single-
device-measure enable command in the IFIT instance view to enable single-
device measurement after you create a static IFIT flow. However, the following
restrictions exist:
● Currently, single-device IFIT measurement is supported only in public network
and L3VPN/EVPN L3VPN (HVPN) scenarios, not in Layer 2 scenarios.
● In tunnel forwarding scenarios, single-device IFIT measurement is not
supported on P nodes.
● In single-device IFIT measurement scenarios, only unidirectional flows with
characteristics specified can be statically configured and delivered.
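A minimal ingress-side sketch of this procedure, consistent with the peer IP-based L3VPN example later in this document (the node ID, instance ID, peer IP address, and interface are illustrative):
<DeviceA> system-view
[~DeviceA] ifit
[*DeviceA-ifit] node-id 1
[*DeviceA-ifit] encapsulation nexthop 3.3.3.9
[*DeviceA-ifit] instance 1
[*DeviceA-ifit-instance-1] measure-mode e2e
[*DeviceA-ifit-instance-1] interval 10
[*DeviceA-ifit-instance-1] flow unidirectional source any destination any peer-ip 3.3.3.9
[*DeviceA-ifit-instance-1] binding interface GigabitEthernet1/0/0
[*DeviceA-ifit-instance-1] commit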
Configuring Performance Measurement Based on Static IFIT Flows with the IFIT
Header Encapsulated into the SRH on an SRv6 Network
This section describes how to configure performance measurement based on static
IFIT flows with the IFIT header encapsulated into the SRH on an SRv6 network.
Context
A static IFIT flow can be uniquely identified based on 5-tuple or specified by
setting the peer address. After creating a static IFIT flow, you can select end-to-
end or hop-by-hop measurement as required.
Figure 1-63 shows the typical networking of performance measurement based on
a static IFIT flow. The target flow enters the transit network through DeviceA,
traverses DeviceB, and leaves the transit network through DeviceC.
● End-to-end measurement: To monitor transit network performance in real
time or diagnose faults, configure IFIT end-to-end measurement on both
DeviceA and DeviceC.
● Hop-by-hop measurement: To measure the packet loss and delay hop by hop
for fault locating when a performance fault occurs on the transit network,
configure hop-by-hop measurement on DeviceA, DeviceB, and DeviceC.
NOTE
In this scenario, data can be reported only through telemetry of the OnChange type.
Procedure
In a non-inter-AS Option A scenario, perform the following steps to configure
performance measurement based on static IFIT flows:
1. Run system-view
The system view is displayed.
2. Run ifit
IFIT is enabled globally, and its view is displayed.
Generally, only Steps 1 and 2 need to be performed on DeviceB and DeviceC
when they function as the transit node and egress of an IFIT flow,
respectively.
3. Run node-id node-id
A node ID is configured.
4. (Optional) Run measure disable
Measurement is disabled for the IFIT instance.
5. (Optional) Run decapsulation peer-locator locator-ipv6-prefix locator-prefix-
length
The device is enabled to remove the IFIT header from packets destined for a
specified next-hop locator.
In SRv6 HoVPN-based interworking scenarios, to ensure that the IFIT header
is not carried in packets across domains in an E2E manner, you can perform
this step on inter-domain devices to terminate IFIT packets within a domain.
This allows the NMS to display statistics in a unified manner.
6. (Optional) Run clock-source { ntp | auto }
The clock source for IFIT measurement is set to NTP or is automatically
selected.
By default, the clock source is automatically selected. In this mode, the high-
precision clock source is preferentially selected. This step allows you to
manually select a clock type in order to cope with measurement failures
caused by asynchronous clocks in cross-domain interconnection scenarios
where multiple clock protocols are deployed. In addition, you can run the
period-clock-mode { ptp current-leap current-leap-value [{ leap59 |
leap61 } date utc-date ]} command to configure a timing mode for the IFIT
measurement interval and timestamp sampling.
7. Run instance instance-id
An IFIT instance is created, and its view is displayed.
8. Run measure-mode { e2e | trace }
The IFIT measurement mode is set to end-to-end or hop-by-hop.
By default, end-to-end measurement is used.
9. (Optional) Run interval interval-value
A measurement interval is set for the IFIT instance.
10. Run different commands to create a static IFIT flow based on service
scenarios, as shown in Table 1-13.
Table 1-13 (excerpt)
IPv6 public network:
– Run flow { unidirectional | bidirectional } source-ipv6 { src-ipv6-address
[ src6-mask-length ] | any } destination-ipv6 { dest-ipv6-address [ dest6-mask-
length ] | any } [ protocol { { tcp | udp | sctp | protocol-number4 | protocol-
number5 | protocol-number6 } [ source-port source-port ] [ destination-port
destination-port ] | { protocol-number | protocol-number7 | protocol-number8 |
protocol-number3 } } ] [ gtp [ gtp-te-id te-id-value ] ] [ dscp dscp-value ] to
create a static IFIT flow based on 5-tuple information.
NOTE: Currently, gtp [ gtp-te-id te-id-value ] can be configured only for
unidirectional flows in 5-tuple-based measurement.
– Run flow unidirectional source-ipv6 any destination-ipv6 any peer-locator
locator-ipv6-prefix locator-prefix-length to create a static IFIT flow based on the
peer locator.
APN6: Run flow unidirectional apn-id-ipv6 instance instance-name to create a
static IFIT flow based on the APN6 instance. For details about the service
scenarios where APN6-based measurement is supported, see IFIT Application in
IFIT Description.
NOTE
In VPN scenarios, when a static flow is configured based on 5-tuple, a VPN instance
must be configured unless IFIT is performed for downstream traffic in HoVPN
scenarios or upstream/downstream traffic in H-VPN scenarios.
11. Run binding interface { interface-type interface-number | interface-name }
The IFIT flow is bound to an interface.
Different IFIT instances can be configured with the same flow characteristics,
but cannot be bound to the same interface.
12. (Optional) Run disorder-measure enable
Out-of-order packet measurement is enabled.
13. (Optional) Run gtpu-sn-measure enable
Out-of-order packet measurement is enabled for GTPU packets.
14. (Optional) Run delay-measure per-packet enable
Per-packet delay measurement is enabled.
Per-packet delay measurement can be enabled only after packet loss
measurement is enabled and is mutually exclusive with out-of-order packet
measurement.
15. (Optional) Run loss-measure disable
Packet loss measurement is disabled.
16. Run commit
The configuration is committed.
In an inter-AS Option A scenario, in addition to the preceding configurations, you
also need to enable IFIT mapping on the interfaces of the devices (a pair of
ASBRs) that connect different ASs. You are advised to configure IFIT mapping in
the inbound direction and then in the outbound direction. Otherwise, traffic may
be interrupted. Then run commit to commit the configuration.
If only a single device on the network supports IFIT, you can run the single-
device-measure enable command in the IFIT instance view to enable single-
device measurement after you create a static IFIT flow. However, the following
restrictions exist:
● Currently, single-device IFIT measurement is supported only in public network
and L3VPN/EVPN L3VPN (HVPN) scenarios, not in Layer 2 scenarios.
● In tunnel forwarding scenarios, single-device IFIT measurement is not
supported on P nodes.
● In single-device IFIT measurement scenarios, only unidirectional flows with
characteristics specified can be statically configured and delivered.
● If IFIT Option A mapping is configured on the outbound interface of single-
device IFIT measurement, a transit output statistical node instead of an egress
statistical node is generated in the outbound direction, and this node
encapsulates the IFIT header into packets and forwards them.
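A minimal ingress-side sketch of this procedure using a peer locator-based flow, consistent with the peer locator example later in this document (the node ID, instance ID, locator, and interface are illustrative):
<DeviceA> system-view
[~DeviceA] ifit
[*DeviceA-ifit] node-id 1
[*DeviceA-ifit] instance 1
[*DeviceA-ifit-instance-1] measure-mode e2e
[*DeviceA-ifit-instance-1] interval 10
[*DeviceA-ifit-instance-1] flow unidirectional source-ipv6 any destination-ipv6 any peer-locator 2001:db8:40::1 64
[*DeviceA-ifit-instance-1] binding interface GigabitEthernet2/0/0
[*DeviceA-ifit-instance-1] commit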
Configuring Performance Measurement Based on Static IFIT Flows with the IFIT
Header Encapsulated into the DOH on an SRv6 Network
This section describes how to configure performance measurement based on static
IFIT flows with the IFIT header encapsulated into the DOH on an SRv6 network.
Prerequisites
When an IFIT instance with Header Type being 16 is configured, each SRv6 SID
node (including SRv6 BE ingress and egress) must be able to identify the DOH and
SRH in packets. Otherwise, traffic will be interrupted (for example, some nodes do
not support IFIT instances with Header Type being 16). To use new functions
related to IFIT instances with Header Type being 16, upgrade SRv6 SID nodes on
the network to versions that support this type of instance in an end-to-end
manner.
Context
When IFIT is implemented by encapsulating the IFIT header into the DOH, only
performance measurement using 5-tuple-based static flows is supported. After
creating a static IFIT flow, you can select end-to-end or hop-by-hop measurement
as required.
Figure 1-64 shows the typical networking of performance measurement based on
a static IFIT flow. The target flow enters the transit network through DeviceA,
traverses DeviceB, and leaves the transit network through DeviceC.
● End-to-end measurement: To monitor transit network performance in real
time or diagnose faults, configure IFIT end-to-end measurement on both
DeviceA and DeviceC.
● Hop-by-hop measurement: To measure the packet loss and delay hop by hop
for fault locating when a performance fault occurs on the transit network,
configure hop-by-hop measurement on DeviceA, DeviceB, and DeviceC.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ifit
IFIT is enabled globally, and its view is displayed.
Generally, only Steps 1 and 2 need to be performed on DeviceB and DeviceC when
they function as the transit node and egress of an IFIT flow, respectively.
Step 3 Run node-id node-id
A node ID is configured.
Step 4 (Optional) Run decapsulation peer-locator locator-ipv6-prefix locator-prefix-
length
The device is enabled to remove the IFIT header from packets destined for a
specified next-hop locator.
In SRv6 HoVPN-based interworking scenarios, to ensure that the IFIT header is not
carried in packets across domains in an E2E manner, you can perform this step on
inter-domain devices to terminate IFIT packets within a domain. This allows the
NMS to display statistics in a unified manner.
Step 5 (Optional) Run clock-source { ntp | auto }
The clock source for IFIT measurement is set to NTP or is automatically selected.
By default, the clock source is automatically selected. In this mode, the high-
precision clock source is preferentially selected. This step allows you to manually
select a clock type in order to cope with measurement failures caused by
asynchronous clocks in cross-domain interconnection scenarios where multiple
clock protocols are deployed. In addition, you can run the period-clock-mode
{ ptp current-leap current-leap-value [{ leap59 | leap61 } date utc-date ]}
command to configure a timing mode for the IFIT measurement interval and
timestamp sampling.
Step 6 Run instance-ht16 instance-id
An IFIT instance with Header Type being 16 is created, and its view is displayed.
Step 7 Run measure-mode { e2e | trace }
The IFIT measurement mode is set to end-to-end or hop-by-hop.
By default, end-to-end measurement is used. In BIERv6 scenarios, only end-to-end
measurement is supported.
Step 8 (Optional) Run interval interval-value
A measurement interval is set.
Step 9 Run different commands to create a static IFIT flow based on service scenarios, as
shown in Table 1-14.
Table 1-14 (excerpt)
EVPN VPWS: Run flow unidirectional evpl-instance evpl-instance-value peer-
locator locator-ipv6-prefix locator-prefix-length to create a static IFIT flow based
on the peer locator.
NOTE
● Static IFIT flows based on DOH encapsulation support only SRv6 tunnels. Because
tunnel types cannot be distinguished based on the configured 5-tuple, IFIT does not
take effect for unsupported tunnel types.
● In multicast flow measurement scenarios, the vpn-instance vpn-name keyword applies
only to MVPN scenarios, not to GTM scenarios.
Step 10 Run binding interface { interface-type interface-number | interface-name }
The IFIT flow is bound to an interface.
Different IFIT instances can be configured with the same flow characteristics, but
cannot be bound to the same interface.
Step 11 Run commit
The configuration is committed.
----End
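A minimal ingress-side sketch of this procedure based on a 5-tuple IPv6 flow (the node ID, instance ID, IPv6 addresses, and interface are illustrative, and the instance-ht16 view prompt shown is an assumption):
<DeviceA> system-view
[~DeviceA] ifit
[*DeviceA-ifit] node-id 1
[*DeviceA-ifit] instance-ht16 1
[*DeviceA-ifit-instance-ht16-1] measure-mode e2e
[*DeviceA-ifit-instance-ht16-1] interval 10
[*DeviceA-ifit-instance-ht16-1] flow unidirectional source-ipv6 2001:db8:1::1 128 destination-ipv6 2001:db8:2::1 128
[*DeviceA-ifit-instance-ht16-1] binding interface GigabitEthernet1/0/0
[*DeviceA-ifit-instance-ht16-1] commit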
Context
IFIT supports automatic learning of dynamic flows on the ingress by using the
mask or exact match of the source or destination address. In addition, a learning
whitelist can be configured so that IFIT flexibly monitors service quality in real
time. The generation of dynamic flows on transit and egress nodes is triggered by
packets carrying the IFIT header.
Pre-configuration Tasks
Before configuring performance measurement based on dynamic IFIT flows,
complete the following tasks:
● Configure a dynamic routing protocol or static routes so that devices are
reachable at the network layer.
● Configure the network time protocol 1588v2 or G.8275.1 to implement clock
synchronization for all devices that have clocks on the network.
NOTE
On a network where the preceding time protocol is not deployed or supported, NTP
can be used to implement clock synchronization. In this case, delay measurement is
not supported.
Procedure
In a non-inter-AS Option A scenario, perform the following steps to configure
performance measurement based on dynamic IFIT flows:
1. Run system-view
The system view is displayed.
2. Run ifit
IFIT is enabled globally, and its view is displayed.
To collect statistics about dynamic flows, you only need to enable IFIT on
transit and egress nodes.
3. Perform the following steps to configure IFIT automatic flow learning on the
ingress to implement flexible performance measurement based on dynamic
IFIT flows:
a. Run node-id node-id
A node ID is configured.
b. (Optional) Run whitelist-group whitelist-group-name
An IFIT whitelist group is configured and its view is displayed.
c. (Optional) Run the following commands to configure a whitelist rule:
– In IPv4 scenarios:
▪ To set dynamic flow parameters, run the dynamic flow source { src-
ip-address [ src-mask-length ] | srcAny } destination { dest-ip-
address [ dest-mask-length ] | dstAny } [ vpn-instance vpn-name ]
interface { ifType ifNum | ifName } [ dscp dscp-value ] [ protocol
{ { tcp | udp | protocol-tcp | protocol-udp | sctp | protocol-sctp }
[ source-port src-port-number ] [ destination-port dest-port-
number ] | { protocol-number | protocol-number3 | protocol-
number4 | protocol-number5 } } ] { measure-mode { e2e | trace } |
delay-measure { enable | disable } | interval interval-value | { loss-
measure-enable | loss-measure-disable } | { disorder-measure-
enable | disorder-measure-disable } | per-packet-delay { enable |
disable } } * command.
▪ To clear the learned dynamic IFIT flows, run the reset dynamic flow
source { src-ip-address [ src-mask-length ] | srcAny } destination
{ dest-ip-address [ dest-mask-length ] | dstAny } [ vpn-instance
vpn-name ] interface { ifType ifNum | ifName } [ dscp dscp-value ]
[ protocol { { tcp | udp | protocol-tcp | protocol-udp | sctp | protocol-
sctp } [ source-port src-port-number ] [ destination-port dest-port-
number ] | { protocol-number | protocol-number3 | protocol-
number4 | protocol-number5 } } ] command.
– In IPv6 scenarios:
▪ To clear the learned dynamic IFIT flows, run the reset dynamic flow
source-ipv6 { src-ipv6-address [ src6-mask-length ] | srcAny }
destination-ipv6 { dest-ipv6-address [ dest6-mask-length ] |
dstAny } [ vpn-instance vpn-name ] interface { ifType ifNum |
ifName } [ dscp dscp-value ] [ protocol { { tcp | udp | protocol-tcp |
protocol-udp | sctp | protocol-sctp } [ source-port src-port-number ]
[ destination-port dest-port-number ] | { protocol-number |
protocol-number3 | protocol-number4 | protocol-number5 } } ]
command.
NOTE
● If automatic flow learning is not performed, this step can also be used to modify
the learned dynamic backward flows in scenarios where IFIT measurement is
performed for bidirectional flows.
● Per-packet delay measurement can be enabled only after packet loss measurement
is enabled and is mutually exclusive with out-of-order packet measurement.
5. (Optional) Run dynamic-flow age interval-multiplier multi-value
The aging time of dynamic IFIT flows is set.
6. (Optional) Run reset dynamic flow { flowId | all }
All learned dynamic flows or a specific one is deleted.
7. Run commit
The configuration is committed.
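For example, the following sketch configures a whitelist rule that allows automatic learning of end-to-end flows from a subnet (the whitelist group name, addresses, and interface are illustrative, and the whitelist group view prompt is an assumption; the command syntax follows the whitelist rule described above):
<DeviceA> system-view
[~DeviceA] ifit
[*DeviceA-ifit] node-id 1
[*DeviceA-ifit] whitelist-group wlist1
[*DeviceA-ifit-whitelist-group-wlist1] dynamic flow source 10.1.1.0 24 destination dstAny interface GigabitEthernet1/0/0 measure-mode e2e interval 10
[*DeviceA-ifit-whitelist-group-wlist1] commit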
In an inter-AS Option A scenario, in addition to the preceding configurations, you
also need to perform the following steps to enable IFIT mapping on the interfaces
of the devices (a pair of ASBRs) that connect different ASs:
1. In the system view, run interface { interface-name | interface-type interface-
number }
The interface view is displayed.
2. Enable IFIT mapping on the interface. You are advised to configure IFIT
mapping in the inbound direction and then in the outbound direction.
Otherwise, traffic may be interrupted.
3. Run commit
The configuration is committed.
Prerequisites
IFIT has been configured.
Procedure
Step 1 Run the display ifit command to check information about IFIT flows.
Step 2 Run the display ifit { source src-ip-address [ destination dest-ip-address ] |
destination dest-ip-address } command to check IFIT flow information based on
specified IPv4 addresses.
Step 3 Run the display ifit { source-ipv6 src-ipv6-address [ destination-ipv6 dest-ipv6-
address ] | destination-ipv6 dest-ipv6-address } command to check IFIT flow
information based on specified IPv6 addresses.
Step 4 Run the display ifit static [ instance instance-name | flow-id flow-id ] command
to check information about static IFIT flows.
Step 5 Run the display ifit static instance-ht16 instance-name command to check
information about static IFIT flows bound to an IFIT instance with Header Type
being 16.
Step 6 Run the display ifit dynamic [ flow-id flow-id ] command to check information
about dynamic IFIT flows on the ingress.
Step 7 Run the display ifit dynamic-hop [ flow-id flow-id ] command to check
information about dynamic IFIT flows on transit and egress nodes.
Step 8 Run the display ifit { peer-ip peer-ip-address | peer-locator locator-ipv6-prefix }
command to check IFIT flow information by specifying the next hop of the target
flow.
Step 9 Run the display ifit apn-id-ipv6 [ apn-instance apn-instance-name ] command
to check IFIT flow information based on an APN6 instance name.
Step 10 Run the display ifit multicast { { source source-address [ group group-address ] |
group group-address } | { source-ipv6 source-ipv6-address [ group-ipv6 group-
----End
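For example, to check the flow between two specific endpoints (the addresses below are placeholders):
<Device> display ifit source 10.1.1.1 destination 10.2.1.1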
Prerequisites
Before configuring IFIT tunnel-level quality measurement in an SRv6 scenario,
complete the following tasks:
● Configure an SRv6 TE Policy, through either manual configuration or
controller-based dynamic delivery, with IS-IS as the IGP. For details, see
Configure an SRv6 TE Policy (manual configuration + IS-IS as IGP) or
Configure an SRv6 TE Policy (controller-based dynamic delivery + IS-IS as
IGP). Bidirectional binding SIDs must be configured to ensure normal MCP
calculation.
● Configure a clock synchronization protocol to implement clock
synchronization between devices. In this scenario, NTP is generally used
because interworking with other types of devices may be required. For details,
see NTP Configuration.
Context
Users want to use the NMS to monitor tunnel quality on the network in real time
so that tunnel exceptions can be quickly detected and paths promptly switched.
You can configure IFIT measurement on devices so that the devices can
periodically collect statistics about packet loss and delay and perform intelligent
traffic steering through tunnel quality analysis, simplifying the O&M process and
improving O&M experience.
NOTE
The following steps need to be performed only on the ingress and egress.
Procedure
Step 1 Configure IFIT measurement for an SRv6 TE Policy.
1. Run system-view
The system view is displayed.
2. Run segment-routing ipv6
The SRv6 view is displayed.
3. Configure a node ID.
A node ID is configured.
----End
Follow-up Procedure
After the preceding configuration is complete, the IFIT measurement result is
reported to the SPR module to facilitate intelligent traffic steering. For details, see
Configuring TE-Class-based Traffic Steering into an SRv6 TE Policy.
Networking Requirements
The L3VPN HVPN shown in Figure 1-65 transmits voice services. Voice flows are
symmetrical and bidirectional, and therefore one voice flow can be divided into
two unidirectional service flows. The upstream service flow enters the network
through the UPE, travels across the SPE, and leaves the network through the NPE.
The downstream service flow enters the network through the NPE, also travels
across the SPE, and leaves the network through the UPE.
To meet users' higher requirements on service quality, it is required that the packet
loss rate and delay of the links between the UPE and NPE be monitored in real
time so that the carrier can promptly respond to network issues if service quality
deteriorates.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an L3VPN HVPN on the UPE, SPE1, SPE2, and NPE. Specifically:
a. Configure an IP address and a routing protocol for each interface so that
all devices can communicate at the network layer. This example uses
OSPF as the routing protocol.
b. Configure MPLS and public network tunnels to carry L3VPN services. In
this example, SR-MPLS TE tunnels are established between the UPE and
each SPE, between SPEs, and between each SPE and the NPE.
c. Create a VPN instance on the UPE and NPE and import the local direct
routes on the UPE and NPE to their respective VPN instance routing
tables.
d. Establish MP-IBGP peer relationships between the UPE and each SPE and
between the NPE and each SPE.
e. Configure the SPEs as RRs and specify the UPE and NPE as RR clients.
f. Configure VPN FRR on the UPE and NPE to improve network reliability.
2. Configure 1588v2 to synchronize the clocks of the UPE, SPEs, and NPE.
3. Configure packet loss and delay measurement on the UPE and NPE to collect
packet loss rate and delay statistics at intervals.
NOTE
● For upstream traffic in the HoVPN over MPLS scenario, an SPE functions as the
ingress, and a VPN instance needs to be configured for the static IFIT flow.
● For downstream traffic in the HoVPN over MPLS scenario or upstream/downstream
traffic in the H-VPN over MPLS scenario, an SPE functions as the ingress, and no
VPN instance needs to be configured for the static IFIT flow.
● For upstream/downstream traffic in the HoVPN over SRv6 scenario, an SPE
functions as the ingress, and a VPN instance needs to be configured for the static
IFIT flow.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface listed in Table 1
● IGP type (OSPF), process ID (1), and area ID (0)
● Label switching router (LSR) IDs of the UPE (1.1.1.1), SPE1 (2.2.2.2), and SPE2
(3.3.3.3)
● Tunnel interface names (Tunnel11), tunnel IDs (100), and tunnel interface
addresses (loopback interface addresses) for the tunnel interfaces between
the UPE and SPE1
● Tunnel interface names (Tunnel12), tunnel IDs (200), and tunnel interface
addresses (loopback interface addresses) for the tunnel interfaces between
the UPE and SPE2
● Tunnel policy names (policy1) for the tunnels between the UPE and SPEs and
tunnel selector names (BindTE) on the SPEs
● Names (vpna), RDs (100:1), and VPN targets (1:1) of the VPN instances on
the UPE and NPE
● IFIT instance ID (1) and measurement interval (10s)
● Target flow's source IP address (10.1.1.1) and destination IP address (10.2.1.1)
● NMS's IPv4 address (192.168.100.100) and port number (10001), and
reachable routes between the NMS and devices
Procedure
Step 1 Configure an L3VPN HVPN on the UPE, SPE1, SPE2, and NPE. For configuration
details, see Configuration Files.
Step 2 Configure 1588v2 to synchronize the clocks of the UPE, SPE1, and NPE.
1. # Configure SPE1 to import clock signals from BITS0.
[~SPE1] clock bits-type bits0 2mhz
[*SPE1] clock source bits0 synchronization enable
[*SPE1] clock source bits0 priority 1
[*SPE1] commit
Step 3 Configure hop-by-hop IFIT measurement for the link between the UPE and NPE.
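# Configure the UPE. The following is a sketch reconstructed from the display output below (instance 1, hop-by-hop trace mode, 10s interval, a unidirectional flow from 10.1.1.1/32 to 10.2.1.1/32 with DSCP 63 in VPN instance vpna, bound to GE 1/0/0); the exact keyword order of the flow command is an assumption, and the encapsulation nexthop setting is omitted because it depends on the deployed next hop:
<UPE> system-view
[~UPE] ifit
[*UPE-ifit] node-id 1
[*UPE-ifit] instance 1
[*UPE-ifit-instance-1] measure-mode trace
[*UPE-ifit-instance-1] interval 10
[*UPE-ifit-instance-1] flow unidirectional source 10.1.1.1 32 destination 10.2.1.1 32 vpn-instance vpna dscp 63
[*UPE-ifit-instance-1] binding interface GigabitEthernet1/0/0
[*UPE-ifit-instance-1] commit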
# Run the display ifit static and display ifit dynamic-hop commands to check
the UPE configuration and status.
[~UPE] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Source IP Address/Mask Length : 10.1.1.1/32
Destination IP Address/Mask Length : 10.2.1.1/32
Protocol : any
Source Port : any
Destination Port : any
Gtp : disable
Gtp TeId : --
Dscp : 63
Interface : GigabitEthernet1/0/0
vpn-instance : vpna
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Gtpu Sequence Measure : disable
Single Device Measure : disable
Measure Mode : trace
Interval : 10(s)
Tunnel Type : --
Flow Match Priority : 0
Flow InstType Priority : 9
[~UPE] display ifit dynamic-hop
2020-01-14 17:24:39.28 +08:00
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/1
Direction : transitOutput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
# Configure SPE1.
<SPE1> system-view
[~SPE1] ifit
[*SPE1-ifit] node-id 3
[*SPE1-ifit] encapsulation nexthop 4.4.4.4
[*SPE1-ifit] commit
[~SPE1-ifit] quit
# Run the display ifit dynamic-hop command to check the SPE1 configuration
and status.
[~SPE1] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
# Run the display ifit dynamic-hop command to check the NPE configuration
and status.
[~NPE] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/3
Direction : egress
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 513
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/2
Direction : transitInput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses the UPE as an example.
[~UPE] telemetry
[~UPE-telemetry] destination-group ifit
[*UPE-telemetry-destination-group-ifit] ipv4-address 192.168.100.100 port 10001 protocol grpc
[*UPE-telemetry-destination-group-ifit] quit
[*UPE-telemetry] sensor-group ifit
[*UPE-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
[*UPE-telemetry-sensor-group-ifit-path] quit
[*UPE-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
[*UPE-telemetry-sensor-group-ifit-path] quit
[*UPE-telemetry-sensor-group-ifit] quit
[*UPE-telemetry] subscription ifit
[*UPE-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*UPE-telemetry-subscription-ifit] destination-group ifit
[*UPE-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● UPE configuration file
#
sysname UPE
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy policy1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.1
#
mpls
mpls te
label advertise non-null
#
segment-routing
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.2.1 255.255.255.0
ptp enable
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 172.16.1.1 255.255.255.0
mpls
mpls te
ptp enable
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 172.16.2.1 255.255.255.0
mpls
mpls te
#
interface LoopBack1
interface Tunnel12
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.4
mpls te tunnel-id 200
mpls te reserved-for-binding
mpls te signal-protocol segment-routing
mpls te path explicit-path npe
#
bgp 100
router-id 3.3.3.3
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack1
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
peer 2.2.2.2 enable
peer 4.4.4.4 enable
#
ipv4-family vpnv4
undo policy vpn-target
tunnel-selector bindTE
peer 1.1.1.1 enable
peer 1.1.1.1 reflect-client
peer 1.1.1.1 next-hop-local
peer 2.2.2.2 enable
peer 4.4.4.4 enable
peer 4.4.4.4 reflect-client
peer 4.4.4.4 next-hop-local
#
ospf 1
opaque-capability enable
segment-routing mpls
segment-routing global-block 16000 20000
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 172.16.2.0 0.0.0.255
network 172.16.3.0 0.0.0.255
network 172.16.5.0 0.0.0.255
mpls-te enable
#
tunnel-policy policy1
tunnel binding destination 1.1.1.1 te Tunnel11
tunnel binding destination 4.4.4.4 te Tunnel12
#
return
● NPE configuration file
#
sysname NPE
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
policy vpn-target
peer 2.2.2.2 enable
peer 3.3.3.3 enable
#
ipv4-family vpn-instance vpna
import-route direct
auto-frr
#
ospf 1
area 0.0.0.0
segment-routing mpls
segment-routing global-block 16000 20000
network 4.4.4.4 0.0.0.0
network 172.16.4.0 0.0.0.255
network 172.16.5.0 0.0.0.255
mpls-te enable
#
ifit
node-id 2
#
tunnel-policy policy1
tunnel binding destination 2.2.2.2 te Tunnel11
tunnel binding destination 3.3.3.3 te Tunnel12
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
Networking Requirements
As Layer 3 virtual private network (L3VPN) services develop, carriers place
increasingly higher requirements on VPN traffic statistics collection. Now that
conventional IP networks carry voice and video services, it has become
commonplace for carriers and their customers to sign Service Level Agreements
(SLAs). To meet users' higher requirements on service quality, IFIT is required on
an L3VPN to monitor the packet loss rate and delay on links between PEs in real
time. This enables timely responses to service quality deterioration.
On the L3VPN shown in Figure 1-66, service flows enter the network through PE1,
traverse the P, and leave the network through PE2.
Interface1, interface2, and interface3 in this example represent GE 1/0/0, GE 2/0/0, and GE
3/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an L3VPN on each PE and the P. Specifically:
a. Configure an IP address and a routing protocol for each interface so that
all devices can communicate at the network layer. This example uses IS-IS
as the routing protocol.
b. Configure MPLS and public network tunnels to carry L3VPN services. In
this example, SR-MPLS TE tunnels are used.
c. Configure a VPN instance on each PE, enable the IPv4 address family for
the instance, and bind the instance to the interface connecting the PE to
a CE.
d. Establish an MP-IBGP peer relationship between the PEs.
e. Configure EBGP between the CE and the PE to exchange routing
information.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface as listed in Figure 1-66
● MPLS LSR IDs on the PEs and P
● SRGB ranges on the PEs and P
Procedure
Step 1 Configure an L3VPN on each PE and the P. For configuration details, see
Configuration Files.
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and P.
1. # Configure the P to import clock signals from BITS0.
[~P] clock bits-type bits0 2mhz
[*P] clock source bits0 synchronization enable
[*P] clock source bits0 priority 1
[*P] commit
2. # Enable 1588v2 globally.
# Configure the P.
[~P] ptp enable
[*P] ptp domain 1
[*P] ptp device-type bc
[*P] clock source ptp synchronization enable
[*P] clock source ptp priority 1
[*P] commit
# Configure PE1.
[~PE1] ptp enable
[*PE1] ptp domain 1
[*PE1] ptp device-type bc
[*PE1] clock source ptp synchronization enable
[*PE1] clock source ptp priority 1
[*PE1] commit
# Configure PE2.
[~PE2] ptp enable
[*PE2] ptp domain 1
[*PE2] ptp device-type bc
[*PE2] clock source ptp synchronization enable
[*PE2] clock source ptp priority 1
[*PE2] commit
3. # Enable 1588v2 on interfaces.
# Configure the P.
[~P] interface gigabitethernet 1/0/0
[~P-GigabitEthernet1/0/0] ptp enable
[*P-GigabitEthernet1/0/0] commit
[~P-GigabitEthernet1/0/0] quit
[~P] interface gigabitethernet 2/0/0
[~P-GigabitEthernet2/0/0] ptp enable
[*P-GigabitEthernet2/0/0] commit
[~P-GigabitEthernet2/0/0] quit
[~P] interface gigabitethernet 3/0/0
[~P-GigabitEthernet3/0/0] ptp enable
[*P-GigabitEthernet3/0/0] commit
[~P-GigabitEthernet3/0/0] quit
# Configure PE1.
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ptp enable
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[~PE1] interface gigabitethernet 2/0/0
[~PE1-GigabitEthernet2/0/0] ptp enable
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] interface gigabitethernet 1/0/0
[~PE2-GigabitEthernet1/0/0] ptp enable
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[~PE2] interface gigabitethernet 2/0/0
[~PE2-GigabitEthernet2/0/0] ptp enable
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
Step 3 Configure IFIT for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] encapsulation nexthop 3.3.3.9
[*PE1-ifit] instance 1
[*PE1-ifit-instance-1] measure-mode e2e
[*PE1-ifit-instance-1] interval 10
[*PE1-ifit-instance-1] flow unidirectional source any destination any vpn-instance vpna peer-ip 3.3.3.9
[*PE1-ifit-instance-1] binding interface gigabitethernet 1/0/0
[*PE1-ifit-instance-1] commit
[~PE1-ifit-instance-1] quit
[~PE1-ifit] quit
# Run the display ifit static command to check the configuration and status of
PE1.
[~PE1] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Source IP Address/Mask Length : any(IPv4)
Destination IP Address/Mask Length : any(IPv4)
Protocol : any
Source Port : any
Destination Port : any
Gtp : disable
Gtp TeId : --
Dscp : --
Interface : GigabitEthernet1/0/0
vpn-instance : vpna
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Gtpu Sequence Measure : disable
Single Device Measure : disable
Measure Mode : e2e
Interval : 10(s)
Tunnel Type : MPLS
Flow Match Priority : 0
Peer IP : 3.3.3.9
# Configure PE2.
<PE2> system-view
[~PE2] ifit
[*PE2-ifit] node-id 2
[*PE2-ifit] commit
[~PE2-ifit] quit
# Run the display ifit dynamic-hop command to check the configuration and
status of PE2.
[~PE2] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : egress
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv4-address 192.168.100.100 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-peer-ip-
statistics/flow-peer-ip-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
#
explicit-path pe2
next sid label 16200 type prefix
next sid label 16300 type prefix
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0001.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
ptp enable
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16100
#
bgp 100
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.9 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 10.1.1.1 as-number 65410
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path pe2
#
ifit
node-id 1
encapsulation nexthop 3.3.3.9
instance 1
flow unidirectional source any destination any vpn-instance vpna peer-ip 3.3.3.9
binding interface GigabitEthernet1/0/0
#
tunnel-policy p1
tunnel select-seq sr-te load-balance-number 1
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-peer-ip-statistics/flow-peer-ip-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● P configuration file
#
sysname P
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source bits0 synchronization enable
clock source bits0 priority 1
clock source ptp synchronization enable
clock source ptp priority 1
clock bits-type bits0 2mhz
#
mpls lsr-id 2.2.2.9
#
mpls
mpls te
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
isis enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
ptp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 172.18.1.1 255.255.255.0
ptp enable
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16200
#
return
● PE2 configuration file
#
sysname PE2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
apply-label per-instance
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
#
explicit-path pe1
next sid label 16200 type prefix
next sid label 16100 type prefix
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0003.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.2 255.255.255.0
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.2 255.255.255.0
isis enable 1
ptp enable
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16300
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path pe1
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 10.2.1.1 as-number 65420
#
ifit
node-id 2
#
tunnel-policy p1
tunnel select-seq sr-te load-balance-number 1
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-peer-ip-statistics/flow-peer-ip-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
Networking Requirements
To transmit both Layer 2 and Layer 3 services on a network, deploy an EVPN to
carry Layer 3 service traffic. The EVPN is called an EVPN Layer 3 virtual private
network (L3VPN). To meet users' higher requirements on service quality, IFIT is
required on an EVPN L3VPN to monitor the packet loss rate and delay on links
between PEs in real time. This enables timely responses to service quality
deterioration.
On the EVPN L3VPN shown in Figure 1-67, service flows enter the network
through PE1, traverse the P, and leave the network through PE2.
Interfaces 1 through 3 in this example represent GE1/0/0, GE2/0/0, and GE3/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an EVPN L3VPN on each PE and the P. Specifically:
a. Configure an IP address and a routing protocol for each interface so that
all devices can communicate at the network layer. This example uses IS-IS
as the routing protocol.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface as listed in Figure 1-67
● MPLS LSR IDs on the PEs and P
● Name, VPN target, and RD of the VPN instance on each PE
● IFIT instance ID (1) and measurement interval (10s)
● Peer IP address (3.3.3.9) of the IFIT instance
● NMS's IPv4 address (192.168.100.100) and port number (10001), and
reachable routes between the NMS and devices
Procedure
Step 1 Configure an EVPN L3VPN on each PE and the P. For configuration details, see
Configuration Files.
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and P.
1. # Configure the P to import clock signals from BITS0.
[~P] clock bits-type bits0 2mhz
[*P] clock source bits0 synchronization enable
[*P] clock source bits0 priority 1
[*P] commit
2. # Enable 1588v2 globally.
# Configure the P.
[~P] ptp enable
[*P] ptp domain 1
[*P] ptp device-type bc
[*P] clock source ptp synchronization enable
[*P] clock source ptp priority 1
[*P] commit
# Configure PE1.
[~PE1] ptp enable
[*PE1] ptp domain 1
[*PE1] ptp device-type bc
[*PE1] clock source ptp synchronization enable
[*PE1] clock source ptp priority 1
[*PE1] commit
# Configure PE2.
[~PE2] ptp enable
[*PE2] ptp domain 1
[*PE2] ptp device-type bc
[*PE2] clock source ptp synchronization enable
[*PE2] clock source ptp priority 1
[*PE2] commit
3. # Enable 1588v2 on interfaces.
# Configure PE1.
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ptp enable
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[~PE1] interface gigabitethernet 2/0/0
[~PE1-GigabitEthernet2/0/0] ptp enable
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] interface gigabitethernet 1/0/0
[~PE2-GigabitEthernet1/0/0] ptp enable
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[~PE2] interface gigabitethernet 2/0/0
[~PE2-GigabitEthernet2/0/0] ptp enable
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
Step 3 Configure IFIT for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] encapsulation nexthop 3.3.3.9
[*PE1-ifit] instance 1
[*PE1-ifit-instance-1] measure-mode e2e
[*PE1-ifit-instance-1] interval 10
[*PE1-ifit-instance-1] flow unidirectional source any destination any vpn-instance vpna peer-ip 3.3.3.9
[*PE1-ifit-instance-1] binding interface gigabitethernet 1/0/0
[*PE1-ifit-instance-1] commit
[~PE1-ifit-instance-1] quit
[~PE1-ifit] quit
# Run the display ifit static command to check the configuration and status of
PE1.
[~PE1] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
# Configure PE2.
<PE2> system-view
[~PE2] ifit
[*PE2-ifit] node-id 2
[*PE2-ifit] commit
[~PE2-ifit] quit
# Run the display ifit dynamic-hop command to check the configuration and
status of PE2.
[~PE2] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : egress
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv4-address 192.168.100.100 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-peer-ip-
statistics/flow-peer-ip-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 111:1 export-extcommunity evpn
vpn-target 111:1 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.9
#
mpls
#
mpls ldp
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
ptp enable
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
bgp 100
router-id 1.1.1.9
peer 3.3.3.9 as-number 100
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv4-family vpn-instance vpna
peer 10.1.1.1 as-number 65410
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 3.3.3.9 enable
peer 3.3.3.9 advertise irb
#
ifit
node-id 1
encapsulation nexthop 3.3.3.9
instance 1
interval 10
flow unidirectional source any destination any vpn-instance vpna peer-ip 3.3.3.9
binding interface GigabitEthernet1/0/0
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-peer-ip-statistics/flow-peer-ip-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● P configuration file
#
sysname P
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source bits0 synchronization enable
clock source bits0 priority 1
clock source ptp synchronization enable
clock source ptp priority 1
clock bits-type bits0 2mhz
#
mpls lsr-id 2.2.2.9
#
mpls
#
mpls ldp
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
ptp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 172.18.1.1 255.255.255.0
ptp enable
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
apply-label per-instance
tnl-policy p1 evpn
vpn-target 111:1 export-extcommunity evpn
vpn-target 111:1 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 3.3.3.9
#
mpls
#
mpls ldp
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.2 255.255.255.0
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
ptp enable
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
bgp 100
router-id 3.3.3.9
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv4-family vpn-instance vpna
import-route direct
peer 10.2.1.1 as-number 65420
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 1.1.1.9 enable
peer 1.1.1.9 advertise irb
#
ifit
node-id 2
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-peer-ip-statistics/flow-peer-ip-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
1.1.12.5.4 Example for Configuring Peer Locator-based IFIT on an L3VPN over SRv6
Network
This section provides an example for configuring peer locator-based IFIT end-to-
end packet loss and delay measurement on an L3VPN over SRv6 network.
Networking Requirements
L3VPN over SRv6 uses public network SRv6 tunnels to carry L3VPN services. To
meet users' higher requirements on service quality, IFIT is required on an L3VPN
over SRv6 network to monitor the packet loss rate and delay of links between PEs
in real time. This enables timely responses to service quality deterioration.
On the L3VPN over SRv6 network shown in Figure 1-68, service flows enter the
network through PE1, traverse the P, and leave the network through PE2.
Figure 1-68 Configuring peer locator-based IFIT on an L3VPN over SRv6 network
NOTE
Interface1, interface2, and interface3 in this example represent GE 1/0/0, GE 2/0/0, and GE
3/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an L3VPN over SRv6 network on each PE and the P. Specifically:
a. Enable IPv6 forwarding on each device and configure IPv6 addresses for
involved interfaces.
b. Enable IS-IS, configure an IS-IS level, and specify a network entity on
each device.
c. Configure the IS-IS SRv6 capability on each device.
d. Configure a VPN instance on the PEs.
e. Establish an EBGP peer relationship between each PE and its connected
CE.
f. Establish an MP-IBGP peer relationship between the PEs.
g. Configure SRv6 BE on the PEs.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each interface as listed in Figure 1-68
● Area numbers of the PEs and P
● Levels on the PEs and P
● Name, RD, and RT of the VPN instance on each PE
● IFIT instance ID (1) and measurement interval (10s)
● Peer locator (2001:db8:40::1/64) of the IFIT instance
● NMS's IPv6 address (2001:db8:101::1) and port number (10001), and
reachable routes between the NMS and device
Procedure
Step 1 Configure an L3VPN over SRv6 network on each PE and the P. For configuration
details, see Configuration Files.
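For quick reference, the following excerpt from the PE1 configuration file at the
end of this example shows the commands that make vpna's VPN routes recurse to
an SRv6 BE path; the P and PE2 configuration files contain the matching
commands.
#
segment-routing ipv6
 encapsulation source-address 2001:DB8:1::1
 locator as1 ipv6-prefix 2001:DB8:100::1 64 static 32
#
isis 1
 ipv6 enable topology ipv6
 segment-routing ipv6 locator as1
#
bgp 100
 ipv4-family vpn-instance vpna
  segment-routing ipv6 locator as1
  segment-routing ipv6 best-effort
#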
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and P.
1. # Configure the P to import clock signals from BITS0.
[~P] clock bits-type bits0 2mhz
[*P] clock source bits0 synchronization enable
[*P] clock source bits0 priority 1
[*P] commit
# Configure PE1.
[~PE1] ptp enable
[*PE1] ptp domain 1
[*PE1] ptp device-type bc
[*PE1] clock source ptp synchronization enable
[*PE1] clock source ptp priority 1
[*PE1] commit
# Configure PE2.
[~PE2] ptp enable
[*PE2] ptp domain 1
[*PE2] ptp device-type bc
[*PE2] clock source ptp synchronization enable
[*PE2] clock source ptp priority 1
[*PE2] commit
# Configure PE1.
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ptp enable
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[~PE1] interface gigabitethernet 2/0/0
[~PE1-GigabitEthernet2/0/0] ptp enable
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] interface gigabitethernet 1/0/0
[~PE2-GigabitEthernet1/0/0] ptp enable
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[~PE2] interface gigabitethernet 2/0/0
[~PE2-GigabitEthernet2/0/0] ptp enable
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
Step 3 Configure IFIT for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] instance 1
[*PE1-ifit-instance-1] measure-mode e2e
[*PE1-ifit-instance-1] interval 10
[*PE1-ifit-instance-1] flow unidirectional source any destination any vpn-instance vpna peer-locator 2001:DB8:40::1 64
[*PE1-ifit-instance-1] binding interface gigabitethernet 2/0/0
[*PE1-ifit-instance-1] commit
[~PE1-ifit-instance-1] quit
[~PE1-ifit] quit
# Run the display ifit static command to check the configuration and status of
PE1.
[~PE1] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Source IP Address/Mask Length : any(IPv4)
Destination IP Address/Mask Length : any(IPv4)
Protocol : any
Source Port : any
Destination Port : any
Gtp : disable
Gtp TeId : --
Dscp : --
Interface : GigabitEthernet2/0/0
vpn-instance : vpna
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Gtpu Sequence Measure : disable
Single Device Measure : disable
Measure Mode : e2e
Interval : 10(s)
Tunnel Type : SRv6
Flow Match Priority : 0
Peer Locator : 2001:DB8:40::1/64
# Configure PE2.
<PE2> system-view
[~PE2] ifit
[*PE2-ifit] node-id 2
[*PE2-ifit] commit
[~PE2-ifit] quit
# Run the display ifit dynamic-hop command to check the configuration and
status of PE2.
[~PE2] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet2/0/0
Direction : egress
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-locator-statistics/flow-locator-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
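As a minimal sketch of what TLS-secured reporting could look like, assuming an
SSL policy named policy1 has already been created and that this software version
supports binding it with the ssl-policy keyword (verify both assumptions against
Telemetry Configuration):
#
destination-group ifit
 ipv6-address 2001:DB8:101::1 port 10001 protocol grpc ssl-policy policy1
#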
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator as1 ipv6-prefix 2001:DB8:100::1 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::1/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:3::3 as-number 100
peer 2001:DB8:3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 2001:DB8:3::3 enable
peer 2001:DB8:3::3 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 best-effort
peer 10.1.1.2 as-number 65410
#
ifit
node-id 1
instance 1
interval 10
flow unidirectional source any destination any vpn-instance vpna peer-locator 2001:DB8:40::1 64
binding interface GigabitEthernet2/0/0
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-locator-statistics/flow-locator-statistic
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● P configuration file
#
sysname P
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source bits0 synchronization enable
clock source bits0 priority 1
clock source ptp synchronization enable
clock source ptp priority 1
clock bits-type bits0 2mhz
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::2/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::1/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:30::1/96
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:2::2/64
isis ipv6 enable 1
#
return
1.1.12.5.5 Example for Configuring Peer Locator-based IFIT on an EVPN L3VPNv6
over SRv6 Network
This section provides an example for configuring peer locator-based IFIT end-to-
end packet loss and delay measurement on an EVPN L3VPNv6 over SRv6 network.
Networking Requirements
EVPN L3VPNv6 over SRv6 uses public network SRv6 tunnels to carry EVPN
L3VPNv6 services. The implementation of EVPN L3VPNv6 over SRv6 mainly
involves establishing SRv6 tunnels, advertising VPN routes, and forwarding data.
To meet users' higher requirements on service quality, IFIT is required on an EVPN
L3VPNv6 over SRv6 network to monitor the packet loss rate and delay of links
between PEs in real time. This enables timely responses to service quality
deterioration.
On the EVPN L3VPNv6 over SRv6 network shown in Figure 1-69, service flows
enter the network through PE1, traverse the P, and leave the network through
PE2.
Figure 1-69 Configuring peer locator-based IFIT on an EVPN L3VPNv6 over SRv6
network
NOTE
Interfaces 1 through 3 in this example represent GE1/0/0, GE2/0/0, and GE3/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an EVPN L3VPNv6 over SRv6 network on each PE and the P.
Specifically:
a. Enable IPv6 forwarding on each device and configure IPv6 addresses for
involved interfaces.
b. Enable IS-IS, configure an IS-IS level, and specify a network entity on
each device.
c. Configure an IPv6 L3VPN instance on each PE and bind the IPv6 L3VPN
instance to an access-side interface.
d. Establish a BGP EVPN peer relationship between PEs.
e. Configure SRv6 BE on PEs.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each interface as listed in Figure 1-69
● Area numbers of the PEs and P
● Levels on the PEs and P
● Name, RD, and RT of the VPN instance on each PE
● IFIT instance ID (1) and measurement interval (10s)
● Peer locator (2001:db8:60::1/64) of the IFIT instance
● NMS's IPv6 address (2001:db8:101::1) and port number (10001), and
reachable routes between the NMS and device
Procedure
Step 1 Configure an EVPN L3VPNv6 over SRv6 network on each PE and the P. For
configuration details, see Configuration Files.
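For quick reference, the following excerpt from the PE1 configuration file at the
end of this example shows the EVPN-specific part of Step 1: IPv6 VPN routes are
advertised through the BGP EVPN address family and recurse to SRv6 BE paths.
#
bgp 100
 ipv6-family vpn-instance vpna
  import-route direct
  advertise l2vpn evpn
  segment-routing ipv6 locator PE1 evpn
  segment-routing ipv6 best-effort evpn
 l2vpn-family evpn
  undo policy vpn-target
  peer 2001:DB8:3::3 enable
  peer 2001:DB8:3::3 advertise encap-type srv6
#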
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and P.
1. # Configure the P to import clock signals from BITS0.
[~P] clock bits-type bits0 2mhz
[*P] clock source bits0 synchronization enable
[*P] clock source bits0 priority 1
[*P] commit
# Configure PE1.
[~PE1] ptp enable
[*PE1] ptp domain 1
[*PE1] ptp device-type bc
[*PE1] clock source ptp synchronization enable
[*PE1] clock source ptp priority 1
[*PE1] commit
# Configure PE2.
[~PE2] ptp enable
[*PE2] ptp domain 1
[*PE2] ptp device-type bc
[*PE2] clock source ptp synchronization enable
[*PE2] clock source ptp priority 1
[*PE2] commit
# Configure PTP on the P's interfaces.
[~P] interface gigabitethernet 1/0/0
[~P-GigabitEthernet1/0/0] ptp enable
[*P-GigabitEthernet1/0/0] commit
[~P-GigabitEthernet1/0/0] quit
[~P] interface gigabitethernet 2/0/0
[~P-GigabitEthernet2/0/0] ptp enable
[*P-GigabitEthernet2/0/0] commit
[~P-GigabitEthernet2/0/0] quit
[~P] interface gigabitethernet 3/0/0
[~P-GigabitEthernet3/0/0] ptp enable
[*P-GigabitEthernet3/0/0] commit
[~P-GigabitEthernet3/0/0] quit
# Configure PE1.
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ptp enable
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[~PE1] interface gigabitethernet 2/0/0
[~PE1-GigabitEthernet2/0/0] ptp enable
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] interface gigabitethernet 1/0/0
[~PE2-GigabitEthernet1/0/0] ptp enable
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[~PE2] interface gigabitethernet 2/0/0
[~PE2-GigabitEthernet2/0/0] ptp enable
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
Step 3 Configure IFIT for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] instance 1
[*PE1-ifit-instance-1] measure-mode e2e
[*PE1-ifit-instance-1] interval 10
[*PE1-ifit-instance-1] flow unidirectional source-ipv6 any destination-ipv6 any vpn-instance vpna peer-locator 2001:DB8:60::1 64
[*PE1-ifit-instance-1] binding interface gigabitethernet 2/0/0
[*PE1-ifit-instance-1] commit
[~PE1-ifit-instance-1] quit
[~PE1-ifit] quit
# Run the display ifit static command to check the configuration and status of
PE1.
[~PE1] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Source IP Address/Mask Length : any(IPv6)
Destination IP Address/Mask Length : any(IPv6)
Protocol : any
Source Port : any
Destination Port : any
Gtp : disable
Gtp TeId : --
Dscp : --
Interface : GigabitEthernet2/0/0
vpn-instance : vpna
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Gtpu Sequence Measure : disable
Single Device Measure : disable
Measure Mode : e2e
Interval : 10(s)
Tunnel Type : SRv6
Flow Match Priority : 0
Peer Locator : 2001:DB8:60::1/64
# Configure PE2.
<PE2> system-view
[~PE2] ifit
[*PE2-ifit] node-id 2
[*PE2-ifit] commit
[~PE2-ifit] quit
# Run the display ifit dynamic-hop command to check the configuration and
status of PE2.
[~PE2] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet2/0/0
Direction : egress
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-locator-statistics/flow-locator-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv6-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator PE1 ipv6-prefix 2001:DB8:100::1 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::1/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ipv6 enable
ipv6 address 2001:DB8:30::3/96
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:3::3 as-number 100
peer 2001:DB8:3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family vpn-instance vpna
import-route direct
advertise l2vpn evpn
segment-routing ipv6 locator PE1 evpn
segment-routing ipv6 best-effort evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:3::3 enable
peer 2001:DB8:3::3 advertise encap-type srv6
#
ifit
node-id 1
instance 1
interval 10
flow unidirectional source-ipv6 any destination-ipv6 any vpn-instance vpna peer-locator 2001:DB8:60::1 64
binding interface GigabitEthernet2/0/0
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-locator-statistics/flow-locator-statistic
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● P configuration file
#
sysname P
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source bits0 synchronization enable
clock source bits0 priority 1
clock source ptp synchronization enable
clock source ptp priority 1
clock bits-type bits0 2mhz
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::2/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::1/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:40::4/96
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:2::2/64
isis ipv6 enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv6-family
route-distinguisher 200:1
apply-label per-instance
vpn-target 1:1 export-extcommunity evpn
vpn-target 1:1 import-extcommunity evpn
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::3
locator PE2 ipv6-prefix 2001:DB8:60::1 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE2
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::2/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ipv6 enable
ipv6 address 2001:DB8:50::3/96
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:3::3/64
isis ipv6 enable 1
#
bgp 100
router-id 3.3.3.3
peer 2001:DB8:1::1 as-number 100
peer 2001:DB8:1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family vpn-instance vpna
import-route direct
advertise l2vpn evpn
segment-routing ipv6 locator PE2 evpn
segment-routing ipv6 best-effort evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:1::1 enable
peer 2001:DB8:1::1 advertise encap-type srv6
#
ifit
node-id 2
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-locator-statistics/flow-locator-statistic
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
1.1.12.5.6 Example for Configuring Peer Locator-based IFIT on an EVPN VPWS over
SRv6 Network
This section provides an example for configuring peer locator-based IFIT hop-by-
hop packet loss and delay measurement on an EVPN VPWS over SRv6 network.
Networking Requirements
EVPN VPWS over SRv6 uses public network SRv6 tunnels to carry EVPN VPWS VPN
services. To meet users' higher requirements on service quality, IFIT is required on
an EVPN VPWS over SRv6 network to monitor the packet loss rate and delay of
links between PEs in real time. This enables timely responses to service quality
deterioration.
On the EVPN VPWS over SRv6 network shown in Figure 1-70, service flows enter
the network through PE1, traverse the P, and leave the network through PE2.
Figure 1-70 Configuring peer locator-based IFIT on an EVPN VPWS over SRv6
network
NOTE
Interfaces 1 through 3 in this example represent GE1/0/0, GE2/0/0, and GE3/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an EVPN VPWS over SRv6 network on each PE and the P.
Specifically:
a. Enable IPv6 forwarding on each device and configure IPv6 addresses for
involved interfaces.
b. Enable IS-IS, configure a level, and specify a network entity on each
device.
c. Configure EVPN VPWS and EVPL instances on each PE and bind access-
side sub-interfaces to the EVPL instances (see the excerpt after this
roadmap).
d. Establish a BGP EVPN peer relationship between PEs.
e. Configure SRv6 BE on PEs.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
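As referenced in item c of the roadmap, the EVPL instance binding on PE1 looks
as follows (excerpted from the PE1 configuration file at the end of this example;
PE2 mirrors it with the local and remote service IDs swapped):
#
evpl instance 1
 evpn binding vpn-instance evrf1
 local-service-id 100 remote-service-id 200
 segment-routing ipv6 locator PE1
#
interface GigabitEthernet2/0/0.1 mode l2
 encapsulation dot1q vid 1
 evpl instance 1
#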
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each interface as listed in Figure 1-70
● EVPN instance name (evrf1)
● RD and RT values of the EVPN instance: 100:1 and 1:1
● IFIT instance ID (1) and measurement interval (10s)
● Peer locator (2001:db8:40::1/64) of the IFIT instance
● NMS's IPv6 address (2001:db8:101::1) and port number (10001), and
reachable routes between the NMS and device
Procedure
Step 1 Configure an EVPN VPWS over SRv6 network on each PE and the P. For
configuration details, see Configuration Files.
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and P.
1. # Configure the P to import clock signals from BITS0.
[~P] clock bits-type bits0 2mhz
[*P] clock source bits0 synchronization enable
[*P] clock source bits0 priority 1
[*P] commit
# Configure PE1.
[~PE1] ptp enable
[*PE1] ptp domain 1
[*PE1] ptp device-type bc
[*PE1] clock source ptp synchronization enable
[*PE1] clock source ptp priority 1
[*PE1] commit
# Configure PE2.
[~PE2] ptp enable
[*PE2] ptp domain 1
[*PE2] ptp device-type bc
[*PE2] clock source ptp synchronization enable
[*PE2] clock source ptp priority 1
[*PE2] commit
# Configure PE1.
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ptp enable
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[~PE1] interface gigabitethernet 2/0/0
[~PE1-GigabitEthernet2/0/0] ptp enable
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] interface gigabitethernet 1/0/0
[~PE2-GigabitEthernet1/0/0] ptp enable
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[~PE2] interface gigabitethernet 2/0/0
[~PE2-GigabitEthernet2/0/0] ptp enable
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
Step 3 Configure IFIT for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] instance 1
[*PE1-ifit-instance-1] measure-mode trace
[*PE1-ifit-instance-1] interval 10
[*PE1-ifit-instance-1] flow unidirectional evpl-instance 1 peer-locator 2001:DB8:40::1 64
[*PE1-ifit-instance-1] binding interface gigabitethernet 2/0/0
[*PE1-ifit-instance-1] commit
[~PE1-ifit-instance-1] quit
[~PE1-ifit] quit
# Run the display ifit static command to check the configuration and status of
PE1.
[~PE1] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
# Configure the P.
<P> system-view
[~P] ifit
[*P-ifit] node-id 3
[*P-ifit] commit
[~P-ifit] quit
# Run the display ifit dynamic-hop command to view the configuration and
status of the P.
[~P] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet2/0/0
Direction : transitOutput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 513
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitInput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
# Configure PE2.
<PE2> system-view
[~PE2] ifit
[*PE2-ifit] node-id 2
[*PE2-ifit] commit
[~PE2-ifit] quit
# Run the display ifit dynamic-hop command to check the configuration and
status of PE2.
[~PE2] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet2/0/0
Direction : egress
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 513
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitInput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-locator-statistics/flow-locator-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
evpn vpn-instance evrf1 vpws
route-distinguisher 100:1
segment-routing ipv6 best-effort
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpl instance 1
evpn binding vpn-instance evrf1
local-service-id 100 remote-service-id 200
segment-routing ipv6 locator PE1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator PE1 ipv6-prefix 2001:DB8:100::1 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE1
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::1/64
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ptp enable
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 1
evpl instance 1
#
interface LoopBack1
ipv6 enable
ip address 1.1.1.1 255.255.255.255
ipv6 address 2001:DB8:1::1/64
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:3::3 as-number 100
peer 2001:DB8:3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:3::3 enable
peer 2001:DB8:3::3 advertise encap-type srv6
#
evpn source-address 1.1.1.1
#
ifit
node-id 1
instance 1
measure-mode trace
interval 10
flow unidirectional evpl-instance 1 peer-locator 2001:DB8:40::1 64
binding interface GigabitEthernet2/0/0
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-locator-statistics/flow-locator-statistic
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● P configuration file
#
sysname P
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source bits0 synchronization enable
clock source bits0 priority 1
clock source ptp synchronization enable
clock source ptp priority 1
clock bits-type bits0 2mhz
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::2/64
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::1/64
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:30::1/64
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:2::2/64
isis ipv6 enable 1
#
ifit
node-id 3
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-locator-statistics/flow-locator-statistic
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● PE2 configuration file
#
sysname PE2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
evpn vpn-instance evrf1 vpws
route-distinguisher 100:1
segment-routing ipv6 best-effort
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpl instance 1
evpn binding vpn-instance evrf1
local-service-id 200 remote-service-id 100
segment-routing ipv6 locator PE2
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::3
locator PE2 ipv6-prefix 2001:DB8:40::1 64 static 32
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE2
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::2/64
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ptp enable
#
interface GigabitEthernet2/0/0.1 mode l2
encapsulation dot1q vid 1
evpl instance 1
#
interface LoopBack1
ipv6 enable
ip address 3.3.3.3 255.255.255.255
ipv6 address 2001:DB8:3::3/64
isis ipv6 enable 1
#
bgp 100
router-id 3.3.3.3
peer 2001:DB8:1::1 as-number 100
peer 2001:DB8:1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
l2vpn-family evpn
undo policy vpn-target
peer 2001:DB8:1::1 enable
peer 2001:DB8:1::1 advertise encap-type srv6
#
evpn source-address 3.3.3.3
#
ifit
node-id 2
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-locator-statistics/flow-locator-statistic
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
1.1.12.5.7 Example for Configuring APN ID-based IFIT on an L3VPN over SRv6 TE
Flow Group Network
This section provides an example for configuring APN ID-based IFIT hop-by-hop
packet loss and delay measurement on an L3VPN over SRv6 TE flow group
network.
Networking Requirements
Application-aware IPv6 Networking (APN6) is a network architecture that
leverages the programming space of IPv6 packets to convey application
information (APN attributes), including application identities (APN IDs) and
network performance requirements (APN parameters), to the network. This
enables fine-grained network services and accurate network operations and
maintenance (O&M). Once APN IDs identify key applications or users, IFIT can
monitor the performance of these key services in real time. On the network
shown in Figure 1-71, a bidirectional SRv6 TE flow group is deployed between
PE1 and PE2 to carry L3VPNv4 services.
NOTE
Interfaces 1 through 3 in this example represent GE1/0/0, GE2/0/0, and GE3/0/0, respectively.
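For orientation, the following excerpt from the PE1 configuration file at the end
of this example shows how an APN ID is defined and stamped onto matching
traffic: an APN template and instance are created, and a traffic behavior remarks
flows matching ACL 3333 with the instance's APN ID.
#
apn
 ipv6
  apn-id template tmplt1 length 64 app-group 16
   app-group index 1 app-group1 length 16
  apn-id instance APN6-instance1
   template tmplt1
   apn-field app-group1 1
#
acl number 3333
 rule 5 permit ip source 11.11.11.11 0 destination 22.22.22.22 0
#
traffic classifier c1
 if-match acl 3333
#
traffic behavior b1
 remark apn-id-ipv6 instance APN6-instance1
#
traffic policy p1
 share-mode
 statistics enable
 classifier c1 behavior b1 precedence 1
#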
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an APN6-based L3VPNv4 over SRv6 TE Policy on PE1, P1, P2, and
PE2. Specifically:
a. Enable IPv6 forwarding and configure an IPv6 address for each interface
on PE1, P1, P2, and PE2.
b. Enable IS-IS, configure an IS-IS level, and specify a network entity title
(NET) on PE1, P1, P2, and PE2.
c. Configure VPN instances on PE1 and PE2.
d. Establish an EBGP peer relationship between each PE and its connected
CE.
e. Set up an MP-IBGP peer relationship between the PEs.
f. Configure SRv6 SIDs and enable IS-IS SRv6 on PE1, P1, P2, and PE2. In
addition, configure PE1 and PE2 to advertise VPN routes carrying SIDs.
g. Deploy an SRv6 TE Policy between PE1 and PE2.
h. Configure APN6 instances on PE1 and PE2.
i. Configure APN IDs for service flows on PE1 and PE2.
j. Configure an SRv6 mapping policy on PE1 and PE2 (see the excerpt after
this roadmap).
k. Configure a tunnel policy on PE1 and PE2 to preferentially use the SRv6
TE flow group for VPN traffic import.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
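As referenced in item j of the roadmap, the SRv6 mapping policy that steers
APN6-identified traffic into a specific SRv6 TE Policy looks as follows (excerpted
from the PE2 configuration file at the end of this example):
#
segment-routing ipv6
 mapping-policy p1 color 101
  match-type apn-id-ipv6
  index 10 instance APN6-instance1 match srv6-te-policy color 10
  ipv4 default match srv6-te-policy color 20
#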
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface as listed in Figure 1-71
● IPv6 address of each interface on PE1, P1, P2, and PE2
● IS-IS process ID of each device (PE1, P1, P2, and PE2)
● IS-IS level of each device (PE1, P1, P2, and PE2)
● VPN instance names, RDs, and RTs on PE1 and PE2
● APN6 template and instance names on PE1 and PE2, and the name and
length of the app-group field in the template
● IFIT instance ID (1) and measurement interval (10s)
● IFIT performance measurement instance generated based on the APN6
instance named APN6-instance1
● NMS's IPv6 address (2001:db8:101::1) and port number (10001), and
reachable routes between the NMS and device
Procedure
Step 1 Configure APN6-based L3VPNv4 over SRv6 TE Policy on PE1, P1, P2, and PE2. For
configuration details, see Configuration Files.
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and P1.
1. # Configure P1 to import clock signals from BITS0.
[~P1] clock bits-type bits0 2mhz
[*P1] clock source bits0 synchronization enable
[*P1] clock source bits0 priority 1
[*P1] commit
# Configure PTP on P1's interfaces.
[~P1] interface gigabitethernet 3/0/0
[~P1-GigabitEthernet3/0/0] ptp enable
[*P1-GigabitEthernet3/0/0] commit
[~P1-GigabitEthernet3/0/0] quit
Step 3 Configure hop-by-hop IFIT measurement for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] instance 1
[*PE1-ifit-instance-1] measure-mode trace
[*PE1-ifit-instance-1] interval 10
[*PE1-ifit-instance-1] flow unidirectional apn-id-ipv6 instance APN6-instance1
[*PE1-ifit-instance-1] binding interface gigabitethernet 2/0/0
[*PE1-ifit-instance-1] commit
[~PE1-ifit-instance-1] quit
[~PE1-ifit] quit
# Run the display ifit static and display ifit dynamic-hop commands to check
the configuration and status of PE1.
[~PE1] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Flow Id : 1572865
Instance Type : instance
Flow Type : unidirectional
Apn-id-ipv6 Instance : APN6-instance1
Interface : GigabitEthernet2/0/0
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Measure Mode : trace
Interval : 10(s)
Flow Match Priority : 0
Flow InstType Priority : 2
[~PE1] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitOutput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-apn-statistics/flow-apn-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
apn
ipv6
apn-id template tmplt1 length 64 app-group 16
app-group index 1 app-group1 length 16
apn-id instance APN6-instance1
template tmplt1
apn-field app-group1 1
#
acl number 3333
rule 5 permit ip source 11.11.11.11 0 destination 22.22.22.22 0
#
traffic classifier c1
if-match acl 3333
#
traffic behavior b1
remark apn-id-ipv6 instance APN6-instance1
#
traffic policy p1
share-mode
statistics enable
classifier c1 behavior b1 precedence 1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator as1 ipv6-prefix 2001:DB8:100:: 64 static 32
opcode ::100 end psp
opcode ::200 end no-flavor
srv6-te-policy locator as1
segment-list list1
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer best-effort
peer 10.1.1.2 as-number 65410
#
ifit
node-id 1
instance 1
measure-mode trace
interval 10
flow unidirectional apn-id-ipv6 instance APN6-instance1
binding interface GigabitEthernet2/0/0
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-flow-group load-balance-number 1
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-apn-statistics/flow-apn-statistic
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● P1 configuration file
#
sysname P1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source bits0 synchronization enable
clock source bits0 priority 1
clock source ptp synchronization enable
clock source ptp priority 1
clock bits-type bits0 2mhz
#
segment-routing ipv6
encapsulation source-address 2001:DB8:2::2
locator as1 ipv6-prefix 2001:DB8:200:: 64 static 32
opcode ::100 end psp
opcode ::200 end no-flavor
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:11::2/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:12::1/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:12::3/64
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:2::2/128
isis ipv6 enable 1
#
ifit
node-id 3
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-apn-statistics/flow-apn-statistic
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● P2 configuration file
#
sysname P2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:4::4
locator as1 ipv6-prefix 2001:DB8:400::1 64 static 32
opcode ::100 end psp
opcode ::200 end no-flavor
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:13::2/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:14::1/96
isis ipv6 enable 1
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:4::4/128
isis ipv6 enable 1
#
return
● PE2 configuration file
#
sysname PE2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
apn
ipv6
apn-id template tmplt1 length 64 app-group 16
app-group index 1 app-group1 length 16
apn-id instance APN6-instance1
template tmplt1
apn-field app-group1 1
#
acl number 3333
rule 5 permit ip source 22.22.22.22 0 destination 11.11.11.11 0
#
traffic classifier c1
if-match acl 3333
#
traffic behavior b1
remark apn-id-ipv6 instance APN6-instance1
#
traffic policy p1
share-mode
statistics enable
classifier c1 behavior b1 precedence 1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::3
locator as1 ipv6-prefix 2001:DB8:300:: 64 static 32
opcode ::100 end psp
opcode ::200 end no-flavor
srv6-te-policy locator as1
segment-list list1
index 5 sid ipv6 2001:DB8:200::100
index 10 sid ipv6 2001:DB8:100::100
segment-list list2
index 5 sid ipv6 2001:DB8:400::100
index 10 sid ipv6 2001:DB8:100::100
srv6-te policy policy1 endpoint 2001:DB8:1::1 color 10
binding-sid 2001:DB8:300::900
candidate-path preference 100
segment-list list1
srv6-te policy policy2 endpoint 2001:DB8:1::1 color 20
binding-sid 2001:DB8:300::901
candidate-path preference 100
segment-list list2
mapping-policy p1 color 101
match-type apn-id-ipv6
index 10 instance APN6-instance1 match srv6-te-policy color 10
ipv4 default match srv6-te-policy color 20
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:12::2/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.1 255.255.255.0
traffic-policy p1 inbound
ptp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:14::2/96
isis ipv6 enable 1
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:3::3/128
isis ipv6 enable 1
#
bgp 100
router-id 2.2.2.2
peer 2001:DB8:1::1 as-number 100
peer 2001:DB8:1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv6-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 2001:DB8:1::1 enable
peer 2001:DB8:1::1 route-policy p1 import
peer 2001:DB8:1::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator as1
segment-routing ipv6 traffic-engineer best-effort
peer 10.2.1.2 as-number 65420
#
ifit
node-id 2
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-flow-group load-balance-number 1
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-apn-statistics/flow-apn-statistic
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface LoopBack1
ip address 11.11.11.11 255.255.255.255
#
bgp 65410
peer 10.1.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
network 11.11.11.11 255.255.255.255
peer 10.1.1.1 enable
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
#
interface LoopBack1
ip address 22.22.22.22 255.255.255.255
#
bgp 65420
peer 10.2.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
network 22.22.22.22 255.255.255.255
peer 10.2.1.1 enable
#
return
1.1.12.5.8 Example for Configuring IFIT in an Inter-AS VPN Option A Scenario
This section provides an example for configuring IFIT hop-by-hop packet loss and
delay measurement in an inter-AS VPN Option A scenario.
Networking Requirements
To meet users' higher requirements on service quality, IFIT is required in the inter-
AS Option A scenario to monitor the packet loss rate and delay of links between
PEs in real time. This enables timely responses to service quality deterioration. As
a basic BGP/MPLS IP VPN application in inter-AS scenarios, Option A does not
require special inter-AS configuration, nor does it require MPLS to run between
ASBRs. In this mode, the ASBRs of two ASs directly connect to each other and
function as PEs in their own ASs. Each ASBR views the peer ASBR as its CE, creates
a VPN instance for each VPN, and advertises IPv4 routes to the peer ASBR through
EBGP.
In the inter-AS VPN Option A scenario shown in Figure 1-72, CE1 and CE2 belong
to the same VPN. CE1 is connected to PE1 in AS 100, and CE2 is connected to PE2
in AS 200. Inter-AS BGP/MPLS IP VPN is implemented through Option A. Service
flows enter the network through PE1, traverse ASBR1 and ASBR2, and leave the
network through PE2.
NOTE
Interface1, interface2, and interface3 in this example represent GE 1/0/0, GE 2/0/0, and GE
3/0/0, respectively.
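The Option A model described above translates into ordinary PE-style
configuration on the ASBRs. The following excerpt from the ASBR1 configuration
file at the end of this example shows the VPN instance bound to the inter-AS
interface and the EBGP session to ASBR2, which ASBR1 treats as a CE:
#
ip vpn-instance vpn1
 ipv4-family
  route-distinguisher 100:2
  vpn-target 1:1 export-extcommunity
  vpn-target 1:1 import-extcommunity
#
interface GigabitEthernet2/0/0
 ip binding vpn-instance vpn1
 ip address 12.12.12.1 255.255.255.0
#
bgp 100
 ipv4-family vpn-instance vpn1
  peer 12.12.12.2 as-number 200
#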
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure inter-AS VPN Option A. Specifically:
a. Establish an EBGP peer relationship between each PE and its connected
CE.
b. Establish an MP-IBGP peer relationship between the ASBR and PE in the
same AS.
c. Create a VPN instance on each ASBR, bind the VPN instance to the
interface connecting each ASBR to the other ASBR, and establish an EBGP
peer relationship between ASBRs.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface as listed in Figure 1-72
● MPLS LSR IDs of the PEs and ASBRs
● Name, VPN target, and RD of the VPN instance on each PE and ASBR
● IFIT instance ID (1) and measurement interval (10s)
● Target flow's source IP address (11.11.11.11) and destination IP address
(22.22.22.22) in the IFIT instance
● NMS's IPv4 address (192.168.100.100) and port number (10001), and
reachable routes between the NMS and devices
Procedure
Step 1 Configure inter-AS VPN Option A. For configuration details, see Configuration
Files.
Step 2 Configure basic 1588v2 functions to synchronize the clocks across all devices.
1. # Configure ASBR1 to import clock signals from BITS0.
[~ASBR1] clock bits-type bits0 2mhz
[*ASBR1] clock source bits0 synchronization enable
[*ASBR1] clock source bits0 priority 1
[*ASBR1] commit
Step 3 Configure hop-by-hop IFIT measurement for the inter-AS Option A link between
PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] encapsulation nexthop 2.2.2.9
[*PE1-ifit] instance 1
[*PE1-ifit-instance-1] measure-mode trace
[*PE1-ifit-instance-1] interval 10
[*PE1-ifit-instance-1] flow unidirectional source 11.11.11.11 destination 22.22.22.22 dscp 63 vpn-instance vpn1
[*PE1-ifit-instance-1] binding interface gigabitethernet 2/0/0
[*PE1-ifit-instance-1] commit
[~PE1-ifit-instance-1] quit
[~PE1-ifit] quit
# Run the display ifit static and display ifit dynamic-hop commands to check
the configuration and status of PE1.
[~PE1] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Source IP Address/Mask Length : 11.11.11.11/32
Destination IP Address/Mask Length : 22.22.22.22/32
Protocol : any
Source Port : any
Destination Port : any
Gtp : disable
Gtp TeId : --
Dscp : 63
Interface : GigabitEthernet2/0/0
vpn-instance : vpn1
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Gtpu Sequence Measure : disable
Single Device Measure : disable
Measure Mode : trace
Interval : 10(s)
Tunnel Type : --
Flow Match Priority : 0
Flow InstType Priority : 9
[~PE1] display ifit dynamic-hop
2020-01-14 17:24:39.28 +08:00
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitOutput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
# Configure ASBR1.
<ASBR1> system-view
[~ASBR1] ifit
[*ASBR1-ifit] node-id 3
[*ASBR1-ifit] commit
[~ASBR1-ifit] quit
# Run the display ifit dynamic-hop command to check the configuration and
status of ASBR1.
[~ASBR1] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 6
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitInput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 7
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet2/0/0
Direction : egress
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
NOTE
● You are advised to configure IFIT mapping in the inbound direction and then in the
outbound direction. Otherwise, traffic may be interrupted.
● After IFIT mapping in the outbound direction is configured, the instance in the egress
direction starts to age.
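The mapping commands that the preceding note refers to appear in the ASBR
configuration files at the end of this example: IFIT mapping is enabled in the
inbound direction on ASBR2 and in the outbound direction on ASBR1, on their
interconnected VPN interfaces.
#
# On ASBR2 (inbound direction):
interface GigabitEthernet2/0/0
 ip binding vpn-instance vpn1
 ifit ingress mapping enable
#
# On ASBR1 (outbound direction):
interface GigabitEthernet2/0/0
 ip binding vpn-instance vpn1
 ifit egress mapping enable
#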
# Run the display ifit dynamic-hop command to check the configuration and
status of ASBR1.
[~ASBR1] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 6
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitInput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 8
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet2/0/0
Direction : transitOutput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 7
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet2/0/0
Direction : egress
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
# Run the display ifit dynamic-hop command to check the configuration and
status of ASBR2.
[~ASBR2] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 5
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitOutput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 4
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet2/0/0
Direction : transitInput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
# Configure PE2.
<PE2> system-view
[~PE2] ifit
[*PE2-ifit] node-id 4
[*PE2-ifit] commit
[~PE2-ifit] quit
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv4-address 192.168.100.100 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.10.1.2 255.255.255.0
mpls
mpls ldp
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 10.1.1.2 255.255.255.0
ptp enable
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
bgp 100
peer 2.2.2.9 as-number 100
peer 2.2.2.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.9 enable
#
ipv4-family vpn-instance vpn1
peer 10.1.1.1 as-number 65001
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 10.10.1.0 0.0.0.255
#
ifit
node-id 1
encapsulation nexthop 2.2.2.9
instance 1
measure-mode trace
interval 10
flow unidirectional source 11.11.11.11 destination 22.22.22.22 dscp 63 vpn-instance vpn1
binding interface GigabitEthernet2/0/0
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● ASBR1 configuration file
#
sysname ASBR1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source bits0 synchronization enable
clock source bits0 priority 1
clock source ptp synchronization enable
clock source ptp priority 1
clock bits-type bits0 2mhz
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:2
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
mpls lsr-id 2.2.2.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.10.1.1 255.255.255.0
mpls
mpls ldp
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 12.12.12.1 255.255.255.0
ptp enable
ifit egress mapping enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 12.12.12.3 255.255.255.0
ptp enable
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpn1
peer 12.12.12.2 as-number 200
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.10.1.0 0.0.0.255
#
ifit
node-id 3
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● ASBR2 configuration file
#
sysname ASBR2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 200:2
apply-label per-instance
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
mpls lsr-id 3.3.3.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.40.1.1 255.255.255.0
mpls
mpls ldp
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 12.12.12.2 255.255.255.0
ptp enable
ifit ingress mapping enable
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
bgp 200
peer 4.4.4.9 as-number 200
peer 4.4.4.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 4.4.4.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 4.4.4.9 enable
#
ipv4-family vpn-instance vpn1
peer 12.12.12.1 as-number 100
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.40.1.0 0.0.0.255
#
ifit
node-id 2
encapsulation nexthop 4.4.4.9
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● PE2 configuration file
#
sysname PE2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 200:1
apply-label per-instance
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
mpls lsr-id 4.4.4.9
#
mpls
#
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 10.2.1.2 255.255.255.0
ptp enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.40.1.2 255.255.255.0
mpls
mpls ldp
ptp enable
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
#
bgp 200
peer 3.3.3.9 as-number 200
peer 3.3.3.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 3.3.3.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 3.3.3.9 enable
#
ipv4-family vpn-instance vpn1
peer 10.2.1.1 as-number 65002
#
ospf 1
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 10.40.1.0 0.0.0.255
#
ifit
node-id 4
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
Networking Requirements
G-SRv6 allows SRHs to carry shorter G-SIDs, reducing SRv6 header overhead,
improving forwarding performance, and facilitating large-scale SRv6 deployment. To
meet users' higher requirements on service quality, IFIT is required on a G-SRv6
network to monitor the packet loss rate and delay on links between PEs in real
time. This enables timely responses to service quality deterioration. On the
network shown in Figure 1-73, PE1, P1, P2, and PE2 are in the same AS. It is
required that IS-IS be configured for these devices to achieve IPv6 network
connectivity, a bidirectional SRv6 TE Policy be deployed between PE1 and PE2 to
carry L3VPNv4 services, and SRH compression be performed to reduce the SRv6
header size.
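To see why compression matters, consider the SRH size. The following Python sketch compares an SRH carrying standard 128-bit SIDs with one carrying 32-bit G-SIDs packed four per 128-bit container; the 8-byte fixed header and 16-byte entry size follow the SRv6 encapsulation, while the 32-bit G-SID length is an assumption matching common G-SRv6 deployments, not output from any device.
import math

def srh_size_bytes(sid_count: int, gsid: bool = False) -> int:
    """SRH size: 8-byte fixed header plus 16 bytes per 128-bit entry.
    With G-SRv6, four 32-bit G-SIDs share one 128-bit container."""
    entries = math.ceil(sid_count / 4) if gsid else sid_count
    return 8 + 16 * entries

# A 10-SID path: 168 bytes uncompressed versus 56 bytes with G-SIDs.
print(srh_size_bytes(10), srh_size_bytes(10, gsid=True))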
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an L3VPNv4 over SRv6 TE Policy on PE1, P1, P2, and PE2.
Specifically:
a. Enable IPv6 forwarding and configure an IPv6 address for each involved
interface on PE1, P1, P2, and PE2.
b. Enable IS-IS, configure an IS-IS level, and specify a network entity title
(NET) on PE1, P1, P2, and PE2.
c. Configure an IPv4 L3VPN instance on each PE and bind the IPv4 L3VPN
instance to an access-side interface.
d. Establish an EBGP peer relationship between each PE and its connected
CE.
e. Establish a BGP VPNv4 peer relationship between the PEs.
f. Configure SRv6 SIDs and enable IS-IS SRv6 on PE1, P1, P2, and PE2. In
addition, configure PE1 and PE2 to advertise VPN routes carrying SIDs.
g. Deploy an SRv6 TE Policy between PE1 and PE2.
h. Configure a tunnel policy on PE1 and PE2 to import VPN traffic.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface as listed in Figure 1-73
● IPv6 address of each interface on PE1, P1, P2, and PE2
● IS-IS process ID of each device (PE1, P1, P2, and PE2)
● IS-IS level of each device (PE1, P1, P2, and PE2)
● VPN instance names, RDs, and RTs on PE1 and PE2
● IFIT instance ID (1) and measurement interval (10s)
● Target flow's source IP address (10.1.1.1) and destination IP address (10.2.1.1)
● NMS's IPv6 address (2001:db8:101::1) and port number (10001), and
reachable routes between the NMS and device
Procedure
Step 1 Configure L3VPNv4 over SRv6 TE Policy on PE1, P1, P2, and PE2. For configuration
details, see Configuration Files.
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and Ps.
1. # Configure P1 to import clock signals from BITS0.
[~P1] clock bits-type bits0 2mhz
[*P1] clock source bits0 synchronization enable
[*P1] clock source bits0 priority 1
[*P1] commit
[~P1] interface gigabitethernet 1/0/0
[~P1-GigabitEthernet1/0/0] ptp enable
[*P1-GigabitEthernet1/0/0] commit
[~P1-GigabitEthernet1/0/0] quit
[~P1] interface gigabitethernet 2/0/0
[~P1-GigabitEthernet2/0/0] ptp enable
[*P1-GigabitEthernet2/0/0] commit
[~P1-GigabitEthernet2/0/0] quit
[~P1] interface gigabitethernet 3/0/0
[~P1-GigabitEthernet3/0/0] ptp enable
[*P1-GigabitEthernet3/0/0] commit
[~P1-GigabitEthernet3/0/0] quit
Step 3 Configure hop-by-hop IFIT measurement for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] instance-ht16 1
[*PE1-ifit-instance-ht16-1] measure-mode trace
[*PE1-ifit-instance-ht16-1] interval 10
[*PE1-ifit-instance-ht16-1] flow unidirectional source 10.1.1.1 destination 10.2.1.1 dscp 63 vpn-instance
vpna
[*PE1-ifit-instance-ht16-1] binding interface gigabitethernet 2/0/0
[*PE1-ifit-instance-ht16-1] delay-measure enable
[*PE1-ifit-instance-ht16-1] commit
[~PE1-ifit-instance-ht16-1] quit
[~PE1-ifit] quit
# Run the display ifit static and display ifit dynamic-hop commands to check
the configuration and status of PE1.
[~PE1] display ifit static instance-ht16 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance-ht16
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Source IP Address/Mask Length : 10.1.1.1/32
Destination IP Address/Mask Length : 10.2.1.1/32
Protocol : any
Source Port : any
Destination Port : any
Dscp : 63
Interface : GigabitEthernet2/0/0
vpn-instance : vpna
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Measure Mode : trace
Interval : 10(s)
[~PE1] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 4
Instance Type : instance-ht16
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitOutput
Loss Measure : enable
Delay Measure : enable
Interval : 10(s)
# Enable IFIT on P1, P2, and PE2. The configuration on P1 is used as an example.
<P1> system-view
[~P1] ifit
[*P1-ifit] node-id 3
[*P1-ifit] commit
[~P1-ifit] quit
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
NOTE
The sampling interval configured using the sensor-group command must be a non-zero
value. If the sampling interval is set to a value greater than 10 times the instance
measurement interval, sampling is performed at an interval that is 10 times the instance
measurement interval.
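The rule in this note can be stated compactly. The following Python sketch is only an illustration of the clamping behavior described above; the function name is ours, not a device API.
def effective_sample_interval(configured: int, measure_interval: int) -> int:
    """Sampling interval actually used for IFIT sensor paths."""
    if configured <= 0:
        raise ValueError("sample-interval must be non-zero for IFIT sensor paths")
    return min(configured, 10 * measure_interval)

# With a 10s measurement interval, a 200s sample-interval is clamped to 100s.
assert effective_sample_interval(200, 10) == 100
assert effective_sample_interval(60, 10) == 60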
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path insuitoam:flow-info/flow-entry
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] sensor-path insuitoam:measure-report/report
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 10
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy p1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator PE1 ipv6-prefix 2001:DB8:100:1:: 64 compress block 48 compress-static 8 static 32
opcode compress ::12 end psp-usp-usd
opcode ::55 end-op
srv6-te-policy locator PE1
segment-list list1
index 5 sid ipv6 2001:DB8:100:2:22:: compress block 48
index 10 sid ipv6 2001:DB8:100:3:33:: compress block 48
#
telemetry
#
sensor-group ifit
sensor-path insuitoam:flow-info/flow-entry
sensor-path insuitoam:measure-report/report
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 10
destination-group ifit
#
return
● P1 configuration file
#
sysname P1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source bits0 synchronization enable
clock source bits0 priority 1
clock source ptp synchronization enable
clock source ptp priority 1
clock bits-type bits0 2mhz
#
segment-routing ipv6
encapsulation source-address 2001:DB8:2::2
locator P1 ipv6-prefix 2001:DB8:100:2:: 64 compress block 48 compress-static 8 static 32
opcode compress ::22 end psp-usp-usd-coc
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator P1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:10::2/64
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::1/64
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::3/64
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:2::2/128
isis ipv6 enable 1
#
ifit
node-id 3
#
telemetry
#
sensor-group ifit
sensor-path insuitoam:flow-info/flow-entry
sensor-path insuitoam:measure-report/report
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 10
destination-group ifit
#
return
● P2 configuration file
#
sysname P2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::3
locator P2 ipv6-prefix 2001:DB8:100:3:: 64 compress block 48 compress-static 8 static 32
opcode compress ::33 end psp-usp-usd-coc
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0003.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator P2 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:20::2/64
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:30::1/64
isis ipv6 enable 1
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:3::3/128
isis ipv6 enable 1
#
ifit
node-id 4
#
telemetry
#
sensor-group ifit
sensor-path insuitoam:flow-info/flow-entry
sensor-path insuitoam:measure-report/report
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 10
destination-group ifit
#
return
● PE2 configuration file
#
sysname PE2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
tnl-policy p1
apply-label per-instance
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
segment-routing ipv6
encapsulation source-address 2001:DB8:4::4
locator PE2 ipv6-prefix 2001:DB8:100:4:: 64 compress block 48 compress-static 8 static 32
opcode compress ::45 end psp-usp-usd
opcode ::66 end-op
srv6-te-policy locator PE2
segment-list list1
index 5 sid ipv6 2001:DB8:100:3:33:: compress block 48
index 10 sid ipv6 2001:DB8:100:2:22:: compress block 48
index 15 sid ipv6 2001:DB8:100:1:12:: compress block 48
srv6-te policy policy1 endpoint 2001:DB8:1::1 color 101
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE2 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:30::2/64
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.22.1.1 255.255.255.0
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:4::4/128
isis ipv6 enable 1
#
bgp 100
router-id 4.4.4.4
peer 2001:DB8:1::1 as-number 100
peer 2001:DB8:1::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv4-family vpnv4
policy vpn-target
peer 2001:DB8:1::1 enable
peer 2001:DB8:1::1 route-policy p1 import
peer 2001:DB8:1::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator PE2
segment-routing ipv6 traffic-engineer best-effort
peer 10.22.1.2 as-number 65420
#
ifit
node-id 2
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-policy load-balance-number 1
#
telemetry
#
sensor-group ifit
sensor-path insuitoam:flow-info/flow-entry
sensor-path insuitoam:measure-report/report
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 10
destination-group ifit
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.11.1.2 255.255.255.0
#
interface LoopBack1
ip address 11.1.1.1 255.255.255.255
#
bgp 65410
router-id 11.1.1.1
peer 10.11.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 10.11.1.1 enable
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.22.1.2 255.255.255.0
#
interface LoopBack1
ip address 22.2.2.2 255.255.255.255
#
bgp 65420
router-id 22.2.2.2
peer 10.22.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 10.22.1.1 enable
#
return
Networking Requirements
MVPNv4 over BIERv6 uses BIERv6 as a bearer tunnel, encapsulates VPN IP
multicast traffic using BIERv6, and transmits the traffic over a VPN in multicast
mode. To meet users' higher requirements on service quality, IFIT is required on a
BIERv6 network to monitor the packet loss rate and delay on links between PEs in
real time. This enables timely responses to service quality deterioration. Figure
1-74 shows the networking.
Interfaces 1 through 4 in this example represent GE1/0/0, GE1/0/1, GE1/0/2, and GE1/0/3,
respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure MVPNv4 over BIERv6 on PE1, the P, PE2, and PE3. Specifically:
a. (Optional) Configure L3VPNv4 over SRv6 and ensure that the unicast
VPN runs properly. If the unicast network has been configured, skip this
step.
b. Configure basic BIERv6 functions and enable IS-ISv6 for BIERv6 on PE1,
PE2, PE3, and the P.
c. Establish BGP MVPN peer relationships between PEs.
d. Configure multicast traffic forwarding over a BIERv6 I-PMSI tunnel.
e. Enable the BIERv6 S-PMSI tunnel function and configure switching
criteria.
f. Enable PIM on PEs.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface as listed in Figure 1-74
● ID (1) of the public network IS-IS process, in a Level-2 area
● VPN instance name (vpna) on PE1, PE2, and PE3
● PE2's BFR-ID (2), PE3's BFR-ID (3), sub-domain ID (0), BSL (256), and Max-SI
(0)
● IFIT instance ID (1) and measurement interval (10s)
● Multicast source address (192.168.11.0) and multicast group address
(225.1.1.0) of the target flow in the IFIT instance
● NMS's IPv6 address (2001:db8:101::1) and port number (10001), and
reachable routes between the NMS and device
Procedure
Step 1 Configure L3VPNv4 over SRv6 on PE1, the P, PE2, and PE3. For configuration
details, see Configuration Files.
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and P.
1. # Configure the P to import clock signals from BITS0.
[~P] clock bits-type bits0 2mhz
[*P] clock source bits0 synchronization enable
[*P] clock source bits0 priority 1
[*P] commit
Step 3 Configure end-to-end IFIT measurement for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] instance-ht16 1
[*PE1-ifit-instance-ht16-1] interval 10
[*PE1-ifit-instance-ht16-1] flow unidirectional source 192.168.11.0 group 225.1.1.0 vpn-instance vpna
[*PE1-ifit-instance-ht16-1] binding interface gigabitethernet 1/0/1
[*PE1-ifit-instance-ht16-1] delay-measure enable
[*PE1-ifit-instance-ht16-1] commit
[~PE1-ifit-instance-ht16-1] quit
[~PE1-ifit] quit
# Run the display ifit multicast command to check the configuration and status
of PE1.
[~PE1] display ifit multicast source 192.168.11.0 group 225.1.1.0
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance-ht16
Flow Id : 1310721
Flow Monitor Id : 262145
Flow Node Id : 1
Flow Type : unidirectional
Multicast Source Address : 192.168.11.0
Multicast Group Address : 225.1.1.0
Interface : GigabitEthernet1/0/1
Vpn-instance : vpna
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Measure Mode : e2e
Interval : 10(s)
# Run the display ifit dynamic-hop command to check the configuration and
status of PE2.
[~PE2] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 4
Instance Type : instance-ht16
Flow Id : 1310721
Flow Monitor Id : 262145
Flow Node Id : 1
Flow Type : unidirectional
Interface : --
Direction : egress
Loss Measure : enable
Delay Measure : enable
Interval : 10(s)
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
NOTE
The sampling interval configured using the sensor-group command must be a non-zero
value. If the sampling interval is set to a value greater than 10 times the instance
measurement interval, sampling is performed at an interval that is 10 times the instance
measurement interval.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path insuitoam:flow-info/flow-entry
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] sensor-path insuitoam:measure-report/report
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 10
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
multicast mvpn ipv6-underlay 2001:DB8:10::1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
multicast routing-enable
mvpn
ipv6 underlay enable
sender-enable
src-dt4 locator PE1 sid 2001:DB8:100::2
rpt-spt mode
ipmsi-tunnel
bier
spmsi-tunnel
holddown-time 80
switch-delay 20
group 225.1.1.0 255.255.255.0 source 192.168.11.0 255.255.255.0 threshold 10 bier
#
segment-routing ipv6
encapsulation source-address 2001:DB8:10::1
locator PE1 ipv6-prefix 2001:DB8:100:: 64 static 32
opcode ::111 end psp
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0001.00
bier enable
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::2/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet1/0/1
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.1.1 255.255.255.0
pim sm
ptp enable
#
interface LoopBack2
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.255
pim sm
#
pim vpn-instance vpna
static-rp 10.1.1.1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:10::1/128
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:20::1 as-number 100
peer 2001:DB8:20::1 connect-interface LoopBack1
peer 2001:DB8:30::1 as-number 100
peer 2001:DB8:30::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv4-family mvpn
policy vpn-target
peer 2001:DB8:20::1 enable
peer 2001:DB8:30::1 enable
#
ipv4-family vpnv4
policy vpn-target
#
return
● P configuration file
#
sysname P
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:1::1/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet1/0/1
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:2::1/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet1/0/2
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:3::1/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet1/0/3
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:4::1/96
ptp enable
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:40::1/128
isis ipv6 enable 1
#
bier
sub-domain 0 ipv6
bfr-prefix interface LoopBack1
protocol isis
end-bier locator P sid 2001:DB8:400::1
encapsulation-type ipv6 bsl 256 max-si 0
#
return
● PE2 configuration file
#
sysname PE2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
multicast mvpn ipv6-underlay 2001:DB8:20::1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
multicast routing-enable
mvpn
ipv6 underlay enable
c-multicast signaling bgp
rpt-spt mode
#
segment-routing ipv6
encapsulation source-address 2001:DB8:20::1
locator PE2 ipv6-prefix 2001:DB8:200:: 64 static 32
#
telemetry
#
sensor-group ifit
sensor-path insuitoam:flow-info/flow-entry
sensor-path insuitoam:measure-report/report
#
destination-group ifit
ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 10
destination-group ifit
#
return
● PE3 configuration file
#
sysname PE3
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
multicast mvpn ipv6-underlay 2001:DB8:30::1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
multicast routing-enable
mvpn
ipv6 underlay enable
c-multicast signaling bgp
rpt-spt mode
#
segment-routing ipv6
encapsulation source-address 2001:DB8:30::1
locator PE3 ipv6-prefix 2001:DB8:300:: 64 static 32
opcode ::333 end psp
#
isis 1
cost-style wide
network-entity 10.0000.0000.0003.00
bier enable
#
ipv6 enable topology ipv6
segment-routing ipv6 locator PE3 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:3::2/96
isis ipv6 enable 1
ptp enable
#
interface GigabitEthernet1/0/1
undo shutdown
ip binding vpn-instance vpna
ip address 192.168.3.1 255.255.255.0
pim sm
ptp enable
#
pim vpn-instance vpna
static-rp 10.1.1.1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:30::1/128
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.3
peer 2001:DB8:10::1 as-number 100
peer 2001:DB8:10::1 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv4-family mvpn
policy vpn-target
peer 2001:DB8:10::1 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2001:DB8:10::1 enable
peer 2001:DB8:10::1 prefix-sid
#
ipv4-family vpn-instance vpna
import-route direct
segment-routing ipv6 locator PE3
segment-routing ipv6 best-effort
peer 192.168.3.2 as-number 65412
#
bier
sub-domain 0 ipv6
bfr-id 3
bfr-prefix interface LoopBack1
protocol isis
end-bier locator PE3 sid 2001:DB8:300::1
encapsulation-type ipv6 bsl 256 max-si 0
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.1.2 255.255.255.0
pim sm
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.11.2 255.255.255.0
pim sm
#
bgp 65410
router-id 11.11.11.11
peer 192.168.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
peer 192.168.1.1 enable
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.2.2 255.255.255.0
pim sm
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.4.2 255.255.255.0
pim sm
igmp enable
igmp version 3
#
bgp 65411
router-id 12.12.12.12
peer 192.168.2.1 as-number 100
#
ipv4-family unicast
undo synchronization
peer 192.168.2.1 enable
#
return
Networking Requirements
To meet users' higher requirements on service quality, IFIT is required on an
L3VPN to monitor the packet loss rate and delay on links between PEs in real
time. This enables timely responses to service quality deterioration. IFIT supports
bidirectional flow-based performance measurement. A backward flow instance is
automatically generated based on the forward flow created on one device. On a
live network, there are a large number of access devices but only a few core-layer
devices. Creating bidirectional flows on core-layer devices greatly reduces the
configuration and maintenance workload.
On the L3VPN shown in Figure 1-75, PE1 is a core-layer device, and PE2 is an
access-layer device.
Interfaces 1 through 3 in this example represent GE1/0/0, GE2/0/0, and GE3/0/0, respectively.
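Conceptually, the backward flow is derived by swapping the source and destination of the configured forward flow while keeping the other match fields, as the display ifit dynamic output on PE2 later in this example confirms. The following Python sketch makes that derivation explicit; the dictionary field names are illustrative, not device internals.
def backward_flow(forward: dict) -> dict:
    """Derive the auto-generated backward flow from a forward bidirectional flow."""
    back = dict(forward)
    back["source"], back["destination"] = forward["destination"], forward["source"]
    return back

fwd = {"source": "10.11.1.1/32", "destination": "10.22.2.2/32",
       "dscp": 63, "vpn_instance": "vpna"}
print(backward_flow(fwd))  # source 10.22.2.2/32, destination 10.11.1.1/32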
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an L3VPN on each PE and the P. Specifically:
a. Configure an IP address and a routing protocol for each interface so that
all devices can communicate at the network layer. This example uses IS-IS
as the routing protocol.
b. Configure MPLS and public network tunnels to carry L3VPN services. In
this example, SR-MPLS TE tunnels are used.
c. Configure a VPN instance on each PE, enable the IPv4 address family for
the instance, and bind the instance to the interface connecting the PE to
a CE.
d. Establish an MP-IBGP peer relationship between the PEs.
e. Establish an EBGP peer relationship between each CE and PE pair for
them to exchange routing information.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface as listed in Figure 1-75
● MPLS LSR ID of each PE and the P
● VPN instance name, RD, and RTs on PE1 and PE2
● IFIT instance ID (1) and measurement interval (10s)
● Target flow's source IP address (10.11.1.1) and destination IP address (10.22.2.2)
● NMS's IPv4 address (192.168.100.100) and port number (10001), and
reachable routes between the NMS and device
Procedure
Step 1 Configure an L3VPN on each PE and the P. For configuration details, see
Configuration Files.
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and P.
1. # Configure the P to import clock signals from BITS0.
[~P] clock bits-type bits0 2mhz
[*P] clock source bits0 synchronization enable
[*P] clock source bits0 priority 1
[*P] commit
# Configure PE1.
[~PE1] ptp enable
[*PE1] ptp domain 1
[*PE1] ptp device-type bc
[*PE1] clock source ptp synchronization enable
[*PE1] clock source ptp priority 1
[*PE1] commit
# Configure PE2.
[~PE2] ptp enable
[*PE2] ptp domain 1
[*PE2] ptp device-type bc
[*PE2] clock source ptp synchronization enable
[*PE2] clock source ptp priority 1
[*PE2] commit
# Configure PE1.
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ptp enable
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[~PE1] interface gigabitethernet 2/0/0
[~PE1-GigabitEthernet2/0/0] ptp enable
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] interface gigabitethernet 1/0/0
[~PE2-GigabitEthernet1/0/0] ptp enable
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[~PE2] interface gigabitethernet 2/0/0
[~PE2-GigabitEthernet2/0/0] ptp enable
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
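IFIT one-way delay results are only as accurate as the 1588v2 synchronization configured above. As a reminder of the mechanism, the standard offset and mean-path-delay computation from the Sync/Delay_Req timestamp exchange is sketched below in Python; these are the textbook IEEE 1588 formulas, and the example numbers are invented for illustration.
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """t1: master sends Sync; t2: slave receives it;
    t3: slave sends Delay_Req; t4: master receives it."""
    offset = ((t2 - t1) - (t4 - t3)) / 2           # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2  # assumes a symmetric path
    return offset, mean_path_delay

# A slave running 1.5us ahead across a 10us path:
print(ptp_offset_and_delay(0.0, 11.5e-6, 20.0e-6, 28.5e-6))  # ~ (1.5e-06, 1e-05)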
Step 3 Configure bidirectional flow-based IFIT for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] encapsulation nexthop 3.3.3.9
[*PE1-ifit] instance 1
[*PE1-ifit-instance-1] measure-mode trace
[*PE1-ifit-instance-1] interval 10
[*PE1-ifit-instance-1] flow bidirectional source 10.11.1.1 destination 10.22.2.2 dscp 63 vpn-instance
vpna
[*PE1-ifit-instance-1] binding interface gigabitethernet 1/0/0
[*PE1-ifit-instance-1] commit
[~PE1-ifit-instance-1] quit
[~PE1-ifit] quit
# Run the display ifit static command to check the configuration and status of
PE1.
[~PE1] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : bidirectional
Source IP Address/Mask Length : 10.11.1.1/32
Destination IP Address/Mask Length : 10.22.2.2/32
Protocol : any
Source Port : any
Destination Port : any
Gtp : disable
Gtp TeId : --
Dscp : 63
Interface : GigabitEthernet1/0/0
vpn-instance : vpna
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Gtpu Sequence Measure : disable
Single Device Measure : disable
Measure Mode : trace
Interval : 10(s)
Tunnel Type : --
Flow Match Priority : 0
Flow InstType Priority : 9
# Configure the P.
<P> system-view
[~P] ifit
[*P-ifit] node-id 3
[*P-ifit] commit
[~P-ifit] quit
# Configure PE2.
<PE2> system-view
[~PE2] ifit
[*PE2-ifit] node-id 2
[*PE2-ifit] encapsulation nexthop 1.1.1.9
[*PE2-ifit] commit
[~PE2-ifit] quit
# Run the display ifit dynamic command to check the configuration and
status of PE2.
[~PE2] display ifit dynamic
-------------------------------------------------------------------------
Flow Classification : dynamic
Instance Id : 100
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Source IP Address/Mask Length : 10.22.2.2/32
Destination IP Address/Mask Length : 10.11.1.1/32
Protocol : any
Source Port : any
Destination Port : any
Gtp : disable
Gtp TeId : --
Dscp : 63
Interface : GigabitEthernet1/0/0
vpn-instance : vpna
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Gtpu Sequence Measure : disable
Single Device Measure : disable
Measure Mode : e2e
Interval : 10(s)
Tunnel Type : --
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv4-address 192.168.100.100 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/
flow-hop-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-
statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
#
explicit-path pe2
next sid label 16200 type prefix
next sid label 16300 type prefix
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0001.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
ptp enable
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16100
#
bgp 100
#
return
● P configuration file
#
sysname P
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
isis enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
ptp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 172.18.1.1 255.255.255.0
ptp enable
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16200
#
ifit
node-id 3
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● PE2 configuration file
#
sysname PE2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
apply-label per-instance
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
#
explicit-path pe1
next sid label 16200 type prefix
next sid label 16100 type prefix
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0003.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.2 255.255.255.0
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.2 255.255.255.0
isis enable 1
ptp enable
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16300
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path pe1
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpna
peer 10.2.1.1 as-number 65420
#
ifit
node-id 2
encapsulation nexthop 1.1.1.9
#
tunnel-policy p1
tunnel select-seq sr-te load-balance-number 1
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/flow-hop-statistic
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● CE1 configuration file
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
#
interface LoopBack1
ip address 10.11.1.1 255.255.255.255
#
bgp 65410
peer 10.1.1.2 as-number 100
#
ipv4-family unicast
undo synchronization
peer 10.1.1.2 enable
network 10.11.1.1 255.255.255.255
#
return
● CE2 configuration file
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.1 255.255.255.0
#
interface LoopBack1
ip address 10.22.2.2 255.255.255.255
#
bgp 65420
peer 10.2.1.2 as-number 100
#
ipv4-family unicast
undo synchronization
peer 10.2.1.2 enable
network 10.22.2.2 255.255.255.255
#
return
1.1.12.5.12 Example for Configuring IFIT in Public Network Traffic over SRv6
Scenarios
This section provides an example for configuring IFIT to implement hop-by-hop
packet loss and delay measurement in public network IPv4 over SRv6 scenarios.
Networking Requirements
To meet users' higher requirements on service quality, IFIT is required on a public
network to monitor the packet loss rate and delay of links between PEs in real
time. This enables timely responses to service quality deterioration. In the public
network IPv4 over SRv6 scenario shown in Figure 1-76, PE1, the P, and PE2 belong
to the same AS and need to run IS-IS to implement IPv6 network connectivity.
PE1, the P, and PE2 are Level-1 devices that belong to IS-IS process 1. An IBGP
peer relationship needs to be established between PE1 and PE2, and EBGP peer
relationships need to be established between the PEs and their attached devices.
Figure 1-76 Configuring IFIT in public network traffic over SRv6 scenarios
NOTE
Interfaces 1 through 3 in this example represent GE1/0/0, GE2/0/0, and GE3/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a public network IPv4 over SRv6 TE Policy on each PE and the P.
Specifically:
a. Enable IPv6 forwarding on each device and configure IPv6 addresses for
involved interfaces.
b. Enable IS-IS, configure a level, and specify a network entity on each
device.
c. Establish an EBGP peer relationship between the PEs and their attached devices.
d. Set up an MP-IBGP peer relationship between the PEs.
e. Deploy an SRv6 TE Policy between PE1 and PE2, and enable IS-IS SRv6.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface as listed in Figure 1-76
● IS-IS process IDs of the PEs and P
● IS-IS levels on the PEs and P
● IFIT instance ID (1) and measurement interval (10s)
● Target flow's source IP address (10.1.1.1) and destination IP address (10.2.1.1)
● NMS's IPv6 address (2001:db8:101::1) and port number (10001), and
reachable routes between the NMS and device
Procedure
Step 1 Configure a public network IPv4 over SRv6 TE Policy on each PE and the P. For
configuration details, see Configuration Files.
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and P.
1. # Configure the P to import clock signals from BITS0.
[~P] clock bits-type bits0 2mhz
[*P] clock source bits0 synchronization enable
[*P] clock source bits0 priority 1
[*P] commit
2. # Enable 1588v2 globally.
# Configure the P.
[~P] ptp enable
[*P] ptp domain 1
[*P] ptp device-type bc
[*P] clock source ptp synchronization enable
[*P] clock source ptp priority 1
[*P] commit
# Configure PE1.
[~PE1] ptp enable
[*PE1] ptp domain 1
[*PE1] ptp device-type bc
[*PE1] clock source ptp synchronization enable
[*PE1] clock source ptp priority 1
[*PE1] commit
# Configure PE2.
[~PE2] ptp enable
[*PE2] ptp domain 1
[*PE2] ptp device-type bc
[*PE2] clock source ptp synchronization enable
[*PE2] clock source ptp priority 1
[*PE2] commit
3. # Enable 1588v2 on interfaces.
# Configure the P.
[~P] interface gigabitethernet 1/0/0
[~P-GigabitEthernet1/0/0] ptp enable
[*P-GigabitEthernet1/0/0] commit
[~P-GigabitEthernet1/0/0] quit
[~P] interface gigabitethernet 2/0/0
[~P-GigabitEthernet2/0/0] ptp enable
[*P-GigabitEthernet2/0/0] commit
[~P-GigabitEthernet2/0/0] quit
[~P] interface gigabitethernet 3/0/0
[~P-GigabitEthernet3/0/0] ptp enable
[*P-GigabitEthernet3/0/0] commit
[~P-GigabitEthernet3/0/0] quit
# Configure PE1.
[~PE1] interface gigabitethernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ptp enable
[*PE1-GigabitEthernet1/0/0] commit
[~PE1-GigabitEthernet1/0/0] quit
[~PE1] interface gigabitethernet 2/0/0
[~PE1-GigabitEthernet2/0/0] ptp enable
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] interface gigabitethernet 1/0/0
[~PE2-GigabitEthernet1/0/0] ptp enable
[*PE2-GigabitEthernet1/0/0] commit
[~PE2-GigabitEthernet1/0/0] quit
[~PE2] interface gigabitethernet 2/0/0
[~PE2-GigabitEthernet2/0/0] ptp enable
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
Step 3 Configure IFIT for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] instance 1
[*PE1-ifit-instance-1] measure-mode trace
[*PE1-ifit-instance-1] interval 10
[*PE1-ifit-instance-1] flow unidirectional source 10.1.1.1 destination 10.2.1.1 dscp 63
[*PE1-ifit-instance-1] binding interface gigabitethernet 2/0/0
[*PE1-ifit-instance-1] commit
[~PE1-ifit-instance-1] quit
[~PE1-ifit] quit
# Run the display ifit static command to check the configuration and status of
PE1.
[~PE1] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Source IP Address/Mask Length : 10.1.1.1/32
Destination IP Address/Mask Length : 10.2.1.1/32
Protocol : any
Source Port : any
Destination Port : any
Gtp : disable
Gtp TeId : --
Dscp : 63
Interface : GigabitEthernet2/0/0
vpn-instance : --
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Gtpu Sequence Measure : disable
Single Device Measure : disable
Measure Mode : trace
Interval : 10(s)
Tunnel Type : --
Flow Match Priority : 0
Flow InstType Priority : 9
# Configure the P.
<P> system-view
[~P] ifit
[*P-ifit] node-id 3
[*P-ifit] commit
[~P-ifit] quit
# Run the display ifit dynamic-hop command to view the configuration and
status of the P.
[~P] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet2/0/0
Direction : transitOutput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 513
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitInput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
# Configure PE2.
<PE2> system-view
[~PE2] ifit
[*PE2-ifit] node-id 2
[*PE2-ifit] commit
[~PE2-ifit] quit
# Run the display ifit dynamic-hop command to check the configuration and
status of PE2.
[~PE2] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet2/0/0
Direction : egress
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 513
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : transitInput
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 10(s)
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv6-address 2001:DB8:101::1 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-hop-statistics/
flow-hop-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-
statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
tunnel-selector p1 permit node 1
apply tunnel-policy p1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator aa ipv6-prefix 2001:DB8:100:: 64 static 32
opcode ::100 end psp
segment-list list1
index 5 sid ipv6 2001:DB8:200::100
srv6-te policy policy1 endpoint 2001:DB8:2::2 color 101
candidate-path preference 100
segment-list list1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator aa
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
destination-group ifit
#
return
Networking Requirements
To meet users' higher requirements on service quality, IFIT is required on an
L3VPN to monitor the packet loss rate and delay on links between PEs in real
time. This enables timely responses to service quality deterioration. IFIT supports
automatic learning of dynamic flows on the ingress by using the mask or exact
match of the source or destination address. In addition, IFIT can flexibly monitor
service quality in real time by configuring a learning whitelist.
Figure 1-77 shows an L3VPN where service flows enter the network through PE1,
traverse the P, and leave the network through PE2.
Interface1, interface2, and interface3 in this example represent GE 1/0/0, GE 2/0/0, and GE
3/0/0, respectively.
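The whitelist rule configured later in this procedure (rule rule1 ipv4 source 10.11.1.1 32 destination 10.22.2.2 32) admits a learned flow only when both addresses fall within the rule's prefixes. The following Python sketch illustrates that matching logic; the rule representation is an assumption made for illustration, not device internals.
import ipaddress

def whitelist_permits(rule: dict, src: str, dst: str) -> bool:
    """True if a candidate flow matches the whitelist rule's prefixes."""
    return (ipaddress.ip_address(src) in ipaddress.ip_network(rule["source"])
            and ipaddress.ip_address(dst) in ipaddress.ip_network(rule["destination"]))

rule = {"source": "10.11.1.1/32", "destination": "10.22.2.2/32"}
print(whitelist_permits(rule, "10.11.1.1", "10.22.2.2"))  # True: flow is learned
print(whitelist_permits(rule, "10.11.1.9", "10.22.2.2"))  # False: flow is ignored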
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an L3VPN on the P and each PE. Specifically:
a. Configure an IP address and a routing protocol for each interface so that
all devices can communicate at the network layer. This example uses IS-IS
as the routing protocol.
b. Configure MPLS and public network tunnels to carry L3VPN services. In
this example, SR-MPLS TE tunnels are used.
c. Configure a VPN instance on each PE, enable the IPv4 address family for
the instance, and bind the instance to the interface connecting the PE to
a CE.
d. Establish an MP-IBGP peer relationship between the PEs for them to
exchange routing information.
e. Establish an EBGP peer relationship between each CE and PE pair for
them to exchange routing information.
2. Configure basic 1588v2 functions to synchronize the clocks across all devices.
3. Configure packet loss and delay measurement on the PEs to collect packet
loss rate and delay statistics at intervals.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface as listed in Figure 1-77
● MPLS LSR IDs on the PEs and P
Procedure
Step 1 Configure an L3VPN on each PE and the P. For configuration details, see
Configuration Files.
Step 2 Configure basic 1588v2 functions to synchronize the clocks of the PEs and P.
1. # Configure the P to import clock signals from BITS0.
[~P] clock bits-type bits0 2mhz
[*P] clock source bits0 synchronization enable
[*P] clock source bits0 priority 1
[*P] commit
Step 3 Configure IFIT flow learning for the link between PE1 and PE2.
# Configure PE1.
<PE1> system-view
[~PE1] ifit
[*PE1-ifit] node-id 1
[*PE1-ifit] encapsulation nexthop 3.3.3.9
[*PE1-ifit] whitelist-group 1
[*PE1-ifit-whitelist-group-1] rule rule1 ipv4 source 10.11.1.1 32 destination 10.22.2.2 32
[*PE1-ifit-whitelist-group-1] commit
[~PE1-ifit-whitelist-group-1] quit
[~PE1-ifit] flow-learning vpn-instance vpna
[*PE1-ifit-vpn-instance-vpna] flow-learning unidirectional
[*PE1-ifit-vpn-instance-vpna] flow-learning interface all whitelist-group 1
[*PE1-ifit-vpn-instance-vpna] commit
[~PE1-ifit-vpn-instance-vpna] quit
[~PE1-ifit] quit
# Run the display ifit dynamic command to check statistics about dynamic flows
learned on PE1.
[~PE1] display ifit dynamic
-------------------------------------------------------------------------
Flow Classification : dynamic
Instance Id : 10
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Source IP Address/Mask Length : 10.11.1.1/32
Destination IP Address/Mask Length : 10.22.2.2/32
Protocol : any
Source Port : any
Destination Port : any
Gtp : disable
Gtp TeId : --
Dscp : --
Interface : GigabitEthernet1/0/0
vpn-instance : vpna
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Gtpu Sequence Measure : disable
Single Device Measure : disable
Measure Mode : e2e
Interval : 60(s)
Tunnel Type : --
# Configure PE2.
<PE2> system-view
[~PE2] ifit
[*PE2-ifit] node-id 2
[*PE2-ifit] commit
[~PE2-ifit] quit
# Run the display ifit dynamic-hop command to check the configuration and
status of PE2.
[~PE2] display ifit dynamic-hop
-------------------------------------------------------------------------
Flow Classification : dynamic-hop
Instance Id : 514
Instance Type : instance
Flow Id : 1572865
Flow Monitor Id : 524289
Flow Node Id : 1
Flow Type : unidirectional
Interface : GigabitEthernet1/0/0
Direction : egress
Loss Measure : enable
Delay Measure : enable
Disorder Measure : disable
Interval : 60(s)
Step 4 Configure the device to send statistics to the NMS through telemetry. The
following uses PE1 as an example.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv4-address 192.168.100.100 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-
statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● PE1 configuration file
#
sysname PE1
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
apply-label per-instance
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 1.1.1.9
#
mpls
mpls te
#
explicit-path pe2
next sid label 16200 type prefix
next sid label 16300 type prefix
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0001.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.2 255.255.255.0
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.16.1.1 255.255.255.0
isis enable 1
ptp enable
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
return
● P configuration file
#
sysname P
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0002.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 172.16.1.2 255.255.255.0
isis enable 1
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.1 255.255.255.0
isis enable 1
ptp enable
#
interface GigabitEthernet3/0/0
undo shutdown
ip address 172.18.1.1 255.255.255.0
ptp enable
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16200
#
return
● PE2 configuration file
#
sysname PE2
#
ptp enable
ptp domain 1
ptp device-type bc
#
clock source ptp synchronization enable
clock source ptp priority 1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
apply-label per-instance
tnl-policy p1
vpn-target 111:1 export-extcommunity
vpn-target 111:1 import-extcommunity
#
mpls lsr-id 3.3.3.9
#
mpls
mpls te
#
explicit-path pe1
next sid label 16200 type prefix
next sid label 16100 type prefix
#
segment-routing
#
isis 1
is-level level-2
cost-style wide
network-entity 10.0000.0000.0003.00
traffic-eng level-2
segment-routing mpls
segment-routing global-block 16000 20000
#
interface GigabitEthernet1/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.2.1.2 255.255.255.0
ptp enable
#
interface GigabitEthernet2/0/0
undo shutdown
ip address 172.17.1.2 255.255.255.0
isis enable 1
ptp enable
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
isis prefix-sid absolute 16300
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te signal-protocol segment-routing
mpls te tunnel-id 1
mpls te path explicit-path pe1
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 10.2.1.1 as-number 65420
#
ifit
node-id 2
#
tunnel-policy p1
tunnel select-seq sr-te load-balance-number 1
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
Networking Requirements
When IFIT is deployed on a network, it is possible that only a single device
supports IFIT while its upstream and downstream devices do not. In this
case, IFIT can provide performance measurement on that single device. The native IP
scenario is used as an example. On the network shown in Figure 1-78, DeviceA is
a standalone IFIT-capable device.
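Whatever the measurement span, the per-interval arithmetic is the same: packet loss is the difference between the packet counts taken at the two measurement points, and average delay is derived from the corresponding timestamp sums. The following Python sketch shows that arithmetic; the counter names are illustrative, and the delay formula assumes the same packet population is observed at both points.
def interval_stats(in_pkts: int, out_pkts: int, in_ts_sum_ns: int, out_ts_sum_ns: int):
    """Per-interval loss and average delay between two measurement points."""
    lost = in_pkts - out_pkts
    loss_rate = lost / in_pkts if in_pkts else 0.0
    avg_delay_ns = (out_ts_sum_ns - in_ts_sum_ns) / out_pkts if out_pkts else None
    return lost, loss_rate, avg_delay_ns

# 1000 packets in, 998 out, about 50us average transit:
print(interval_stats(1000, 998, 0, 998 * 50_000))  # (2, 0.002, 50000.0)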
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each involved interface so
that all devices can communicate at the network layer.
2. Create a static IFIT flow on DeviceA.
3. Configure single-device measurement on DeviceA to periodically collect
statistics on the packet loss rate and transmission delay on the bearer
network.
4. Configure the device to send statistics to the NMS through telemetry.
Data Preparation
To complete the configuration, you need the following data:
● IP address of each interface as listed in Figure 1-78
● IFIT instance ID (1) and measurement interval (30s)
● Target flow's source IP address (10.1.1.1) and destination IP address (10.2.1.1)
● NMS's IPv4 address (192.168.100.100) and port number (10001), and
reachable routes between the NMS and devices
Procedure
Step 1 Configure an IP address and a routing protocol for each involved interface so that
all devices can communicate at the network layer. The configuration details are
not provided here.
Step 2 Create a static IFIT flow on DeviceA. For configuration details, see
Configuration Files.
Step 3 Configure single-device measurement on DeviceA. For configuration details,
see Configuration Files.
# Run the display ifit static command to check the configuration and status of
DeviceA.
[~DeviceA] display ifit static instance 1
-------------------------------------------------------------------------
Flow Classification : static
Instance Id : 1
Instance Name : 1
Instance Type : instance
Flow Id : 1835009
Flow Monitor Id : 786433
Flow Node Id : 1
Flow Type : unidirectional
Source IP Address/Mask Length : 10.1.1.1/32
Destination IP Address/Mask Length : 10.2.1.1/32
Protocol : any
Source Port : any
Destination Port : any
Gtp : disable
Gtp TeId : --
Dscp : 63
Interface : GigabitEthernet1/0/0
vpn-instance : --
Measure State : enable
Loss Measure : enable
Delay Measure : enable
Delay Per packet Measure : disable
Disorder Measure : disable
Gtpu Sequence Measure : disable
Single Device Measure : enable
Measure Mode : e2e
Interval : 30(s)
Tunnel Type : --
Flow Match Priority : 0
Flow InstType Priority : 9
Step 4 Configure the device to send statistics to the NMS through telemetry.
[~DeviceA] telemetry
[~DeviceA-telemetry] destination-group ifit
[*DeviceA-telemetry-destination-group-ifit] ipv4-address 192.168.100.100 port 10001 protocol grpc
[*DeviceA-telemetry-destination-group-ifit] quit
[*DeviceA-telemetry] sensor-group ifit
[*DeviceA-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/
flow-statistic
[*DeviceA-telemetry-sensor-group-ifit-path] quit
[*DeviceA-telemetry-sensor-group-ifit] quit
[*DeviceA-telemetry] subscription ifit
[*DeviceA-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*DeviceA-telemetry-subscription-ifit] destination-group ifit
[*DeviceA-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
Configuration Files
● DeviceA configuration file
#
sysname DeviceA
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.2.1 255.255.255.0
#
ifit
node-id 1
instance 1
flow unidirectional source 10.1.1.1 destination 10.2.1.1 dscp 63
binding interface GigabitEthernet1/0/0
single-device-measure enable
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-statistics/flow-statistic
#
destination-group ifit
ipv4-address 192.168.100.100 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
Networking Requirements
On the network shown in Figure 1-79, a bidirectional SRv6 TE flow group is
deployed between PE1 and PE2 to carry EVPN L3VPNv4 services. The paths PE1-
P1-PE2 and PE1-P2-PE2 are two candidate paths for service flow forwarding. After
the packet loss and delay indicators of each path are measured using IFIT, TE-
Class-based traffic diversion can be used to implement intelligent traffic steering.
Interface1, interface2, and interface3 in this example represent GE 1/0/0, GE 2/0/0, and GE
3/0/0, respectively.
Configuration Roadmap
The configuration roadmap is as follows:
1. Enable IPv6 forwarding and configure an IPv6 address for each interface on
PE1, P1, P2, and PE2.
2. Enable IS-IS, configure an IS-IS level, and specify a network entity title (NET)
on PE1, P1, P2, and PE2.
3. Configure a VPN instance in the IPv4 address family on PE1 and PE2.
4. Establish an EBGP peer relationship between each PE and its connected CE.
5. Establish a BGP EVPN peer relationship between PEs.
6. Configure SRv6 SIDs and enable IS-IS SRv6 on PE1, P1, P2, and PE2. In
addition, configure PE1 and PE2 to advertise VPN routes carrying SIDs.
7. Deploy an SRv6 TE Policy between PE1 and PE2.
8. Configure a TE-Class value on PE1 and PE2.
9. Configure SPR on PE1 and PE2.
10. Configure PE1 and PE2 to use TE-Class to divert traffic to the SRv6 TE Policy.
11. Configure a tunnel policy on PE1 and PE2 to preferentially use the SRv6 TE
flow group for VPN traffic import.
12. Configure IFIT measurement for the SRv6 TE Policy on PE1 and PE2.
13. Configure an IFIT instance on PE1 and PE2.
14. Configure PE1 and PE2 to use telemetry (YANG model sampling) to report
measurement data.
Data Preparation
To complete the configuration, you need the following data:
● IPv6 address of each interface on PE1, P1, P2, and PE2
● IS-IS process ID of each device (PE1, P1, P2, and PE2)
● IS-IS level of each device (PE1, P1, P2, and PE2)
● VPN instance names, RDs, and RTs on PE1 and PE2
● NMS's IPv4 address (192.168.1.1) and port number (10001), and reachable
routes between the NMS and devices
Procedure
Step 1 Enable IPv6 forwarding and configure an IPv6 address for each interface. The
configurations of P1, P2, and PE2 are similar to the configuration of PE1. For
configuration details, see Configuration Files in this section.
<HUAWEI> system-view
[~HUAWEI] sysname PE1
[*HUAWEI] commit
[~PE1] interface GigabitEthernet 1/0/0
[~PE1-GigabitEthernet1/0/0] ipv6 enable
[*PE1-GigabitEthernet1/0/0] ipv6 address 2001:DB8:11::1 96
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface GigabitEthernet 3/0/0
[*PE1-GigabitEthernet3/0/0] ipv6 enable
[*PE1-GigabitEthernet3/0/0] ipv6 address 2001:DB8:13::1 96
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] interface LoopBack 1
[*PE1-LoopBack1] ipv6 enable
[*PE1-LoopBack1] ipv6 address 2001:DB8:1::1 128
[*PE1-LoopBack1] quit
[*PE1] commit
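Step 2 Enable IS-IS, configure an IS-IS level, and specify a network entity title
(NET) on PE1, P1, P2, and PE2.
# Configure PE1. The commands follow the PE1 configuration file at the end of
this example.
[~PE1] isis 1
[*PE1-isis-1] is-level level-1
[*PE1-isis-1] cost-style wide
[*PE1-isis-1] network-entity 10.0000.0000.0001.00
[*PE1-isis-1] ipv6 enable topology ipv6
[*PE1-isis-1] quit
[*PE1] interface GigabitEthernet 1/0/0
[*PE1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE1-GigabitEthernet1/0/0] quit
[*PE1] interface GigabitEthernet 3/0/0
[*PE1-GigabitEthernet3/0/0] isis ipv6 enable 1
[*PE1-GigabitEthernet3/0/0] quit
[*PE1] interface loopback1
[*PE1-LoopBack1] isis ipv6 enable 1
[*PE1-LoopBack1] commit
[~PE1-LoopBack1] quit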
# Configure P1.
[~P1] isis 1
[*P1-isis-1] is-level level-1
[*P1-isis-1] cost-style wide
[*P1-isis-1] network-entity 10.0000.0000.0002.00
[*P1-isis-1] ipv6 enable topology ipv6
[*P1-isis-1] quit
[*P1] interface GigabitEthernet 1/0/0
[*P1-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P1-GigabitEthernet1/0/0] quit
[*P1] interface GigabitEthernet 2/0/0
[*P1-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P1-GigabitEthernet2/0/0] quit
[*P1] interface loopback1
[*P1-LoopBack1] isis ipv6 enable 1
[*P1-LoopBack1] commit
[~P1-LoopBack1] quit
# Configure PE2.
[~PE2] isis 1
[*PE2-isis-1] is-level level-1
[*PE2-isis-1] cost-style wide
[*PE2-isis-1] network-entity 10.0000.0000.0003.00
[*PE2-isis-1] ipv6 enable topology ipv6
[*PE2-isis-1] quit
[*PE2] interface GigabitEthernet 1/0/0
[*PE2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet1/0/0] quit
[*PE2] interface GigabitEthernet 3/0/0
[*PE2-GigabitEthernet3/0/0] isis ipv6 enable 1
[*PE2-GigabitEthernet3/0/0] quit
[*PE2] interface loopback1
[*PE2-LoopBack1] isis ipv6 enable 1
[*PE2-LoopBack1] commit
[~PE2-LoopBack1] quit
# Configure P2.
[~P2] isis 1
[*P2-isis-1] is-level level-1
[*P2-isis-1] cost-style wide
[*P2-isis-1] network-entity 10.0000.0000.0004.00
[*P2-isis-1] ipv6 enable topology ipv6
[*P2-isis-1] quit
[*P2] interface GigabitEthernet 1/0/0
[*P2-GigabitEthernet1/0/0] isis ipv6 enable 1
[*P2-GigabitEthernet1/0/0] quit
[*P2] interface GigabitEthernet 2/0/0
[*P2-GigabitEthernet2/0/0] isis ipv6 enable 1
[*P2-GigabitEthernet2/0/0] quit
[*P2] interface loopback1
[*P2-LoopBack1] isis ipv6 enable 1
[*P2-LoopBack1] commit
[~P2-LoopBack1] quit
After the configuration is complete, run the display isis peer command to check
whether IS-IS has been configured successfully.
Step 3 Configure a VPN instance on each PE and enable the IPv4 address family for the
instance.
# Configure PE1.
[~PE1] ip vpn-instance vpna
[*PE1-vpn-instance-vpna] ipv4-family
[*PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[*PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both evpn
[*PE1-vpn-instance-vpna-af-ipv4] quit
[*PE1-vpn-instance-vpna] quit
[*PE1] interface GigabitEthernet 2/0/0
[*PE1-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE1-GigabitEthernet2/0/0] ip address 10.1.1.1 24
[*PE1-GigabitEthernet2/0/0] quit
[*PE1] commit
# Configure PE2.
[~PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[*PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both evpn
[*PE2-vpn-instance-vpna-af-ipv4] quit
[*PE2-vpn-instance-vpna] quit
[*PE2] interface GigabitEthernet 2/0/0
[*PE2-GigabitEthernet2/0/0] ip binding vpn-instance vpna
[*PE2-GigabitEthernet2/0/0] ip address 10.2.1.1 24
[*PE2-GigabitEthernet2/0/0] quit
[*PE2] commit
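Step 4 Establish an EBGP peer relationship between each PE and its connected
CE.
# Configure CE1. The commands follow the CE1 configuration file at the end of
this example.
[~CE1] interface loopback 1
[*CE1-LoopBack1] ip address 11.11.11.11 32
[*CE1-LoopBack1] quit
[*CE1] bgp 65410
[*CE1-bgp] router-id 11.11.11.11
[*CE1-bgp] peer 10.1.1.1 as-number 100
[*CE1-bgp] import-route direct
[*CE1-bgp] quit
[*CE1] commit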
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] router-id 1.1.1.1
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] peer 10.1.1.2 as-number 65410
[*PE1-bgp-vpna] import-route direct
[*PE1-bgp-vpna] advertise l2vpn evpn
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
[~PE1-bgp] quit
# Configure CE2.
[~CE2] interface loopback 1
[*CE2-LoopBack1] ip address 22.22.22.22 32
[*CE2-LoopBack1] quit
[*CE2] bgp 65420
[*CE2-bgp] router-id 22.22.22.22
[*CE2-bgp] peer 10.2.1.1 as-number 100
[*CE2-bgp] import-route direct
[*CE2-bgp] quit
[*CE2] commit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] router-id 2.2.2.2
[*PE2-bgp] ipv4-family vpn-instance vpna
[*PE2-bgp-vpna] peer 10.2.1.2 as-number 65420
[*PE2-bgp-vpna] import-route direct
[*PE2-bgp-vpna] advertise l2vpn evpn
[*PE2-bgp-vpna] commit
[~PE2-bgp-vpna] quit
[~PE2-bgp] quit
After the configuration is complete, run the display bgp vpnv4 vpn-instance peer
command to check the EBGP peer relationship established between PE1 and CE1.
Step 5 Establish a BGP EVPN peer relationship between the PEs.
# Configure PE1.
[~PE1] bgp 100
[*PE1-bgp] peer 2001:DB8:3::3 as-number 100
[*PE1-bgp] peer 2001:DB8:3::3 connect-interface loopback 1
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 enable
[*PE1-bgp-af-evpn] commit
[~PE1-bgp-af-evpn] quit
[~PE1-bgp] quit
# Configure PE2.
[~PE2] bgp 100
[*PE2-bgp] peer 2001:DB8:1::1 as-number 100
[*PE2-bgp] peer 2001:DB8:1::1 connect-interface loopback 1
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 enable
[*PE2-bgp-af-evpn] commit
[~PE2-bgp-af-evpn] quit
[~PE2-bgp] quit
After completing the configuration, run the display bgp evpn peer command to
check the BGP EVPN peer relationship established between PEs.
Step 6 Configure SRv6 SIDs, and configure the PEs to advertise VPN routes carrying SIDs.
# Configure PE1.
[~PE1] segment-routing ipv6
[*PE1-segment-routing-ipv6] encapsulation source-address 2001:DB8:1::1
[*PE1-segment-routing-ipv6] locator as1 ipv6-prefix 2001:DB8:100:: 64 static 32
[*PE1-segment-routing-ipv6-locator] opcode ::100 end psp
[*PE1-segment-routing-ipv6-locator] opcode ::200 end no-flavor
[*PE1-segment-routing-ipv6-locator] quit
[*PE1-segment-routing-ipv6] quit
[*PE1] bgp 100
[*PE1-bgp] l2vpn-family evpn
[*PE1-bgp-af-evpn] peer 2001:DB8:3::3 advertise encap-type srv6
[*PE1-bgp-af-evpn] quit
[*PE1-bgp] ipv4-family vpn-instance vpna
[*PE1-bgp-vpna] segment-routing ipv6 traffic-engineer best-effort evpn
[*PE1-bgp-vpna] segment-routing ipv6 locator as1 evpn
[*PE1-bgp-vpna] commit
[~PE1-bgp-vpna] quit
[~PE1-bgp] quit
[~PE1] isis 1
[*PE1-isis-1] segment-routing ipv6 locator as1 auto-sid-disable
[*PE1-isis-1] commit
[~PE1-isis-1] quit
# Configure P1.
[~P1] segment-routing ipv6
[*P1-segment-routing-ipv6] encapsulation source-address 2001:DB8:2::2
[*P1-segment-routing-ipv6] locator as1 ipv6-prefix 2001:DB8:200:: 64 static 32
[*P1-segment-routing-ipv6-locator] opcode ::100 end psp
[*P1-segment-routing-ipv6-locator] opcode ::200 end no-flavor
[*P1-segment-routing-ipv6-locator] quit
[*P1-segment-routing-ipv6] quit
[*P1] isis 1
[*P1-isis-1] segment-routing ipv6 locator as1 auto-sid-disable
[*P1-isis-1] commit
[~P1-isis-1] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] encapsulation source-address 2001:DB8:3::3
[*PE2-segment-routing-ipv6] locator as1 ipv6-prefix 2001:DB8:300:: 64 static 32
[*PE2-segment-routing-ipv6-locator] opcode ::100 end psp
[*PE2-segment-routing-ipv6-locator] opcode ::200 end no-flavor
[*PE2-segment-routing-ipv6-locator] quit
[*PE2-segment-routing-ipv6] quit
[*PE2] commit
The BGP and IS-IS SRv6 configurations of PE2 are similar to those of PE1.
# Configure P2.
[~P2] segment-routing ipv6
[*P2-segment-routing-ipv6] encapsulation source-address 2001:DB8:4::4
[*P2-segment-routing-ipv6] locator as1 ipv6-prefix 2001:DB8:400:: 64 static 32
[*P2-segment-routing-ipv6-locator] opcode ::100 end psp
[*P2-segment-routing-ipv6-locator] opcode ::200 end no-flavor
[*P2-segment-routing-ipv6-locator] quit
[*P2-segment-routing-ipv6] quit
[*P2] isis 1
[*P2-isis-1] segment-routing ipv6 locator as1 auto-sid-disable
[*P2-isis-1] commit
[~P2-isis-1] quit
Step 7 Deploy an SRv6 TE Policy between PE1 and PE2. The following shows the
configuration of PE2; the configuration of PE1 is similar. For details, see
Configuration Files in this section.
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] segment-list list1
[*PE2-segment-routing-ipv6-segment-list-list1] index 5 sid ipv6 2001:DB8:200::100
[*PE2-segment-routing-ipv6-segment-list-list1] index 10 sid ipv6 2001:DB8:100::100
[*PE2-segment-routing-ipv6-segment-list-list1] quit
[*PE2-segment-routing-ipv6] segment-list list2
[*PE2-segment-routing-ipv6-segment-list-list2] index 5 sid ipv6 2001:DB8:400::100
[*PE2-segment-routing-ipv6-segment-list-list2] index 10 sid ipv6 2001:DB8:100::100
[*PE2-segment-routing-ipv6-segment-list-list2] quit
[*PE2-segment-routing-ipv6] srv6-te-policy locator as1
[*PE2-segment-routing-ipv6] srv6-te policy policy1 endpoint 2001:DB8:1::1 color 10
[*PE2-segment-routing-ipv6-policy-policy1] binding-sid 2001:DB8:300::900
[*PE2-segment-routing-ipv6-policy-policy1] candidate-path preference 100
[*PE2-segment-routing-ipv6-policy-policy1-path] segment-list list1 binding-sid 2001:DB8:300::800
reverse-binding-sid 2001:DB8:100::800
[*PE2-segment-routing-ipv6-policy-policy1-path] quit
[*PE2-segment-routing-ipv6-policy-policy1] quit
[*PE2-segment-routing-ipv6] srv6-te policy policy2 endpoint 2001:DB8:1::1 color 20
[*PE2-segment-routing-ipv6-policy-policy2] binding-sid 2001:DB8:300::901
[*PE2-segment-routing-ipv6-policy-policy2] candidate-path preference 100
[*PE2-segment-routing-ipv6-policy-policy2-path] segment-list list2 binding-sid 2001:DB8:300::801
reverse-binding-sid 2001:DB8:100::801
[*PE2-segment-routing-ipv6-policy-policy2-path] commit
[~PE2-segment-routing-ipv6-policy-policy2-path] quit
[~PE2-segment-routing-ipv6-policy-policy2] quit
[~PE2-segment-routing-ipv6] quit
After completing the configuration, run the display srv6-te policy command to
check SRv6 TE Policy information.
Step 8 Configure a TE-Class value.
# Configure PE1.
[~PE1] acl number 3333
[*PE1-acl-advance-3333] rule 5 permit ip source 11.11.11.11 0 destination 22.22.22.22 0
[*PE1-acl-advance-3333] rule 10 permit ip source 22.22.22.22 0 destination 11.11.11.11 0
[*PE1-acl-advance-3333] commit
[~PE1-acl-advance-3333] quit
[~PE1] traffic classifier c1
[*PE1-classifier-c1] if-match acl 3333
[*PE1-classifier-c1] commit
[~PE1-classifier-c1] quit
[~PE1] traffic behavior b1
[*PE1-behavior-b1] remark te-class 1
[*PE1-behavior-b1] commit
[~PE1-behavior-b1] quit
[~PE1] traffic policy p1
[*PE1-trafficpolicy-p1] classifier c1 behavior b1
[*PE1-trafficpolicy-p1] share-mode
[*PE1-trafficpolicy-p1] statistics enable
[*PE1-trafficpolicy-p1] quit
[*PE1] interface GigabitEthernet 2/0/0
[*PE1-GigabitEthernet2/0/0] traffic-policy p1 inbound
[*PE1-GigabitEthernet2/0/0] commit
[~PE1-GigabitEthernet2/0/0] quit
# Configure PE2.
[~PE2] acl number 3333
[*PE2-acl-advance-3333] rule 5 permit ip source 22.22.22.22 0 destination 11.11.11.11 0
[*PE2-acl-advance-3333] rule 10 permit ip source 11.11.11.11 0 destination 22.22.22.22 0
[*PE2-acl-advance-3333] commit
[~PE2-acl-advance-3333] quit
[~PE2] traffic classifier c1
[*PE2-classifier-c1] if-match acl 3333
[*PE2-classifier-c1] commit
[~PE2-classifier-c1] quit
[~PE2] traffic behavior b1
[*PE2-behavior-b1] remark te-class 1
[*PE2-behavior-b1] commit
[~PE2-behavior-b1] quit
[~PE2] traffic policy p1
[*PE2-trafficpolicy-p1] classifier c1 behavior b1
[*PE2-trafficpolicy-p1] share-mode
[*PE2-trafficpolicy-p1] statistics enable
[*PE2-trafficpolicy-p1] quit
[*PE2] interface GigabitEthernet 2/0/0
[*PE2-GigabitEthernet2/0/0] traffic-policy p1 inbound
[*PE2-GigabitEthernet2/0/0] commit
[~PE2-GigabitEthernet2/0/0] quit
Step 9 Configure SPR on PE1 and PE2. The following shows the configuration of
PE2; the configuration of PE1 is similar.
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] smart-policy-route
[*PE2-segment-routing-ipv6-spr] spr-policy spr1
[*PE2-segment-routing-ipv6-spr-policy-spr1] delay threshold 1000
[*PE2-segment-routing-ipv6-spr-policy-spr1] loss threshold 300
[*PE2-segment-routing-ipv6-spr-policy-spr1] jitter threshold 1000
[*PE2-segment-routing-ipv6-spr-policy-spr1] cmi threshold 5000
[*PE2-segment-routing-ipv6-spr-policy-spr1] srv6-te-policy color 10 priority 1
[*PE2-segment-routing-ipv6-spr-policy-spr1] srv6-te-policy color 20 priority 2
[*PE2-segment-routing-ipv6-spr-policy-spr1] commit
[~PE2-segment-routing-ipv6-spr-policy-spr1] quit
[~PE2-segment-routing-ipv6-spr] quit
[~PE2-segment-routing-ipv6] quit
Step 10 Configure PE1 and PE2 to use TE-Class to divert traffic to the SRv6 TE
Policy. The following shows the configuration of PE2; the configuration of PE1 is
similar.
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] mapping-policy p1 color 101
[*PE2-segment-routing-ipv6-mapping-policy-p1] match-type te-class
[*PE2-segment-routing-ipv6-mapping-policy-p1-te-class] index 10 te-class 1 match spr-policy spr1
[*PE2-segment-routing-ipv6-mapping-policy-p1-te-class] commit
[~PE2-segment-routing-ipv6-mapping-policy-p1-te-class] quit
[~PE2-segment-routing-ipv6-mapping-policy-p1] quit
[~PE2-segment-routing-ipv6] quit
Step 11 Configure a route policy to add the color extended community to routes
and a tunnel policy to preferentially use the SRv6 TE flow group for VPN traffic
import. The following shows the configuration of PE2; the configuration of PE1
is similar.
# Configure PE2.
[~PE2] route-policy p1 permit node 10
[*PE2-route-policy] apply extcommunity color 0:101
[*PE2-route-policy] quit
[*PE2] bgp 100
[*PE2-bgp] l2vpn-family evpn
[*PE2-bgp-af-evpn] peer 2001:DB8:1::1 route-policy p1 import
[*PE2-bgp-af-evpn] quit
[*PE2-bgp] quit
[*PE2] tunnel-policy p1
[*PE2-tunnel-policy-p1] tunnel select-seq ipv6 srv6-te-flow-group load-balance-number 1
[*PE2-tunnel-policy-p1] quit
[*PE2] ip vpn-instance vpna
[*PE2-vpn-instance-vpna] ipv4-family
[*PE2-vpn-instance-vpna-af-ipv4] tnl-policy p1 evpn
[*PE2-vpn-instance-vpna-af-ipv4] commit
[~PE2-vpn-instance-vpna-af-ipv4] quit
[~PE2-vpn-instance-vpna] quit
After completing the configuration, run the display srv6-te flow-group command
to check the SRv6 TE flow group status. You can also run the display ip routing-
table vpn-instance command to check the IPv4 routing table of a VPN instance
and check whether VPN routes are successfully recursed to the SRv6 TE Policy.
Step 12 Configure IFIT measurement for the SRv6 TE Policy.
# Configure PE1.
[~PE1] segment-routing ipv6
[*PE1-segment-routing-ipv6] srv6-te policy policy1 endpoint 2001:DB8:3::3 color 10
[*PE1-segment-routing-ipv6-policy-policy1] ifit loss-measure enable
[*PE1-segment-routing-ipv6-policy-policy1] ifit delay-measure enable
[*PE1-segment-routing-ipv6-policy-policy1] quit
[*PE1-segment-routing-ipv6] srv6-te policy policy2 endpoint 2001:DB8:3::3 color 20
[*PE1-segment-routing-ipv6-policy-policy2] ifit loss-measure enable
[*PE1-segment-routing-ipv6-policy-policy2] ifit delay-measure enable
[*PE1-segment-routing-ipv6-policy-policy2] commit
[~PE1-segment-routing-ipv6-policy-policy2] quit
[~PE1-segment-routing-ipv6] quit
# Configure PE2.
[~PE2] segment-routing ipv6
[*PE2-segment-routing-ipv6] srv6-te policy policy1 endpoint 2001:DB8:1::1 color 10
[*PE2-segment-routing-ipv6-policy-policy1] ifit loss-measure enable
[*PE2-segment-routing-ipv6-policy-policy1] ifit delay-measure enable
[*PE2-segment-routing-ipv6-policy-policy1] quit
[*PE2-segment-routing-ipv6] srv6-te policy policy2 endpoint 2001:DB8:1::1 color 20
[*PE2-segment-routing-ipv6-policy-policy2] ifit loss-measure enable
[*PE2-segment-routing-ipv6-policy-policy2] ifit delay-measure enable
[*PE2-segment-routing-ipv6-policy-policy2] commit
[~PE2-segment-routing-ipv6-policy-policy2] quit
[~PE2-segment-routing-ipv6] quit
Step 13 Configure an IFIT instance on PE1 and PE2. The following shows the DCP
configuration of PE2; PE1 works in MCP mode (see the PE1 configuration file).
# Configure PE2.
[~PE2] ifit
[*PE2-ifit] node-id 20
[*PE2-ifit] work-mode dcp
[*PE2-ifit-work-mode-dcp] service-type srv6-segment-list
[*PE2-ifit-work-mode-dcp] commit
[~PE2-ifit-work-mode-dcp] quit
[~PE2-ifit] quit
Step 14 Configure the device to send statistics to the NMS through telemetry. The
following example uses the configuration on PE1. The configuration of PE2 is
similar to the configuration of PE1.
[~PE1] telemetry
[~PE1-telemetry] destination-group ifit
[*PE1-telemetry-destination-group-ifit] ipv4-address 192.168.1.1 port 10001 protocol grpc
[*PE1-telemetry-destination-group-ifit] quit
[*PE1-telemetry] sensor-group ifit
[*PE1-telemetry-sensor-group-ifit] sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-sr-policy-
statistics/flow-sr-policy-statistic
[*PE1-telemetry-sensor-group-ifit-path] quit
[*PE1-telemetry-sensor-group-ifit] quit
[*PE1-telemetry] subscription ifit
[*PE1-telemetry-subscription-ifit] sensor-group ifit sample-interval 0
[*PE1-telemetry-subscription-ifit] destination-group ifit
[*PE1-telemetry-subscription-ifit] commit
NOTE
You are advised to configure devices to send data using a secure TLS encryption mode. For
details, see Telemetry Configuration.
----End
(Each record in the verification output shows 0 lost packets and a loss ratio of 0, with status OK.)
Info: The actual loss ratio is the value displayed in the Loss-Ratio column divided by 10^6. For example, a displayed value of 1000 indicates a loss ratio of 0.001, that is, 0.1%.
Configuration Files
● PE1
#
sysname PE1
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
tnl-policy p1 evpn
apply-label per-instance
vpn-target 111:1 export-extcommunity evpn
vpn-target 111:1 import-extcommunity evpn
#
acl number 3333
rule 5 permit ip source 11.11.11.11 0 destination 22.22.22.22 0
rule 10 permit ip source 22.22.22.22 0 destination 11.11.11.11 0
#
traffic classifier c1
if-match acl 3333
#
traffic behavior b1
remark te-class 1
#
traffic policy p1
share-mode
statistics enable
classifier c1 behavior b1 precedence 1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:1::1
locator as1 ipv6-prefix 2001:DB8:100:: 64 static 32
opcode ::100 end psp
opcode ::200 end no-flavor
srv6-te-policy locator as1
segment-list list1
index 5 sid ipv6 2001:DB8:200::100
index 10 sid ipv6 2001:DB8:300::100
segment-list list2
index 5 sid ipv6 2001:DB8:400::100
index 10 sid ipv6 2001:DB8:300::100
srv6-te policy policy1 endpoint 2001:DB8:3::3 color 10
binding-sid 2001:DB8:100::900
ifit loss-measure enable
ifit delay-measure enable
candidate-path preference 100
segment-list list1 binding-sid 2001:DB8:100::800 reverse-binding-sid 2001:DB8:300::800
srv6-te policy policy2 endpoint 2001:DB8:3::3 color 20
binding-sid 2001:DB8:100::901
ifit loss-measure enable
ifit delay-measure enable
candidate-path preference 100
segment-list list2 binding-sid 2001:DB8:100::801 reverse-binding-sid 2001:DB8:300::801
smart-policy-route
spr-policy spr1
delay threshold 1000
jitter threshold 1000
loss threshold 300
cmi threshold 5000
srv6-te-policy color 10 priority 1
srv6-te-policy color 20 priority 2
mapping-policy p1 color 101
match-type te-class
index 10 te-class 1 match spr-policy spr1
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0001.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:11::1/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpna
ip address 10.1.1.1 255.255.255.0
traffic-policy p1 inbound
#
interface GigabitEthernet3/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:13::1/96
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:1::1/128
isis ipv6 enable 1
#
bgp 100
router-id 1.1.1.1
peer 2001:DB8:3::3 as-number 100
peer 2001:DB8:3::3 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
#
ipv4-family vpn-instance vpna
import-route direct
advertise l2vpn evpn
segment-routing ipv6 locator as1 evpn
segment-routing ipv6 traffic-engineer best-effort evpn
peer 10.1.1.2 as-number 65410
#
l2vpn-family evpn
policy vpn-target
peer 2001:DB8:3::3 enable
peer 2001:DB8:3::3 route-policy p1 import
peer 2001:DB8:3::3 advertise encap-type srv6
#
ifit
node-id 10
work-mode mcp
service-type srv6-segment-list
#
route-policy p1 permit node 10
apply extcommunity color 0:101
#
tunnel-policy p1
tunnel select-seq ipv6 srv6-te-flow-group load-balance-number 1
#
telemetry
#
sensor-group ifit
sensor-path huawei-ifit:ifit/huawei-ifit-statistics:flow-sr-policy-statistics/flow-sr-policy-statistic
#
destination-group ifit
ipv4-address 192.168.1.1 port 10001 protocol grpc
#
subscription ifit
sensor-group ifit sample-interval 0
destination-group ifit
#
return
● P1
#
sysname P1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:2::2
locator as1 ipv6-prefix 2001:DB8:200:: 64 static 32
opcode ::100 end psp
opcode ::200 end no-flavor
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0002.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:11::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:12::1/96
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:2::2/128
isis ipv6 enable 1
#
return
● PE2
#
sysname PE2
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 200:1
tnl-policy p1 evpn
apply-label per-instance
vpn-target 111:1 export-extcommunity evpn
vpn-target 111:1 import-extcommunity evpn
#
acl number 3333
rule 5 permit ip source 22.22.22.22 0 destination 11.11.11.11 0
rule 10 permit ip source 11.11.11.11 0 destination 22.22.22.22 0
#
traffic classifier c1
if-match acl 3333
#
traffic behavior b1
remark te-class 1
#
traffic policy p1
share-mode
statistics enable
classifier c1 behavior b1 precedence 1
#
segment-routing ipv6
encapsulation source-address 2001:DB8:3::3
● P2
#
sysname P2
#
segment-routing ipv6
encapsulation source-address 2001:DB8:4::4
locator as1 ipv6-prefix 2001:DB8:400:: 64 static 32
opcode ::100 end psp
opcode ::200 end no-flavor
#
isis 1
is-level level-1
cost-style wide
network-entity 10.0000.0000.0004.00
#
ipv6 enable topology ipv6
segment-routing ipv6 locator as1 auto-sid-disable
#
#
interface GigabitEthernet1/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:13::2/96
isis ipv6 enable 1
#
interface GigabitEthernet2/0/0
undo shutdown
ipv6 enable
ipv6 address 2001:DB8:14::1/96
isis ipv6 enable 1
#
interface LoopBack1
ipv6 enable
ipv6 address 2001:DB8:4::4/128
isis ipv6 enable 1
#
return
● CE1
#
sysname CE1
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.2 255.255.255.0
#
interface LoopBack1
ip address 11.11.11.11 255.255.255.255
#
bgp 65410
router-id 11.11.11.11
peer 10.1.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 10.1.1.1 enable
#
return
● CE2
#
sysname CE2
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.2.1.2 255.255.255.0
#
interface LoopBack1
ip address 22.22.22.22 255.255.255.255
#
bgp 65420
router-id 22.22.22.22
peer 10.2.1.1 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 10.2.1.1 enable
#
return
The enhanced Media Delivery Index (eMDI) allows you to obtain performance
indicators such as the packet loss rate, disorder rate, and jitter. eMDI is easy to
deploy, applies to all NEs, and provides high statistical precision.
Background
As multicast video services, such as IPTV, become more commonplace on carrier
networks and become an important source of revenue for carriers, monitoring the
quality of videos becomes more and more important. Packet loss, jitter, and
disorder are major factors that affect video quality. A packet loss rate and disorder
rate less than 0.01% may result in artifacts, pixelation, and similar problems
occurring on a terminal, while jitter may cause the terminal to display a black
screen. These problems negatively affect the quality of experience (QoE) of
services, and in turn affect carriers' revenue and reputation. Carriers therefore
urgently need a video quality monitoring and fault locating solution that monitors
and maintains service quality in real time and quickly demarcates faults and
clarifies responsibilities.
The eMDI solution is a quality monitoring and fault locating solution designed for
multicast video streams such as IPTV. It can monitor quality indicators such as the
packet loss rate, packet disorder rate, and jitter of real service packets in real time
and provides precise statistics and reliable data. The solution can be deployed on
each network node from the edge device to the core device. The detection results
of multiple nodes can be used to quickly locate faulty network segments.
Detection Principles
The eMDI detection solution is a distributed board detection solution. It supports
distributed detection for video streams of a specified multicast channel on a
specified board.
This solution supports detection only of UDP-based RTP video streams. The NP of
the board to be detected performs a validity check and an RTP check on the IP
header, UDP header, and RTP header of RTP packets and calculates the packet loss
rate and packet disorder rate based on the sequence number in the RTP header.
The NP then calculates jitter based on the timestamp in the RTP header, achieving
real-time monitoring of video quality.
The implementation process of eMDI detection is as follows:
1. The NMS delivers eMDI monitoring instructions to a device.
2. The device monitors eMDI indicators in real time.
3. The device periodically reports the monitored eMDI indicators and alarms to
the NMS.
4. The NMS displays eMDI indicators on its GUI and supports segment-based
fault demarcation and analysis.
Indicator Collection
eMDI can obtain monitoring data from a device on a regular basis and periodically
send the data to the NMS using various methods, such as telemetry. After analysis
is performed on the NMS, the monitoring data can be displayed in various forms,
such as a trend chart.
eMDI also supports reporting of alarms to the NMS. The alarm thresholds and the
number of times that alarms are suppressed can be configured as required.
Indicators
The detection indicators supported by eMDI include the packet loss rate (RTP-LR),
packet out-of-order rate (RTP-SER), and jitter. The packet loss rate and packet out-
of-order rate are calculated based on the sequence number in an RTP packet
header. The jitter is calculated based on the timestamp in an RTP packet header.
For details, see eMDI Detection Indicators.
Usage Scenario
As a distributed board detection solution, eMDI depends on the establishment of
a channel group and a board group and on the binding of the two groups. In
addition, if the jitter indicator needs to be detected or eMDI detection needs to
be supported on P nodes, configure jitter detection and P detection, respectively.
Context
Before configuring the multicast channels to be detected, create a channel group
and add the multicast channels to the channel group.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
eMDI detection is enabled and the eMDI view is displayed.
Step 3 Run emdi channel-group channel-group-name
An eMDI channel group is created or the view of an existing eMDI channel group
is displayed.
Step 4 Run either of the following commands:
● To add a specified multicast channel to the eMDI channel group in a BIER
eMDI scenario, run the emdi channel channel-name source source-address
group group-address vpn-instance vpn-instance-name sub-domain
sub-domain-value bsl bsl-value command.
● To add a specified multicast channel to the eMDI channel group in other
scenarios, run the emdi channel channel-name source source-address group
group-address [ vpn-instance vpn-instance-name | vlan vlan-id | vsi vsi-name
| bd bd-id | transit ] [ pt pt-value ] [ clock-rate clock-rate-value ]
[ uncompressed ] command.
Step 5 Run commit
The configuration is committed.
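For example, the following minimal sketch adds an IPTV channel to a channel
group. The device name is assumed; the group name and channel parameters are
taken from the eMDI configuration example later in this document.
[~DeviceA] emdi
[*DeviceA-emdi] emdi channel-group IPTV-channel
[*DeviceA-emdi-channel-group-IPTV-channel] emdi channel 1 source 10.1.4.100 group 225.1.1.1 pt 33 clock-rate 90kHz
[*DeviceA-emdi-channel-group-IPTV-channel] quit
[*DeviceA-emdi] quit
[*DeviceA] commit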
----End
Context
As a distributed board detection solution, the eMDI detection solution requires the
configuration of boards for eMDI detection. Create a board group and then bind
the eMDI-capable boards to the board group so that eMDI detection can be
performed on the boards.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
Step 3 Run emdi lpu-group lpu-group-name
An eMDI board group is created or the view of an existing eMDI board group is
displayed.
Step 4 Run emdi bind slot { slot-id | all }
Boards are bound to the eMDI board group.
Step 5 Run commit
The configuration is committed.
----End
Context
As a distributed board detection solution, eMDI requires the binding of a channel
group to a board group. After the channel group and board group are bound, the
board NP in the board group performs real-time monitoring of the video streams
of a specified channel in order to obtain detection indicators such as the packet
loss rate and packet out-of-order rate.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
Step 3 Run emdi bind channel-group channel-group-name lpu-group
lpu-group-name [ outbound ]
The eMDI channel group is bound to the eMDI board group.
NOTE
Before binding an eMDI channel group to a board group in the outbound direction, bind
the corresponding eMDI channel group to an eMDI board group in the downstream
direction.
Step 4 Run commit
The configuration is committed.
----End
Context
By default, the eMDI detection solution detects only the packet loss rate and
packet out-of-order rate. If the jitter indicator also needs to be detected, eMDI
jitter detection needs to be configured.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
Step 3 Run emdi rtp-jitter enable
Jitter detection is enabled.
Step 4 Run commit
The configuration is committed.
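For example, a minimal sketch assuming a device named DeviceA:
[~DeviceA] emdi
[*DeviceA-emdi] emdi rtp-jitter enable
[*DeviceA-emdi] commit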
----End
Context
Only NG MVPN networks support eMDI detection on video streams passing
through Ps. eMDI detection is disabled by default. To enable eMDI detection on Ps,
perform the following steps.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
Step 3 Run emdi match-mpls-label enable
eMDI detection on Ps is enabled.
Step 4 Run commit
The configuration is committed.
NOTE
eMDI detection takes effect on Ps only after both the emdi match-mpls-label enable and
emdi channel source source-address group group-address transit commands are run.
----End
Usage Scenario
After basic eMDI detection functions are configured, configure eMDI-related
attributes. The monitoring period determines the frequency of eMDI detection.
The alarm thresholds and the number of alarm suppression times determine the
frequency at which eMDI alarms are reported. Detection only on the rate of
video streams ensures the accuracy of detection indicators on an NG MVPN
network where the transit and bud nodes overlap.
Context
With eMDI, monitoring data can be obtained from a device on a regular basis and
periodically sent to uTraffic in various modes, such as telemetry. After analysis on
uTraffic, the monitoring data can be displayed in various forms, such as a trend
chart. To change the monitoring period, perform the following operations.
Procedure
Step 1 Run system-view
The system view is displayed.
----End
Context
In addition to monitoring various indicators of video streams, the eMDI detection
solution allows alarms to be reported to the NMS. eMDI alarm triggering is
determined by an alarm threshold and the number of alarm suppression times: if
the alarm threshold is M and the number of alarm suppression times is N, the
device reports an eMDI alarm to the NMS when an indicator reaches M for N
consecutive times. Therefore, to control the frequency at which eMDI alarms are
reported, configure a proper alarm threshold and number of alarm suppression
times.
NOTE
If statistics are all below the threshold within 60 consecutive detection intervals, the alarm
is automatically cleared.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
Step 3 Run emdi alarm { rtp-lr | rtp-ser } { sd | hd | 4k } threshold threshold-value
An alarm threshold for eMDI detection is configured.
Step 4 Run emdi alarm suppress times value
The number of eMDI alarm suppression times is configured.
Step 5 Run commit
The configuration is committed.
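For example, the following minimal sketch reports an RTP-LR alarm for HD
channels only after the threshold is crossed in three consecutive monitoring
periods. The device name, threshold value, and suppression count are illustrative.
[~DeviceA] emdi
[*DeviceA-emdi] emdi alarm rtp-lr hd threshold 100
[*DeviceA-emdi] emdi alarm suppress times 3
[*DeviceA-emdi] commit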
----End
Context
On an NG MVPN network where the transit and bud nodes overlap, two copies of
the same traffic are detected on one node, skewing the detection results of the
packet loss rate, packet out-of-order rate, and jitter. To avoid this deviation and
ensure the accuracy of detection results, enable eMDI detection only on the rate
of the video streams that pass through the node, rather than on the packet loss
rate, packet out-of-order rate, and jitter.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
Step 3 Run emdi monitor-rate-only
eMDI detection only on the rate of video streams is enabled.
Step 4 Run commit
The configuration is committed.
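For example, a minimal sketch assuming a device named DeviceA on which the
transit and bud nodes overlap:
[~DeviceA] emdi
[*DeviceA-emdi] emdi monitor-rate-only
[*DeviceA-emdi] commit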
----End
Context
In a BIER eMDI scenario, if no traffic has been detected in a multicast group
within a period of time, the corresponding ACL entry ages, and the traffic of the
multicast group is no longer detected. The interval between when the traffic rate
becomes 0 and when the ACL entry starts to age is considered as the aging period
of the eMDI BIER channels. You can perform the following steps to configure the
aging period.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run emdi
The eMDI view is displayed.
----End
Usage Scenario
After eMDI detection on video traffic is performed on a device, you can view the
eMDI statistics of a specified channel or all channels. To avoid interference from
irrelevant records, you can also clear the historical statistics of a specified channel
or all channels.
Procedure
● Run the display emdi statistics history channel [ channel-name ] [ start
start-index end end-index | latest-record record-number ] command to view
historical statistics about incoming traffic of a specified channel or all
channels.
● Run the display emdi statistics history outbound channel [ channel-name ]
[ start start-index end end-index | latest-record record-number ] slot slot-id
command to view historical statistics about outgoing traffic of a specified
channel or all channels.
● Run the display emdi statistics history bier channel [ source source-address
group group-address ] [ start start-index end end-index | latest-record
record-number ] slot slot-id command to view historical statistics about
incoming traffic of eMDI BIER channels on a specified board.
● Run the display emdi statistics history bier outbound channel [ source
source-address group group-address ] [ start start-index end end-index |
latest-record record-number ] slot slot-id command to view historical
statistics about outgoing traffic of eMDI BIER channels on a specified board.
● Run the reset emdi statistics history channel [ channel-name ] command to
clear historical statistics about incoming traffic of a specified channel or all
channels.
● Run the reset emdi statistics history outbound channel [ channel-name ]
slot slot-id command to clear historical statistics about outgoing traffic of a
specified channel or all channels.
● Run the reset emdi statistics history bier channel [ source source-address
group group-address ] slot slot-id command to clear statistics about
incoming traffic of eMDI BIER channels on a specified board.
● Run the reset emdi statistics history bier outbound channel [ source
source-address group group-address ] slot slot-id command to clear statistics
about outgoing traffic of eMDI BIER channels on a specified board.
----End
Networking Requirements
On the network shown in Figure 1-81, IPTV programs are provided for host users
in multicast mode. eMDI detection is deployed on Device A, Device B, Device C,
and Device D to monitor the quality of IPTV service packets. Network O&M
personnel can check the detection results reported by the devices through
telemetry in real time on the monitor platform, quickly demarcating and locating
faults.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign IP addresses to router interfaces and configure a unicast routing
protocol.
2. Configure PIM-DM.
3. Configure eMDI detection.
a. Configure eMDI to monitor a channel group.
b. Configure eMDI to monitor a board group.
c. Bind the channel group to the board group.
4. Configure telemetry.
Data Preparation
To complete the configuration, you need the following data:
● Multicast group G address: 225.1.1.1/24
● Multicast source S address: 10.1.4.100/24
● Version number of IGMP running between the router and user hosts: 2
● Name of the channel group monitored by eMDI: IPTV-channel
● Name of the board group monitored by eMDI: IPTV-lpu
Procedure
Step 1 Assign IP addresses to router interfaces and configure a unicast routing protocol.
For configuration details, see Configuration Files in this section.
Step 2 Configure PIM-DM.
● Enable multicast on each device and PIM-DM on each interface.
# Configure Device A.
<HUAWEI> system-view
[~HUAWEI] sysname DeviceA
[*HUAWEI] commit
[~DeviceA] multicast routing-enable
[*DeviceA] interface gigabitethernet 1/0/0
[*DeviceA-GigabitEthernet1/0/0] pim dm
[*DeviceA-GigabitEthernet1/0/0] quit
[*DeviceA] interface gigabitethernet 1/0/1
[*DeviceA-GigabitEthernet1/0/1] pim dm
[*DeviceA-GigabitEthernet1/0/1] quit
[*DeviceA] commit
The configurations of Device B, Device C, and Device D are similar to the
configuration of Device A. For configuration details, see Configuration Files.
● Configure IGMP on the router interfaces connected to the user hosts.
# Configure Device C.
[~DeviceC] interface gigabitethernet 1/0/0
[~DeviceC-GigabitEthernet1/0/0] igmp enable
[*DeviceC-GigabitEthernet1/0/0] igmp static-group 225.1.1.1
[*DeviceC-GigabitEthernet1/0/0] quit
[*DeviceC] commit
# Configure Device D.
[~DeviceD] interface gigabitethernet 1/0/1
[~DeviceD-GigabitEthernet1/0/1] igmp enable
[*DeviceD-GigabitEthernet1/0/1] igmp static-group 225.1.1.1
[*DeviceD-GigabitEthernet1/0/1] quit
[*DeviceD] commit
After completing the configuration, run the following commands to check whether
the multicast service is configured successfully.
● Run the display pim interface command to check the PIM-DM configuration
and status of each router interface. The following example uses the command
output on Device B.
<DeviceB> display pim interface
VPN-Instance: public net
Interface State NbrCnt HelloInt DR-Pri DR-Address
GE1/0/0 up 1 30 1 10.1.1.2 (local)
GE1/0/1 up 1 30 1 10.1.2.2
GE1/0/2 up 1 30 1 10.1.3.2
● Run the display pim neighbor command to check the PIM-DM neighbor
relationship between routers. The following example uses the command
output on Device B.
<DeviceB> display pim neighbor
VPN-Instance: public net
Total Number of Neighbors = 3
● Run the display pim routing-table command to check the PIM routing table
of each router. Assume that both user A and user B need to receive
information about multicast group G (225.1.1.1/24). When multicast source S
(10.1.4.100/24) sends multicast data to multicast group G, a multicast
distribution tree (MDT) is generated through flooding. Each router on the
MDT path has (S, G) entries. When user A and user B join multicast group G,
Device C and Device D generate (*, G) entries. The command output on each
router is as follows:
<DeviceA> display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.4.100, 225.1.1.1)
Protocol: pim-dm, Flag: LOC ACT
UpTime: 00:08:18
Upstream interface: GigabitEthernet1/0/1
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: pim-dm, UpTime: 00:08:18, Expires: never
<DeviceB> display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.4.100, 225.1.1.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:10:25
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: 10.1.1.1
RPF prime neighbor: 10.1.1.1
Downstream interface(s) information:
Total number of downstreams: 2
1: GigabitEthernet1/0/1
Protocol: pim-dm, UpTime: 00:06:48, Expires: never
2: GigabitEthernet1/0/2
Protocol: pim-dm, UpTime: 00:05:53, Expires: never
<DeviceC> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
Protocol: pim-dm, Flag: WC
UpTime: 00:11:47
Upstream interface: NULL
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: static, UpTime: 00:11:47, Expires: never
(10.1.4.100, 225.1.1.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:17:13
Upstream interface: GigabitEthernet1/0/1
Upstream neighbor: 10.1.2.1
RPF prime neighbor: 10.1.2.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/0
Protocol: pim-dm, UpTime: 00:11:47, Expires: -
<DeviceD> display pim routing-table
VPN-Instance: public net
Total 1 (*, G) entry; 1 (S, G) entry
(*, 225.1.1.1)
Protocol: pim-dm, Flag: WC
UpTime: 00:05:26
Upstream interface: NULL
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: static, UpTime: 00:05:26, Expires: never
(10.1.4.100, 225.1.1.1)
Protocol: pim-dm, Flag: ACT
UpTime: 00:09:58
Upstream interface: GigabitEthernet1/0/2
Upstream neighbor: 10.1.3.1
RPF prime neighbor: 10.1.3.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: pim-dm, UpTime: 00:05:26, Expires: -
After completing the configuration, run the display emdi statistics history
command to check the detection result when multicast traffic passes through
DeviceA.
● Check the detection result in the inbound direction.
<DeviceA> display emdi statistics history channel 1 start 3 end 5
Channel Name : 1
Total Records : 3 Latest Rate(pps) : 0 Latest Detect Time : 2021-02-18 21:22:40
------------------------------------------------------------------------------------------------------------------------------
Record  Record               Monitor    Monitor  Received  Rate    Rate        RTP-LC  RTP-SE  RTP-LR      RTP-SER     RTP-
Index   Time                 Period(s)  Status   Packets   pps     bps                         (1/100000)  (1/100000)  Jitter(ms)
------------------------------------------------------------------------------------------------------------------------------
3       2019-02-02:08-33-00  60         Normal   4393232   439323  4871215641  6700    6633    152         151         0
4       2019-02-02:08-32-00  60         Normal   4388533   438853  4866005390  6700    6633    152         151         0
5       2019-02-02:08-31-00  60         Normal   4388218   438821  4865656118  6700    6633    152         151         0
------------------------------------------------------------------------------------------------------------------------------
After completing the configuration, check the real-time detection result reported
through telemetry on the monitor platform.
----End
Configuration Files
● Device A configuration file
#
sysname DeviceA
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo portswitch
undo shutdown
ip address 10.1.1.1 255.255.255.0
pim dm
#
interface GigabitEthernet1/0/1
undo portswitch
undo shutdown
ip address 10.1.4.1 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.4.0 0.0.0.255
#
emdi
emdi channel-group IPTV-channel
emdi channel 1 source 10.1.4.100 group 225.1.1.1 pt 33 clock-rate 90kHz
emdi lpu-group _default_
emdi bind slot all
emdi lpu-group IPTV-lpu
emdi bind slot all
emdi bind channel-group IPTV-channel lpu-group IPTV-lpu
emdi bind channel-group IPTV-channel lpu-group IPTV-lpu outbound
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
sensor-path huawei-emdi:emdi/out-telem-reps/out-telem-rep
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc
#
subscription EMDI
sensor-group emdimonitor
destination-group Monitor
#
return
● Device B configuration file
#
sysname DeviceB
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo portswitch
undo shutdown
ip address 10.1.1.2 255.255.255.0
pim dm
#
interface GigabitEthernet1/0/1
undo portswitch
undo shutdown
ip address 10.1.2.1 255.255.255.0
pim dm
#
interface GigabitEthernet1/0/2
undo portswitch
undo shutdown
ip address 10.1.3.1 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.2.0 0.0.0.255
network 10.1.3.0 0.0.0.255
#
emdi
emdi channel-group IPTV-channel
emdi channel 1 source 10.1.4.100 group 225.1.1.1 pt 33 clock-rate 90kHz
emdi lpu-group _default_
emdi bind slot all
emdi lpu-group IPTV-lpu
emdi bind slot all
emdi bind channel-group IPTV-channel lpu-group IPTV-lpu
emdi bind channel-group IPTV-channel lpu-group IPTV-lpu outbound
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc
#
subscription EMDI
sensor-group emdimonitor
destination-group Monitor
#
return
● Device C configuration file
#
sysname DeviceC
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo portswitch
undo shutdown
ip address 10.1.5.1 255.255.255.0
pim dm
igmp enable
igmp static-group 225.1.1.1
#
interface GigabitEthernet1/0/1
undo portswitch
undo shutdown
ip address 10.1.2.2 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 10.1.2.0 0.0.0.255
network 10.1.5.0 0.0.0.255
#
emdi
emdi channel-group IPTV-channel
emdi channel 1 source 10.1.4.100 group 225.1.1.1 pt 33 clock-rate 90kHz
emdi lpu-group _default_
emdi bind slot all
emdi lpu-group IPTV-lpu
emdi bind slot all
emdi bind channel-group IPTV-channel lpu-group IPTV-lpu
emdi bind channel-group IPTV-channel lpu-group IPTV-lpu outbound
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc
#
subscription EMDI
sensor-group emdimonitor
destination-group Monitor
#
return
● Device D configuration file
#
sysname DeviceD
#
multicast routing-enable
#
interface GigabitEthernet1/0/1
undo portswitch
undo shutdown
ip address 10.1.6.1 255.255.255.0
pim dm
igmp enable
igmp static-group 225.1.1.1
#
interface GigabitEthernet1/0/2
undo portswitch
undo shutdown
ip address 10.1.3.2 255.255.255.0
pim dm
#
ospf 1
area 0.0.0.0
network 10.1.3.0 0.0.0.255
network 10.1.6.0 0.0.0.255
#
emdi
emdi channel-group IPTV-channel
emdi channel 1 source 10.1.4.100 group 225.1.1.1 pt 33 clock-rate 90kHz
emdi lpu-group _default_
emdi bind slot all
emdi lpu-group IPTV-lpu
emdi bind slot all
emdi bind channel-group IPTV-channel lpu-group IPTV-lpu
emdi bind channel-group IPTV-channel lpu-group IPTV-lpu outbound
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc
#
subscription EMDI
sensor-group emdimonitor
destination-group Monitor
#
return
Networking Requirements
On the network shown in Figure 1-82, a BGP MPLS/IP VPN over an MPLS LDP LSP
is deployed to carry unicast services, and an NG MVPN over an mLDP P2MP LSP is
deployed to carry multicast services. In addition, eMDI is deployed on the network
to monitor multicast service quality. Network maintenance personnel can check
real-time detection results reported through telemetry on the monitor platform,
quickly demarcating and locating faults.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a BGP MPLS/IP VPN over an MPLS LDP LSP to carry unicast services.
2. Configure an NG MVPN over an mLDP P2MP LSP to carry multicast services.
3. Configure PIM, IGMP, and a static RP.
4. Configure eMDI detection.
5. Configure telemetry.
Data Preparation
To complete the configuration, you need the following data:
● Public network OSPF process ID: 1; area ID: 0
● OSPF multi-instance process ID: 2; area ID: 0
● VPN instance name on PE1, PE2, and PE3: VPNA
● CE1: loopback address 1.1.1.1; AS number 65001
● PE1: MPLS LSR ID 2.2.2.2; MVPN ID 2.2.2.2; VPN instance RD 200:1; VPN
targets 3:3 and 4:4; AS number 100
● PE2: MPLS LSR ID 3.3.3.3; MVPN ID 3.3.3.3; VPN instance RD 300:1; VPN
target 3:3; AS number 100
● PE3: MPLS LSR ID 4.4.4.4; MVPN ID 4.4.4.4; VPN instance RD 400:1; VPN
target 4:4; AS number 100
Procedure
Step 1 Configure a BGP MPLS/IP VPN.
1. Assign an IP address to each interface of devices on the backbone network
and VPN sites.
Assign an IP address to each interface according to Figure 1-82. For
configuration details, see Configuration Files in this section.
2. Configure an IGP to interconnect devices on the backbone network.
OSPF is used in this example. For configuration details, see Configuration
Files in this section.
3. Configure basic MPLS functions and MPLS LDP on the backbone network to
establish LDP LSPs.
– # Configure PE1.
[~PE1] mpls lsr-id 2.2.2.2
[*PE1] mpls
[*PE1-mpls] quit
[*PE1] mpls ldp
[*PE1-mpls-ldp] quit
[*PE1] interface gigabitethernet1/0/0
[*PE1-GigabitEthernet1/0/0] mpls
[*PE1-GigabitEthernet1/0/0] mpls ldp
[*PE1-GigabitEthernet1/0/0] quit
– # Configure CE2.
[~CE2] ospf 2
[*CE2-ospf-2] area 0
[*CE2-ospf-2-area-0.0.0.0] network 192.168.2.0 0.0.0.255
[*CE2-ospf-2-area-0.0.0.0] network 10.1.4.0 0.0.0.255
[*CE2-ospf-2-area-0.0.0.0] network 5.5.5.5 0.0.0.0
[*CE2-ospf-2-area-0.0.0.0] quit
[*CE2-ospf-2] quit
[*CE2] commit
– # Configure CE3.
[~CE3] ospf 2
[*CE3-ospf-2] area 0
[*CE3-ospf-2-area-0.0.0.0] network 192.168.3.0 0.0.0.255
[*CE3-ospf-2-area-0.0.0.0] network 10.1.5.0 0.0.0.255
[*CE3-ospf-2-area-0.0.0.0] network 6.6.6.6 0.0.0.0
[*CE3-ospf-2-area-0.0.0.0] quit
[*CE3-ospf-2] quit
[*CE3] commit
● Configure PIM.
– # Configure PE1.
[*PE1] interface gigabitethernet1/0/1
[*PE1-GigabitEthernet1/0/1] pim sm
[*PE1-GigabitEthernet1/0/1] quit
[*PE1] commit
– # Configure CE1.
[~CE1] multicast routing-enable
[*CE1] interface gigabitethernet1/0/0
[*CE1-GigabitEthernet1/0/0] pim sm
[*CE1-GigabitEthernet1/0/0] quit
[*CE1] interface gigabitethernet1/0/1
[*CE1-GigabitEthernet1/0/1] pim sm
[*CE1-GigabitEthernet1/0/1] quit
[*CE1] commit
– # Configure PE2.
[*PE2] interface gigabitethernet1/0/1
[*PE2-GigabitEthernet1/0/1] pim sm
[*PE2-GigabitEthernet1/0/1] quit
[*PE2] commit
– # Configure CE2.
[~CE2] multicast routing-enable
[*CE2] interface gigabitethernet1/0/0
[*CE2-GigabitEthernet1/0/0] pim sm
[*CE2-GigabitEthernet1/0/0] quit
[*CE2] interface gigabitethernet1/0/1
[*CE2-GigabitEthernet1/0/1] pim sm
[*CE2-GigabitEthernet1/0/1] quit
[*CE2] commit
– # Configure PE3.
[*PE3] interface gigabitethernet1/0/1
[*PE3-GigabitEthernet1/0/1] pim sm
[*PE3-GigabitEthernet1/0/1] quit
[*PE3] commit
– # Configure CE3.
[~CE3] multicast routing-enable
[*CE3] interface gigabitethernet1/0/0
[*CE3-GigabitEthernet1/0/0] pim sm
[*CE3-GigabitEthernet1/0/0] quit
[*CE3] interface gigabitethernet1/0/1
[*CE3-GigabitEthernet1/0/1] pim sm
[*CE3-GigabitEthernet1/0/1] quit
[*CE3] commit
● Configure IGMP.
– # Configure CE2.
[~CE2] interface gigabitethernet1/0/1
[*CE2-GigabitEthernet1/0/1] pim sm
[*CE2-GigabitEthernet1/0/1] igmp enable
[*CE2-GigabitEthernet1/0/1] igmp version 3
[*CE2-GigabitEthernet1/0/1] commit
[~CE2-GigabitEthernet1/0/1] quit
– # Configure CE3.
[~CE3] interface gigabitethernet1/0/1
[*CE3-GigabitEthernet1/0/1] pim sm
[*CE3-GigabitEthernet1/0/1] igmp enable
[*CE3-GigabitEthernet1/0/1] igmp version 3
[*CE3-GigabitEthernet1/0/1] commit
[~CE3-GigabitEthernet1/0/1] quit
● Configure a static RP.
– # Configure CE1.
[~CE1] pim
[*CE1-pim] static-rp 1.1.1.1
[*CE1-pim] commit
[~CE1-pim] quit
– # Configure CE2.
[~CE2] pim
[*CE2-pim] static-rp 1.1.1.1
[*CE2-pim] commit
[~CE2-pim] quit
– # Configure CE3.
[~CE3] pim
[*CE3-pim] static-rp 1.1.1.1
[*CE3-pim] commit
[~CE3-pim] quit
– # Configure PE1.
[~PE1] pim vpn-instance VPNA
[*PE1-pim-VPNA] static-rp 1.1.1.1
[*PE1-pim-VPNA] commit
[~PE1-pim-VPNA] quit
– # Configure PE2.
[~PE2] pim vpn-instance VPNA
[*PE2-pim-VPNA] static-rp 1.1.1.1
[*PE2-pim-VPNA] commit
[~PE2-pim-VPNA] quit
– # Configure PE3.
[~PE3] pim vpn-instance VPNA
[*PE3-pim-VPNA] static-rp 1.1.1.1
[*PE3-pim-VPNA] commit
[~PE3-pim-VPNA] quit
The NG MVPN configuration is now complete.
If CE2 or CE3 has access users, CE1 can use the BGP MPLS/IP VPN to forward
multicast data to the users. Configure users on CE2 or CE3 to send IGMPv3 Report
messages and the multicast source 10.1.3.1 to send multicast data. Then, check
multicast routing entries to verify whether the NG MVPN is configured
successfully.
Run the display pim routing-table command on CE2, CE3, and CE1 to check the
PIM routing table. Run the display pim vpn-instance routing-table command on
PE2, PE3, and PE1 to check the PIM routing table of the VPN instance.
[~CE2] display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.3.1, 225.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT SG_RCVR ACT
UpTime: 00:54:11
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: 192.168.2.1
RPF prime neighbor: 192.168.2.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: igmp, UpTime: 00:54:11, Expires: -
[~CE3] display pim routing-table
VPN-Instance: public net
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.3.1, 226.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT SG_RCVR ACT
UpTime: 00:01:57
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: 192.168.3.1
RPF prime neighbor: 192.168.3.1
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: igmp, UpTime: 00:01:57, Expires: -
[~PE2] display pim vpn-instance VPNA routing-table
VPN-Instance: VPNA
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.3.1, 225.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:48:18
Upstream interface: through-BGP
Upstream neighbor: 2.2.2.2
RPF prime neighbor: 2.2.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: pim-sm, UpTime: 00:48:18, Expires: 00:03:12
[~PE3] display pim vpn-instance VPNA routing-table
VPN-Instance: VPNA
Total 0 (*, G) entry; 1 (S, G) entry
(10.1.3.1, 226.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT ACT
UpTime: 00:02:06
Upstream interface: through-BGP
Upstream neighbor: 2.2.2.2
RPF prime neighbor: 2.2.2.2
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: pim-sm, UpTime: 00:02:06, Expires: 00:03:26
[~PE1] display pim vpn-instance VPNA routing-table
VPN-Instance: VPNA
Total 0 (*, G) entry; 2 (S, G) entries
(10.1.3.1, 225.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT SG_RCVR ACT
UpTime: 00:46:58
Upstream interface: GigabitEthernet1/0/1
Upstream neighbor: 192.168.1.1
RPF prime neighbor: 192.168.1.1
Downstream interface(s) information:
Total number of downstreams: 1
1: pseudo
Protocol: BGP, UpTime: 00:46:58, Expires: -
(10.1.3.1, 226.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT SG_RCVR ACT
UpTime: 00:00:23
Upstream interface: GigabitEthernet1/0/1
Upstream neighbor: 192.168.1.1
RPF prime neighbor: 192.168.1.1
Downstream interface(s) information:
Total number of downstreams: 1
1: pseudo
Protocol: BGP, UpTime: 00:00:26, Expires: -
[~CE1] display pim routing-table
(10.1.3.1, 225.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT LOC ACT
UpTime: 00:47:29
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: pim-sm, UpTime: 00:47:29, Expires: 00:03:03
(10.1.3.1, 226.1.1.1)
RP:1.1.1.1
Protocol: pim-sm, Flag: SPT LOC ACT
UpTime: 00:00:54
Upstream interface: GigabitEthernet1/0/0
Upstream neighbor: NULL
RPF prime neighbor: NULL
Downstream interface(s) information:
Total number of downstreams: 1
1: GigabitEthernet1/0/1
Protocol: pim-sm, UpTime: 00:00:54, Expires: 00:02:36
The command outputs show that CE1 connecting to the multicast source has
received PIM Join messages from CE2 and CE3 connecting to multicast receivers
and that CE1 has generated PIM routing entries.
Step 4 Configure eMDI detection.
Configure eMDI detection on PE1, PE2, and PE3.
● Configure eMDI to monitor a channel group.
– # Configure PE1.
[~PE1] emdi
[*PE1-emdi] emdi channel-group PE1
[*PE1-emdi-channel-group-PE1] emdi channel 1 source 10.1.3.1 group 225.1.1.1 vpn-instance
VPNA pt 33 clock-rate 90kHz
[*PE1-emdi-channel-group-PE1] emdi channel 2 source 10.1.3.1 group 226.1.1.1 vpn-instance
VPNA pt 33 clock-rate 90kHz
[*PE1-emdi-channel-group-PE1] quit
[*PE1-emdi] quit
[*PE1] commit
– # Configure PE2.
[~PE2] emdi
[*PE2-emdi] emdi channel-group PE2
[*PE2-emdi-channel-group-PE2] emdi channel 1 source 10.1.3.1 group 225.1.1.1 vpn-instance
VPNA pt 33 clock-rate 90kHz
[*PE2-emdi-channel-group-PE2] quit
[*PE2-emdi] quit
[*PE2] commit
– # Configure PE3.
[~PE3] emdi
[*PE3-emdi] emdi channel-group PE3
[*PE3-emdi-channel-group-PE3] emdi channel 2 source 10.1.3.1 group 226.1.1.1 vpn-instance
VPNA pt 33 clock-rate 90kHz
[*PE3-emdi-channel-group-PE3] quit
[*PE3-emdi] quit
[*PE3] commit
● Configure a board group for eMDI.
The following uses PE1 as an example. The configurations of PE2 and PE3 are
similar to the configuration of PE1. For configuration details, see
Configuration Files in this section.
[~PE1] emdi
[*PE1-emdi] emdi lpu-group PE1
[*PE1-emdi-lpu-group-PE1] emdi bind slot all
[*PE1-emdi-lpu-group-PE1] quit
[*PE1-emdi] quit
[*PE1] commit
After completing the configuration, run the display emdi statistics history
channel command to check the detection result when multicast traffic passes
through PE1.
[~PE1] display emdi statistics history channel 1 start 3 end 5
Channel Name : 1
Total Records : 3  Latest Rate(pps) : 0  Latest Detect Time : 2021-02-18 21:22:40
----------------------------------------------------------------------------------------------------------------------
Record  Record               Monitor    Monitor  Received  Rate    Rate        RTP-LC  RTP-SE  RTP-LR      RTP-SER     RTP
Index   Time                 Period(s)  Status   Packets   pps     bps                         (1/100000)  (1/100000)  Jitter(ms)
----------------------------------------------------------------------------------------------------------------------
3       2019-02-02:08-33-00  60         Normal   4393232   439323  4871215641  6700    6633    152         151         0
4       2019-02-02:08-32-00  60         Normal   4388533   438853  4866005390  6700    6633    152         151         0
5       2019-02-02:08-31-00  60         Normal   4388218   438821  4865656118  6700    6633    152         151         0
----------------------------------------------------------------------------------------------------------------------
Configure telemetry to report the eMDI detection results. The following uses PE1
as an example. The configurations of PE2 and PE3 are similar to that of PE1. For
configuration details, see Configuration Files in this section. (Only key
configurations are provided here. For details, see Telemetry Configuration.)
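Only the final commit is shown below. A minimal sketch of the telemetry
configuration on PE1, reconstructed from the PE1 configuration file in this
section, is as follows:
[~PE1] telemetry
[*PE1-telemetry] sensor-group emdimonitor
[*PE1-telemetry-sensor-group-emdimonitor] sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
[*PE1-telemetry-sensor-group-emdimonitor] sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
[*PE1-telemetry-sensor-group-emdimonitor] quit
[*PE1-telemetry] destination-group Monitor
[*PE1-telemetry-destination-group-Monitor] ipv4-address 10.1.6.2 port 10001 protocol grpc
[*PE1-telemetry-destination-group-Monitor] quit
[*PE1-telemetry] subscription PE1
[*PE1-telemetry-subscription-PE1] sensor-group emdimonitor
[*PE1-telemetry-subscription-PE1] destination-group Monitor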
[*PE1-telemetry-subscription-PE1] commit
After completing the configuration, check the eMDI detection result reported
through telemetry on the monitor platform.
----End
Configuration Files
● CE1 configuration file
#
sysname CE1
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.3.2 255.255.255.0
pim sm
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.1.1 255.255.255.0
pim sm
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 2
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.3.0 0.0.0.255
network 192.168.1.0 0.0.0.255
#
pim
static-rp 1.1.1.1
#
return
● PE1 configuration file
#
sysname PE1
#
multicast mvpn 2.2.2.2
#
ip vpn-instance VPNA
ipv4-family
route-distinguisher 200:1
vpn-target 3:3 4:4 export-extcommunity
vpn-target 3:3 4:4 import-extcommunity
multicast routing-enable
mvpn
sender-enable
c-multicast signaling bgp
rpt-spt mode
ipmsi-tunnel
mldp
spmsi-tunnel
group 224.0.0.0 255.255.255.0 mldp limit 1
#
mpls lsr-id 2.2.2.2
mpls
#
mpls ldp
mldp p2mp
#
interface GigabitEthernet1/0/0
undo shutdown
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.6.2 port 10001 protocol grpc
#
subscription PE1
sensor-group emdimonitor
destination-group Monitor
#
return
● CE2 configuration file
#
sysname CE2
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.2.2 255.255.255.0
pim sm
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.4.1 255.255.255.0
pim sm
igmp enable
igmp version 3
#
interface LoopBack1
ip address 5.5.5.5 255.255.255.255
#
ospf 2
area 0.0.0.0
network 5.5.5.5 0.0.0.0
network 10.1.4.0 0.0.0.255
network 192.168.2.0 0.0.0.255
#
pim
static-rp 1.1.1.1
#
return
● PE2 configuration file
#
sysname PE2
#
multicast mvpn 3.3.3.3
#
ip vpn-instance VPNA
ipv4-family
route-distinguisher 300:1
vpn-target 3:3 export-extcommunity
vpn-target 3:3 import-extcommunity
multicast routing-enable
mvpn
c-multicast signaling bgp
rpt-spt mode
#
mpls lsr-id 3.3.3.3
mpls
#
mpls ldp
mldp p2mp
#
interface GigabitEthernet1/0/0
undo shutdown
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.6.2 port 10001 protocol grpc
#
subscription PE2
sensor-group emdimonitor
destination-group Monitor
#
return
● CE3 configuration file
#
sysname CE3
#
multicast routing-enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.3.2 255.255.255.0
pim sm
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.1.5.1 255.255.255.0
pim sm
igmp enable
igmp version 3
#
interface LoopBack1
ip address 6.6.6.6 255.255.255.255
#
ospf 2
area 0.0.0.0
network 6.6.6.6 0.0.0.0
network 10.1.5.0 0.0.0.255
network 192.168.3.0 0.0.0.255
#
pim
static-rp 1.1.1.1
#
return
● PE3 configuration file
#
sysname PE3
#
multicast mvpn 4.4.4.4
#
ip vpn-instance VPNA
ipv4-family
route-distinguisher 400:1
vpn-target 4:4 export-extcommunity
vpn-target 4:4 import-extcommunity
multicast routing-enable
mvpn
c-multicast signaling bgp
rpt-spt mode
#
mpls lsr-id 4.4.4.4
mpls
#
mpls ldp
mldp p2mp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.2.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip binding vpn-instance VPNA
ip address 192.168.3.1 255.255.255.0
pim sm
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpnv4
policy vpn-target
peer 2.2.2.2 enable
#
ipv4-family mvpn
policy vpn-target
peer 2.2.2.2 enable
#
ipv4-family vpn-instance VPNA
import-route ospf 2
#
ospf 1
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.1.2.0 0.0.0.255
#
ospf 2 vpn-instance VPNA
import-route bgp
area 0.0.0.0
network 192.168.3.0 0.0.0.255
#
pim vpn-instance VPNA
static-rp 1.1.1.1
#
emdi
emdi channel-group PE3
emdi channel 2 source 10.1.3.1 group 226.1.1.1 vpn-instance VPNA pt 33 clock-rate 90kHz
emdi lpu-group _default_
emdi bind slot all
emdi lpu-group PE3
emdi bind slot all
emdi bind channel-group PE3 lpu-group PE3
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/emdi-telem-reps/emdi-telem-rep
sensor-path huawei-emdi:emdi/emdi-telem-rtps/emdi-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.6.2 port 10001 protocol grpc
#
subscription PE3
sensor-group emdimonitor
destination-group Monitor
#
return
1.1.13.6.3 Example for Configuring eMDI Detection for NG MVPN over BIER
Services
This section provides an example for configuring eMDI detection for NG MVPN
over BIER services.
Networking Requirements
On the network shown in Figure 1-83, next-generation multicast VPN (NG MVPN)
over BIER services are deployed to resolve congestion, reliability, and security
issues for multicast traffic on the carrier's backbone network. In addition, BIER
eMDI is deployed on the root, P, Leaf1, and Leaf2 nodes to detect the quality of
multicast services. Network maintenance personnel can view the real-time
detection results reported through telemetry on the monitor platform and quickly
demarcate and locate network faults based on the detection results.
Configuration Roadmap
The configuration roadmap is as follows:
Data Preparation
To complete the configuration, you need the following data:
● IP addresses of interfaces (shown in Figure 1-83)
● Multicast group address: 225.1.1.1/24
● Multicast source address: 192.168.0.100/24
● Name of a multicast VPN instance: VPNA
● ID of a BIER sub-domain: 0
● BIER BSL: 256
● Name of the channel group for BIER eMDI: BIER-channel
● Name of the board group for BIER eMDI: BIER-lpu
Procedure
Step 1 Configure NG MVPN over BIER. For common configuration details, see Configuring
NG MVPN over BIER. For configuration details in this example, see Configuration
Files.
Step 2 Configure a channel group for BIER eMDI.
A channel group for BIER eMDI is required only on the root node.
[~Root] emdi
[~Root-emdi] emdi channel-group BIER-channel
[*Root-emdi-channel-group-BIER-channel] emdi channel 1 source 192.168.0.100 group 225.1.1.1 vpn-
instance VPNA sub-domain 0 bsl 256
[*Root-emdi-channel-group-BIER-channel] quit
[*Root-emdi] quit
[*Root] commit
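The board group for BIER eMDI and the telemetry subscription are configured in
the same way as in the previous example. A minimal sketch of the board-group
binding on the Root node, consistent with the Root node configuration file in
this section, is as follows:
[~Root] emdi
[*Root-emdi] emdi lpu-group BIER-lpu
[*Root-emdi-lpu-group-BIER-lpu] emdi bind slot all
[*Root-emdi-lpu-group-BIER-lpu] quit
[*Root-emdi] emdi bier bind lpu-group BIER-lpu
[*Root-emdi] quit
[*Root] commit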
Step 5 After the preceding configurations are complete, run the display emdi statistics
history bier channel command to query the detection result when BIER packets
are forwarded by the detected device. The following uses the P node as an
example and queries the detection result of its outgoing traffic.
[~P] display emdi statistics history bier outbound channel slot 9
Source Address:192.168.0.100  Group Address:225.1.1.1  Vpn Label:256
Bfir Id:1  Sub Domain:0  Bsl:256  SI:0  Token:9
Interface : gigabitethernet1/0/1
Total Records : 3  Latest Rate(pps) : 188226  Latest Detect Time : 2021-02-18 21:30:50
-----------------------------------------------------------------------------------------------------------
Record  Record               Monitor    Monitor  Received  Rate    Rate        RTP-LC  RTP-SE  RTP-LR      RTP-SER
Index   Time                 Period(s)  Status   Packets   pps     bps                         (1/100000)  (1/100000)
-----------------------------------------------------------------------------------------------------------
1       2019-08-08:21-11-20  10         Normal   4393232   439323  4871215641  6700    6633    152         151
2       2019-08-08:21-11-10  10         Normal   4388533   438853  4866005390  6700    6633    152         151
3       2019-08-08:21-11-00  10         Normal   4388218   438821  4865656118  6700    6633    152         151
-----------------------------------------------------------------------------------------------------------
After completing the configuration, check the BIER eMDI detection result reported
through telemetry on the monitor platform.
----End
Configuration Files
● Root node configuration file
#
sysname Root
#
multicast routing-enable
#
multicast mvpn 1.1.1.1
#
ip vpn-instance VPNA
ipv4-family
route-distinguisher 200:1
apply-label per-instance
vpn-target 3:3 export-extcommunity
vpn-target 4:4 export-extcommunity
vpn-target 3:3 import-extcommunity
vpn-target 4:4 import-extcommunity
multicast routing-enable
mvpn
sender-enable
c-multicast signaling bgp
rpt-spt mode
ipmsi-tunnel
bier
spmsi-tunnel
group 224.0.0.0 255.255.255.0 source 192.168.1.0 255.255.255.0 bier limit 16
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
ipv4-family
#
isis 1
is-level level-2
cost-style wide-compatible
network-entity 10.0000.0000.0001.00
bier enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.0.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip binding vpn-instance VPNA
ip address 192.168.1.1 255.255.255.0
pim sm
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
isis enable 1
#
interface LoopBack1
ip binding vpn-instance VPNA
ip address 1.1.1.2 255.255.255.255
#
bgp 100
peer 4.4.4.1 as-number 100
peer 4.4.4.1 connect-interface LoopBack0
peer 5.5.5.1 as-number 100
peer 5.5.5.1 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 4.4.4.1 enable
peer 5.5.5.1 enable
#
ipv4-family mvpn
policy vpn-target
peer 4.4.4.1 enable
peer 5.5.5.1 enable
#
ipv4-family vpnv4
policy vpn-target
peer 4.4.4.1 enable
peer 5.5.5.1 enable
#
ipv4-family vpn-instance VPNA
import-route direct
#
pim vpn-instance VPNA
static-rp 1.1.1.2
#
bier
sub-domain 0
bfr-id 1
bfr-prefix interface LoopBack0
protocol isis
encapsulation-type mpls bsl 256 max-si 2
#
emdi
emdi channel-group BIER-channel
emdi channel 1 source 192.168.0.100 group 225.1.1.1 vpn-instance VPNA sub-domain 0 bsl 256
emdi lpu-group _default_
emdi bind slot all
emdi lpu-group BIER-lpu
emdi bind slot all
emdi bier bind lpu-group BIER-lpu
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/bier-out-telem-reps/bier-out-telem-rep
sensor-path huawei-emdi:emdi/bier-out-telem-rtps/bier-out-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc
#
subscription EMDI
sensor-group emdimonitor
destination-group Monitor
#
return
● P node configuration file
#
sysname P
#
mpls lsr-id 2.2.2.1
#
mpls
#
mpls ldp
#
ipv4-family
#
isis 1
is-level level-2
cost-style wide-compatible
network-entity 10.0000.0000.0004.00
traffic-eng level-2
bier enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.0.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 192.168.45.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface LoopBack0
ip address 2.2.2.1 255.255.255.255
isis enable 1
#
bier
sub-domain 0
bfr-prefix interface LoopBack0
protocol isis
encapsulation-type mpls bsl 256 max-si 2
#
#
emdi
emdi lpu-group _default_
emdi bind slot all
emdi lpu-group BIER-lpu
emdi bind slot all
emdi bier bind lpu-group BIER-lpu
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/bier-out-telem-reps/bier-out-telem-rep
sensor-path huawei-emdi:emdi/bier-out-telem-rtps/bier-out-telem-rtp
sensor-path huawei-emdi:emdi/bier-telem-reps/bier-telem-rep
sensor-path huawei-emdi:emdi/bier-telem-rtps/bier-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc
#
subscription EMDI
sensor-group emdimonitor
destination-group Monitor
#
return
● Leaf1 node configuration file
#
sysname Leaf1
#
multicast routing-enable
#
multicast mvpn 4.4.4.1
#
ip vpn-instance VPNA
ipv4-family
route-distinguisher 300:1
apply-label per-instance
vpn-target 3:3 export-extcommunity
vpn-target 3:3 import-extcommunity
multicast routing-enable
mvpn
c-multicast signaling bgp
rpt-spt mode
#
mpls lsr-id 4.4.4.1
#
mpls
#
mpls ldp
#
ipv4-family
#
isis 1
is-level level-2
cost-style wide-compatible
network-entity 10.0000.0000.0002.00
traffic-eng level-2
bier enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.45.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip binding vpn-instance VPNA
ip address 192.168.2.1 255.255.255.0
pim sm
igmp enable
#
interface LoopBack0
ip address 4.4.4.1 255.255.255.255
pim sm
isis enable 1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
ipv4-family mvpn
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpn-instance VPNA
import-route direct
#
pim vpn-instance VPNA
static-rp 1.1.1.2
#
bier
sub-domain 0
bfr-id 4
bfr-prefix interface LoopBack0
protocol isis
encapsulation-type mpls bsl 256 max-si 2
#
emdi
emdi lpu-group _default_
emdi bind slot all
emdi lpu-group BIER-lpu
emdi bind slot all
emdi bier bind lpu-group BIER-lpu
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/bier-telem-reps/bier-telem-rep
sensor-path huawei-emdi:emdi/bier-telem-rtps/bier-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc
#
subscription EMDI
sensor-group emdimonitor
destination-group Monitor
#
return
● Leaf2 node configuration file
#
sysname Leaf2
#
multicast routing-enable
#
multicast mvpn 5.5.5.1
#
ip vpn-instance VPNA
ipv4-family
route-distinguisher 400:1
apply-label per-instance
vpn-target 4:4 export-extcommunity
vpn-target 4:4 import-extcommunity
multicast routing-enable
mvpn
c-multicast signaling bgp
rpt-spt mode
#
mpls lsr-id 5.5.5.1
#
mpls
#
mpls ldp
#
ipv4-family
#
isis 1
is-level level-2
cost-style wide-compatible
network-entity 10.0000.0000.0003.00
traffic-eng level-2
bier enable
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 192.168.45.3 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet1/0/1
undo shutdown
ip binding vpn-instance VPNA
ip address 192.168.3.1 255.255.255.0
pim sm
igmp enable
#
interface LoopBack0
ip address 5.5.5.1 255.255.255.255
isis enable 1
#
bgp 100
peer 1.1.1.1 as-number 100
peer 1.1.1.1 connect-interface LoopBack0
#
ipv4-family unicast
undo synchronization
peer 1.1.1.1 enable
#
ipv4-family mvpn
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.1 enable
#
ipv4-family vpn-instance VPNA
import-route direct
#
pim vpn-instance VPNA
static-rp 1.1.1.2
#
lldp enable
#
bier
sub-domain 0
bfr-id 5
bfr-prefix interface LoopBack0
protocol isis
encapsulation-type mpls bsl 256 max-si 2
#
emdi
emdi lpu-group _default_
emdi bind slot all
emdi lpu-group BIER-lpu
emdi bind slot all
emdi bier bind lpu-group BIER-lpu
#
telemetry
#
sensor-group emdimonitor
sensor-path huawei-emdi:emdi/bier-telem-reps/bier-telem-rep
sensor-path huawei-emdi:emdi/bier-telem-rtps/bier-telem-rtp
#
destination-group Monitor
ipv4-address 10.1.7.2 port 10001 protocol grpc
#
subscription EMDI
sensor-group emdimonitor
destination-group Monitor
#
return
Definition
Enhanced stream quality monitoring (ESQM) collects information about
Transmission Control Protocol (TCP), Stream Control Transmission Protocol (SCTP),
or GPRS Tunneling Protocol (GTP) packets on each board through which the
packets pass, based on quintuple information (source and destination IP
addresses, source and destination port numbers, and transport-layer protocol).
The collected information includes the packet type, timestamp, inbound and
outbound interfaces, VPN, and packet statistics, and is reported to the Huawei
Controller for forwarding path restoration, traffic restoration, or fault detection.
Purpose
Traditional communication networks cannot "perceive" services and therefore
cannot respond to customers' ever-changing service requirements in real time. To
solve this problem, ESQM has been developed to help devices monitor the quality
of services on networks. This technology integrates network deployment with
service requirements and provides the data foundation for automatic and
intelligent network lifecycle management.
Benefits
ESQM offers the following benefits:
● Helps communication networks perceive service quality, and enables devices
to proactively detect services with poor QoE for fault diagnosis, demarcation,
and service optimization, thereby effectively shortening the duration of
network interruptions and reducing customers' OPEX.
● Helps customers perceive networks according to multiple metrics, including
service quality, forwarding path, and load, providing data support for routine
maintenance and network optimization.
Context
Traditional communication networks cannot "perceive" services and therefore
cannot respond to customers' ever-changing service requirements in real time. To
solve this problem, ESQM has been developed to help devices monitor the quality
of services on networks. This technology integrates network deployment with
service requirements and provides the data foundation for automatic and
intelligent network lifecycle management.
Procedure
1. Run system-view
The system view is displayed.
2. Run esqm
The ESQM view is displayed.
3. (Optional) Run esqm session aging-time sctp tmval
An aging time is set for SCTP flow tables.
NOTE
The configured aging time takes effect only for subsequently created SCTP flow tables.
4. (Optional) Run esqm protocol tcp enable
The device is enabled to create flow tables for sampled TCP packets.
5. (Optional) Run esqm protocol { sctp | gtp } disable
The device is disabled from creating flow tables for sampled SCTP or GTP
packets.
6. (Optional) Run esqm filter permit ip ip-addr mask masklen
The function of filtering sampled packets is enabled.
7. Run any of the following commands:
– To perform ESQM for inbound or outbound packets on all the interfaces
to which a VPN instance is bound, run the esqm service-stream
{ inbound | outbound } vpn-instance vpn-instance-name command in
the ESQM view.
– To perform ESQM for inbound or outbound packets on all the interfaces
to which no VPN instance is bound, run the esqm service-stream
{ inbound | outbound } command in the ESQM view.
– To perform ESQM for inbound or outbound packets on an interface, run
the following commands:
i. Run quit
Exit from the ESQM view.
ii. Run interface interface-type interface-num
The interface view is displayed.
iii. Run esqm service-stream { inbound | outbound }
A packet sampling direction is configured for ESQM on the interface.
iv. Run quit
Return to the system view.
8. Run commit
The configuration is committed.
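The following is a minimal sketch that combines the preceding steps, enabling
ESQM and sampling inbound packets on one interface; the interface name and the
SCTP aging time (60 seconds) are example values only:
<HUAWEI> system-view
[~HUAWEI] esqm
[*HUAWEI-esqm] esqm session aging-time sctp 60
[*HUAWEI-esqm] quit
[*HUAWEI] interface GigabitEthernet1/0/0
[*HUAWEI-GigabitEthernet1/0/0] esqm service-stream inbound
[*HUAWEI-GigabitEthernet1/0/0] quit
[*HUAWEI] commit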
Networking Requirements
As networks rapidly develop and applications become diversified, various value-
added services are widely used. Link connectivity and network performance
influence network quality. Therefore, performance monitoring is especially
important for service transmission.
● For example, users will not notice any change in voice quality if the packet
loss rate on voice links is lower than 5%. However, if the packet loss rate
exceeds 10%, user experience degrades noticeably.
● Real-time services such as Voice over Internet Protocol (VoIP), online gaming,
and online video require a delay lower than 100 ms, and some delay-sensitive
services even require a delay lower than 50 ms. Otherwise, user experience
degrades.
To meet the high requirements of voice, online gaming, and online video services,
carriers must be able to monitor packet loss and delay on the links and adjust the
links if service quality decreases.
Configuration Roadmap
The configuration roadmap is as follows:
1. Deploy IGPs between the UPE and SPE and between the SPE and NPE. In this
example, OSPF runs between the UPE and SPE, and IS-IS runs between the
SPE and NPE.
2. Configure MPLS LDP on the UPE, SPE, and NPE.
3. Configure VPN instances on the UPE, SPE, and NPE.
4. Bind the access-side interfaces on the UPE and NPE to the VPN instances.
5. Configure VPN static default routes on the SPE.
6. Configure a route-policy on the NPE to disable the NPE from receiving the
default routes.
7. Configure BGP EVPN on the SPE and NPE.
8. Configure a BGP-VPNv4 peer relationship between the UPE and SPE, specify
the UPE as the lower-level PE of the SPE, and configure the SPE to import
default VPN routes.
9. Configure route regeneration on the SPE.
10. Configure packet loss and delay measurement on the link between the UPE
and NPE to monitor the link status in an end-to-end manner.
Data Preparation
To complete the configuration, you need the following data:
Procedure
1. Configure an L3VPN HoVPN with an L3EVPN on the UPE, SPE, and NPE. For
configuration details, see Configuration Files.
2. Configure ESQM measurement on the UPE and NPE, and inject unidirectional
traffic from the UPE to the NPE.
# Configure inbound ESQM on the user side of the UPE.
<UPE> system-view
[~UPE] esqm
[*UPE-esqm] commit
[~UPE-esqm] quit
[~UPE] interface GigabitEthernet2/0/0
[~UPE-GigabitEthernet2/0/0] esqm service-stream inbound
[*UPE-GigabitEthernet2/0/0] commit
# Configure inbound ESQM on the user side of the NPE.
<NPE> system-view
[~NPE] esqm
[*NPE-esqm] commit
[~NPE-esqm] quit
[~NPE] interface GigabitEthernet1/0/0
[~NPE-GigabitEthernet1/0/0] esqm service-stream inbound
[*NPE-GigabitEthernet1/0/0] commit
Configuration Files
● UPE configuration file
#
sysname UPE
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 100:1
apply-label per-instance
vpn-target 2:2 export-extcommunity evpn
vpn-target 2:2 import-extcommunity evpn
evpn mpls routing-enable
#
mpls lsr-id 1.1.1.1
#
mpls
#
mpls ldp
#
interface GigabitEthernet1/0/0
undo shutdown
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet2/0/0
undo shutdown
ip binding vpn-instance vpn1
ip address 192.168.20.1 255.255.255.0
esqm service-stream inbound
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
#
ipv4-family vpn-instance vpn1
import-route direct
advertise l2vpn evpn
#
l2vpn-family evpn
undo policy vpn-target
peer 2.2.2.2 enable
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
esqm
#
return
● NPE configuration file
#
ip ip-prefix default index 10 permit 0.0.0.0 0
#
route-policy SPE deny node 10
if-match ip-prefix default
#
route-policy SPE permit node 20
#
esqm
#
return
Background
In the radio and television industry, especially in TV stations and media centers,
IP-based production and broadcasting networks are gaining popularity. Related IP
standards are being formulated, which is an important step in the development of
the 4K industry. However, IP-based production and broadcasting networks require
switching among multiple video sources or cameras during video production and
live transmission to achieve the optimal display effect. Currently, IP-based devices
use IGMP to switch between multicast groups. When IGMP starts or stops
multicast forwarding, it does not determine the frame boundary of a video. As a
result, the forwarded content is incomplete and the video is impaired (for
example, artifacts, jitter, black screens, or frozen frames).
Quintuple information (source and destination IP addresses, source and
destination port numbers, and protocol type) may be insufficient to distinguish
between flows. To resolve this issue, configure flow recognition based on septuple
information (source and destination MAC addresses, source and destination IP
addresses, source and destination port numbers, and protocol type) on the device.
A controller calculates the flow rate based on the information reported by the
device and determines whether a flow carries video or audio based on its rate.
The device then replicates and broadcasts the identified flow quickly and
accurately.
Flow recognition is used to identify video and audio flows on IP-based production
and broadcasting networks.
Implementation
On the topology shown in Figure 1-85, traffic enters the device through the
inbound interface. The device extracts the septuple information from the traffic
and generates matching rules based on the septuple information. Each flow that
matches the septuple information has a statistical ID. The device collects statistics
about the numbers of packets and bytes for each statistical ID. The device sends
the statistics of each statistical ID to the controller over telemetry. The controller
calculates the flow rate based on the current and last data records to identify the
video or audio flow.
To calculate the flow rate, the controller needs to receive the data information of
each flow. Table 1-18 describes the flow data fields collected and sent by the
device to the controller.
After the device collects statistics about a flow twice in succession, the flow's
rate is calculated as follows:
● Number of packets forwarded per second = (packetNum2 - packetNum1)/
(timeStampSec2 - timeStampSec1)
● Number of bytes forwarded per second = (bytesNum2 - bytesNum1)/
(timeStampSec2 - timeStampSec1)
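For example, given hypothetical records with packetNum1 = 100000 at
timeStampSec1 = 1000 and packetNum2 = 160000 at timeStampSec2 = 1010, the flow
forwards (160000 - 100000)/(1010 - 1000) = 6000 packets per second. The byte
rate is calculated in the same way from bytesNum1 and bytesNum2.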
Context
Figure 1-86 shows the typical networking of flow recognition. Target flows enter
the transport network from the multimedia terminal and then reach the device
through Interface 1. After flow recognition is enabled on the device, the device
collects data and then sends the data to the controller over telemetry.
Pre-configuration Tasks
Before configuring flow recognition, complete the following tasks:
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run interface interface-type interface-num
The interface view is displayed.
Step 3 Run flow-recognition inbound
Flow recognition is enabled for inbound traffic on the interface.
Step 4 Run commit
The configuration is committed.
----End
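A minimal sketch of this procedure, using the interface shown in the
configuration file at the end of this example (the device name is a
placeholder):
<HUAWEI> system-view
[~HUAWEI] interface GigabitEthernet 1/0/1
[~HUAWEI-GigabitEthernet1/0/1] flow-recognition inbound
[*HUAWEI-GigabitEthernet1/0/1] commit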
Prerequisites
Flow recognition has been configured.
Procedure
Step 1 Run the display flow-recognition cache command to check the flow table
information of a slot in the flow cache.
----End
Networking Requirements
Figure 1-87 shows a typical media network. The functions of each node are
described as follows:
● Controller: delivers control instructions to the device to control, manage, and
monitor the system.
● Device: provides functions such as forwarding, replication, scheduling, clean
switching, and flow recognition of media traffic.
● Multimedia terminal A: functions as the transmit end of media signals and
transmits traffic to the device.
● Multimedia terminal B: functions as the receive end of media signals and
receives traffic from the device.
On the network:
1. Multimedia terminal A transmits video and audio streams to the device.
2. The device collects the streams on the inbound interface, and reports each
stream's septuple information (source and destination MAC addresses, source
and destination IP addresses, source and destination port numbers, and
protocol type) and other information (such as the numbers of packets and
bytes) to the controller.
3. The controller calculates the stream rate based on the information to identify
audio and video streams.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an IP address and a routing protocol for each interface so that all
the nodes can communicate at the network layer.
2. Configure static telemetry subscription.
3. Configure flow recognition.
Data Preparation
To complete the configuration, you need the following data:
Procedure
Step 1 Configure an IP address and a routing protocol for each interface so that all the
nodes can communicate at the network layer. For configuration details about the
device, see Configuration Files.
----End
Configuration Files
#
telemetry
#
sensor-group sensor1
sensor-path huawei-flow-recognition:flow-recognition/streaminfos/streaminfo self-defined-event
#
destination-group destination1
ipv4-address 10.1.1.2 port 10001 protocol grpc
#
subscription subscription1
sensor-group sensor1
destination-group destination1
#
#
interface GigabitEthernet 1/0/1
flow-recognition inbound
#
Basic Functions
Intelligent monitoring consists of intelligent exception identification, intelligent log
exception detection, and intelligent resource trend prediction. By reporting
exception detection results, it enables users to adjust services in advance or locate
faults promptly, thereby ensuring service quality. Table 1-19 describes the basic
functions of intelligent monitoring.
Benefits
Intelligent monitoring brings the following benefits:
● O&M personnel can quickly detect service faults on the network and use
existing fault locating methods to quickly identify and rectify the faults,
reducing their impact.
● O&M personnel can monitor device running status and preemptively allocate
resources and adjust services, effectively reducing the possibility of service
loss.
Context
When a device is deployed at the aggregation or core layer of a carrier network, it
plays an important role and transmits a large number of services. Once an
exception occurs on the device, services will be severely affected. Intelligent
exception identification and intelligent log exception detection can promptly
perceive and detect exceptions for fault locating. Intelligent resource trend
prediction can predict network resource trends for resource allocation.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run eai
The EAI view is displayed.
Step 3 Run any of the following commands as required:
● To enable forwarding plane exception detection, run the intelligent-
anomaly-detection enable command.
● To enable exception identification, run the intelligent-anomaly-identify
enable command.
----End
Networking Requirements
On the network shown in Figure 1-88, intelligent monitoring is enabled on Device
to intelligently identify exceptions, detect log exceptions, and predict resource
trends.
Configuration Roadmap
The configuration roadmap is as follows:
NOTE
Ensure that any required license has been installed on Device before the configuration.
Data Preparation
To complete the configuration, you need the following data:
● Collector's IP address: 10.20.2.1; port number: 10001
● Sampling paths for static telemetry subscription:
– Intelligent exception identification: huawei-eai-service:eai-service/
anomaly-identify-datas/anomaly-identify-data
– Intelligent log exception detection: huawei-eai-service:eai-service/
logrecord-detection-recommend-datas/logrecord-detection-recommend-
data
– Intelligent resource trend prediction: huawei-eai-service:eai-service/
resource-prediction-datas/resource-prediction-data
Procedure
Step 1 Configure static telemetry subscription.
# Configure a destination collector.
[~HUAWEI] telemetry
[~HUAWEI-telemetry] destination-group destination1
[*HUAWEI-telemetry-destination-group destination1] ipv4-address 10.20.2.1 port 10001 protocol grpc no-tls
[*HUAWEI-telemetry-destination-group destination1] quit
# Configure a sensor group and sampling paths.
[*HUAWEI-telemetry] sensor-group sensor1
[*HUAWEI-telemetry-sensor-group sensor1] sensor-path huawei-eai-service:eai-service/anomaly-identify-datas/anomaly-identify-data
[*HUAWEI-telemetry-sensor-group sensor1] sensor-path huawei-eai-service:eai-service/logrecord-detection-recommend-datas/logrecord-detection-recommend-data
[*HUAWEI-telemetry-sensor-group sensor1] sensor-path huawei-eai-service:eai-service/resource-prediction-datas/resource-prediction-data
[*HUAWEI-telemetry-sensor-group sensor1] quit
# Create a subscription.
[*HUAWEI-telemetry] subscription subscription1
[*HUAWEI-telemetry-subscription subscription1] sensor-group sensor1 sample-interval 0
[*HUAWEI-telemetry-subscription subscription1] destination-group destination1
[*HUAWEI-telemetry-subscription subscription1] commit
[~HUAWEI-telemetry-subscription subscription1] quit
[~HUAWEI-telemetry] quit
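The intelligent monitoring functions themselves are enabled in the EAI view. A
minimal sketch, consistent with the configuration file below:
[~HUAWEI] eai
[*HUAWEI-eai] intelligent-anomaly-identify enable
[*HUAWEI-eai] intelligent-logrecord-detection enable
[*HUAWEI-eai] intelligent-resource-prediction enable
[*HUAWEI-eai] commit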
----End
Configuration Files
#
telemetry
#
destination-group destination1
ipv4-address 10.20.2.1 port 10001 protocol grpc no-tls
#
sensor-group sensor1
sensor-path huawei-eai-service:eai-service/anomaly-identify-datas/anomaly-identify-data
sensor-path huawei-eai-service:eai-service/logrecord-detection-recommend-datas/logrecord-detection-
recommend-data
sensor-path huawei-eai-service:eai-service/resource-prediction-datas/resource-prediction-data
#
subscription subscription1
sensor-group sensor1 sample-interval 0
destination-group destination1
#
eai
#
intelligent-anomaly-identify enable
intelligent-logrecord-detection enable
intelligent-resource-prediction enable
#
return
Definition
Path detection restores the forwarding path of service traffic on a VXLAN network
by constructing detection packets.
Purpose
As network services develop rapidly, networks grow significantly in scale.
Knowing the forwarding path of a specific flow, or the path between two network
devices, helps you locate network faults quickly.
Path detection can be used to determine the forwarding path of a specific flow or
the path between two network devices. To provide this function, a path
detection-capable device must work with the controller. You can configure path
detection on the device through the CLI or NMS and enable the inbound interface
of the device to construct and forward a 5-tuple detection packet. The devices
along the path identify the packet based on the detection flag, obtain the packet
information, and send it together with the inbound and outbound interface
information to the controller. The controller then computes the entire path of the
flow based on the information reported by the devices.
Benefits
Path detection helps O&M personnel quickly locate the faulty device when traffic
is interrupted on the network.
Prerequisites
Before configuring path detection, ensure that a NETCONF connection has been
established between the controller and device.
Context
To learn about the forwarding path of a specific flow on a network, O&M
personnel can use the path detection function to restore the complete path
through which the flow passes.
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip path detection enable dscp dscp-value
DSCP-based IPv4 path detection is enabled.
Step 3 Run the ip path detection send-packet command with IPv4 source and
destination addresses. Its parameters parallel those of the IPv6 command in the
following procedure.
The ingress is enabled to construct and forward an IPv4 path detection packet.
----End
Procedure
Step 1 Run system-view
The system view is displayed.
Step 2 Run ip path detection enable ipv6 dscp dscp-value
DSCP-based IPv6 path detection is enabled.
Step 3 Run ip path detection send-packet src-mac src-mac-address dst-mac dst-mac-
address [ pe-vlan pe-vlan-id [ 8021p 8021p-value ] [ ce-vlan ce-vlan-id ] ] src-
ipv6 src-ipv6-addr dst-ipv6 dst-ipv6-address protocol { icmp | { tcp | udp [ gtp-u
gtp-teid teid-value ] | sctp } src-port src-port-value dst-port dst-port-value } dscp
dscp-value [ vpn-instance vpn-name ] interface { ifType ifNum | ifName } testid
test-id
The ingress is enabled to construct and forward an IPv6 path detection packet.
----End
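For illustration only, a sample invocation of this command on the ingress; all
parameter values below are placeholders rather than values from a specific
example:
[~HUAWEI] ip path detection send-packet src-mac 00e0-fc12-3456 dst-mac 00e0-fc65-4321 src-ipv6 2001:db8::1 dst-ipv6 2001:db8::2 protocol udp src-port 1024 dst-port 2048 dscp 3 interface GigabitEthernet1/0/1 testid 1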
Networking Requirements
The NFVI telco cloud solution is based on Data Center Interconnect (DCI) + data
center network (DCN) networking, as shown in Figure 1-89.
● DCGWs are the DCN's border gateways and can exchange Internet routes
with the external network.
● L2GW/L3GW1 and L2GW/L3GW2 access the virtualized network functions
(VNFs).
● VNF1 and VNF2 can be deployed as virtualized NEs to implement the vUGW
and vMSE functions and connect to L2GW/L3GW1 and L2GW/L3GW2 through
the interface processing unit (IPU).
Assume that the detected path is DCGW2 -> L2GW/L3GW2 -> VNF2. DSCP-based
IPv4 path detection is enabled on all devices along the path, and a detection
packet is constructed and forwarded on DCGW2 (ingress).
Generally, the NE9000 is used as the DCGW for path detection.
Interfaces 1 through 5 in this example represent GE1/0/1, GE1/0/2, GE1/0/3, GE1/0/4, and
GE1/0/5, respectively.
The remaining interface addresses in this example are as follows:
● DCGW1: GigabitEthernet1/0/3 (no IP address), LoopBack0 9.9.9.9/32,
LoopBack1 3.3.3.3/32, LoopBack2 33.33.33.33/32
● DCGW2: GigabitEthernet1/0/3 (no IP address), LoopBack0 9.9.9.9/32,
LoopBack1 4.4.4.4/32, LoopBack2 44.44.44.44/32
● L2GW/L3GW1: GigabitEthernet1/0/3 through GigabitEthernet1/0/5 (no IP
addresses), LoopBack1 1.1.1.1/32
● L2GW/L3GW2: GigabitEthernet1/0/3 and GigabitEthernet1/0/4 (no IP
addresses), LoopBack1 2.2.2.2/32
Configuration Roadmap
The configuration roadmap is as follows:
NOTE
Before the configuration, ensure that a NETCONF connection has been established between
the controller and device.
Procedure
Step 1 Assign an IP address to each device interface, including the loopback interfaces.
Step 2 Configure a routing protocol on each DCGW and each L2GW/L3GW to ensure
Layer 3 communication. OSPF is used in this example.
For configuration details, see Configuration Files.
Step 3 Configure IPv4 NFVI distributed gateway networking on DCGWs and L2GWs/
L3GWs.
For the configuration roadmap, see VXLAN Configuration. For configuration
details, see Configuration Files.
Step 4 Configure DSCP-based IPv4 path detection on DCGW2 and L2GW/L3GW2.
# Configure DCGW2.
[~DCGW2] ip path detection enable dscp 3
# Configure L2GW/L3GW2.
[~L2GW/L3GW2] ip path detection enable dscp 3
----End
Configuration Files
● DCGW1 configuration file
#
sysname DCGW1
#
evpn
bypass-vxlan enable
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 1:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 2:2
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 3:3
vpn-target 3:3 export-extcommunity
vpn-target 3:3 import-extcommunity
#
evpn vpn-instance evrf4 bd-mode
route-distinguisher 4:4
vpn-target 4:4 export-extcommunity
vpn-target 4:4 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
export route-policy dp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
vxlan vni 200
#
bridge-domain 10
vxlan vni 100 split-horizon-mode
evpn binding vpn-instance evrf1
#
bridge-domain 20
vxlan vni 110 split-horizon-mode
evpn binding vpn-instance evrf2
#
bridge-domain 30
vxlan vni 120 split-horizon-mode
evpn binding vpn-instance evrf3
#
bridge-domain 40
vxlan vni 130 split-horizon-mode
evpn binding vpn-instance evrf4
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0002
vxlan anycast-gateway enable
#
interface Vbdif20
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0003
vxlan anycast-gateway enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ip address 10.3.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0001
vxlan anycast-gateway enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ip address 10.4.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0004
vxlan anycast-gateway enable
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.1.1 255.255.255.0
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.2.1 255.255.255.0
#
interface GigabitEthernet1/0/3
undo shutdown
#
interface GigabitEthernet1/0/3.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface GigabitEthernet1/0/3.2 mode l2
encapsulation dot1q vid 40
rewrite pop single
bridge-domain 40
#
interface LoopBack0
ip address 9.9.9.9 255.255.255.255
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
interface LoopBack2
ip address 33.33.33.33 255.255.255.255
#
route-policy dp permit node 10
if-match ip-prefix lp
#
route-policy dp deny node 20
#
route-policy p1 deny node 10
#
route-policy stopuIP deny node 10
if-match ip-prefix uIP
#
route-policy stopuIP permit node 20
#
ip route-static vpn-instance vpn1 0.0.0.0 0.0.0.0 NULL0 tag 2000
#
return
● DCGW2 configuration file
#
sysname DCGW2
#
evpn
bypass-vxlan enable
#
evpn vpn-instance evrf1 bd-mode
route-distinguisher 1:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 2:2
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 3:3
vpn-target 3:3 export-extcommunity
vpn-target 3:3 import-extcommunity
#
evpn vpn-instance evrf4 bd-mode
route-distinguisher 4:4
vpn-target 4:4 export-extcommunity
vpn-target 4:4 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
export route-policy dp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
vxlan vni 200
#
bridge-domain 10
vxlan vni 100 split-horizon-mode
evpn binding vpn-instance evrf1
#
bridge-domain 20
vxlan vni 110 split-horizon-mode
evpn binding vpn-instance evrf2
#
bridge-domain 30
vxlan vni 120 split-horizon-mode
evpn binding vpn-instance evrf3
#
bridge-domain 40
vxlan vni 130 split-horizon-mode
evpn binding vpn-instance evrf4
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
#
● L2GW/L3GW1 configuration file
#
sysname L2GW/L3GW1
evpn vpn-instance evrf1 bd-mode
route-distinguisher 1:1
vpn-target 1:1 export-extcommunity
vpn-target 1:1 import-extcommunity
#
evpn vpn-instance evrf2 bd-mode
route-distinguisher 2:2
vpn-target 2:2 export-extcommunity
vpn-target 2:2 import-extcommunity
#
evpn vpn-instance evrf3 bd-mode
route-distinguisher 3:3
vpn-target 3:3 export-extcommunity
vpn-target 3:3 import-extcommunity
#
evpn vpn-instance evrf4 bd-mode
route-distinguisher 4:4
vpn-target 4:4 export-extcommunity
vpn-target 4:4 import-extcommunity
#
ip vpn-instance vpn1
ipv4-family
route-distinguisher 11:11
apply-label per-instance
export route-policy sp evpn
vpn-target 11:1 export-extcommunity evpn
vpn-target 11:1 import-extcommunity evpn
vxlan vni 200
#
bridge-domain 10
vxlan vni 100 split-horizon-mode
evpn binding vpn-instance evrf1
#
bridge-domain 20
vxlan vni 110 split-horizon-mode
evpn binding vpn-instance evrf2
#
bridge-domain 30
vxlan vni 120 split-horizon-mode
evpn binding vpn-instance evrf3
#
bridge-domain 40
vxlan vni 130 split-horizon-mode
evpn binding vpn-instance evrf4
#
interface Vbdif10
ip binding vpn-instance vpn1
ip address 10.1.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0002
vxlan anycast-gateway enable
arp collect host enable
#
interface Vbdif20
ip binding vpn-instance vpn1
ip address 10.2.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0003
vxlan anycast-gateway enable
arp collect host enable
#
interface Vbdif30
ip binding vpn-instance vpn1
ip address 10.3.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0001
vxlan anycast-gateway enable
arp collect host enable
#
interface Vbdif40
ip binding vpn-instance vpn1
ip address 10.4.1.1 255.255.255.0
arp generate-rd-table enable
mac-address 00e0-fc00-0004
vxlan anycast-gateway enable
arp collect host enable
#
interface GigabitEthernet1/0/1
undo shutdown
ip address 10.6.4.1 255.255.255.0
#
interface GigabitEthernet1/0/2
undo shutdown
ip address 10.6.2.2 255.255.255.0
#
interface GigabitEthernet1/0/3.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface GigabitEthernet1/0/4.1 mode l2
encapsulation dot1q vid 20
rewrite pop single
bridge-domain 20
#
interface GigabitEthernet1/0/5.1 mode l2
encapsulation dot1q vid 10
rewrite pop single
bridge-domain 10
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
interface Nve1
source 1.1.1.1
vni 100 head-end peer-list protocol bgp
vni 110 head-end peer-list protocol bgp
vni 120 head-end peer-list protocol bgp
vni 130 head-end peer-list protocol bgp
#
bgp 100
peer 2.2.2.2 as-number 100
peer 2.2.2.2 connect-interface LoopBack1
peer 3.3.3.3 as-number 100
peer 3.3.3.3 connect-interface LoopBack1
peer 4.4.4.4 as-number 100
peer 4.4.4.4 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 2.2.2.2 enable
peer 3.3.3.3 enable
peer 4.4.4.4 enable
#
ipv4-family vpn-instance vpn1
import-route static
maximum load-balancing 16
advertise l2vpn evpn import-route-multipath
#
l2vpn-family evpn
undo policy vpn-target
bestroute add-path path-number 16
peer 2.2.2.2 enable
peer 2.2.2.2 advertise arp
peer 2.2.2.2 advertise encap-type vxlan
peer 3.3.3.3 enable
peer 3.3.3.3 advertise arp