cnBNG CP Single VM AIO Deployment Guide
Gurpreet S, TME MIG
On this page:
Introduction
Networking
Prerequisite (optional, if DC already exists)
Step 1: Inception CM Deployment
Step 2: SSH Key Generation
Step 3: cnBNG CP Deployment using Inception CM
Verifications
Initial cnBNG CP Configurations
Introduction
AIO VMware Deployment means an All-In-One deployment of the cnBNG Control Plane in a VMware ESXi environment.
This is a single-VM deployment of the cnBNG Control Plane which deploys the following in a single customized
base Ubuntu VM:
SMI cluster: cloud-native infra including Kubernetes, Docker, keepalived, Calico, Istio, etc.
CEE application: Common Execution Environment for telemetry, SNMP, bulkstats, alerts, etc.
cnBNG Control Plane application: core BNG Control Plane application
The deployment is done in three steps, where the third step uses automation provided by the Inception Server to deploy
cnBNG CP in All-In-One form. The main steps for deployment are:
1. Inception CM deployment
2. SSH key generation
3. cnBNG CP deployment using Inception CM
After the deployment, we generally apply the initial init-config to the cnBNG CP Ops Center and move the system to
running mode. In this tutorial this step is explained towards the end.
4. Now add a host to the cluster by selecting Add Host from the right-click menu on the newly created cluster. Follow the
on-screen instructions to add the host.
This is done for security reasons: by default, the cnBNG CP VM can only be accessed with an SSH key and not with a
password.
ssh-keygen -t rsa
Note down the SSH keys in a file from .ssh/id_rsa (private) and .ssh/id_rsa.pub (public).
Remove line breaks from the private key and replace them with the string "\n".
Remove line breaks from the public key if there are any.
Note: You can use the following commands to replace line breaks:
sed -z 's/\n/\\n/g' .ssh/id_rsa
sed -z 's/\n/\\n/g' .ssh/id_rsa.pub
In the next few steps we will be setting up configuration in the SMI Cluster Deployer. We will be using config mode
and, after adding all the configs, we will commit them.
Create the environment config. This config is used by the Inception deployer to get access to vCenter.
environments vmware
vcenter server your-vcenter-ip
vcenter allow-self-signed-cert true
vcenter user your-vcenter-username
vcenter password your-vcenter-password
vcenter datastore your-vcenter-host-local-datastore
vcenter cluster cnBNG_SMI_CL01
vcenter datacenter cnBNG_DC_01
vcenter host your-vcenter-host
vcenter nics "VM Network"
exit
exit
network:
  version: 2
  ethernets:
    ens192:
      dhcp4: false
      addresses:
        - {{K8S_SSH_IP}}/24
      routes:
        - to: 0.0.0.0/0
          via: your-gateway-ip
      nameservers:
        addresses:
          - your-dns1
"\n network:\n version: 2\n ethernets:\n ens192:\n dhcp4: false
Note: You can use the following command to replace line breaks:
sed -z 's/\n/\\n/g' netplan.yaml
Create the cluster config with the SSH public and private keys from Step 2 and the netplan from the previous step:
clusters your-cnbng-cp-cluster-name
environment vmware
addons ingress bind-ip-address your-cnbng-cp-vm-ip
addons ingress enabled
addons istio enabled
configuration master-virtual-ip your-cnbng-cp-vm-ip
configuration master-virtual-ip-interface ens192
configuration pod-subnet 192.202.0.0/16
configuration allow-insecure-registry true
node-defaults initial-boot default-user your-cnbng-cp-vm-login-name
node-defaults initial-boot default-user-ssh-public-key "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDDH4uQdrxTvgFg32s
node-defaults k8s ssh-username your-cnbng-cp-vm-login-name
node-defaults k8s ssh-connection-private-key "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEAwx+LkHa8U74BYN9
node-defaults netplan template " network:\n version: 2\n ethernets:\n ens192:\n
node-defaults os ntp enabled
node-defaults os ntp servers your-ntp-server1
exit
node-defaults os ntp servers your-ntp-server2
exit
In the above config, we have enabled ingress for Grafana access and configured two NTP servers.
clusters your-cnbng-cp-cluster-name
nodes your-cnbng-cp-vm-name
k8s node-type master
k8s ssh-ip your-cnbng-cp-vm-ip
k8s node-labels disktype ssd
exit
k8s node-labels smi.cisco.com/node-type oam
exit
The cnBNG software is available as a tarball and can be hosted on a local HTTP server for offline deployment. In this
step we configure the software repository locations for the tarballs. We set up software cnf for both cnBNG CP and
CEE. The URL and SHA256 depend on the image version and its location, so these two could change for
your deployment. A quick way to compute the checksums and host the files is sketched after the config below.
software cnf bng
url http://192.168.107.148/your/image/location/bng.2021.04.m0.i74.tar
sha256 e36b5ff86f35508539a8c8c8614ea227e67f97cf94830a8cee44fe0d2234dc1c
description bng-products
exit
software cnf cee
url http://192.168.107.148/your/image/location/cee-2020.02.6.i04.tar
sha256 b5040e9ad711ef743168bf382317a89e47138163765642c432eb5b80d7881f57
description cee-products
exit
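If you are hosting the tarballs yourself, the checksums and a simple HTTP server can be produced as follows (a minimal sketch, assuming a Linux host with Python 3; file names and paths are illustrative and must match your download location):
# compute the SHA256 values to paste into the sha256 fields above
sha256sum bng.2021.04.m0.i74.tar cee-2020.02.6.i04.tar
# serve the directory containing the tarballs over HTTP on port 80
cd /your/image/location && sudo python3 -m http.server 80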
clusters your-cnbng-cp-cluster-name
ops-centers bng bng
repository-local bng
sync-default-repository true
netconf-ip your-cnbng-cp-vm-ip
netconf-port 3022
ssh-ip your-cnbng-cp-vm-ip
ssh-port 2024
ingress-hostname your-cnbng-cp-vm-ip.nip.io
initial-boot-parameters use-volume-claims true
initial-boot-parameters first-boot-password your-password1
initial-boot-parameters auto-deploy false
initial-boot-parameters single-node true
exit
ops-centers cee global
repository-local cee
sync-default-repository true
netconf-ip your-cnbng-cp-vm-ip
netconf-port 3024
ssh-ip your-cnbng-cp-vm-ip
ssh-port 2023
ingress-hostname your-cnbng-cp-vm-ip.nip.io
initial-boot-parameters use-volume-claims true
Trigger the sync and monitor it from the SMI Cluster Deployer; it takes around 45 minutes to sync and deploy the cluster.
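The exact CLI varies slightly by SMI Cluster Deployer release; as an assumption based on typical SMI deployments, the sync is started and followed along these lines (using the cluster name configured above):
clusters your-cnbng-cp-cluster-name actions sync run debug true
monitor sync-logs your-cnbng-cp-cluster-name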
Verifications
cisco@pod1-cnbng-cp:~$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
bng-bng documentation-95c8f45d9-8g4qd 1/1 Running 0 5m
bng-bng ops-center-bng-bng-ops-center-77fb6479fc-dtvt2 5/5 Running 0 5m
bng-bng smart-agent-bng-bng-ops-center-8d9fffbfb-5cdqm 1/1 Running 1 5m
cee-global alert-logger-7c6d5b6596-jrrx6 1/1 Running 0 5m
cee-global alert-router-549c8fb66c-4shz6 1/1 Running 0 5m
cee-global alertmanager-0 1/1 Running 0 5m
cee-global blackbox-exporter-n9mzx 1/1 Running 0 5m
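A couple of additional checks that are useful at this point (standard kubectl commands; the cee-global namespace matches the CEE ops-center configured above):
# confirm the single node is Ready and carries the expected labels
kubectl get nodes --show-labels
# confirm the Grafana ingress created by CEE
kubectl get ingress -n cee-global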
Check the Grafana ingress and try logging in to it (username: admin, password: «your password as per the CEE Ops
Center config»).
We can log in to the Grafana GUI from Chrome or any other browser at the URL: https://grafana.your-cnbng-cp-vm-ip.nip.io/
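For a quick command-line reachability check of the same ingress before opening a browser (curl is assumed to be available; -k skips certificate verification for the self-signed certificate):
curl -k -I https://grafana.your-cnbng-cp-vm-ip.nip.io/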
We can verify SSH access to the cnBNG Ops Center CLI (ssh-port 2024 as configured above):
cloud-user@inception:~$ ssh [email protected] -p 2024
[email protected]'s password:
We can also test the NETCONF interface availability of the cnBNG Ops Center using ssh.
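A minimal sketch of such a NETCONF check, assuming the netconf-port configured above (3022 for the bng ops-center) and the standard ssh netconf subsystem:
ssh [email protected] -p 3022 -s netconf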
If you have deployed cnBNG CP afresh, then most probably the initial cnBNG CP configuration is not yet applied on the Ops
Center. Follow the steps below to apply the initial configuration to the cnBNG CP Ops Center.
cisco@pod100-cnbng-cp:~$ ssh [email protected] -p 2024
Warning: Permanently added '[192.168.107.150]:2024' (RSA) to the list of known hosts.
[email protected]'s password:
[pod100/bng] bng# config
Entering configuration mode terminal
[pod100/bng] bng(config)#
Apply the following initial configuration, with changes to the "endpoint radius" and "udp-proxy" configs. Both "endpoint
radius" and "udp-proxy" should use the IP of the cnBNG CP service-network-side protocol VIP; in the case of AIO it
should be the IP of the AIO VM used for peering between cnBNG CP and UP.
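Purely as an illustration (the exact cnBNG CP Ops Center CLI depends on the release, so treat the element names below as an assumption and refer to the configuration guide for your image), the fragment in question points both endpoints at the AIO VM IP:
endpoint radius
 vip-ip your-cnbng-cp-vm-ip
exit
endpoint udp-proxy
 vip-ip your-cnbng-cp-vm-ip
exit
commit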
Tags: bng, broadband network gateway, cisco, cloud native bng, cnbng