Install Debug Output for Values

The document contains a detailed log of the installation process for a Kubernetes application named 'ckey', including warnings about insecure configuration files and the deployment status. It outlines various resources being created and deleted, along with their configurations and image details. Additionally, it provides information on the global settings and specific components related to the application, such as databases and job configurations.

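Logs of this form are produced when the chart is installed with Helm's debug output enabled. Based on the release name, chart path and namespace that appear in the log, the installation was invoked roughly as follows (the exact flags and the values file name are assumptions, since the actual command comes from the deployment scripts and is not shown here):

    helm install ricplt-ckey-chart /home/nokia/Builds/ric_package/ckey-1.0.0.tgz \
        --namespace ricplt --debug -f ckey-values.yaml   # values file name is hypothetical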

Wed Mar 5 11:42:28 IST 2025 | INFO | Installing ckey

Wed Mar 5 11:42:28 IST 2025 | INFO | Check /home/nokia/Builds/ric_package/deployment_scripts/logs/install.log file to track the current status
WARNING: Kubernetes configuration file is group-readable. This is insecure.
Location: /home/labadmin/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure.
Location: /home/labadmin/.kube/config
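These two warnings are emitted by Helm whenever the kubeconfig file is readable by its group or by everyone. They do not affect the installation, and can be silenced by restricting the file to its owner:

    chmod 600 /home/labadmin/.kube/config   # drop group/world read access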
install.go:224: 2025-03-05 11:42:28.836450204 +0530 IST m=+0.044111525 [debug]
Original chart version: ""
install.go:241: 2025-03-05 11:42:28.836517423 +0530 IST m=+0.044178742 [debug]
CHART PATH: /home/nokia/Builds/ric_package/ckey-1.0.0.tgz

client.go:486: 2025-03-05 11:42:30.260778182 +0530 IST m=+1.468439493 [debug]
Starting delete for "ricplt-ckey-chart-ckey" Secret
client.go:490: 2025-03-05 11:42:30.264091018 +0530 IST m=+1.471752358 [debug]
Ignoring delete failure for "ricplt-ckey-chart-ckey" /v1, Kind=Secret: secrets
"ricplt-ckey-chart-ckey" not found
wait.go:104: 2025-03-05 11:42:30.264143258 +0530 IST m=+1.471804569 [debug]
beginning wait for 1 resources to be deleted with timeout of 1h0m0s
client.go:142: 2025-03-05 11:42:30.372891835 +0530 IST m=+1.580553150 [debug]
creating 1 resource(s)
client.go:142: 2025-03-05 11:42:30.379238762 +0530 IST m=+1.586900072 [debug]
creating 32 resource(s)
client.go:486: 2025-03-05 11:42:30.466920943 +0530 IST m=+1.674582263 [debug]
Starting delete for "ricplt-ckey-chart-master-realm-configuration-job" Job
client.go:490: 2025-03-05 11:42:30.474317877 +0530 IST m=+1.681979206 [debug]
Ignoring delete failure for "ricplt-ckey-chart-master-realm-configuration-job"
batch/v1, Kind=Job: jobs.batch "ricplt-ckey-chart-master-realm-configuration-job"
not found
wait.go:104: 2025-03-05 11:42:30.474682549 +0530 IST m=+1.682343868 [debug]
beginning wait for 1 resources to be deleted with timeout of 1h0m0s
client.go:142: 2025-03-05 11:42:30.542990402 +0530 IST m=+1.750651729 [debug]
creating 1 resource(s)
client.go:712: 2025-03-05 11:42:30.551415633 +0530 IST m=+1.759076948 [debug]
Watching for changes to Job ricplt-ckey-chart-master-realm-configuration-job with
timeout of 1h0m0s
client.go:740: 2025-03-05 11:42:30.567205791 +0530 IST m=+1.774867108 [debug]
Add/Modify event for ricplt-ckey-chart-master-realm-configuration-job: ADDED
client.go:779: 2025-03-05 11:42:30.567283352 +0530 IST m=+1.774944673 [debug]
ricplt-ckey-chart-master-realm-configuration-job: Jobs active: 0, jobs failed: 0,
jobs succeeded: 0
client.go:740: 2025-03-05 11:42:30.573583012 +0530 IST m=+1.781244350 [debug]
Add/Modify event for ricplt-ckey-chart-master-realm-configuration-job: MODIFIED
client.go:779: 2025-03-05 11:42:30.573640025 +0530 IST m=+1.781301360 [debug]
ricplt-ckey-chart-master-realm-configuration-job: Jobs active: 1, jobs failed: 0,
jobs succeeded: 0
client.go:740: 2025-03-05 11:42:32.596079787 +0530 IST m=+3.803741119 [debug]
Add/Modify event for ricplt-ckey-chart-master-realm-configuration-job: MODIFIED
client.go:779: 2025-03-05 11:42:32.596161956 +0530 IST m=+3.803823286 [debug]
ricplt-ckey-chart-master-realm-configuration-job: Jobs active: 1, jobs failed: 0,
jobs succeeded: 0
client.go:740: 2025-03-05 11:45:46.963579546 +0530 IST m=+198.171240861 [debug]
Add/Modify event for ricplt-ckey-chart-master-realm-configuration-job: MODIFIED
client.go:779: 2025-03-05 11:45:46.963716529 +0530 IST m=+198.171377856 [debug]
ricplt-ckey-chart-master-realm-configuration-job: Jobs active: 1, jobs failed: 0,
jobs succeeded: 0
client.go:740: 2025-03-05 11:45:48.25785033 +0530 IST m=+199.465511669 [debug]
Add/Modify event for ricplt-ckey-chart-master-realm-configuration-job: MODIFIED
client.go:779: 2025-03-05 11:45:48.257912748 +0530 IST m=+199.465574082 [debug]
ricplt-ckey-chart-master-realm-configuration-job: Jobs active: 0, jobs failed: 0,
jobs succeeded: 0
client.go:740: 2025-03-05 11:45:48.279145238 +0530 IST m=+199.486806577 [debug]
Add/Modify event for ricplt-ckey-chart-master-realm-configuration-job: MODIFIED
client.go:486: 2025-03-05 11:45:48.291699265 +0530 IST m=+199.499360576 [debug]
Starting delete for "ricplt-ckey-chart-master-realm-configuration-job" Job
wait.go:104: 2025-03-05 11:45:48.305335418 +0530 IST m=+199.512996728 [debug]
beginning wait for 1 resources to be deleted with timeout of 1h0m0s
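The master-realm configuration hook job above runs for a little over three minutes before Helm deletes it again per its hook-delete-policy. While it is running, its progress can be followed from another terminal, for example:

    kubectl logs -n ricplt job/ricplt-ckey-chart-master-realm-configuration-job -f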
NAME: ricplt-ckey-chart
LAST DEPLOYED: Wed Mar 5 11:42:28 2025
NAMESPACE: ricplt
STATUS: deployed
REVISION: 1
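This release summary can be reproduced at any time after installation with the standard Helm commands, e.g.:

    helm status ricplt-ckey-chart -n ricplt
    helm list -n ricplt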
USER-SUPPLIED VALUES:
ckey:
cbur:
enabled: false
securityContext:
runAsGroup: auto
runAsUser: auto
dbAddress: ricplt-cmdb-chart-mysql.ricplt.svc.cluster.local
enabled: true
frontendURL: https://10.183.147.71:31776
httpRelativePath: /usermgmt
images:
cbur:
imageName: cbur-agent
imagePullSecrets: []
imageRepo: ric
imageTag: 1.3.0-alpine-1338
keycloak:
imageName: ckey-keycloak
imagePullSecrets: []
imageRepo: ric
imageTag: 24.0.5.2-rocky8-jre17-47
kubectl:
imageName: kubectl
imagePullSecrets: []
imageRepo: ric
imageTag: 1.28.12-rocky8-nano-20240801
masterRealmConfigJob:
imageName: ckey-py
imagePullSecrets: []
imageRepo: ric
imageTag: 1.1.4-rocky8-python3.11-3
pullPolicy: IfNotPresent
resourceWatcherJob:
imageName: ckey-py
imagePullSecrets: []
imageRepo: ric
imageTag: 1.1.4-rocky8-python3.11-3
ingress:
enabled: true
internalCburRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
internalCustomProviderRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
internalKeycloakPyRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
internalKeycloakRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
internalKubectlRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
masterRealmConfigurationJob:
jobActiveDeadline: 6000
probeDelays:
startupProbeFailureThreshold: 6000
replicaCount: 2
securityContext:
fsGroup: auto
runAsGroup: auto
runAsUser: auto
securityenabled: false
ckng:
ckng:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/ckng-api-gateway
tag: 3.4.x-622-rocky8
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
ckng-operator:
cleanerJob:
securityContext:
enabled: false
controller:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/ckng-controller
tag: 5.0.0-94-rocky8
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
kubectl:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/kubectl
tag: 1.28.12-rocky8-nano-20240801
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
migrations:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/ckng-migrations
tag: 5.0.0-94-rocky8
enabled: false
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
securityContext:
enabled: false
multitenant:
namespaceUrlPrefix: false
serviceUrlPrefix: false
replicas: 2
securityContext:
enabled: false
ckngValidator:
config:
replicas: 2
securityContext:
enabled: false
cleanerJob:
securityContext:
enabled: false
config:
replicas: 2
securityContext:
enabled: false
configProvider:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/ckng-config-provider
tag: 3.4.x-622-rocky8
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
enabled: true
kubectl:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/kubectl
tag: 1.28.12-rocky8-nano-20240801
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
postDeleteJob:
securityContext:
enabled: false
service:
ports:
proxy:
targetPort: 8000
tests:
securityContext:
enabled: false
cmdb:
_internalRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
admin:
image:
name: ric/cmdb-admin
tag: 6.3-4.7005
cbur:
enabled: false
image:
name: ric/cbur-agent
tag: 1.2.2-alpine-51
cluster_type: simplex
containerSecurityContext:
disabled: true
enabled: true
global:
_registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
registry: null
internalRegistry: null
mariadb:
auth:
image:
name: ric/csfdb-zt-proxy
tag: 1.1-7.58
count: 1
databases:
- name: db4keycloak
image:
name: ric/cmdb-mariadb
tag: 6.3-4.7005
metrics:
image:
name: ric/cmdb-mysqld-exporter
tag: 0.15.1-4.310
pdb:
enabled: true
minAvailable: 0%
persistence:
backup:
storageClass: standard-csi-nova
storageClass: standard-csi-nova
temp:
storageClass: standard-csi-nova
users:
- credentialName: null
host: '%'
name: keycloak
object: '`%db4keycloak`.*'
password: cjAwdHIwMHQ=
privilege: ALL
requires: ""
with: GRANT OPTION
maxscale:
auth:
image:
name: ric/csfdb-zt-proxy
tag: 1.1-7.58
image:
name: ric/cmdb-maxscale
tag: 6.3-4.7005
metrics:
enabled: false
image:
name: ric/cmdb-maxctrl-exporter
tag: 0.1.0-26.310
podSecurityContext:
disabled: true
securityenabled: false
define: "1234"
global:
activeDeadlineSeconds: 300
backoffLimit: 0
infranamespace: ricinfra
k8sAPIHost: https://kubernetes.default.svc.cluster.local/
mecnamespace: ricplt
mlpaaskfnamespace: mlpaaskubeflow
nc_image: ric/mlpaas_oss_ubuntu_nc:2.0
platformnamespace: ricplt
projectname: ric
ricuinamespace: ric-dashboard
sepmlpaasnamespace: sepmlpaas
tillerNamespace: ricxapp
ttlSecondsAfterFinished: 300
xappnamespace: ricxapp
ric:
a1mediator:
a1mediator:
image:
name: a1
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
resources:
limits:
cpu: 50m
memory: 4Gi
requests:
cpu: 5m
memory: 125Mi
rmr_timeout_config:
a1_rcv_retry_times: 20
ins_del_no_resp_ttl: 5
ins_del_resp_ttl: 10
enabled: false
loglevel: ERROR
securityenabled: true
alarmmanager:
alarmmanager:
alertManagerAddress: infra-cpro-alertmanager-ext
image:
name: alarm_go
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
enabled: true
loglevel: "1"
securityenabled: true
appmgr:
enabled: true
image:
name: appmgr
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
loglevel: "1"
resources:
limits:
cpu: 50m
memory: 500Mi
requests:
cpu: 5m
memory: 125Mi
securityenabled: false
backuprestore:
backuprestore:
image:
name: vpp_restore_backup
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 06.10.2022
persistence:
enabled: true
persistentVolume:
size: 30Gi
storageClass: ocs-storagecluster-cephfs
pvClusterSize: 1
resources:
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 100m
memory: 100Mi
controls:
logger:
loglevel: "1"
enabled: true
securityenabled: false
ccm:
ccm:
image:
name: ccm
nc_image: mlpaas_oss/ubuntu_nc:2.0
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 06.10.2022
resources:
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 100m
memory: 100Mi
controls:
ccm:
ricBuildId: 25.03.05.0551
ricInstanceId: "1234"
ricReleaseId: 25r2ric
logger:
loglevel: "1"
enabled: true
oamgui:
enabled: false
image:
name: oamgui
pullPolicy: IfNotPresent
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: latest
securityenabled: false
dcapterm:
dcapterm:
image:
name: dcapterm
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 20.50.3
replicaCount: 1
securityenabled: false
enabled: false
loglevel: "1"
e2mgr:
appConfigFile: |
loglevel: 1
e2mgr:
globalRicId:
mcc: "310"
mnc: "411"
ricId: AACCE
image:
name: e2mgr
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
liveness:
api: v1/health
enabled: true
initialDelaySeconds: 10
periodSeconds: 60
privilegedmode: false
readiness:
api: v1/health
enabled: true
initialDelaySeconds: 10
periodSeconds: 60
resources:
limits:
cpu: 1000m
memory: 1Gi
requests:
cpu: 100m
memory: 256Mi
rnibWriter:
ranManipulationMessageChannel: RAN_MANIPULATION
stateChangeMessageChannel: RAN_CONNECTION_STATUS_CHANGE
enabled: false
securityenabled: true
e2term:
common_env_variables:
ConfigMapName: /etc/config/log-level
ServiceName: RIC_E2_TERM
e2term:
alpha:
cni:
Multus:
enabled: false
interface: gnb
namespace: ricplt
network: macvlan-conf-1
e2termnodeportenabled: false
env:
messagecollectorfile: /data/outgoing/
print: "1"
hostnetworkmode: false
image:
name: e2
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
nodeport: ""
pizpub:
enabled: false
privilegedmode: false
replicaCount: 2
resources:
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 100m
memory: 256Mi
enabled: false
loglevel: "1"
securityenabled: true
enabled: true
infrastructure:
enabled: false
jaegeradapter:
enabled: false
jaegeradapter:
image:
name: jaegertracing/all-in-one
registry: docker.io
tag: 1.12
lwsd:
enabled: true
loglevel: "1"
lwsd:
image:
name: lwsd
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 21.01.01
resources:
limits:
cpu: 1000m
memory: 5Gi
requests:
cpu: 50m
memory: 100Mi
securityenabled: false
noma:
cni:
Multus:
enabled: false
interface: eth1
interface_ip: 2a00:8a00:a000:1111::3d/64
namespace: ricplt
network: macvlan-conf-noma
enabled: false
image:
name: noma
repository: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 28.09.2021
resources:
limits:
cpu: 2
memory: 4Gi
requests:
cpu: 50m
memory: 100Mi
ricInstanceId: "1234"
ricInstanceName: RIC
ricReleaseId: 25r2ric
ricReleaseVer: 25.03.05.0551
securityenabled: true
server:
host: 0.0.0.0
ne3sPort: 8080
nodePort: null
restPort: 8087
tls:
enabled: false
nodePort: null
port: 8443
o1mediator:
enabled: false
loglevel: "1"
o1mediator:
image:
name: o1
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
resources:
limits:
cpu: 50m
memory: 150Mi
requests:
cpu: 10m
memory: 50Mi
securityenabled: true
oamtermination:
OAMTAdminResources:
resources:
limits:
cpu: 8000m
memory: 30Gi
requests:
cpu: 500m
memory: 1Gi
OAMTAdminreplicaCount: 1
OAMTDistResources:
resources:
limits:
cpu: 1000m
memory: 500Mi
requests:
cpu: 25m
memory: 100Mi
OAMTDistreplicaCount: 1
appConfigFile: |
"enbdetails": [
#{
# "btshost": "10.53.203.36",
# "btsport": "443",
# "btsusername": "Nemuadmin",
# "btspassword": "nemuuser",
# "btstype": "enb21b",
# "btsid": "13B6",
# "connectionmode": "server"
#},
]
"crandetails": [
#{
# "btshost": "10.53.203.37",
# "btsport": "443",
# "btsusername": "Nemuadmin",
# "btspassword": "nemuuser",
# "btstype": "cran",
# "btsid": "13B7",
# "connectionmode": "server"
#},
]
"SupportedMO": [
{
"MOClass": "NOKLTE:LNHOIF",
"MOName": "MRBTS.LNBTS.LNCEL.LNHOIF"
},
{
"MOClass": "NOKLTE:LNREL",
"MOName": "MRBTS.LNBTS.LNCEL.LNREL"
},
{
"MOClass": "NOKLTE:AMLEPR",
"MOName": "MRBTS.LNBTS.LNCEL.AMLEPR"
},
{
"MOClass": "NOKLTE:LNCEL",
"MOName": "MRBTS.LNBTS.LNCEL"
},
{
"MOClass": "NOKLTE:IAFIM",
"MOName": "MRBTS.LNBTS.LNCEL.IAFIM"
},
{
"MOClass": "NOKLTE:IRFIM",
"MOName": "MRBTS.LNBTS.LNCEL.IRFIM"
},
{
"MOClass": "NOKLTE:PSGRP",
"MOName": "MRBTS.LNBTS.PSGRP"
},
{
"MOClass": "NOKLTE:SIB",
"MOName": "MRBTS.LNBTS.LNCEL.SIB"
},
{
"MOClass": "NOKLTE:LNBTS",
"MOName": "MRBTS.LNBTS"
},
{
"MOClass": "NOKLTE:LNADJL",
"MOName": "MRBTS.LNBTS.LNADJ.LNADJL"
},
{
"MOClass": "NOKLTE:LNADJ",
"MOName": "MRBTS.LNBTS.LNADJ"
},
{
"MOClass": "NOKLTE:LNCEL_FDD",
"MOName": "MRBTS.LNBTS.LNCEL.LNCEL_FDD"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRDU",
"MOName": "MRBTS.NRBTS.NRDU"
},
{
"MOClass": "NOKLTE:LNADJGNB",
"MOName": "MRBTS.LNBTS.LNADJGNB"
},
{
"MOClass": "com.nokia.srbts.eqm:RMOD",
"MOName": "MRBTS.EQM.APEQM.RMOD"
},
{
"MOClass": "com.nokia.srbts.eqmr:RMOD_R",
"MOName": "MRBTS.EQM_R.APEQM_R.RMOD_R"
},
{
"MOClass": "com.nokia.srbts.eqm:RETU",
"MOName": "MRBTS.EQM.APEQM.ALD.RETU"
},
{
"MOClass": "com.nokia.srbts.eqmr:RETU_R",
"MOName": "MRBTS.EQM_R.APEQM_R.ALD_R.RETU_R"
},
{
"MOClass": "com.nokia.srbts.mnl:CHANNEL",
"MOName": "MRBTS.MNL.MNLENT.CELLMAPPING.LCELL.CHANNELGROUP.CHANNEL"
},
{
"MOClass": "com.nokia.srbts.mnl:CHANNEL",
"MOName": "MRBTS.MNL.MNLENT.CELLMAPPING.LTTRX.CHANNELGROUP.CHANNEL"
},
{
"MOClass": "com.nokia.srbts.eqmr:GNSSE_R",
"MOName": "MRBTS.EQM_R.APEQM_R.CABINET_R.SMOD_R.GNSSE_R"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRCELL",
"MOName": "MRBTS.NRBTS.NRCELL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRBTS",
"MOName": "MRBTS.NRBTS"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRPGRP",
"MOName": "MRBTS.NRBTS.NRPGRP"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRADJECELL",
"MOName": "MRBTS.NRBTS.NRADJECELL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRADJNRCELL",
"MOName": "MRBTS.NRBTS.NRADJNRCELL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRRELE",
"MOName": "MRBTS.NRBTS.NRCELL.NRRELE"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRREL",
"MOName": "MRBTS.NRBTS.NRCELL.NRREL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRPLMN_UACBAR",
"MOName": "MRBTS.NRBTS.NRCELL.NRPLMN_UACBAR"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRCELL_FDD",
"MOName": "MRBTS.NRBTS.NRCELL.NRCELL_FDD"
},
{
"MOClass": "NOKLTE:PMRPQH",
"MOName": "MRBTS.LNBTS.PMRNL.PMRPQH"
},
{
"MOClass": "com.nokia.srbts.mnl:CHANNEL",
"MOName": "MNL.MNLENT.CELLMAPPING.LCELL.CHANNELGROUP.CHANNEL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRMOPR_SA",
"MOName": "MRBTS.NRBTS.NRMOPR_SA"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRMOIMP_SA",
"MOName": "MRBTS.NRBTS.NRMOPR_SA.NRMOIMP_SA"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRDRB_5QI",
"MOName": "MRBTS.NRBTS.NRDRB_5QI"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRDRB_QCI",
"MOName": "MRBTS.NRBTS.NRDRB_QCI"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRDRB",
"MOName": "MRBTS.NRBTS.NRDRB"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRPMRNL",
"MOName": "MRBTS.NRBTS.NRPMRNL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRPMQAP",
"MOName": "MRBTS.NRBTS.NRPMRNL.NRPMQAP"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRIRFIM",
"MOName": "MRBTS.NRBTS.NRSYSINFO_PROFILE.NRIRFIM"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRLIM",
"MOName": "MRBTS.NRBTS.NRSYSINFO_PROFILE.NRLIM"
},
{
"MOClass" : "com.nokia.cu.5g:MRBTS",
"MOName" : "MRBTS"
},
]
"enbconf": {
"retry": 1
}
"loglevel": 1
"RANRequestTimeout" : 200 # maximum time that the go-routine waits for RAN
to respond.
"MaxNoOfParallelPOSTReqAllowedPerBts": 1 # this flag is added for future
purpose. This should be 1 always for now.
"ConnectionTimeout" : 20 # this value is for agent-cli to get response from
admin-cli/BTS
"AutoConnect": "true" # if true OAMTAdmin will attempt making connection to
BTS automatically during startup
"ResetConnectionTimeout" : 0 # Resets BTS connections after configured
seconds without restarting POD, valid only if value is ">0"
"ConnectionsPerInstance" : 1 # number of connection per OAMTAdmin pod
"HbTimer" : 40 #should be changed only in case of using WebEm simulator and
should never be 0
"ALARM_OAM_CONNECTION_FAILURE" : 72006
"ALARM_OAM_OPERATION_FAILURE" : 72013
conf:
ENVIRONMENT_VARIABLES:
pod_interface: eth0
Multus:
enabled: false
interface: enb
namespace: ricplt
network: macvlan-conf-1
enabled: false
image:
oamtadminImage: oamtadmin
oamtdistImage: oamtdistributor
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
oamtadminaffinity: false
securityenabled: true
pmbgen:
controls:
logger:
level: "1"
pmbParams:
build: 25.03.05.0551
instance: "1234"
release: 25r2ric
enabled: true
messaging:
ports:
nodeport: null
pmbgen:
image:
name: pmbgen
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
resources:
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 100m
memory: 200Mi
securityenabled: true
ric_dashboard:
enabled: true
loglevel: "1"
ric_dashboard:
image:
name: sep_dashboard
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
ingress:
host: null
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 100m
memory: 100Mi
securityenabled: false
rtmgr:
enabled: false
loglevel: "1"
rtmgr:
image:
name: rtmgr
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 1
memory: 4Gi
requests:
cpu: 100m
memory: 125Mi
securityenabled: true
securityenabled: false
submgr:
enabled: false
loglevel: "1"
securityenabled: true
submgr:
image:
name: submgr
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
resources:
limits:
cpu: 100m
memory: 256Mi
requests:
cpu: 50m
memory: 100Mi
trblmgr:
enabled: true
loglevel: "1"
resources:
limits:
cpu: 500m
memory: 5Gi
requests:
cpu: 50m
memory: 100Mi
securityenabled: false
trblmgr:
image:
name: trblmgr
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 20.50.3
xapp-onboarder:
enabled: false
xapponboarder:
allow_redeploy: "True"
image:
chartmuseum:
name: chartmuseum/chartmuseum
registry: docker.io
tag: v0.8.2
xapponboarder:
name: o-ran-sc/xapp-onboarder
registry: nexus3.o-ran-sc.org:10002
tag: 1.0.7

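The USER-SUPPLIED VALUES above are only the overrides passed in at install time; the COMPUTED VALUES that follow are those overrides merged with the chart's built-in defaults. Both views can be retrieved from the live release afterwards:

    helm get values ricplt-ckey-chart -n ricplt          # user-supplied overrides only
    helm get values ricplt-ckey-chart -n ricplt --all    # merged (computed) values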
COMPUTED VALUES:
ckey:
_ChartVersion: 12.3.1
alarmStateStorage: File
appendClusterDomainForJgroupsQuery: true
automountServiceAccountToken: false
cbur:
autoEnableCron: false
autoUpdateCron: false
backendMode: local
backupStorage:
class: ""
size: 400Mi
brHookPostRestore:
enable: false
timeout: 600
weight: 5
brPolicyWeight: 5
cronSpec: 0 0 * * *
enabled: false
ignoreFileChanged: true
maxiCopy: 5
resources:
limits:
ephemeral-storage: 1Gi
memory: 256Mi
requests:
cpu: 250m
ephemeral-storage: 1Gi
memory: 256Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsGroup: auto
runAsUser: auto
seccompProfile:
type: RuntimeDefault
terminationGracePeriodSecondsForBackup: 30
certExpiryAlarmPeriod: 10
certManager:
enabled: false
clusterDomain: cluster.local
commonLabels: true
containerSecurityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
custom:
job:
annotations: {}
labels: {}
pod:
annotations: {}
labels: {}
statefulset:
annotations: {}
labels: {}
customImagePullSecrets:
imagePullSecrets: []
customJavaOpts: ""
customPreStartScript: ""
databaseAlarmInitialDelay: 240
dbAddress: ricplt-cmdb-chart-mysql.ricplt.svc.cluster.local
dbAlarmCheckPeriod: 30000
dbName: db4keycloak
dbPort: 3306
dbUser: keycloak
dbVendor: mariadb
enableServiceLinks: false
enabled: true
extraEnv: |
# List of allowed TLS ciphers.
# - name: KC_HTTPS_CIPHER_SUITES
# value: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256"
# List of supported TLS versions by Keycloak. Possible values: "TLSv1.3" "TLSv1.2" "TLSv1.2,TLSv1.3". Default value is "TLSv1.3" which allows TLS 1.2 and 1.3 protocols.
# - name: KC_HTTPS_PROTOCOLS
# value: "TLSv1.3"
frontendURL: https://10.183.147.71:31776
genericEventListenerData: {}
geoRedundancy:
enabled: false
service:
jgroupsExternalTCPPort: 30556
jgroupsTCPPort: 7900
type: NodePort
global:
activeDeadlineSeconds: 300
annotations: {}
backoffLimit: 0
brHookServiceAccountName: ""
certManager: {}
containerNamePrefix: ""
createOCPInternalCertificateSecretServiceAccountName: ""
deletionServiceAccountName: ""
disablePodNamePrefixRestrictions: false
enableDefaultCpuLimits: false
flatRegistry: false
healingServiceAccountName: ""
hpa: {}
imagePullSecrets: []
infranamespace: ricinfra
ipFamilies: []
istio:
mtls: {}
sharedHttpGateway: {}
sidecar:
stopPort: 15000
isuServiceAccountName: ""
k8sAPIHost: https://kubernetes.default.svc.cluster.local/
labels: {}
masterRealmServiceAccountName: ""
mecnamespace: ricplt
mlpaaskfnamespace: mlpaaskubeflow
nc_image: ric/mlpaas_oss_ubuntu_nc:2.0
platformnamespace: ricplt
podNamePrefix: ""
populateSecretAdminPasswordServiceAccountName: ""
postheal: 0
preUpgradeServiceAccountName: ""
preheal: 0
projectname: ric
resourceWatcherServiceAccountName: ""
ricuinamespace: ric-dashboard
sepmlpaasnamespace: sepmlpaas
serviceAccountName: ""
statefulServiceAccountName: ""
tillerNamespace: ricxapp
ttlSecondsAfterFinished: 300
unifiedLogging:
extension: {}
syslog:
keyStore: {}
keyStorePassword: {}
rfc: {}
trustStore: {}
trustStorePassword: {}
xappnamespace: ricxapp
hookDeletePolicy: before-hook-creation, hook-succeeded
hostAliases: []
hpa:
maxReplicas: 2
minReplicas: 1
predefinedMetrics:
averageCPUThreshold: 80
averageMemoryThreshold: 80
enabled: true
httpRelativePath: /usermgmt
httpsPort: 8443
images:
cbur:
imageName: cbur-agent
imagePullSecrets: []
imageRepo: ric
imageTag: 1.3.0-alpine-1338
keycloak:
imageName: ckey-keycloak
imagePullSecrets: []
imageRepo: ric
imageTag: 24.0.5.2-rocky8-jre17-47
kubectl:
imageName: kubectl
imagePullSecrets: []
imageRepo: ric
imageTag: 1.28.12-rocky8-nano-20240801
masterRealmConfigJob:
imageName: ckey-py
imagePullSecrets: []
imageRepo: ric
imageTag: 1.1.4-rocky8-python3.11-3
pullPolicy: IfNotPresent
resourceWatcherJob:
imageName: ckey-py
imagePullSecrets: []
imageRepo: ric
imageTag: 1.1.4-rocky8-python3.11-3
ingress:
allowedPaths:
- path: /
type: Prefix
enabled: true
keycloakServicePort: 8443
initBusyBoxContainer:
resources:
limits:
ephemeral-storage: 1Gi
memory: 256Mi
requests:
cpu: 250m
ephemeral-storage: 1Gi
memory: 256Mi
internalCburRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
internalCustomProviderRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
internalKeycloakPyRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
internalKeycloakRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
internalKubectlRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
ipFamilies: []
istio:
createDrForClient: false
drTrafficPolicy:
loadBalancer:
consistentHash:
httpCookie:
name: AUTH_SESSION_ID
enabled: false
gateways:
- annotations: {}
enabled: false
hosts:
- '*'
ingressPodSelector:
istio: ingressgateway
labels: {}
name: ckey-gw
port: 443
protocol: HTTPS
tls:
credentialName: null
custom: {}
mode: SIMPLE
redirect: false
isVirtualServiceRequiredForHTTP: false
isVirtualServiceRequiredForHTTPS: true
mtls:
enabled: true
prefixReleaseNameForGatewayName: true
resources:
limits: {}
requests: {}
sharedHttpGateway: {}
virtualService:
gateways: []
hosts:
- ckey.io
tls:
port: 443
isuUpgrade:
cbur:
host: http://cbur-master-cbur.ncms.svc:80
namespace: ncms
passwordSecret: {}
cburBackup:
enabled: true
dbSwitching:
enabled: true
enabled: true
excludeCheckPattern: .*/(authenticate|token).*
preserveBackupRestoreMechanism: false
tmpPodsReplicaCount: 2
traceLogging: false
jdbcParams: ?autoReconnect=true
jgroupsFDPort: 57800
jgroupsTCPBindPort: 7800
kcHealthcheckEndpointEnabled: true
kcHostName: ""
kcMetricsEndpointEnabled: true
kcProxy: reencrypt
ldapalarm:
enabled: true
ldapAlarmInitialDelay: 240
realm: master
loggingConfiguration: |-
log4j.appender.syslog.Ssl.Timeout=15
log4j.appender.syslog.Ssl.CloseRequestType=GNUTLS_SHUT_WR
loginBannerAcceptMessage: OK
loginBannerFailedLoginCounterMessage: Failed login attempts after last successful login
loginBannerMainMessage: You are about to access a private system. This system is for the use of authorized users only. All connections are logged. Any unauthorized access or access attempts may be punishable to the fullest extent possible under the applicable local legislation.
loginBannerPreviousSuccessMessage: Your last successful login was on {0}.
loginBannerTitle: Login Banner
loginBannerWelcomeFirstNameMessage: Welcome, {0}.
loginBannerWelcomeUsernameMessage: Welcome, {0}.
managedBy: Helm
masterRealmConfigurationJob:
adminEventsExpiration: "1576800"
enableAdminEvents: true
enableBruteForceProtection: true
enableForgotPassword: true
enableLoginEvents: true
enableSSLRequireForAll: true
enabled: true
jobActiveDeadline: 6000
jobBackOffLimit: 6
loginEventsExpiration: "129600"
overwritePasswordPolicies: true
overwritePasswordTimestamp: true
setNokiaLoginTheme: true
memoryFactorForKeycloak: 0.7
memoryFactorForUnifiedLogger: 0.1
metric:
annotations:
prometheus.io/path: <httpRelativePath>/realms/master/metrics
prometheus.io/scheme: http
prometheus.io/scrape: "false"
scrapeMetricsOnHttpPort: false
uriMetricsDetailed: false
uriMetricsEnabled: true
nodeAffinity:
enabled: false
key: is_worker
value: true
nodeAntiAffinity:
enabled: true
nodeSelector: {}
partOf: ckey
passwordExpirationNotifer:
isEmailNotificationEnabled: false
passwordExpiryThresholdDays: 15
scheduleAtMinutes: 30
scheduledAtHour: 22
textBody: Your WebSSO login password is expiring soon. Please renew it.
pdb:
enabled: false
podAntiAffinity:
node:
topologyKey: kubernetes.io/hostname
zone:
topologyKey: topology.kubernetes.io/zone
podManagementPolicy: Parallel
preserve_keycloak_pvc: false
probeDelays:
livenessProbeFailureThreshold: 5
livenessProbeInitialDelay: 1
livenessProbePeriodSeconds: 15
livenessProbeTimeoutSeconds: 10
readinessProbeFailureThreshold: 1
readinessProbeInitialDelay: 1
readinessProbePeriodSeconds: 2
readinessProbeTimeoutSeconds: 1
startupProbeFailureThreshold: 6000
startupProbeInitialDelaySeconds: 1
startupProbePeriodSeconds: 2
startupProbeTimeoutSeconds: 1
proxyForwarding: xforwarded
pushEventListenerData: {}
rbac:
enabled: true
removeChartNameFromResourceName: false
replicaCount: 2
replicasManagedByHpa: false
repopulateSecretAdminPasswordField: true
resourceWatcherJob:
enabled: false
resources:
limits:
ephemeral-storage: 1Gi
memory: 2048Mi
requests:
cpu: 500m
ephemeral-storage: 1Gi
memory: 1024Mi
secretCredentials: {}
securityContext:
fsGroup: auto
runAsGroup: auto
runAsNonRoot: true
runAsUser: auto
seccompProfile:
type: RuntimeDefault
securityenabled: false
serviceSessionAffinity: ClientIP
serviceType: ClusterIP
terminationGracePeriodSecondsForSSO: 30
timeZone: {}
tls:
certManager:
caIssuer:
group: cert-manager.io
kind: ClusterIssuer
duration: 8760h
enabled: false
isTlsExternalCertViaCertManager: false
privateKey:
rotationPolicy: Always
renewBefore: 360h
useCaCert: false
tokenExpirationSeconds: 3600
unifiedLogging:
extension: {}
kcLog: gelf
logLevel: INFO
loggingJavaOpts: '-Djdk.internal.httpclient.disableHostnameVerification=true '
syslog:
keyStore: {}
keyStorePassword: {}
rfc: {}
trustStore: {}
trustStorePassword: {}
useServiceAccountVolumeProjection: true
ckng:
ckng:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/ckng-api-gateway
tag: 3.4.x-622-rocky8
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
ckng-operator:
cleanerJob:
securityContext:
enabled: false
controller:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/ckng-controller
tag: 5.0.0-94-rocky8
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
kubectl:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/kubectl
tag: 1.28.12-rocky8-nano-20240801
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
migrations:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/ckng-migrations
tag: 5.0.0-94-rocky8
enabled: false
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
securityContext:
enabled: false
multitenant:
namespaceUrlPrefix: false
serviceUrlPrefix: false
replicas: 2
securityContext:
enabled: false
ckngValidator:
config:
replicas: 2
securityContext:
enabled: false
cleanerJob:
securityContext:
enabled: false
config:
replicas: 2
securityContext:
enabled: false
configProvider:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/ckng-config-provider
tag: 3.4.x-622-rocky8
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
enabled: true
kubectl:
_imageFlavorMapping:
- flavor: rocky8
repository: ric/kubectl
tag: 1.28.12-rocky8-nano-20240801
image:
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
postDeleteJob:
securityContext:
enabled: false
service:
ports:
proxy:
targetPort: 8000
tests:
securityContext:
enabled: false
cmdb:
_internalRegistry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
admin:
image:
name: ric/cmdb-admin
tag: 6.3-4.7005
cbur:
enabled: false
image:
name: ric/cbur-agent
tag: 1.2.2-alpine-51
cluster_type: simplex
containerSecurityContext:
disabled: true
enabled: true
global:
_registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
registry: null
internalRegistry: null
mariadb:
auth:
image:
name: ric/csfdb-zt-proxy
tag: 1.1-7.58
count: 1
databases:
- name: db4keycloak
image:
name: ric/cmdb-mariadb
tag: 6.3-4.7005
metrics:
image:
name: ric/cmdb-mysqld-exporter
tag: 0.15.1-4.310
pdb:
enabled: true
minAvailable: 0%
persistence:
backup:
storageClass: standard-csi-nova
storageClass: standard-csi-nova
temp:
storageClass: standard-csi-nova
users:
- credentialName: null
host: '%'
name: keycloak
object: '`%db4keycloak`.*'
password: cjAwdHIwMHQ=
privilege: ALL
requires: ""
with: GRANT OPTION
maxscale:
auth:
image:
name: ric/csfdb-zt-proxy
tag: 1.1-7.58
image:
name: ric/cmdb-maxscale
tag: 6.3-4.7005
metrics:
enabled: false
image:
name: ric/cmdb-maxctrl-exporter
tag: 0.1.0-26.310
podSecurityContext:
disabled: true
securityenabled: false
define: "1234"
global:
activeDeadlineSeconds: 300
backoffLimit: 0
infranamespace: ricinfra
k8sAPIHost: https://kubernetes.default.svc.cluster.local/
mecnamespace: ricplt
mlpaaskfnamespace: mlpaaskubeflow
nc_image: ric/mlpaas_oss_ubuntu_nc:2.0
platformnamespace: ricplt
projectname: ric
ricuinamespace: ric-dashboard
sepmlpaasnamespace: sepmlpaas
tillerNamespace: ricxapp
ttlSecondsAfterFinished: 300
xappnamespace: ricxapp
ric:
a1mediator:
a1mediator:
image:
name: a1
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
resources:
limits:
cpu: 50m
memory: 4Gi
requests:
cpu: 5m
memory: 125Mi
rmr_timeout_config:
a1_rcv_retry_times: 20
ins_del_no_resp_ttl: 5
ins_del_resp_ttl: 10
enabled: false
loglevel: ERROR
securityenabled: true
alarmmanager:
alarmmanager:
alertManagerAddress: infra-cpro-alertmanager-ext
image:
name: alarm_go
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
enabled: true
loglevel: "1"
securityenabled: true
appmgr:
enabled: true
image:
name: appmgr
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
loglevel: "1"
resources:
limits:
cpu: 50m
memory: 500Mi
requests:
cpu: 5m
memory: 125Mi
securityenabled: false
backuprestore:
backuprestore:
image:
name: vpp_restore_backup
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 06.10.2022
persistence:
enabled: true
persistentVolume:
size: 30Gi
storageClass: ocs-storagecluster-cephfs
pvClusterSize: 1
resources:
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 100m
memory: 100Mi
controls:
logger:
loglevel: "1"
enabled: true
securityenabled: false
ccm:
ccm:
image:
name: ccm
nc_image: mlpaas_oss/ubuntu_nc:2.0
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 06.10.2022
resources:
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 100m
memory: 100Mi
controls:
ccm:
ricBuildId: 25.03.05.0551
ricInstanceId: "1234"
ricReleaseId: 25r2ric
logger:
loglevel: "1"
enabled: true
oamgui:
enabled: false
image:
name: oamgui
pullPolicy: IfNotPresent
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: latest
securityenabled: false
dcapterm:
dcapterm:
image:
name: dcapterm
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 20.50.3
replicaCount: 1
securityenabled: false
enabled: false
loglevel: "1"
e2mgr:
appConfigFile: |
loglevel: 1
e2mgr:
globalRicId:
mcc: "310"
mnc: "411"
ricId: AACCE
image:
name: e2mgr
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
liveness:
api: v1/health
enabled: true
initialDelaySeconds: 10
periodSeconds: 60
privilegedmode: false
readiness:
api: v1/health
enabled: true
initialDelaySeconds: 10
periodSeconds: 60
resources:
limits:
cpu: 1000m
memory: 1Gi
requests:
cpu: 100m
memory: 256Mi
rnibWriter:
ranManipulationMessageChannel: RAN_MANIPULATION
stateChangeMessageChannel: RAN_CONNECTION_STATUS_CHANGE
enabled: false
securityenabled: true
e2term:
common_env_variables:
ConfigMapName: /etc/config/log-level
ServiceName: RIC_E2_TERM
e2term:
alpha:
cni:
Multus:
enabled: false
interface: gnb
namespace: ricplt
network: macvlan-conf-1
e2termnodeportenabled: false
env:
messagecollectorfile: /data/outgoing/
print: "1"
hostnetworkmode: false
image:
name: e2
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
nodeport: ""
pizpub:
enabled: false
privilegedmode: false
replicaCount: 2
resources:
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 100m
memory: 256Mi
enabled: false
loglevel: "1"
securityenabled: true
enabled: true
infrastructure:
enabled: false
jaegeradapter:
enabled: false
jaegeradapter:
image:
name: jaegertracing/all-in-one
registry: docker.io
tag: 1.12
lwsd:
enabled: true
loglevel: "1"
lwsd:
image:
name: lwsd
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 21.01.01
resources:
limits:
cpu: 1000m
memory: 5Gi
requests:
cpu: 50m
memory: 100Mi
securityenabled: false
noma:
cni:
Multus:
enabled: false
interface: eth1
interface_ip: 2a00:8a00:a000:1111::3d/64
namespace: ricplt
network: macvlan-conf-noma
enabled: false
image:
name: noma
repository: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 28.09.2021
resources:
limits:
cpu: 2
memory: 4Gi
requests:
cpu: 50m
memory: 100Mi
ricInstanceId: "1234"
ricInstanceName: RIC
ricReleaseId: 25r2ric
ricReleaseVer: 25.03.05.0551
securityenabled: true
server:
host: 0.0.0.0
ne3sPort: 8080
nodePort: null
restPort: 8087
tls:
enabled: false
nodePort: null
port: 8443
o1mediator:
enabled: false
loglevel: "1"
o1mediator:
image:
name: o1
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
resources:
limits:
cpu: 50m
memory: 150Mi
requests:
cpu: 10m
memory: 50Mi
securityenabled: true
oamtermination:
OAMTAdminResources:
resources:
limits:
cpu: 8000m
memory: 30Gi
requests:
cpu: 500m
memory: 1Gi
OAMTAdminreplicaCount: 1
OAMTDistResources:
resources:
limits:
cpu: 1000m
memory: 500Mi
requests:
cpu: 25m
memory: 100Mi
OAMTDistreplicaCount: 1
appConfigFile: |
"enbdetails": [
#{
# "btshost": "10.53.203.36",
# "btsport": "443",
# "btsusername": "Nemuadmin",
# "btspassword": "nemuuser",
# "btstype": "enb21b",
# "btsid": "13B6",
# "connectionmode": "server"
#},
]
"crandetails": [
#{
# "btshost": "10.53.203.37",
# "btsport": "443",
# "btsusername": "Nemuadmin",
# "btspassword": "nemuuser",
# "btstype": "cran",
# "btsid": "13B7",
# "connectionmode": "server"
#},
]
"SupportedMO": [
{
"MOClass": "NOKLTE:LNHOIF",
"MOName": "MRBTS.LNBTS.LNCEL.LNHOIF"
},
{
"MOClass": "NOKLTE:LNREL",
"MOName": "MRBTS.LNBTS.LNCEL.LNREL"
},
{
"MOClass": "NOKLTE:AMLEPR",
"MOName": "MRBTS.LNBTS.LNCEL.AMLEPR"
},
{
"MOClass": "NOKLTE:LNCEL",
"MOName": "MRBTS.LNBTS.LNCEL"
},
{
"MOClass": "NOKLTE:IAFIM",
"MOName": "MRBTS.LNBTS.LNCEL.IAFIM"
},
{
"MOClass": "NOKLTE:IRFIM",
"MOName": "MRBTS.LNBTS.LNCEL.IRFIM"
},
{
"MOClass": "NOKLTE:PSGRP",
"MOName": "MRBTS.LNBTS.PSGRP"
},
{
"MOClass": "NOKLTE:SIB",
"MOName": "MRBTS.LNBTS.LNCEL.SIB"
},
{
"MOClass": "NOKLTE:LNBTS",
"MOName": "MRBTS.LNBTS"
},
{
"MOClass": "NOKLTE:LNADJL",
"MOName": "MRBTS.LNBTS.LNADJ.LNADJL"
},
{
"MOClass": "NOKLTE:LNADJ",
"MOName": "MRBTS.LNBTS.LNADJ"
},
{
"MOClass": "NOKLTE:LNCEL_FDD",
"MOName": "MRBTS.LNBTS.LNCEL.LNCEL_FDD"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRDU",
"MOName": "MRBTS.NRBTS.NRDU"
},
{
"MOClass": "NOKLTE:LNADJGNB",
"MOName": "MRBTS.LNBTS.LNADJGNB"
},
{
"MOClass": "com.nokia.srbts.eqm:RMOD",
"MOName": "MRBTS.EQM.APEQM.RMOD"
},
{
"MOClass": "com.nokia.srbts.eqmr:RMOD_R",
"MOName": "MRBTS.EQM_R.APEQM_R.RMOD_R"
},
{
"MOClass": "com.nokia.srbts.eqm:RETU",
"MOName": "MRBTS.EQM.APEQM.ALD.RETU"
},
{
"MOClass": "com.nokia.srbts.eqmr:RETU_R",
"MOName": "MRBTS.EQM_R.APEQM_R.ALD_R.RETU_R"
},
{
"MOClass": "com.nokia.srbts.mnl:CHANNEL",
"MOName": "MRBTS.MNL.MNLENT.CELLMAPPING.LCELL.CHANNELGROUP.CHANNEL"
},
{
"MOClass": "com.nokia.srbts.mnl:CHANNEL",
"MOName": "MRBTS.MNL.MNLENT.CELLMAPPING.LTTRX.CHANNELGROUP.CHANNEL"
},
{
"MOClass": "com.nokia.srbts.eqmr:GNSSE_R",
"MOName": "MRBTS.EQM_R.APEQM_R.CABINET_R.SMOD_R.GNSSE_R"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRCELL",
"MOName": "MRBTS.NRBTS.NRCELL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRBTS",
"MOName": "MRBTS.NRBTS"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRPGRP",
"MOName": "MRBTS.NRBTS.NRPGRP"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRADJECELL",
"MOName": "MRBTS.NRBTS.NRADJECELL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRADJNRCELL",
"MOName": "MRBTS.NRBTS.NRADJNRCELL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRRELE",
"MOName": "MRBTS.NRBTS.NRCELL.NRRELE"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRREL",
"MOName": "MRBTS.NRBTS.NRCELL.NRREL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRPLMN_UACBAR",
"MOName": "MRBTS.NRBTS.NRCELL.NRPLMN_UACBAR"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRCELL_FDD",
"MOName": "MRBTS.NRBTS.NRCELL.NRCELL_FDD"
},
{
"MOClass": "NOKLTE:PMRPQH",
"MOName": "MRBTS.LNBTS.PMRNL.PMRPQH"
},
{
"MOClass": "com.nokia.srbts.mnl:CHANNEL",
"MOName": "MNL.MNLENT.CELLMAPPING.LCELL.CHANNELGROUP.CHANNEL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRMOPR_SA",
"MOName": "MRBTS.NRBTS.NRMOPR_SA"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRMOIMP_SA",
"MOName": "MRBTS.NRBTS.NRMOPR_SA.NRMOIMP_SA"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRDRB_5QI",
"MOName": "MRBTS.NRBTS.NRDRB_5QI"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRDRB_QCI",
"MOName": "MRBTS.NRBTS.NRDRB_QCI"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRDRB",
"MOName": "MRBTS.NRBTS.NRDRB"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRPMRNL",
"MOName": "MRBTS.NRBTS.NRPMRNL"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRPMQAP",
"MOName": "MRBTS.NRBTS.NRPMRNL.NRPMQAP"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRIRFIM",
"MOName": "MRBTS.NRBTS.NRSYSINFO_PROFILE.NRIRFIM"
},
{
"MOClass": "com.nokia.srbts.nrbts:NRLIM",
"MOName": "MRBTS.NRBTS.NRSYSINFO_PROFILE.NRLIM"
},
{
"MOClass" : "com.nokia.cu.5g:MRBTS",
"MOName" : "MRBTS"
},
]
"enbconf": {
"retry": 1
}
"loglevel": 1
"RANRequestTimeout" : 200 # maximum time that the go-routine waits for RAN
to respond.
"MaxNoOfParallelPOSTReqAllowedPerBts": 1 # this flag is added for future
purpose. This should be 1 always for now.
"ConnectionTimeout" : 20 # this value is for agent-cli to get response from
admin-cli/BTS
"AutoConnect": "true" # if true OAMTAdmin will attempt making connection to
BTS automatically during startup
"ResetConnectionTimeout" : 0 # Resets BTS connections after configured
seconds without restarting POD, valid only if value is ">0"
"ConnectionsPerInstance" : 1 # number of connection per OAMTAdmin pod
"HbTimer" : 40 #should be changed only in case of using WebEm simulator and
should never be 0
"ALARM_OAM_CONNECTION_FAILURE" : 72006
"ALARM_OAM_OPERATION_FAILURE" : 72013
conf:
ENVIRONMENT_VARIABLES:
pod_interface: eth0
Multus:
enabled: false
interface: enb
namespace: ricplt
network: macvlan-conf-1
enabled: false
image:
oamtadminImage: oamtadmin
oamtdistImage: oamtdistributor
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
oamtadminaffinity: false
securityenabled: true
pmbgen:
controls:
logger:
level: "1"
pmbParams:
build: 25.03.05.0551
instance: "1234"
release: 25r2ric
enabled: true
messaging:
ports:
nodeport: null
pmbgen:
image:
name: pmbgen
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
resources:
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 100m
memory: 200Mi
securityenabled: true
ric_dashboard:
enabled: true
loglevel: "1"
ric_dashboard:
image:
name: sep_dashboard
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
ingress:
host: null
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 100m
memory: 100Mi
securityenabled: false
rtmgr:
enabled: false
loglevel: "1"
rtmgr:
image:
name: rtmgr
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 1
memory: 4Gi
requests:
cpu: 100m
memory: 125Mi
securityenabled: true
securityenabled: false
submgr:
enabled: false
loglevel: "1"
securityenabled: true
submgr:
image:
name: submgr
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
resources:
limits:
cpu: 100m
memory: 256Mi
requests:
cpu: 50m
memory: 100Mi
trblmgr:
enabled: true
loglevel: "1"
resources:
limits:
cpu: 500m
memory: 5Gi
requests:
cpu: 50m
memory: 100Mi
securityenabled: false
trblmgr:
image:
name: trblmgr
registry: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com
tag: 20.50.3
xapp-onboarder:
enabled: false
xapponboarder:
allow_redeploy: "True"
image:
chartmuseum:
name: chartmuseum/chartmuseum
registry: docker.io
tag: v0.8.2
xapponboarder:
name: o-ran-sc/xapp-onboarder
registry: nexus3.o-ran-sc.org:10002
tag: 1.0.7

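The HOOKS section below lists the manifests annotated with helm.sh/hook; Helm applies these at specific points of the release lifecycle (pre-install, pre-upgrade, post-delete, and so on) instead of managing them as regular release resources. The same rendered hook manifests can be inspected on an installed release with:

    helm get hooks ricplt-ckey-chart -n ricplt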
HOOKS:
---
# Source: ckey/charts/ckey/templates/rbac/deletion_rbac.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
name: ricplt-ckey-chart-ckey-delete-sa
namespace: "ricplt"
labels:
app: ricplt-ckey-chart-ckey
heritage: Helm
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "-8"
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
---
# Source: ckey/charts/ckey/templates/rbac/isu_rbac.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
name: ricplt-ckey-chart-ckey-isu-sa
namespace: "ricplt"
labels:
app: ricplt-ckey-chart-ckey
heritage: Helm
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
helm.sh/hook: pre-upgrade, pre-rollback, post-upgrade, post-rollback
helm.sh/hook-delete-policy: before-hook-creation, hook-succeeded
helm.sh/hook-weight: "-10"
---
# Source: ckey/charts/ckey/templates/rbac/populate_secret_rbac.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
name: ricplt-ckey-chart-ckey-populate-secret-sa
namespace: "ricplt"
annotations:
helm.sh/hook: pre-upgrade, pre-rollback
helm.sh/hook-delete-policy: before-hook-creation, hook-succeeded
helm.sh/hook-weight: "-10"
labels:
app: ricplt-ckey-chart-ckey
heritage: Helm
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
---
# Source: ckey/charts/ckey/templates/rbac/pre_upgrade_rbac.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: true
metadata:
name: ricplt-ckey-chart-ckey-pre-upgrade-sa
namespace: "ricplt"
annotations:
"helm.sh/hook": pre-upgrade, pre-rollback
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
"helm.sh/hook-weight": "-10"
labels:
app: ricplt-ckey-chart-ckey
heritage: Helm
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
---
# Source: ckey/charts/ckey/templates/ckey-secret.yaml
apiVersion: v1
kind: Secret
metadata:
annotations:
"helm.sh/hook": pre-install, pre-upgrade, pre-rollback
"helm.sh/hook-delete-policy": before-hook-creation
"helm.sh/hook-weight": "-10"
"restartOnUpdate": "true"
name: ricplt-ckey-chart-ckey
labels:
app: ricplt-ckey-chart-ckey
chart: "ckey-12.3.1"
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

type: Opaque
data:
keycloak-admin-user: "YWRtaW4="

keycloak-admin-password: Ym1Kc1MxcFJZbUV3TWc9PQ==
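As in any Kubernetes Secret, the values above are base64-encoded. Once the release is installed they can be read back and decoded with kubectl, for example:

    kubectl get secret ricplt-ckey-chart-ckey -n ricplt -o jsonpath='{.data.keycloak-admin-user}' | base64 -d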
---
# Source: ckey/charts/ckey/templates/isu/isu-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ricplt-ckey-chart-ckey-isu
labels:
app: ricplt-ckey-chart-ckey
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": pre-upgrade, post-upgrade, pre-rollback, post-rollback
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
data:
isu-common.sh: |
function get_ckey_sts_name {
CKEY_STS=$(kubectl get statefulset -n ricplt -l app=ckey,release=ricplt-ckey-
chart,csf-component=ckey,csf-subcomponent=keycloak --no-headers=true --
output=custom-columns=":metadata.name" --ignore-not-found=true | grep -v "isu-
upgrade")
if [ -z "$CKEY_STS" ]; then
CKEY_STS=ricplt-ckey-chart-ckey
fi
echo ${CKEY_STS}
}

function get_ckey_isu_sts_name {
CKEY_ISU_STS=$(kubectl get statefulset -n ricplt -l app=ckey,release=ricplt-
ckey-chart,csf-component=ckey,csf-subcomponent=keycloak --no-headers=true --
output=custom-columns=":metadata.name" --ignore-not-found=true | grep "isu-
upgrade")
if [ -z "$CKEY_ISU_STS" ]; then
CKEY_ISU_STS=ricplt-ckey-chart-ckey-isu-upgrade
fi
echo ${CKEY_ISU_STS}
}

function get_ckey_pod_0_name {
CKEY_POD_0=$(get_ckey_sts_name)-0
echo ${CKEY_POD_0}
}

function get_isu_mysql_pod_name {
CKEY_ISU_MYSQL_POD=$(get_ckey_pod_0_name)
echo ${CKEY_ISU_MYSQL_POD}
}

export CKEY_STS=$(get_ckey_sts_name)
export CKEY_ISU_STS=$(get_ckey_isu_sts_name)
export CKEY_POD_0=$(get_ckey_pod_0_name)
export EXEC_CKEY_POD="kubectl exec -n ricplt ${CKEY_POD_0} -- /bin/bash -c "
export CBUR_HOST="http://cbur-master-cbur.ncms.svc:80"
export CBUR_NAMESPACE="ncms"

export HELM_VERSION=3
export BACKUP_NAMESPACE=ricplt

echo "Waiting for main CKEY Statefulset to ready"


kubectl rollout status sts ${CKEY_STS} --namespace ricplt

export DB_USER=keycloak
if [ -f /ckey-secret/db-password ]; then
export DB_PASSWORD=$(cat /ckey-secret/db-password)
else
export DB_PASSWORD="r00tr00t"
fi
export DB_ARGS="-h "ricplt-cmdb-chart-mysql.ricplt.svc.cluster.local" -P 3306 -
u$DB_USER -p$DB_PASSWORD -f "
export PATCH_DB_PASSWORD=$DB_PASSWORD

function remove_isu_statefulset {
info "Deleting older ISU Statefulset if exists"
CKEY_ISU_STS=$(get_ckey_isu_sts_name)
kubectl delete sts --namespace ricplt ${CKEY_ISU_STS}
}

function cleanup {

info "Waiting for main CKEY Statefulset to ready"


CKEY_STS=$(get_ckey_sts_name)
kubectl rollout status sts ${CKEY_STS} --namespace ricplt

info "Redirecting CKEY service traffic back to main CKEY Statefulset"


kubectl patch svc ricplt-ckey-chart-ckey -n ricplt -p '{"spec":{"selector":
{"isu-upgrade":null}}}'

remove_isu_statefulset
# As the ckey containers will continue running for
additional .Values.terminationGracePeriodSecondsForSSO seconds due to intentional
sleeping within preStop lifecycle, we need to ensure 'tmdb4keycloak' is not removed
until the isu-statefulset pods have successfully terminated.
sleep 30

CKEY_ISU_MYSQL_POD=$(get_isu_mysql_pod_name)
kubectl exec -n ricplt ${CKEY_ISU_MYSQL_POD} -- /bin/bash -c \
'mysql '"$DB_ARGS"' <<< "\
DROP DATABASE IF EXISTS tmpdb4keycloak;"'
}

function cbur_curl {
BACKUP_AUTH_STATUS=$(curl -s -kL --post301 -X GET $CBUR_HOST/v2/auth/status |
grep "message" | sed "s/\"message\"\://" | sed "s/,//")
export BACKUP_AUTH_STATUS=${BACKUP_AUTH_STATUS//[[:blank:]]/}
if [ "$BACKUP_AUTH_STATUS" = "0" ]; then
echo "Backup Authentiucation disabled"
curl_req="curl $1 $2"
elif [ "$BACKUP_AUTH_STATUS" = "1" ]; then
echo "Using Keycloak Authentication for BACKUP_AUTH"
TKN=$(curl -skL --post301 -X POST "$CBUR_HOST/v2/auth/users/login" -H
"accept: application/json" -H "Content-Type: application/json" \
-d '{"username":"","password":"$CBUR_PASSWORD"}'| grep
"accessToken" | sed "s/\"accessToken\"\://" | sed "s/,//")
TKN=${TKN//[[:blank:]]/}
curl_req="curl $1 $2 -H \"Authorization: Bearer $TKN\""
elif [ "$BACKUP_AUTH_STATUS" = "2" ]; then
echo "Using Basic Authentication for BACKUP_AUTH"
export USERNAME=$(kubectl get secret -n $CBUR_NAMESPACE cbur-basic-auth -
o=jsonpath='{.data.username}' | base64 -d)
export PASSWORD=$(kubectl get secret -n $CBUR_NAMESPACE cbur-basic-auth -
o=jsonpath='{.data.password}' | base64 -d)
curl_req="curl $1 $2 -u $USERNAME:$PASSWORD"
fi
local out=$(eval $curl_req)
echo $out
}

function cbur_backup {
info "Calling CBUR to backup"

info "Using BACKUP_AUTH_STATUS=$BACKUP_AUTH_STATUS"


out=$(cbur_curl "-d \"\" -skL --post301 -X POST"
"$CBUR_HOST/v2/helm/release/backup/$BACKUP_NAMESPACE/ricplt-ckey-chart?
helm_version=$HELM_VERSION")
info "Backup output is: $out"

resp_code=$(echo $out | awk 'BEGIN{RS=","; FS="code"}NF>1{print $NF}' | tr -dc '[:alnum:]\n\r')
if [ "$resp_code" = "200" ]; then
info "Backup Success"
elif [ "$resp_code" = "202" ]; then
backup_id=$(echo $out | awk 'BEGIN{RS="is"; FS="="}NF>1{print $NF}' | tr -d
'[:space:]\n\r')
backup_sts=$(cbur_curl "-skL" "$CBUR_HOST/v2/task/$backup_id")
info "$backup_sts"
while [[ "$backup_sts" =~ "\"status\": \"InProgress\"" ]]; do
sleep 10
info "Backup in progress..."
backup_sts=$(cbur_curl "-skL" "$CBUR_HOST/v2/task/$backup_id")
info "$backup_sts"
done
else
info "Backup failed"
fi
}

function cbur_restore {
if [ "$BACKUP_ID" != "skip" ]; then
info "Calling CBUR to restore CKEY and database from backup.
BACKUP_ID=$BACKUP_ID"
out=$(cbur_curl "-skL --post301 -X POST -H \"accept: application/json\" -
H \"Content-Type: application/x-www-form-urlencoded\" -d \"backup_id=$BACKUP_ID\""
"$CBUR_HOST/v2/helm/release/restore/$BACKUP_NAMESPACE/ricplt-ckey-chart?
helm_version=$HELM_VERSION")
info "Restore output is: $out"
resp_code=$(echo $out | awk 'BEGIN{RS=","; FS="code"}NF>1{print $NF}' | tr
-dc '[:alnum:]\n\r')
if [ "$resp_code" = "200" ]; then
info "Restore Success"
elif [ "$resp_code" = "202" ]; then
restore_id=$(echo $out | awk 'BEGIN{RS="is"; FS="="}NF>1{print $NF}' | tr
-d '[:space:]\n\r')
restore_sts=$(cbur_curl "-skL" "$CBUR_HOST/v2/task/$restore_id")
info "$restore_sts"
while [[ "$restore_sts" =~ "\"status\": \"InProgress\"" ]]; do
sleep 60
info "Restore in progress..."
restore_sts=$(cbur_curl "-skL" "$CBUR_HOST/v2/task/$restore_id")
info "$restore_sts"
done
else
info "Restore failed"
fi
else
info "Skipping restoring CKEY data from backup"
fi
}

#Note that app name of the ISU statefulset is retained same as the original
ckey statefulset.
#This is because if ISU statefulset would have different app name than the
original ckey statefulset,
#we need to switch the ckey http service's selector to this ISU statefulset app
name to redirect the incoming traffic
#to the ISU statefulset pods. But that would have caused service discontinuity
#when helm would detect(as part of upgrade process) that the http service
resource has changed
#and helm would patch this service forcefully back to the original ckey
statefulset app.
function create_isu_statefulset {
info "Creating ISU Statefulset with infinispan group separation fix"
REPLICAS=$1
CKEY_STS=$(get_ckey_sts_name)
CKEY_ISU_STS=$(get_ckey_isu_sts_name)
#JGROUPS_DNS_QUERY of ckey-sts and the isu-upgrade-sts should be separated
from each other.
local CKEY_HEADLESS_SVC="ricplt-ckey-chart-ckey-
headless.ricplt.svc.cluster.local"
#Thus, $CKEY_HEADLESS_SVC value of JGROUPS_DNS_QUERY environment variable has
to be replaced by $CKEY_ISU_HEADLESS_SVC
local CKEY_ISU_HEADLESS_SVC="ricplt-ckey-chart-ckey-isu-
headless.ricplt.svc.cluster.local"
kubectl patch sts --namespace ricplt ${CKEY_STS} -o yaml --type "json" --dry-
run --record=false -p '[
{"op":"replace","path":"/metadata/name","value":"'$CKEY_ISU_STS'"},
{"op":"replace","path":"/spec/replicas","value":'"$REPLICAS"'},
#CRUCIAL: by setting isu-upgrade:true we ensure the isu-statefulset is visible only
#to the isu-specific headless service. This ensures that infinispan members which are part of the isu-statefulset
#find their peer members only within the isu-specific domain search.
{"op":"replace","path":"/spec/template/metadata/labels/isu-upgrade","value":"true"},
{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"ISU_UPGRADE", "value":"1"}},
{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"ISU_EXCLUDE_CHECK_PATTERN", "value":".*/(authenticate|token).*"}}]' |
#The explicit sed is required because when patching the ISU pod it will not fully replace the string with the short service FQDN.
#It just replaces <svcname>.<namespace> from the full FQDN, so on deployments where the user has configured a trailing dot it would cause an issue forming the cluster.
#So replacing it explicitly
sed "s|$CKEY_HEADLESS_SVC|$CKEY_ISU_HEADLESS_SVC|g" | \
kubectl apply --namespace ricplt --record=false -f -
}
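# Illustration (editor's sketch, not part of the chart): the effect of the sed substitution
# above. The JGROUPS_DNS_QUERY value switches from
#   ricplt-ckey-chart-ckey-headless.ricplt.svc.cluster.local
# to
#   ricplt-ckey-chart-ckey-isu-headless.ricplt.svc.cluster.local
# so the ISU pods only discover peers behind the ISU headless service. Assuming the variable
# is set on the first container, it can be verified after patching with:
#   kubectl get sts -n ricplt "$(get_ckey_isu_sts_name)" -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="JGROUPS_DNS_QUERY")].value}'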

function create_isu_statefulset_with_temp_db {
info "Creating ISU Statefulset with temporary database with infinispan group
separation fix"
REPLICAS=$1
CKEY_STS=$(get_ckey_sts_name)
CKEY_ISU_STS=$(get_ckey_isu_sts_name)
#JGROUPS_DNS_QUERY of ckey-sts and the isu-upgrade-sts should be separated from each other.
local CKEY_HEADLESS_SVC="ricplt-ckey-chart-ckey-headless.ricplt.svc.cluster.local"
#Thus, the $CKEY_HEADLESS_SVC value of the JGROUPS_DNS_QUERY environment variable has to be replaced by $CKEY_ISU_HEADLESS_SVC
local CKEY_ISU_HEADLESS_SVC="ricplt-ckey-chart-ckey-isu-headless.ricplt.svc.cluster.local"
kubectl patch sts --namespace ricplt ${CKEY_STS} -o yaml --type "json" --dry-run --record=false -p '[
{"op":"replace","path":"/metadata/name","value":"'$CKEY_ISU_STS'"},
{"op":"replace","path":"/spec/replicas","value":'"$REPLICAS"'},
#CRUCIAL: by setting isu-upgrade:true we ensure the isu-statefulset is visible only
#to the isu-specific headless service. This ensures that infinispan members which are part of the isu-statefulset
#find their peer members only within the isu-specific domain search.
{"op":"replace","path":"/spec/template/metadata/labels/isu-upgrade","value":"true"},
{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"ISU_UPGRADE", "value":"1"}},
{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"ISU_EXCLUDE_CHECK_PATTERN", "value":".*/(authenticate|token).*"}},
{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"DB_NAME", "value":"tmpdb4keycloak"}},
{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"KC_DB_URL_DATABASE", "value":"tmpdb4keycloak"}}]' |
#The explicit sed is required because when patching the ISU pod it will not fully replace the string with the short service FQDN.
#It just replaces <svcname>.<namespace> from the full FQDN, so on deployments where the user has configured a trailing dot it would cause an issue forming the cluster.
#So replacing it explicitly
sed "s|$CKEY_HEADLESS_SVC|$CKEY_ISU_HEADLESS_SVC|g" | \
kubectl apply --namespace ricplt --record=false -f -
}

function redirect_requests_to_isu_statefulset {
info "Switching CKEY service to ISU Statefulset"
kubectl patch svc ricplt-ckey-chart-ckey -n ricplt -p '{"spec":{"selector":{"isu-upgrade":"true"}}}'
}

function change_isu_statefulset_db {
info "Switching ISU Statefulset to temporary database"
CKEY_ISU_STS=$(get_ckey_isu_sts_name)
kubectl patch sts --namespace ricplt ${CKEY_ISU_STS} --type "json" --record=false -p '[
{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"DB_NAME", "value":"tmpdb4keycloak"}},
{"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"KC_DB_URL_DATABASE", "value":"tmpdb4keycloak"}}]'
}

function wait_isu_statefulset {
info "Waiting for ISU Statefulset to be ready"
CKEY_ISU_STS=$(get_ckey_isu_sts_name)
kubectl rollout status sts ${CKEY_ISU_STS} --namespace ricplt
ROLLOUT_STATUS=$?
if [[ $ROLLOUT_STATUS != 0 ]]; then
info "ISU Statefulset is not ready. ROLLOUT_STATUS=$ROLLOUT_STATUS.
Stopping ISU."
exit 1
fi
}

function create_temp_db {
info "Creating temporary Keycloak database tmpdb4keycloak"

CKEY_ISU_MYSQL_POD=$(get_isu_mysql_pod_name)
mysqldump_version=$(kubectl exec -n ricplt ${CKEY_ISU_MYSQL_POD} -- /bin/bash -c "mysqldump --version")

if [[ "$mysqldump_version" =~ "MariaDB" ]]; then
export MYSQL_DUMP_ARGS=""
else
export MYSQL_DUMP_ARGS="--column-statistics=0"
fi

kubectl exec -n ricplt \
${CKEY_ISU_MYSQL_POD} -- /bin/bash -c \
'mysql '"$DB_ARGS"' <<< "\
DROP DATABASE IF EXISTS tmpdb4keycloak; \
CREATE DATABASE IF NOT EXISTS tmpdb4keycloak;" && \
{ mysqldump '"$DB_ARGS"' db4keycloak --no-data '"$MYSQL_DUMP_ARGS"'; \
mysqldump '"$DB_ARGS"' db4keycloak --no-create-info '"$MYSQL_DUMP_ARGS"' --ignore-table=db4keycloak.EVENT_ENTITY --ignore-table=db4keycloak.ADMIN_EVENT_ENTITY --ignore-table=db4keycloak.JGROUPSPING; \
} | mysql '"$DB_ARGS"' tmpdb4keycloak'
}
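# Illustration (editor's sketch, not part of the chart): an assumed spot-check that the
# copy above produced the temporary database. DB_ARGS and get_isu_mysql_pod_name are the
# helpers already used in this script; EVENT_ENTITY is one of the tables whose data is skipped.
#   kubectl exec -n ricplt "$(get_isu_mysql_pod_name)" -- /bin/bash -c \
#     'mysql '"$DB_ARGS"' -e "SELECT COUNT(*) FROM tmpdb4keycloak.EVENT_ENTITY"'
#   # The table exists (schema copied) but should report 0 rows, since its data was excluded.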

function remove_main_sts {
info "Deleting Main Statefulset"
CKEY_STS=$(get_ckey_sts_name)
kubectl delete sts ${CKEY_STS} --namespace ricplt
}

function info() {
echo "["`date +'%Y-%m-%d %H:%M:%S'`"] $*"
}
---
# Source: ckey/charts/ckey/templates/rbac/deletion_rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: "ricplt"
name: ricplt-ckey-chart-ckey-delete-role
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "-7"
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
rules:
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["list","delete"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["list","delete"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["list","delete"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["list","delete"]
- apiGroups: [""]
resources: ["services"]
verbs: ["list","delete"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["delete"]
- apiGroups: [ "" ]
resources: ["serviceaccounts"]
verbs: ["list","delete"]
- apiGroups: ["cert-manager.io"]
resources: ["certificates"]
verbs: ["get", "list","delete"]
---
# Source: ckey/charts/ckey/templates/rbac/isu_rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: "ricplt"
name: ricplt-ckey-chart-ckey-isu-role
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
helm.sh/hook: pre-upgrade, pre-rollback, post-upgrade, post-rollback
helm.sh/hook-delete-policy: before-hook-creation, hook-succeeded
helm.sh/hook-weight: "-10"
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "delete"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get", "list", "watch", "patch", "create", "delete"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "patch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "delete"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
---
# Source: ckey/charts/ckey/templates/rbac/populate_secret_rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: "ricplt"
name: ricplt-ckey-chart-ckey-populate-secret-role
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
helm.sh/hook: pre-upgrade, pre-rollback
helm.sh/hook-delete-policy: before-hook-creation, hook-succeeded
helm.sh/hook-weight: "-10"
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "patch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
---
# Source: ckey/charts/ckey/templates/rbac/pre_upgrade_rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ricplt-ckey-chart-ckey-pre-upgrade-role
namespace: "ricplt"
annotations:
"helm.sh/hook": pre-upgrade, pre-rollback
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
"helm.sh/hook-weight": "-10"
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["watch", "get", "list", "delete"]
---
# Source: ckey/charts/ckey/templates/rbac/deletion_rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ricplt-ckey-chart-ckey-delete-rolebinding
namespace: "ricplt"
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "-6"
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
subjects:
- kind: ServiceAccount
name: ricplt-ckey-chart-ckey-delete-sa
namespace: "ricplt"
apiGroup: ""
roleRef:
kind: Role
name: ricplt-ckey-chart-ckey-delete-role
apiGroup: rbac.authorization.k8s.io
---
# Source: ckey/charts/ckey/templates/rbac/isu_rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ricplt-ckey-chart-ckey-isu-rolebinding
namespace: "ricplt"
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
helm.sh/hook: pre-upgrade, pre-rollback, post-upgrade, post-rollback
helm.sh/hook-delete-policy: before-hook-creation, hook-succeeded
helm.sh/hook-weight: "-10"
subjects:
- kind: ServiceAccount
name: ricplt-ckey-chart-ckey-isu-sa
namespace: "ricplt"
apiGroup: ""
roleRef:
kind: Role
name: ricplt-ckey-chart-ckey-isu-role
apiGroup: rbac.authorization.k8s.io
---
# Source: ckey/charts/ckey/templates/rbac/populate_secret_rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ricplt-ckey-chart-ckey-populate-secret-rolebinding
namespace: "ricplt"
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
helm.sh/hook: pre-upgrade, pre-rollback
helm.sh/hook-delete-policy: before-hook-creation, hook-succeeded
helm.sh/hook-weight: "-10"
subjects:
- kind: ServiceAccount
name: ricplt-ckey-chart-ckey-populate-secret-sa
namespace: "ricplt"
apiGroup: ""
roleRef:
kind: Role
name: ricplt-ckey-chart-ckey-populate-secret-role
apiGroup: rbac.authorization.k8s.io
---
# Source: ckey/charts/ckey/templates/rbac/pre_upgrade_rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ricplt-ckey-chart-ckey-pre-upgrade-rolebinding
namespace: "ricplt"
annotations:
"helm.sh/hook": pre-upgrade, pre-rollback
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
"helm.sh/hook-weight": "-10"
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

subjects:
- kind: ServiceAccount
name: ricplt-ckey-chart-ckey-pre-upgrade-sa
namespace: "ricplt"
roleRef:
kind: Role
name: ricplt-ckey-chart-ckey-pre-upgrade-role
apiGroup: rbac.authorization.k8s.io
---
# Source: ckey/charts/ckey/templates/test.yaml
apiVersion: v1
kind: Pod
metadata:
name: ricplt-ckey-chart-ckey-test
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

sidecar.istio.io/inject: "false"
annotations:
"helm.sh/hook": test-success
"helm.sh/hook-delete-policy": hook-succeeded, before-hook-creation
spec:
restartPolicy: Never
serviceAccountName: ricplt-ckey-chart-ckey-stateful-sa
automountServiceAccountToken: false
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
containers:
- name: test
image: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com/ric/ckey-keycloak:24.0.5.2-rocky8-jre17-47
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
env:

- name: TZ
value: "UTC"
- name: KC_HTTPS_PORT
value: "8443"
- name: SERVICE_IP
value: ricplt-ckey-chart-ckey.ricplt.svc.cluster.local
- name: KC_HTTP_RELATIVE_PATH
value: /usermgmt
- name: URL_PROTOCOL
value: "https"

command:
- sh
- -c
- |
echo 'Test for release: ricplt-ckey-chart'
check(){
port=$1
protocol=$2
for i in {1..10}
do

echo "Call $i"

curl_out=$(curl --write-out '%{http_code}' --silent --output /dev/null -k --connect-timeout 20 $protocol://ricplt-ckey-chart-ckey.ricplt.svc.cluster.local:$port${KC_HTTP_RELATIVE_PATH}/)
ret_code=$?

if [ $ret_code -ne 0 ]
then
echo "Error. Curl command exited with code: $ret_code"

exit 1
fi

curl_status_out=$(curl --silent -k --connect-timeout 20 $protocol://ricplt-ckey-chart-ckey.ricplt.svc.cluster.local:$port${KC_HTTP_RELATIVE_PATH}/health/ready | jq -r .status)
echo "curl_out: $curl_out"
echo "curl_status_out: $curl_status_out"
# Starting from FOSS version 24, the curl_out check expects a 302 status code, which accounts for the URL redirection to the Admin Console upon access.
if (($curl_out == 302 && $curl_status_out == "UP")); then
echo "http code: $curl_out"
echo "Ckey health endpoint is ready. Ckey is ok"
break
else
echo "Error. http code: $curl_out. Expected code is 302"

exit 1
fi

sleep 1

done
}
check 8443 ${URL_PROTOCOL}

echo 'Test finished successfully'

exit 0
resources:

limits:
memory: "256Mi"
ephemeral-storage: "1Gi"
requests:
cpu: 250m
memory: "256Mi"
ephemeral-storage: "1Gi"

affinity:

podAntiAffinity:
---
# Source: ckey/charts/ckey/templates/isu/isu-rollback-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ricplt-ckey-chart-ckey-isu-pre-rollback-job
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": pre-rollback
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
spec:
template:
metadata:
name: isu-container
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

sidecar.istio.io/inject: "false"
annotations:
spec:
restartPolicy: Never
serviceAccountName: ricplt-ckey-chart-ckey-isu-sa

securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
volumes:
- name: isu-common
configMap:
name: ricplt-ckey-chart-ckey-isu
- name: ckey-secret
projected:
sources:
- secret:

name: ricplt-ckey-chart-ckey-db-secret

optional: true

- name: service-account-token
projected:
sources:
- serviceAccountToken:
path: token
expirationSeconds: 3600
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
enableServiceLinks: false
containers:
- name: isu-container
image: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com/ric/kubectl:1.28.12-rocky8-nano-20240801
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true

resources:

limits:
memory: "256Mi"
ephemeral-storage: "1Gi"
requests:
cpu: 250m
memory: "256Mi"
ephemeral-storage: "1Gi"

volumeMounts:

- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: service-account-token
- name: isu-common
mountPath: /opt/keycloak/isu-common
- name: ckey-secret
mountPath: /ckey-secret
env:

- name: TZ
value: "UTC"

command:
- sh
- -c
- |

. /opt/keycloak/isu-common/isu-common.sh
info "Pre-rollback job started"
remove_isu_statefulset 2>/dev/null
create_temp_db
create_isu_statefulset_with_temp_db 2

wait_isu_statefulset
redirect_requests_to_isu_statefulset
cbur_restore
remove_main_sts
STATUS=$?
info "Pre-rollback job complete"

exit $STATUS

affinity:

podAntiAffinity:
---
# Source: ckey/charts/ckey/templates/isu/isu-rollback-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ricplt-ckey-chart-ckey-isu-post-rollback-job
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart

app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": post-rollback
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
spec:
template:
metadata:
name: isu-container
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart

app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

sidecar.istio.io/inject: "false"
annotations:

spec:
restartPolicy: Never
serviceAccountName: ricplt-ckey-chart-ckey-isu-sa

securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
volumes:
- name: isu-common
configMap:
name: ricplt-ckey-chart-ckey-isu
- name: ckey-secret
projected:
sources:
- secret:

name: ricplt-ckey-chart-ckey-db-secret

optional: true
- name: service-account-token
projected:
sources:
- serviceAccountToken:
path: token
expirationSeconds: 3600
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
enableServiceLinks: false
containers:
- name: isu-container
image: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com/ric/kubectl:1.28.12-rocky8-nano-20240801
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true

resources:

limits:
memory: "256Mi"
ephemeral-storage: "1Gi"
requests:
cpu: 250m
memory: "256Mi"
ephemeral-storage: "1Gi"

volumeMounts:

- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: service-account-token
- name: isu-common
mountPath: /opt/keycloak/isu-common
- name: ckey-secret
mountPath: /ckey-secret
env:

- name: TZ
value: "UTC"

command:
- sh
- -c
- |
. /opt/keycloak/isu-common/isu-common.sh
info "Post-rollback job started"
cleanup
info "Post-rollback job complete"
STATUS=$?

exit $STATUS

affinity:

podAntiAffinity:
---
# Source: ckey/charts/ckey/templates/isu/isu-upgrade-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ricplt-ckey-chart-ckey-isu-pre-upgrade-job
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": pre-upgrade
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
"helm.sh/hook-weight": "10"
spec:
template:
metadata:
name: isu-container
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

sidecar.istio.io/inject: "false"
annotations:
spec:
restartPolicy: Never
serviceAccountName: ricplt-ckey-chart-ckey-isu-sa

securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
volumes:
- name: isu-common
configMap:
name: ricplt-ckey-chart-ckey-isu
- name: ckey-secret
projected:
sources:
- secret:

name: ricplt-ckey-chart-ckey-db-secret

optional: true

- name: service-account-token
projected:
sources:
- serviceAccountToken:
path: token
expirationSeconds: 3600
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
enableServiceLinks: false
containers:
- name: isu-container
image: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com/ric/kubectl:1.28.12-rocky8-nano-20240801
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true

resources:

limits:
memory: "256Mi"
ephemeral-storage: "1Gi"
requests:
cpu: 250m
memory: "256Mi"
ephemeral-storage: "1Gi"

volumeMounts:

- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: service-account-token
- name: isu-common
mountPath: /opt/keycloak/isu-common
- name: ckey-secret
mountPath: /ckey-secret
env:

- name: TZ
value: "UTC"

command:
- sh
- -c
- |

. /opt/keycloak/isu-common/isu-common.sh

info "Pre-upgrade job started"


remove_isu_statefulset 2>/dev/null
info "Putting flag in backup volume to let CBUR know ISU is in
progress"
$EXEC_CKEY_POD 'echo 1 > /ckey/backup/.isu_inprogress'
sleep 10
cbur_backup
create_temp_db
create_isu_statefulset_with_temp_db 2

wait_isu_statefulset
redirect_requests_to_isu_statefulset
remove_main_sts

STATUS=$?
info "Pre-upgrade job complete"

exit $STATUS

affinity:

podAntiAffinity:
---
# Source: ckey/charts/ckey/templates/isu/isu-upgrade-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ricplt-ckey-chart-ckey-isu-post-upgrade-job
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart

app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": post-upgrade
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
"helm.sh/hook-weight": "15"
spec:
template:
metadata:
name: ricplt-ckey-chart-ckey-isu-post-upgrade-job
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart

app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

sidecar.istio.io/inject: "false"
annotations:

spec:
restartPolicy: Never
serviceAccountName: ricplt-ckey-chart-ckey-isu-sa

securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
volumes:
- name: isu-common
configMap:
name: ricplt-ckey-chart-ckey-isu
- name: ckey-secret
projected:
sources:
- secret:

name: ricplt-ckey-chart-ckey-db-secret

optional: true

- name: service-account-token
projected:
sources:
- serviceAccountToken:
path: token
expirationSeconds: 3600
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
enableServiceLinks: false
containers:
- name: isu-container
image: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com/ric/kubectl:1.28.12-rocky8-nano-20240801
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true

resources:

limits:
memory: "256Mi"
ephemeral-storage: "1Gi"
requests:
cpu: 250m
memory: "256Mi"
ephemeral-storage: "1Gi"

volumeMounts:

- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: service-account-token
- name: isu-common
mountPath: /opt/keycloak/isu-common
- name: ckey-secret
mountPath: /ckey-secret
env:

- name: TZ
value: "UTC"

command:
- sh
- -c
- |

. /opt/keycloak/isu-common/isu-common.sh
info "Post-upgrade job started"
cleanup
STATUS=$?
info "Post-upgrade job complete"

exit $STATUS

affinity:

podAntiAffinity:
---
# Source: ckey/charts/ckey/templates/jobs/master-realm-configuration-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ricplt-ckey-chart-master-realm-configuration-job
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
spec:
backoffLimit: 6
activeDeadlineSeconds: 6000
template:
metadata:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

sidecar.istio.io/inject: "false"
annotations:
spec:
# shareProcessNamespace: true
restartPolicy: Never
serviceAccountName: ricplt-ckey-chart-ckey-master-realm-sa
automountServiceAccountToken: false
enableServiceLinks: false
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
containers:
- name: master-realm-configuration-job-ckey
image: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com/ric/ckey-py:1.1.4-rocky8-python3.11-3
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
env:

- name: TZ
value: "UTC"
- name: KEYCLOAK_USER
valueFrom:
secretKeyRef:

name: ricplt-ckey-chart-ckey

key: keycloak-admin-user
- name: KEYCLOAK_PASSWORD
valueFrom:
secretKeyRef:

name: ricplt-ckey-chart-ckey

key: keycloak-admin-password
- name: USE_CACERT

value: "false"

- name: KC_HTTPS_PORT
value: "8443"
- name: SERVICE_IP
value: ricplt-ckey-chart-ckey.ricplt.svc.cluster.local
- name: KC_HTTP_RELATIVE_PATH
value: "/usermgmt"
- name: URL_PROTOCOL
value: "https"
command:
- sh
- -c
- |
set -x

USE_CA=$(echo "$USE_CACERT" | awk '{print tolower($0)}')


if [ ! -f /opt/keycloak/tls/ca.crt ] || [ "$USE_CA" = "false" ]; then
echo 'Using --insecure for curl'
CURL_CERT="--insecure"
else
echo 'ca.crt found. Using --cacert for curl'
CURL_CERT="--cacert /opt/keycloak/tls/ca.crt"
fi

KEYCLOAK_URL="${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}$
{KC_HTTP_RELATIVE_PATH}"
KEYCLOAK_PWD=$(sed "s/'/'\\\''/g" <<< "$KEYCLOAK_PASSWORD")

touch /tmp/curl_out.txt
db_check='if .status == "UP" and any(.checks[]; .name == "Keycloak
database connections async health check" and .status == "UP") then true else false
end'
until [ $(curl ${CURL_CERT} --write-out '%{http_code}' --include --output
/tmp/curl_out.txt --connect-timeout 5 ${KEYCLOAK_URL}/health/ready) == "200" ] && [
$(curl ${CURL_CERT} --connect-timeout 5 ${KEYCLOAK_URL}/health/ready | jq -r "$
{db_check}") == "true" ]; do
echo "Current Date and time: $(date)"
echo "Waiting for Keycloak connection..."
echo -e "Printing /tmp/curl_out.txt:\n$(cat /tmp/curl_out.txt)"
sleep 2;
done;

echo 'Keycloak is OK!'


echo -e "Printing /tmp/curl_out.txt after success: \n$(cat
/tmp/curl_out.txt)"
EXIT_CODE=0
echo "Running custom master realm configuration script..."
/ckey/custom-scripts/configure-realm-settings.sh
EXIT_CODE=$?

exit $EXIT_CODE
volumeMounts:
- name: custom-realm-config-scripts
mountPath: /ckey/custom-scripts
readOnly: true
- name: ricplt-ckey-chart-ckey-tmp
mountPath: "/tmp"
resources:

limits:
memory: "256Mi"
ephemeral-storage: "1Gi"
requests:
cpu: 250m
memory: "256Mi"
ephemeral-storage: "1Gi"

affinity:

podAntiAffinity:
volumes:
- name: custom-realm-config-scripts
configMap:
name: ricplt-ckey-chart-ckey-custom-realm-config-scripts
defaultMode: 0555
- name: ricplt-ckey-chart-ckey-tmp
emptyDir: {}
---
# Source: ckey/charts/ckey/templates/jobs/populate-secret-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ricplt-ckey-chart-ckey-secret-populate-job
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": pre-upgrade, pre-rollback
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
spec:
template:
metadata:
name: ricplt-ckey-chart-ckey-secret-populate-job
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart

app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

sidecar.istio.io/inject: "false"
annotations:
spec:
restartPolicy: Never
serviceAccountName: ricplt-ckey-chart-ckey-populate-secret-sa
enableServiceLinks: false

volumes:
- name: service-account-token
projected:
sources:
- serviceAccountToken:
path: token
expirationSeconds: 3600
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
containers:
- name: secret-populate
image: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com/ric/kubectl:1.28.12-rocky8-nano-20240801
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true

volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: service-account-token
env:

- name: TZ
value: "UTC"
command:
- sh
- -c
- |

kubectl get pod ricplt-ckey-chart-ckey-0 -n ricplt

if [[ $? != 0 ]]; then
echo 'ckey pod not found! Exiting...'
exit 0
fi

EXEC_CKEY_POD="kubectl exec -n ricplt ricplt-ckey-chart-ckey-0 -c


ricplt-ckey-chart-ckey -- /bin/bash -c "

ADMIN_PASSWORD=$($EXEC_CKEY_POD 'cat /opt/keycloak/security/ckey-


secret/keycloak-admin-password')

kubectl patch secret ricplt-ckey-chart-ckey -n ricplt \


--type='json' -p='[{"op" : "replace" ,"path" : "/data/keycloak-admin-
password" ,"value" : "'"$(echo -n $ADMIN_PASSWORD | base64)"'"}]'
STATUS=$?

exit $STATUS
resources:

limits:
memory: "256Mi"
ephemeral-storage: "1Gi"
requests:
cpu: 250m
memory: "256Mi"
ephemeral-storage: "1Gi"

affinity:

podAntiAffinity:
---
# Source: ckey/charts/ckey/templates/jobs/post-delete-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ricplt-ckey-chart-post-delete-job
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook": post-delete
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
spec:
template:
metadata:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

sidecar.istio.io/inject: "false"
annotations:
spec:
restartPolicy: Never
serviceAccountName: ricplt-ckey-chart-ckey-delete-sa
enableServiceLinks: false

volumes:
- name: service-account-token
projected:
sources:
- serviceAccountToken:
path: token
expirationSeconds: 3600
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
containers:
- name: post-delete-secrets
image: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com/ric/kubectl:1.28.12-rocky8-nano-20240801
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true

volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: service-account-token
env:

- name: TZ
value: "UTC"
command:
- sh
- -c
- |
echo 'Post delete for release: ricplt-ckey-chart'
echo 'Delete CKEY secret'
kubectl delete secrets -l 'app in (ricplt-ckey-chart-ckey,ckey),release=ricplt-ckey-chart' --namespace=ricplt
echo 'Delete CKEY PVCs because preserve_keycloak_pvc is set to false'
kubectl delete pvc -l app=ckey,release=ricplt-ckey-chart --namespace=ricplt
kubectl delete pvc -l app=ckey,release=ricplt-ckey-chart --namespace=ricplt
echo 'Delete CKEY pods'

kubectl delete configmap ricplt-ckey-chart-ckey-k8s-restricted-config --namespace=ricplt

kubectl delete secrets --namespace=ricplt ricplt-ckey-chart-ckey-server-cert --ignore-not-found=true
kubectl delete secrets --namespace=ricplt ricplt-ckey-chart-ckey-server-cert-ingress-internal --ignore-not-found=true
kubectl delete secrets --namespace=ricplt ricplt-ckey-chart-ckey-server-cert-ingress-external --ignore-not-found=true
kubectl delete secrets --namespace=ricplt ricplt-ckey-chart-ckey-server-cert-istio --ignore-not-found=true
kubectl delete secrets --namespace=ricplt ricplt-ckey-chart-ckey-ocp-secret --ignore-not-found=true
echo 'Delete CKEY serviceaccounts'
kubectl delete serviceaccounts -l app=ckey-infinispan,release=ricplt-ckey-chart --namespace=ricplt

echo 'Deleting Failed master realm job in case of failures'

kubectl --namespace ricplt delete job ricplt-ckey-chart-master-realm-configuration-job --ignore-not-found=true

echo 'Deleting ISU headless service'

kubectl --namespace ricplt delete service ricplt-ckey-chart-ckey-isu-headless --ignore-not-found=true

echo 'Waiting 5 seconds before job cleanup'

sleep 5
exit 0
resources:

limits:
memory: "256Mi"
ephemeral-storage: "1Gi"
requests:
cpu: 250m
memory: "256Mi"
ephemeral-storage: "1Gi"

affinity:

podAntiAffinity:
---
# Source: ckey/charts/ckey/templates/jobs/pre-upgrade-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ricplt-ckey-chart-pre-upgrade-job
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
"helm.sh/hook-weight": "10"
"helm.sh/hook": pre-upgrade, pre-rollback
"helm.sh/hook-delete-policy": before-hook-creation, hook-succeeded
spec:
template:
metadata:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

sidecar.istio.io/inject: "false"
annotations:
spec:
securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
serviceAccountName: ricplt-ckey-chart-ckey-pre-upgrade-sa
enableServiceLinks: false

volumes:
- name: service-account-token
projected:
sources:
- serviceAccountToken:
path: token
expirationSeconds: 3600
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
restartPolicy: Never
containers:
- name: pre-upgrade-job
image: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com/ric/kubectl:1.28.12-rocky8-nano-20240801
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true

volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: service-account-token
env:

- name: TZ
value: "UTC"
command:
- sh
- -c
- |
echo "Running ckey pre-upgrade/pre-rollback job..."
kubectl --namespace ricplt delete pod ricplt-ckey-chart-resource-watcher-job --ignore-not-found=true
resources:

limits:
memory: "256Mi"
ephemeral-storage: "1Gi"
requests:
cpu: 250m
memory: "256Mi"
ephemeral-storage: "1Gi"

affinity:

podAntiAffinity:
MANIFEST:
---
# Source: ckey/charts/ckey/templates/rbac/brhook_rbac.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
name: ricplt-ckey-chart-ckey-brhook-sa
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
heritage: Helm
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
---
# Source: ckey/charts/ckey/templates/rbac/healing_rbac.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
name: ricplt-ckey-chart-ckey-heal-sa
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
heritage: Helm
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
---
# Source: ckey/charts/ckey/templates/rbac/masterrealm_rbac.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
name: ricplt-ckey-chart-ckey-master-realm-sa
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
heritage: Helm
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
---
# Source: ckey/charts/ckey/templates/rbac/rbac.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
name: ricplt-ckey-chart-ckey-sa
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
heritage: Helm
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
---
# Source: ckey/charts/ckey/templates/rbac/resource-watcher-rbac.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
name: ricplt-ckey-chart-ckey-resource-watcher-sa
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
heritage: Helm
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
---
# Source: ckey/charts/ckey/templates/rbac/stateful_rbac.yaml
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: false
metadata:
name: ricplt-ckey-chart-ckey-stateful-sa
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
heritage: Helm
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
---
# Source: ckey/charts/ckey/templates/custom-attributes-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ricplt-ckey-chart-ckey-custom-attributes
annotations:
labels:
app: ricplt-ckey-chart-ckey
chart: "ckey-12.3.1"
release: "ricplt-ckey-chart"
heritage: "Helm"

app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

data:
custom-attributes-config.conf: ""
---
# Source: ckey/charts/ckey/templates/custom-ckey-scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ricplt-ckey-chart-ckey-custom-ckey-scripts
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

data:
keycloak-custom-script.sh: |
# Install extension-jar and theme
---
# Source: ckey/charts/ckey/templates/custom-realm-config-scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ricplt-ckey-chart-ckey-custom-realm-config-scripts
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
data:
configure-realm.sh: |
#!/bin/bash
# Configure Master Realm

KEYCLOAK_PATH=${KC_HTTP_RELATIVE_PATH:-"/"}
FUNC="$1"
USER="$2"
PASSWORD="$3"
UPDATE_REALM="$4"
EXTRA_PARAM1="$5"

echo "FUNC: $FUNC"


echo "UPDATE_REALM: $UPDATE_REALM"

USE_CA=$(echo "$USE_CACERT" | awk '{print tolower($0)}')


if [ ! -f /opt/keycloak/tls/ca.crt ] || [ "$USE_CA" = "false" ] ||
[ "$URL_PROTOCOL" = "http" ]; then
echo "Using --insecure for curl"
CURL_CERT="--insecure"
else
echo "ca.crt found. Using --cacert for curl"
CURL_CERT="--cacert /opt/keycloak/tls/ca.crt"
fi

if [ "$FUNC" == "set-password" ];
then
TKN=$(curl ${CURL_CERT} -g -X POST "${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/realms/master/protocol/openid-connect/token" \
-H "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode 'username='"$USER"'' \
--data-urlencode 'password='"$PASSWORD"'' \
-d 'grant_type=password' \
-d 'client_id=admin-cli' | jq -r '.access_token')

echo $TKN
echo $SERVICE_IP
echo $KC_HTTPS_PORT
echo $KEYCLOAK_PATH

if [ -z "$TKN" ];
then
echo "Access Token for user $USER is null. Application is not
running properly or invalid user credentials used"
exit 1
else
USERID=$(curl -w "%{http_code}" --silent ${CURL_CERT} -g "$
{URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/realms/
master/users/" -H "Authorization: Bearer ${TKN}" | jq -r '.[0].id')
UPDATE_PASSWORD=$(curl -w "%{http_code}" --silent ${CURL_CERT}
-g -X PUT "${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/
realms/master/users/${USERID}/reset-password" -H "Authorization: Bearer ${TKN}" -H
"Content-Type: application/json" -d
'''{"type":"password","temporary":false,"value":"'"$PASSWORD"'"}''')
if [ "$UPDATE_PASSWORD" != "204" ];
then
echo "Error configuring Master Realm: Could not set
password."
echo "Set password failed with error code
$UPDATE_PASSWORD."
exit 1
fi
fi
fi

if [ "$FUNC" == "update-realm" ];
then
ACCESS_TOKEN=$(curl ${CURL_CERT} -g -X POST "${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/realms/master/protocol/openid-connect/token" \
-H "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode 'username='"$USER"'' \
--data-urlencode 'password='"$PASSWORD"'' \
-d 'grant_type=password' \
-d 'client_id=admin-cli')
if [ -z "$ACCESS_TOKEN" ];
then
echo "Access Token for user $USER is null. Application is not running properly or invalid user credentials used"
exit 1
fi

echo $TKN
echo $SERVICE_IP
echo $KC_HTTPS_PORT
echo $KEYCLOAK_PATH

# Workaround for ISU rollback. The Master realm job starts during an ISU rollback
# to revision 1 when it is not supposed to (helm issue) and the token call returns forbidden.
# Finish the Master realm job without error if we get a forbidden response at this point.
if [[ "$ACCESS_TOKEN" == *"403 - Forbidden"* ]];
then
echo "WARNING: 403 - Forbidden response from token call. Finishing the job. It is OK if this happens on rollback to revision 1."
echo "ACCESS_TOKEN: $ACCESS_TOKEN"
DISPLAY_NAME_UPDATED="true"
return 0
fi

TKN=$(echo $ACCESS_TOKEN | jq -r '.access_token')

if [ -z "$TKN" ];
then
echo "Access Token for user $USER is null. Application is not
running properly or invalid user credentials used"
exit 1
fi

if [ "$UPDATE_REALM" == "resetPasswordAllowed" ];
then
RESET_PW=$(curl -w "%{http_code}" --silent ${CURL_CERT} -g -X
PUT "${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/
realms/master/" -H "Authorization: Bearer ${TKN}" -H "Content-Type:
application/json" -d '{"resetPasswordAllowed":true}')
if [ "$RESET_PW" != "204" ];
then
echo "Error configuring Master Realm: Could not update
realm."
echo "Update realm for resetPasswordAllowed failed with
error code $RESET_PW"
exit 1
fi

elif [ "$UPDATE_REALM" == "bruteForceProtection" ];


then
BFP=$(curl -w "%{http_code}" --silent ${CURL_CERT} -g -X PUT "$
{URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/realms/
master/" -H "Authorization: Bearer ${TKN}" -H "Content-Type: application/json" -d
'{"bruteForceProtected":true}')
if [ "$BFP" != "204" ];
then
echo "Error configuring Master Realm: Could not update
realm."
echo "Update realm for bruteForceProtection failed with
error code $BFP"
exit 1
fi
elif [ "$UPDATE_REALM" == "displayName" ];
then
DISPLAY_NAME=$(curl -w "%{http_code}" --silent ${CURL_CERT} -g
-X PUT "${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/
realms/master/" -H "Authorization: Bearer ${TKN}" -H "Content-Type:
application/json" -d '{"displayName":"User and Role Management", "displayNameHtml":
"<div class=\"kc-logo-text\"><span>User and Role Management</span></div>"}')
if [ "$DISPLAY_NAME" != "204" ];
then
echo "Error configuring Master Realm: Could not update
realm."
echo "Update realm for displayName failed with error
code $DISPLAY_NAME"
exit 1
fi

elif [ "$UPDATE_REALM" == "passwordPolicy" ];


then
PASS_POLICY=$(curl -w "%{http_code}" --silent ${CURL_CERT} -g -
X PUT "${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/
realms/master/" -H "Authorization: Bearer ${TKN}" -H "Content-Type:
application/json" -d '{"passwordPolicy":"forceExpiredPasswordChange(180) and
length(10) and specialChars(1) and upperCase(1) and lowerCase(1) and
passwordHistory(3) and passwordBlacklist(linux.words) and
notUsername(undefined)"}')
if [ "$PASS_POLICY" != "204" ];
then
echo "Error configuring Master Realm: Could not update
realm."
echo "Update realm for passwordPolicy failed with error
code $PASS_POLICY"
exit 1
fi

elif [ "$UPDATE_REALM" == "adminEventsEnabled" ];


then
echo "Enabling Admin events"
ADMIN_EVENTS=$(curl -w "%{http_code}" --silent ${CURL_CERT} -g
-X PUT "${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/
realms/master/events/config" -H "Authorization: Bearer ${TKN}" -H "Content-Type:
application/json" -d '{ "eventsListeners":["Audit Logging","jboss-
logging"],"adminEventsEnabled":true, "adminEventsDetailsEnabled": true}')
if [ "$ADMIN_EVENTS" != "204" ];
then
echo "Error configuring Master Realm: Could not update
realm."
echo "Update realm for AdminEventsEnabled failed with
error code $ADMIN_EVENTS"
exit 1
fi

if [ ! -z $EXTRA_PARAM1 ];
then
echo "Setting admin events expiration to $EXTRA_PARAM1
min"
let EXPIRATION_SEC=$EXTRA_PARAM1*60
ADMIN_EVENTS_EXPIRATION=$(curl -w "%{http_code}" --
silent ${CURL_CERT} -g -X PUT "${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}$
{KEYCLOAK_PATH}/admin/realms/master/" -H "Authorization: Bearer ${TKN}" -H
"Content-Type: application/json" -d "{"\""attributes"\"":
{"\""adminEventsExpiration"\"":$EXPIRATION_SEC}"})
if [ "$ADMIN_EVENTS_EXPIRATION" != "204" ];
then
echo "Error configuring Master Realm: Could not
update realm."
echo "Update realm for adminEventsExpiration
failed with error code $ADMIN_EVENTS_EXPIRATION"
exit 1
fi
fi

elif [ "$UPDATE_REALM" == "eventsEnabled" ];


then
echo "Enabling Login events"
EVENTS=$(curl -w "%{http_code}" --silent ${CURL_CERT} -g -X PUT
"${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/realms/
master/events/config" -H "Authorization: Bearer ${TKN}" -H "Content-Type:
application/json" -d '{"eventsEnabled":true,"eventsListeners":["Audit
Logging","jboss-logging"],"enabledEventTypes":
["SEND_RESET_PASSWORD","UPDATE_CONSENT_ERROR","GRANT_CONSENT","VERIFY_PROFILE_ERROR
","REMOVE_TOTP","REVOKE_GRANT","UPDATE_TOTP","LOGIN_ERROR","CLIENT_LOGIN","RESET_PA
SSWORD_ERROR","IMPERSONATE_ERROR","CODE_TO_TOKEN_ERROR","CUSTOM_REQUIRED_ACTION","O
AUTH2_DEVICE_CODE_TO_TOKEN_ERROR","RESTART_AUTHENTICATION","IMPERSONATE","UPDATE_PR
OFILE_ERROR","LOGIN","OAUTH2_DEVICE_VERIFY_USER_CODE","UPDATE_PASSWORD_ERROR","CLIE
NT_INITIATED_ACCOUNT_LINKING","TOKEN_EXCHANGE","AUTHREQID_TO_TOKEN","LOGOUT","REGIS
TER","DELETE_ACCOUNT_ERROR","CLIENT_REGISTER","IDENTITY_PROVIDER_LINK_ACCOUNT","DEL
ETE_ACCOUNT","UPDATE_PASSWORD","CLIENT_DELETE","FEDERATED_IDENTITY_LINK_ERROR","IDE
NTITY_PROVIDER_FIRST_LOGIN","CLIENT_DELETE_ERROR","VERIFY_EMAIL","CLIENT_LOGIN_ERRO
R","RESTART_AUTHENTICATION_ERROR","EXECUTE_ACTIONS","REMOVE_FEDERATED_IDENTITY_ERRO
R","TOKEN_EXCHANGE_ERROR","PERMISSION_TOKEN","SEND_IDENTITY_PROVIDER_LINK_ERROR","E
XECUTE_ACTION_TOKEN_ERROR","SEND_VERIFY_EMAIL","OAUTH2_DEVICE_AUTH","EXECUTE_ACTION
S_ERROR","REMOVE_FEDERATED_IDENTITY","OAUTH2_DEVICE_CODE_TO_TOKEN","IDENTITY_PROVID
ER_POST_LOGIN","IDENTITY_PROVIDER_LINK_ACCOUNT_ERROR","OAUTH2_DEVICE_VERIFY_USER_CO
DE_ERROR","UPDATE_EMAIL","REGISTER_ERROR","REVOKE_GRANT_ERROR","EXECUTE_ACTION_TOKE
N","LOGOUT_ERROR","UPDATE_EMAIL_ERROR","CLIENT_UPDATE_ERROR","AUTHREQID_TO_TOKEN_ER
ROR","UPDATE_PROFILE","CLIENT_REGISTER_ERROR","FEDERATED_IDENTITY_LINK","SEND_IDENT
ITY_PROVIDER_LINK","SEND_VERIFY_EMAIL_ERROR","RESET_PASSWORD","CLIENT_INITIATED_ACC
OUNT_LINKING_ERROR","OAUTH2_DEVICE_AUTH_ERROR","UPDATE_CONSENT","REMOVE_TOTP_ERROR"
,"VERIFY_EMAIL_ERROR","SEND_RESET_PASSWORD_ERROR","CLIENT_UPDATE","CUSTOM_REQUIRED_
ACTION_ERROR","IDENTITY_PROVIDER_POST_LOGIN_ERROR","UPDATE_TOTP_ERROR","CODE_TO_TOK
EN","VERIFY_PROFILE","GRANT_CONSENT_ERROR","IDENTITY_PROVIDER_FIRST_LOGIN_ERROR"],"
eventsExpiration":null}')
if [ "$EVENTS" != "204" ];
then
echo "Error configuring Master Realm: Could not update
realm."
echo "Update realm for EventsEnabled failed with error
code $EVENTS"
exit 1
fi

if [ ! -z $EXTRA_PARAM1 ];
then
echo "Setting login events expiration to $EXTRA_PARAM1
min"
let EXPIRATION_SEC=$EXTRA_PARAM1*60
LOGIN_EVENTS_EXPIRATION=$(curl -w "%{http_code}" --
silent ${CURL_CERT} -g -X PUT "${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}$
{KEYCLOAK_PATH}/admin/realms/master/" -H "Authorization: Bearer ${TKN}" -H
"Content-Type: application/json" -d "{ "\""eventsExpiration"\"":$EXPIRATION_SEC}")
if [ "$LOGIN_EVENTS_EXPIRATION" != "204" ];
then
echo "Error configuring Master Realm: Could not
update realm."
echo "Update realm for eventsExpiration failed
with error code $LOGIN_EVENTS_EXPIRATION"
exit 1
fi
fi

elif [ "$UPDATE_REALM" == "sslRequired" ];


then
SSL=$(curl -w "%{http_code}" --silent ${CURL_CERT} -g -X PUT "$
{URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/realms/
master/" -H "Authorization: Bearer ${TKN}" -H "Content-Type: application/json" -d
'{"sslRequired":"all"}')
if [ "$SSL" != "204" ];
then
echo "Error configuring Master Realm: Could not update
realm."
echo "Update realm for sslRequired failed with error
code $SSL"
exit 1
fi

elif [ "$UPDATE_REALM" == "loginTheme" ];


then
THEME=$(curl -w "%{http_code}" --silent ${CURL_CERT} -g -X PUT
"${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/realms/
master/" -H "Authorization: Bearer ${TKN}" -H "Content-Type: application/json" -d
'{"loginTheme":"nokia-csf"}')
if [ "$THEME" != "204" ];
then
echo "Error configuring Master Realm: Could not update
realm."
echo "Update realm for loginTheme failed with error code
$THEME"
exit 1
fi

elif [ "$UPDATE_REALM" == "getDisplayName" ];


then
DISPLAY_NAME_UPDATED="false"
echo "configure-realm.sh: getDisplayName"
DISPLAY_NAME=$(curl ${CURL_CERT} -g -X GET "${URL_PROTOCOL}://${SERVICE_IP}:${KC_HTTPS_PORT}${KEYCLOAK_PATH}/admin/realms/master/" -H "Authorization: Bearer ${TKN}" | jq -r '.displayName')
echo "configure-realm.sh. getDisplayName: $DISPLAY_NAME"
if [ "$DISPLAY_NAME" == "User and Role Management" ];
then
echo "configure-realm.sh. CKEY display name found. Exiting configuration script."
DISPLAY_NAME_UPDATED="true"
fi
return 0
else
echo "Realm configuration not found. Exiting..."
exit 1
fi
exit 0
fi
configure-realm-settings.sh: |
#!/usr/bin/env bash
# debug
# set -x
# exit on failure
set -e
# exit on failed parameter expansion
# set -u
echo "Waiting 10 seconds for the Keycloak service"
sleep 10

echo "Configuring Keycloak's Master realm"

#Because the readOnlyRootFilesystem flag is set to true, the custom master realm configuration script (configure-realm-settings.sh) needs a directory with write access.
#An emptyDir volume is created and mounted at the /tmp path. This ensures that the /tmp directory has write access.
#The default working directory is "/opt/keycloak". The current working directory is set to /tmp so that the files are written within the /tmp directory, which has write access.
cd /tmp/
# Initialize needed variables:
KEYCLOAK_PASSWORD=$(sed "s/'/'\\\''/g" <<< "$KEYCLOAK_PASSWORD")
# This step overwrites the password timestamp for the admin user. If this step doesn't take place, the admin will be required to update the password on first login if the forceExpiredPasswordChange policy is set.

echo "Check if custom realm config script already been applied"


source /ckey/custom-scripts/configure-realm.sh update-realm "${KEYCLOAK_USER}"
"${KEYCLOAK_PASSWORD}" getDisplayName
if [ "$DISPLAY_NAME_UPDATED" == "true" ];
then
echo "Custom realm config script was already applied. Exiting."
exit 0
fi
echo "Custom realm config script was not applied. Continue."

bash -c "/ckey/custom-scripts/configure-realm.sh set-password '$


{KEYCLOAK_USER}' '${KEYCLOAK_PASSWORD}'"
bash -c "/ckey/custom-scripts/configure-realm.sh update-realm '$
{KEYCLOAK_USER}' '${KEYCLOAK_PASSWORD}' resetPasswordAllowed"
bash -c "/ckey/custom-scripts/configure-realm.sh update-realm '$
{KEYCLOAK_USER}' '${KEYCLOAK_PASSWORD}' passwordPolicy"
bash -c "/ckey/custom-scripts/configure-realm.sh update-realm '$
{KEYCLOAK_USER}' '${KEYCLOAK_PASSWORD}' adminEventsEnabled "1576800""
bash -c "/ckey/custom-scripts/configure-realm.sh update-realm '$
{KEYCLOAK_USER}' '${KEYCLOAK_PASSWORD}' eventsEnabled "129600""
bash -c "/ckey/custom-scripts/configure-realm.sh update-realm '$
{KEYCLOAK_USER}' '${KEYCLOAK_PASSWORD}' sslRequired"
bash -c "/ckey/custom-scripts/configure-realm.sh update-realm '$
{KEYCLOAK_USER}' '${KEYCLOAK_PASSWORD}' bruteForceProtection"
bash -c "/ckey/custom-scripts/configure-realm.sh update-realm '$
{KEYCLOAK_USER}' '${KEYCLOAK_PASSWORD}' loginTheme"

bash -c "/ckey/custom-scripts/configure-realm.sh update-realm '$


{KEYCLOAK_USER}' '${KEYCLOAK_PASSWORD}' displayName"

#Reverting the working directory back to /opt/keycloak
cd ..
echo "Done configuring master realm"
echo "Waiting 5 seconds before job cleanup"
sleep 5
exit 0
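# Manual verification sketch (illustrative only; not executed by this job). Assuming kcadm.sh is
# available in the keycloak image and the same admin credentials are used, the applied realm
# settings could be inspected with something like:
#   /opt/keycloak/bin/kcadm.sh config credentials --server https://localhost:8443/usermgmt --realm master --user "$KEYCLOAK_USER" --password "$KEYCLOAK_PASSWORD"
#   /opt/keycloak/bin/kcadm.sh get realms/master --fields displayName,loginTheme,bruteForceProtected,resetPasswordAllowed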
---
# Source: ckey/charts/ckey/templates/custom-upgrade-realm-config-scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
name: ricplt-ckey-chart-ckey-custom-upgrade-realm-config-scripts
labels:
app: ricplt-ckey-chart-ckey
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
---
# Source: ckey/charts/ckey/templates/isu/isu-rollback-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ricplt-ckey-chart-ckey-isu-rollback
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

data:
backup-id: ""
---
# Source: ckey/charts/ckey/templates/logging-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
"restartOnUpdate": "true"
name: ricplt-ckey-chart-ckey-logging-configuration
labels:
app: ricplt-ckey-chart-ckey
chart: "ckey-12.3.1"
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

data:
log4cxx.property: |-
log4j.appender.syslog.Ssl.Timeout=15
log4j.appender.syslog.Ssl.CloseRequestType=GNUTLS_SHUT_WR
log4j.rootLogger=INFO, console

log4j.rootLogger.suppressFailure=TRUE

log4j.appender.console=org.apache.log4j.ConsoleAppender

log4j.appender.console.Layout.Type=org.apache.log4j.PatternLayout

log4j.appender.console.Layout.Encoding=UTF-8

log4j.appender.console.ImmediateFlush=TRUE
---
# Source: ckey/charts/ckey/templates/logging-extensions.yaml
apiVersion: v1
kind: ConfigMap
metadata:
annotations:
"restartOnUpdate": "true"
name: ricplt-ckey-chart-ckey-logging-extensions
labels:
app: ricplt-ckey-chart-ckey
chart: "ckey-12.3.1"
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

data:
logging-extensions.json: |-
{}
---
# Source: ckey/charts/ckey/templates/notification-listener-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ricplt-ckey-chart-ckey-notification-listener
annotations:
"restartOnUpdate": "true"
labels:
app: ricplt-ckey-chart-ckey
chart: "ckey-12.3.1"
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

data:
notification-listener-registry.json: |-
{
}
---
# Source: ckey/charts/ckey/templates/push-listener-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ricplt-ckey-chart-ckey-push-listener
annotations:
"restartOnUpdate": "true"
labels:
app: ricplt-ckey-chart-ckey
chart: "ckey-12.3.1"
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
data:
push-listener-registry.json: |-
{
}
---
# Source: ckey/charts/ckey/templates/rbac/brhook_rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: "ricplt"
name: ricplt-ckey-chart-ckey-brhook-role
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get","list","delete", "watch"]
---
# Source: ckey/charts/ckey/templates/rbac/healing_rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: "ricplt"
name: ricplt-ckey-chart-ckey-heal-role
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "delete"]
---
# Source: ckey/charts/ckey/templates/rbac/masterrealm_rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: "ricplt"
name: ricplt-ckey-chart-ckey-master-realm-role
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "delete"]
---
# Source: ckey/charts/ckey/templates/rbac/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: "ricplt"
name: ricplt-ckey-chart-ckey-role
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "delete"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "delete"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get", "list", "watch", "patch", "create", "delete"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "delete", "patch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "delete"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["get", "list", "watch"]
- apiGroups: ["cert-manager.io"]
resources: ["certificates"]
verbs: ["get", "list","delete"]
---
# Source: ckey/charts/ckey/templates/rbac/resource-watcher-rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: "ricplt"
name: ricplt-ckey-chart-ckey-resource-watcher-role
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["statefulsets"]
verbs: ["get", "list", "watch", "patch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
---
# Source: ckey/charts/ckey/templates/rbac/stateful_rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: "ricplt"
name: ricplt-ckey-chart-ckey-stateful-role
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get"]
- apiGroups: [""]
resources: ["services"]
verbs: ["get"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
---
# Source: ckey/charts/ckey/templates/rbac/brhook_rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ricplt-ckey-chart-ckey-brhook-rolebinding
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

subjects:
- kind: ServiceAccount
name: ricplt-ckey-chart-ckey-brhook-sa
namespace: "ricplt"
apiGroup: ""
roleRef:
kind: Role
name: ricplt-ckey-chart-ckey-brhook-role
apiGroup: rbac.authorization.k8s.io
---
# Source: ckey/charts/ckey/templates/rbac/healing_rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ricplt-ckey-chart-ckey-heal-rolebinding
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

subjects:
- kind: ServiceAccount
name: ricplt-ckey-chart-ckey-heal-sa
namespace: "ricplt"
apiGroup: ""
roleRef:
kind: Role
name: ricplt-ckey-chart-ckey-heal-role
apiGroup: rbac.authorization.k8s.io
---
# Source: ckey/charts/ckey/templates/rbac/masterrealm_rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ricplt-ckey-chart-ckey-master-realm-rolebinding
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

subjects:
- kind: ServiceAccount
name: ricplt-ckey-chart-ckey-master-realm-sa
namespace: "ricplt"
apiGroup: ""
roleRef:
kind: Role
name: ricplt-ckey-chart-ckey-master-realm-role
apiGroup: rbac.authorization.k8s.io
---
# Source: ckey/charts/ckey/templates/rbac/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ricplt-ckey-chart-ckey-rolebinding
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1
subjects:
- kind: ServiceAccount
name: ricplt-ckey-chart-ckey-sa
namespace: "ricplt"
apiGroup: ""
roleRef:
kind: Role
name: ricplt-ckey-chart-ckey-role
apiGroup: rbac.authorization.k8s.io
---
# Source: ckey/charts/ckey/templates/rbac/resource-watcher-rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ricplt-ckey-chart-ckey-resource-watcher-rolebinding
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

subjects:
- kind: ServiceAccount
name: ricplt-ckey-chart-ckey-resource-watcher-sa
namespace: "ricplt"
apiGroup: ""
roleRef:
kind: Role
name: ricplt-ckey-chart-ckey-resource-watcher-role
apiGroup: rbac.authorization.k8s.io
---
# Source: ckey/charts/ckey/templates/rbac/stateful_rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: ricplt-ckey-chart-ckey-stateful-rolebinding
namespace: "ricplt"
annotations:
labels:
app: ricplt-ckey-chart-ckey
release: ricplt-ckey-chart
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

subjects:
- kind: ServiceAccount
name: ricplt-ckey-chart-ckey-stateful-sa
namespace: "ricplt"
apiGroup: ""
roleRef:
kind: Role
name: ricplt-ckey-chart-ckey-stateful-role
apiGroup: rbac.authorization.k8s.io
---
# Source: ckey/charts/ckey/templates/headless-service.yaml
apiVersion: v1
kind: Service
metadata:
name: ricplt-ckey-chart-ckey-headless
annotations:
labels:
app: ckey
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

spec:
type: ClusterIP
clusterIP: None
# Below flag will set the endpoints once they are scheduled and will not wait for them to be ready
# Setting below helps in the quicker infinispan cluster formation among the replicas
publishNotReadyAddresses: true
ports:
# Istio docs recommend prefixing the port name with the appProtocol.
# Reference: https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection
# Also, according to CKEY's istio design, we want istio to treat all the app traffic as plain TCP (although it is actually HTTP or HTTPS), because the keycloak server by default always generates traffic in TLS mode.
- name: tcp-secure-keycloak
port: 8443
targetPort: 8443
protocol: TCP
appProtocol: tcp
selector:
app: ckey
release: "ricplt-ckey-chart"
isu-upgrade: "false"
---
# Source: ckey/charts/ckey/templates/http-service.yaml
apiVersion: v1
kind: Service
metadata:
name: ricplt-ckey-chart-ckey
labels:
app: ckey
chart: "ckey-12.3.1"
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
prometheus.io/path: <httpRelativePath>/realms/master/metrics
prometheus.io/scheme: http
prometheus.io/scrape: "false"
"prometheus.io/port": "8443"
spec:
type: ClusterIP
ports:
# Istio docs recommend prefixing the port name with the appProtocol.
# Reference: https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection
# Also, according to CKEY's istio design, we want istio to treat all the app traffic as plain TCP (although it is actually HTTP or HTTPS), because the keycloak server by default always generates traffic in TLS mode.
- name: https-secure-keycloak
port: 8443
targetPort: 8443
protocol: TCP
appProtocol: https

selector:
app: ckey
release: "ricplt-ckey-chart"
sessionAffinity: ClientIP
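# Illustrative only (not part of the rendered chart): combining this service's name and port with the
# prometheus.io/path annotation above, the metrics scrape target would be reachable in-cluster roughly
# as follows (the https scheme is an assumption, since the port is described above as always TLS):
#   curl -k https://ricplt-ckey-chart-ckey.ricplt.svc.cluster.local:8443/usermgmt/realms/master/metrics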
---
# Source: ckey/charts/ckey/templates/isu-headless-service.yaml
#This headless service is created only for the purpose of the in-service upgrade / rollback timeframe. During that time, the ISU-specific statefulset should separate out its infinispan cluster from the original/upgrading/rolling-back ckey sts.

apiVersion: v1
kind: Service
metadata:
name: ricplt-ckey-chart-ckey-isu-headless
annotations:
labels:
app: ckey
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

spec:
type: ClusterIP
clusterIP: None
# Below flag will set the endpoints once they are scheduled and will not wait for them to be ready.
# Setting below helps in the quicker infinispan cluster formation among the ISU replicas
publishNotReadyAddresses: true
ports:
- name: https-keycloak
port: 8443
targetPort: 8443
protocol: TCP
appProtocol: HTTPS
#note: the ISU headless service selects its endpoints according to app, release and isu-upgrade:true
#The isu-upgrade:true label is crucial here as it distinguishes the pod specific to the ISU upgrade activity.
#The app selector has to be the same as in the original ckey statefulset to maintain service continuity of the ckey http service.
#The underlying logic of how the ckey http service continuity is maintained is explained briefly in isu-configmap.yaml
selector:
app: ckey
release: "ricplt-ckey-chart"
isu-upgrade: "true"
---
# Source: ckey/charts/ckey/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ricplt-ckey-chart-ckey
labels:
app: ckey
release: "ricplt-ckey-chart"
heritage: "Helm"
csf-component: ckey
csf-subcomponent: keycloak
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
spec:
selector:
matchLabels:
app: ckey
release: "ricplt-ckey-chart"
replicas: 2
serviceName: ricplt-ckey-chart-ckey-headless
podManagementPolicy: "Parallel"
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
app: ckey
release: "ricplt-ckey-chart"
isu-upgrade: "false"
sidecar.istio.io/inject: "false"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

annotations:
rollme: "0zyLP"
kubectl.kubernetes.io/default-container: ricplt-ckey-chart-ckey
kubectl.kubernetes.io/default-exec-container: ricplt-ckey-chart-ckey
spec:
initContainers:

securityContext:
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
serviceAccountName: ricplt-ckey-chart-ckey-stateful-sa
automountServiceAccountToken: false
enableServiceLinks: false
containers:
- name: ricplt-ckey-chart-ckey
image: edgeapps-docker-local.artifactory-blr1.int.net.nokia.com/ric/ckey-keycloak:24.0.5.2-rocky8-jre17-47
imagePullPolicy: "IfNotPresent"
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
env:

- name: TZ
value: "UTC"
- name: KC_HTTP_RELATIVE_PATH
value: /usermgmt
- name: KC_PROXY_HEADERS
value: xforwarded

- name: KC_SPI_STICKY_SESSION_ENCODER_INFINISPAN_SHOULD_ATTACH_ROUTE
value: "false"

- name: KC_HOSTNAME_URL
value: https://10.183.147.71:31776/usermgmt
- name: KC_HOSTNAME_ADMIN_URL
value: https://10.183.147.71:31776/usermgmt

- name: KC_HOSTNAME_STRICT
value: "true"
- name: KC_HOSTNAME_STRICT_BACKCHANNEL
value: "true"
- name: KC_DB_URL_HOST
value: "ricplt-cmdb-chart-mysql.ricplt.svc.cluster.local"
- name: KC_DB
value: mariadb
- name: KC_DB_URL_PORT
value: "3306"
- name: KC_DB_URL_DATABASE
value: "db4keycloak"
- name: KC_DB_USERNAME
value: "keycloak"
- name: KC_DB_URL_PROPERTIES
value: "?autoReconnect=true"
# Replicating DB_IP, DB_PORT, DB_NAME and DB_USER historical env variables to support ISU Update/Rollback
- name: DB_IP
value: "ricplt-cmdb-chart-mysql.ricplt.svc.cluster.local"
- name: DB_PORT
value: "3306"
- name: DB_NAME
value: "db4keycloak"
- name: DB_USER
value: "keycloak"
- name: KC_HTTPS_PORT
value: "8443"

- name: DATABASE_RETRY_COUNTER
value: "12000"
- name: DATABASE_CHECK_TIMEGAP
value: "1000"
- name: DB_ALARM_CHECK_PERIOD
value: "30000"
- name: CERT_EXPIRY_ALARM_PERIOD
value: "10"
- name: ALARM_STATE_STORAGE
value: "File"
- name: DATABASE_ALARM_INITIAL_DELAY
value: "240"
- name: KC_HEALTH_ENABLED
value: "true"
- name: KC_METRICS_ENABLED
value: "true"
- name: LDAP_HEALTH_ENABLED
value: "true"
- name: LDAP_ALARM_INITIAL_DELAY
value: "240"
- name: REALM
value: "master"
- name: COPYRIGHT_YEAR
value: ""
- name: LB_BANNER_TITLE
value: "Login Banner"
- name: LB_WELCOME_FIRST_NAME_MSG
value: "Welcome, {0}."
- name: LB_WELCOME_USERNAME_MSG
value: "Welcome, {0}."
- name: LB_PREVIOUS_SUCCESS_MSG
value: "Your last successful login was on {0}."
- name: LB_FAILED_LOGIN_COUNTER_MSG
value: "Failed login attempts after last successful login"
- name: LB_MAIN_MSG
value: "You are about to access a private system. This system is for
the use of authorized users only. All connections are logged. Any unauthorized
access or access attempts may be punishable to the fullest extent possible under
the applicable local legislation."
- name: LB_ACCEPT_MSG
value: "OK"
- name: MEMORY_LIMIT
value: "2048Mi"
- name: JAVA_OPTS_APPEND
value: ""
- name: KCSH_COMMAND_LINE
value: ""
- name: JGROUPS_DNS_QUERY
value: "ricplt-ckey-chart-ckey-headless.ricplt.svc.cluster.local"
- name: IP_FAMILY
value:
- name: MEMORY_FACTOR_FOR_KEYCLOAK
value: "0.7"
- name: MEMORY_FACTOR_FOR_UNIFIED_LOGGER
value: "0.1"
- name: CKEY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIPs
- name: JGROUPS_BIND_PORT
value: "7800"

- name: JGROUPS_FD_OFFSET
value: "50000"

- name: KC_LOG_GELF_PORT
value: "8888"
- name: VERTX_HEALTH_CHECK_PORT
value: "8887"
- name: LOG4CXX_PROP_PATH
value: "/tmp/opt/keycloak/vertx/log4cxx.property"
- name: KC_LOG
value: "gelf"
- name: KC_LOG_LEVEL
value: "INFO"
- name: KC_LOG_GELF_LEVEL
value: "INFO"
- name: LOGGING_JAVA_OPTS
value: "-Djdk.internal.httpclient.disableHostnameVerification=true "
- name: QUARKUS_DATASOURCE_REACTIVE_IDLE_TIMEOUT
value: "15s"
- name: QUARKUS_DATASOURCE_JDBC_IDLE_REMOVAL_INTERVAL
value: "20s"
- name: QUARKUS_MICROMETER_BINDER_VERTX_ENABLED
value: "true"
- name: QUARKUS_MICROMETER_BINDER_SYSTEM
value: "true"
- name: QUARKUS_MICROMETER_BINDER_HTTP_CLIENT_ENABLED
value: "true"
- name: QUARKUS_MICROMETER_BINDER_HTTP_SERVER_ENABLED
value: "true"
- name: QUARKUS_DATASOURCE_METRICS_ENABLED
value: "true"
- name: QUARKUS_MICROMETER_BINDER_JVM
value: "true"
- name: QUARKUS_LOG_METRICS_ENABLED
value: "true"
- name: QUARKUS_SCHEDULER_METRICS_ENABLED
value: "true"
- name: QUARKUS_THREAD_POOL_MAX_THREADS
value: "50"
- name: JGROUPS_THREAD_POOL_MAX_THREADS
value: "100"
- name: KC_DB_POOL_MAX_SIZE
value: "25"
- name: QUARKUS_THREAD_POOL_QUEUE_SIZE
value: "1000"
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: URI_METRICS_ENABLED
value: "true"
- name: URI_METRICS_DETAILED
value: "false"

# Password Expiration Notifier related configuration


- name: PASSWORD_EXPIRY_THRESHOLD_DAYS
value: "15"
- name: SCHEDULE_AT_HOUR
value: "22"
- name: SCHEDULE_AT_MINUTES
value: "30"
- name: IS_EMAIL_NOTIFICATION_ENABLED
value: "false"
- name: TEXT_BODY
value: "Your WebSSO login password is expiring soon. Please renew
it."
# List of allowed TLS ciphers.
# - name: KC_HTTPS_CIPHER_SUITES
#   value: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256"
# List of TLS versions supported by Keycloak. Possible values: "TLSv1.3", "TLSv1.2", "TLSv1.2,TLSv1.3". Default value is "TLSv1.3", which allows TLS 1.2 and 1.3 protocols.
# - name: KC_HTTPS_PROTOCOLS
#   value: "TLSv1.3"

startupProbe:
httpGet:
path: /usermgmt/health/ready
port: https-keycloak
scheme: HTTPS
initialDelaySeconds: 1
timeoutSeconds: 1
periodSeconds: 2
failureThreshold: 6000

readinessProbe:
httpGet:
path: /usermgmt/health/ready
port: https-keycloak
scheme: HTTPS
initialDelaySeconds: 1
timeoutSeconds: 1
periodSeconds: 2
failureThreshold: 1
livenessProbe:
exec:
command:
- sh
- -c
- |+
CURL_TIMEOUT=5
ISTIO_ENABLED=false

#!/bin/bash
LIVENESS_LOG_FILE="/tmp/opt/keycloak/liveness.log"

log() {
if [ ! -f $LIVENESS_LOG_FILE ]
then
touch $LIVENESS_LOG_FILE
fi
echo ""$(date +'%Y-%m-%d %H:%M:%S,%3N')
[quarkusLivenessprobe.sh] " $*" >>${LIVENESS_LOG_FILE}
}

PORT=${KC_HTTPS_PORT:-8443}
INGRESS_PATH=${KC_HTTP_RELATIVE_PATH:-"/"}
KEYCLOAK_URL="https://localhost:"${PORT}"${INGRESS_PATH}"/health
VERTX_URL="http://localhost:"${VERTX_HEALTH_CHECK_PORT}"/health"

KEYCLOAK_STATUS_CODE=$(curl -skL --max-time ${CURL_TIMEOUT} -w "%{http_code}" "${KEYCLOAK_URL}" -o /dev/null)
VERTX_STATUS_CODE=$(curl -sL --max-time ${CURL_TIMEOUT} -w "%{http_code}" "${VERTX_URL}" -o /dev/null)

# log "DEBUG Keycloak" "${KEYCLOAK_URL} returned response with $


{KEYCLOAK_STATUS_CODE}\n"
# log "DEBUG Logger" "${VERTX_URL} returned response with $
{VERTX_STATUS_CODE}\n"

if [ "${KEYCLOAK_STATUS_CODE}" -eq '200' ] && [ "$


{VERTX_STATUS_CODE}" -eq '200' ]; then
exit 0
fi

log "ERROR: Exited with keycloak status ${KEYCLOAK_STATUS_CODE} and


vertx status ${VERTX_STATUS_CODE}"
exit 1

failureThreshold: 5
initialDelaySeconds: 1
timeoutSeconds: 10
periodSeconds: 15
# We do not want the keycloak container to terminate immediately, as keycloak clients may still be holding the ckey-http-service FQDN mapped to the keycloak pod's IP due to DNS caching. We therefore want the keycloak container to keep handling incoming requests until 'terminationGracePeriodSeconds' expires, so we intentionally sleep within the preStop lifecycle hook so that the keycloak container is terminated in a delayed fashion.
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- echo 'Container will be terminated'&& sleep 30
resources:
requests:
memory: "1024Mi"
cpu: "500m"
ephemeral-storage: "1Gi"
limits:
memory: "2048Mi"
ephemeral-storage: "1Gi"
volumeMounts:
- name: ricplt-ckey-chart-ckey-custom-providers
mountPath: /customProviders
- name: custom-ckey-scripts
mountPath: /ckey/custom-scripts
readOnly: true
- name: ricplt-ckey-chart-ckey-tmp
mountPath: "/tmp"
- name: alarms
mountPath: "/ckey/alarms"

- name: logging-configuration
mountPath: /opt/keycloak/vertx/log4cxx.property
subPath: log4cxx.property
- name: logging-extensions
mountPath: /opt/keycloak/vertx/logging-extensions.json
subPath: logging-extensions.json

- name: push-listener-config-json
mountPath: /opt/etc/keycloak/push-listener-registry.json
subPath: push-listener-registry.json

- name: custom-attribute-config
mountPath: /opt/etc/keycloak/custom-attributes-config.conf
subPath: custom-attributes-config.conf
- name: notification-listener-config-json
mountPath: /opt/etc/keycloak/notification-listener-registry.json
subPath: notification-listener-registry.json
- name: ckey-secret
mountPath: /opt/keycloak/security/ckey-secret/keycloak-admin-password
subPath: keycloak-admin-password
- name: ckey-secret
mountPath: /opt/keycloak/security/ckey-secret/keycloak-admin-user
subPath: keycloak-admin-user

ports:
- containerPort: 8443
name: https-keycloak
- containerPort: 7800
name: jgroups-port
- containerPort: 57800
name: jgroups-fd-port

terminationGracePeriodSeconds: 30
volumes:

- name: ricplt-ckey-chart-ckey-custom-providers
emptyDir: {}
- name: ricplt-ckey-chart-ckey-tmp
emptyDir: {}
- name: cbur-tmp
emptyDir: {}
- name: custom-ckey-scripts
configMap:
name: ricplt-ckey-chart-ckey-custom-ckey-scripts
defaultMode: 0555

- name: logging-configuration
configMap:

name: ricplt-ckey-chart-ckey-logging-configuration

- name: logging-extensions
configMap:

name: ricplt-ckey-chart-ckey-logging-extensions

- name: push-listener-config-json
configMap:
name: ricplt-ckey-chart-ckey-push-listener
items:
- key: push-listener-registry.json
path: push-listener-registry.json

- name: custom-attribute-config
configMap:
name: ricplt-ckey-chart-ckey-custom-attributes
items:
- key: custom-attributes-config.conf
path: custom-attributes-config.conf
- name: notification-listener-config-json
configMap:
name: ricplt-ckey-chart-ckey-notification-listener
items:
- key: notification-listener-registry.json
path: notification-listener-registry.json
- name: ckey-secret
projected:
sources:
- secret:

name: ricplt-ckey-chart-ckey

optional: true

affinity:

podAntiAffinity:
volumeClaimTemplates:
- metadata:
name: alarms
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "1Mi"
storageClassName:
---
# Source: ckey/charts/ckey/templates/ingress.yaml
# The check below ensures that the Kubernetes ingress resource will not be created when istio is enabled and istio-gateway/sharedHttpGateway is configured. Also, if the user wants to use a Kubernetes ingress in conjunction with istio, this check will make sure the right configurations are provided by the user.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
#Below annotation enables the TLS communication between ckey and the ingress.
#In passthrough mode the ckey server's public cert is exposed directly, and in edge mode HTTPS is absent on the internal side.
labels:
app: ricplt-ckey-chart-ckey
chart: "ckey-12.3.1"
release: "ricplt-ckey-chart"
heritage: "Helm"
app.kubernetes.io/name: ricplt-ckey-chart-ckey
app.kubernetes.io/instance: ricplt-ckey-chart
app.kubernetes.io/version: 12.3.1
app.kubernetes.io/component: Security
app.kubernetes.io/part-of: "ckey"
app.kubernetes.io/managed-by: "Helm"
helm.sh/chart: ckey-12.3.1

name: ricplt-ricplt-ckey-chart-8443
spec:

rules:

- http:
paths:
- path: /usermgmt
pathType: Prefix
backend:
service:
name: ricplt-ckey-chart-ckey
port:
number: 8443
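# Illustrative only (not part of the rendered chart; the ingress hostname is an assumption):
# once the ingress is programmed, Keycloak should be reachable under the /usermgmt prefix, e.g.
#   curl -k https://<ingress-host>/usermgmt/realms/master/.well-known/openid-configuration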
---
# Source: ckey/charts/ckey/templates/horizontal-pod-autoscale.yaml
# keycloak hpa
# End of keycloak hpa
---
# Source: ckey/charts/ckey/templates/istio/istio-gateway.yaml
# The CKEY gateway and Virtual Service below assume that the shared gateway mode will always have a value other than PASSTHROUGH
