Installing Automation Suite On AKS

This document provides an overview of deploying UiPath Automation Suite on an Azure Kubernetes Service (AKS) cluster. It describes the infrastructure prerequisites needed, including setting up an AKS cluster, object storage, databases, caching, and networking. It also outlines the steps to download and configure the Automation Suite software using a configuration file.

Uploaded by

siddesh shinde
Copyright
© © All Rights Reserved
Available Formats
Download as TXT, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
42 views

Installing Automation Suite On AKS

This document provides an overview of deploying UiPath Automation Suite on an Azure Kubernetes Service (AKS) cluster. It describes the infrastructure prerequisites needed, including setting up an AKS cluster, object storage, databases, caching, and networking. It also outlines the steps to download and configure the Automation Suite software using a configuration file.

Uploaded by

siddesh shinde
Copyright
© © All Rights Reserved
Available Formats
Download as TXT, PDF, TXT or read online on Scribd
You are on page 1/ 6

Installing Automation Suite on AKS (Azure Kubernetes Service)

Deployment scenarios
Online deployment
An online deployment of Automation Suite is one that requires internet access
during installation and runtime. All the UiPath products and supporting libraries
are hosted in the UiPath registry or UiPath-trusted third-party store.

Overview
An AKS cluster is deployed in a single region, with the worker nodes distributed
across the system and user node pools. Each node pool hosts a Virtual Machine
Scale Set (VMSS), ensuring that worker nodes are spread across multiple
availability zones to provide resilience to zone failure and scaling when
required. Data sources such as Microsoft SQL Server, the Azure Storage Account,
and Azure Cache for Redis should be set up with enough redundancy to tolerate
failure and must be accessible from the subnet where the AKS worker nodes are
hosted. Additionally, you may need an additional jump box / bastion server that
has all the privileges required to operate the AKS cluster.

Automation Suite on AKS/EKS does not currently support Federal Information
Processing Standard 140-2 (FIPS 140-2). If you require FIPS 140-2 for Automation
Suite, you can deploy Automation Suite on FIPS 140-2-enabled RHEL machines.

Step 1: Provisioning the infrastructure prerequisites


Before installing Automation Suite, you must configure the cloud resources in your
environment. This includes:
AKS or EKS cluster
Object storage: Azure Blob Storage or Amazon S3
Block storage
File storage
Database
Caching
Networking (e.g., VNETs / VPCs, DNS, subnets, NSGs / security groups, NAT gateway,
elastic IP, and internet gateway)
Certificates
Network policies

Each Automation Suite Long-Term Support release comes with a compatibility matrix.
Compatibility matrix:
Azure Kubernetes Service (AKS) architecture: x86
Azure Kubernetes Service (AKS) versions: 1.27, 1.28, 1.29
OS: Ubuntu 22.04

Node capacity: At a minimum, to start with the mandatory platform services
(Identity, licensing, and routing) and Orchestrator, you must provision 8 vCPU and
16 GB RAM per node.

Additional Automation Suite Robots requirements


Automation Suite Robots require additional worker node(s).
The hardware requirements for the Automation Suite Robots node depend on the way
you plan to use your resources.

Agent node size:


The resources of the Automation Suite Robots agent node determine the number of
jobs that can run concurrently: the node's CPU cores and RAM capacity are divided
by the CPU/memory requirements of each job to work out how many jobs fit.
Job sizes can be mixed, so at any given moment, the same node could run a
combination of jobs, such as the following:
10 Small jobs (consuming 5 CPUs and 10 GiB of memory)
4 Standard jobs (consuming 4 CPUs and 8 GiB of memory)
3 Medium jobs (consuming 6 CPUs and 12 GiB of memory)
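As a rough sketch of this arithmetic, the example mix above fits on a node with 16 vCPU and 32 GiB of RAM. The per-job sizes below are derived from the totals in the list (e.g. 10 small jobs consuming 5 CPUs implies 0.5 vCPU each); the packing check is illustrative, not the scheduler's actual algorithm:

```python
# Illustrative resource arithmetic for a mix of job sizes on one agent node.
JOB_SIZES = {
    "small": (0.5, 1.0),     # (vCPU, GiB RAM) per job
    "standard": (1.0, 2.0),
    "medium": (2.0, 4.0),
}

def fits(node_cpu: float, node_ram: float, jobs: dict[str, int]) -> bool:
    """Check whether a mix of jobs fits within the node's CPU and RAM."""
    cpu = sum(JOB_SIZES[name][0] * count for name, count in jobs.items())
    ram = sum(JOB_SIZES[name][1] * count for name, count in jobs.items())
    return cpu <= node_cpu and ram <= node_ram

# The example mix from the text: 10 small + 4 standard + 3 medium jobs
# consumes 15 vCPU and 30 GiB in total.
mix = {"small": 10, "standard": 4, "medium": 3}
print(fits(16, 32, mix))
```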

Automatic machine size selection:


Remote debugging job: Medium
Process depends on UI Automation or the UiPath Document Understanding activities: Standard
Other unattended processes: Small

Additional Document Understanding recommendations:


For increased performance, you can install Document Understanding on an additional
agent node with GPU support. Note, however, that Document Understanding is fully
functional without the GPU node: it uses CPU VMs for all its extraction and
classification tasks, while for OCR we strongly recommend the use of a GPU VM.
GPU node specification:
Processor: 8 (v-)CPU/cores
RAM: 52 GiB
Cluster binaries and state disk: 256 GiB SSD (min IOPS: 1,100)
Data disk: N/A
GPU RAM: 11 GiB
When adding the GPU node pool, it is important that you use --node-taints
nvidia.com/gpu=present:NoSchedule instead of --node-taints sku=gpu:NoSchedule.
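As a sketch, the taint above could be applied when creating the GPU node pool with the Azure CLI; the resource names, pool name, and VM size below are placeholders, not values from this document:

```
az aks nodepool add \
  --resource-group <RESOURCE_GROUP> \
  --cluster-name <CLUSTER_NAME> \
  --name gpunp \
  --node-vm-size Standard_NC12s_v3 \
  --node-taints nvidia.com/gpu=present:NoSchedule \
  --node-count 1
```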

Step 2: Downloading the software on your client machine


You must install the following software on your management machine. The management
machine refers to the machine you use to operate your cluster and that can access
your cluster via the kubeconfig file. Your management machine can run Linux,
Windows, or macOS.

uipathctl: uipathctl is a UiPath command-line tool that allows you to run commands
against the Automation Suite Kubernetes cluster hosted on Azure Kubernetes Service
(AKS).

Step 3: Configuring input.json:


kubernetes_distribution: Specify which Kubernetes distribution you use. Can be aks or eks.
registries: URLs to pull the Docker images and Helm charts for UiPath products and Automation Suite (registry.uipath.com).
fqdn: The load balancer endpoint for Automation Suite.
admin_username: The username that you would like to set as an admin for the host organization.
admin_password: The host admin password to be set.
profile: Default value, not changeable. ha: multi-node HA-ready production profile.
telemetry_optout: true or false; used to opt out of sending telemetry back to UiPath. It is set to false by default. If you want to opt out, set it to true.
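Pulling the parameters above together, a minimal input.json fragment might look like the following. The values are placeholders, the registries key is omitted because its nested shape is not shown here, and the exact schema should be checked against your Automation Suite version:

```json
{
  "kubernetes_distribution": "aks",
  "fqdn": "automationsuite.example.com",
  "admin_username": "admin",
  "admin_password": "<PASSWORD>",
  "profile": "ha",
  "telemetry_optout": false
}
```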

Certificate configuration: If no certificate is provided at the time of
installation, the installer creates self-issued certificates and configures them in
the cluster.
NOTE: Make sure to specify the absolute path for the certificate files. Run pwd to
get the path of the directory where the files are placed and append the certificate
file name to the input.json.
1.server_certificate.ca_cert_file: Absolute path to the Certificate Authority (CA)
certificate. This CA is the authority that signs the TLS certificate. A CA bundle
must contain only the chain certificates used to sign the TLS certificate. The
chain limit is nine certificates. If you use a self-signed certificate, you must
specify the path to rootCA.crt, which you previously created. Leave blank if you
want the installer to generate it.
2.server_certificate.tls_cert_file: Absolute path to the TLS certificate
(server.crt is the self-signed certificate). Leave blank if you want the installer
to generate it.
3.server_certificate.tls_key_file: Absolute path to the certificate key (server.key
is the self-signed certificate). Leave blank if you want the installer to generate
it.
4.identity_certificate.token_signing_cert_file: Absolute path to the identity token
signing certificate used to sign tokens (identity.pfx is the self-signed
certificate). Leave blank if you want the installer to generate an identity
certificate using the server certificate.
5.identity_certificate.token_signing_cert_pass: Plain-text password set when
exporting the identity token signing certificate.
6.additional_ca_certs: Absolute path to the file containing the additional CA
certificates that you want to be trusted by all the services running as part of
Automation Suite. All certificates in the file must be in valid PEM format.
For example, you need to provide the file containing the SQL Server CA certificate
if the certificate is not issued by a public certificate authority.

SQL database:

NOTE: Make sure that the SQL server can be accessed from the cluster nodes.
Automatically create the necessary databases:
If you want the installer to create the databases:
sql.create_db: Set to true.
sql.server_url: FQDN of the SQL server where you want the installer to configure the databases.
sql.port: Port number on which a database instance should be hosted on the SQL server.
sql.username: Username / user ID to connect to the SQL server.
sql.password: Password of the username provided earlier to connect to the SQL server.

NOTE: Ensure the user has the dbcreator role. This grants them permission to create
the database in SQL Server. Otherwise the installation fails.
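Based on the dotted key names above, an illustrative sql section might look like this; the nesting is inferred from the key names and the values are placeholders:

```json
{
  "sql": {
    "create_db": true,
    "server_url": "sqlserver.example.com",
    "port": "1433",
    "username": "asadmin",
    "password": "<PASSWORD>"
  }
}
```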

Bring your own database:If you bring your own database, you must provide the SQL
connection strings for every database.
SQL Connection String templates:
1. sql_connection_string_template: Platform, Orchestrator, Automation Suite Robots, Test Manager, Automation Hub, Automation Ops, Insights, Task Mining, Data Service, Process Mining, Document Understanding
2. sql_connection_string_template_jdbc: AI Center
3. sql_connection_string_template_sqlalchemy_pyodbc: Process Mining
IMPORTANT:
Make sure the SQL account specified in the connection strings is granted the
db_securityadmin and db_owner roles for all Automation Suite databases. If security
restrictions do not allow the use of db_owner, then the SQL account should have the
following roles and permissions on all databases:
db_securityadmin
db_ddladmin
db_datawriter
db_datareader
EXECUTE permission on dbo schema
IMPORTANT:
If you manually set the connection strings in the configuration file, you can
escape SQL, JDBC, ODBC, or PYODBC passwords as follows:
for SQL: add ' at the beginning and end of the password, and double any other '.
for JDBC/ODBC: add { at the beginning of the password and } at the end, and double
any other }.
for PYODBC: username and password should be url encoded to account for special
characters. Document Understanding database passwords cannot start with {.
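The escaping rules above can be sketched as small helper functions; the function names are illustrative, not part of the installer:

```python
from urllib.parse import quote_plus

def escape_sql(pw: str) -> str:
    # SQL: add ' at the beginning and end, and double any other '.
    return "'" + pw.replace("'", "''") + "'"

def escape_odbc(pw: str) -> str:
    # JDBC/ODBC: add { at the beginning and } at the end, and double any other }.
    return "{" + pw.replace("}", "}}") + "}"

def escape_pyodbc(user: str, pw: str) -> tuple[str, str]:
    # PYODBC: URL-encode username and password to account for special characters.
    return quote_plus(user), quote_plus(pw)

print(escape_sql("p@ss'word"))   # 'p@ss''word'
print(escape_odbc("p}wd"))       # {p}}wd}
print(escape_pyodbc("sa", "p@ss w0rd"))
```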
IMPORTANT: The AutomationSuite_ProcessMining_Airflow database for Process Mining
product must have READ_COMMITTED_SNAPSHOT enabled.
NOTE:
By default, TrustServerCertificate is set to False, and you must provide an
additional CA certificate for the SQL Server. This is required if the SQL Server
certificate is self-signed or signed by an internal CA. If you do not provide the
SQL Server certificate in this scenario, the prerequisite check will fail.
NOTE:
If you want to override the connection string for any of the services above, set
the sql_connection_str for that specific service.
You still have to manually create these databases before running the installer.
Azure Active Directory-based access to SQL from AKS: You may choose to access
Microsoft SQL Server via Azure Active Directory from the AKS cluster.

Caching:
Multiple services in Automation Suite, such as Orchestrator and Identity, use Redis
as a distributed cache.
Basic: Not recommended for production deployment since it does not offer a Service
Level Agreement (SLA). However, it can be used for a test environment.
Standard C1 (1GB): It provides decent capacity and performance suitable for a
majority of installations. It also allows future scaling to higher levels,
including Standard C2 or Premium.
Standard C2: A step above Standard C1, it provides larger capacity and better
performance as compared to C1.
Premium: The most recommended option, as it provides availability zones promoting a
higher SLA, and VNet integration for enhanced security.

Storage:
Storage estimates for each Automation Suite component:
1. Orchestrator (NuGet automation packages for deployed automations; queues and
their data): Typically, a package is 5 MB, and buckets, if any, are less than 1 MB.
A mature enterprise deploys around 10 GB of packages and 12 GB of queues.
Objectstore: On AKS, Azure Storage (Blob), authenticated via Account Key.
Block storage configuration: Passed to the storage_class parameter in the
input.json file.
NOTE:Sometimes the EKS or AKS cluster already installs the CSI driver and provides
the storage class. If these storage classes are not configured, you must configure
them before installation.
You must make the storage class for the block storage the default one, as shown in
the following example.
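A minimal sketch of marking an Azure Disk CSI storage class as the cluster default uses the standard Kubernetes is-default-class annotation; the class name here is illustrative, and disk.csi.azure.com is the Azure Disk CSI provisioner:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: block-storage-default   # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: disk.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```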
The size of the block store depends on the size of the deployed and running
automations. Therefore, it is difficult to provide an accurate estimate initially
during the installation. However, approximately 50 GB of storage should be a good
start. To understand the usage of the block store, see Storage estimates for each
Automation Suite component.
NOTE: As your automation scales, you may need to account for the increase in your
block storage size.
File storage: On AKS, Azure Files via the azurefile-csi-premium storage class (file.csi.azure.com).
Backup and restore: On AKS, an Azure Storage Account.

Networking:
The HA mode requires two replicas and can scale up to ten or more replicas. Make
sure your network supports this scaling level.
IMPORTANT:
Automation Suite does not support the IPv6 internet protocol.
Configuring the NGINX ingress controller: configure the Kubernetes service_type as
cluster_IP instead of LoadBalancer.
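If you deploy the community ingress-nginx Helm chart, the service type can be set through its chart values; a hedged sketch, with the value names taken from that chart rather than from this document:

```yaml
controller:
  service:
    type: ClusterIP
```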

NGINX to Istio via HTTP


Updating your NGINX ingress configuration
You must update your NGINX specification with istio-ingressgateway as a backend
service and specify the port number 80.

NGINX to Istio via HTTPS


Updating the NGINX ingress configuration
You must update your NGINX specification with istio-ingressgateway as a backend
service and specify https as the port name.


Load balancer configuration: You have two options to configure the load balancer:
Preallocated IPs: Allocate public or private IPs for the load balancer, configure
the DNS records to map the FQDNs to these IPs, and provide these IPs as part of the
ingress section of input.json.
Dynamically allocated IPs: If you do not provide an IP address, Automation Suite
dynamically allocates IPs from the cluster subnet to the load balancer.

Preallocated IPs: The following example shows how to allocate public IPs from Azure
and provision a public load balancer:
...
"ingress": {
"service_annotations": {
"service.beta.kubernetes.io/azure-load-balancer-internal": "false",
"service.beta.kubernetes.io/azure-load-balancer-ipv4": "<IP>"
}
}
...
The following example shows how to allocate private IPs to an internal load
balancer from the AKS cluster subnets.
...
"ingress": {
"service_annotations": {
"service.beta.kubernetes.io/azure-load-balancer-internal": "true",
"service.beta.kubernetes.io/azure-load-balancer-ipv4": "<IP>",
"service.beta.kubernetes.io/azure-load-balancer-internal-subnet": "<SUBNET_0>,<SUBNET_1>"
}
}
...
DNS configuration
Ensure that the DNS records are configured to map the following UiPath® FQDNs to
the load balancer:
FQDN
alm.FQDN
monitoring.FQDN
insights.FQDN (if installing UiPath Insights)
NOTE:
The FQDN is one of the prerequisite checks before installation. If you do not
provide an IP address or have not yet done the FQDN mapping, the check will fail.
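The record list above can be derived mechanically from the base FQDN; a small sketch, where the function name is illustrative:

```python
def required_dns_records(fqdn: str, insights: bool = False) -> list[str]:
    """Return the FQDNs that must resolve to the load balancer."""
    records = [fqdn, f"alm.{fqdn}", f"monitoring.{fqdn}"]
    if insights:
        # insights.FQDN is needed only if installing UiPath Insights.
        records.append(f"insights.{fqdn}")
    return records

print(required_dns_records("automationsuite.example.com", insights=True))
```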
Dynamically allocated IPs
If you do not provide any IPs in input.json, Automation Suite dynamically allocates
private IPs from the worker node subnets.

Orchestrator-specific configuration:
Orchestrator can save robot logs to an Elasticsearch server. You can configure this
functionality in the orchestrator.orchestrator_robot_logs_elastic section.
orchestrator_robot_logs_elastic: Elasticsearch configuration.
elastic_uri: The address of the Elasticsearch instance that should be used, provided in the form of a URI. If provided, the username and password are also required.
elastic_auth_username: The Elasticsearch username, used for authentication.
elastic_auth_password: The Elasticsearch password, used for authentication.

Insights-specific configuration:
If you enable Insights, you can include an SMTP server configuration used to send
scheduled and alert emails. If it is not provided, scheduled and alert emails will
not function.

Process Mining-specific configuration:


If you enable Process Mining, we recommend specifying a secondary SQL server to
act as a data warehouse, separate from the primary Automation Suite SQL Server.

Automation Suite Robots-specific configuration:


Automation Suite Robots can use package caching to optimize your process runs and
allow them to run faster. NuGet packages are fetched from the filesystem instead of
being downloaded from the Internet/network. This requires a minimum of 10 GiB of
additional space, allocated to a folder on the host machine filesystem of the
dedicated nodes.

Step 4: Accessing your cluster with uipathctl:


uipathctl requires access to the Kubernetes API server to perform cluster-level
operations such as deployment, resource creation, etc. To access the API server,
uipathctl uses the kubeconfig file, which contains the admin-level credentials
needed to access the cluster. This file must be present at ~/.kube/config (the
default location) on your local (management) machine.
Optionally, if you are concerned about storing the kubeconfig file in the default
location, you can instead provide it with the --kubeconfig flag on every execution
of uipathctl.
For example, you can use your preferred method to update your ~/.kube/config file,
such as the AWS CLI or Azure CLI.
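A hedged sketch of fetching cluster credentials with the cloud CLIs; the resource names are placeholders:

```
# AKS: merge cluster credentials into ~/.kube/config
az aks get-credentials --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME>

# EKS: write/update the kubeconfig entry for the cluster
aws eks update-kubeconfig --region <REGION> --name <CLUSTER_NAME>
```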

Step 5: Checking the infrastructure prerequisites:


Prerequisite checks ensure that the needed cloud infrastructure is provisioned
appropriately and is accessible by the client machine before starting the
installation of Automation Suite.
The installer can automatically generate the following configurations on your
behalf:
The SQL databases required for the installation on the SQL server, if the
sql.create_db key is set in your input.json file.
The object storage buckets required in your cloud provider, if the
external_object_storage.create_bucket key is set in the configuration file.
To check the prerequisites based on the inputs you configured in the input.json,
run the following command:
uipathctl prereq run input.json --versions versions.json
If you want to exclude components from the execution, use the --excluded flag. For
example, if you do not want to check the database connection strings, run uipathctl
prereq --excluded SQL. The command runs all the prerequisite checks except for the
SQL-related one.
If you want to include only certain components in the execution, use the --included
flag. For example, if you only want to check the DNS and objectstore, run uipathctl
prereq --included DNS,OBJECTSTORAGE.
IMPORTANT: You may receive a throttling message from AKS, such as Waited for
1.0447523s due to client-side throttling, not priority and fairness. In this case,
allow a few minutes for the command to fully complete or try to re-run it.

Step 6: Installing Automation Suite:


After successfully validating the prerequisites, you can proceed to install
Automation Suite by running the following command:
uipathctl manifest apply input.json --versions versions.json
To rerun the installation, use the same command with all the arguments and flags.
To validate that your installation is successful, and services are healthy, run the
following command:
uipathctl health check
