AAE Architecture, Logs, Messages, etc.
Automation Anywhere and the Automation Anywhere logo are trademarks owned by Automation Anywhere
Software.
The information contained in this documentation is proprietary and confidential. Your use of this information and
Automation Anywhere Software products is subject to the terms and conditions of the applicable End-User
License Agreement and/or Nondisclosure Agreement and the proprietary and restricted rights notices included
therein.
You may print, copy, and use the information contained in this documentation for the internal needs of your
user base only. Unless otherwise agreed to by Automation Anywhere and you in writing, you may not otherwise
distribute this documentation or the information contained here outside of your organization without obtaining
Automation Anywhere’s prior written consent for each such distribution.
Contents
Automation Anywhere Enterprise architecture overview
• Automation Anywhere Enterprise architecture
• Deployed components
• Firewall and port requirements
• Load balancer requirements
• Bot Creator overview
• Bot Runner overview
• Control Room overview
• Capacity and performance planning
• Bot Quality of Service priorities
• Deployment and repository access
• Remote desktop Bot sessions
• Bot Runner reactive status processing
• Workload management
• Bot concurrent schedules
• Deployment models
• Single-Node deployment
• High Availability deployment model
• High Availability Disaster Recovery deployment model
• Disaster recovery failover steps overview
• Graceful degradation
• Minimum hardware specifications
• Installation and configuration files
• Control Room operations overview
• Data retention
• Logging
• Monitoring and alerts
• Database maintenance plan
• SQL database backup and recovery for Control Room overview
• Bot design guidelines and standards
• Testing
• Commenting
• Naming Conventions
• Logging
• VB Script
• Configuration Files
Overview
This set of topics is primarily intended for IT Managers, Enterprise Architects, and Technical Decision
Makers to assist in implementing RPA using Automation Anywhere Enterprise.
It introduces the core Automation Anywhere Enterprise components and the data center configuration
required to install and deploy them.
Related concepts
Deployed components
Related reference
Firewall and port requirements
Load balancer requirements
Bot Creator overview
Bot Runner overview
Control Room overview
Deployed components
Describes the required and optional components used in Automation Anywhere Enterprise data center
installations.
The following shows how the Automation Anywhere Enterprise components interact with your data center.
Objects in yellow are Automation Anywhere Enterprise components. Objects in blue are data center
components.
Configure the firewall rules for Bot Creator, Bot Runner, and Control Room. Refer to the following tables
for lists of required ports and their use.
Control Room
Warning: It is critical that communication between the Control Room servers is properly
protected. These Control Room servers contain security sensitive information that is not
encrypted. Therefore, excepting the Control Room servers, block all other network hosts from
accessing the listed Automation Anywhere cluster communication ports.
Protocol: HTTPS and WebSocket
Port: 443
Used by: Web browsers, Bot Runners, Bot Creators
Pros:
Cons:
• If audit logging is required, the load balancer cannot report the requests from clients
• Does not utilize TLS hardware offloading, even if the load balancer supports it
Pros:
Cons:
• Certificates must be managed both on the load balancer and on the Control Room nodes
• Possible interception of data at the load balancer hardware level, because the TLS session is not end-
to-end
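To make the passthrough option concrete, the following is a hypothetical HAProxy sketch of TLS passthrough with round-robin host selection. The hostnames, addresses, and names here are illustrative assumptions, not values from this document:

```
# Hypothetical HAProxy sketch: TLS passthrough (end-to-end TLS).
frontend cr_front
    bind *:443
    mode tcp                      # pass TLS through untouched; no certificate on the LB
    default_backend cr_nodes

backend cr_nodes
    mode tcp
    balance roundrobin            # round-robin host selection, per the guidance above
    server cr1 10.0.0.11:443 check
    server cr2 10.0.0.12:443 check
```

With TLS termination instead, the certificate would be configured on the frontend (`mode http`, `bind *:443 ssl crt ...`) and certificates would also need managing on the Control Room nodes, as noted in the cons above.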
A Bot is a self-contained task designed to run with little-to-no human intervention. The Bot Creator is the
Automation Anywhere proprietary development client used to author Bots. The following shows the Bot
Creator components.
The Bot Runner is the machine that runs Bots. After a Bot is created using the Bot Creator,
Bot Runners can run it at scale. The following shows the Bot Runner components.
The Control Room is a centralized management point for all Bots. Control Room manages, schedules,
executes, and configures various aspects of Bots and Bot Runners using a collection of specialized web
services. A reverse proxy is responsible for listening for remote connection requests and forwarding those
requests to the correct specialized service. The following shows the Control Room components and
general data center interaction.
Hardware requirements
Table 1. Required Hardware for Automation Anywhere Enterprise Components
Bot Runner: Intel Core i5 2.6 GHz processor; 8 GB RAM; 32 GB storage; 1 GbE network
Bot Creator: Intel Core i5 2.6 GHz processor; 8 GB RAM; 32 GB storage; 1 GbE network
Control Room: 8-core 3.0 GHz Intel Xeon Platinum processor (Turbo Boost to 3.5 GHz); 16 GB RAM; 500 GB storage; 10 GbE network
SQL server: 4-core Intel Xeon processor; 8 GB RAM; 500 GB storage; 1 GbE network
PostgreSQL server: 2-core Intel Xeon processor; 4 GB RAM; 10 GB storage; 1 GbE network
To maintain high operational availability the Control Room is designed around the concept of Quality of
Service (QoS).
Each incoming request is examined and prioritized based on:
Number of devices | Bot and dependencies total size | Approximate deployment time
1000 | 10 MB | 1 minute
1000 | 50 MB | 1 minute
1000 | 100 MB | 5 minutes
Devices and the Control Room co-ordinate to implement a fair queuing strategy for download and uploads
of chunked data to the repository.
As the number of devices simultaneously downloading and uploading increases, the time taken for the
Control Room to start processing a chunk request increases. A device waits for up to two minutes for a
response before timing out an upload or download request. With the default limit of 10 parallel processed
repository requests and simultaneous deployment and execution to 1000 devices, the average time to
queue and process a chunk is approximately 10 seconds.
Note: Downloads and uploads from Bot Creators are subject to the same QoS policies as
deployments.
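As a rough sketch of this arithmetic (the per-chunk service time below is an assumption chosen so the result matches the ~10 second average stated above; it is not a documented Control Room parameter):

```python
# Back-of-the-envelope estimate of repository chunk queuing.
PARALLEL_SLOTS = 10      # default limit of parallel processed repository requests
DEVICES = 1000           # simultaneous deployment and execution targets
CHUNK_SERVICE_S = 0.1    # assumed service time per chunk, in seconds (not documented)
DEVICE_TIMEOUT_S = 120   # a device waits up to two minutes before timing out

# Each device waits roughly (queue depth / parallel slots) service intervals.
avg_chunk_wait_s = (DEVICES / PARALLEL_SLOTS) * CHUNK_SERVICE_S
print(avg_chunk_wait_s)  # approximately 10 seconds, well under the timeout
```

The point of the estimate is only that the average chunk wait stays far below the two-minute device timeout at the stated scale.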
If the option Run bot runner session on control room is selected when running a Bot, the Control
Room opens a Remote Desktop Protocol (RDP) session to the device.
To determine the number of simultaneously supported RDP sessions possible:
• Divide the unallocated Control Room memory by the memory usage of an individual RDP client.
Typically, this is 75 MB.
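That division can be sketched as follows; the unallocated memory figure is an illustrative assumption, so measure it on your own server:

```python
# Estimate of simultaneously supported RDP sessions on a Control Room node.
UNALLOCATED_MB = 4096    # assumed free Control Room memory (illustrative)
RDP_CLIENT_MB = 75       # typical memory usage of one RDP client, per the text above

max_rdp_sessions = UNALLOCATED_MB // RDP_CLIENT_MB
print(max_rdp_sessions)  # 54 sessions under these assumptions
```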
1. Manually increase the amount of memory available for the non-interactive desktop heap size. To do
this, edit the Windows registry value at:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager
\SubSystems
2. In the Windows registry, locate the SharedSection option. Change it to:
SharedSection=A,B,C
Table 1. SharedSection settings for RDP sessions
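For orientation, the SharedSection triple has the following commonly documented meaning; the concrete numbers below are a common Windows default plus an example increase, not an Automation Anywhere recommendation:

```
; Illustrative fragment only. The SharedSection triple lives inside the long
; string value at:
;   HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems\Windows
; A = system-wide heap size (KB), B = interactive desktop heap (KB),
; C = non-interactive desktop heap (KB). Raising C allows more concurrent
; non-interactive (RDP) sessions. 1024,20480,768 is a common default;
; 2048 for C is shown only as an example increase.
SharedSection=1024,20480,2048
```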
The rate at which status update messages from Bot Runners are processed is limited. The rate limit is
applied per the Bot Runner's Control Room node. This prevents overload when concurrently executing a
large number of Bots.
• The rate limit is adjusted dynamically based on the number of unprocessed status update
messages.
• Critical status update messages that indicate start, stop or error are never rate-limited.
• If reactive rate-limiting is activated, the progress reported on the Activity page is updated at a lower
frequency than normal.
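The behavior described above can be sketched as follows. This is a generic illustration of reactive rate limiting, not Control Room internals; all names and thresholds are assumptions:

```python
# Sketch: the permitted processing rate shrinks as the unprocessed backlog
# grows, and critical messages (start/stop/error) are never rate-limited.
CRITICAL = {"START", "STOP", "ERROR"}

def allowed_per_second(backlog, base_rate=100, soft_limit=1000):
    """Scale the processing rate down once the backlog exceeds a soft limit."""
    if backlog <= soft_limit:
        return base_rate
    return max(1, base_rate * soft_limit // backlog)

def should_process_now(message_type, budget):
    """Critical messages always pass; others consume the per-second budget."""
    if message_type in CRITICAL:
        return True
    return budget > 0
```

With a backlog of 2,000 messages and the assumed defaults, the allowed rate halves to 50 per second, which is why progress on the Activity page updates less frequently while rate limiting is active.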
Workload management
The CSV import rate used by workload management can be configured.
Data stored as comma separated values (CSV) files can be imported. By default, Workload Management
(WLM) is configured with a conservative CSV import rate to minimize resource utilization.
For a more aggressive import strategy, configure the number of lines imported per batch and the interval
between each batch.
Table 1. Default CSV import rates
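The batch-and-interval strategy can be sketched like this; the function and parameter names are illustrative, not product settings:

```python
# Hypothetical sketch of batched CSV import for workload management:
# handle `lines_per_batch` rows at a time, pausing `interval_s` between batches.
import csv
import time

def import_csv_in_batches(path, handle_row, lines_per_batch=100, interval_s=1.0):
    with open(path, newline="") as f:
        batch = []
        for row in csv.reader(f):
            batch.append(row)
            if len(batch) >= lines_per_batch:
                for r in batch:
                    handle_row(r)       # e.g. create a work item
                batch.clear()
                time.sleep(interval_s)  # throttle between batches
        for r in batch:                 # final partial batch
            handle_row(r)
```

A larger `lines_per_batch` with a shorter `interval_s` is the more aggressive strategy; the conservative default trades import speed for lower resource utilization.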
When configuring Bots to run repeatedly on a schedule it is important to make sure that the time between
runs does not drop below the total time for deployment and execution of the Bot. Otherwise, sequential
executions of the Bot might overlap leading to unexpected behavior.
For example, if it takes 20 seconds to run a Bot, do not schedule it to run every 15 seconds; the
previous run cannot complete before the next run begins.
With the reference specification, it is possible to successfully configure concurrent schedules.
Table 1. Concurrent schedule timing estimates
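The overlap rule above amounts to a simple check; the headroom factor below is an assumption added to pad for run-time variance:

```python
# Illustrative guard against overlapping scheduled runs: the schedule
# interval must exceed total deployment + execution time, with headroom.
def schedule_is_safe(interval_s, deploy_s, run_s, headroom=1.2):
    """Return True if sequential runs cannot overlap."""
    return interval_s > (deploy_s + run_s) * headroom

print(schedule_is_safe(15, 2, 20))  # False: a 20 s Bot every 15 s overlaps
print(schedule_is_safe(60, 2, 20))  # True
```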
Deployment models
It is important to identify key requirements before selecting a deployment model.
Planning
The Automation Anywhere Enterprise suite offers multiple deployment options to meet various enterprise
cost, performance, and resiliency needs.
Deployment models
At a high level, there are three ways to install Automation Anywhere Enterprise, depending on your
business continuity requirements.
• Single-Node Deployment
• High Availability Deployment Model
• High Availability with Disaster Recovery Deployment Model
Single-Node deployment
A single-node deployment is used for some proof-of-concept deployments.
A single-node Control Room installation is deployed without the need for a load balancer. It is useful for
some proof-of-concept deployments.
Cons
• No disaster recovery
• No high availability
• Susceptible to hardware failures
Use Cases
• Proof of concept
• Single-user use scenarios
The High Availability (HA) deployment model provides failure tolerance for the Control Room cluster
servers and database servers. The following shows the Automation Anywhere Enterprise and data center
components.
In this example, the Control Room servers and Microsoft SQL servers have HA redundancy. The asterisk
(*) on the PostgreSQL server and Subversion (SVN) server indicates that those servers do not require HA
redundancy. However, see Graceful degradation for Bot support if either of these fails.
Configure a load balancer to front all HTTP(S) requests for the Control Room. For best practice, configure
the load balancer to use round-robin host selection.
Configure synchronous replication between the primary site (master) and secondary site (slave) MS SQL
servers to ensure consistency in the event of a database node failure. For the required HA synchronous
replication, configure one of the following:
Pros
Cons
Use Cases
Disaster Recovery (DR) is a model where two High Availability (HA) data center configurations are
separated geographically. The benefit over a single-location HA configuration is that, in the event of
a localized disaster, the physically removed data center can resume functions with minimum downtime.
In this example, all the servers have HA redundancy. Every data center component is duplicated,
including the PostgreSQL server and Subversion (SVN) server, which do not strictly require HA redundancy.
See Graceful degradation for Bot support if either of these fails.
Disaster recovery in an Automation Anywhere Enterprise environment:
• Deploy a second Control Room cluster in an additional datacenter that is in a separate geographic
location.
• In the event of a primary site failure, perform the disaster recovery manually. See the Disaster
recovery failover steps overview.
Note: When a failover to a backup site occurs, very recent changes made on the primary
site might be lost.
Pros
Cons
Note: Schedules are stored in UTC and therefore run at the same time regardless of the
physical location or time zone settings of the server.
Failure mode
With asynchronous replication, there is the possibility that a transaction that occurs on the primary site
does not reach the recovery site replica before the failure occurs.
Note: This possibility of losing the most recent transactions applies to all DR automated
application solutions using asynchronous replication, not just the Automation Anywhere solution.
Deployment requires strict consistency between distant geographical locations. Synchronous-Commit
configured between replicas with significant latency has a detrimental effect on all Control Room
operations.
To prevent work items from being processed twice when a failure occurs, some work items awaiting delivery to
a device are placed into an error state. This ensures they can be manually reviewed and marked as ready
to be processed, or as complete, as appropriate.
The failover procedure is identical whether switching from primary to secondary (recovery) or from
secondary back to primary.
If the failed Control Room nodes are still available:
Graceful degradation
Graceful high availability options with degraded primary site.
Certain dependencies of the Automation Anywhere system do not require full High Availability (HA) to
continue to successfully deploy and run bots.
PostgreSQL server
PostgreSQL is used to store dashboard metadata. If PostgreSQL fails, dashboard graphs are not available
until PostgreSQL service is restored.
Automation Anywhere Enterprise supports up to 1000 simultaneous Bot deployments and executions with
the following hardware.
Table 1. Minimum hardware specifications
Bot Runner: Microsoft Windows 7 SP1; Intel Core i5 2.6 GHz processor; 8 GB RAM
Bot Creator: Microsoft Windows 7 SP1; Intel Core i5 2.6 GHz processor; 8 GB RAM
Control Room: Windows Server 2012 R2 or later; 8-core Intel Xeon processor; 16 GB RAM
SQL Server: Windows Server 2008 R2 or later / Red Hat Enterprise Linux / Ubuntu LTS; 4-core Intel Xeon processor; 8 GB RAM
PostgreSQL server: Windows Server 2008 SP2 or later; 2-core Intel Xeon processor; 4 GB RAM
Version control
Version control is through Subversion (SVN). Configure the location through the Control Room UI.
From the Control Room, there are several configuration options for managing and monitoring. This section
provides general guidelines for Control Room operations, such as data retention, logging,
monitoring and alerts, and database maintenance planning.
Data retention
The performance of searching and sorting historical data on the Control Room is impacted by the amount
of historical data stored.
When the amount of stored historical data starts to affect the performance of searching and sorting
on the Control Room, you can delete selected data from the historical records.
Only delete data from the listed SQL tables. This data can be deleted without impacting the availability
of the Control Room.
CAUTION: Do not modify or remove any records other than those listed here. Any other change
could leave the Control Room cluster in an unrecoverable state.
Logging
Logging data needs to be gathered in one central location for better and more efficient consumption.
Logging data is generated throughout the Automation Anywhere Enterprise product. For logging to be
more useful, we recommend that you consolidate your logs into one central machine or area.
Splunk strategy
To be practical and useful, logging methods need to: collect logging events across all systems and apps,
provide a holistic view of the entire environment, and display the collected logs in a single area or tool.
Tools, such as Splunk, aggregate various types of logs from various sources into one central location.
Splunk is compatible with the Automation Anywhere Enterprise logging infrastructure, as well as network
and operating system (OS) environments, and provides a single holistic view of an entire system. Splunk's
light-weight software agent, Universal Forwarder, can be installed on most operating systems and
networking environment. The Universal Forwarders monitor logs as they are generated and forward them
to the Splunk Indexing Server, all in real-time. Splunk deployment is both easy and scalable. More
importantly, it provides the top level view of the whole enterprise and provides drill-down options into all of
your logging data.
Infrastructure logging
Network, router, switch, firewall, gateways, etc.
Systems logging
Windows Event Viewer, Web server logs, and machine logs.
Application logging
• Control Room
• Bot Runner
• Bot Creator (Dev Client)
• Bot Farm
• Credential Vault
• Bot Insight for analytics
Log retention
How long you retain collected logs is typically determined by company policy, defined in terms of
currently active (hot), accessible backup (warm), and historical (cold) records.
• Hot storage – The current, active log files. Stored on the server where they were generated or where
they are collected. Keep these files locally, on your servers, for at least a month.
• Warm storage – Corporate-wide backups that are generally available for at least one year. Typically,
are moved from warm storage to cold storage after five years.
• Cold storage – Long term archive storage such as the use of magnetic tape that survives the test of
time. These are the files that are moved from warm storage, five years after origination date.
Log rotation
Log rotation, where dated log files are archived, is highly recommended. It keeps logs at a
manageable file size in the file system. The recommended rotation is every 24 hours; that is, archive the log
file every 24 hours. If your system generates a lot of log data, adjust the frequency of the log rotation.
Alternatively, choose a combination method based on your environment. For example:
• Log Rotation by Time – Create a single new log file per 24 hours.
• Log Rotation by Size – Create new log file based on the size of the log file.
• Log Rotation by Bot – Some combination of both to limit size and time per log file.
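As one way to implement time- or size-based rotation (not product-specific; file names and retention counts here are illustrative), the Python standard library provides ready-made handlers:

```python
# Time- and size-based log rotation with the Python standard library.
import logging
from logging.handlers import TimedRotatingFileHandler, RotatingFileHandler

log = logging.getLogger("bot")
log.setLevel(logging.INFO)

# Rotation by time: archive the log every 24 hours, keep 30 archives.
log.addHandler(TimedRotatingFileHandler("bot.log", when="h", interval=24,
                                        backupCount=30))

# Rotation by size: start a new file at 10 MB, keep 10 archives.
# (Use one handler or the other; the second is shown commented out.)
# log.addHandler(RotatingFileHandler("bot.log", maxBytes=10 * 2**20,
#                                    backupCount=10))

log.info("Rotation configured")
```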
Nagios
Nagios is a powerful enterprise-grade hardware, network, and server monitoring and alerting product that
provides instant awareness of your Automation Anywhere Enterprise IT infrastructure. There are two
parts to consider when it comes to monitoring as a strategy:
• Monitoring
• Alerting
Monitoring
Objects to monitor for each group include, for example:
• Machines (physical machines, VMs, devices, etc.)—CPU load, memory pressure, disk space, disk IO,
processes, and other system metrics. Machines include, for example, the Control Room, Bot Farm, and
Bot Runners.
• Network—Protocol, uptime, overload, throughput, ping, latency, DNS.
• Application—Bot running time, log peek, scheduling service, database service, web server, load
balancing, log truncation.
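As one illustration of a machine and application check, a minimal Nagios object definition might look like the following. The host name, address, and template names are assumptions, and the stock check_http plugin is used as an example, not product guidance:

```
# Hypothetical Nagios object definitions for monitoring a Control Room node.
define host {
    use         generic-host        ; template assumed to exist in your config
    host_name   controlroom1
    address     10.0.0.11
}

define service {
    use                   generic-service
    host_name             controlroom1
    service_description   Control Room HTTPS
    check_command         check_http  ; add -S and -p 443 via your command definition
}
```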
Alerting
Nagios can send alerts when critical infrastructure components fail. It can also be configured to send
notifications on recovery. There are three alerting methods:
Important: Verify that your database backups and file system backups are in sync in order to
maintain consistency between the DB and the file system for recovery efforts.
Following are the installation and configuration files to back up:
You can also manage the backup and recovery processes using third-party applications such as SQL
Server Management Studio. You can then automate this process using an Automation Anywhere Bot.
This topic provides an introduction to common Bot design guidelines and standards. Avoiding common
mistakes and including these processes and considerations in your Bot design standards creates Bots
that are clean, stable, and easier to read, test, and maintain. Most of the guidelines either improve
efficient use of production resources or reduce maintenance time and errors.
• Best practice
Generally, keep Automation Anywhere Task Bots to fewer than 500 lines, preferably only a few
hundred lines.
• Break out processes: giant business processes should not become giant Bots
The key to successful Bot development for a business process is a well-defined, well-thought-out
strategy. If a business process is so large that it requires more than 8 or 10 sub-tasks, or any one
task contains thousands of lines, then reconsider the Bot approach to the process.
Evaluate your business processes. Use this understanding when creating your associated Bot
approach.
• Can the business process itself be simplified? Identify any redundant or circular steps.
• Identify logical, contained breaks or splits in your processes.
• Can parts of the business process be split into separate Bots?
• Reduce repetition
The Don't Repeat Yourself (DRY) principle and the Rule of Three both mean, essentially, reduce
repetition. Create a loop that contains a single call, rather than repeating the same small set of steps
separately.
Use variables where appropriate.
Break out repeated bits of logic or commands into sub-tasks. If a set of commands is repeated
multiple times across a task, it makes maintenance difficult: when it needs an update, all instances
must be located and correctly updated.
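As a generic illustration of the DRY principle (not Automation Anywhere task syntax; the data and function names are invented for the example):

```python
# Replace repeated copies of the same steps with one loop over the data
# that varies, calling a single shared routine (the "sub-task").
invoices = ["inv-001.pdf", "inv-002.pdf", "inv-003.pdf"]

def process(invoice):
    """Single, shared copy of the repeated logic."""
    return invoice.upper()

# One loop with a single call, instead of three pasted copies of the steps.
results = [process(inv) for inv in invoices]
print(results)
```

If the processing rule changes, only `process` is edited; the pasted-copies version would need every copy found and updated.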
• Plan for maintenance
When a rule that is encoded in a replicated set of tasks changes, whoever maintains the code has to
change it in all places correctly. This process is error-prone and often leads to problems.
Contain the set of commands and rules in one location. If the set of commands or rules exists in only
one place, then it can be easily changed there.
• Test-driven design
Smaller tasks can easily be tested alone, in a unit-test fashion. Tasks without dependencies can use
automated testing. Tasks split into sub-tasks by separate functions, even tasks that are performed
once at the beginning of a sequence, increase maintainability and ease of testing.
Sub-task overview
A sub-task is called by the parent task that needs the service. Sub-tasks are also referred to as helper
tasks or utility tasks, since their only purpose is to assist the calling task.
Tip:
Sub-tasks should be small and focused, having only a single or only a few responsibilities.
Excel sessions, CSV/text file sessions, and browser sessions (web recorder) cannot be shared
across separate tasks, so sub-tasks must be included in such a way that they do not break these
sessions.
Benefits include:
Sub-task considerations
When possible, define sub-tasks so they do not require the calling task to provide information. Required
information is a dependency. Identify the dependency and include it in the sub-task. This makes the
sub-task stand-alone, enables unit testing, and allows it to be called by other tasks without adding
dependencies.
For example, if a login sub-task can only be called if the calling task provides a URL, it has a
dependency. All parent tasks calling the sub-task must provide a URL, and if the URL changes, more
than one task must be changed. If the login sub-task includes the URL, it is decoupled from the
parent task; if the URL changes, only the one sub-task needs an update.
• Bi-directional dependencies
If sub-tasks cannot change without calling tasks changing, they are dependent and not truly
decoupled. If calling tasks cannot change without all sub-tasks changing, they have bi-directional
dependencies and, again, are not truly decoupled. These interwoven dependencies make unit
testing nearly impossible.
• Avoid too many sub-tasks
While all of the above principles are excellent for designing Bots, too many sub-tasks also become
prone to maintenance challenges and confusion. Keep the number of sub-tasks to a manageable
amount.
A Bot that has 30 sub-tasks, or would be thousands and thousands of lines without using sub-tasks,
probably indicates a business process that is too large for one Bot. Break down large processes into
pieces, then encapsulate each of the separate pieces into their own Bots.
Sub-task example
For example, suppose a Bot needs to print a notepad document as a PDF file. The task might look
like the following:
In this example, there is a need to print a file as a PDF document three times. On the example
development machine, the PDF print driver is called Pdf995.
Recommendations:
• Because the PDF print driver in production likely has a different name, investigate whether
using a variable is practical.
• Because this task could be promoted to production, and possibly repeated multiple times,
turn it into a sub-task.
If any changes are required to this specific set of commands, only this helper task needs to be edited, and
only this helper task needs to be retested.
Testing
Bot tasks should be fully tested.
A required step in Bot development is testing. Fully test all Bot tasks before they are deployed to
production. The goal is to identify and correct known errors, and prevent unexpected events from causing
the Bot to fail. If a Bot does not pass testing:
Commenting
Most Bots require changes after they are placed into production. Use comments to help with updates and
maintenance.
Most Bots require changes after they are placed into production. Sometimes those changes can be
frequent, depending on the type and scope of the Bot. The difference between a change being a relatively
straightforward task and a complete nightmare is determined by two things: how clean the Bot
architecture is, and how well the Bot is documented and commented.
Good commenting can mean a difference of hours during a maintenance cycle. Write all comments in the
same language, ensure they are grammatically correct and contain appropriate punctuation.
• Use one-line comments to explain assumptions, known issues and logic insights, or mark
automation segments:
• Always use comments to flag bad task lines with a common phrase, such as //FIX
THIS; otherwise, remove or rewrite that part of the task.
• Include comments using Task-List keyword flags to allow comment-filtering. Example:
• Never leave disabled task lines in the final production version. Always delete disabled task lines.
• Try to focus comments on the why and what of a command block and not the how. Try to help the
reader understand why you chose a certain solution or approach and what you are trying to achieve.
If applicable, also mention that you chose an alternative solution because you ran into a problem
with the obvious solution.
Naming Conventions
Capitalization and spacing styling in names.
• CamelCase—The practice of writing compound words or phrases where each word or abbreviation
begins with a capital letter. For example PrintUtility.
• bumpyCase—The same, but always begins with a lowercase letter. For example backgroundColor.
• Do not use underscores—Underscores waste space and do not provide any value in these
contexts. Readability can be achieved by using bumpyCase and CamelCase.
• Consistent values and flags.—Always use lower case Boolean values "true" and "false".
Never deviate, stick to this method of defining a boolean state. This also applies to flags. Always use
"true" or "false" for Boolean variables, never a 0 or 1 or anything else.
• Variable names—
Do not include numbers in variable names.
Avoid single-character variable names; never use i or x, for example. Use a variable name that
provides some clue about the variable's purpose.
• Flag and script names—
Name flags with Is, Has, Can, Allows, or Supports, such as isAvailable, isNotAvailable, or
hasBeenUpdated.
Name scripts with a noun, noun phrase, or adjective such as Utility or Helper, for example
FileSaveHelper.atmx.
• Prefixed fields—Don’t prefix fields. For example, don’t use g_, s_, or just _.
The exception is the letter v as a prefix, which makes variables easier to find.
• Verb-object naming—Use verb-object pairs when naming scripts, such as
GetMostRecentVersion.
Name variables with a descriptive name, such as employeeFirstName or socialSecurityNumber.
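As a quick sketch of these conventions side by side (shown in Python, where Boolean literals are capitalized True/False rather than the lowercase "true"/"false" used in Automation Anywhere; all names are illustrative):

```python
# bumpyCase, descriptive variable names rather than single letters or numbers
employeeFirstName = "Ada"
socialSecurityNumber = "000-00-0000"

# Flags begin with Is, Has, Can, Allows, or Supports and hold only Boolean values
isAvailable = True
hasBeenUpdated = False

# Verb-object naming for scripts and actions; CamelCase for compound names
def GetMostRecentVersion(versions):
    # Returns the highest version number from the list.
    return max(versions)

latestVersion = GetMostRecentVersion([1, 3, 2])
```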
Logging
Logs should be easy to read and easy to parse.
Log files store messages issued from various application and system components.
Logs need to be easy to read, understand, and parse. Keep the log file readable, clean, and descriptive.
Show the data being processed and show its meaning. Show what the Bot is actually doing. Good logs
can serve as great documentation of the Bot itself.
Types of logs
Types of messages
• ERROR—Something has gone seriously wrong and must be investigated immediately. The task
cannot perform its function properly. For example: the database is unavailable, a mission-critical use
case cannot continue, or a file is busy and cannot be opened.
• WARN—The task might be able to continue, but take extra caution. For example: the task is running in
development mode. The task can continue to operate, but always justify and examine the
message.
• INFO—An important business process has finished. The information message states significant
facts about the application. For example:
• An application action completes. In the best case, an airline booking application issues only one
INFO statement per ticket, and it states [Who] booked ticket from [Where] to
[Where].
• The application changes state significantly, such as a database update or an external system request.
• DEBUG—Any information that is helpful in debugging a Bot, typically for use by the Bot developer.
These messages do not go into the process log. Use an isProductionMode variable to turn these
statements off when the Bot is moved to production.
• PERFORMANCE—Performance logging can either go into the process/informational log or it can go
into the performance log, if a separate performance log has been created. Performance tracks how
long it takes to perform specific steps, but avoid too much granularity. In most cases, limit
performance logging to an overall business process. For example, how long it took to complete an
order, or how long it took to process an invoice.
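The levels above can be sketched with Python's standard logging module (the InvoiceBot name, the sample messages, and the isProductionMode flag are illustrative assumptions, not Automation Anywhere APIs):

```python
import logging
import time

# Hypothetical flag: set to True when the Bot is promoted to production.
isProductionMode = False

# DEBUG statements are suppressed once isProductionMode is True.
logging.basicConfig(
    level=logging.INFO if isProductionMode else logging.DEBUG,
    format="%(asctime)s\t%(levelname)s\t%(message)s",
    force=True)
log = logging.getLogger("InvoiceBot")

start = time.perf_counter()
log.debug("Entering sub-task with invoiceId=%s", "INV-001")   # developer-only detail
log.info("A. Smith booked ticket from SFO to JFK")            # one INFO per ticket
log.warning("Task is running in development mode")            # continue, but examine
log.info("PERFORMANCE\tOrder completed in %.2f seconds",      # overall process timing
         time.perf_counter() - start)
```

Note that PERFORMANCE timing wraps the whole business step rather than individual commands, matching the granularity guidance above.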
• Consumers
There are two consumers of log files: people and machines.
People consumers—When people are the consumers, their role influences the type of information
they are looking for. A developer might need information to debug, analyze performance, or locate
errors. An analyst might need audit or performance information.
Machine consumers—Machines typically read log files through shell scripts written by system
administrators. Design logs suitable for both types of consumers.
• Content
• Include objects—A good log entry includes: a timestamp, the logging level, the machine name, the name of the
task, and the message.
• Error log statements—Include the line number and error description for any error from the
Automation Anywhere Enterprise error handling block.
• Debug statements—Use debugging log statements when passing variables between sub-tasks.
Include the variable values as they enter and exit a sub-task. Use an isProductionMode
variable to turn off debugging statements when the Bot is moved to production.
• Interface calls—If a Bot interfaces with other systems, such as MetaBots, APIs, or REST or
SOAP calls, log those calls and, if appropriate, their responses.
• Formatting
• Delimiters—Delimit content values. To support easy log file importing and parsing, use tab
delimiting to separate the values.
• Log-to-file—Use the log-to-file feature built into Automation Anywhere.
Note: Don’t create your own method and format for time stamping, even for Excel.
Modify the built-in version only if there is a specific need for a different timestamp.
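Inside Automation Anywhere itself, the built-in Log-to-File command handles this. As a sketch of the intended format, a tab-delimited entry carrying the recommended fields (timestamp, logging level, machine name, task name, message) might look like this; the field order and names are illustrative:

```python
import datetime
import socket

def format_log_entry(level, task_name, message):
    # Tab-delimited fields: timestamp, logging level, machine name, task name, message.
    timestamp = datetime.datetime.now().isoformat(timespec="seconds")
    return "\t".join([timestamp, level, socket.gethostname(), task_name, message])

entry = format_log_entry("INFO", "ProcessInvoice.atmx", "Invoice INV-001 processed")
# A machine consumer can split each entry back into its five fields:
fields = entry.split("\t")
```

Tab delimiting keeps entries human-readable while letting shell scripts or spreadsheet imports split every line into the same five columns.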
VB Script
VB script option.
Automation Anywhere has the ability to call VB script. However, we recommend that you use alternative
methods if at all possible, and limit your use of VB script to situations where there is simply no other
choice.
Another thing to remember: never use Automation Anywhere Enterprise to write VB script or to create a
VB script file. Doing so is extremely difficult to maintain and is an anti-pattern at best. At worst, it
demonstrates the ability to embed and deliver a malicious payload in a Bot.
Configuration Files
Use configuration files to separate initial variable values.
Always separate the initial variable values from the task. You must change the variable values when you
run the task in different environments, such as UAT or PROD. Use a configuration file and read those
variables into the task at start time. Make use of system path variables to load the configuration file. This
ensures the configuration file can be located no matter where Automation Anywhere Enterprise is installed
on the system.
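The pattern above can be sketched as follows, assuming a hypothetical bot_config.ini with one section per environment (all file names, section names, and paths are illustrative):

```python
import configparser
import os

# Write a sample configuration file. In practice this file already exists
# and is located through a system path variable, not the working directory.
config_path = os.path.join(os.getcwd(), "bot_config.ini")
with open(config_path, "w") as f:
    f.write("[UAT]\ninputFolder = C:\\UAT\\Input\n\n"
            "[PROD]\ninputFolder = C:\\PROD\\Input\n")

# At task start time, read the initial variable values for the current environment.
environment = "UAT"  # could itself come from a system or environment variable
config = configparser.ConfigParser()
config.read(config_path)
vInputFolder = config[environment]["inputFolder"]
```

Switching the task from UAT to PROD then requires changing only the environment selector or the file, never the task itself.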