BMC Atrium CMDB 7.6.04
Normalization and
Reconciliation Guide
January 2011
www.bmc.com
Contacting BMC Software
You can access the BMC Software website at http://www.bmc.com. From this website, you can obtain information
about the company, its products, corporate offices, special events, and career opportunities.
United States and Canada
Address: BMC SOFTWARE INC, 2101 CITYWEST BLVD, HOUSTON TX 77042-2827, USA
Telephone: 713 918 8800 or 800 841 2031
Fax: 713 918 8000
Outside United States and Canada
Telephone (01) 713 918 8800 Fax (01) 713 918 8000
If you have comments or suggestions about this documentation, contact Information Design and Development by email at
[email protected].
Support website
You can obtain technical support from BMC Software 24 hours a day, 7 days a week at
http://www.bmc.com/support. From this website, you can:
■ Read overviews about support services and programs that BMC Software offers.
■ Find the most current information about BMC Software products.
■ Search a database for problems similar to yours and possible solutions.
■ Order or download product documentation.
■ Report a problem or ask a question.
■ Subscribe to receive email notices when new product versions are released.
■ Find worldwide BMC Software support center locations and contact information, including email addresses, fax
numbers, and telephone numbers.
Chapter 2 Reconciling data 65
Overview of reconciliation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Identify activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Merge activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Reconciliation console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Namespaces and reconciliation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Reconciliation IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Reconciliation in a server group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Glossary 151
Index 161
BMC Atrium Core documentation
This section describes the complete set of BMC Atrium Core documentation,
including manuals, help systems, videos, and so on.
Unless otherwise noted, documentation is available free of charge on the BMC
Atrium Core documentation media (DVD or Electronic Product Download
bundle) and on the BMC Customer Support site, at http://www.bmc.com/support.
To find this documentation on the BMC Customer Support site, choose Product
Documentation > Supported Product A-Z List > BMC Atrium CMDB Enterprise
Manager > 7.6.04.
1 Normalizing data
The process of normalization ensures that product names and categorization
are consistent across different datasets and from different data providers.
The following topics are provided:
■ Overview of normalization (page 14)
■ Preparing for normalization (page 28)
■ Typical normalization setup (page 30)
■ Optional normalization tasks (page 49)
Overview of normalization
As part of BMC Atrium CMDB, the Normalization Engine provides a centralized,
customizable, and uniform way to overcome data consistency problems.
When multiple sources provide data to the BMC Atrium Configuration
Management Database (BMC Atrium CMDB) product, data consistency problems
such as the following can occur:
■ Inconsistent categories and naming
■ Duplicate configuration items (CIs)
The Normalization Engine normalizes the following attributes for hardware and
software products:
■ Product categorization attributes: Category, Type, and Item (CTI)
■ ManufacturerName
■ Model (the product name, applicable to software and hardware)
■ MarketVersion (applicable to software)
■ VersionNumber (applicable to software)
■ PatchNumber (applicable to software)
■ DictionaryID (the Product Instance ID that points to the Product Catalog entry)
The Normalization Engine includes Normalization Features that you can enable
on individual datasets. Some features allow you to define rules with conditions,
actions to perform, and the classes to normalize.
NOTE
The DML Based Instance Normalization feature is enabled for all datasets by
default and cannot be configured for individual datasets. This feature normalizes
the Model, ManufacturerName, Categorization, VersionNumber, PatchNumber,
Type, and Item attributes based on a corresponding Product Catalog entry.
■ If the Product Catalog entry for Calbro Financial Advisor Pro 1.0 is not
approved, the Normalization Engine normalizes the CI and updates the
CI’s NormalizationStatus attribute to Normalized and Unapproved.
[Figure 1-1: Data providers (BMC Configuration Discovery, BMC Foundation Discovery and BMC Topology Discovery, and other data providers) feed import and sandbox datasets in BMC Atrium CMDB. The Normalization Engine, BMC Atrium Product Catalog, and Reconciliation Engine process these datasets into the production dataset, which is consumed by BMC Asset Management, BMC Atrium Integration Engine, federated data, and other consumers.]
Normalization process
The Normalization Engine normalizes non-normalized and modified CIs by
evaluating the CI using the Product Catalog.
Normalization works in the following ways:
■ Normalizes all CIs that have a status of Not Normalized
■ Incrementally normalizes CIs that have been modified after normalization or
after a normalization job is interrupted and resumed
All modes normalize the CIs in the datasets under the following conditions:
■ The CIs are not normalized.
■ The normalization configuration has changed.
To evaluate CIs against the Product Catalog, the Normalization Engine uses the
following lookup methods:
■ Signature ID—Stored in the Product Catalog and created by discovery tools for
identifying products. For more information about generating the Signature ID,
see BMC Atrium Core 7.6.04 Product Catalog and DML Guide. This lookup option
is available only with the Normalization API. For more information, see BMC
Atrium CMDB 7.5.00 Developer’s Reference Guide.
■ File name and size—The name and size of the file. This lookup option is available
only with the Normalization API. For more information, see BMC Atrium CMDB
7.5.00 Developer’s Reference Guide.
■ Product attributes—The Model, ManufacturerName, and VersionNumber
attributes of the CI.
Figure 1-2 shows the process of normalizing CIs in more detail.
[Figure 1-2: Flowchart of the normalization process. If the class is not set up for normalization, the Normalization Engine updates the NormalizationStatus attribute and exits. Otherwise, it proceeds through the catalog and alias lookups and, on a match, gets the CTI values and replaces them in the CI.]
Step 1 The Normalization Engine checks that the CI class is configured for normalization.
Step 2 The Normalization Engine checks the NE:ProductNameAlias form for aliases for
the ManufacturerName and Model attributes.
Step 3 The Normalization Engine searches the Product Catalog for a product that
matches the CI.
If the Product Catalog returns multiple matches, the Normalization Engine
rejects the CI and reports an error.
Step 4 If the Normalization Engine finds a Product Catalog entry, it normalizes the CI,
including all of the Normalization Features enabled for the dataset.
Step 5 If the Normalization Engine finds no Product Catalog entry, it checks the Product
Catalog Alias Mapping form for aliases.
Step 6 If it finds aliases, the Normalization Engine applies them to the CI.
a Using the alias values for product and manufacturer, the Normalization Engine
searches the Product Catalog for a matching product.
b If the Normalization Engine finds no Product Catalog entry, it updates the CI’s
NormalizationStatus attribute to Normalization Failed and ends the
normalization process for the CI.
c If the Normalization Engine finds a Product Catalog entry, it normalizes the CI,
including all of the Normalization Features enabled for the dataset.
■ The Normalization Engine updates the CI with the Category, Type, and Item
attribute values from the Product Catalog entry.
■ The Normalization Engine updates the NormalizationStatus attribute to
Normalized and Approved or Normalized and Unapproved, and then ends
the normalization process for the CI.
Step 7 If the Normalization Engine finds no aliases in the Product Catalog Alias Mapping
form, it checks if the Allow new Product Catalog entry option is enabled.
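Steps 1 through 7 can be summarized as a decision flow. The following Python sketch is illustrative only (not the Normalization Engine's actual code or API); all names are invented, and approval handling is simplified:

```python
def normalize_ci(ci, configured_classes, catalog, name_aliases, cti_aliases,
                 allow_new_entry=False):
    """Hypothetical sketch of the per-CI flow in Steps 1-7."""
    # Step 1: the CI's class must be configured for normalization
    if ci["ClassId"] not in configured_classes:
        ci["NormalizationStatus"] = "Not Applicable for Normalization"
        return ci

    # Step 2: replace Model/ManufacturerName with any product name aliases
    model = name_aliases.get(ci["Model"], ci["Model"])
    manufacturer = name_aliases.get(ci["ManufacturerName"],
                                    ci["ManufacturerName"])

    # Step 3: search the Product Catalog; multiple matches are an error
    entries = catalog.get((model, manufacturer), [])
    if len(entries) > 1:
        ci["NormalizationStatus"] = "Normalization Failed"
        return ci

    # Step 4: a unique entry normalizes the CI
    if entries:
        ci.update(entries[0])
        ci["NormalizationStatus"] = "Normalized and Approved"
        return ci

    # Steps 5-6: no entry, so try the categorization alias mapping
    alias = cti_aliases.get((model, manufacturer))
    if alias:
        entries = catalog.get(alias, [])
        if entries:
            ci.update(entries[0])
            ci["NormalizationStatus"] = "Normalized and Approved"
        else:
            ci["NormalizationStatus"] = "Normalization Failed"
        return ci

    # Step 7: no aliases either; create an entry only if the option allows it
    if allow_new_entry:
        catalog[(model, manufacturer)] = [{"Category": ci.get("Category")}]
        ci["NormalizationStatus"] = "Normalized and Unapproved"
    else:
        ci["NormalizationStatus"] = "Normalization Failed"
    return ci
```

The real engine also distinguishes approved from unapproved catalog entries (see "Normalization status" later in this chapter); the sketch collapses that into a single success status for brevity.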
Step 1 Define the rules for rolling up versions to the specified Market Version. (See
“Configuring Version Rollup rules” on page 55.)
Step 2 Create suites and the rules for identifying CIs as suites or components. (See
“Configuring Suite Rollup rules” on page 59.)
Step 3 For each dataset, enable the Version Rollup and Suite Rollup features. (See
“Configuring datasets” on page 31.)
NOTE
The Normalization Engine allows you only to enable or disable the normalization
of impact relationships based on the best-practice impact normalization rules that
BMC provides. You cannot create additional best-practice impact normalization
rules.
The impact relationship rules are framed using the following guidelines:
■ The rules are defined at the relationship class level. The rules defined for the
class are applied to every instance of the relationship class that you create.
■ The rules are defined in BMC Atrium CMDB depending upon the source class,
destination class, and the relationship class itself.
For example, for the BMC_HostedSystemComponents relationship class,
BMC_ComputerSystem source class, and BMC_DiskDrive destination class, a
rule is defined to set the HasImpact attribute to Yes and the ImpactDirection
attribute to Destination-Source. This rule indicates that a system becomes
unavailable when a hard drive fails.
■ The rules apply to all datasets in BMC Atrium CMDB. You cannot
configure a separate rule for each dataset.
■ When rules are set for a relationship class at both the parent and child class levels,
the child class rule overrides the parent class rule.
■ Apart from the regular attributes that are automatically set for a relationship
based on the relationship, source, and destination classes, you can specify an
additional qualification string. The qualifier determines whether the
impact attributes should be applied to the CI that is being processed.
■ The rules eliminate duplicate impact relationships by merging impact
relationships between any two given endpoints. If the Merge process finds any
matching impact relationships when the rule is applied to a relationship
instance, the impact relationships are merged. The rules merge those instances
of the BMC_BaseRelationship class that have the same values for the Name,
HasImpact, ImpactPropagationModel, ImpactDirection, and ImpactWeight
attributes.
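The deduplication described in the last guideline can be sketched as grouping by the five compared attributes. This is a simplified illustration in Python, not the Merge process itself; the attribute names come from the guide, but the merge logic (keeping the first instance per matching key) is an assumption:

```python
def merge_impact_relationships(relationships):
    """Illustrative dedup: merge relationship instances that share the
    attribute values the best-practice rules compare."""
    merged = {}
    for rel in relationships:
        key = (rel["Name"], rel["HasImpact"], rel["ImpactPropagationModel"],
               rel["ImpactDirection"], rel["ImpactWeight"])
        # keep the first instance seen for each matching key
        merged.setdefault(key, rel)
    return list(merged.values())
```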
If you update the values of the HasImpact, ImpactDirection, ImpactWeight, and
ImpactPropagationModel impact attributes of a relationship instance and then
run the normalization job, the Normalization Engine retains the modified values
for these attributes by default. However, you can configure the Normalization
Engine to reset the values of these impact attributes based on the best-practice
impact normalization rules, each time you run a normalization job. For
information about configuring the Auto Impact Manual Edit setting, see
“Configuring datasets” on page 31.
WARNING
Although you can disable the Relation Name Normalization feature in the
Configuration Editor, BMC recommends that you do not disable the normalization
of relationships based on the best-practice relationship rules that are shipped with
the product. Other BMC products that consume the configuration data in BMC
Atrium CMDB reference the best-practice relationship rules, and disabling the
Relation Name Normalization feature can result in errors in these products.
The best-practice rules for relationship classes are framed using the following
guidelines:
■ The rules can modify only the Name attribute for the relationship classes.
■ The rules are defined in BMC Atrium CMDB depending upon the source and
destination classes for a relationship.
For example, a rule can be defined to set the Name attribute of the relationship
instance with the value ContainedDomain if the relationship class is
BMC_Component, the source class is BMC_AdminDomain, and the destination class is
also BMC_AdminDomain.
■ The rules apply to all datasets in BMC Atrium CMDB. To define a rule
for a specific dataset, include the datasetID attribute in the rule qualification.
■ When rules are set for a relationship class at both the parent and child class levels,
the child class rule overrides the parent class rule.
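Because each rule is keyed by the relationship, source, and destination classes, the lookup behaves like a simple table. A minimal Python sketch (hypothetical; the real rules live in BMC Atrium CMDB forms, not a dictionary):

```python
def best_practice_name(rules, relationship_class, source_class, destination_class):
    """Return the Name value a best-practice rule would set for this
    class triple, or None when no rule matches."""
    return rules.get((relationship_class, source_class, destination_class))

# The ContainedDomain example from the guide, expressed as one table entry
rules = {
    ("BMC_Component", "BMC_AdminDomain", "BMC_AdminDomain"): "ContainedDomain",
}
```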
Step 1 Define the rules for setting the row-level permissions. (See “Creating rules to set
row-level permissions” on page 53.)
Step 2 For each dataset, enable the row-level security feature. (See “Configuring datasets”
on page 31.)
In addition to the CMDB Data View and CMDB Data Change roles, users must also
have row-level access to instances. Each class has two attributes that specify users
with read and write access to the class instances.
■ CMDBRowLevelSecurity—Users who are members of a group with row-level
access have permission to view the instance if they also have the CMDB Data
View or CMDB Data Change role.
■ CMDBWriteSecurity—Users who are members of a group with write access
have permission to modify the instance if they also have row-level access and
the CMDB Data Viewer role. This permission is useful for giving someone write
access to a specific instance without giving write access to all instances with one
of the CMDB Data Change roles.
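The read check described above combines two conditions: a qualifying role and membership in a group listed in the instance's CMDBRowLevelSecurity attribute. The following Python sketch is a simplified model of that logic (the actual evaluation happens inside BMC Remedy AR System, not in user code):

```python
def can_view(user_roles, user_groups, row_level_groups):
    """Sketch: a user can view an instance when they hold CMDB Data View
    or CMDB Data Change AND belong to a row-level access group."""
    has_role = bool({"CMDB Data View", "CMDB Data Change"} & set(user_roles))
    has_row_access = bool(set(user_groups) & set(row_level_groups))
    return has_role and has_row_access
```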
You can define groups for the following permissions:
■ View—Members of these groups and roles can view the attribute in the class
form, but cannot modify its value.
■ Change—Members of these groups and roles can view and modify the attribute
value.
For more information about permissions, see BMC Atrium CMDB 7.6.04
Administrator's Guide.
Normalization status
Each CI has a NormalizationStatus attribute to track the CI’s stages of
normalization. Table 1-1 describes the possible values for NormalizationStatus.
Table 1-1: NormalizationStatus values

■ Not Applicable for Normalization—The CI is not normalized but did not fail
because, for some classes, normalization is not applicable. The Normalization
Engine assigns this status to CIs whose classes are not configured for
normalization. If needed, you can configure the classes for normalization or
remove them from it. This status value is set when a CI is created or modified.
■ Normalization Failed—The CI is not normalized because no Product Catalog
entry is found for the CI.
■ Normalized and Approved—The CI is normalized and approved because one
of the following is true:
■ The CI matched a unique entry in the Product Catalog, and the matching
product is approved.
■ The CI did not match a Product Catalog entry, and the Normalization Engine
created a new entry for it.
■ Normalized but Not Approved—The CI is normalized but not approved
because the CI matched a Product Catalog entry that is not approved.
■ Not Normalized—The default status of an instance is always Not Normalized.
■ Modified After Last Normalization—The CI is normalized but has changed
since it was normalized:
■ The CI has been normalized, but at least one attribute that can be normalized
has been modified.
■ If the CI has been normalized, the status is changed to Modified After Last
Normalization after the CI is reconciled.
If the dataset is set to inline normalization, you might not see this status for a
CI. This status appears in the cases of continuous and batch normalization.
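As a quick reference, the status values in Table 1-1 can be modeled as an enumeration. The enum member names below are my own shorthand; only the string values come from the guide:

```python
from enum import Enum

class NormalizationStatus(Enum):
    """The six NormalizationStatus values from Table 1-1."""
    NOT_APPLICABLE = "Not Applicable for Normalization"
    FAILED = "Normalization Failed"
    APPROVED = "Normalized and Approved"
    NOT_APPROVED = "Normalized but Not Approved"
    NOT_NORMALIZED = "Not Normalized"  # the default for every new instance
    MODIFIED = "Modified After Last Normalization"
```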
Inline and continuous modes can take much longer to normalize an initial CI
load because these modes process each CI as it is written, or just after it is written,
to a dataset. Batch mode normalizes all of the CIs at one time. You can also schedule
batch normalization to occur when users are least affected.
Step 2 Create aliases for product and manufacturer names so that the product can be
found in the Product Catalog.
Step 3 Set the Normalization type option to CTI Only to normalize only product
categorization.
Step 4 In the BMC Atrium Product Catalog, create a Product Catalog Mapping Alias so
that, if a product is not in the Product Catalog, an entry can be created.
Step 5 From the Catalog Mapping window, create product catalog mapping entries so
that CIs are updated with mapped categorization.
[Figure: Calbro’s data providers and datasets—BMC Configuration Discovery provides data for servers, desktops, laptops, and mobile devices to BMC.IMPORT.CONFIG; BMC Atrium Discovery and Dependency Mapping provides data for network and non-e-commerce applications to CALBRO.IMPORT.TOPO and CALBRO.DISC; an export of Calbro-developed applications from IT (via AIE) populates CALBRO.APPS.]
Calbro’s staff has analyzed the company’s needs and answered the normalization
questions to design its normalization processes.
Table 1-2: Calbro’s normalization analysis

■ Must more than one dataset be reconciled to the BMC.ASSET production
database? Yes—Calbro has four different datasets that must be set up for
normalization before they can be reconciled and added to BMC.ASSET.
■ Should discovered products that do not exist in the Product Catalog be added
automatically? Yes—Any instances in CALBRO.APPS should be used to create
Product Catalog entries if they do not exist. CALBRO.APPS contains a list of all
the applications that Calbro has developed. Calbro trusts this data because they
created it manually and consistently.
■ Should data be normalized as it is written to the datasets (inline), or as it is
written to BMC Atrium CMDB (continuous)?
Inline—CALBRO.APPS is a list of Calbro applications that is updated with new
versions and patches. Because this data source has a high volume of changes,
Calbro sets this dataset to inline normalization so that, if needed, new Product
Catalog entries can be created from these instances.
Continuous—Calbro initially normalizes CALBRO.DISC with a batch job and
then sets it to continuous normalization after 20 modifications or creations.
Because of the importance of maintaining their e-commerce services, Calbro
wants updated and new instances to be normalized and reconciled as quickly as
possible without a significant impact on other users and processes.
Batch—Because CALBRO.IMPORT.TOPO and CALBRO.IMPORT.CONFIG
have a large number of changes, Calbro schedules normalization of these
instances during off hours.
■ Do any datasets not require normalization? No—CALBRO.APPS contains a list
of all the applications that Calbro has developed. Calbro trusts this data because
they created it manually and consistently. Calbro uses this dataset to create
Product Catalog entries.
Step 1 Create Product Catalog entries for the class instances to be normalized.
Step 2 Approve the Product Catalog entries to define the Definitive Media Library (DML)
and Definitive Hardware Library (DHL).
For more information about these steps, see the BMC Atrium Core 7.6.04 Product
Catalog and DML Guide.
Step 3 From the Product Catalog Console, use the NE:ProductNameAlias form to create
product and manufacturer aliases.
In BMC Atrium Product Catalog, you can create an alias for the Model or
ManufacturerName attribute for a discovered or imported CI so that the
Normalization Engine can find the product in the Product Catalog and normalize
the CI. In the normalization process, the Normalization Engine always checks for
a Model or ManufacturerName alias. If the CI has a Model or ManufacturerName
alias, the Normalization Engine replaces the CI’s Model or ManufacturerName
attribute value with the alias and searches for an entry in the Product Catalog.
For more information, see BMC Atrium Core 7.6.04 Product Catalog and DML Guide.
Step 4 For software license management, set the Market Version field for each product.
For more information, see BMC Atrium Core 7.6.04 Product Catalog and DML Guide.
You can create these aliases from the Catalog Mapping window in the Normalization
console. For more information, see “Mapping product categorization aliases” on
page 41.
Each step represents a procedure. In the sections that relate to this process, the
graphic is repeated, and the related step is highlighted.
Best Practice
In a server group environment, you do not need to duplicate normalization
configurations for primary and secondary servers, which use the same database.
The normalization configurations are saved to the database. The Normalization
Engine runs on every server in the group but the normalization jobs are scheduled
only on the primary server.
To avoid possible errors, do not run a batch normalization job simultaneously on
the same dataset on the primary and secondary servers.
To set up normalization
Step 1 Configure each dataset for normalization. See “Configuring datasets” on page 31.
Step 2 Add any custom classes that you have created. See “Configuring classes for
normalization” on page 36.
Step 3 Simulate normalization for selected datasets, and fix any errors. See “Simulating
normalization” on page 37.
Step 4 Select the mode (inline, continuous, or batch) for when to normalize datasets. See
“Normalization modes” on page 42.
Step 5 View normalization job history and details. See “Monitoring normalization jobs”
on page 48.
Configuring datasets
A dataset is a collection of CIs and relationships, and each data provider has its
own dataset. You can configure the default normalization settings that are applied
as datasets are created.
You also have the option of applying these defaults to datasets that are already
configured.
These settings include the option of full or incremental normalization. All of the
other default settings can also be defined for each dataset.
You can configure normalization in the following ways:
■ Configure default normalization of CIs across all datasets.
These settings are the default normalization settings for all datasets, but these
settings do not override the individual dataset configurations unless you select
Apply Default Settings to all Datasets.
■ Customize normalization for each dataset.
The dataset normalization settings override the system defaults.
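The precedence between system defaults and per-dataset settings can be sketched as a simple merge. This is a hypothetical Python model; the real settings are stored in Normalization Engine forms, and the names here are invented:

```python
def effective_settings(system_defaults, dataset_overrides,
                       apply_defaults_to_all=False):
    """Sketch: dataset settings win over system defaults, unless the
    administrator pushes the defaults out to all datasets."""
    if apply_defaults_to_all:
        return dict(system_defaults)
    merged = dict(system_defaults)
    merged.update(dataset_overrides)  # per-dataset values take precedence
    return merged
```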
NOTE
By default, all datasets have Normalization Type set to CTI Only and the DML
Based Instance Normalization, Relation Name Normalization, and Impact
Normalization features enabled. You must create a job to normalize the datasets.
To normalize CIs when they are created, enable inline or continuous normalization
for a dataset. Also, create a batch job to normalize the CIs that were created while
the normalization plug-in was stopped, such as during upgrades.
In the Calbro example, the system configuration is set first to the most common
values:
■ Both product names and categories are normalized.
■ Products are normalized as the datasets are updated (inline mode).
■ Products with errors are not updated.
■ Discovered products that are not found in the Product Catalog are not created.
Then, the Calbro staff configures the individual datasets that need to be configured
differently from the system configuration. For example, for CFS.APPS, any new
products found there are created in the Product Catalog.
Table 1-4 describes the settings that you can configure for normalization, either for
individual datasets or as a system default.
Table 1-4: Dataset and system normalization settings

Normalization Type
■ CTI Only—(Default) Only the categories for a CI are normalized; the CI Model
and ManufacturerName attributes are not. For more information about using
CTI Only, see “BMC Remedy ITSM and category normalization” on page 25.
If this option is selected, you can map the categorization of the provider’s data
to the Product Catalog categorization. For more information, see “Exporting
and importing normalization configurations” on page 52. If there is no
mapping, the Normalization Engine tries to create an entry with the categories
that the data provider supplies.
■ Name & CTI Lookup—The product name, attributes, and categories are
normalized.
■ Disabled—(System Configuration only) Normalization is turned off.

Inline Normalization
■ Deselected—(Default) Normalize data in the continuous mode (if enabled) or,
if defined, as scheduled. For more information, see “Configuring inline
normalization” on page 45.
■ Selected—Normalize data as it is created or modified. For more information,
see “Normalization modes” on page 42.

Allow Unapproved CI
■ Selected—(Default) If the product is unapproved in the Product Catalog, the
Normalization Engine normalizes the CI and sets its NormalizationStatus
attribute to Normalized and Not Approved.
■ Deselected—If the product is unapproved, the Normalization Engine does not
normalize the CI or, if the Allow new Product Catalog entry option is enabled,
does not create a Product Catalog entry.
NOTE
The DML Based Instance Normalization feature is enabled for all datasets by
default and cannot be configured for individual datasets. This feature normalizes
the Model, ManufacturerName, Categorization, VersionNumber, PatchNumber,
Type, and Item attributes based on a corresponding Product Catalog entry.
3 To enable or disable the Normalization Features, click the ... button in the
Normalization Features column.
4 Click Save.
You select the classes to normalize. By default, only a subset of class attributes from
the Product Catalog can be used for normalization.
In the Mechanism List, the methods for normalizing the class attributes are
automatically selected.
■ DML—Uses the Product Catalog interfaces to find normalized attributes.
■ Alias Lookup—Uses the CI name and dataset ID to check the alias lookup table.
NOTE
You cannot set the class normalization for a particular normalization job or dataset
because the settings are global.
NOTE
You can configure normalization for custom classes that use
BMC.CORE:BMC_BaseElement or one of its child classes as its superclass. However,
you cannot select custom attributes for normalization: you must use the standard
normalization attributes. BMC Atrium CMDB must have completed
synchronization for the custom class, and the class status must be Active.
TIP
You might need to scroll to the bottom of the table to see the New Class
Configuration Area.
3 In the New Class Configuration area, select from the Class Name list.
4 Click Save.
Simulating normalization
You can use the Normalization Simulation utility to preview normalization for a
specific dataset and class.
You can use this utility to improve the number of CIs that are normalized and the
quality of data created in the Product Catalog.
■ For CIs that fail to normalize, verify and correct their Model and
ManufacturerName values.
■ For new Product Catalog entries, verify that they contain the desired data.
After you execute the utility, you can see the following information:
■ The total number of CIs for the specified dataset and class
■ The number of CIs that failed normalization
These are CIs that would have a NormalizationStatus value of Normalization
Failed. In the output file, failed CIs can have the following statuses:
■ Failed (Manufacturer is null)—The Manufacturer attribute is null, and the
Normalization Engine could not find a Product Catalog entry.
■ Failed (Model is null)—The Model attribute is null, and the Normalization
Engine could not find a Product Catalog entry.
■ Failed (No CTI values)—The Normalization Engine cannot create the
product because the Category, Type, and Item attributes have no values.
■ The number of CIs that normalized successfully
Depending on the dataset’s normalization settings, these are CIs that would
have a NormalizationStatus value of Normalized and Approved or
Normalized and Unapproved.
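The three failure statuses in the simulation output follow directly from which attributes are missing. A small Python sketch of that classification (illustrative only; the utility's real checks and output format are not documented here):

```python
def classify_failure(ci):
    """Return the simulation failure status for a CI that did not match
    a Product Catalog entry, or None if none of the checks apply."""
    if not ci.get("ManufacturerName"):
        return "Failed (Manufacturer is null)"
    if not ci.get("Model"):
        return "Failed (Model is null)"
    if not (ci.get("Category") or ci.get("Type") or ci.get("Item")):
        return "Failed (No CTI values)"
    return None
```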
Setting up aliases
After simulating normalization, you can reduce failures by creating product and
manufacturer aliases or mapping product categorizations.
NOTE
To create aliases for the Model or ManufacturerName attributes, use the
Normalization Product Name Alias form in the Product Catalog, even though the
Catalog Mapping window allows you to map aliases in the Product Name and
Manufacturer Name fields. The Normalization Engine always checks the
Normalization Product Name Alias form in the first step in the normalization
process.
Unlike with the product and manufacturer aliases, the Normalization Engine does
not always check for product categorization aliases. Instead, it checks the
categorization aliases under any of the following conditions:
■ The combination of the values of Product Category, Product Type, Product
Item, Product Name, and Manufacturer/Vendor is not in the Product Catalog
data.
■ The combination of values is in the Product Catalog but is not related to the
company for whom the CI is being submitted.
For example, if you specify Desktop in the Category field in the Discovery Product
Categorization area and Hardware in the Category field in the Mapped Product
Categorization area, any incoming CI that has a value of Desktop for the Category
attribute is saved as Hardware for Category.
Second, you should also create product categorization aliases for CIs where the
Model or ManufacturerName attribute has no value. For more information, see
“Null values for product and manufacturer” on page 23.
Third, when the Allow new Product Catalog entry option is enabled, the
Normalization Engine uses the product categorization aliases to create a Product
Catalog entry when it does not find one for the CI.
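The Desktop-to-Hardware example above is a straight substitution on the categorization tuple. A minimal Python sketch, assuming the mapping entries behave like a lookup table (the Type and Item values in the test are invented for illustration):

```python
def apply_cti_alias(ci, categorization_aliases):
    """Sketch of mapped product categorization: replace a discovered
    Category/Type/Item combination with its mapped values when an
    alias entry exists."""
    key = (ci.get("Category"), ci.get("Type"), ci.get("Item"))
    mapped = categorization_aliases.get(key)
    if mapped:
        ci["Category"], ci["Type"], ci["Item"] = mapped
    return ci
```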
Normalization modes
You can normalize using these different modes: inline, continuous, and batch.
■ Inline (real time)—CIs are normalized any time that they are created or
modified in BMC Atrium CMDB. In this mode, CIs are normalized before they
are saved in BMC Atrium CMDB. If a CI cannot be normalized, you can define
how to handle inline errors:
■ Reject and do not save the CI to BMC Atrium CMDB.
■ Save the CI to BMC Atrium CMDB but flag it as not normalized.
For more information, see “Configuring inline normalization” on page 45.
■ Continuous—In this mode, CIs are normalized after they are saved in BMC
Atrium CMDB, based on changes to the CIs, not to the dataset. When CIs are
added or changed, BMC Atrium CMDB notifies the Normalization Engine,
which then checks and normalizes the modified CIs.
You must configure both of the following conditions for continuous
normalization, which starts when either condition is met:
■ When a specified number of creation and modification events occurs
■ After a specified interval of time
For more information, see “Configuring continuous normalization” on page 45.
■ Batch (scheduled)—In this mode, CIs are normalized after they are saved in
BMC Atrium CMDB, based on a schedule for a dataset, unlike the continuous
mode, which is based on individual CIs. For more information, see “Initial CI
loading with batch normalization” on page 24.
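The continuous-mode trigger described above reduces to an either/or check on two administrator-defined thresholds. A minimal Python sketch (hypothetical names; the engine's internal scheduling is not documented here):

```python
def continuous_trigger_met(pending_events, seconds_since_last_run,
                           event_threshold, interval_seconds):
    """Sketch: a continuous normalization run starts as soon as either
    the event-count threshold or the time interval is reached."""
    return (pending_events >= event_threshold
            or seconds_since_last_run >= interval_seconds)
```

Using Calbro's later example (start after 10 changed CIs or five minutes): `continuous_trigger_met(3, 60, 10, 300)` would not start a run, while reaching either threshold would.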
IMPORTANT
When normalizing a large amount of data, use the batch mode, and schedule it to
run outside of heavy use hours to minimize the impact on users. You can use the
continuous mode for dataset updates.
Do not run more than one BMC Atrium Integration Engine, Normalization Engine,
or Reconciliation Engine job at the same time because they might query or update
the same data.
Figure 1-6 on page 44 shows the difference between inline and continuous
normalization. With inline normalization, CIs are normalized before they are
written to BMC Atrium CMDB datasets. With continuous normalization, CIs are
written to a dataset before they are normalized.
In this example, Calbro uses inline normalization on CALBRO.APPS because it is
not frequently updated. Normalizing CIs one at a time would have minimal
performance impact on users. Calbro uses continuous normalization on the
CALBRO.DISC dataset for specific reasons. First, Calbro completed a bulk
normalization with a batch job. Second, because the discovery tool typically adds
or changes few CIs in the dataset, Calbro sets this to continuous mode. Calbro staff
also sets normalization to start when 10 CIs are changed or created or when five
minutes have elapsed since the previous normalization.
(Figure 1-6: Inline normalization of the CALBRO.APPS dataset, populated by an
AIE exchange of Calbro applications, and continuous normalization of the
CALBRO.DISC dataset, populated by Calbro Discovery)
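The two continuous-mode trigger conditions, using the thresholds from the Calbro example (10 changed CIs or 5 minutes elapsed), can be sketched as a simple check. This is only an illustration of the rule; the function and parameter names are hypothetical, not the Normalization Engine's actual implementation:

```python
import time

# Hypothetical thresholds matching the Calbro example: normalize when
# 10 CIs have been created or changed OR 5 minutes have elapsed since
# the previous normalization.
EVENT_THRESHOLD = 10
INTERVAL_SECONDS = 5 * 60

def should_normalize(pending_events, last_run_time, now=None):
    """Return True when either continuous-mode condition is met."""
    now = time.time() if now is None else now
    return (pending_events >= EVENT_THRESHOLD or
            now - last_run_time >= INTERVAL_SECONDS)

# Only 3 CIs changed and 1 minute elapsed: no normalization yet.
print(should_normalize(3, last_run_time=0, now=60))    # False
# 10 CIs changed: normalize even though only 1 minute elapsed.
print(should_normalize(10, last_run_time=0, now=60))   # True
# 2 CIs changed but 5 minutes elapsed: normalize anyway.
print(should_normalize(2, last_run_time=0, now=300))   # True
```

Because the two conditions are combined with OR, whichever threshold is reached first starts normalization.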
Use the following procedure to select inline normalization, which does not require
creating a normalization job. Normalization for the selected dataset starts when
instances are created or modified.
NOTE
The inline normalization mode always takes precedence over the continuous mode
or a schedule.
5 Click Save.
6 In the Normalization console, select the continuous job, and click Start Job.
When started, a continuous job continues to run until you stop the job.
5 Click Save.
After you create a batch job, you do not need to start the job. It runs automatically
at the scheduled time. However, when you need a job to run immediately, you can
select the job in the Normalization console and click Start Job.
The History tab displays the job run information for batch and continuous
normalization.
You can also display jobs with a specific status or jobs that executed within a range
of dates.
! Name—The name assigned to the job when it was created. You can click the
name of a job to view all of its past and current executions. For each job run, you
can view its details, including processed CIs and classes, logging information,
and the reason for an aborted run.
! Status—One of the following states:
! Started—The job is currently in progress.
! Completed—The job finished successfully.
! Failed—The job could not finish and error messages were logged.
! Start time—Job start time.
! End time—Job end time.
You can expand each job history entry to view its details.
! Job Run ID
! Status
! Start time
! End time
! Is full run
! Reason for abort
! Number of classes processed
! Total number of classes
! Total number of CIs processed
! Current class being processed
! Total number of CIs processed in current class
! Total number of CIs in current class
! Log file name
! AR RPC Queue—RPC queue for BMC Remedy AR System API calls back to BMC
Remedy AR System. Select from any of the possible values: 390600,
390621-390634, 390636-390669, and 390680-390694. The default is 0.
! CMDB RPC Queue—RPC queue for BMC Atrium CMDB API calls back to the
BMC Remedy AR System server. The possible values are 390698 or 390699. The
default is 0.
! Threads in Batch Pool—Maximum number of threads in the thread pool for
batch jobs. The Normalization Engine checks whether batch jobs need to be
started. If so, a new thread is spawned from the batch normalization thread
pool for normalization. If a batch job is normalizing a dataset and a new job
requests to run the batch for that dataset again, that request is aborted. The
default is 4.
! Threads in Continuous Pool—Maximum number of threads in the thread pool
for continuous mode. The continuous threads check whether continuous
normalization needs to be started. If so, a new thread is spawned from the
continuous normalization thread pool. When the threads in the pool have been
exhausted, the Normalization Engine queues any further requests to wait for an
available thread. The default is 4.
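The two pool behaviors described above differ in how they handle contention: the batch pool aborts a duplicate request for a dataset that is already being normalized, while the continuous pool queues requests when all threads are busy. The following sketch illustrates that difference only; the class and method names are hypothetical, not the Normalization Engine's internal API:

```python
# Illustrative sketch of the two thread-pool behaviors (hypothetical
# classes, not the Normalization Engine's actual implementation).

class BatchPool:
    """A batch request for a dataset already being normalized is aborted."""
    def __init__(self, max_threads=4):       # default pool size is 4
        self.max_threads = max_threads
        self.running = set()                 # dataset IDs in progress

    def submit(self, dataset_id):
        if dataset_id in self.running:
            return "aborted"                 # duplicate request is aborted
        self.running.add(dataset_id)         # a pool thread picks this up
        return "started"

class ContinuousPool:
    """Requests queue and wait for a thread when the pool is exhausted."""
    def __init__(self, max_threads=4):       # default pool size is 4
        self.max_threads = max_threads
        self.active = 0
        self.queue = []

    def submit(self, request):
        if self.active < self.max_threads:
            self.active += 1
            return "started"
        self.queue.append(request)           # waits for an available thread
        return "queued"

pool = BatchPool()
print(pool.submit("CALBRO.DISC"))   # started
print(pool.submit("CALBRO.DISC"))   # aborted
```

The design difference follows from the modes themselves: rerunning a batch over a dataset that is mid-normalization would duplicate work, whereas continuous requests represent new CI changes that must eventually be processed.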
Configuring logging
The Normalization Engine provides logging information in different areas.
All types have different contexts and can occur simultaneously. For example, BMC
Atrium CMDB has three datasets configured for normalization. Dataset A is
normalized once a week as a batch job, so its logging information is saved to
NE.datasetA.nnn.log. Dataset B is normalized in continuous mode, and its
logging data is saved to necont.log.
! Batch normalization jobs—The log file is automatically generated as
AtriumCore_install\cmdb\server\logs\neJob.dataSetId.jobID.log. It
captures information for batch and inline normalization jobs. The job ID is
available in the Normalization console. By default, the logging level for
normalization is set to Information.
! Continuous normalization—The log file
AtriumCore_install\cmdb\server\logs\neContinuous.log is
automatically generated to capture information for continuous normalization.
By default, the logging level for continuous normalization is set to Warning.
! To configure logging
1 In the Normalization console, click Edit Configuration.
2 In the System Configuration tab, for each type of logging (Batch, API, and
Continuous), modify the following settings:
! Log file location—Defines the path on the server where the log file is saved.
! Maximum size for the log file—When the log file reaches the file size limit, it
is renamed and a new log file is created. The older files are not automatically
deleted. For example, if the log file is named neJob.BMC.ASSET.0001.log, the
older files are named neJob.BMC.ASSET.0001.log.1 and
neJob.BMC.ASSET.0001.log.2.
! Log level—Defines what type of information is saved in the log file. For more
information about the levels, see Table 1-5 on page 51.
NOTE
Do not save the file using Microsoft Notepad because it does not save the carriage
returns and line feeds (CR+LF) properly. Use a plain text editor that retains
CR+LF. Otherwise, importing the definitions fails.
! To export settings
1 In the Normalization console, click Export Configuration.
2 In Select Export Options, click options to exclude or include in the XML.
The XML is automatically generated.
3 Click Save to copy the XML configuration information to your system clipboard.
4 Open or create a file in a text editor, and paste the copied XML data.
! To import settings
1 From a saved file or from the Export Normalization Configuration dialog box,
copy the XML configuration data.
2 In the Normalization console, click Import Configuration.
3 In XML input for Import, paste the XML data.
4 Click Import.
5 When the import message appears, click Close.
3 In the Row Level Security Rule dialog box, configure the following parameters.
! Active—Click to activate the rule so that the Normalization Engine can apply
the rule on datasets.
! Class Type—Select whether the rule applies to CIs or Relationships. For
relationships, click Details to create the required qualifications.
In the Relationship Details, configure the following parameters.
! Source Class—Select the parent class in the relationship.
! Destination Class—Select the child class in the relationship.
! Qualifier—Use BMC Remedy AR System qualifications to define the CIs in
the source or destination class that the rule applies to.
! Class Name—Select the class to create a rule for.
! Rule Name—Enter a descriptive name for the rule.
! Precedence—Enter a value to determine the rule’s execution order. The value
can be 0 to 1000, inclusive, with higher numbers determining a higher priority.
If more than one rule applies to an instance, the Normalization Engine applies
the rules sequentially from the highest precedence value to the lowest.
4 In the Row Permission table, for each group, select the View and Change check
boxes to enable or disable read and write permissions.
5 Click OK.
The rule is available for normalization.
Although the Normalization Engine uses the Market Version field from the
Product Catalog to update the CI’s MarketVersion attribute, the Version Rollup
rules are useful for several reasons:
! If a CI has no corresponding Product Catalog entry, then the Market Version
value is used if the Allow new Product Catalog entry option is enabled. For
example, instead of creating a product with the version 5.2.013.1, the
Normalization Engine uses 5.2 as defined in a rule.
! If a CI has a corresponding Product Catalog entry, the product might not have
a Market Version value.
For example, Calbro Services has multiple service packs and versions for Microsoft
Excel: 11.0.5614.0, 11.0.6355.0, 11.0.7969.0, and 11.0.8173.0. However, Calbro
Services wants to track all of these products with a MarketVersion of 2003. To do
this, Andy Admin creates the following rule in the Version Rollup tab of the
Configuration Editor:
Name="Microsoft Excel" AND ManufacturerName="Microsoft Corporation" AND
VersionNumber LIKE "11.0.%"
By default, the Version Rollup feature is not enabled for datasets. If you enable the
Version Normalization feature for an individual dataset, all active Version Rollup
rules are checked. You cannot select particular rules to apply to individual
datasets. The MarketVersion value for a Product Catalog entry takes precedence
over Version Rollup rules.
The Normalization Engine has Version Rollup rules for Microsoft, Oracle, and
Adobe products. You must create rules for other products. Also, if the
Normalization Engine finds no specific rule for a product, it uses a default rule to
set the MarketVersion to use the VersionNumber value.
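The rule-selection behavior described above, where the highest-precedence matching rule wins and an unmatched product falls back to the default rule that copies VersionNumber into MarketVersion, can be sketched as follows. This is an illustration under assumed data shapes, not the Normalization Engine's actual code:

```python
# Illustrative sketch of Version Rollup rule selection (hypothetical
# data structures, not the actual engine implementation).
def roll_up_version(ci, rules):
    """Try active rules from highest to lowest precedence; if none
    matches, the default rule sets MarketVersion to VersionNumber."""
    for rule in sorted(rules, key=lambda r: r["precedence"], reverse=True):
        if rule["active"] and rule["matches"](ci):
            return rule["market_version"]
    return ci["VersionNumber"]              # default rule

# Rule matching the Calbro example: Excel 11.0.x rolls up to 2003.
excel_rule = {
    "active": True,
    "precedence": 100,
    "market_version": "2003",
    "matches": lambda ci: (ci["Name"] == "Microsoft Excel" and
                           ci["VersionNumber"].startswith("11.0.")),
}

ci = {"Name": "Microsoft Excel", "VersionNumber": "11.0.6355.0"}
print(roll_up_version(ci, [excel_rule]))    # 2003
```

A product with no matching rule, for example version 1.2.3 of an unlisted application, would simply keep 1.2.3 as its MarketVersion.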
3 In the New Rule for Version Rollup dialog box, configure the following
parameters.
! Active—Select Yes to activate the rule so that the Normalization Engine applies
the rule on datasets.
! Rule Name—Define a descriptive name for the rule.
! Class Name—Select the name of the class for which the qualification will return
instances.
! Precedence—Enter a value to determine the rule’s execution order. The value
can be 0 to 1000, inclusive, with higher numbers determining a higher priority.
If more than one rule applies to an instance, the Normalization Engine applies
the rules sequentially from the highest precedence value to the lowest.
! Qualifier—Type your qualification or click Qualification Builder to build the
qualification interactively.
For example, Andy Admin specifies the following qualification to create a
Version Rollup rule for instances of Microsoft Excel version 11.0.xx:
('Name' = "Microsoft Excel" AND 'ManufacturerName' = "Microsoft
Corporation") AND ('VersionNumber' LIKE "11.0.%")
! Manufacturer Name—Specify a string to apply to the ManufacturerName
attribute.
! Product Name —Specify a string to apply to the Model attribute.
4 Click OK.
NOTE
Only rules that are active are executed.
4 Click OK.
5 In the Configuration Editor, click Save.
! To assign Version Rollup rules a higher priority than the Product Catalog
1 In the AtriumCore_install\cmdb\plugins\ne\ directory on the BMC Atrium
Core server, open the com_bmc_ne_feature_VersionRollupFeature file in a text
editor.
2 Change the RulePriority property to True.
3 Save the file.
4 Restart the BMC Remedy AR System server.
The MarketVersion value in the Version Rollup rules takes precedence over the
value in the Product Catalog.
NOTE
You cannot define composite suites that contain other suites.
You can use Suite Rollup with batch and continuous normalization jobs; it does not
work with inline normalization.
NOTE
If you enable the continuous normalization mode and then create a new Suite
Rollup rule, that rule is applied only to new or updated product CIs related to the
systems that are pushed to BMC Atrium CMDB. To apply the new Suite Rollup
rule to the previously normalized data, you must rerun the continuous
normalization job with the Normalize All Instances option selected.
By default, the Suite Rollup feature is not enabled for datasets. If you enable the
Suite Rollup Normalization Feature option for an individual dataset, the
Normalization Engine checks all active Suite Rollup rules. You cannot select
particular rules to apply to individual datasets.
NOTE
Suite Rollup rules do not run if the MarketVersion, Product, and Manufacturer
attributes are not normalized.
When you select products as part of a suite, the suite rule applies only if all the
required components are found with a relationship to the same system. If they are
not, the rule does not apply.
For example, Calbro Services must manage its licenses for a suite of graphic and
web tools called Cre8ive Design, which includes the following products:
! Cre8 HTML
! Cre8 Studio
! Cre8 Photo
! Anim8 Studio
Andy Admin first creates the Product Catalog entries for these products, then
creates a company entry for Cre8ive Solutions in the Product Catalog, and finally
creates a new suite in the Suite Rollup tab:
! Suite Name: Cre8ive Design
! Manufacturer: Cre8ive Solutions
! Market Version: 2010
! Tier 1: Software
! Product List: Cre8 HTML, Cre8 Studio, Cre8 Photo (all required)
! Precedence: 50
! Active: Yes
When the datasets are normalized with the Suite Rollup feature enabled,
installations of the Cre8ive Design suite are identified, and Calbro Services can
accurately track their software licenses. The rule is not applied if any one of the
products defined for the Cre8ive Design suite does not exist.
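The all-or-nothing check described above, where a suite rule applies only if every required component is related to the same system, can be sketched as a simple set test. This is an illustration only; the function name is hypothetical and the actual matching is done by the Suite Rollup feature against relationship instances:

```python
# Illustrative sketch of the "all required components on one system"
# check (hypothetical function, not the Suite Rollup implementation).
def suite_applies(required_products, products_on_system):
    """The suite rule applies only if every required product has a
    relationship to the same computer system."""
    return set(required_products).issubset(set(products_on_system))

required = ["Cre8 HTML", "Cre8 Studio", "Cre8 Photo"]

# All three required products are installed: the rule applies.
print(suite_applies(required,
                    ["Cre8 HTML", "Cre8 Studio",
                     "Cre8 Photo", "Anim8 Studio"]))   # True

# Cre8 Photo is missing: the rule does not apply.
print(suite_applies(required, ["Cre8 HTML", "Cre8 Studio"]))  # False
```

Note that optional components such as Anim8 Studio do not affect the result; only the required list must be fully present.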
Best Practices
! If you have created the Suite Rollup rules, you can enable the Version Rollup
Normalization and Suite Rollup Normalization features and create one batch
job to normalize the CI instances. However, if you have not yet created the Suite
Rollup rules, you can create one batch job to normalize the Name,
ManufacturerName, and MarketVersion attributes and to apply the Suite
Rollup rules but you need to run the batch job twice. During the first run, the job
normalizes the instances; during the second run, the job applies the Suite Rollup
rules. Before you run the batch job for the second time, you must enable the
Normalize all Instances option.
! BMC recommends that you create and enable the Suite Rollup rules based on
the Product Catalog data before you enable the continuous normalization mode.
3 In the Suite Information screen, define the suite to create in the Product Catalog,
and click Next.
! Suite Name—Enter the name of the suite to create in the Product Catalog.
! Manufacturer—Select from a list of manufacturers that are defined in the
Product Catalog. If needed, you can create a new company in the Product
Catalog. For more information, see BMC Atrium Core 7.6.04 Product Catalog and
DML Guide.
! Market Version—Select from a list of market versions that are defined in the
Product Catalog. If needed, you can create a market version by typing the value.
! Model/Version—Enter the VersionNumber attribute value for the suite.
! Tier 1 (Category)—Enter the Category attribute value for the suite.
4 In the Product List screen, select the products to include in the suite, and click
Next.
a Search for products by the following methods. You can include the % wildcard.
! by product name—Click Product, and enter a value for Name.
! by manufacturer and version—Click Product, and enter values for
Manufacturer and Version.
! by products in existing suites—Click Suite, and enter a suite name for Search.
NOTE
Searches are case sensitive.
b To add a product to the suite, select it from the Product Catalog list, and click
the > button.
c To designate a required product in the suite, in the Product List, click the Reqd
check box.
5 In the Suite Summary screen, review the suite definition, set the following
options, and click Next.
! Active—Click to activate the rule so that the Normalization Engine applies the
rule on datasets.
! Precedence—Enter a value to determine the rule’s execution order. The value
can be 0 to 1000, inclusive, with higher numbers determining a higher priority.
If more than one rule applies to an instance, the Normalization Engine applies
the rules sequentially from the highest precedence value to the lowest.
The Create New Suite Wizard adds a new suite in the Product Catalog with the
defined Suite Name, Manufacturer, Version, and Product List information. It also
creates an NE Suite Rule with the given Suite ID, Active, and Precedence values.
6 In the Finish screen, review for errors.
! If the wizard successfully creates the new suite, click Finish.
! If the wizard encountered errors, click Back to review and make changes.
The new suite appears in the Current Suites list of the Suite Rollup tab.
NOTE
Only rules that are active are executed.
4 Click OK.
5 In the Configuration Editor, click Save.
For example, Calbro Services creates a Suite Rollup rule in which the following
products are defined as required components of the Cre8ive Design suite:
! Cre8 HTML
! Cre8 Studio
! Cre8 Photo
If you uninstall Cre8 HTML from a system, the discovery tool sets the
ProductType attribute for that Cre8 HTML product CI instance to Standard
Product. When the discovered data is loaded into BMC Atrium CMDB, the Suite
Rollup feature sets the ProductType attribute of the Cre8ive Design suite CI
instances and the Cre8 Studio and Cre8 Photo product CI instances to Standard
Product.
Identifying suites that are upgraded on a system
If you upgrade a suite on a system to a different version or edition, you must:
! create a new suite rule for the upgraded version or edition of the suite
! set the precedence of the suite rule for the upgraded version or edition higher
than the precedence of the suite rule for the original version or edition
When the discovery tool loads the system with the upgraded suite components
into BMC Atrium CMDB, the Suite Rollup feature initially applies the suite rules
that have a higher precedence. The Normalization Engine creates a new suite CI
for the upgraded version or edition and marks the component products as part of
the upgraded suite. When the suite rule for the original version is applied, the
value of the ProductType attribute of that suite CI is set to Standard Product.
For example, Calbro Services upgrades Microsoft Office 2007 Standard Edition on
the systems in the production environment to Microsoft Office 2007 Professional
Edition. Andy Admin creates a new suite rule for Microsoft Office 2007
Professional Edition and adds the Microsoft Outlook product as a required
component along with the Microsoft Word, Microsoft Excel, and Microsoft
PowerPoint products. Andy Admin also sets the precedence for the Microsoft
Office 2007 Professional Edition rule higher than that of the Microsoft Office 2007
Standard Edition rule.
When the system CIs on which the upgraded Microsoft Office 2007 Professional
suite is installed are loaded into BMC Atrium CMDB, the new suite rule marks the
Microsoft Outlook, Microsoft Word, Microsoft Excel, and Microsoft PowerPoint as
components of the Microsoft Office 2007 Professional Edition and creates a new
Microsoft Office 2007 Professional Edition suite CI. The Normalization Engine also
sets the value of the ProductType attribute of the Microsoft Office 2007 Standard
Edition suite CI to Standard Product.
2 Reconciling data
Overview of reconciliation
When multiple data providers load data into multiple datasets of BMC Atrium
CMDB, you need a reconciliation process to enable you to compare data from
different data sources and to create one complete and correct production dataset.
The Reconciliation Engine is a component of BMC Atrium CMDB that performs
the following important reconciliation activities:
! Identifies class instances that are the same entity in two or more datasets
! Merges CI attributes from a dataset to a production dataset
The reconciliation job is a container for reconciliation activities, which themselves
can have different components. The primary activities are Identify and Merge. A
reconciliation job can have one or more activities, each of which defines one or
more datasets and rules for that activity. In addition, you can use a Qualification
Set to restrict the instances participating in a reconciliation activity.
You can start a reconciliation job in several ways: manually, on a schedule, as a
continuous job, through the BMC Atrium API, or from a Run Process workflow
action.
Jobs can use standard or customized rules. Standard rules use defaults for Identify
and Merge activities and automate the creation of reconciliation jobs. You can also
create custom jobs that include all of the different activities and for which you can
modify the default settings.
NOTE
BMC recommends reconciling only CIs that have been normalized or that do not
require normalization. To reconcile the appropriate CIs, enable the Process
normalized CIs only option in the Job Editor.
A reconciliation job can include activities other than Identify and Merge. For more
information, see “Additional reconciliation activities” on page 119.
Identify activity
Before you can merge different versions of instances, you must determine that they
represent the same entity. Identification accomplishes this matching by applying
rules against instances of the same class in two or more datasets.
For example, a rule intended to identify computer system instances might specify
that the IP addresses of both instances be equal. When the rules find a match, both
instances are tagged with the same reconciliation identity, an extra attribute
showing that they each represent the same item in their respective datasets.
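The matching step described above can be sketched as follows, using the IP-address rule from the example: instances of the same class in two datasets that satisfy the rule are tagged with the same reconciliation identity. This is an illustration only; the function and attribute names are hypothetical, and the Reconciliation Engine actually evaluates AR System qualifications, not Python:

```python
import itertools

# Hypothetical sketch of the Identify activity (not the Reconciliation
# Engine's implementation). IDs start at 1 because 0 means unidentified.
_next_id = itertools.count(1)

def identify(source_cis, production_cis):
    """Tag computer systems in both datasets that match on IP address
    (the example rule) with the same reconciliation identity."""
    for src in source_cis:
        for prod in production_cis:
            if src["ip"] == prod["ip"]:
                # Reuse the production instance's identity if it has one;
                # otherwise generate a new shared identity.
                recon_id = prod.get("recon_id") or next(_next_id)
                src["recon_id"] = prod["recon_id"] = recon_id

discovered = [{"ip": "10.0.0.5"}]
asset = [{"ip": "10.0.0.5"}]
identify(discovered, asset)
print(discovered[0]["recon_id"] == asset[0]["recon_id"])  # True
```

An instance whose IP matches nothing in the other dataset receives no identity and would be a candidate for manual identification.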
You can also manually identify instances that failed to be identified by the rules in
an Identify activity.
NOTE
An instance must be identified before it can be compared or merged.
Merge activity
Merging takes two or more datasets and creates a composite dataset according to
precedence values specified at the dataset, class, and attribute levels.
Merging is essential for producing a single valid configuration when different
discovery applications provide overlapping data about the same items, or when
you need to commit changes that were made in your sandbox dataset as a test. To
take advantage of the areas of strength in each dataset, you create precedence values
that favor those strengths. This gives you one CI instance with the best of all
discovered data.
You give an overall precedence value to each dataset, but you can override that
value for particular classes and attributes in each dataset. Whichever dataset has
the highest precedence value for a given attribute has its value for that attribute
placed in the target dataset. A precedence value for a class also applies to its
subclasses unless the subclasses have their own precedence values.
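The precedence resolution described above can be sketched as follows: each dataset carries an overall precedence value, per-attribute overrides can replace it, and for every attribute the value from the highest-precedence source wins. This is a minimal illustration under assumed data shapes; the tuple layout and sample values are hypothetical, not the Reconciliation Engine's actual code:

```python
# Illustrative sketch of attribute-level precedence merging
# (hypothetical data structures, not the actual engine code).
def merge(instances):
    """instances: list of (dataset_precedence, attribute_overrides, values)
    tuples. Returns one composite CI where, for each attribute, the value
    from the highest-precedence source is used."""
    merged = {}
    attrs = {a for _, _, values in instances for a in values}
    for attr in attrs:
        # Only datasets that actually carry this attribute compete.
        candidates = [inst for inst in instances if attr in inst[2]]
        # An attribute override replaces the dataset's overall precedence.
        best = max(candidates, key=lambda inst: inst[1].get(attr, inst[0]))
        merged[attr] = best[2][attr]
    return merged

# Discovery has lower overall precedence (400) than the asset dataset
# (600), but its SerialNumber is overridden to 900, so it wins there.
discovery = (400, {"SerialNumber": 900},
             {"Name": "web01", "SerialNumber": "SN-1"})
asset = (600, {}, {"Name": "Web Server 01"})

result = merge([discovery, asset])
print(result["Name"])          # Web Server 01
print(result["SerialNumber"])  # SN-1
```

This mirrors the idea of favoring each dataset's areas of strength: the asset dataset wins the name, while the discovery dataset, trusted for hardware details, wins the serial number.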
When an instance is added or updated in a dataset, BMC Atrium CMDB sets the
ReconciliationMergeStatus attribute of that instance to Ready to Merge. The
Merge activity considers only those instances that have been given an identity and
for which the ReconciliationMergeStatus attribute is set to Ready to Merge.
After merging a CI, the Reconciliation Engine updates the value of the
ReconciliationMergeStatus attribute from Ready To Merge to Merge Done.
Datasets
Creating datasets in BMC Atrium CMDB to store data provided by different data
providers is the first step when importing data.
A dataset is a logical grouping of data. It can represent data from a particular
source, a snapshot from a particular date, or other purpose.
Make sure that each data provider has its own import dataset.
You should also note what dataset is your production, or golden, dataset so that you
can plan your normalization and reconciliation jobs. By default, the BMC.ASSET
dataset is the production dataset. In reconciliation, the production dataset is used
in different ways. First, it is used as a master dataset to identify duplicate CIs,
matching attributes for the CI in the production dataset with the CIs in the
imported datasets. Second, it can be the target dataset in a merge activity so that
the CIs are updated to keep the production dataset current and accurate. Also, do
not normalize the production dataset because you should normalize CIs before
identifying and merging them.
In cases where you need to merge more than one dataset at a time, you might want
to create an intermediate dataset for merging. For better performance and to
minimize impact on users of the production dataset, BMC recommends that you
merge one import or discovered dataset at a time with the production dataset. You
might want to merge multiple source datasets in separate jobs to an intermediate
dataset and then merge the intermediate dataset with the production dataset.
For more information about datasets, see BMC Atrium Core 7.6.04 Concepts and
Planning Guide. For more information about creating datasets, see “Creating
datasets” on page 105.
Reconciliation console
You can manage the reconciliation of data between datasets from the
Reconciliation console in the BMC Atrium Core Console.
From this console, you can create, view, modify, and delete jobs, rulesets, and
other reconciliation definitions.
Table 2-1 provides instructions for the tasks that you can perform related to
reconciliation.
Table 2-1: Reconciliation tasks
Task Instruction
Create a standard Click Create Standard Identification & Merge Job.
reconciliation job. For detailed instructions, see “Creating a standard
reconciliation job” on page 74.
Create a customized Click Create Job.
reconciliation job. For detailed instructions, see “Creating and editing a
customized reconciliation job” on page 89.
WARNING
Inheritance of reconciliation definitions by a subclass happens only within the
namespaces specified for the definition. This can be important if you use BMC
Atrium Core with other BMC Software products that extend the CDM.
For example, in a Precedence set, you might set an attribute precedence on the
MarkAsDeleted attribute of BMC.CORE:BMC_BaseElement because it is the base
class from which all others inherit, so you can change the precedence value of
MarkAsDeleted for all classes with one definition. If you define this attribute
precedence for the BMC.CORE namespace, it does not apply to subclasses of
BMC_BaseElement that were created by other BMC Software products with a
different namespace, and Merge activities that use this precedence set will have
unpredictable results.
Reconciliation IDs
The Identify activity can review all incoming data, determine similar CIs across
more than one dataset, and mark where these dataset instances represent the same CI.
The Reconciliation Engine marks CI and relationship instances with Reconciliation
IDs that are unique to individual CIs within a dataset.
This initial marking step is critical to enabling the compare and Merge activities
without causing conflicts from overlapping data that could potentially corrupt
your CMDB with unreliable CI data. After identification has occurred, you can
move to the next step and determine how to interpret and combine this data.
IMPORTANT
Do not change the Reconciliation ID value for unidentified or identified CIs or
relationships. The Reconciliation Engine looks for a value of 0 for unidentified CIs.
Changing this value to NULL or some other character causes identification to fail.
You can use the following settings to modify the Reconciliation ID:
! Generate IDs (in Additional Parameters)—When the source dataset does not
already have an identity, you can assign an automatically generated identity to
instances in the dataset in the Identification set. See “Creating an Identify
activity” on page 98.
! Generate IDs (in Dataset Configuration)—If this option is enabled, the
Reconciliation Engine searches for CIs in the production dataset that have a
Reconciliation ID of 0 and sets the ID to a nonzero value that is unique across all
datasets. See “Creating an Identify activity” on page 98.
NOTE
If you do not want jobs automatically changing servers in a server group, remove
or comment the arrecond.exe entry from armonitor.cfg, and do not configure
the Reconciliation Engine entry for the server in the AR System Group Operation
Ranking form.
The BMC Remedy AR System server transfers control from one server to another
depending on the Rank value specified in the AR System Group Operation
Ranking form. The Reconciliation Engine takes three iterations of 60 seconds (180
seconds) to transfer a job from one server to another so that the first server properly
pauses the running job(s) and the second server resumes the job(s). This process
avoids running the same job on both servers simultaneously.
For more information about installing BMC Atrium Core in a server group, see
BMC Atrium Core 7.6.04 Installation Guide.
For more information about BMC Remedy AR System and server groups, see BMC
Remedy Action Request System 7.6.04 Configuration Guide.
3 Reconciliation jobs
NOTE
Do not populate the TokenID attribute unless you know the formulas that BMC
discovery products use to populate it. Some BMC discovery applications use
TokenID.
NOTE
By default, a standard job identifies and merges CIs that have not been normalized.
To reconcile only CIs that have been normalized, enable the Process normalized
CIs only option in the Job Editor.
Best practices
! To get the most use out of discovered data, reconcile it into your production
dataset immediately after your discovery application loads data into BMC
Atrium CMDB.
! Do not create jobs that simultaneously identify the same dataset or merge data
into the same target dataset. Simultaneous reconciliation can either overwrite
data you wanted to keep from the other job or create duplicate CI instances.
If you need to create multiple jobs to merge data into the same production
dataset, use the Execute activity to run the jobs sequentially or set the jobs as
continuous and run them in parallel with the Look Into Other Datasets for
Parallel Continuous Jobs option selected. For information about configuring
Look Into Other Datasets for Parallel Continuous Jobs option, see “Configuring
Reconciliation Engine system parameters” on page 143.
! Do not run more than one BMC Atrium Integration Engine, Normalization
Engine, or Reconciliation Engine job at the same time because they might query
or update the same data.
! For a large amount of data, such as an initial load, run separate identify and
merge jobs to allow for better diagnostics.
! For incremental updates, run identify and merge activities in one job. For new
CIs, the identify and merge activities are run. For modified CIs, the identify
activity runs quickly because the CIs have reconciliation IDs, and the merge
activity runs.
! Consider indexing attributes used in identification rules. Consult your DBA to
determine what indexes would help you.
Before you begin
Verify the default identification and merge settings for a standard job. See
“Standard identification and merge rules” on page 146.
Table 3-2 describes the default settings for merging in the standard job.
Table 3-2: Default merge settings
! Status—Defines whether the Merge activity can execute (Active or Inactive).
Default value: Active.
! Continue on Error—Specifies whether the Merge activity continues to run if an
error occurs in this activity. Default value: No.
! Precedence Association—Defines the precedence values for classes and
attributes. Default value: Generated.
! Include Unchanged CIs—Defines whether to perform an incremental merge on
attributes. Default value: No.
! Yes—Merges all attributes even if their value has not changed in the source
dataset since the last time this Merge activity ran.
! No—Merges only the attributes that changed value in the source dataset
since the last run and instances that were created in the source dataset since
the last run. To improve performance, select this option.
! Target Dataset—Specifies the dataset into which data is updated from the
source dataset in a Merge activity. Default value: BMC.ASSET.
Note: The target dataset is typically the same as the production dataset in an
Identify activity.
! Dataset—Specifies the dataset from which data is used to update the target
dataset in a Merge activity. Default value: Defined by the user during job
creation.
! Qualification Set—Specifies the qualifications that restrict the instances
participating in a reconciliation activity. An instance that meets one or more of
the qualifications in the set participates in an activity where that set is used.
Default value: Empty.
IMPORTANT
Do not delete this job because other BMC applications that add reconciliation jobs
during installation use the BMC Default Continuous job.
The purpose of the BMC Default Continuous job is to reconcile a small but critical
set of data that needs to be updated frequently. For large updates or initial loads,
create a scheduled job to minimize the impact on users.
The activities in the BMC Default Continuous job run sequentially, not in
parallel.
NOTE
When you start a job using a BMC Atrium CMDB API program, the Last Activity
Time is neither read nor updated. This means that you cannot use this method to
perform an incremental merge. Also, all Qualification Sets defined within the job
are ignored, even if you did not dynamically specify qualifications.
For example, to start a job named Merge Datasets, use the following command:
Application-Command Reconciliation Trigger-Job -o "Merge Datasets"
You can enter multiple class-qualification pairs and dataset pairs. For example, if
the Merge Datasets job is defined to merge the BMC Configuration Import dataset
into the BMC Asset dataset but you want to run it using the TestSource and
TestTarget datasets, respectively, you would use the following command:
Application-Command Reconciliation Trigger-Job -o "Merge Datasets" -l "-w
TestSource -d BMC Configuration Import";"-w TestTarget -d BMC Asset"
NOTE
When you start a job using a Run Process action, the Last Activity Time is neither
read nor updated. This means that you cannot use this method to perform an
incremental merge. Also, all Qualification Sets defined within the job are ignored,
even if you did not dynamically specify qualifications.
Substituting datasets
Substituting datasets works for any reconciliation activity type, and for any dataset
specified in the activity. You specify pairs of dataset IDs, where one represents the
defined dataset that is saved in the activities in the job and the other represents the
working dataset to use in place of the defined dataset during this run. You can
specify as many dataset pairs as you want for a job run.
For example, you have a job that includes an Identify activity identifying Dataset
1 against Dataset 2 and Dataset 3, and a Merge activity that merges Dataset 1 and
Dataset 2 into Dataset 3. On certain occasions, you want to use the Identification
rules and Precedence sets defined in these activities to identify and merge source
datasets 4 and 5 into the same target, or you want to merge the original sources into
a different target. Figure 3-1 on page 84 illustrates these scenarios.
Best practice
Consider using this feature when working with overlay datasets. For example, you
can use it to test the reconciliation of several different test states, merging from a
different overlay source dataset into a different overlay target dataset for each job
run.
Figure 3-1: Dataset substitution scenarios

Defined job
Identification activity: Dataset 1, Dataset 2, Dataset 3
Merge activity: source datasets Dataset 1, Dataset 2; target dataset Dataset 3

Scenario 1: Called with dataset pairs Dataset 1 -> Dataset 4 and
Dataset 2 -> Dataset 5 (defined -> working). Runs with:
Identification activity: Dataset 4, Dataset 5, Dataset 3
Merge activity: source datasets Dataset 4, Dataset 5; target dataset Dataset 3

Scenario 2: Called with dataset pair Dataset 3 -> Dataset 6. Runs with:
Identification activity: Dataset 1, Dataset 2, Dataset 6
Merge activity: source datasets Dataset 1, Dataset 2; target dataset Dataset 6
NOTE
If you use dynamic dataset substitution on a job containing a Merge activity, the
dataset ID stored in the AttributeDataSourceList attribute is that of the defined
dataset, not the working dataset. For more information about
AttributeDataSourceList, see “Merging datasets” on page 109.
WARNING
Any dataset pair you supply when executing a job must be valid for every activity
in the job. If you supply a pair with a defined dataset that is not used in one or more
activities, the entire job run fails.
If you have jobs that contain several different datasets, consider breaking them up
into multiple jobs to avoid the requirement that a defined dataset must exist in
every activity. When you need to use dynamic dataset substitution, you can then
call the jobs separately and pass appropriate dataset pairs. When you do not need
this flexibility, schedule an umbrella job that calls each piece with an Execute
activity.
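The substitution and validation behavior described above can be sketched as follows. This is an illustrative Python model, not BMC code; the function name and data shapes are assumptions made for the example.

```python
def run_with_substitution(activities, dataset_pairs):
    # activities: list of dataset-ID lists, one list per activity in the job.
    # dataset_pairs: dict of defined dataset ID -> working dataset ID.
    # A defined dataset in any supplied pair must be used in every activity;
    # otherwise the entire job run fails.
    for defined in dataset_pairs:
        for datasets in activities:
            if defined not in datasets:
                raise ValueError(
                    "defined dataset %r is not used in every activity" % defined)
    # Substitute the working dataset for the defined dataset in each activity.
    return [[dataset_pairs.get(d, d) for d in datasets] for datasets in activities]
```

For example, substituting Dataset 4 for Dataset 1 and Dataset 5 for Dataset 2 changes what every activity runs against, while a pair whose defined dataset is missing from any activity fails the whole run.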
For instructions for using this feature with workflow, see “Executing workflow
against compared instances” on page 128. For instructions for using this feature
with an API program, see the Developer’s Reference Guide.
Substituting qualifications
When you substitute a qualification, it replaces all Qualification Sets used in the
job. This allows you to run a job against a different subset of data each time. You
specify each substitute qualification for a particular class, and can specify as many
as you want for a job run.
Best practice
Consider using this feature when you’ve created or modified a small number of
instances in a provider dataset. After creating or modifying the data, you can run
your usual reconciliation job that identifies and merges the dataset, but substitute
qualifications that restrict the job to only the data you just worked with.
For example, you have a job that identifies and merges all active CIs in two
datasets, then copies some of that data to a third dataset. You’ve just discovered
several new computer systems and printers, or perhaps just computer systems,
and want to reconcile them the same way. Figure 3-2 on page 85 illustrates
qualification substitution for both scenarios.
Figure 3-2: Qualification substitution scenarios

Defined job
Identification activity: 'MarkAsDeleted' = $NULL$
Merge activity: 'MarkAsDeleted' = $NULL$
Copy activity: 'AccountID' = "Acme"

Scenario 1 (computer systems and printers): each activity (Identification,
Merge, Copy) runs with:
('ClassId' = "BMC_ComputerSystem" AND 'CreateDate' > ($TIMESTAMP$ - 86400))
OR ('ClassId' = "BMC_Printer" AND 'CreateDate' > ($TIMESTAMP$ - 86400))

Scenario 2 (computer systems only): each activity runs with:
'ClassId' = "BMC_ComputerSystem" AND 'CreateDate' > ($TIMESTAMP$ - 86400)
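Qualification substitution can be modeled as replacing every Qualification Set in the job with the combined substitute qualifications. This Python sketch is illustrative only; the OR-combination of per-class qualifications mirrors the scenarios above, and the function name is an assumption.

```python
def substitute_qualifications(activity_names, class_qualifications):
    # class_qualifications: one substitute qualification string per class,
    # supplied at run time. All Qualification Sets defined in the job are
    # ignored; each activity runs with the OR-combination of the substitutes.
    combined = " OR ".join("(%s)" % q for q in class_qualifications)
    return {name: combined for name in activity_names}
```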
When the job is paused Where Reconciliation Engine resumes the job
In Purge The Reconciliation Engine starts the job with the Purge activity
as if a new job is started. It completes the Purge, Identification,
and Merge activities.
In Identification The Reconciliation Engine restarts identification from the
beginning. If you added new records while the job was paused,
all of those records are identified and merged. The
Reconciliation Engine does not perform the Purge activity.
In Merge After merging a CI, the Reconciliation Engine updates the
value of the ReconciliationMergeStatus attribute from
Ready To Merge to Merge Done. The Reconciliation Engine
only considers the CIs for which the
ReconciliationMergeStatus attribute value was Ready
to Merge when the job was paused. It does not perform Purge
and Identification activities.
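The resume behavior for a job paused during the Merge activity can be sketched as follows. This is an illustrative Python model of the documented status handling, not BMC code; the instance representation is an assumption.

```python
def resume_merge(instances):
    # When a job paused during Merge resumes, only CIs whose
    # ReconciliationMergeStatus was Ready to Merge at pause time are merged;
    # after each CI is merged, its status is updated to Merge Done.
    merged = []
    for ci in instances:
        if ci["ReconciliationMergeStatus"] == "Ready to Merge":
            ci["ReconciliationMergeStatus"] = "Merge Done"
            merged.append(ci["name"])
    return merged
```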
! Aborted—The Reconciliation Engine process was stopped while the job was
running.
! Successful—The job finished successfully.
! Warning—The job finished, but some activities were not successful.
! Error—The job could not finish and error messages were logged.
! Paused—The job has been paused and can be resumed.
When you select a job, the following progress information is displayed:
! Total Instances—Total number of items that the current job reconciles.
! Processed Instances—Number of items that the current job has successfully
reconciled.
! Failed Instances—Number of items that the current job could not reconcile.
You can check the Statistics updated on field to verify whether the Reconciliation
Engine is running. If the Reconciliation Engine is not responding, this field
might not be updated, which indicates a problem.
Each job run can also accumulate events, which are listed when you view job run
information. Types of events include:
! Error—Error events generated by an activity contain details about how to solve
the problem. These events can include information about datasets or objects
causing the error.
! Warning—Warning events generated by an activity contain details about
something that might be a problem, for example, the need to manually identify
an object.
! Information—Informational events announce milestones, statistics, or results,
such as the number of records found or created. These events might include
attachments, for example, a Compare activity might create an Information event
with a comparison report attached.
Events include the following information:
! Event Name—Name of event
! Event Type—Information, warning, or an error
! Event Description—Detailed description of the event. For information about
interpreting descriptions, see “Event descriptions” on page 88.
! Attachment—Attached files, like a comparison report
! Timestamp—Time the event occurred
Event descriptions
Each run of an Identify, Merge, Copy, Compare, Delete, or Purge activity creates
an Information event to provide statistical results. Each job run also creates an
event. This section explains how to interpret those results.
Activity statistics
Events pertaining to an activity can contain these headings.
! Number of records found: number—The number of instances found for
reconciliation. This is usually fewer than the total number of instances in the
dataset against which the activity operated, because certain options restrict the
number of instances processed. For example, setting Include Unchanged CIs to
No for a Merge activity or setting Identity Required to Yes for a Copy activity
reduces the number of instances processed by the activity.
For a Merge, Copy, or Compare activity, this number reflects instances in both
the source and target datasets. If an instance exists only in the source dataset
before the activity run, it adds 1 to the number of records found. If it exists with
the same reconciliation ID in both datasets, it adds 2 to this number.
NOTE
The number of records found includes both CI and relationship instances.
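The counting rule above can be sketched in Python. This is an illustrative model of the documented arithmetic (source-only instances add 1; instances present in both datasets add 2), not BMC code.

```python
def number_of_records_found(source_ids, target_ids):
    # Counts instances (CIs and relationships) for a Merge, Copy, or Compare
    # activity: an instance only in the source dataset adds 1; an instance
    # whose reconciliation ID exists in both datasets adds 2.
    target = set(target_ids)
    return sum(2 if rid in target else 1 for rid in source_ids)
```

For example, three source instances of which two also exist in the target yield a count of 5.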
Job statistics
Each job run creates an event with these headings.
! Number of log files created: number—The number of log files written for the
job. When the amount of data logged for a job run exceeds the maximum log file
size you specify, another log file is created. For example, if your maximum log
file size is 20 KB and you run a job that logs 50 KB of data, three log files are
written.
! Log File Path—The absolute path to the directory on the BMC Atrium CMDB
server where the log files were written.
! <First>/<Last> Log File Name—The name of the first or last log file written for
the job run. Files are named for the job with a numerical suffix. For example, the
first run of My Job might write the log files My Job_1.log and My Job_2.log, and
a later run writes My Job_3.log.
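The log-file arithmetic and naming described above can be sketched as follows. This is an illustrative Python model; the function names are assumptions.

```python
import math

def log_file_count(data_logged_kb, max_file_size_kb):
    # A new log file is started whenever the maximum log file size is
    # exceeded, so 50 KB of data at a 20 KB maximum yields three files.
    return max(1, math.ceil(data_logged_kb / max_file_size_kb))

def log_file_names(job_name, first_suffix, count):
    # Files are named for the job with a numerical suffix, e.g. My Job_1.log.
    return ["%s_%d.log" % (job_name, first_suffix + i) for i in range(count)]
```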
! Create small jobs, containing the fewest activities that must always
run together in a given order. This allows you the flexibility of reusing jobs in
different combinations using any of the methods described in “Starting and
stopping a job” on page 79.
! Do not create jobs that simultaneously identify the same dataset or merge data
into the same target dataset. Simultaneous reconciliation can either overwrite
data you wanted to keep from the other job or create duplicate CI instances.
If you need to create multiple jobs to merge data into the same production
dataset, use the Execute activity to run the jobs sequentially or set the jobs as
continuous and run them in parallel with the Look Into Other Datasets for
Parallel Continuous Jobs option selected. For information about configuring the
Look Into Other Datasets for Parallel Continuous Jobs option, see “Configuring
Reconciliation Engine system parameters” on page 143.
! Do not run more than one BMC Atrium Integration Engine, Normalization
Engine, or Reconciliation Engine job at the same time because they might query
or update the same data.
! For a large amount of data, such as an initial load, run separate identify and
merge jobs to allow for better diagnostics.
! For incremental updates, run identify and merge activities in one job. For new
CIs, the identify and merge activities are run. For modified CIs, the identify
activity runs quickly because the CIs have reconciliation IDs, and the merge
activity runs.
! In a standard reconciliation job, the identify and merge activities are run
sequentially. Because of this, the instances start being merged into the target
dataset only after all instances are identified. For a large amount of data, you can
create separate identify and merge jobs and configure the merge job to run in a
continuous mode. When the continuous merge job runs at the specified interval,
all identified instances with the ReconciliationMergeStatus attribute set to
Ready to Merge are merged into the target dataset. This ensures that identified
instances can start being merged into the target dataset while the Identify
activity is still running on the remaining unidentified instances.
4 To use the standard identification and merge rules, select Use standard rules for
participating datasets.
For more information, see “Standard identification and merge settings” on
page 77.
5 To identify and merge only CIs that have been successfully normalized, enable
Process normalized CIs only.
The Reconciliation Engine processes CIs with the following statuses:
! Normalized and Approved
! Not Applicable For Normalization
! Normalized and Not Approved
For more information, see “Normalization status” on page 24.
6 Create activities for the job.
NOTE
Activities cannot be reused between jobs. If you delete a job, all its activities are
also deleted. Also, when you remove an activity from a job, it is deleted and cannot
be used in other jobs.
Each reconciliation job must have at least one activity. Each activity has
components that must be defined.
The following topics are provided:
! Using Qualification Sets (page 94)
! Building a qualification (page 95)
! Identifying data across datasets (page 97)
! Merging datasets (page 109)
9 From the Class list, select the name of the class for which the qualification will
return instances.
10 If you want the qualification to return CIs based on attribute values in related child
CIs, select a class name from the Related Child Class list.
When you select a related child class, the qualification uses the attributes of a child
CI to return a source CI. For example, if your Class Name is BMC_ComputerSystem,
you might specify BMC_DiskDrive in the Related Child Class field. This enables
you to match computer systems based not on their own attributes, but on the size
of their disk drives.
A referring class qualification only matches when the instance of the class in the
Related Child Class field is the child member of a relationship to the instance of the
Class field. It does not match when the Related Child Class instance is the source.
11 In the Qualification field, type your qualification or click Build Qualification to
build it interactively.
For more information, see “Building a qualification” on page 95.
12 Click Done, and then click Save.
Building a qualification
You can use the Qualification Builder to build qualifications interactively instead
of typing them. It is accessible from any window in the Reconciliation console that
has a Qualification field and works similarly to the Advanced Search Bar in BMC
Remedy User.
Qualification conventions
When building qualifications, the easiest way is to select the fields, keywords, and
values from the Qualification Builder. You can also create the qualification
manually. If you choose this option, observe the following conventions.
For more information about keyword definitions, relational operators, and
advanced search bar conventions, see the BMC Remedy Action Request System 7.6.04
Mid Tier Guide.
! Enclose attribute names in single quotation marks.
NOTE
Keywords are case-sensitive. Use only UPPERCASE.
Creating qualifications
You can create a qualification to limit instances that a reconciliation activity
processes.
! To create a qualification
1 From the Activities area in the Job Editor, click New, or select an existing activity
and click Edit Activity.
2 Clear the Use all classes and instances check box in the Qualification area.
3 Click New/Edit Qualification Set.
4 In the Qualifications area, click Add Rule, or select an existing set and click Edit
Rule.
5 From the New/Edit Rule area, click Build Qualification.
6 In the Attribute list, select a class attribute, and click From Current Dataset.
7 To add an operator, place your cursor in the expression, and click the appropriate
operator.
8 To select a value to use, select an attribute from the Attribute list, and click From
Target Dataset.
9 To add a keyword, double-click the appropriate item in the Keyword list.
10 To change the qualification manually, click Allow Manual Edit, and modify the
expression.
Step 1 Create a job to hold the Identify activity, if no job exists. See “Creating and editing
a customized reconciliation job” on page 89.
Step 2 Create an Identify activity which stores the rules and datasets for identification.
See “Creating an Identify activity.”
NOTE
Activities cannot be reused between jobs. If you delete a custom job, all its
activities are also deleted. If you delete a standard job, all its activities and
identification and merge rulesets are also deleted.
Step 3 Create Identification rulesets, each of which has rules that match instances
between two datasets. See “Creating an Identification ruleset” on page 101.
Step 4 Select the datasets to identify. See “Relating datasets and Identification rulesets”
on page 104.
Step 5 Optionally, select or create a Qualification set to define which instances participate
in the activity. See “Using Qualification Sets” on page 94.
Use all classes and Select to run an unqualified Identify activity so that
instances all classes and instances are included.
To restrict the classes and instances, remove the
check.
Qualification Set Select a qualification that restricts the classes and
instances used in the activity.
If Use all classes and instances is cleared,
select a set from the Qualification Set list.
To create a Qualification Set, click New. For more
information, see “Using Qualification Sets” on
page 94.
Production Dataset Select the dataset to use with the Generate IDs
feature.
This setting does not affect the Identify Against
dataset selected in the Identification rules.
Generate IDs For the selected Production Dataset, define how to
handle any instances that have not been identified
(or have a Reconciliation ID of 0).
Checked—Unidentified instances in the selected
Production Dataset are assigned a nonzero
Reconciliation ID.
Unchecked—Unidentified instances in the
selected Production Dataset are not assigned a
nonzero Reconciliation ID. These instances retain a
Reconciliation ID of 0.
Exclude Subclasses Defines whether to use explicit Identification
rulesets for each class.
Yes—Requires that an Identification ruleset be
specified for every class and subclass.
No—Applies the Identification ruleset for a class to
all of its subclasses. You can still specify individual
rulesets for any class.
5 If you did not select Use standard rules for participating datasets for the job, create
an Identification ruleset.
To create a ruleset, see “Creating an Identification ruleset” on page 101.
6 If you selected Use standard rules for participating datasets, you do not need to
create an Identification ruleset.
For more information, see “Standard identification and merge rules” on page 146.
NOTE
In an Identify activity, each participating dataset except the production dataset
must be paired with an Identification ruleset. If the activity involves three or more
datasets, including the master dataset, each dataset's Identification ruleset must
include Identification rules comparing it to all the other datasets. For instance, in
an Identify activity involving datasets A, B, and C where C is the master, the
Identification ruleset paired with dataset A must have a rule that identifies its
instances against B and a rule that identifies its instances against C, and the ruleset
paired with B must have a rule identifying against A and a rule identifying against
C. Without these rules, a job that includes this activity will not run.
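The completeness requirement in the note above can be checked mechanically. This Python sketch is illustrative, not BMC code; the rule representation is an assumption.

```python
def missing_identification_rules(datasets, master, rules):
    # rules: dict mapping each dataset to the set of datasets its
    # Identification ruleset identifies against. Every dataset except the
    # master must have a rule against each other dataset in the activity;
    # otherwise a job that includes the activity will not run.
    missing = []
    for ds in datasets:
        if ds == master:
            continue
        for other in datasets:
            if other != ds and other not in rules.get(ds, set()):
                missing.append((ds, other))
    return missing
```

In the A, B, C example with C as the master, rulesets A:{B, C} and B:{A, C} are complete; dropping A's rule against B would be flagged.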
Best practices
! Use the highest class level possible to take advantage of inheritance and to use
the Identification ruleset with multiple classes.
! Always put the most specific identification rules first in the Identification ruleset
execution order, so the best match is identified first.
! After the production dataset is initially populated, set Generate IDs to No in the
dataset and Identification ruleset entry. This helps prevent duplicate records by
requiring a BMC Atrium CMDB administrator to manually identify new
configuration items.
! You can use multiple Identification rulesets in an Identify activity to identify
multiple classes in one reconciliation job.
! Regularly review your identification rules to make sure they are still
appropriate for your environment and spot check instances to confirm that they
are being identified properly.
! Consider indexing attributes used in identification rules. Consult your DBA to
determine what indexes would help you.
! The standard identification rules use the following guidelines for identifying
physical and virtual servers. If you create identification rules, consider these
attributes to distinguish between physical and virtual servers and to uniquely
identify each. You can use the standard rules as a guide to creating custom
identification rules for servers.
! For physical servers, BMC recommends that data providers populate the
TokenID attribute with a concatenation of the host name and domain name.
! For virtual servers, BMC recommends that data providers set the following
attributes:
Set Name to the name of the virtual machine.
Set isVirtual to Yes.
Set TokenID to prefix:uniqueID, which is a concatenation of the virtual
machine prefix and unique ID.
Typically, virtual machines have a unique identifier, which does not change
when the virtual machine is moved to a new host. For example, the TokenID
for a VMware virtual machine is VI-UUID:123456789.
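The TokenID guidelines above can be sketched as follows. This Python example is illustrative only; the "." separator for physical servers is an assumption, since the guide specifies only a concatenation of host name and domain name.

```python
def physical_token_id(host_name, domain_name, sep="."):
    # BMC recommends populating TokenID for physical servers with a
    # concatenation of host name and domain name; the separator here
    # is an assumption for illustration.
    return host_name + sep + domain_name

def virtual_token_id(vm_prefix, unique_id):
    # For virtual servers, TokenID is prefix:uniqueID, for example
    # VI-UUID:123456789 for a VMware virtual machine.
    return "%s:%s" % (vm_prefix, unique_id)
```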
Before you begin
! Plan how many rules are needed in what order.
! Know what dataset to use as reference in each rule.
! To create an Identification ruleset, clear the Use standard rules for participating
datasets option for the job. Otherwise, your Identify activity uses the standard
rules. See “Standard identification and merge settings” on page 77.
Identify Against Select the dataset where you want to find a match.
This is often the production dataset, such as
BMC.ASSET.
Execution Select or type a number to identify this rule's
position in the ruleset execution order (0 to 1000).
All rules in the ruleset are processed according to
this order.
Namespace Use the same Namespace that you specified for the
Identification ruleset.
Class Use the same class that you specified for the
Identification ruleset.
Qualification Type the criteria that match the class between
datasets, or click Build Qualification to create the
criteria with a tool.
Use dollar signs ($) to enclose attribute names from
the dataset that is paired with this Identification
ruleset in an Identify activity, and use single quotes
(') to enclose attribute names from the dataset in the
Find in Dataset field.
Example: 'IP Address' = $IP Address$
The example qualification identifies instances of
the class when they share the same IP address.
5 Click OK.
6 In the Set Editor, click Save.
! For more information about starting jobs, see “Starting and stopping a job” on
page 79.
! For more information about monitoring jobs, see “Viewing job status, results,
and history” on page 86.
Creating datasets
You can create datasets from the Reconciliation console.
When you create a dataset, you give it both a name and an ID. The naming
convention for dataset IDs is as follows, written in all capital letters:
VENDOR_NAME.PURPOSE[.VENDOR_SPECIFIC_PRODUCT]
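A simple check of the naming convention can be sketched in Python. This is illustrative only; the exact character set allowed in each segment is an assumption, since the guide specifies only the segment structure and all capital letters.

```python
import re

# Convention: VENDOR_NAME.PURPOSE with an optional vendor-specific product
# segment, all capital letters. The [A-Z0-9_] character class is an assumption.
_DATASET_ID = re.compile(r"[A-Z0-9_]+\.[A-Z0-9_]+(\.[A-Z0-9_]+)?$")

def is_valid_dataset_id(dataset_id):
    return _DATASET_ID.match(dataset_id) is not None
```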
NOTE
Use datasets primarily to represent different data providers, but you can use
datasets to represent other types or groupings of data, such as test data, obsolete
data, or data for different companies or organizations for multitenancy.
Best practice
Typically, you should create a regular dataset. Do not create an overlay dataset
for a data provider. For more information about overlay datasets, see the BMC
Atrium Core 7.6.04 Concepts and Planning Guide.
! To create a dataset
1 Create a dataset from different places in the console.
! From the Identify activity, click Add Dataset Identification Group Association,
and then click Create Dataset.
! In the Reconciliation console, click Create Dataset.
2 Complete the following fields.
Client Type List If you selected Writable by client only, type the
client IDs of each BMC Atrium client that can write
to this dataset in the ClientTypeList field.
To allow all clients to write to this dataset, leave
this field blank. If you enter any IDs here, only
those clients can create, modify, or delete instances
in the dataset.
Client IDs are integer values and must be delimited
by semicolons. The allowable client values are the
following IDs:
BMC Impact Publishing Server: 28
BMC Impact Service Model Editor: 29
Reconciliation Engine: 32
Type Select Regular or Overlay.
For information about overlay datasets, see the
BMC Atrium Core 7.6.04 Concepts and Planning
Guide.
Source Dataset ID If you selected Overlay, type the SourceDatasetId
for this new dataset.
This is the Dataset ID of the existing regular dataset
that your new dataset overlays.
3 Click Save.
Step 2 Define a Qualification set that restricts identification to specified classes, and
associate the Qualification set with the Identify activity.
NOTE
With the previous qualification, your job identifies computer systems, not
subclasses such as BMC_Mainframe. You can update the qualification as needed.
Merging datasets
After you have identified your data, you can merge it from one or more source
datasets into one target dataset to create a reconciled view of that data.
Using the standard or custom precedence rules, you set a precedence value for
each dataset that participates in the merge, including the target, and then create
individual values for any classes or attributes in those datasets that should be
higher or lower. These precedence values determine which dataset, including the
target if needed, supplies the data that is written to the target dataset for each class
and attribute.
Only instances that have reconciliation identities and for which the
ReconciliationMergeStatus attribute is set to Ready to Merge can participate in
a merge. For more information about identification, see “Identifying data across
datasets” on page 97. After an instance is merged into the target dataset, the
ReconciliationMergeStatus attribute is set to Merge Done.
NOTE
When a Merge activity compares the precedence value for an attribute in a source
dataset against the precedence value for the dataset that last supplied the attribute
to the target dataset, that target precedence value is taken from the Precedence
Association set selected for the Merge activity. Whichever Precedence Association
is paired with the “stored” source dataset in that Precedence Association set
supplies the precedence value for the attribute.
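The precedence-based selection described above can be sketched as follows. This Python model is illustrative, not BMC code; the dataset IDs and data shape are assumptions, and tie-breaking is not modeled.

```python
def merged_attribute_value(candidates):
    # candidates: dict mapping dataset ID -> (precedence value, attribute value).
    # The dataset with the highest precedence value (0-1000) supplies the
    # value written to the target dataset for the attribute.
    dataset = max(candidates, key=lambda ds: candidates[ds][0])
    return candidates[dataset][1]
```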
NOTE
Though you cannot merge a NULL value, you can work around this in character
attributes by using blanks. This visually clears an existing value from an attribute,
but can cause confusion if you are using that attribute in Identification rules or
performing Compare activities. Blanks also cannot be used in the MarkAsDeleted
attribute because it is a selection attribute, not a character attribute.
Step 1 Create a Merge activity and select source and production sets to merge. See
“Creating a Merge activity” on page 111.
NOTE
Activities cannot be reused between jobs. If you delete a job, all its activities are
also deleted.
Step 2 Create Precedence Association Sets, which have the following parts.
! Define precedence values for a dataset and its classes and attributes. See
“Creating a Precedence set” on page 115.
! Assign a Precedence set to each dataset participating in the Merge activity. See
“Creating a Precedence Association” on page 118.
Step 3 Optionally, create a Qualification set to define which instances participate in the
Merge activity. See “Using Qualification Sets” on page 94.
IMPORTANT
Do not merge one source dataset into more than one target dataset. To merge to
two target datasets, create two Merge activities—one that merges the source
dataset into an intermediate target dataset, and a second that merges the
intermediate dataset into the final target dataset, such as BMC.ASSET.
Best practices
! To avoid redundant processing, make all Merge activities incremental by
clearing the Include Unchanged CIs option.
! Use only one source dataset for each Merge activity, and pair the Identify and
Merge activities for each source dataset in the job. For example, identify and
merge dataset 1, then identify and merge dataset 2, and so on. This ensures that
attributes required for Identify activities are merged into the target dataset in the
right order.
! Instead of merging multiple discovery sources directly into your production
dataset, merge them into a “consolidated discovery” dataset first. You can
compare this against your production dataset, and use the results to generate
change requests or exception reports for any discrepancies.
TIP
Create and edit a Standard Job so that you can take advantage of the standard rules
as much as possible. For more about Standard Jobs and their defaults, see
“Standard identification and merge rules” on page 146.
Continue on Error Define whether the job continues if the activity has
an error.
Checked—A job containing this activity continues
to run if an error occurs in this activity.
Unchecked—A job containing this activity
terminates if an error occurs in this activity.
Sequence Specify in what order you want this activity to run
relative to other activities in a job. For example, if
this activity has a value of 2 it runs before an
activity with a value of 3. The sequence can be 0 to
1000, inclusive.
Qualification Select a qualification that restricts the classes and
instances used in the activity.
If Use all classes and instances is cleared,
select a set from the Qualification Set list.
To create a Qualification Set, click New/Edit
Qualification Set. For more information, see
“Using Qualification Sets” on page 94.
Name Type a unique name.
The name cannot contain any characters that are
not allowed in file names for the operating system
of your server. For example, on a Windows server,
your job name cannot contain the following
characters: \ / : * ? " < > |
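The name restriction above can be sketched as a simple check. This Python example is illustrative only; the function name is an assumption.

```python
def is_valid_windows_job_name(name):
    # On a Windows server, a job name cannot contain characters that are
    # not allowed in file names: \ / : * ? " < > |
    return bool(name) and not any(c in '\\/:*?"<>|' for c in name)
```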
8 From the Merge Order list, select one of the following options:
WARNING
If used incorrectly, the Defer if Null option can result in instances that cannot
be deleted. For more information, see “Handling NULL values” on page 110.
Best practices
! Use the highest class level possible to take advantage of inheritance.
! A Precedence set does not have to be used exclusively with one dataset. It can
be paired with different datasets in different activities. Likewise, a given dataset
can be paired with different Precedence Association Sets in different activities.
Therefore, design your Precedence Association Sets for flexibility so that you
need fewer of them.
Append to Lists Set whether list values from all source datasets are
appended to list-formatted Character attributes in
the production dataset. Duplicate entries are not
appended to the list.
A list-formatted attribute is a Character attribute
that is intended to hold a list of values according to
a specified format.
The Append to Lists setting at the Precedence set
level is overridden by the same setting in any
Precedences defined for the set.
Checked—If this Precedence set has the highest
precedence value for a list-formatted Character
attribute in a Merge activity, the list values from all
source datasets are appended to the list in the
target dataset. If some other set has the highest
precedence value for the attribute, the Append To
Lists option for that set determines whether values
are appended.
Unchecked—If this Precedence set has the highest
precedence value for a list-formatted Character
attribute in a Merge activity, the list values from
the source dataset paired with this set overwrite
the list in the production dataset.
Precedence Value Enter or select a value for the dataset that uses this
Precedence Set. The value can be 0 to 1000,
inclusive, with higher numbers indicating higher
precedence.
The Precedence Value is overridden by the value in
any Precedences defined for the set.
Require Explicit Precedences Define whether precedences must be explicitly
defined for all classes and attributes.
Select to require explicit Precedence entries for
classes and attributes. You must then add
precedence rules for those classes and attributes.
Deselect to apply this Precedence set to all classes
and attributes for which a Precedence is not
defined.
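The interaction between Precedence Value and Append to Lists can be sketched as pseudologic. This is only an illustration of the documented behavior, not the Reconciliation Engine's implementation; the function and data shapes are hypothetical:

```python
# Illustrative sketch of how a Merge activity resolves one attribute value.
# The function and data shapes are hypothetical; the real logic is internal
# to the BMC Atrium CMDB Reconciliation Engine.

def merge_attribute(sources, append_to_lists=False):
    """Resolve one attribute from several source datasets.

    sources: list of (precedence_value, value) pairs, one pair per dataset.
    Precedence values range from 0 to 1000; the highest value wins.
    """
    winner_precedence, winner_value = max(sources, key=lambda s: s[0])
    if not append_to_lists:
        return winner_value
    # Append to Lists (list-formatted Character attributes only): list
    # values from all source datasets are appended, skipping duplicates.
    merged = []
    for _, value in sources:
        for item in value:
            if item not in merged:
                merged.append(item)
    return merged

sources = [(100, ["dns1", "dns2"]), (800, ["dns2", "dns3"])]
print(merge_attribute(sources))                        # ['dns2', 'dns3']
print(merge_attribute(sources, append_to_lists=True))  # ['dns1', 'dns2', 'dns3']
```

With Append to Lists unchecked, only the highest-precedence dataset's list survives; with it checked, values from every source are combined without duplicates.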
5 Additional reconciliation activities
Each reconciliation job must have at least one activity. Each activity has
components that must be defined.
The following topics are provided:
! Overview of additional activities (page 120)
! Deleting data (page 122)
! Purging soft-deleted data (page 124)
! Comparing datasets (page 126)
! Executing workflow against compared instances (page 128)
! Copying datasets (page 131)
! Renaming datasets (page 137)
! Executing reconciliation jobs (page 139)
[Figure: Components of a reconciliation job — Identify, Compare, Copy, and Purge activities with their associated Identification Ruleset, Workflow Execution Ruleset, Rules, Qualification Set, Qualifications, Precedence, and Dataset components]
Compare activity
The Compare activity operates against instances in two datasets and either
produces a report or executes workflow based on the comparison results.
The report shows those instances that appear in only one of the datasets and details
the differences between instances that appear in both.
The Compare activity lets you compare an expected configuration against an
actual one, which you could use for more than one purpose. You might use
comparison to alert you that something has changed in a configuration that you
expected to remain static. Alternatively, if you have a change request in progress,
you might use comparison to verify that the configuration reaches its expected
new state.
Only instances that have been given an identity can be compared, and they are
compared only against other instances with the same identity. If you choose to
execute workflow as a result of the comparison instead of creating a report, that
workflow can execute against instances from either dataset but not both.
NOTE
An instance must be identified before it can be compared or merged.
Rename activity
You use the Rename activity to rename a dataset. Renaming a dataset does not
change the DatasetId, so all reconciliation definitions that include the dataset still
work with the new name.
Copy activity
You use the Copy activity to copy instances from one dataset to another. You can
set options to determine which relationships and related CIs are copied along with
the selected instances.
Delete activity
You use the Delete activity to delete instances from one or more datasets. This
activity does not delete the dataset itself.
Purge activity
You use the Purge activity to delete instances that have been marked as deleted
from one or more datasets. You can opt to have it verify that each instance has also
been marked as deleted in another dataset before deleting it. This option is useful
when you are purging data from a discovery dataset but only want to purge
instances that are marked as deleted in your production dataset.
Execute activity
You use the Execute activity to execute a reconciliation job. This activity is useful
when you want to execute one reconciliation job immediately before or after
another.
Qualification sets
For most reconciliation activities, you can specify a qualification set for the
purpose of restricting the instances that participate in an activity. Qualification
sets, which are reusable between activities, are qualification rules that each select
certain attribute values. Any instance that matches at least one qualification in a set
can participate in an activity that specifies the qualification set.
For example, you might create a qualification set that selects instances that were
discovered within the last 24 hours and have the domain “Frankfurt” if your
company just opened a Frankfurt office and you are reconciling its discovered CIs
for the first time.
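The "matches at least one qualification" rule can be sketched in pseudologic. Real qualifications are AR System qualification strings, not Python callables; the field names below are hypothetical:

```python
# Hypothetical sketch of qualification-set semantics: an instance participates
# in an activity when it matches at least ONE qualification in the set (OR
# logic across the set's qualifications).

from datetime import datetime, timedelta

def matches_set(instance, qualifications):
    """Return True if the instance satisfies any qualification in the set."""
    return any(qualification(instance) for qualification in qualifications)

# Example qualifications loosely based on the Frankfurt scenario above.
recently_discovered = lambda ci: datetime.now() - ci["LastScanDate"] < timedelta(hours=24)
in_frankfurt = lambda ci: ci["Domain"] == "Frankfurt"

ci = {"Domain": "Frankfurt", "LastScanDate": datetime.now() - timedelta(days=30)}
print(matches_set(ci, [recently_discovered, in_frankfurt]))  # True (domain matches)
```

Because set membership is an OR across qualifications, an instance that fails one qualification can still participate by matching another.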
Deleting data
You can delete instances from one or more datasets using a Delete activity. The
Delete activity performs a physical delete, not a soft delete, and deletes instances
regardless of whether they are soft deleted.
You can optionally restrict the instances to be deleted by using a Qualification
Set, and you can choose to delete only identified instances.
The Delete activity is similar to the Purge activity. For information about the Purge
activity, see “Purging soft-deleted data” on page 124.
5 From the Dataset list, select the datasets from which you want to delete data.
6 In the Additional Parameters area, select one of the following options.
! Identified & Unidentified—Both identified and unidentified instances are
deleted.
! Identified—Only identified instances are deleted.
! Unidentified—Only unidentified instances are deleted.
7 In the Qualification area, define which classes and instances to delete.
a For Use all classes and instances, select to enable or disable.
! Checked—Deletes from all classes and instances without restrictions.
! Unchecked—Allows you to restrict the Delete activity using a qualification
set.
b From the Qualification Set list, select a qualification set.
c To create a qualification set, click New/Edit Qualification Set. For more
information, see “Using Qualification Sets” on page 94.
8 To save the activity, click Done.
9 In the Job Editor, click Save.
When you have added activities to a job, you can execute the job manually or with
a schedule. See “Starting and stopping a job” on page 79.
Continue on Error Define whether the job continues if the activity has
an error.
Checked—A job containing this activity continues
to run if an error occurs in this activity.
Unchecked—A job containing this activity
terminates if an error occurs in this activity.
Sequence Specify in what order you want this activity to run
relative to other activities in a job. For example, if
this activity has a value of 2 it runs before an
activity with a value of 3. The sequence can be 0 to
1000, inclusive.
5 From the Datasets list, select a dataset from which you want to purge data.
6 From the Purge Instances list, select one of the following options.
! Identified & Unidentified—Both identified and unidentified instances are
purged.
! Identified—Only identified instances are purged.
! Unidentified—Only unidentified instances are purged.
7 For Verify in Target Dataset, select one of the following values:
! Checked—An identified instance is purged from a dataset in the Datasets
table only if an instance with the same reconciliation identity in the Target
Dataset is also marked as deleted. Unidentified instances are always purged.
! Unchecked—The target dataset is ignored, and both identified and unidentified
instances are purged.
This option is disabled if you selected Unidentified in the previous step because
there is no way to verify an unidentified instance across datasets.
8 If Verify in Target Dataset is enabled, select a dataset against which to validate the
instances to be purged.
9 To save the activity, click Done.
10 In the Job Editor, click Save.
When you have added activities to a job, you can execute the job manually or with
a schedule. See “Starting and stopping a job” on page 79.
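The Verify in Target Dataset behavior can be sketched as follows. The data layout and field names are hypothetical stand-ins for CMDB instances; this only illustrates the documented decision logic:

```python
# Illustrative sketch of a Purge activity with Verify in Target Dataset
# checked. Instances are plain dicts; the real engine operates on CMDB data.

def purge(dataset, target):
    """Return the instances that survive the purge of soft-deleted data."""
    survivors = []
    for inst in dataset:
        if not inst["MarkAsDeleted"]:
            survivors.append(inst)       # not soft-deleted: never purged
        elif inst["ReconciliationId"] is not None:
            # Identified instance: purge only if the instance with the same
            # reconciliation identity in the target is also marked as deleted.
            match = next((t for t in target
                          if t["ReconciliationId"] == inst["ReconciliationId"]), None)
            if match is None or not match["MarkAsDeleted"]:
                survivors.append(inst)
        # Unidentified soft-deleted instances are always purged.
    return survivors

discovery = [
    {"ReconciliationId": "RE1", "MarkAsDeleted": True},
    {"ReconciliationId": "RE2", "MarkAsDeleted": True},
]
production = [
    {"ReconciliationId": "RE1", "MarkAsDeleted": True},
    {"ReconciliationId": "RE2", "MarkAsDeleted": False},
]
# RE1 is purged (deleted in both); RE2 survives (still live in production).
print([i["ReconciliationId"] for i in purge(discovery, production)])  # ['RE2']
```

This mirrors the discovery-dataset use case above: soft-deleted CIs are removed from the discovery dataset only once production has also marked them as deleted.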
Comparing datasets
You can compare identified data in two datasets, which is useful for things such as
validating expected data versus discovered data or testing a new reconciliation
process.
You can create a Compare activity to compare the data between two datasets. This
activity either creates a comparison report detailing the differences between
datasets or executes workflow based on values in compared instances. For more
information about executing workflow, see “Executing workflow against
compared instances” on page 128.
A comparison report displays instances present in only one of the two datasets,
and also shows differences between the attributes of instances that are in both
datasets. The report is an attachment to an Information event.
Only instances that have reconciliation identities are compared. For more
information about identification, see “Identifying data across datasets” on page 97.
You can also exclude individual attributes from being compared.
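The shape of a comparison report can be sketched with plain dictionaries. This is a hypothetical illustration of what the report contains, not the activity's actual implementation or output format:

```python
# Hypothetical sketch of a comparison report: instances present in only one
# dataset, plus attribute differences for instances present in both. Only
# instances with a reconciliation identity take part; dicts keyed by
# reconciliation ID stand in for real CMDB instances.

def compare(ds1, ds2):
    """ds1, ds2: dicts mapping reconciliation ID -> attribute dict."""
    only_in_1 = sorted(set(ds1) - set(ds2))
    only_in_2 = sorted(set(ds2) - set(ds1))
    diffs = {}
    for re_id in set(ds1) & set(ds2):
        changed = {attr: (ds1[re_id][attr], ds2[re_id].get(attr))
                   for attr in ds1[re_id] if ds1[re_id][attr] != ds2[re_id].get(attr)}
        if changed:
            diffs[re_id] = changed
    return only_in_1, only_in_2, diffs

expected = {"RE1": {"TotalMemory": 8192}, "RE2": {"TotalMemory": 4096}}
actual   = {"RE1": {"TotalMemory": 16384}, "RE3": {"TotalMemory": 2048}}
print(compare(expected, actual))
# (['RE2'], ['RE3'], {'RE1': {'TotalMemory': (8192, 16384)}})
```

Applied to the change-verification use case, a nonempty diff for an instance means the configuration has not yet reached its expected state.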
5 From the Dataset 1 and Dataset 2 lists, select the datasets to compare.
6 In the Qualification area, define which classes and instances to compare.
a For Use all classes and instances, select to enable or disable.
! Checked—Compares all classes and instances without restrictions.
! Unchecked—Allows you to restrict the Compare activity using a qualification
set.
b From the Qualification Set list, select a qualification set.
c To create a qualification set, click New/Edit Qualification Set. For more
information, see “Using Qualification Sets” on page 94.
NOTE
If you select a Workflow Execution ruleset, no comparison report is generated by
this activity. A Compare activity can either generate reports or execute workflow.
WARNING
Do not modify the OBJSTR:Instance_REEscapeToCompareFilters filter, and do not
create any other filters on class forms at execution order 0.
You must create a filter at execution order 1000 that performs the workflow you
want for this class. If the necessary workflow cannot be contained in one filter,
you can use the filter at execution order 1000 to launch a filter guide containing
multiple filters.
NOTE
You do not need a filter to create a Compare activity or a reconciliation job that
includes it. You only need the filter to create a Workflow Execution rule for that
activity. For information about creating a Compare activity, see “Creating a
Compare activity” on page 126.
You can specify multiple Workflow Execution rules against a given class, each
with different qualifications. If you do this, you should have the same number of
filters at execution order 1000, each matching a particular action code and
performing the appropriate filter actions.
7 Click OK.
8 In the New/Edit Workflow Execution Set area, click Save and then click Close.
9 To save the activity, click Done.
10 In the Job Editor, click Save.
11 In BMC Remedy Developer Studio, create one filter for each Workflow Execution
rule that you created.
The filters must have these characteristics:
! Execution order—1000
! Form Name—The join form for the class used in your Workflow Execution rule.
For example, if the rule operates on the BMC_ComputerSystem class, select the
BMC.CORE:BMC_ComputerSystem form.
! Execute On—Modify
Copying datasets
You can copy instances from one dataset to another. You might do this to create
baselines, snapshots, archives, future states, or other types of datasets.
The Copy activity has interdependent options. Table 5-3 on page 133 shows the
Copy activity’s behavior for each possible combination of options. The options are:
! Copy Relationships—Determines whether to restore direct relationships to
instances in the target dataset. This means that when a copied CI is a member of
a relationship in the source dataset, and the other member exists in the target
dataset without the relationship, the relationship instance is copied to restore
the connection between the CIs in the target dataset. The other member is not
copied, and the action is not recursive.
A setting of Copy All restores direct relationships, and a setting of By Qualifier
does not.
Table 5-2 describes situations for when to use the Copy Relationship settings.
Table 5-3 describes the behavior of the Copy activity for each combination of
options.
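The Copy All restore behavior can be sketched as follows. This is illustrative pseudologic under the description above (restore is non-recursive and never copies the other member); the identifiers are hypothetical:

```python
# Sketch of the Copy All setting: when a copied CI is a member of a
# relationship in the source dataset and the other member already exists in
# the target dataset, the relationship instance is restored. The other
# member itself is not copied, and the action is not recursive.

def restore_direct_relationships(copied_ids, target_ids, relationships):
    """Return the relationship instances to copy into the target dataset.

    relationships: iterable of (source_member, destination_member) pairs,
    identified here by reconciliation ID.
    """
    restored = []
    for src, dst in relationships:
        members = {src, dst}
        # Restore only when one member was just copied and both members are
        # now present in the target dataset.
        if members & copied_ids and members <= (copied_ids | target_ids):
            restored.append((src, dst))
    return restored

copied = {"CI-A"}                            # CI copied by this activity
existing = {"CI-B"}                          # already in the target dataset
rels = [("CI-A", "CI-B"), ("CI-A", "CI-C")]  # CI-C exists only in the source
print(restore_direct_relationships(copied, existing, rels))  # [('CI-A', 'CI-B')]
```

The relationship to CI-C is not restored because CI-C is absent from the target; with the By Qualifier setting, neither relationship would be restored unless it matched the Qualification Set.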
5 From the Source Dataset and Target Dataset lists, select datasets to use in the
Copy activity.
6 For Copy Relationships, select which relationship instances to copy.
! By Qualifier—Copy only the relationships included in the Qualification Set.
! Copy All—Copy relationships included in the Qualification Set, and also restore
direct relationships to instances in the target dataset.
7 For Collision Resolution, select how to handle matching instances in the source
and target databases.
! Overwrite—Replace the existing instance in the target dataset with the instance
from the source dataset.
! Display Error—Write an error message to the activity log file and do not copy
the instance.
8 For Include Child CIs, select whether to copy weak destination CIs.
! Unchecked—Copy only the CIs included in the Qualification Set.
Renaming datasets
The Reconciliation Engine works with logical dataset names, each corresponding
to a dataset ID. The dataset ID is what is stored with instance data.
This allows you to rename a dataset without modifying the dataset ID of every
instance in the dataset.
A Rename activity changes only the logical name of the dataset you select. It
retains the old logical name in a new dataset with a GUID as its dataset ID.
IMPORTANT
When you rename a dataset, you should manually update all job definitions to
use the new name. Otherwise, jobs that reference the old name fail to execute.
For example, if you had a dataset with the name “Scan - Current” and the ID
“Scan01May2006” and used a Rename activity to rename it “Scan - Last Week,”
you would then have two datasets: “Scan - Last Week” with the ID
“Scan01May2006,” and a new dataset named “Scan - Current” with a GUID as
its dataset ID.
WARNING
Do not use an Execute activity to run the same job of which it is a member. This
creates an endless loop.
Continue on Error Define whether the job continues if the activity has
an error.
Checked—A job containing this activity continues
to run if an error occurs in this activity.
Unchecked—A job containing this activity
terminates if an error occurs in this activity.
Sequence Specify the order in which to run this activity
relative to other activities in a job. For example, if
this activity has a value of 2, it runs before an
activity with a value of 3. The sequence can be 0 to
1000, inclusive.
6 Reconciliation configuration
Server settings
By default, the Reconciliation Engine is installed in the following locations:
! Microsoft Windows—C:\Program Files\BMC Software\AtriumCore\cmdb\server\bin
! UNIX operating system—/opt/bmc/AtriumCore/serverName/cmdb/server/bin/
The Reconciliation Engine is managed by armonitor and is stopped and started
with the BMC Remedy AR System server. You can modify your Reconciliation
Engine server configuration and set the number of threads.
Maximum Log File Size (KB) When the log file reaches its maximum size, it is
renamed using the jobNameN file name syntax, and
log messages continue to be written to the original
file, which is now empty. A value of zero indicates
no maximum file size. The default is 300 KB.
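The rotation scheme described above can be sketched in a few lines. File handling is simplified and the helper name is hypothetical; the Reconciliation Engine performs this internally:

```python
# Sketch of the documented rotation: when a log reaches its maximum size it
# is renamed with a numeric suffix (the jobNameN syntax), and logging
# continues in a fresh, empty file under the original name.

import os

def rotate_if_needed(log_path, max_bytes):
    """Rename a full log file to the next free jobNameN name."""
    if max_bytes == 0 or not os.path.exists(log_path):
        return None                      # zero means no maximum file size
    if os.path.getsize(log_path) < max_bytes:
        return None
    base, ext = os.path.splitext(log_path)
    n = 1
    while os.path.exists(f"{base}{n}{ext}"):
        n += 1
    rotated = f"{base}{n}{ext}"
    os.rename(log_path, rotated)         # new messages then go to a new file
    return rotated
```

Calling the helper on a 200-byte `job.log` with a 100-byte maximum would rename it to `job1.log` and leave the original name free for the next write.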
Look Into Other Datasets for Parallel Continuous Jobs Select this option if you
have configured two or more continuous jobs that run in parallel on the same
production dataset. When this option is selected, the continuous jobs run the
Identify activity on the source and production datasets that are configured for
the jobs, and on all other source datasets that merge data into that production
dataset. This ensures that the same reconciliation ID is assigned to identical CIs
in all source datasets and that data integrity is maintained.
! To configure the deletion of the 0 byte log files for a reconciliation job
1 In the Reconciliation console, select a job and click Edit Job.
2 In the Job Editor, enable Delete files on exit.
3 Click Save, and then click Close.
The Reconciliation Engine will delete any 0 byte log files that are created during
runs of this reconciliation job.
Configuring threads
The Reconciliation Engine is multithreaded, which improves its performance. The
number of threads available to the Reconciliation Engine is determined by settings
for the BMC Remedy AR System server where it is installed.
The BMC Remedy AR System server has Fast and List server queues defined, and
a maximum number of threads is specified for each. By default, the maximum
number of threads for the Reconciliation Engine is the higher of these two
numbers. You can also directly specify a maximum. Use BMC Remedy User to
access the AR System Administration Console to modify the threads.
Best practice
Take advantage of Reconciliation Engine multithreading by breaking up large jobs
into smaller ones and running them concurrently, but limit your number of
concurrent threads to twice the number of CPUs in the server.
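The default thread limit and the best-practice cap can be expressed as simple arithmetic. The function names are illustrative; the actual values come from the AR System server's queue settings:

```python
# Sketch of the thread sizing described above: the Reconciliation Engine
# defaults to the larger of the Fast and List queue maximums, and the
# best practice caps concurrency at twice the server's CPU count.

import os

def default_max_threads(fast_queue_max, list_queue_max):
    """By default, the engine uses the higher of the two queue maximums."""
    return max(fast_queue_max, list_queue_max)

def recommended_concurrency(cpu_count):
    """Best practice: no more concurrent threads than twice the CPU count."""
    return 2 * cpu_count

print(default_max_threads(fast_queue_max=5, list_queue_max=8))  # 8
print(recommended_concurrency(os.cpu_count() or 1))
```

For example, on a 4-CPU server you would split a large job into smaller concurrent jobs but keep total concurrency at or below 8 threads.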
Using BMC.ASSET as the production dataset, the standard merge rules assign a
precedence of 100 for each BMC dataset and, for some datasets, set precedences for
specific classes and attributes. For every attribute during a Merge activity, the
Reconciliation Engine compares the most specific precedence from each dataset.
For example, in the BMC Configuration Import dataset, the BMC_Memory class has
a precedence of 800. Because all datasets have a precedence of 100 and no other
dataset has a precedence defined for BMC_Memory, data from BMC Configuration
Import overwrites data from other datasets when merging BMC_Memory instances.
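The "most specific precedence" lookup in this example can be sketched as a fallback chain: attribute-level overrides class-level, which overrides the dataset default. The rule structure is hypothetical; only the precedence values come from the standard rules described above:

```python
# Sketch of most-specific-precedence resolution during a Merge activity.
# Rules are keyed by (dataset, class, attribute); None means "not specified".

def effective_precedence(rules, dataset, cls=None, attr=None):
    """Return the most specific precedence defined for a dataset/class/attribute."""
    for key in ((dataset, cls, attr), (dataset, cls, None), (dataset, None, None)):
        if key in rules:
            return rules[key]
    return 100  # the standard rules assign 100 to each BMC dataset

rules = {
    ("BMC.ADDM", None, None): 100,
    ("BMC Configuration Import", None, None): 100,
    ("BMC Configuration Import", "BMC_Memory", None): 800,
}
# For BMC_Memory, BMC Configuration Import (800) beats the other dataset (100),
# so its data overwrites theirs when instances are merged.
print(effective_precedence(rules, "BMC Configuration Import", "BMC_Memory"))  # 800
print(effective_precedence(rules, "BMC.ADDM", "BMC_Memory"))                  # 100
```

The same chain explains why a single class-level entry is enough to make one dataset authoritative for that class without touching any other rules.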
NOTE
Do not save the file using Microsoft Notepad because it does not save the carriage
returns and line feeds (CR+LF) properly. Use a plain text editor that retains
CR+LF. Otherwise, importing the definitions fails.
NOTE
Before using the CLI on UNIX for the first time, you must add an entry to your
library path. The CLI also has several other options not described in the following
procedure, some of which might be necessary depending on your AR System
server environment. For more information about these topics, see the BMC Remedy
Action Request System 7.6.04 Integration Guide.
Glossary 151
BMC Atrium CMDB 7.6.04
DMTF
See Distributed Management Task Force (DMTF).

DSL
See Definitive Media Library (DML).

Enterprise Integration Engine
See BMC Atrium Integration Engine.

event
A particular type of change to the instances of specified classes. You can publish an event so that any instance of it is written to the CMDB:Events form. You can receive notification each time an instance of the event occurs by polling the form.

Exclusion rule
A rule that specifies an attribute to be excluded from participation in a Comparison activity.

Execute Job activity
A Reconciliation Engine activity that executes a job.

extension
A logical set of classes and attributes, usually in its own namespace, that is not part of the Common Data Model (CDM).

extension loader
The cmdbExtLoader program, which is used for installing data model extensions and importing other BMC Atrium CMDB data and metadata.

federated data
Data linked from CIs in BMC Atrium CMDB but stored externally. Federated data might represent more attributes of the CIs or related information such as change requests on the CIs.

federated interface
An instance of the BMC_FederatedInterface class that specifies how to access a particular type of federated data. See also federated link.

federated link
The connection between a class or CI and a federated interface.

federated product
A product that holds federated data. It can be linked to more than one federated interface.

federation
The act of linking CIs in BMC Atrium CMDB to external data.

Federation Manager
A component of BMC Atrium CMDB that you can use to manage federated data. From the Federation Manager, you can view, create, and modify federated products, federated event interfaces, and federated links.

filter
A set of criteria for restricting the information displayed by the Atrium Explorer. This is different from a BMC Remedy AR System filter.

final class
A class that cannot have subclasses.

foreign key substitution
A method of federation that assigns a key from the federated product to each linked CI. Foreign key substitution is useful when no attributes that also exist in BMC Atrium CMDB are stored in the federated product.

graph walk
The act of searching for CIs and relationships in BMC Atrium CMDB.

graph walk functions
A set of specific functions that are used to search for CIs and relationships in BMC Atrium CMDB. Use these functions when you want to search for CIs regardless of their class or relationship.

group
A set of a particular type of reconciliation definition that is referenced by an activity. See also Identification group, Precedence group, Qualification group, Workflow Execution group.

GUID
A globally unique identifier, automatically generated by the BMC Remedy AR System server. GUIDs are used for instance IDs, reconciliation IDs, and other cases where a unique value must be generated without human interaction.
subclass
A class that is derived from another class, which is called its superclass. The subclass inherits all the attributes of its superclass and any superclasses above it in the hierarchy, and can also participate in relationships defined for all superclasses.

superclass
A class from which other classes, called subclasses, are derived.

synchronization
The automatic process of creating BMC Remedy AR System forms and workflow to represent a class that has just been created or modified. The class is not available until synchronization completes.

text normalization
See normalize.

unqualified data
Information about an unknown device at a known IP endpoint. If you discover an IP address, but lack the credentials to identify the device at that endpoint, data for that device is unqualified. For example, the device might be a laptop computer, printer, router, or some other type of device. BMC Atrium CMDB stores unqualified data as BMC_ComputerSystem instances.

weak reference
See weak relationship.

weak relationship
An optional characteristic for relationship classes, signifying that the members of a relationship form a composite object that can be reconciled as one. The destination member is considered the weak member of a weak relationship, existing as part of the source member (also known as the strong member).

Windows Management Instrumentation (WMI)
The Microsoft application of the Web-Based Enterprise Management initiative for an industry standard for accessing management information.

WMI
See Windows Management Instrumentation (WMI).

workflow
BMC Remedy AR System objects such as active links, escalations, and filters that perform actions against data.

Workflow Execution group
A set of Workflow Execution rules. Each Comparison activity can optionally reference one Workflow Execution group.

Workflow Execution rule
A rule used when comparing instances between datasets. When a compared instance matches the qualification for the rule, specified BMC Remedy AR System workflow is executed against the instance or the instance against which it is compared.

working dataset
One of a pair of dataset IDs that is specified when executing a job with dynamic dataset substitution. The job is executed with the working dataset in place of the defined dataset.

write security
The permission required along with row-level security to modify or delete a specific instance. See also row-level security.
Index
D
datasets
  auto-identifying 97
  comparing 120, 126
  configuring for normalization 31
  copying instances 121
  creating 105
  deleting instances 121
  identifying instances 97
  merging 67
  merging data 109
  purging instances 121
  reconciliation 68
  reconciliation and 137
  renaming 121, 137
deleting dataset instances 121
disabling
  Normalization Features 31
disabling normalization globally 52

E
enabling
  Normalization Features 31
event types
  error 87
  information 87
  warning 87
exporting
  normalization configurations 52
  reconciliation definitions 148

G
global normalization, disabling 52
groups
  See sets.

I
identification
  configuring standard rules 147
  standard rules 146
Identification rules required 101
identifying
  activity overview 67
  overview 97
impact relationships
  and normalization 19
importing
  normalization configurations 52
  reconciliation definitions 149
incremental merge 82, 83, 111, 113
inline normalization
  overview 42
Instance Permissions
  creating rules 53
instances
  copying dataset 121
  deleting dataset 121
  identifying in datasets 67
  purging dataset 121
IT Service Management (ITSM), using with Normalization Engine 25

J
jobs
  See normalization jobs or reconciliation jobs

L
log files
  Normalization Engine 50
  Reconciliation Engine 144
logging
  batch normalization 50
  continuous normalization 50
  normalization 50
  normalization API 51
  reconciliation jobs 144

M
manual identification 107
mapping
  categorization aliases 41
  product and manufacturer aliases 40
MarketVersion attribute 19
merge
  configuring standard rules 147
  standard rules 146
merging data from datasets 109
merging data, incrementally 82, 83, 113
merging datasets
  incrementally 111
  overview 67
multitenancy in normalization 23
N
naming, for relationships 21
NE Administrator role 28
NE User role 28
ne_classconfig 36
normalization
  batch, overview 42
  configuring classes 36
  configuring datasets 31
  configuring logging 50
  continuous, overview 42
  exporting configurations 52
  importing configurations 52
  inline, overview 42
  modes 42
  overview 14
  process overview 16
Normalization Features
  enabling 31
  Impact Normalization 19
  overview 14
  Relation Name 21
  Version Rollup 19
normalization jobs
  batch 47
  continuous 47
Normalization Simulation utility 37
NormalizationStatus, values 24
null value and normalization 23

P
permissions
  CMDB RE Definitions Admin role 142
  CMDB RE Manual Identification role 142
  CMDB RE User 142
  CMDB RE User role 142
  NE Administrator role 28
  NE User role 28
  row-level rules 53
ports, configuring for RPC 49
Precedence Sets, creating 115
previewing normalization 37
Product Catalog
  and normalization 15
  CI Type and normalization 15
  preparing for normalization 28
  See BMC Atrium Product Catalog.
  simulating changes after normalization 37
product support 3
purging dataset instances 121

Q
Qualification Sets, creating 94
queues, configuring for RPC 49

R
reconciliation
  comparing datasets 120
  copying dataset instances 121
  deleting dataset instances 121
  executing jobs 121
  exporting definitions 148
  identifying datasets 67
  importing definitions 149
  merging datasets 67
  namespaces and 70
  purging datasets 121
  renaming datasets 121
Reconciliation Engine
  executing jobs 121
  modifying server configuration 143
  operations by way of workflow 82
Reconciliation Identity attribute 97
Reconciliation IDs
  overview 70
reconciliation jobs
  continuous 80, 145
  creating 89
  initiating with workflow 82
ReconciliationIdentity
  overview 70
ReconciliationMergeStatus attribute 67, 80, 86
Relation Name
  overview 21
relationships
  impact normalization 19
  name normalization 21
removing activities 91
renaming datasets 121, 137
roles
  CMDB RE Definitions Admin 142
  CMDB RE Manual Identification 142
  CMDB RE User 142
  NE Administrator 28
  NE User 28
row-level permissions
  creating rules for 53
rules
  configuring for identification 147
  configuring for merge 147
  standard for identification and merge 146
  Suite Rollup 59
  Version Rollup 55
running jobs 81

S
schedules, reconciliation weekly 81
server group
  normalization 30
  reconciliation 71
sets
  creating Precedence 115
  creating Qualification 94
simulating normalization 37
software license management
  and normalization 19
standard rules 146
  configuring for identification 147
  configuring for merge 147
Suite Rollup
  creating rules 59
suites
  creating rules 59
support, customer 3
system normalization, disabling 52

T
technical support 3
threads
  reconciliation 146
  RPC, configuring 49
types of reconciliation events
  error 87
  information 87
  warning 87

V
Version Rollup
  creating rules 55
  overview 19

W
workflow
  executing against compared instances 122, 128
  starting jobs with 82