SQL 2012 SP1 For System Center 2012 R2
Contents
Introduction
Overview of SQL
Overview of System Center
The SQL server
    SQL Server Workloads
        OLTP
        DW
    Windows Server Settings
        Disks, Partitions and RAID
        SQLIO
        AV Settings
        Processor Scheduling
        Visual Effects
        System Failure
        Page files
        Perform volume maintenance tasks
        Lock Pages in Memory (LPIM)
        Power Settings
        Location
    Instance specific configuration
        The TempDB
        Optimize for ad-hoc workloads
        Min and Max Server Memory
        SQL backup compression
        Max degree of parallelism
    The Service Principal Name (SPN)
    Standard or Enterprise
    Running SQL on a VM
        SQL and Hyper-V
        SQL and VMware
Introduction
This guide has been written to help you make decisions about arguably one of the most important building blocks of any System Center deployment: the database server. As more organisations deploy more of the applications within the suite, we will look at the options for co-locating different System Center databases on the same instance, or on a new instance on the existing SQL server.
This guide was written by Paul Keely with the intention of helping other System Center
administrators plan for the SQL platform that will be needed for an enterprise deployment, and help
the SQL DBA get a better understanding of System Center and its requirements on SQL. The data in
this guide comes from sources like TechNet, Microsoft guides, white papers, blogs and personal
experience. Where a personal opinion is given it will be listed as such.
The main areas of personal opinion will be centred on database sizing, virtualisation and co-location. The reason this guide gives advice on database sizing and the like is that the TechNet forums are constantly choked with questions like:
What size should my Operations Manager DB be?
Where should it be located?
Can I have the SCCM DB on the same instance as my SCVMM DB?
How do I move my SCO DB?
This guide will endeavour to answer as many of these questions as possible and act as a reference
for SQL server performance with regard to System Center.
Chapter 3, "The SQL Server", is designed to help the typical System Center administrator make informed decisions about the SQL backend. It goes into some detail on SQL performance and how it can be configured, and will likely be of limited use to the dedicated SQL admin. The rest of the chapters will likely be more helpful to the SQL admin, as they walk through the System Center requirements for SQL and will help you understand some of the demands on your systems.
This guide is not a diehard SQL performance guide, as that is not what it was intended to be, and there is plenty of in-depth SQL performance and tuning content on the web.
Overview of SQL
SQL Server is used by System Center for operational databases and reporting data warehouses. It is the single most critical component of any System Center application; it needs to be designed and configured correctly, and its failure is a big issue for your management system.
The first revision of this document will focus on SQL 2008 R2 as at the time of writing System Center
2012 SP1 was not released, but as soon as it is, the document will be updated to look at SQL 2012
also.
No matter what you decide on, make one thing clear: don't place your System Center application on the same server as your SQL instance. Just don't. Microsoft has really shifted the focus of System Center away from small companies towards the medium to large enterprise, the reason being that small organisations in general lack the IT skills needed to support the complexity of System Center. So don't deploy an enterprise product, or several of them, sharing the same server with your DB instance and then start moaning later that the console performance sucks. It doesn't; your design does.
SQL Server Workloads
[Table: the System Center databases and their workload types. The database name column was lost in extraction; of the nine databases listed, seven are OLTP and two are DW.]
OLTP
OLTP databases have a high number of short transactions. The data tends to be more volatile, there is a higher number of write activities to the data files, and in general an OLTP DB will generate more IOPS than a DW. From a SC perspective, data in an OLTP DB like the SCOM operational DB only stays in the DB for a short period of time.
DW
The DW in SC is used to hold data for longer time frames, and from the DW we can run reports to ascertain, for example, performance data over a long timeframe, normally up to just over one year. In general the DW will have longer-running queries than the OLTP, and often the data is more static.
Visual Effects
Set visual effects to "Adjust for best performance".
System Failure
Set the system failure option to "Automatically restart".
Page files
By default Windows sets the page file to 1.5 times the amount of installed RAM. The three main components that make up the foundation of your OS are the CPU, memory and the hard disk. Windows, applications and drivers use virtual memory addresses that are translated to actual RAM by the hardware. The Virtual Memory Manager (VMM, not to be confused with the VMM that manages your virtual environment) is the component within the OS that translates the virtual address space to the physical memory on your server. When a process tries to write into memory it requests a virtual address from VMM. Once the data has been written, VMM decides whether it stays in RAM; if there is sufficient pressure on RAM it may move the data to the page file to make room for another process. When you try to access the data again it has to be loaded from the page file, and a page fault occurs.
There are two types of page fault: soft and hard. A soft page fault occurs when SQL requests a new page from VMM. The second, and more damaging from a performance standpoint, is the hard page fault: VMM has assigned SQL pages to its virtual address space but then ran out of physical memory, and so sent one of SQL's pages to the page file on your hard disk. When SQL now makes a call for that page, VMM has to find it in the page file and write it back to memory. When you are paging a lot you need to add more memory to the server.
If you leave the page file at 1.5 times the amount of RAM you won't hurt performance, but on a server with large amounts of RAM do you still need such a large page file? On systems with hundreds of gigs of memory a page file of 8 GB would seem appropriate. It is also considered a good idea to have a second page file on a disk separate from the OS.
An error similar to this in the SQL error log indicates that the SQL working set has been trimmed, and that you should either call Microsoft Support or enable LPIM:
A significant part of SQL memory has been paged out.
When you enable LPIM only the SQL buffer pool gets locked, and it is strongly recommended that you also set max server memory at the instance. When you use LPIM the SQL buffer pool can't be paged out, and as such it requires less overhead to manage. This can give a significant performance gain in high-end SQL environments.
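A quick way to see whether the working set is under pressure, and whether LPIM is actually in effect, is the process memory DMV. This is a minimal sketch; the columns assume SQL Server 2008 or later:

```sql
-- Minimal sketch: inspect SQL Server's own process memory state.
-- A low memory_utilization_percentage suggests the OS has trimmed
-- (paged out) part of SQL's working set.
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,     -- non-zero when LPIM is in effect
       memory_utilization_percentage,
       page_fault_count
FROM sys.dm_os_process_memory;
```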
Power Settings
Running Windows Server on the default power plan can result in less than optimal CPU performance on the server. In his excellent book "SQL Server Hardware", Glenn Berry describes performance testing conducted with the "Balanced" and the "High performance" power plans, and he recorded significant performance gains using the latter.
Location
Some of the applications, like VMM, support hosting your SQL server in a separate domain from your VMM server, while others don't. In my opinion it makes no sense to locate your SQL instances in separate domains or separate data centres. If you store your SQL server in a separate domain from your System Center application servers, you run the risk of authentication failures with Kerberos, trusts and so on. There are enough things to troubleshoot and manage in System Center without adding more complexity into the mix. My rule of thumb is to keep my SQL backend in the same domain and data center as my System Center servers.
Three factors help decide the location:
- Server count: the data center that hosts the greatest number of servers is a good location in terms of the pure number of servers to monitor, back up and so on.
- Storage: hosting your SQL in a data center where you have fast storage, possibly with tiered storage, makes sense; SAN replication, availability and so on all help swing the decision.
- Admin users: housing SQL closer to the greater number of admin users was a great idea in the past for applications like SCOM. It is still a good idea to place a terminal server in the data center where you place your SQL backend.
When this server restarts the DB will be flushed; a new one will be created at 8 MB and will autogrow continually until it reaches the 609 MB size.
Store them on fast disks
Ideally your tempdb should be stored on a RAID 10 set. When Microsoft published a blog on how they deployed SCOM 2007, they made a big deal of how they placed tempdb on a high-performing RAID 10 set.
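Rather than letting tempdb autogrow from 8 MB after every restart, you can pre-size it. The sketch below is illustrative only; the file names, drive path and sizes are assumptions, not values from this guide:

```sql
-- Pre-size the primary tempdb data file so the database does not have
-- to grow back up from 8 MB after each restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

-- Hypothetical second data file on a fast (ideally RAID 10) volume.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
```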
Changes you make to the memory settings take effect straight away; you don't need to restart services.
How can you know if you are on the money with your memory settings? It turns out there are two counters that can help: Available Mbytes and Page Life Expectancy (PLE).
PLE is a measure of how long a page will stay in the buffer cache's memory uncontested. The figure of 300 seconds is considered about as low as you want to go; values like 30 seconds are considered critical, and on many SQL servers that I work on I see values like 4,000 seconds!
Available Mbytes should really not descend below 500 MB, and a value of about 1 GB would be considered the safe low-water mark. So how do I use these two counters to work out my max memory settings?
Well, if you have a low PLE value (below 300) but a high available memory value, then you have room to assign more memory to your SQL instance. On the other hand, a very high PLE value paired with a low available memory value could indicate that you have too much memory assigned to SQL and not enough for the OS.
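Min and max server memory are set with sp_configure (or via the instance properties in Management Studio). The values below are placeholders for illustration, not recommendations from this guide:

```sql
-- Illustrative only: floor the instance at 4 GB and cap it at 8 GB.
-- As noted above, the change takes effect immediately, no restart needed.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 4096;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;
```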
SQL backup compression
If you are using SQL native backup then it makes sense to use the SQL backup compression setting, as it allows the backup to complete more quickly. Your backup disk set is the one SQL file location that can live on a RAID 5 set, as the relatively poorer IOPS of RAID 5 is not as noticeable for backups, and certainly not if you use SQL backup compression. With compression there is always a trade-off: in SQL you get higher CPU usage to perform the compression, but reduced I/O due to the smaller file size of the backup.
A lot of Host Bus Adapters are optimised to handle I/O transfer sizes of around 128 KB, whereas SQL Server issues its backup I/O in transfers of up to about 1,000 KB.
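As a sketch, a compressed native backup looks like this; the database name and backup path are placeholders:

```sql
-- Compressed native backup: higher CPU during the backup, but a smaller
-- file and less I/O. CHECKSUM adds a basic integrity check of the pages.
BACKUP DATABASE OperationsManager
TO DISK = 'B:\Backups\OperationsManager.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
```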
As a last point of note, if you are using a clustered SQL instance there can sometimes be an issue with registering the SPN; have a read here for some more info.
Standard or Enterprise
Microsoft offers a number of SQL options, and in the main the System Center admin is only going to deal with SQL Standard and/or Enterprise. The exceptions to this are:
A: The default install of a SCCM secondary site will install SQL Express.
B: DPM will give you the option of a local SQL Express install.
In both of these situations you can opt for a pre-existing SQL instance instead.
The following tables give an overview of the feature sets of the two versions. The first table mainly looks at features as they pertain to System Center, or common features; the second mainly shows the additional features of the Enterprise version.
[Feature comparison tables for SQL Server Standard and Enterprise, largely lost in extraction. Recoverable points: Standard supports 16 instances and 2-node failover clustering, while Enterprise supports 50 instances and 16-node failover. The tables also covered number of CPUs, AlwaysOn, security, data warehouse, business intelligence, enterprise manageability, programmability, hypervisor support, clustering, DB mirroring and Integration Services, with Enterprise-only features including application embedding, Report Builder, database snapshots and indexed views.]
Running SQL on a VM
In the survey I posted, with the results listed at the end of this document, 70% of respondents are running SQL on a VM to support System Center. It's likely that this number will continue to rise as more companies deploy servers only in a virtual environment. As most VM environments are built on a highly available model, you get the benefit of another level of redundancy for your SQL environment. When I was doing some research before writing this e-book I posed the following question on the TechNet SQL forum:
"If you are going to deploy a SQL instance to a VM, what are the things you might do differently compared to running it on physical hardware?"
A number of the SQL MVPs responded, and one reassuring comment was: "If you provide the correct resources for the VM, no there won't be a difference. If you don't then there will be."
Acknowledging the fact that you are going to need the same IOPS and memory no matter whether you deploy to physical or virtual, some of the biggest mistakes are:
1. Oversubscribing host hardware with too many vCPUs mapped to pCPUs. Anything beyond
2:1 is definitely going to affect performance regardless of what the hypervisor vendor says.
2. Oversubscription of memory on the host hardware leading to ballooning issues of memory
demanding VMs like SQL server. This goes against all the vendor recommended practices for
virtualizing SQL Server
3. Undersized I/O subsystem. The I/O requirements for SQL Server are still the same in a
VM. The 6TB SAN with only 15 drives in it that is being used for the VM environment using
1Gb/s iSCSI is a recipe for disaster if you put your entire infrastructure on it without testing
and knowing your I/O workload and demand. Size is not the same thing as performance.
4. Using more virtual CPUs than necessary for a VM. Each virtual CPU has a scheduling
overhead associated with it and at times the hypervisor has to schedule all of the virtual
CPUs concurrently through a process known as co-scheduling. To do this it has to pre-empt
the same number of physical processors as virtual CPUs. When other activity from smaller
VMs is being scheduled this can cause a larger VM to wait, and performance is affected. If
this is a problem, a 4vCPU VM can perform faster than an 8vCPU VM by as much as 3-5
times depending on the level of problem that exists. Less is more in a virtual world.
SQL and Hyper-V
Running SQL on Hyper-V has been fully tested and benchmarked by Microsoft. With Server 2012,
Hyper-V has closed the gap with ESX and more people are likely to deploy SQL on Hyper-V 3.0.
There are two useful documents from Microsoft with regard to SQL. The first is "Running SQL Server 2008 in a Virtual Environment: Best Practices and Performance Recommendations". The second, from the SQL CAT team, is "High Performance SQL Server Workloads on Hyper-V".
Some of the key points to underline are:
- Using pass-through disks there was almost no difference in IOPS between SQL on Hyper-V and physical.
- Dynamically expanding disks will hurt your SQL performance big time.
- Running iSCSI through a NIC that also carries standard network traffic can dramatically reduce throughput efficiency.
"Everyone is virtualising SQL, so I will too." Not quite. When Microsoft recently published a blog on how they deployed SCCM 2012, it spoke about how they used physical SQL. So why might you decide to go physical instead of virtual?
- Adding capacity to an existing physical SQL cluster.
- Knowing in advance that a greater number of System Center products will be deployed from the start, most likely in mid-sized and above environments.
SQL Performance
There are three main areas that SQL interacts with: CPU, disk and memory. These three areas are quite important to the IT pro, as you may need to go to other teams and let them know that you need more CPU clock cycles, more memory or faster disks.
I wasn't sure whether it was better to talk about the SQL Server BPA here or in the Troubleshooting SQL chapter, but decided to detail it here, the reason being that running the BPA should be your first step in looking at your SQL server.
So how do we go about measuring SQL Server performance for System Center? Obviously the first things to think about are Perfmon and SCOM, and once we have looked at these two tools we can move on to deeper monitoring components like Dynamic Management Views.
Perfmon
Perfmon is fairly lightweight, but it is best run from another server to get a more comprehensive monitoring picture. The advantage of Perfmon is that you can add a large number of counters for a SQL server.
WOW and Perfmon
If you are trying to monitor 32-bit SQL on a 64-bit version of Windows, you will be using WOW, or Windows on Windows. If you are managing an older version of System Center in your environment and have to look at its SQL performance, you won't see any SQL performance counters loaded into your standard Perfmon view. To access your 32-bit counters you need to open Perfmon with:
mmc /32 perfmon.msc
Running Perfmon locally on your SQL server should have almost no impact on the results, except if you are monitoring a server with very poor performance. Running a smaller number of counters often makes it easier to read the results; another tip is to view them in report format, as can be seen in the following image.
Note: when you query a DMV to look at a performance counter and you enter the syntax incorrectly, you will not get a T-SQL error; the query will just return a null value.
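For example, here is a sketch of reading Page Life Expectancy from the counter DMV; if either name below were misspelled, the query would simply return no rows rather than raise an error:

```sql
-- Read Page Life Expectancy from the performance counter DMV.
-- object_name varies with the instance name, hence the LIKE.
SELECT RTRIM(object_name) AS object_name,
       RTRIM(counter_name) AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```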
Performance counters
There are so many SQL and Windows performance counters that it's possible to lose your way when trying to monitor SQL. So in this section we start off with a cheat sheet of just a few counters, and then go deeper into a larger number of SQL counters and what their values ideally should be.
Cheat Sheet
- MSSQL$<instance>:Buffer Manager\Buffer cache hit ratio
- LogicalDisk(*)\Avg Disk sec/Read
- LogicalDisk(*)\Avg Disk sec/Write
- \Processor Information(_Total)\% Processor Time: should be below 60%; over 80% indicates processor pressure
Object | Counter | Target | Description
Memory | Available Mbytes | > 400 MB | We all like to sweat the tin, but give your SQL servers at least 400 MB free.
Memory | Pages/sec | < 40-50 | The rate at which pages are read from or written to the page file; too much paging = not enough memory.
SQL Server: Buffer Manager | Page Life Expectancy | > 300 | The estimated time a page will stay in memory; this could be tens of thousands on a system with lots of memory.
SQL Server: Buffer Manager | Buffer cache hit ratio | 97-100% | The proportion of pages that were found in cache and not read from disk.
SQL Server: Buffer Manager | Page Reads/sec | < 90 | Higher than 90 pages per second can indicate either a low amount of memory or poor indexing.
SQL Server: Buffer Manager | Lazy Writes/sec | < 20 | The number of times SQL moves pages to disk to free up space; the ideal number is 0.
Paging File | % Usage | < 50% | The amount of the page file in use; the lower the better.
Paging File | % Usage Peak | < 70% | The peak amount of the page file used since the last time the server was rebooted.
System | Processor Queue Length | (lost) |
Network Interface | Bytes Total/sec | See description |
Physical Disk | Average Disk sec/Read | < 8 ms |
Physical Disk | Average Disk sec/Write | < 8 ms |
Processor | % Processor Time | < 60% |
Access Methods | Workfiles Created/sec | < 20 | The number of workfiles created per second.
Access Methods | Full Scans/sec | < 1 | Greater than 2 shows a high amount of index scanning.
There are obviously a large number of counters, both SQL and OS, but what you have now is a good baseline to start with.
Paul Randal has adapted the script to look at some more detail on the waits, and using that I can see more detail on each wait.
So now you have a method to get into waits, but what do they mean? Here are some of the most common waits and some resources on what they mean.
Wait type | Description
CXPACKET | Parallelism waits: one thread in a parallel query waiting for the other threads to finish.
PAGEIOLATCH | Waiting for data pages to be read from disk into the buffer pool; often a sign of I/O or memory pressure.
OLEDB | Waiting on an OLE DB provider, for example linked server queries.
ASYNC_NETWORK_IO | Waiting for the client application to consume the result set.
THREADPOOL | Waiting for an available worker thread; can indicate thread starvation.
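As a starting point (and not the adapted script mentioned above), the raw aggregate numbers can be pulled straight from the wait stats DMV:

```sql
-- Top waits by total wait time since the last service restart or stats
-- clear. A production script would also filter out benign system waits.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```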
Operational DB
The operational DB server must have:
- .NET Framework 3.5 and 4
- Full-text search
The default size the installer offers is a 1 GB DB. From a personal perspective this is unsuitable for anything other than a home lab; a starting point of between 30 GB and 50 GB, with a log file half the size of the DB, makes more sense. The DB needs to have at least 50% free space at all times to allow maintenance tasks to run. Microsoft recommends that the DB files be on a RAID 10 set, with the RAID controller having a battery-backed write cache.
Estimated IOPS

Agents | Estimated IOPS
1-500 | 500
501-1,000 | 875
1,001-3,000 | 1,000
3,001-6,000 | 1,500
6,001-10,000 | 2,000
10,001-15,000 | 2,500

The exact profile of the DB depends on a great number of factors; the obvious ones are the number of agents, the number of MPs, and the number of custom rules and monitors. Infront Consulting has a number of client MPs, and they have found it makes better sense to have a new Management Group with a new DB. It goes without saying that the longer you keep data in the DB, the bigger it is going to be. Some SCOM admins switch off performance collection in the operational DB completely and just rely on the DW for reporting data. This frees up space and improves response times; however, having performance data in the Operations console is so quick and useful that most SCOM admins leave it in.
Let's have a look at a SQL DB for SCOM and what should be configured:
Recovery Model = Simple
Auto Close = False
Auto Shrink = False
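The same settings can be applied from T-SQL; a sketch, assuming the default database name OperationsManager:

```sql
-- Apply the recommended settings to the SCOM operational database.
ALTER DATABASE OperationsManager SET RECOVERY SIMPLE;
ALTER DATABASE OperationsManager SET AUTO_CLOSE OFF;
ALTER DATABASE OperationsManager SET AUTO_SHRINK OFF;
```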
Kevin Holman wrote a fantastic guide on maintenance plans for Operations Manager, and it's a great read if you are planning your SQL environment or having to justify your recovery model.
SCOM DW
The DW holds data like alerts, performance, availability and events. By default the DW holds data for 400 days, but it rationalises the data: if a specific performance counter constantly returns the same value over a period of time, the DW will represent all those identical values as just one value.
You can have a number of Management Groups all report to the same DW. If you are using SQL Enterprise and can support scale-out reporting, you may want to look at consolidating a number of instances to support all your DW requirements.
Then from Object Explorer go to dbo.StandardDatasetAggregation and select to open the top 200 rows. Find the aggregation type from the list in the AggregationTypeId column using the following values:
- 10 = subhourly
- 20 = hourly
- 30 = daily
Use the MaxDataAgeDays column to edit the value and set the grooming interval.
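The edit can also be made in T-SQL. This is a hedged sketch, assuming the default DW name OperationsManagerDW; note that a real change would usually target a specific dataset rather than every row:

```sql
USE OperationsManagerDW;

-- Illustrative: keep daily aggregated data for 300 days.
-- AggregationTypeId: 10 = subhourly, 20 = hourly, 30 = daily.
UPDATE dbo.StandardDatasetAggregation
SET MaxDataAgeDays = 300
WHERE AggregationTypeId = 30;
```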
Disk Sizing
Microsoft offers some good sizing guides and information, but the following table summarises the DB sizing.

Data usage | Base calc | 25,000 clients | 50,000 clients | 100,000 clients
MDF | 75 GB per 25,000 | 75 GB | 150 GB | 300 GB
LDF | 25 GB per 25,000 | 25 GB | 50 GB | 100 GB
tempdb* | | 4-8 x 1 GB files | 4-8 x 1.5 GB files | 8 x 2 GB files
IOPS** | | 500 IOPS | 1,000 IOPS | 2,000 IOPS
* The tempdb figure is a personal opinion and not taken from the TechNet site.
** Unlike the SCOM sizing guide, the SCCM team do not offer any guidance on IOPS, as the actual amount will depend on a large number of factors. In this example I am giving these figures to offer some guidance.
SCCM could really do with the same sizing tool as SCOM has, and when I have asked for one I am told it's impossible to give a sizing guide because there are too many changeable and configurable options. I don't agree with this: SCOM can be just as volatile in terms of the number of MPs, the number of custom rules and monitors, the number of management groups reporting to one DW, and so on.
Here are some resources to help:
http://blogs.msdn.com/b/scstr/archive/2012/05/31/configuration_2d00_manager_2d00_2012_2d00_sizing_2d00_considerations.aspx
Table Structure
SCCM has a large number of tables containing all the data that SCCM collects. An easy way to view the tables from T-SQL is to use your SCCM DB (in my case it's CM_PSS) and run the following:

USE CM_PSS;
SELECT * FROM INFORMATION_SCHEMA.COLUMNS;

From that I get an output listing every table and column in the site database.
The DB will be backed up to the specified location with a standard .bak file.
AfterBackup.bat
The SCCM backup process overwrites the previous backup, so you only ever have one backup. This is probably not the safest approach for your SCCM environment. Luckily there is an easy solution: edit the following file to suit your own site and then drop it into C:\Program Files\Microsoft Configuration Manager\inboxes\smsbkup.box.
The file needs to be called AfterBackup.bat.
setlocal
REM The target folder is named after the current day of the week (Mon, Tue, ...).
set target=\\primary_site_server_name\ConfigMgr\Backups\%date:~0,3%
if not exist "%target%" goto datacopy
REM Remove last week's copy for this day before copying the new backup.
RD "%target%" /s /q
:datacopy
REM /Y suppresses the overwrite prompt so the script can run unattended
REM (the original /-Y would stop and wait for confirmation).
xcopy "C:\Backups\SCCM\*" "%target%\" /E /Y
This creates seven days' worth of backups in folders named Mon, Tue, Wed and so on, which you can then pick up with your normal file backups.
In a production install I was working on, it was decided to move the SQL from one SQL server to another. The reconfigure worked fine, except that the new SQL server had no permissions to the SQL instance on the CAS. As a result the DB replication links could not be reinitialised. I added the two SQL servers to the local admin groups on each other, rebooted, and replication started again. I won't forget that in the future!
Enter the new SQL details and SCCM will configure the rest.
Sudheesh N from Microsoft has written a great post to help with DB synchronisation
[Table: SQL maintenance tasks with descriptions and running times, lost in extraction. Recoverable entries include Rebuild Indexes and Monitor Keys, with schedules of Saturday at 5:00, daily at 5:00 and every 5 minutes.]
The databases involved are:
- ServiceManager
- DWRepository
- DWDataMart
- DWStagingAndConfig
- CMDWDataMart
- OMDWDataMart
- Analyst
- ReportServer
Requirements
Sizing
Luckily the SCSM team wrote a great sizing tool for SCSM. The tool contains a series of Visio documents, templates and an Excel spreadsheet. It's a great collection of tools to help with SCSM; it not only gives you a sizing guide but also includes some great architecture Visios to explain some of the process flows.
Here is an example from the guide.
Users | 100-500 | 501-2,000 | 2,000-5,000 | 5,000-10,000 | 10,000+
Computers | 500 | 2,000 | 3,000 | 6,000 | 50,000
(row label lost) | 1 | 1 | 1 | 1 | 1
Incidents | 20 | 100 | 150 | 1,000 | 2,000
CR/Month | 1.5 | 3 | 4 | 8 | 12.5
DB Size GB | 7.5 | 15 | 20 | 38.5 | 62
DW Size GB | (values lost in extraction)
It goes without saying that correct placement of the databases, especially the tempdb, needs to follow the best practices mentioned in this document.
- Analysis Services Files: In the Enterprise edition of SQL Server 2008 you can decide where Analysis Services database files will be stored. In the Standard edition there is only one default location for the files.
- Cube Processing: In the Enterprise edition, cubes are processed incrementally each night. In the Standard edition the entire cube is processed each night, and therefore the amount of processing time required will increase as more data is accumulated. Cubes can still be queried while being processed; however, reporting performance will be reduced.
- Measure Group Partitions: In the Enterprise edition, measure groups are partitioned on a monthly basis instead of as one large partition. This reduces the amount of time it takes to process a partition.
- PowerPivot: In the Enterprise edition, you can use Microsoft SQL Server PowerPivot for SharePoint.
There is a great blog that explains the DW in some nice detail; here is a summary of some of the points in the blog.
Moving the DB
When it comes to moving either of the databases, the SCSM product group has again produced a great procedure for the move. The DW move is a considerable undertaking, and as such you will need to understand the process well before you begin. Travis Wright wrote a great blog describing the process.
Moving the OLTP
To move the SCSM DB follow these steps:
A. On all management servers stop the System Center services so that no data can be
submitted to the DB
B. Take a backup of the ServiceManager database
C. Restore the Service Manager database on the target SQL Server
D. Configure the Service Manager database
E. Change the registry key on your management servers
Navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\System Center\2010\Common\Database
There are two keys that can be configured here - one for the server name (DatabaseServerName)
and one for the database name (DatabaseName)
Set those values to the new SQL Server name and database name (if different than originally).
F. Start the System Center services on all the management servers
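The registry change in step E can also be scripted rather than edited by hand in regedit. Here is a sketch in PowerShell, assuming the key path given above; the server and database values are examples only — substitute your own (run elevated on each management server):

```powershell
# Point a Service Manager management server at the moved database.
# Run elevated on each management server; values below are examples.
$key = 'HKLM:\Software\Microsoft\System Center\2010\Common\Database'
Set-ItemProperty -Path $key -Name DatabaseServerName -Value 'NEWSQL01\INSTANCE1'
Set-ItemProperty -Path $key -Name DatabaseName       -Value 'ServiceManager'
```

Scripting the change makes it easy to apply consistently when you have several management servers.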
Moving the DW
To move the Data Warehouse databases, follow these steps:
1. Locate the various user accounts and SQL Servers used by the Data Warehouse Management Server
2. Stop Service Manager services on the Data Warehouse Management Server
3. Back up the Data Warehouse databases on the old SQL Server
4. Take the Data Warehouse databases offline on the old SQL Server
5. Restore the Data Warehouse databases on the new SQL Server
6. Prepare the Data Warehouse databases on the new database server
7. Update the Data Warehouse Management Server with the new database server name
8. Update the data sources on the Reporting server to point to the new SQL Server
9. Update the data sources on the Analysis server to point to the new SQL Server
10. Start Service Manager services on the Data Warehouse Management Server
Important:
1. After the move, the databases DWStagingAndConfig and DWRepository databases have to be
restored on the same SQL instance. Restoring them on separate SQL instance is not supported.
2. The SQL Server collation on the new SQL instance has to match the collation on the original SQL
server instances where the Data Warehouse databases were originally hosted.
How to locate the various accounts and SQL servers that are used by the Data Warehouse
Management Server
To identify the SQL Server database and instance name used by the Data Warehouse Management Server
1. Log on to the Data Warehouse Management Server as a user with administrative credentials.
2. On the Windows desktop, click Start, and then click Run.
3. In the Run dialog box, in the Open box, type regedit, and then click OK.
4. In the Registry Editor window, expand HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\System Center\2010\Common\Database
5. Make a note of the following registry values:
DatabaseName
DatabaseServerName
DataMartDatabaseName
DataMartSQLInstance
RepositoryDatabaseName
RepositorySQLInstance
StagingDatabaseName
StagingSQLInstance
OMDataMartDatabaseName
OMDataMartSQLInstance
CMDataMartDatabaseName
CMDataMartSQLInstance
To identify the SQL Reporting server and instance name used by Data Warehouse Management Server
1. Log on to the Data Warehouse Management Server as a user with administrative credentials.
2. On the Windows desktop, click Start, and then click Run.
3. In the Run dialog box, in the Open box, type regedit, and then click OK.
4. In the Registry Editor window, expand HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\System Center\2010\Common\Reporting and make a note of the following registry values:
Server
ServerInstance
WebServiceURL
To identify the Service account used by the Data Warehouse Management Server
To identify the Reporting account used by the Data Warehouse Management Server
1. Log on to the SQL Reporting Services server hosting the Service Manager Reports instance. This is the registry value called Server you noted in the section To identify the SQL Reporting server and instance name used by Data Warehouse Management Server.
2. Once you are on the SQL Reporting Services server, click Start, click All Programs, click Microsoft SQL Server 2008 R2 or Microsoft SQL Server 2008, click Configuration Tools, and then click Reporting Services Configuration Manager.
3. In the Reporting Services Configuration Connection, connect to the correct SQL Reporting instance, as noted in the same section.
4. In the Reporting Services Configuration Manager window, click Reporting Manager URL.
5. In the Reporting Manager URL page, click the hyperlink, which will be of the following format: http://<Servername>:portnumber/Reports
6. Clicking the hyperlink will open an Internet browser window. Click the System Center folder.
7. Click the Service Manager folder.
8. Click the data source DWDataMart.
9. Make a note of the User name value under the radio button setting Credentials stored securely in the report server.
10. Hit the back button to return to the Service Manager folder.
11. Click the data source DWStagingAndConfig.
12. Make a note of the User name value under the radio button setting Credentials stored securely in the report server.
13. Hit the back button to return to the Service Manager folder.
14. Click the data source ConfigurationManager.
15. Make a note of the User name value under the radio button setting Credentials stored securely in the report server.
16. Hit the back button to return to the Service Manager folder.
17. Click the data source MultiMartDatasource.
18. Make a note of the User name value under the radio button setting Credentials stored securely in the report server.
19. Hit the back button to return to the Service Manager folder.
To identify the OLAP Account used by the Data Warehouse Management Server
1. Log on to the Service Manager server, click Start, click All Programs, click Microsoft System Center 2012, click Service Manager, and then click Service Manager Shell.
2. In the PowerShell window, type the following commands one at a time, pressing Enter after each:
$class = Get-SCClass -Name Microsoft.SystemCenter.ResourceAccessLayer.ASResourceStore -ComputerName <DWServerName>
$OLAPServer = Get-SCClassInstance -Class $class -ComputerName <DWServerName>
$OLAPServer.Server
Note: Replace <DWServerName> with the computer name of the Data Warehouse
management server.
3. The last command above will return the name of the OLAP server hosting the DWASDataBase.
4. On a machine where you have SQL Server Management Studio installed, do the following:
a. Click the Start menu, click All Programs, click Microsoft SQL Server 2008 R2 or Microsoft SQL Server 2008, and then click SQL Server Management Studio.
b. In the Connect to Server window, select the Analysis Services option in the Server Type drop-down menu.
c. In the Server Name box, type the name you got from the output of $OLAPServer.Server in the previous steps.
d. Once you are successfully connected to the OLAP server, in the Object Explorer window,
under the server name, expand Databases and locate the OLAP database DWASDataBase
e. Expand the DWASDataBase node.
f. Expand the folder Data Sources
g. Double-click CMDataMart.
h. In the Properties window for CMDataMart, make a note of the Connection string.
i. In the same window, under Security Settings -> Impersonation Info -> Impersonation Account, there is a button; click it.
j. In the Impersonation Info window, make a note of the User name.
k. Click Cancel, then click Cancel again.
l. Repeat steps g-k to make a note of the Connection string and the User name for the DWDataMart and OMDataMart data sources.
m. The account that you made a note of in steps g-l is the OLAP account.
How to Stop Service Manager Services on the Data Warehouse Management Server
Use the following procedure to stop the Service Manager services on the Data Warehouse Management
Server.
1. In the Run dialog box, in the Open text field, type services.msc, and then click OK.
2. In the Services window, in the Services (Local) pane, locate the following three services and click Stop for each one:
System Center Data Access Service
System Center Management
System Center Management Configuration
Use the following procedure to back up the Data Warehouse databases on the original SQL Server:
1. Log on to the SQL Server computer hosting the Data Warehouse databases, which you noted in the previous section.
2. On the SQL Server computer hosting the original Data Warehouse databases, click Start, click All Programs, click Microsoft SQL Server 2008 R2 or Microsoft SQL Server 2008, and then click SQL Server Management Studio.
3. Right-click the DWStagingAndConfig database, click Tasks, and then click Back Up.
4. In the Back Up Database dialog box, type a path and a file name in the Destination on disk box, and then click OK.
Important: The destination location must have enough available free disk space to store the backup files.
5. Click the OK button in the Back Up Database dialog box to start the backup.
6. Repeat steps 3-5 for the DWRepository, CMDWDataMart, OMDWDataMart and DWDataMart databases.
1. On the SQL Server computer hosting the original Data Warehouse databases, click Start, click All Programs, click Microsoft SQL Server 2008 R2, and then click SQL Server Management Studio.
2. Right-click the DWStagingAndConfig database, click Tasks, and then click Take Offline.
3. Repeat step 2 for the DWRepository, CMDWDataMart, OMDWDataMart and DWDataMart databases.
How to restore the Data warehouse databases on the new SQL Server
Use the following procedure to restore the Data Warehouses databases on the new SQL Server
1. On the new SQL Server computer, click Start, click All Programs, click Microsoft SQL Server
2008 R2, and then click SQL Server Management Studio.
2. In the Connect to Server dialog box, follow these steps:
3. In the Server Type list, select Database Engine.
4. In the Server Name list, select the name of the new SQL Server that is hosting the
DWStagingAndConfig database.
5. In the Authentication list, select Windows Authentication, and then click Connect.
6. In the Object Explorer pane, expand Databases, and then click DWStagingAndConfig.
7. In the toolbar, click New Query.
8. In the center pane, type the following commands, and then click Execute.
sp_configure 'clr enabled', 1
go
reconfigure
go
9. In the center pane, remove the commands you typed in the previous step, type the
following commands, and then click Execute.
ALTER DATABASE DWStagingAndConfig SET SINGLE_USER WITH ROLLBACK IMMEDIATE
10. In the center pane, remove the commands you typed in the previous step, type the
following commands, and then click Execute.
ALTER DATABASE DWStagingAndConfig SET ENABLE_BROKER
11. In the center pane, remove the commands you typed in the previous step, type the
following commands, and then click Execute.
ALTER DATABASE DWStagingAndConfig SET MULTI_USER
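For convenience, steps 8 through 11 can be run as a single script. This is the same sequence of commands collected into one batch — a sketch only, to be run against the restored DWStagingAndConfig on the new instance:

```sql
-- Full post-restore sequence for DWStagingAndConfig, combined into one
-- batch. Same commands as steps 8-11 above.
EXEC sp_configure 'clr enabled', 1;
GO
RECONFIGURE;
GO
ALTER DATABASE DWStagingAndConfig SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE DWStagingAndConfig SET ENABLE_BROKER;
GO
ALTER DATABASE DWStagingAndConfig SET MULTI_USER;
GO
```

Running it as one batch avoids leaving the database stuck in single-user mode if you are interrupted between steps.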
1. In the Object Explorer pane, expand Security, and then expand Logins.
configsvc_users
db_accessadmin
db_datareader
db_datawriter
db_ddladmin
db_securityadmin
dbmodule_users
public
sdk_users
sql_dependency_subscriber
db_owner
9. In the Database role membership for: DWRepository area, make sure that the following
entries are selected:
db_owner
public
10. In the Database role membership for: DWDataMart area, make sure that the following
entries are selected:
db_owner
public
11. Click OK.
12. In the Object Explorer pane, expand Security, and then expand Logins.
13. Right-click Logins, and then click New Login.
14. Perform the following procedures in the Login - New dialog box:
15. Click Search.
16. Type the username (domain\username) for the Reporting account, click Check Names, and
then click OK.
17. In the Select a page pane, click User Mapping.
18. In the Users mapped to this login area, in the Map column, click the row that represents
the name of the DWStagingAndConfig (DWStagingAndConfig is the default database
name).
19. In the Database role membership for: DWStagingAndConfig area, make sure that the
following entries are selected:
db_datareader
public
20. In the Database role membership for: DWRepository area, make sure that the following
entries are selected:
db_datareader
public
reportuser
21. In the Database role membership for: DWDataMart area, make sure that the following
entries are selected:
db_datareader
public
reportuser
22. In the Database role membership for: OMDWDataMart area, make sure that the following
entries are selected:
db_datareader
public
reportuser
23. In the Database role membership for: CMDWDataMart area, make sure that the following
entries are selected:
db_datareader
public
reportuser
24. Click OK.
25. In the Object Explorer pane, expand Security, and then expand Logins.
26. Right-click Logins, and then click New Login.
27. Perform the following procedures in the Login - New dialog box:
28. Click Search.
29. Type the username (domain\username) for the OLAP account, click Check Names, and
then click OK.
30. In the Select a page pane, click User Mapping.
31. In the Database role membership for: DWDataMart area, make sure that the following
entries are selected:
db_datareader
public
reportuser
32. In the Database role membership for: OMDWDataMart area, make sure that the following
entries are selected:
db_datareader
public
reportuser
33. In the Database role membership for: CMDWDataMart area, make sure that the following
entries are selected:
db_datareader
public
reportuser
34. Click OK.
1. In the Object Explorer pane, expand Databases, expand DWStagingAndConfig, and then
expand Tables.
2. Right-click dbo.MT_Microsoft$SystemCenter$ManagementGroup, and then click Edit Top 200 Rows.
How to update the Data Warehouse Server to use the new Database Server name
Use the following procedure to update the Data Warehouse Server to use the new database server name.
Caution: Incorrectly editing the registry might severely damage your system; therefore, before making
changes to the registry, back up any valued data on the computer.
Update the registry values noted earlier, including:
OMDataMartSQLInstance
CMDataMartSQLInstance
How to update the connection strings for the Data Sources on the SQL Reporting Services hosting
Service Manager Reports
Use the following procedure to update the connection strings for the Data Sources on the SQL Reporting
Services hosting Service Manager Reports
1. Log on to the SQL Reporting Services server hosting the Service Manager Reports instance. This is the registry value called Server you noted in the section To identify the SQL Reporting server and instance name used by Data Warehouse Management Server.
2. Once you are on the SQL Reporting Services server, click Start, click All Programs, click Microsoft SQL Server 2008 R2 or Microsoft SQL Server 2008, click Configuration Tools, and then click Reporting Services Configuration Manager.
3. In the Reporting Services Configuration Connection, connect to the correct SQL Reporting instance as noted in the section To identify the SQL Reporting server and instance name used by Data Warehouse Management Server.
4. In the Reporting Services Configuration Manager window, click Reporting Manager URL.
5. In the Reporting Manager URL page, click the hyperlink, which will be of the following format: http://<Servername>:portnumber/Reports
6. Clicking the hyperlink will open an Internet browser window. Click the System Center folder.
7. Click the Service Manager folder.
8. Click the data source DWDataMart.
9. Update the Connection string value data source=<server name>;initial catalog=DWDataMart. Type the new SQL Server name in place of <server name>.
10. Hit the back button to return to the Service Manager folder.
11. Click the data source DWStagingAndConfig.
12. Update the Connection string value data source=<server name>;initial catalog=DWStagingAndConfig. Type the new SQL Server name in place of <server name>.
13. Hit the back button to return to the Service Manager folder.
14. Click the data source ConfigurationManager.
15. Update the Connection string value data source=<server name>;initial catalog=CMDWDataMart. Type the new SQL Server name in place of <server name>.
16. Hit the back button to return to the Service Manager folder.
17. Click the data source MultiMartDatasource.
How to update the connection strings for the Data Sources on the SQL Analysis services hosting the
Service Manager Analysis Database
Use the following procedure to update the connection strings for the Data Sources on the SQL Analysis
Services hosting Service Manager Analysis Database
1. Log on to the SQL Analysis Services server hosting the Service Manager Analysis database. This is the value of $OLAPServer.Server you noted in the section To identify the OLAP Account used by the Data Warehouse Management Server.
2. Click the Start menu, click All Programs, click Microsoft SQL Server 2008 R2 or Microsoft SQL Server 2008, and then click SQL Server Management Studio.
3. In the Connect to Server window, select the Analysis Services option in the Server Type drop-down menu.
4. In the Server Name box, type the name you got from the output of $OLAPServer.Server in the previous steps.
5. Once you are successfully connected to the OLAP server, in the Object Explorer window, under the server name, expand Databases and locate the OLAP database DWASDataBase.
6. Expand the DWASDataBase node.
7. Expand the Data Sources folder.
8. Double-click CMDataMart.
9. In the Properties window for CMDataMart, update the Connection string: Provider=SQLNCLI10.1;Data Source=<servername>;Integrated Security=SSPI;Initial Catalog=CMDWDataMart. Replace <servername> with the name of the SQL Server hosting the CMDWDataMart database.
10. Click OK.
11. Repeat steps 8-10 to update the Connection strings for the data sources DWDataMart and OMDataMart.
Performance
Service Manager DB performance, like that of many other products in the suite, is affected by a
number of factors. When you consider that SCSM can be importing data from AD, SCOM, SCCM,
SCO and so on, it's easy to see how critical the SQL backend is. As with SCOM, the number of
consoles and the quantity of data coming through the connectors will all impact performance.
Ideally you need a minimum of 8 GB of RAM for the management server that hosts the
Service Manager database so that you can have an acceptable response time in typical
scenarios.
You should have at least 8 CPU cores on the computer hosting the Service Manager
database.
Create your DB at a large enough initial size and configure autogrowth so that growth
happens infrequently.
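The last point — pre-sizing and sensible autogrowth — can be expressed in T-SQL. The database name, logical file names and sizes below are placeholder assumptions; check your logical file names with sp_helpfile and size to your own environment:

```sql
-- Pre-grow the Service Manager data and log files and switch autogrowth
-- from the default percentage growth to a fixed increment, so that
-- growth events happen infrequently. All names and sizes are examples.
ALTER DATABASE ServiceManager
    MODIFY FILE (NAME = ServiceManager_Data, SIZE = 40GB, FILEGROWTH = 512MB);
ALTER DATABASE ServiceManager
    MODIFY FILE (NAME = ServiceManager_Log, SIZE = 8GB, FILEGROWTH = 512MB);
```

A fixed-increment growth setting avoids the ever-larger (and ever-slower) growth events you get from percentage-based autogrowth.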
Requirements
Sizing
TechNet offers two tables that place a sizing mark at everything up to 150 hosts and then at
everything higher. The two sizing guides give fairly generic values, but if your company is deploying
more than 150 hosts then you will have to think a bit more about how you size your DB
environment. There are no indications of IOPS or of the needs of the tempdb.
In the following two tables I am not covering the minimum, as I don't believe the low watermark is
the right place for any SQL backend. Also, the TechNet page assumes you are deploying SQL on
the same server, and we don't need to rant on again about local SQL, but: keep them separate.
For 150 hosts and below (TechNet's CPU, RAM and Disk Space recommendations)
*I would view 4 GB of RAM as too little for SQL and would have this at 8 GB, with 5 GB reserved for
SQL.
Moving the DB
Moving the DB to another SQL instance is quite a simple process, but it's not the prettiest DB move
compared to some of the other System Center applications. The process involves a re-install
pointing at the existing DB.
1. Stop the VMM services on the VMM server and make a note of the location of your VMM
LDF and MDF files
2. In SQL management studio, backup the VMM DB
3. Uninstall SCVMM with the Retain Database option
4. In SQL management studio detach the VMM DB from SQL
5. Copy the files to the new SQL instance
6. On the new SQL instance in SQL management studio attach the MDF and LDF
An unsupported hack is to change the SQL server entries in the registry. I have used this twice in my
lab and demo environments and it worked great, but it's not supported in a production environment.
Backup
The backup and restore process for SCVMM has fewer support restrictions than, for example, SCCM.
While there is a backup task in the SCVMM GUI, you can just as easily use the native SQL tools to
perform the backup and restore. SCVMM has a recovery tool called SCVMMRecover.exe that is
located on the SCVMM server. This tool, however, can't be used if you have your SCVMM DB located
on a cluster.
To restore the DB, open an elevated CMD, navigate to where the SCVMM binaries are located and
run SCVMMRecover.exe.
Requirements
SQL 2008 R2, with the database engine and the usual SQL collation settings,
SQL_Latin1_General_CP1_CI_AS. That's it.
Database growth
(The following section is almost a direct copy from TechNet.)
Normally, when you call a runbook the database will not grow.
Each time a runbook activity runs it is counted individually, which should be considered when
using looping features, as these can cause a single activity to run multiple times.
To determine the storage needed for each invocation of a runbook, multiply the number
of activities in the runbook by the number of bytes added to the database for the
selected logging level (for Default logging this is 524 bytes per activity):

Logging Level       DB Growth Factor     Performance Factor   Recommended for Production
Default             1x                   1x                   Yes
Common published    11.6x                2.5x                 No
Object-specific     Greater than 11.6x   -                    No
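The multiplication described above can be sketched as a quick calculator. The 524 bytes figure is the per-activity cost quoted in the text; the runbook shape in the example (10 activities, 1,000 runs) is hypothetical:

```python
# Estimate Orchestrator DB growth for a runbook at a given logging level.
BYTES_PER_ACTIVITY_DEFAULT = 524   # per-activity cost at Default logging
COMMON_PUBLISHED_FACTOR = 11.6     # DB growth factor from the table above

def runbook_growth_bytes(activities, runs, factor=1.0):
    """Bytes added to the DB: activities x runs x per-activity cost x factor."""
    return round(activities * runs * BYTES_PER_ACTIVITY_DEFAULT * factor)

# A 10-activity runbook invoked 1,000 times at Default logging:
print(runbook_growth_bytes(10, 1000))    # -> 5240000 (about 5 MB)
# The same runbook with Common Published logging enabled:
print(runbook_growth_bytes(10, 1000, COMMON_PUBLISHED_FACTOR))
```

The point to take away is how quickly the higher logging levels multiply the growth: the same workload goes from roughly 5 MB to tens of megabytes.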
Backup
The SQL DB is backed up through SQL native backup, SCDPM, or another third-party backup facility.
In the same way as VMM or SCOM is backed up, it's just a simple DB backup process.
List all running Jobs with activities that started more than 5 mins ago and have not finished
SELECT pin.PolicyID
, pin.State
, pin.Status
Find out the highest number of runbooks that were run per hour at any given time in the last 30
days:
SELECT MAX(jobs.MaxInstances) FROM
(SELECT [ActionServer]
, CONVERT(VARCHAR(19), dateadd(hour,datediff(hour,0,TimeStarted),0),
120) as Hourly
, COUNT(*) as MaxInstances
FROM [Opalis].[dbo].[POLICYINSTANCES]
WHERE (DateDiff(M, TimeStarted,GETDATE()) < 1 )
GROUP BY ActionServer, CONVERT(VARCHAR(19),
dateadd(hour,datediff(hour,0,TimeStarted),0), 120)) as jobs
Robert also wrote another good blog on clearing the queue for your RB servers (and this is
unsupported). Consider a situation where, for example, you are monitoring a folder, and when a
document gets placed in the folder it kicks off a job to FTP that file; then by accident a large
number of files are dropped into the folder and thousands of jobs are created. If you stop the
services on your runbook server then your secondary server will just pick up the workload. The
following two scripts will add a stored procedure to the DB to delete the jobs.
USE [Orchestrator]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[sp_StopAllRequestsForPolicy]
@PolicyID uniqueidentifier
AS
With the two scripts completed you now have a stored procedure to clear the queue; run
sp_StopAllRequests.
Requirements
The SQL backend needs to be a dedicated instance of SQL Server 2012, SQL Server 2008 R2, or SQL
Server 2008 R2 SP1, Enterprise or Standard Edition.
Survey Results
What started this project off was a survey I created to discover how the rest of the
community was dealing with the interaction between System Center and its most critical
component, the SQL server. Here is what people said; in some areas I am paraphrasing or just
including the important parts.
Nearly 70% of us are hosting SQL instances on VMs to support System Center, so have a read of the
section on virtualising SQL.
One point from these results: if you leave autogrowth at the default and the tempdb at the default,
then you are going to start off with a tiny tempdb file (8 MB) and it's going to grow by 10% constantly
until it reaches its necessary size. On some SCOM / SCSM environments this could be gigs and gigs of
dreadful autogrowth and correspondingly terrible performance of your DB!
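To put numbers on that, here is a quick sketch of how many 10% autogrow events a tempdb file goes through when it starts at the 8 MB default — each event pausing activity while the file is extended:

```python
def growth_events(start_mb=8.0, target_mb=10240.0, pct=0.10):
    """Number of autogrow events to take a file from start_mb to target_mb
    at a fixed percentage growth (the SQL Server tempdb defaults)."""
    events = 0
    size = start_mb
    while size < target_mb:
        size *= (1.0 + pct)
        events += 1
    return events

# Growing from the 8 MB default to 10 GB:
print(growth_events())   # -> 76
```

Growing from 8 MB to 10 GB takes 76 separate growth events; pre-sizing the file avoids every one of them.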
While talking to an old friend and SQL MVP, he said that he never uses the Windows firewall on a
SQL instance due to the excess CPU usage from packet inspection, and I thought it would be
interesting for us all to see what others are doing.
Whilst speaking to several SQL experts, they often say: always enable the SA account, and then you
have a way in that is independent of AD. Bear in mind that this will often be enough to fail a
security audit.
Wrap up
If I have made errors in this document I would appreciate hearing your findings, and when System
Center 2012 SP1 comes out the guide will be updated for SQL 2012 and some of its new features.