Best Practices SQL Server For OpenText Content Server 16 and 16.2
Best Practices
Microsoft® SQL Server for OpenText™
Content Server™ 16.x
John Postma, Director, Engineering Services
Contents
Audience
Executive Summary
General Overview of Content Server Database Connections
Configuration Recommendations
    Installation Options
SQL Server Setup Best Practices
    Antivirus Software
    Backup Compression
    Instant Database File Initialization
    Lock Pages in Memory
    Maximum Degree of Parallelism (MaxDOP)
    Min and Max Memory
    Perform Volume Maintenance Task
    Storage Best Practices
    tempDB Configuration
SQL Server Configuration Settings
    Allocate Full Extent
    AlwaysOn Availability Groups
    Cost Threshold for Parallelism
    Optimize for Ad hoc Workloads
Content Server Database Settings
    Clustered Indexes
    Collation
    Compatibility Level
    Data Compression
    Database Data, Log File Size, and AutoGrowth
    Recovery Model
    Statistics
    Table and Index Fragmentation, Fill Factor
Monitoring and Benchmarking
    Benchmark
    SQL Server Performance Monitoring Tools
        Activity Monitor
        Azure Monitoring
        Management Data Warehouse
    Identifying Worst-Performing SQL
        Content Server Connect Logs
        Performance Analyzer
        SQL Server DMVs
Locking
Audience
This document is intended for a technical audience that is planning an
implementation of OpenText™ products. OpenText recommends consulting with
OpenText Professional Services, who can assist with the specific details of
individual implementation architectures.
Disclaimer
The tests and results described in this document apply only to the OpenText
configuration described herein. For testing or certification of other configurations,
contact OpenText Corporation for more information.
All tests described in this document were run on equipment located in the
OpenText Performance Laboratory and were performed by the OpenText
Performance Engineering Group. Note that using a configuration similar to that
described in this document, or any other certified configuration, does not
guarantee the results documented herein. There may be parameters or variables
that were not contemplated during these performance tests that could affect
results in other test environments.
For any OpenText production deployment, OpenText recommends a rigorous
performance evaluation of the specific environment and applications to ensure
that there are no configuration or custom development bottlenecks present that
hinder overall performance.
Executive Summary
This white paper explores aspects of Microsoft® SQL Server that may be of value
when configuring and scaling OpenText Content Server™ 16.x. It is relevant to
Azure SQL Database and SQL Server 2016, 2014, and 2012, and is based on customer
experiences, performance lab tests with a typical document management workload,
and technical advisements from Microsoft.
Most common performance issues can be solved by ensuring that the hardware used
to deploy SQL Server has sufficient CPU, RAM and fast I/O devices, properly
balanced.
This paper explores non-default options that are available when a simple
expansion of resources is ineffective, and discusses best practices for
administering the Content Server database. It concentrates on non-default
options because, as a recommended starting point, Content Server installations
on SQL Server use Microsoft's default deployment options. Usage profiles vary
widely, so any actions taken based on topics discussed in this paper must be
verified in your own environment prior to production deployment, and a rollback
plan must be available in case adverse effects are detected.
These recommendations are not intended to replace the services of an experienced
and trained SQL Server database administrator (DBA), and do not cover standard
operational procedures for SQL Server database maintenance, but rather offer
advice specific to Content Server on the SQL Server platform.
Configuration Recommendations
This section provides recommendations for configuring an SQL Server instance to
host a Content Server ECM repository.
Installation Options
Consider the following tips and guidelines when initially installing Content Server for
use with SQL Server:
• Use the configuration options in SQL Server Configuration Settings and Content
Server Database Settings
• For Microsoft Azure SQL Databases
o The database must be created in the Microsoft Azure Portal (see article
Azure Create a SQL Database)
o The database can be managed in SQL Server Management Studio (SSMS)
2016 and in the Microsoft Azure Portal
The user account for the Azure database should be created using SQL Server
Management Studio (SSMS). Please note: the database user, database name, and
password should be altered. The user must be created before the schema that it
owns:
CREATE USER dbuser WITH PASSWORD = 'pwd', DEFAULT_SCHEMA = dbuser
CREATE SCHEMA dbuser AUTHORIZATION dbuser
ALTER ROLE db_owner ADD MEMBER dbuser
GRANT CONNECT TO dbuser
Apply database configurations.
ALTER DATABASE dbname SET COMPATIBILITY_LEVEL = 110
The core Content Server database is installed using the Create Tables In
Existing Database option in Content Server, using the Azure user account.
• For SQL Server 2016 and prior
The core Content Server database is installed using the sa SQL Server account.
For SQL Server DBAs who are uncomfortable allowing an application to log into
the SQL Server database with the sa account, the DBA can create the database
and user account instead.
To create the database, the SQL Server DBA can run the SQL command below or
use the SQL Server Management Studio (SSMS) GUI. Please note: the database
name, logical file names, file paths, size, and password should be altered.
CREATE DATABASE dbname
ON PRIMARY (NAME = 'livelink_Data',
FILENAME = 'C:\databases\dbname.mdf', SIZE = 5MB,
MAXSIZE = UNLIMITED)
LOG ON (NAME = 'livelink_Log',
FILENAME = 'C:\databases\dbname.ldf', SIZE = 1MB,
MAXSIZE = UNLIMITED)
Antivirus Software
Description Antivirus software scans files and monitors activity to prevent,
detect, and remove malicious software. Guidelines for antivirus
software configuration are provided in the Microsoft support article:
File Locations for Default Named Instances of SQL Server.
Recommendation Exclude all database data and log files from scanning (including
tempDB). Exclude SQL Server engine process from active
monitoring.
Notes Follow the Microsoft support article for SQL Server version-specific
details.
Backup Compression
Description Compressing backups reduces the space they occupy and the I/O
required to write them, which increases backup speed. See Backup Compression.
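As a sketch, backup compression can be requested for an individual backup, or enabled as the instance default; the database name and path below are illustrative:

```sql
-- Compress a one-off backup (illustrative database name and path).
BACKUP DATABASE dbname
TO DISK = 'D:\backups\dbname.bak'
WITH COMPRESSION;

-- Or enable compression by default for the whole instance.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```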
Instant Database File Initialization
Description Allows faster creation and auto-growth of database and log files by
not filling reclaimed disk space with zeroes before use. See the
article, Database Instant File Initialization.
Notes Microsoft states that, because deleted disk data is overwritten only
when data is written to files, an unauthorized principal who gains
access to data files or backups may be able to access the deleted
content. Ensure that access to these files is secured, or disable this
setting when potential security concerns outweigh the performance
benefit.
If the database has Transparent Data Encryption enabled, it
cannot use instant initialization.
Permissions To set this for the SQL Server service, you must have
administrative rights on the Windows server.
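On SQL Server 2016 SP1 and later, you can check whether instant file initialization is in effect for the service account; on earlier versions, verify that the service account holds the Perform volume maintenance tasks policy instead:

```sql
-- Returns Y in instant_file_initialization_enabled when IFI is active
-- (column available in SQL Server 2016 SP1 and later).
SELECT servicename, service_account, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```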
Lock Pages in Memory
Description Memory for the buffer pool is allocated in a way that makes it
non-pageable, avoiding delays that can occur when information has to
be loaded from the page file. See the article: Enable the Lock
Pages in Memory Option.
Recommendation Enable this feature by assigning the Lock pages in memory to the
SQL Server Service account in the User Rights Assignment folder
in Windows Security Local Policies.
If you enable this setting, be sure to also set max memory
appropriately to leave sufficient memory for the operating system
and other background services.
Permissions To set this for the SQL Server service, you must have
administrative rights on the Windows server.
Maximum Degree of Parallelism (MaxDOP)
Default 0 (unlimited)
Recommendation Consider modifying the default value when SQL Server experiences
excessive CXPACKET wait types.
For non-NUMA servers, set MaxDOP no higher than the number of
physical cores, to a maximum of 8.
For NUMA servers, set MaxDOP to the number of physical cores
per NUMA node, to a maximum of 8.
Note: Non-uniform memory access (NUMA) is a processor
architecture that divides system memory into sections that are
associated with sets of processors (called NUMA nodes). It is
meant to alleviate the memory-access bottlenecks that are
associated with SMP designs. A side effect of this approach is that
each node can access its local memory more quickly than it can
access memory on remote nodes, so you can improve performance
by ensuring that threads run on the same NUMA node.
Also see the Cost Threshold for Parallelism section for related
settings that restrict when parallelism is used, to allow best
performance with Content Server.
Note: Any value that you consider using should be thoroughly
tested against the specific application activity or pattern of queries
before you implement that value on a production server.
Notes Several factors can limit the number of processors that SQL Server
will utilize, including:
• licensing limits related to the SQL Server edition
• custom processor affinity settings and limits defined in a
Resource Governor pool
These factors may require you to adjust the recommended
MaxDOP setting. See related reference items in Appendix A –
References for background information.
See Appendix B – Dynamic Management Views (DMVs) for
examples of monitoring SQL Server wait types.
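As a sketch, MaxDOP is set with sp_configure; the value 8 below is illustrative and should be derived from your physical core and NUMA layout as described above:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Illustrative value: cap parallelism at 8. Tune per physical
-- cores (non-NUMA) or per cores per NUMA node, to a maximum of 8.
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```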
Min and Max Memory
Description The min server memory and max server memory settings
configure the amount of memory that is managed by the SQL
Server Memory Manager. SQL Server will not release memory
below the min server memory setting and will not allocate more
than max server memory while it runs. See the article, Server
Memory Configuration Options.
Default The default setting for min server memory is 0, and the default
setting for max server memory is 2,147,483,647 MB. SQL
Server dynamically determines how much memory it will use, based
on current activity and available memory.
Permissions To set this for the SQL Server service, you must have
administrative rights on the Windows server.
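For example, max server memory can be capped to leave headroom for the operating system and other services; the 28672 MB value below is illustrative for a 32 GB server:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Illustrative: cap SQL Server at 28 GB on a 32 GB server,
-- leaving ~4 GB for the OS and background services.
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```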
tempDB Configuration
Versions SQL Server 2012, 2014 and 2016.
Description The tempDB is a global resource that stores user objects (such as
temp tables) and internal objects (such as work tables, work files,
and intermediate results for large sorts and index builds). When
snapshot isolation is used, tempDB stores the before images of
rows that are being modified, to allow for row versioning and
consistent committed-read access.
Notes Set the number of tempDB data files appropriately; a common
guideline is one data file per logical processor, up to eight. Be
mindful of factors that can limit the number of processors SQL
Server will utilize.
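A minimal sketch of sizing the primary tempDB file and adding a second data file; the logical names, path, and sizes are illustrative:

```sql
-- Size the primary tempDB data file explicitly, then add files until
-- the count matches logical processors (up to 8). Values illustrative.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
```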
SQL Server Configuration Settings
Allocate Full Extent
Recommendation Consider enabling this flag if latch waits on pages in tempDB cause
long delays that are not resolved by the recommendations in the
tempDB Configuration section.
Default Disabled
AlwaysOn Availability Groups
Recommendation Content Server supports the SQL Server 2014 and 2016 AlwaysOn
solution with the ODBC SQL Server Native Client 11.0 driver.
Follow the documentation from Microsoft to configure
AlwaysOn: Overview Always On Availability Groups; Step By Step
Creating AlwaysOn. The Content Server system should connect to
the availability group listener (virtual IP).
Notes When using the SQL Server Native Client driver, an entry in the
opentext.ini file is required.
With the INI entry, the SQL Server Native Client driver behaves
differently: it appears to leave a transaction open but, in fact,
closes the transaction and opens a new one.
High-volume workspace creation will be less performant with
AlwaysOn synchronous commit.
Cost Threshold for Parallelism
Default 5
Recommendation Content Server mainly issues small OLTP-type queries where the
overhead of parallelism outweighs the benefit, but it does issue a
small number of longer queries that may run faster with parallelism.
OpenText recommends that you increase the cost threshold setting
in combination with configuring the Maximum Degree of Parallelism
(MaxDOP) setting. This reduces the overhead for smaller queries,
while still allowing longer queries to benefit from parallelism. See
the article, Configure the cost threshold.
The optimal value depends on a variety of factors including
hardware capability and load level. Load tests in the OpenText
performance lab achieved improved results with a Cost Threshold
of 50, which is a reasonable starting point. Monitor the following and
adjust the cost threshold as needed:
• CXPACKET wait type: When a parallel plan is used for a
query there is some overhead coordinating the threads
that are tracked under the CXPACKET wait. It is normal to
have some CXPACKET waits when parallel plans are used.
However, if it is one of the highest wait types, further
changes to this setting may be warranted. See Appendix
B – Dynamic Management Views (DMVs) for examples of
querying DMVs for wait info.
• See Appendix B – Dynamic Management Views (DMVs)
for examples of querying DMVs for queries using
Parallelism.
• THREADPOOL wait type: If many queries are using a
parallel plan, there can be periods when SQL Server uses
all of its available worker threads. Time spent by a query
waiting for an available worker thread is tracked under the
THREADPOOL wait type. If this is one of the highest wait
types, it may be an indication that too many queries are
using parallel plans. In this case, the cost threshold for
parallelism should be increased, or maximum worker
threads increased if the system is not experiencing CPU
pressure. However, there can be other causes for an
increase in this wait type (blocked or long running queries),
so it should only be considered in combination with a more
comprehensive view of query performance and locking.
Permissions Changing this setting requires the ALTER SETTINGS permission.
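The lab-tested starting point of 50 described above can be applied with sp_configure:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Starting point from OpenText lab tests; monitor CXPACKET and
-- THREADPOOL waits and adjust as needed.
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```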
Optimize for Ad hoc Workloads
Default Off
Recommendation When there is memory pressure, and the plan cache contains a
significant number of single-use plans, enable this setting.
Monitoring Check the portion of the plan cache used by single use queries:
see Appendix B – Dynamic Management Views (DMVs) Cached
Query Plans.
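When monitoring shows that single-use plans occupy a significant share of the plan cache, the setting can be enabled as follows:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Store a small plan stub on first use; cache the full plan only
-- when a query is executed a second time.
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```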
Content Server Database Settings
Clustered Indexes
Description Clustered indexes store data rows for the index columns in sorted
order. In general, the primary key or the most frequently used index
on each table is a good candidate for a clustered index. This is
especially important for key highly-active core tables. Only one
clustered index can be defined per table.
Default In Content Server 10.5 and later, many tables in the Content Server
database have a clustered index.
Collation
Description The collation for a database defines the language and character set
used to store data, and sets the rules for sorting and comparing
characters. It also determines case sensitivity, accent sensitivity,
and kana sensitivity.
Compatibility Level
Description The database compatibility level sets certain database behaviors to
be compatible with the specified version of SQL Server.
Default The compatibility level for newly created databases is the same as
the MODEL database which, by default, is the same as the installed
version of SQL Server.
When upgrading the database engine, compatibility level for user
databases is not altered, unless it is lower than the minimum
supported. Restoring a database backup to a newer version also
does not change its compatibility level.
Recommendation In general, using the latest compatibility mode allows the Content
Server database to benefit from all performance improvements in
the installed SQL Server version.
However, because of performance issues with SQL Server 2014
and later, set the compatibility level to that of SQL Server 2012
(see article Alter Database Compatibility Level).
ALTER DATABASE <dbname> SET COMPATIBILITY_LEVEL = 110
When you change the compatibility level of the Content Server
database, be sure to update statistics on the database after making
the change.
As an alternative, you can use trace flag 9481 and leave the
compatibility level unchanged. Using trace flag 9481 affects all of
the databases on the SQL Server instance. For more details, see
Technical Alert.
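As a sketch, the compatibility change followed by a statistics refresh (dbname is a placeholder for the Content Server database):

```sql
ALTER DATABASE dbname SET COMPATIBILITY_LEVEL = 110;
-- Refresh statistics after changing the compatibility level.
USE dbname;
EXEC sp_updatestats;
```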
Data Compression
Description SQL Server offers data compression at the row and page level, but
only in the Enterprise Edition. Compression reduces I/O and the
amount of storage and memory used by SQL Server. It only adds a
small amount of overhead in the form of additional CPU usage.
Recommendation When storage space, available memory, or disk I/O are under
pressure, and the database server is not CPU-bound, consider
using compression on selected tables and indexes.
Microsoft recommends compressing large objects that have either a
low ratio of update operations, or a high ratio of scan operations.
See articles Data Compression, Enable Compression on a Table
or Index, and sp_estimate_data_compression_savings.
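As a sketch, you can estimate savings for a candidate object and then compress it; the large Content Server table LLAttrData is used here for illustration:

```sql
-- Estimate page-compression savings for a candidate table.
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'LLAttrData',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- If the estimate justifies it, apply page compression to all
-- indexes on the table.
ALTER INDEX ALL ON dbo.LLAttrData
REBUILD WITH (DATA_COMPRESSION = PAGE);
```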
You can automate the process using a script. (An example of this
type of approach and a sample script, which was used for internal
testing, is covered in the Practical Data Compression article.) The
script analyzes the usage of Content Server tables and indexes that
have more than 100 pages and selects candidates for compression.
It estimates the savings from row or page compression, and
generates a command to implement the recommended
compression. The script relies on usage data from the DMVs, so it
should be run after a period of representative usage.
Overall impact from compression on performance, storage,
memory, and CPU will depend on many factors related to the
environment and product usage. Testing in the OpenText
performance lab has demonstrated the following:
Performance: For load tests involving a mix of document-
management operations, with a small set of indexes compressed
based on only high-read-ratio indexes, there was minimal
performance impact. When a larger set of tables and indexes was
compressed, performance was less consistent, and degraded by up
to 20%. For high-volume ingestion of documents with metadata,
there was no impact on ingestion throughput.
CPU: CPU usage increased by up to 8% in relative terms.
MDF File Storage: Reduced by up to 40% depending on what was
compressed. Specific large tables like LLAttrData were reduced by
as much as 82%.
I/O: Read I/O on MDF files reduced up to 30%; write I/O up to 18%.
Memory Usage: SQL Buffer memory usage reduced up to 25%. As
with any configuration change, test the performance impact of any
compression changes on a test system prior to deploying on
production systems.
Notes It can take longer to rebuild indexes when they are compressed.
Database Data, Log File Size, and AutoGrowth
Description Data files contain data and objects such as tables and indexes. Log
files contain the information required to recover transactions. These
files can grow automatically from their original size; this growth
adds to the size of the database (see article Database Files and
Filegroups).
Autogrowth of log files can cause delays. Frequent growth of
data or log files can cause them to become fragmented, which may
lead to performance issues.
Recommendation Optimal data and log file sizes really depend on the specific
environment. In general, it is preferable to size the data and log
files to accommodate expected growth so that you avoid frequent
autogrowth events.
Leave Autogrowth enabled to accommodate unexpected growth.
A general rule is to set autogrow increments to about one-eighth the
size of the file. See article Defining Auto-Growth Settings.
Leave the autoshrink parameter set to False for the Content
Server database.
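For example, files can be presized with an explicit autogrow increment of roughly one-eighth of the file size; the logical names match the earlier CREATE DATABASE example, and the sizes are illustrative:

```sql
-- Presize files and set fixed autogrow increments (~1/8 of file size).
-- Names and sizes are illustrative; size for expected growth.
ALTER DATABASE dbname
MODIFY FILE (NAME = 'livelink_Data', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE dbname
MODIFY FILE (NAME = 'livelink_Log', SIZE = 1024MB, FILEGROWTH = 128MB);
```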
Recovery Model
Versions SQL Server 2012, 2014 and 2016.
Description The recovery model controls how SQL Server maintains transaction
logs for each database. For more information, see article
Recovery Models.
Default SIMPLE is the default model. It does not support backups of the
transaction logs.
Recommendation Content Server supports both SIMPLE and FULL recovery models.
SIMPLE recovery is initially set when the database is created by
Content Server. FULL recovery requires setting the recovery
directories and must be configured manually by a SQL Server
DBA. The DBA can change the recovery setting after the database
has been created, or create the database using the steps in
Installation Options.
The Content Server database should be configured to FULL. This
requires the DBA to set the transaction log backups to prevent the
log file from growing too large. BULK LOGGED is not
recommended as it is not compatible with many of the operations
that Content Server can perform.
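A DBA can switch the recovery model and begin log backups as follows; the database name and path are illustrative:

```sql
ALTER DATABASE dbname SET RECOVERY FULL;
-- FULL recovery requires regular transaction log backups (after an
-- initial full database backup) to keep the log file from growing
-- unbounded. Path is illustrative.
BACKUP LOG dbname TO DISK = 'D:\backups\dbname_log.trn';
```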
Statistics
Description The query optimizer uses statistics to create high-quality query
plans that improve performance. Statistics contain information
about the distribution of values in one or more columns of a table
or view, and are used to estimate the number of rows in a query
result. See the article, Statistics.
The following settings control the automation of the statistics:
AUTO_CREATE_STATISTICS: When set to TRUE, automatically
build any missing statistics needed by a query for optimization.
AUTO_UPDATE_STATISTICS: When set to TRUE, automatically
build any out-of-date statistics needed by a query for optimization.
AUTO_UPDATE_STATISTICS_ASYNC: When set to TRUE and
AUTO_UPDATE_STATISTICS is set to TRUE, queries that initiate
an automatic update of out-of-date statistics will not wait for the
statistics to be updated before compiling.
Default The first two settings above are on by default, and the third is off.
All settings can be changed in the MODEL database.
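These settings can also be confirmed or changed per database, for example (dbname is a placeholder; asynchronous update is optional):

```sql
-- Enable automatic creation and update of statistics; the ASYNC
-- option lets queries compile without waiting for a statistics update.
ALTER DATABASE dbname SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE dbname SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE dbname SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```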
Table and Index Fragmentation, Fill Factor
Default The server index fill factor default is 0 (meaning fill leaf-level pages
to capacity).
Notes A table lock is held for the duration of an index rebuild by default,
preventing user access to the table. Specifying ONLINE=ON in the
command avoids the table lock, allowing user access to the table
during the rebuild. However, this feature is available only in
Enterprise editions of SQL Server. In SQL Server 2012 and 2014,
corruption issues can occur in certain cases. See article, FIX: Data
corruption.
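A sketch of an online rebuild with an explicit fill factor (ONLINE = ON requires Enterprise Edition; the index and table names are illustrative):

```sql
-- Rebuild a fragmented index online, leaving 10% free space per
-- leaf page to absorb inserts without page splits.
ALTER INDEX IX_Example ON dbo.ExampleTable
REBUILD WITH (ONLINE = ON, FILLFACTOR = 90);
```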
Monitoring To observe the rate of page splits and help evaluate the
effectiveness of fill factor settings, use the PerfMon counter
SQLServer:Access Methods – Page Splits/sec. This counter includes
both mid-page splits (which cause fragmentation) and end-page
splits (which occur as an index grows).
Description Change this database setting to reduce I/O spikes caused by
checkpoints. For more information, see article Change Target
Recovery Time.
Recommendation The default setting of 0 seconds may cause I/O spikes on some
databases. Set the value to an explicit target, such as 60 seconds:
ALTER DATABASE <databasename>
SET TARGET_RECOVERY_TIME = 60 SECONDS;
Check the Windows Performance Monitor to see if the I/O spikes
have decreased. If they have not, adjust again.
For more information, see article Adjust Target Recovery Time.
Benchmark
Collect the following as the basis for a benchmark and further analysis of worst-
performing aspects:
• For Azure, the server is not directly available to you. Review Microsoft
Documentation for Azure SQL Database Benchmark Overview.
Physical Disk Track the following counters per disk or per partition: % Idle Time,
Avg. Disk Read Queue Length, Avg. Disk Write Queue Length, Avg.
Disk sec/Read, Avg. Disk sec/Write, Disk Reads/sec, Disk
Writes/sec, Disk Write Bytes/sec, and Disk Read Bytes/sec.
In general, % Idle Time should not drop below 20%. Disk queue
lengths should not exceed twice the number of disks in the array.
Disk latencies vary based on the type of storage. General
guidelines:
Reads: Excellent < 8 msec; Good < 12 msec; Fair < 20 msec; Poor
> 20 msec.
Non-cached writes: Excellent < 8 msec; Good < 12 msec; Fair <
20 msec; Poor > 20 msec.
Cached writes: Excellent < 1 msec; Good < 2 msec; Fair < 4
msec; Poor > 4 msec.
To show I/O requests and virtual file latency data per data/log file,
review sys.dm_io_virtual_file_stats in Appendix B –
Dynamic Management Views (DMVs).
SQL Server Counters The SQL Server Buffer cache hit ratio should be > 90%. In OLTP
applications, this ratio should exceed 95%. Use the PAL tool and
the SQL Server 2012 template for additional counters and related
thresholds.
o Note any Windows event log errors present after or during the monitored
period.
Note that connect logging requires substantial space. Depending on the activity
level of the site, the connect log files may be 5 to 10 GB, so adequate disk space
should be planned. Content Server logs can be redirected to a different file
system if necessary. There is also an expected performance degradation of 10%
to 25% while connect logging is on. If the system is clustered, you should enable
connect logging on all front-end nodes.
• Use Trace Capture/Replay (Azure and SQL Server 2016) or collect SQL Server
profiling events (SQL Server 2012/2014) to trace files for periods of three to four
hours during core usage hours that fall within the monitored period.
You can use SQL Server Extended Events to monitor activity. For more
information, see articles Extended Events and Convert an Existing SQL Trace
Script.
• Obtain the results of a Content Server Level 5 database verification report (run
from the Content Server Administration page, Maintain Database section). To
speed up the queries involved in this verification, ensure there is an index
present on DVersData.ProviderID. Note that for a large site this may take
days to run. If there is a period of lower activity during the night or weekends,
that would be an ideal time to run this verification.
• Gather feedback from Content Server business users that summarizes any
current performance issues or operational failures that might be database-
related.
Activity Monitor
The Activity Monitor offers real-time views of system activity and wait states. It lets
you drill down on specific slow or blocking queries. It also provides historical
information on Resource Waits, Data File I/O, and Expensive Queries.
Azure Monitoring
To monitor Azure databases, you will need to log onto the Azure portal. From there,
you have many options available under Monitoring and Support + Troubleshooting.
For information on Azure Monitoring tools, see the articles: Microsoft Azure Monitor a
Cloud Service or Microsoft Azure Web Application Performance.
Performance Analyzer
OpenText Performance Analyzer is a tool that works with Content Server connect
logs. It allows you to open connect logs and see the percentage mixture and
relative importance of the transactions in your site's usage profile.
The Raw Data tab allows you to sort the transactions by overall execution time and
SQL time. This will show you which Content Server transactions are taking the most
SQL time (not just the individual statements that they issue).
To see the individual SQL statements (from the Raw Data tab), right-click on the line
and select Show SQL. Performance Analyzer will display each SQL statement that
the transaction issues, how long each one took to execute, and how many rows were
affected.
Description Check that index key sizes comply with the SQL Server limit of 900 bytes.
Integrity Check
Corruption in the database can cause performance issues. It is suggested that
an integrity check be performed on the database before and after each backup.
The DBCC CHECKDB command checks the logical and physical integrity of all of
the objects in the specified database. Follow the suggestions made by Microsoft.
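For example (dbname is a placeholder):

```sql
-- Full logical and physical integrity check; report every error
-- found and suppress informational messages.
DBCC CHECKDB ('dbname') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```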
Locking
Lock Escalation
Description Some bulk operations, such as copying or moving a large subtree,
or changing permissions on a tree, can cause SQL Server resource
thresholds to be exceeded. Lock escalation is triggered when one
of the following conditions exists:
• A single Transact-SQL statement acquires at least 5,000
locks on a single non-partitioned table or index.
• A single Transact-SQL statement acquires at least 5,000
locks on a single partition of a partitioned table and the
ALTER TABLE SET LOCK_ESCALATION option is set to
AUTO.
• The number of locks in an instance of the Database
Engine exceeds memory or configuration thresholds. (The
thresholds vary depending on memory usage and the
Locks server setting.)
Although escalation to a coarser lock granularity can free
resources, it also affects concurrency: other sessions accessing
the same tables and indexes can be put in a wait state, degrading
performance.
Default Locks defaults to 0. This means that lock escalation occurs when
the memory used by lock objects reaches 24% of the memory used
by the database engine.
All objects have a default lock escalation value of TABLE. That
means when lock escalation is triggered, it is done at the table level.
Notes For a description of the Lock Escalation process in SQL Server, see
the article, Lock Escalation (Database Engine).
Permissions Changing this setting requires the ALTER permission on the table.
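Lock escalation behavior can be adjusted per table. A minimal sketch, assuming a hypothetical table named dbo.MyLargeTable:

```sql
-- On a partitioned table, allow escalation to the partition level
-- instead of the whole table (the AUTO condition described above).
ALTER TABLE dbo.MyLargeTable SET (LOCK_ESCALATION = AUTO);

-- Or prevent escalation on this table entirely. Use with care:
-- the Database Engine can still escalate under memory pressure.
ALTER TABLE dbo.MyLargeTable SET (LOCK_ESCALATION = DISABLE);

-- TABLE is the default and restores the standard behavior.
ALTER TABLE dbo.MyLargeTable SET (LOCK_ESCALATION = TABLE);
```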
Transaction Isolation
Description When snapshot isolation is enabled, all statements see a snapshot
of data as it existed at the start of the transaction. This reduces
blocking contention and improves concurrency since readers do not
block writers and vice-versa. It also reduces the potential for
deadlocks. See the article, Snapshot Isolation in SQL Server.
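As a sketch, snapshot isolation is enabled at the database level (ContentServerDB is a hypothetical name):

```sql
-- Allow transactions to request SNAPSHOT isolation explicitly.
ALTER DATABASE ContentServerDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optionally make READ COMMITTED use row versioning by default,
-- so readers stop blocking writers without application changes.
-- ROLLBACK IMMEDIATE terminates open transactions so the change
-- can take effect; schedule this during a maintenance window.
ALTER DATABASE ContentServerDB SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
```

Note that row versioning stores version chains in tempDB, so tempDB sizing (covered earlier in this document) should be reviewed before enabling it.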
Appendices
Appendix A – References
Buffer Manager Object
https://msdn.microsoft.com/en-us/library/ms189628.aspx
Tools
Diskspd Utility (SQLIO)
https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223
PAL Tool
https://pal.codeplex.com/
Blocking sessions
Description Methods for viewing blocking sessions and their cause.
There are also SSMS reports available:
Reports – Standard Reports – Activity – All Blocking Transactions
Management – Extended Events – Sessions – Watch Live Data
You can also create your own Blocked Process Report (see below).
More information can be found in the article, Identify the cause of SQL Server blocking.
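A simple starting point for viewing current blocking from T-SQL is a query over the request DMVs; this is a sketch, not the only method:

```sql
-- Requests that are currently blocked, who is blocking them,
-- and the statement each blocked session is running.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time AS wait_time_ms,
       r.command,
       t.text     AS current_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.blocking_session_id <> 0;
```

Following the blocking_session_id chain to a session with no blocker identifies the head of the blocking chain.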
Sample Show total plan count and memory usage, highlighting single-use
plans:
SELECT objtype AS [CacheType],
count_big(*) AS [Total Plans],
sum(cast(size_in_bytes as decimal(18,2)))
/1024/1024 AS [Total MBs],
avg(usecounts) AS [Avg Use Count],
sum(cast((CASE WHEN usecounts = 1
THEN size_in_bytes ELSE 0 END)
AS decimal(18,2)))/1024/1024
AS [Total MBs - USE Count 1],
sum(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)
AS [Total Plans - USE Count 1]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs - USE Count 1] DESC
PAGELATCH_XX Waits
Description The following script can help you identify active tasks that are
blocked on tempDB PAGELATCH_XX waits.
Sample
SELECT session_id, wait_type, wait_duration_ms,
    blocking_session_id, resource_description,
    ResourceType = CASE
        WHEN (Cast(Right(resource_description,
              Len(resource_description) -
              Charindex(':', resource_description, 3))
              AS Int) - 1) % 8088 = 0 THEN 'Is PFS Page'
        WHEN (Cast(Right(resource_description,
              Len(resource_description) -
              Charindex(':', resource_description, 3))
              AS Int) - 2) % 511232 = 0 THEN 'Is GAM Page'
        WHEN (Cast(Right(resource_description,
              Len(resource_description) -
              Charindex(':', resource_description, 3))
              AS Int) - 3) % 511232 = 0 THEN 'Is SGAM Page'
        ELSE 'Is Not PFS, GAM, or SGAM page'
    END
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
    AND resource_description LIKE '2:%'
Notes A DMV query over the plan cache can show data about parallel
cached query plans, including their cost and number of times
executed. This can help identify a new cost threshold for
parallelism setting that strikes a balance between letting longer
queries use parallelism and avoiding the overhead for shorter
queries.
Note that the cost threshold for parallelism is compared to the
serial plan cost for a query when determining whether to use a
parallel plan. The cost shown for a generated parallel plan is
typically different (smaller) than the serial plan cost, so treat
parallel plan costs only as a general guideline when setting cost
threshold for parallelism.
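The plan-cache query those notes describe is not reproduced in this excerpt. A sketch of such a query, assuming the standard showplan XML namespace, might look like:

```sql
-- Cached plans that contain at least one parallel operator,
-- with their estimated subtree cost and use count.
WITH XMLNAMESPACES (DEFAULT
    'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT cp.usecounts,
       qp.query_plan.value(
           '(//StmtSimple/@StatementSubTreeCost)[1]', 'float')
           AS subtree_cost,
       qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE qp.query_plan.exist('//RelOp[@Parallel="1"]') = 1
ORDER BY subtree_cost DESC;
```

Comparing the costs returned here against the current cost threshold for parallelism shows which queries are going parallel today and at what estimated cost.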
IF @partitioncount > 1
SET @command = @command + N' PARTITION=' +
CAST(@partitionnum AS nvarchar(10));
-- EXEC (@command);
IF LEN( @command ) > 0
PRINT @command;
END;
-- Close and deallocate the cursor
CLOSE partitions;
DEALLOCATE partitions;
-- Drop the temporary table
DROP TABLE #work_to_do;
--GO
Waits (sys.dm_os_wait_stats)
Description Shows aggregate time spent on different wait categories.
Sample
SELECT wait_type, wait_time_ms, waiting_tasks_count,
    max_wait_time_ms, signal_wait_time_ms,
    wait_time_ms / waiting_tasks_count AS AvgWaitTimems
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0
ORDER BY wait_time_ms DESC
Notes Consider excluding wait types that do not impact user query
performance. See details about wait statistics in article
sys.dm_os_wait_stats and in the SQL Skills blog post, SQL Server
Wait Statistics.
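As the notes suggest, background and idle wait types can be filtered out so that user-impacting waits stand out. A sketch with one possible, non-exhaustive exclusion list:

```sql
-- Aggregate waits, excluding common benign/background wait types.
SELECT wait_type, wait_time_ms, waiting_tasks_count,
    wait_time_ms / waiting_tasks_count AS AvgWaitTimems
FROM sys.dm_os_wait_stats
WHERE waiting_tasks_count > 0
  AND wait_type NOT IN (
      'SLEEP_TASK', 'LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH',
      'CHECKPOINT_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH',
      'XE_TIMER_EVENT', 'XE_DISPATCHER_WAIT', 'LOGMGR_QUEUE',
      'BROKER_TASK_STOP', 'BROKER_TO_FLUSH', 'CLR_AUTO_EVENT',
      'FT_IFTS_SCHEDULER_IDLE_WAIT', 'DIRTY_PAGE_POLL')
ORDER BY wait_time_ms DESC;
```

The SQL Skills post referenced above maintains a much more complete list of waits that can normally be ignored.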
Description Get a list of objects whose statistics may need to be updated.
The query targets large data tables and flags those where at
least 20% of rows have been modified since statistics were last
updated; this threshold can be adjusted. It returns an UPDATE
STATISTICS statement for each object so that you can update
the statistics.
Sample
SELECT rowmodcounter.modPercent, ss.name schema_name,
    st.name table_name, sst.name stats_name,
sst.auto_created, sst.no_recompute,
sp.last_updated, sp.rows, sp.rows_sampled, sp.steps,
sp.unfiltered_rows, sp.modification_counter,
sampleRate = (1.0 * sp.rows_sampled / sp.rows)
* 100,
'UPDATE STATISTICS ' + ss.name + '.' + st.name +
'(' + sst.name + ')'
FROM sys.stats sst
CROSS APPLY sys.dm_db_stats_properties(sst.object_id,
sst.stats_id) sp
INNER JOIN sys.tables st
ON sst.object_id = st.object_id
INNER JOIN sys.schemas ss
ON st.schema_id = ss.schema_id
CROSS APPLY (SELECT (1.0 * sp.modification_counter /
NULLIF(sp.rows, 0)) * 100) AS
rowmodcounter(modPercent)
WHERE ss.name = '<schema>'
AND rowmodcounter.modPercent >= 20
ORDER BY rowmodcounter.modPercent DESC;
• Remove the ad hoc and prepared plan cache for the entire instance
DBCC FREESYSTEMCACHE ('SQL Plans');
• Flush the ad hoc and prepared plan cache for a resource pool
Find all the resource pools on a SQL Server (query above)
Free the resource pool cache
DBCC FREESYSTEMCACHE ('SQL Plans', '<name>');
• Remove all query plan caches from the Compute nodes. This can be done
with or without the regular completion message (WITH NO_INFOMSGS).
USE <databasename>
DBCC FREEPROCCACHE (COMPUTE);
• Remove all elements for one database. Note: this will not work in Azure
Get the database id
SELECT name, dbid
FROM master.dbo.sysdatabases
WHERE name = N'<databasename>';
Flush the plan cache for that database
DBCC FLUSHPROCINDB (<dbid>);
• Remove all the plan cache for the current database. This is only for SQL
Server 2016 and later, and Azure SQL Database.
USE <databasename>
ALTER DATABASE SCOPED CONFIGURATION
CLEAR PROCEDURE_CACHE;
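When only one problem plan needs to be evicted, the whole cache does not have to be flushed. A sketch; the LIKE search text is a hypothetical example, and <plan_handle> follows this document's placeholder convention:

```sql
-- Find the plan_handle of a specific cached statement.
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%<search text>%';

-- Evict just that plan, leaving the rest of the cache intact.
DBCC FREEPROCCACHE (<plan_handle>);
```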
About OpenText
OpenText enables the digital world, creating a better way for organizations to work with information, on premises or in the
cloud. For more information about OpenText (NASDAQ: OTEX, TSX: OTC) visit opentext.com.
Connect with us:
www.opentext.com/contact
Copyright © 2020 Open Text SA or Open Text ULC (in Canada).
All rights reserved. Trademarks owned by Open Text SA or Open Text ULC (in Canada).