Configuring the Server for Optimal Performance (Course: Database Administration, Effective Period: September 2017)

The document provides an overview of configuring a database server for optimal performance. It discusses setting up power management, enabling instant file initialization, and using beneficial trace flags. It also covers defining good performance by identifying critical targets, creating a baseline, and monitoring. The document emphasizes focusing on what is most important to users by understanding hardware resources, SQL statements, data usage patterns, and using tools like Task Manager and Dynamic Management Views. It concludes by discussing optimizing CPU and memory on the server.


Course : Database Administration

Effective Period : September 2017

CONFIGURING THE SERVER


FOR OPTIMAL PERFORMANCE

Session 4
Learning Objectives
 To explain the strategy to manage and monitor the Database Server
 To apply SQL Server Security Features

Contents
 What Every DBA Needs To Know About Performance
 What The Developer DBA Needs To Know About Performance
 What The Production DBA Needs To Know About Performance
 CPU and Memory
 I/O
Acknowledgement

These slides have been adapted from:

Adam Jorgensen, Bradley Ball, Steven Wort, Ross LoForte, Brian Knight - Professional Microsoft SQL Server 2014 Administration, 1st ed. ISBN: 978-1-118-85913-1, Chapter 10.
What Every DBA Needs To Know About Performance

Keep three things in mind regarding performance:

 The performance tuning cycle
 Defining good performance
 Focusing on what is most important

1. THE PERFORMANCE TUNING CYCLE

Performance tuning is an iterative process that ideally starts at the beginning of the design process.

Obtaining favorable performance starts with configuring the server, continues with designing an efficient schema and specifying tuned SQL statements, and leads from there to ideal index selection.
- Configuration

Before and after installing SQL Server, we must configure power management, adjust group policies, configure tempdb, and enable beneficial trace flags.

Power Configuration

Power configuration describes the way the operating system behaves regarding power consumption. Windows provides three default power plans:

A. Balanced
B. High Performance
C. Power Saver

For a database server, the High Performance plan is generally preferred because it minimizes the performance impact of power management, including core parking. The active plan can also be changed from the command line using Windows PowerShell.
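The PowerShell commands from the original figure are not reproduced here; one way to inspect and switch plans is the built-in powercfg utility (SCHEME_MIN is the documented alias for the High Performance scheme):

```powershell
# List the available power schemes; the active one is marked with *
powercfg /list

# Activate the High Performance plan via its built-in alias
powercfg /setactive SCHEME_MIN
```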
- Instant Database File Initialization

In SQL Server, we can skip the zero initialization of data files when databases are created or grown, and when restoring from database and log backups.
Enabling Instant Database File Initialization

To enable this feature, the SQL Server service account (a domain or local account) must be granted the Perform Volume Maintenance Tasks user right.

The easiest way to do this is to open the Run box and type gpedit.msc. Expand Windows Settings, Security Settings, Local Policies, and, finally, User Rights Assignment. Double-click "Perform volume maintenance tasks" and add the SQL Server service account.
To validate whether Instant Database File Initialization is currently turned on for an instance, you can examine the SQL Server Error Log: when the feature is off, the log records data files being zeroed as they are created or grown.
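The slide's query itself is not reproduced here; a common sketch for SQL Server 2014 (run it on a test instance; trace flags 3004 and 3605 are documented to surface zeroing messages in the Error Log) is:

```sql
-- Send file-zeroing messages (3004) to the SQL Server Error Log (3605)
DBCC TRACEON(3004, 3605, -1);

-- Creating a throwaway database forces file initialization
CREATE DATABASE IFI_Check;

-- Search the current Error Log; "Zeroing" entries for the data file mean
-- Instant Database File Initialization is NOT enabled
EXEC sys.xp_readerrorlog 0, 1, N'Zeroing';

-- Clean up
DROP DATABASE IFI_Check;
DBCC TRACEOFF(3004, 3605, -1);
```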
- Trace Flags

Trace flags are added to SQL Server as startup parameters of the form -T<trace flag number>, with multiple parameters separated by semicolons.
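As an illustration (the specific flags are examples commonly discussed for tempdb on SQL Server 2014, not a blanket recommendation), two trace flags appended to the startup parameters would look like this:

```text
-T1117;-T1118
```

Trace flag 1117 grows all files in a filegroup together, and 1118 allocates uniform extents; verify each flag against the documentation for your build before enabling it.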
2. DEFINING GOOD PERFORMANCE

 Does the system in question have good performance now?
 Planning, sizing, testing, and monitoring can provide us with the information we need to start answering this question.

1. Start by identifying your critical targets for CPU, memory, and I/O.
2. Then create a baseline.
3. Finally, after deploying, monitor your critical measurements.

 After the critical targets have been identified, the current system must be measured to create a baseline. SQL Server monitoring methodology is covered in more detail in Chapter 12 of the book.
3. FOCUS ON WHAT'S MOST IMPORTANT

 The final essential aspect of performance is to focus on what's important: achieving the performance that users demand.
 We must know what we need to measure, how to measure it, and what the limitations of that measurement might be.
 Consider a typical system. The end users' experience is the net sum of all performance from their client machine through many layers to the database server and back again.

The original slides show a schematic diagram of a typical web-based architecture. This schematic is typical of any enterprise customer using the Microsoft Windows Server System Reference Architecture to implement an enterprise solution.

This may be the first time that many DBAs have looked at something like this and understood where their puzzle piece (that is, the database) fits into the big picture.
What The Developer DBA Needs To Know About
Performance
1. Users
We first need to know who is going to use the system: (a) the number of users
and their concurrency, (b) peak usage level, and (c) what they are going to do

2. SQL Statements
After determining the different groups of users, we must understand what they
do, which SQL statements will be run, and how often each of these is run for a
user action
3. Data Usage Patterns
The final part of the picture is the data in the database. We need an
understanding of the total data volume in each table, including how that
data gets there and how it changes over time.

4. Database Schema
 We must understand where the hot spots are in the data to enable the
physical design to be implemented in the most efficient manner.
 If we are going through the process of designing a logical data model, we
normally would not plan for optimum performance through database
design.
 However, when you are ready to design the physical model, you should take
this information into account and modify the design to incorporate your
knowledge of data access patterns.
What The Production DBA Needs To Know About
Performance
 The starting point for this process must be an understanding of what the
hardware can deliver, what hardware resources the system needs, and what the
expectations of the users are in terms of user response time.
 The key elements to understanding the hardware are (1) processor speed, (2)
type, and (3) cache size.
 Additionally we need to know how much memory there is, and what the bus
speed is.
 Finally, it is important to determine the number of I/O disks, how they are
configured, and how many network interface cards (NICs) exist.
Following are the tools that the Production DBA uses to perform these tasks:

a) Task Manager—This gives a quick, high-level view of server performance and use
of resources.
b) System Performance Monitor—This provides a more detailed view of Windows server performance and per-instance SQL Server–specific counters.
c) SQL Server Management Data Warehouse (MDW)—The MDW is a relational
database that collects and stores Perfmon and Data Collector outputs for
retrieval when the DBA needs to troubleshoot a system issue.
d) SQL Server Management Studio (SSMS)—This enables long-running transactions
to be analyzed, and for bottlenecks to be found and resolved.
e) Dynamic Management Views (DMVs)—These are system objects that contain
server state information that can be used to diagnose problems and monitor the
health of a SQL Server.
f) Extended Events—This is a lightweight monitoring system that collects data
about the performance of the SQL Server. This data can be viewed through the
Session user interface (UI) that was introduced in SQL Server 2012.
1. Optimizing the Server's Hardware

Three key resources in server optimization:
• CPU
• Memory
• I/O

2. Hardware Management

 On most small- to medium-size database servers, a common configuration step is a BIOS change to enable hyper-threading.
 On nearly all systems, we can find a variety of management software to help us configure, operate, and maintain the hardware.
 On large enterprise systems such as the HP Superdome 2, NEC Express5800, or SGI Altix UV, configuring the server hardware enters a whole new dimension using a management processor (MP) within the server.
CPU and Memory
1. CPU

 SQL Server 2014 operates in a different environment than previous versions of SQL Server.
 SQL Server 2014 can run on the largest servers, with up to 64 processors and up to 320 logical cores.
 Additionally, SQL Server 2014 can run on machines with up to 4TB of RAM running Windows Server 2012 R2 Standard Edition.
 SQL Server 2014 is supported on one processor architecture only: 64-bit (x64).
x64

 x64 was implemented by Intel as EM64T. It is compatible with x86 machine code and can support 64-bit micro-code extensions.
 The x64 platform can run SQL Server 2014 (using Windows Server 2012 R2), delivering memory beyond 4GB and up to 4TB of natively addressable memory, with up to 64 physical CPUs and 320 logical cores.

Cache

 Modern processors need onboard cache because the processor runs at 2 to 3GHz and main memory, though improving, still cannot keep up with the processor's memory appetite.
Hyper-Threading

 Hyper-threading (officially dubbed Hyper-Threading Technology) is Intel's proprietary technology that works by duplicating certain sections of the physical processor core to improve parallelization of computations.

Multicore Terminology
 The socket is the physical socket into which you plug the processor. Before
multicore systems arrived, there used to be a direct one-to-one relationship
between sockets and execution units.
 A core is equivalent to an execution unit, or what you would previously have
considered to be a processor. With a multicore processor there will be two
or more of these per socket.
 A thread in this context is not the same as the thread you might create in your program, or an operating system thread.
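To make the terminology concrete, the relationship between sockets, cores, and hardware threads can be sketched as follows (the server shape in the example is hypothetical):

```python
def logical_processors(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    """Logical processors the OS can schedule on: sockets x cores x hardware threads."""
    return sockets * cores_per_socket * threads_per_core

# A hypothetical 4-socket server with 8-core CPUs and Hyper-Threading enabled
# (2 hardware threads per core) exposes 64 logical processors:
print(logical_processors(4, 8, 2))  # 64
```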
NUMA

 NUMA stands for "non-uniform memory access." This architecture is often also referred to as ccNUMA, meaning a cache-coherent version of NUMA.
 The main difference between an old SMP system and a NUMA system is where the memory is connected, and how processors are arranged on the system bus.
 NUMA systems typically have a two-sockets-per-node configuration and implement multiple nodes up to the system maximum.
2. MEMORY

Physical Memory

 Physical memory is the RAM that we install into the server. You are probably already familiar with memory in the form of Dual Inline Memory Modules (DIMMs) that go into desktop PCs and servers.
 The latest editions of Windows Server 2012 R2 Datacenter and Standard Editions can now support systems with 4TB of RAM.

Physical Address Space

 The physical address space is the set of addresses that the processor uses to access anything on its bus.
 Much of this space is occupied by memory, but some parts of this address space are reserved for things such as mapping hardware buffers, and interface-specific memory areas such as video RAM.
Virtual Memory Manager

 The Virtual Memory Manager (VMM) is the part of the operating system
that manages all the physical memory and shares it between all the
processes that need memory on the system.
 The VMM does this by managing the virtual memory for each process, and,
when necessary, it takes back the physical memory behind virtual memory,
and puts the data that resided in that memory into the page file so that it is
not lost
 The page file is a disk file that the computer uses to extend virtual memory beyond the available physical RAM.
 Page faults are generally problematic for SQL Server, but not all page faults are the same. Some are unavoidable, and some have limited impact on performance, whereas others can cause severe performance degradation and are the kind you want to avoid.
I/O
 Configuring I/O for a server storage system is perhaps the place where you
have the most options, and it can have the largest impact on the
performance of your SQL Server system
 In most cases with SQL Server, we are primarily concerned with disk I/O
because that's where the data resides. However, we also need to understand
the effect that poor network I/O can have as a bottleneck to performance

1. Network

 The network is a key component in any SQL Server system.
 The network is the link over which SQL Server receives all its requests to do something, and by which it sends all its results back to the client.
2. Magnetic Disks

Magnetic disks are still very prevalent in servers, so they are worth covering.

Various interfaces are used in disk storage systems:

a. Advanced Technology Attachment (ATA)—This is also known as Integrated Drive Electronics (IDE), which refers not only to the connector and interface definition, but also to the fact that the drive controller is integrated into the drive.
b. Small Computer Systems Interface (SCSI)—This is a set of standards that was developed to connect and transfer data between computers and a myriad of peripheral devices, including disks.

Beyond the interface, the main levers for improving disk performance are:

c. More disks
d. Faster disks
e. Cache—read and write
Following are some of the many uses for disk cache:

a. Read-ahead/read-behind—When a read operation is sent to the disk, the disk may read unrequested data it deems SQL Server is going to need at a later date.
b. Write acceleration—The disk controller may signal to SQL Server that the write operation has succeeded immediately after receiving the data, even though the data has not actually been written to the disk.
c. Input/output speed balancing—The rate of read and write operations sent to the disk may fluctuate during the normal course of operation.

3. Solid State/Flash Drives

 SSDs do not need to spin, and, therefore, they do not have rotational latency. SSDs can, however, have challenges with mixed reads and writes.
 Flash memory has two types: NAND and NOR. NOR offers RAM-like random access and can execute code in place, whereas NAND is denser and cheaper per bit; both are non-volatile, so neither requires electricity to maintain stored data. SSDs are built from NAND flash.
Storage Considerations:
• Use the vendors' expertise
• Every system is different
• Simple is better
• Test

Target disk latencies are as follows:
a. Database transaction log—less than 5ms, ideally 0ms
b. OLTP data—less than 10ms
c. Decision Support Systems (OLAP and Reporting) data—less than 25ms
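As a sketch, the latency targets above can be expressed as a simple check against measured averages (the workload names below are illustrative):

```python
# Target average disk latencies in milliseconds, from the guidelines above
LATENCY_TARGETS_MS = {
    "transaction_log": 5,   # ideally 0ms
    "oltp_data": 10,
    "olap_data": 25,
}

def meets_target(workload: str, measured_ms: float) -> bool:
    """True if a measured average latency is below the target for that workload."""
    return measured_ms < LATENCY_TARGETS_MS[workload]

print(meets_target("transaction_log", 3.2))  # True
print(meets_target("oltp_data", 18.0))       # False
```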

RAID

As part of the "how many disks do you need?" question, we must consider the RAID level required, because this can influence the total number of disks needed to build a storage system of a certain size and with the I/O characteristics you require. The trade-offs to consider are availability, cost, space, and performance.

RAID 0—Striping without Parity or Mirroring
RAID 1—Mirroring without Striping or Parity (Two Disks)
RAID 10—Striping with Mirroring (Minimum Four Disks)—Recommended
RAID 5—Striping with Parity (Minimum Three Disks)—Alternative
RAID 6—Striping with Double Parity (Minimum Four Disks)
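To illustrate how the RAID level influences the number of disks needed for a given usable size, here is a rough usable-capacity sketch (it ignores hot spares and vendor overhead):

```python
def usable_tb(level: int, disks: int, disk_tb: float) -> float:
    """Approximate usable capacity for common RAID levels."""
    if level == 0:
        return disks * disk_tb          # striping only: no redundancy
    if level == 1:
        return disk_tb                  # two-disk mirror: half the raw space
    if level == 5:
        return (disks - 1) * disk_tb    # one disk's worth of parity
    if level == 6:
        return (disks - 2) * disk_tb    # two disks' worth of parity
    if level == 10:
        return (disks // 2) * disk_tb   # each disk is mirrored
    raise ValueError(f"unhandled RAID level: {level}")

# Eight 2TB disks: RAID 5 yields 14TB, RAID 6 yields 12TB, RAID 10 yields 8TB
for level in (5, 6, 10):
    print(level, usable_tb(level, disks=8, disk_tb=2.0))
```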
References
• Adam Jorgensen, Bradley Ball, Steven Wort, Ross LoForte, Brian Knight - Professional Microsoft SQL Server 2014 Administration, 1st ed. ISBN: 978-1-118-85913-1.
