Performance, Requirements, and Implications of Flash Storage Solutions for SQL Server Deployment
Visit http://www.virident.com/solutions/ms-sql-server/ for more on Virident flash storage solutions for SQL Server.
Table of Contents

Executive Summary
    FlashMAX Benefits
Implementation Requirements
    System
    Software
General High-Performance Tunables
    Operating System Configurations
    Microsoft SQL Server Settings
    Virident FlashMAX Settings
Placement of Databases on the Virident FlashMAX
    All Tables and Indexes on the FlashMAX
    Individually Tiering Tables and Indexes
        Placing Specific Indexes on FlashMAX
        Placing Specific Tables on FlashMAX
        Placing Parts of Tables on FlashMAX
    Transaction Logs on FlashMAX
    Tempdb on FlashMAX
Data Reliability
    Flash Management
    RAID-1 Across Multiple Cards
Data Availability
    SQL Server Mirroring
    Transaction Log Shipping
Conclusion
Executive Summary
The Virident FlashMAX Drive delivers uncompromising performance to Microsoft SQL Server installations.
A single card provides up to 2.2 terabytes of database space in an industry-standard PCI Express half-height,
half-length form factor. For larger databases, up to eight cards can be employed for combined storage of
over 17TB.
FlashMAX’s very high IOPS performance is a natural match for OLTP-type applications, while its high sustained
bandwidth allows it to replace racks of short-stroked disk drives in DSS-type workloads. FlashMAX can
increase the CPU core utilization of your database server, maximizing the return on your hardware investment.
Best of all, FlashMAX is easy for the database administrator to implement. Utilizing standard optimization
techniques, FlashMAX can accelerate entire databases, specific indexes or tables, or even only portions of
tables without any user-level application changes.
FlashMAX Benefits
- Easy to implement
  - Utilizes Microsoft standard STORPORT architecture
  - Allows administrators to use standard disk-management utilities, both Microsoft and third party
  - Works in any industry-standard, server-class system
- Low memory usage
  - More room for SQL Server buffers
- High-speed DSS and OLTP workload support
  - No need to choose one workload or another
  - No need to overprovision
- No short-stroking required for highest performance (unlike HDD arrays or other flash)
  - Best utilization of storage investment
- Reduced costs via consolidation
  - Consolidation minimizes over-provisioning of storage and servers
  - Predictable performance means meeting SLAs with fewer resources
- New business intelligence
  - Convert batch processing into interactive applications
  - Make faster, more informed decisions
Implementation Requirements
The Virident FlashMAX supports most industry-standard server installations, thanks to its small size
and adherence to Microsoft and industry standards.
System

Hardware:  A single half-height, half-length PCI Express Gen 2 x8 slot in any server-class chassis from 1U to 4U. No additional power is required for the card to operate.

Processor:  Modern Intel Xeon or AMD Opteron dual- or quad-socket server. Hyper-threading has been shown to be helpful in most workloads, but individual database administrators should verify this.

Memory:  8GB DRAM or more. The FlashMAX device driver requires approximately two gigabytes for each terabyte of flash present in the system (i.e., a 2TB card requires around 4GB of DRAM).

Software

Operating System:  Microsoft Windows Server 2008 R2 SP1, 64-bit edition.

SQL Server:  Microsoft SQL Server 2008 R2, SP1 or later. The latest service pack is required to ensure that SQL Server is able to deliver the highest performance on the Virident FlashMAX’s 4K physical sectors.
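As an optional check, the sector sizes that Windows reports for the FlashMAX volume can be verified with the built-in fsutil utility; the drive letter below is an assumption, and on Windows Server 2008 R2 the physical-sector field is only reported once the appropriate update is installed.
Command Example
PS C:\Users\Administrator> fsutil fsinfo ntfsinfo V:
The output includes, among other fields, "Bytes Per Sector" and "Bytes Per Physical Sector" for the volume.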
General High-Performance Tunables
All of the standard performance tunables for both Microsoft Windows and Microsoft SQL Server still apply
when using a Virident FlashMAX.
Operating System Configurations
At the operating-system level, the most important tunable is the performance mode of the system. By
default, servers are configured in a “Balanced” or “Power Savings” mode (via the Control Panel -> Power
Options panel). This mode accepts slower processor clock frequencies and longer latencies in exchange for
reduced server power consumption.
When paired with a low-performing disk subsystem, this performance tradeoff makes sense because SQL
Server is often bottlenecked waiting for data from the slow disk. However, the Virident FlashMAX is able to
provide data to SQL Server at such a high rate that this IO waiting time is reduced almost to nothing. It thus
makes sense to use the “High Performance” setting to ensure the CPU is always running at the highest
speed.
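The “High Performance” plan can be enabled from the Power Options panel or from the command line; the following is a minimal sketch using the built-in powercfg utility run from an elevated prompt (SCHEME_MIN is the alias for the High Performance plan).
Command Example
PS C:\Users\Administrator> powercfg /list
PS C:\Users\Administrator> powercfg /setactive SCHEME_MIN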
Microsoft SQL Server Settings
Several general settings and techniques are configurable from within Microsoft SQL Server to get the highest
performance possible with the Virident FlashMAX.
Multiple tempdb files should be used no matter where these files are stored, to avoid any bottlenecks on
tempdb allocations.
SQL Server should be given as much memory as practicable. With the lower memory requirements of the
Virident FlashMAX, it may be possible to increase the memory that is allocated to SQL Server.
Finally, because the FlashMAX performs best at the highest levels of IO parallelism, SQL Server should be
given a high setting for worker threads. The appropriate SP_CONFIGURE settings to achieve this are “max
worker threads” (set very high, in the range of 1,000 to 3,000) and “cost threshold for parallelism” (set to 0 to
encourage SQL Server to parallelize query execution).
Command Example
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator> sqlcmd
1> -- Enable the advanced configuration options
2> sp_configure 'show advanced options', 1;
3> reconfigure with override
4> go
Configuration option 'show advanced options' changed from 1 to 1. Run the RECONFIGURE statement to install.
1> -- Make SQL server prefer parallelizing queries
2> sp_configure 'cost threshold for parallelism', 0
3> go
Configuration option 'cost threshold for parallelism' changed from 0 to 0. Run the RECONFIGURE statement to install.
1> sp_configure 'lightweight pooling', 1
2> go
Configuration option 'lightweight pooling' changed from 1 to 1. Run the RECONFIGURE statement to install.
1> sp_configure 'max worker threads', 3200
2> go
Configuration option 'max worker threads' changed from 3200 to 3200. Run the RECONFIGURE statement to install.
1> sp_configure 'priority boost', 1
2> go
Configuration option 'priority boost' changed from 1 to 1. Run the RECONFIGURE statement to install.
1> exit
PS C:\Users\Administrator>
Optimization Note
It is often not necessary to change any SQL Server settings to see a significant performance impact with
the FlashMAX drive. It is therefore worthwhile to begin benchmarking with an unmodified SQL Server
configuration before tweaking individual settings.
Virident FlashMAX Settings
No special settings are required on the Virident FlashMAX card to achieve the highest performance.
However, to ensure that the card has been installed properly and the system properly configured, the
“test.exe” utility provided by Virident should be run. This allows any performance or configuration
problems related to the card to be isolated and fixed before any SQL Server tests are performed.
Command Example
C:\>test.exe
*** VIRIDENT PERFORMANCE TEST ***
(C) Copyright 2012 Virident Systems, Inc.
---------------------------------------------------------------------------
Usage: TEST <drive letter>
ex: "test e"
C:\>test.exe g
*** VIRIDENT PERFORMANCE TEST ***
(C) Copyright 2012 Virident Systems, Inc.
---------------------------------------------------------------------------
Checking for administrator permissions...OK
Checking the Virident Driver is loaded...OK
Checking power settings...High performance mode not detected.
You may not get highest performance without this setting.
Would you like to enable this setting now, automatically? (y/n) y
Checking CPU configuration...Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz...OK
Checking number of CPU cores...32...OK
Creating 4GB test file...OK
---------------------------------------------------------------------------
Running Max Read Bandwidth test... (20 seconds)
Max Read Bandwidth measured is 2554MB/s
Running Max Write Bandwidth test... (20 seconds)
Max Write Bandwidth measured is 1023MB/s
Running Max Read IOPS test... (20 seconds)
Max Read IOPS measured is 315K
Running Max Write IOPS test... (20 seconds)
Max Write IOPS measured is 255K
Running Max 512b Read IOPS test... (20 seconds)
Max 512b Read IOPS measured is 761K
Placement of Databases on the Virident FlashMAX
There are two general implementation strategies for using the Virident FlashMAX to accelerate SQL
Server: either placing all tables and indexes on the drive, or selecting individual tables or indexes to place
on the drive, while still employing a slower and larger storage tier to hold the bulk of the database.
All Tables and Indexes on the FlashMAX
The fastest and most effortless way of implementing the Virident FlashMAX into a database application is
to place everything onto the FlashMAX drive. When the database is larger than the capacity of an
individual card, multiple cards (up to eight in a single server) can be installed and striped using standard
Windows Disk Manager tools.
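As a sketch of that striping step, two cards could be combined into a single striped (RAID-0) volume with DiskPart; the disk numbers and drive letter below are assumptions and will differ from system to system.
Command Example
DISKPART> SELECT DISK=3
DISKPART> CONVERT DYNAMIC
DISKPART> SELECT DISK=4
DISKPART> CONVERT DYNAMIC
DISKPART> CREATE VOLUME STRIPE DISK=3,4
DISKPART> FORMAT FS=NTFS LABEL=Virident-R0 QUICK
DISKPART> ASSIGN LETTER=V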
If the entire database (tables, indexes, and logs), including planned growth, fits on the FlashMAX capacity
installed in the server, this option is the simplest for the database administrator and delivers the greatest
possible performance gain.
Management and maintenance are no different in this case from that of any other direct-attached
database store.
Virident FlashMAX can reduce up-front investment by completely eliminating hardware such as battery-backed
RAID cards and large, short-stroked disk arrays. Ongoing costs, such as power, rack space and
cooling, are also reduced the most in this configuration.
Optimization Note
One often overlooked benefit, beyond simply increasing transaction throughput with a FlashMAX drive,
is stability of performance. Thanks to its superior design, at any workload level the FlashMAX can
deliver a steadier, much more predictable level of service than any other HDD or SSD solution. This in
turn translates into a reduction in over-provisioning, with resultant CAPEX and OPEX savings.
Individually Tiering Tables and Indexes
In some cases it is not practical to place the entire database on Virident FlashMAX volumes. The dataset
itself could be too large to fit practically, there could be an existing SAN infrastructure that needs to be
retained, or high performance may only be needed for individual tables (or sometimes even just parts of
tables). In these cases, users can undertake a more involved effort to place only portions of the database
on a Virident FlashMAX yet still derive quantifiable benefits.
Placing Specific Indexes on FlashMAX
In general, table indexes have higher IO requirements (more random access and many more updates) than
the main table data files. Table indexes are also, in general, significantly smaller than the tables
themselves; even the indexes of very large databases can fit on a single FlashMAX card. Finally, indexes may
have somewhat less stringent availability requirements (since they can be regenerated in case of storage
failure) and so may be easier to migrate from a SAN to locally attached storage.
Command Example
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator> SQLCMD
1> ALTER DATABASE test ADD FILEGROUP FlashMAX_index;
2> GO
1> ALTER DATABASE test ADD FILE ( NAME='flashidx', FILENAME='V:\TEST\flashidx.mdf',
SIZE=1000MB, FILEGROWTH=1000MB) TO FILEGROUP FlashMAX_index;
2> GO
1> USE test
2> GO
Changed database context to 'test'.
1> CREATE INDEX birthday_idx ON usertable(birthday) WITH ( FILLFACTOR=100,
SORT_IN_TEMPDB=ON ) ON FlashMAX_index;
2> GO
Optimization Note
Indexes may be placed onto the FlashMAX via the use of standard FILEGROUPS in SQL Server. Care
should be taken when moving clustered indexes to the FlashMAX, since moving a clustered index to a
filegroup also moves the table data into that filegroup.
Choosing which indexes to place on a FlashMAX drive is a manual process, but input to that process can
be generated either by database administrators with the requisite intuition and application knowledge
or via the Microsoft SQL Server Dynamic Management Views “sys.dm_db_index_usage_stats” and
“sys.dm_db_index_operational_stats.”
Placing Specific Tables on FlashMAX
Like indexes, data tables themselves can be placed on FlashMAX. Criteria for choosing these tables should
include their size (large enough that the tables are not already completely cached in buffers, but small
enough to fit, with expected growth, on the FlashMAX), their update frequency (even smaller tables with
frequently updated contents can benefit from the higher random-IO performance of the FlashMAX drive)
and their criticality.
Optimization Note
SQL Profiler can be used to capture real-world traces of queries; the database administrator can then
examine the result to identify frequently accessed tables. A proxy for this would be the same Dynamic
Management Views mentioned previously, “sys.dm_db_index_usage_stats” and
“sys.dm_db_index_operational_stats,” since index access also implies table accesses.
Command Example
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator> SQLCMD
1> USE tpcc;
2> SELECT i.name, s.* FROM sys.dm_db_index_usage_stats s INNER JOIN
sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id;
3> GO
...
name              user_seeks  user_scans  user_lookups  user_updates  system_seeks  system_scans  system_lookups  system_updates
NULL                       0           0             0        130013             0             0               0               0
pkey_new_orders       261370           0             0        260699             0            14               0               0
pkey_orders                0          28             0            42             0             0               0               0
idx_orders             13001           0             0        130485             0             3               0               0
pkey_orders          1701272           0             0        261057             0             2               0               0
pkey_order_line       287146           0             0       1427685             0             4               0               0
pkey_warehouse        390511           0             0        130013             0             0               0               0
pkey_item                  0          14             0            14             0             0               0               0
pkey_item            1298468           0             0             0             0             0               0               0
pkey_district         664010           0             0        260498             0             0               0               0
pkey_stock           5815022           0             0       1297113             0             1               0               0
pkey_customer         852009           0          7874        260585             0             0               0               0
idx_customer          171416           0             0             0             0             1               0               0
idx_customer              14           0             0             0             0             0               0               0
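Once a hot table has been identified, it can be moved onto a FlashMAX filegroup by rebuilding its clustered index there. The following is a minimal sketch; the database test, table usertable, index cl_usertable, column id, filegroup FlashMAX_data and file path are illustrative assumptions rather than objects from the output above.
Command Example
PS C:\Users\Administrator> SQLCMD
1> ALTER DATABASE test ADD FILEGROUP FlashMAX_data;
2> GO
1> ALTER DATABASE test ADD FILE ( NAME='flashdata', FILENAME='V:\TEST\flashdata.mdf',
SIZE=10000MB, FILEGROWTH=1000MB) TO FILEGROUP FlashMAX_data;
2> GO
1> USE test
2> GO
1> -- Rebuilding the clustered index on the new filegroup also moves the table's data pages there
2> CREATE CLUSTERED INDEX cl_usertable ON usertable(id)
WITH ( DROP_EXISTING = ON ) ON FlashMAX_data;
3> GO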
Placing Parts of Tables on FlashMAX
Even more finely grained tuning of database placement is possible via the use of partitioned tables. By
partitioning a table (often done by date or sequence number), historic data can be manually tiered to slower
rotating media, while recent data can be stored on FlashMAX locally for frequent access and updates.
Note, however, that this method can require significant database administrator work, since the
partitioning schemes, and the FILEGROUPS which contain these PARTITIONS, will need to be continually
updated as the database access patterns vary over time.
Optimization Note
Partitioning tables and placing frequently accessed partitions on the Virident FlashMAX can
significantly speed up application performance, but be sure that queries accessing these tables are not
also accessing indexes, other tables or other partitions of the same table that are not stored on the
FlashMAX.
Command Example
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator> SQLCMD
1> -- Make the two filegroups
2> ALTER DATABASE test ADD FILEGROUP data_2010
3> GO
1> ALTER DATABASE test ADD FILEGROUP data_2011
2> GO
1> -- Historic FG lives on the SAN...
2> ALTER DATABASE test ADD FILE (NAME = 'data_2010', FILENAME = 'S:\data\data_2010.mdf',
SIZE = 1000000MB, FILEGROWTH = 1000MB) TO FILEGROUP data_2010
3> GO
1> -- Active FG lives on Virident...
2> ALTER DATABASE test ADD FILE (NAME = 'data_2011', FILENAME = 'V:\data\data_2011.mdf',
SIZE = 300000MB, FILEGROWTH = 1000MB) TO FILEGROUP data_2011
3> GO
1> -- Define the partition function; only one range point is needed
2> CREATE PARTITION FUNCTION part_yearly(DATETIME) AS RANGE LEFT FOR VALUES
( '20111231 23:59:59.999' );
3> GO
1> -- If it's prior to the first cutoff it goes to data_2010, otherwise to data_2011
2> CREATE PARTITION SCHEME partsch_yearly AS PARTITION part_yearly TO ( data_2010, data_2011 );
3> GO
1> -- Make a new partitioned table and populate it...
2> CREATE TABLE orders_part ( ... ) on partsch_yearly;
3> GO
1> -- Go get some coffee and a good book...
2> INSERT orders_part SELECT * FROM orders;
3> GO
Transaction Logs on FlashMAX
Transaction logs are written in small blocks, sequentially. The size of a transaction log varies widely with the
database size, as does the activity logged therein. For OLTP workloads, where many updates and inserts
are performed, the transaction log can often become a limiting factor. Conversely, for DSS workloads,
which have much less frequent updates, the transaction log may not be particularly performance sensitive.
On rotating media, transaction logs are often stored on their own RAID mirrored drive set. If the logs were
stored on the main table volume set, the random-access nature of table and index access would destroy
their sequential-write access pattern. Since HDDs perform sequential-write accesses significantly faster
than random ones, the cost (in terms of money, power and space) of the additional log-drive pair is
outweighed by the performance improvement received.
No such design decision needs to be made with the Virident FlashMAX. Log and database can successfully
coexist on the same drive without any impact on performance. Multiple database logs can also be stored on
a single FlashMAX, without any of the negative performance implications such a configuration would have
on rotating media. The FlashMAX Drive provides enough raw IO horsepower to handle sequential
transaction logging without having an impact on the main database random read/write access, and vice
versa.
[Chart: Performance relative to 15K RPM HDDs with BBU RAID — One Log: 2.5x, Two Logs: 3.8x, Four Logs: 7.3x]
Optimization Note
SQL Server writes only a single log file per database. Specifying multiple log files per database on the
Virident FlashMAX, or any other storage, will not improve performance.
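As a minimal sketch of co-locating an existing database's single log file with its data on the FlashMAX volume, the log can be re-pointed and the database cycled offline and online; the database name, logical log name and path below are assumptions.
Command Example
PS C:\Users\Administrator> SQLCMD
1> -- Point the log for database 'test' at the FlashMAX volume
2> ALTER DATABASE test MODIFY FILE (NAME = test_log, FILENAME = 'V:\logs\test_log.ldf');
3> ALTER DATABASE test SET OFFLINE;
4> GO
1> -- Copy the existing .ldf file to V:\logs\ before bringing the database back online
2> ALTER DATABASE test SET ONLINE;
3> GO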
Tempdb on FlashMAX
SQL Server uses tempdb to store intermediate query results; tempdb is also used during index creation and
updates. Writes to tempdb are often large-block and sequential in nature, while reads are commonly
random in nature. Data turnover is high, since the data inside each tempdb table is only valid for a single
query or index-generation step. Index creation and updates are an ideal use of the FlashMAX's very high
write bandwidth and random IOPS.
Command Example
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator> MKDIR V:\tempdb
PS C:\Users\Administrator> SQLCMD
1> -- Move the TEMPDB file to Virident, and add 7 additional files for 8 total
2> ALTER DATABASE tempdb MODIFY FILE (NAME=tempdev , FILENAME='V:\tempdb\tdb0.mdf', SIZE=500MB);
3> ALTER DATABASE tempdb ADD FILE (NAME=tempdev1 , FILENAME='V:\tempdb\tdb1.mdf', SIZE=500MB);
4> ALTER DATABASE tempdb ADD FILE (NAME=tempdev2 , FILENAME='V:\tempdb\tdb2.mdf', SIZE=500MB);
5> ALTER DATABASE tempdb ADD FILE (NAME=tempdev3 , FILENAME='V:\tempdb\tdb3.mdf', SIZE=500MB);
6> ALTER DATABASE tempdb ADD FILE (NAME=tempdev4 , FILENAME='V:\tempdb\tdb4.mdf', SIZE=500MB);
7> ALTER DATABASE tempdb ADD FILE (NAME=tempdev5 , FILENAME='V:\tempdb\tdb5.mdf', SIZE=500MB);
8> ALTER DATABASE tempdb ADD FILE (NAME=tempdev6 , FILENAME='V:\tempdb\tdb6.mdf', SIZE=500MB);
9> ALTER DATABASE tempdb ADD FILE (NAME=tempdev7 , FILENAME='V:\tempdb\tdb7.mdf', SIZE=500MB);
10>
11> -- Don't forget to move the log, too
12> ALTER DATABASE tempdb MODIFY FILE (NAME=templog , FILENAME='V:\tempdb\tdb.ldf', SIZE=100MB);
13>
14> GO
The file "tempdev" has been modified in the system catalog. The new path will be used the next time the database is
started.
The file "templog" has been modified in the system catalog. The new path will be used the next time the database is
started.
1> EXIT
PS C:\Users\Administrator> NET STOP MSSQLSERVER
The SQL Server (MSSQLSERVER) service is stopping.
The SQL Server (MSSQLSERVER) service was stopped successfully.
PS C:\Users\Administrator> NET START MSSQLSERVER
The SQL Server (MSSQLSERVER) service is starting.
The SQL Server (MSSQLSERVER) service was started successfully.
Optimization Note
Standard best practices for tempdb should be followed. Ensure that tempdb is specified with as many
separate files as there are cores in your server to minimize any tempdb allocation bottlenecks. This is
especially important for highly utilized SQL Server installations, as it allows multiple queries to utilize
the tempdb in parallel.
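The number of logical cores visible to SQL Server can be read from a Dynamic Management View; a minimal sketch:
Command Example
PS C:\Users\Administrator> SQLCMD
1> -- Number of logical processors available to SQL Server
2> SELECT cpu_count FROM sys.dm_os_sys_info;
3> GO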
Data Reliability
As with all direct-attached storage, proper care should be taken of databases stored on the Virident FlashMAX
drive. As an easy start, any existing data-protection and availability plans can be applied unchanged to
the FlashMAX drive.
Flash Management
FlashMAX implements the highest available level of onboard flash wear leveling and of error detection and
correction. It actively monitors flash performance and can move data behind the scenes, transparently to
SQL Server, to ensure continual data availability.
RAID-1 Across Multiple Cards
Multiple cards can be combined into a mirrored (RAID-1) volume using the standard Windows Disk Manager
tools. No special setup or configuration is required.
Command Example
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.
PS C:\Users\Administrator> DISKPART
Microsoft DiskPart version 6.1.7601
Copyright (C) 1999-2008 Microsoft Corporation.
On computer: KANHA03
DISKPART> SELECT DISK=3
Disk 3 is now the selected disk.
DISKPART> CONVERT DYNAMIC
DiskPart successfully converted the selected disk to dynamic format.
DISKPART> SELECT DISK=4
Disk 4 is now the selected disk.
DISKPART> CONVERT DYNAMIC
DiskPart successfully converted the selected disk to dynamic format.
DISKPART> CREATE VOLUME MIRROR DISK=3,4
DiskPart successfully created the volume.
DISKPART> FORMAT FS=NTFS LABEL=Virident-R1 QUICK
100 percent completed
DiskPart successfully formatted the volume.
DISKPART> ASSIGN LETTER=V
DiskPart successfully assigned the drive letter or mount point.
DISKPART> EXIT
Leaving DiskPart...
Optimization Note
RAID is not a backup strategy. A software error or an unexpected “DROP DATABASE” can destroy all user
data regardless of RAID mirroring or onboard flash management.
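Regular database backups therefore remain essential. A minimal sketch, assuming a database named test and a network share for backup storage:
Command Example
PS C:\Users\Administrator> SQLCMD
1> -- Full backup to a location off the FlashMAX, such as a network share
2> BACKUP DATABASE test TO DISK = '\\backupserver\sql\test_full.bak' WITH INIT;
3> GO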
Data Availability
Microsoft SQL Server provides both mirroring and log-shipping modes to ensure high data availability in
cases of server failure.
SQL Server Mirroring
Microsoft SQL Server provides mirroring capabilities to allow a primary and secondary database server to
stay synchronized through a network connection. Mirroring supports both asynchronous and synchronous
replication. The data-consistency guarantees for these two modes vary somewhat, and the proper mode
depends on your own requirements. Asynchronous mode gives the highest-performance replication for
frequently updated databases.
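A minimal sketch of establishing a mirroring session is shown below; the database name, server names and endpoint port are assumptions, and the mirroring endpoints must already exist on both servers.
Command Example
-- On the mirror server, after restoring the database WITH NORECOVERY:
ALTER DATABASE test SET PARTNER = 'TCP://principal.example.com:5022';
-- On the principal server:
ALTER DATABASE test SET PARTNER = 'TCP://mirror.example.com:5022';
-- Optionally switch to asynchronous (high-performance) mode:
ALTER DATABASE test SET PARTNER SAFETY OFF;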
Whichever mode of replication is used, the lowest-latency network interconnect available should be chosen.
While the 10-fold increase in bandwidth gained by moving from 1G Ethernet to 10G Ethernet may not be
fully utilized, the reduction in the latency of smaller messages will improve replication traffic and
responsiveness.
Optimization Note
Servers at both ends of a mirroring operation should be of similar design. If the database at the primary
mirror is placed on a Virident FlashMAX, it should be on a FlashMAX on the secondary mirror as well.
Otherwise the performance of the system can become limited to that of the lowest-performing server,
not only in the case of failover but also in steady state.
Transaction-Log Shipping
Transaction-log shipping allows a primary database server to periodically send updated transaction-log copies to
one or many secondary servers over shared storage. Logs are backed up at the primary server, then copied
and finally restored to the secondary servers to present a delayed view of the primary database.
Little changes when implementing this replication on servers utilizing the Virident FlashMAX. The
additional bandwidth available for backing up or restoring transaction logs stored on the FlashMAX drive
can slightly reduce the load on any server during this operation.
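The backup, copy and restore cycle described above can be sketched as follows; the database name, share path and STANDBY undo file are assumptions, and production deployments would typically use the built-in log-shipping jobs rather than manual statements.
Command Example
-- On the primary server: back up the transaction log to a shared location
BACKUP LOG test TO DISK = '\\backupserver\logship\test_0001.trn';
-- On the secondary server: restore the copied log, leaving the database readable between restores
RESTORE LOG test FROM DISK = '\\backupserver\logship\test_0001.trn'
WITH STANDBY = 'D:\logship\test_undo.dat';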
Optimization Note
Transaction-log shipping to secondary servers is often used for reporting or analysis work, to avoid
making an impact on the main database server. By implementing a Virident FlashMAX on the main
server, however, the need for these secondary servers can be greatly diminished or eliminated entirely.
The limiting performance factor in most database servers is the storage subsystem. The FlashMAX has
enough IO performance and flexibility to support on-line analysis on live databases without the need
for secondary copies.
Conclusion
Adding the Virident FlashMAX to SQL Server architectures is the simplest, most cost-effective way of
increasing application performance. FlashMAX reduces capital and operating expenses by removing noisy,
power-hungry and failure-prone hard drives. With the additional IO power that FlashMAX provides, more
users can be served by fewer SQL Server installations.
2012, Virident Systems, Inc. All Rights Reserved.