Oracle 11G New Features For Dbas: Arup Nanda
Arup Nanda
Agenda
- Tons of new features in 11g. It's not "new" anymore; plenty of material available: blogs, articles, books.
- Compelling reasons for upgrade; biggest bang for the buck.
- Exclusively for DBAs, not developers.
Coverage
- Only the most valuable features.
- Stress on "how to use", rather than syntax.
- Companion material: "Oracle Database 11g: The Top New Features for DBAs and Developers" on OTN.
http://www.oracle.com/technology/pub/articles/oracle-database-11g-topfeatures/index.html
Database Replay
Change is the only constant
- What happens when you change something: init params, storage, O/S, kernel params?
- There are always risks in a change.
- You can mitigate them by subjecting the changed system to a very similar workload and comparing the results.
- The keyword is "similar workload": load generators do not have the fidelity.
A True Test
- Target system is similar to the Subject system: same O/S, same DB version, same data, etc.
- Capture the SQL statements on the Subject, in the order they happened, with the same bind variables; ship the capture files (e.g. via ftp) to the Target and apply them there.
- Capture files can reside on a filesystem or in ASM; a standby database can serve as the Target.
- Target and Subject are identical except for the one variable you want to test, e.g. the O/S.

[Diagram: Subject DB -> capture -> capture files -> ftp -> apply on Target]
Compared to QA Tools
How does it compare to QA tools like Load Runner?
- QA tools use a synthetic workload, i.e. the SQLs you provide to them.
- DBR uses the real SQLs that ran: good, bad and ugly. That's why it's called Real Application Testing (RAT).
- QA tools measure end to end: web server to app server to DB. DBR measures only the DB performance.
Caveats
- DBR captures only the SQLs executed in the database, not the activity in the apps, such as clicks.
- No guarantee of elapsed time between SQLs.
- Concurrency of statements is not guaranteed.
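As a sketch of the capture/replay flow described above (the directory object and capture/replay names are hypothetical; DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY are the 11g packages behind DBR):

```sql
-- On the subject (production) database:
exec dbms_workload_capture.start_capture(name => 'cap1', dir => 'CAP_DIR');
--   ... let the real workload run ...
exec dbms_workload_capture.finish_capture;

-- Copy the capture files to the target; then, on the target database:
exec dbms_workload_replay.process_capture(capture_dir => 'CAP_DIR');
exec dbms_workload_replay.initialize_replay(replay_name => 'rep1', replay_dir => 'CAP_DIR');
exec dbms_workload_replay.prepare_replay;
-- start one or more replay clients at the OS level with the wrc utility,
-- then kick off the replay:
exec dbms_workload_replay.start_replay;
```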
[Diagram: capture -> SQLs stored in a SQL Tuning Set (can be edited, exported and imported) -> apply]
Different from DR
- Both are part of RAT (Real Application Testing).
- DR captures all the SQLs; you can apply filters, but they are not very flexible.
- DR follows the sequence and repetition of the SQLs; SPA does not.
- SPA is good for individual SQL tuning; DR is for the whole DB.
Good for
- SPA is good for a single SQL or a single app, where concurrency is not important.
- Checking whether these are better: profiles, outlines, parameters (session/system).
Expanded Sub-Partitioning
New composite partitioning schemes
- Range-range: e.g. two date columns
- Hash-range: e.g. PK first, then a date
- Hash-hash: e.g. PK, then another unique key
- Hash-list: e.g. PK and discrete values
- List-range
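A minimal sketch of one of these schemes, range-range on two hypothetical date columns (table and partition names are illustrative):

```sql
create table ORDERS (
  order_id number,
  order_dt date,
  ship_dt  date
)
partition by range (order_dt)
subpartition by range (ship_dt) (
  partition P2008 values less than (to_date('2009-01-01','yyyy-mm-dd')) (
    subpartition P2008_H1 values less than (to_date('2008-07-01','yyyy-mm-dd')),
    subpartition P2008_H2 values less than (maxvalue)
  )
);
```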
Referential Partitioning
- You want to partition CUSTOMERS on the ACC_REP column.
- The column is not present on the child tables.
- Earlier option: add the column to all child tables and keep it updated. Difficult and error-prone.

[Diagram: CUSTOMERS (CUST_ID, ACC_REP), partitioned on ACC_REP]
Referential Partitioning
Partition CUSTOMERS as usual, then:

  create table SALES (
    sales_id number not null,
    cust_id  number not null,
    tot_amt  number,
    constraint fk_sales_01 foreign key (cust_id)
      references customers
  )
  partition by reference (fk_sales_01);

Partitions of SALES are created to match the partitions of CUSTOMERS.
INTERVAL Partitioning
- SALES table partitioned on SALES_DT.
- Partitions defined until SEP 2008. Before October starts, you have to create the new partition; if you don't, INSERTs will fail on Oct 1st.
- To mitigate the risk, you created a PMAX partition. Undesirable: when you finally add the OCT08 partition, you will need to split PMAX, which is highly undesirable.
Interval Partitions
  create table SALES (
    sales_id number,
    sales_dt date
  )
  partition by range (sales_dt)
  interval (numtoyminterval(1,'MONTH'))  -- one partition per month
  store in (TS1, TS2, TS3)
  (
    partition SEP08 values less than
      (to_date('2008-10-01','yyyy-mm-dd'))  -- the first partition
  );

A partition is created automatically when a new row comes in; the names of the subsequent partitions are system generated.
In the USER_TAB_PARTITIONS view, HIGH_VALUE shows the upper bound of each partition.
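To list the automatically created partitions of the SALES table above (INTERVAL is a YES/NO column added to this view in 11g):

```sql
select partition_name, high_value, interval
from   user_tab_partitions
where  table_name = 'SALES'
order  by partition_position;
```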
Physical Standby
Physical Standby Database with Real Time Apply: almost real time, with savings in CPU, etc.

In 10g:
1. Backups can be taken off the standby: less CPU load on the primary.
2. It can be opened for read-only access: good for reporting.
3. But while it is open, recovery stops, defeating the purpose of the standby.

So opening it for read-only access makes it miss the SLA, and the investment just sits idle: inefficient.

[Diagram: Primary -> Standby]
In 11g (an extra-cost option) you can open the database read only and then start the managed recovery process, so you meet the SLA for uptime while making efficient use of the investment:
1. Backups can be taken off the standby: less CPU load on the primary.
2. It can be open for read-only access: good for reporting.
3. Recovery continues even while the database is open for read-only access.

[Diagram: Primary -> Standby]
Comparison
10g (standby in managed recovery mode):

  alter database recover managed standby database cancel;
  alter database open read only;
  -- to resume recovery, the standby must be bounced:
  shutdown immediate
  startup mount
  alter database recover managed standby database disconnect;

11g (standby in managed recovery mode):

  alter database recover managed standby database cancel;
  alter database open read only;
  -- recovery resumes without a bounce:
  alter database recover managed standby database disconnect;
Snapshot Standby
You can open a standby as read write:

  alter database recover managed standby database cancel;
  alter database convert to snapshot standby;
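When testing is done, the snapshot standby can be converted back; a sketch of the return path (the changes made while it was read write are discarded via an implicit restore point):

```sql
shutdown immediate
startup mount
alter database convert to physical standby;
-- then restart in mount mode and resume managed recovery
```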
Other Enhancements
Easier creation: Physical -> Logical, and back to Physical:

  alter database recover to logical standby DBName;
  alter database start logical standby apply immediate;
Rolling Upgrades
1. Convert S to a logical standby.
2. Reverse the roles: P = standby, S = primary.
3. Apps move to S.
4. Stop the standby.
5. Upgrade P.
6. Reverse the roles again: P = primary, S = standby.
7. Upgrade S.
8. Convert S back to physical.
Parameter Testing
1. Capture workload from P using Database Replay.
2. Convert S to a snapshot standby.
3. Create a restore point rp1.
4. Change the parameter.
5. Replay the captured workload on S.
6. Measure performance.
7. Repeat with new values.
8. Convert S back to physical.
Another Scenario:
Typical Solutions
- Stored Outlines: force a plan; it may be a bad plan later.
- SQL Profiles: based on data; may be worse later.
- Hints: force a plan which could be worse later; not possible in canned apps.
- If enabled, Oracle stores the SQL and its plan in a repository called the SQL Management Base (SMB).
- When a new plan is generated, it is compared against the stored plan.
- If better, the new plan is implemented; else, the old plan is used (like outlines).
- The DBA can examine the plans and force a specific plan.
SQL Baselines
Similar to Stored Outlines
SQL> alter system set optimizer_capture_sql_plan_baselines = true;
- All the plan baselines are then captured.
- Don't confuse these with AWR baselines.
- Enabled: will the baseline be considered by the optimizer or not?
- Accepted: the current plan used by the optimizer.
- Fixed: the plan is fixed, i.e. the optimizer forces it; similar to outlines.
- Auto purge: after some days the plan is purged, unless accepted.
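A sketch of inspecting the stored baselines and asking SPM to verify whether a non-accepted plan performs better (the report below this slide is the kind of output the evolve call produces):

```sql
-- list stored baselines and their flags
select sql_handle, plan_name, enabled, accepted, fixed
from   dba_sql_plan_baselines;

-- verify and possibly accept better plans; returns a CLOB report
variable rpt clob
exec :rpt := dbms_spm.evolve_sql_plan_baseline;
print rpt
```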
Inputs:
-------
PLAN_LIST = SYS_SQL_PLAN_b5429522ee05ab0e

Plan: SYS_SQL_PLAN_b5429522ee05ab0e
-----------------------------------
Plan was verified: Time used 3.9 seconds.
Failed performance criterion: Compound improvement ratio <= 1.4.

                     Baseline Plan   Test Plan
                     -------------   ---------
Execution Status:    COMPLETE        COMPLETE
Rows Processed:      1               1
Elapsed Time(ms):    3396            440
CPU Time(ms):        1990            408
Buffer Gets:         7048            5140
Disk Reads:          4732            53
Direct Writes:       0               0
Fetches:             4732            25
Executions:          1               1
Testing Statistics
Scenario
- A SQL was performing well.
- You want to collect stats.
- But you hesitate: will it make things worse?
Private Statistics
1. Mark a table's stats as private (pending).
2. Collect stats; the optimizer will not see them.
3. Issue:

     alter session set optimizer_use_pending_statistics = true;

4. Now the optimizer will see the new stats in that session alone.
5. Test the SQL. If OK, publish the stats:

     dbms_stats.publish_pending_stats('Schema', 'TableName');
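Putting the steps together in one sketch (schema and table names are placeholders):

```sql
-- 1. make the table's stats pending rather than published
exec dbms_stats.set_table_prefs('SCOTT','EMP','PUBLISH','FALSE');

-- 2. gather stats; they go to the pending area, invisible to the optimizer
exec dbms_stats.gather_table_stats('SCOTT','EMP');

-- 3-4. let only this session see the pending stats, then test the SQL
alter session set optimizer_use_pending_statistics = true;

-- 5. if the plans look good, publish
exec dbms_stats.publish_pending_stats('SCOTT','EMP');
```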
Further Notes
You set a table's preference:
  dbms_stats.set_table_prefs (
    ownname => 'Schema',
    tabname => 'TableName',
    pname   => 'PUBLISH',
    pvalue  => 'FALSE'
  );
Now the table's stats will always be private until you publish them You can delete private stats:
dbms_stats.delete_pending_stats ('Schema','Table');
Stats History
SQL> desc DBA_TAB_STATS_HISTORY
 OWNER
 TABLE_NAME
 PARTITION_NAME
 SUBPARTITION_NAME
 STATS_UPDATE_TIME
Encrypted Tablespaces
- Transparent Data Encryption (TDE) allows column-level encryption.
- Performance hit: index scans can't be used, and the data has to be decrypted every time to be compared.

  select * from payments where CC# like '1234%'
[Diagram: PAYMENTS (PAY_ID, CC# encrypted, CUST_ID); with TDE column encryption the CC# column remains encrypted even in the SGA]
- All objects stored in the tablespace are encrypted, all columns.
- But when the blocks are loaded into the SGA, they are in cleartext, so index scans work well.
[Diagram: the same PAYMENTS table in an encrypted tablespace; blocks are decrypted when loaded into the SGA]
Dictionary
SQL> desc v$encrypted_tablespaces
 Name           Null?    Type
 -------------  -------  -----------
 TS#                     NUMBER
 ENCRYPTIONALG           VARCHAR2(7)
 ENCRYPTEDTS             VARCHAR2(3)
The column ENCRYPT_IN_BACKUP in V$TABLESPACE shows the encryption during RMAN backup
- Flashback query gets its information from the undo segments.
- When undo gets filled up, the information is gone: not reliable.
- Earlier solution: triggers to populate user-defined change tables.
Flashback Archive
- Stores the undo information, similar to undo segments, but permanently.

[Diagram: the FBDA background process writes history from undo into flashback archives FA1 and FA2]
Syntax
Create a flashback archive:

  create flashback archive FA1
    tablespace TS1
    retention 1 year;

Assign a table to it:

  alter table ACCOUNTS flashback archive FA1;
Comparison w/Triggers
- With triggers, you manually create the change tables and the trigger logic.
- The triggers can be disabled, making the trail legally non-binding.
- Change tables can be deleted by the DBA, so they are not immutable.
- Triggers cause a context switch; the FBDA process runs in the background with minimal impact.
- With triggers, purging is not automatic.
Usage
Just a normal flashback query:

  select * from accounts
  as of timestamp (systimestamp - interval '1' hour);

- Purge is automatic after the retention period; manual purge is possible too.
- The DBA can't modify the archived data, so it is legally binding.
Native compilation before 11g:
- Faster for the non-data-access portions of the code.
- Requires a C compiler, usually not available on production systems.
11g Way
SQL> alter session set plsql_code_type = native;
SQL> alter procedure p1 compile;

The C compiler is built into the database.

Compilation time (plsql_optimize_level = 2):

        Interpreted   Native
  10g   1.66          4.66
  11g   1.64          2.81
Computation intensive code will benefit. Data manipulation code will not.
Caching
- A query is often executed against tables that do not change much.
- Typical solution: materialized views.
  - Results are already available; no need to re-execute the query.
  - Results could be stale; they are not updated unless refreshed.
  - The underlying data may not have changed, but the MV doesn't know that unless fast refresh is used.
- Not practical.
Result Cache
  select /*+ result_cache */ ...

- The results of the query are stored in the Result Cache, a new area in the SGA.
- result_cache_max_size sets the size of the RC.
- The query executes as usual if no cached result is found.
- The cache is refreshed automatically when the underlying data changes.
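A sketch using the hint (the SALES columns are hypothetical); the second execution of the same query is answered from the cache, and the DBMS_RESULT_CACHE package reports on cache usage:

```sql
select /*+ result_cache */ cust_id, sum(tot_amt)
from   sales
group by cust_id;

-- inspect what is cached
select type, status, name from v$result_cache_objects;

-- print a memory report of the result cache
set serveroutput on
exec dbms_result_cache.memory_report;
```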
DDL Waits
Session 1:

  update t1 set col1 = 2;

Session 2:

  alter table t1 drop column col2;
  ERROR at line 1:
  ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

In a busy system you may never get the exclusive lock. In 11g:

  alter session set ddl_lock_timeout = 15;

This makes the session wait 15 seconds for the lock before erroring out with ORA-54.
Trigger Execution
- You have three before-insert triggers: tr1, tr2 and tr3.
- How do you make sure they fire in that sequence? Now, in 11g, you can:

  create trigger tr3
  before insert on TableName
  follows tr2
  begin
    ...
Upgrade Advice
1. Use snapshot standby to test your upgrade process.
2. Use Workload Capture in 10g and replay on the snapshot standby.
3. Modify parameters, replay, and modify again: repeat until you get it right.
4. Use SQL Performance Analyzer to test the handful of errant queries.
5. Use SQL Baselines to fix them.
Thank You!