This is the Article (White Paper) that accompanied my Presentation "Partitioning Tables and Indexing Them" (also available on SlideShare) at AIOUG Sangam 11.
This document discusses partitioning tables and indexing them in Oracle databases. It covers the different types of partitioning including range, list, hash, and composite partitioning. It provides examples of creating partitioned tables and indexes. It also discusses strategies for maintaining partitioned tables, including adding, dropping, splitting, merging and exchanging partitions. It recommends different partitioning and indexing approaches for optimizing query performance and archiving old data.
Partitioning Tables and Indexing Them --- Article
(c) Hemant K Chitale
http://hemantoracledba.blogspot.com
Partitioning Tables and Indexing Them
Hemant K Chitale
Standard Chartered Bank
Introduction
Twenty years ago, databases were not very large; they did not support very many users.
Backup (and Recovery) requirements were simple. However, Oracle recognized the advent
of large datasets and introduced the VLDB (“Very Large Database”) terminology in the
Oracle8 documentation. Oracle Partitioning was made available as an optional feature to
facilitate handling of large tables.
This paper is an *Introduction* to Partitioning. Complex Partitioning types and advanced
commands in Partition maintenance are not in the scope.
Examples of much of the material in this paper are published in the Appendix pages.
Pre-Oracle8 (and for those without the Partitioning Option installed)
Prior to Version 8, “home-grown” partitioning implementations by DBAs and Consultants
were based on UNION-ALL Views on physically separate tables. Even now, if you do not
purchase the Partitioning Option with the Enterprise Edition, you have to build multiple
tables and “join” them together with views. Such partitioning is not transparent --- DML like
INSERT, UPDATE and DELETE must explicitly specify the base table being manipulated,
although queries can be executed across all the tables using the view.
In V7.3 you could also define Partition Views with explicit Check Constraints to identify the
base table. [See the References in the Appendix]
Elements of Table Partitioning
A Table is Partitioned into separate “pieces” on the basis of the values stored in one or more
columns, known as the “Partition Key”. The Partition Key unambiguously specifies which
“piece” or Partition a row will be stored in (a MAXVALUE Partition in Range partitioning
and a DEFAULT Partition in List partitioning can catch “undefined” values). Each Partition within the
table is a separate Object and a separate Segment. (Similarly, if you extend Partitions into
SubPartitions in Composite Partitioning, each SubPartition also is distinct from the “parent”
Partition and Table).
Each Partition (and SubPartition) of a Table has the same logical attributes (Column Names,
DataTypes, Scale and Precision). Constraints defined for the Table apply to all Partitions.
However, Partitions can have distinct physical attributes (Tablespace for Storage, Extent
Sizes, Compression). Note: If you create SubPartitions, then no Partition Segments are
created because the physical storage is now in the SubPartitions.
[See the References in the Appendix]
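A quick way to confirm that each Partition is a distinct Segment is to query the data
dictionary; a minimal sketch (the table name is illustrative):

SELECT segment_name, partition_name, segment_type, tablespace_name
FROM user_segments
WHERE segment_name = 'SALES_HISTORY'
ORDER BY partition_name;

For a partitioned table, each row returned is a Segment of type TABLE PARTITION (or
TABLE SUBPARTITION, if SubPartitions exist).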
Common Types of Partitioning
Although 11g has introduced some more “complex” partitioning definitions, the most
common ones still are:
o Range Partitioning (introduced in V8 and still the most popular)
o Hash Partitioning (introduced in 8i but much less used than Range and List)
o List Partitioning (introduced in 9i)
o Composite Partitioning (Range-Hash, Range-List, List-Range, List-Hash, etc.)
11g has expanded the list of combinations for Composite Partitioning and added these
complex definitions:
o Virtual Column Partitioning
o Reference Partitioning
o Interval Partitioning (as an extension to Range Partitioning)
Here, I shall be touching on only the simpler Partitioning methods as an introduction to
Partitioning.
Range Partitioning
In Range Partitioning, each Partition is defined by an upper bound for the values in the
Partition Key column(s). Thus, data is mapped to a Partition based on the values in the
Partition Key column(s) vis-à-vis the boundary values for the Partition Keys. Each Partition
thus holds values from the upper bound of the previous Partition (inclusive) up to, but not
including, its own upper bound. Typically Range Partitioning is used
against DATE columns (and Partitioning by Date has been the most popular method).
Range Partitioning is also the method to consider when you need a multi-column
Partition Key (just as a Primary Key can be a composite of multiple columns).
Multiple columns can be concatenated together to act as the Partition Key. In such a case,
Oracle matches the values of each incoming row in the order of the columns: the second Key
column is examined only when the first Key column alone does not decide the target Partition
(i.e. it equals a boundary value), and so on.
[See the example in the Appendix]
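The Appendix example demonstrates a multi-column Partition Key; a minimal sketch of the
more common single-DATE-column case (table name illustrative) would be:

create table SALES_HISTORY
(sale_date date not null, sale_qty number)
partition by range (sale_date)
(partition P_2010 values less than (to_date('01-JAN-2011','DD-MON-YYYY')),
 partition P_2011 values less than (to_date('01-JAN-2012','DD-MON-YYYY')),
 partition P_MAX  values less than (MAXVALUE)
);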
Hash Partitioning
In Hash Partitioning, you are only allowed to specify the Partition Key and the number of
Partitions (i.e. you do not define the values to be used for partitioning). Hash Partitioning is
used when you need to partition the table into smaller “pieces” but cannot identify how the
rows are to be distributed across the Partitions. Oracle applies a Hashing algorithm to the
values in the Partition Key in an attempt to distribute rows across the defined Partitions. This
means that the Partition Key should have a large number of distinct values. Ideally, you would
define the number of Partitions to be a power of 2 – i.e. 2^N.
[See the examples in the Appendix]
List Partitioning
List Partitioning is useful where you have a well-defined list of possible values for the
Partition Key. Every row in a Partition will then have a Partition Key value from that
Partition's defined list (often a single value per Partition).
Note that List Partitioning, unlike Range Partitioning, is restricted to a single column. 11g
now allows composite List-List Partitioning where the SubPartitions can be based on a list of
secondary column values.
[See the example in the Appendix]
Choosing the Partitioning Method
Why would you need to / want to partition a table? What is the nature of the data in the
table? The answers to these questions should drive your selection. Here are some pointers:
1. Range Partitioning by a DATE column allows you to archive out older records simply
by exporting the oldest partition and then truncating (or dropping) it (see the sketch
after this list).
2. Similarly, Date Ranged Partitions containing “old” data can be set to Read Only and
backed up less frequently (by creating them in separate Tablespaces and, on expiry of
the desired period – e.g. 3 years – setting the Tablespaces to Read Only).
3. If you periodically load “new” data (by date) and purge “old” data, Date Range
Partitioning makes sense.
4. If you simply need to randomly distribute data across multiple “pieces” (each
Partition could be in a separate Tablespace on a separate set of disks as well), without
relation to a business or logical view, you may consider using Hash Partitioning.
Note that Range-Scan queries on the Partition Key cannot be pruned to specific
Partitions in a Hash Partitioned table – to benefit, queries must use Equality Predicates.
5. List Partitioning works for a small, discrete and “static” list of possible values for the
Partitioning Key. There may be a few changes where you might occasionally add a
new Partition because a new value for the Partition Key is expected to be inserted into
the table.
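As a sketch of pointers 1 and 2 (table and tablespace names illustrative):

-- archive out the oldest partition (e.g. after exporting it), then remove it
alter table SALES_HISTORY truncate partition P_2010;
-- or: alter table SALES_HISTORY drop partition P_2010;

-- after a final backup, set an "old" partition's tablespace to Read Only
alter tablespace TBS_2010 read only;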
Examples
Let’s look at some examples:
A. Sales data is loaded into a SALES_FACT table and needs to be retained for only 3
years. Data is loaded daily or weekly. The table can be Date Range Partitioned by
Month with a new Partition created each month (a new Partition at the upper end is
"added" by executing a SPLIT PARTITION against the MAXVALUE Partition).
B. Historical data of transactions needs to be preserved for 7 years. However, data once
inserted will not be updated. Data older than 1 year will be very rarely referenced.
The data can be stored in a Date Range Partitioned table where data is maintained
as follows:
a. Data for years before 2004 has been purged by truncating those Partitions.
b. Data for years 2004 to 2007 is in Partitions that have 1-year ranges (thus 4
Partitions) which have been moved to one or more Tablespaces that are on Tier-3
storage. Optionally, the COMPRESS attribute has also been set for these
Partitions. These Partitions (i.e. Tablespaces) are then set to READ ONLY
after a backup. Thus, all subsequent BACKUP DATABASE runs do not need
to back up this data repeatedly.
c. Data for years 2008 to 2009 is in Partitions that are 1-year or Quarter-year
ranges in Tablespaces on Tier-2 storage. Optionally, the COMPRESS
attribute has been set for these Partitions.
d. Data for years 2010 and 2011 is in Partitions that are Quarter-year ranges in
Tablespaces on Tier-1 storage.
C. Employee information is to be stored by State. The Table can be List Partitioned as
the State names are well defined and unlikely to change frequently.
D. Statistical information from manufacturing equipment needs to be stored. The range
of possible values is very high and the values cannot be predicted. Data will be
queried by equality predicates (e.g. "DATA_VALUE IN (1,3,5,7)"). Hash
Partitioning may distribute the data equally across multiple Partitions.
Adding Data in Partitioned Tables
Single Row Inserts
When you insert data into a Partitioned Table, you do not need to specify the name of the
target Partition. Oracle dynamically determines the target Partition from the value of
the incoming Partition Key column(s). Thus, a simple INSERT INTO tablename … will
suffice. However, you can choose to optionally specify the Partition but Oracle will,
nevertheless, check the value being inserted – if you name a Partition where the defined
Partition Key values do not match the row being inserted, Oracle will raise an error. The
examples on Range Partitioning and Hash Partitioning in the Appendix show how Oracle
automatically maps the target Partition name and inserts the row into the “correct” Partition.
[See the example in the Appendix]
Bulk Inserts
Parallel DML and Direct Path INSERT (the “APPEND” Hint in SQL) can both be used.
Oracle can use separate Parallel Slaves to insert into distinct target Partitions. When
attempting such Bulk operations against a specific (named) Partition, only that target
Partition is locked; the rest of the table is not locked. DML against rows in other Partitions is
permitted. This is a significant advantage of Partitioning. Direct Path operations against a
non-partitioned table would lock the entire table.
[See the example in the Appendix]
“Switching” data from a non-partitioned Table to a Partitioned Table
The ALTER TABLE tablename EXCHANGE PARTITION partitionname syntax allows you to
“switch” a non-partitioned Table with a Partition of an existing Partitioned Table. Obviously,
the logical structure of the Tables must match. Similarly, DBMS_REDEFINITION can be
used to “convert” a non-partitioned Table into a Partitioned Table by creating the Partitioned
Table definition as the “interim” Table and copying rows from the existing (non-partitioned)
Table to the new Partitioned Table.
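A hedged sketch of the DBMS_REDEFINITION flow (the schema and table names are
illustrative; SALES_PART is the pre-created Partitioned "interim" definition):

declare
  l_errors pls_integer;
begin
  -- verify the table can be redefined (by Primary Key, the default)
  DBMS_REDEFINITION.CAN_REDEF_TABLE('PART_DEMO','SALES');
  -- start copying rows to the interim (Partitioned) table
  DBMS_REDEFINITION.START_REDEF_TABLE('PART_DEMO','SALES','SALES_PART');
  -- copy indexes, triggers, constraints and grants to the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('PART_DEMO','SALES','SALES_PART',
      num_errors => l_errors);
  -- swap the two definitions in the data dictionary
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('PART_DEMO','SALES','SALES_PART');
end;
/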
Maintaining Partitioned Tables
Oracle provides a number of maintenance commands for Partitions: ADD, DROP,
COALESCE, TRUNCATE, SPLIT, MERGE, EXCHANGE and MOVE are commonly used.
ADD
The ALTER TABLE tablename ADD PARTITION partitionname can be used in Range, List
and Hash Partitioning. Thus, you can “expand” the list of Partition Keys to allow for new
data.
If the table is Range Partitioned, you can ADD a new Partition only at the “high” end after
the last existing Partition. Otherwise, you SPLIT to “create” a new Partition out of an
existing Partition.
If the List Partitioned Table already has a DEFAULT Partition, you need to SPLIT the
DEFAULT to create a Partition for the new Partition Key value.
Note that for a Hash Partitioned table, Oracle has to recompute Hash values and reallocate
the data from an existing Partition. So adding a new Partition takes significant time and
resources and, importantly, results in "unbalanced" Partitions.
[See the example in the Appendix]
DROP or TRUNCATE
When you DROP a Table Partition, you physically “remove” the Partition. Data that was
mapped to the Partition Key boundaries is no longer accessible. In a Range Partitioned
Table, if you re-INSERT the same Partition Key, you will find the rows going into the “next”
Partition. In a List Partitioned Table, data would fail to re-insert unless a DEFAULT
Partition has been created.
COALESCE, SPLIT and MERGE
These commands can be used against adjacent Partitions or to create adjacent Partitions. (In
Hash Partitioning, the COALESCE is used to reduce the number of Partitions).
MAXVALUE and DEFAULT Partitions in Range and List Partitioning can be SPLITted to
“create” new Partitions.
EXCHANGE
The EXCHANGE Partition command is useful to “merge” a non-Partitioned Table into an
existing Partitioned Table. Thus, an empty Partition is exchanged with the populated table.
Oracle simply updates the data dictionary information, causing the Partition segment to
become an independent Table segment and the non-Partitioned Table to become a Partition.
Similarly, EXCHANGE can be used to “move” data out of an existing Partitioned Table so
that it can be copied or transported to another database.
[See the example in the Appendix]
DBMS_REDEFINITION and MOVE
DBMS_REDEFINITION allows you to "move" a Partition – e.g. from one Tablespace to
another – as an "online" operation. Alternatively, the simple ALTER TABLE tablename MOVE
PARTITION partitionname command can be used to make an "offline" move. Note that if
the table is SubPartitioned, then the physical segments are the SubPartitions – so it is the
SubPartitions that must be moved, not the Partitions.
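A minimal sketch of such "offline" moves (the tablespace name is illustrative):

alter table SALES_HISTORY move partition P_2010 tablespace TIER3_TBS;
-- for a Composite Partitioned table, move the SubPartition segments instead:
alter table SALES_TABLE move subpartition P_2010_S_EAST tablespace TIER3_TBS;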
Maintaining Indexes
Indexes against a Partitioned Table may be
o Global
o Global Partitioned
o Local
Global Indexes
Global Indexes are the “regular” indexes that you would create on a non-partitioned table.
Typically Global Indexes are useful in an OLTP environment where queries against the
Partitioned Table are not restricted to specific Partitions and/or do not specify the Partition
Key that Oracle can use for automatic Partition Pruning.
Any Partition Maintenance operation (SPLIT, TRUNCATE, MERGE) can cause a Global
Index to be made Invalid. Since 10g, Oracle has added the keywords “UPDATE GLOBAL
INDEXES” to Partition Maintenance commands so as to update the Global Indexes as well.
If you do not include this clause, you may find Global Indexes to be marked UNUSABLE.
This is because, while the Partition Maintenance operation is executed as DDL, the updates
to a Global Index must be executed as normal DML.
[See the example in the Appendix]
Global Partitioned Indexes
Global Partitioned Indexes are indexes that, while partitioned, are not partitioned along the
same key as the table. Thus, with respect to the Table Partitions, they are still “Global”. A
Partition of a Global Partitioned Index may have keys referencing none, one or multiple
partitions of the table --- there is no one-to-one correspondence between Index Partitions and
Table Partitions.
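A minimal sketch (index and partition names illustrative), against the
MONTH_END_BALANCES table of the Appendix, which is itself partitioned by
PARTITION_KEY:

create index MEB_BAL_GNDX on MONTH_END_BALANCES (balance)
global partition by range (balance)
(partition I_P1 values less than (100000),
 partition I_P2 values less than (MAXVALUE)
);

Note that the last Partition of a Global Range Partitioned Index must have a MAXVALUE
upper bound.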
Local Index
A Local Index (what I call “Locally Partitioned Index”) is Equi-Partitioned with the Table.
For every Table Partition, there is a corresponding Index Partition and vice-versa. Entries in
an Index Partition will refer only to rows in the corresponding Table Partition. Thus, a scan
of a particular Index Partition will never have to read rowids for rows not in the matching
Table Partition.
A Local Index is ideally usable in DWH/DSS environments.
Note that if the Index is to be a Unique Index, it *must* include the Partition Key column(s)
because the Index does not reference any rows outside the corresponding Table Partition so it
has to use the Partition Key columns to help maintain Uniqueness across the Table.
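A minimal sketch, reusing the MONTH_END_BALANCES table of the Appendix (the index
name is illustrative):

create unique index MEB_UK_NDX
on MONTH_END_BALANCES (partition_key, account_number) LOCAL;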
Partition Maintenance operations against the Table are automatically applied to the
corresponding Index Partitions – the DBA does not have to explicitly specify the Index
Partitions.
Thus an ALTER TABLE tablename DROP PARTITION partitionname
automatically drops the corresponding Index Partitions as well without making the Index
UNUSABLE.
However, depending on the nature of the Partitioning and the
presence/absence of rows, actions like SPLIT and MERGE may cause Index Partitions to be
left UNUSABLE unless the UPDATE INDEXES clause is used. The number of possible
combinations is too numerous to enumerate here (ADD, DROP, TRUNCATE, SPLIT,
MERGE against Range, List, Hash and Composite Partitioned tables, against regular
Partitions, the DEFAULT Partition or MAXVALUE partition, with or without rows).
[See the example in the Appendix]
Performance Strategies
Here are some performance strategies:
1. Ideally, you want to be able to use Partition Pruning when querying subsets of the
data. Thus, when querying for only a month’s data in a table containing 5 years data,
the DATE column would be the Partition Key and the query would specify the DATE
column as a query predicate.
2. When doing a Bulk Insert into a Partitioned Table, use Parallel and Direct Path
Inserts. Consider NoLogging but be aware of the pitfalls.
3. If a query submitted by the application cannot supply the Partition Key as a predicate,
a Local Index is not usable – consider creating a Global Index (but note the
maintenance overheads when doing Partition Maintenance).
4. If two tables are to be joined frequently, consider defining Equi-Partitioning on both
tables – i.e. have the same Partition Key and boundaries. Oracle can then execute
Partition-Wise Joins.
5. If attempting to EXCHANGE PARTITION with a Table where a Unique Constraint
and Index is to be maintained, create the Unique Index as a Local index on the
Partitioned Table (as well as a “normal” Unique Index on the Exchange Table). Then
execute ALTER TABLE tablename DISABLE CONSTRAINT constraintname KEEP
INDEX on the exchange table *before* attempting the EXCHANGE with
INCLUDING INDEXES NOVALIDATE (see the sketch after this list). (Remember to
ensure that the rows in the table *are* Unique with the Constraint Enabled up to the
point you attempt the EXCHANGE PARTITION!!)
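A hedged sketch of pointer 5 (constraint and table names are illustrative; the exchange
clause is written INCLUDING INDEXES ... WITHOUT VALIDATION):

alter table STAGING_BALANCES_TABLE disable constraint STG_UK keep index;

alter table MONTH_END_BALANCES
  exchange partition P_2011_MAY with table STAGING_BALANCES_TABLE
  including indexes without validation;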
If you have data that is Partitioned but your queries are not doing Partition Pruning (filtering
by the Partition Key as a predicate), you should seriously reconsider your Partitioning design.
Also remember that range-based queries cannot be used against a Hash Partitioned table.
Archiving Data
As has been pointed out, Partitioning by a DATE is the right fit for Archival requirements.
“Old” data is automatically stored in Partitions that identify the age of the data. These
Partitions can then be exported from the database and preserved as backups outside of the
database. Alternatively, they can be moved to Tier-2 or Tier-3 storage on slower/cheaper
disks (by placing them in Tablespaces with Datafiles on such storage). The oldest partitions
can be TRUNCATEd when the data is no longer needed.
Remember that you can also use SPLIT and MERGE commands to manage Range Partitions.
Thus, as data gets "older" you can "merge" the Month Partitions containing data that is 36-40 months old into Quarter Partitions. Over time, you can merge older Quarter Partitions
into Year Partitions. The oldest Year partitions can then be archived or moved to Tier-2 or
Tier-3 storage.
[See the example in the Appendix]
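A minimal sketch of merging Month Partitions into a Quarter Partition (names illustrative;
MERGE PARTITIONS works on two adjacent Partitions at a time):

alter table SALES_FACT
  merge partitions P_2009_JAN, P_2009_FEB
  into partition P_2009_JAN_FEB;
-- a second merge folds in March to complete the Quarter:
alter table SALES_FACT
  merge partitions P_2009_JAN_FEB, P_2009_MAR
  into partition P_2009_Q1;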
Common Mistakes
Some common mistakes that I see are:
1. Using the wrong Partitioning type – particularly Range where List should have
been used.
An example I came across recently was:
Two columns from a table
YEAR NUMBER(4) NOT NULL,
PERIOD NUMBER(2) NOT NULL,
Partition Key definitions
PARTITION BY RANGE (YEAR)
SUBPARTITION BY HASH (PERIOD)
Ideally, this would be Range Partitioning on (YEAR,PERIOD) OR List-List
Composite Partitioning in 11gR2. (In fact, we might even ask why these two
columns are NUMBER columns and take a more detailed look at the Schema
design).
2. Not defining all the proper Ranges causing “undefined” values to be entered into
an unexpected Range Partition – this can be particularly troublesome if you were
to TRUNCATE partitions without verifying the data. In the Range Partitioning
example, I showed how rows for ‘JP’ were saved in the ‘SG’ partition. If the
DBA were to TRUNCATE the 'SG' partition, he would lose the 'JP' data as
well! This is very dangerous!
3. Not defining the right boundaries causing Partitions to be unequally sized – Hash
Partitioning might be preferable if the Partition boundaries cannot be explicitly
identified.
4. Attempting to move data from one Partition to another. By default, ROW
MOVEMENT is not enabled – Oracle does not expect rows to move between
partitions. If rows have to move between partitions frequently, it would indicate
an incorrectly defined Partitioning schema (somewhat akin to frequent updates to
a Primary Key indicating an incorrectly defined Primary Key).
5. Incorrectly creating LOCAL indexes when GLOBAL indexes are required – for
queries that do not do Partition Pruning. Such queries end up scanning each Index
Partition separately or doing a Full Table Scan, completely bypassing the Index.
6. Defining a GLOBAL index on a DWH/DSS table where all DML and queries are
against targeted partitions. DML requires updates to the GLOBAL index and
Partition Maintenance operations can result in the GLOBAL index being
UNUSABLE.
Conclusion
Oracle Partitioning has varied uses in Performance, Data Management, and Data Warehousing
(quick loading of data). It can be used concurrently with Parallel Query, Direct Path
operations and, when necessary, NoLogging commands. However, care must be taken to
define the *correct* Partitioning method.
Appendix: References and Examples
References : Partition Views
1. There’s a document “Handling large datasets - Partition Views (Nov 1996)” by
Jonathan Lewis which explains the concept.
2. Also see Oracle Support articles
a. “Partition Views in 7.3: Examples and Tests [ID 43194.1]”
b. “How to convert 7.3.X partition view tables to 8.X partition tables [ID
1055257.6]”).
Elements of Table Partitions
This demonstration shows how Partitions and SubPartitions (implementing Range-List
Composite Partitioning) appear as Objects and Segments that are distinct from the Table.
SQL> connect PART_DEMO/PART_DEMO
Connected.
SQL>
SQL> drop table SALES_TABLE;
Table dropped.
SQL> purge recyclebin;
Recyclebin purged.
SQL>
SQL>
create table SALES_TABLE(sale_date date not null, region varchar2(8), sale_qty number)
partition by range (sale_date) subpartition by list (region)
(
partition p_2010 values less than (to_date('01-JAN-2011','DD-MON-YYYY'))
(subpartition p_2010_s_east values ('EAST'),
subpartition p_2010_s_north values ('NORTH'),
subpartition p_2010_s_south values ('SOUTH'),
subpartition p_2010_s_west values ('WEST')
)
,
partition p_2011 values less than (to_date('01-JAN-2012','DD-MON-YYYY'))
(subpartition p_2011_s_east values ('EAST'),
subpartition p_2011_s_north values ('NORTH'),
subpartition p_2011_s_south values ('SOUTH'),
subpartition p_2011_s_west values ('WEST')
)
)
/
Table created.
SQL>
SQL>
select object_id, object_name, subobject_name, object_type
from user_objects
order by object_type, object_name, subobject_name
/
 OBJECT_ID OBJECT_NAME          SUBOBJECT_NAME            OBJECT_TYPE
---------- -------------------- ------------------------- -------------------
     54889 SALES_TABLE                                     TABLE
     54890 SALES_TABLE          P_2010                     TABLE PARTITION
     54891 SALES_TABLE          P_2011                     TABLE PARTITION
     54892 SALES_TABLE          P_2010_S_EAST              TABLE SUBPARTITION
     54893 SALES_TABLE          P_2010_S_NORTH             TABLE SUBPARTITION
     54894 SALES_TABLE          P_2010_S_SOUTH             TABLE SUBPARTITION
     54895 SALES_TABLE          P_2010_S_WEST              TABLE SUBPARTITION
     54896 SALES_TABLE          P_2011_S_EAST              TABLE SUBPARTITION
     54897 SALES_TABLE          P_2011_S_NORTH             TABLE SUBPARTITION
     54898 SALES_TABLE          P_2011_S_SOUTH             TABLE SUBPARTITION
     54899 SALES_TABLE          P_2011_S_WEST              TABLE SUBPARTITION
SQL>
Note: In 11g with "deferred_segment_creation" set to TRUE, segments are not created until
at least one row is inserted into each “target” segment.
Range Partitioning
This demonstration shows how Oracle matches the incoming values against the Partition Key
boundaries. Notice how the records for ‘JP’ and ‘US’ go into the ‘SG’ and ‘MAX’ partitions
ignoring the acctg_year value after the first column of the partition key is “matched”.
SQL> drop table ACCOUNTING ;
Table dropped.
SQL> purge recyclebin;
Recyclebin purged.
SQL>
SQL>
create table ACCOUNTING
(biz_country varchar2(10) not null, acctg_year number not null, data_1 varchar2(20))
partition by range (biz_country, acctg_year)
(
partition p_in_2006 values less than ('IN',2007),
partition p_in_2007 values less than ('IN',2008),
partition p_in_2008 values less than ('IN',2009),
partition p_sg_2006 values less than ('SG',2007),
partition p_sg_2007 values less than ('SG',2008),
partition p_sg_2008 values less than ('SG',2009),
partition p_max values less than (MAXVALUE, MAXVALUE)
)
/
Table created.
SQL>
SQL> insert into ACCOUNTING values ('IN',2007,'Row 1');
1 row created.
SQL> insert into ACCOUNTING values ('IN',2008,'Row 2');
1 row created.
SQL> insert into ACCOUNTING values ('JP',2007,'Row 3');
1 row created.
SQL> insert into ACCOUNTING values ('JP',2015,'Row 4');
1 row created.
SQL> insert into ACCOUNTING values ('US',2006,'Row 5');
1 row created.
SQL> insert into ACCOUNTING values ('US',2009,'Row 6');
1 row created.
SQL>
SQL> select * from ACCOUNTING partition (p_in_2006);
no rows selected
SQL> select * from ACCOUNTING partition (p_in_2007);
BIZ_COUNTR ACCTG_YEAR DATA_1
---------- ---------- --------------------
IN               2007 Row 1

SQL> select * from ACCOUNTING partition (p_in_2008);

BIZ_COUNTR ACCTG_YEAR DATA_1
---------- ---------- --------------------
IN               2008 Row 2

SQL> select * from ACCOUNTING partition (p_sg_2006);

BIZ_COUNTR ACCTG_YEAR DATA_1
---------- ---------- --------------------
JP               2007 Row 3
JP               2015 Row 4

SQL> select * from ACCOUNTING partition (p_max);

BIZ_COUNTR ACCTG_YEAR DATA_1
---------- ---------- --------------------
US               2006 Row 5
US               2009 Row 6
SQL>
Had I used List Partitioning, I would have avoided ‘JP’ and ‘US’ going into the ‘SG’
Partition (they could have been sent to a DEFAULT Partition). However, when I want to
define a multi-column Partition Key, I cannot use List Partitioning --- List Partitioning is
restricted to a single column. Therefore, I use Range Partitioning for a 2-column key. But as
the example shows, I must be very careful with my Partition bounds and know what data is
getting inserted!
Note : 11g now supports composite List-List Partitioning. So you could address this issue in
11g – but remember it is still only 2 levels, what if you wanted a third column in your
Partition Key ?
Hash Partitioning
This example shows how it is important to match the number of Partitions to the range of
values. A mismatch causes unbalanced Partitions.
SQL> drop table MACHINE_DATA ;
Table dropped.
SQL> purge recyclebin;
Recyclebin purged.
SQL>
SQL>
create table MACHINE_DATA
(machine_id number not null, data_value number not null, read_date date)
partition by hash (data_value)
(partition P_1, partition P_2, partition P_3, partition P_4, partition P_5)
/
Table created.
SQL>
SQL> -- first example of poor distribution
SQL> -- 8 distinct data_values are created
SQL> insert into MACHINE_DATA
select mod(rownum,10), mod(rownum,8), sysdate+rownum/1000000
from dual connect by level < 1000000
/

999999 rows created.
SQL>
SQL> exec
dbms_stats.gather_table_stats('','MACHINE_DATA',estimate_percent=>100,granularity=>'ALL');
PL/SQL procedure successfully completed.
SQL>
SQL>
select partition_name, num_rows
from user_tab_partitions
where table_name = 'MACHINE_DATA'
order by partition_position
/
PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
P_1                                125000
P_2                                124999
P_3                                250000
P_4                                500000
P_5                                     0
SQL>
SQL>
select data_value, count(*)
from MACHINE_DATA
group by data_value
order by data_value
/
DATA_VALUE   COUNT(*)
---------- ----------
         0     124999
         1     125000
         2     125000
         3     125000
         4     125000
         5     125000
         6     125000
         7     125000

8 rows selected.
SQL>
SQL>
SQL> -- second example of poor distribution
SQL> -- 10500 distinct data_values are created
SQL> drop table MACHINE_DATA ;
Table dropped.
SQL> purge recyclebin;
Recyclebin purged.
SQL>
SQL>
create table MACHINE_DATA
(machine_id number not null, data_value number not null, read_date date)
partition by hash (data_value)
(partition P_1, partition P_2, partition P_3, partition P_4, partition P_5)
/
Table created.
SQL>
SQL>
insert into MACHINE_DATA
select mod(rownum,10), mod(rownum,10500), sysdate+rownum/1000000
from dual connect by level < 1000000
/
999999 rows created.
SQL>
SQL> exec
dbms_stats.gather_table_stats('','MACHINE_DATA',estimate_percent=>100,granularity=>'ALL');
PL/SQL procedure successfully completed.
SQL>
SQL> select partition_name, num_rows
from user_tab_partitions
where table_name = 'MACHINE_DATA'
order by partition_position
/
PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
P_1                                128669
P_2                                252465
P_3                                251151
P_4                                248675
P_5                                119039
SQL>
SQL>
SQL> -- third example with 2^N partitions
SQL> -- 10500 distinct data_values are created
SQL> drop table MACHINE_DATA ;
Table dropped.
SQL> purge recyclebin;
Recyclebin purged.
SQL>
SQL>
create table MACHINE_DATA
(machine_id number not null, data_value number not null, read_date date)
partition by hash (data_value)
(partition P_1, partition P_2, partition P_3, partition P_4,
partition P_5, partition P_6, partition P_7, partition P_8)
/
Table created.
SQL>
SQL>
insert into MACHINE_DATA
select mod(rownum,10), mod(rownum,10500), sysdate+rownum/1000000
from dual connect by level < 1000000
/
999999 rows created.
SQL>
SQL> exec
dbms_stats.gather_table_stats('','MACHINE_DATA',estimate_percent=>100,granularity=>'ALL');
PL/SQL procedure successfully completed.
SQL>
SQL>
select partition_name, num_rows
from user_tab_partitions
where table_name = 'MACHINE_DATA'
order by partition_position
/
PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
P_1                                128669
P_2                                123503
P_3                                126000
P_4                                120182
P_5                                119039
P_6                                128962
P_7                                125151
P_8                                128493

8 rows selected.
SQL>
SQL> select count(distinct(ora_hash(data_value))) from machine_data;
COUNT(DISTINCT(ORA_HASH(DATA_VALUE)))
--------------------------------------
                                 10500
SQL>
List Partitioning
This example shows a table incorrectly defined as Range Partitioned when it should have
been List Partitioned:
SQL> -- badly defined as Range Partitioned
SQL> drop table MONTH_END_BALANCES;
Table dropped.
SQL> purge recyclebin;
Recyclebin purged.
SQL>
create table MONTH_END_BALANCES
(Partition_Key varchar2(8) not null, account_number number, balance number)
partition by Range (Partition_Key)
(partition P_2011_JAN values less than ('20110132'),
partition P_2011_FEB values less than ('20110229'),
partition P_2011_MAR values less than ('20110332'),
partition P_2011_APR values less than ('20110431'),
partition P_2011_MAY values less than ('20110532'),
partition P_2011_JUN values less than ('20110631'),
partition P_2011_JUL values less than ('20110732'),
partition P_2011_AUG values less than ('20110832'),
partition P_2011_SEP values less than ('20110931'),
partition P_2011_OCT values less than ('20111032'),
partition P_2011_NOV values less than ('20111131'),
partition P_2011_DEC values less than ('20111232')
)
/
Table created.
SQL>
SQL> -- correctly defined as List Partitioned
SQL> drop table MONTH_END_BALANCES;
Table dropped.
SQL> purge recyclebin;
Recyclebin purged.
SQL>
create table MONTH_END_BALANCES
(Partition_Key varchar2(6) not null, account_number number, balance number)
partition by List (Partition_Key)
(partition P_2011_JAN values ('201101'),
partition P_2011_FEB values ('201102'),
partition P_2011_MAR values ('201103'),
partition P_2011_APR values ('201104'),
partition P_2011_MAY values ('201105'),
partition P_2011_JUN values ('201106'),
partition P_2011_JUL values ('201107'),
partition P_2011_AUG values ('201108'),
partition P_2011_SEP values ('201109'),
partition P_2011_OCT values ('201110'),
partition P_2011_NOV values ('201111'),
partition P_2011_DEC values ('201112')
)
/
Table created.
SQL>
What is the significant difference between the two definitions? We know that even in the
first definition, all the rows in a particular Partition have the same value for the
PARTITION_KEY column. However, when it is defined as Range Partitioned, the
Optimizer *cannot* be sure that this is so. For example, with a Range Partitioned definition,
the P_2011_JAN partition might have values '20110129' and '20110130' in two rows out of a
million rows. We know this won't be the case --- but the Optimizer cannot know it. On the
other hand, the List Partition definition acts like a constraint – every row in the P_2011_JAN
partition will only have the value '201101' and no other value.
Adding Data in Partitioned Tables
Single Row Inserts
This demonstration shows how Oracle automatically maps the target Partition name when it
is not specified but, nevertheless, verifies the target Partition when it is explicitly specified.
Note also how the specification can be Partition name or even a SubPartition name.
SQL> desc sales_table
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SALE_DATE                                 NOT NULL DATE
 REGION                                             VARCHAR2(8)
 SALE_QTY                                           NUMBER
SQL> select partition_name, subpartition_name from user_tab_subpartitions
2 where table_name = 'SALES_TABLE'
3 order by 1,2;
PARTITION_NAME                 SUBPARTITION_NAME
------------------------------ ------------------------------
P_2010                         P_2010_S_EAST
P_2010                         P_2010_S_NORTH
P_2010                         P_2010_S_SOUTH
P_2010                         P_2010_S_WEST
P_2011                         P_2011_S_EAST
P_2011                         P_2011_S_NORTH
P_2011                         P_2011_S_SOUTH
P_2011                         P_2011_S_WEST

8 rows selected.
SQL>
SQL> insert into sales_table values (to_date('05-NOV-11','DD-MON-RR'),'EAST', 1);
1 row created.
SQL> insert into sales_table partition (P_2010)
2 values (to_date('05-AUG-11','DD-MON-RR'),'WEST',1);
insert into sales_table partition (P_2010)
*
ERROR at line 1:
ORA-14401: inserted partition key is outside specified partition
SQL>
SQL> insert into sales_table partition (P_2011)
2 values (to_date('05-AUG-11','DD-MON-RR'),'WEST',1);
1 row created.
SQL>
SQL> insert into sales_table subpartition (P_2011_S_WEST)
2 values (to_date('05-SEP-11','DD-MON-RR'),'EAST',1);
insert into sales_table subpartition (P_2011_S_WEST)
*
ERROR at line 1:
ORA-14401: inserted partition key is outside specified partition
SQL> insert into sales_table subpartition (P_2011_S_EAST)
2 values (to_date('05-SEP-11','DD-MON-RR'),'EAST',1);
1 row created.
SQL>
Bulk Insert
This demonstration shows a Bulk Insert from a source table (I use DUAL as a simulated
source table) with Parallel and Direct Path operations.
SQL> desc month_end_balances;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PARTITION_KEY                             NOT NULL VARCHAR2(6)
 ACCOUNT_NUMBER                                     NUMBER
 BALANCE                                            NUMBER
SQL> alter session enable parallel dml;
Session altered.
SQL> select count(*) from month_end_balances;
  COUNT(*)
----------
         0
SQL>
SQL>
insert /*+ APPEND PARALLEL (meb 4) */
into month_end_balances meb
select /*+ PARALLEL (s 4) */
decode(mod(rownum,4),0,'201101',1,'201102',2,'201103',3,'201104'),
mod(rownum,125),
rownum
from dual s
connect by level < 1000001
/
1000000 rows created.
SQL>
SQL> pause Press ENTER to commit
Press ENTER to commit
SQL> commit;
Commit complete.
SQL>
SQL>
select /*+ PARALLEL (meb 4) */
partition_key, count(*)
from month_end_balances meb
group by partition_key
order by 1
/
PARTIT   COUNT(*)
------ ----------
201101     250000
201102     250000
201103     250000
201104     250000
SQL>
Maintaining Partitioned Tables
ADD Partition
This is an example of adding a Partition to a Hash Partitioned Table. Note how data for one
existing Partition is split and reassigned. The Partitions are now “unbalanced”.
SQL> exec
dbms_stats.gather_table_stats('','MACHINE_DATA',estimate_percent=>100,granularity=>'ALL');
PL/SQL procedure successfully completed.
SQL>
SQL>
select partition_name, num_rows
from user_tab_partitions
where table_name = 'MACHINE_DATA'
order by partition_position
/
PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
P_1                                128669
P_2                                123503
P_3                                126000
P_4                                120182
P_5                                119039
P_6                                128962
P_7                                125151
P_8                                128493

8 rows selected.
SQL>
SQL> set autotrace on statistics
SQL> alter table MACHINE_DATA
add partition P_9
/
Table altered.
SQL>
SQL>
SQL> exec
dbms_stats.gather_table_stats('','MACHINE_DATA',estimate_percent=>100,granularity=>'ALL');
PL/SQL procedure successfully completed.
SQL>
SQL>
select partition_name, num_rows
from user_tab_partitions
where table_name = 'MACHINE_DATA'
order by partition_position
/
PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
P_1                                 63431
P_2                                123503
P_3                                126000
P_4                                120182
P_5                                119039
P_6                                128962
P_7                                125151
P_8                                128493
P_9                                 65238

9 rows selected.
SQL>
EXCHANGE PARTITION
This demonstration shows how a populated non-Partitioned Table (a Staging Table) is
exchanged with an empty Partition very quickly.
SQL>
select /*+ PARALLEL (meb 4) */
partition_key, count(*)
from MONTH_END_BALANCES
group by partition_key
order by 1
/
PARTIT   COUNT(*)
------ ----------
201101     250000
201102     250000
201103     250000
201104     250000
Statistics
----------------------------------------------------------
          1  recursive calls
          1  db block gets
       2837  consistent gets
          0  physical reads
         96  redo size
        548  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
         12  sorts (memory)
          0  sorts (disk)
          4  rows processed
SQL>
SQL> -- supposing that we get a staging table from the ODS/Source
SQL> -- the staging table has the same structure
SQL> create table STAGING_BALANCES_TABLE
(Partition_Key varchar2(6) not null, account_number number, balance number)
/
Table created.
SQL> -- the Staging Table has been populated by the source
SQL> insert into STAGING_BALANCES_TABLE
select
'201105',
mod(rownum,125),
rownum
from dual s
connect by level < 250001
/
250000 rows created.
Statistics
----------------------------------------------------------
        695  recursive calls
       7624  db block gets
       1565  consistent gets
          0  physical reads
    6659388  redo size
        683  bytes sent via SQL*Net to client
        652  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
     250000  rows processed
SQL> select count(*) from STAGING_BALANCES_TABLE;
  COUNT(*)
----------
    250000
Statistics
----------------------------------------------------------
         29  recursive calls
          1  db block gets
        820  consistent gets
          0  physical reads
        132  redo size
        411  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
SQL>
SQL> -- we do an EXCHANGE
SQL> set timing on
SQL> alter table MONTH_END_BALANCES exchange partition P_2011_MAY with table
STAGING_BALANCES_TABLE;
Table altered.
Elapsed: 00:00:00.16
SQL>
SQL> -- verify
SQL> select count(*) from MONTH_END_BALANCES partition (P_2011_MAY);
  COUNT(*)
----------
    250000
Elapsed: 00:00:00.01
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        740  consistent gets
          0  physical reads
          0  redo size
        411  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
SQL> select count(*) from STAGING_BALANCES_TABLE;
  COUNT(*)
----------
         0
Elapsed: 00:00:00.00
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        410  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
SQL>
SQL>
select /*+ PARALLEL (meb 4) */
partition_key, count(*)
from MONTH_END_BALANCES
group by partition_key
order by 1
/
PARTIT   COUNT(*)
------ ----------
201101     250000
201102     250000
201103     250000
201104     250000
201105     250000
Elapsed: 00:00:00.35
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       3572  consistent gets
          0  physical reads
          0  redo size
        560  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
         12  sorts (memory)
          0  sorts (disk)
          5  rows processed
SQL>
Maintaining Indexes
Global Indexes
This demonstration shows how a Global Index is automatically marked UNUSABLE when
Partition Maintenance is performed.
SQL> desc month_end_balances;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PARTITION_KEY                             NOT NULL VARCHAR2(6)
 ACCOUNT_NUMBER                                     NUMBER
 BALANCE                                            NUMBER
SQL> create index meb_a_no_ndx on month_end_balances(account_number) parallel 4 nologging;
Index created.
SQL> select status from user_indexes where index_name = 'MEB_A_NO_NDX';
STATUS
--------
VALID
SQL> alter table month_end_balances truncate partition P_2011_JAN;
Table truncated.
SQL> select status from user_indexes where index_name = 'MEB_A_NO_NDX';
STATUS
--------
UNUSABLE
SQL>
SQL> alter index meb_a_no_ndx rebuild;
Index altered.
SQL> select status from user_indexes where index_name = 'MEB_A_NO_NDX';
STATUS
--------
VALID
SQL> alter table month_end_balances truncate partition P_2011_FEB update global indexes;
Table truncated.
SQL> select status from user_indexes where index_name = 'MEB_A_NO_NDX';
STATUS
--------
VALID
SQL>
Local Index
This demonstration shows a locally partitioned index where Index Partitions are
automatically created to match the Table Partitions :
SQL> desc accounting
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 BIZ_COUNTRY                               NOT NULL VARCHAR2(10)
 ACCTG_YEAR                                NOT NULL NUMBER
 DATA_1                                             VARCHAR2(20)
SQL> create index accntng_data_ndx on accounting(data_1) LOCAL;
Index created.
SQL> select partition_name from user_tab_partitions where table_name = 'ACCOUNTING';
PARTITION_NAME
------------------------------
P_IN_2006
P_IN_2007
P_IN_2008
P_MAX
P_SG_2006
P_SG_2007
P_SG_2008
7 rows selected.
SQL> select index_name, status from user_indexes where table_name = 'ACCOUNTING';
INDEX_NAME                     STATUS
------------------------------ --------
ACCNTNG_DATA_NDX               N/A
SQL> select partition_name, status from user_ind_partitions where index_name =
'ACCNTNG_DATA_NDX';
PARTITION_NAME                 STATUS
------------------------------ --------
P_IN_2006                      USABLE
P_IN_2007                      USABLE
P_IN_2008                      USABLE
P_MAX                          USABLE
P_SG_2006                      USABLE
P_SG_2007                      USABLE
P_SG_2008                      USABLE
7 rows selected.
SQL>
SQL> alter table accounting drop partition P_SG_2006;
Table altered.
SQL> select partition_name from user_tab_partitions where table_name = 'ACCOUNTING';
PARTITION_NAME
------------------------------
P_IN_2006
P_IN_2007
P_IN_2008
P_MAX
P_SG_2007
P_SG_2008
6 rows selected.
SQL> select partition_name, status from user_ind_partitions where index_name =
'ACCNTNG_DATA_NDX';
PARTITION_NAME                 STATUS
------------------------------ --------
P_IN_2006                      USABLE
P_IN_2007                      USABLE
P_IN_2008                      USABLE
P_MAX                          USABLE
P_SG_2007                      USABLE
P_SG_2008                      USABLE

6 rows selected.

SQL>
Archiving Data
The demo “SH” schema (that can be installed as part of the installation of EXAMPLES)
contains a table “SALES” that shows how Date-Ranged Partitions can be created and
managed. Note how 1995 and 1996 data is stored in Year Partitions, 1997 data is stored in
Half-Year Partitions and 1998 to 2003 data is stored in Quarter-Year Partitions.
SQL> connect sh/sh
Connected.
SQL> desc sales
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PROD_ID                                   NOT NULL NUMBER
 CUST_ID                                   NOT NULL NUMBER
 TIME_ID                                   NOT NULL DATE
 CHANNEL_ID                                NOT NULL NUMBER
 PROMO_ID                                  NOT NULL NUMBER
 QUANTITY_SOLD                             NOT NULL NUMBER(10,2)
 AMOUNT_SOLD                               NOT NULL NUMBER(10,2)

SQL> select partition_name, high_value
from user_tab_partitions
where table_name = 'SALES'
order by partition_position
/
PARTITION_NAME
------------------------------
HIGH_VALUE
--------------------------------------------------------------------------------
SALES_1995
TO_DATE(' 1996-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_1996
TO_DATE(' 1997-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_H1_1997
TO_DATE(' 1997-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_H2_1997
TO_DATE(' 1998-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q1_1998
TO_DATE(' 1998-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q2_1998
TO_DATE(' 1998-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q3_1998
TO_DATE(' 1998-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q4_1998
TO_DATE(' 1999-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q1_1999
TO_DATE(' 1999-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q2_1999
TO_DATE(' 1999-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q3_1999
TO_DATE(' 1999-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q4_1999
TO_DATE(' 2000-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q1_2000
TO_DATE(' 2000-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q2_2000
TO_DATE(' 2000-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SALES_Q3_2000
TO_DATE(' 2000-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA