TM05 Monitor and Administer Database
Database Administration Level-IV
Based on March 2022, Curriculum Version II
November, 2023
Version-I
Acknowledgment
The Ministry of Labor and Skills wishes to extend thanks and appreciation to the many
representatives of TVET instructors and respective industry experts who donated their time and
expertise to the development of this Teaching, Training and Learning Material (TTLM).
DBA: Database Administrator
This unit will also assist you to attain the learning outcomes stated on the cover page. Specifically,
upon completion of this learning guide, you will be able to:
Understand principles of databases
Configure system settings necessary for database startup
Understand and implement hardware and software requirements for the database
Monitor database start-up and operations
Data Integrity:
Entity Integrity: Each row in a table must have a unique identifier, usually expressed
through a primary key, to ensure that each record is distinct.
Referential Integrity: Relationships between tables should be maintained, ensuring that
foreign keys in one table correspond to primary keys in another.
Normalization: Databases should be organized to reduce redundancy and dependency by
breaking down tables into smaller, more manageable parts through a process called
normalization.
Consistency: Data in the database should be consistent and accurate. Any changes made to
the database should maintain its overall integrity.
Atomicity, Consistency, Isolation, Durability (ACID): ACID properties are essential for
database transactions. Transactions should be Atomic (indivisible), Consistent (maintain
database integrity), Isolated (independent of other transactions), and Durable (once
committed, changes are permanent).
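As a minimal illustration of atomicity, the sketch below (Python with the built-in sqlite3 module; the accounts table and its values are invented for the example) wraps two updates in one transaction and rolls both back when the second one fails:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (name TEXT PRIMARY KEY,"
    " balance INTEGER NOT NULL CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    # Transfer 200 from alice to bob -- the CHECK constraint fails on the
    # second update, so BOTH updates must be undone (atomicity).
    conn.execute("UPDATE accounts SET balance = balance + 200 WHERE name = 'bob'")
    conn.execute("UPDATE accounts SET balance = balance - 200 WHERE name = 'alice'")
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # the transaction is undone as a single unit

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50} -- unchanged: all or nothing
```

The key point is that bob's balance is not left at 250 after the failed transfer; a committed-or-nothing outcome is exactly what the Atomic and Durable properties promise.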
Data Models: Choose an appropriate data model (e.g., relational, document-oriented, graph)
based on the nature of the data and the requirements of the application.
Structured Query Language (SQL): Use a standardized query language like SQL to
interact with and manipulate the database. SQL provides a set of commands for data
definition, data manipulation, and data control.
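The three command categories can be seen in a short session. The sketch below uses Python's built-in sqlite3 module with an invented Products table; note that SQLite supports DDL and DML but not DCL commands such as GRANT/REVOKE, which client-server systems like SQL Server provide:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Data definition (DDL): define the structure of a table.
conn.execute(
    "CREATE TABLE Products (ProductID INTEGER PRIMARY KEY, Name TEXT, Price REAL)"
)

# Data manipulation (DML): insert and query rows.
conn.execute("INSERT INTO Products (Name, Price) VALUES ('Pen', 2.50)")
conn.execute("INSERT INTO Products (Name, Price) VALUES ('Book', 12.00)")
rows = conn.execute("SELECT Name, Price FROM Products ORDER BY Price").fetchall()
print(rows)  # [('Pen', 2.5), ('Book', 12.0)]

# Data control (DCL) would look like: GRANT SELECT ON Products TO some_user;
# (not supported by SQLite, but standard in SQL Server, Oracle, PostgreSQL, etc.)
```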
Security: Implement security measures to protect sensitive data. This includes user
authentication, authorization, and encryption.
Concurrency Control: Manage multiple users accessing the database simultaneously to
prevent conflicts and ensure data consistency.
Scalability: Design databases to scale with the growth of data and user interactions. This
involves considerations like indexing, partitioning, and clustering.
Backup and Recovery: Regularly back up the database to prevent data loss in the event of
system failures or other disasters. Establish procedures for database recovery.
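As one concrete form of this, Python's sqlite3 connections expose a backup() method. The sketch below copies a live database into a backup target (here a second in-memory database so the example is self-contained; in practice the target would be a file on separate storage):

```python
import sqlite3

source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, message TEXT)")
source.execute("INSERT INTO logs (message) VALUES ('system started')")
source.commit()

# Copy every page of the source database into the backup target.
backup = sqlite3.connect(":memory:")  # in practice: a file on separate storage
source.backup(backup)

# Recovery check: the backup must be a full working copy.
restored = backup.execute("SELECT message FROM logs").fetchall()
print(restored)  # [('system started',)]
```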
Database Software Installation: Install the database software on the server, ensuring
that the installation follows best practices and is compatible with the operating system.
Configure the software with the necessary options and settings.
Memory Allocation: Allocate and configure memory settings for the database system.
This includes setting parameters such as buffer sizes, cache sizes, and other memory-
related configurations to optimize database performance.
Storage Configuration
Configure storage settings, including the location of database files, log files, and backup
files. Ensure that there is adequate disk space, and consider factors like disk speed and RAID
configurations for optimal performance.
Database Instance Configuration
Configure the database instance with parameters specific to the database engine. These
parameters may include settings related to transaction logs, data files, and temporary storage.
Network Configuration
Configure network settings such as listener ports, allowed hosts, and connection protocols so
that client applications can reach the database service.
Error Handling and Logging
Configure error handling mechanisms and logging settings to capture information about the
startup process. This is crucial for diagnosing issues and monitoring the health of the
database system.
Backup Configuration
Establish backup configurations to ensure that regular backups are scheduled and performed.
This includes specifying backup locations, retention policies, and verification procedures.
Monitoring and Alerting
Set up monitoring tools and configure alerting mechanisms to proactively monitor the
database system. This allows for the early detection of issues and prompt resolution.
Documentation
Document the entire system configuration process. This documentation serves as a reference
for future maintenance, upgrades, and troubleshooting.
The system configuration for database startup is a critical step in ensuring the stability,
performance, and security of the database environment. Proper configuration practices
contribute to a well-tuned and efficiently running database system.
Log Monitoring: Regularly check database logs for any error messages, warnings, or
abnormal events during startup and ongoing operation. Logs provide valuable information
about the system's health.
Performance Monitoring: Utilize performance monitoring tools to track database
performance metrics. This includes monitoring CPU usage, memory utilization, disk I/O, and
query response times. Deviations from normal patterns can indicate potential issues.
Alerts and Notifications: Implement alerting mechanisms to notify administrators of any
irregularities. Set up alerts for critical events such as system failures, performance
bottlenecks, or security breaches.
Database Health Checks: Conduct regular health checks to assess the overall state of the
database. This involves reviewing system parameters, configurations, and resource
utilization to identify any abnormalities.
Startup Procedures: Establish and document clear procedures for starting up the database
system. Regularly review the startup logs to ensure that the database initializes without
errors.
Automated Monitoring Scripts: Develop and implement automated scripts to monitor key
database parameters. These scripts can perform periodic checks and report any deviations
from predefined thresholds.
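A monitoring script of this kind can be very small. The sketch below (plain Python, with an invented 90% threshold) checks disk usage against a limit the way a scheduled job might:

```python
import shutil

THRESHOLD = 0.90  # alert when the disk is more than 90% full (example value)

def check_disk(path="/"):
    """Return a (used_fraction, alert) pair for the filesystem holding `path`."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    alert = used_fraction > THRESHOLD
    return used_fraction, alert

used, alert = check_disk(".")
print(f"disk used: {used:.1%}, alert: {alert}")
```

A scheduler (cron, SQL Server Agent, or similar) would run such a check periodically and feed the alert flag into the notification mechanism described above.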
Security Audits: Conduct regular security audits to identify and address any vulnerabilities
or unauthorized access attempts. Monitor login attempts, privilege changes, and data access
patterns.
Backup Verification: Regularly verify the integrity of database backups to ensure they can
be successfully restored. This helps in preparing for potential disasters or data corruption
issues.
Resource Utilization: Monitor resource utilization such as CPU, memory, and disk space to
ensure that the database has sufficient resources for normal operation. Plan for scalability if
resource demands increase.
A data dictionary is a centralized repository that provides metadata about the data within a
database. It typically includes details such as data definitions, data types, relationships between
tables, constraints, and other essential information.
A data dictionary is a collection of descriptions of the data objects or items in a data model for
the benefit of programmers and others who need to refer to them.
It is a set of information describing the contents, format, and structure of a database and the
relationship between its elements, used to control access to and manipulation of the database.
When developing programs that use the data model, a data dictionary can be consulted to
understand where a data item fits in the structure, what values it may contain, and basically what
the data item means in real-world terms.
Most DBMS keep the data dictionary hidden from users to prevent them from accidentally
destroying its contents.
A data dictionary may contain:
The definitions of all schema objects in the database.
How much space has been allocated for, and is currently used by the schema objects
Default values for columns.
Integrity constraint information (Constraints that apply to each field, if any)
Auditing information, such as who has accessed or updated various schema objects
Privileges and roles each user has been granted (Access Authorization)
Description of database users, their responsibilities and their access rights.
Data dictionaries do not contain any actual values from the database, only bookkeeping
information for managing it.
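Most database engines let you query this metadata directly. In SQLite (used here for a self-contained Python sketch; the Customers table is invented), the PRAGMA table_info command plays the role of a data dictionary view:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Customers (
    CustomerID INTEGER PRIMARY KEY,
    Name  TEXT NOT NULL,
    Email TEXT,
    Phone TEXT
)""")

# PRAGMA table_info returns one row of metadata per column:
# (position, name, declared type, NOT NULL flag, default value, primary-key flag)
dictionary = conn.execute("PRAGMA table_info(Customers)").fetchall()
for cid, name, col_type, notnull, default, pk in dictionary:
    print(name, col_type, "NOT NULL" if notnull else "", "PK" if pk else "")
```

Other systems expose the same information through catalog views such as INFORMATION_SCHEMA.COLUMNS.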
Advantage of a Data Dictionary
When a new user is introduced to the system or a new administrator takes over the system,
identifying table structures and types becomes simpler.
In a data dictionary, you might document that the "Customers" table includes fields such
as "CustomerID," "Name," "Email," and "Phone." For each field, you specify the data
type, length, and any constraints.
Verify that the actual structure of the database matches the information documented in the data
dictionary. This ensures that the database schema aligns with the intended design and that any
changes to the database structure are accurately reflected in the data dictionary.
Example:
If the data dictionary indicates that the "Orders" table should have a foreign key relationship with
the "Customers" table, verify that this relationship exists in the database schema. Check that the
data types, constraints, and relationships match the documented specifications.
Consistency Checks: Conduct consistency checks to ensure that the data dictionary is
consistent with other project documentation and requirements. This involves verifying that
changes made to the database structure are appropriately updated in the data dictionary.
Example:
If there's a change in a table structure, such as adding a new field, ensure that the data dictionary
is updated to reflect this change. Consistency checks prevent discrepancies between
documentation and the actual database.
Data Type Verification: Check that the data types assigned to each field in the database
match the specifications in the data dictionary. This includes verifying numeric precision,
string lengths, and other data type attributes.
Example
If the data dictionary specifies that the "Price" field should be a numeric data type with two
decimal places, verify that this is accurately implemented in the database.
If the data dictionary specifies that the "ProductID" field is the primary key for the "Products"
table, verify that this constraint is enforced in the database schema.
If a new index is created on a table for performance reasons, update the data dictionary to include
information about this index.
Version Control: Implement version control for the data dictionary to track changes over
time. This ensures that you can trace modifications, additions, or deletions to the data
dictionary and understand the evolution of the database structure.
Example:
Use version control tools to track changes to the data dictionary, providing a history of
alterations to the database structure.
By compiling a comprehensive data dictionary and regularly verifying the database structure
against it, organizations can maintain consistency, accuracy, and documentation integrity, which
is crucial for effective database management and development.
Data integrity refers to the overall completeness, accuracy and consistency of data according
to business requirements.
2. Referential Integrity
This is the concept of foreign keys. The rule states that a foreign key value can be in one of
two states: it either refers to a primary key value in another table, or it is null. A null value
may simply mean that no relationship exists, or that the relationship is unknown.
Referential integrity is a feature provided by relational DBMS that prevents users from entering
inconsistent data.
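The sketch below shows this prevention in action (Python with sqlite3, where foreign-key enforcement must be switched on explicitly; the tables are invented): inserting an order for a customer that does not exist is rejected by the DBMS.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Orders (
    OrderID    INTEGER PRIMARY KEY,
    CustomerID INTEGER REFERENCES Customers(CustomerID)
);
INSERT INTO Customers VALUES (1, 'Abebe');
""")

# Valid: references an existing customer. NULL is also allowed (unknown relationship).
conn.execute("INSERT INTO Orders VALUES (10, 1)")
conn.execute("INSERT INTO Orders VALUES (11, NULL)")

# Invalid: customer 999 does not exist, so the DBMS rejects the row.
try:
    conn.execute("INSERT INTO Orders VALUES (12, 999)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True -- referential integrity prevented the inconsistent row
```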
3. Domain Integrity
This states that all columns in a relational database are in a defined domain.
The concept of data integrity ensures that all data in a database can be traced and connected to
other data. This ensures that everything is recoverable and searchable. Having a single, well
defined and well controlled data integrity system increases stability, performance, reusability and
maintainability.
What is an index?
An index is a separate physical data structure that enables queries to access one or more data
rows fast.
A database index is a separate physical data structure that improves the speed of data retrieval
operations on a database table at the cost of additional writes and the use of more storage space
to maintain the extra copy of data.
Indexes are used to quickly locate data without having to search every row in a database table
every time a database table is accessed. Indexes can be created using one or more columns of a
database table, providing the basis for both rapid random lookups and efficient access of ordered
records.
Why Use Indexes?
Two primary reasons exist for creating indexes in SQL Server:
- To maintain uniqueness of the indexed column(s)
- To provide fast access to the data in tables.
Delete an index
Deleting an index means removing one or more relational indexes from the current database.
The DROP INDEX statement is used to delete an index in a table.
Syntax: DROP INDEX index_name ON table_name
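The statements can be exercised end to end in a short sketch (Python with sqlite3; table and index names are invented, and note that SQLite's variant of the statement is DROP INDEX index_name without the ON table_name clause used by SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employees (EmpID INTEGER, LastName TEXT, Dept TEXT)")

# Create a single-column index, and a composite index over two columns.
conn.execute("CREATE INDEX idx_lastname ON Employees (LastName)")
conn.execute("CREATE INDEX idx_dept_name ON Employees (Dept, LastName)")

def index_names(table):
    """List the indexes currently defined on `table`."""
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'index' AND tbl_name = ?",
        (table,),
    ).fetchall()
    return sorted(r[0] for r in rows)

print(index_names("Employees"))  # ['idx_dept_name', 'idx_lastname']

# Delete an index (SQL Server form: DROP INDEX idx_lastname ON Employees).
conn.execute("DROP INDEX idx_lastname")
print(index_names("Employees"))  # ['idx_dept_name']
```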
To delete an index by using Object Explorer, you can follow the steps as shown below:
In Object Explorer, expand the database that contains the table on which you want to
delete an index.
Expand the Tables folder.
Expand the table that contains the index you want to delete.
Expand the Indexes folder.
Right-click the index you want to delete and select Delete.
In the Delete Object dialog box, verify that the correct index is in the Object to be deleted
grid and click OK.
To delete an index by using Table Designer:
- In Object Explorer, expand the database that contains the table on which you want to
delete an index.
- Expand the Tables folder.
- Right-click the table that contains the index you want to delete and click Design.
- On the Table Designer menu, click Indexes/Keys.
- In the Indexes/Keys dialog box, select the index you want to delete and click Delete.
To modify an index by using Object Explorer:
- In Object Explorer, connect to an instance of the SQL Server Database Engine and then
expand that instance.
- Expand Databases, expand the database in which the table belongs, and then expand
Tables.
- Expand the table in which the index belongs and then expand Indexes.
- Right-click the index that you want to modify and then click Properties.
In the Index Properties dialog box, make the desired changes. For example, you can add or
remove a column from the index key, or change the setting of an index option.
A multiple-field key, or composite key, involves using multiple columns as a unique identifier
for a record.
Consideration for Composite Keys: Use composite keys when a combination of multiple
columns uniquely identifies a record, and this combination is more meaningful than any
individual column.
Data Integrity: Ensure that the combination of fields in a composite key maintains data
integrity. This involves considering relationships between the fields and the overall business
logic.
Primary Key and Composite Key: When choosing a primary key, evaluate whether a single-
field primary key is sufficient or if a composite key is more appropriate based on the uniqueness
constraints.
Indexing Composite Keys: Index composite keys to improve performance when querying based
on the combination of fields.
Documentation: Document the rationale behind the choice of composite keys and regularly
review their effectiveness in meeting database performance goals.
Effective index creation and the use of composite keys require a balance between optimizing
read performance and considering the impact on write performance. Regular monitoring and
adjustments based on evolving data patterns and usage scenarios are essential for maintaining an
efficient database structure.
Types of Locks:
Shared Locks: Used for read operations, allowing multiple transactions to read a resource
simultaneously but preventing any of them from writing to it.
Exclusive Locks: Used for write operations, ensuring that only one transaction can modify a
resource at a time.
Read Locks: Similar to shared locks, allowing multiple transactions to read a resource
simultaneously.
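Exclusive locking can be observed directly. In the sqlite3 sketch below, one connection takes a write lock on a shared database file, and a second connection's attempt to start its own write transaction fails immediately (its busy timeout is set to zero) instead of waiting:

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file (in-memory DBs are per-connection).
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
other = sqlite3.connect(path, timeout=0)  # fail fast instead of waiting for locks

writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

# First connection acquires a write lock and holds it (transaction stays open).
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO t VALUES (1)")

# Second connection cannot also acquire a write lock on the same database.
try:
    other.execute("BEGIN IMMEDIATE")
    blocked = False
except sqlite3.OperationalError:  # "database is locked"
    blocked = True
print(blocked)  # True

writer.commit()  # releasing the lock lets other transactions proceed
```

Server DBMSs lock at finer granularity (rows, pages) than SQLite's whole-database lock, but the exclusion principle is the same.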
Isolation Levels: Understand and configure different isolation levels (e.g., READ COMMITTED,
REPEATABLE READ, SERIALIZABLE) based on the application's requirements. Higher isolation levels
generally involve more restrictive locking, which can impact performance.
Row-Level Locking: Consider using row-level locking instead of table-level locking when possible. Row-
level locking allows for more granular control and reduces contention for resources.
Lock Escalation: Monitor and understand lock escalation mechanisms in the database system. Lock
escalation occurs when a lower-level lock (e.g., row-level) is escalated to a higher-level lock (e.g., table-
level) to manage resources more efficiently.
Lock Statistics: Regularly review lock statistics and performance metrics to identify patterns of
contention. This information helps in making informed decisions about indexing, query optimization,
and application design.
Database Management System (DBMS) Tools: Utilize built-in tools provided by the DBMS for
monitoring locks and transactions. These tools often provide insights into lock wait times, deadlocks,
and other relevant metrics.
Backup Verification: Regularly verify the integrity of database backups to ensure that they are
free from corruption and can be relied upon for recovery. Verification may involve checking
backup files, validating backup processes, and confirming that the backup captures the entire
database.
Verifying that recent backups of a database have been stored successfully and can be retrieved as
a full working copy is a critical aspect of database management. Here's a step-by-step guide on
how you might confirm this:
Identify Backup Location: Determine where the recent backups are stored. This could be on
local servers, network-attached storage (NAS), or cloud storage.
Check Backup Logs: Review backup logs to confirm that recent backup operations were
successful. Examine any error messages or warnings that may indicate issues.
Verify Timestamps: Check the timestamps on the backup files to ensure they correspond to
recent backup operations. This helps confirm that the backup files are up-to-date.
Perform a Test Restore: Restore the most recent backup to a test environment. After the
restoration, perform tests on the restored database to ensure that it is fully functional. Run
sample queries, check data integrity, and validate that all necessary components are in place.
Verify File Integrity: Check the integrity of backup files to ensure they are not corrupted. You
can use checksums or hash functions to verify the integrity of the backup files.
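A checksum comparison of this kind is a few lines of Python with the standard hashlib module (the backup file and its contents are invented for the sketch):

```python
import hashlib
import os
import tempfile

def file_sha256(path):
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the checksum right after the backup is written ...
backup_path = os.path.join(tempfile.mkdtemp(), "backup.bak")
with open(backup_path, "wb") as f:
    f.write(b"pretend backup contents")
recorded = file_sha256(backup_path)

# ... and verify it later, before relying on the file for recovery.
print(file_sha256(backup_path) == recorded)  # True means the file is unchanged
```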
Check Backup Retention Policy: Confirm that the backup retention policy aligns with the
organization's requirements. Ensure that backups are retained for an appropriate duration and that
old backups are regularly pruned.
Ensure Accessibility: Verify that the individuals responsible for database recovery have access
to the backup files and the necessary credentials to restore the database.
Test Recovery Scenarios: Simulate various disaster recovery scenarios (e.g., hardware failure,
data corruption) and ensure that the backup and recovery processes can effectively address these
situations.
Automation Verification: If backups are automated, ensure that the backup automation scripts
or tools are running as scheduled and producing the expected results.
Notification Systems: Ensure that notification systems are in place to alert relevant personnel in
case of backup failures or issues.
Set up Monitoring Alerts: Implement monitoring alerts to notify administrators when storage
space reaches predefined thresholds. This ensures timely intervention before critical issues arise.
Regularly Check Disk Space Usage: Periodically review the current disk space usage on both
database and server levels. This can be done using system commands or database management
tools.
Use Database Management Tools: Utilize the features provided by your database management
system (DBMS) to monitor storage space. Many DBMSs offer built-in tools for space utilization
analysis.
Monitor File Growth: Keep an eye on the growth rate of database files, including data files, log
files, and any other file groups. Identify trends and anticipate future storage needs.
Implement Auto-Growth: Configure auto-growth settings for database files to allow for
automatic expansion when needed. However, monitor auto-growth events closely, as excessive
auto-growth can impact performance.
Regularly Resize Database Files: Proactively resize database files when necessary. If you
notice that a particular file group is consistently running out of space, resize the associated files
accordingly.
Archive or Purge Data: Evaluate and implement data archiving or purging strategies to remove
obsolete or historical data. This not only frees up space but also improves database performance.
Evaluate Indexing: Review and optimize database indexes. Poorly designed or fragmented
indexes can contribute to excessive storage usage.
Change Management Process: Implement a formal change management process that outlines
the steps and approvals required for making updates to the database. This process ensures that
changes are well-documented and aligned with organizational goals.
Authorization and Access Controls: Define roles and permissions to control who has the
authority to update data. Ensure that only authorized personnel have the necessary access rights,
and implement a principle of least privilege to minimize the risk of unauthorized changes.
Data Validation and Verification: Establish procedures for validating and verifying data
updates before they are applied. This involves checking the accuracy and completeness of the
data to be modified.
Use of Transactions: Encourage the use of database transactions to ensure that updates are
atomic, consistent, isolated, and durable (ACID). This minimizes the risk of incomplete or
erroneous updates.
Version Control: Implement version control mechanisms to track changes to the database over
time. This includes capturing information about who made the change, when it was made, and
the nature of the change.
Audit Trails: Enable audit trails to log changes made to the database. Audit logs help in tracking
modifications, identifying potential issues, and providing accountability.
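An audit trail can be built with a database trigger. The sqlite3 sketch below (table and trigger names invented) logs every change to a salary column automatically, without the application having to remember to do so:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary INTEGER);
CREATE TABLE audit_log (
    employee_id INTEGER,
    old_salary  INTEGER,
    new_salary  INTEGER,
    changed_at  TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Trigger: record every salary modification in the audit table.
CREATE TRIGGER trg_salary_audit
AFTER UPDATE OF salary ON employees
BEGIN
    INSERT INTO audit_log (employee_id, old_salary, new_salary)
    VALUES (OLD.id, OLD.salary, NEW.salary);
END;

INSERT INTO employees VALUES (1, 'Meron', 5000);
""")

conn.execute("UPDATE employees SET salary = 5500 WHERE id = 1")
trail = conn.execute(
    "SELECT employee_id, old_salary, new_salary FROM audit_log"
).fetchall()
print(trail)  # [(1, 5000, 5500)]
```

Production audit trails usually also capture the acting user and the statement issued, which server DBMSs expose through built-in auditing features.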
Rollback Plans: Develop rollback plans in case issues arise during or after data updates. Having
a plan for reverting to the previous state ensures quick recovery in the event of unexpected
problems.
Documentation: Document all data update processes, including the rationale for the update, the
SQL statements or procedures used, and any issues encountered during the process. This
documentation serves as a reference for future updates and audits.
User Training and Awareness: Provide training to users and stakeholders on the data update
processes and guidelines. Increasing awareness helps prevent unintentional or unauthorized
updates and promotes a culture of responsible data management.
Compliance with Regulations: Ensure that data updates comply with relevant regulations and
industry standards. This is particularly important in industries with specific data governance and
compliance requirements.
Regular reviews and updates to organizational guidelines contribute to a robust and reliable data
management framework.
Self-check: Which type of lock is used for read operations, allowing multiple transactions to
read a resource simultaneously?
Status
"Allocate or remove access privileges according to user status" involves managing user access
based on changes in a user's status, such as new user onboarding, role changes, or when a user
leaves the organization. Practices in this area include:
Access Revocation: Revoking access privileges for users who have left the organization or no
longer require access.
Role Change Adjustment: Modifying access privileges when a user's role or responsibilities
change within the organization.
Periodic Access Review: Conducting regular reviews of user access privileges and adjusting
them based on changes in job roles or responsibilities.
Conditional Access: Implementing conditional access privileges based on the user's status
(e.g., active, inactive, probationary).
User Access Termination: Ensuring that access privileges are promptly terminated when a user
leaves the organization.
Temporary Access: Granting temporary access privileges for users in specific roles or projects.
Access Auditing: Logging and auditing access changes to maintain a record of who has been
granted or revoked access privileges.
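Since the real GRANT/REVOKE syntax varies by DBMS, the sketch below models only the decision logic, in plain Python: privileges are granted or revoked purely as a function of user status. The statuses and privilege sets are invented for illustration.

```python
# Privilege sets per status -- example values, not from any particular DBMS.
PRIVILEGES_BY_STATUS = {
    "active":       {"SELECT", "INSERT", "UPDATE"},
    "probationary": {"SELECT"},
    "inactive":     set(),   # equivalent to revoking all access
}

def privileges_for(status):
    """Return the privilege set for a user status; unknown statuses get nothing."""
    return PRIVILEGES_BY_STATUS.get(status, set())

def status_change(user, new_status, current):
    """Compute which privileges to grant and which to revoke on a status change."""
    target = privileges_for(new_status)
    return {"user": user,
            "grant": sorted(target - current),
            "revoke": sorted(current - target)}

# A probationary user is promoted to active: gains write privileges.
print(status_change("hana", "active", privileges_for("probationary")))
# A user leaves the organization: everything is revoked.
print(status_change("hana", "inactive", privileges_for("active")))
```

In a real system, each entry in the "grant" and "revoke" lists would be translated into the corresponding GRANT or REVOKE statement and recorded in the audit log.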
Access Log-in Log Files: Regularly access and review log-in log files on the network server.
These files contain records of user log-ins, including successful and unsuccessful attempts.
Focus on Security-Relevant Logs: Concentrate on logs that are relevant to security, such as
authentication logs and logs indicating user log-in activities.
Identify Suspicious Patterns: Look for suspicious patterns in log-in activities, including
multiple failed log-in attempts, log-ins from unusual locations or IP addresses, or log-ins during
non-business hours.
Automate Log Analysis: Implement automated log analysis tools to assist in the identification
of potential security breaches. These tools can quickly analyze large volumes of log data and
generate alerts for anomalies.
Set Thresholds for Alerts: Define thresholds for log-in activities that trigger alerts. For
example, multiple failed log-in attempts within a short period or log-ins from geographically
improbable locations.
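A threshold check of this kind reduces to counting events per user or address. The sketch below (plain Python over made-up log records, with an invented limit of three) flags any source address with more failed attempts than the limit:

```python
from collections import Counter

FAILED_LIMIT = 3  # example threshold: alert above three failed attempts

# Simplified log records: (username, source_ip, outcome) -- invented sample data.
log_entries = [
    ("admin", "10.0.0.5", "FAIL"), ("admin", "10.0.0.5", "FAIL"),
    ("admin", "10.0.0.5", "FAIL"), ("admin", "10.0.0.5", "FAIL"),
    ("meron", "10.0.0.9", "OK"),   ("meron", "10.0.0.9", "FAIL"),
]

# Count failed attempts per source address and flag the ones over the limit.
failures = Counter(ip for _, ip, outcome in log_entries if outcome == "FAIL")
suspicious = sorted(ip for ip, n in failures.items() if n > FAILED_LIMIT)
print(suspicious)  # ['10.0.0.5'] -- candidates for an alert
```

Real log analysis tools apply the same idea over parsed authentication logs, usually within a sliding time window.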
Correlate with Other Logs: Correlate log-in log data with other logs, such as intrusion
detection system (IDS) logs or firewall logs, to gain a comprehensive understanding of network
security.
Check for Unusual Log-in Times: Investigate log-ins that occur during unusual hours or
outside of normal business hours. This can be an indicator of unauthorized access.
Track User Accounts: Monitor log-ins for privileged user accounts closely. Unauthorized
access to accounts with elevated privileges poses a significant security risk.
Regular Security Training: Conduct regular security training for users to raise awareness about
the importance of secure log-in practices and to recognize and report suspicious activities.
Incident Response Plan: Have an incident response plan in place to guide actions in the event
of a detected security breach. This plan should include steps for isolating affected systems,
notifying relevant parties, and conducting forensic analysis.
Procedure:
Resource Monitoring: Utilize system monitoring tools to continuously track resource usage,
including CPU utilization, memory consumption, disk I/O, and network activity.
Set Resource Thresholds: Define thresholds for resource usage that, when exceeded, trigger
alerts. These thresholds can be customized based on the specific requirements and performance
expectations.
Prioritize Critical Processes: Identify and prioritize critical database processes and allocate
resources accordingly. Ensure that essential operations receive the necessary computing power
and memory.
Database Indexing and Optimization: Optimize database indexes and queries to minimize
resource-intensive operations. Well-optimized queries contribute to reduced resource
consumption.
Regular Performance Tuning: Conduct regular performance tuning activities, such as query
optimization and index maintenance, to enhance database efficiency and reduce resource
utilization.
Implement Caching Mechanisms: Introduce caching mechanisms to reduce the need for
repetitive database queries, thereby decreasing the load on the database and improving response
times.
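In application code, a simple query cache can be a memoized lookup. The sketch below uses Python's functools.lru_cache in front of an sqlite3 query (the products table is invented), so repeated identical lookups stop hitting the database:

```python
import sqlite3
from functools import lru_cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO products VALUES (1, 'Pen'), (2, 'Book')")

query_count = 0  # how many times the database is actually queried

@lru_cache(maxsize=128)
def product_name(product_id):
    """Cached lookup: only the first call per id reaches the database."""
    global query_count
    query_count += 1
    row = conn.execute(
        "SELECT name FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    return row[0] if row else None

names = [product_name(1), product_name(1), product_name(2), product_name(1)]
print(names, query_count)  # ['Pen', 'Pen', 'Book', 'Pen'] 2
```

Note the trade-off this illustrates: cached results can go stale, so caches need an invalidation or expiry policy when the underlying data changes.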
Disk Space Management: Monitor and manage disk space regularly. Implement practices such
as archiving, purging, or compressing data to prevent unnecessary storage consumption.
Regular System Updates: Keep the operating system, database software, and relevant
components up to date with the latest patches and updates to benefit from performance
improvements and security fixes.
Part-I Choose the best answer
1. What is the purpose of granting basic read-only access to a new user in the context of access
privilege management?
A. To allow the user to modify database records
B. To provide full administrative access to the database
C. To restrict the user from accessing the database
D. To enable the user to view but not modify data
2. What is the purpose of adjusting access privileges for a user with a new role in access
privilege management?
A. To increase the user's access privileges
B. To simplify the access control process
C. To align privileges with the user's new responsibilities
D. To grant access to all database tables
3. What is the primary purpose of access privilege management in a database system?
A. To increase system complexity
B. To grant unlimited access to all users
C. To manage and control user access to database resources
D. To ignore user access altogether
4. Why is monitoring network server logs crucial for database security?
A. To increase internet speed
B. To identify and respond to illegal log-in attempts and potential security breaches
C. To track social media usage
D. To measure the server's physical temperature
Part-II Give short answer
1. What is the purpose of access privilege management in a database system?
2. Explain the importance of monitoring network server logs for illegal log-in attempts or
security breaches.
3. Explain the role of log files in SQL Server and why monitoring them is essential for
system administrators.
Steps
1. Log in to the first node you wish to add to the domain, right-click the My Computer
icon, and click Properties.
2. Select the Computer Name tab and click Change.
3. In the Computer Name Changes window, do the following:
o In the Computer name field, specify a server hostname. This name will be used to
uniquely identify the given node among other nodes in the cluster.
o Select the Domain radio button and type the domain DNS name (you specified this
name during the Active Directory domain setup). In our example the domain DNS name
should be set to mycompany.local.
4. In the Computer Name Changes window, type the username and password of the
domain administrator account and click OK.
5. Click OK to close the displayed message welcoming you to the domain and then click
OK once more to close the Computer Name Changes window.
6. Restart the node.
1. Frew Atkilt, M-Tech Network & Information Security, Bishoftu Polytechnic College, 0911787374, [email protected]
5. Tewodros Girma, MSc Information Systems, Sheno Polytechnic College, 0912068479, [email protected]