Advanced Databases and Mining Unit 2
Transaction Processing:
ACID Properties:
In order to maintain consistency in a database before and after a transaction, certain
properties are followed. These are called the ACID properties.
Atomicity:
By this, we mean that either the entire transaction takes place at once or it doesn't happen at all.
There is no midway, i.e., transactions do not occur partially. Each transaction is considered as one
unit and either runs to completion or is not executed at all. It involves the following two
operations:
• Abort: If a transaction aborts, changes made to the database are not visible.
• Commit: If a transaction commits, changes made are visible.
Atomicity is also known as the 'All or nothing' rule.
Consider the following transaction T consisting of T1 and T2: Transfer of 100 from account X to
account Y.
If the transaction fails after completion of T1 but before completion of T2 (say, after write(X) but
before write(Y)), then the amount has been deducted from X but not added to Y. This results in
an inconsistent database state. Therefore, the transaction must be executed in its entirety in
order to ensure the correctness of the database state.
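As an illustration, the transfer can be expressed as a single SQL transaction; this is a minimal sketch, assuming a hypothetical ACCOUNTS(acc_id, balance) table:

BEGIN TRANSACTION;
UPDATE ACCOUNTS SET balance = balance - 100 WHERE acc_id = 'X'; -- T1: debit X
UPDATE ACCOUNTS SET balance = balance + 100 WHERE acc_id = 'Y'; -- T2: credit Y
COMMIT;
-- If either UPDATE fails, ROLLBACK is issued instead, so neither change persists.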
Consistency:
This means that integrity constraints must be maintained so that the database is consistent
before and after the transaction. It refers to the correctness of a database. Referring to the
example above, the total amount before and after the transaction must be maintained. Total
before T occurs = 500 + 200 = 700. Total after T occurs = 400 + 300 = 700. Therefore, the database
is consistent. Inconsistency occurs in case T1 completes but T2 fails, leaving T incomplete.
Isolation:
This property ensures that multiple transactions can occur concurrently without leading to
inconsistency of the database state. Transactions occur independently, without interference.
Changes occurring in a particular transaction will not be visible to any other transaction until that
change has been written to memory or committed. This property ensures that executing
transactions concurrently results in a state equivalent to one achieved had they been executed
serially in some order. Let X = 500, Y = 500, and consider two transactions T and T''. Here T reads
X, multiplies it by 100 and writes it back, then deducts 50 from Y and writes Y, while T'' reads X
and Y and computes their sum.
Suppose T has been executed till Read(Y) and then T'' starts. As a result, interleaving of
operations takes place, due to which T'' reads the correct value of X but the incorrect value of Y,
and the sum computed by T'' (X + Y = 50,000 + 500 = 50,500) is thus not consistent with the sum
at the end of transaction T (X + Y = 50,000 + 450 = 50,450). This results in database
inconsistency, due to a loss of 50 units. Hence, transactions must take place in isolation, and
changes should be visible only after they have been made to the main memory.
Durability:
This property ensures that once the transaction has completed execution, the updates and
modifications to the database are stored in and written to disk and they persist even if a system
failure occurs. These updates now become permanent and are stored in non-volatile memory.
The effects of the transaction, thus, are never lost.
Some important points:
Property        Responsibility for maintaining the property
Atomicity       Transaction Manager
Consistency     Application programmer
Isolation       Concurrency Control Manager
Durability      Recovery Manager
Atomicity means that multiple operations can be grouped into a single logical entity, that is,
other threads of control accessing the database will either see all of the changes or none of the
changes.
Essentially, an atomic transaction ensures that any commit you make finishes the entire operation
successfully. Or, in cases of a lost connection in the middle of an operation, the database is rolled
back to its state prior to the commit being initiated.
This is important for preventing crashes or outages from leaving a transaction partially finished
in an unknown overall state. If a crash occurs during a transaction with no atomicity, you can't
know exactly how far along the process was before the transaction was interrupted. By using
atomicity, you ensure that either the entire transaction completed successfully, or that none of
it did.
A DBMS must keep data integrated when any changes are made to it, because if the integrity of
the data is affected, the whole data set becomes disturbed and corrupted. Therefore, to maintain
the integrity of the data, four properties are described in the database management system,
which are known as the ACID properties. The ACID properties are meant for a transaction that
goes through a different group of tasks, and that is where we come to see the role of the ACID
properties.
In this section, we will learn and understand the ACID properties: what these properties stand
for and what each property is used for. We will also understand the ACID properties with the
help of some examples.
ACID Properties
1) Atomicity: The term atomicity means that the data remains atomic: if any operation is
performed on the data, it should either be executed completely or not executed at all. The
operation should not break in between or execute partially. When operations are executed
within a transaction, the transaction should be completely executed, not partially.
• Example: Remo has account A holding $30, from which he wishes to send $10 to Sheero's
account B. Account B already holds $100. When $10 is transferred to account B, the sum will
become $110. Two operations take place: the $10 that Remo wants to transfer is debited
from his account A, and the same amount is credited to account B, i.e., Sheero's account.
Now suppose the first operation, the debit, executes successfully, but the credit operation
fails. In Remo's account A the value becomes $20, while Sheero's account B remains at $100
as it was previously.
In the above diagram, it can be seen that even after debiting $10 from account A, the amount in
account B is still $100, i.e., the credit never happened. So this is not an atomic transaction.
The below image shows that both debit and credit operations are done successfully; thus the
transaction is atomic.
When a transfer loses atomicity, it becomes a huge issue in banking systems, and so atomicity is
a main focus in such systems.
2) Consistency: The word consistency means that the value should always remain preserved. In
a DBMS, the integrity of the data must be maintained, which means that if a change is made to
the database, it should remain preserved. In the case of transactions, the integrity of the data is
very essential so that the database remains consistent before and after the transaction. The
data should always be correct.
Example: in the transfer above, the total of both accounts must be the same before and after
the transaction; if the totals differ, the database has become inconsistent.
• 3) Isolation: The term 'isolation' means separation. In a DBMS, isolation is the property by
which transactions that occur concurrently do not affect one another. In short, an operation
on one database should begin only after the operation on the first database is complete, so
that if two operations are being performed on two different databases, they do not affect
the value of one another. In the case of transactions, when two or more transactions occur
simultaneously, consistency should remain maintained. Any change that occurs in a
particular transaction will not be seen by other transactions until the change is committed
to memory.
• Example: If two operations are concurrently running on two different accounts, then the
value of both accounts should not get affected, i.e., the values should remain persistent. As
you can see in the below diagram, account A is making T1 and T2 transactions to accounts B
and C, but both are executing independently without affecting each other. This is known as
isolation.
• 4) Durability: Durability ensures permanency. In a DBMS, durability ensures that data
becomes permanent in the database after the successful execution of an operation. The
durability of the data should be such that even if the system fails or crashes, the database
still survives. However, if data is lost, it becomes the responsibility of the recovery manager
to ensure the durability of the database. For committing the values, the COMMIT command
must be used every time we make changes.
• Therefore, the ACID properties of a DBMS play a vital role in maintaining the consistency and
availability of data in the database.
• This was a brief introduction to the ACID properties in DBMS; we have also discussed these
properties in the transaction section.
Serializable Schedule:
A schedule, as the name suggests, is a process of lining up transactions and executing them
one by one. When there are multiple transactions running in a concurrent manner
and the order of operations needs to be set so that the operations do not overlap each
other, scheduling is brought into play and the transactions are timed accordingly.
Serial Schedules: Schedules in which the transactions are executed non-interleaved, i.e., where
no transaction starts until a running transaction has ended, are called serial schedules.
Example: Consider the following schedule involving two transactions T1 and T2.
T1          T2
R(A)
W(A)
R(B)
W(B)
            R(A)
            R(B)
where R(A) denotes that a read operation is performed on some data item 'A'.
This is a serial schedule, since the transactions are performed serially in the order T1 -> T2.
Recoverable Schedule:
Schedules in which transactions commit only after all transactions whose changes they read
commit are called recoverable schedules. In other words, if some transaction Tj is reading a value
updated or written by some other transaction Ti, then the commit of Tj must occur after the
commit of Ti.
Example 1:
S1: R1(x), W1(x), R2(x), R1(y), R2(y),
W2(x), W1(y), C1, C2;
The given schedule follows the order Ti -> Tj => C1 -> C2. Transaction T1 is executed before T2,
hence there is no chance of a conflict occurring. R1(x) appears before W1(x), and transaction T1
is committed before T2, i.e., the first transaction performed the first update on data item x;
hence the given schedule is recoverable.
Example 2: Consider the following schedule involving two transactions T1 and T2
T1          T2
R(A)
W(A)
            W(A)
            R(A)
commit
            commit
• This is a recoverable schedule since T1 commits before T2, which makes the value read by T2
correct.
Irrecoverable Schedule:
• The table below shows a schedule with two transactions: T1 reads and writes A, and that value
is read and written by T2, which then commits. But later on, T1 fails, so we have to roll back
T1. Since T2 has read the value written by T1, T2 should also be rolled back, but T2 has already
been committed. So this schedule is an irrecoverable schedule. In general, when Tj reads a
value updated by Ti and Tj commits before Ti commits, the schedule is irrecoverable.

T1          T2
R(A)
W(A)
            R(A)
            W(A)
            commit
(T1 fails)
Concurrency Control:
The concept of concurrency control comes under transactions in a database management
system (DBMS). It is a procedure in a DBMS that helps manage two simultaneous processes so
that they execute without conflicts between each other; such conflicts occur in multi-user
systems.
Concurrency can simply be described as executing multiple transactions at a time. It is required
to increase time efficiency. If many transactions try to access the same data, inconsistency
arises. Concurrency control is required to maintain data consistency.
For example, if ATM machines did not use concurrency, multiple persons could not draw money
at a time in different places. This is where we need concurrency.
Advantages of concurrency control:
The simultaneous execution of transactions over shared databases can create several data
integrity and consistency problems.
For example, if many people are logging into ATM machines, serial updates and
synchronization in the bank servers must happen whenever a transaction is done; if not, wrong
information and wrong data end up in the database.
Main problems in using concurrency:
The problems which arise while using concurrency are as follows:
• Lost updates − One transaction makes some changes and another transaction overwrites or
deletes those changes; one transaction nullifies the updates of another transaction.
• Uncommitted dependency (dirty read) − A variable is updated in one transaction and, before
that update is committed, another transaction reads or modifies it. If the first transaction
then rolls back, the second transaction has used false values, i.e., the previous values of the
variables. This is a major problem.
• Inconsistent retrievals − One transaction is reading multiple different variables while another
transaction is in the process of updating those variables; the problem that occurs is
inconsistency of the same variable across different instances.
Locking
A lock guarantees exclusive use of a data item to the current transaction. A transaction acquires
a lock before accessing a data item, and releases the lock after the transaction completes.
Types of Locks
The types of locks are as follows −
• Shared Lock [Transaction can read only the data item values]
• Exclusive Lock [Used for both read and write data item values]
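As an illustration, many SQL engines expose an exclusive row lock through SELECT ... FOR UPDATE; a minimal sketch, reusing the hypothetical ACCOUNTS table from earlier:

BEGIN TRANSACTION;
SELECT balance FROM ACCOUNTS WHERE acc_id = 'X' FOR UPDATE; -- acquire an exclusive lock on the row
UPDATE ACCOUNTS SET balance = balance - 100 WHERE acc_id = 'X';
COMMIT; -- the lock is released when the transaction completes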
Time Stamping
A timestamp is a unique identifier created by the DBMS that indicates the relative starting time
of a transaction: whatever transaction we run, the DBMS stores its starting time and denotes it
with a specific value.
A timestamp can be generated using the system clock or a logical counter, and is assigned when
a transaction starts. With a logical counter, the counter is incremented after each new
timestamp has been assigned.
Optimistic
It is based on the assumption that conflict is rare and it is more efficient to allow transactions to
proceed without imposing delays to ensure serializability.
Time-stamp based protocols:
Concurrency control can be implemented in different ways. One way to implement it is by using
locks. Now, let us discuss the Timestamp Ordering Protocol.
As introduced earlier, a timestamp is a unique identifier created by the DBMS to identify a
transaction. Timestamps are usually assigned in the order in which transactions are submitted
to the system. We refer to the timestamp of a transaction T as TS(T).
Timestamp Ordering Protocol –
The main idea for this protocol is to order the transactions based on their Timestamps. A schedule
in which the transactions participate is then serializable and the only equivalent serial schedule
permitted has the transactions in the order of their Timestamp Values. Stating simply, the
schedule is equivalent to the particular Serial Order corresponding to the order of the Transaction
timestamps. An algorithm must ensure that, for each item accessed by Conflicting Operations in
the schedule, the order in which the item is accessed does not violate the ordering. To ensure
this, use two Timestamp Values relating to each database item X.
• W_TS(X) is the largest timestamp of any transaction that executed write(X) successfully.
• R_TS(X) is the largest timestamp of any transaction that executed read(X) successfully.
Basic Timestamp Ordering – Every transaction is issued a timestamp based on when it enters the
system. Suppose an old transaction Ti has timestamp TS(Ti); a new transaction Tj is assigned
timestamp TS(Tj) such that TS(Ti) < TS(Tj). The protocol manages concurrent execution such that
the timestamps determine the serializability order. The timestamp ordering protocol ensures
that any conflicting read and write operations are executed in timestamp order. Whenever some
transaction T tries to issue an R_item(X) or a W_item(X), the Basic TO algorithm compares the
timestamp of T with R_TS(X) and W_TS(X) to ensure that the timestamp order is not violated.
The Basic TO protocol works in the following two cases.
1. Whenever a transaction T issues a W_item(X) operation, check the following conditions:
• If R_TS(X) > TS(T) or if W_TS(X) > TS(T), then abort and roll back T and reject the operation;
else,
• Execute the W_item(X) operation of T and set W_TS(X) to TS(T).
2. Whenever a transaction T issues an R_item(X) operation, check the following conditions:
• If W_TS(X) > TS(T), then abort and roll back T and reject the operation; else,
• Execute the R_item(X) operation of T and set R_TS(X) to the larger of TS(T) and the current
R_TS(X).
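For a concrete walk-through (with illustrative numbers), suppose TS(T) = 5 and item X currently has R_TS(X) = 7 and W_TS(X) = 3. If T issues W_item(X), the write is rejected and T is rolled back, because a younger transaction (timestamp 7) has already read X, so R_TS(X) > TS(T). If instead R_TS(X) = 4 and W_TS(X) = 3, both are below TS(T) = 5, so the write executes and W_TS(X) is set to 5.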
Isolation Levels:
As we know, in order to maintain consistency in a database, it follows the ACID properties.
Among these four properties (Atomicity, Consistency, Isolation, and Durability), isolation
determines how the changes made by a transaction are visible to other users and systems. It
means that a transaction should take place in a system in such a way that it appears to be the
only transaction accessing the resources of the database system.
Isolation levels define the degree to which a transaction must be isolated from the data
modifications made by any other transaction in the database system. A transaction isolation
level is defined by the following phenomena:
• Dirty Read – A dirty read is a situation where a transaction reads data that has not yet been
committed. For example, let's say transaction 1 updates a row and leaves it uncommitted;
meanwhile, transaction 2 reads the updated row. If transaction 1 rolls back the change,
transaction 2 will have read data that is considered never to have existed.
• Non Repeatable Read – A non-repeatable read occurs when a transaction reads the same row
twice and gets a different value each time. For example, suppose transaction T1 reads data.
Due to concurrency, another transaction T2 updates the same data and commits. Now, if
transaction T1 rereads the same data, it will retrieve a different value.
• Phantom Read – A phantom read occurs when the same query is executed twice but the rows
retrieved by the two executions are different. For example, suppose transaction T1 retrieves
a set of rows that satisfy some search criteria. Now, transaction T2 generates some new rows
that match the search criteria for transaction T1. If transaction T1 re-executes the statement
that reads the rows, it gets a different set of rows this time.
Based on these phenomena, The SQL standard defines four isolation levels :
1. Read Uncommitted – Read Uncommitted is the lowest isolation level. In this level, one
transaction may read not yet committed changes made by other transactions, thereby
allowing dirty reads. At this level, transactions are not isolated from each other.
2. Read Committed – This isolation level guarantees that any data read is committed at the
moment it is read. Thus it does not allow dirty read. The transaction holds a read or write lock
on the current row, and thus prevents other transactions from reading, updating, or deleting
it.
3. Repeatable Read – This is a more restrictive isolation level. The transaction holds read locks
on all rows it references and write locks on referenced rows for update and delete actions.
Since other transactions cannot read, update, or delete these rows, it avoids non-repeatable
reads.
4. Serializable – This is the highest isolation level. Concurrent transactions execute as if they
were run serially, which also prevents phantom reads.
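As a sketch, the isolation level is selected per transaction in standard SQL (exact syntax varies by engine), reusing the hypothetical ACCOUNTS table from earlier:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT balance FROM ACCOUNTS WHERE acc_id = 'X'; -- first read
-- even if another session updates and commits this row now,
-- rereading it within this transaction returns the same value
SELECT balance FROM ACCOUNTS WHERE acc_id = 'X';
COMMIT;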
OLAP operations:
There are five basic analytical operations that can be performed on an OLAP cube:
1. Drill down: In the drill-down operation, less detailed data is converted into more highly
detailed data. It can be done by:
• Moving down in the concept hierarchy
• Adding a new dimension
In the cube given in overview section, the drill down operation is performed by moving down
in the concept hierarchy of Time dimension (Quarter -> Month).
2. Roll up: It is just the opposite of the drill-down operation. It performs aggregation on the OLAP
cube. It can be done by:
It can be done by:
• Climbing up in the concept hierarchy
• Reducing the dimensions
In the cube given in the overview section, the roll-up operation is performed by climbing up in
the concept hierarchy of Location dimension (City -> Country).
3. Dice: It selects a sub-cube from the OLAP cube by selecting two or more dimensions. In the cube
given in the overview section, a sub-cube is selected by selecting following dimensions with
criteria:
• Location = "Delhi" or "Kolkata"
• Time = "Q1" or "Q2"
4. Slice: It selects a single dimension from the OLAP cube, which results in the creation of a new
sub-cube. In the cube given in the overview section, slice is performed on the dimension Time
= “Q1”.
5. Pivot: It is also known as the rotation operation, as it rotates the current view to get a new view of
the representation. In the sub-cube obtained after the slice operation, performing pivot
operation gives a new view of it.
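These operations map naturally onto SQL aggregation over a fact table; a minimal sketch, assuming a hypothetical SALES(city, country, quarter, month, amount) fact table:

-- Roll-up: climb the Location hierarchy (City -> Country)
SELECT country, quarter, SUM(amount) FROM SALES GROUP BY country, quarter;

-- Drill-down: move down the Time hierarchy (Quarter -> Month)
SELECT city, month, SUM(amount) FROM SALES GROUP BY city, month;

-- Slice: fix a single value of one dimension (Time = 'Q1')
SELECT city, month, SUM(amount) FROM SALES WHERE quarter = 'Q1' GROUP BY city, month;

-- Dice: select a sub-cube with criteria on two or more dimensions
SELECT city, quarter, SUM(amount) FROM SALES
WHERE city IN ('Delhi', 'Kolkata') AND quarter IN ('Q1', 'Q2')
GROUP BY city, quarter;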
Query Tree:
Query optimization involves three steps, namely query tree generation, plan generation, and
query plan code generation.
Step 1 − Query Tree Generation
A query tree is a tree data structure representing a relational algebra expression. The tables of
the query are represented as leaf nodes. The relational algebra operations are represented as
the internal nodes. The root represents the query as a whole.
During execution, an internal node is executed whenever its operand tables are available. The
node is then replaced by the result table. This process continues for all internal nodes until the
root node is executed and replaced by the result table.
For example, let us consider the EMPLOYEE and DEPARTMENT schemas.
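As an illustration (with hypothetical attribute names), a query such as "names of employees in the Research department" corresponds to the relational algebra expression π EMP_NAME (σ DNAME = 'Research' (EMPLOYEE ⋈ DEPARTMENT)). In its query tree, EMPLOYEE and DEPARTMENT are the leaf nodes, the join and the selection are internal nodes, and the projection at the root produces the final result. During execution, the join runs first and is replaced by its result table, after which the selection and finally the projection are applied.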
Cost of Query:
In the previous section, we discussed query processing steps and evaluation plans. Though a
system can create multiple plans for a query, the chosen one should be the best of all. This can
be done by comparing each possible plan in terms of its estimated cost. To calculate the net
estimated cost of any plan, the cost of each operation within the plan is determined and then
combined to get the net estimated cost of the query evaluation plan.
The cost estimation of a query evaluation plan is calculated in terms of various resources that
include:
o Number of disk accesses
o Execution time taken by the CPU to execute a query
o Communication costs in distributed or parallel database systems
To estimate the cost of a query evaluation plan, we use the number of blocks transferred from
the disk and the number of disk seeks. Suppose the disk has an average seek time of tS seconds
and takes an average of tT seconds to transfer one block of data; the block access time is the
sum of the disk seek time and the rotational latency. If a plan transfers b blocks and performs S
seeks, the time taken is b*tT + S*tS seconds. For example, tT = 0.1 ms and tS = 4 ms for a disk
with a block size of 4 KB and a transfer rate of 40 MB per second. With this, we can easily
calculate the estimated cost of a given query evaluation plan.
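For instance, with these figures, a plan that transfers b = 100 blocks and performs S = 4 seeks (illustrative values) is estimated at 100 * 0.1 ms + 4 * 4 ms = 10 ms + 16 ms = 26 ms.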
Generally, for estimating the cost, we consider the worst case that could happen: it is assumed
that, initially, all data must be read from disk. In reality, some of the information may already be
present in main memory; since this effect is usually ignored, the actual cost of execution often
comes out less than the estimated value.
The response time, i.e., the time required to execute the plan, could also be used for estimating
the cost of the query evaluation plan. But due to the following reasons, it is difficult to calculate
the response time without actually executing the query evaluation plan:
o When the query begins its execution, the response time becomes dependent on the
contents stored in the buffer, and this information is difficult to retrieve when the query
is in optimized mode, or may not be available at all.
o When a system with multiple disks is present, the response time depends on how
accesses are distributed among the disks, which is difficult to estimate without detailed
knowledge of the data layout on the disks.
Consequently, instead of minimizing the response time for any query evaluation plan, the
optimizer finds it better to reduce the total resource consumption of the query plan. Thus, to
estimate the cost of a query evaluation plan, it is good to minimize the resources used for
accessing the disk and the use of extra resources.
Join:
As the name suggests, JOIN means to combine something. In the case of SQL, JOIN means "to
combine two or more tables".
In SQL, the JOIN clause is used to combine records from two or more tables in a database.
Types of SQL JOIN
• INNER JOIN
• LEFT JOIN
• RIGHT JOIN
• FULL JOIN
Sample Table
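A hypothetical pair of sample tables, reconstructed in SQL to be consistent with the outputs listed below (table and column names are assumptions):

CREATE TABLE EMPLOYEE (
  EMP_ID   INT PRIMARY KEY,
  EMP_NAME VARCHAR(50)
);
CREATE TABLE DEPARTMENT (
  DEPT_ID    INT PRIMARY KEY,
  DEPARTMENT VARCHAR(50),
  EMP_ID     INT  -- references EMPLOYEE
);
INSERT INTO EMPLOYEE VALUES
  (1, 'Angelina'), (2, 'Robert'), (3, 'Christian'),
  (4, 'Kristen'), (5, 'Russell'), (6, 'Marry');
INSERT INTO DEPARTMENT VALUES
  (101, 'Testing', 1), (102, 'Development', 2),
  (103, 'Designing', 3), (104, 'Development', 4);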
INNER JOIN
In SQL, INNER JOIN selects records that have matching values in both tables, as long as the
condition is satisfied. It returns the combination of all rows from both tables where the
condition is satisfied.
Syntax
Query
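A sketch of the syntax and a query consistent with the hypothetical sample tables above:

SELECT table1.column1, table2.column2
FROM table1
INNER JOIN table2 ON table1.matching_column = table2.matching_column;

SELECT EMPLOYEE.EMP_NAME, DEPARTMENT.DEPARTMENT
FROM EMPLOYEE
INNER JOIN DEPARTMENT ON EMPLOYEE.EMP_ID = DEPARTMENT.EMP_ID;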
Output
EMP_NAME DEPARTMENT
Angelina Testing
Robert Development
Christian Designing
Kristen Development
LEFT JOIN
The SQL LEFT JOIN returns all the values from the left table and the matching values from the
right table. If there is no matching join value, it returns NULL.
Syntax
Query
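A sketch consistent with the hypothetical sample tables above (Russell and Marry have no department row, so DEPARTMENT is NULL for them):

SELECT EMPLOYEE.EMP_NAME, DEPARTMENT.DEPARTMENT
FROM EMPLOYEE
LEFT JOIN DEPARTMENT ON EMPLOYEE.EMP_ID = DEPARTMENT.EMP_ID;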
Output
EMP_NAME DEPARTMENT
Angelina Testing
Robert Development
Christian Designing
Kristen Development
Russell NULL
Marry NULL
RIGHT JOIN
In SQL, RIGHT JOIN returns all the rows of the right table and the matched values from the left
table. If there is no matching value in the left table, it returns NULL.
Syntax
ON table1.matching_column = table2.matching_column;
Query
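A sketch completing the surviving ON clause above, using the hypothetical sample tables:

SELECT EMPLOYEE.EMP_NAME, DEPARTMENT.DEPARTMENT
FROM EMPLOYEE
RIGHT JOIN DEPARTMENT ON EMPLOYEE.EMP_ID = DEPARTMENT.EMP_ID;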
Output
EMP_NAME DEPARTMENT
Angelina Testing
Robert Development
Christian Designing
Kristen Development
FULL JOIN
In SQL, FULL JOIN combines the results of both the left and right outer joins: the joined result
has all the records from both tables, and puts NULL in place of matches not found.
Syntax
Query
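A sketch for engines that support FULL OUTER JOIN (not all do, e.g., MySQL), using the hypothetical sample tables:

SELECT EMPLOYEE.EMP_NAME, DEPARTMENT.DEPARTMENT
FROM EMPLOYEE
FULL OUTER JOIN DEPARTMENT ON EMPLOYEE.EMP_ID = DEPARTMENT.EMP_ID;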
Output
EMP_NAME DEPARTMENT
Angelina Testing
Robert Development
Christian Designing
Kristen Development
Russell NULL
Marry NULL
Access Control:
Database security means keeping sensitive information safe and preventing the loss of data.
The security of a database is controlled by the Database Administrator (DBA).
The following main control measures are used to provide security of data in databases:
1. Authentication
2. Access control
3. Inference control
4. Flow control
5. Database Security applying Statistical Method
6. Encryption
1. Authentication :
Authentication is the process of confirming whether a user logs in only according to the
rights provided to them to perform activities on the database. A particular user can log in
only up to their privilege level and cannot access other sensitive data. The privilege of
accessing sensitive data is restricted by using authentication.
Authentication tools based on biometrics, such as retina scans and fingerprints, can protect
the database from unauthorized or malicious users.
2. Access Control :
The security mechanism of a DBMS must include some provisions for restricting access to
the database by unauthorized users. Access control is done by creating user accounts and
controlling the login process in the DBMS, so that access to sensitive data is possible only
for those people (database users) who are allowed to access it, and is restricted for
unauthorized persons.
The database system must also keep track of all operations performed by a given user
throughout their login session.
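As an illustration, access control in SQL is commonly expressed through user accounts and privileges; a minimal sketch with hypothetical names (exact syntax varies by engine):

CREATE USER analyst IDENTIFIED BY 'S3cret_pw'; -- create an account
GRANT SELECT ON EMPLOYEE TO analyst;           -- read-only access to one table
REVOKE SELECT ON EMPLOYEE FROM analyst;        -- withdraw the privilege again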
3. Inference Control :
This method is known as a countermeasure to the statistical database security problem. It is
used to prevent the user from completing any inference channel. This method protects
sensitive information from indirect disclosure.
Inferences are of two types: identity disclosure and attribute disclosure.
4. Flow Control :
This prevents information from flowing in a way that lets it reach unauthorized users.
Pathways along which information can flow implicitly, in ways that violate the privacy policy
of a company, are called covert channels.
5. Database Security applying Statistical Methods :
Statistical database security focuses on protecting confidential individual values stored in a
database that is used for statistical purposes, i.e., for retrieving summaries of values based
on categories, while not permitting retrieval of individual information. For example, a user
may be allowed to query the database for statistical information such as the number of
employees in the company, but not for detailed confidential or personal information about
a specific individual employee.
6. Encryption :
This method is mainly used to protect sensitive data, such as credit card numbers and
one-time passwords (OTPs). The data is encoded using encryption algorithms.
An unauthorized user who tries to access this encoded data will have difficulty decoding it,
while authorized users are given decoding keys to decode the data.
DAC and MAC:
1. DAC :
DAC is identity-based access control. DAC mechanisms are controlled by user identification
such as username and password. DAC is discretionary because owners can transfer objects
or any authenticated information to other users. In simple words, the owner can determine
the access privileges.
Attributes of DAC –
1. Users can transfer their object ownership to another user.
2. The access type of other users can be determined by the user.
3. Authorization failure can restrict the user access after several failed attempts.
4. Unauthorized users will be blind to object characteristics such as file size, directory path, and
file name.
Examples- File permissions in the Linux operating system are an example of DAC.
2. MAC :
In MAC, the operating system provides access to users based on their identities and the data.
To gain access, users have to submit their personal information. It is very secure because the
rules and restrictions are imposed by the admin and are strictly followed. MAC settings and
policy management are established in a secure network and are limited to system
administrators.
Attributes of MAC –
1. MAC policies can help to reduce system errors.
2. It has tighter security because only the administrator can access or alter controls.
3. MAC has an enforced operating system that can label and delineate incoming application data.
4. Maintenance will be difficult because only the administrator can have access to the database.
Examples- The access levels of Windows for ordinary users, admins, and guests are some
examples of MAC.
DAC vs. MAC:
• DAC stands for Discretionary Access Control; MAC stands for Mandatory Access Control.
• In DAC, users are provided access based on their identity, not on levels; in MAC, users are
restricted based on their power and level in the hierarchy.
• DAC has high flexibility, with no strict rules and regulations; MAC is not flexible, as it contains
lots of strict rules and regulations.
• DAC has complete trust in users; MAC has trust only in administrators.
• In DAC, decisions are based only on user ID and ownership; in MAC, decisions are based on
objects and tasks, which can have their own IDs.
The concept of Role-Based Access Control (RBAC) is to create a set of permissions and assign
these permissions to a user or group. With the help of these permissions, users can be given
only limited access, therefore increasing the level of security.
There are different ways to perform RBAC, such as creating custom privilege levels or creating
views.
Custom privilege levels –
When we take the console of a router, we enter user-level mode, which has privilege level 1. By
typing enable, we enter privileged mode, where the privilege level is 15. A user with privilege
level 15 can access all the commands that are at level 15 or below. By creating a custom
privilege level (between 2 and 14) and assigning commands to it, the administrator can provide
a subset of commands to the user.
Configuration –
First, we will add a command to our privilege level, say 8, and assign a password to it.
R1(config)#privilege exec level 8 configure terminal
R1(config)#enable secret level 8 0 saurabh
Here, we have assigned the password saurabh. Also note that the 0 means the password that
follows is clear text (non-hashed). Now, we will create a local user named saurabh, associate
this user with the configured level, enable the AAA model, and assign the default list to the
various lines.
R1(config)#username saurabh privilege 8 secret cisco123
R1(config)#aaa new-model
R1(config)#line vty 0 4
R1(config)#login local
Now, whenever the user saurabh takes remote access through the vty lines, he will be assigned
privilege level 8.
Creating views:
Role-based CLI access enables the administrator to create different views of the device for
different users, where each view defines the commands that a user can access. It is similar to
privilege levels. Role-based CLI provides two types of views:
1. Root view – The root view has the same access privilege level as a user who has level 15. The
administrator should be in the root view, as views can be added, edited, or deleted only in
the root view.
Configuration – To enter the root view, we first have to enable AAA on the device and then
set an enable password or secret password, which will be used when any user enters the
root view.
To enable AAA on the device and to apply the secret password, the commands are:
R1(config)#aaa new-model
R1(config)#enable secret geeksforgeeks
Now, we will enter the root view by command:
R1#enable view
By typing this, we will enter into root level where we can add, delete or edit views.
2. Superview – A superview consists of two or more CLI views. A network administrator can
assign a user or a group of users a superview that consists of multiple views. Since a
superview can consist of more than one view, it has access to all the commands that are
provided in those views.
Configuration – As the superview consists of more than one view, we will first create two
views named Cisco and IBM. In the Cisco view, we will allow all show commands in exec
mode and the int e0/0 command in global configuration mode.
R1(config)#parser view cisco
R1(config-view)#secret geeksforgeeks1