DBMS RGPV UNIT 4


About This Presentation

A trigger in a Database Management System (DBMS) is a special type of stored procedure that is automatically executed, or “fired,” by the database in response to specific events on a table or view. These events can be INSERT, UPDATE, or DELETE operations. Unlike normal procedures, triggers cannot be invoked directly; they run automatically when their triggering event occurs.


Slide Content

Serializability is a concept in database systems that ensures concurrent transactions produce the same result as if they had run one after another in some serial order. It guarantees that the final state of the database is consistent and equivalent to a serial execution, even when the transactions actually run simultaneously. There are two common types of serializability:

- Conflict Serializability: A schedule is conflict serializable if it can be transformed into a serial schedule by swapping non-conflicting operations. Two operations conflict when they belong to different transactions, access the same data item, and at least one of them is a write. Conflict serializability is commonly tested by building a precedence graph over the transactions and checking it for cycles, as the sketch below illustrates.

- View Serializability: A schedule is view serializable if it is view equivalent to some serial schedule: each transaction reads the same initial values in both schedules, every read observes the value written by the same transaction in both, and the same transactions perform the final write on each data item. View serializability admits more schedules than conflict serializability but is far more expensive to test.

Achieving serializability is crucial in multi-user database systems where many transactions execute concurrently. It prevents anomalies such as dirty reads (reading uncommitted data), non-repeatable reads (reading different values for the same item within one transaction), and phantom reads (seeing additional rows in a repeated query because of concurrent inserts). Database management systems employ concurrency control mechanisms such as locking, timestamping, or snapshot isolation to control and coordinate access to data among concurrent transactions, maintaining consistency and keeping the database in a valid state.
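As a minimal sketch of the precedence-graph test mentioned above: the schedule encoding below (a list of (transaction, operation, item) tuples) and the function name conflict_serializable are our own illustration, not taken from the slides.

    # Sketch of a conflict-serializability test via a precedence graph.
    from collections import defaultdict

    def conflict_serializable(schedule):
        """schedule: ordered list of (txn, op, item), op in {'R', 'W'}."""
        edges = defaultdict(set)
        for i, (ti, op_i, item_i) in enumerate(schedule):
            for tj, op_j, item_j in schedule[i + 1:]:
                # An edge ti -> tj exists for each conflicting pair:
                # same item, different transactions, at least one write.
                if ti != tj and item_i == item_j and 'W' in (op_i, op_j):
                    edges[ti].add(tj)

        # The schedule is conflict serializable iff the graph is acyclic.
        WHITE, GRAY, BLACK = 0, 1, 2
        color = defaultdict(int)

        def has_cycle(node):
            color[node] = GRAY
            for nxt in edges[node]:
                if color[nxt] == GRAY or (color[nxt] == WHITE and has_cycle(nxt)):
                    return True
            color[node] = BLACK
            return False

        txns = {t for t, _, _ in schedule}
        return not any(color[t] == WHITE and has_cycle(t) for t in txns)

    # R1(A) W2(A) W1(A) yields edges T1->T2 and T2->T1, a cycle:
    print(conflict_serializable([('T1', 'R', 'A'), ('T2', 'W', 'A'), ('T1', 'W', 'A')]))  # False
    print(conflict_serializable([('T1', 'R', 'A'), ('T1', 'W', 'A'), ('T2', 'W', 'A')]))  # True

The cycle in the first example means there is no serial order consistent with all conflicts, so the schedule is not conflict serializable.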

A Recoverable Schedule in database systems is a schedule of transactions in which, if a transaction T1 reads data that has been modified by another transaction T2, then T2 must commit before T1 commits. This ensures that if T2 aborts (rolls back), T1 has not already committed on the basis of values that no longer exist; T1 can itself still be rolled back without violating data consistency. In simpler terms, a recoverable schedule guarantees that a transaction which read data from a later-aborted transaction can also be rolled back safely. Note that recoverability does not forbid dirty reads (reading uncommitted data that may later be rolled back); it only limits their damage by ensuring the reading transaction has not committed first. The recoverable schedule helps maintain the ACID properties of a database (Atomicity, Consistency, Isolation, Durability) by ensuring that transactions operate in a manner that allows for recovery and consistency in the event of failures or rollbacks, and it forms part of ensuring data integrity in database systems.
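A rough illustration of this rule: the sketch below checks a schedule for recoverability by verifying, for every reads-from pair, that the writer's commit precedes the reader's commit. The event encoding (adding 'C' for commit) and the function name are our own assumptions, not part of the slides.

    # Sketch: recoverability check over (txn, op, item) events,
    # op in {'R', 'W', 'C'}; item is ignored for 'C'.

    def is_recoverable(schedule):
        last_writer = {}      # item -> txn with the latest write on it
        reads_from = []       # (reader, writer) pairs
        commit_pos = {}       # txn -> position of its commit event

        for pos, (txn, op, item) in enumerate(schedule):
            if op == 'W':
                last_writer[item] = txn
            elif op == 'R' and last_writer.get(item, txn) != txn:
                reads_from.append((txn, last_writer[item]))
            elif op == 'C':
                commit_pos[txn] = pos

        # Recoverable: every writer commits before the txn that read from it.
        return all(
            writer in commit_pos and reader in commit_pos
            and commit_pos[writer] < commit_pos[reader]
            for reader, writer in reads_from
        )

    # W2(A) R1(A) C1 C2 is NOT recoverable: T1 commits before its source T2.
    print(is_recoverable([('T2', 'W', 'A'), ('T1', 'R', 'A'),
                          ('T1', 'C', None), ('T2', 'C', None)]))  # False
    # W2(A) R1(A) C2 C1 IS recoverable.
    print(is_recoverable([('T2', 'W', 'A'), ('T1', 'R', 'A'),
                          ('T2', 'C', None), ('T1', 'C', None)]))  # True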

A Cascadeless Schedule is a concept in database systems that prevents cascading rollbacks during concurrent transaction execution. It ensures that aborting one transaction never forces other transactions to be rolled back as well. In a cascadeless schedule:

- Read Only Committed Data: If a transaction T2 reads a data item written by another transaction T1, then T1 must commit before T2 performs that read. In other words, transactions may only read committed data, which rules out dirty reads entirely.

- No Cascading Rollbacks: Because no transaction ever reads uncommitted data, the rollback of one transaction cannot invalidate values already read by others, so no other transaction needs to be rolled back with it. Every cascadeless schedule is therefore also recoverable, as the sketch below makes explicit.

Cascadeless schedules are important in maintaining data consistency and preventing a domino effect of rollbacks that could disrupt the integrity of the database. They contribute to the isolation property in the ACID (Atomicity, Consistency, Isolation, Durability) principles for database transactions. Concurrency control mechanisms such as locking, timestamping, and validation techniques are employed to keep schedules cascadeless, avoiding these cascading effects while maintaining the consistency and correctness of the database despite concurrent execution.
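The cascadeless condition is strictly stronger than the recoverability check above: a read must come after the writer's commit, not merely before the reader's commit. A minimal sketch, reusing the same illustrative event encoding (which, again, is not from the slides):

    # Sketch: cascadeless check over (txn, op, item) events, op in {'R', 'W', 'C'}.

    def is_cascadeless(schedule):
        uncommitted_writes = {}   # item -> txn with an uncommitted write on it

        for txn, op, item in schedule:
            if op == 'W':
                uncommitted_writes[item] = txn
            elif op == 'R':
                # Reading another transaction's uncommitted write is a dirty
                # read, which cascadeless schedules forbid.
                if uncommitted_writes.get(item, txn) != txn:
                    return False
            elif op == 'C':
                uncommitted_writes = {k: v for k, v in uncommitted_writes.items()
                                      if v != txn}
        return True

    # W2(A) C2 R1(A) is cascadeless; W2(A) R1(A) is not.
    print(is_cascadeless([('T2', 'W', 'A'), ('T2', 'C', None),
                          ('T1', 'R', 'A'), ('T1', 'C', None)]))  # True
    print(is_cascadeless([('T2', 'W', 'A'), ('T1', 'R', 'A')]))   # False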

Transaction failure refers to situations where a transaction cannot complete its execution successfully. Failures occur for several reasons, leading to different types of transaction failure:

- System Errors: Caused by issues within the system or hardware, such as power failures, disk failures, or crashes, which interrupt the normal flow of transactions.

- Logical Errors: Caused by issues in the transaction logic or application code; for example, attempting an operation that violates a constraint, such as inserting a duplicate key into a unique column.

- Deadlock: Two or more transactions wait indefinitely for resources held by each other, preventing any of them from progressing. One or more transactions must be aborted to resolve the deadlock.

- Concurrency Control Violations: A transaction tries to access data locked by another transaction, or violates its isolation level.

- Timeouts: A transaction exceeds a predefined time limit for execution, for instance a long-running transaction that consumes excessive resources, or one delayed by the network.

These failures lead to different transaction outcomes (a typical application-side response is sketched after this list):

- Successful Completion: The transaction executes successfully and commits its changes, making the modifications permanent and visible to other transactions.

- Aborted or Rolled Back: The transaction fails and is aborted, undoing its changes and restoring the database to its state before the transaction began, for example after deadlock resolution, a constraint violation, or a system error.

- Partially Completed: A transaction might complete some of its operations before failing, leaving the database in an inconsistent state that requires recovery or corrective measures.
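As a hedged sketch of how an application commonly reacts to deadlock or timeout aborts, the code below retries a transaction a bounded number of times with rollback between attempts. It follows the general shape of Python's DB-API, but the connection object, the work callback, and the TransientTxnError exception class are placeholders, not any specific driver's real API.

    import time

    class TransientTxnError(Exception):
        """Stand-in for a driver's deadlock/timeout error class."""

    def run_with_retry(conn, work, max_attempts=3):
        for attempt in range(1, max_attempts + 1):
            try:
                cur = conn.cursor()
                work(cur)            # the transaction body: reads and writes
                conn.commit()        # successful completion: changes are durable
                return
            except TransientTxnError:
                conn.rollback()      # aborted: undo all changes of this attempt
                if attempt == max_attempts:
                    raise            # give up; surface the failure to the caller
                time.sleep(0.1 * attempt)  # back off before retrying

Retrying is only safe for transient failures such as deadlock victims or timeouts; logical errors like constraint violations will simply fail again and should be surfaced immediately.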

Strict Two-Phase Locking (Strict 2PL): In strict 2PL, a transaction follows the basic two-phase rule (no new locks may be acquired after the first lock is released) and, in addition, holds all of its exclusive (write) locks until it commits or aborts. Because no other transaction can read an item while its writer still holds the exclusive lock, strict 2PL prevents dirty reads and therefore cascading rollbacks, while still guaranteeing serializability. Holding write locks until the commit point can, however, limit concurrency.

Rigorous Two-Phase Locking (Rigorous 2PL): Rigorous 2PL strengthens strict 2PL further: a transaction holds all of its locks, shared (read) as well as exclusive (write), until it commits or aborts. This makes transactions serializable in exactly their commit order and is simple to implement, which is why many practical systems use it, at the cost of even less concurrency than strict 2PL.

Conservative Two-Phase Locking (Conservative or Static 2PL): Conservative 2PL focuses on avoiding deadlocks. A transaction declares its complete lock requirements up front and must acquire all of them before it begins execution; if any lock is unavailable, it waits without holding any locks. Because a running transaction never waits for an additional lock while holding others, the hold-and-wait condition for deadlock can never arise. The price is reduced concurrency and the practical difficulty of knowing every needed lock in advance.
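To make the strict-2PL rule concrete, here is a minimal sketch of a lock table in which exclusive locks are released only at commit or abort. The class and method names are our own illustration, not taken from any particular DBMS, and only exclusive locks are modeled:

    # Sketch of strict 2PL bookkeeping: exclusive locks are released
    # only by commit()/abort(), never mid-transaction.

    import threading
    from collections import defaultdict

    class StrictLockManager:
        def __init__(self):
            self._cv = threading.Condition()
            self._owner = {}                 # item -> txn holding the X lock
            self._held = defaultdict(set)    # txn -> items it has locked

        def acquire_exclusive(self, txn, item):
            with self._cv:
                # Growing phase: block until no other txn owns the item.
                while self._owner.get(item, txn) != txn:
                    self._cv.wait()
                self._owner[item] = txn
                self._held[txn].add(item)

        def _release_all(self, txn):
            # Shrinking phase happens only here, at end of transaction,
            # which is exactly the strict-2PL rule.
            with self._cv:
                for item in self._held.pop(txn, set()):
                    del self._owner[item]
                self._cv.notify_all()

        def commit(self, txn):
            self._release_all(txn)

        def abort(self, txn):
            self._release_all(txn)

Rigorous 2PL would apply the same end-of-transaction release to shared locks as well, and conservative 2PL would move all acquire_exclusive calls to before the transaction's first operation.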