Damage Management in Database Management Systems

Open Access
Bai, Kun
Graduate Program:
Information Sciences and Technology
Doctor of Philosophy
Document Type:
Date of Defense:
July 21, 2009
Committee Members:
  • Peng Liu, Dissertation Advisor/Committee Chair
  • Chao Hsien Chu, Committee Member
  • Thomas La Porta, Committee Member
  • Sencun Zhu, Committee Member
Keywords:
  • Security
  • Data Damage
  • Database
  • Integrity
In the past two decades there have been many advances in the field of computer security. However, since vulnerabilities cannot be completely removed from a system, successful attacks often occur and cause damage to the system. Despite numerous technological advances in both security software and hardware, many challenging problems still limit the effectiveness and practicality of existing security measures. As Web applications gain popularity in today's world, enabling a Database Management System (DBMS) to survive an attack is becoming even more crucial than before because of the increasingly critical role the DBMS plays in business/life/mission-critical applications. Although significant progress has been made in protecting the DBMS through existing database security techniques (e.g., access control, integrity constraints, and failure recovery), business/life/mission-critical applications can still be hit by new threats against the back-end DBMS. For example, in addition to the vulnerabilities exploited by attacks (e.g., SQL injection attacks), databases can be damaged in several ways, such as fraudulent transactions (e.g., identity theft) launched by malicious outsiders and erroneous transactions issued by insiders by mistake. When the database comes under such an attack, rolling back and re-executing the damaged transactions is the most commonly used recovery mechanism. Such a mechanism either stops (or greatly restricts) the database service during repair, causing unacceptable data availability loss or denial of service for mission-critical applications, or may cause serious damage spreading during on-the-fly recovery, where many clean data items are accidentally corrupted by legitimate new transactions.
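The damage-spreading problem described above can be illustrated with a minimal sketch. The function, log format, and transaction names below are hypothetical illustrations, not the dissertation's actual mechanism: damage propagates whenever a new transaction reads a data item written by an already-damaged transaction.

```python
# Hypothetical sketch: damage spreading through read-from dependencies.
# A transaction that reads a dirty item contaminates everything it writes.

def spread_damage(log, malicious):
    """log: ordered list of (txn_id, reads, writes) tuples.
    malicious: set of txn_ids known to be attacker transactions.
    Returns (damaged_txns, damaged_items)."""
    damaged_txns = set(malicious)
    damaged_items = set()
    for txn, reads, writes in log:
        if txn in damaged_txns or damaged_items & set(reads):
            # Malicious, or read a dirty item: its writes become dirty too.
            damaged_txns.add(txn)
            damaged_items |= set(writes)
    return damaged_txns, damaged_items

log = [
    ("T1", [], ["x"]),     # malicious: corrupts x
    ("T2", ["x"], ["y"]),  # legitimate, but reads dirty x and spreads to y
    ("T3", ["z"], ["w"]),  # untouched by the damage
]
print(spread_damage(log, {"T1"}))  # ({'T1', 'T2'}, {'x', 'y'})
```

Note how T2, a perfectly legitimate transaction, still ends up damaged because it executed before x was repaired; this is exactly why naive on-the-fly recovery can let damage spread.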
In this study, we address database damage management (DBDM), a very important problem faced today by a large number of mission/life/business-critical applications and information systems that must manage risk, business continuity, and assurance in the presence of severe cyber attacks. Although a number of research projects have tackled the emerging data corruption threats, existing mechanisms are still limited in meeting four highly desired requirements: near-zero run-time overhead, zero system-down time, zero blocking time for read-only transactions, and minimal delay time for read-write transactions. First, to achieve these four requirements, we propose TRACE, a zero-system-down-time database damage tracking, quarantine, and recovery solution with negligible run-time overhead. TRACE consists of a family of new database damage tracking, quarantine, and cleansing techniques, and we built it into the kernel of PostgreSQL. Second, motivated by the limitations of the TRACE mechanism, we propose a novel proactive damage management approach termed database firewalling, which deals with transaction-level attacks. Pattern mining and Bayesian network techniques are adopted in the firewalling framework to mine frequent damage-spreading patterns and to predict data integrity when a certain type of attack occurs repeatedly. This pattern mining and Bayesian inference approach provides a probability-based strategy for estimating data integrity on the fly. With this probabilistic feature, the database firewalling approach can enforce a transaction-filtering policy that dynamically filters out potentially damage-spreading transactions.
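The probability-based filtering idea can be sketched as follows. The function name, threshold, and per-item integrity estimates here are assumptions for illustration only, not the dissertation's actual firewalling implementation: given an estimated probability that each data item is clean, the firewall admits a transaction only if its whole read set is likely clean.

```python
# Illustrative sketch only: a probability-based transaction filter.
# p_clean maps each data item to an estimated probability it is clean,
# e.g. as produced by Bayesian inference over mined damage patterns.

def should_admit(txn_reads, p_clean, threshold=0.9):
    """Admit a transaction only if the estimated probability that ALL
    items it reads are clean stays above the threshold (items are
    treated as independent for this sketch)."""
    p = 1.0
    for item in txn_reads:
        p *= p_clean.get(item, 1.0)  # items with no estimate assumed clean
    return p >= threshold

p_clean = {"x": 0.3, "y": 0.95, "z": 1.0}

print(should_admit(["y", "z"], p_clean))  # True: read set likely clean
print(should_admit(["x", "y"], p_clean))  # False: x is likely dirty
```

The design point is that filtering is proactive: instead of repairing damage after it spreads, transactions that would likely read dirty data are held back while the rest of the workload proceeds, preserving availability.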