A selective data retention approach in massive databases

Orly Kalfus, Boaz Ronen, Israel Spiegler*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Exponentially growing databases have been tackled on two basic fronts: technological and methodological. Technology has offered solutions in storage capacity, processing power, and access speed. Methodologies include indexing, views, data mining, and temporal databases; combinations of technology and methodology come in the form of data warehousing. All are designed to get the most out of, and best handle, mounting and complex databases. The basic premise that underlies those approaches is to store everything. We challenge that premise, suggesting a selective retention approach for operational data that curtails the size of databases and warehouses without losing content or information value. A model and methodology for selective data retention are introduced. The model, using cost/benefit analysis, allows assessing data elements currently stored in the database as well as providing a retention policy for current and prospective data. An example case study on commercial data illustrates the model and the concepts of the method.
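The cost/benefit idea in the abstract can be sketched as follows. The field names, benefit figures, and storage cost below are illustrative assumptions for a minimal sketch, not the paper's actual model parameters: each data element is retained only when its estimated benefit exceeds its storage cost.

```python
# Minimal sketch of a cost/benefit retention policy (assumed parameters,
# not the paper's model): retain an element only if benefit > storage cost.

def retention_decision(elements, storage_cost_per_mb):
    """Classify each data element as 'retain' or 'discard' by comparing
    its estimated benefit against its estimated storage cost."""
    policy = {}
    for name, (benefit, size_mb) in elements.items():
        cost = size_mb * storage_cost_per_mb
        policy[name] = "retain" if benefit > cost else "discard"
    return policy

elements = {
    # name: (estimated annual benefit in $, size in MB) -- hypothetical values
    "customer_orders": (500.0, 100.0),
    "click_stream_raw": (20.0, 5000.0),
}
print(retention_decision(elements, storage_cost_per_mb=0.05))
# → {'customer_orders': 'retain', 'click_stream_raw': 'discard'}
```

In this toy setting, raw click-stream data is discarded because its storage cost (5000 MB × $0.05) outweighs its estimated benefit, which is the kind of assessment the paper's selective retention policy formalizes.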

Original language: English
Pages (from-to): 87-95
Number of pages: 9
Journal: Omega
Volume: 32
Issue number: 2
DOIs
State: Published - Apr 2004

Keywords

  • Cost/benefit analysis
  • Data mining
  • Data warehousing
  • Databases
  • Information as inventory
