Hippo: A Fast, yet Scalable, Database Indexing Approach

Bibliographic Details
Main Authors: Yu, Jia, Sarwat, Mohamed
Format: Report
Language: unknown
Published: arXiv 2016
Subjects:
Online Access:https://dx.doi.org/10.48550/arxiv.1604.03234
https://arxiv.org/abs/1604.03234
Description
Summary: Even though existing database indexes (e.g., B+-Tree) speed up query execution, they suffer from two main drawbacks: (1) A database index usually yields 5% to 15% additional storage overhead, which results in a non-negligible dollar cost in big data scenarios, especially when deployed on modern storage devices such as Solid State Disk (SSD) or Non-Volatile Memory (NVM). (2) Maintaining a database index incurs high latency because the DBMS has to find and update the index pages affected by the underlying table changes. This paper proposes Hippo, a fast, yet scalable, database indexing approach. Hippo stores only pointers to disk pages along with lightweight histogram-based summaries. The proposed structure significantly shrinks index storage and maintenance overhead without compromising much on query execution performance. Experiments based on a real Hippo implementation inside PostgreSQL 9.5, using the TPC-H benchmark, show that Hippo achieves up to two orders of magnitude less storage space and up to three orders of magnitude less maintenance overhead than traditional database indexes, i.e., the B+-Tree. Furthermore, the experiments also show that Hippo achieves query execution performance comparable to that of the B+-Tree for various selectivity factors.
Comments: 12 pages, 10 figures, conference
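
To make the summarized idea concrete, the sketch below illustrates in Python the kind of structure the abstract describes: one summary entry per group of heap pages, holding only the page range and the set of value-histogram buckets that occur in that range, so a range query can skip page groups whose summaries do not overlap the predicate. This is an illustrative approximation under stated assumptions, not the authors' actual PostgreSQL 9.5 implementation; the names (HippoSketch, bucket_bounds, pages_per_group) are hypothetical.

# Minimal sketch of a Hippo-style index: instead of one entry per tuple,
# store, for each contiguous group of heap pages, only the page range and
# a small set recording which histogram buckets of the indexed column
# appear in that range. All names here are illustrative.

import bisect

class HippoSketch:
    def __init__(self, bucket_bounds, pages_per_group=16):
        # bucket_bounds: sorted upper bounds of the value-histogram buckets
        self.bounds = bucket_bounds
        self.pages_per_group = pages_per_group
        self.entries = []  # (first_page, last_page, set_of_bucket_ids)

    def _bucket(self, value):
        # Map a column value to its histogram bucket index.
        return bisect.bisect_left(self.bounds, value)

    def build(self, pages):
        # pages: list of lists of indexed-column values, one list per heap page.
        for start in range(0, len(pages), self.pages_per_group):
            group = pages[start:start + self.pages_per_group]
            buckets = {self._bucket(v) for page in group for v in page}
            self.entries.append((start, start + len(group) - 1, buckets))

    def candidate_pages(self, lo, hi):
        # Yield pages whose group summary overlaps the predicate [lo, hi];
        # these candidates still need to be scanned to filter false positives.
        wanted = set(range(self._bucket(lo), self._bucket(hi) + 1))
        for first, last, buckets in self.entries:
            if buckets & wanted:
                yield from range(first, last + 1)

# Example: four heap pages, histogram bucket upper bounds at 10 / 20 / 30.
idx = HippoSketch(bucket_bounds=[10, 20, 30], pages_per_group=2)
idx.build([[1, 5], [12, 18], [22, 29], [31, 40]])
print(sorted(idx.candidate_pages(2, 8)))  # -> [0, 1]: only the first page group can hold values in [2, 8]

Because each entry summarizes an entire group of pages, the index size grows with the number of page groups and the histogram resolution rather than with the number of tuples, and an insert only needs to refresh the summary of the group containing the modified page, which is consistent with the storage and maintenance savings reported in the abstract.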