Design and implementation of file deduplication framework on HDFS

Ruey Kai Sheu, Shyan-Ming Yuan*, Win Tsung Lo, Chan I. Ku

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



File systems control how files are stored and retrieved. Without knowing the context and semantics of file contents, file systems often contain duplicate copies, resulting in redundant consumption of storage space and network bandwidth. It has been a complex and challenging issue for enterprises to adopt deduplication technologies to reduce cost and increase storage efficiency. To solve this problem, researchers have proposed in-line and offline solutions for primary storage or backup systems, operating at the subfile or whole-file level. Some of these technologies are used for file servers and database systems. Few studies, however, focus on deduplication for cloud file systems at the application level, especially for the Hadoop distributed file system. The goal of this paper is to design a file deduplication framework on the Hadoop distributed file system for cloud application developers. The architecture, interfaces, and implementation experiences are also shared in this paper.
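The whole-file, application-level deduplication described in the abstract can be illustrated with a minimal sketch: hash each file's contents and keep only one physical copy per distinct digest, mapping every logical path to that stored copy. This is a hypothetical illustration of the general technique, not the paper's actual framework or API; the class and method names below are invented for the example.

```python
import hashlib

class DedupIndex:
    """Toy whole-file deduplication index (hypothetical, not the paper's API).

    Maps a SHA-256 content digest to the path of the single physical copy,
    and each logical path to the physical copy that backs it.
    """

    def __init__(self):
        self._by_digest = {}  # content digest -> path of the stored copy
        self._links = {}      # logical path -> path of the stored copy

    def put(self, logical_path, data):
        # Hash the file contents; identical bytes yield identical digests.
        digest = hashlib.sha256(data).hexdigest()
        # First writer of this digest becomes the physical copy;
        # later duplicates are recorded as references only.
        stored = self._by_digest.setdefault(digest, logical_path)
        self._links[logical_path] = stored
        return stored  # path of the physical copy actually kept

    def resolve(self, logical_path):
        # Look up which physical copy backs a logical path.
        return self._links[logical_path]

idx = DedupIndex()
idx.put("/user/a/report.doc", b"same bytes")
# Duplicate content: no second physical copy is recorded.
idx.put("/user/b/copy.doc", b"same bytes")
assert idx.resolve("/user/b/copy.doc") == "/user/a/report.doc"
```

In a real HDFS setting, the digest-to-path index would live in a shared store and the framework would intercept writes before handing data to the NameNode; the sketch above only captures the core bookkeeping.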

Original language: English
Article number: 561340
Number of pages: 11
Journal: International Journal of Distributed Sensor Networks
State: Published - 1 Jan 2014
