chenyao1 / ssm


This project was forked from littlezhou/ssm-1.


Smart Storage Management for Big Data

License: Apache License 2.0



HDFS Smart Storage Management

HDFS-SSM is the major portion of the overall Smart Data Management Initiative.

Big data has been putting increasing pressure on HDFS storage in recent years, with various workloads and demanding performance requirements. The latest storage devices (Optane Memory, Optane SSD, NVMe SSD, etc.) can be used to improve storage performance. Meanwhile, HDFS provides several useful mechanisms, such as HDFS Cache, Heterogeneous Storage Management (HSM), and Erasure Coding (EC), but it remains a big challenge for users to make full use of these high-performance storage devices and HDFS storage options in a dynamic environment.

To overcome this challenge, we introduce a comprehensive end-to-end solution, Smart Storage Management (SSM), for Apache Hadoop. SSM collects HDFS operation data and system state information; based on the collected metrics, it can automatically apply these mechanisms in sophisticated ways to optimize HDFS storage efficiency.

High Level Goals

1. Enhancement for HDFS-HSM and HDFS-Cache

Automatically and smartly adjusting storage policies and options according to data temperature. Note: this work is approaching completion.

2. Support block level erasure coding

Similar to the old HDFS-RAID, supporting not only Hadoop 3.x but also Hadoop 2.x. The design doc for this is coming soon.

3. Small files support and compaction

Optimizing the NameNode to support an even larger namespace by eliminating the inodes of small files from memory. Both write and read are supported. Ref. the HDFS small files support design.

4. Cluster Disaster Recovery

Supporting transparent fail-over for applications. Here is the HDFS disaster recovery design document.

High Level Considerations

  1. Support Hadoop 3.x and Hadoop 2.x;
  2. Build the whole framework on top of HDFS, avoiding modifications to HDFS when possible;
  3. Remain compatible with HDFS client APIs so that existing applications and computation frameworks keep working;
  4. Provide additional client APIs to allow new applications to benefit from SSM facilities;
  5. Support High Availability and reliability, reusing existing infrastructure in a deployment when possible;
  6. Ensure security, particularly when Hadoop security is enabled.

Architecture

The first figure below depicts SSM system behaviors; the second illustrates how SSM is positioned in the big data ecosystem. Ref. the SSM architecture document for details.

Development Phases

HDFS-SSM development is divided into three major phases. Currently the Phase 1 work is approaching completion.

Phase 1. Implement the SSM framework and the fundamental infrastructure:

  • Event and metrics collection from HDFS cluster;
  • Rule DSL to support high level customization and usage;
  • Rich smart actions to adjust storage policies and options, enhancing HDFS-HSM and HDFS-Cache;
  • Client API, Admin API following Hadoop RPC and REST channels;
  • Basic web UI support.
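To give a flavor of the rule DSL mentioned above, a rule ties a class of objects to conditions and an action. The sketch below is illustrative only: the `accessCount` function and `cache` action appear in SSM's use cases, but the exact grammar should be checked against the Admin Guide:

```
file : accessCount(10min) > 10 | cache
```

Read as: for any file whose access count over the last 10 minutes exceeds 10, apply the cache action.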

Phase 2. Refine SSM framework and support user solutions:

  • Small files support and compaction;
  • Cluster disaster recovery;
  • Support block level erasure coding;
  • Enhance the SSM framework and infrastructure to support the new desired actions.

Phase 3. Optimize further for computing frameworks and workloads benefiting from SSM offerings and facilities:

  • Hive on SSM;
  • HBase on SSM;
  • Spark on SSM;
  • Deep Learning on SSM.

Phase I -- Use Cases

1. Cache the hottest data

When files become very hot, they can be moved from fast storage into memory cache to achieve the best read performance. The following shows an example of moving data to the memory cache if the data has been read more than 3 times during the last 5 minutes.
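Expressed in the rule DSL, this caching policy might look like the sketch below. The path pattern `/hot/*` is a hypothetical placeholder, and the exact syntax of `accessCount` and the `cache` action should be verified against the Admin Guide:

```
file : path matches "/hot/*" and accessCount(5min) > 3 | cache
```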

2. Move hot data to fast storage

Without SSM, data may always be read from HDD. With SSM, optimizations can be made through rules. As shown in the figure above, data can be moved to faster SSD to achieve better performance.
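A corresponding rule could move frequently accessed files onto SSD. In this sketch, the `/warm/*` path pattern, the one-hour window, and the `allssd` action name are assumptions to be checked against the Admin Guide:

```
file : path matches "/warm/*" and accessCount(1hour) > 10 | allssd
```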

3. Archive cold data

Files are less likely to be read near the end of their lifecycle, so it is better to move such cold files to lower-performance storage to decrease the cost of data storage. The following shows an example of archiving data that has not been read more than once during the last 90 days.
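An archival rule along these lines might be written as follows. The 90-day window matches the text above, while the path pattern and the exact `accessCount`/`archive` syntax are assumptions; see the Admin Guide for the authoritative grammar:

```
file : path matches "/data/*" and accessCount(90day) < 1 | archive
```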

Admin Doc

The cluster administrator takes the role of SSM rule management. A set of APIs is exposed to help administrators manage rules, including creating, deleting, listing, enabling, and disabling SSM rules. Hadoop admin privilege is required to access these APIs. For detailed information, please refer to the Admin Guide.
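As an illustration only, the rule-management lifecycle could be exercised over a REST channel roughly as sketched below. The endpoint paths here are hypothetical, not the documented API; consult the Admin Guide for the real interface:

```
# Hypothetical REST sketch of the rule lifecycle (not the documented API)
POST   /api/v1/rules           # create a rule; body carries the rule text
GET    /api/v1/rules           # list all rules and their states
POST   /api/v1/rules/1/start   # enable rule 1
POST   /api/v1/rules/1/stop    # disable rule 1
DELETE /api/v1/rules/1         # delete rule 1
```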

User Doc

SSM will provide a SmartDFSClient that includes both the original HDFS DFSClient APIs and new SSM APIs. Applications can use this SmartDFSClient to benefit from the provided SSM facilities. The new SSM APIs include caching a file, archiving a file, etc. More APIs will be added later. For detailed information, please refer to the User Guide.

How to Contribute

We welcome your feedback and contributions. Please feel free to file issues or open PRs; we'll respond soon. Note that the project is evolving very fast.

Acknowledgement

This project originates from and is based on the discussions that occurred in Apache Hadoop JIRA HDFS-7343. Thanks go to all the team members of this project, and also to everyone who contributed ideas and feedback.

