
hdfs-native's Introduction

Native Rust HDFS client

This is a proof-of-concept HDFS client written natively in Rust. All other clients I have found in any other language are simply wrappers around libhdfs and require all the same Java dependencies, so I wanted to see if I could write one from scratch, given that HDFS isn't really changing very often anymore. Several basic features are working; however, it is not nearly as robust as the real HDFS client.

This is not trying to implement every HDFS client/FileSystem interface, just the things involving reading and writing data.

Supported HDFS features

Here is a list of currently supported features, along with unsupported features that may be added in the future.

HDFS Operations

  • Listing
  • Reading
  • Writing
  • Rename
  • Delete

HDFS Features

  • Name Services
  • Observer reads (state ID tracking is supported, but needs improvements on tracking Observer/Active NameNode)
  • ViewFS
  • Router based federation
  • Erasure coded reads and writes
    • RS schema only, no support for RS-Legacy or XOR

Security Features

  • Kerberos authentication (GSSAPI SASL support) (requires libgssapi_krb5, see below)
  • Token authentication (DIGEST-MD5 SASL support)
  • NameNode SASL connection
  • DataNode SASL connection
  • DataNode data transfer encryption
  • Encryption at rest (KMS support)

Kerberos Support

The Kerberos (SASL GSSAPI) mechanism is supported through a runtime dynamic link to libgssapi_krb5. This must be installed separately, but it is likely already installed on your system. If not, you can install it with:

Debian-based systems

apt-get install libgssapi-krb5-2

RHEL-based systems

yum install krb5-libs

MacOS

brew install krb5

Supported HDFS Settings

The client will attempt to read the Hadoop configs core-site.xml and hdfs-site.xml in the directory $HADOOP_CONF_DIR, or $HADOOP_HOME/etc/hadoop if that isn't set. Currently, the supported configs are:

  • fs.defaultFS - Client::default() support
  • dfs.ha.namenodes - name service support
  • dfs.namenode.rpc-address.* - name service support
  • fs.viewfs.mounttable.*.link.* - ViewFS links
  • fs.viewfs.mounttable.*.linkFallback - ViewFS link fallback

All other settings are generally assumed to be the defaults currently. For instance, security is assumed to be enabled and SASL negotiation is always done, but on insecure clusters this will just do SIMPLE authentication. Any setups that require other customized Hadoop client configs may not work correctly.
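
For example, with $HADOOP_CONF_DIR pointing at a directory whose core-site.xml sets fs.defaultFS, a client can be created from the loaded config, or a cluster URL can be passed explicitly. A minimal sketch based on the calls shown elsewhere on this page; the cluster name and path are placeholders:

use hdfs_native::{Client, HdfsError};

#[tokio::main]
async fn main() -> Result<(), HdfsError> {
    // Reads fs.defaultFS from core-site.xml found via HADOOP_CONF_DIR
    // (or HADOOP_HOME/etc/hadoop).
    let client = Client::default();
    // Or point at a cluster explicitly:
    // let client = Client::new("hdfs://mycluster")?;

    // Recursively list the root directory.
    for status in client.list_status("/", true).await? {
        println!("{:?}", status);
    }
    Ok(())
}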

Building

cargo build

Object store implementation

An object_store implementation for HDFS is provided in the hdfs-native-object-store crate.
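
A minimal sketch of using it through the generic object_store API; the wrapper type and constructor name below are assumptions, so check the hdfs-native-object-store docs for the exact API:

use hdfs_native::Client;
use hdfs_native_object_store::HdfsObjectStore; // assumed export name
use object_store::{path::Path, ObjectStore};

#[tokio::main]
async fn main() -> object_store::Result<()> {
    // Assumption: the wrapper is built from an hdfs_native::Client.
    let client = Client::new("hdfs://mycluster").expect("failed to create HDFS client");
    let store = HdfsObjectStore::new(client);

    // Read a file through the ObjectStore trait.
    let bytes = store.get(&Path::from("/data/example.txt")).await?.bytes().await?;
    println!("read {} bytes", bytes.len());
    Ok(())
}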

Running tests

The tests are mostly integration tests that utilize a small Java application in rust/minidfs/ that runs a custom MiniDFSCluster. To run the tests, you need to have Java, Maven, Hadoop binaries, and Kerberos tools available and on your path. Any Java version between 8 and 17 should work.

cargo test -p hdfs-native --features integration-test

Python tests

See the Python README

Running benchmarks

Some of the benchmarks compare performance to the JVM based client through libhdfs via the fs-hdfs3 crate. Because of that, some extra setup is required to run the benchmarks:

export HADOOP_CONF_DIR=$(pwd)/rust/target/test
export CLASSPATH=$(hadoop classpath)

then you can run the benchmarks with

cargo bench -p hdfs-native --features benchmark

The benchmark feature is required to expose minidfs and the internal erasure coding functions to benchmark.

Running examples

The examples make use of the minidfs module to create a simple HDFS cluster to run the example. This requires including the integration-test feature to enable the minidfs module. Alternatively, if you want to run the example against an existing HDFS cluster you can exclude the integration-test feature and make sure your HADOOP_CONF_DIR points to a directory with HDFS configs for talking to your cluster.

cargo run --example simple --features integration-test

hdfs-native's People

Contributors

kimahriman, shbhmrzd, xuanwo, yjshen, zuston


hdfs-native's Issues

Implement custom digest-md5 algorithm

We currently use gsasl for digest-md5, which doesn't support integrity or confidentiality modes for DIGEST-MD5 SASL. I haven't found another library that does, so we should just implement it directly so we can fully support all security features. This would be required if we ever wanted to support data transfer encryption.

Speed up build process

There are a few things we can do to speed up the build process and remove the native dependencies required for a build:

  • Pre-generate the protobuf definitions. These will likely never change, so there's no reason to generate them on every build (see the sketch after this list)
  • gsasl-sys supports using pre-generated bindings, so we should just use those instead of generating them every time
  • libgssapi doesn't support this right now, so we still need all the clang support for bindgen for that
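
A minimal sketch of the first bullet, assuming a prost-based build script (which may not match this repo's actual codegen setup); the file paths are placeholders:

// build.rs (sketch)
use std::path::Path;

fn main() -> std::io::Result<()> {
    // If pre-generated protobuf sources are checked in, skip codegen entirely
    // so a normal build doesn't need protoc or any other native toolchain.
    if Path::new("src/proto/generated.rs").exists() {
        return Ok(());
    }
    prost_build::compile_protos(&["proto/hdfs.proto"], &["proto/"])
}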

Add lease renewal

HDFS has a soft lease limit of 60 seconds, which I believe means any file being written to for longer than 60 seconds could be "taken" by another writer if the lease hasn't been renewed. We should add a lease renewal process like the Java client's to make sure any files actively being written to have their leases renewed.
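
A rough sketch of what the renewal loop could look like; the renew callback is hypothetical and stands in for whatever issues the actual renewLease RPC, which is not part of the current public API:

use std::future::Future;
use std::time::Duration;
use tokio::time;

// Periodically invoke a renewal callback well inside the 60 second soft lease
// limit while any files are open for writing. The callback is hypothetical.
async fn lease_renewal_loop<F, Fut>(mut renew: F)
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<(), String>>,
{
    let mut interval = time::interval(Duration::from_secs(30));
    loop {
        interval.tick().await;
        if let Err(e) = renew().await {
            eprintln!("lease renewal failed: {e}");
        }
    }
}

Something like this would be spawned as a background task (e.g. with tokio::spawn) while files are open for writing and stopped once they are closed.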

Panics reading erasure coded files

https://github.com/Kimahriman/hdfs-native/blob/master/rust/src/hdfs/block_reader.rs#L231 can panic sometimes while reading erasure coded files. I think this is because we aren't always finishing the read from an individual block, and the receiver just gets dropped early before all messages have been consumed. This also means that the datanode connection does not get released back into the cache. Need to make sure the ReplicatedBlockStream gets fully consumed for all reads

Facing `org.apache.hadoop.ipc.RpcNoSuchMethodException` when connecting to hdfs using nameservice

Hi, I am using hdfs-native for a POC, tried connecting to HDFS version 2.6, and am facing the following error:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: RPCError("org.apache.hadoop.ipc.RpcNoSuchMethodException", "Unknown method msync called on org.apache.hadoop.hdfs.protocol.ClientProtocol protocol.\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:422)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)\n")', src/main.rs:22:27
stack backtrace:
   0: rust_begin_unwind

My code looks something like this

use std::env;

use hdfs_native::{Client, HdfsError};

#[tokio::main]
async fn main() -> Result<(), HdfsError> {
    env::set_var("HADOOP_CONF_DIR", "/Users/sraizada/Downloads/hadoop-conf");
    let client = Client::new("hdfs://mycluster")?;

    let files = client.list_status("/", true);

    files.await.unwrap().iter().for_each(|f| println!("file - {:?}", f));
    Ok(())
}

Could you please help?
Thank you!

Compiling on windows fails

The users crate seems to only work on Unix. Now that libgssapi isn't used, we might as well make Windows compilation work.

Verify data checksums on read

We currently are not checking data checksums on read (in fact we don't have the datanode send checksums). We should enable this to ensure we are not returning corrupt data

Better support observer NameNodes

Observer reads are technically supported because we track and use the state ID when it's provided, but this could be improved by keeping track of which NameNodes are observers, and by knowing which RPC calls are reads that can be sent to observers and which are writes that must go to the Active NameNode.

Separate object store tests

Currently we run all the object store tests for every combination of HDFS features. We probably don't need to do this: the regular tests can cover all the functionality for each combination of HDFS features, and then we can run just a single pass of the object store tests with one set of HDFS features. This should speed things up a little bit.

Support vectorized reading

Vectorized IO is a very common thing to optionally support in various IO utilities. We could greatly benefit from vectorized IO on the reading side by doing things like the following (a sketch of range coalescing follows the list):

  • Combining multiple reads from the same block in a single connection to a datanode
  • Coalescing reads that are close to each other
  • Loading ranges from multiple different blocks in the same file in parallel
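
A self-contained sketch of the coalescing idea from the list above; the gap threshold is an arbitrary placeholder:

use std::ops::Range;

// Merge byte ranges that overlap or sit within `max_gap` bytes of each other,
// so nearby reads can be served by a single datanode request.
fn coalesce_ranges(mut ranges: Vec<Range<u64>>, max_gap: u64) -> Vec<Range<u64>> {
    ranges.sort_by_key(|r| r.start);
    let mut merged: Vec<Range<u64>> = Vec::new();
    for range in ranges {
        match merged.last_mut() {
            Some(last) if range.start <= last.end + max_gap => {
                last.end = last.end.max(range.end);
            }
            _ => merged.push(range),
        }
    }
    merged
}

fn main() {
    // 0..10 and 12..20 are within the gap and merge; 100..110 stays separate.
    let merged = coalesce_ranges(vec![0..10, 12..20, 100..110], 4);
    assert_eq!(merged, vec![0..20, 100..110]);
}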

Support DataNode connection caching

As a performance improvement, we should support caching and reusing datanode connections like the Java library does. This reduces the overhead of creating a new TCP connection for every individual block read to the same datanode
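
A minimal sketch of the idea, with a generic connection type standing in for the real datanode connection:

use std::collections::HashMap;
use std::sync::Mutex;

// Hand back an idle connection for a datanode address if one exists; otherwise
// the caller dials a new one and releases it here when finished.
struct ConnectionCache<C> {
    idle: Mutex<HashMap<String, Vec<C>>>,
}

impl<C> ConnectionCache<C> {
    fn new() -> Self {
        Self { idle: Mutex::new(HashMap::new()) }
    }

    fn checkout(&self, addr: &str) -> Option<C> {
        self.idle.lock().unwrap().get_mut(addr)?.pop()
    }

    fn release(&self, addr: String, conn: C) {
        self.idle.lock().unwrap().entry(addr).or_default().push(conn);
    }
}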

File permission different between `list_status` and `hadoop fs -ls`

Hi,
I have created a dir and listed its permissions as below:

    let dir_path = "/hdfs-native-test/";
    client.mkdirs(dir_path,  777, true).await.unwrap();

    let file_info = client.get_file_info(dir_path).await.unwrap();
    println!("file status : {:?}", file_info);

Output as file status : FileStatus { path: "/hdfs-native-test/", length: 0, isdir: true, permission: 777, owner: "sraizada", group: "supergroup", modification_time: 1704538751868, access_time: 0 }

But hadoop fs -ls / shows

$ hadoop fs -ls /
Found 6 items

dr----x--t   - sraizada supergroup          0 2024-01-06 16:29 /hdfs-native-test

I am working on Hadoop 3.2.4

Can you please help with this?
Thank you!

Split long reading and writing tests into dedicated test

We don't really need to run all the reading and writing edge cases for each combination of HDFS features. We only need to test the very basics with all the various features to make sure they work in different security scenarios. The full set of reading and writing edge cases can be tested just once.

Dynamically load libgssapi_krb5 at runtime

To help with portability, we should dynamically load libgssapi_krb5 at runtime using libloading instead of linking to it at compile time. This will make it simple to cross-compile for various targets, as well as to include the library in downstream wheels without needing to build via QEMU to get the right architecture for packaging the dynamic libraries.

Create fsspec implementation

If we create an fsspec implementation in Python, it will make the client usable in a lot of other Python ecosystems, such as pyarrow.

Move objectstore implementation to its own crate

Currently the objectstore implementation is behind a feature flag in the main crate. It would probably be cleaner to have a separate crate for that, since it should just rely on the public API of the library. It will also make it easier to discover the implementation, and make it easier to support various feature flags related to just the objectstore, such as potentially allowing different versions of the upstream objectstore crate.

Add datanode heartbeating

Similar to #62, if a write pauses for more than 60 seconds, the datanode connection could time out. We need to add datanode heartbeating like DFSOutputStream does.

Support router based federation

We already have the structure for the federated router state (just a Vec<u8>). Just need to implement the state merge function.

Clarify the license

Hello, thank you so much for this fantastic project! I'm a member of the OpenDAL community, and we've been closely following your project for quite some time: apache/opendal#3144.

The only barrier to integrating your project is the licensing issue. Which license does this project use? Please clarify by including a LICENSE file in the repository. I'm happy to create a PR for this if you'd like.

I noticed you've set the Apache 2.0 license for the Rust crate; does this cover the entire repository?

license = "Apache-2.0"

Stream erasure coded reads properly

Currently erasure coded streams return entire blocks at once. We should update this to return individual cells at a time instead to better support incremental loading/processing

Update new_with_config to also load config files

Currently new_with_config uses the passed config as the only config. Instead we should first load any config files based on environment variables, and then override the loaded config with any passed in values
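
The intended precedence could look roughly like this, with a plain HashMap standing in for the real config type:

use std::collections::HashMap;

// Sketch of the desired behavior: start from whatever core-site.xml and
// hdfs-site.xml provide (already parsed into a map here), then let explicitly
// passed values win.
fn merge_config(
    loaded_from_files: HashMap<String, String>,
    passed_in: HashMap<String, String>,
) -> HashMap<String, String> {
    let mut merged = loaded_from_files;
    // Explicit values override anything read from the config files.
    merged.extend(passed_in);
    merged
}

fn main() {
    let mut from_files = HashMap::new();
    from_files.insert("fs.defaultFS".to_string(), "hdfs://from-files".to_string());

    let mut passed = HashMap::new();
    passed.insert("fs.defaultFS".to_string(), "hdfs://explicit".to_string());

    let merged = merge_config(from_files, passed);
    assert_eq!(merged["fs.defaultFS"], "hdfs://explicit");
}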

Create streaming method for file reading

Currently we are limited to individual read calls returning a whole buffer of data, and chunking up reading requires creating new datanode TCP connections for each chunk. We are already getting the data in batches (at the packet level). We should simply stream these back up to the FileReader so it can decide whether to combine them into a single buffer or return a stream object directly
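
A minimal sketch of the idea, assuming packets arrive as an async stream of Bytes chunks (the real types would come from the block reader):

use bytes::{Bytes, BytesMut};
use futures::stream::{Stream, StreamExt};

// Given packet-sized chunks streamed up from a datanode connection, a caller
// can either consume them incrementally or collapse them into a single buffer,
// as a plain read call would.
async fn collect_into_buffer(mut packets: impl Stream<Item = Bytes> + Unpin) -> Bytes {
    let mut buf = BytesMut::new();
    while let Some(chunk) = packets.next().await {
        buf.extend_from_slice(&chunk);
    }
    buf.freeze()
}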

Remove need for Hadoop binary for running tests

We use the Hadoop binary to upload a testfile to verify reads, but we can just create this testfile in the minidfs Java program instead, and remove the need for having a Hadoop binary locally
