
Hadoop Distributed File System (HDFS™) support.

A distributed file system that provides high-throughput access to application data.


HDFS support requires the services-hdfs feature to be enabled.
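For example, the feature can be enabled in Cargo.toml like this (the version is a placeholder; pin the opendal version you actually use):

[dependencies]
opendal = { version = "*", features = ["services-hdfs"] }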


HDFS also needs a few environment variables set correctly; a minimal sketch follows the list below.

  • JAVA_HOME: the path to the Java home directory; it can be found via java -XshowSettings:properties -version.
  • HADOOP_HOME: the path to the Hadoop home directory; opendal relies on this variable to discover the Hadoop jars and set CLASSPATH automatically.
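A minimal sketch of exporting these variables (both paths are illustrative and depend on your installation):

# Adjust both paths to match your local JDK and Hadoop installation.
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
export HADOOP_HOME=/opt/hadoop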

Most of the time, setting JAVA_HOME and HADOOP_HOME is enough. But there are some edge cases:

  • If you encounter an error like the following:
error while loading shared libraries: libjvm.so: cannot open shared object file: No such file or directory

Java’s libraries are not in the dynamic loader’s search path; set LD_LIBRARY_PATH so that libjvm.so can be found:
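A typical setting looks like this (assuming the JDK places libjvm.so under lib/server, which is common for recent OpenJDK builds):

export LD_LIBRARY_PATH=${JAVA_HOME}/lib/server:${LD_LIBRARY_PATH}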


The path to libjvm.so can differ between JDK versions and distributions, so double-check it.

  • If you encounter an error like the following:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)

This means CLASSPATH is not set correctly or your Hadoop installation is broken.
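One way to verify and fix this is to rebuild CLASSPATH from the Hadoop installation itself (this assumes HADOOP_HOME points at a working installation; hadoop classpath --glob prints the fully expanded jar list):

export CLASSPATH=$(${HADOOP_HOME}/bin/hadoop classpath --glob)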


use std::sync::Arc;

use anyhow::Result;
use opendal::services::hdfs;
use opendal::services::hdfs::Builder;
use opendal::Accessor;
use opendal::Object;
use opendal::Operator;

#[tokio::main]
async fn main() -> Result<()> {
    // Create fs backend builder.
    let mut builder: Builder = hdfs::Backend::build();
    // Set the name node for hdfs.
    builder.name_node("hdfs://127.0.0.1:9000");
    // Set the root for hdfs; all operations will happen under this root.
    // NOTE: the root must be an absolute path.
    builder.root("/tmp");
    // Build the `Accessor`.
    let accessor: Arc<dyn Accessor> = builder.finish().await?;

    // `Accessor` provides the low level APIs, we will use `Operator` normally.
    let op: Operator = Operator::new(accessor);

    // Create an object handle to start operation on object.
    let _: Object = op.object("test_file");

    Ok(())
}



Structs

  • Backend: Backend for hdfs services.
  • Builder: Builder for hdfs services.