Hadoop on Ceph: usability survey

As we saw in the last post on setting up Hadoop on Ceph, there were a lot of steps that hurt usability. In this post we’ll look at a variety of storage systems that can function as an alternative to HDFS in Hadoop environments, to see what other systems are doing to ease the pain.

Many of these alternative systems are listed on the Hadoop File System Compatibility wiki page: https://wiki.apache.org/hadoop/HCFS.

MapR, OrangeFS, Quantcast #

The Hadoop adapters for these systems are very similar to the Ceph adapter. Each contains a Java component and a native JNI component. The OrangeFS adapter is the most similar, containing a native JNI library, Java bindings to the OrangeFS client, and a Java shim layer that translates between the Hadoop file system interface and the OrangeFS interface. Setting up MapR is extremely easy, and a lot of the problems associated with deploying alternative file systems (e.g. linking against native libraries) are trivial because the MapR distribution ships with all of the paths configured.
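To make that layering concrete, here is a minimal sketch of what such an adapter looks like internally. The class, method, and library names are hypothetical (they are not the actual Ceph or OrangeFS bindings); the point is the split between a native JNI library, thin Java bindings over the client, and a shim that the Hadoop FileSystem implementation delegates to.

```java
// Hypothetical sketch of the Java/JNI layering shared by these adapters.
public class NativeFsClient {
    static {
        // The native half of the adapter; the library name is an assumption.
        System.loadLibrary("examplefs_jni");
    }

    // Thin Java bindings implemented inside the JNI library.
    private native long nativeOpen(String path, int flags);
    private native int nativeRead(long fd, byte[] buf, int len);
    private native void nativeClose(long fd);

    // The Hadoop shim layer calls methods like these and translates the
    // results into Hadoop types such as FSDataInputStream and FileStatus.
    public long open(String path, int flags) { return nativeOpen(path, flags); }
    public int read(long fd, byte[] buf, int len) { return nativeRead(fd, buf, len); }
    public void close(long fd) { nativeClose(fd); }
}
```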

A couple of tricks that I found:

  1. MapR adds a hadoop jnipath command that can be used to easily update the native library search paths for applications.

  2. OrangeFS looks for a special environment variable that can be used to update the native library search paths without messing around with the Hadoop or Java variables, which can differ from version to version (see the sketch after this list).
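Here is a rough sketch of the second trick. The variable and library names are purely illustrative placeholders (the actual OrangeFS names may differ): the loader prefers a dedicated environment variable and only falls back to the normal java.library.path search when it isn’t set.

```java
import java.io.File;

// Hypothetical sketch; JNI_LIBRARY_PATH and libexample_jni are placeholders.
public final class EnvVarLoader {
    public static void load() {
        String dir = System.getenv("JNI_LIBRARY_PATH");
        if (dir != null && !dir.isEmpty()) {
            // Load directly from the directory the user pointed us at.
            System.load(new File(dir, "libexample_jni.so").getAbsolutePath());
        } else {
            // Fall back to the usual -Djava.library.path / LD_LIBRARY_PATH search.
            System.loadLibrary("example_jni");
        }
    }
}
```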

Lustre, Google Cloud, Azure, Gluster #

The Hadoop adapters for these systems don’t rely on native libraries. For instance, Lustre and Gluster both provide access through the virtual file system (e.g. the native kernel client or FUSE), which makes deployment simpler because the adapter is 100% Java. The Google Cloud and Azure adapters use the existing Java SDKs for those platforms; the Google Cloud adapter, for instance, provides access over HTTP and is still a 100% Java solution.
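As a rough illustration of the pure-Java pattern (the class name and mount point below are assumptions, not the actual Lustre or Gluster adapter code), an adapter that sits on top of a kernel or FUSE mount can do ordinary file I/O against the mount point and leave the protocol work to the client outside the JVM.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch: plain java.nio calls against an already-mounted file system.
public final class MountedFsReader {
    private final Path mountPoint;

    public MountedFsReader(String mountPoint) {
        this.mountPoint = Paths.get(mountPoint); // e.g. /mnt/glusterfs (assumed)
    }

    public byte[] read(String relativePath) throws IOException {
        // The kernel or FUSE client does the heavy lifting; no JNI involved.
        return Files.readAllBytes(mountPoint.resolve(relativePath));
    }
}
```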

Takeaways #

For the Ceph bindings, as well as all of the solutions above, adding Java dependencies is required. This isn’t terribly complicated, and it is quite robust. The 100% Java solutions above have it relatively easy because they don’t face the challenges of integrating native JNI libraries.

One approach to easing the pain of handling the JNI bits is to embed the native library directly into the jar file that depends on it. This works quite well in general. One challenge with this approach is handling divergent versions of the libraries (e.g. libcephfs, libcephfs-jni, libcephfs-java, etc.), which are currently all kept consistent with the rest of the Ceph distribution and are available through dependency management tools like yum and apt-get.
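A minimal sketch of that approach, assuming the native library has been packed into the jar under a resource path of our choosing (the resource path and file names below are assumptions): at load time the library is copied out to a temporary file and loaded from there.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch: extract a native library bundled inside the jar and load it.
public final class BundledNativeLoader {
    public static void load() throws Exception {
        try (InputStream in = BundledNativeLoader.class
                .getResourceAsStream("/native/libcephfs_jni.so")) { // assumed resource path
            if (in == null) {
                throw new IllegalStateException("native library not bundled in this jar");
            }
            Path tmp = Files.createTempFile("libcephfs_jni", ".so");
            tmp.toFile().deleteOnExit();
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            // System.load takes an absolute path, unlike System.loadLibrary.
            System.load(tmp.toAbsolutePath().toString());
        }
    }
}
```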

Since we are dealing with a finite number of distributions, another approach is to encode several search paths that look for the JNI bits in common places such as /usr/lib. On most systems the setup work is then reduced to that of the 100% Java solutions above, which simply need to make sure the Java bits make their way into the Hadoop classpath. For developers, or those on unsupported platforms, the process becomes slightly more complicated.
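A sketch of what that probing could look like (the candidate paths are illustrative, not an exhaustive or authoritative list): try a handful of well-known locations first and only then fall back to the regular library search.

```java
import java.io.File;

// Hypothetical sketch: probe common distribution paths for the JNI library.
public final class NativePathProbe {
    private static final String[] CANDIDATES = {
        "/usr/lib/libcephfs_jni.so",       // Debian/Ubuntu style
        "/usr/lib64/libcephfs_jni.so",     // Fedora/RHEL style
        "/usr/local/lib/libcephfs_jni.so", // source installs
    };

    public static void load() {
        for (String candidate : CANDIDATES) {
            File f = new File(candidate);
            if (f.isFile()) {
                System.load(f.getAbsolutePath());
                return;
            }
        }
        // Nothing found in the common places; fall back to java.library.path.
        System.loadLibrary("cephfs_jni");
    }
}
```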

A note on dependencies #

In the previous post I renamed /usr/lib64/libcephfs_jni.so.1 to a file without the .1 extension. However, I have since discovered that the libcephfs_jni.so file is created by installing libcephfs_jni-devel; previously I had only installed the libcephfs_jni package.