Cloud Foundry Custom Buildpacks

Cloud Foundry Buildpacks provide runtime and framework support for applications. Users can rely on the built-in selection for Java, NodeJS, Python and other runtimes, or use additional community buildpacks from GitHub.

Buildpacks are open-source, making them simple to customise so they include the libraries your application needs.

Doctor Watson uses an NPM module that relies on a command-line application, SOX, being installed in the runtime environment.

Making this command-line application available on the platform required the project to create a custom NodeJS buildpack.

This was the first time I’ve needed to create a custom buildpack. Documenting the steps below will hopefully provide a guide for other people wanting to do the same.

Overall, the process was straightforward and left me with a greater understanding of how buildpacks work.

SOX Audio Processing Library

We’re using the SOX NPM module within Doctor Watson to up-sample an audio file. The module depends on the SOX audio processing utility being installed and available on the command line. SOX itself is an open-source C application.
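The NPM module doesn’t bundle this utility; it expects a sox binary to already be on the PATH of the running application. As a rough illustration of the kind of command-line invocation involved (the arguments and sample rate are purely illustrative, not the module’s actual call):

# check the utility is available on the PATH
which sox

# up-sample an audio file with the rate effect (16 kHz chosen for illustration)
sox input.wav output.wav rate 16000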

Buildpack Internals

Cloud Foundry Buildpacks are Git repositories which must contain three shell scripts under the “bin” directory.

  • detect - Does this buildpack apply to this application?
  • compile - Build the runtime used to execute the application
  • release - Controls how the application should be executed

These shell scripts can be modified to perform any task necessary for an application runtime.

We’re starting with the default NodeJS buildpack.
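The detect and release scripts in this buildpack are short. A minimal sketch of the contract they follow (simplified from the real scripts):

#!/usr/bin/env bash
# bin/detect <build-dir>
# succeed (and print a framework name) if the app looks like a NodeJS app
if [ -f "$1/package.json" ]; then
  echo "Node.js"
  exit 0
fi
exit 1

#!/usr/bin/env bash
# bin/release <build-dir>
# print YAML telling the platform how to start the application
cat <<EOF
---
default_process_types:
  web: npm start
EOF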

The “bin/compile” script installs the correct NodeJS version and NPM modules, and sets up the runtime environment to start the application. When the script is run, a command-line argument provides the directory path under which any files needed at runtime must be placed.
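The top of the compile script captures these arguments; a simplified sketch (variable names match the excerpt further down):

#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir>
build_dir=$1    # everything placed here ends up in the application droplet
cache_dir=$2    # persisted between deployments for caching downloads
bp_dir=$(cd $(dirname $0); cd ..; pwd)    # the buildpack's own checkout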

We will need to install the SOX binary and dependent libraries under this directory path.

One method for doing this would be to download the SOX source code during deployment, compile it and install the resulting binaries into the correct location.

Unfortunately, compiling from source during each deployment would add an unacceptable delay.

Therefore, most buildpacks use pre-built binaries, which are downloaded and moved to the build directory during deployment, saving a huge amount of time.
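The stock compile script takes this approach for the NodeJS binaries themselves; the pattern is simply download-and-extract into the build directory (the URL here is a placeholder):

# download a pre-built archive and unpack it straight into the droplet
curl -s "$PREBUILT_ARCHIVE_URL" | tar xz -C $build_dir/vendor/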

Creating the pre-built binary archive

Rather than manually creating our binaries from source, we can pull them from the Ubuntu package repositories, which already provide a pre-built set of binaries for the SOX package.

By packaging the binary and its dynamic library dependencies into an archive file, we can store it in the buildpack repository and extract it during deployment.

We need to ensure the pre-built binaries were compiled for the same host environment that Cloud Foundry will use to run our application.

Using the cf stacks command, we can see the platform’s details.

[13:51:45 ~]$ cf stacks
Getting stacks in org james.thomas@uk.ibm.com / space dev as james.thomas@uk.ibm.com...
OK

name      description
lucid64   Ubuntu 10.04
seDEA     private
[13:53:10 ~]$

Now we just need access to the same platform to run the package manager on…

Docker to the rescue!

Using Docker

We’re going to use Docker to run a new container with the same operating system as the Cloud Foundry environment. Using this, we can install the SOX package with apt-get and extract all of the installed files.

[13:56:46 ~]$ docker run -t -i  ubuntu:10.04 /bin/bash
root@7fdb1e9047e1:/#
root@7fdb1e9047e1:/# apt-get install sox
root@7fdb1e9047e1:/# which sox
/usr/bin/sox
root@7fdb1e9047e1:/# ldd /usr/bin/sox
    linux-vdso.so.1 =>  (0x00007fff2819f000)
    libsox.so.1 => /usr/lib/libsox.so.1 (0x00007f0f32a94000)
    libltdl.so.7 => /usr/lib/libltdl.so.7 (0x00007f0f3288a000)
    libdl.so.2 => /lib/libdl.so.2 (0x00007f0f32685000)
    libpng12.so.0 => /lib/libpng12.so.0 (0x00007f0f3245e000)
    libmagic.so.1 => /usr/lib/libmagic.so.1 (0x00007f0f32242000)
    libz.so.1 => /lib/libz.so.1 (0x00007f0f3202a000)
    libgomp.so.1 => /usr/lib/libgomp.so.1 (0x00007f0f31e1c000)
    libgsm.so.1 => /usr/lib/libgsm.so.1 (0x00007f0f31c0e000)
    libm.so.6 => /lib/libm.so.6 (0x00007f0f3198a000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x00007f0f3176d000)
    libc.so.6 => /lib/libc.so.6 (0x00007f0f313eb000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f0f32d28000)
    librt.so.1 => /lib/librt.so.1 (0x00007f0f311e2000)
root@7fdb1e9047e1:/#

Now we have the location of the SOX binary along with a list of the dynamic libraries it depends on.

How do we know which of those libraries were already available in the operating system and which were installed by the package manager?

Using Docker diff, we can compare the container to the base image.

[14:02:43 ~]$ docker diff 7fdb1e9047e1 | grep '\.so\.'
C /etc/ld.so.cache
C /etc/ld.so.conf.d
A /etc/ld.so.conf.d/libasound2.conf
C /lib/libgcc_s.so.1
A /usr/lib/libFLAC.so.8
A /usr/lib/libFLAC.so.8.2.0
A /usr/lib/libasound.so.2
A /usr/lib/libasound.so.2.0.0
A /usr/lib/libgomp.so.1
A /usr/lib/libgomp.so.1.0.0
....

This command outputs a list of files that have been added or modified in the container. Grepping it for the dependencies reported by ldd makes it easy to extract those which are new.
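A rough way to do that grep, using the container ID and the library names from the ldd output above:

# keep only files added by the package install ("A" lines), then pick out
# the libraries that sox links against
docker diff 7fdb1e9047e1 | grep '^A' \
  | grep -E 'libsox|libltdl|libpng12|libmagic|libgomp|libgsm'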

We can now copy the files needed from the container filesystem to our local host and bundle them into an archive in the “vendor” directory.

[14:02:43 ~]$ docker cp 7fdb1e9047e1:/usr/bin/sox .
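The same docker cp approach works for each of the newly installed libraries. These can then be bundled into an archive whose layout matches what the modified compile script below expects: the sox binary at the top level and its shared libraries under libs/. A sketch, with only two of the libraries shown:

# lay out the archive contents: sox at the root, shared libraries under libs/
mkdir -p sox-vendor/libs
mv sox sox-vendor/
docker cp 7fdb1e9047e1:/usr/lib/libsox.so.1 sox-vendor/libs/
docker cp 7fdb1e9047e1:/usr/lib/libgomp.so.1 sox-vendor/libs/
# ...repeat for the remaining libraries identified above

# create the archive and store it in the buildpack repository's vendor directory
tar czf sox.tar.gz -C sox-vendor .
mv sox.tar.gz path/to/buildpack/vendor/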

Modifying the “bin/compile” script

With the pre-built binary package available in the buildpack repository, we just need to extract this during deployment from the vendor directory into the build directory.

Modifying the PATH and LD_LIBRARY_PATH variables exposes the binary at runtime and ensures its dynamic libraries can be found.

# Add SOX binary and libraries to path
status "Adding SOX library support"
tar xzf $bp_dir/vendor/sox.tar.gz -C $build_dir/vendor/

# Update the PATH
status "Building runtime environment"
mkdir -p $build_dir/.profile.d
echo "export PATH=\"\$HOME/vendor/node/bin:\$HOME/bin:\$HOME/node_modules/.bin:\$HOME/vendor/:\$PATH\";" > $build_dir/.profile.d/nodejs.sh
echo "export LD_LIBRARY_PATH=\"\$HOME/vendor/libs/\";" >> $build_dir/.profile.d/nodejs.sh

Using the custom buildpack

Once the buildpack changes have been committed to the external GitHub repository, the application manifest can be updated to point to this new location.
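For example, a manifest.yml pointing at a custom buildpack might look like this (the application name and repository URL are placeholders):

---
applications:
- name: doctor-watson
  buildpack: https://github.com/your-account/nodejs-buildpack.git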

… at this point all we have to do is deploy our application again to take advantage of the modified runtime.

Conclusion

Buildpacks are a fantastic feature of Cloud Foundry, allowing the platform to support almost any runtime. Because buildpacks are open-source Git repositories, you can build on any existing one.

For Doctor Watson, we were able to add a command line binary, built in another language, to the NodeJS runtime. Docker was a great tool when developing our custom buildpack.

If you want more information on customising buildpacks, check out the Cloud Foundry documentation.

Source code for the custom buildpack we created is available here.