Over a year ago, we faced a storage-related challenge that involved migrating from a standard file system to an S3-compatible service. The obvious first solution was to define an abstraction over the storage mechanism and develop implementations for both the standard file system and S3. While that was a legitimate solution, any option that could spare us from modifying the codebase would be the true winner.

s3fs and FUSE

s3fs is an open-source, FUSE-based file system driver that simply mounts an S3 bucket's contents as a standard file system directory. On Debian-based distros, you can install s3fs via:

sudo apt-get install s3fs
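
On RPM-based distros, the package is usually named s3fs-fuse instead (assuming the EPEL repository is enabled on RHEL/CentOS-like systems):

sudo yum install s3fs-fuse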

To start using it, you first need to put your ACCESS-KEY-ID and SECRET-ACCESS-KEY in /etc/passwd-s3fs with restricted permissions:

export AWSACCESSKEYID="your-aws-access-key-id"
export AWSSECRETACCESSKEY="your-aws-secret-access-key"
sudo sh -c "echo $AWSACCESSKEYID:$AWSSECRETACCESSKEY > /etc/passwd-s3fs"
sudo chmod 0640 /etc/passwd-s3fs
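
Alternatively, if you'd rather keep the credentials per-user instead of system-wide, s3fs also reads ~/.passwd-s3fs; a minimal sketch using the same key:secret format as above:

echo "your-aws-access-key-id:your-aws-secret-access-key" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs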

Then create an empty directory to serve as the mount point for the bucket's contents:

mkdir -p ~/my-bucket-mount-dir

And finally, invoke s3fs to mount the bucket's contents:

s3fs \
    "your-bucket-name" \
    ~/my-bucket-mount-dir \
    -d \
    -o allow_other \
    -o use_path_request_style \
    -o default_acl=public-read \
    -o nonempty \
    -o direct_io
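
To sanity-check the result, you can list the mount point, and unmount it with the standard FUSE tool when you're done; a quick sketch reusing the paths from the example above:

# confirm the bucket shows up as a mounted file system
mount | grep s3fs
ls ~/my-bucket-mount-dir

# unmount when finished
fusermount -u ~/my-bucket-mount-dir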

⚠️ If, instead of AWS, you’re using a third-party S3-compatible service (e.g., minio), just pass the storage server’s endpoint URL as an extra option to the above command:

-o url=https://some-storage-server-domain
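
For instance, a minimal sketch of mounting a bucket served by a self-hosted minio instance (the bucket name and endpoint below are placeholders); path-style requests are typically required for such setups:

s3fs \
    "your-bucket-name" \
    ~/my-bucket-mount-dir \
    -o url=https://some-storage-server-domain \
    -o use_path_request_style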

ℹ️ Of course, there are many other options you may want to consider when using s3fs. You can search for “s3fs” or read its docs by running s3fs --help or man s3fs.
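
One option worth knowing about is mounting the bucket at boot time through /etc/fstab, which s3fs supports via the fuse.s3fs file system type; a hedged sketch, reusing the bucket and mount point from above (/home/your-user is a placeholder for your actual home directory):

your-bucket-name /home/your-user/my-bucket-mount-dir fuse.s3fs _netdev,allow_other,use_path_request_style 0 0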


About Regular Encounters
I’ve decided to record my daily encounters with professional issues on a somewhat regular basis. Not all of them are equally important, unique, or intricate, but they are indeed practical, real, and, of course, textually minimal.