
Containers vs. Local Installs


My main workstation is a Mac, and I tend to run many small tools via the command line. Over time I grew a bit bored with brew-installing every small binary only to use it for a while, let it collect dust on my system, and add it to the pile of things to keep up to date.

A long-ish time ago, I started wrapping those tools into tiny containers that each do only one job. This is no new concept; Jesse wrote about this aeons ago, in 2015. I remember talking to Fabian about it at a DrupalCon (it might have been Amsterdam). Fabian built docker-relay, which elegantly solves the issue of how to run those single-purpose containers. Currently, I create containers for a specific use, e.g. ffmpeg, qpdf, pandoc, or yt-dlp, and to use them, I link them in via my dotfiles.

The structure is usually a very small Dockerfile, e.g.

# Minimal Alpine base with just ffmpeg on top
FROM alpine
# /data is where the host directory gets bind-mounted at runtime
WORKDIR /data
RUN apk add ffmpeg
# Make the container behave like the ffmpeg binary itself
ENTRYPOINT [ "/usr/bin/ffmpeg" ]
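
Building it is a one-liner; the tag here matches the image name used in the snippets below:

docker build -t dasrecht/ffmpeg .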

In the past, I just wrapped it in a shell function in my dotfiles:

ffmpeg(){
    # Mount the current directory as /data so the tool sees local files,
    # remove the container on exit, and pass all arguments straight through
    docker run --rm -it \
    -v "$(pwd):/data" \
    dasrecht/ffmpeg "$@"
}
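
Once the dotfiles are sourced, the wrapper is transparent; a (hypothetical) transcode of a file in the current directory looks exactly like calling a local binary:

ffmpeg -i input.mov -c:v libx264 output.mp4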

And yes, I don’t publish most of those images because… there’s little to no benefit in publicising them, as I rebuild them whenever the need arises.

Now docker-relay solves this very nicely 😀 All it needs is a few lines of configuration in .docker-relay.yml and a symlink from the tool name back to the docker-relay binary on my system, and we’re good to go.

ffmpeg:
  run:
    image: dasrecht/ffmpeg
  cmd: ffmpeg
  volume:
    - '${PWD}:/data'
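
The symlink is the whole trick: each tool name is just a link back to the docker-relay binary, which then picks the matching entry from the config. A sketch, assuming docker-relay sits in /usr/local/bin and ~/bin is on your $PATH (both paths are assumptions, adjust to your setup):

# hypothetical paths; the tool name of the link must match the config key
ln -s /usr/local/bin/docker-relay ~/bin/ffmpeg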

And yes, it’s called docker-relay, but it works wonderfully together with Colima, which is much nicer to use on macOS than Docker Desktop, as it doesn’t have the tendency to suddenly eat an entire CPU core and drain my battery for no reason.
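
For completeness, getting Colima in place is a two-liner (assuming Homebrew is already installed):

brew install colima
colima start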