Desktop Containers - Tips

2019-03-07 17:51:00 +0000 UTC

I have recently decided to start running as many of my desktop programs as possible inside Docker containers, inspired by Jess Frazelle’s post.

My primary motivation was that it seemed like a good way to learn some of the intricacies of the Docker/container system, and, well, it just seemed interesting.

The following are some stumbling blocks I faced and how I got past them.

Connecting to X Server

Any graphical program, or any program that needs access to the X server (for example xclip), will need to be allowed to connect (duh!).

Sharing the Server/Display

First you will need to make sure the DISPLAY variable and the X server socket are shared with the container; adding these options to the run command does this.

    # mount the X11 socket and pass the display
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \

Allowing Access

In order for apps running in Docker to be allowed to connect to your X server (which is not running in Docker) you will need to allow access. I have this line in my .xinitrc:

xhost local:root
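
Putting the two together, a minimal sketch of a launcher for an X app looks like this (the image name cyberdummy/xeyes is hypothetical; any image with an X client installed would do):

# Hypothetical image containing xeyes, used only for illustration
docker run --rm -it \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    --name xeyes \
    cyberdummy/xeyes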

Secure(ish) Passwords

I store all my passwords and other sensitive data in pass. When apps that I am running inside Docker need these secrets, I supply them via environment variables. Here is my AWS CLI launcher, for example.

aws(){
    # Subshell so the exported secrets do not leak into the parent shell
    (
        export AWS_ACCESS_KEY_ID=$(pass show aws/id)
        export AWS_SECRET_ACCESS_KEY=$(pass show aws/secret)
        export AWS_DEFAULT_REGION="us-east-1"

        # Passing -e with just the name copies the value from the
        # environment without it showing in the process list
        docker run --rm -it \
            --user $(id -u) \
            --log-driver none \
            -e "AWS_ACCESS_KEY_ID" \
            -e "AWS_SECRET_ACCESS_KEY" \
            -e "AWS_DEFAULT_REGION" \
            -v "${HOME}/downloads:/Downloads" \
            --name "aws" \
            cyberdummy/aws "$@"
    )
}

So when I run the aws command it will look up the access credentials and supply them to the container. Notice the subshell parentheses, which stop those variables being exported outside the subshell. Also, not setting the variables inline like -e "AWS_ACCESS_KEY_ID=secrethere" stops them from appearing in the process list. It's important to note that the variables are still visible by inspecting the container, but that is good enough for me.

# Example to show potentially sensitive environment variables
docker inspect --format "{{.Config.Env}}" aws
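
As an aside, if you have not used pass before, the entries referenced by the launcher can be created beforehand with:

# Each insert prompts for the secret without echoing it to the terminal
pass insert aws/id
pass insert aws/secret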

Webservers -> Browser

When launching a temporary/development webserver of some kind, e.g. Hugo's hugo server or PHP's php -S, forwarding the port is not enough: you also need to bind the server to an accessible address (not localhost, which is only reachable from inside the container itself). The easiest way is to bind to the ANY address 0.0.0.0 so it is available on all interfaces.

# Hugo Example
docker run \
  -ti \
  --rm \
  --name hugoserver \
  -v "$(pwd):/workspace" \
  --user $(id -u) \
  -p 1313:1313 \
  cyberdummy/hugo server -t terminal --bind 0.0.0.0 -D

# PHP Example
docker run \
  -ti \
  --rm \
  -p 8081:8081 \
  -v "$(pwd)/some/files:/www" \
  php:7.2-cli -S 0.0.0.0:8081 -t /www
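
With the port published you can quickly check from the host that the server is answering:

# Quick sanity check from the host
curl http://localhost:8081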

My browser runs inside a container as well, so to make connecting to the servers easy I run a user-defined bridge network called "desktop". To create the network:

docker network create desktop

To connect my browser and server containers to this network, either run them with the:

--network desktop

option or connect the existing container with:

docker network connect desktop <container name>

Then you can just visit http://<container name>:<port> to view the site.
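
For example, to attach the Hugo container from above and view the site from the browser container:

docker network connect desktop hugoserver
# then visit http://hugoserver:1313 from the browser container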

Nvidia and OpenGL

I have an Nvidia 1060 card powering my 2x 4K displays, using the proprietary drivers supplied by Nvidia for Linux. This causes issues for containerised apps that want to use OpenGL but do not have the same drivers as the host.

Getting this to work involves installing the nvidia-docker runtime and having your Dockerfile look like the following:

FROM ubuntu:18.04

ENV CUDA_VERSION 10.0.130
ENV CUDA_PKG_VERSION 10-0=$CUDA_VERSION-1
# Expose all host GPUs to the container
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
ENV NVIDIA_REQUIRE_CUDA "cuda>=10.0 brand=tesla,driver>=384,driver<385"
# Append the display capability, needed for OpenGL
ENV NVIDIA_DRIVER_CAPABILITIES ${NVIDIA_DRIVER_CAPABILITIES},display

It must be based on ubuntu 18.04 as that is the only version currently supported by the runtime. The container should be launched with the --runtime nvidia option.

docker run -d \
    --runtime=nvidia \
    --name blah \
    cyberdummy/blah

Instead of using ubuntu 18.04 you can match your host OS (or use any OS), provided the Nvidia driver version installed in the container matches the one used on the host.
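
One way to check the host driver version, so you can match it inside the container image, is with nvidia-smi:

# Print the driver version installed on the host
nvidia-smi --query-gpu=driver_version --format=csv,noheader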