App Management
The following article is a primer on managing self-hosted apps. It covers everything from keeping Dashy (or any other app) up-to-date, secure and backed up, to topics like auto-starting, monitoring, log management, web server configuration and using custom domains.
Contents
- Providing Assets
- Running Commands
- Healthchecks
- Logs and Performance
- Auto-Starting at Boot
- Updating
- Backing Up
- Scheduling
- SSL Certificates
- Authentication
- Network Exposure
- Managing with Compose
- Environmental Variables
- Setting Headers
- Remote Access
- Custom Domain
- Securing Containers
- Web Server Configuration
- Running a Modified App
- Building your Own Container
Providing Assets
Although not essential, you will most likely want to provide several assets to your running app.
This is easy to do using Docker Volumes, which let you share a file or directory between your host system and the container. Volumes are specified in the Docker run command, or Docker compose file, using the --volume or -v flag. The value consists of the path to the file or directory on your host system, followed by the destination path within the container. Fields are separated by a colon (:), and must be in the correct order. For example: -v ~/alicia/my-local-conf.yml:/app/user-data/conf.yml
In Dashy, commonly configured resources include:
- ./user-data/conf.yml - Your main application config file
- ./public/item-icons - A directory containing your own icons. This allows for offline access, and better performance than fetching from a CDN
- Also within ./public you'll find standard website assets, including favicon.ico, manifest.json, robots.txt, etc. There's no need to pass these in, but you can do so if you wish
- /src/styles/user-defined-themes.scss - A stylesheet for applying custom CSS to your app. You can also write your own themes here.
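Putting these together, a compose fragment mounting all three might look something like the below sketch. The host-side paths are placeholders - substitute the locations of your own files:

```yaml
services:
  dashy:
    image: lissy93/dashy:latest
    ports:
      - 8080:8080
    volumes:
      # Host paths on the left are examples - adjust to your setup
      - ~/dashy/conf.yml:/app/user-data/conf.yml
      - ~/dashy/item-icons:/app/public/item-icons
      - ~/dashy/my-themes.scss:/app/src/styles/user-defined-themes.scss
```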
Running Commands
If you're running an app in Docker, then commands will need to be passed to the container to be executed. This can be done by preceding each command with docker exec -it [container-id], where container ID can be found by running docker ps. For example docker exec -it 26c156c467b4 yarn build. You can also enter the container, with docker exec -it [container-id] /bin/ash, and navigate around it with normal Linux commands.
Dashy has several commands that can be used for various tasks, you can find a list of these either in the Developing Docs, or by looking at the package.json. These can be used by running yarn [command-name].
Healthchecks
Healthchecks are configured to periodically check that Dashy is up and running correctly on the specified port. By default, the health script is called every 5 minutes, but this can be modified with the --health-interval option. You can check the current container health with: docker inspect --format "{{json .State.Health }}" [container-id], and a summary of health status will show up under docker ps. You can also manually request the current application status by running docker exec -it [container-id] yarn health-check. You can disable healthchecks altogether by adding the --no-healthcheck flag to your Docker run command.
To restart unhealthy containers automatically, check out Autoheal. This image watches for unhealthy containers, and automatically triggers a restart. (This is a stand-in for Docker's --exit-on-unhealthy flag, which was proposed but never merged.) There's also Deunhealth, which is super lightweight, and doesn't require network access.
docker run -d \
--name autoheal \
--restart=always \
-e AUTOHEAL_CONTAINER_LABEL=all \
-v /var/run/docker.sock:/var/run/docker.sock \
willfarrell/autoheal
Logs and Performance
Container Logs
You can view logs for a given Docker container with docker logs [container-id]; add the --follow flag to stream the logs. For more info, see the Logging Documentation. There's also Dozzle, a useful tool that provides a web interface where you can stream and query logs from all your running containers in a single web app.
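As a sketch, running Dozzle alongside your other containers via compose might look like the below. Dozzle serves its UI on port 8080 internally; the 8081 host port here is just an example to avoid clashing with Dashy:

```yaml
services:
  dozzle:
    image: amir20/dozzle:latest
    volumes:
      # Read-only access to the Docker socket, so Dozzle can read container logs
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 8081:8080
```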
Container Performance
You can check the resource usage for your running Docker containers with docker stats or docker stats [container-id]. For more info, see the Stats Documentation. There's also cAdvisor, a useful web app for viewing and analyzing resource usage and performance of all your running containers.
Management Apps
You can also view logs, resource usage and other info, as well as manage your entire Docker workflow, in third-party Docker management apps. For example, Portainer, an all-in-one open source management web UI for Docker and Kubernetes, or LazyDocker, a terminal UI for Docker container management and monitoring.
Advanced Logging and Monitoring
Docker supports using Prometheus to collect metrics, which can then be visualized using a platform like Grafana. For more info, see this guide. If you need to route your logs to a remote syslog, then consider using logspout. For enterprise-grade instances, there are managed services that make monitoring container logs and metrics very easy, such as Sematext with Logagent.
Auto-Starting at System Boot
You can use Docker's restart policies to instruct the container to start after a system reboot, or restart after a crash. Just add the --restart=always flag to your Docker compose script or Docker run command. For more information, see the docs on Starting Containers Automatically.
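In a compose file, the same thing is expressed with the restart key. A minimal sketch:

```yaml
services:
  dashy:
    image: lissy93/dashy
    restart: unless-stopped  # or "always" / "on-failure"
```

For a container that's already running, you can change the policy in place with docker update --restart=always [container-id], without recreating it.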
For Podman, you can use systemd to create a service that launches your container; the docs explain things further. A similar approach can be used with Docker, if you need to start containers after a reboot, but before any user interaction.
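As a sketch, a minimal user-level systemd unit for a Podman container might look like the below. The unit name, container name, and ports are assumptions for illustration - in practice, it's easiest to let podman generate systemd --new --name dashy produce this file for you:

```ini
# ~/.config/systemd/user/dashy.service - illustrative sketch only
[Unit]
Description=Dashy dashboard (Podman)
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
# Remove any stale container of the same name before starting
ExecStartPre=-/usr/bin/podman rm -f dashy
ExecStart=/usr/bin/podman run --name dashy -p 8080:8080 lissy93/dashy
ExecStop=/usr/bin/podman stop dashy

[Install]
WantedBy=default.target
```

Enable it with systemctl --user daemon-reload && systemctl --user enable --now dashy.service.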
To restart the container after something within it has crashed, consider using docker-autoheal by @willfarrell, a service that monitors and restarts unhealthy containers. For more info, see the Healthchecks section above.
Updating
Dashy is under active development, so to take advantage of the latest features, you may need to update your instance every now and again.
Updating Docker Container
- Pull latest image: docker pull lissy93/dashy:latest
- Kill off existing container
  - Find container ID: docker ps
  - Stop container: docker stop [container_id]
  - Remove container: docker rm [container_id]
- Spin up new container: docker run [params] lissy93/dashy
Automatic Docker Updates
You can automate the above process using Watchtower. Watchtower will watch for new versions of a given image on Docker Hub, pull down your new image, gracefully shut down your existing container and restart it with the same options that were used when it was deployed initially.
To get started, spin up the watchtower container:
docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
containrrr/watchtower
For more information, see the Watchtower Docs
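You can tune Watchtower's behaviour with environment variables. As a sketch, the compose fragment below checks for updates daily at 4am and cleans up old images afterwards (the schedule value is just an example - Watchtower uses a 6-field cron expression with seconds):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      WATCHTOWER_CLEANUP: "true"          # remove old images after updating
      WATCHTOWER_SCHEDULE: "0 0 4 * * *"  # check daily at 4am
```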
Updating Dashy from Source
Stop your current instance of Dashy, then navigate into the source directory. Pull down the latest code, with git pull origin master, then update dependencies with yarn, rebuild with yarn build, and start the server again with yarn start.
Backing Up
Backing Up Containers
You can make a backup of any running container really easily, using docker commit, and save it locally with docker save. To do so:
- First find the container ID, you can do this with docker container ls
- Now to create the snapshot, just run docker commit -p [container-id] my-backup
- Finally, to save the backup locally, run docker save -o ~/dashy-backup.tar my-backup
- If you want to push this to a container registry, run docker push my-backup:latest
Note that this will not include any data in Docker volumes, as the process for backing these up is a bit different. Since these files exist on your host system, if you have an existing backup solution implemented, you can incorporate your volume files within that system.
Backing Up Volumes
offen/docker-volume-backup is a useful tool for periodic Docker volume backups, to any S3-compatible storage provider. It's run as a lightweight Docker container, is easy to set up, and also supports GPG encryption, email notifications, and rotating away older backups.
To get started, create a docker-compose similar to the example below, and then start the container. For more info, check out their documentation, which is very clear.
services:
backup:
image: offen/docker-volume-backup:latest
environment:
BACKUP_CRON_EXPRESSION: "0 * * * *"
BACKUP_PRUNING_PREFIX: backup-
BACKUP_RETENTION_DAYS: 7
AWS_BUCKET_NAME: backup-bucket
AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
volumes:
- data:/backup/my-app-backup:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
volumes:
data:
It's worth noting that this process can also be done manually, using the following commands:
Backup:
docker run --rm -v some_volume:/volume -v /tmp:/backup alpine tar -cjf /backup/some_archive.tar.bz2 -C /volume ./
Restore:
docker run --rm -v some_volume:/volume -v /tmp:/backup alpine sh -c "rm -rf /volume/* /volume/..?* /volume/.[!.]* ; tar -C /volume/ -xjf /backup/some_archive.tar.bz2"
Dashy-Specific Backup
All configuration and dashboard settings are stored in your user-data/conf.yml file. If you provide additional assets (like icons, fonts, themes, etc), these will also live in the user-data directory. So to backup all Dashy data, this is the only directory you need to backup.
When you save config through the UI, Dashy automatically creates a timestamped backup in user-data/config-backups/ (configurable via the BACKUP_DIR env var). If you break your config, check that directory for a recent copy.
Since Dashy is open source, there shouldn't be any need to backup the main container.
Dashy also has a built-in cloud backup feature, which is free for personal users, and will let you make and restore fully encrypted backups of your config directly through the UI. To learn more, see the Cloud Backup Docs
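Since user-data is just a directory, a plain tar archive on a cron schedule also works. The below is a minimal sketch - the paths and the retention count (14 archives) are assumptions, adjust them to your setup:

```shell
# Minimal sketch of a local backup script for Dashy's user-data directory.
backup_dashy() {
  src="$1"   # e.g. ~/dashy/user-data
  dest="$2"  # e.g. ~/dashy-backups
  mkdir -p "$dest"
  stamp=$(date +%Y-%m-%d_%H-%M-%S)
  # Archive everything in user-data (conf.yml, icons, themes, etc.)
  tar -czf "$dest/user-data_$stamp.tar.gz" -C "$src" .
  # Prune: keep only the 14 newest archives
  ls -1t "$dest"/user-data_*.tar.gz 2>/dev/null | tail -n +15 | xargs -r rm --
}

# Example usage (run from cron for scheduled backups):
# backup_dashy ~/dashy/user-data ~/dashy-backups
```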
Scheduling
If you need to periodically schedule the running of a given command on Dashy (or any other container), then a useful tool for doing so is ofelia. This runs as a Docker container, and is really useful for things like backups, logging, updating, notifications, etc. Jobs are scheduled using Go's crontab format, and a useful tool for visualizing this is crontab.guru. This can also be done natively with Alpine: docker run -it alpine ls /etc/periodic.
I recommend combining this with healthchecks for easy monitoring of jobs, and failure notifications.
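As a sketch, ofelia jobs are attached to containers via labels. The job name ("health") and schedule below are examples - see ofelia's docs for the full label syntax:

```yaml
services:
  ofelia:
    image: mcuadros/ofelia:latest
    command: daemon --docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  dashy:
    image: lissy93/dashy
    labels:
      ofelia.enabled: "true"
      # Run Dashy's health-check inside the container every hour
      ofelia.job-exec.health.schedule: "@every 1h"
      ofelia.job-exec.health.command: "yarn health-check"
```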
SSL Certificates
Enabling HTTPS with an SSL certificate is recommended, especially if you are hosting Dashy anywhere other than your home. This will ensure that all traffic is encrypted in transit.
Auto-SSL
If you are using NGINX Proxy Manager, then SSL is supported out of the box. Once you've added your proxy host and web address, then set the scheme to HTTPS, then under the SSL Tab select "Request a new SSL certificate" and follow the on-screen instructions.
If you're hosting Dashy behind Cloudflare, then they offer free and easy SSL- all you need to do is enable it under the SSL/TLS tab. Or if you are using shared hosting, you may find this tutorial helpful.
Getting an SSL Certificate
Let's Encrypt is a global Certificate Authority, providing free SSL/TLS Domain Validation certificates to enable secure HTTPS access to your website. They have good browser/OS compatibility with their ISRG Root X1 and DST Root CA X3 root certificates, support wildcard issuance via ACMEv2 using the DNS-01 challenge, and use Multi-Perspective Validation. Let's Encrypt provides Certbot, an easy app for generating and setting up an SSL certificate.
This process can be automated, using something like the Docker-NGINX-Auto-SSL Container to generate and renew certificates when needed.
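As a sketch of that approach, a compose service based on the Docker-NGINX-Auto-SSL image might look like the below. The domain, service name, and upstream port are placeholders - check the image's own docs for the current configuration options:

```yaml
services:
  nginx-auto-ssl:
    image: valian/docker-nginx-auto-ssl
    ports:
      - 80:80
      - 443:443
    volumes:
      # Persist issued certificates between restarts
      - ssl-data:/etc/resty-auto-ssl
    environment:
      ALLOWED_DOMAINS: "dashy.example.com"
      # Proxy dashy.example.com to the Dashy container
      SITES: "dashy.example.com=dashy:8080"
volumes:
  ssl-data:
```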
If you're not so comfortable on the command line, then you can use a tool like SSL For Free or ZeroSSL to generate your cert. They also provide step-by-step setup instructions for most platforms.
Passing an SSL Certificate to Dashy
Once you've generated your SSL cert, you'll need to pass it to Dashy. This can be done by specifying the paths to your public and private keys using the SSL_PRIV_KEY_PATH and SSL_PUB_KEY_PATH environmental variables. Or, if you're using Docker, just mount your public and private SSL keys under /etc/ssl/certs/dashy-pub.pem and /etc/ssl/certs/dashy-priv.key respectively, e.g.:
docker run -d \
-p 8080:8080 \
-v ~/my-private-key.key:/etc/ssl/certs/dashy-priv.key:ro \
-v ~/my-public-key.pem:/etc/ssl/certs/dashy-pub.pem:ro \
lissy93/dashy:latest
By default the SSL port is 443 within a Docker container, or 4001 if running on bare metal, but you can override this with the SSL_PORT environmental variable.
Once everything is setup, you can verify your site is secured using a tool like SSL Checker.
Authentication
Dashy natively supports secure authentication using KeyCloak. There is also a Simple Auth feature that doesn't require any additional setup. Usage instructions for both, as well as alternative auth methods, have now moved to the Authentication Docs page.
Network Exposure
Dashy is designed to run on your local network, behind your firewall. If you only access it from within your home or over a VPN, the defaults are fine.
If you do need to expose Dashy to the internet, you should put it behind a reverse proxy with its own authentication layer (e.g. Authelia, Authentik, Cloudflare Access, or your proxy's built-in auth). Don't rely solely on Dashy's built-in auth for internet-facing instances - it's a convenience feature for private networks, not a hardened perimeter control. See the Authentication Docs for setup options.
When Dashy runs in server mode (the default Docker setup), it exposes several API endpoints for things like status checks, config saving, system info, and a CORS proxy used by widgets. When authentication is enabled (via ENABLE_HTTP_AUTH=true or BASIC_AUTH_USERNAME/BASIC_AUTH_PASSWORD env vars), all of these endpoints require valid credentials. Without auth configured, they are open. That's fine for private networks, but not appropriate for public access.
The CORS proxy (/cors-proxy) is worth calling out specifically: it forwards requests from the Dashy server to external URLs, so widgets can reach APIs that don't set CORS headers. On a private network this is harmless, but on an internet-exposed instance without auth, it could be abused as an open proxy. Always enable authentication if your instance is reachable from untrusted networks.
Managing Containers with Docker Compose
When you have a lot of containers, they quickly become hard to manage with docker run commands. The solution to this is Docker Compose, a handy tool for defining all of a container's run settings in a single YAML file, and then spinning up that container with a single short command: docker compose up. A good example of this can be seen in @abhilesh's docker compose collection.
You can use Dashy's default docker-compose.yml file as a template, and modify it according to your needs.
An example Docker compose, using the default base image from DockerHub, might look something like this:
---
services:
dashy:
container_name: Dashy
image: lissy93/dashy
volumes:
- /root/my-config.yml:/app/user-data/conf.yml
ports:
- 4000:8080
environment:
- BASE_URL=/my-dashboard
restart: unless-stopped
healthcheck:
test: ['CMD', 'node', '/app/services/healthcheck']
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s
Passing in Environmental Variables
With Docker, you can define environmental variables under the environment section of your Docker compose file. Environmental variables are used to configure high-level settings, usually before the config file has been read. For a list of all supported env vars in Dashy, see the developing docs, or the default .env file.
A common use case is to run Dashy under a sub-page, instead of at the root of a URL (e.g. https://my-homelab.local/dashy instead of https://dashy.my-homelab.local). In this case, you'd specify the BASE_URL variable in your compose file.
environment:
- BASE_URL=/dashy
You can also do the same thing with the docker run command, using the --env flag.
If you've got many environmental variables, you might find it useful to put them in a .env file. Similarly, for Docker run you can use --env-file if you'd like to pass in a file containing all your environmental variables.
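For example, a compose fragment referencing an env file might look like the below sketch (the file name and variable values are illustrative):

```yaml
# Contents of ./dashy.env might be, for example:
#   NODE_ENV=production
#   BASE_URL=/dashy
services:
  dashy:
    image: lissy93/dashy
    env_file:
      - ./dashy.env
```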
Setting Headers
Any external requests made to a different origin (an app or service under a different domain) will be blocked if the correct headers are not specified. This is known as Cross-Origin Resource Sharing (CORS), and is a security feature built into modern browsers.
If you see a CORS error in your console, this can be easily fixed by setting the correct headers. This is not a bug with Dashy, so please don't raise it as a bug!
Example Headers
The following section briefly outlines how you can set headers for common web proxies and servers. More info can be found in the documentation for the proxy that you are using, or in the MDN Docs.
These examples use:
- The Access-Control-Allow-Origin header, but depending on what type of content you are enabling, this will vary. For example, to allow a site to be loaded in an iframe (for the modal or workspace views) you would use X-Frame-Options.
- The domain root (/). If you're hosting from a sub-page, replace that with your path.
- A wildcard (*), which would allow access from traffic on any domain. This is discouraged, and you should replace it with the URL where you are hosting Dashy. Note that for requests that transport sensitive info, like credentials (e.g. Keycloak login), the wildcard is disallowed altogether and will be blocked.
Caddy
See Caddy header docs for more info.
headers / {
Access-Control-Allow-Origin *
}
NGINX
See NGINX ngx_http_headers_module docs for more info.
location / {
add_header Access-Control-Allow-Origin *;
}
Note this can also be done through the UI, using NGINX Proxy Manager.
Traefik
See Traefik CORS headers docs for more info.
labels:
- "traefik.http.middlewares.testheader.headers.accesscontrolallowmethods=GET,OPTIONS,PUT"
- "traefik.http.middlewares.testheader.headers.accesscontrolalloworiginlist=https://foo.bar.org,https://example.org"
- "traefik.http.middlewares.testheader.headers.accesscontrolmaxage=100"
- "traefik.http.middlewares.testheader.headers.addvaryheader=true"
HAProxy
See HAProxy Rewrite Response Docs for more info.
http-response add-header Access-Control-Allow-Origin *
Apache
See Apache mod_headers docs for more info.
Header always set Access-Control-Allow-Origin "*"
Squid
See Squid request_header_access docs for more info.
request_header_access Authorization allow all
Remote Access
WireGuard
Using a VPN is one of the easiest ways to provide secure, full access to your local network from remote locations. WireGuard is a relatively new open source VPN protocol that was designed with ease of use, performance and security in mind. Unlike OpenVPN, it doesn't need to recreate the tunnel whenever the connection drops, and it's also much easier to set up, using shared keys instead.
- Install WireGuard - See the Install Docs for download links + instructions
  - On Debian-based systems, it's sudo apt install wireguard
- Generate a Private Key - Run wg genkey on the WireGuard server, and copy it to somewhere safe for later
- Create Server Config - Open or create a file at /etc/wireguard/wg0.conf and under [Interface] add the following (see example below):
  - Address - as a subnet of all desired IPs
  - PrivateKey - the key you just generated
  - ListenPort - Default is 51820, but can be anything
- Get Client App - Download the WG client app for your platform (Linux, Windows, MacOS, Android or iOS are all supported)
- Create new Client Tunnel - On your client app, there should be an option to create a new tunnel; when doing so, a client private key will be generated (but if not, use the wg genkey command again), and keep it somewhere safe. A public key will also be generated, and this will go in our server config
- Add Clients to Server Config - Head back to your wg0.conf file on the server, create a [Peer] section, and populate the following info:
  - AllowedIPs - List of IP addresses inside the subnet that the client should have access to
  - PublicKey - The public key for the client you just generated
- Start the Server - You can now start the WG server, using wg-quick up wg0 on your server
- Finish Client Setup - Head back to your client device, and edit the config file: leave the private key as is, and add the following fields:
  - PublicKey - The public key of the server
  - Address - This should match the AllowedIPs section in the server's config file
  - DNS - The DNS server that'll be used when accessing the network through the VPN
  - Endpoint - The hostname or IP + port where your WG server is running (you may need to forward this in your firewall's settings)
- Done - Your clients should now be able to connect to your WG server :) Depending on your network's firewall rules, you may need to port forward the address of your WG server
Example Server Config
# Server file
[Interface]
# Which networks does my interface belong to? Notice: /24 and /64
Address = 10.5.0.1/24, 2001:470:xxxx:xxxx::1/64
PrivateKey = xxx
ListenPort = 51820
# Peer 1
[Peer]
PublicKey = xxx
# Which source IPs can I expect from that peer? Notice: /32 and /128
AllowedIPs = 10.5.0.35/32, 2001:470:xxxx:xxxx::746f:786f/128
# Peer 2
[Peer]
PublicKey = xxx
# Which source IPs can I expect from that peer? This one has a LAN which can
# access hosts/jails without NAT.
# Peer 2 has a single IP address inside the VPN: it's 10.5.0.25/32
AllowedIPs = 10.5.0.25/32,10.21.10.0/24,10.21.20.0/24,10.21.30.0/24,10.31.0.0/24,2001:470:xxxx:xxxx::ca:571e/128
Example Client Config
[Interface]
# Which networks does my interface belong to? Notice: /24 and /64
Address = 10.5.0.35/24, 2001:470:xxxx:xxxx::746f:786f/64
PrivateKey = xxx
# Server
[Peer]
PublicKey = xxx
# I want to route everything through the server, both IPv4 and IPv6. All IPs are
# thus available through the Server, and I can expect packets from any IP to
# come from that peer.
AllowedIPs = 0.0.0.0/0, ::0/0
# Where is the server on the internet? This is a public address. The port
# (:51820) is the same as ListenPort in the [Interface] of the Server file above
Endpoint = 1.2.3.4:51820
# Usually, clients are behind NAT. To keep the connection running, keep alive.
PersistentKeepalive = 15
A useful tool for getting WG set up is Algo. It includes scripts and docs which cover almost all devices, platforms and clients, has best practices implemented, and security features enabled. All of this is better explained in this blog post.
Reverse SSH Tunnel
SSH (Secure Shell) lets you open a secure tunnel to a remote host. Unlike the VPN methods, an SSH connection does not require an intermediary, and will not be affected by your IP changing. However, it only allows you to access a single service at a time. SSH was really designed for terminal access, but because of the benefits just mentioned, it's useful to set up as a fallback option.
Directly SSH'ing into your home network would require you to open a port (usually 22), which would be terrible for security, and is not recommended. However, a reverse SSH connection is initiated from inside your network. Once the connection is established, the port is redirected, allowing you to use the established connection to SSH into your home network.
The issue you've probably spotted is that most public, corporate, and institutional networks will block SSH connections. To overcome this, you'd need a server outside of your homelab, which your homelab's device can SSH into, and use that connection to create the reverse tunnel. You can then connect to that remote server (the mothership), which in turn connects to your home network.
Now all of this is starting to sound like quite a lot of work, but this is where services like remot3.it come in. They maintain the intermediary mothership server, and create the tunnel service for you. It's free for personal use, secure and easy. There are several similar services, such as RemoteIoT, or you could create your own on a cloud VPS (see this tutorial for more info on that).
Before getting started, you'll need to head over to Remote.it and create an account.
Then setup your local device:
- If you haven't already done so, you'll need to enable and configure SSH.
- This is out-of-scope of this article, but I've explained it in detail in this post.
- Download the Remote.it install script from their GitHub
curl -LkO https://raw.githubusercontent.com/remoteit/installer/master/scripts/auto-install.sh
- Make it executable, with chmod +x ./auto-install.sh, and then run it with sudo ./auto-install.sh
- Finally, configure your device, by running sudo connectd_installer and following the on-screen instructions
And when you're ready to connect to it:
- Login to app.remote.it, and select the name of your device
- You should see a list of running services, click SSH
- You'll then be presented with some SSH credentials that you can now use to securely connect to your home, via the Remote.it servers
Done :)
TCP Tunnel
If you're running Dashy on your local network, behind a firewall, but need to temporarily share it with someone external, this can be achieved quickly and securely using Ngrok. It's basically a super slick, encrypted TCP tunnel that provides an internet-accessible address that anyone can use to access your local service, from anywhere.
To get started, download and install Ngrok for your system, then just run ngrok http [port] (replace the port with the HTTP port where Dashy is running, e.g. 8080). When using HTTPS, specify the full local URL/IP, including the protocol.
Some Ngrok features require you to be authenticated, you can create a free account and generate a token in your dashboard, then run ngrok authtoken [token].
It's recommended to use authentication for any publicly accessible service. Dashy has an Auth feature built in, but an even easier method is to use the -auth switch. E.g. ngrok http -auth="username:password123" 8080
By default, your web app is assigned a randomly generated ngrok domain, but you can also use your own custom domain. Under the Domains Tab of your Ngrok dashboard, add your domain, and follow the CNAME instructions. You can now use your domain, with the -hostname switch, e.g. ngrok http -region=us -hostname=dashy.example.com 8080. If you don't have your own domain name, you can instead use a custom sub-domain (e.g. alicia-dashy.ngrok.io), using the -subdomain switch.
To integrate this into your docker-compose, take a look at the gtriggiano/ngrok-tunnel container.
There's so much more you can do with Ngrok, such as exposing a directory as a file browser, using websockets, relaying requests, rewriting headers, inspecting traffic, TLS and TCP tunnels and lots more. All or which is explained in the Documentation.
It's worth noting that Ngrok isn't the only option here, other options include: FRP, Inlets, Local Tunnel, TailScale, etc. Check out Awesome Tunneling for a list of alternatives.
Custom Domain
Using DNS
For locally running services, a domain can be set up directly in the DNS records. This method is really quick and easy, and doesn't require you to purchase an actual domain. Just update your network's DNS resolver to point your desired URL to the local IP where Dashy (or any other app) is running. For example, a line in your hosts file might look something like: 192.168.0.2 dashy.homelab.local.
If you're using Pi-Hole, a similar thing can be done in the /etc/dnsmasq.d/03-custom-dns.conf file, add a line like: address=/dashy.example.com/192.168.2.0 for each of your services.
If you're running OPNsense or pfSense, then this can be done through the UI with Unbound; it's explained nicely in this article by Dustin Casto.
Using NGINX
If you're using NGINX, then you can use your own domain name, with a config similar to the below example.
upstream dashy {
server 127.0.0.1:32400;
}
server {
listen 8080 ssl;
server_name dashy.mydomain.com;
# Setup SSL
ssl_certificate /var/www/mydomain/sslcert.pem;
ssl_certificate_key /var/www/mydomain/sslkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_session_timeout 5m;
ssl_prefer_server_ciphers on;
location / {
proxy_pass http://dashy;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
}
}
Similarly, a basic Caddyfile might look like:
dashy.example.com {
reverse_proxy nginx:8080
}
For more info, this guide on Setting up Domains with NGINX Proxy Manager and CloudFlare may be useful.
Container Security
- Keep Docker Up-To-Date
- Set Resource Quotas
- Don't Run as Root
- Specify a User
- Limit Capabilities
- Prevent new Privileges being Added
- Disable Inter-Container Communication
- Don't Expose the Docker Daemon Socket
- Use Read-Only Volumes
- Set the Logging Level
- Verify Image before Pulling
- Specify the Tag
- Container Security Scanning
- Registry Security
- Security Modules
Keep Docker Up-To-Date
To prevent known container escape vulnerabilities, which typically end in escalating to root/administrator privileges, patching Docker Engine and Docker Machine is crucial. For more info, see the Docker Installation Docs.
Set Resource Quotas
Docker enables you to limit resource consumption (CPU, memory, disk) on a per-container basis. This not only enhances system performance, but also prevents a compromised container from consuming a large amount of resources, in order to disrupt service or perform malicious activities. To learn more, see the Resource Constraints Docs
For example, to run Dashy with a max of 1GB of RAM, and a max of 50% of one CPU core:
docker run -d -p 8080:8080 --cpus=".5" --memory="1024m" lissy93/dashy:latest
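The same limits can be expressed in a compose file. A minimal sketch:

```yaml
services:
  dashy:
    image: lissy93/dashy
    mem_limit: 1024m  # cap memory at 1GB
    cpus: 0.5         # cap CPU at half of one core
```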
Don't Run as Root
Running Docker commands with sudo gives the container more host-level access than it needs. You should run Docker as a non-root host user instead.
If you're facing permission issues on Debian-based systems when running Docker commands without sudo, you may need to add your user to the docker group. First create the group: sudo groupadd docker, then add your (non-root) user: sudo usermod -aG docker [my-username], and finally run newgrp docker to refresh group membership.
Specify a User
For containers in general, running as an unprivileged user is one of the best ways to prevent privilege escalation attacks. You can specify a user with the --user param, using the user ID (UID) from id -u and group ID (GID) from id -g.
Note for Dashy: If you use features that write to disk (saving config through the UI, triggering a rebuild), the process needs write access to /app/user-data/ and /app/dist/. Since the default image creates these directories as root, running with --user will cause those features to fail with permission errors unless you also fix ownership of the mounted volumes. If you only use Dashy in read-only mode, running as a non-root user works fine:
docker run --user 1000:1000 -p 8080:8080 lissy93/dashy
Or with Docker Compose, using an environmental variable:
services:
dashy:
image: lissy93/dashy
user: ${CURRENT_UID}
ports: [ 4000:8080 ]
And then to set the variable, and start the container, run: CURRENT_UID=$(id -u):$(id -g) docker-compose up
Limit Capabilities
Docker containers run with a subset of the Linux kernel's capabilities by default. It's good practice to drop capabilities that are not needed for any given container.
With Docker run, you can use the --cap-drop flag to remove capabilities, you can also use --cap-drop=all and then define just the required permissions using the --cap-add option. For a list of available capabilities, see the Privilege Capabilities Docs.
Note that dropping privileges and capabilities at runtime is not fool-proof, and often any leftover privileges can be used to re-escalate; see POS36-C.
Here's an example using docker-compose, removing privileges that are not required for Dashy to run:
services:
dashy:
image: lissy93/dashy
ports: [ 4000:8080 ]
cap_drop:
- ALL
cap_add:
- CHOWN
- SETGID
- SETUID
- DAC_OVERRIDE
- NET_BIND_SERVICE
Prevent new Privileges being Added
To prevent processes inside the container from getting additional privileges, pass in the --security-opt=no-new-privileges:true option to the Docker run command (see docs).
Run Command:
docker run --security-opt=no-new-privileges:true -p 8080:8080 lissy93/dashy
Docker Compose
security_opt:
- no-new-privileges:true
Disable Inter-Container Communication
By default, Docker containers can talk to each other (via the docker0 bridge network). If you don't need this capability, then it should be disabled. This can be done by setting --icc=false on the Docker daemon (or "icc": false in /etc/docker/daemon.json). You can learn more about how to facilitate secure communication between containers in the Compose Networking docs.
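If some of your containers do still need to talk to each other, a user-defined bridge network scopes that communication explicitly, instead of relying on the shared docker0 bridge. A compose sketch (the network name is an example):

```yaml
services:
  dashy:
    image: lissy93/dashy
    networks:
      - dashboard-net  # only containers on this network can reach Dashy
networks:
  dashboard-net:
    driver: bridge
```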