
Troubleshooting Scenarios: Docker


Medium


Scenario: "Salta": Docker container won't start.

Level: Medium

Type: Fix

Access: Email

Description: There's a "dockerized" Node.js web application in the /home/admin/app directory. Create a Docker container so you get a web app on port :8888 and can curl to it. For the solution to be valid, there should be only one running Docker container.

Test: curl localhost:8888 returns Hello World! from a running container.

Time to Solve: 15 minutes.
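
A hedged starting point, assuming the provided Dockerfile in /home/admin/app is close to working and the Node app listens on :8888 (adjust the container port to whatever app.js actually uses; the image/tag name "app" is illustrative):

    cd /home/admin/app
    cat Dockerfile                    # what does the build expect?
    docker build -t app .
    docker run -d --name app -p 8888:8888 app
    docker ps                         # there must be exactly one running container
    curl localhost:8888               # expect: Hello World!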

2 "Venice": Am I in a container? 15 m Do
"Venice": Am I in a container?

Scenario: "Venice": Am I in a container?

Level: Medium

Type: Do

Access: Email

Description: Try to figure out whether you are inside a container (like a Docker one, for example) or inside a virtual machine (as in the other scenarios).

Test: This scenario doesn't have a test (hence no "Check My Solution" button either).

Time to Solve: 15 minutes.
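
Some common, non-authoritative checks for making this call from a shell:

    ls -la /.dockerenv         # Docker usually creates this marker file in containers
    cat /proc/1/cgroup         # container runtimes often leave docker/containerd paths here
    ps -p 1 -o comm=           # PID 1 inside a container is rarely systemd or init
    systemd-detect-virt        # prints the container or VM technology, where available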

3 "Tarifa": Between Two Seas 20 m Fix Pro
"Tarifa": Between Two Seas

Scenario: "Tarifa": Between Two Seas

Level: Medium

Type: Fix

Access: Paid

Description: There are three Docker containers defined in the docker-compose.yml file: an HAProxy accepting connections on port :5000 of the host, and two nginx containers, not exposed to the host.

The person who tried to set this up wanted to have HAProxy in front of the (backend or upstream) nginx containers, load-balancing between them, but something is not working.

Test: Running curl localhost:5000 several times returns both hello there from nginx_0 and hello there from nginx_1

Check /home/admin/agent/check.sh for the test that "Check My Solution" runs.

Time to Solve: 20 minutes.
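
For orientation only, a round-robin HAProxy backend over two nginx containers looks roughly like this; the real service names and ports live in the scenario's docker-compose.yml and haproxy.cfg:

    backend nginx_backends
        balance roundrobin
        server nginx_0 nginx_0:80 check
        server nginx_1 nginx_1:80 check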

4 "Helsingør": The first walls of postgres physical replication 20 m Fix
"Helsingør": The first walls of postgres physical replication

Scenario: "Helsingør": The first walls of postgres physical replication

Level: Medium

Type: Fix

Access: Email

Description: You're setting up a PostgreSQL database with replication. You decided to use Docker along with Docker Compose to make it easier to manage and test. After a few hours of work the master database is up and running, but you're having trouble with the replica.

You need to figure out what's wrong with the replica and fix it.

Since you are using Docker Compose, you can check the status of the running containers using docker compose ps (docker ps will do the job too). You may also want to check the logs of the containers.

All definitions for the containers are inside the docker-compose.yml file. You can stand up the environment by running docker compose up -d and bring it down by running docker compose down.

If you make any change to the docker-compose.yml file, you can restart the containers by running docker compose up -d --force-recreate.

Test: Postgres replica container works.

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.

Time to Solve: 20 minutes.
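
The Compose commands from the description plus log inspection are the natural first pass; "replica" below is an assumed service name, so check docker-compose.yml for the real one:

    docker compose ps                        # which container is unhealthy or restarting?
    docker compose logs replica              # the replica logs the exact replication error
    docker compose up -d --force-recreate    # after editing docker-compose.yml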

5 "Bharuch": Lost in Translation 20 m Fix Pro
"Bharuch": Lost in Translation

Scenario: "Bharuch": Lost in Translation

Level: Medium

Type: Fix

Access: Paid

Description: There's a Docker container that runs a web server on port 3000, but it's not working.

Using the tooling and resources provided in the server, make the container run correctly.

Test: curl http://localhost:3000 should return "Hello from sadservers!"

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.

Time to Solve: 20 minutes.
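
Since the description gives little away, a generic triage sequence is a reasonable opening (the container name is whatever docker ps -a shows):

    docker ps -a                     # running, exited, or restart-looping?
    docker logs <container>          # startup and runtime errors
    docker inspect <container>       # entrypoint, command, port mappings, mounts
    curl -v http://localhost:3000    # how exactly does the request fail?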

6 "Quito": Control One Container from Another 20 m Do
"Quito": Control One Container from Another

Scenario: "Quito": Control One Container from Another

Level: Medium

Type: Do

Access: Email

Description: You have a running container named docker-access. Another container nginx is present but in a stopped state. Your goal is to start the nginx container from inside the docker-access container.

You must not start the nginx container from the host system or any other container that is not docker-access. You can restart this docker-access container.

Test: Executing docker ps inside the docker-access container: docker exec docker-access docker ps succeeds.

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.

Time to Solve: 20 minutes.
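
One well-known pattern for controlling a host's containers from inside a container is mounting the Docker socket, so the docker CLI inside talks to the host daemon. This is an illustration of the technique, not necessarily what this scenario's setup expects:

    docker exec docker-access ls -l /var/run/docker.sock   # is the socket available inside?
    docker exec docker-access docker ps                    # the test command
    docker exec docker-access docker start nginx           # the actual goal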

7 "Atlantis": Not found 15 m Fix Pro
"Atlantis": Not found

Scenario: "Atlantis": Not found

Level: Medium

Type: Fix

Access: Paid

Description: There is a small "C" application in the /home/admin/app directory. Create a minimal, small-footprint Docker container "app" (using a Docker multi-stage build) so you get a hello binary that returns a greeting in Atlantean. The binary is executed automatically when running docker run app.

Test: docker run app returns SOO-puhk

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.

Time to Solve: 15 minutes.
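
A minimal multi-stage sketch for producing a small static binary; the build image, source file name, and flags are assumptions, so adapt them to what is in /home/admin/app:

    FROM gcc AS build                 # assumption: a gcc-like image exists locally
    COPY hello.c .                    # assumption: the source file's name
    RUN gcc -static -o /hello hello.c

    FROM scratch                      # empty final stage keeps the footprint minimal
    COPY --from=build /hello /hello
    ENTRYPOINT ["/hello"]

Built and run with docker build -t app /home/admin/app and docker run app.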

8 "Auderghem": Containers miscommunication 15 m Fix Pro
"Auderghem": Containers miscommunication

Scenario: "Auderghem": Containers miscommunication

Level: Medium

Type: Fix

Access: Paid

Description: There is an nginx Docker container that listens on port 80, the purpose of which is to redirect traffic to two other containers, statichtml1 and statichtml2, but this redirection is not working.
Fix the problem.

IMPORTANT. You can restart all containers, but don't stop or remove them.

Test: The nginx container must redirect the traffic to the statichtml1 and statichtml2 containers:

curl http://localhost returns the Welcome to nginx default page
curl http://localhost/1 returns HelloWorld;1
curl http://localhost/2 returns HelloWorld;2

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.

Time to Solve: 15 minutes.
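
Two things usually matter in this shape of problem: whether nginx's upstream names resolve, and whether all three containers share a Docker network. A hedged first pass, using the container names from the description:

    docker inspect -f '{{json .NetworkSettings.Networks}}' nginx statichtml1 statichtml2
    docker exec nginx getent hosts statichtml1            # does the upstream name resolve?
                                                          # (getent availability depends on the image)
    docker network connect <shared-network> statichtml1   # only if they sit on different networks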

9 "Woluwe": Too many images 15 m Fix Pro
"Woluwe": Too many images

Scenario: "Woluwe": Too many images

Level: Medium

Type: Fix

Access: Paid

Description: A pipeline created a lot of Docker images locally for a web app. All these images except for one contain a typo introduced by a developer: there's an incorrect image instruction that pipes "HelloWorld" to "index.htmlz" instead of using the correct "index.html".
Find which image doesn't have the typo (and uses the correct "index.html"), tag this correct image as "prod" (rather than fixing the current prod image), and then deploy it with docker run -d --name prod -p 3000:3000 prod so it responds correctly to HTTP requests on port :3000 instead of returning "404 Not Found".

Test: curl http://localhost:3000 should respond with HelloWorld;529

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.

Time to Solve: 15 minutes.
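
A sketch for hunting the odd image out via its build instructions, assuming the typo is visible in docker history output as the description suggests (base images without any such instruction will also match, so read the results with that in mind):

    for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
        docker history --no-trunc "$img" | grep -q 'index.htmlz' || echo "no typo: $img"
    done
    docker tag <correct-image> prod
    docker run -d --name prod -p 3000:3000 prod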

10 "Podgorica": Docker to Podman migration 20 m Do Pro
"Podgorica": Docker to Podman migration

Scenario: "Podgorica": Docker to Podman migration

Level: Medium

Type: Do

Access: Paid

Description: You have been tasked with migrating this future web server from using Docker (which uses a daemon) to rootless Podman.
There is already an Nginx Podman image on the server, and your objective is to manage the container created from it using systemd, so that it starts automatically on reboot and continues running unless explicitly stopped (the same behaviour expected from a Docker-managed container).
Create a systemd service named container-nginx.service that manages the Podman Nginx container. Enable and start this service.

NOTES: Although a quadlet file solution should be valid, the check script does not yet account for it.

There is no need to reboot the VM, although if you want you could reboot it from the command line with /sbin/shutdown -r now and refresh or reopen the web console.

Test: The checker script will test if the container-nginx.service is active and enabled, and if it can stop and start the service. It will also verify that curl localhost:8888 returns the default "Welcome to nginx" web page.

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.

Time to Solve: 20 minutes.
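
One route is Podman's podman generate systemd (it predates Quadlet, which matches the note above). Roughly, for a rootless user; the image name and port mapping are assumptions:

    podman create --name nginx -p 8888:80 <nginx-image>
    podman generate systemd --name nginx --files --new   # writes container-nginx.service
    mkdir -p ~/.config/systemd/user
    mv container-nginx.service ~/.config/systemd/user/
    systemctl --user daemon-reload
    systemctl --user enable --now container-nginx.service
    loginctl enable-linger admin    # keep the user's services alive across reboots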

11 "Torino": Optimize grande Docker image 15 m Do Pro
"Torino": Optimize grande Docker image

Scenario: "Torino": Optimize grande Docker image

Level: Medium

Type: Do

Access: Paid

Description: A Torino Node.js application is located in the ~/torino-app directory.
You can run it directly with: nohup node app.js > app.log 2>&1 &. You can also verify that it works by running: curl localhost:3000

There is already a torino Docker image built with the Dockerfile in ~/torino-app, but the resulting image size is 916 MB.

Your task is to optimize the Docker image size:
1. Build a new Docker image for the Torino application, also called torino:latest but with a total size under 122 MB
2. Create and run a container using this optimized image.

NOTE: You can only use the Docker images already on the server.
To build the Node application, your Dockerfile needs to COPY, besides app.js, the package*.json files and, since there is no Internet access and you cannot RUN npm install, the node_modules directory.

Test: The torino Docker image is less than 122 MB and curl http://localhost:3000 returns Hello from Torino!

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.

Time to Solve: 15 minutes.
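
A sketch of the kind of Dockerfile that typically fits this budget, assuming a slim or alpine Node base image already exists locally (the note above rules out npm install):

    FROM node:alpine                     # assumption: the smallest local Node image
    WORKDIR /app
    COPY package*.json ./
    COPY node_modules ./node_modules     # pre-built, since there is no Internet access
    COPY app.js .
    EXPOSE 3000
    CMD ["node", "app.js"]

Then docker build -t torino:latest ~/torino-app and docker run -d -p 3000:3000 torino:latest.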

12 "Socorro, NM": Optimize Podman image 15 m Do Pro
"Socorro, NM": Optimize Podman image

Scenario: "Socorro, NM": Optimize Podman image

Level: Medium

Type: Do

Access: Paid

Description: The podman image localhost/prod:latest contains a static website.
Initially the image size is 261 MB and contains 100 layers.

Your task:
1. Optimize the image localhost/prod:latest so that its size is less than 200 MB, using the same tag.
2. Run a container named "check" from the optimized image: podman run -d --name check -p 8888:80 localhost/prod:latest so that curl localhost:8888 returns 100 lines.

Test: The podman image localhost/prod:latest size is less than 200 MB and running curl localhost:8888 from a container named "check" created from the image returns 100 lines.

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.

Time to Solve: 15 minutes.
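
One common way to collapse many layers into one is exporting a container's filesystem and re-importing it. Note that import drops image metadata, so CMD/EXPOSE must be restored with --change; the values below are assumptions about the image:

    podman create --name tmp localhost/prod:latest
    podman export tmp | podman import \
        --change 'EXPOSE 80' \
        --change 'CMD ["nginx", "-g", "daemon off;"]' \
        - localhost/prod:latest
    podman rm tmp
    podman run -d --name check -p 8888:80 localhost/prod:latest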

13 "San Juan": mucho Traefik 20 m Fix New
"San Juan": mucho Traefik

Scenario: "San Juan": mucho Traefik

Level: Medium

Type: Fix

Access: Email

Description: There is a Traefik load balancer that must be up and running. The server and the backend services are managed by Docker Compose. Running curl -s app.sadserver | head -n1 must return the host ID of one of the backend servers, and running the command again must return a new host ID. The server seems to work some of the time; at other times it fails or just times out.

The round-robin configuration should make the webserver iterate through the back-end servers.

Test: curl -s app.sadserver | head -n1 returns something like Hostname:

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.

Time to Solve: 20 minutes.
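
Intermittent failures in a Compose-managed Traefik setup usually show up in service state and logs; a hedged first pass ("traefik" is an assumed service name):

    docker compose ps             # are Traefik and all backends up?
    docker compose logs traefik   # routing and health-check errors
    docker compose config         # the effective configuration, labels included
    for i in 1 2 3; do curl -s app.sadserver | head -n1; done   # watch the rotation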

Hard


Scenario: "Bern": Docker web container can't connect to db container.

Level: Hard

Type: Fix

Access: Email

Description: There are two Docker containers running, a web application (WordPress, or WP) and a database (MariaDB) as back-end, but if we look at the web page, we see that it cannot connect to the database. curl -s localhost:80 |tail -4 returns:

<body id="error-page"> <div class="wp-die-message"><h1>Error establishing a database connection</h1></div></body> </html>

This is not a WordPress code issue (the image is :latest with some network utilities added). What you need to know is that WP uses "WORDPRESS_DB_" environment variables to create the MySQL connection string. See, for example, the WP config file ./html/wp-config.php (relative to /home/admin).

Test: sudo docker exec wordpress mysqladmin -h mysql -u root -ppassword ping. The wordpress container is able to connect to the database in the mariadb container and the command returns mysqld is alive.

Time to Solve: 20 minutes.
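
Given the WORDPRESS_DB_ hint, two checks suggest themselves: what those variables actually contain, and whether the hostname they point at resolves from inside the wordpress container. A hedged sketch:

    sudo docker exec wordpress env | grep WORDPRESS_DB_
    sudo docker exec wordpress getent hosts mysql    # the test implies WP must reach host "mysql"
    sudo docker network ls
    sudo docker inspect -f '{{json .NetworkSettings.Networks}}' wordpress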

2 "Singara": Docker and Kubernetes web app not working. 20 m Fix
"Singara": Docker and Kubernetes web app not working.

Scenario: "Singara": Docker and Kubernetes web app not working.

Level: Hard

Type: Fix

Access: Email

Description: There's a k3s Kubernetes install you can access with kubectl. The Kubernetes YAML manifests under /home/admin have been applied. The objective is to access from the host the "webapp" web server deployed and find what message it serves (it's a name of a town or city btw). In order to pass the check, the webapp Docker container should not be run separately outside Kubernetes as a shortcut.

Test: curl localhost:8888 returns a value from the webapp deployed Kubernetes pod.

Time to Solve: 20 minutes.
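
A non-authoritative sequence for finding and exposing the deployment; the resource names and ports below are assumptions that kubectl get will confirm or correct:

    kubectl get deploy,svc,pods -A -o wide    # what exists, and on which ports?
    kubectl logs deploy/webapp                # is the app serving, and what does it say?
    kubectl port-forward --address 0.0.0.0 svc/webapp 8888:80 &   # one way to reach it on :8888
                                              # (a NodePort or hostPort is another route)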

3 "Chennai": Pull a Rabbit from a Hat 30 m Fix Pro
"Chennai": Pull a Rabbit from a Hat

Scenario: "Chennai": Pull a Rabbit from a Hat

Level: Hard

Type: Fix

Access: Paid

Description: There is a RabbitMQ (RMQ) cluster defined in a docker-compose.yml file.

Bring this system up and then run the producer.py script in such a way that it is able to send messages to RMQ. In particular, you have to send the message "hello-lwc".

- RMQ is a queuing system: messages are put in the queue with a "producer" and they are taken out from the other side by a "consumer". The queue name has to be the same for both.

- To send the message "hello-lwc": python3 ~/producer.py hello-lwc. This should return Message sent to RabbitMQ. An "IncompatibleProtocolError" means RMQ is not working properly.

- To test consuming it: python3 ~/consumer.py will retrieve the next message from the queue and print it. Once everything is working, send more than one message so there's at least one in the queue when the validation runs.

- Do not change the consumer.py and producer.py files; if you do, the "Check My Solution" will fail.

Test: python3 ~/consumer.py returns hello-lwc

See /home/admin/agent/check.sh for the exact test.

Time to Solve: 30 minutes.
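
Putting the steps from the description in order (whether the cluster needs extra fixing will show in docker compose ps and the logs):

    docker compose up -d
    docker compose ps                   # wait until the RMQ nodes are healthy
    python3 ~/producer.py hello-lwc     # expect: Message sent to RabbitMQ
    python3 ~/producer.py hello-lwc     # a spare, so the queue isn't empty at validation
    python3 ~/consumer.py               # expect: hello-lwc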

4 "Florence": Database Migration Hell 30 m Fix Pro
"Florence": Database Migration Hell

Scenario: "Florence": Database Migration Hell

Level: Hard

Type: Fix

Access: Paid

Description: You are working as a DevOps Engineer at a company. A team member who left the company also left the docker-compose.yml of a database-backed web application unfinished.

Generally, the problem revolves around the database migration and Docker Compose.

Additionally, in front of the application there is an Nginx server, and you need to fix access to it as well.

The source code is in /home/admin/app.

Credit: Kamil Błaż

Test: curl --cacert /etc/nginx/certs/sadserver.crt https://sadserver.local returns a message containing "ready to serve requests"

The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute

Time to Solve: 30 minutes.
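
A hedged order of operations for a stack like this; service names are assumptions, and docker compose config will show the real ones:

    cd /home/admin/app
    docker compose config                  # validate and read the effective config
    docker compose up -d
    docker compose logs --tail=50          # which service fails first, and why?
    sudo nginx -t                          # if nginx runs on the host, validate its config
    getent hosts sadserver.local           # the TLS name must resolve locally
    curl --cacert /etc/nginx/certs/sadserver.crt https://sadserver.local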
