Description: A critical backup cron job silently stopped working three days ago. The backup script is located at /opt/backup/backup.sh and should create daily backups in /var/backups/daily/, but no new backups have been created recently.
Looking at the backup directory, you can see old backup files from a few days ago, proving the system used to work. However, there are no error emails, no obvious error logs, and the cron service appears to be running normally.
Fix ALL issues preventing the backups from running, so that backups are created successfully and reliably.
Test directory: /var/backups/daily/
Backup script: /opt/backup/backup.sh
Test: The solution will be validated by checking if a backup file has been created in the last 10 minutes.
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: Is "All I want for Christmas is you" already everywhere?. A bit unrelated, someone messed up the permissions in this server, the admin user can't list new directories and can't write into new files. Fix the issue. NOTE: Besides solving the problem in your current admin shell session, you need to fix it permanently, as in a new login shell for user "admin" (like the one initiated by the scenario checker) should have the problem fixed as well.
Test: The admin user, in a separate Bash login session, should be able to create a new directory in your /home/admin directory, create a file in that new directory, and add text to the new file.
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: We have a lot of AWS EC2 instances and EBS volumes, and we saved a description of our volumes to a file with: aws ec2 describe-volumes > aws-volumes.json. One of the volumes attached to an EC2 instance contains important data and we need to identify which instance it is attached to (its ID), but we only remember these characteristics: gp3, created before 31/09/2025, Size < 64, Iops < 1500, Throughput > 300.
Find the correct instance and put its "InstanceId" into the ~/mysolution file, e.g.: echo "i-00000000000000000" > ~/mysolution
Test: Running md5sum /home/admin/mysolution returns e7e34463823bf7e39358bf6bb24336d8 (we also accept the file without a new line at the end).
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: A Redis master-replica setup is running on this server, with the master on port 6379 and the replica on port 6380. Both instances show as "connected" when you check their status, but data synchronization has silently broken.
Recent writes to the master don't appear on the replica, even though there are no obvious errors in the logs and both Redis instances appear healthy.
Fix the replication issues so that data written to the master (port 6379) immediately appears on the replica (port 6380) without data loss.
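To see what "connected but not replicating" actually means here, a hedged first step is to compare the replication state on both instances and do a test write (assuming redis-cli is available on the server):
    redis-cli -p 6379 info replication                      # role:master, connected_slaves, master_repl_offset
    redis-cli -p 6380 info replication                      # role:slave, master_link_status, slave_repl_offset
    redis-cli -p 6379 set canary 1 && redis-cli -p 6380 get canary   # does a fresh write propagate?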
Description: The security team has again asked Mary and John to implement more security measures. Unfortunately, this time they have broken the LAMP stack (Apache with PHP), so the frontend is unable to get an answer from upstream; they need your help again to fix it.
The fixed application should be able to serve the content from the webserver. The problem is network connectivity; although the logs contain valuable information, the issue has nothing to do with the Apache server configuration.
Test: curl localhost | head -n1 returns SadServers - LAMP Stack
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: While working with a distro with a very small footprint, we just found out that some basic commands are not present. This was supposed to be a security feature; after all, this is just a small server. However, the web content was not deployed. Your task is to decompress the file /home/admin/web.zip and move the home.html file it contains to /var/www/html/index.html
Test: The service must return the string "Homepage". You can check with the command curl -s localhost
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: There is an nginx Docker container that listens on port 80 and whose purpose is to redirect the traffic to two other containers, statichtml1 and statichtml2, but this redirection is not working. Fix the problem.
IMPORTANT. You can restart all containers, but don't stop or remove them.
Test: The nginx container must redirect the traffic to the statichtml1 and statichtml2 containers:
curl http://localhost returns the "Welcome to nginx" default page
curl http://localhost/1 returns HelloWorld;1
curl http://localhost/2 returns HelloWorld;2
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: As the Christmas shopping season approaches, the security team has asked Mary and John to implement more security measures. Unfortunately, this time they have broken the LAMP stack; the frontend is unable to get an answer from upstream, so they need your help again to fix it.
The application should be able to serve the content from the webserver.
Note for Pro users: direct SSH access is not available (yet) for this scenario.
Test: curl localhost | head -n1 returns SadServers - LAMP Stack
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: A pipeline created a lot of Docker images locally for a web app. All these images except one contain a typo introduced by a developer: an incorrect image instruction pipes "HelloWorld" to "index.htmlz" instead of using the correct "index.html". Find the image that doesn't have the typo (and uses the correct "index.html"), tag this correct image as "prod" (rather than fixing the current prod image), and then deploy it with docker run -d --name prod -p 3000:3000 prod so that it responds correctly to HTTP requests on port 3000 instead of returning "404 Not Found".
Test: curl http://localhost:3000 should respond with HelloWorld;529
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: You are logged in as the user "admin" without general "sudo" privileges. The system administrator has granted you limited "sudo" access; this was intended to allow you to read log files.
Your mission is to find a way to exploit this limited sudo permission to gain a full root shell and read the secret file at /root/secret.txt. Copy the content of /root/secret.txt into the /home/admin/solution.txt file, for example: cat /root/secret.txt > /home/admin/solution.txt (the "admin" user must be able to read the file).
Test: As the user "admin", md5sum /home/admin/solution.txt returns 52a55258e4d530489ffe0cc4cf02030c (we also accept the hash of the same secret string without an ending newline).
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: You are logged in as the user "admin".
You have been tasked with auditing the admin user privileges in this server; "admin" should not have sudo (root) access.
Exploit this server so that you, as the admin user, can read the file /root/mysecret.txt. Save the content of /root/mysecret.txt to the file /home/admin/mysolution.txt, for example: echo "secret" > ~/mysolution.txt
Test: Running md5sum /home/admin/mysolution.txt returns . (We also accept the md5sum of the same file without a newline at the end).
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: You have been tasked with migrating this future web server from using Docker (which uses a daemon) to rootless Podman. There is already an Nginx Podman image on the server, and your objective is to manage the container created from it using systemd, so that it starts automatically on reboot and continues running unless explicitly stopped (the same behaviour expected from a Docker-managed container). Create a systemd service named container-nginx.service that manages the Podman Nginx container. Enable and start this service.
NOTE: Although a quadlet file solution should be valid, the check script does not account for it yet.
There is no need to reboot the VM, although if you want you could reboot it from the command line with /sbin/shutdown -r now and refresh or reopen the web console.
Test: The checker script will test if the container-nginx.service is active and enabled, and if it can stop and start the service. It will also verify that curl localhost:8888 returns the default "Welcome to nginx" web page.
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: A Torino Node.js application is located in the ~torino-app directory. You can run it directly with: nohup node app.js > app.log 2>&1 &. You can also verify that it works by running: curl localhost:3000
There is already a torino Docker image built with the Dockerfile in ~torino-app, but the resulting image size is 916 MB.
Your task is to optimize the Docker image size:
1. Build a new Docker image for the Torino application, also called torino:latest, but with a total size under 122 MB.
2. Create and run a container using this optimized image.
NOTE: You can only use the Docker images that already exist on the server. To build the Node application, your Dockerfile needs to COPY, besides app.js, the package*.json files and, since there is no Internet access and you cannot RUN npm install, the node_modules directory as well.
Test: The torino Docker image is less than 122 MB and curl http://localhost:3000 returns Hello from Torino!
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: The podman image localhost/prod:latest contains a static website. Initially the image size is 261 MB and contains 100 layers.
Your task:
1. Optimize the image localhost/prod:latest so that its size is less than 200 MB, using the same tag.
2. Run a container named "check" from the optimized image: podman run -d --name check -p 8888:80 localhost/prod:latest so that curl localhost:8888 returns 100 lines.
Test: The podman image localhost/prod:latest size is less than 200 MB, and running curl localhost:8888 against a container named "check" created from the image returns 100 lines.
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Description: A DNS server running Knot DNS is serving the zone sadservers.internal (see ls /var/lib/knot/zones/), but users are reporting that they can access neither blog.sadservers.internal nor api.sadservers.internal. Your task is to diagnose and fix the DNS issues so the services become accessible. You can manage Knot DNS with sudo knotc commands.
Note: the 203.0.113.0/24 range is part of TEST-NET-3, a block reserved by RFC 5737 for documentation and examples, making it a Bogon IP range.
IMPORTANT. Do not change the Nginx configurations under /opt/services/ for the solution to work.
Test: You are able to access the blog and the API services:
curl blog.sadservers.internal returns Welcome to blog.sadservers.internal
curl api.sadservers.internal returns {"status": "ok", "service": "api.sadservers.internal"}
The "Check My Solution" button runs the script /home/admin/agent/check.sh, which you can see and execute.
Scenario: "Karakorum": WTFIT – What The Fun Is This?
Level: Hard
Type: Fix
Access: Email
Description: (NOTE: this is not a new scenario but an existing Pro one temporarily available to all users as the last Advent of SysAdmin 2025 scenario).
There's a binary at /home/admin/wtfit that nobody knows how it works or what it does ("what the fun is this"). Someone remembers something about wtfit needing to communicate to a service in order to start.
Run this wtfit program so it doesn't exit with an error, fixing or working around the things that you need but that are broken on this server.
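With an unknown binary, hedged tracing of its system calls is usually the quickest way to see what it expects; strace may or may not be installed, and ltrace or strings are alternatives:
    /home/admin/wtfit                                       # read the actual error message first
    strace -f -e trace=network /home/admin/wtfit            # which host and port is it trying to reach?
    ss -tlnp                                                # is anything listening where wtfit expects a service?
    strings /home/admin/wtfit | grep -Ei 'http|port|host'   # hints about the endpoint it wants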