
Thunderdome - Emerge Through the Breach walkthrough

This is the first in a series of walkthroughs for the Thunderdome multi-cloud Cyber Range from Pwned Labs. This post will guide you through capturing the first flag, "Emerge Through the Breach". In the process, I will cover various tools and techniques, illustrating that there are multiple ways to achieve an objective.

Beginners can benefit from replicating this tradecraft, and even pros might learn a new thing or two! Walkthroughs also give me the opportunity to solidify and refresh my own understanding of offensive security concepts.

Starting point 🎯

Since all we have to start with is the IP address, it makes sense to scan it and see what we find. Run nmap, rustscan, masscan or your port scanner of choice. You just need to run a tool with options that give you a reliable scan.

$ nmap -v -Pn -sCV -T4 -oN nmap.out


Quick breakdown of the Nmap flags


I'm not going to go into a lot of detail as there are plenty of Nmap tutorials out there and also the official docs.

  • -v - verbose output in real time
  • -Pn - skip host discovery (useful if ping requests are blocked)
  • -sCV - combines -sC (run the default NSE scripts) and -sV (probe service versions)
  • -T4 - timing template - T1 is slowest and stealthiest and T5 is the most aggressive - fastest but noisiest and may overwhelm a host. Lower timings can provide more accurate results
  • -oN - normal output (as opposed to XML or grep-able formats)
  • You can use -A instead of -sCV to give you OS versions (if detected) and traceroute output as well as what -sCV provides

The Nmap output shows a few interesting things

  • Ports 22, 80 and 443 are open
  • HTTP is being served on port 443, which is not a standard configuration
  • The IP resolves to an EC2 hostname of the form ec2-<ip>.compute.amazonaws.com

From the hostname we know we are dealing with an AWS EC2 (VM) instance.

You can do a quick scan with rustscan to get an idea of open ports (it's typically much faster than a full nmap scan). Rustscan passes its findings to nmap, so you can run something like rustscan -a <target-ip> -- -sCV -Pn. In this command we specify -- and then the nmap flags. For more information on usage see the RustScan documentation.

$ rustscan -a

If we didn't get any cloud provider information from the nmap scan, we could also have used something like ip2cloud, which maps IPs to cloud provider ranges.

$ echo | ip2cloud 
[aws] :


There's also ip2provider, which can be run against a list of IPs.

$ ./ip2provider.py
aws AMAZON us-east-1

We know we're dealing with an AWS host serving web traffic. You may be thinking "well I could have just run nslookup and determined this was an EC2 instance". True, but nslookup won't always reveal whether a host is a cloud VM - custom DNS records and internal naming conventions may obscure this. In addition, scripts like ip2provider accept bulk IP addresses in a file, so you can quickly look up a number of potential hosts.
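Under the hood these lookup scripts are simple: load each provider's published CIDR ranges and test every candidate IP against them. The sketch below uses placeholder ranges (the 198.51.100.0/24 and 203.0.113.0/24 documentation networks), not real provider allocations - the real tools download the live lists, such as AWS's ip-ranges.json, at runtime.

```python
import ipaddress

# Placeholder ranges for illustration only -- real tools fetch the
# providers' published lists (e.g. AWS ip-ranges.json) at runtime.
PROVIDER_RANGES = {
    "aws": ["198.51.100.0/24"],
    "azure": ["203.0.113.0/24"],
}

def classify_ip(ip: str):
    """Return the provider whose CIDR range contains ip, else None."""
    addr = ipaddress.ip_address(ip)
    for provider, cidrs in PROVIDER_RANGES.items():
        if any(addr in ipaddress.ip_network(c) for c in cidrs):
            return provider
    return None

def classify_file(path: str):
    """Bulk-classify one IP per line, like ip2provider's file input."""
    with open(path) as f:
        return {line.strip(): classify_ip(line.strip())
                for line in f if line.strip()}
```

Feeding a file of candidate IPs to classify_file gives the same kind of bulk triage the scripts above provide.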


Checking out the website 🕵️

It's a healthcare / pharmaceutical website. The menu links return to the main page.

I checked for the usual robots.txt, .git/, backup[s], admin, api etc, and even checked for flag.txt (wishful thinking!)

Looking at the page source is always a good idea as you might find useful hidden field values or comments. Examining the source for the site we see

<!-- -->

This looks interesting... we'll save that and come back to it later.

The nmap output shows the host is running Linux and the site is being served via Apache. This is useful to know when looking for potential files with certain extensions (e.g., what you would expect to find hosted on an IIS box vs an Apache box). The key point is to apply some reasoning to your enumeration for better results. I ran ffuf to quickly fuzz subdirectories as a starting point.
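That reasoning can be baked into the fuzzing itself: generate candidates from a wordlist plus the extensions that fit the detected stack, which is essentially what the -x flag does in ffuf and feroxbuster. A minimal sketch - the wordlist here is a tiny stand-in for a real seclists file:

```python
from itertools import product

# Tiny stand-in wordlist; in practice read a seclists file instead.
WORDS = ["admin", "backup", "portal", "api"]

# Apache on Linux suggests php; an IIS box would suggest asp/aspx.
EXTENSIONS = ["php", "html", "json", "txt"]

def candidates(base_url: str):
    """Build directory and file candidates to probe, like ffuf -x."""
    urls = [f"{base_url}/{word}" for word in WORDS]
    urls += [f"{base_url}/{word}.{ext}" for word, ext in product(WORDS, EXTENSIONS)]
    return urls
```

Feeding the output to a requester that records non-404 responses reproduces the core loop of these tools.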


There is nothing to go after in the assets or portal or server-status directories.

I also ran feroxbuster, partly because it looks cool 😎 but also because I like to compare the results between different tools. I used a seclists wordlist and added some common file extensions to look for (php, html, json, txt).

$ feroxbuster -u -x php,html,json,txt

I also did some manual browsing and enumeration but didn't find anything interesting. Back to the page source finding...


Leaking secrets from Bitbucket 🪣💦


Let's have a look at the Bitbucket URL we found in the page source. Navigating to it we see the page below.

Note the highlighted link above. Clicking on that link returns a list of all repositories that belong to the organization.

Let's have a look at the first repository - trial-data-management-poc. We see a bunch of commits. It's a good idea to have a look through them to see if any sensitive information or credentials are (or were at some point) exposed.

We can use a tool like Gitleaks to do the heavy lifting and scan the repository for things like hard-coded secrets.

Pull the Gitleaks Docker image.

$ docker pull zricethezav/gitleaks

Test that it's working.

$ docker run --rm --name=gitleaks zricethezav/gitleaks detect --help

Clone the trial-data-management-poc and mp-website repositories from the Bitbucket site (click on the Clone button).


$ git clone


Now we can scan the repository. Change the path below to where the trial-data-management-poc repository was cloned on your machine.


$ docker run --rm -v /root/thunderdome/flag1/trial-data-management-poc:/tmp/scan --name=gitleaks zricethezav/gitleaks detect -v --source /tmp/scan



Gitleaks found a bunch of stuff but there may be a few false positives. Let's have a quick look at one of the findings. Make sure you're in the trial-data-management-poc directory.

git log -L <Line>,<Line>:<File> <Commit> - fill in these details from the output above, for example:

$ git log -L 11,11:tests/uploader/data-uploader.php c167543e30628c5a76f79f519a0adb752b238106



There are a couple of things we can potentially leverage - there's an email address and an AWS access key (which is effectively like a username). Let's get a list of all the commits and who made them so we can have a more thorough look, as secret scanning tools can sometimes produce false negatives and miss something they should have found.

Get a list of the commits.

$ git log --all --format='%H - %an %ae - %s' 
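When a repository has more commits than you can comfortably eyeball, the manual sweep can be scripted: dump every diff and search it for AWS key-shaped strings. A sketch - the prefix list follows AWS's documented access key ID prefixes (AKIA, ASIA, etc.):

```python
import re
import subprocess

# AWS access key IDs have a fixed shape: a known 4-char prefix
# (AKIA, ASIA, ...) followed by 16 uppercase/digit characters.
AKIA_RE = re.compile(r"\b(?:A3T[A-Z0-9]|AKIA|ASIA|ABIA|ACCA)[A-Z0-9]{16}\b")

def find_keys(text: str):
    """Return any AWS access-key-ID-shaped strings found in text."""
    return AKIA_RE.findall(text)

def sweep_repo(path: str):
    """Scan every diff line of every commit in a local clone."""
    log = subprocess.run(
        ["git", "-C", path, "log", "-p", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sorted(set(find_keys(log)))
```

This complements Gitleaks rather than replacing it - a narrow regex like this finds only one secret shape, but it finds it reliably.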


There aren't many commits, so we can examine each one for anything interesting.

$ git show 14129237ea34eeefbced772092c9264f60b2cefa

Interesting to note that Gitleaks didn't find the password above - Treatment! - in the .env file. This may be because many secrets detection tools rely heavily on finding high-entropy (random-looking) strings. This is an example of a false negative.


💡 I've encountered examples similar to the above in a DevSecOps context with tools such as TruffleHog. These tools work with a regular expression engine that must be tuned to your needs, striking a balance between false positives and false negatives. When committing code (depending on the size of the pull request), additional approvers can spot things that tools may miss because they lack the context - but let's not tangent into AI! You've probably heard the phrase "people, process, tools" in regard to security. In this scenario, the approvers are the people, the mandatory review requirement is the process and TruffleHog is the tool.
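You can see why an entropy-based detector skips Treatment! by computing Shannon entropy yourself: a human-chosen password scores far below a machine-generated secret, so a threshold tuned to catch keys passes right over it. (The third string below is AWS's documentation example secret access key, used purely for comparison.)

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character: -sum(p * log2(p)) over character frequencies."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

print(shannon_entropy("Treatment!"))                                # ~2.9
print(shannon_entropy("AKIATCKANV3QK3BT3CVG"))                      # ~3.3
print(shannon_entropy("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"))  # ~4.7
```

A detector thresholded at, say, 3.5 bits per character would flag the secret key but not the password - which is roughly what happened with Gitleaks here.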


Foothold in AWS 💥


Going through all the git commits we find the following interesting items:

  • Treatment! - password from commit 14129237ea34eeefbced772092c9264f60b2cefa
  • AKIATCKANV3QK3BT3CVG - AWS access key from commit c167543e30628c5a76f79f519a0adb752b238106
  • Two usernames - from the list of commits
  • mp-clinical-trial-data - S3 bucket name from commit c167543e30628c5a76f79f519a0adb752b238106


What can we do with this?


An AWS access key ID is not much good without the corresponding secret access key, but it's not entirely useless... you can ascertain the AWS account associated with an access key if you have your own AWS account credentials to use with the AWS CLI. To authenticate via the CLI run aws configure. When prompted, enter the AWS access key and secret access key of a user in your own AWS account. More information is available in the AWS CLI documentation.

You can run the following command to return the ID of the AWS account that the IAM user (whose key we found) belongs to:

$ aws sts get-access-key-info --access-key-id AKIATCKANV3QK3BT3CVG
{
    "Account": "211125382880"
}


To recap, we now have:

  • Two usernames - gathered from the commit history
  • A password - Treatment!
  • An AWS account ID - 211125382880

At this point we have enough to try to log in via the AWS console. There are only two names to try, but we should use haru's username first, as the password Treatment! was found in one of his commits and we should always check for password reuse. Haru used it as the database password, so he may also use it for his AWS console password.

We manage to log in successfully! Looking at the recently visited services, one stands out over the others as a potential place for a flag.

When we go to Secrets Manager we see the following.

Grab the loot 🏆

Click on the secret flag and then "Retrieve secret value" in the "Secret value" pane.


There's the first flag!

But we're not done yet. We need to identify, gather and exfiltrate anything that can help us move laterally within Massive Pharma and discover new services leading to the next flag. Remember the other secret - aws/haru? Let's have a look at that.




For fun, let's double-check that the access key is actually associated with Haru. Yes, we could check this by running aws sts get-caller-identity, but we can also use Pacu to confirm the access key is not a troll key... would Pwned Labs do that 👀

Run pacu and issue the command set_keys to add the AWS access key and secret access key. You can also run import_keys <profile> if you have the keys in your .aws/credentials file.



We can see the access key is associated with Haru. For more information on honey token detection it's worth checking these articles from Rhino Security.


For a quick test of what honey-token AWS keys look like in Pacu, hop over to the Canarytokens site, create an "AWS keys" canary token, run set_keys to add the keys, and test them in Pacu:



Situational awareness (and hint to the next flag) 👀


Let's use these credentials to see what else we can discover in the AWS account.


I like to use a tool called CloudFox to gain AWS situational awareness when I find AWS credentials. Once installed, run it as below, specifying all-checks and --profile haru, which is the AWS profile we created above. We're using all-checks because on a penetration test it's good to have a lot of information we can sift through and eliminate, rather than missing out on something crucial and having to retrace our steps.

$ cloudfox aws --profile haru all-checks



Once the CloudFox command has completed, one of the first things to examine is the inventory it has discovered. The inventory is a list of resources that the user associated with the access key has some level of permissions to. CloudFox dumped the output to /root/.cloudfox/cloudfox-output/aws/haru-211125382880/.

Let's check the inventory in the loot folder.

Make a note of the above... it's a snapshot (ahem) of what you'll need for the next flag! 😉