Erik's Security Blog Practicing Constant Vigilance

Advanced Web Security: Request Smuggling

The Advanced Web Security series looks beyond the OWASP Top 10 web application security risks. For the full list of articles in this series, visit this page.

Today’s web applications and systems are more complex than ever before. While servers should always send data to users over HTTPS, the usage of HTTPS does not guarantee that the data sent from the web server will be received only by the correct recipient. This may be surprising to hear, since in many minds HTTPS equals a green lock symbol which means everything is secure, right? Research from the past year shows otherwise! Request smuggling is a technique that allows a malicious user to receive server responses intended for other users of a web application. This type of attack makes it ever more important that web applications are designed to NOT send sensitive account data to the user (such as the user’s plaintext password, government ID information, payment details, etc.) and reinforces the importance of defense-in-depth. To reiterate, if a web application transmits sensitive account data back to the user, there is an opportunity for a high impact request smuggling attack - HTTPS will not provide protection against this risk.

What is request smuggling?

Every so often, a new attack surface is investigated in the world of web applications, and the result is that high impact vulnerabilities are discovered to affect a large number of websites. In my opinion, this was previously the case with SSRF (server-side request forgery) a few years ago, and I would argue that request smuggling did this in 2019. While James Kettle, the researcher who popularized the vulnerability, humbly decided to remove his research from eligibility in the main PortSwigger Top 10 web hacking techniques of 2019 competition, request smuggling (specifically the HTTP Desync attack variant) easily won the “community favourite” category, indicating that the security community thinks of this attack vector as an important research development. Understanding how request smuggling works requires thinking about a web application as more than a single web server. Instead, the results of a web application are the sum of all the servers in between your browser and the backend server, including the frontend server (i.e. reverse proxy). Request smuggling targets the fact that the front-end and back-end of the application both handle HTTP requests from users, but may handle those requests in slightly different ways due to implementation differences in different code bases. Specifically, request smuggling exploits a difference in how the end of an HTTP request is determined, which varies based on the “Content-Length” header and the “Transfer-Encoding” header. Due to this issue, we can encounter a scenario like the one shown in the diagram below (borrowed from the request smuggling section of the excellent PortSwigger Web Academy). At a high level, a possible result of this vulnerability is that the backend server may return the response for a web application victim user to a malicious user, potentially exposing details about the victim user to the malicious user.

Request smuggling high level visual

How does request smuggling really work under the hood?

I can’t claim to understand this attack better than the PortSwigger team that did this research in the first place, and therefore will defer to PortSwigger’s explanation for the deeper details of this attack. At the same time, I can try to provide a briefer explanation that may approach the topic from a different angle.

A web application’s frontend frequently takes incoming requests from users and forwards them to the backend over a single TLS connection. The frontend and backend usually run different software, and certain edge cases of the HTTP/1.1 protocol are implemented differently in different software packages. A malicious user can exploit these differing implementations by crafting an HTTP request that is understood one way by the frontend server and another way by the backend server. Usually this is done by creating an HTTP request whose body contains what looks like the start of another HTTP request. The backend server may treat those leftover bytes as the beginning of the following HTTP request (one that the malicious user did not send), effectively inserting a request into another user’s HTTP session (and therefore unaffected by HTTPS security, because this occurs on the server side after HTTP requests are decrypted). While HTTP request smuggling is usually performed using a POST request, since the body of the POST request is used in the exploit process, this detailed request smuggling writeup from the Knownsec 404 team shows that a GET request can also be used for request smuggling. While request smuggling on its own might be considered a form of reverse proxy bypass, the impact changes dramatically when the server uses a web cache, exposes internal APIs, or provides other targets.
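To make the frontend/backend disagreement concrete, here is a minimal sketch of the classic “CL.TE” desync in Python. The host name and the `SMUGGLED` marker are illustrative placeholders, and the two “views” are simulated with string splitting rather than real servers:

```python
# Sketch of a CL.TE desync request. A frontend that trusts
# Content-Length forwards all 13 body bytes; a backend that trusts
# Transfer-Encoding stops at the "0\r\n\r\n" chunk terminator and
# treats the remaining bytes as the START of the next request on
# the reused connection.
request = (
    b"POST / HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"Content-Length: 13\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"SMUGGLED"
)

# Frontend view: the body is the 13 bytes after the blank line.
body = request.split(b"\r\n\r\n", 1)[1]
assert len(body) == 13  # "0\r\n\r\nSMUGGLED"

# Backend view: the chunked body ends at "0\r\n\r\n", leaving
# "SMUGGLED" on the connection as a prefix of the next (victim's)
# request.
leftover = body.split(b"0\r\n\r\n", 1)[1]
print(leftover)
```

If a victim’s request arrives next on the same backend connection, the backend sees `SMUGGLED` + the victim’s request glued together, which is the core of the attack.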

How can a penetration tester exploit request smuggling?

There are several options for exploiting request smuggling, but if you’re not in a rush, I am currently in favor of exploiting this issue manually, at least at first. This builds a deeper understanding of how the server counts the Content-Length value, and of how to construct an HTTP request in exactly the form needed for the exploit you intend. One tip based on a mistake I overcame: when modifying the Content-Length, each newline counts as 2 characters (\r\n). However, once you grasp how this vulnerability works, you will no doubt want to speed up the process. Burp Suite has a request smuggler extension, or you can use the turbo intruder Burp extension for more heavy duty testing. Outside of Burp Suite, the tool named “smuggler” is my preferred choice, and I don’t think there are currently many competing off-the-shelf tools. It may be possible to use curl or ncat based on this follow-up comment from James Kettle in a bug bounty report to the DoD, but this doesn’t appear to be a common method (this post was the only instance I found of using ncat for request smuggling).
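The CRLF counting tip above is easy to verify for yourself (the body string here is just a throwaway example):

```python
# Each line ending in an HTTP message is CRLF ("\r\n"), 2 bytes.
# Typing a bare newline instead throws the Content-Length off by
# one byte per line, which silently breaks the exploit.
body = "x=1\r\n"   # 5 bytes, not 4
wrong = "x=1\n"    # what you get with a bare newline

print(len(body.encode()), len(wrong.encode()))  # 5 4
```

When hand-crafting payloads, always compute the Content-Length from the encoded bytes rather than eyeballing the characters.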

Smuggler tool at work testing for request smuggling

How to defend against request smuggling?

The easiest approach for small-scale web applications is to remove any CDN, reverse proxy, or load balancers! However, such a simplifying measure is impractical for larger web applications, which rely on these added layers for increased performance, so the following ideas are commonly considered other mitigation strategies:

  • Use HTTP/2 for connections to the backend: the root of the request smuggling vulnerability is ambiguity in HTTP/1.1 RFCs. The newer HTTP/2 protocol resolved the problematic ambiguities.
  • Reconfigure frontend/backend communication to avoid connection reuse: Request smuggling relies on confusion about the division of multiple HTTP requests in a single connection. If these HTTP requests are each in a separate connection, there would be no confusion about where each HTTP request starts and ends.
  • Use frontend and backend software that rely on the same HTTP request parsing: Request smuggling occurs because different software implementations in the frontend and backend handle ambiguous HTTP/1.1 edge cases differently. If the frontend and backend handle each HTTP request in the same manner, either because both the frontend and backend use the same software or close examination reveals consistent handling of the relevant headers in both pieces of software, then the frontend and backend will not handle HTTP requests in different ways.
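As one illustration of the second mitigation, here is a hedged sketch of what avoiding upstream connection reuse can look like in an nginx reverse proxy (the upstream name is a placeholder, and defaults vary by version, so treat this as illustrative rather than a drop-in hardening config):

```nginx
# Illustrative nginx reverse-proxy fragment. nginx's default behavior
# is to close the upstream connection after each request; the lines
# below make that explicit and avoid enabling upstream keepalive, so
# a smuggled request prefix cannot attach to a later user's request
# on a reused backend connection.
location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Connection close;  # do not reuse the upstream connection
}

# Note: adding a "keepalive N;" directive to the upstream block would
# reintroduce connection reuse, trading this mitigation for performance.
```

Disabling reuse has a real latency/throughput cost, which is why the HTTP/2 and consistent-parsing mitigations are often preferred at scale.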


James Kettle’s excellent research on the topic:
PortSwigger Web Academy Request Smuggling labs:
defparam’s Smuggler standalone Python tool:
PortSwigger’s request smuggling Burp extension:

Go take a break!

I’m on vacation for the week, and while I considered posting a technical article as usual, I decided this would be a good opportunity to highlight the importance of taking a break every now and then. For some of you this news will come as no surprise, but there may be others who are more focused on productivity and think “sleep is for the weak!” or “I’ll rest when I’m dead!”. For better or worse, I sometimes lean towards the latter category, so this post is for others who have difficulty with the choice between taking a break and maintaining an unbroken streak of productivity. I’m not suggesting long vacations are the most productive approach to every goal, but I would argue that taking a rest can be extremely helpful when done right. By using the weasel words “when done right”, I admit the implementation details (frequency and length) of how breaks are taken will vary from one person to another, but the reasons why breaks should be taken are more consistent. So I am not here to prescribe exactly how one should take breaks - I am only here to remind you to take the occasional break.

Perhaps the best reason to take breaks is to avoid burnout, which has gotten a bit of press in security. If you are unable to avoid burnout, your productivity will take a substantial hit regardless of whether you take a break or not, and no doubt you will improve your mindset if you choose to take breaks rather than take the burnout route. But a second reason where breaks can be useful is in your daily routine itself. One example would be the Pomodoro technique, which I have found useful when I am focused on a single topic or project for large chunks of time. If you wish to try out the Pomodoro technique, there are plenty of mobile apps and websites that offer a free Pomodoro timer.

Another approach to breaks in a daily routine is to focus your energy during your most productive hours of the day, leaving the less productive parts of the day open for breaks (such as meals, etc.). Everyone’s internal clock operates a bit differently, and because of this we have different times during the day when we are more likely to do our best work. We hear frequently of people working 6 days a week or other stories of intense struggle, but we hear less about how efficiently some people use just a few hours per day to accomplish great things. I speculate this is because it is easier and more common to be productive than to be effective. To apply this second approach, I would suggest recording your work output in half-hour chunks for 1-3 weeks to determine what parts of the day you are more productive, and reviewing Peter Drucker’s timeless book “The Effective Executive” to go further and remove unnecessary tasks from your work or personal to-do list.

Now if only my future self would heed these words, since I did not take a break from writing a blog post… Wisdom is the ability to follow your own advice!

Why you should start hacking Android

Android application security is like the land that time forgot. Mobile security is behind the times - issues like hard-coded sensitive data, unencrypted transmission of sensitive data, and other simple vulnerabilities abound. Billions of devices run Android applications, and only a handful of security researchers are seriously focused on analyzing them. Meanwhile, there’s a wild bonanza in web application security, where some of the cutting edge research looks like the equivalent of a graduate level thesis! Why does reality look like this? I don’t know, but I’m convinced that Android app security has been underappreciated and should see a surge of interest soon. Let me share what has convinced me to focus my energy on Android security.

First, let’s look at a graph from HackerOne’s 2020 annual report showing how many hackers on the platform hack mobile applications:

HackerOne 2020 Report Graph HackerOne 2020 Annual Report

That dark blue sliver, around 3% of the graph, is the amount of interest that exists in Android application security compared to other assets on HackerOne. While the folks looking at web apps spend substantial effort to find undiscovered assets, Android apps sit alone in the corner, unloved. HackerOne even tried to drum up more interest in Android Security with #AndroidHackingMonth blogposts in February 2020, and one researcher who started looking at Android after seeing these blogposts (that’s less than 6 months ago) already scooped up $30k in bounties from a single vulnerability. Let’s check BugCrowd’s 2020 report:

BugCrowd 2020 Report Graph BugCrowd 2020 ITMOAH Report

Huh, BugCrowd’s graph implies that Android is where hackers are “weakest”. Not much competition there either. Let’s look elsewhere, at Sebastian Porst (Google Play Protect’s engineering manager) presenting at Bugcrowd’s LevelUp 0x05. The two interesting tidbits I picked up on in the 4 minutes of video between minute 7 and minute 11 of this talk are that 1. 75% of the bug bounty payments that Google made on their Google Play Security Reward program occurred in the 90 days before the presentation (so between July and Oct 2019), which means interest was ramping up and 2. The number of vulnerabilities fixed in the 3 years prior to this talk (so 2016-2019) may make the Android app ecosystem security initiative the biggest application security program to date. Conveniently, Sebastian proceeds to outline different bug categories that you can start looking for.

BugCrowd LevelUp 0x05 Android app vulnerabilities presentation

So basically Google’s Android bug bounty program, which covers any app with over 100 million installs (around 500 targets, by one estimate), is known to have many vulnerabilities and pays well. Okay, sounds like that’s motivation enough, but there’s more data from Google. This one comes directly from the Google Play Security Rewards Program website. To summarize the chart below, Google is saying that easy-to-find vulnerabilities exist in Android apps, and instead of classifying them as out of scope, Google instead describes these vulnerabilities and still offers a reward. Very kind of them!

Google Play Security Rewards Known Issues chart Google Play Security Rewards Known Issues Chart

Of course, that chart is only for the low hanging fruit. The more complete list of rewards contains higher rewards, naturally. And remember that HackerOne, BugCrowd, Intigriti, and other sites have additional Android targets, many of which are conveniently curated in this Github repo.

Google Play Security Rewards full chart Google Play Security Rewards Chart

Oh, and let’s wrap up by reminding everyone that you can download and decompile Android applications, instrument them with Frida on a rooted device, and basically examine all the ins and outs of the application that you might want to see. How many other target assets can claim to be this close to whitebox open-source testing? I don’t see many hands being raised.

Let’s recap: we have many targets, a large number of expected vulnerabilities, decent rewards, not much competition, and open source assets ready for examination. Not to mention that it’s possible to download hundreds of applications and use scripts to statically analyze the applications quickly. I’m sold, and in the coming months, I hope to share more info about my adventures looking at Android application security. If you can’t wait to dive in, take a look at @0xteknogeek’s #AndroidHackingMonth Guide for a decent primer on how to get started.

DEF CON 28 Highlights

The first year of virtual DEF CON wrapped up last week, and what a firehose of information it was! I first must admit this was my first DEF CON where I was watching talks live (does watching a recorded talk on Twitch count as live?) as opposed to on YouTube after the event. Overall, I was impressed with how smoothly the event ran (or at least the parts that I saw). One perk of the online conference was that the 30 minute Q&A sessions seemed to move faster and more smoothly than in-person Q&As, and 30 minutes was enough to ask quite a few questions! The five talks that I highlight below are not in any particular order, since picking highlights is always a matter of opinion anyway.

DEF CON 28 logo

  1. I think Joshua is rightfully getting substantial attention for his “When TLS hacks you” talk, as this appears to be a new web security attack. A summary of this new attack surface is SSRF over TLS, but I think the idea of using TLS for web attacks is relatively unexplored in itself. Joshua’s Q&A session was very relaxed and indicated that he hadn’t poked around this new attack surface fully, which leaves a lot of room for others to fill in the gaps and move this further. On a related note, the “Domain Fronting Using TLS 1.3” talk by Erik Hunstad (which also featured novel research!) also found a new way to use TLS, and I appreciate the creativity of these researchers using technology that’s in plain sight in new ways.

  2. Having spent some time diving into the depths of the Bluetooth core specification, I have been keeping track of the new Bluetooth-related CVEs that have been popping up over the last several years (summarized in this blog post). After unveiling InternalBlue at 35C3 a couple years ago, Jiska presented with Francesco Gringoli about a creative new attack on Bluetooth/WiFi combo chips. Since space is at a premium in smartphones, and because WiFi and Bluetooth operate at the same frequency, most phones today use combination chips that implement both WiFi and Bluetooth. However, the phone has a clock that coordinates which protocol is able to transmit to avoid interference, and it’s this coordination mechanism that was attacked. Since phones will not be moving away from Bluetooth/WiFi combo chips (because of how compact and cheap they are), this and similar bugs at the intersection of different technologies will continue to be an interesting research area.

  3. Server-side template injection (SSTI) was something I first learned about thanks to PortSwigger’s fantastic Web Security Academy, and I suspect it is underrated because many of the findings to date rely on specific old versions of software to be running. However, discovering new vulnerabilities in the latest software versions is one solution to this! Munoz and Mirosh have had other awesome collaborations in the past (Friday the 13th JSON Attacks is worthwhile to read or skim), and this one produced over 30 CVEs - quite a collection. While the CVE in Microsoft SharePoint seems to be getting a lot of press, many other well-known CMS products (HubSpot, Atlassian Confluence) will likely be seeing the impact of this over the coming months, if they’re not patched.
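For anyone new to SSTI, the core idea fits in a few lines. This is a deliberately naive toy engine (not any real template library, and not tied to the talk’s targets) that evaluates expressions found in the template, which is exactly why concatenating user input into a template is dangerous:

```python
import re

def render(template, context):
    # Toy template engine: evaluates any {{ ... }} expression found
    # in the template string itself. Real engines are more complex,
    # but the trust model is the same: the template is code.
    return re.sub(
        r"\{\{(.*?)\}\}",
        lambda m: str(eval(m.group(1), {}, dict(context))),
        template,
    )

# Safe use: the developer controls the template.
print(render("Hello {{ name }}!", {"name": "Erik"}))  # Hello Erik!

# SSTI: user input is concatenated INTO the template, so the user's
# {{ ... }} expression executes server-side. "{{ 7*7 }}" rendering
# as "49" is the classic detection probe.
user_input = "{{ 7*7 }}"
print(render("Hello " + user_input + "!", {}))  # Hello 49!
```

The fix is the same in every engine: pass user input as template *data* (the context), never as part of the template string.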

  4. An astronaut. At DEF CON! With cool space photos!! Pam’s talk was lots of fun and offered insider observations drawn from first-hand experience of American spaceflight. To paraphrase one thing Pam said, security risks are magnified when human lives are involved, and the ISS has people living up there 24/7/365. There are obvious concerns with GPS security and other satellite hacking (a topic which drew substantially more interest this year), but this talk offered a perspective on the human element. Combining astronauts, space, and security just seems like a fantastic combo.

  5. Pedro Umbelino and Joao Morais presented some awesome vulnerabilities they uncovered in some of the most installed Android apps - the Google camera application and Samsung’s Find My Mobile application. The impact they were able to demonstrate with their findings was pretty impressive, as they put a fair bit of effort into adding capabilities to their proof of concept apps.

BONUS: Let’s face it, security can get pretty technical and no one likes being the dummy in the room. The War Story Bunker was an entertaining audio-only session on Discord where people chimed in with their stories of DEF CON and pen testing experiences. There were many epic wins and epic fails! Are you worried about deleting data during a pentest? Some of this year’s stories described accidentally taking down entire networks over substantial geographic areas! With these crazy stories to compare to, I expect to feel a lot better about any of the (reasonable) mistakes I might make.

Unfortunately, there’s no recording of the audio-only War Story Bunker session that I know of!

Analyzing Recent Bluetooth CVEs

A growing number of Bluetooth security vulnerabilities have been published in recent years, and I decided to dig into whether there was a common cause behind the many vulnerabilities being published. For those without much background in Bluetooth, I’ll give a brief introduction. Much like the internet, Bluetooth was created at a time when security was not a top priority. It has gotten security upgrades as the years have passed, but the need for backward compatibility and device interoperability has resulted in a massive 3000+ page Bluetooth core specification. This specification has many shortcomings: it is not especially well organized, it often leaves implementation details open for interpretation, and it’s obviously quite complex. And the 3000+ page specification doesn’t cover everything - there are many supplements and additional pieces of documentation that exist for understanding the details of the Bluetooth protocol. This complexity can obscure insecurity, and it makes the protocol less approachable for a researcher who has minimal time to spend learning all about a new protocol.

I find Bluetooth interesting because it’s found in so many devices, and yet it doesn’t seem to get much love. In fact, the security-conscious OpenBSD even removed their default support for Bluetooth. I can’t claim to have advanced knowledge of Bluetooth, given the enormous complexity held within the Bluetooth core specification alone, but I was able to piece together a few things by looking at the details behind some of the recent vulnerabilities. Here is a table showing some of the notable Bluetooth vulnerabilities I examined. The same table, as well as other useful Bluetooth security resources, are stored in this “awesome” Bluetooth security repo.

Name Con & Year published Site URL Paper URL Video URL SIG Notice Impacted CVE
BlueBorne Black Hat Europe 2017 Site Paper Video No Notice BR/EDR CVE-2017-8628, CVE-2017-0781, CVE-2017-0782, CVE-2017-0783, CVE-2017-0785, CVE-2017-14315, CVE-2017-1000250, CVE-2017-1000251, CVE-2017-1000410
Bleedingbit 2018 Site Paper Video No Notice LE CVE-2018-7080, CVE-2018-16986
Fixed Coordinate Invalid Curve Attack 2018 Site Paper No Video SIG Notice BR/EDR/LE CVE-2018-5383
SweynTooth 2019 Site Paper Video No Notice LE CVE-2019-16336, CVE-2019-17060, CVE-2019-17061, CVE-2019-17517, CVE-2019-17518, CVE-2019-17519, CVE-2019-17520, CVE-2019-19192, CVE-2019-19193, CVE-2019-19194, CVE-2019-19195, CVE-2019-19196, CVE-2020-10061, CVE-2020-10069, CVE-2020-13593, CVE-2020-13594, CVE-2020-13595
KNOB USENIX 2019 Site Paper Video SIG Notice BR/EDR CVE-2019-9506
BIAS IEEE S&P 2020 Site Paper Video SIG Notice BR/EDR CVE-2020-10135
Pairing Method Confusion 2020 No Site No Paper No Video SIG Notice BR/EDR/LE CVE-2020-10134
BlueFrag 2020 Article No Paper No Video No Notice Android CVE-2020-0022
Spectra Black Hat USA 2020 Abstract ? ? ? ? ?

I wanted to investigate the techniques that the vulnerability researchers were using to find and test out the new vulnerabilities they uncovered, and here’s what I found. The 2017 Blueborne research paper was the earliest attack examined, and perhaps the first in the most recent wave of Bluetooth vulnerabilities. The Blueborne research relied heavily on open source Bluetooth stacks (Android’s stack and Linux’s BlueZ stack), with some firmware reversing for Apple’s Bluetooth-based stack. The other vulnerability discovery from Armis followed a similar pattern, as did the Fixed Coordinate Invalid Curve Attack, though they both relied more on reverse engineering of firmware. However, in late 2018, a presentation at 35C3 unveiled the InternalBlue tool, which allows a Broadcom Bluetooth chip to be instrumented in such a way that it can act like an SDR. This allows for the injection of custom packets and many other features, without the need to buy expensive SDR hardware with a configurable Bluetooth stack on top of it. The introduction of this tool clearly assisted the discovery of more Bluetooth vulnerabilities, since after this tool’s release, practically every major Bluetooth vulnerability discovery used InternalBlue during the research phase. I found it interesting that the BlueFrag vulnerability was the only recent vulnerability in the list that was found with fuzzing, since fuzzing has uncovered so many vulnerabilities in other projects. Will more Bluetooth vulnerabilities be found once a better Bluetooth fuzzing tool is released? I’d be willing to bet yes, but time will tell.

The table below is a summary of the tools and techniques I saw used in the papers describing these vulnerabilities. The Pairing Method Confusion vulnerability does not appear to have a paper published, which explains my uncertainty around the techniques used to find it.

Name Firmware Reversing? Internal Blue? Fuzzing?
BlueBorne Yes No No
BleedingBit Yes No No
Fixed Coordinate Invalid Curve Attack Yes No No
Sweyntooth Yes Yes Maybe
KNOB Maybe Yes No
BIAS Yes Yes No
Pairing Method Confusion ? ? ?
BlueFrag Yes Yes Yes

Before you get overly concerned about your Bluetooth devices being attacked, it should be noted that many of the above attacks require an attacker to perform a man-in-the-middle (MiTM) attack, which is harder in some circumstances than in others. While Bluetooth is considered a short-distance radio communication protocol, Bluetooth’s commonly quoted maximum range of 100 meters is a limit of typical radios, not of the protocol itself. Radio transmissions can be sent and received from much further than 100 m if proper radio equipment is used, as shown back in 2004 by the trifinite group. New attacks impacting the underlying Bluetooth specification are certainly something to watch out for due to the number of devices they impact. While a malicious Bluetooth worm has yet to appear in the wild, attacks such as BlueFrag make such a scenario a possibility. And since many IoT devices and older phones supporting Bluetooth are unlikely to receive Bluetooth firmware updates to mitigate attacks at the Bluetooth stack layer, these types of findings may have consequences in the coming years.


“awesome” Bluetooth security repository: