Back in the day, the word Amazon referred to the rainforest that accounts for over half of the planet's remaining rainforest cover. While this is still true, it isn't what most people think of when they hear the word. Nowadays, Amazon is the place where people order goods from the comfort of their couches. However, Amazon offers much more than an online marketplace. In 2006 it launched a subsidiary called Amazon Web Services (AWS), a platform that provides on-demand cloud computing for everyone. The adoption of AWS has grown considerably in the past years: more and more companies are embracing AWS and deploying their web applications in Amazon's cloud. But what does that mean from a security perspective?
A little overview
Amazon Web Services, simply known as AWS, offers many different cloud hosting services that companies can use when building their web applications. The wide range of services offered by Amazon can be very convenient for companies looking to outsource part (or the entirety) of their infrastructure, as it provides robust and flexible solutions for building modern web applications. This variety of services has also paved the way for new types of security considerations and new attack surfaces and, of course, when it comes to abusing AWS services the usual culprit is misconfiguration.
A misconfigured service might allow unauthorized access to a resource that in turn might give access to other resources, eventually spreading until the entire system is compromised.
AWS S3 Buckets
Simple Storage Service (S3) is probably the most popular of the AWS services and provides a scalable solution for object storage via web APIs. S3 buckets can be put to many different uses in many different scenarios.
Consider the case of a web application that needs a place to store content that is then served to users: S3 comes to the rescue. The image below depicts a scenario where a web application uses AWS S3 both to store images uploaded by users and to store content, such as JavaScript and CSS, used by the web application itself.
Moreover, S3 buckets can be used to store and serve static websites by enabling the "Static website hosting" property. This property only allows storing and serving static content such as HTML, meaning that dynamic pages written in server-side code such as PHP or ASP will not be executed.
S3 buckets can also be connected to other AWS services to support or enhance their features. Such synergy between S3 buckets and other AWS services inevitably results in a lot of juicy information being stored in buckets (as depicted in the picture below). For example, AWS EC2 instance snapshots are stored in S3 buckets. As a result, a poorly configured S3 bucket may end up exposing sensitive information contained in the EC2 instance, possibly including, but not limited to, keys that grant access to other EC2 instances or services.
There's always a Pot of Gold at the Rainbow's End
AWS S3 buckets provide different access permissions which, if misconfigured and left open to unauthorized access, can enable many different attack scenarios. Over the past years, AWS S3 buckets have come to be known as a primary source of leakage when companies suffer data breaches. No company is immune to leaving AWS S3 buckets publicly open in the wild, and whenever a breach happens it makes its way into major press headlines.
Booz Allen Hamilton, a U.S. defense contractor, left data publicly accessible through an insecurely configured S3 account containing files related to the National Geospatial-Intelligence Agency (NGA), which handles battlefield satellite and drone surveillance imagery.
Accenture, one of the biggest consulting agencies out there, left openly accessible AWS S3 buckets containing sensitive information including plain text passwords.
Verizon too was responsible, multiple times, for leaving misconfigured AWS S3 buckets containing the personal information of millions of its customers.
Misconfigured AWS S3 buckets that allow unauthorized access are thus abused by attackers to compromise the privacy of the data stored in those buckets, ultimately violating the privacy of millions of users around the world.
Violating users' privacy is not the only thing that can be achieved: being able to access S3 buckets might also provide an attacker with the knowledge required to access other AWS services.
If we consider a poorly configured S3 bucket that contains EC2 snapshots, an attacker might be able to access those snapshots and retrieve security keys to the EC2 instance itself.
Furthermore, consider the case in which an AWS S3 bucket is used to store and serve JavaScript content to a web application. If the bucket is misconfigured in a way that gives an attacker write access, they might be able to perform an attack that has been called "GhostWriter". Quoting Sekhar Sarukkai, chief scientist at Skyhigh Networks,
Bucket owners who store Javascript or other code should pay particular attention to this issue to ensure that 3rd parties don’t silently overwrite their code for drive-by attacks, bit-coin mining or other exploits. Even benign image or document content left open for overwriting can be exploited for steganography attacks or malware distribution.
Get your hands dirty
The first step when testing the security of S3 buckets is to identify the location of the bucket itself, meaning the URL that can be used to interact with the bucket.
Note that S3 buckets have unique names, meaning that two different AWS users cannot have a bucket with the same name. This fact can be helpful when trying to guess the name of a bucket knowing the name of the web application.
Let's start by considering a bucket named mindeds3test.
The URL schemas that can be used to interact with the bucket are:
https://s3-[region].amazonaws.com/[bucketname]/
http://[bucketname].s3.amazonaws.com/
Moreover, if the bucket has the property "Static website hosting", it provides access to static HTML pages via the following URL:
http://[bucketname].s3-website-[region].amazonaws.com/
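As a concrete example, for the mindeds3test bucket above (assuming, purely for illustration, that it lives in the eu-west-1 region), the URLs would look like:
https://s3-eu-west-1.amazonaws.com/mindeds3test/
http://mindeds3test.s3.amazonaws.com/
http://mindeds3test.s3-website-eu-west-1.amazonaws.com/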
S3 bucket identification
As described above, identifying a bucket boils down to finding the URL that can be used to interact with it. There are many different ways to do so.
HTML inspection
Let's start with the easy route and consider the HTML code of the web application under analysis. You might find S3 URLs directly in the HTML code, saving you the trouble of looking around for the buckets. Have a look at the HTML code and the resources loaded by the web page in order to identify S3 buckets.
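If you prefer the command line, a quick (and deliberately rough) sketch is to fetch the page and grep for amazonaws.com references; here https://example.com is just a placeholder for the application under test:
curl -s https://example.com | grep -Eo "https?://[a-zA-Z0-9./_-]*amazonaws\.com[a-zA-Z0-9./_-]*"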
Brute-force
A brute-force approach, possibly based on a wordlist of common words along with words specific to the domain you're testing, might also do the trick. For example, we can use the Burp Intruder to perform a series of requests to the URL http://s3.amazonaws.com/[bucketname]. This URL does not identify a bucket by itself; however, it responds with a convenient PermanentRedirect message when a bucket is found and a NoSuchBucket message otherwise.
In the Intruder tab, configure http://s3.amazonaws.com as the target host, then move to the Positions tab, set up a simple GET request, and place the payload position right after the / character of the request. Proceed to the Payloads section and load your wordlist. Finally, move to Options and, in the Grep - Match panel, add a single match for the word PermanentRedirect; this will help in identifying and sorting the results of the attack. Now press the Start attack button and the Intruder will start performing requests and collecting results of possible buckets.
As shown in the picture above, our Grep - Match option causes the creation of an additional column in the results of the attack, providing a convenient way to identify which payloads correspond to valid AWS S3 buckets and which don't.
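The same check can also be scripted outside of Burp. A minimal bash sketch, assuming a wordlist.txt file with one candidate bucket name per line:
# mirror the Intruder setup: request each candidate name and grep for PermanentRedirect
while read -r name; do
  curl -s "http://s3.amazonaws.com/$name" | grep -q PermanentRedirect && echo "possible bucket: $name"
done < wordlist.txt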
Google Dork
Google always comes to the rescue when it comes to searching for URLs. You can in fact use the convenient inurl: directive to search for potentially interesting AWS S3 buckets. The following list of Google dorks can be used to retrieve potentially juicy AWS S3 buckets.
inurl:s3.amazonaws.com/legacy/
inurl:s3.amazonaws.com/uploads/
inurl:s3.amazonaws.com/backup/
inurl:s3.amazonaws.com/mp3/
inurl:s3.amazonaws.com/movie/
inurl:s3.amazonaws.com/video/
inurl:s3.amazonaws.com
Refer to https://it.toolbox.com/blogs/rmorril/google-hacking-amazon-web-services-cloud-front-and-s3-011613 for more interesting Google dorks.
DNS Caching
There are many services out there that maintain some sort of DNS cache that users can query. By taking advantage of such services, it is possible to hunt down AWS S3 buckets.
Interesting services we recommend checking out are:
https://findsubdomains.com/
https://www.robtex.com/
https://buckets.grayhatwarfare.com/ (created specifically to collect AWS S3 buckets)
The following is a screenshot from findsubdomains showing how easy it can be to retrieve AWS S3 buckets by searching for subdomains of s3.amazonaws.com.
Bing reverse IP
Microsoft's Bing search engine can be very helpful in identifying AWS S3 buckets thanks to its ability to search for domains by IP address. Given the IP address of a known AWS S3 bucket, the ip:[IP] feature of Bing makes it possible to retrieve many other AWS S3 buckets resolving to the same IP.
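To get a starting IP address you can simply resolve a known bucket, for instance the mindeds3test bucket used earlier, and then feed the result to Bing:
# resolve a known bucket to its IP address
dig +short mindeds3test.s3.amazonaws.com
# then search Bing for: ip:[the returned address]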
Testing permissions
Once an S3 bucket has been identified, it is time to test its access permissions and try to abuse them. An S3 bucket provides a set of five permissions that can be granted at the bucket level or at the object level.
READ
At bucket level, it allows listing the objects in the bucket.
At object level, it allows reading the content as well as the metadata of the object.
WRITE
At bucket level, it allows creating, overwriting, and deleting objects in the bucket.
At object level, it allows editing the object itself.
READ_ACP
At bucket level, it allows reading the bucket's Access Control List.
At object level, it allows reading the object's Access Control List.
WRITE_ACP
At bucket level, it allows setting the Access Control List for the bucket.
At object level, it allows setting the Access Control List for the object.
FULL_CONTROL
At bucket level, it is equivalent to granting the READ, WRITE, READ_ACP, and WRITE_ACP permissions.
At object level, it is equivalent to granting the READ, WRITE, READ_ACP, and WRITE_ACP permissions.
Testing READ
Via HTTP, try to access the bucket by requesting the following URL:
http://[bucketname].s3.amazonaws.com
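For example, with curl:
curl -s http://[bucketname].s3.amazonaws.com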
It is also possible to use the AWS command line and list the content of the bucket with the following command:
aws s3 ls s3://[bucketname] --no-sign-request
Note: the --no-sign-request flag specifies not to use credentials to sign the request.
A bucket that allows reading its content will answer with a listing of that content: the HTTP request will be answered with an XML page, while the command line request will be answered with a list of files.
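As an illustration, a listable bucket answers the HTTP request with a ListBucketResult XML document along these lines (truncated, with a hypothetical object name):
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>mindeds3test</Name>
  <Contents>
    <Key>index.html</Key>
    ...
  </Contents>
</ListBucketResult>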
Testing WRITE
Via AWS command line:
aws s3 cp localfile s3://[bucketname]/test-upload.txt --no-sign-request
A bucket that allows arbitrary file upload will answer with a message showing that the file has been uploaded:
upload: Pictures/ec2-s3.png to s3://mindeds3test01/test-upload.txt
Testing READ_ACP
Via AWS command line:
aws s3api get-bucket-acl --bucket [bucketname] --no-sign-request
An ACL can also be specified for a single object and can be read with the following command:
aws s3api get-object-acl --bucket [bucketname] --key index.html --no-sign-request
Both commands will output a JSON document describing the ACL policies for the specified resource.
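As an illustration, a world-readable bucket typically shows a grant to the global AllUsers group along these lines (truncated, owner details omitted):
{
    "Grants": [
        {
            "Grantee": {
                "Type": "Group",
                "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
            },
            "Permission": "READ"
        }
    ]
}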
Testing WRITE_ACP
Via AWS command line:
aws s3api put-bucket-acl --bucket [bucketname] [ACLPERMISSIONS] --no-sign-request
An ACL can also be specified for a single object and can be written with the following command:
aws s3api put-object-acl --bucket [bucketname] --key file.txt [ACLPERMISSIONS] --no-sign-request
Neither command displays any output if the operation is successful.
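As an example of what [ACLPERMISSIONS] might look like, the canned ACL flags of the AWS CLI can be used; the following sketch (do not run it against buckets you do not own) would attempt to make the whole bucket world-readable:
aws s3api put-bucket-acl --bucket [bucketname] --acl public-read --no-sign-request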
Any authenticated AWS client
Finally, AWS S3 permissions used to include a peculiar grant for "any authenticated AWS client". This grant allows any AWS user, regardless of who they are, to access the bucket as long as they are authenticated with AWS. This option is no longer offered, but there are still buckets with this type of permission enabled.
To test for this type of permission, you should create an AWS account and configure it locally with the aws command line:
aws configure
You can then try to access the bucket with the same commands described above; the only difference is that the flag --no-sign-request should be replaced with --profile [PROFILENAME], where PROFILENAME is the name of the profile created with the configure command.
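For example, listing the bucket content with the authenticated profile (here pentester is just a hypothetical profile name) becomes:
aws s3 ls s3://[bucketname] --profile pentester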
Conclusion of Part 1
AWS S3 buckets provide a convenient means of outsourcing storage resources, so it should come as no surprise that many companies decide to take advantage of such a service. However, the very simplicity of creating an S3 bucket can, over time, make it difficult to keep track of all the buckets. This post showed how to test publicly accessible AWS S3 buckets. Once a wrongly configured bucket is identified, the number one priority is to restrict its access to authorized users only.
Remember, you are one misconfigured S3 bucket away from becoming the next major press headline.