Comparing DAST Tools
Objective
The aim of this section is to compare the findings of the DAST tools used in the previous section and rank them, addressing the second point of the problem statement under Task 2.
Vulnerability Reports
ZAP and W3AF both found various vulnerabilities. Interestingly, most of the vulnerabilities identified by one tool differ from those identified by the other. The tools also differed in ease of use: I found W3AF more user-friendly than ZAP, mainly because of the difficulty I faced while configuring ZAP to authenticate with DVNA as a user. W3AF's authentication mechanism was very simple to set up, whereas ZAP's CLI was complicated to use. After the initial hitches, both tools were quite close in the number of discoveries, though ZAP found more vulnerabilities than W3AF. After consolidating the discoveries, I rank ZAP above W3AF, despite it being more complicated to use, because the results ZAP produced were more relevant. Below is a table comparing the discoveries ZAP and W3AF made:
Rank | Tool | Vulnerabilities Discovered | Informational Discoveries |
---|---|---|---|
1 | ZAP (Zed Attack Proxy) | 9 | 1 |
2 | W3AF (Web Application Attack & Audit Framework) | 5 | 3 |
ZAP
ZAP uses a set of rules built into the application, segregated into two halves: active scan rules and passive scan rules. These rules are derived from the OWASP Top 10 project. The active scan rules attempt to uncover security vulnerabilities by launching known attacks against the target application, while the passive scan rules examine the HTTP messages exchanged with the web application and flag potential issues without attacking it.
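To illustrate how the two rule sets come into play, below is a minimal sketch that drives a full spider, passive, and active scan through ZAP's Python client (python-owasp-zap-v2.4), assuming a ZAP daemon is already listening on 127.0.0.1:8080; the target URL and API key are placeholders, not my exact setup.

```python
import time
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

TARGET = 'http://localhost:9090'   # DVNA instance used in the previous section
API_KEY = 'changeme'               # placeholder; must match the key the daemon was started with

# Connect to a ZAP daemon assumed to be proxying on 127.0.0.1:8080
zap = ZAPv2(apikey=API_KEY,
            proxies={'http': 'http://127.0.0.1:8080',
                     'https': 'http://127.0.0.1:8080'})

# Crawl the target; passive scan rules inspect every request/response the spider generates
zap.urlopen(TARGET)
spider_id = zap.spider.scan(TARGET)
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

# Active scan rules then attack the discovered URLs with known payloads
ascan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(ascan_id)) < 100:
    time.sleep(5)

# Alerts raised by both rule sets, e.g. "X-Frame-Options Header Not Set" (Medium)
for alert in zap.core.alerts(baseurl=TARGET):
    print(alert['risk'], '-', alert['alert'])
```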
ZAP's baseline scan found a total of 10 issues with DVNA. A breakdown of the issues found is listed in the table below:
Sl. No. | Description | Severity |
---|---|---|
1. | X-Frame-Options Header Not Set | Medium |
2. | CSP Scanner: Wildcard Directive | Medium |
3. | Cross-Domain JavaScript Source File Inclusion | Low |
4. | Absence of Anti-CSRF Tokens | Low |
5. | X-Content-Type-Options Header Missing | Low |
6. | Cookie Without SameSite Attribute | Low |
7. | Web Browser XSS Protection Not Enabled | Low |
8. | Server Leaks Information via "X-Powered-By" HTTP Response Header Field(s) | Low |
9. | Content Security Policy (CSP) Header Not Set | Low |
10. | Information Disclosure - Suspicious Comments | Informational |
The complete report generated by the ZAP baseline scan can be found here.
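For reference, here is a sketch of how such a baseline scan can be driven from Python via the official ZAP Docker image. The image tag and report filename are assumptions and may differ from the exact setup I used; the baseline scan only spiders the target and reports passive-scan findings without attacking it.

```python
import os
import subprocess

TARGET = 'http://localhost:9090'   # DVNA

# Run the packaged zap-baseline.py script inside the ZAP Docker image
subprocess.run([
    'docker', 'run', '--rm', '--network', 'host',
    '-v', f'{os.getcwd()}:/zap/wrk/:rw',   # mount so the report lands in the current directory
    'owasp/zap2docker-stable',             # image name/tag may differ with newer releases
    'zap-baseline.py',
    '-t', TARGET,                          # target to scan
    '-r', 'zap-baseline-report.html',      # HTML report, written under /zap/wrk/
], check=False)   # zap-baseline.py exits non-zero when it reports WARN/FAIL issues
```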
W3AF
Since W3AF has multiple output options, I initially stored the output in an HTML report, thinking it would be easier to comprehend. However, that report listed only a single informational discovery, which could not be right because I saw several other discoveries on the console. So I also generated a text report, which is supposed to be an exact replica of the console output, and as expected it was more detailed and contained more discoveries.
W3AF works by first crawling the application to build a list of URLs, with an option to authenticate so that it can reach segments that lie behind the authentication layer. For each URL it then looks for injectable fields and, based on the plugins enabled (such as the SQLi and XSS audit plugins), injects crafted string inputs into those fields to test for vulnerabilities, while also recording auxiliary observations about the application's behavior, state, or content that it deems worth the user's attention.
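To make this concrete, below is a hedged sketch of driving such a scan non-interactively by feeding a script to w3af_console from Python. It enables crawling, two audit plugins, form-based authentication, and both report formats discussed above; the plugin option names (particularly for the generic auth plugin), credentials, URLs, and file paths are assumptions and placeholders rather than my exact configuration.

```python
import subprocess
import textwrap

# w3af console script: crawl, audit, authenticate, and write text + HTML reports.
# Option names and values below are assumed/placeholder and may need adjusting
# for a different w3af version or a different DVNA setup.
W3AF_SCRIPT = textwrap.dedent("""\
    plugins
    output text_file, html_file
    output config text_file
    set output_file w3af-report.txt
    set verbose True
    back
    output config html_file
    set output_file w3af-report.html
    back
    crawl web_spider
    audit xss, sqli
    auth generic
    auth config generic
    set auth_url http://localhost:9090/login
    set username admin
    set password admin
    set username_field username
    set password_field password
    set check_url http://localhost:9090/app
    set check_string Logout
    back
    back
    target
    set target http://localhost:9090/
    back
    start
    exit
""")

with open('dvna.w3af', 'w') as f:
    f.write(W3AF_SCRIPT)

# -s / --script runs the console non-interactively with the commands above
subprocess.run(['./w3af_console', '-s', 'dvna.w3af'], check=False)
```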
Below is a table of the consolidated discoveries W3AF made, along with the type of each discovery:
Sl. No. | Description | Type |
---|---|---|
1. | HTTP response returned without the recommended HTTP header X-Content-Type-Options. | Vulnerability |
2. | An HTTP response matching the web backdoor signature "cmd.jsp" was found at: "http://localhost:9090/cmd.jspx". | Vulnerability |
3. | An HTTP response matching the web backdoor signature "cmd.jsp" was found at: "http://localhost:9090/cmd.jsp". | Vulnerability |
4. | The URL: "http://localhost:9090/learn" has a script tag with a source that points to a third party site. | Vulnerability |
5. | A comment containing HTML code [if lt IE 9]> <script src="http://htm" was found in: "http://localhost:9090/learn". | Vulnerability |
6. | X-Powered-By header for the target HTTP server is "Express". | Information |
7. | Total unique URLs found (after removing redundant entries) is 24. | Information |
8. | Total unique fuzzable URLs found (after removing redundant entries) is 24. | Information |
The text report I used to analyze the tool's observations can be found here. The initial HTML report (which I set aside in favor of the text report) can be found here.
The full text report contains a lot of debug output, so I redacted it and kept only the information relevant to security issues in a separate file, which can be found here.
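For completeness, here is a rough sketch of the kind of filtering involved in that redaction. The keep/drop markers are assumptions about how the text_file output plugin tags its lines and were not verified against the actual report format, so they would likely need tuning.

```python
# Hypothetical helper to trim the raw text report down to security findings.
KEEP_MARKERS = ('vulnerability', 'information')   # assumed tags on finding lines
DROP_MARKERS = ('[debug]',)                       # assumed tag on verbose/debug noise

with open('w3af-report.txt') as src, open('w3af-report-redacted.txt', 'w') as dst:
    for line in src:
        lowered = line.lower()
        if any(m in lowered for m in DROP_MARKERS):
            continue                      # skip debug chatter
        if any(m in lowered for m in KEEP_MARKERS):
            dst.write(line)               # keep lines describing findings
```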
Conclusion
In conclusion, I would say the performance of the two tools was quite close. The ranking I gave is my personal opinion, based solely on the results they produced and their relevance to the solution of the problem statement. I did not consider additional factors, such as user experience, when ranking them.