Designite's audience often asks how Designite differs from other tools, especially SonarQube and NDepend. My (admittedly shallow) answer is that other tools lack focus on design- and architecture-granularity issues, while for Designite they are first-class citizens.
To tackle the comparison appropriately, I took a large open-source system, NHibernate, analyzed it using Designite, NDepend, and SonarQube, and compared different aspects of the resulting code quality analyses.
All three tools present an analysis summary though the information differs. SonarQube reports the number of bugs, vulnerabilities, security hotspots, code smells, and lines of code (LOC) along with their related ratings.
NDepend reports more information: the number of lines of code, the number of source code elements (such as methods and types), the number of violated rules, a technical debt rating, quality gate information, and issues found per category.
Designite shows a few size metrics (such as LOC and the number of types and methods) as well as a quality profile snapshot in the form of smell density (i.e., the number of smells per thousand lines of code), code duplication, and metrics violations.
Each tool reports its findings in its own way. SonarQube reported 125 bugs and 4.5 thousand code smells for the analyzed system. NDepend indicates that the software fails 4 quality gates, with a total number of issues close to 20K. Designite identified 1.7K architecture, 7.8K design, and approximately 41K implementation smell instances. SonarQube and Designite reported 2.7% and 9.15% code duplication respectively. One striking difference is in the reported LOC: SonarQube reported 192 KLOC, NDepend 272 KLOC, and Designite 726 KLOC. The gap comes from the way the tools count LOC; Designite counts all non-empty lines (including comments).
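To see why LOC counts can diverge so widely, here is a minimal Python sketch (not any tool's actual implementation) contrasting two counting conventions: all non-empty lines including comments, as Designite does, versus code-only lines with whole-line comments excluded. The sample snippet is hypothetical.

```python
# Illustrative only: contrasts two LOC-counting conventions.
# Real tools use proper parsers; this sketch only recognizes
# whole-line '//' comments for simplicity.

SAMPLE = """\
// Hypothetical C# snippet
using System;

// Entry point
class Program
{
    static void Main()
    {
        Console.WriteLine("Hello");
    }
}
"""

def loc_all_nonempty(source: str) -> int:
    """Count every non-empty line, comments included."""
    return sum(1 for line in source.splitlines() if line.strip())

def loc_code_only(source: str) -> int:
    """Count non-empty lines that are not whole-line comments."""
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("//")
    )

print(loc_all_nonempty(SAMPLE))  # 10: comments count
print(loc_code_only(SAMPLE))     # 8: comments excluded
```

The same source thus yields different totals depending on the convention, which is why cross-tool LOC (and LOC-derived metrics such as smell density) should not be compared directly.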
SonarQube and NDepend also report technical debt quantified as a single number. I wrote a post a few years ago about why such technical debt quantification is not reliable.
Quality issue rules
Reported quality issues are the meat of such analysis tools. Each tool has a set of rules, and quality issues are violations of those rules. I mapped each issue reported by the three tools to the implementation, design, or architecture granularity based on the scope and impact of the issue. Some issues are common across tools, while others are unique offerings.
The following figure shows the comparison between issues reported by Designite and NDepend. NDepend covers implementation smells/issues much better than Designite: NDepend identifies 66 unique issues, Designite reports 6 unique issues, and 7 are reported by both tools. However, Designite covers design and architecture quality issues much better than NDepend. For design granularity, NDepend reported 5 unique issues against Designite's 13 unique design smells; 12 design smells were reported by both tools. For architecture granularity, NDepend identified 3 architectural issues, of which 2 were also covered by Designite; Designite detected 5 unique architecture smells not covered by NDepend. To summarize, NDepend offers more for implementation-granularity quality issues, whereas Designite does more for the design and architecture aspects.
The same analysis was repeated for Designite and SonarQube. For implementation quality issues, similar to NDepend, SonarQube fares better than Designite with 106 unique issues. However, SonarQube does not detect any architecture issues and covers only a small subset of design issues compared to Designite. Hence, SonarQube could be preferred when the focus is on implementation-specific issues; for design- and architecture-granularity quality issues, Designite is more suitable.
Visualization

Designite and NDepend have significantly better visualizations than SonarQube. Designite presents smell distribution as a sunburst as well as a treemap, and metrics distribution using pie charts. These visualization aids are interactive, i.e., the user can filter and navigate the presented entities based on various factors (such as namespace or class). NDepend shows an interactive treemap for metrics as well as a dependency matrix.
For code quality visualization and analysis, both Designite and NDepend offer interactive visualizations covering a wide range of code metrics.
Comparing other notable features
- Continuous Integration: All three tools support CI. SonarQube supports Jenkins, TeamCity, and a few more platforms. NDepend Azure DevOps/TFS edition supports CI in Azure DevOps. Designite supports CI within GitHub.
- Visual Studio integration: Designite offers powerful plugins for Visual Studio and IntelliJ IDEA. NDepend also has powerful IDE integration with Visual Studio. SonarQube has a companion plugin, SonarLint, that integrates with Visual Studio, IntelliJ IDEA, and VS Code.
- Trend analysis: All of these tools support trend analysis, though the mechanism and the details of the generated reports differ.
- CQL: NDepend offers a Code Query Language that can be used to write new quality rules, a useful feature that neither SonarQube nor Designite offers.
- Action Hub: Not all quality issues are the same. Developers intend to fix some issues, while they consider others too trivial or too challenging given their project context. Designite offers an Action Hub that allows each smell to be tagged as “Refactor”, “Wrong”, or “Drop” so that the tool reports only the relevant smells in the next analysis.
- Hotspot: If you have limited time, which class or method will you refactor first? Designite offers an answer in the form of hotspot analysis, which surfaces the classes suffering from the largest number of smells. Along similar lines, NDepend and SonarQube tag each issue with a severity to help the developer choose what to refactor first.
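As an illustration of the CQL feature listed above, an NDepend-style CQLinq rule might look like the following. Treat this as a sketch of the query style, not a rule copied from NDepend's rule set; the threshold is arbitrary, and NDepend's documentation is the authority on the exact API.

```csharp
// CQLinq-style rule sketch: flag overly long methods.
// Illustrative syntax; the 30-line threshold is an assumption.
warnif count > 0
from m in Application.Methods
where m.NbLinesOfCode > 30
select new { m, m.NbLinesOfCode }
```

The appeal of such a language is that teams can encode project-specific conventions as first-class rules rather than relying only on the tool's built-in rule set.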
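The core of a hotspot ranking, as described in the last bullet, can be sketched as a sort by smell count. This is a minimal illustration, not Designite's actual algorithm; the class and smell names are hypothetical, and real tools weigh additional factors such as severity.

```python
from collections import Counter

# Hypothetical (class, smell) findings from an analysis run.
findings = [
    ("OrderService", "Long Method"),
    ("OrderService", "Complex Conditional"),
    ("OrderService", "Magic Number"),
    ("CustomerRepo", "Long Method"),
    ("CustomerRepo", "Duplicate Code"),
    ("AuditLogger", "Magic Number"),
]

def hotspots(findings):
    """Rank classes by the number of smells they suffer from."""
    counts = Counter(cls for cls, _ in findings)
    return counts.most_common()

print(hotspots(findings))
# [('OrderService', 3), ('CustomerRepo', 2), ('AuditLogger', 1)]
```

With limited refactoring time, the class at the top of this ranking is the natural place to start.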