What Is Static Analysis?
Static analysis is the automated examination of source code without executing it. Tools scan the code for patterns that indicate bugs, security vulnerabilities, style violations, and complexity issues.
While manual code reviews (covered in Lesson 2.29) rely on human judgment, static analysis tools apply thousands of rules consistently across every line of code in seconds. They never get tired, never miss a known pattern, and run the same way every time.
Think of static analysis as spell-check for code. Spell-check cannot tell if your essay makes a good argument (that requires a human reviewer), but it catches typos, grammar errors, and formatting issues instantly and reliably.
Static Analysis vs. Manual Reviews
| Aspect | Static Analysis (Tools) | Manual Reviews (Humans) |
|---|---|---|
| Speed | Seconds to minutes | Hours to days |
| Consistency | 100% consistent | Varies by reviewer |
| Coverage | Every file, every line | Focused on changed code |
| What it finds | Known patterns, rules violations | Design flaws, logic errors, missing requirements |
| What it misses | Novel bugs, business logic errors | Known patterns (if reviewer is tired/rushed) |
| Cost | Tool license + CI time | Developer time |
Best practice: use both. Static analysis catches the routine issues so human reviewers can focus on design and logic.
Popular Static Analysis Tools
| Tool | Languages | Focus |
|---|---|---|
| SonarQube/SonarCloud | 30+ languages | Comprehensive: bugs, vulnerabilities, smells, coverage |
| ESLint | JavaScript/TypeScript | Code style, patterns, potential errors |
| Pylint / Ruff | Python | Code quality, style, errors |
| PMD | Java, Apex, others | Code patterns, complexity |
| SpotBugs (successor to FindBugs) | Java | Bug patterns in bytecode |
| RuboCop | Ruby | Style, patterns, complexity |
| golangci-lint | Go | Meta-linter aggregating multiple tools |
| Semgrep | Multi-language | Security-focused pattern matching |
SonarQube Overview
SonarQube is the industry standard for continuous code quality inspection. It provides a centralized dashboard where teams can monitor code quality across projects, track technical debt, and enforce quality standards.
Issue Types
SonarQube classifies issues into three categories:
Bugs — Code that is demonstrably wrong or likely to cause unexpected behavior at runtime. Examples: null pointer dereference, array index out of bounds, resource leaks.
Vulnerabilities — Code that could be exploited by attackers. Examples: SQL injection, cross-site scripting (XSS), hardcoded credentials, insecure cryptography.
Code Smells — Code that is not wrong but makes the codebase harder to maintain. Examples: duplicated code, overly complex methods, unused variables, poor naming.
Severity Levels
Each issue has a severity:
| Severity | Description | Example |
|---|---|---|
| Blocker | Will cause the application to crash or lose data | Null dereference in production path |
| Critical | Likely to cause a significant issue | SQL injection vulnerability |
| Major | May cause a minor issue or significant quality degradation | Method with cyclomatic complexity of 50 |
| Minor | Quality issue with low impact | Unused import statement |
| Info | Not a problem, just a suggestion | Consider using StringBuilder |
Quality Gates
A Quality Gate is a set of conditions that new code must satisfy. It acts as a gateway between development and deployment — if code fails the Quality Gate, it should not be deployed.
Default SonarQube Quality Gate (“Sonar Way”):
- No new bugs
- No new vulnerabilities
- New code coverage ≥ 80%
- New code duplication ≤ 3%
Teams customize Quality Gates based on their standards. A strict team might require zero new code smells. A team focused on security might set zero tolerance for vulnerabilities regardless of severity.
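Conceptually, a Quality Gate is just a set of threshold checks on new-code metrics. A minimal sketch in Java of that evaluation logic (the thresholds mirror the "Sonar Way" defaults above; the class itself is hypothetical, not SonarQube's API):

```java
// Hypothetical sketch of Quality Gate evaluation; not SonarQube's actual API.
public class QualityGate {
    public static boolean passes(int newBugs, int newVulnerabilities,
                                 double newCoveragePct, double newDuplicationPct) {
        return newBugs == 0                 // no new bugs
            && newVulnerabilities == 0      // no new vulnerabilities
            && newCoveragePct >= 80.0       // coverage on new code >= 80%
            && newDuplicationPct <= 3.0;    // duplication on new code <= 3%
    }

    public static void main(String[] args) {
        // A clean change passes all four conditions
        System.out.println(passes(0, 0, 85.0, 1.2));  // true
        // A change with a new vulnerability and 65% coverage fails
        System.out.println(passes(0, 1, 65.0, 1.2));  // false
    }
}
```

A stricter team would simply add conditions (e.g. zero new code smells) to the boolean conjunction.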
Technical Debt
SonarQube measures technical debt as the estimated time needed to fix all code smells in a project. It displays this as a time estimate: “5 days of technical debt.”
Technical Debt Ratio = (cost to fix all code smells) / (cost to rewrite all code from scratch). A ratio under 5% is typically considered manageable.
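As a quick worked example of the ratio (the numbers here are invented for illustration):

```java
// Illustrative only: remediation and development costs are hypothetical.
public class DebtRatio {
    // Technical Debt Ratio as a percentage:
    // (cost to fix all code smells) / (cost to rewrite from scratch) * 100
    public static double ratioPct(double remediationDays, double developmentDays) {
        return remediationDays / developmentDays * 100.0;
    }

    public static void main(String[] args) {
        // 5 days of debt against a codebase estimated at 200 days to rewrite
        System.out.println(ratioPct(5.0, 200.0) + "%");  // 2.5% -> under 5%, manageable
    }
}
```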
Setting Up SonarQube
Local Setup (Docker)
```bash
# Start SonarQube
docker run -d --name sonarqube -p 9000:9000 sonarqube:community

# Access at http://localhost:9000 (default login: admin/admin)
```
Scanning a Project
```bash
# Using SonarScanner CLI
sonar-scanner \
  -Dsonar.projectKey=my-project \
  -Dsonar.sources=src \
  -Dsonar.host.url=http://localhost:9000 \
  -Dsonar.token=your-project-token
```
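The same settings can also live in a `sonar-project.properties` file at the project root instead of being passed as command-line flags (a minimal sketch; the project key and source path are placeholders):

```properties
# sonar-project.properties (equivalent to the CLI flags above)
sonar.projectKey=my-project
sonar.sources=src
sonar.host.url=http://localhost:9000
# The token is usually supplied via the SONAR_TOKEN environment variable,
# not committed to the repository.
```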
CI/CD Integration
Most teams run SonarQube analysis as part of their CI pipeline:
```yaml
# GitHub Actions example
- name: SonarQube Scan
  uses: SonarSource/sonarqube-scan-action@v2
  env:
    SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
    SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
```
The pipeline fails if the Quality Gate is not satisfied, preventing low-quality code from being merged.
Interpreting SonarQube Reports
When reviewing a SonarQube report, follow this priority order:
- Quality Gate status — Pass or fail? If fail, what conditions were not met?
- New bugs and vulnerabilities — These are the highest priority. Fix all blockers and criticals before merging.
- Security hotspots — Code that SonarQube flagged as potentially vulnerable but needs human review to determine if it is actually exploitable.
- Coverage on new code — Is the new code adequately tested? Below 80% indicates gaps.
- Code smells on new code — Low priority but track the trend. Increasing smells indicate growing technical debt.
- Overall metrics — Monitor trends over time. Is the project getting better or worse?
Common SonarQube Rules
Reliability (Bugs):
- Identical sub-expressions on both sides of a binary operator
- Null should not be returned from Boolean methods
- Resources should be closed after use
Security (Vulnerabilities):
- SQL queries should not be constructed from user input
- Cookies should be secure and HttpOnly
- Cryptographic algorithms should not be insecure
Maintainability (Code Smells):
- Methods should not have too many parameters
- Cognitive complexity should not be too high
- Commented-out code should be removed
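To make one of these rules concrete, here is a sketch of a non-compliant and a compliant version of "resources should be closed after use" (the class and file contents are invented for illustration):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class RuleExamples {
    // Non-compliant: if readLine() throws, close() is never reached,
    // so the reader leaks. SonarQube flags this as a reliability bug.
    static String firstLineLeaky(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line = reader.readLine();
        reader.close();
        return line;
    }

    // Compliant: try-with-resources closes the reader on every path,
    // including when an exception is thrown.
    static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        java.nio.file.Path tmp = java.nio.file.Files.createTempFile("rule", ".txt");
        java.nio.file.Files.write(tmp, "hello".getBytes());
        System.out.println(firstLine(tmp.toString()));  // hello
    }
}
```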
Exercise: Interpret a SonarQube Report
You are the QA lead reviewing a SonarQube report for a pull request that adds a new user registration feature. Here is the report:
New Code Analysis (on PR):
- Bugs: 2 (1 Critical, 1 Major)
- Vulnerabilities: 1 (Critical)
- Code Smells: 8 (2 Major, 6 Minor)
- Coverage: 65%
- Duplication: 1.2%
Bug Details:
- CRITICAL: Null pointer dereference in `UserService.register()` — the `emailValidator` field may be null when registration is called before dependency injection completes
- MAJOR: `UserController.handleRegistration()` catches `Exception` instead of specific exceptions, potentially swallowing important errors
Vulnerability Details:
- CRITICAL: SQL injection in `UserRepository.findByEmail()` — user email is concatenated directly into the SQL query string
Quality Gate: FAILED (new coverage < 80%, critical vulnerability found)
Part 1: Prioritize the findings. Which issues must be fixed before the PR can be merged? Which can be addressed later?
Part 2: For the critical SQL injection vulnerability, describe what the fix should look like and what test cases should be added.
Part 3: The developer argues that 65% coverage is “good enough” and the team should lower the Quality Gate threshold. Write a response explaining your position.
Part 4: Beyond the specific issues, what process improvements would you recommend to prevent these types of issues from reaching the PR stage?
Hint
For Part 1, consider severity and issue type. Vulnerabilities, especially SQL injection, should always be fixed before merge. For Part 3, think about what coverage targets are meant to achieve and whether lowering them solves the root problem.

Solution
Part 1: Prioritization
Must fix before merge (blockers):
- CRITICAL Vulnerability: SQL injection — this is a security risk that could allow data breach if deployed. Non-negotiable.
- CRITICAL Bug: Null pointer dereference — this will cause runtime crashes for users.
Should fix before merge (strongly recommended):

- MAJOR Bug: Catching generic Exception — this hides real errors and makes debugging difficult.
- Coverage: 65% → needs more tests, especially for the registration flow.

Can be addressed in follow-up (tech debt):

- 8 code smells (2 Major, 6 Minor) — important but not blocking. Create a ticket to address in the next sprint.
Part 2: SQL Injection Fix
Current vulnerable code (likely):
```java
String query = "SELECT * FROM users WHERE email = '" + email + "'";
```
Fixed code using parameterized query:
```java
@Query("SELECT u FROM User u WHERE u.email = :email")
Optional<User> findByEmail(@Param("email") String email);
```
Or using PreparedStatement:
```java
PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE email = ?");
stmt.setString(1, email);
```
Test cases to add:
- Normal email: verify correct user is returned
- Email with SQL injection attempt: `admin'; DROP TABLE users;--` — verify no SQL execution
- Email with special characters: `user+tag@example.com`, `o'brien@example.com` — verify correct handling
- Empty email: verify proper error handling
- Null email: verify proper error handling
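To see why the concatenated version is dangerous without needing a database, here is a small sketch showing what actually reaches the SQL parser (the query text mirrors the vulnerable example above; the class is invented for illustration):

```java
public class InjectionDemo {
    // Vulnerable pattern: user input becomes part of the SQL text itself.
    static String concatenated(String email) {
        return "SELECT * FROM users WHERE email = '" + email + "'";
    }

    public static void main(String[] args) {
        String attack = "admin'; DROP TABLE users;--";
        // The attacker's SQL is now part of the statement the database parses:
        System.out.println(concatenated(attack));
        // With a PreparedStatement, the statement text stays fixed
        // ("... WHERE email = ?") and the attack string is bound as a plain
        // value, so it can never change the statement's structure.
    }
}
```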
Part 3: Response to “65% is good enough”
“I understand the desire to move fast, but I recommend keeping the 80% threshold for new code. Here is why:
65% coverage on a registration flow is risky. Registration involves user input validation, database writes, email sending, and error handling. The uncovered 35% likely includes error paths and edge cases where bugs hide.
The threshold applies to new code only. We are not asking for 80% on the entire codebase — only on the code being added now. This is the cheapest time to write these tests.
Lowering the bar sets a precedent. If we lower to 65% for this PR, we will face the same argument at 50% next time. Quality standards should be maintained, not adjusted to fit current code.
The real question is why coverage is low. If the code is hard to test, that may indicate a design issue. If there was not enough time, that is a planning issue. Neither is solved by lowering the threshold.
Instead of lowering the gate, let us identify which specific code paths are uncovered and prioritize testing the most critical ones (error handling, input validation, database operations).”
Part 4: Process Improvements
- IDE integration: Configure SonarLint in developers’ IDEs so they see issues as they write code, not after pushing.
- Pre-commit hooks: Add ESLint/linting with security rules that catch SQL injection patterns before commit.
- Security training: The SQL injection should not have been written in the first place. A brief training session on OWASP Top 10 would help.
- PR template: Add a checklist item: “I have verified no raw SQL concatenation with user input.”
- Dependency injection tests: Add integration tests that verify all services are properly initialized before handling requests.
Key Takeaways
- Static analysis automates code inspection, catching bugs, vulnerabilities, and code smells without executing the code
- SonarQube classifies issues into bugs (reliability), vulnerabilities (security), and code smells (maintainability)
- Quality Gates set threshold conditions that new code must meet before deployment
- Technical debt is measured as the estimated time to fix all code smells
- Static analysis complements manual code reviews — tools catch patterns, humans catch logic and design issues
- CI/CD integration ensures every change is analyzed, preventing quality degradation over time
- The most value comes from fixing issues on new code (shift-left) rather than remediating legacy code