Why Combine Techniques?
Every test design technique has blind spots. Equivalence partitioning misses boundary defects. Boundary value analysis misses state-dependent bugs. State transition testing misses calculation errors. Decision tables miss path-specific defects.
No single technique provides complete coverage. But when you combine them strategically, the strengths of one technique compensate for the weaknesses of another. The result is a test suite far more effective than any single technique could produce.
The Three-Layer Model
A comprehensive test strategy combines three layers of techniques:
Layer 1: Specification-Based (Black-Box)
These form the foundation. They verify that the system meets its requirements.
- Equivalence partitioning — core valid/invalid classes
- Boundary value analysis — boundary defects
- Decision tables — business rule combinations
- State transition testing — stateful behavior
Coverage target: All requirements have at least one test. All boundary values tested.
Layer 2: Structure-Based (White-Box)
These fill gaps in Layer 1 by analyzing which code paths are not yet covered.
- Statement and decision coverage — identify untested code
- Path coverage — verify critical algorithm paths
- MC/DC — for safety-critical conditions
- Data flow testing — variable lifecycle issues
Coverage target: Code coverage metrics meet project standards (typically 80%+ for decision coverage).
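Measuring this is typically a one-line addition to the test run. As a sketch, assuming a Python project using coverage.py and pytest (substitute your stack's equivalent tooling):

```shell
# Run the test suite under coverage.py, recording branch (decision) coverage.
coverage run --branch -m pytest

# Report results and fail the build if total coverage drops below 80%.
coverage report --fail-under=80
```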
Layer 3: Experience-Based
These catch defects that formal techniques miss — the “weird” bugs that arise from real-world usage patterns.
- Error guessing — based on tester’s domain knowledge
- Exploratory testing — simultaneous learning and testing
- Checklist-based — systematic experience capture
Coverage target: High-risk areas explored. Common failure patterns checked.
The Combination Workflow
Step 1: Start with Specification-Based Tests
Analyze each feature and apply the most relevant black-box technique:
Feature: User registration
├── Email field → EP (valid/invalid formats) + BVA (length)
├── Password field → EP (strength classes) + BVA (min/max length)
├── Age field → EP (valid ranges) + BVA (18, 120)
├── Registration rules → Decision table (email confirmed + age valid + terms accepted)
└── Account lifecycle → State transitions (pending → active → suspended → deleted)
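The EP + BVA entries in the tree above translate directly into small data-driven tests. A minimal sketch for the age field, where `validate_age` is a hypothetical validator written here for illustration:

```python
# EP + BVA sketch for the age field (valid partition 18-120, per the tree above).
# validate_age is a hypothetical stand-in for the system under test.

def validate_age(age: int) -> bool:
    """Accept ages in the valid partition [18, 120]."""
    return 18 <= age <= 120

# Equivalence partitioning: one representative per class.
ep_cases = {17: False, 50: True, 150: False}

# Boundary value analysis: the values on and around each edge.
bva_cases = {17: False, 18: True, 19: True, 119: True, 120: True, 121: False}

for age, expected in {**ep_cases, **bva_cases}.items():
    assert validate_age(age) is expected, f"age={age}"
```

Keeping the cases in a table like this makes it obvious which partition or boundary each test covers.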
Step 2: Measure Structural Coverage
Run the Layer 1 tests and measure code coverage. Analyze the gaps.
Registration module coverage after Layer 1:
- Statement coverage: 78%
- Decision coverage: 65%
- Uncovered code:
├── Error handling for database connection failure (lines 45-52)
├── Race condition check for duplicate emails (lines 67-74)
├── Edge case: unicode normalization in email (lines 89-95)
└── Fallback path when email service is unavailable (lines 110-118)
Step 3: Add Structure-Based Tests
For each uncovered block, determine whether it should be tested:
- Dead code? Mark for removal, not testing
- Error handling? Add negative tests that trigger these paths
- Implicit behavior? Understand the code and add appropriate tests
- Unreachable with current inputs? May indicate missing EP classes
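Error-handling gaps are usually closed by injecting the failure from a test. A sketch of a negative test for the uncovered "database connection failure" branch, where `register_user`, `Database`, and `DatabaseError` are hypothetical stand-ins:

```python
# Negative-test sketch: force the database-failure branch found in Step 2.
from unittest import mock

class DatabaseError(Exception):
    pass

class Database:
    def insert_user(self, email: str) -> None:
        pass  # real implementation would write to the database

def register_user(db: Database, email: str) -> dict:
    """Map database failures to a user-facing error instead of crashing."""
    try:
        db.insert_user(email)
        return {"status": "ok"}
    except DatabaseError:
        return {"status": "error", "reason": "service unavailable"}

# Inject the failure to exercise the otherwise-unreachable error path.
db = Database()
with mock.patch.object(db, "insert_user", side_effect=DatabaseError):
    result = register_user(db, "user@example.com")
assert result == {"status": "error", "reason": "service unavailable"}
```

Fault injection like this is often the only practical way to reach defensive code from the test suite.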
Step 4: Apply Experience-Based Techniques
After formal techniques, apply domain knowledge:
- Error guessing: SQL injection in email, XSS in name, unicode edge cases
- Exploratory testing: Rapid registration attempts, back-button behavior, session handling
- Checklist: OWASP security items, accessibility, localization
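Error-guessed inputs are cheap to automate once collected. A sketch that feeds hostile and unusual strings to a deliberately simple, hypothetical email validator:

```python
# Error-guessing sketch: hostile inputs against a hypothetical validator.
import re
import unicodedata

def is_valid_email(value: str) -> bool:
    return re.fullmatch(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}",
                        value) is not None

guessed_inputs = [
    "'; DROP TABLE users; --@example.com",    # SQL-injection shape
    "<script>alert(1)</script>@example.com",  # XSS shape
    "caf\u00e9@example.com",                  # non-ASCII local part
]
for value in guessed_inputs:
    assert not is_valid_email(value), value

# Unicode edge case: composed vs decomposed encodings of the same visible
# string should be treated consistently by the validator.
composed = "caf\u00e9@example.com"
decomposed = unicodedata.normalize("NFD", composed)
assert is_valid_email(composed) == is_valid_email(decomposed)
```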
Case Study: Payment Processing
Let us walk through combining techniques for a payment processing feature.
Requirements:
- Accept credit card, debit card, and PayPal
- Validate card numbers using Luhn algorithm
- Apply currency conversion for international payments
- Handle insufficient funds, expired cards, and fraud detection
- Support partial refunds within 30 days
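The Luhn check in the requirements is a standard algorithm: double every second digit from the right, subtract 9 from any result over 9, and require the total to be divisible by 10. A straightforward implementation:

```python
def luhn_valid(card_number: str) -> bool:
    """Luhn checksum: doubles every second digit from the right,
    reduces results over 9, and requires the sum to end in 0."""
    digits = [int(ch) for ch in card_number if ch.isdigit()]
    if not digits:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

assert luhn_valid("4111 1111 1111 1111")      # widely used Visa test number
assert not luhn_valid("4111 1111 1111 1112")  # last digit off by one
```

Note that stripping non-digit characters first also handles the "card number with spaces/dashes" case that error guessing flags later in this chapter.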
Layer 1: Specification-Based
| Sub-Feature | Technique | Test Cases |
|---|---|---|
| Payment type selection | EP | 3 valid types + 1 invalid |
| Card number | EP + BVA | Valid Visa/MC/Amex + invalid formats + boundary lengths |
| Expiry date | BVA | Today, tomorrow, yesterday, far future |
| Currency conversion | BVA + EP | Same currency, supported pairs, unsupported |
| Payment rules | Decision table | (type x amount x currency x fraud_score) |
| Transaction lifecycle | State transitions | Pending → Authorized → Captured → Refunded |
| Refund window | BVA | Day 0, 1, 29, 30, 31 |
Result: ~45 test cases covering all specified requirements.
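The refund-window row of the table shows how mechanically BVA converts into tests. A sketch, where `refund_allowed` is a hypothetical predicate for the stated 30-day rule:

```python
# BVA sketch for the "Refund window" row: partial refunds within 30 days.
# refund_allowed is a hypothetical stand-in for the refund eligibility check.

def refund_allowed(days_since_capture: int) -> bool:
    return 0 <= days_since_capture <= 30

# Boundary values from the table: day 0, 1, 29, 30, 31.
for days, expected in [(0, True), (1, True), (29, True), (30, True), (31, False)]:
    assert refund_allowed(days) is expected, f"days={days}"
```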
Layer 2: Structure-Based
After running Layer 1 tests, code coverage reveals:
| Module | Statement | Decision |
|---|---|---|
| CardValidator | 92% | 85% |
| CurrencyConverter | 88% | 78% |
| FraudDetector | 65% | 52% |
| RefundProcessor | 80% | 72% |
The FraudDetector has low coverage because our spec-based tests did not exercise many fraud detection paths. Add tests:
- Card from high-risk country + large amount → fraud flag
- Multiple rapid transactions → velocity check
- Mismatched billing/shipping address → review flag
Result: ~15 additional tests bringing coverage to 85%+ across all modules.
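The three added fraud tests map naturally onto a rule table. A sketch in which `assess`, its thresholds, and its flag names are all invented for illustration:

```python
# Sketch of the added FraudDetector tests as a rule table.
# assess is a hypothetical stand-in; thresholds and flag names are invented.

def assess(country_risk: str, amount: float, tx_per_minute: int,
           addresses_match: bool) -> str:
    if country_risk == "high" and amount > 1000:
        return "fraud"            # high-risk country + large amount
    if tx_per_minute > 5:
        return "velocity_block"   # multiple rapid transactions
    if not addresses_match:
        return "review"           # mismatched billing/shipping address
    return "ok"

assert assess("high", 5000, 1, True) == "fraud"
assert assess("low", 50, 10, True) == "velocity_block"
assert assess("low", 50, 1, False) == "review"
assert assess("low", 50, 1, True) == "ok"
```

One test per rule keeps the suite aligned with the branches that coverage analysis showed were unexercised.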
Layer 3: Experience-Based
- Error guessing: Double-submit of payment form, network timeout during authorization, 3D Secure redirect failure, card number with spaces/dashes
- Exploratory testing: Charter — “Explore payment flow with slow network and browser back button to discover state inconsistencies”
- Checklist: PCI DSS compliance items, accessibility of error messages
Result: ~10 additional tests and 2 exploratory sessions.
Total combined suite: ~70 test cases + 2 exploratory sessions, covering requirements, code structure, and real-world edge cases.
Exercise: Combined Test Design
Problem 1
You are testing a flight booking feature. Combine techniques:
Requirements:
- Search by origin, destination, dates, passengers (1-9), class (economy, business, first)
- Results sorted by price, with filters for airlines, stops, departure time
- Booking requires passenger details, contact info, payment
- Tickets can be modified (for a fee) or cancelled (refund policy varies by fare class)
Design a combined test strategy using all three layers.
Solution
Layer 1: Specification-Based (~50 tests)
| Sub-Feature | Technique | Key Tests |
|---|---|---|
| Search inputs | EP + BVA | Valid/invalid cities, date ranges, 1/9/10 passengers |
| Date logic | BVA | Same day, next day, return before departure, far future |
| Passenger x class | Pairwise | 3 classes x passenger counts x trip types |
| Sort/filter | EP | Each sort option, each filter, combinations |
| Booking rules | Decision table | Fare class x modification x cancellation policies |
| Ticket lifecycle | State transitions | Searched → Booked → Modified → Cancelled |
| Modification fees | BVA | Fee boundaries by days before departure |
| Refund amounts | Decision table | Fare class x cancellation timing |
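The "Refund amounts" row is a good place to show how a decision table becomes executable. A sketch in which the fare classes, the 24-hour cutoff, and every percentage are invented for illustration:

```python
# Decision-table sketch for refund amounts: fare class x cancellation timing.
# All fare classes, cutoffs, and percentages here are invented.

REFUND_TABLE = {
    # (fare_class, cancelled_more_than_24h_before_departure): refund fraction
    ("flexible", True): 1.00,
    ("flexible", False): 0.90,
    ("standard", True): 0.75,
    ("standard", False): 0.50,
    ("basic", True): 0.25,
    ("basic", False): 0.00,
}

def refund_amount(price: float, fare_class: str, early: bool) -> float:
    return round(price * REFUND_TABLE[(fare_class, early)], 2)

# One assertion per rule keeps the table and the suite in lockstep.
for (fare_class, early), fraction in REFUND_TABLE.items():
    assert refund_amount(200.0, fare_class, early) == round(200.0 * fraction, 2)
```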
Layer 2: Structure-Based (~15 tests)
- Run Layer 1 and measure coverage
- Focus on: pricing calculation paths, availability check logic, fee computation branches
- Add tests for error handling: no flights found, sold-out flight, payment failure mid-booking
Layer 3: Experience-Based (~10 tests + sessions)
- Error guessing: One-way vs round-trip edge cases, infant passengers, unaccompanied minors, special characters in names
- Exploratory: “Explore booking flow when modifying passenger count after initial search, with currency changes and promo codes”
- Checklist: Accessibility (screen reader for search results), localization (date formats, currencies)
Total: ~75 test cases + exploratory sessions
Problem 2
After applying EP and BVA to a discount calculator, code coverage is:
- Statement: 68%
- Decision: 55%
The uncovered code includes:
- Lines 34-40: Loyalty tier calculation for platinum members
- Lines 55-62: Holiday weekend special pricing
- Lines 78-85: Employee discount override
- Lines 92-98: Negative amount handling
- Lines 105-110: Deprecated feature flag check
Analyze each gap and recommend actions.
Solution
| Uncovered Code | Analysis | Action |
|---|---|---|
| Loyalty tier (platinum) | Missing EP class — platinum was not in our equivalence classes | Add EP class for platinum members + BVA for tier boundaries |
| Holiday weekend pricing | Time-dependent behavior not in initial EP | Add time-based tests: regular day, holiday, weekend, holiday+weekend |
| Employee discount | Special user type not in EP | Add EP class for employee user type + decision table for discount stacking |
| Negative amount handling | Defensive code for invalid input | Add negative test: negative prices, negative quantities |
| Deprecated feature flag | Dead code behind a feature flag | Verify flag is off in production. Do not test — mark for removal |
After adding these tests, expected coverage: Statement 90%+, Decision 82%+.
Measuring Combined Effectiveness
Track these metrics to evaluate your combined approach:
| Metric | Target | Purpose |
|---|---|---|
| Requirements coverage | 100% | Every requirement has tests |
| Code statement coverage | 80%+ | Most code is exercised |
| Code decision coverage | 75%+ | Most branches are tested |
| Mutation score | 80%+ on critical code | Tests actually catch faults |
| Defect detection rate | Increasing over sprints | Combined approach finds more bugs |
| Escaped defects | Decreasing over sprints | Fewer bugs reach production |
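Mutation score deserves a concrete illustration, since it measures something coverage cannot: whether the tests would notice a fault. A minimal hand-rolled sketch, with an invented `discount` function and two hand-made mutants (real projects automate this with a mutation testing tool):

```python
# Minimal mutation-score illustration: run the same checks against the
# original function and hand-made mutants, and count how many mutants
# the checks "kill". The discount rule here is invented.

def discount(total: float) -> float:
    return total * 0.9 if total >= 100 else total

mutants = [
    lambda total: total * 0.9 if total > 100 else total,  # >= mutated to >
    lambda total: total * 0.9 if total >= 100 else 0.0,   # else branch broken
]

def suite_passes(fn) -> bool:
    # Boundary tests at the 100-unit threshold.
    return fn(100.0) == 90.0 and fn(99.0) == 99.0

assert suite_passes(discount)  # the real implementation passes

killed = sum(1 for m in mutants if not suite_passes(m))
mutation_score = killed / len(mutants)
assert mutation_score == 1.0   # both mutants detected: the tests have teeth
```

Note that the first mutant is killed only because the suite tests exactly at the boundary, which is the practical argument for BVA in Layer 1.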
Key Takeaways
- No single technique is sufficient — combine specification-based, structure-based, and experience-based
- Start with specification-based tests (EP, BVA, decision tables, state transitions) as the foundation
- Use code coverage analysis to identify structural gaps missed by spec-based tests
- Fill gaps with targeted structure-based tests, not random tests
- Apply experience-based techniques (error guessing, exploratory) to catch real-world edge cases
- Measure combined effectiveness through coverage metrics, mutation score, and defect escape rate
- The three-layer approach is not sequential — iterate as you learn about the system