Why Combine Techniques?

Every test design technique has blind spots. Equivalence partitioning misses boundary defects. Boundary value analysis misses state-dependent bugs. State transition testing misses calculation errors. Decision tables miss path-specific defects.

No single technique provides complete coverage. But when you combine them strategically, the strengths of one technique compensate for the weaknesses of another. The result is a test suite far more effective than any single technique could produce.

The Three-Layer Model

A comprehensive test strategy combines three layers of techniques:

Layer 1: Specification-Based (Black-Box)

These form the foundation. They verify that the system meets its requirements.

  • Equivalence partitioning (EP) — core valid/invalid classes
  • Boundary value analysis (BVA) — boundary defects
  • Decision tables — business rule combinations
  • State transition testing — stateful behavior

Coverage target: All requirements have at least one test. All boundary values tested.
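The EP + BVA pairing is easiest to see as executable data. A minimal sketch, assuming a hypothetical `validate_age` function with the [18, 120] range used later in this chapter:

```python
# Hypothetical age validator, used only to illustrate EP + BVA test selection.
def validate_age(age: int) -> bool:
    """Accept ages in the valid partition [18, 120]."""
    return 18 <= age <= 120

# Equivalence partitions: below-range, in-range, above-range.
# Boundary values: 17/18 and 120/121 straddle the partition edges.
cases = {
    17: False,   # lower boundary - 1 (invalid partition)
    18: True,    # lower boundary
    19: True,    # lower boundary + 1
    120: True,   # upper boundary
    121: False,  # upper boundary + 1
    50: True,    # representative of the valid partition
    -1: False,   # representative of the invalid partition
}
for age, expected in cases.items():
    assert validate_age(age) == expected, age
```

One representative per partition plus the values on either side of each boundary is the whole point of combining the two techniques: EP keeps the case count small, BVA catches the off-by-one errors EP misses.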

Layer 2: Structure-Based (White-Box)

These fill gaps in Layer 1 by analyzing which code paths are not yet covered.

  • Statement and decision coverage — identify untested code
  • Path coverage — verify critical algorithm paths
  • MC/DC — for safety-critical conditions
  • Data flow testing — variable lifecycle issues

Coverage target: Code coverage metrics meet project standards (typically 80%+ for decision coverage).
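The gap between statement and decision coverage is easiest to see in code. In this sketch, `shipping_fee` is a hypothetical two-branch function:

```python
# Hypothetical fee calculator showing why decision coverage is stricter
# than statement coverage.
def shipping_fee(total: float, express: bool) -> float:
    fee = 5.0
    if total >= 100:          # decision A
        fee = 0.0             # free-shipping branch
    if express:               # decision B
        fee += 10.0
    return fee

# A single test, shipping_fee(150, True), executes every statement
# (100% statement coverage) but never takes the False outcome of
# either decision. Decision coverage requires both outcomes of each:
assert shipping_fee(150, True) == 10.0   # A=True,  B=True
assert shipping_fee(50, False) == 5.0    # A=False, B=False
```

This is why a project standard is usually stated for decision coverage, not just statement coverage: the False outcomes are where defaults and fall-through logic hide.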

Layer 3: Experience-Based

These catch defects that formal techniques miss — the “weird” bugs that arise from real-world usage patterns.

  • Error guessing — based on tester’s domain knowledge
  • Exploratory testing — simultaneous learning and testing
  • Checklist-based — systematic experience capture

Coverage target: High-risk areas explored. Common failure patterns checked.

The Combination Workflow

Step 1: Start with Specification-Based Tests

Analyze each feature and apply the most relevant black-box technique:

Feature: User registration
├── Email field → EP (valid/invalid formats) + BVA (length)
├── Password field → EP (strength classes) + BVA (min/max length)
├── Age field → EP (valid ranges) + BVA (18, 120)
├── Registration rules → Decision table (email confirmed + age valid + terms accepted)
└── Account lifecycle → State transitions (pending → active → suspended → deleted)
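The account-lifecycle branch above lends itself to a state transition sketch. The `Account` class below is hypothetical, and the reinstatement path (suspended → active) is an added assumption not stated in the tree:

```python
# Minimal sketch of the account lifecycle; the transition table is an
# assumption built from the pending → active → suspended → deleted chain.
class Account:
    TRANSITIONS = {
        "pending": {"active"},
        "active": {"suspended", "deleted"},
        "suspended": {"active", "deleted"},   # suspended → active assumed
        "deleted": set(),
    }

    def __init__(self):
        self.state = "pending"

    def transition(self, target: str) -> None:
        if target not in self.TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target

# Cover each state at least once along a valid path...
acct = Account()
for state in ("active", "suspended", "deleted"):
    acct.transition(state)
assert acct.state == "deleted"

# ...and at least one invalid transition (negative test).
try:
    Account().transition("deleted")   # pending -> deleted is not allowed
    assert False, "expected ValueError"
except ValueError:
    pass
```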

Step 2: Measure Structural Coverage

Run the Layer 1 tests and measure code coverage. Analyze the gaps.

Registration module coverage after Layer 1:
- Statement coverage: 78%
- Decision coverage: 65%
- Uncovered code:
  ├── Error handling for database connection failure (lines 45-52)
  ├── Race condition check for duplicate emails (lines 67-74)
  ├── Edge case: unicode normalization in email (lines 89-95)
  └── Fallback path when email service is unavailable (lines 110-118)

Step 3: Add Structure-Based Tests

For each uncovered block, determine whether it should be tested:

  • Dead code? Mark for removal, not testing
  • Error handling? Add negative tests that trigger these paths
  • Implicit behavior? Understand the code and add appropriate tests
  • Unreachable with current inputs? May indicate missing EP classes
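For the error-handling case, a negative test usually injects the failure rather than provoking it for real. A minimal sketch using `unittest.mock`, where `register_user` and its `db` dependency are hypothetical names for illustration:

```python
# Structure-based negative test: force the database-failure branch that
# the spec-based tests never reached.
from unittest import mock

class DbError(Exception):
    pass

def register_user(email: str, db) -> dict:
    try:
        db.insert_user(email)
    except DbError:
        # the previously uncovered error-handling branch
        return {"ok": False, "error": "service unavailable"}
    return {"ok": True}

# Simulate the connection failure instead of waiting for a real outage.
db = mock.Mock()
db.insert_user.side_effect = DbError("connection refused")
result = register_user("user@example.com", db)
assert result == {"ok": False, "error": "service unavailable"}
```

The test is targeted: it exists because coverage analysis named this branch, not because a requirement did.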

Step 4: Apply Experience-Based Techniques

After formal techniques, apply domain knowledge:

  • Error guessing: SQL injection in email, XSS in name, unicode edge cases
  • Exploratory testing: Rapid registration attempts, back-button behavior, session handling
  • Checklist: OWASP security items, accessibility, localization
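Error-guessing inputs are easy to keep as executable data. In this sketch, `is_valid_email` is a deliberately simple hypothetical validator; the property under test is that hostile input always produces a verdict, never a crash (whether each value is accepted or rejected is a separate policy decision):

```python
# Error-guessing inputs for the registration form, kept as a list so new
# guesses can be appended as they occur to the team.
import re

def is_valid_email(value: str) -> bool:
    # deliberately simple pattern, just for the sketch
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value))

hostile_inputs = [
    "a' OR '1'='1",                  # SQL-injection shape
    "<script>alert(1)</script>",     # XSS shape
    "user@exam\u200bple.com",        # zero-width space (unicode edge case)
    "用户@例子.广告",                  # non-ASCII local part and domain
    "a" * 10_000 + "@example.com",   # oversized input
]

for value in hostile_inputs:
    # key property: no exception, just a boolean verdict
    assert isinstance(is_valid_email(value), bool)
```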

Case Study: Payment Processing

Let us walk through combining techniques for a payment processing feature.

Requirements:

  • Accept credit card, debit card, and PayPal
  • Validate card numbers using Luhn algorithm
  • Apply currency conversion for international payments
  • Handle insufficient funds, expired cards, and fraud detection
  • Support partial refunds within 30 days
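The Luhn check named in the requirements is a standard checksum algorithm; a sketch paired with checksum-boundary cases (this version rejects embedded spaces — a real validator might normalise the input first):

```python
# Standard Luhn checksum: from the rightmost digit, double every second
# digit, subtract 9 from any doubled value above 9, and sum; the number
# is valid when the sum is divisible by 10.
def luhn_valid(number: str) -> bool:
    digits = [int(ch) for ch in number if ch.isdigit()]
    if not digits or len(digits) != len(number):
        return False  # reject empty input or non-digit characters
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

assert luhn_valid("4111111111111111")         # well-known valid test number
assert not luhn_valid("4111111111111112")     # last digit off by one
assert not luhn_valid("4111 1111 1111 1111")  # spaces rejected in this sketch
```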

Layer 1: Specification-Based

| Sub-Feature | Technique | Test Cases |
|---|---|---|
| Payment type selection | EP | 3 valid types + 1 invalid |
| Card number | EP + BVA | Valid Visa/MC/Amex + invalid formats + boundary lengths |
| Expiry date | BVA | Today, tomorrow, yesterday, far future |
| Currency conversion | BVA + EP | Same currency, supported pairs, unsupported |
| Payment rules | Decision table | type x amount x currency x fraud_score |
| Transaction lifecycle | State transitions | Pending → Authorized → Captured → Refunded |
| Refund window | BVA | Day 0, 1, 29, 30, 31 |

Result: ~45 test cases covering all specified requirements.
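A decision table translates almost mechanically into a parametrised test: one row, one case. The concrete rules and thresholds in `payment_decision` below are invented for illustration:

```python
# The payment-rules decision table (type x amount x currency x fraud_score)
# executed directly as data; all thresholds here are hypothetical.
def payment_decision(ptype, amount, currency, fraud_score):
    if fraud_score >= 80:
        return "reject"
    if ptype == "paypal" and currency != "USD":
        return "reject"          # assumed rule: PayPal only in USD
    if amount > 10_000 or fraud_score >= 50:
        return "manual_review"
    return "approve"

# One test per decision-table rule, columns in the same order.
rules = [
    # (type,     amount, currency, fraud_score) -> expected
    (("card",      100, "USD", 10), "approve"),
    (("card",   20_000, "USD", 10), "manual_review"),
    (("card",      100, "USD", 55), "manual_review"),
    (("card",      100, "USD", 90), "reject"),
    (("paypal",    100, "EUR", 10), "reject"),
    (("paypal",    100, "USD", 10), "approve"),
]
for args, expected in rules:
    assert payment_decision(*args) == expected, args
```

Keeping the table as data means a new business rule becomes a new row, and a changed rule fails exactly one visible case.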

Layer 2: Structure-Based

After running Layer 1 tests, code coverage reveals:

| Module | Statement | Decision |
|---|---|---|
| CardValidator | 92% | 85% |
| CurrencyConverter | 88% | 78% |
| FraudDetector | 65% | 52% |
| RefundProcessor | 80% | 72% |

The FraudDetector has low coverage because our spec-based tests did not exercise many fraud detection paths. Add tests:

  • Card from high-risk country + large amount → fraud flag
  • Multiple rapid transactions → velocity check
  • Mismatched billing/shipping address → review flag

Result: ~15 additional tests bringing coverage to 85%+ across all modules.

Layer 3: Experience-Based

  • Error guessing: Double-submit of payment form, network timeout during authorization, 3D Secure redirect failure, card number with spaces/dashes
  • Exploratory testing: Charter — “Explore payment flow with slow network and browser back button to discover state inconsistencies”
  • Checklist: PCI DSS compliance items, accessibility of error messages

Result: ~10 additional tests and 2 exploratory sessions.

Total combined suite: ~70 test cases + 2 exploratory sessions, covering requirements, code structure, and real-world edge cases.

Exercise: Combined Test Design

Problem 1

You are testing a flight booking feature. Combine techniques:

Requirements:

  • Search by origin, destination, dates, passengers (1-9), class (economy, business, first)
  • Results sorted by price, with filters for airlines, stops, departure time
  • Booking requires passenger details, contact info, payment
  • Tickets can be modified (for a fee) or cancelled (refund policy varies by fare class)

Design a combined test strategy using all three layers.

Solution

Layer 1: Specification-Based (~50 tests)

| Sub-Feature | Technique | Key Tests |
|---|---|---|
| Search inputs | EP + BVA | Valid/invalid cities, date ranges, 1/9/10 passengers |
| Date logic | BVA | Same day, next day, return before departure, far future |
| Passenger x class | Pairwise | 3 classes x passenger counts x trip types |
| Sort/filter | EP | Each sort option, each filter, combinations |
| Booking rules | Decision table | Fare class x modification x cancellation policies |
| Ticket lifecycle | State transitions | Searched → Booked → Modified → Cancelled |
| Modification fees | BVA | Fee boundaries by days before departure |
| Refund amounts | Decision table | Fare class x cancellation timing |
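The pairwise row above assumes some pair-covering generator. A small greedy sketch (adequate for examples, not a substitute for a dedicated pairwise tool) shows the idea:

```python
# Greedy all-pairs generator: repeatedly pick the combination that covers
# the most not-yet-covered value pairs, until every pair is covered.
from itertools import combinations, product

def pairwise_suite(params: dict) -> list:
    keys = list(params)
    combos = list(product(*params.values()))

    def pairs_of(combo):
        return {(a, combo[a], b, combo[b])
                for a, b in combinations(range(len(keys)), 2)}

    needed = set().union(*(pairs_of(c) for c in combos))
    suite = []
    while needed:
        best = max(combos, key=lambda c: len(pairs_of(c) & needed))
        suite.append(best)
        needed -= pairs_of(best)
    return suite

suite = pairwise_suite({
    "class": ["economy", "business", "first"],
    "passengers": [1, 5, 9],
    "trip": ["one-way", "round-trip"],
})
# Every pair of values appears together in some test, using fewer
# tests than the 18 full combinations.
assert len(suite) < 18
```

For three parameters this saves little; the payoff grows quickly as parameters are added, since pairwise suites grow roughly with the two largest value sets rather than the full cartesian product.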

Layer 2: Structure-Based (~15 tests)

  • Run Layer 1 and measure coverage
  • Focus on: pricing calculation paths, availability check logic, fee computation branches
  • Add tests for error handling: no flights found, sold-out flight, payment failure mid-booking

Layer 3: Experience-Based (~10 tests + sessions)

  • Error guessing: One-way vs round-trip edge cases, infant passengers, unaccompanied minors, special characters in names
  • Exploratory: “Explore booking flow when modifying passenger count after initial search, with currency changes and promo codes”
  • Checklist: Accessibility (screen reader for search results), localization (date formats, currencies)

Total: ~75 test cases + exploratory sessions

Problem 2

After applying EP and BVA to a discount calculator, code coverage is:

  • Statement: 68%
  • Decision: 55%

The uncovered code includes:

  • Lines 34-40: Loyalty tier calculation for platinum members
  • Lines 55-62: Holiday weekend special pricing
  • Lines 78-85: Employee discount override
  • Lines 92-98: Negative amount handling
  • Lines 105-110: Deprecated feature flag check

Analyze each gap and recommend actions.

Solution
| Uncovered Code | Analysis | Action |
|---|---|---|
| Loyalty tier (platinum) | Missing EP class — platinum was not in our equivalence classes | Add EP class for platinum members + BVA for tier boundaries |
| Holiday weekend pricing | Time-dependent behavior not in initial EP | Add time-based tests: regular day, holiday, weekend, holiday+weekend |
| Employee discount | Special user type not in EP | Add EP class for employee user type + decision table for discount stacking |
| Negative amount handling | Defensive code for invalid input | Add negative tests: negative prices, negative quantities |
| Deprecated feature flag | Dead code behind a feature flag | Verify flag is off in production. Do not test — mark for removal |

After adding these tests, expected coverage: Statement 90%+, Decision 82%+.

Measuring Combined Effectiveness

Track these metrics to evaluate your combined approach:

| Metric | Target | Purpose |
|---|---|---|
| Requirements coverage | 100% | Every requirement has tests |
| Code statement coverage | 80%+ | Most code is exercised |
| Code decision coverage | 75%+ | Most branches are tested |
| Mutation score | 80%+ on critical code | Tests actually catch faults |
| Defect detection rate | Increasing over sprints | Combined approach finds more bugs |
| Escaped defects | Decreasing over sprints | Fewer bugs reach production |

Key Takeaways

  • No single technique is sufficient — combine specification-based, structure-based, and experience-based
  • Start with specification-based tests (EP, BVA, decision tables, state transitions) as the foundation
  • Use code coverage analysis to identify structural gaps missed by spec-based tests
  • Fill gaps with targeted structure-based tests, not random tests
  • Apply experience-based techniques (error guessing, exploratory) to catch real-world edge cases
  • Measure combined effectiveness through coverage metrics, mutation score, and defect escape rate
  • The three-layer approach is not sequential — iterate as you learn about the system