The Technique Selection Problem
You have learned over 20 test design techniques across this module. Equivalence partitioning, boundary value analysis, decision tables, state transitions, pairwise testing, MC/DC, path coverage, mutation testing, and more. The challenge is no longer “what techniques exist?” but “which technique should I use right now?”
Choosing the wrong technique wastes effort. Using EP on a stateful protocol misses transition bugs. Using state transition testing on a calculation engine misses boundary defects. Effective testers match the technique to the problem.
Decision Framework
Step 1: What Type of Feature Are You Testing?
| Feature Type | Best-Fit Techniques |
|---|---|
| Input validation (forms, fields) | Equivalence partitioning + BVA |
| Business rules with conditions | Decision tables |
| Workflows, protocols, sessions | State transition testing |
| Configuration/compatibility | Pairwise testing |
| Calculations, formulas | Domain analysis + BVA |
| Text search, pattern matching | Equivalence partitioning + error guessing |
| APIs with multiple parameters | Combinatorial testing |
| Critical algorithms (finance, safety) | MC/DC + path coverage |
| Complex user journeys | Use case testing + state transitions |
Step 2: What Information Do You Have?
| Available Information | Applicable Techniques |
|---|---|
| Requirements/specifications only | Black-box: EP, BVA, decision tables, state transitions |
| Source code available | White-box: statement/decision coverage, path coverage, MC/DC |
| No documentation | Experience-based: error guessing, exploratory testing |
| Formal model exists | Model-based testing |
| Historical defect data | Risk-based: focus techniques on high-defect areas |
Step 3: What Is the Risk Level?
| Risk Level | Recommended Approach |
|---|---|
| Safety-critical | MC/DC + domain analysis + mutation testing to validate tests |
| Financial/regulatory | Decision tables + BVA + combinatorial testing |
| Core business logic | EP + BVA + state transitions + path coverage |
| Standard features | EP + BVA + error guessing |
| Low-risk/cosmetic | Error guessing + checklist-based |
Technique Mapping by Category
Data Input Testing
When testing how a system handles input data:
- Start with equivalence partitioning — identify valid and invalid classes
- Apply BVA — test boundaries of each class
- Add domain analysis — if multiple inputs interact
- Use error guessing — add tests for common input mistakes (empty, null, special characters, very long strings)
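The four steps above can be sketched as a single test set. This is a minimal sketch assuming a hypothetical username field whose rule is 3–20 alphanumeric characters (the spec is invented for illustration):

```python
def is_valid_username(name: str) -> bool:
    """Assumed rule: 3-20 characters, letters and digits only."""
    return 3 <= len(name) <= 20 and name.isalnum()

# Equivalence partitioning: one representative per class.
assert is_valid_username("alice7")          # valid class
assert not is_valid_username("a b c")       # invalid class: forbidden characters
assert not is_valid_username("")            # invalid class: empty (error guessing)

# Boundary value analysis: test at and just beyond each length boundary.
assert not is_valid_username("ab")          # 2 chars: just below min
assert is_valid_username("abc")             # 3 chars: at min
assert is_valid_username("a" * 20)          # 20 chars: at max
assert not is_valid_username("a" * 21)      # 21 chars: just above max
```

Note how EP gives one test per class, while BVA adds two tests per boundary; the combination stays small but catches the classic off-by-one validation bugs.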
Business Logic Testing
When testing rules that determine system behavior:
- Start with decision tables — map all condition combinations to actions
- Add state transitions — if behavior depends on previous state
- Apply cause-effect graphing — if conditions have complex dependencies
- Use combinatorial testing — if many parameters interact
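A decision table works best when it is written down as data, so each rule (column) becomes exactly one test. A minimal sketch, using made-up discount rules with two conditions:

```python
# Conditions: (is_member, order_over_100); action: discount rate.
# Rules are invented for illustration.
DECISION_TABLE = {
    (True,  True):  0.15,
    (True,  False): 0.05,
    (False, True):  0.10,
    (False, False): 0.00,
}

def discount(is_member: bool, total: float) -> float:
    return DECISION_TABLE[(is_member, total > 100)]

# One test per rule of the table: full condition-combination coverage.
for (member, over), expected in DECISION_TABLE.items():
    total = 150.0 if over else 50.0
    assert discount(member, total) == expected
```

Driving the tests from the table itself also makes gaps visible: with n binary conditions the table must have 2^n rules, and a missing key fails loudly.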
Structural Testing (White-Box)
When testing code coverage:
- Start with statement coverage — basic minimum
- Add decision coverage — test both branches of every decision
- Apply MC/DC — if safety-critical
- Use path coverage — for critical algorithms
- Validate with mutation testing — ensure tests are actually effective
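MC/DC demands that every atomic condition be shown to independently affect the decision's outcome. A sketch for the (invented) decision `a and (b or c)`, where four tests suffice for three conditions (n + 1), together with a check of the independence property:

```python
def decision(a: bool, b: bool, c: bool) -> bool:
    return a and (b or c)

# A minimal MC/DC set: 4 tests for 3 conditions.
tests = [
    (True, True, False),    # outcome True
    (False, True, False),   # toggling `a` vs. test 1 flips the outcome
    (True, False, False),   # toggling `b` vs. test 1 flips the outcome
    (True, False, True),    # toggling `c` vs. test 3 flips the outcome
]

# Verify independence: for each condition, some pair of tests differs
# only in that condition and produces different outcomes.
for i in range(3):
    assert any(
        t1[i] != t2[i]
        and all(t1[j] == t2[j] for j in range(3) if j != i)
        and decision(*t1) != decision(*t2)
        for t1 in tests for t2 in tests
    ), f"condition {i} not shown independent"
```

Exhaustive testing of this decision would need 8 cases; MC/DC gets the same per-condition assurance with 4, which is why it scales to safety-critical code.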
Integration Testing
When testing how components interact:
- State transition testing — for protocol-based interactions
- Pairwise testing — for configuration combinations
- Use case testing — for end-to-end workflows
- Data flow testing — for tracking data through components
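For configuration combinations, the value of pairwise testing is that a small suite can still cover every pair of values. A sketch with invented parameters, verifying that 4 hand-picked configs (instead of the full 2×2×2 = 8) cover all pairs:

```python
from itertools import combinations, product

# Illustrative parameters; real projects would list actual configs.
params = {
    "browser": ["chrome", "firefox"],
    "os": ["windows", "macos"],
    "locale": ["en", "de"],
}

# 4 configs instead of 8, yet every value pair appears at least once.
suite = [
    {"browser": "chrome",  "os": "windows", "locale": "en"},
    {"browser": "chrome",  "os": "macos",   "locale": "de"},
    {"browser": "firefox", "os": "windows", "locale": "de"},
    {"browser": "firefox", "os": "macos",   "locale": "en"},
]

# Check 2-wise coverage: every (value, value) pair of every parameter
# pair must occur in some config.
for p1, p2 in combinations(params, 2):
    for v1, v2 in product(params[p1], params[p2]):
        assert any(c[p1] == v1 and c[p2] == v2 for c in suite), (p1, v1, p2, v2)
```

For larger parameter spaces, tools such as PICT can generate the covering suite automatically; the coverage check above stays the same.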
Real-World Decision Examples
Example 1: Login Form
- Username field: EP (valid/invalid formats) + BVA (min/max length)
- Password field: EP (meets/doesn’t meet rules) + BVA (length bounds)
- Login button behavior: State transitions (locked after 3 failures)
- Overall: Error guessing (SQL injection, XSS, empty fields)
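The lockout rule is naturally tested as a state transition sequence. A minimal sketch, assuming (as above) that the account locks after 3 consecutive failures and that a success resets the counter; the class is hypothetical:

```python
class Login:
    MAX_FAILURES = 3

    def __init__(self):
        self.failures = 0
        self.state = "active"   # states: active, locked

    def attempt(self, ok: bool) -> str:
        if self.state == "locked":
            return "locked"
        if ok:
            self.failures = 0
            return "logged_in"
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.state = "locked"
        return self.state

# Transition sequence: 2 failures, a success resets, then 3 failures lock.
acct = Login()
assert acct.attempt(False) == "active"
assert acct.attempt(False) == "active"
assert acct.attempt(True) == "logged_in"   # counter resets here
assert acct.attempt(False) == "active"
assert acct.attempt(False) == "active"
assert acct.attempt(False) == "locked"
assert acct.attempt(True) == "locked"      # even correct credentials fail
```

The reset-then-lock sequence is exactly the kind of bug that input-focused techniques (EP, BVA) cannot see, because it depends on event ordering, not on any single input.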
Example 2: Insurance Quote Calculator
- Premium calculation: Decision tables (age, coverage, history rules)
- Input ranges: BVA + Domain analysis (age, income boundaries)
- Rate tiers: EP (standard, preferred, high-risk classes)
- Discount combinations: Pairwise testing (multi-policy, good driver, etc.)
- Critical calculations: Path coverage + Mutation testing
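The last bullet pairs path coverage with mutation testing; the idea of mutation testing can be shown by hand. A sketch with a toy premium formula (rules invented; real projects would use a tool such as mutmut rather than writing mutants manually):

```python
def premium(base: float, age: int) -> float:
    """Toy rule: drivers under 25 pay a 1.5x surcharge."""
    return base * (1.5 if age < 25 else 1.0)

def mutant(base: float, age: int) -> float:
    return base * (1.5 if age <= 25 else 1.0)   # mutated: < became <=

def suite(fn) -> bool:
    # A BVA-driven test pair at the age boundary.
    return fn(100.0, 24) == 150.0 and fn(100.0, 25) == 100.0

assert suite(premium)      # original passes the suite
assert not suite(mutant)   # the mutant is "killed" by the boundary test
```

A suite without the age-25 test would pass on both versions, i.e. the mutant would survive, which is precisely the gap mutation testing is designed to expose.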
Example 3: E-commerce Checkout
- Cart states: State transition testing (empty, has items, checkout, ordered)
- Payment methods: Pairwise testing (method × currency × amount range)
- Shipping rules: Decision tables (weight, destination, speed)
- Coupon validation: EP + BVA (expired, min purchase, one-time use)
- End-to-end flow: Use case testing
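The cart lifecycle above can be captured as an explicit transition table, which makes invalid event/state pairs as testable as valid ones. A sketch with an assumed event set (event names are invented for illustration):

```python
# (current state, event) -> next state; anything absent is invalid.
TRANSITIONS = {
    ("empty",     "add_item"):    "has_items",
    ("has_items", "add_item"):    "has_items",
    ("has_items", "remove_last"): "empty",
    ("has_items", "checkout"):    "checkout",
    ("checkout",  "pay"):         "ordered",
    ("checkout",  "back"):        "has_items",
}

def step(state: str, event: str) -> str:
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid event {event!r} in state {state!r}")
    return TRANSITIONS[(state, event)]

# Valid path: empty -> has_items -> checkout -> ordered.
s = "empty"
for e in ["add_item", "checkout", "pay"]:
    s = step(s, e)
assert s == "ordered"

# Negative test: paying from an empty cart must be rejected.
try:
    step("empty", "pay")
    assert False, "expected rejection"
except ValueError:
    pass
```

Covering every entry in the table gives 0-switch (all-transitions) coverage; probing pairs that are absent from the table tests the guard logic.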
Exercise: Technique Selection
Problem 1
For each feature below, select the primary and secondary test design techniques. Justify your choices.
- A tax calculation engine that applies different rates based on income brackets, filing status, deductions, and state of residence
- A music player that supports play, pause, skip, shuffle, repeat, and queue management
- A search function that accepts text queries with optional filters (date range, category, sort order)
- An elevator control system for a 20-floor building with multiple elevators
- A password strength meter that evaluates length, character diversity, common patterns, and dictionary words
Solution
Tax calculation:
- Primary: Decision tables — complex rules with many conditions
- Secondary: BVA — income bracket boundaries; Domain analysis — multi-variable boundaries interact; Path coverage — verify calculation paths
Music player:
- Primary: State transition testing — player has clear states (stopped, playing, paused) with events
- Secondary: Pairwise testing — combinations of shuffle/repeat settings; Error guessing — corrupt files, empty playlist
Search function:
- Primary: Equivalence partitioning — valid/invalid queries, result categories
- Secondary: Pairwise testing — filter combinations; BVA — date range boundaries; Error guessing — empty queries, special characters, SQL injection
Elevator control:
- Primary: State transition testing — elevator states (idle, moving up, moving down, doors open)
- Secondary: Model-based testing — complex state interactions between multiple elevators; Combinatorial testing — floor request combinations
Password strength meter:
- Primary: Equivalence partitioning — strength categories (weak, medium, strong)
- Secondary: BVA — length thresholds; Decision tables — character type combinations; Error guessing — common passwords, Unicode, empty string
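A sketch of the strength meter answer, with scoring rules invented for illustration (8 characters minimum, 12 for strong, three character classes): one test per output partition, plus BVA at each length threshold.

```python
def strength(pw: str) -> str:
    """Assumed rules: score length plus character-class diversity."""
    classes = sum([any(c.islower() for c in pw),
                   any(c.isupper() for c in pw),
                   any(c.isdigit() for c in pw)])
    if len(pw) < 8 or classes <= 1:
        return "weak"
    if len(pw) >= 12 and classes == 3:
        return "strong"
    return "medium"

# Equivalence partitioning: one representative per output class...
assert strength("abc") == "weak"             # short, one class
assert strength("Abcdefg1") == "medium"      # 8 chars, three classes
assert strength("Abcdefghijk1") == "strong"  # 12 chars, three classes

# ...plus BVA at the length thresholds.
assert strength("Abcdef1") == "weak"         # 7 chars: just below minimum
assert strength("Abcdefghij1") == "medium"   # 11 chars: just below strong
```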
Problem 2
You are the QA lead for a new feature: a hotel booking system. The system must handle room search (dates, guests, room type), pricing (dynamic rates, discounts, taxes), reservation management (create, modify, cancel), and payment processing.
Create a testing strategy document mapping each sub-feature to specific test design techniques. Include your rationale.
Solution
| Sub-Feature | Primary Technique | Secondary Technique | Rationale |
|---|---|---|---|
| Room search dates | BVA | EP | Date inputs have clear boundaries (check-in before check-out, no past dates) |
| Guest count | BVA + EP | Error guessing | Boundaries (min 1, max per room), invalid values (0, negative, very large) |
| Room type selection | EP | Pairwise | Categories of rooms; combinations of room type + dates + guests |
| Dynamic pricing | Decision tables | Domain analysis | Complex rules (season, demand, day of week); multi-variable boundaries |
| Discount application | Decision tables | BVA | Rules for when discounts apply; discount amount boundaries |
| Tax calculation | BVA + decision tables | Path coverage | Jurisdictional rules; boundary amounts; verify calculation logic |
| Reservation lifecycle | State transition | Use case testing | States: pending, confirmed, modified, cancelled; event sequences matter |
| Modify reservation | State transitions | EP | Valid/invalid modifications from each state |
| Cancel reservation | State transitions | Decision tables | Cancellation policies (refund rules based on timing) |
| Payment processing | State transitions | Error guessing | Payment states (pending, authorized, captured, refunded); edge cases (timeout, double-charge) |
| Search + book flow | Use case testing | Exploratory testing | End-to-end happy path and alternative paths |
| Configuration | Pairwise | Checklist-based | Browser/device combinations |
Anti-Patterns in Technique Selection
Using only one technique. Teams that apply EP to everything miss state-dependent bugs and boundary defects.
Skipping experience-based techniques. Formal techniques cannot cover everything. Error guessing and exploratory testing find the “weird” bugs.
Over-engineering low-risk features. Applying MC/DC to a marketing page is waste. Match rigor to risk.
Ignoring white-box techniques entirely. Even if you test from the outside, structural coverage data reveals gaps.
Key Takeaways
- No single technique is sufficient — effective testing requires combining techniques
- Match technique to feature type: state-dependent → state transitions, rules → decision tables, inputs → EP+BVA
- Risk level determines rigor: safety-critical needs MC/DC, standard features need EP+BVA
- Available information constrains choices: no code = black-box only, no spec = experience-based
- Always supplement formal techniques with error guessing and exploratory testing
- Build a technique selection habit — for every feature, consciously ask “which technique fits best?”