Module 3 Assessment Overview
Congratulations on reaching the final lesson of Module 3. This comprehensive assessment tests your understanding of all the test design techniques covered in lessons 3.1 through 3.24.
The assessment consists of three parts:
- Knowledge Questions — 10 quiz questions in the frontmatter (take them before reading further)
- Scenario-Based Questions — Apply test design techniques to real-world situations
- Practical Exercise — Design a complete test suite for a complex feature
Scoring Guide
- Part 1 (Quiz): 10 questions, 3 points each = 30 points
- Part 2 (Scenarios): 5 scenarios, 6 points each = 30 points
- Part 3 (Exercise): 40 points (detailed rubric below)
- Total: 100 points
- Passing score: 70 points
Topics Covered
| Topic Area | Lessons | Key Concepts |
|---|---|---|
| Specification-Based | 3.1-3.9 | EP, BVA, decision tables, state transitions, cause-effect, pairwise, classification tree, use cases, user stories |
| Experience-Based | 3.10-3.12 | Exploratory testing, error guessing, checklists |
| Structure-Based | 3.13-3.18 | Statement/decision coverage, MC/DC, path coverage, mutation testing, data flow, control flow |
| Advanced | 3.19-3.21 | Domain analysis, combinatorial strategies, model-based testing |
| Strategy | 3.22-3.24 | Technique selection, combination, real-world application |
Part 2: Scenario-Based Questions
Scenario 1: A banking application calculates interest on savings accounts. Interest rates depend on account type (regular, premium, VIP), balance tier ($0-10K, $10K-50K, $50K+), and account age (<1 year, 1-5 years, >5 years). Different combinations yield different rates.
Which test design technique(s) would you apply? How many test cases would you estimate?
Scenario 2: An elevator system has states: idle on floor, moving up, moving down, doors open, doors closing, emergency stop. Events include floor button press, door sensor, weight sensor, emergency button.
Which technique is primary? What specific tests would you design for the emergency stop feature?
Scenario 3: After designing specification-based tests for a payment processing module, code coverage shows: statement coverage 75%, decision coverage 60%, with uncovered code in fraud detection and retry logic.
What steps should you take next?
Scenario 4: A mobile app has these configuration variables: OS (iOS, Android), screen size (small, medium, large), network (WiFi, 4G, 5G, offline), language (EN, ES, FR, DE, JA), and dark/light mode. How many all-combinations tests? How many pairwise? What tool would you use?
Scenario 5: A safety-critical medical device checks three conditions before dispensing medication: patient ID verified, dosage within safe range, and no drug interactions detected. What coverage criterion should be used and why?
Solution — Scenario 1
Technique: Decision table testing as primary, with BVA for balance tier boundaries.
The three conditions create: 3 x 3 x 3 = 27 rule combinations in the decision table. Add BVA tests for tier boundaries: $0, $9,999, $10,000, $10,001, $49,999, $50,000, $50,001.
Estimated test cases: 27 (decision table) + ~14 (BVA: 7 boundary values x ~2 account types) = ~41 test cases. Can reduce with risk-based prioritization.
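The boundary values listed above can be generated systematically. A minimal sketch (the helper name and the $1 step size are illustrative assumptions; tier edges come from the scenario):

```python
# Generate BVA test values around each balance-tier boundary.
# Tier edges ($10,000 and $50,000) come from the scenario; the
# $1 step matches the boundary values used in the solution above.
def boundary_values(edges, step=1):
    values = {0}  # lower boundary of the whole domain
    for edge in edges:
        values.update({edge - step, edge, edge + step})
    return sorted(values)

print(boundary_values([10_000, 50_000]))
# [0, 9999, 10000, 10001, 49999, 50000, 50001]
```

Crossing these 7 values with the account types yields the ~14 BVA cases estimated above.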
Solution — Scenario 2
Primary: State transition testing. The elevator has clear states and events.
Emergency stop tests:
- Emergency during upward movement → immediate stop, doors stay closed
- Emergency during downward movement → immediate stop
- Emergency while doors are open → doors stay open, movement locked
- Emergency while idle → system enters emergency mode
- Reset after emergency → system returns to idle on current floor
- Emergency pressed twice → no additional effect
- Emergency during door close → doors reopen + emergency mode
- Power failure during emergency → backup power activates
Also test invalid transitions: pressing floor buttons during emergency (should be ignored).
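The emergency-stop behaviour above can be sketched as a transition table, which makes the valid and invalid transitions directly testable. State and event names are assumptions for illustration:

```python
# Minimal state-machine sketch of the emergency-stop transitions
# listed above. Unknown (state, event) pairs are ignored, modelling
# invalid transitions such as floor buttons during an emergency.
TRANSITIONS = {
    ("moving_up", "emergency"): "emergency_stop",
    ("moving_down", "emergency"): "emergency_stop",
    ("doors_open", "emergency"): "emergency_locked_open",
    ("idle", "emergency"): "emergency_stop",
    ("doors_closing", "emergency"): "doors_open",   # doors reopen
    ("emergency_stop", "emergency"): "emergency_stop",  # no extra effect
    ("emergency_stop", "reset"): "idle",
}

def next_state(state, event):
    # Invalid transitions leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

assert next_state("moving_up", "emergency") == "emergency_stop"
assert next_state("emergency_stop", "emergency") == "emergency_stop"
assert next_state("emergency_stop", "floor_button") == "emergency_stop"
assert next_state("emergency_stop", "reset") == "idle"
```

Each assertion corresponds to one of the emergency-stop tests in the list above.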
Solution — Scenario 3
- Analyze uncovered code — identify which fraud detection and retry logic paths are not exercised
- Determine if uncovered code is legitimate — is it dead code, error handling, or missed specification?
- Add targeted white-box tests — specifically for fraud detection scenarios (high-risk transaction, velocity checks, geographic anomalies)
- Add retry logic tests — network timeout → retry → success; retry → failure → final failure; max retries exceeded
- Apply error guessing for common payment edge cases
- Measure again — target 85%+ statement and 75%+ decision coverage
- Consider mutation testing on critical payment calculations to validate test effectiveness
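The retry-logic tests in step 4 can be sketched as white-box tests against a fake network. The `charge_with_retry` helper and `max_retries=3` are assumptions for illustration, not the module's real API:

```python
# Sketch of targeted tests for the uncovered retry paths:
# timeout -> retry -> success, and max retries exceeded.
def charge_with_retry(attempt_fn, max_retries=3):
    for _ in range(max_retries):
        try:
            return attempt_fn()
        except TimeoutError:
            continue
    raise RuntimeError("max retries exceeded")

# Path 1: network timeout -> retry -> success
responses = iter([TimeoutError, "ok"])
def flaky():
    r = next(responses)
    if r is TimeoutError:
        raise TimeoutError
    return r
assert charge_with_retry(flaky) == "ok"

# Path 2: max retries exceeded -> final failure
def always_timeout():
    raise TimeoutError
try:
    charge_with_retry(always_timeout)
    assert False, "expected RuntimeError"
except RuntimeError:
    pass
```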
Solution — Scenario 4
All-combinations: 2 x 3 x 4 x 5 x 2 = 240 test cases.
Pairwise: ~20-25 test cases (use a tool to optimize).
Tool: PICT (Microsoft’s Pairwise Independent Combinatorial Testing tool).
PICT model:
OS: iOS, Android
Screen: Small, Medium, Large
Network: WiFi, 4G, 5G, Offline
Language: EN, ES, FR, DE, JA
Theme: Light, Dark
Pairwise reduces 240 combinations to ~20 while guaranteeing every pair of values is tested.
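The reduction can be demonstrated with a naive greedy pairwise generator. This is a sketch only; real tools such as PICT use better algorithms and produce smaller suites than this first-fit approach:

```python
from itertools import combinations, product

# The five configuration parameters from the scenario.
params = {
    "OS": ["iOS", "Android"],
    "Screen": ["Small", "Medium", "Large"],
    "Network": ["WiFi", "4G", "5G", "Offline"],
    "Language": ["EN", "ES", "FR", "DE", "JA"],
    "Theme": ["Light", "Dark"],
}

names = list(params)
# Every value pair across every pair of parameters that must be covered.
uncovered = {
    frozenset([(a, x), (b, y)])
    for a, b in combinations(names, 2)
    for x in params[a] for y in params[b]
}

# Greedy first-fit: keep any full combination that covers a new pair.
suite = []
for combo in product(*params.values()):
    case = dict(zip(names, combo))
    pairs = {frozenset([(a, case[a]), (b, case[b])])
             for a, b in combinations(names, 2)}
    if pairs & uncovered:
        suite.append(case)
        uncovered -= pairs
    if not uncovered:
        break

print(len(suite))  # far fewer than the 240 exhaustive combinations
assert not uncovered  # every value pair is tested at least once
```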
Solution — Scenario 5
MC/DC (Modified Condition/Decision Coverage) is the appropriate criterion for this safety-critical decision.
Reason: Medical device software falls under IEC 62304 and FDA guidance, and the highest risk class (Class C) calls for rigorous structural coverage. MC/DC ensures each condition (patient ID, dosage, drug interaction) independently affects the dispensing decision. This is critical because a missed condition check could harm or kill a patient.
For the decision patientVerified AND dosageInRange AND noInteractions:
- MC/DC requires a minimum of N + 1 = 4 test cases for N = 3 conditions
- Each test proves one condition independently determines the outcome
- Also add BVA for dosage boundaries (safe range edges)
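The four MC/DC test cases for this decision can be written out directly. Each of the last three rows differs from the baseline in exactly one condition and flips the outcome, which is what demonstrates that condition's independent effect:

```python
# The dispensing decision: patientVerified AND dosageInRange AND noInteractions.
def dispense(verified, in_range, no_interactions):
    return verified and in_range and no_interactions

# (verified, in_range, no_interactions, expected outcome)
mcdc_tests = [
    (True,  True,  True,  True),   # baseline: all conditions hold
    (False, True,  True,  False),  # flips only verified
    (True,  False, True,  False),  # flips only in_range
    (True,  True,  False, False),  # flips only no_interactions
]
for v, r, n, expected in mcdc_tests:
    assert dispense(v, r, n) is expected
```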
Part 3: Practical Exercise — Complete Test Design
System: Online Auction Platform
Design a comprehensive test suite for a live auction feature:
Bidding rules:
- Minimum bid increment: 5% of current price or $1, whichever is greater
- Bids accepted only during active auction window
- Users cannot bid on their own items
- Maximum 3 active bids per user across all auctions
- Sniping protection: auction extends 2 minutes if a bid is placed in the final 30 seconds
- Reserve price: item does not sell if final bid is below reserve
- Buy-it-now: instantly ends the auction if the option is available and bidding has not yet reached the threshold at which buy-it-now is withdrawn
Part A (10 points): Identify which test design techniques you would use for each aspect of the feature. Justify each selection.
Part B (10 points): Create a decision table for the bid acceptance rules.
Part C (10 points): Draw a state transition diagram for the auction lifecycle (draft, active, extended, ended, sold, unsold).
Part D (10 points): List specific BVA test cases for the bid increment rule and sniping protection timing.
Grading Rubric
Part A (10 points):
- Correctly maps at least 5 aspects to techniques (3 pts)
- Provides clear justification for each choice (4 pts)
- Identifies need for combining techniques (3 pts)
Part B (10 points):
- All relevant conditions identified (3 pts)
- Complete truth table with all rule combinations (4 pts)
- Correct actions for each combination (3 pts)
Part C (10 points):
- All states identified (2 pts)
- All transitions with correct events/guards (4 pts)
- Invalid transitions identified (2 pts)
- Sniping extension modeled correctly (2 pts)
Part D (10 points):
- Bid increment boundaries correct (4 pts)
- Sniping timing boundaries correct (3 pts)
- Edge cases identified (3 pts)
Solution — Complete Test Design
Part A: Technique Selection
| Aspect | Technique | Justification |
|---|---|---|
| Bid increment calculation | BVA + Domain analysis | Two-variable boundary (5% vs $1 threshold) |
| Bid acceptance rules | Decision table | Multiple conditions determine accept/reject |
| Auction lifecycle | State transitions | Clear states with event-driven transitions |
| Sniping protection | BVA | Time boundaries (29s/30s/31s before end) |
| User bid limit | EP + BVA | Classes (0/1/2/3/4 active bids) with boundary at 3 |
| Buy-it-now interaction | Decision table | BIN available + threshold conditions |
| Configuration (browser/device) | Pairwise | Multiple independent parameters |
| Real-time updates | Error guessing | Network issues, race conditions, concurrent bids |
Part B: Decision Table
Conditions: Auction active? | User != Owner? | Under bid limit? | Bid >= min increment? (The reserve price is checked when the auction ends, not when a bid is placed, so it is not a bid-acceptance condition.)
| Rule | Active | Not Owner | Under Limit | Min Increment | Action |
|---|---|---|---|---|---|
| 1 | T | T | T | T | Accept bid |
| 2 | T | T | T | F | Reject: bid too low |
| 3 | T | T | F | T | Reject: bid limit reached |
| 4 | T | F | T | T | Reject: own item |
| 5 | F | T | T | T | Reject: auction not active |
| 6-16 | (remaining combinations with at least one F) | | | | Reject with appropriate message |
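The decision table maps naturally onto a guard function. A sketch, with rejection checks ordered so that the most fundamental failure wins when several conditions are false (the ordering itself is an assumption):

```python
# Bid-acceptance rules from the decision table above.
def evaluate_bid(active, not_owner, under_limit, min_increment_met):
    if not active:
        return "Reject: auction not active"
    if not not_owner:
        return "Reject: own item"
    if not under_limit:
        return "Reject: bid limit reached"
    if not min_increment_met:
        return "Reject: bid too low"
    return "Accept bid"

assert evaluate_bid(True, True, True, True) == "Accept bid"       # rule 1
assert evaluate_bid(True, True, True, False) == "Reject: bid too low"   # rule 2
assert evaluate_bid(True, True, False, True) == "Reject: bid limit reached"  # rule 3
assert evaluate_bid(True, False, True, True) == "Reject: own item"      # rule 4
assert evaluate_bid(False, True, True, True) == "Reject: auction not active"  # rule 5
```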
Part C: State Transitions
States: Draft → Active → Extended → Ended → Sold/Unsold
Key transitions:
- Draft → Active [seller publishes, start time reached]
- Active → Extended [bid in final 30s, +2min]
- Extended → Extended [another bid in final 30s]
- Active → Ended [time expires, no last-second bids]
- Extended → Ended [extended time expires]
- Ended → Sold [final bid >= reserve]
- Ended → Unsold [final bid < reserve OR no bids]
- Active → Sold [buy-it-now accepted]
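The transitions above can be captured as a lookup table so that invalid transitions are rejected explicitly. Event names are illustrative assumptions; the states come from the exercise:

```python
# Auction lifecycle transitions from the list above.
TRANSITIONS = {
    ("draft", "publish"): "active",
    ("active", "late_bid"): "extended",       # bid in final 30s, +2min
    ("extended", "late_bid"): "extended",     # another last-second bid
    ("active", "time_expired"): "ended",
    ("extended", "time_expired"): "ended",
    ("ended", "reserve_met"): "sold",
    ("ended", "reserve_not_met"): "unsold",
    ("active", "buy_it_now"): "sold",
}

def fire(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: {state} + {event}")
    return TRANSITIONS[(state, event)]

assert fire("active", "late_bid") == "extended"
assert fire("extended", "time_expired") == "ended"
# Invalid-transition test: a draft auction cannot be bought instantly.
try:
    fire("draft", "buy_it_now")
    assert False, "expected ValueError"
except ValueError:
    pass
```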
Part D: BVA Test Cases
Bid increment (current price = $20, 5% = $1.00, minimum $1):
- Bid $20.99 → reject (increment < $1.00)
- Bid $21.00 → accept (increment = $1.00 = 5%)
- Bid $21.01 → accept
Current price = $10, 5% = $0.50, minimum $1:
- Bid $10.50 → reject (increment $0.50 < $1.00 minimum)
- Bid $10.99 → reject
- Bid $11.00 → accept (increment $1.00 = minimum)
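The increment rule and its boundary cases translate directly into code. A sketch using integer cents to avoid floating-point rounding (the function names are assumptions):

```python
# Minimum bid increment: the greater of 5% of current price or $1.
# Amounts are in cents so boundary comparisons are exact.
def min_increment_cents(price_cents):
    return max(round(price_cents * 0.05), 100)

def bid_accepted(price_cents, bid_cents):
    return bid_cents - price_cents >= min_increment_cents(price_cents)

assert not bid_accepted(2000, 2099)  # $20.99 on $20: increment $0.99 < $1.00
assert bid_accepted(2000, 2100)      # $21.00: increment $1.00 = 5% exactly
assert bid_accepted(2000, 2101)      # $21.01: above the boundary
assert not bid_accepted(1000, 1050)  # $10.50 on $10: $0.50 < $1 minimum
assert not bid_accepted(1000, 1099)  # $10.99: still below the $1 minimum
assert bid_accepted(1000, 1100)      # $11.00: increment equals the $1 minimum
```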
Sniping protection:
- Bid at 31 seconds before end → no extension
- Bid at 30 seconds before end → 2-minute extension
- Bid at 29 seconds before end → 2-minute extension
- Bid at 1 second before end → 2-minute extension
- Bid at exactly end time → reject (auction ended)
- Bid during extension at 31s before new end → no further extension
- Bid during extension at 30s before new end → another extension
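The sniping-protection timing above can be expressed as a small decision function. The sketch treats the final-30-seconds window as inclusive at exactly 30s, matching the boundary tests above (an interpretation the real specification would need to confirm):

```python
# Sniping protection: a bid in the final 30 seconds extends the
# auction by 2 minutes; a bid at or after end time is rejected.
EXTENSION_SECONDS = 120

def extension_for_bid(seconds_remaining):
    """Return the extension granted to a bid placed now,
    or None if the auction has already ended."""
    if seconds_remaining <= 0:
        return None  # bid rejected: auction ended
    if seconds_remaining <= 30:
        return EXTENSION_SECONDS
    return 0

assert extension_for_bid(31) == 0                  # just outside the window
assert extension_for_bid(30) == EXTENSION_SECONDS  # on the boundary
assert extension_for_bid(29) == EXTENSION_SECONDS
assert extension_for_bid(1) == EXTENSION_SECONDS
assert extension_for_bid(0) is None                # exactly at end time
```

The same function applies during an extension, covering the repeated-extension cases in the last two bullets.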
What’s Next
Congratulations on completing Module 3: Test Design Techniques. You now have a comprehensive toolkit for designing effective tests — from simple equivalence partitioning to advanced MC/DC and model-based testing.
Module 4: Test Documentation covers how to capture and communicate your test designs professionally: test strategies, test plans, test cases, bug reports, and test summary reports. The techniques you learned in Module 3 will directly feed into the test cases and test plans you write in Module 4.