Workshop Introduction

This lesson is a hands-on workshop. You will apply everything you have learned across Module 3 to design test suites for realistic features. Each exercise simulates a real-world scenario where you must:

  1. Analyze the feature requirements
  2. Select appropriate test design techniques
  3. Derive test cases systematically
  4. Document your rationale and coverage

There are no new concepts in this lesson — only practice. Treat each exercise as if you were designing tests for a real project.

Workshop Exercise 1: Ride-Sharing Pricing Engine

Feature Description:

A ride-sharing app calculates fares using these rules:

  • Base fare: $2.50
  • Per mile: $1.75 (standard), $2.50 (premium), $3.25 (luxury)
  • Per minute: $0.35 (standard), $0.50 (premium), $0.65 (luxury)
  • Minimum fare: $7.00 (standard), $12.00 (premium), $18.00 (luxury)

Surge pricing:

  • Demand multiplier from 1.0x to 3.0x
  • Applied to per-mile and per-minute rates only (not base fare)
  • Surge does not apply to scheduled rides booked 2+ hours in advance

Discounts:

  • First ride: 50% off (max $10 discount)
  • Promo code: fixed dollar amount off final fare
  • Loyalty discount: 10% off for riders with 50+ rides in past 30 days
  • Only one discount applies (highest value wins)

Tolls and fees:

  • Airport fee: $5.00
  • Bridge/tunnel tolls: passed through at cost
  • Booking fee: $2.00 (waived for loyalty riders)

Constraints:

  • Fare cannot be negative (minimum $0)
  • Maximum fare cap: $500 per ride
  • Canceled rides: $5 fee if driver already dispatched
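Before designing tests, it helps to sketch the system under test. A minimal fare-calculation sketch of the rules above follows; all names are hypothetical, and the ordering of minimum-fare, discount, and fee application is an assumption, since the spec leaves it ambiguous (exactly the kind of ambiguity your test design document should record):

```python
def calculate_fare(vehicle, miles, minutes, surge=1.0, scheduled_hours_ahead=0,
                   first_ride=False, promo=0.0, loyalty=False,
                   airport=False, tolls=0.0):
    """Hypothetical sketch of the fare rules (cancellation fee not modeled)."""
    RATES = {  # (per-mile, per-minute, minimum fare)
        "standard": (1.75, 0.35, 7.00),
        "premium":  (2.50, 0.50, 12.00),
        "luxury":   (3.25, 0.65, 18.00),
    }
    per_mile, per_minute, minimum = RATES[vehicle]

    # Surge applies only to per-mile/per-minute rates, clamped to [1.0, 3.0],
    # and is waived for rides scheduled 2+ hours in advance.
    surge = min(max(surge, 1.0), 3.0)
    if scheduled_hours_ahead >= 2:
        surge = 1.0

    fare = 2.50 + surge * (miles * per_mile + minutes * per_minute)
    fare = max(fare, minimum)  # assumption: minimum applies after surge

    # Only one discount applies: the highest value wins.
    discounts = [0.0]
    if first_ride:
        discounts.append(min(fare * 0.50, 10.00))  # 50% off, capped at $10
    if promo:
        discounts.append(promo)
    if loyalty:
        discounts.append(fare * 0.10)
    fare -= max(discounts)

    # Fees and tolls; booking fee waived for loyalty riders.
    if airport:
        fare += 5.00
    fare += tolls
    if not loyalty:
        fare += 2.00

    return min(max(fare, 0.0), 500.00)  # clamp to [$0, $500]
```

With the $2.00 booking fee included, a standard 5-mile, 15-minute ride comes to $18.50 under this sketch ($16.50 ride fare plus the fee).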

Design a comprehensive test suite. Document your technique selection and derive test cases.

Guided Solution Outline

Technique selection:

  • EP: Vehicle types (standard, premium, luxury, invalid), discount types, surge levels
  • BVA: Surge boundaries (1.0, 3.0), loyalty threshold (49/50 rides), advance booking (1h59m/2h), minimum fare edges, max fare cap ($500), first-ride discount cap ($10)
  • Decision tables: Discount priority rules (first ride vs. promo vs. loyalty)
  • Domain analysis: Distance x time x surge interaction
  • State transitions: Ride lifecycle (requested → matched → in-progress → completed → rated; or requested → matched → canceled)

Key test cases (abbreviated):

#  | Category          | Technique        | Test
1  | Base calc         | EP               | Standard ride, 5 miles, 15 min, no surge → $2.50 + 5 × $1.75 + 15 × $0.35 = $16.50
2  | Minimum           | BVA              | Very short standard ride → should be at least $7.00
3  | Surge boundary    | BVA              | Surge at 1.0x (no change), 1.01x (minimal change), 3.0x (max), 3.01x (should cap at 3.0x)
4  | Advance booking   | BVA              | Scheduled 1h59m ahead (surge applies) vs. 2h00m (surge waived)
5  | Discount priority | Decision table   | First ride + promo + loyalty all qualify → highest value wins
6  | First ride cap    | BVA              | Ride costs $25, first ride = 50% = $12.50 → capped at $10 discount → $15 fare
7  | Fare cap          | BVA              | Luxury, long distance, max surge → should cap at $500
8  | Negative fare     | BVA              | Large promo code on cheap ride → minimum $0
9  | Airport           | EP               | Pickup from airport (+$5) vs. regular location
10 | Cancellation      | State transition | Cancel after driver dispatched → $5 fee
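Boundary tests like #3 and #4 map naturally onto table-driven tests. A sketch using pytest's parametrization (the `clamp_surge` helper is hypothetical; in a real project it would be the production surge logic):

```python
import pytest

def clamp_surge(multiplier, scheduled_hours_ahead=0.0):
    """Hypothetical helper: surge clamped to [1.0, 3.0],
    waived for rides scheduled 2+ hours in advance."""
    if scheduled_hours_ahead >= 2.0:
        return 1.0
    return min(max(multiplier, 1.0), 3.0)

@pytest.mark.parametrize("surge, expected", [
    (0.99, 1.0),   # below range: clamped up
    (1.0, 1.0),    # lower boundary
    (3.0, 3.0),    # upper boundary
    (3.01, 3.0),   # just above: capped (test #3)
])
def test_surge_boundaries(surge, expected):
    assert clamp_surge(surge) == expected

@pytest.mark.parametrize("hours, expected", [
    (1.99, 2.5),   # 1h59m ahead: surge still applies (test #4)
    (2.0, 1.0),    # exactly 2h: surge waived
])
def test_advance_booking_boundary(hours, expected):
    assert clamp_surge(2.5, scheduled_hours_ahead=hours) == expected
```

Parametrization keeps each boundary value visible as a distinct row, which doubles as documentation of your BVA analysis.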

Workshop Exercise 2: Healthcare Appointment System

Feature Description:

A healthcare portal allows patients to schedule appointments:

Scheduling rules:

  • Appointments available Monday-Friday, 8:00-17:00, in 30-minute slots
  • Same-day appointments only if booked before 14:00
  • Patients can book up to 3 months in advance
  • Maximum 2 active appointments per patient
  • Follow-up appointments must be 7-90 days after the original visit

Doctor availability:

  • Each doctor has a weekly schedule with available/blocked slots
  • Emergency slots: 2 per day reserved for urgent cases
  • Buffer time: 15 minutes between appointments (automatic)

Notifications:

  • Confirmation email immediately after booking
  • Reminder 24 hours before
  • Cancellation must be 4+ hours before appointment time

Patient types:

  • New patient: first visit requires 60-minute slot (double slot)
  • Returning patient: 30-minute slot
  • Urgent: can use emergency slots, no advance notice required

Design a complete test suite.

Guided Solution Outline

Techniques:

  • State transitions: Appointment lifecycle (available → booked → confirmed → completed/cancelled/no-show)
  • BVA: Time boundaries (8:00, 17:00, 14:00 cutoff), date boundaries (today, 3 months), follow-up (7/90 days), cancellation (4 hours)
  • EP: Patient types (new/returning/urgent), slot types (regular/emergency)
  • Decision tables: Booking eligibility (patient type x time x active appointments x day of week)
  • Combinatorial: Doctor x day x time x patient type x urgency
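To get a feel for why combinatorial testing needs pruning, the full cross-product of even a few of these factors can be enumerated (illustrative values only; a real suite would use pairwise selection rather than all combinations):

```python
from itertools import product

# Full cross-product of a subset of the factors named above.
doctors = ["dr_a", "dr_b"]
days = ["mon", "wed", "fri"]
patient_types = ["new", "returning", "urgent"]
slot_types = ["regular", "emergency"]

combos = list(product(doctors, days, patient_types, slot_types))
print(len(combos))  # 2 * 3 * 3 * 2 = 36 combinations
```

Adding time slots and urgency levels multiplies this further, which is the usual argument for pairwise coverage over exhaustive enumeration.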

Key boundary tests:

  • Book at 13:59 for same-day (accepted) vs. 14:00 (rejected)
  • Book exactly 3 months ahead vs. 3 months + 1 day
  • Follow-up at day 6 (too early) vs. day 7 vs. day 90 vs. day 91 (too late)
  • Cancel at 3h59m before (too late) vs. 4h00m before (OK)
  • Third appointment when already 2 active (rejected)
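These boundary checks translate directly into predicate functions on the booking logic. A sketch with hypothetical names, assuming inclusive boundaries where the spec says "7-90 days" and "4+ hours":

```python
from datetime import datetime, timedelta

SAME_DAY_CUTOFF_HOUR = 14
CANCEL_WINDOW = timedelta(hours=4)

def same_day_allowed(booked_at: datetime) -> bool:
    """Same-day appointments only if booked strictly before 14:00."""
    return booked_at.hour < SAME_DAY_CUTOFF_HOUR

def can_cancel(now: datetime, appointment_at: datetime) -> bool:
    """Cancellation must be 4+ hours before the appointment time."""
    return appointment_at - now >= CANCEL_WINDOW

def follow_up_in_window(original: datetime, follow_up: datetime) -> bool:
    """Follow-ups must be 7-90 days after the original visit (inclusive)."""
    return timedelta(days=7) <= follow_up - original <= timedelta(days=90)
```

Each boundary pair above (13:59/14:00, 3h59m/4h00m, day 6/7 and 90/91) exercises one of these predicates on both sides of its threshold.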

Workshop Exercise 3: Content Moderation Pipeline

Feature Description:

A social media platform processes user-generated content through a moderation pipeline:

Content types: Text posts, images, videos, comments, profile bios

Automated checks (in order):

  1. Spam detection (AI model, confidence 0-100%)
  2. Profanity filter (dictionary + ML, severity: mild/moderate/severe)
  3. Image/video classification (nudity, violence, hate symbols)
  4. Personal information detection (phone numbers, addresses, SSNs)
  5. Copyright check (for images/videos only)

Decision rules:

  • Confidence >= 95%: auto-reject
  • Confidence 70-94%: flag for human review
  • Confidence < 70%: auto-approve
  • Severe profanity: always reject regardless of confidence
  • Personal information detected: always flag for review
  • Appeals: rejected content can be appealed once

Priority queue for human review:

  • High: content from verified accounts or content with high engagement
  • Medium: standard content
  • Low: content from new accounts (< 7 days old)

Design test cases focusing on the decision logic and edge cases.

Guided Solution

Techniques:

  • Decision tables: Confidence level x profanity severity x content type → action
  • State transitions: Content lifecycle (submitted → check1 → check2 → … → approved/rejected/review → appealed → final)
  • BVA: Confidence thresholds (69/70, 94/95), account age (6/7 days)
  • EP: Content types, check categories, priority levels
  • Combinatorial: Content type x check result x account type

Decision table for core logic:

Confidence | Profanity  | PII Detected | Action
>= 95      | Any        | Any          | Auto-reject
70-94      | Not severe | No           | Human review
70-94      | Not severe | Yes          | Human review (flagged PII)
70-94      | Severe     | Any          | Auto-reject (severity override)
< 70       | None/mild  | No           | Auto-approve
< 70       | None/mild  | Yes          | Human review (PII override)
< 70       | Moderate   | No           | Auto-approve
< 70       | Severe     | Any          | Auto-reject (severity override)
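A decision table like this can be checked against a reference implementation row by row. A sketch (the function name and return strings are hypothetical):

```python
def moderate(confidence: float, profanity: str = "none", pii: bool = False) -> str:
    """Hypothetical implementation of the decision table above.
    confidence: 0-100 spam score; profanity: none/mild/moderate/severe."""
    if profanity == "severe":
        return "reject"   # severity override, regardless of confidence
    if confidence >= 95:
        return "reject"   # auto-reject threshold
    if confidence >= 70:
        return "review"   # human review band (PII flag noted separately)
    if pii:
        return "review"   # PII override on otherwise auto-approved content
    return "approve"
```

Note how the rule ordering in code mirrors the override priority in the table: severity first, then confidence thresholds, then the PII override.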

Boundary tests:

  • Confidence exactly 70: review (not approve)
  • Confidence exactly 95: reject (not review)
  • Confidence 69.9: approve
  • Confidence 94.9: review
  • Account age exactly 7 days: medium priority (not low)
  • Appeal after rejection: allowed once
  • Second appeal: rejected

State transition tests:

  • Content passes all 5 checks → approved
  • Content fails at check 1 (spam) → appropriate action taken; verify whether checks 2-5 are skipped (the spec does not say, so document this as an assumption to test)
  • Rejected → appealed → human reviews → overturned to approved
  • Rejected → appealed → upheld → no further appeal allowed

Producing a Professional Test Design Document

For each workshop exercise, your test design document should include:

1. Feature Analysis

  • Summary of the feature and its key behaviors
  • Identified ambiguities or assumptions

2. Technique Selection Rationale

  • Which techniques you chose and why
  • What each technique is targeting

3. Test Case Table

ID     | Technique | Category       | Input       | Expected Result | Priority
TC-001 | BVA       | Surge boundary | surge=3.0x  | Applied at max  | High
TC-002 | BVA       | Surge boundary | surge=3.01x | Capped at 3.0x  | High

4. Coverage Analysis

  • Requirements covered / total requirements
  • Structural coverage estimate
  • Identified gaps and risks

5. Risks and Assumptions

  • What could go wrong that is not covered
  • Assumptions made when requirements were unclear

Key Takeaways

  • Real-world test design requires combining multiple techniques for a single feature
  • Always start by analyzing the feature and selecting techniques before writing test cases
  • Document your technique selection rationale — it helps reviewers and future maintainers
  • Ambiguous requirements are common — document assumptions and test both interpretations when possible
  • Workshop practice is the fastest way to build test design intuition
  • A professional test design document communicates not just WHAT you tested but WHY you chose that approach