Why Test Case Quality Matters

A test case is only as good as someone else's ability to execute it. If a colleague cannot follow your test case and get the same result, the test case has failed its purpose — regardless of whether the software passes or fails.

Poor test cases lead to:

  • Inconsistent results — different testers interpret steps differently
  • Wasted time — testers spend time figuring out what the case means instead of testing
  • False confidence — vague expected results make it easy to mark tests as “passed” incorrectly
  • Maintenance burden — unclear cases are harder to update when requirements change

Anatomy of a Test Case

Every test case should contain these elements:

Test Case ID

A unique identifier for tracking and reference. Follow a consistent naming convention.

Examples: TC-LOGIN-001, TC-CART-015, AUTH-TC-003
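A naming convention is easiest to keep consistent when it can be checked mechanically. A minimal sketch, assuming a hypothetical convention of upper-case segments followed by a zero-padded sequence number (the pattern itself is illustrative, not a standard):

```python
import re

# Hypothetical convention: one or more upper-case segments joined by
# hyphens, ending in a zero-padded three-digit number,
# e.g. TC-LOGIN-001 or AUTH-TC-003.
ID_PATTERN = re.compile(r"^[A-Z]+(?:-[A-Z]+)*-\d{3}$")

def is_valid_id(test_case_id: str) -> bool:
    """Return True if the ID follows the team's naming convention."""
    return bool(ID_PATTERN.match(test_case_id))

print(is_valid_id("TC-LOGIN-001"))  # True
print(is_valid_id("AUTH-TC-003"))   # True
print(is_valid_id("tc_login_1"))    # False
```

A check like this can run in CI over a test-management export so malformed IDs never reach the shared repository.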

Title

A concise but descriptive sentence that explains exactly what the test verifies.

Bad titles:

  • “Test login” — too vague
  • “Login” — not a sentence
  • “Verify the login functionality works correctly” — too generic

Good titles:

  • “Verify that user with valid email and password is redirected to dashboard after login”
  • “Verify that login fails with error message when password is incorrect”
  • “Verify that account is locked after 5 consecutive failed login attempts”

The pattern: Verify that [actor] [action] [expected outcome] when [condition]

Preconditions

State everything that must be true before the test can begin.

Examples:

  • User account exists with email test@example.com and password ValidPass123!
  • User is on the login page (/login)
  • User is not currently logged in
  • Database contains at least 3 products in the “Electronics” category
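Preconditions are also a natural place to fail fast: if any of them is false, running the steps is meaningless. A minimal sketch, using a stubbed dictionary environment (field names like "current_page" are illustrative stand-ins for a real application):

```python
# Stubbed application state standing in for the real environment.
env = {
    "users": {"test@example.com": "ValidPass123!"},
    "current_page": "/login",
    "logged_in": False,
}

# Each precondition from the list above, as a named check.
preconditions = {
    "account exists": lambda e: "test@example.com" in e["users"],
    "on login page": lambda e: e["current_page"] == "/login",
    "not logged in": lambda e: not e["logged_in"],
}

def check_preconditions(e):
    """Raise before the first step if any precondition is not met."""
    failed = [name for name, check in preconditions.items() if not check(e)]
    if failed:
        raise RuntimeError(f"Preconditions not met: {failed}")

check_preconditions(env)  # passes silently for this stubbed state
```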

Test Steps

Numbered, atomic actions that the tester performs. Each step should describe one action.

Bad steps:

1. Go to the website and log in with valid credentials and navigate to settings

Good steps:

1. Navigate to https://app.example.com/login
2. Enter "test@example.com" in the Email field
3. Enter "ValidPass123!" in the Password field
4. Click the "Sign In" button
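Compound steps can be caught with a simple lint pass. A rough heuristic sketch (flagging the word "and" is not a parser, and will have false positives on legitimate step text):

```python
def non_atomic_steps(steps):
    """Return steps that appear to chain several actions together."""
    return [s for s in steps if " and " in s.lower()]

bad = ["Go to the website and log in with valid credentials and navigate to settings"]
good = [
    "Navigate to https://app.example.com/login",
    'Enter "test@example.com" in the Email field',
    'Enter "ValidPass123!" in the Password field',
    'Click the "Sign In" button',
]

print(non_atomic_steps(bad))   # flags the compound step
print(non_atomic_steps(good))  # []
```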

Expected Results

What should happen after each step or group of steps. Be specific and measurable.

Bad expected result:

  • “Login works correctly”

Good expected results:

  • “User is redirected to /dashboard”
  • “Welcome message displays: ‘Hello, John!’”
  • “Session cookie is created with 30-minute expiry”
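The difference shows up clearly once the results become assertions: “works correctly” cannot be checked by a machine, but each specific result can. A sketch against a stubbed login response (the field names are illustrative, not a real API):

```python
# Stubbed login response standing in for the real application's output.
response = {
    "redirect": "/dashboard",
    "message": "Hello, John!",
    "cookie_max_age_seconds": 1800,
}

# "Login works correctly" is not checkable. The specific results are:
assert response["redirect"] == "/dashboard"
assert response["message"] == "Hello, John!"
assert response["cookie_max_age_seconds"] == 30 * 60  # 30-minute expiry
```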

Priority

Classification of test case importance: Critical, High, Medium, Low.

Test Data

Specific values used in the test. List them explicitly rather than saying “valid data.”

Writing Quality Criteria

Use the ATOMIC checklist:

  • Accurate — Tests exactly what it claims to test
  • Traceable — Links to a requirement, user story, or bug
  • One focus — Tests one specific behavior
  • Measurable — Expected results are objectively verifiable
  • Independent — Does not depend on other test cases
  • Complete — All necessary steps and data are included
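Several of the ATOMIC criteria can be checked automatically before a case is accepted for review. A minimal sketch covering Complete, Measurable, and Traceable on a test-case record (the field names and the requirement ID "US-42" are illustrative):

```python
def completeness_issues(case):
    """Return a list of ATOMIC-style problems with a test-case record."""
    issues = []
    if not case.get("id"):
        issues.append("missing ID")
    if not case.get("title", "").startswith("Verify that"):
        issues.append("title does not follow the 'Verify that ...' pattern")
    if not case.get("steps"):
        issues.append("no steps")
    if not case.get("expected"):
        issues.append("no expected results")
    if "trace" not in case:
        issues.append("not traceable to a requirement")
    return issues

case = {
    "id": "TC-LOGIN-001",
    "title": "Verify that login fails with error message when password is incorrect",
    "steps": ["Navigate to https://app.example.com/login"],
    "expected": ["Error message displays: 'Incorrect email or password'"],
    "trace": "US-42",  # illustrative requirement/user-story ID
}
print(completeness_issues(case))  # []
```

"One focus" and "Independent" still need human judgment; a lint like this only removes the mechanical gaps.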

Good vs Bad Test Cases

Bad test case:

Title: Test shopping cart
Steps:
1. Add items to cart
2. Check cart
Expected: Cart works correctly

Good test case:

Title: Verify that adding a product to empty cart shows cart count as 1
Preconditions:
- User is logged in
- Cart is empty
- Product "Wireless Mouse" (SKU: WM-001, Price: $29.99) exists
Steps:
1. Navigate to product page for "Wireless Mouse" (SKU: WM-001)
2. Click "Add to Cart" button
3. Observe the cart icon in the header
Expected Results:
1. Product page loads with correct details
2. Success toast appears: "Wireless Mouse added to cart"
3. Cart icon shows badge with number "1"
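Because the good test case is specific, it translates almost line for line into an automated check. A sketch with a stubbed cart (the Cart class and product table stand in for the real application):

```python
# Product data mirroring the precondition: "Wireless Mouse" (SKU WM-001).
PRODUCTS = {"WM-001": {"name": "Wireless Mouse", "price": 29.99}}

class Cart:
    """Stub standing in for the real shopping cart."""
    def __init__(self):
        self.items = []  # precondition: cart is empty

    def add(self, sku):
        self.items.append(sku)
        return f"{PRODUCTS[sku]['name']} added to cart"  # the success toast

    @property
    def badge_count(self):
        return len(self.items)

cart = Cart()
toast = cart.add("WM-001")                      # step 2: click "Add to Cart"
assert toast == "Wireless Mouse added to cart"  # expected result 2
assert cart.badge_count == 1                    # expected result 3
```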

Exercise: Rewrite Bad Test Cases

Rewrite the following test cases to meet the ATOMIC quality criteria.

Bad test case 1:

Title: Test password reset
Steps: Reset the password
Expected: It should work

Bad test case 2:

Title: Check search
Steps:
1. Search for something
2. See results
Expected: Results should be correct

Solution

Rewritten test case 1:

Title: Verify that user receives password reset email within 60 seconds after requesting reset
Preconditions:
- User account exists with email test@example.com
- User is on the login page
- Email inbox for test@example.com is accessible
Steps:
1. Click "Forgot Password?" link on login page
2. Enter "test@example.com" in the email field
3. Click "Send Reset Link" button
4. Check email inbox for test@example.com
Expected Results:
1. Forgot password page opens with email input field
2. Email address is accepted (no validation error)
3. Success message displays: "Reset link sent to your email"
4. Email from "noreply@app.com" received with subject "Password Reset Request" within 60 seconds. Email contains a reset link that includes a unique token.

Rewritten test case 2:

Title: Verify that searching for existing product name returns matching results sorted by relevance
Preconditions:
- Database contains products: "Wireless Mouse", "Wireless Keyboard", "Wired Mouse"
- User is on the homepage
Steps:
1. Click the search bar in the header
2. Type "wireless mouse" in the search field
3. Press Enter or click the search icon
Expected Results:
1. Search bar is focused and ready for input
2. Search suggestions dropdown appears showing matching products
3. Search results page displays:
   - "Wireless Mouse" as first result
   - "Wireless Keyboard" as second result (partial match)
   - Result count shows "2 results for 'wireless mouse'"
   - Each result shows product image, name, and price

Common Pitfalls

Pitfall 1: Compound steps. “Log in and navigate to settings and change the password” should be three separate steps.

Pitfall 2: Assumed knowledge. Never assume the tester knows where buttons are or what valid data looks like. Be explicit.

Pitfall 3: Missing negative path. Always consider: what happens if the user enters wrong data? What if the network fails?

Pitfall 4: Vague expected results. “Page loads correctly” is not testable. Specify what “correctly” means: specific elements, specific text, specific behavior.

Pitfall 5: Not specifying test data. “Enter a valid email” is ambiguous. Write the actual email: test@example.com.

Key Takeaways

  • Every test case needs: ID, clear title, preconditions, numbered steps, specific expected results
  • Follow the ATOMIC checklist: Accurate, Traceable, One focus, Measurable, Independent, Complete
  • Write step-by-step atomic actions — one action per step
  • Expected results must be specific and objectively verifiable
  • Include explicit test data rather than saying “valid data” or “correct values”
  • Independent test cases can run in any order without dependencies