What Is Error Guessing?
Error guessing is an experience-based test design technique where testers use their knowledge of common mistakes, typical defects, and past failures to anticipate where the software is likely to break. Unlike formal techniques that follow rules, error guessing leverages intuition and domain expertise.
Why Error Guessing Works
Experienced testers develop an intuition for where defects hide. This comes from:
- Years of finding similar bugs across different projects
- Knowledge of common programming mistakes
- Understanding of typical user behaviors that break software
- Awareness of system integration points that often fail
The Defect Taxonomy Approach
To make error guessing systematic rather than purely intuitive, build a defect taxonomy — a categorized catalog of common error patterns:
Common Error Categories
Input Handling:
| Error Pattern | Test |
|---|---|
| Null/empty | Submit empty form fields |
| Special characters | `<script>alert(1)</script>`, `'; DROP TABLE--` |
| Extremely long | 10,000 character string in name field |
| Unicode | Emojis, RTL text, Chinese characters |
| Negative numbers | -1 in quantity field |
| Zero | 0 items, $0 payment |
| Leading/trailing spaces | " admin " as username |
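The input-handling patterns above lend themselves to table-driven checks. The sketch below uses a hypothetical `validate_name` function as a stand-in for whatever field validation your application actually has; the cases mirror the catalog rows.

```python
def validate_name(value: str) -> bool:
    """Toy rule for illustration: accept 1-100 visible characters,
    reject empty or whitespace-only input after trimming."""
    stripped = value.strip()
    return 0 < len(stripped) <= 100

# Each tuple: (catalog pattern, input, expected validity under the toy rule)
CASES = [
    ("null/empty",             "",                           False),
    ("whitespace only",        "   ",                        False),
    ("leading/trailing spaces", " admin ",                   True),   # trimmed to "admin"
    ("extremely long",         "x" * 10_000,                 False),
    ("special characters",     "<script>alert(1)</script>",  True),   # length-valid; escaping is a separate check
    ("unicode",                "名前🙂",                      True),
]

for label, value, expected in CASES:
    assert validate_name(value) == expected, label
```

The point is less the validator than the habit: once the catalog is a table, every new error pattern becomes one more row rather than a new ad-hoc test.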
Computation:
| Error Pattern | Test |
|---|---|
| Division by zero | Calculate average of 0 items |
| Integer overflow | Quantity = 2,147,483,647 + 1 |
| Float precision | $0.1 + $0.2 = ? |
| Date arithmetic | Add 1 month to Jan 31 |
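Three of these computation patterns can be demonstrated in a few lines of Python; the behavior shown is standard-library behavior, not application-specific.

```python
from datetime import date
from decimal import Decimal

# Float precision: binary floats cannot represent 0.1 or 0.2 exactly,
# so naive money math drifts.
assert 0.1 + 0.2 != 0.3                                    # the classic surprise
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")   # exact with Decimal

# Date arithmetic: "add 1 month to Jan 31" has no single right answer.
# A naive month bump lands on the nonexistent Feb 31 and raises:
try:
    date(2025, 1, 31).replace(month=2)
except ValueError:
    pass  # "day is out of range for month"

# Division by zero: averaging 0 items.
items: list[float] = []
try:
    avg = sum(items) / len(items)
except ZeroDivisionError:
    avg = None
assert avg is None
```

(Integer overflow is the one pattern Python hides, since its `int` is arbitrary-precision; in languages with fixed-width integers, `2,147,483,647 + 1` wraps or traps.)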
State/Timing:
| Error Pattern | Test |
|---|---|
| Double submit | Click submit button twice quickly |
| Back button | Submit form, press back, submit again |
| Session expired | Leave page open overnight, then submit |
| Concurrent edit | Two users edit the same record |
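The double-submit row is worth a concrete sketch. The in-memory order store and idempotency-key scheme below are purely illustrative, not any specific framework's API; the idea is that a repeated submit with the same key must return the original result rather than create a duplicate.

```python
orders: dict[str, dict] = {}  # toy in-memory store, keyed by idempotency key

def place_order(idempotency_key: str, payload: dict) -> dict:
    """Create an order, or return the existing one for a repeated key."""
    if idempotency_key not in orders:
        orders[idempotency_key] = {"id": len(orders) + 1, **payload}
    return orders[idempotency_key]

first  = place_order("key-123", {"item": "book", "qty": 1})
second = place_order("key-123", {"item": "book", "qty": 1})  # double click

assert first["id"] == second["id"]   # same order returned
assert len(orders) == 1              # no duplicate created
```

An error-guessing session against an endpoint *without* this guard is exactly where "click submit twice quickly" pays off.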
Integration:
| Error Pattern | Test |
|---|---|
| Service down | What if the payment gateway is unavailable? |
| Slow response | API takes 30 seconds to respond |
| Invalid response | API returns malformed JSON |
Real-World Example: Testing a Shopping Cart
Applying error guessing to an e-commerce cart:
| # | Error Guess | Rationale |
|---|---|---|
| 1 | Add item, then it goes out of stock | Inventory race condition |
| 2 | Add 999,999 of the same item | Quantity overflow |
| 3 | Apply expired coupon code | Date validation |
| 4 | Change currency mid-checkout | State corruption |
| 5 | Open cart in two tabs, checkout in both | Concurrent modification |
| 6 | Add item priced at $0.001 | Rounding in total |
| 7 | Cart with 100+ unique items | Performance/rendering |
| 8 | Remove all items and proceed to checkout | Empty state handling |
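Guess #6 (an item priced at $0.001) is easy to turn into an executable check. The cart logic here is a toy built for illustration: it uses `Decimal` so that sub-cent prices round deterministically instead of drifting the way binary floats do.

```python
from decimal import Decimal, ROUND_HALF_UP

def cart_total(prices: list[Decimal]) -> Decimal:
    """Sum item prices and round the total to whole cents."""
    total = sum(prices, Decimal("0"))
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

prices = [Decimal("0.001")] * 5                 # five sub-cent items
assert cart_total(prices) == Decimal("0.01")    # 0.005 rounds up, not down to 0.00
assert cart_total([]) == Decimal("0.00")        # guess #8: empty cart still totals cleanly
```

Whether half-up is the *right* rounding is a business decision; the error guess exists precisely to force that question before a customer's invoice does.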
Building Your Personal Defect Catalog
Structured Error Guessing Framework
Instead of relying purely on memory, create a structured catalog organized by:
1. Category — What type of error?
2. Trigger — What input or action causes it?
3. Symptom — What does the user see?
4. Likelihood — How common is this in your domain?
Example Catalog Entries:
| Category | Trigger | Symptom | Likelihood |
|---|---|---|---|
| Input | Paste text with hidden Unicode chars | Data corruption | Medium |
| Timing | Submit during auto-save | Duplicate or lost data | High |
| Auth | Token expires during multi-step form | Silent failure, lost work | High |
| Data | Import CSV with extra columns | Crash or wrong mapping | Medium |
| Display | Very long text without spaces | Layout breaks | High |
| Locale | Date field with DD/MM vs MM/DD | Wrong date stored | High |
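One way to keep such a catalog machine-readable (illustrative, not prescriptive) is a small record type, so entries can be filtered by likelihood or category at the start of a test session:

```python
from dataclasses import dataclass

@dataclass
class DefectPattern:
    category: str    # e.g. "Input", "Timing", "Auth"
    trigger: str     # what input or action causes it
    symptom: str     # what the user sees
    likelihood: str  # "Low" | "Medium" | "High"

CATALOG = [
    DefectPattern("Input",  "Paste text with hidden Unicode chars", "Data corruption",        "Medium"),
    DefectPattern("Timing", "Submit during auto-save",              "Duplicate or lost data", "High"),
    DefectPattern("Auth",   "Token expires during multi-step form", "Silent failure",         "High"),
]

# When time is short, pull the high-likelihood patterns first:
high_risk = [p for p in CATALOG if p.likelihood == "High"]
assert len(high_risk) == 2
```

A spreadsheet works just as well; the structure matters more than the tooling.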
Growing Your Catalog
After every test cycle:
- Review bugs found — add new patterns to the catalog
- Review bugs missed — what error guess would have caught them?
- Review production incidents — what patterns should you test for?
Combining Error Guessing with Formal Techniques
Best practice workflow:
- Apply formal techniques first: equivalence partitioning (EP), boundary value analysis (BVA), decision tables
- Then apply error guessing to find what formal techniques missed
- Focus error guessing on areas of highest risk and complexity
Exercise: Error Guessing Session
Scenario: You’re testing a user profile edit page with fields: Display Name, Bio (max 500 chars), Profile Picture upload, Date of Birth, and Website URL.
Task: Generate at least 10 error guessing test cases. Organize them by category and explain your rationale.
Hint
Think about each field: What’s the worst thing a user could enter? What happens at the limits? What about file upload edge cases (0 bytes, 10GB, wrong format, executable file)? What about date edge cases (future date, Feb 29, year 0)?
Solution
| # | Category | Test | Rationale |
|---|---|---|---|
| 1 | Input | Display Name = `<img src=x onerror=alert(1)>` | XSS in stored field |
| 2 | Input | Bio = exactly 500 chars then add 1 more | Boundary enforcement |
| 3 | Input | Bio with only emojis (500 emojis) | Unicode handling, char counting |
| 4 | Upload | Profile pic = 0 byte file | Empty file handling |
| 5 | Upload | Profile pic = renamed .exe to .jpg | File type validation bypass |
| 6 | Upload | Profile pic = 50MB image | Size limit enforcement |
| 7 | Date | DOB = tomorrow’s date | Future date should be invalid |
| 8 | Date | DOB = Feb 29, 2023 (non-leap year) | Invalid date handling |
| 9 | URL | Website = `javascript:alert(1)` | XSS via URL scheme |
| 10 | URL | Website = very long URL (2000+ chars) | Length validation |
| 11 | State | Edit name, don’t save, navigate away | Unsaved changes warning? |
| 12 | Concurrent | Edit profile in two tabs, save both | Last-write-wins or conflict? |
| 13 | Input | Display Name = all spaces | Whitespace-only validation |
| 14 | Upload | Upload pic, then delete account | Orphaned file cleanup |
Pro Tips
- Keep a personal bug journal. Track every interesting bug you find — over time, it becomes your most valuable testing asset.
- Learn from production incidents. Post-mortems reveal error patterns that formal test design rarely covers.
- Think like an attacker. Security researchers are expert error guessers — study OWASP Top 10 for web testing patterns.
- Consider the user’s environment. Slow networks, old browsers, ad blockers, accessibility tools — these create errors that developers rarely anticipate.
- Share your catalog with your team. A team defect taxonomy is more powerful than individual knowledge. Make it a living document that grows with every sprint.