TL;DR
- Manual testing uses human judgment to find bugs automation misses — usability issues, visual defects, unexpected behaviors
- Core skills: test case design, exploratory testing, bug reporting, requirement analysis
- Test case structure: ID, title, preconditions, steps, expected result, actual result
- Bug reports need: summary, steps to reproduce, expected vs actual, severity, screenshots
- Manual testing complements automation — both are essential for quality
Best for: New QA engineers, career changers entering testing, developers wanting QA fundamentals
Skip if: You only work with automated pipelines and never touch UI
Reading time: 18 minutes
Your automation suite passes. Users report the checkout button is invisible on mobile. The login form accepts empty passwords. The error message says “Error: null.”
Automation tests what you tell it to test. Manual testing finds what you didn’t think to check.
This tutorial teaches manual testing fundamentals — test design, execution, bug reporting, and the skills that make QA engineers valuable beyond clicking buttons.
What is Manual Testing?
Manual testing is software testing performed by humans. Testers interact with the application, verify functionality against requirements, and report defects.
What manual testing involves:
- Reading and understanding requirements
- Designing test cases and test scenarios
- Executing tests step by step
- Comparing actual results with expected results
- Reporting and tracking bugs
- Retesting fixed issues
Why manual testing matters:
- Finds usability issues — automation can’t judge if a UI is confusing
- Discovers edge cases — human intuition catches unexpected scenarios
- Validates user experience — real users don’t follow scripts
- Adapts quickly — no code to update when requirements change
- Cost-effective for small projects — automation setup takes time
Types of Manual Testing
Functional Testing
Verifies that features work according to requirements.
Requirement: User can reset password via email
Test:
1. Click "Forgot Password"
2. Enter registered email
3. Click Submit
4. Check email for reset link
5. Click link, enter new password
6. Login with new password
Expected: User successfully logs in with new password
Exploratory Testing
Unscripted testing where testers explore the application freely.
Session goal: Explore checkout flow for edge cases
Time: 30 minutes
Notes:
- What happens with 100 items in cart?
- Can I checkout with expired credit card?
- What if I change quantity during payment?
- Does back button break the flow?
- What happens on network timeout?
Smoke Testing
Quick tests to verify basic functionality works after a new build.
Smoke Test Checklist:
□ Application launches
□ Login works
□ Main navigation accessible
□ Core feature (e.g., search) functions
□ No console errors on key pages
□ Logout works
Regression Testing
Re-testing existing functionality after code changes.
Changed: User profile page redesign
Regression areas:
- Profile editing still works
- Avatar upload functional
- Password change works
- Email notifications still sent
- API endpoints return same data
- Other pages linking to profile work
User Acceptance Testing (UAT)
End users validate the software meets their needs.
UAT Scenario: Sales manager processes monthly report
Steps:
1. Login as sales manager
2. Navigate to Reports > Monthly Sales
3. Select date range: last month
4. Generate report
5. Verify data matches known figures
6. Export to PDF
7. Confirm format is usable
Acceptance criteria: Report matches accounting data within 1%
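The 1% tolerance in that criterion can be checked with a few lines of arithmetic rather than by eye. A minimal Python sketch, using hypothetical totals:

```python
# Hypothetical totals for illustration; use the real report and accounting figures.
report_total = 124_750.00
accounting_total = 125_300.00

# Relative difference, measured against the accounting figure (the source of truth).
relative_diff = abs(report_total - accounting_total) / accounting_total

# UAT acceptance criterion: within 1%.
print(f"Difference: {relative_diff:.2%} -> {'PASS' if relative_diff <= 0.01 else 'FAIL'}")
```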
Writing Test Cases
A test case is a documented set of steps to verify specific functionality.
Test Case Structure
Test Case ID: TC-LOGIN-001
Title: Successful login with valid credentials
Module: Authentication
Priority: High
Preconditions:
- User account exists
- User is on login page
Test Steps:
1. Enter valid username "testuser@example.com"
2. Enter valid password "SecurePass123"
3. Click "Login" button
Expected Result:
- User redirected to dashboard
- Welcome message displays username
- Session created (visible in dev tools)
Actual Result: [Filled during execution]
Status: [Pass/Fail]
Tested By: [Name]
Date: [Date]
Test Case Best Practices
1. One thing per test case
# Bad - tests multiple things
Title: Login functionality
# Good - specific and focused
Title: Login fails with incorrect password
Title: Login fails with non-existent email
Title: Login succeeds with valid credentials
2. Clear, actionable steps
# Bad - vague
1. Try to login
2. Check if it works
# Good - specific
1. Enter email "user@test.com" in email field
2. Enter password "wrong123" in password field
3. Click "Sign In" button
4. Observe error message
3. Measurable expected results
# Bad - subjective
Expected: Page loads quickly
# Good - measurable
Expected: Page loads within 3 seconds, all images visible
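Measurable also means checkable by script. As a rough illustration, the timing half of that criterion can be verified with the third-party requests library (the URL is a placeholder; this measures server response time, not full page render, so use browser dev tools for render timing):

```python
import time

import requests  # third-party: pip install requests

URL = "https://staging.example.com/login"  # placeholder URL

start = time.perf_counter()
response = requests.get(URL, timeout=10)
elapsed = time.perf_counter() - start

# Checks only the server response; image visibility still needs a human or a browser tool.
verdict = "PASS" if elapsed <= 3.0 else "FAIL"
print(f"HTTP {response.status_code} in {elapsed:.2f}s -> {verdict} against the 3-second criterion")
```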
Test Case Template
## Test Case: [ID]
**Title:** [Descriptive title]
**Module:** [Feature/Module name]
**Priority:** [High/Medium/Low]
**Type:** [Functional/UI/Security/Performance]
### Preconditions
- [Condition 1]
- [Condition 2]
### Test Data
| Field | Value |
|-------|-------|
| Username | testuser@example.com |
| Password | TestPass123 |
### Steps
| # | Action | Expected Result |
|---|--------|-----------------|
| 1 | Navigate to /login | Login page displays |
| 2 | Enter username | Field accepts input |
| 3 | Enter password | Password masked |
| 4 | Click Login | Dashboard loads |
### Postconditions
- User session created
- Login timestamp recorded
### Notes
[Any additional context]
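If your team keeps test cases in markdown or a spreadsheet, generating them from structured data keeps the format consistent across authors. A minimal Python sketch of that idea (the TestCase and Step classes are illustrative, not part of any test management tool):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str
    expected: str

@dataclass
class TestCase:
    case_id: str
    title: str
    priority: str
    preconditions: list[str] = field(default_factory=list)
    steps: list[Step] = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render the test case in the template format shown above."""
        lines = [f"## Test Case: {self.case_id}",
                 f"**Title:** {self.title}",
                 f"**Priority:** {self.priority}",
                 "### Preconditions"]
        lines += [f"- {p}" for p in self.preconditions]
        lines += ["### Steps",
                  "| # | Action | Expected Result |",
                  "|---|--------|-----------------|"]
        lines += [f"| {i} | {s.action} | {s.expected} |"
                  for i, s in enumerate(self.steps, 1)]
        return "\n".join(lines)

tc = TestCase("TC-LOGIN-001", "Successful login with valid credentials", "High",
              preconditions=["User account exists", "User is on login page"],
              steps=[Step("Navigate to /login", "Login page displays"),
                     Step("Click Login", "Dashboard loads")])
print(tc.to_markdown())
```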
Exploratory Testing
Exploratory testing combines test design and execution: you learn, test, and adapt in real time.
Session-Based Exploratory Testing
Session Charter:
Explore: Payment processing
Duration: 45 minutes
Focus: Edge cases and error handling
Session Notes:
[10:00] Started with valid payment - works
[10:05] Tried $0.01 payment - accepted (is this correct?)
[10:12] Tested negative amount - error message unclear
[10:18] Payment with special characters in name - crashes
[10:25] Timeout during processing - no recovery option
[10:35] Multiple rapid submissions - duplicate charges!
Bugs Found: 3
Questions: 2
Areas for more testing: Timeout handling, input validation
Exploratory Testing Techniques
Boundary Testing
Field: Age (accepts 18-100)
Test values:
- 17 (below minimum)
- 18 (at minimum)
- 19 (above minimum)
- 99 (below maximum)
- 100 (at maximum)
- 101 (above maximum)
- 0, -1, 999, empty, text
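The value list above follows a mechanical pattern: one value below, at, and above each boundary, plus a few hostile extras. That pattern can be generated for any numeric field. A small Python sketch:

```python
def boundary_values(minimum: int, maximum: int) -> list:
    """Classic boundary-value set for a field accepting [minimum, maximum]."""
    core = [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]
    # Hostile extras worth trying regardless of the range.
    extras = [0, -1, 999, "", "text"]
    return core + extras

print(boundary_values(18, 100))
# [17, 18, 19, 99, 100, 101, 0, -1, 999, '', 'text']
```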
State Transition Testing
Shopping Cart States:
Empty → Has Items → Checkout → Payment → Confirmation
Test transitions:
- Empty → direct to Checkout (should fail)
- Has Items → back to Empty → Checkout (should fail)
- Payment → browser back → Payment again (duplicate?)
- Confirmation → refresh (what happens?)
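A useful trick is to write the valid state graph down and treat every transition outside it as a negative test. A hedged Python sketch, assuming the cart flow above allows stepping backward one state (adjust the graph to your actual design):

```python
# Assumed valid transitions for the checkout flow above (illustrative, not a spec).
VALID = {
    "Empty": {"Has Items"},
    "Has Items": {"Empty", "Checkout"},
    "Checkout": {"Has Items", "Payment"},
    "Payment": {"Checkout", "Confirmation"},
    "Confirmation": set(),
}

# Every state pair NOT in the graph is a candidate negative test:
# the app should block it or fail gracefully.
states = list(VALID)
for src in states:
    for dst in states:
        if src != dst and dst not in VALID[src]:
            print(f"Negative test: {src} -> {dst} should be rejected")
```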
Error Guessing
Common error scenarios to try:
- Empty required fields
- Special characters: <script>, ', ", &, %
- Very long inputs (1000+ characters)
- Unicode: 你好, مرحبا, 🎉
- Negative numbers where positive expected
- Future dates where past expected
- Concurrent actions (two tabs)
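Keeping these inputs in a small script makes them easy to paste into fields mid-session. A minimal sketch of such a catalog (seeded from the list above; extend it with whatever has bitten your product before):

```python
ERROR_GUESS_INPUTS = {
    "empty": "",
    "whitespace_only": "   ",
    "script_tag": "<script>alert(1)</script>",
    "sql_quote": "' OR '1'='1",
    "very_long": "A" * 1000,
    "unicode_mixed": "你好 مرحبا 🎉",
    "negative_number": "-1",
    "future_date": "2099-12-31",
}

for name, value in ERROR_GUESS_INPUTS.items():
    # Truncate long values so the printed catalog stays readable.
    preview = value if len(value) <= 40 else value[:37] + "..."
    print(f"{name:16} -> {preview!r}")
```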
Bug Reporting
A bug report should enable anyone to reproduce the issue.
Bug Report Structure
Bug ID: BUG-2026-0142
Title: Checkout fails when cart contains more than 50 items
Severity: High
Priority: High
Status: New
Environment: Production, Chrome 120, macOS 14.2
Steps to Reproduce:
1. Login as any user
2. Add 51 items to cart (any products)
3. Click "Proceed to Checkout"
4. Enter valid shipping address
5. Click "Continue to Payment"
Expected Result:
Payment page loads with order summary
Actual Result:
Page displays "Error 500: Internal Server Error"
Console shows: "TypeError: Cannot read property 'length' of undefined"
Attachments:
- screenshot_error.png
- console_log.txt
- network_har.har
Additional Info:
- Works fine with 50 or fewer items
- Issue started after deploy on Jan 25
- Affects all browsers tested
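Whether bugs are filed through a form or a tracker API, keeping the fields as structured data makes reports uniform and hard to leave half-empty. A minimal Python sketch (field names are illustrative, not any tracker's schema):

```python
def render_bug_report(bug: dict) -> str:
    """Render a bug dict in the structure shown above."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(bug["steps"], 1))
    return (f"Title: {bug['title']}\n"
            f"Severity: {bug['severity']}\n"
            f"Environment: {bug['environment']}\n"
            f"Steps to Reproduce:\n{steps}\n"
            f"Expected Result: {bug['expected']}\n"
            f"Actual Result: {bug['actual']}")

print(render_bug_report({
    "title": "Checkout fails when cart contains more than 50 items",
    "severity": "High",
    "environment": "Production, Chrome 120, macOS 14.2",
    "steps": ["Add 51 items to cart", "Click 'Proceed to Checkout'",
              "Click 'Continue to Payment'"],
    "expected": "Payment page loads with order summary",
    "actual": "Error 500: Internal Server Error",
}))
```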
Severity vs Priority
| Severity | Description | Example |
|---|---|---|
| Critical | System unusable | App crashes on launch |
| High | Major feature broken | Cannot complete purchase |
| Medium | Feature impaired | Search filters don’t work |
| Low | Minor issue | Typo in footer |
| Priority | Description | Example |
|---|---|---|
| P1 | Fix immediately | Payment broken before Black Friday |
| P2 | Fix this sprint | Login issue affecting 10% of users |
| P3 | Fix when possible | UI alignment on rare screen size |
| P4 | Nice to have | Suggestion for improvement |
Bug Report Best Practices
1. Descriptive titles
# Bad
Login broken
# Good
Login fails with "Invalid token" error when using SSO on Firefox
2. Minimal reproduction steps
# Bad - too many steps
1. Open browser
2. Type URL
3. Press Enter
4. Wait for page
5. Find login button
...
# Good - essential steps only
1. Go to /login
2. Click "Sign in with Google"
3. Complete Google auth
4. Observe error on redirect
3. Include evidence
Attachments that help:
- Screenshots with annotations
- Screen recordings
- Console logs
- Network requests (HAR file)
- Database state if relevant
Test Documentation
Test Plan
High-level document describing testing approach.
# Test Plan: E-commerce Checkout Redesign
## Scope
- New checkout UI
- Payment integration
- Order confirmation flow
- Email notifications
## Out of Scope
- Product catalog (unchanged)
- User registration (unchanged)
- Admin panel
## Test Approach
- Functional testing: All checkout scenarios
- Usability testing: 5 user sessions
- Regression testing: Related features
- Performance testing: Load time benchmarks
## Entry Criteria
- Feature code complete
- Test environment ready
- Test data available
## Exit Criteria
- All critical tests pass
- No P1/P2 bugs open
- Performance within 3s target
## Resources
- 2 QA engineers
- 1 week duration
- Staging environment
## Risks
- Payment sandbox limitations
- Third-party shipping API delays
Test Summary Report
# Test Summary: Sprint 42
## Overview
- **Period:** Jan 20-27, 2026
- **Build:** v2.4.1
- **Tester:** QA Team
## Results
| Type | Total | Passed | Failed | Blocked |
|------|-------|--------|--------|---------|
| Functional | 145 | 138 | 5 | 2 |
| Regression | 89 | 89 | 0 | 0 |
| Exploratory | 8 sessions | — | 12 bugs | — |
## Bugs Found
- Critical: 0
- High: 2
- Medium: 7
- Low: 3
## Open Issues
1. [BUG-142] Payment timeout not handled
2. [BUG-145] Mobile menu overlap
## Recommendation
Ready for release with known issues documented.
Low risk - issues affect edge cases only.
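When comparing sprints, compute the pass rate the same way every time; in particular, decide whether blocked tests count in the denominator. A tiny sketch using the functional row in the Results table above, assuming blocked tests are excluded because they never ran:

```python
total, passed, failed, blocked = 145, 138, 5, 2

# Blocked tests never executed, so exclude them from the denominator
# (one common convention; pick one and stick with it).
executed = total - blocked
print(f"Pass rate: {passed / executed:.1%} ({passed}/{executed} executed)")
print(f"Failed: {failed}  Blocked: {blocked}")
```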
Manual Testing with AI Assistance
AI tools can help with manual testing tasks when used appropriately.
What AI does well:
- Generate test case ideas from requirements
- Suggest edge cases and boundary values
- Draft bug report templates
- Create test data variations
What still needs humans:
- Judging usability and user experience
- Exploratory testing intuition
- Understanding business context
- Deciding severity and priority
Useful prompt:
Generate test cases for a password reset feature. Include positive tests, negative tests, boundary cases, and security considerations. Format as a test case table with steps and expected results.
Common Mistakes to Avoid
1. Testing Only Happy Path
# Incomplete
Test: User registration
Steps: Enter valid data, submit
Result: Success
# Complete coverage
- Valid data → success
- Empty required fields → validation error
- Invalid email format → specific error
- Password too short → specific error
- Duplicate email → appropriate message
- Special characters → handled correctly
- SQL injection attempt → blocked
2. Vague Bug Reports
# Useless
"Search doesn't work"
# Useful
"Search returns 0 results for 'laptop' when 47 laptop products exist.
Tested on Chrome 120, production environment.
Console shows: 'Elasticsearch connection timeout'"
3. Skipping Regression Testing
# Risk
"I only tested the new feature, didn't check related areas"
# Better
New feature: User profile redesign
Regression checklist:
□ Profile view from other users' perspective
□ Profile links in comments/posts
□ Search results showing profile info
□ Email notifications with profile data
□ API endpoints returning profile data
□ Mobile app profile display
FAQ
What is manual testing?
Manual testing is software testing performed by humans without automation scripts. Testers manually execute test cases, explore the application, identify defects, and verify that software behaves according to requirements. It relies on human judgment, intuition, and domain knowledge to find issues that automated tests might miss.
Is manual testing still relevant in 2026?
Yes. Manual testing remains essential for exploratory testing, usability evaluation, ad-hoc testing, and scenarios requiring human judgment. While automation handles repetitive regression tests efficiently, manual testing excels at finding unexpected bugs, evaluating user experience, and testing new features before automation is built. The best QA strategies combine both approaches.
What skills do manual testers need?
Essential skills include critical thinking, attention to detail, clear communication, requirement analysis, test case design, and bug reporting. Domain knowledge helps understand user needs. Technical skills like SQL basics, API testing, and browser dev tools are increasingly valuable. Soft skills matter too — you need to explain bugs clearly and work with developers constructively.
How do I write good test cases?
Good test cases are specific, repeatable, and traceable. They have clear titles describing what’s tested, explicit preconditions, step-by-step actions (not vague instructions), measurable expected results, and fields for actual results. Each test case should verify one thing. Include test data values, not just “valid data.” Make them understandable by someone unfamiliar with the feature.
See Also
- Software Testing Tutorial - Testing fundamentals for complete beginners
- Test Case Design Techniques - Advanced test design methods
- Exploratory Testing Guide - Deep dive into exploratory testing
- Bug Reports Guide - Effective defect management
- Test Automation Tutorial - When and how to automate
