The Manual Testing Interview Landscape

Manual testing interviews typically combine three elements: conceptual questions about testing theory, practical exercises where you test something on the spot, and behavioral questions about your experience. This lesson covers the first two — behavioral questions are addressed in Lesson 12.7.

The goal is not to memorize answers. Interviewers are evaluating your thinking process, not your ability to recite definitions. The best candidates think out loud, ask clarifying questions, and structure their answers logically.

Top 15 Manual Testing Interview Questions

Category 1: Fundamentals

Q1: What is the difference between QA, QC, and Testing?

Structure your answer in layers:

  • QA (Quality Assurance): Process-oriented. Focuses on preventing defects through better processes. Proactive.
  • QC (Quality Control): Product-oriented. Focuses on identifying defects in the product. Reactive.
  • Testing: A subset of QC. The actual execution of tests to find defects.

Q2: Explain the Software Testing Life Cycle (STLC).

Walk through the six phases: Requirements Analysis, Test Planning, Test Case Design, Environment Setup, Test Execution, Test Closure. For each phase, mention one key deliverable.

Q3: What is the difference between verification and validation?

  • Verification: Are we building the product right? (Matches specifications)
  • Validation: Are we building the right product? (Meets user needs)

Use the “building a house” analogy: verification checks that walls are straight and plumbing works. Validation checks that the house is something people actually want to live in.

Q4: What is the difference between severity and priority?

  • Severity: Technical impact of the defect on the system
  • Priority: Business urgency of fixing the defect

Key insight: They can differ. A typo in the CEO’s name on the homepage is low severity (cosmetic) but high priority (brand impact). A crash in an admin panel used once a year is high severity but low priority.

Q5: What are the seven principles of testing? (ISTQB)

List all seven: testing shows the presence of defects, not their absence; exhaustive testing is impossible; early testing saves time and money; defects cluster together; the pesticide paradox (repeating the same tests stops finding new defects); testing is context-dependent; and absence of errors is a fallacy. Be ready to give an example for each. The two most commonly probed are the Pesticide Paradox and Defect Clustering.

Category 2: Test Design

Q6: What is boundary value analysis? Give an example.

Testing at the edges of valid input ranges. For a field accepting ages 18-65: test 17, 18, 19, 64, 65, 66. Explain why — defects cluster at boundaries because developers use wrong comparison operators (< vs <=).
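
As a sketch, boundary value analysis can be expressed as a small table of edge cases driven against a stand-in validator (`is_valid_age` here is a hypothetical implementation, not a real API):

```python
def is_valid_age(age):
    """Hypothetical validator for a field that accepts ages 18-65 inclusive."""
    return 18 <= age <= 65

# Boundary value analysis: test just below, at, and just above each edge.
boundary_cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"unexpected result for age {age}"
```

An off-by-one bug such as `18 < age` instead of `18 <= age` would fail exactly at the 18 boundary, which is why the edge values are tested explicitly rather than a random value from the middle of the range.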

Q7: What is equivalence partitioning?

Dividing input data into groups (partitions) where all values in a partition are expected to behave the same. For an age field (18-65): invalid low (<18), valid (18-65), invalid high (>65). Test one value from each partition.
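
A minimal sketch of the same age field, assuming a hypothetical `age_partition` classifier: each partition is covered by one representative value.

```python
def age_partition(age):
    """Classify a value into its equivalence partition for an 18-65 age field."""
    if age < 18:
        return "invalid_low"
    if age > 65:
        return "invalid_high"
    return "valid"

# One representative value per partition covers the whole class.
representatives = {5: "invalid_low", 40: "valid", 90: "invalid_high"}
for value, expected in representatives.items():
    assert age_partition(value) == expected
```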

Q8: How would you test a login page?

Structure your answer into categories:

  • Functional: Valid login, invalid password, empty fields, SQL injection, case sensitivity
  • UI/UX: Field labels, error messages, tab order, password masking
  • Security: Brute force protection, session management, HTTPS
  • Performance: Response time under load
  • Accessibility: Screen reader compatibility, keyboard navigation
  • Cross-browser: Chrome, Firefox, Safari, Edge
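
The functional cases above can be sketched as a data-driven table. `check_login` and the credentials are assumptions standing in for the real authentication call:

```python
VALID_USER, VALID_PW = "alice", "s3cret"  # assumed test credentials

def check_login(username, password):
    """Hypothetical stand-in for the system under test."""
    if not username or not password:
        return "error: empty field"
    if username == VALID_USER and password == VALID_PW:
        return "success"
    return "error: invalid credentials"

# Each row: (username, password, expected outcome)
cases = [
    ("alice", "s3cret", "success"),                       # valid login
    ("alice", "wrong", "error: invalid credentials"),     # invalid password
    ("", "", "error: empty field"),                       # empty fields
    ("' OR '1'='1", "x", "error: invalid credentials"),   # SQL injection string
    ("ALICE", "s3cret", "error: invalid credentials"),    # case sensitivity
]
for user, pw, expected in cases:
    assert check_login(user, pw) == expected
```

The table format mirrors how you would present the answer in an interview: one row per case, expected result stated up front.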

Q9: What is regression testing and when do you perform it?

Testing unchanged features to ensure new changes have not broken them. Performed after every code change, bug fix, or feature addition. Explain the tradeoff between running full regression (safe but slow) and targeted regression (fast but risky).
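
The full-versus-targeted tradeoff can be sketched with an assumed mapping from changed modules to the tests that cover them (`test_map` and the test names are illustrative, not a real framework):

```python
# Illustrative module -> covering-tests mapping.
test_map = {
    "cart": ["test_add_to_cart", "test_cart_totals"],
    "payment": ["test_card_declined", "test_double_submit"],
    "search": ["test_keyword_search"],
}

def select_regression(changed_modules):
    """Targeted regression: run only tests covering the changed modules."""
    selected = []
    for module in changed_modules:
        selected.extend(test_map.get(module, []))
    return selected

def full_regression():
    """Full regression: run everything (safe but slow)."""
    return [test for tests in test_map.values() for test in tests]
```

A payment-only change selects two tests instead of all five: that is the speed gained, at the cost of missing an unexpected cross-module break.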

Q10: What is exploratory testing?

Simultaneous test design and execution without predefined scripts. The tester uses domain knowledge and intuition to explore the application. It complements scripted testing by finding defects that structured tests miss.

Category 3: Process and Documentation

Q11: How do you write a good bug report?

Essential elements: title, steps to reproduce, expected vs actual result, environment, severity/priority, attachments (screenshots/videos). Emphasize reproducibility — a bug that cannot be reproduced cannot be fixed.
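
A minimal bug report skeleton covering those elements might look like this (every field value below is a placeholder, not a real defect):

```
Title:       [Login] 500 error when password contains an ampersand
Environment: Chrome 126 / Windows 11 / staging build 2.4.1
Severity:    Major        Priority: High
Steps to reproduce:
  1. Open the login page
  2. Enter a valid username
  3. Enter a password containing "&" and submit
Expected:    User is logged in
Actual:      HTTP 500 error page is shown
Attachments: screenshot.png, network-log.har
```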

Q12: What is a test plan vs a test strategy?

  • Test Strategy: High-level, organization-wide, rarely changes. Defines the testing approach.
  • Test Plan: Project-specific, detailed, changes per release. Defines what, when, and how to test.

Q13: How do you decide what to test first when time is limited?

Risk-based testing: prioritize based on business impact and likelihood of failure. Test the features that users interact with most and that would cause the most damage if they failed. Explain how you use risk matrices.
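
A minimal risk-matrix sketch: each feature gets an impact score and a likelihood score (1-3 here), and their product determines test order. The feature names and scores below are made up for illustration:

```python
# Risk score = business impact x likelihood of failure (both on a 1-3 scale).
features = {
    "checkout": {"impact": 3, "likelihood": 3},        # money at stake, heavy use
    "search": {"impact": 2, "likelihood": 2},
    "profile_avatar": {"impact": 1, "likelihood": 2},  # cosmetic feature
}

def risk_score(feature):
    return feature["impact"] * feature["likelihood"]

# When time is limited, test the highest-risk features first.
test_order = sorted(features, key=lambda name: risk_score(features[name]),
                    reverse=True)
print(test_order)  # checkout comes first with a score of 9
```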

Category 4: Real-World Scenarios

Q14: You found a critical bug 1 hour before release. What do you do?

Walk through your decision process: immediately notify the team lead/PM, document the bug with full reproduction steps, assess the impact (does it affect all users or a subset?), present options (delay release, release with known issue, hotfix). Emphasize communication over unilateral decisions.

Q15: A developer says they cannot reproduce your bug. How do you respond?

Do not get defensive. Offer to reproduce it together, share your exact environment details, provide a video recording, check if there are environment-specific conditions. If it truly cannot be reproduced, document it as an intermittent issue and monitor.

Live Testing Exercises

The “Test This Object” Exercise

Interviewers frequently ask you to test everyday objects: a pen, elevator, ATM, vending machine, or search bar. This tests your ability to think systematically under pressure.

Framework for answering:

  1. Clarify: Ask who uses it, what’s the primary purpose, any constraints?
  2. Categorize: Organize tests by type (functional, usability, security, performance, edge cases)
  3. Prioritize: Start with the most critical tests
  4. Think aloud: Share your reasoning as you go

Example: “Test this elevator”

Clarify: Residential or commercial? How many floors? Accessibility requirements?

Functional tests:

  • Press each floor button — does it go to the correct floor?
  • Press the door open/close buttons
  • Press the emergency button
  • Test floor indicator display accuracy

Safety tests:

  • Overload — does the alarm trigger when the rated weight capacity is exceeded?
  • Door sensor — does it reopen when obstructed?
  • Power failure behavior
  • Emergency phone functionality

Usability tests:

  • Button placement for wheelchair users
  • Braille labels
  • Audio announcements
  • Mirror placement

Performance tests:

  • Wait time between floors
  • Door open/close speed
  • Response time when multiple buttons pressed

Exercise: Mock Interview Practice

Practice answering these questions with a timer (2 minutes per answer):

  1. You have an e-commerce checkout flow. Describe your testing approach.
  2. The CEO asks “Is the product ready for release?” How do you answer?
  3. Test the search functionality on an e-commerce site. List 15 test cases in 5 minutes.

Sample Answer for Question 1

E-commerce checkout testing approach:

Happy path first:

  • Add item to cart → proceed to checkout → fill shipping → fill payment → confirm → verify order confirmation and email

Input validation:

  • Empty required fields, invalid email, invalid card number, expired card, address fields with special characters

Business logic:

  • Discount codes (valid, expired, already used)
  • Tax calculation by region
  • Shipping cost calculation by weight/destination
  • Inventory check (what if item sells out during checkout?)

Payment integration:

  • Successful payment
  • Declined card
  • Network timeout during payment
  • Double-click on “Pay” button
  • Browser back button after payment

Edge cases:

  • Session timeout during checkout
  • Multiple tabs/devices with same cart
  • Currency conversion for international orders
  • Maximum order value limits

Sample Answer for Question 3

Search functionality — 15 test cases:

  1. Search with valid keyword → relevant results appear
  2. Search with empty input → appropriate message
  3. Search with no matching results → “no results” message
  4. Search with special characters (!@#$%)
  5. Search with very long string (1000+ characters)
  6. Search with SQL injection attempt
  7. Search with XSS script tags
  8. Search result pagination
  9. Search with multiple words
  10. Search with typo → “Did you mean?” suggestion
  11. Search filters (category, price range, rating)
  12. Search result sorting (relevance, price, newest)
  13. Search performance (<2 seconds for results)
  14. Search from different pages (homepage, category page)
  15. Search with product SKU or exact product name
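
The first three cases above can be sketched against a toy search stub (`CATALOG` and `search` are assumptions for illustration, not a real API):

```python
CATALOG = ["red shoes", "blue shoes", "green hat"]  # toy product list

def search(query):
    """Hypothetical search endpoint stub."""
    query = query.strip().lower()
    if not query:
        return {"status": "error", "message": "Please enter a search term"}
    hits = [item for item in CATALOG if query in item]
    if not hits:
        return {"status": "ok", "message": "no results", "results": []}
    return {"status": "ok", "message": "", "results": hits}

assert search("shoes")["results"] == ["red shoes", "blue shoes"]  # case 1
assert search("   ")["status"] == "error"                         # case 2
assert search("zzz")["message"] == "no results"                   # case 3
```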

Interview Red Flags to Avoid

Red flag 1: Only thinking about functional testing. Always mention non-functional aspects: performance, security, usability, accessibility.

Red flag 2: Never asking clarifying questions. Diving straight into test cases without understanding context shows lack of analytical thinking.

Red flag 3: Being negative about developers. Phrases like “developers always write buggy code” signal poor collaboration skills.

Red flag 4: Claiming you can test everything. Show that you understand prioritization and risk-based testing.

Key Takeaways

  • Structure answers logically: definition, example, real-world application
  • Always ask clarifying questions before testing anything
  • Organize test cases by category: functional, security, performance, usability, edge cases
  • Think out loud — interviewers evaluate your process, not just your answers
  • Prepare concrete examples from your experience for common scenarios
  • Honesty about knowledge gaps, combined with problem-solving approach, builds trust