Assessment Overview

Congratulations on reaching the end of Module 6: API and Backend Testing. This assessment covers all topics from lessons 6.1 through 6.29.

The assessment has three parts:

Part   | Format                   | Questions    | Time Estimate
Part 1 | Multiple-choice quiz     | 10 questions | 10 minutes
Part 2 | Scenario-based questions | 3 scenarios  | 20 minutes
Part 3 | Practical exercise       | 1 exercise   | 30 minutes

How to Use This Assessment

Before you begin:

  • Review your notes from Module 6
  • Do not use reference materials during the quiz (Part 1)
  • For Parts 2 and 3, you may reference earlier lessons

Scoring guide:

  • Part 1: 10 points (1 point per correct answer)
  • Part 2: 15 points (5 points per scenario)
  • Part 3: 15 points (rubric provided)
  • Total: 40 points
  • Passing score: 28/40 (70%)

Topics Covered

  1. API Performance Testing — Throughput, latency percentiles, load/stress/spike/soak tests
  2. API Security — OWASP API Top 10, BOLA, authentication, authorization
  3. Microservices Testing — Component tests, testing pyramid/honeycomb, environment strategy
  4. Service Mesh — Istio, traffic routing, circuit breakers, fault injection, mTLS
  5. Message Queues — Kafka vs RabbitMQ, DLQ, consumer lag, ordering guarantees
  6. Event-Driven Architecture — Event sourcing, CQRS, sagas, eventual consistency
  7. SQL Database Testing — Constraints, ACID, stored procedures, indexes
  8. NoSQL Testing — MongoDB, Redis, DynamoDB, schema-less validation
  9. ETL Testing — Extract, transform, load, data reconciliation, idempotency
  10. Data Migration — Mapping documents, rollback strategies, completeness verification
  11. Webhooks — Signature verification, idempotency, retry logic
  12. Third-Party Integrations — Sandbox testing, circuit breakers, failure modes
  13. Contract Testing — Pact, consumer-driven contracts, Pact Broker
  14. API Documentation — OpenAPI validation, drift detection, Dredd/Schemathesis

Part 1: Multiple-Choice Quiz

The quiz questions are in the frontmatter of this lesson (10 questions). Take the quiz before proceeding to Parts 2 and 3.

After completing the quiz, check your answers against the explanations. Note any topics where you answered incorrectly — these are areas worth reviewing.

Part 2: Scenario-Based Questions

Scenario A: The E-Commerce Platform Migration

Context: Your company is migrating from a monolithic application to microservices. The current system processes 50,000 orders per day through a single PostgreSQL database. The new architecture will have: Order Service, Payment Service, Inventory Service, and Notification Service communicating via Kafka. The database will be split into service-owned databases with MongoDB for the product catalog and PostgreSQL for orders and payments.

Questions (5 points):

  1. Design a testing strategy that covers all testing levels (unit, component, integration, contract, E2E). For each level, give one specific test example. (3 points)

  2. What are the top 3 risks of this migration and how would you test for each? (2 points)

Solution

1. Testing strategy:

  • Unit tests: Test order total calculation logic in Order Service (currency conversion, discounts).
  • Component tests: Start Order Service with mocked Kafka and mocked Payment Service. Send a POST /orders request and verify it creates an order and publishes an OrderCreated event.
  • Integration tests: Start Order Service and Kafka in Docker Compose. Publish an event and verify it arrives in the correct topic with the correct schema.
  • Contract tests: Order Service (consumer) defines what it expects from Payment Service’s POST /payments endpoint. Payment Service verifies it can fulfill that contract.
  • E2E tests: Create an order through the API gateway, verify payment is processed, inventory is decremented, and notification is sent.
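As a minimal sketch of the unit-test level above, here is pure order-total logic under test. The function name, discount rule, and currency rates are illustrative assumptions, not part of the platform described in the scenario:

```python
# Hypothetical order-total logic for the Order Service unit-test example.
# RATES_TO_USD and the percentage-discount rule are assumptions for the sketch.

RATES_TO_USD = {"USD": 1.0, "EUR": 1.1}  # fixed rates, fine for a unit test

def order_total(items, currency="USD", discount_pct=0):
    """Sum (qty, price) line items, apply a percentage discount, convert to USD."""
    subtotal = sum(qty * price for qty, price in items)
    discounted = subtotal * (1 - discount_pct / 100)
    return round(discounted * RATES_TO_USD[currency], 2)

# pytest-style unit tests: pure logic, no I/O, no running services.
def test_discount_applied():
    assert order_total([(2, 10.0)], discount_pct=10) == 18.0

def test_currency_conversion():
    assert order_total([(1, 100.0)], currency="EUR") == 110.0
```

This is the kind of fast, dependency-free test that belongs at the bottom of the pyramid; everything involving Kafka or another service moves up to the component and integration levels.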

2. Top 3 risks:

  1. Data loss during migration — Test by running migration on a copy of production data and verifying every record with reconciliation queries (counts, sums, sample records).
  2. Eventual consistency issues — Test by creating orders and immediately querying the read model. Measure consistency delay. Test with concurrent operations to find race conditions.
  3. Kafka message loss or duplication — Test by producing events and verifying consumers receive exactly the expected number. Test consumer idempotency by replaying events. Test what happens when a consumer is temporarily down.
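The consumer-idempotency test in risk 3 can be sketched without a real broker: a consumer that dedupes by event id, verified by replaying the same batch twice. The class and event shape are illustrative:

```python
# Minimal idempotent-consumer sketch: dedupe by event id, then verify the
# replay property. OrderConsumer and the event shape are hypothetical.

class OrderConsumer:
    def __init__(self):
        self.processed_ids = set()   # stand-in for a persistent dedupe store
        self.order_count = 0

    def handle(self, event):
        if event["id"] in self.processed_ids:
            return                   # duplicate delivery: ignore
        self.processed_ids.add(event["id"])
        self.order_count += 1

events = [{"id": "evt-1"}, {"id": "evt-2"}]
consumer = OrderConsumer()
for e in events + events:            # replay the whole batch, simulating redelivery
    consumer.handle(e)
assert consumer.order_count == 2     # exactly-once effect despite duplicates
```

The same assertion style works against a real Kafka topic in an integration test: produce N events, restart the consumer mid-stream, and verify the side effects happened exactly N times.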

Scenario B: The Payment Gateway Integration

Context: Your team integrates with three payment providers: Stripe (credit cards), PayPal (digital wallet), and a local bank API (wire transfers). Each has different authentication, response formats, and error handling. Last month, the bank API changed its error response format without notice, causing 2,000 failed transactions before the issue was detected.

Questions (5 points):

  1. How would you prevent undocumented API changes from causing production failures? Describe at least 2 approaches. (2 points)

  2. Design a circuit breaker strategy for these three payment providers. What thresholds would you set and why? (3 points)

Solution

1. Preventing undocumented changes:

  1. Contract testing — Define Pact contracts for each payment provider’s expected response format. Run provider verification daily (even if your code has not changed) to detect changes early.
  2. Canary monitoring — Run a small percentage of real transactions through a validation layer that compares responses against expected schemas. Alert immediately when unexpected formats are detected.
  3. Response schema validation — Validate every API response against expected schemas in production code. Log and alert on schema violations instead of silently failing.
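Approach 3 can be sketched with a hand-rolled validator. A real implementation would more likely validate against the provider's published JSON Schema (e.g. with the `jsonschema` library); this minimal version only checks required keys and types, which is already enough to catch a renamed field like the bank API incident. The expected shape below is an assumption, not the bank's real schema:

```python
# Minimal response-shape check: return violations instead of raising, so the
# caller can log and alert. EXPECTED_PAYMENT_RESPONSE is a hypothetical shape.

EXPECTED_PAYMENT_RESPONSE = {
    "status": str,
    "transaction_id": str,
    "amount_cents": int,
}

def schema_violations(payload, expected=EXPECTED_PAYMENT_RESPONSE):
    problems = []
    for field, ftype in expected.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], ftype):
            problems.append(f"wrong type for {field}")
    return problems

ok = {"status": "ok", "transaction_id": "tx1", "amount_cents": 500}
renamed = {"state": "ok", "transaction_id": "tx1", "amount_cents": 500}
assert schema_violations(ok) == []
assert schema_violations(renamed) == ["missing field: status"]
```

Wiring `schema_violations` to an alert (rather than a hard failure) is what turns a silent format change into a detected one within minutes instead of 2,000 transactions later.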

2. Circuit breaker strategy:

Provider | Failure Threshold | Timeout         | Recovery             | Fallback
Stripe   | 5 failures in 30s | 10s per request | Half-open after 60s  | Queue payment for retry
PayPal   | 5 failures in 30s | 15s per request | Half-open after 60s  | Offer Stripe as alternative
Bank API | 3 failures in 60s | 30s per request | Half-open after 120s | Queue for manual processing

Rationale: The bank API gets a stricter failure threshold and a longer recovery window because it is historically less reliable and slower to respond. All three payment providers get longer recovery times than non-critical services would, because failed payments directly impact revenue and retrying against a still-broken provider makes things worse.
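A toy circuit breaker implementing the threshold/recovery pattern in the table can look like this. Per-provider numbers are passed in; nothing here is a real Stripe/PayPal/bank client, and the injectable clock exists only so the state machine is testable:

```python
import time

class CircuitBreaker:
    """Sketch: open after N failures in a sliding window, half-open after recovery_s."""

    def __init__(self, failure_threshold, window_s, recovery_s, now=time.monotonic):
        self.failure_threshold = failure_threshold
        self.window_s = window_s
        self.recovery_s = recovery_s
        self.now = now                 # injectable clock for testing
        self.failures = []             # timestamps of recent failures
        self.opened_at = None

    def allow_request(self):
        if self.opened_at is None:
            return True
        # half-open: allow a trial request once the recovery period has passed
        return self.now() - self.opened_at >= self.recovery_s

    def record_failure(self):
        t = self.now()
        self.failures = [f for f in self.failures if t - f <= self.window_s]
        self.failures.append(t)
        if len(self.failures) >= self.failure_threshold:
            self.opened_at = t

    def record_success(self):
        self.failures.clear()
        self.opened_at = None

# Bank API settings from the table: 3 failures in 60s, half-open after 120s.
clock = [0.0]
cb = CircuitBreaker(3, 60, 120, now=lambda: clock[0])
for _ in range(3):
    cb.record_failure()
assert not cb.allow_request()      # open: fail fast, route to the fallback
clock[0] += 121
assert cb.allow_request()          # half-open: one trial request allowed
```

Testing the breaker itself (open, half-open, close transitions) is separate from testing the fallback behavior; both belong in the strategy.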

Scenario C: The Real-Time Analytics Pipeline

Context: Your company processes user events (page views, clicks, purchases) through a Kafka pipeline into a data warehouse. The pipeline processes 10 million events per day. An ETL job runs nightly to transform raw events into analytics tables. Last week, a schema change in the event producer caused the ETL to silently drop 15% of events (fields were renamed but the ETL did not error — it just stored NULLs).

Questions (5 points):

  1. What tests would have caught the silent data loss? (2 points)

  2. Design a data quality monitoring system for this pipeline. What metrics would you track and what alerts would you set? (3 points)

Solution

1. Tests to catch silent data loss:

  1. Event schema contract tests — Consumer-driven contracts between the event producer and the ETL consumer. When the producer renames a field, the contract test fails in CI before deployment.
  2. Data reconciliation checks — After each ETL run, compare: COUNT of source events vs COUNT of target records. SUM of purchase amounts in source vs target. If any difference exceeds 0.1%, alert and halt.
  3. NULL rate monitoring — Track the percentage of NULL values in each target column. If NULL rate suddenly increases (e.g., from 0% to 15%), alert immediately. This would have caught the renamed fields.
  4. Schema evolution tests — Before deploying producer changes, run the ETL against sample events with the new schema. Verify no NULLs appear where data should exist.
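The NULL-rate check (test 3 above) is simple to express: compare each column's NULL rate in a batch against a stored baseline and flag sudden jumps. Column names and the 1% threshold below are illustrative:

```python
# Sketch of NULL-rate monitoring over a batch of rows (dicts, None = NULL).

def null_rates(rows, columns):
    """Fraction of NULL values per column."""
    return {
        c: sum(1 for r in rows if r.get(c) is None) / len(rows)
        for c in columns
    }

def null_rate_alerts(rows, columns, baseline, max_increase=0.01):
    """Columns whose NULL rate rose more than max_increase above baseline."""
    rates = null_rates(rows, columns)
    return [c for c in columns if rates[c] - baseline.get(c, 0.0) > max_increase]

# The renamed-field incident: user_id suddenly arrives as NULL ~15% of the time.
rows = [{"user_id": None if i % 7 == 0 else i, "amount": i} for i in range(100)]
baseline = {"user_id": 0.0, "amount": 0.0}
assert null_rate_alerts(rows, ["user_id", "amount"], baseline) == ["user_id"]
```

In a real warehouse this runs as SQL after each ETL load; the Python version just shows that the check is a per-column comparison against a baseline, not anything exotic.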

2. Data quality monitoring system:

Metric                       | Threshold                    | Alert
Event count per hour         | < 80% of hourly average      | Immediate (Slack + PagerDuty)
NULL rate per column         | > 1% increase from baseline  | Warning within 15 minutes
ETL processing time          | > 2x average duration        | Warning
Source-target count mismatch | > 0.1% difference            | Critical
Source-target sum mismatch   | > 0.01% for financial fields | Critical
Schema validation failures   | Any                          | Immediate
Consumer lag (Kafka)         | > 100,000 messages           | Warning
Consumer lag (Kafka)         | > 1,000,000 messages         | Critical
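The tiered consumer-lag thresholds from the table translate directly into alerting code; a minimal sketch (the function name and levels are illustrative):

```python
# Map Kafka consumer lag (messages behind) to the alert tiers in the table.

def lag_severity(lag_messages):
    if lag_messages > 1_000_000:
        return "critical"
    if lag_messages > 100_000:
        return "warning"
    return "ok"

assert lag_severity(50_000) == "ok"
assert lag_severity(250_000) == "warning"
assert lag_severity(2_000_000) == "critical"
```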

Part 3: Practical Exercise

Design an API Test Strategy

You are the QA Lead for a healthcare appointment booking platform. The system has these APIs:

  • Patient API — Register, login, view medical history
  • Doctor API — Availability, specialties, ratings
  • Appointment API — Book, cancel, reschedule, reminders
  • Payment API — Process copay, insurance verification
  • Notification API — Email, SMS, push notifications

The platform integrates with:

  • Insurance verification API (third-party)
  • Twilio for SMS
  • Stripe for payments
  • A legacy hospital records system (SOAP/XML)

Requirements:

  • HIPAA compliance (healthcare data privacy)
  • 99.9% uptime SLA
  • Maximum 2-second response time for booking
  • Must handle 10,000 concurrent users during peak hours

Your task: Create a comprehensive API test strategy document covering:

  1. Test levels and types (5 points) — What tests at each level? How many? What tools?
  2. Security testing plan (3 points) — HIPAA-specific concerns, OWASP API Top 10 coverage.
  3. Performance testing plan (3 points) — Load profiles, SLA validation, breaking point determination.
  4. Integration testing plan (2 points) — Third-party integration strategy, mocking vs sandbox.
  5. Data testing plan (2 points) — Database validation, PHI (Protected Health Information) handling.

Evaluation rubric:

Criterion      | Excellent (5)                        | Good (3)          | Needs Work (1)
Completeness   | All APIs and integrations covered    | Most APIs covered | Major gaps
Depth          | Specific test cases and tools        | General approach  | Vague descriptions
Practicality   | Realistic and implementable          | Mostly realistic  | Theoretical only
Risk awareness | Key risks identified with mitigation | Some risks noted  | Risks not addressed

Solution Outline

1. Test levels:

  • Unit: 500+ tests per service, Jest/pytest, business logic focus
  • Component: 50+ per service, Docker-based, mocked dependencies
  • Contract: Pact for all service pairs (Patient→Appointment, Appointment→Payment, etc.)
  • Integration: Docker Compose for service clusters, real Kafka/PostgreSQL
  • E2E: 20 critical paths (book appointment, process payment, send reminder)
  • Performance: k6 scripts for load (10K concurrent), stress, and soak testing

2. Security:

  • BOLA testing on all patient endpoints (user A cannot see user B’s records)
  • PHI encryption in transit (TLS) and at rest
  • Audit logging for all PHI access
  • Session timeout testing (HIPAA requires auto-logout)
  • OWASP API Top 10 full coverage with automated scanning
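The BOLA check in the first bullet can be sketched against a toy in-memory endpoint: user A's token must not retrieve user B's record. The token scheme, record store, and status codes below are illustrative, not the platform's real API:

```python
# Toy records endpoint with object-level authorization, plus a BOLA test.
# RECORDS, TOKENS, and get_record are hypothetical stand-ins for the real API.

RECORDS = {"patient-a": {"notes": "synthetic"}, "patient-b": {"notes": "synthetic"}}
TOKENS = {"token-a": "patient-a", "token-b": "patient-b"}

def get_record(token, patient_id):
    """Return (status, body); 403 when the token's owner != requested record."""
    owner = TOKENS.get(token)
    if owner is None:
        return 401, None
    if owner != patient_id:
        return 403, None           # object-level authorization enforced
    return 200, RECORDS[patient_id]

# BOLA test: authenticated but unauthorized access must be rejected.
status, body = get_record("token-a", "patient-b")
assert status == 403 and body is None
assert get_record("token-a", "patient-a")[0] == 200
```

Against the real Patient API this becomes an HTTP test: log in as two synthetic patients, request each other's record IDs, and assert a 403/404 with no PHI in the response body.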

3. Performance:

  • Baseline: 100 VUs for 5 minutes
  • Load: Ramp to 10,000 VUs, sustain 30 minutes
  • Spike: 1,000 → 15,000 VUs in 10 seconds
  • Soak: 5,000 VUs for 8 hours
  • SLA validation: p95 < 2s for booking, 99.9% success rate
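The SLA validation step reduces to a percentile check. k6 reports p95 directly; the sketch below (with a synthetic latency sample) just shows what the assertion means:

```python
import statistics

def p95(latencies_s):
    # quantiles(n=100) returns 99 cut points; index 94 is the 95th percentile
    return statistics.quantiles(latencies_s, n=100)[94]

latencies = [0.4] * 95 + [1.9] * 5       # synthetic booking-latency sample
assert p95(latencies) < 2.0              # p95 < 2s booking SLA

successes, total = 99_950, 100_000       # synthetic run summary
assert successes / total >= 0.999        # 99.9% success-rate SLA
```

Encoding both checks as hard assertions in the load-test pipeline means an SLA regression fails the build rather than surfacing as a dashboard anomaly later.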

4. Integration:

  • Insurance API: Sandbox for happy paths, mocks for failure modes
  • Twilio: Test credentials, verify message content
  • Stripe: Test mode with special card numbers
  • Legacy SOAP: WireMock with recorded responses

5. Data:

  • PHI never in test logs or error messages
  • Test data uses synthetic patients (not real data)
  • Database constraint testing for all tables
  • Audit trail verification for compliance
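The "PHI never in test logs" rule can be made enforceable rather than aspirational with a logging filter that redacts PHI-shaped values before a record is emitted. The two patterns below (SSN-like, email-like) are illustrative; a real HIPAA control would cover far more identifiers:

```python
import logging
import re

# Redact PHI-shaped substrings from log messages. Patterns are examples only.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN shape
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email address
]

class RedactPHIFilter(logging.Filter):
    def filter(self, record):
        msg = record.getMessage()
        for pat in PHI_PATTERNS:
            msg = pat.sub("[REDACTED]", msg)
        record.msg, record.args = msg, None        # replace formatted message
        return True                                # keep the (redacted) record

# Verify the filter on a raw LogRecord, no handler wiring needed.
rec = logging.LogRecord("app", logging.INFO, "f", 0,
                        "patient 123-45-6789 (a@b.com) booked", None, None)
RedactPHIFilter().filter(rec)
assert "123-45-6789" not in rec.getMessage()
assert rec.getMessage().count("[REDACTED]") == 2
```

Attaching the filter to the root logger in test environments, plus a CI grep of captured logs for the same patterns, gives two independent layers of enforcement.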