Assessment Overview
Congratulations on reaching the end of Module 5: Web Application Testing. This assessment tests your understanding of all topics covered in lessons 5.1 through 5.29, with emphasis on the advanced topics from lessons 5.16-5.29.
The assessment has three parts:
| Part | Format | Questions | Time Estimate |
|---|---|---|---|
| Part 1 | Multiple-choice quiz | 10 questions | 10 minutes |
| Part 2 | Scenario-based questions | 3 scenarios | 20 minutes |
| Part 3 | Practical exercise | 1 exercise | 30 minutes |
How to Use This Assessment
Before you begin:
- Review your notes from Module 5
- Do not use reference materials during the quiz (Part 1) — test your recall
- For Parts 2 and 3, you may reference earlier lessons
- There are no trick questions — every question has a clearly correct answer
Scoring guide:
- Part 1: 10 points (1 point per correct answer)
- Part 2: 15 points (5 points per scenario)
- Part 3: 15 points (rubric provided)
- Total: 40 points
- Passing score: 28/40 (70%)
Topics Covered
This assessment covers all major topics from Module 5:
- Web architecture — Client-server model, HTTP, HTML/CSS/JS
- Browser DevTools — Network, Performance, Application tabs
- Cross-browser and responsive testing — Viewport, compatibility
- Form and authentication testing — Validation, sessions, cookies
- E-commerce testing — Cart, payments, billing, subscriptions
- SPA and PWA testing — Client-side routing, offline support
- Real-time testing — WebSocket, SSE, reconnection logic
- Performance — Core Web Vitals, Lighthouse, optimization, budgets
- SEO testing — Meta tags, structured data, crawlability
- Accessibility — WCAG 2.2, keyboard, screen reader testing
- Security — XSS, CSRF, injection, security headers
- Infrastructure — Caching, CDN, geo-distribution
- Compliance — GDPR, data privacy, consent management
- SaaS specifics — Multi-tenancy, feature gating, billing
Part 1: Multiple-Choice Quiz
The quiz questions are in the frontmatter of this lesson (10 questions). Take the quiz first before proceeding to Parts 2 and 3.
After completing the quiz, check your answers against the explanations. Note any topics where you answered incorrectly — these are areas worth reviewing before moving to Module 6.
Part 2: Scenario-Based Questions
Scenario A: The E-Commerce Launch
Context: Your company is launching a new e-commerce platform next month. The platform includes: product catalog with search and filters, shopping cart, checkout with Stripe payments, user accounts with order history, a blog for SEO, and support for 3 languages (EN, ES, FR). The team expects 50,000 users in the first month, with traffic split roughly 40% US, 30% Europe, 20% Latin America, 10% other.
Questions (5 points):
1. What are the top 5 testing areas you would prioritize before launch, ranked by risk? Justify each choice. (3 points)
2. What specific performance budget would you set for the product pages, and how would you enforce it? (2 points)
Solution
1. Top 5 testing areas by risk:
Payment flow (Critical risk) — Revenue depends on it. Test Stripe integration with test cards (success, decline, 3D Secure), proration for any subscription features, tax calculations for US/EU/LATAM, and receipt emails. A payment bug means zero revenue.
Security testing (Critical risk) — E-commerce handles PII and financial data. Test XSS on search/profile fields, CSRF on checkout forms, SQL injection, security headers, and HTTPS everywhere. A security breach before launch would be catastrophic.
Cross-browser and responsive (High risk) — 50K users across diverse devices. Test Chrome, Safari, Firefox on desktop and mobile. Test responsive design at common breakpoints. Cart and checkout must work flawlessly on mobile (likely 60%+ of traffic).
Performance and CDN (High risk) — 40% US + 30% EU + 20% LATAM = global users. Set up CDN with edge servers in each region. Test LCP <2.5s on product pages with images. Test with throttled mobile network. Poor performance = high bounce rate.
Multi-language and SEO (Medium risk) — 3 languages means hreflang, canonical tags, translated meta descriptions, and localized structured data. Incorrect hreflang = lost search traffic in target markets. Test sitemap includes all language versions.
2. Performance budget for product pages:
| Metric | Budget |
|---|---|
| Total page weight | <400KB compressed |
| JavaScript | <120KB compressed |
| Product images | <150KB each (WebP, lazy loaded) |
| LCP | <2.5s on Fast 3G |
| CLS | <0.1 |
| TBT | <200ms |
| TTFB | <400ms (CDN edge) |
Enforcement: Lighthouse CI in GitHub Actions with budget.json assertions. Build fails if any metric exceeds budget. Weekly manual Lighthouse audits of top 10 product pages.
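One way to express these assertions is a Lighthouse budget file. The sketch below uses the Lighthouse budgets format (resource sizes in KB, timings in ms) with this scenario's thresholds; the path and exact field values are illustrative, so verify them against current Lighthouse documentation before adopting the file.

```json
[
  {
    "path": "/products/*",
    "resourceSizes": [
      { "resourceType": "total", "budget": 400 },
      { "resourceType": "script", "budget": 120 },
      { "resourceType": "image", "budget": 150 }
    ],
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 },
      { "metric": "total-blocking-time", "budget": 200 }
    ]
  }
]
```

Note that CLS is unitless and is not expressed as a timing budget; it is asserted separately via Lighthouse CI assertions.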
Scenario B: The SaaS Data Leak
Context: You are a QA Lead at a project management SaaS company. A customer reports that they can see another company’s project names in their search results. Investigation reveals that the global search index was not properly filtered by tenant_id.
Questions (5 points):
1. What immediate testing should you perform to assess the scope of the data leak? (2 points)
2. Design a test suite (5-7 test cases) to prevent this type of issue from recurring. (3 points)
Solution
1. Immediate testing to assess scope:
- Search results audit: Test search across 5+ tenant accounts — can any tenant see data from other tenants?
- API endpoint audit: Test all API endpoints that return lists (projects, tasks, users, files) — are they all filtered by tenant_id?
- Report/export audit: Test data exports, reports, and dashboards — do they include cross-tenant data?
- Autocomplete audit: Test all autocomplete/suggestion features — do they suggest cross-tenant items?
- Cache audit: Purge all caches and re-test — was stale cached data contributing to the leak?
- Log analysis: Check if any tenant received another tenant’s data in the past (audit trail)
2. Test suite for data isolation:
| # | Test Case | Steps | Expected |
|---|---|---|---|
| 1 | Search returns only current tenant data | Create unique item in Tenant A, search from Tenant B | Not found |
| 2 | API list endpoints filter by tenant | GET /api/projects as Tenant B | Only Tenant B projects |
| 3 | Direct ID access blocked | GET /api/projects/{tenant-a-id} as Tenant B | 404 Not Found |
| 4 | Export contains only tenant data | Export all projects as Tenant A | No Tenant B data |
| 5 | Autocomplete scoped to tenant | Type in search as Tenant B | No Tenant A suggestions |
| 6 | Bulk operations scoped | Bulk update as Tenant A | Only Tenant A items affected |
| 7 | New feature default isolation | Create any new entity | tenant_id automatically set |
Add these to the automated regression suite. Run on every deployment. Add a mandatory code review checklist item: “All database queries filtered by tenant_id.”
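The isolation rules behind test cases 1-3 can be sketched as a minimal in-memory model. The store layout and function names below are hypothetical, purely to illustrate the assertions an automated suite would make against the real API.

```python
# Minimal in-memory model of tenant-scoped queries (hypothetical schema).
PROJECTS = [
    {"id": 1, "tenant_id": "tenant-a", "name": "Apollo"},
    {"id": 2, "tenant_id": "tenant-b", "name": "Borealis"},
]

def list_projects(tenant_id):
    """Every list query must filter by tenant_id (test case 2)."""
    return [p for p in PROJECTS if p["tenant_id"] == tenant_id]

def get_project(project_id, tenant_id):
    """Direct ID access is scoped too: a cross-tenant lookup behaves
    like 'not found' (test case 3), never 'forbidden', so the mere
    existence of the ID is not leaked."""
    for p in PROJECTS:
        if p["id"] == project_id and p["tenant_id"] == tenant_id:
            return p
    return None  # the API layer maps this to HTTP 404

# Test case 1: Tenant B must never see Tenant A's unique item.
assert all(p["tenant_id"] == "tenant-b" for p in list_projects("tenant-b"))
# Test case 3: Tenant B requesting Tenant A's project gets "not found".
assert get_project(1, "tenant-b") is None
assert get_project(1, "tenant-a")["name"] == "Apollo"
```

In the real suite, the same assertions run over HTTP against seeded tenant fixtures rather than an in-memory list.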
Scenario C: The Accessibility Complaint
Context: A visually impaired user has filed a formal accessibility complaint about your web application. They report: cannot complete the checkout form using a screen reader, images have no descriptions, and they cannot close modal dialogs with the keyboard. Your company needs to achieve WCAG 2.2 AA compliance within 90 days.
Questions (5 points):
1. Create a prioritized 90-day plan with three phases. (3 points)
2. How would you integrate accessibility testing into the development workflow to prevent future regressions? (2 points)
Solution
1. 90-day plan:
Phase 1: Critical fixes (Days 1-30)
- Fix the three reported issues immediately (checkout labels, image alt text, modal keyboard trap)
- Run axe-core scan on all pages — fix all Critical and Serious issues
- Add `lang` attribute to HTML
- Fix color contrast issues (4.5:1 minimum)
- Add skip navigation links
- Estimated effort: 1 developer, 1 QA (full-time)
Phase 2: Systematic audit (Days 31-60)
- Manual keyboard audit of all pages and flows
- Screen reader testing (VoiceOver + NVDA) of core user journeys
- Fix all form label associations
- Fix heading hierarchy across all pages
- Add ARIA labels to all interactive components
- Test and fix all modal/dialog components for keyboard support
- Estimated effort: 1 developer, 1 QA, 1 UX designer
Phase 3: Automation and prevention (Days 61-90)
- Integrate axe-core into CI/CD — fail builds on Critical/Serious violations
- Create accessibility testing checklist for QA
- Train development team on WCAG 2.2 AA requirements
- Document ARIA patterns used in the application
- Run final manual audit with external accessibility consultant
- Create ongoing monitoring dashboard
2. Preventing regressions:
- CI/CD: axe-core scans on every PR — block merge on Critical/Serious violations
- Code review: Mandatory checklist: alt text, labels, ARIA, keyboard support, contrast
- Design system: All components in the design system must meet WCAG AA
- Sprint testing: Include keyboard + screen reader testing in Definition of Done
- Quarterly: External accessibility audit by specialist
- Training: Quarterly accessibility awareness sessions for all developers
Part 3: Practical Exercise
Create a Web Application Test Strategy
Scenario: You are the QA Lead for a health and fitness web application with these features:
- User profiles with health data (weight, goals, medical conditions)
- Workout tracking with real-time timer (WebSocket)
- Meal planning with nutrition database (search, filters)
- Social features (follow users, share achievements)
- Premium subscription ($9.99/month via Stripe)
- Available in 2 languages (EN, ES)
- 200K users, 70% mobile, deployed on AWS with CloudFront CDN
Your task: Create a test strategy covering:
- Top 5 risk areas with justification (3 points)
- Performance testing approach including budgets (3 points)
- Security and privacy plan considering health data sensitivity (3 points)
- Accessibility requirements and testing approach (3 points)
- Real-time feature testing strategy for the workout timer (3 points)
Scoring rubric (15 points): 3 points per section, evaluated on completeness, realism, and prioritization.
Solution
1. Top 5 risk areas:
Health data privacy (CRITICAL) — Health data is sensitive under HIPAA/GDPR. A data breach exposes medical conditions. Test encryption, access controls, DSAR functionality, data deletion.
Payment processing (CRITICAL) — Subscription revenue. Test Stripe integration, proration on upgrades, dunning for failed payments, cancellation flow.
Real-time workout timer (HIGH) — Core feature. WebSocket disconnection during workout = lost data = user anger. Test reconnection, data persistence, offline recovery.
Mobile performance (HIGH) — 70% mobile users. Health/fitness apps are used during workouts on potentially poor connections. LCP <2s, offline support for active workouts.
Social feature security (HIGH) — Users can follow each other and share data. Test that health data is only shared with consent, no cross-user data leakage, blocked users cannot access profiles.
2. Performance approach:
- Budget: LCP <2s, TBT <150ms, CLS <0.05, total weight <300KB, API response <200ms
- Mobile-first: Test on 4G and 3G with CPU throttling
- CDN: Verify CloudFront edge performance from US, LATAM, and EU
- Real-time: WebSocket latency <100ms for timer updates
- CI: Lighthouse CI with budgets on every PR
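Budgets like these are only useful if enforced mechanically. A minimal checker might compare measured lab metrics against the budget and fail the build on any overage; the metric names and values here mirror the budget above but are otherwise illustrative.

```python
# Hypothetical budget keyed by metric name; thresholds from the plan above.
BUDGET = {"lcp_ms": 2000, "tbt_ms": 150, "cls": 0.05, "total_kb": 300, "api_ms": 200}

def over_budget(measured, budget=BUDGET):
    """Return the metrics that exceed their budget (empty dict == pass)."""
    return {m: v for m, v in measured.items() if m in budget and v > budget[m]}

# Example run: TBT is over budget, everything else is within it.
measured = {"lcp_ms": 1800, "tbt_ms": 210, "cls": 0.03, "total_kb": 280, "api_ms": 150}
failures = over_budget(measured)
assert failures == {"tbt_ms": 210}
```

In CI this check would run after a Lighthouse pass, with a non-empty result failing the pipeline.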
3. Security and privacy:
- Encrypt health data at rest and in transit (AES-256, TLS 1.3)
- Health data endpoints use `Cache-Control: no-store`
- GDPR compliance: consent management, DSAR, right to erasure including health data
- API security: rate limiting, input validation, CSRF tokens
- Social: Granular privacy settings (who can see weight, goals, medical data)
- Pen testing: Quarterly, focusing on health data endpoints
- Security headers: CSP, HSTS, X-Frame-Options on all responses
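A spot check for these header requirements is easy to automate. The header set and validation rules below are a sketch of what such a check might assert, not a complete policy:

```python
# Each required header maps to a minimal validity rule (illustrative).
REQUIRED_HEADERS = {
    "Strict-Transport-Security": lambda v: "max-age=" in v,
    "Content-Security-Policy": lambda v: len(v) > 0,
    "X-Frame-Options": lambda v: v.upper() in ("DENY", "SAMEORIGIN"),
}

def check_security_headers(headers, health_endpoint=False):
    """Return the list of header names that are missing or invalid."""
    failures = []
    for name, is_valid in REQUIRED_HEADERS.items():
        value = headers.get(name)
        if value is None or not is_valid(value):
            failures.append(name)
    # Health-data responses must never be cached (Cache-Control: no-store).
    if health_endpoint and "no-store" not in headers.get("Cache-Control", ""):
        failures.append("Cache-Control")
    return failures

# Example: a health-data response missing no-store fails the check.
resp = {
    "Strict-Transport-Security": "max-age=63072000",
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
    "Cache-Control": "private, max-age=60",
}
assert check_security_headers(resp, health_endpoint=True) == ["Cache-Control"]
```

Wired into the regression suite, this runs against every response captured during API tests.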
4. Accessibility:
- Target WCAG 2.2 AA
- Workout timer must be accessible: screen reader announcements for time updates (aria-live), keyboard-operable start/stop/pause
- Nutrition search: keyboard-navigable results, screen reader-friendly
- Color contrast: Critical for charts and progress indicators
- Testing: axe-core in CI + monthly manual keyboard/VoiceOver testing
- Bilingual: Both EN and ES versions must be equally accessible
5. Real-time testing:
- Test WebSocket connection lifecycle: connect, active workout, disconnect, reconnect
- Simulate network drop during workout: timer data persisted locally
- On reconnect: sync local data with server, no duplicate entries
- Multiple device test: Start workout on phone, check status on web — consistent
- Load test: 10,000 concurrent WebSocket connections
- Battery impact: Measure battery drain during 1-hour workout session on mobile
- Background behavior: Timer continues when app is backgrounded
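The reconnect-and-sync behavior (no duplicate entries after a network drop) hinges on client-generated idempotency keys. The sketch below shows just the dedup rule, with hypothetical event shapes:

```python
def sync_events(server_events, local_buffer):
    """Merge locally buffered timer events into the server's list,
    deduplicating on a client-generated event_id so that replaying
    the whole buffer after a reconnect never creates duplicates."""
    seen = {e["event_id"] for e in server_events}
    merged = list(server_events)
    for event in local_buffer:
        if event["event_id"] not in seen:
            merged.append(event)
            seen.add(event["event_id"])
    return merged

# Simulated drop: event "b" reached the server before the disconnect;
# the client replays its entire local buffer on reconnect anyway.
server = [{"event_id": "a", "t": 0}, {"event_id": "b", "t": 30}]
local = [{"event_id": "b", "t": 30}, {"event_id": "c", "t": 60}]
merged = sync_events(server, local)
assert [e["event_id"] for e in merged] == ["a", "b", "c"]
```

The test suite exercises this path by killing the WebSocket mid-workout and asserting the merged history after reconnect.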
What's Next
If you scored 28+ out of 40, you are ready for Module 6: API and Backend Testing. If you scored below 28, review the topics where you lost points before proceeding.
Module 6 builds directly on the web testing knowledge from Module 5, diving deep into API testing with Postman, REST architecture, authentication, schema validation, and backend testing strategies. The HTTP knowledge and DevTools skills from Module 5 will be essential.