Why Kubernetes Matters for QA
Kubernetes (K8s) has become the standard platform for running containerized applications in production. If the application you test runs on Kubernetes, understanding its architecture helps you debug failures, understand deployment behavior, and design more effective tests.
You do not need to become a Kubernetes administrator. But as a QA engineer, you need enough knowledge to read pod logs, check deployment status, understand why a test environment is misbehaving, and communicate effectively with DevOps teams.
Kubernetes Architecture
Cluster Components
A Kubernetes cluster consists of:
- Control Plane (formerly called the master): Runs the API server, scheduler, and controllers that manage cluster state
- Worker Nodes: Machines that run application containers
- etcd: Distributed key-value store holding cluster data (part of the control plane)
Key Resources
| Resource | Purpose | QA Relevance |
|---|---|---|
| Pod | Smallest unit — one or more containers | Your application runs in pods |
| Deployment | Manages pod replicas and updates | Rolling updates affect your tests |
| Service | Stable network endpoint for pods | How tests connect to the application |
| Namespace | Virtual cluster isolation | Test environments as namespaces |
| ConfigMap | Configuration data | Test environment settings |
| Secret | Sensitive data | API keys, database credentials |
| Ingress | External HTTP/S access | URL routing for test environments |
Essential kubectl Commands for QA
Viewing Resources
```bash
# List all pods in the current namespace
kubectl get pods

# List pods in a specific namespace
kubectl get pods -n staging

# Get detailed pod information
kubectl describe pod my-app-abc123

# List all services
kubectl get services

# List all deployments
kubectl get deployments

# Watch pods in real time
kubectl get pods -w
```
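The listing commands above produce tabular text, which makes them easy to post-process with standard tools. As a sketch (the pod names in the example are invented), a small filter can pull out pods that are not fully Ready:

```shell
#!/bin/sh
# Filter a `kubectl get pods` listing down to pods that are not fully Ready.
# Reads captured output on stdin, so it works piped from kubectl or a file.
not_ready() {
  # Skip the header row, split the READY column (e.g. "0/1"), and keep
  # rows where fewer containers are ready than desired.
  awk 'NR > 1 { split($2, r, "/"); if (r[1] + 0 < r[2] + 0) print $1 }'
}

# Usage: kubectl get pods -n staging | not_ready
```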
Debugging Applications
```bash
# View pod logs
kubectl logs my-app-abc123

# Follow logs in real time
kubectl logs -f my-app-abc123

# View logs from the previous container instance (after a restart)
kubectl logs my-app-abc123 --previous

# Execute a shell inside a pod (use sh if the image has no bash)
kubectl exec -it my-app-abc123 -- bash

# Port-forward to access a pod locally
kubectl port-forward my-app-abc123 3000:3000

# Check pod resource usage (requires metrics-server)
kubectl top pods
```
Checking Deployment Status
```bash
# View rollout status
kubectl rollout status deployment/my-app

# View deployment history
kubectl rollout history deployment/my-app

# Check events (recent cluster activity)
kubectl get events --sort-by='.lastTimestamp'
```
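A common CI use of `rollout status` is to gate the E2E run on a healthy rollout: the command blocks until the rollout finishes (or `--timeout` expires) and exits non-zero on failure. A minimal sketch, with the waiter command injected as a parameter so the gate can be exercised without a live cluster (the deployment name and test command in the usage line are placeholders):

```shell
#!/bin/sh
# Gate a test command on a rollout waiter. If the waiter fails, the tests
# are skipped and the gate itself fails.
run_gated() {
  wait_cmd=$1
  shift
  if $wait_cmd; then
    "$@"                                       # rollout settled: run tests
  else
    echo "rollout did not complete; skipping tests" >&2
    return 1
  fi
}

# Usage (hypothetical deployment and test command):
# run_gated "kubectl rollout status deployment/my-app -n staging --timeout=120s" \
#   npm run test:e2e
```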
Namespaces for Test Environments
Namespaces provide isolation within a cluster. Teams commonly use namespaces to create separate test environments:
```
production   → Live application
staging      → Pre-production testing
qa           → QA team's test environment
feature-xyz  → Ephemeral environment for a specific feature
```
```bash
# Create a namespace for testing
kubectl create namespace qa-testing

# Deploy to a specific namespace
kubectl apply -f deployment.yaml -n qa-testing

# Set the default namespace for the current context
kubectl config set-context --current --namespace=qa-testing
```
Common QA Scenarios in Kubernetes
Scenario 1: Tests Fail After Deployment
Your E2E tests suddenly fail after a new deployment. Check:
```bash
# Is the pod running?
kubectl get pods -n staging

# Are there recent restarts? (CrashLoopBackOff)
kubectl describe pod app-pod-name -n staging

# Check application logs for errors
kubectl logs app-pod-name -n staging --tail=100

# Check events for scheduling or resource issues
kubectl get events -n staging --sort-by='.lastTimestamp'
```
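Log output from the third step can be large; a small filter helps surface likely failure lines. A sketch (the signature list is an assumption to extend for your stack):

```shell
#!/bin/sh
# Scan captured pod logs for common failure signatures, with line numbers.
# Pipe from `kubectl logs <pod> -n staging --tail=100` or a saved log file.
scan_errors() {
  grep -niE 'error|exception|fatal|refused' "$@"
}

# Usage: kubectl logs app-pod-name -n staging --tail=100 | scan_errors
```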
Scenario 2: Intermittent Test Failures
Tests pass sometimes and fail others. Possible K8s-related causes:
- Pod scaling: Requests hitting different pods with different states
- Resource limits: Pod running out of memory or CPU
- Liveness probe failures: Pod restarting mid-test
```bash
# Check if pods are restarting
kubectl get pods -n staging -o wide

# Check resource limits and usage
kubectl top pods -n staging
kubectl describe pod app-pod -n staging | grep -A3 -E "Limits|Requests"
```
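Restart counts are the quickest signal here. A sketch over captured `kubectl get pods` output (it assumes RESTARTS is the fourth column, which holds for the default layout; pod names are invented):

```shell
#!/bin/sh
# Flag pods that have restarted at least once. Newer kubectl versions print
# RESTARTS as e.g. "4 (2m ago)"; taking the fourth field still yields the count.
restarting() {
  awk 'NR > 1 && $4 + 0 > 0 { print $1, "restarts:", $4 }'
}

# Usage: kubectl get pods -n staging | restarting
```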
Scenario 3: Cannot Connect to Test Environment
```bash
# Check if the service exists and has endpoints
kubectl get service my-app -n staging
kubectl get endpoints my-app -n staging

# Check ingress configuration
kubectl get ingress -n staging
kubectl describe ingress my-app-ingress -n staging

# Port-forward as a workaround
kubectl port-forward service/my-app 3000:80 -n staging
```
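The most telling check above is the endpoints one: a Service with no endpoints has no healthy pods behind it. A sketch that turns that into a pass/fail check, assuming `--no-headers` output where the ENDPOINTS column reads `<none>` when empty:

```shell
#!/bin/sh
# Exit 0 if the service has at least one endpoint, 1 if the ENDPOINTS column
# is "<none>". Expects one `kubectl get endpoints <svc> --no-headers` row on stdin.
has_endpoints() {
  awk '{ exit ($2 == "<none>") }'
}

# Usage: kubectl get endpoints my-app -n staging --no-headers | has_endpoints
```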
Exercise: Debug a Failing Test Environment
Your team deploys an application to a Kubernetes staging namespace. E2E tests that ran fine yesterday now timeout with “connection refused.” Walk through the debugging steps.
Solution
Step 1: Check pod status
```bash
kubectl get pods -n staging
```
Look for: CrashLoopBackOff, ImagePullBackOff, Pending, or 0/1 Ready.
Step 2: Check pod events and logs
```bash
kubectl describe pod app-pod-name -n staging
kubectl logs app-pod-name -n staging
```
Look for: OOM killed, failed health checks, configuration errors.
Step 3: Check the service
```bash
kubectl get service my-app -n staging
kubectl get endpoints my-app -n staging
```
Look for: Missing endpoints (no healthy pods backing the service).
Step 4: Check recent deployments
```bash
kubectl rollout status deployment/my-app -n staging
kubectl rollout history deployment/my-app -n staging
```
Look for: Failed rollout, wrong image tag, missing ConfigMap.
Step 5: Check resource availability
```bash
kubectl top pods -n staging
kubectl describe node | grep -A5 "Allocated resources"
```
Look for: Node at capacity, unable to schedule pods.
Common root causes:
- New deployment has a bug — pod crashes on startup
- Docker image tag is wrong — ImagePullBackOff
- Missing environment variable or secret — application fails to start
- Resource quota exceeded — pod cannot be scheduled
- Network policy blocks traffic — service is unreachable
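Most of these root causes surface as Warning events. A sketch that narrows a captured `kubectl get events` listing to the header plus Warning rows (it assumes the default column layout, with TYPE as the second column):

```shell
#!/bin/sh
# Keep the header row plus any Warning events from a captured
# `kubectl get events --sort-by='.lastTimestamp'` listing.
warnings_only() {
  awk 'NR == 1 || $2 == "Warning"'
}

# Usage: kubectl get events -n staging --sort-by='.lastTimestamp' | warnings_only
```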
Kubernetes Testing Patterns
Pattern 1: Namespace-per-PR
Create an ephemeral namespace for each pull request with the full application stack. Delete after tests pass.
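Namespace names must be valid RFC 1123 labels: lowercase alphanumerics and `-`, at most 63 characters. Branch or PR names from CI usually are not, so they need sanitizing first. A sketch (the CI variable in the usage line is a placeholder for whatever your CI exposes):

```shell
#!/bin/sh
# Turn a branch or PR name into a valid Kubernetes namespace name:
# lowercase, non-[a-z0-9-] characters replaced with '-', trimmed to 63
# characters, leading/trailing dashes stripped.
pr_namespace() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -c 'a-z0-9-' '-' \
    | cut -c1-63 \
    | sed 's/^-*//; s/-*$//'
}

# Usage: kubectl create namespace "$(pr_namespace "$CI_BRANCH")"
```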
Pattern 2: Shared Staging
A single staging namespace with the latest main branch deployed. All QA tests run here.
Pattern 3: Local Development with Minikube
Run a local Kubernetes cluster for development testing:
```bash
# Start Minikube
minikube start

# Deploy your application
kubectl apply -f k8s/

# Access the application
minikube service my-app --url
```
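Locally, the application may take a few seconds to come up after `kubectl apply`, so smoke tests can race the startup. A small polling helper avoids that; the probe command is injected so the loop itself is cluster-agnostic (with Minikube you might pass a `curl` against the URL printed by `minikube service my-app --url`, a hypothetical example below):

```shell
#!/bin/sh
# Poll an injected probe command until it succeeds or attempts run out.
wait_for_app() {
  probe=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if $probe; then
      return 0                     # application answered
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage (hypothetical URL):
# wait_for_app "curl -fsS -o /dev/null http://127.0.0.1:30000" 30
```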
Key Takeaways
- Know enough K8s to debug test failures — `kubectl get pods`, `kubectl logs`, and `kubectl describe` are your primary tools
- Namespaces isolate test environments — each team or feature can have its own namespace
- Pod lifecycle affects tests — restarts, scaling, and resource limits cause intermittent failures
- Services provide stable endpoints — always connect tests to services, not directly to pods
- Collaborate with DevOps — QA does not manage the cluster but must understand it to diagnose issues