What Is Locust?
Locust is an open-source load testing tool written in Python. Its defining feature is that you write your tests as plain Python code, defining user behavior as Python classes. If you or your team is comfortable with Python, Locust offers one of the lowest barriers to entry of any load testing tool.
Locust uses an event-driven architecture (based on gevent) rather than threads, which allows a single process to simulate thousands of concurrent users. It includes a built-in web UI for monitoring tests in real time and supports distributed testing across multiple machines.
The name “Locust” comes from the swarming behavior of locusts — you define user behaviors, and Locust unleashes a swarm of them on your application.
When to Choose Locust
| Feature | Locust | k6 | JMeter | Gatling |
|---|---|---|---|---|
| Language | Python | JavaScript | GUI/XML | Scala/Java |
| Web UI | Built-in | None | GUI (not for monitoring) | None |
| Distributed | Master/Worker | k6 Cloud/xk6 | Controller/Worker | Enterprise |
| Custom load shapes | Python classes | Scenarios | Step Thread Group | Injection profiles |
| Learning curve | Easy (Python) | Easy (JS) | Moderate | Steep (Scala) |
| Python library access | Full | None | None | None |
Choose Locust when: Your team knows Python, you want a real-time web UI, you need to use Python libraries in your tests (database drivers, ML libraries, custom protocols), or you need highly customizable user behavior.
Installation
```bash
pip install locust
```
Verify:
```bash
locust --version
```
Your First Locust Test
Create a file called locustfile.py:
```python
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)  # wait 1-3 seconds between tasks

    @task(3)
    def view_products(self):
        self.client.get("/api/products")

    @task(1)
    def view_product_detail(self):
        self.client.get("/api/products/1")
```
Run it:
```bash
locust -f locustfile.py --host=https://api.example.com
```
Open http://localhost:8089 in your browser to see the web UI. Enter the number of users, spawn rate, and click Start.
Key Concepts
HttpUser: A class representing a virtual user. Each instance simulates one user.
wait_time: Controls the pause between tasks. between(1, 3) means a random wait of 1-3 seconds. Other options:
- `constant(2)`: always wait exactly 2 seconds
- `constant_pacing(5)`: adaptive wait that ensures a task runs at most once every 5 seconds (if the task itself takes longer, there is no extra wait)
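Each of these helpers returns a callable that Locust invokes to decide the next pause. A plain-Python sketch of their semantics (a simplification for illustration, not Locust's actual implementation, which lives in `locust.wait_time`):

```python
import random

# Sketch: each wait_time helper is a factory that returns a function
# yielding the number of seconds to pause before the next task.

def between(min_wait, max_wait):
    # random pause uniformly distributed in [min_wait, max_wait]
    return lambda: random.uniform(min_wait, max_wait)

def constant(wait):
    # fixed pause of `wait` seconds every time
    return lambda: wait

pause = between(1, 3)()
assert 1 <= pause <= 3
assert constant(2)() == 2
```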
@task decorator: Marks methods as user tasks. The number argument sets the weight — @task(3) means this task runs 3x more often than @task(1).
Task Weighting and Sequential Tasks
Weighted Tasks
Task weights model realistic user behavior. In a typical e-commerce application, browsing is far more common than purchasing:
```python
class EcommerceUser(HttpUser):
    wait_time = between(1, 5)

    @task(10)
    def browse_products(self):
        self.client.get("/api/products")

    @task(5)
    def search(self):
        self.client.get("/api/search?q=laptop")

    @task(3)
    def view_product(self):
        self.client.get("/api/products/42")

    @task(1)
    def add_to_cart(self):
        self.client.post("/api/cart", json={"product_id": 42, "qty": 1})
```
Here, browsing happens 10x more often than adding to cart — which mirrors real user behavior.
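Under the hood, Locust expands weighted tasks into a flat list with each task repeated `weight` times, then picks the next task uniformly at random from that list. The equivalent selection, sketched in plain Python with the method names from the example above:

```python
import random

# @task(N) weights expand into a flat list; each task appears N times,
# so a uniform random choice reproduces the weighted ratios.
weighted_tasks = (["browse_products"] * 10 + ["search"] * 5
                  + ["view_product"] * 3 + ["add_to_cart"] * 1)

random.seed(42)  # deterministic, for illustration only
counts = {name: 0 for name in set(weighted_tasks)}
for _ in range(19_000):
    counts[random.choice(weighted_tasks)] += 1

# browsing is selected roughly 10x as often as adding to cart
assert counts["browse_products"] > 5 * counts["add_to_cart"]
```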
Sequential Tasks (TaskSets)
For ordered workflows, use SequentialTaskSet:
```python
from locust import HttpUser, SequentialTaskSet, task, between

class PurchaseFlow(SequentialTaskSet):
    @task
    def login(self):
        response = self.client.post("/api/auth/login", json={
            "username": "testuser",
            "password": "testpass"
        })
        self.token = response.json()["token"]

    @task
    def browse(self):
        self.client.get("/api/products", headers={
            "Authorization": f"Bearer {self.token}"
        })

    @task
    def add_to_cart(self):
        self.client.post("/api/cart", json={"product_id": 1, "qty": 1},
                         headers={"Authorization": f"Bearer {self.token}"})

    @task
    def checkout(self):
        self.client.post("/api/checkout",
                         headers={"Authorization": f"Bearer {self.token}"})
        self.interrupt()  # return to parent user class

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)
    tasks = [PurchaseFlow]
```
Lifecycle Hooks
```python
class WebsiteUser(HttpUser):
    wait_time = between(1, 3)

    def on_start(self):
        """Called when a user starts. Use for login/setup."""
        response = self.client.post("/api/auth/login", json={
            "username": "user1", "password": "pass123"
        })
        self.token = response.json()["token"]
        self.headers = {"Authorization": f"Bearer {self.token}"}

    def on_stop(self):
        """Called when a user stops. Use for cleanup."""
        self.client.post("/api/auth/logout", headers=self.headers)

    @task
    def browse(self):
        self.client.get("/api/products", headers=self.headers)
```
Custom Validation
```python
@task
def get_products(self):
    with self.client.get("/api/products", catch_response=True) as response:
        if response.status_code != 200:
            response.failure(f"Got status {response.status_code}")
        elif "products" not in response.json():
            response.failure("Response missing 'products' field")
        elif len(response.json()["products"]) == 0:
            response.failure("Empty product list")
        else:
            response.success()
```
Distributed Testing
Locust supports distributed testing with a master/worker architecture:
```bash
# Start master
locust -f locustfile.py --master --host=https://api.example.com

# Start workers (on same or different machines)
locust -f locustfile.py --worker --master-host=192.168.1.100
locust -f locustfile.py --worker --master-host=192.168.1.100
```
The master coordinates the test and aggregates results. Workers generate the actual load. Each worker can simulate thousands of users.
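As a sketch, the same topology can be run with Docker Compose using the official `locustio/locust` image (the mount path and hostnames here are illustrative assumptions; adapt them to your project layout):

```yaml
# docker-compose.yml (sketch): one master, scalable workers.
# Assumes locustfile.py sits in the same directory as this file.
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master --host https://api.example.com
  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host master
```

Scale out with `docker compose up --scale worker=4`.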
The Web UI
Locust’s web UI at http://localhost:8089 provides:
- Real-time charts: Requests per second, response times, number of users
- Statistics table: Per-request metrics (median, p95, p99, max, fail rate)
- Failures tab: Detailed error messages
- Download data: CSV export of results
- Stop/Reset: Control the test without restarting
For headless (CI/CD) execution, where `-u` sets the total user count and `-r` the spawn rate in users per second:

```bash
locust -f locustfile.py --headless -u 100 -r 10 --run-time 5m --host=https://api.example.com
```
Exercise: Multi-Behavior Load Test with Locust
Write a Locust test that simulates three distinct user types for a content platform.
Scenario
A content platform has three types of users:
- Readers (70%) — browse articles, read content
- Authors (20%) — create and edit articles
- Admins (5%) — manage users and view analytics
Requirements
- Create separate user classes for each type with appropriate task weights
- Use `on_start` for authentication
- Add custom validation for response content
- Use `between()` for realistic think time
- Make the admin user class run less frequently (use `weight` on the user class)
Hint: Multiple User Types
```python
class ReaderUser(HttpUser):
    weight = 70  # 70% of users are readers
    wait_time = between(2, 5)

class AuthorUser(HttpUser):
    weight = 20  # 20% are authors
    wait_time = between(3, 8)

class AdminUser(HttpUser):
    weight = 5  # 5% are admins
    wait_time = between(5, 10)
```
The weight attribute on a User class controls the proportion of that user type in the swarm. Locust will spawn users in approximately the ratio defined by their weights.
Solution: Complete Locust Test
```python
from locust import HttpUser, task, between
import random

class ReaderUser(HttpUser):
    weight = 70
    wait_time = between(2, 5)

    def on_start(self):
        response = self.client.post("/api/auth/login", json={
            "username": f"reader_{random.randint(1, 1000)}",
            "password": "readerpass"
        })
        if response.status_code == 200:
            self.token = response.json()["token"]
            self.headers = {"Authorization": f"Bearer {self.token}"}
        else:
            self.headers = {}

    @task(5)
    def browse_articles(self):
        with self.client.get("/api/articles", headers=self.headers,
                             catch_response=True) as response:
            if response.status_code == 200:
                articles = response.json().get("articles", [])
                if len(articles) > 0:
                    response.success()
                else:
                    response.failure("No articles returned")
            else:
                response.failure(f"Status: {response.status_code}")

    @task(3)
    def read_article(self):
        article_id = random.randint(1, 100)
        with self.client.get(f"/api/articles/{article_id}",
                             headers=self.headers,
                             catch_response=True) as response:
            if response.status_code == 200:
                body = response.json()
                if "title" in body and "content" in body:
                    response.success()
                else:
                    response.failure("Article missing title or content")
            else:
                response.failure(f"Status: {response.status_code}")

    @task(1)
    def search_articles(self):
        queries = ["python", "testing", "qa", "automation", "ci/cd"]
        query = random.choice(queries)
        self.client.get(f"/api/search?q={query}", headers=self.headers)

class AuthorUser(HttpUser):
    weight = 20
    wait_time = between(3, 8)

    def on_start(self):
        response = self.client.post("/api/auth/login", json={
            "username": f"author_{random.randint(1, 50)}",
            "password": "authorpass"
        })
        self.token = response.json()["token"]
        self.headers = {
            "Authorization": f"Bearer {self.token}",
            "Content-Type": "application/json"
        }

    @task(3)
    def view_my_articles(self):
        self.client.get("/api/articles/mine", headers=self.headers)

    @task(2)
    def create_draft(self):
        self.client.post("/api/articles", headers=self.headers, json={
            "title": f"Test Article {random.randint(1, 10000)}",
            "content": "This is a test article content for load testing.",
            "status": "draft"
        })

    @task(1)
    def edit_article(self):
        article_id = random.randint(1, 50)
        self.client.put(f"/api/articles/{article_id}", headers=self.headers, json={
            "title": "Updated Title",
            "content": "Updated content."
        })

class AdminUser(HttpUser):
    weight = 5
    wait_time = between(5, 10)

    def on_start(self):
        response = self.client.post("/api/auth/login", json={
            "username": "admin",
            "password": "adminpass"
        })
        self.token = response.json()["token"]
        self.headers = {"Authorization": f"Bearer {self.token}"}

    @task(3)
    def view_analytics(self):
        self.client.get("/api/admin/analytics", headers=self.headers)

    @task(2)
    def list_users(self):
        self.client.get("/api/admin/users", headers=self.headers)

    @task(1)
    def view_system_health(self):
        self.client.get("/api/admin/health", headers=self.headers)
```
Running the test:
```bash
# With web UI
locust -f locustfile.py --host=https://content-api.example.com

# Headless for CI/CD
locust -f locustfile.py --headless -u 200 -r 20 --run-time 10m \
    --host=https://content-api.example.com --csv=results
```
What to analyze:
- Compare response times across user types
- Verify Reader endpoints handle the most load (70% of traffic)
- Check that Author write operations do not degrade Reader performance
- Admin endpoints should show low traffic but stable response times
- The `--csv` flag generates CSV files for post-test analysis
Pro Tips
- Custom Load Shapes: Create a class extending `LoadTestShape` to define complex load patterns (spike, step, wave) with full Python flexibility — more flexible than most tools' built-in load shaping.
- Events System: Use Locust's event hooks (`@events.test_start.add_listener`, `@events.request.add_listener`) to add custom logging, metrics, or notifications during test execution.
- FastHttpUser: For maximum throughput, use `FastHttpUser` instead of `HttpUser`. It uses a C-based HTTP client (geventhttpclient) that can be 5-6x faster for simple requests.
- Tag-Based Filtering: Use the `@tag('smoke')` decorator and run with `--tags smoke` to execute only tagged tasks — useful for running subsets of a large test suite.
- Docker Compose for Distributed: Use Docker Compose to spin up a master and multiple workers with a single command, making distributed testing repeatable and easy to scale.
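To make the load-shape tip concrete: the heart of a `LoadTestShape` subclass is a `tick()` method that returns `(user_count, spawn_rate)` for the current elapsed time, or `None` to stop the test. A plain-Python sketch of a step shape's tick logic (the step sizes and cutoff are arbitrary assumptions; in real Locust this would live in a `LoadTestShape` subclass and read `self.get_run_time()`):

```python
# Step load shape sketch: every 60 s add 10 users, cap at 100,
# and stop the test after 10 minutes.
def tick(run_time, step_time=60, step_users=10, max_users=100, spawn_rate=10):
    """Return (user_count, spawn_rate) for elapsed run_time, or None to stop."""
    if run_time > 600:
        return None  # returning None ends the test
    current_step = run_time // step_time + 1
    return (min(current_step * step_users, max_users), spawn_rate)

assert tick(0) == (10, 10)    # first minute: 10 users
assert tick(130) == (30, 10)  # third minute: 30 users
assert tick(601) is None      # past 10 minutes: stop
```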