TL;DR
- Pytest is Python's de facto standard testing framework: simple syntax, powerful features
- Write tests as plain functions: def test_something(): with assert statements
- Fixtures handle setup/teardown with the @pytest.fixture decorator
- Parametrize to run the same test with different data: @pytest.mark.parametrize
- Rich plugin ecosystem: pytest-cov (coverage), pytest-xdist (parallel), pytest-mock
Best for: Python developers, Django/Flask/FastAPI projects, data science testing
Skip if: You're not using Python (use Jest for JS, JUnit for Java)
Read time: 15 minutes
Your unittest tests are verbose. Every test needs a class. Assertions don’t show what failed. Fixtures require inheritance hierarchies.
Pytest fixes all of this. Write simple functions. Get detailed failure output. Share fixtures without inheritance. Run tests in parallel.
This tutorial teaches pytest from zero — installation, assertions, fixtures, parametrization, and the patterns that make Python tests maintainable.
What is Pytest?
Pytest is a Python testing framework that makes writing and running tests easy. It’s the most popular testing framework in the Python ecosystem.
Why pytest over unittest:
- Simpler syntax: tests are functions, not classes
- Better assertions: use plain assert, get detailed diffs
- Powerful fixtures: dependency injection without inheritance
- Parametrization: run the same test with different inputs
- Plugin ecosystem: 1000+ plugins for every need
- Auto-discovery: finds tests automatically
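The syntax difference shows even in a toy test. A minimal side-by-side sketch:

```python
# unittest style: a TestCase class with camelCase assertion methods
import unittest

class TestAddUnittest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

# pytest style: a plain function and a plain assert
def test_add():
    assert 1 + 1 == 2
```

Both tests check the same thing; pytest simply removes the class and method-name ceremony.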
Installation
# Install pytest
pip install pytest
# Verify installation
pytest --version
Project Structure
my-project/
├── src/
│   └── calculator.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py          # Shared fixtures
│   ├── test_calculator.py
│   └── test_utils.py
├── pytest.ini               # Configuration
└── requirements.txt
Pytest auto-discovers tests in:
- Files named test_*.py or *_test.py
- Functions named test_*
- Classes named Test* with methods named test_*
Writing Your First Test
# src/calculator.py
def add(a, b):
return a + b
def divide(a, b):
if b == 0:
raise ValueError("Cannot divide by zero")
return a / b
# tests/test_calculator.py
from src.calculator import add, divide
import pytest
def test_add_positive_numbers():
assert add(2, 3) == 5
def test_add_negative_numbers():
assert add(-1, -1) == -2
def test_add_zero():
assert add(5, 0) == 5
def test_divide():
assert divide(10, 2) == 5
def test_divide_by_zero():
with pytest.raises(ValueError, match="Cannot divide by zero"):
divide(10, 0)
Running Tests
# Run all tests
pytest
# Verbose output
pytest -v
# Run specific file
pytest tests/test_calculator.py
# Run specific test
pytest tests/test_calculator.py::test_add_positive_numbers
# Run tests matching pattern
pytest -k "add"
# Stop on first failure
pytest -x
# Show print statements
pytest -s
# Run with coverage
pytest --cov=src
Assertions
Pytest uses Python’s built-in assert statement with detailed failure messages.
Basic Assertions
def test_assertions():
# Equality
assert 1 + 1 == 2
assert "hello" == "hello"
assert [1, 2, 3] == [1, 2, 3]
assert {"a": 1} == {"a": 1}
# Truthiness
assert True
assert not False
assert "string" # non-empty is truthy
assert [1, 2] # non-empty list is truthy
# Comparisons
assert 5 > 3
assert 5 >= 5
assert 3 < 5
assert "abc" < "abd"
# Membership
assert 2 in [1, 2, 3]
assert "hello" in "hello world"
assert "key" in {"key": "value"}
# Identity
assert None is None
a = [1, 2]
b = a
assert a is b
import pytest

def test_approximate_equality():
# Floating point comparison
assert 0.1 + 0.2 == pytest.approx(0.3)
assert [0.1 + 0.2, 0.2 + 0.4] == pytest.approx([0.3, 0.6])
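pytest.approx also accepts explicit tolerances. A quick sketch:

```python
import pytest

def test_approx_tolerances():
    # rel= sets a relative tolerance (the default is 1e-6)
    assert 100.004 == pytest.approx(100, rel=1e-4)
    # near zero a relative tolerance is useless, so set an absolute one
    assert 0.0001 == pytest.approx(0, abs=1e-3)
```

Use abs= whenever the expected value is zero or very close to it.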
Exception Testing
import pytest
def test_raises_exception():
with pytest.raises(ValueError):
int("not a number")
def test_raises_with_message():
with pytest.raises(ValueError, match="invalid literal"):
int("not a number")
def test_exception_info():
with pytest.raises(ValueError) as exc_info:
int("abc")
assert "invalid literal" in str(exc_info.value)
Warning Testing
import warnings
import pytest
def deprecated_function():
warnings.warn("This is deprecated", DeprecationWarning)
return 42
def test_warns():
with pytest.warns(DeprecationWarning):
deprecated_function()
Fixtures
Fixtures provide test dependencies and handle setup/teardown.
Basic Fixture
import pytest
@pytest.fixture
def sample_user():
return {"name": "John", "email": "john@example.com", "age": 30}
def test_user_has_name(sample_user):
assert sample_user["name"] == "John"
def test_user_has_email(sample_user):
assert "@" in sample_user["email"]
Setup and Teardown
@pytest.fixture
def database_connection():
# Setup
connection = create_connection()
yield connection # Test runs here
# Teardown
connection.close()
def test_query(database_connection):
result = database_connection.query("SELECT 1")
assert result == 1
Fixture Scopes
@pytest.fixture(scope="function") # Default: runs for each test
def function_fixture():
return create_resource()
@pytest.fixture(scope="class") # Once per test class
def class_fixture():
return create_resource()
@pytest.fixture(scope="module") # Once per test file
def module_fixture():
return create_resource()
@pytest.fixture(scope="session") # Once per test session
def session_fixture():
return create_resource()
Fixtures Using Other Fixtures
@pytest.fixture
def user():
return {"username": "testuser", "password": "secret"}
@pytest.fixture
def authenticated_client(user):
client = APIClient()
client.login(user["username"], user["password"])
return client
def test_protected_endpoint(authenticated_client):
response = authenticated_client.get("/api/protected")
assert response.status_code == 200
conftest.py — Shared Fixtures
# tests/conftest.py
import pytest
@pytest.fixture
def app():
"""Create application for testing."""
from myapp import create_app
app = create_app(testing=True)
return app
@pytest.fixture
def client(app):
"""Create test client."""
return app.test_client()
@pytest.fixture
def db(app):
"""Create database tables."""
from myapp import db
with app.app_context():
db.create_all()
yield db
db.drop_all()
Fixtures in conftest.py are automatically available to all tests in the directory and subdirectories.
Parametrization
Run the same test with different inputs.
Basic Parametrization
import pytest
@pytest.mark.parametrize("input,expected", [
(1, 2),
(2, 4),
(3, 6),
(0, 0),
(-1, -2),
])
def test_double(input, expected):
assert input * 2 == expected
Multiple Parameters
@pytest.mark.parametrize("a,b,expected", [
(1, 1, 2),
(2, 3, 5),
(10, -5, 5),
(0, 0, 0),
])
def test_add(a, b, expected):
assert add(a, b) == expected
Parametrize with IDs
@pytest.mark.parametrize("email,valid", [
("user@example.com", True),
("invalid", False),
("@missing.com", False),
("user@.com", False),
], ids=["valid_email", "no_at_sign", "no_local_part", "no_domain"])
def test_email_validation(email, valid):
assert is_valid_email(email) == valid
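The is_valid_email function above is assumed. A minimal sketch that satisfies these four cases (a production validator would do much more):

```python
import re

def is_valid_email(email: str) -> bool:
    # non-empty local part, one @, and a dotted domain with non-empty labels
    return bool(re.fullmatch(r"[^@\s]+@[^@\s.]+(\.[^@\s.]+)+", email))
```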
Combine Parametrize
@pytest.mark.parametrize("x", [1, 2])
@pytest.mark.parametrize("y", [3, 4])
def test_multiply(x, y):
# Runs 4 times: (1,3), (1,4), (2,3), (2,4)
assert x * y == y * x
Markers
Markers categorize and select tests.
Built-in Markers
import sys

import pytest
@pytest.mark.skip(reason="Not implemented yet")
def test_future_feature():
pass
@pytest.mark.skipif(sys.platform == "win32", reason="Unix only")
def test_unix_feature():
pass
@pytest.mark.xfail(reason="Known bug #123")
def test_known_bug():
assert False # Won't fail the suite
@pytest.mark.xfail(strict=True)
def test_should_fail():
assert False # Must fail, or test fails
Custom Markers
# pytest.ini
# [pytest]
# markers =
# slow: marks tests as slow
# integration: marks tests as integration tests
@pytest.mark.slow
def test_slow_operation():
import time
time.sleep(5)
assert True
@pytest.mark.integration
def test_database_integration():
# Requires database
pass
# Run only slow tests
pytest -m slow
# Skip slow tests
pytest -m "not slow"
# Run integration or slow tests
pytest -m "integration or slow"
Mocking
Use pytest-mock or unittest.mock for mocking.
pip install pytest-mock
# Using pytest-mock
def test_api_call(mocker):
mock_get = mocker.patch("requests.get")
mock_get.return_value.json.return_value = {"data": "mocked"}
from myapp import fetch_data
result = fetch_data()
assert result == {"data": "mocked"}
mock_get.assert_called_once()
# Using unittest.mock
from unittest.mock import patch, MagicMock
def test_with_mock():
with patch("myapp.external_service") as mock_service:
mock_service.return_value = "mocked result"
from myapp import process_data
result = process_data()
assert result == "mocked result"
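For simpler swaps, pytest's built-in monkeypatch fixture can replace environment variables and attributes for the duration of one test. A sketch (get_api_url is a hypothetical function, not part of pytest):

```python
import os

def get_api_url() -> str:
    return os.environ.get("API_URL", "https://api.example.com")

def test_api_url_override(monkeypatch):
    # monkeypatch restores the original environment after the test
    monkeypatch.setenv("API_URL", "http://localhost:8000")
    assert get_api_url() == "http://localhost:8000"
```

Unlike manual os.environ edits, monkeypatch undoes its changes automatically, so tests cannot leak state into each other.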
Test Organization
Grouping with Classes
class TestUserValidation:
def test_valid_email(self):
assert is_valid_email("user@example.com")
def test_invalid_email(self):
assert not is_valid_email("invalid")
def test_empty_email(self):
assert not is_valid_email("")
class TestUserCreation:
def test_creates_user(self, db):
user = create_user("John", "john@example.com")
assert user.id is not None
Test File Organization
tests/
├── conftest.py              # Shared fixtures
├── unit/
│   ├── test_models.py
│   └── test_utils.py
├── integration/
│   ├── test_api.py
│   └── test_database.py
└── e2e/
    └── test_workflows.py
Configuration
pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
python_functions = test_*
python_classes = Test*
addopts = -v --tb=short
markers =
slow: marks tests as slow
integration: integration tests
filterwarnings =
ignore::DeprecationWarning
pyproject.toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
addopts = "-v --tb=short"
markers = [
"slow: marks tests as slow",
"integration: integration tests",
]
Useful Plugins
# Code coverage
pip install pytest-cov
pytest --cov=src --cov-report=html
# Parallel execution
pip install pytest-xdist
pytest -n auto # Use all CPU cores
# Better diffs
pip install pytest-clarity
# Random order
pip install pytest-randomly
# Repeat tests
pip install pytest-repeat
pytest --count=10
# Timeout
pip install pytest-timeout
pytest --timeout=300
Best Practices
1. One Assertion Per Test (Ideally)
# Less ideal: multiple assertions
def test_user_creation():
user = create_user("John", "john@example.com")
assert user.name == "John"
assert user.email == "john@example.com"
assert user.id is not None
# Better: focused tests
def test_user_has_correct_name():
user = create_user("John", "john@example.com")
assert user.name == "John"
def test_user_has_correct_email():
user = create_user("John", "john@example.com")
assert user.email == "john@example.com"
2. Descriptive Test Names
# Bad
def test_1():
pass
# Good
def test_user_cannot_login_with_invalid_password():
pass
3. Arrange-Act-Assert Pattern
def test_discount_calculation():
# Arrange
cart = ShoppingCart()
cart.add_item(Product("Book", 100))
cart.add_item(Product("Pen", 10))
# Act
total = cart.calculate_total(discount=0.1)
# Assert
assert total == 99 # (100 + 10) * 0.9
4. Use Fixtures for Common Setup
@pytest.fixture
def authenticated_user(db):
user = User.create(email="test@example.com")
user.set_password("password123")
return user
def test_can_update_profile(authenticated_user):
authenticated_user.update_profile(name="New Name")
assert authenticated_user.name == "New Name"
AI-Assisted Pytest
AI tools can accelerate test writing when used appropriately.
What AI does well:
- Generate test cases from function signatures
- Create parametrized test data
- Write boilerplate fixtures
- Suggest edge cases to test
What still needs humans:
- Deciding what behavior matters
- Verifying tests test the right thing
- Debugging complex fixture interactions
- Understanding business logic requirements
Useful prompt:
I have this Python function:
def validate_order(order: dict) -> dict:
if not order.get("items"):
return {"valid": False, "error": "No items"}
if any(item["quantity"] <= 0 for item in order["items"]):
return {"valid": False, "error": "Invalid quantity"}
return {"valid": True}
Generate pytest tests covering:
- Valid order with multiple items
- Empty items list
- Missing items key
- Zero quantity
- Negative quantity
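One plausible, hand-checked result of that prompt, using the parametrization pattern from earlier (the function is repeated so the block is self-contained):

```python
import pytest

def validate_order(order: dict) -> dict:
    if not order.get("items"):
        return {"valid": False, "error": "No items"}
    if any(item["quantity"] <= 0 for item in order["items"]):
        return {"valid": False, "error": "Invalid quantity"}
    return {"valid": True}

@pytest.mark.parametrize("order,expected", [
    ({"items": [{"quantity": 1}, {"quantity": 3}]}, {"valid": True}),
    ({"items": []}, {"valid": False, "error": "No items"}),
    ({}, {"valid": False, "error": "No items"}),
    ({"items": [{"quantity": 0}]}, {"valid": False, "error": "Invalid quantity"}),
    ({"items": [{"quantity": -2}]}, {"valid": False, "error": "Invalid quantity"}),
])
def test_validate_order(order, expected):
    assert validate_order(order) == expected
```

Whatever a tool generates, review the cases yourself; it is easy for generated tests to assert the wrong expected value and pass anyway.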
FAQ
What is pytest used for?
Pytest is Python's most popular testing framework. It runs unit tests, integration tests, functional tests, and end-to-end tests with minimal boilerplate code. It's used across the Python ecosystem by projects like Flask, FastAPI, and Requests, and Django projects commonly adopt it via pytest-django. It's simpler than unittest while being more powerful.
Is pytest better than unittest?
For most use cases, yes. Pytest has simpler syntax (plain functions instead of classes), better assertion messages (shows actual vs expected), powerful fixtures without inheritance, and a rich plugin ecosystem. Unittest is part of Python’s standard library and works fine, but requires more code to achieve the same results. Pytest can run unittest tests too.
How do I run pytest?
Run pytest in your terminal from your project root. It automatically discovers test files (named test_*.py) and test functions (named test_*). Use pytest -v for verbose output showing each test. Use pytest -k "pattern" to run tests matching a pattern. Use pytest path/to/test_file.py to run specific files.
What are pytest fixtures?
Fixtures are functions decorated with @pytest.fixture that provide test dependencies like database connections, test data, API clients, or configured objects. Tests receive fixtures as function arguments. Fixtures handle setup before tests and teardown after (using yield). They can have different scopes: function, class, module, or session.
See Also
- Pytest Advanced Techniques - Mastering fixtures, parametrization, and plugins
- Locust Python Load Testing - Performance testing with Python
- Test Automation Pyramid - Where unit tests fit in your strategy
- Robot Framework - Keyword-driven testing for Python
