Testing FastAPI Endpoints with Docker and Pytest
Welcome to Part 4 of Up and Running with FastAPI. If you missed part 3, you can find it here.
This series is focused on building a full-stack application with the FastAPI framework. The app allows users to post requests to have their residence cleaned, and other users can select a cleaning project for a given hourly rate.
In the previous post we created our CleaningsRepository
to act as a database interface for our application, and wrote a single SQL query to insert data into our database. Then we hooked that up to a POST endpoint and created our first cleaning using the interactive docs that FastAPI provides for us. In this post, we’ll follow a TDD approach and create a standard GET endpoint. Instead of mocking database actions, we’ll spin up a testing environment and run all of our tests against a fresh PostgreSQL database to guarantee that our application is functioning as we expect it to.
About Testing
When it comes to testing, developers are stupidly opinionated.
- “Anything less than 100% code coverage is unsafe” - opinionated senior dev
- “Tests are hard to maintain and don’t actually catch important bugs” - opinionated dev who doesn’t like testing
- “We didn’t have time to test, but we’ll get to it eventually” - almost every dev at one point
- “Just use Docker” - senior microservices architect
I don’t actually know who’s right, but I do know that I like testing.
Yes, you read that correctly. I truly enjoy testing.
My first assumptions about any problem are almost always flawed in some way. Tests are my best friend because they help me hone my mental models and allow me to juggle fewer variables in my head at a given time. Offload some of that mental strain to automated testing. You’ll thank yourself later.
Especially when I’m just starting on a project, tests highlight my initial misconceptions quickly and give me a more familiar platform for introspection. Since this is a tutorial series on FastAPI, I hope to codify my approach to testing that I’ve developed mostly from examining three GitHub repos - all linked in the resources section at the end of this post.
Credit where credit is due.
Now on to the testing.
TDD and Pytest
Though it’s nice to test our code post-hoc to ensure that we’re seeing the behavior we expect, there’s definitely a better approach. Test-driven development (TDD) offers a clear guiding principle when developing applications: write the test first, then write enough code to make it pass. We’ll follow this process as we build out the rest of our endpoints.
As for actually writing and running tests, we’ll take advantage of pytest - a mature, full-featured Python testing tool that helps you write better programs. At least, that’s how they put it. Setting up pytest is straightforward.
First, we’ll update our requirements.txt file with our new testing dependencies. There are quite a few new additions here. Don’t worry, we’ll get to them all in time.
# app
fastapi==0.55.1
uvicorn==0.11.3
pydantic==1.4
email-validator==1.1.1
# db
databases[postgresql]==0.4.2
SQLAlchemy==1.3.16
alembic==1.4.2
# dev
pytest==6.2.1
pytest-asyncio==0.14.0
httpx==0.16.1
asgi-lifespan==1.0.1
With these new dependencies in place, we’ll need to rebuild our container.
docker-compose up --build
While we wait for our build to complete, let’s take a look at what we’ve just installed:
- pytest - our testing framework
- pytest-asyncio - provides utilities for testing asynchronous code
- httpx - provides an async request client for testing endpoints
- asgi-lifespan - allows testing async applications without having to spin up an ASGI server
Used together, these packages form a robust, extensible testing system for FastAPI applications.
Note: Readers unfamiliar with pytest are strongly encouraged to read the docs linked above on configuring environments and using fixtures.
Our tests will live in their own directory, under backend/tests - so we should create that directory if we haven’t already. We’ll also need to add three new files.
touch backend/tests/__init__.py
touch backend/tests/conftest.py
touch backend/tests/test_cleanings.py
In our newly created conftest.py, we’ll add some code needed to configure our testing environment. There is a lot going on here, so we’ll write the code first and then dissect it.
import warnings
import os

import pytest
from asgi_lifespan import LifespanManager
from fastapi import FastAPI
from httpx import AsyncClient
from databases import Database

import alembic
from alembic.config import Config


# Apply migrations at beginning and end of testing session
@pytest.fixture(scope="session")
def apply_migrations():
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    os.environ["TESTING"] = "1"
    config = Config("alembic.ini")

    alembic.command.upgrade(config, "head")
    yield
    alembic.command.downgrade(config, "base")


# Create a new application for testing
@pytest.fixture
def app(apply_migrations: None) -> FastAPI:
    from app.api.server import get_application

    return get_application()


# Grab a reference to our database when needed
@pytest.fixture
def db(app: FastAPI) -> Database:
    return app.state._db


# Make requests in our tests
@pytest.fixture
async def client(app: FastAPI) -> AsyncClient:
    async with LifespanManager(app):
        async with AsyncClient(
            app=app,
            base_url="http://testserver",
            headers={"Content-Type": "application/json"}
        ) as client:
            yield client
Alright, that’s a lot of stuff. Let’s break down what’s happening in pieces.
For readers unfamiliar with fixtures, sit tight. We’ll explain those in a bit.
We begin by defining our apply_migrations fixture that will handle migrating our database. We set the scope to session so that the db persists for the duration of the testing session. Though it’s not a requirement, this will speed up our tests significantly, since we don’t apply and roll back our migrations for each test. Readers who feel that this isn’t their cup of tea can simply remove that scope parameter and use a fresh db for each test.
Not much else is going on here. The fixture sets the TESTING environment variable to "1", so that we can migrate the testing database instead of our standard db. We’ll get to that part in a minute. It then grabs our alembic migration config and runs all our migrations before yielding to allow all tests to be executed. At the end we roll back our migrations and call it a day.
Our app and db fixtures are standard. We instantiate a new FastAPI app and grab a reference to the database connection in case we need it. In our client fixture, we’re coupling LifespanManager and AsyncClient to provide a clean testing client that can send requests to our running FastAPI application. This pattern is adapted directly from the example in the asgi-lifespan GitHub repo.
That’s all that’s needed for setting up a testing environment in pytest. Now we’ll need to ensure our db config is prepared to handle a testing environment.
Updating Our Database Config
We’ll also need to update some of our database configuration. Open up the db/tasks.py file.
import os
import logging

from fastapi import FastAPI
from databases import Database

from app.core.config import DATABASE_URL

logger = logging.getLogger(__name__)


async def connect_to_db(app: FastAPI) -> None:
    DB_URL = f"{DATABASE_URL}_test" if os.environ.get("TESTING") else DATABASE_URL
    database = Database(DB_URL, min_size=2, max_size=10)

    try:
        await database.connect()
        app.state._db = database
    except Exception as e:
        logger.warn("--- DB CONNECTION ERROR ---")
        logger.warn(e)
        logger.warn("--- DB CONNECTION ERROR ---")


async def close_db_connection(app: FastAPI) -> None:
    try:
        await app.state._db.disconnect()
    except Exception as e:
        logger.warn("--- DB DISCONNECT ERROR ---")
        logger.warn(e)
        logger.warn("--- DB DISCONNECT ERROR ---")
In the connect_to_db function we’re using the databases package to establish a connection to our PostgreSQL db with the database url string we configured in our core/config.py file. If the TESTING environment variable is set, we append a _test suffix to that url (turning postgres into postgres_test) and connect to the testing database instead. Otherwise we fall back to our standard database url. This lets us use the testing database for our testing session, and our regular database the rest of the time.
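To make that URL switch concrete, here’s a minimal sketch of the same logic using a made-up connection string (the real DATABASE_URL is assembled in app/core/config.py):

import os

# hypothetical value purely for illustration - the real DATABASE_URL comes from app/core/config.py
DATABASE_URL = "postgresql://postgres:postgres@db:5432/postgres"

os.environ["TESTING"] = "1"
DB_URL = f"{DATABASE_URL}_test" if os.environ.get("TESTING") else DATABASE_URL

print(DB_URL)  # postgresql://postgres:postgres@db:5432/postgres_test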
Next, we’ll open up the db/migrations/env.py file.
import pathlib
import sys
import os
import logging
from logging.config import fileConfig

import alembic
from sqlalchemy import engine_from_config, create_engine, pool
from psycopg2 import DatabaseError

# we're appending the app directory to our path here so that we can import config easily
sys.path.append(str(pathlib.Path(__file__).resolve().parents[3]))

from app.core.config import DATABASE_URL, POSTGRES_DB  # noqa

# Alembic Config object, which provides access to values within the .ini file
config = alembic.context.config

# Interpret the config file for logging
fileConfig(config.config_file_name)
logger = logging.getLogger("alembic.env")


def run_migrations_online() -> None:
    """
    Run migrations in 'online' mode
    """
    DB_URL = f"{DATABASE_URL}_test" if os.environ.get("TESTING") else str(DATABASE_URL)

    # handle testing config for migrations
    if os.environ.get("TESTING"):
        # connect to primary db
        default_engine = create_engine(str(DATABASE_URL), isolation_level="AUTOCOMMIT")
        # drop testing db if it exists and create a fresh one
        with default_engine.connect() as default_conn:
            default_conn.execute(f"DROP DATABASE IF EXISTS {POSTGRES_DB}_test")
            default_conn.execute(f"CREATE DATABASE {POSTGRES_DB}_test")

    connectable = config.attributes.get("connection", None)
    config.set_main_option("sqlalchemy.url", DB_URL)

    if connectable is None:
        connectable = engine_from_config(
            config.get_section(config.config_ini_section),
            prefix="sqlalchemy.",
            poolclass=pool.NullPool,
        )

    with connectable.connect() as connection:
        alembic.context.configure(
            connection=connection,
            target_metadata=None
        )

        with alembic.context.begin_transaction():
            alembic.context.run_migrations()


def run_migrations_offline() -> None:
    """
    Run migrations in 'offline' mode.
    """
    if os.environ.get("TESTING"):
        raise DatabaseError("Running testing migrations offline currently not permitted.")

    alembic.context.configure(url=str(DATABASE_URL))

    with alembic.context.begin_transaction():
        alembic.context.run_migrations()


if alembic.context.is_offline_mode():
    logger.info("Running migrations offline")
    run_migrations_offline()
else:
    logger.info("Running migrations online")
    run_migrations_online()
Now we’ve configured our migrations file to support creating and connecting to a testing database. When the TESTING environment variable is set, a postgres_test database will be created. Then the migrations will be run for that database before our tests are run against it.
There’s an interesting block of code here that should probably be explained.
if os.environ.get("TESTING"):
    # connect to primary db
    default_engine = create_engine(str(DATABASE_URL), isolation_level="AUTOCOMMIT")
    # drop testing db if it exists and create a fresh one
    with default_engine.connect() as default_conn:
        default_conn.execute(f"DROP DATABASE IF EXISTS {POSTGRES_DB}_test")
        default_conn.execute(f"CREATE DATABASE {POSTGRES_DB}_test")
So what’s happening here? We first connect to our default database with credentials we know are valid. We specify the "AUTOCOMMIT" option for isolation_level to avoid manual transaction management when creating databases. SQLAlchemy always tries to run queries in a transaction, and Postgres does not allow users to create databases inside a transaction. To get around that, we end each open transaction automatically after execution. That allows us to drop a database and then create a new one inside of our default connection.
As soon as we’re done there, we move on to connecting to the database we want and off we go!
Writing Tests
Add this code to the test_cleanings.py file:
import pytest
from httpx import AsyncClient
from fastapi import FastAPI
from starlette.status import HTTP_404_NOT_FOUND, HTTP_422_UNPROCESSABLE_ENTITY


class TestCleaningsRoutes:
    @pytest.mark.asyncio
    async def test_routes_exist(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.post(app.url_path_for("cleanings:create-cleaning"), json={})
        assert res.status_code != HTTP_404_NOT_FOUND

    @pytest.mark.asyncio
    async def test_invalid_input_raises_error(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.post(app.url_path_for("cleanings:create-cleaning"), json={})
        assert res.status_code != HTTP_422_UNPROCESSABLE_ENTITY
This is the cool part. We’ve written a simple class that will be used to test that the routes associated with the Cleanings resource exist, and that they behave how we expect them to. We’re using the @pytest.mark.asyncio decorator on each class method so that pytest can deal with asynchronous test functions.
Each class method has two parameters - app and client. Because we created fixtures in our conftest.py file with the same names, pytest makes these available to any test function.
Fixtures might seem like magic at first, but the pytest docs state:
“Fixtures have explicit names and are activated by declaring their use from test functions, modules, classes or whole projects. Fixtures are implemented in a modular manner, as each fixture name triggers a fixture function which can itself use other fixtures. Test functions can receive fixture objects by naming them as an input argument.”
We’ll use fixtures throughout our tests, so we’ll get a lot of practice using them.
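If fixtures still feel abstract, here’s a tiny, self-contained example (not part of our project code) showing the injection mechanism - pytest matches the parameter name to the fixture name and passes in whatever the fixture returns:

import pytest

# standalone illustration - any test that names "greeting" as a parameter
# receives this fixture's return value
@pytest.fixture
def greeting() -> str:
    return "hello"

def test_uses_fixture(greeting: str) -> None:
    assert greeting == "hello"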
Our first test uses the httpx client we created and issues a POST request to the route with the name "cleanings:create-cleaning". If you look back at our app/api/routes/cleanings.py file, you’ll see that name in the decorator of our POST route. Anyone familiar with Django will recognize this pattern, as it mirrors Django’s url reversing system. Using this pattern means we don’t have to remember exact paths and can instead rely on the name of the route to send HTTP requests.
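For readers who haven’t seen route reversing before, here’s a simplified sketch of the pattern - the real route lives in app/api/routes/cleanings.py and accepts a request body, but the naming and reversal work the same way:

from fastapi import APIRouter, FastAPI

router = APIRouter()

# a named route, following the naming convention from the previous post
@router.post("/", name="cleanings:create-cleaning")
async def create_cleaning() -> dict:
    return {}

app = FastAPI()
app.include_router(router, prefix="/cleanings")

# reverse the route by name instead of hardcoding "/cleanings/"
assert app.url_path_for("cleanings:create-cleaning") == "/cleanings/"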
We’re asserting that we don’t get a 404 response when this request is sent, and we expect this test to pass.
The second test sends the same request, but expects the response to not include a 422 status code. FastAPI will return 422 status codes whenever the POST body includes an input with an invalid shape. Remember how FastAPI validates all input models using Pydantic? The models we write will determine the shape of the data we expect to receive. Anything else throws a 422.
This test should error out, as FastAPI expects our empty dictionary to have the shape of the CleaningCreate model we defined earlier.
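To see where that 422 actually comes from, here’s a rough, standalone illustration using a simplified stand-in for the CleaningCreate model from an earlier post - Pydantic raises a ValidationError when required fields are missing, and FastAPI translates that into a 422 response:

from pydantic import BaseModel, ValidationError

# simplified stand-in for the real CleaningCreate model
class CleaningCreate(BaseModel):
    name: str
    price: float

try:
    CleaningCreate(**{})  # empty payload - required fields are missing
except ValidationError as e:
    print(e)  # FastAPI converts this kind of error into a 422 response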
We’ll run our tests by entering bash commands into our running server container.
docker ps
docker exec -it [CONTAINER_ID] bash
Then, run the pytest -v command in bash.
pytest -v
This will start executing your tests inside the container. The first should pass and the second should fail. When it fails, you’ll see an output that looks like this:
======================================================================== test session starts =========================================================================
platform linux -- Python 3.8.1, pytest-5.4.2, py-1.8.1, pluggy-0.13.1 -- /usr/local/bin/python
cachedir: .pytest_cache
rootdir: /backend
plugins: forked-1.1.3, asyncio-0.12.0, xdist-1.32.0
collected 2 items
tests/test_cleanings.py::TestCleaningsRoutes::test_routes_exist PASSED [ 50%]
tests/test_cleanings.py::TestCleaningsRoutes::test_invalid_input_raises_validation_errors FAILED [100%]
============================================================================== FAILURES ==============================================================================
___________________________________________________ TestCleaningsRoutes.test_invalid_input_raises_validation_errors ___________________________________________________
self = <tests.test_cleanings.TestCleaningsRoutes object at 0x7f2d8ebc7250>, app = <fastapi.applications.FastAPI object at 0x7f2d8eb50b80>
client = <httpx._client.AsyncClient object at 0x7f2d8e61d880>
@pytest.mark.asyncio
async def test_invalid_input_raises_validation_errors(self, app: FastAPI, client: AsyncClient) -> None:
# Pass an invalid payload
res = await client.post(app.url_path_for("cleanings:create-cleaning"), json={})
> assert res.status_code != HTTP_422_UNPROCESSABLE_ENTITY
E assert 422 != 422
E + where 422 = <Response [422 Unprocessable Entity]>.status_code
tests/test_cleanings.py:23: AssertionError
What a nice error! It tells us exactly where the error is and why it failed. The assertion expected the response NOT to have a status code of 422, but it did. Fix the test by setting != to == and you should have the following.
import pytest
from httpx import AsyncClient
from fastapi import FastAPI
from starlette.status import HTTP_404_NOT_FOUND, HTTP_422_UNPROCESSABLE_ENTITY


class TestCleaningsRoutes:
    @pytest.mark.asyncio
    async def test_routes_exist(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.post(app.url_path_for("cleanings:create-cleaning"), json={})
        assert res.status_code != HTTP_404_NOT_FOUND

    @pytest.mark.asyncio
    async def test_invalid_input_raises_error(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.post(app.url_path_for("cleanings:create-cleaning"), json={})
        assert res.status_code == HTTP_422_UNPROCESSABLE_ENTITY
Run the tests again and watch them pass.
Now that we have a testing framework wired up, it’s off to the TDD races. Let’s write a bit more code for our existing POST route, and then we’ll develop tests for a GET endpoint that grabs a cleaning by its id.
Validating Our POST Route
Since we already have a working POST route, our tests won’t follow the traditional TDD structure. We’ll instead write tests to validate behavior of code that we’ve already implemented. The problem with doing it this way is that there’s no guarantee that our tests aren’t just passing arbitrarily - but for now it’ll be alright.
First, start by importing the CleaningCreate model and adding another starlette status code. We’ll also go ahead and define a new fixture for a new cleaning, and use it in the new testing class we’ll create. In addition, we’ll remove the @pytest.mark.asyncio decorators from each test and add pytestmark = pytest.mark.asyncio to have pytest decorate each function for us.
Let’s add the highlighted lines to our test_cleanings.py file.
import pytest
from httpx import AsyncClient
from fastapi import FastAPI
from starlette.status import HTTP_201_CREATED, HTTP_404_NOT_FOUND, HTTP_422_UNPROCESSABLE_ENTITY

from app.models.cleaning import CleaningCreate

# decorate all tests with @pytest.mark.asyncio
pytestmark = pytest.mark.asyncio


@pytest.fixture
def new_cleaning():
    return CleaningCreate(
        name="test cleaning",
        description="test description",
        price=0.00,
        cleaning_type="spot_clean",
    )


class TestCleaningsRoutes:
    async def test_routes_exist(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.post(app.url_path_for("cleanings:create-cleaning"), json={})
        assert res.status_code != HTTP_404_NOT_FOUND

    async def test_invalid_input_raises_error(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.post(app.url_path_for("cleanings:create-cleaning"), json={})
        assert res.status_code == HTTP_422_UNPROCESSABLE_ENTITY


class TestCreateCleaning:
    async def test_valid_input_creates_cleaning(
        self, app: FastAPI, client: AsyncClient, new_cleaning: CleaningCreate
    ) -> None:
        res = await client.post(
            app.url_path_for("cleanings:create-cleaning"), json={"new_cleaning": new_cleaning.dict()}
        )
        assert res.status_code == HTTP_201_CREATED

        created_cleaning = CleaningCreate(**res.json())
        assert created_cleaning == new_cleaning
Here we’re sending a POST request to our application and ensuring that the response coming from our database has the same shape and data as our input.
What’s cool about this test is that it’s actually executing queries against a real postgres database! No mocking needed.
If we run pytest -v again, we should see three passing tests. How do we know that our code is making this test pass? Well, we don’t. That’s why we should always write our tests first, watch them fail, and then write just enough code to make the tests pass. That’s TDD in a nutshell.
Before we do that, however, we should add a more robust test to check for invalid inputs.
Add one more test to our TestCreateCleaning class.
# ...other code

class TestCreateCleaning:
    async def test_valid_input_creates_cleaning(
        self, app: FastAPI, client: AsyncClient, new_cleaning: CleaningCreate
    ) -> None:
        res = await client.post(
            app.url_path_for("cleanings:create-cleaning"), json={"new_cleaning": new_cleaning.dict()}
        )
        assert res.status_code == HTTP_201_CREATED

        created_cleaning = CleaningCreate(**res.json())
        assert created_cleaning == new_cleaning

    @pytest.mark.parametrize(
        "invalid_payload, status_code",
        (
            (None, 422),
            ({}, 422),
            ({"name": "test_name"}, 422),
            ({"price": 10.00}, 422),
            ({"name": "test_name", "description": "test"}, 422),
        ),
    )
    async def test_invalid_input_raises_error(
        self, app: FastAPI, client: AsyncClient, invalid_payload: dict, status_code: int
    ) -> None:
        res = await client.post(
            app.url_path_for("cleanings:create-cleaning"), json={"new_cleaning": invalid_payload}
        )
        assert res.status_code == status_code
This test takes advantage of another very useful pytest feature. As stated in the docs, the built-in pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Essentially, we’re able to pass in as many different versions of invalid_payload and status_code as we want, and pytest will generate a test for each set.
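If parametrize is new to you, here’s a toy example (unrelated to our project) showing the mechanics - pytest collects one test per (value, expected) pair, so this single function produces two test cases:

import pytest

@pytest.mark.parametrize(
    "value, expected",
    (
        (2, 4),
        (3, 9),
    ),
)
def test_square(value: int, expected: int) -> None:
    # each parameter pair becomes its own test case
    assert value ** 2 == expected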
Running our tests again, we should now see 8 items passing. Not bad! Let’s move on to a new route.
Creating Endpoints the TDD Way
So we’ve finally gotten to the TDD part of this post. We have enough tools at our disposal that most of this should look familiar. Remember, we’re going to follow a 3 step process:
- Write a test and make it fail
- Write just enough code to make it pass
- Rinse, repeat, and refactor until satisfied
Let’s put together a GET route the TDD way. We’re currently able to insert cleanings into the database, so we’ll probably want to be able to fetch a single cleaning by its id. Confident readers should feel free to try this part out on their own before moving on to the next part. All others should carry on.
We’ll need to bring in a couple of imports, and then add an additional test class to our test_cleanings.py file.
# ...other code

from starlette.status import (
    HTTP_200_OK, HTTP_201_CREATED, HTTP_404_NOT_FOUND, HTTP_422_UNPROCESSABLE_ENTITY
)

from app.models.cleaning import CleaningCreate, CleaningInDB

# ...other code

class TestGetCleaning:
    async def test_get_cleaning_by_id(self, app: FastAPI, client: AsyncClient) -> None:
        res = await client.get(app.url_path_for("cleanings:get-cleaning-by-id", id=1))
        assert res.status_code == HTTP_200_OK

        cleaning = CleaningInDB(**res.json())
        assert cleaning.id == 1
Be careful to note that the id keyword argument is passed to the app.url_path_for function and not to the client.get function.
If we run our tests, we’ll see 8 tests passing and 1 test failing with a starlette.routing.NoMatchFound error. There’s a good reason for that too - we haven’t created the route yet. Let’s open up our routes/cleanings.py file and do that now.
from fastapi import APIRouter, Body, Depends, HTTPException
from starlette.status import HTTP_201_CREATED, HTTP_404_NOT_FOUND

# ...other code

@router.get("/{id}/", response_model=CleaningPublic, name="cleanings:get-cleaning-by-id")
async def get_cleaning_by_id(
    id: int, cleanings_repo: CleaningsRepository = Depends(get_repository(CleaningsRepository))
) -> CleaningPublic:
    cleaning = await cleanings_repo.get_cleaning_by_id(id=id)

    if not cleaning:
        raise HTTPException(status_code=HTTP_404_NOT_FOUND, detail="No cleaning found with that id.")

    return cleaning
Here we’re creating a GET route and adding the id as a path parameter to the route. FastAPI will make that value available to our route function as the variable id.
We start the route by calling the get_cleaning_by_id function and passing in the id of the cleaning we’re hoping to fetch. If we don’t get a cleaning returned by that function, we raise a 404 exception.
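As a side note, the id: int annotation is doing some work here too. Here’s a small standalone sketch (not project code) showing how FastAPI treats a typed path parameter - it parses the value into an int and returns a 422 if it can’t:

from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

# the int annotation means FastAPI validates and coerces the path parameter
@app.get("/items/{id}/")
async def get_item(id: int) -> dict:
    return {"id": id}

client = TestClient(app)
assert client.get("/items/5/").json() == {"id": 5}
assert client.get("/items/abc/").status_code == 422  # "abc" is not an int

This is also why, later on, we’ll expect a 422 rather than a 404 when we pass an id of None to our route.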
Running our tests one more time, we expect to see a new error message, and we do: AttributeError: 'CleaningsRepository' object has no attribute 'get_cleaning_by_id'. We haven’t written it yet, so that also makes sense. Head into db/repositories/cleanings.py and update it like so:
from app.db.repositories.base import BaseRepository
from app.models.cleaning import CleaningCreate, CleaningUpdate, CleaningInDB

CREATE_CLEANING_QUERY = """
    INSERT INTO cleanings (name, description, price, cleaning_type)
    VALUES (:name, :description, :price, :cleaning_type)
    RETURNING id, name, description, price, cleaning_type;
"""

GET_CLEANING_BY_ID_QUERY = """
    SELECT id, name, description, price, cleaning_type
    FROM cleanings
    WHERE id = :id;
"""


class CleaningsRepository(BaseRepository):
    """
    All database actions associated with the Cleaning resource
    """

    async def create_cleaning(self, *, new_cleaning: CleaningCreate) -> CleaningInDB:
        query_values = new_cleaning.dict()
        cleaning = await self.db.fetch_one(query=CREATE_CLEANING_QUERY, values=query_values)

        return CleaningInDB(**cleaning)

    async def get_cleaning_by_id(self, *, id: int) -> CleaningInDB:
        cleaning = await self.db.fetch_one(query=GET_CLEANING_BY_ID_QUERY, values={"id": id})

        if not cleaning:
            return None

        return CleaningInDB(**cleaning)
Now we’ve updated our CleaningsRepository with a function that executes the GET_CLEANING_BY_ID_QUERY, searching our cleanings table for an entry with a given id. If we find one, we return it. Otherwise we return None and our route will throw a 404 error.
Run those tests one more time and everything should pass. Why? Because our database persists for the duration of the testing session, the cleaning we created in a previous test is still available here with an id of 1. That can be confusing, and it’s usually not a good idea to hardcode ids like we did in our test_get_cleaning_by_id function.
A more reliable approach would be to create a test_cleaning fixture and use that throughout our tests. Head into the conftest.py file and create that fixture like so:
# ...other code

from app.models.cleaning import CleaningCreate, CleaningInDB
from app.db.repositories.cleanings import CleaningsRepository

# ...other code

@pytest.fixture
async def test_cleaning(db: Database) -> CleaningInDB:
    cleaning_repo = CleaningsRepository(db)

    new_cleaning = CleaningCreate(
        name="fake cleaning name",
        description="fake cleaning description",
        price=9.99,
        cleaning_type="spot_clean",
    )

    return await cleaning_repo.create_cleaning(new_cleaning=new_cleaning)
With that fixture in place, let’s refactor our test.
# ...other code

from app.models.cleaning import CleaningCreate, CleaningInDB

# ...other code

class TestGetCleaning:
    async def test_get_cleaning_by_id(self, app: FastAPI, client: AsyncClient, test_cleaning: CleaningInDB) -> None:
        res = await client.get(app.url_path_for("cleanings:get-cleaning-by-id", id=test_cleaning.id))
        assert res.status_code == HTTP_200_OK

        cleaning = CleaningInDB(**res.json())
        assert cleaning == test_cleaning
Run those tests again and everything should pass, meaning our fixture is working nicely and we don’t need to hardcode in any values. We’ll need to refactor it later on, but for now it should do. Let’s also add another test for invalid id entries.
# ...other code

class TestGetCleaning:
    async def test_get_cleaning_by_id(self, app: FastAPI, client: AsyncClient, test_cleaning: CleaningInDB) -> None:
        res = await client.get(app.url_path_for("cleanings:get-cleaning-by-id", id=test_cleaning.id))
        assert res.status_code == HTTP_200_OK

        cleaning = CleaningInDB(**res.json())
        assert cleaning == test_cleaning

    @pytest.mark.parametrize(
        "id, status_code",
        (
            (500, 404),
            (-1, 404),
            (None, 422),
        ),
    )
    async def test_wrong_id_returns_error(
        self, app: FastAPI, client: AsyncClient, id: int, status_code: int
    ) -> None:
        res = await client.get(app.url_path_for("cleanings:get-cleaning-by-id", id=id))
        assert res.status_code == status_code
We’re parametrizing the test and passing in multiple invalid ids. Each one is coupled with the status code we expect to see when we send that id to our GET route. Run those tests, make sure everything passes, and then call it a day.
Wrapping Up and Resources
This post ran a little long, and there are definitely improvements that could be made to our work. For now though, this’ll do. With our testing framework in place, we can now reflect on what we’ve accomplished:
- We’ve configured pytest in our conftest.py file
- Pytest now ensures that a fresh postgres db is spun up for each testing session and migrations are applied correctly
- Test queries are executed against the db and persist for the duration of the testing session
- We’ve implemented a GET request that returns a cleaning based on its id and crafted it using a test-driven development methodology
At this point, we’re finished with most of the setup and can focus on actually developing features for our application from here on out. Readers who want to learn more can refer to this set of resources that were used to design this architecture:
- Github repo for the FastAPI implementation of gothinkster - where most of the testing architecture originates from
- FastAPI Template Generation repo
- FastAPI Users repo
- TestDriven.io Developing and Testing an Asynchronous API with FastAPI and Pytest article
- pytest docs
- pytest-asyncio repo
- httpx docs
- asgi-lifespan repo
- Stackoverflow answer on creating new postgres databases with sqlalchemy
Github Repo
All code up to this point can be found here:
Special thanks to Cameron Tully-Smith, Artur Kosorz, Charlotte Findlay, and S0h3k for correcting errors in the original code.