Setting Up Structured Logging in FastAPI with structlog
8 mins read
#python
#logging
#API
Introduction
A few days ago, I was wandering through the open-source universe, exploring different projects, when I stumbled upon one that caught my attention. As I was going through its code, something stood out: the way it handled logging. The setup was clean, structured, and easy to read. That’s when I noticed it was using structlog.
Curious, I decided to dive deeper. What started as a casual look into logging quickly turned into a two-day deep dive. I found myself nerding out over logging best practices, structured logs, and how modern applications handle observability. That’s what led me to write this blog post, to share what I’ve learned about structured logging and how you can implement it in your FastAPI applications using structlog.
In this blog post, we’ll focus on structured logging: what it is, why it’s better than traditional logging, and how to set it up in FastAPI using structlog. We’ll go step by step, from installation to advanced configuration, so that by the end of this post you’ll have a solid structured logging setup for your FastAPI application.
Understanding Structured Logging
Traditional logging often produces unstructured text, making it harder to parse and analyze logs at scale. Structured logging, on the other hand, formats logs as key-value pairs or JSON objects, making them machine-readable and easier to filter, search, and analyze.
Instead of logging plain strings like this:
[INFO] User logged in: JohnDoe
Structured logging would store it in a structured format:
{ "level": "info", "event": "User logged in", "username": "JohnDoe" }
This approach makes logs more useful, especially when working with log aggregators, monitoring tools, or cloud services.
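Because every entry is valid JSON, filtering becomes trivial even without dedicated tooling. Here’s a minimal sketch (the file name and field names are assumptions for illustration):

import json

# Pull one user's events out of a file of JSON logs, one object per line.
with open("app.log") as f:
    for line in f:
        entry = json.loads(line)
        if entry.get("username") == "JohnDoe":
            print(entry["level"], entry["event"])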
Introducing structlog
The standard logging module in Python is powerful, but adding context to log messages can be tedious and inconsistent. structlog simplifies this by providing structured, contextual logging out of the box.
Consider these log lines produced with Python’s standard logging module:
DEBUG:__main__:User example triggered deletion of hero 18bcb9d1-9ecd-4297-a7cc-591d0ededf7a
INFO:__main__:Hero 18bcb9d1-9ecd-4297-a7cc-591d0ededf7a deleted successfully
With structlog, we can achieve much more informative and structured logs:
{"timestamp": "2024-05-03T18:58:21.443989Z", "level": "debug", "event": "Triggering deletion of hero", "request_id": "97133231229d4af2a0588e468049261c", "hero_id": "18bcb9d1-9ecd-4297-a7cc-591d0ededf7a", "user_id": "example"}{"timestamp": "2024-05-03T18:58:21.509100Z", "level": "info", "event": "Hero deleted successfully", "request_id": "97133231229d4af2a0588e468049261c", "hero_id": "18bcb9d1-9ecd-4297-a7cc-591d0ededf7a", "user_id": "example"}
This structured approach makes it easy to track requests, correlate logs across services, and ingest logs into observability platforms.
One of the key features of structlog is bound loggers. Instead of passing extra context with every log statement, you can bind contextual data once and reuse it:
import structlog
logger = structlog.get_logger().bind(service="user-service", request_id="abc123")
logger.info("User logged in", username="JohnDoe")
Which outputs:
{ "service": "user-service", "request_id": "abc123", "level": "info", "event": "User logged in", "username": "JohnDoe"}
This eliminates repetitive code and ensures every log contains the necessary context.
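One detail worth remembering: bind() returns a new logger rather than mutating the one you call it on, so always keep the return value. A quick sketch:

import structlog

base = structlog.get_logger()

# bind() returns a NEW bound logger; `base` itself is unchanged.
request_logger = base.bind(request_id="abc123")

request_logger.info("processing request")  # includes request_id
base.info("background task")               # does not include request_id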
Integrating Structlog with FastAPI
Now that we’ve covered the basics of structured logging and introduced structlog, it’s time to dive into setting it up in a FastAPI application.
Install structlog
python -m pip install structlog
Settings
To make your logging configuration flexible, extend your settings with logging parameters using pydantic-settings.
from enum import StrEnum

from pydantic_settings import BaseSettings, SettingsConfigDict


class Environment(StrEnum):
    development = "development"
    production = "production"
    # testing = "testing" - I don't write tests, that's why this is commented out...


class Settings(BaseSettings):
    ENV: Environment = Environment.development
    DEBUG: bool = False
    LOG_LEVEL: str = "DEBUG"

    # Helpers referenced by the logging setup below, which calls
    # settings.is_production() and settings.is_development().
    def is_development(self) -> bool:
        return self.ENV == Environment.development

    def is_production(self) -> bool:
        return self.ENV == Environment.production


settings = Settings()
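Because pydantic-settings reads values from environment variables (or a .env file if you configure one), a deployment can switch behavior without code changes. A minimal sketch, assuming the src.config module shown above:

import os

# Normally these come from the environment or a .env file, not from code.
os.environ["ENV"] = "production"
os.environ["LOG_LEVEL"] = "INFO"

from src.config import Settings

settings = Settings()  # picks up ENV and LOG_LEVEL from the environment
print(settings.ENV, settings.LOG_LEVEL)  # production INFO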
Configuring the logger
The core of our logging setup revolves around Structlog, which uses processors to format, filter, and enrich log entries. Processors are functions that take a log event, modify it, and pass it along to the next processor in the chain. This modular approach allows us to customize the logging pipeline to suit our needs.
Here’s the complete logging.py file for reference:
import logging.config
import uuid
from typing import Any, Generic, TypeVar

import structlog
from structlog.typing import EventDict

from src.config import settings

RendererType = TypeVar("RendererType")

Logger = structlog.stdlib.BoundLogger
def get_level() -> str:
    return settings.LOG_LEVEL
def drop_color_message_key(_, __, event_dict: EventDict) -> EventDict:
    """
    Uvicorn logs the message a second time in the extra `color_message`,
    but we don't need it. This processor drops the key from the event
    dict if it exists.
    """
    event_dict.pop("color_message", None)
    return event_dict
class Logging(Generic[RendererType]):
    """Customized implementation inspired by the following documentation:

    https://www.structlog.org/en/stable/standard-library.html#rendering-using-structlog-based-formatters-within-logging
    """

    timestamper = structlog.processors.TimeStamper(fmt="iso")
    shared_processors = [
        structlog.contextvars.merge_contextvars,
        structlog.stdlib.add_log_level,
        structlog.stdlib.add_logger_name,
        structlog.stdlib.PositionalArgumentsFormatter(),
        drop_color_message_key,
        timestamper,
        structlog.processors.UnicodeDecoder(),
        structlog.processors.StackInfoRenderer(),
    ]
    @classmethod
    def get_processors(cls) -> list[Any]:
        # Work on a copy: appending to the class-level list would add
        # format_exc_info again on every call.
        processors = [*cls.shared_processors]
        if settings.is_production():
            processors.append(structlog.processors.format_exc_info)

        return processors + [
            structlog.stdlib.ProcessorFormatter.wrap_for_formatter
        ]
    @classmethod
    def get_renderer(cls) -> RendererType:
        raise NotImplementedError()
    @classmethod
    def configure_stdlib(cls) -> None:
        level = get_level()

        # Same copying trick as in get_processors: keep the class-level
        # list untouched so repeated configuration stays idempotent.
        shared_processors = [*cls.shared_processors]
        if settings.is_production():
            shared_processors.append(structlog.processors.format_exc_info)

        logging.config.dictConfig(
            {
                "version": 1,
                "disable_existing_loggers": True,
                "formatters": {
                    "myLogger": {
                        "()": structlog.stdlib.ProcessorFormatter,
                        "processors": [
                            structlog.stdlib.ProcessorFormatter.remove_processors_meta,
                            cls.get_renderer(),
                        ],
                        "foreign_pre_chain": shared_processors,
                    },
                },
                "handlers": {
                    "default": {
                        "level": level,
                        "class": "logging.StreamHandler",
                        "formatter": "myLogger",
                    },
                },
                "loggers": {
                    "": {
                        "handlers": ["default"],
                        "level": level,
                        "propagate": False,
                    },
                    # Propagate third-party loggers to the root one
                    **{
                        logger: {
                            "handlers": [],
                            "propagate": True,
                        }
                        for logger in ["uvicorn", "sqlalchemy", "arq"]
                    },
                },
            }
        )
    @classmethod
    def configure_structlog(cls) -> None:
        structlog.configure_once(
            processors=cls.get_processors(),
            logger_factory=structlog.stdlib.LoggerFactory(),
            wrapper_class=structlog.stdlib.BoundLogger,
            cache_logger_on_first_use=True,
        )
    @classmethod
    def configure(cls) -> None:
        cls.configure_stdlib()
        cls.configure_structlog()
class Development(Logging[structlog.dev.ConsoleRenderer]):
    @classmethod
    def get_renderer(cls) -> structlog.dev.ConsoleRenderer:
        return structlog.dev.ConsoleRenderer(colors=True)
class Production(Logging[structlog.processors.JSONRenderer]):
    @classmethod
    def get_renderer(cls) -> structlog.processors.JSONRenderer:
        return structlog.processors.JSONRenderer()
def configure() -> None:
    if settings.is_development():
        Development.configure()
    else:
        Production.configure()
def generate_correlation_id() -> str:
    return str(uuid.uuid4())
In our logging.py file, we define a set of shared processors that are used across both development and production environments. In production, we also add the format_exc_info processor to include exception tracebacks in the logs.
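To get a feel for what format_exc_info does, here’s a small sketch: logging with exc_info=True makes the processor serialize the active traceback into an exception key on the event dict, instead of dumping it as a separate block of text.

import structlog

logger = structlog.get_logger()

try:
    1 / 0
except ZeroDivisionError:
    # With format_exc_info in the processor chain, the traceback ends up
    # as an "exception" key in the rendered event.
    logger.error("Hero deletion failed", exc_info=True)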
Integrating with Python’s Logging Module
To ensure compatibility with third-party libraries that use Python’s standard logging module, we configure a custom formatter (ProcessorFormatter) that converts traditional logs into Structlog’s structured format. This is done with logging.config.dictConfig, which allows us to define:
- A custom formatter (myLogger) that uses Structlog’s ProcessorFormatter to process logs.
- A StreamHandler to output logs to the console.
- Loggers for third-party libraries like Uvicorn, SQLAlchemy, and Arq, which are configured to propagate their logs to the root logger.
This setup ensures that all logs, whether they originate from our application or third-party libraries, are formatted consistently and enriched with structured information.
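A quick sanity check, assuming the configure() function from logging.py above has been called: logs from a stdlib logger and from a structlog logger should come out through the same formatter.

import logging

import structlog

# A stdlib logger, like the ones uvicorn or sqlalchemy use internally...
logging.getLogger("uvicorn").info("plain stdlib log line")

# ...and a structlog logger are both rendered by the same ProcessorFormatter.
structlog.get_logger().info("structlog log line")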
Development vs. Production Logging
We’ve implemented two logging configurations: one for development and one for production.
- Development: Uses ConsoleRenderer to output colorful, human-readable logs to the console. This is ideal for debugging and local development.
- Production: Uses JSONRenderer to output logs in JSON format. This makes it easier to parse and analyze logs using tools like the ELK Stack, Datadog, or Splunk.
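To build intuition for the difference, you can call the two renderers directly on the same event dict. A standalone sketch (values made up):

import structlog

event = {
    "event": "Hero deleted successfully",
    "level": "info",
    "hero_id": "18bcb9d1-9ecd-4297-a7cc-591d0ededf7a",
}

# ConsoleRenderer: aligned, human-readable output for local terminals.
console = structlog.dev.ConsoleRenderer(colors=False)
print(console(None, "info", dict(event)))

# JSONRenderer: one JSON object per line, ready for log aggregators.
json_renderer = structlog.processors.JSONRenderer()
print(json_renderer(None, "info", dict(event)))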
Correlation IDs for Traceability
To improve traceability across distributed systems, we’ve added a utility function (generate_correlation_id) to generate unique correlation IDs. These IDs can be added to the logging context using Structlog’s contextvars integration, ensuring that all logs related to a single request share the same correlation ID.
Adding Correlation ID Middleware
A correlation ID is a unique identifier that is passed along with a request and included in all logs related to that request. This allows us to trace the flow of a request through the system, even when it spans multiple services.
In our FastAPI application, we achieve this by creating a custom middleware called LogCorrelationIdMiddleware. This middleware generates a unique correlation ID for each incoming HTTP request and binds it to the logging context using Structlog’s contextvars integration. Here’s how it works:
import structlog
from starlette.types import ASGIApp, Receive, Scope, Send
from src.logging import generate_correlation_id
class LogCorrelationIdMiddleware:
    def __init__(self, app: ASGIApp) -> None:
        self.app = app

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
        # Only HTTP requests get a correlation ID; pass everything else through.
        if scope["type"] != "http":
            return await self.app(scope, receive, send)

        # Every log emitted while handling this request will carry these keys.
        structlog.contextvars.bind_contextvars(
            correlation_id=generate_correlation_id(),
            method=scope["method"],
            path=scope["path"],
        )

        await self.app(scope, receive, send)

        structlog.contextvars.unbind_contextvars("correlation_id", "method", "path")
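One caveat with the manual bind/unbind pair: if the wrapped app raises, unbind_contextvars never runs. structlog also ships a bound_contextvars context manager that restores the previous context automatically; here’s a sketch of the same middleware body using it:

# Equivalent body using structlog's context manager, which unbinds
# the keys even if self.app raises.
with structlog.contextvars.bound_contextvars(
    correlation_id=generate_correlation_id(),
    method=scope["method"],
    path=scope["path"],
):
    await self.app(scope, receive, send)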
Bring it all together
To use the middleware in your FastAPI application, simply add it to your app:
from fastapi import FastAPI

from src.middlewares import LogCorrelationIdMiddleware
from src.logging import configure as configure_logging
configure_logging()
app = FastAPI()
app.add_middleware(LogCorrelationIdMiddleware)
Using the logger
Now that we have everything configured, we can finally go ahead and use our new logger. Here is a simple example that shows the general usage pattern:
import uuid
from fastapi import APIRouter, HTTPException
from fastapi_async_sqlalchemy import db
import structlog
from src.logging import Logger
from src.models import models, schemas
from src.crud import crud_hero
logger: Logger = structlog.get_logger()
router = APIRouter(prefix="/hero", tags=["heroes"])
# Demonstrate logger usage
@router.get("/{hero_id}/ability", response_model=schemas.HeroSchema)
async def get_hero_ability(hero_id: uuid.UUID):
    # bind() returns a new logger, so keep the result instead of discarding it
    log = logger.bind(requested_hero_id=hero_id)
    hero = await crud_hero.get(db.session, hero_id)

    if not hero:
        log.warning("Hero not found")
        raise HTTPException(status_code=404, detail="Hero not found")

    await db.session.refresh(hero, attribute_names=["ability"])

    log.info("Successfully fetched hero ability")
    return hero.ability
Conclusion
Setting up structured logging with Structlog and FastAPI is a game-changer for debugging, monitoring, and maintaining your application. While there’s more to explore, like exception handling, integrating observability tools such as Logfire, and other advanced techniques, this guide serves as a solid introduction to logging. With the basics in place, you’re well-equipped to build on this foundation and tailor your logging setup to your needs.
Thanks for reading this far! Autobots, Roll Out.