Decoupling Logs from Application: How We Built a Cloud-Native Logger with FastAPI, Celery, and AWS CloudWatch
Background
One of our enterprise projects was originally built with Flask. The application was monolithic and not designed for multi-tenancy.
As the application scaled, so did the need for a centralized, reliable logging mechanism — one that could:
- Handle logs across tenants and services
- Push logs to AWS CloudWatch in real time
- Avoid bloating the core application with logging logic
- Remain scalable and decoupled from the main codebase
To solve this, we designed and implemented a standalone logger service using FastAPI, Celery, and AWS CloudWatch.
Our Logging Architecture
We split the responsibilities into two clear components:
- FastAPI-based Logger Service
  - Exposes a /logs endpoint
  - Handles log formatting, validation, and CloudWatch integration
  - Uses background tasks to avoid blocking requests
- Flask-based ASM Application
  - Contains a lightweight log client
  - Offloads logs to the FastAPI service asynchronously via Celery
Flow Overview:
Flask App --> Celery Task --> FastAPI Logger Service --> CloudWatch
FastAPI Logger Service – Internals
This service is lightweight and independent. It exposes one endpoint:
POST /logs
Accepts:
- log_group
- log_stream
- log_message
- level (one of info, error, debug)
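In FastAPI these fields would typically be validated with a Pydantic model. A minimal sketch, assuming the field names above (the `LogPayload` name matches the handler shown later; the default level is our assumption):

```python
from typing import Literal

from pydantic import BaseModel


class LogPayload(BaseModel):
    log_group: str
    log_stream: str
    log_message: str
    # Restrict `level` to the three values the endpoint accepts
    level: Literal["info", "error", "debug"] = "info"
```

FastAPI validates the request body against this model automatically and returns a 422 for anything malformed, so the background task only ever sees well-formed payloads.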
Once a request is received, the service:
- Checks whether the log group and stream already exist
- Creates them if they don't
- Pushes the message to CloudWatch
All of this happens inside a background task, so the response is returned instantly.
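The steps above might be sketched with boto3 as follows. This is illustrative rather than our exact implementation: the injectable `client` parameter and the duck-typed error check are additions to keep the sketch testable offline.

```python
import time


def write_log_to_cloudwatch(payload, client=None):
    """Ensure the log group/stream exist, then push one event to CloudWatch.

    `payload` carries log_group, log_stream, log_message, and level;
    `client` is injectable for tests and defaults to a real boto3 client.
    """
    if client is None:
        import boto3  # deferred so the function imports without AWS deps
        client = boto3.client("logs")

    for create, kwargs in (
        (client.create_log_group, {"logGroupName": payload.log_group}),
        (client.create_log_stream, {"logGroupName": payload.log_group,
                                    "logStreamName": payload.log_stream}),
    ):
        try:
            create(**kwargs)
        except Exception as err:
            # boto3 raises ClientError when the group/stream already exists;
            # that case is fine, anything else should propagate.
            code = getattr(err, "response", {}).get("Error", {}).get("Code", "")
            if code != "ResourceAlreadyExistsException":
                raise

    client.put_log_events(
        logGroupName=payload.log_group,
        logStreamName=payload.log_stream,
        logEvents=[{
            "timestamp": int(time.time() * 1000),  # CloudWatch expects ms since epoch
            "message": f"[{payload.level.upper()}] {payload.log_message}",
        }],
    )
```

Treating "already exists" as success makes the function idempotent, so concurrent first writes to a new tenant's stream don't race each other into failures.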
Example: FastAPI Background Task
```python
@app.post("/logs")
async def push_log(payload: LogPayload, background_tasks: BackgroundTasks):
    background_tasks.add_task(write_log_to_cloudwatch, payload)
    return {"status": "queued"}
```
This helps offload the work while keeping the API non-blocking.
Flask Integration with Celery
Inside our Flask app, we created a Log Client class that calls Celery workers. The workers then send the logs to the FastAPI service.
Example Celery Task:
```python
import requests

@celery_app.task
def send_log_to_logger(payload):
    requests.post("http://logger-service/logs", json=payload)
```
This decouples log handling from the main request flow and ensures:
- Logs are retried if the logger service is down
- Main requests stay lightweight
- Logging can scale independently
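The Flask-side log client mentioned earlier is essentially a thin wrapper that builds the payload and hands it to the Celery task. A sketch, with illustrative method and module names (the task is injectable here purely to keep the example testable):

```python
class LogClient:
    """Builds log payloads in Flask views and offloads them via Celery."""

    def __init__(self, log_group, log_stream, task=None):
        # `task` defaults to the Celery task; injectable for offline tests
        if task is None:
            from tasks import send_log_to_logger  # hypothetical module path
            task = send_log_to_logger
        self._task = task
        self.log_group = log_group
        self.log_stream = log_stream

    def _payload(self, level, message):
        return {
            "log_group": self.log_group,
            "log_stream": self.log_stream,
            "log_message": message,
            "level": level,
        }

    def info(self, message):
        self._task.delay(self._payload("info", message))

    def error(self, message):
        self._task.delay(self._payload("error", message))
```

Because `.delay()` only enqueues a message on the broker, a Flask view returns immediately regardless of whether the logger service, or even the Celery worker, is up at that moment.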
Why This Design Works Well
| Benefit | Description |
|---|---|
| Decoupled | Logging logic doesn’t interfere with core app |
| Scalable | Logger can be deployed independently and scaled |
| Non-blocking | FastAPI uses background tasks for async log writing |
| Multi-tenant-ready | Log groups/streams can be dynamically created |
| Retry-safe | Celery ensures logs aren’t lost during failure |
Takeaways
If you’re working with monoliths or microservices and need a clean, scalable logging solution:
- FastAPI is great for building stateless microservices
- Celery is ideal for async processing & retry logic
- Decoupling logging from your main app gives you control and performance
This architecture helped us centralize logs, reduce main app complexity, and scale faster without bloating our core codebase.
Thanks for reading!
Let me know if you’ve implemented something similar or want help setting this up in your own stack.