
Run Fastapi-observability using Gunicorn and multiple uvicorn-workers #16

denis-k2 opened this issue Apr 1, 2024 · 4 comments

denis-k2 commented Apr 1, 2024

Is it possible to run fastapi-observability using Gunicorn and multiple uvicorn workers?
Could you share some example code?
I tried to follow this manual (+ this), but it doesn't work for me.


daler-api commented Apr 2, 2024

Hi. Yes, it's possible.
Inside docker-compose.yml, in the sections that start with app, uncomment the build line,
and inside your application, replace the if __name__ == '__main__' section with:

from typing import Any, Callable, Dict, Optional

from gunicorn.app.base import BaseApplication


class StandaloneApplication(BaseApplication):
    def __init__(self, application: Callable, options: Optional[Dict[str, Any]] = None):
        self.options = options or {}
        self.application = application
        super().__init__()

    def load_config(self):
        config = {
            key: value
            for key, value in self.options.items()
            if key in self.cfg.settings and value is not None
        }
        for key, value in config.items():
            self.cfg.set(key.lower(), value)

    def load(self):
        return self.application


if __name__ == "__main__":
    options = {
        "bind": "%s:%s" % ("0.0.0.0", "8000"),
        "workers": number_of_workers(),
        "worker_class": "uvicorn.workers.UvicornWorker",
    }
    StandaloneApplication(app, options).run()
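For reference, the same setup can be started without the wrapper class by invoking Gunicorn directly; the module name main is an assumption, so adjust it to wherever your FastAPI instance lives:

```shell
# Hypothetical CLI equivalent of the StandaloneApplication options above;
# "main:app" assumes the FastAPI instance is defined in main.py.
gunicorn main:app \
  --workers 4 \
  --worker-class uvicorn.workers.UvicornWorker \
  --bind 0.0.0.0:8000
```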

@daler-api

The full version will look like this:

import logging
import multiprocessing
import os
import random
import time
from typing import Optional, Callable, Dict, Any

import httpx
import uvicorn
from fastapi import FastAPI, Response
from gunicorn.app.base import BaseApplication
from opentelemetry.propagate import inject
from utils import PrometheusMiddleware, metrics, setting_otlp

APP_NAME = os.environ.get("APP_NAME", "app")
EXPOSE_PORT = os.environ.get("EXPOSE_PORT", 8000)
OTLP_GRPC_ENDPOINT = os.environ.get("OTLP_GRPC_ENDPOINT", "http://tempo:4317")

TARGET_ONE_HOST = os.environ.get("TARGET_ONE_HOST", "app-b")
TARGET_TWO_HOST = os.environ.get("TARGET_TWO_HOST", "app-c")

app = FastAPI()

# Setting metrics middleware
app.add_middleware(PrometheusMiddleware, app_name=APP_NAME)
app.add_route("/metrics", metrics)

# Setting OpenTelemetry exporter
setting_otlp(app, APP_NAME, OTLP_GRPC_ENDPOINT)


class EndpointFilter(logging.Filter):
    # Uvicorn endpoint access log filter
    def filter(self, record: logging.LogRecord) -> bool:
        return record.getMessage().find("GET /metrics") == -1


# Filter out /endpoint
logging.getLogger("uvicorn.access").addFilter(EndpointFilter())


@app.get("/")
async def read_root():
    logging.error("Hello World")
    return {"Hello": "World"}


@app.get("/items/{item_id}")
async def read_item(item_id: int, q: Optional[str] = None):
    logging.error("items")
    return {"item_id": item_id, "q": q}


@app.get("/io_task")
async def io_task():
    time.sleep(1)
    logging.error("io task")
    return "IO bound task finish!"


@app.get("/cpu_task")
async def cpu_task():
    for i in range(1000):
        _ = i * i * i
    logging.error("cpu task")
    return "CPU bound task finish!"


@app.get("/random_status")
async def random_status(response: Response):
    response.status_code = random.choice([200, 200, 300, 400, 500])
    logging.error("random status")
    return {"path": "/random_status"}


@app.get("/random_sleep")
async def random_sleep(response: Response):
    time.sleep(random.randint(0, 5))
    logging.error("random sleep")
    return {"path": "/random_sleep"}


@app.get("/error_test")
async def error_test(response: Response):
    logging.error("got error!!!!")
    raise ValueError("value error")


@app.get("/chain")
async def chain(response: Response):
    headers = {}
    inject(headers)  # inject trace info to header
    logging.critical(headers)

    async with httpx.AsyncClient() as client:
        await client.get(
            "http://localhost:8000/",
            headers=headers,
        )
    async with httpx.AsyncClient() as client:
        await client.get(
            f"http://{TARGET_ONE_HOST}:8000/io_task",
            headers=headers,
        )
    async with httpx.AsyncClient() as client:
        await client.get(
            f"http://{TARGET_TWO_HOST}:8000/cpu_task",
            headers=headers,
        )
    logging.info("Chain Finished")
    return {"path": "/chain"}


def number_of_workers():
    return multiprocessing.cpu_count()


class StandaloneApplication(BaseApplication):
    def __init__(self, application: Callable, options: Optional[Dict[str, Any]] = None):
        self.options = options or {}
        self.application = application
        super().__init__()

    def load_config(self):
        config = {
            key: value
            for key, value in self.options.items()
            if key in self.cfg.settings and value is not None
        }
        for key, value in config.items():
            self.cfg.set(key.lower(), value)

    def load(self):
        return self.application


if __name__ == "__main__":
    options = {
        "bind": "%s:%s" % ("0.0.0.0", "8000"),
        "workers": number_of_workers(),
        "worker_class": "uvicorn.workers.UvicornWorker",
    }
    StandaloneApplication(app, options).run()


denis-k2 commented Apr 4, 2024

Thank you @daler-api for your reply.
When I run this code, the metrics (Total Requests, Requests Count, ...) are not aggregated; they are displayed separately for each worker.
I get the same result by removing the single line if __name__ == '__main__' from the original fastapi-observability and using the Docker command gunicorn main:app -w N -k uvicorn.workers.UvicornWorker.

Using the links Prometheus FastAPI + Gunicorn and Multiprocess Mode, I managed to get *.db files in the PROMETHEUS_MULTIPROC_DIR directory, but I don't know how to pass them to Prometheus.
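For what it's worth, the prometheus_client library's multiprocess mode can aggregate those *.db files at scrape time: the /metrics handler builds a fresh registry, and a MultiProcessCollector merges every worker's file into it. A minimal sketch, assuming PROMETHEUS_MULTIPROC_DIR is exported before the workers start (this is plain prometheus_client, not tied to this repo's PrometheusMiddleware):

```python
from prometheus_client import (
    CONTENT_TYPE_LATEST,
    CollectorRegistry,
    generate_latest,
    multiprocess,
)


def aggregated_metrics() -> bytes:
    """Merge per-worker *.db files from PROMETHEUS_MULTIPROC_DIR into one exposition."""
    registry = CollectorRegistry()
    # Reads the directory path from the PROMETHEUS_MULTIPROC_DIR env var
    # and sums/merges the per-worker sample files.
    multiprocess.MultiProcessCollector(registry)
    return generate_latest(registry)
```

In a FastAPI app this payload would be returned from the /metrics route with media type CONTENT_TYPE_LATEST, so Prometheus scrapes one aggregated view instead of per-worker values.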


faradox commented Apr 16, 2024

I'm having the same issue: the instrumentation works, but the metrics are tracked separately for each worker. As far as I understand, I need to use Gunicorn's post_fork() hook, as mentioned in open-telemetry/opentelemetry-python-contrib#385 (comment) and the OpenTelemetry documentation, but I can't combine that with the FastAPIInstrumentor.instrument_app(app, tracer_provider=tracer) call shown in your example, because the worker does not seem to have access to the app object post-fork...
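For the record, the pattern in the OpenTelemetry docs sidesteps needing the app object: the post_fork hook only configures the global tracer provider inside each worker, and the app can call FastAPIInstrumentor.instrument_app(app) at import time without passing a provider (it falls back to the global one). A hedged gunicorn.conf.py sketch — the service name and OTLP endpoint below are placeholders taken from this repo's defaults:

```python
# gunicorn.conf.py — sketch following the OpenTelemetry gunicorn example;
# "app" and the endpoint URL are placeholder values.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)
    # Create the provider/exporter after the fork, since the BatchSpanProcessor's
    # background thread is not fork-safe.
    resource = Resource.create(attributes={"service.name": "app"})
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://tempo:4317"))
    )
    trace.set_tracer_provider(provider)
```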
