
APScheduler logs not showing in docker #227

Open

ethanopp opened this issue Dec 27, 2022 · 1 comment
The jobs spin up fine when I run locally, but when I run through Docker, each of the workers duplicates the jobs:

from fastapi import FastAPI
from models.db import init_db
from routers.portfolio import router as portfolio_router
from routers.balance import router as balance_router
from routers.transaction import router as transaction_router

from idom import component, html
from idom.backend.fastapi import configure
import os
from tasks import manage_portfolio_task, fetch_liquidity_changes
from apscheduler.schedulers.asyncio import AsyncIOScheduler

from loguru import logger

# Configure Logs
from log_settings import setup_logger_from_settings, LoggingSettings

# Configure endpoints
app = FastAPI()
app.include_router(portfolio_router, prefix="/portfolio", tags=["Portfolio"])
app.include_router(balance_router, prefix="/balance", tags=["Balance"])
app.include_router(transaction_router, prefix="/transaction", tags=["Transaction"])

# Configure DB
@app.on_event("startup")
async def on_startup():
    # Startup db
    await init_db()
    # Configure Logs
    if not os.environ.get("LOG_LEVEL"):
        os.environ["LOG_LEVEL"] = "INFO"
    setup_logger_from_settings(
        LoggingSettings(
            level=os.environ.get("LOG_LEVEL"),
            filepath=os.path.join(
                os.path.dirname(os.path.abspath(__file__)),
                "config",
                "log.log",
            ),
        )
    )

    # Setup Scheduler
    scheduler = AsyncIOScheduler()
    scheduler.add_job(manage_portfolio_task, "interval", seconds=10)
    scheduler.add_job(fetch_liquidity_changes, "interval", minutes=5)
    scheduler.start()


# Configure Frontend (IDOM)
@component
def BalanceBot():
    return html.h1("Hello, world!")


configure(app, BalanceBot)


# For local dev
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)

From the research I've done, the workaround is to preload the app. However, when I try to do this, both from the CLI and from a custom gunicorn_conf.py file (setting preload_app=True), I am still experiencing the duplication.
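For reference, preloading via a gunicorn config file looks like this (a minimal sketch; the bind address, worker count, and worker class are illustrative values, not taken from this thread):

```python
# gunicorn_conf.py -- illustrative values, adjust to your deployment
bind = "0.0.0.0:8000"
workers = 4
worker_class = "uvicorn.workers.UvicornWorker"
# Import the app once in the master process before forking workers,
# so module-level code runs only once.
preload_app = True
```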

When I move the following code outside of the on_startup() function, or inside the if __name__ == "__main__": block, the scheduler does not seem to start at all:

    scheduler = AsyncIOScheduler()
    scheduler.add_job(manage_portfolio_task, "interval", seconds=10)
    scheduler.add_job(fetch_liquidity_changes, "interval", minutes=5)
    scheduler.start()
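One common workaround (a sketch, not taken from this thread; the lock path and helper name are illustrative): keep scheduler.start() inside on_startup(), where a running event loop exists, but guard it with a non-blocking file lock so only the first worker to acquire the lock actually starts the scheduler:

```python
import fcntl


def try_acquire_scheduler_lock(path="/tmp/scheduler.lock"):
    """Return an open handle if this process wins the lock, else None.

    The OS releases the lock automatically when the process exits, so a
    crashed worker does not leave the scheduler permanently blocked.
    """
    fh = open(path, "w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fh  # keep this reference alive for the life of the process
    except BlockingIOError:
        fh.close()
        return None


# Inside on_startup():
#     if try_acquire_scheduler_lock():
#         scheduler = AsyncIOScheduler()
#         scheduler.add_job(manage_portfolio_task, "interval", seconds=10)
#         scheduler.start()
```

Because flock treats separate file descriptors independently, the second worker's attempt fails immediately instead of blocking, and only one scheduler runs.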
ethanopp changed the title from "APScheduler with this image" to "APScheduler with this image causing duplicated jobs" on Dec 27, 2022

ethanopp commented Dec 27, 2022

Made some progress on this: it looks like preload is working, but I'm seeing some odd behavior with the logs.

When I start the scheduler inside the on_startup function, I get the apscheduler.executors.base_py3:run_coroutine_job:28 logs (the last line below shows the job being executed, and it repeats every 10 seconds per the interval I set):

@app.on_event("startup")
async def on_startup():
    # Startup db
    await init_db()

    # Configure Logs
    if not os.environ.get("LOG_LEVEL"):
        os.environ["LOG_LEVEL"] = "INFO"
    setup_logger_from_settings(
        LoggingSettings(
            level=os.environ.get("LOG_LEVEL"),
            filepath=os.path.join(
                os.path.dirname(os.path.abspath(__file__)),
                "config",
                "log.log",
            ),
        )
    )

    # Setup Scheduler
    scheduler = AsyncIOScheduler()
    scheduler.add_job(manage_portfolio_task, "interval", seconds=10)
    scheduler.add_job(fetch_liquidity_changes, "interval", minutes=5)
    scheduler.start()

# For local dev
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
INFO:     Started server process [6468]
INFO:     Waiting for application startup.
2022-12-27 12:36:24.943 | INFO     | apscheduler.schedulers.base:add_job:444 | Adding job tentatively -- it will be properly scheduled when the scheduler starts
2022-12-27 12:36:24.943 | INFO     | apscheduler.schedulers.base:add_job:444 | Adding job tentatively -- it will be properly scheduled when the scheduler starts
2022-12-27 12:36:24.945 | INFO     | apscheduler.schedulers.base:_real_add_job:885 | Added job "manage_portfolio_task" to job store "default"
2022-12-27 12:36:24.946 | INFO     | apscheduler.schedulers.base:_real_add_job:885 | Added job "fetch_liquidity_changes" to job store "default"
2022-12-27 12:36:24.946 | INFO     | apscheduler.schedulers.base:start:171 | Scheduler started
2022-12-27 12:36:24.947 | INFO     | uvicorn.lifespan.on:startup:61 | Application startup complete.
2022-12-27 12:36:24.948 | INFO     | uvicorn.server:_log_started_message:209 | Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
2022-12-27 12:36:34.955 | INFO     | apscheduler.executors.base_py3:run_coroutine_job:28 | Running job "manage_portfolio_task (trigger: interval[0:00:10], next run at: 2022-12-27 12:36:44 EST)" (scheduled at 2022-12-27 12:36:34.943992-05:00)

However, when I start the scheduler outside of this startup function (which I need to do, otherwise every uvicorn worker duplicates the scheduler), I do not see the job-run logs:

if not os.environ.get("LOG_LEVEL"):
    os.environ["LOG_LEVEL"] = "INFO"
setup_logger_from_settings(
    LoggingSettings(
        level=os.environ.get("LOG_LEVEL"),
        filepath=os.path.join(
            os.path.dirname(os.path.abspath(__file__)),
            "config",
            "log.log",
        ),
    )
)

# Setup Scheduler
scheduler = AsyncIOScheduler()
scheduler.add_job(manage_portfolio_task, "interval", seconds=10)
scheduler.add_job(fetch_liquidity_changes, "interval", minutes=5)
scheduler.start()

# Configure DB
@app.on_event("startup")
async def on_startup():
    # Startup db
    await init_db()

# For local dev
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
2022-12-27 12:41:01.368 | INFO     | apscheduler.schedulers.base:add_job:444 | Adding job tentatively -- it will be properly scheduled when the scheduler starts
2022-12-27 12:41:01.368 | INFO     | apscheduler.schedulers.base:add_job:444 | Adding job tentatively -- it will be properly scheduled when the scheduler starts
2022-12-27 12:41:01.372 | INFO     | apscheduler.schedulers.base:_real_add_job:885 | Added job "manage_portfolio_task" to job store "default"
2022-12-27 12:41:01.372 | INFO     | apscheduler.schedulers.base:_real_add_job:885 | Added job "fetch_liquidity_changes" to job store "default"
2022-12-27 12:41:01.372 | INFO     | apscheduler.schedulers.base:start:171 | Scheduler started

How can I get the workers' logs to show, so that if there are any errors I would still be able to see them?
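One thing worth checking (a sketch, not confirmed as the fix in this thread): APScheduler emits its messages through the standard library logging module under the "apscheduler" logger namespace. If the process that actually runs the jobs has no handler attached to that logger (for example, because the loguru sink was configured in a different process), the job-run messages are dropped silently. Attaching a handler in each process makes them visible; the StringIO stream and the simulated log line below are only for demonstration:

```python
import io
import logging

# APScheduler logs via the stdlib logging module; make sure the
# "apscheduler" logger has a handler in the process running the jobs.
stream = io.StringIO()  # in practice: a file or stderr handler
handler = logging.StreamHandler(stream)
handler.setFormatter(
    logging.Formatter("%(asctime)s | %(levelname)s | %(name)s | %(message)s")
)

aps_logger = logging.getLogger("apscheduler")
aps_logger.setLevel(logging.INFO)
aps_logger.addHandler(handler)

# Simulate the kind of message a job run would produce; child loggers
# propagate to the "apscheduler" parent, so the handler receives it.
logging.getLogger("apscheduler.executors.default").info(
    'Running job "manage_portfolio_task"'
)
```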

ethanopp changed the title from "APScheduler with this image causing duplicated jobs" to "APScheduler logs not showing in docker" on Dec 27, 2022