Bug description

When I try to use a model from prisma.models as a response_model on a FastAPI endpoint, the FastAPI docs freeze when I try to open that endpoint to see the response type. I have tried to use partials to resolve this, but I can only get it to work when the partial has no relational fields. I find it hard to believe it's freezing completely just because the type is large. Has anyone experienced this?

Is there a way to control the depth of the relations generated for partials, like we can when querying?
How to reproduce
Here is a simple partial I'm generating to test this, and it fails. The pillar model also has some relations.

```python
from prisma.models import Bot

ChatOverview = Bot.create_partial("ChatOverview", include={"pillar"})
```
and in the FastAPI endpoint:

```python
response_model=ChatOverview,
```
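On the depth question: as far as I can tell from the Prisma Client Python docs on partial types, `create_partial` also accepts a `relations` argument (mapping a relation field to another partial's name) and an `exclude_relational_fields` flag, so you can cap how deep the generated type recurses by pointing each relation at a partial that itself has no relations. A hedged sketch, assuming a `Pillar` model exists and that your client version supports these arguments (this file is executed by `prisma generate`, not at application runtime):

```python
# partials.py — run by `prisma generate`, not by the app itself
from prisma.models import Bot, Pillar

# A Pillar partial with relational fields stripped, so the generated
# type stops one level deep instead of recursing.
Pillar.create_partial("PillarFlat", exclude_relational_fields=True)

# Point Bot's `pillar` relation at the flat partial instead of the
# full (recursive) Pillar model.
Bot.create_partial(
    "ChatOverview",
    include={"pillar"},
    relations={"pillar": "PillarFlat"},
)
```

After regenerating the client, the idea is to import `ChatOverview` from `prisma.partials` and use that as the `response_model`, which should keep the OpenAPI schema small.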
I haven't encountered this myself; what FastAPI version are you using?
I'd currently recommend searching the FastAPI repository to see if anyone else is running into this, if you haven't already. If you can reproduce it without using Prisma, open an issue with FastAPI.