
Performance Optimizations w/ Recursive Endpoints for On-Chain Ordinals Games #3719

patrick99e99 opened this issue May 2, 2024 · 7 comments


patrick99e99 commented May 2, 2024

Hi,

We have developed an on-chain Ordinals game where the main inscription allows the user to inscribe an arbitrary number of children (potentially hundreds) that, depending on how they play the game, determine whether their inscription "dies" or "lives and evolves." The problem is that with the current endpoints this requires a massive number of requests, modeled by (number of inscriptions (say 10k) * 2) * number of children * 2.

So for a 10k collection with an average of 50 children, this means we need 20,000 * 50 * 2 = 2,000,000 requests to determine the state of the game fully on-chain. In addition, on marketplaces and explorers where the collection is displayed, each iframe will need to make 100 requests to display the state of its inscription.
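As a sanity check on those numbers, the request count is just multiplication; the factor breakdown in the comments below is our reading of the description, not anything ord defines:

```javascript
// Rough request-count model for the game described above. The factors are
// assumptions taken from the description: each parent needs its child listing
// plus per-parent metadata (x2), and each child needs both a metadata call
// and a content call (x2).
function totalRequests(parents, avgChildren) {
  return parents * 2 * avgChildren * 2;
}

console.log(totalRequests(10_000, 50)); // 2000000 for the example collection
```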

Because the game logic requires us to get BOTH the block height AND the content for each child, we basically have to call /r/children/<INSCRIPTION_ID>/<PAGE> and then, for each page, make 2n API calls, because we have to fetch both /r/inscription/<child-id> AND /content/<child-id> for EACH child inscription id. It would be great if the bulk children endpoint took an optional list of fields so that specific data could be returned in one batch, something like /r/children/abc123i0/0?fields=height,content,sat,etc... and then get back those attributes for each child.
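To make the fan-out concrete, here is a rough client-side sketch. The paths in `fetchChildStates` are the existing recursive endpoints; `proposedUrl` and the `fields` parameter are the suggestion, not something ord supports today:

```javascript
// Current pattern: one paged child listing, then two more requests per child.
async function fetchChildStates(parentId, page = 0) {
  // /r/children/<id>/<page> returns { ids, more, page }.
  const { ids } = await (await fetch(`/r/children/${parentId}/${page}`)).json();
  return Promise.all(ids.map(async (childId) => {
    const meta = await (await fetch(`/r/inscription/${childId}`)).json();
    const content = await (await fetch(`/content/${childId}`)).text();
    return { id: childId, height: meta.height, content };
  }));
}

// Proposed: one request returns the selected fields for the whole page.
const proposedUrl = (parentId, page, fields) =>
  `/r/children/${parentId}/${page}?fields=${fields.join(",")}`;
```

With a page size of 100 children, the proposal collapses 201 requests per page into 1.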

Thank you for your time. We believe on-chain Ordinals games can be very exciting and open up use cases that show Bitcoin's strengths as a platform. We understand there are other considerations and tradeoffs in expanding the recursive endpoints to facilitate this, but we would like to discuss what approaches could potentially reduce the number of requests by 10-100x.

@owenstrevor

gm, would love to get some feedback on this!

Been working with Patrick and the Mega Punks artist on this for the past 3 months as an "MVP" for something much bigger we hope to do in the future, and now that we're at the final mile we're actually testing this in production and seeing that it is an absolute monster that will break browsers/servers 🤣

I think there is so much potential here to show what Bitcoin Ordinals can do that Ethereum NFTs can't, at least not with the same level of simplicity. Would love to see a meta of on-chain games leveraging parent-child to do crazy things.


gmart7t2 commented May 5, 2024

I imagine @casey wouldn't want to implement returning multiple inscription contents from a single request because of the DoS potential it opens up.

He's previously argued against fully supporting brotli compression because decompressing small files can make them much bigger, making DoS attacks easier, and I guess the same would be the case here.


elocremarc commented May 5, 2024

Why do you need to get all 10k states for the game at once? Can you not design the game so that state is localized around one or a few inscriptions? I think /r/parent is on the roadmap and could help do the tree search up to the parent. Think of it like the SPV proof that Casey talked about for runes: you don't need to compute the entire state of all the runes, just the state updates of that specific rune.

@patrick99e99 (Author)

@elocremarc no, that would open the door for players to change their actual state (cheat).

We don't need to get all the states for everything at once; it would just be ideal, and a performance boost, if all content and block heights of child inscriptions could be fetched at once (in batches of 100). The larger number (10k+) would come from aggregation to get the entire state of EVERYTHING, and that would not need to, or be expected to, be done in one bulk call.

@patrick99e99 (Author)

@gmart7t2 yeah, I can certainly understand that. Maybe a restriction on response content size: a 422 if the serialized data exceeds a certain limit?
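As a sketch of what that restriction could look like (the byte limit, return shape, and `guardResponse` helper are all hypothetical, not actual ord behavior):

```javascript
// Hypothetical server-side guard: serialize the batched payload and refuse
// to send it if it exceeds a fixed byte budget, mirroring the 422 idea above.
const MAX_RESPONSE_BYTES = 256 * 1024; // assumed limit, for illustration only

function guardResponse(payload) {
  const body = JSON.stringify(payload);
  if (Buffer.byteLength(body, "utf8") > MAX_RESPONSE_BYTES) {
    return { status: 422, body: JSON.stringify({ error: "response too large" }) };
  }
  return { status: 200, body };
}
```

This caps the amplification a single batched request can cause, at the cost of clients having to handle the error by falling back to smaller batches.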


raphjaph commented May 7, 2024

Here are some remarks off the top of my head:

It would be great if the bulk children endpoint took an optional list of fields so that specific data could be returned in one batch. Something like: /r/children/abc123i0/0?fields=height,content,sat,etc... and then get back those attributes for each child

A bulk response for child inscription metadata probably makes sense (if tests confirm this). I see a big problem with returning the content in that bulk request though. The content will almost always represent the largest part of the data transferred and therefore requires extensive caching at the edges. Right now we only serve content at /content/<INSCRIPTION_ID>, which makes cache design inside a CDN quite easy because it's a fixed route that serves static content. With batched content responses this would circumvent the whole edge cache and make requests to the origin server, creating a whole new bottleneck altogether.
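That cache interaction can be illustrated with a toy path-keyed cache; real CDNs key on more than the path, but the effect is the same:

```javascript
// Toy edge cache keyed by request path, like a CDN caching /content/<id>.
const edgeCache = new Map();
let originHits = 0;

function get(path, origin) {
  if (!edgeCache.has(path)) {
    originHits++; // cache miss: the origin server does the work
    edgeCache.set(path, origin(path));
  }
  return edgeCache.get(path);
}

const origin = (path) => `body-of-${path}`;

// /content/<id> is immutable and a fixed route, so repeat requests are
// free after the first one:
get("/content/abc123i0", origin);
get("/content/abc123i0", origin);
// originHits === 1

// A batched ?fields=... URL varies per field combination, so near-identical
// requests produce distinct cache keys and keep reaching the origin:
get("/r/children/abc123i0/0?fields=height,content", origin);
get("/r/children/abc123i0/0?fields=content,height", origin);
// originHits === 3
```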

Been working with Patrick and the Mega Punks artist on this for the past 3 months as an "MVP" for something much bigger we hope to do in the future, and now that we're at the final mile we're actually testing this in production and seeing that it is an absolute monster that will break browsers/servers 🤣

Do you have more details about what you were testing exactly? Some rough numbers and identified bottlenecks would be great. I could imagine that the number of requests sent is not the only problem but also the sheer amount of data transferred, memory usage of the browser, etc.


patrick99e99 commented May 7, 2024

With batched content responses this would circumvent the whole edge cache and make requests to the origin server, creating a whole new bottleneck altogether.

I see. Yeah, that does sound problematic. Well, if content isn't possible, it would at least be an improvement if the children endpoint could accept an arbitrary list of fields from the response of /r/inscription. Because as of now, we have to 1) fetch the children, 2) call /r/inscription for each child to get the child's block height, and then 3) call /content to get the child's content. If we could at the very least make a call to /r/children/<parent>/0?fields=id,height and get back a response like:

{
    "more": false,
    "page": 0,
    "fields": [
       {
          "height": <child-1-height>,
          "id": <child-1-id>
       },
       {
          "height": <child-2-height>,
          "id": <child-2-id>
       },
       ...etc
    ]
}

That would be helpful and a step in the right direction.
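For illustration, a client consuming that hypothetical response might index heights by child id like this (the endpoint and response shape are the proposal above, not an existing ord API):

```javascript
// Parse the proposed batched response and index heights by child id.
// The shape mirrors the example above; this endpoint does not exist yet.
function indexHeights(responseJson) {
  const { fields, more, page } = JSON.parse(responseJson);
  const heights = Object.fromEntries(fields.map((f) => [f.id, f.height]));
  return { heights, more, page };
}

// Sample payload with made-up child ids and heights:
const sample = JSON.stringify({
  more: false,
  page: 0,
  fields: [
    { height: 840000, id: "childAi0" },
    { height: 840123, id: "childBi0" },
  ],
});

console.log(indexHeights(sample).heights.childAi0); // 840000
```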

Do you have more details about what you were testing exactly? Some rough numbers and identified bottlenecks would be great. I could imagine that the number of requests sent is not the only problem but also the sheer amount of data transferred, memory usage of the browser, etc.

No, not particularly. We just have metrics code which aggregates the entire collection (100k+ parent inscriptions) and buckets them based on the state of each parent. So I was just anticipating that this is going to be disastrous and incredibly slow, because every parent has to get a list of its child ids, then the block height for each child, then the content for each child.
