Use case discussion - Many, many offers with eligibility rules #377
Comments
@alok87 Correct me if I'm wrong, but you want dynamic rule updates in real time at scale. You don't need to restart the instance. One approach is to create a new instance of knowledgeBase when rule updates happen. You can use a cache, or create a pool of knowledgeBase instances built in a separate goroutine to further improve latency (basically precomputing the knowledgeBase instances and taking from the pool as rule-execution requests come in). Be mindful of the pool size, as memory issues might appear if the rule file is big. When you receive rule updates, you refresh the pool with new knowledgeBase instances. I haven't used it myself, but saving rules to a DB and creating a wrapper on top of it might make more sense given your high-scale use case. Similar question (for Drools, which is Java-based): https://groups.google.com/g/drools-usage/c/ZDygAFCaqiM
I tried the example from your blog. It has one rule, and loading that single rule takes 5ms, so 99ms for 100 rules (as stated in the benchmark README) looks right. Why does loading take so much time, and how can we optimize it? I think creating a blank engine and adding rules (as you said above) as a knowledge base to that engine should be separate tasks. Would this help in reducing the 5ms?
I debugged further: out of the 5ms, almost all the time (~4ms) goes into walking the AST. Can this be optimized, or avoided?
So, there is a difference between loading rules and executing rules. By the way, I use this in production and evaluation takes just 1-2ms; loading, on the other hand, does take time. Given the scale you're mentioning, keep an eye on memory usage on your instances.
Ack, I'll read through the library and the algorithm in detail. How about this approach? It should work at big scale also 🤔
Thanks @mkfeuhrer. @newm4n, what are your thoughts on this? Should we first build in cache support at the pod level, and then add support for a central cache store?
I ran benchmarks with govaluate; the difference is huge. We really need to solve this.
Use case
We have a use case in which an offer is created with eligibility rules for the customer. These offers are created dynamically by our users, for their own customers, and managed from a dashboard. There could be billions of offers.
How can we use this rule engine when eligibility rules and offers are being created in real time? Do we need to keep restarting the service to load the new configurations? Won't loading all the rules into memory at boot be a bad idea, since there can also be millions of offers?
What are your suggestions here?
Benchmarks of grule
https://github.com/hyperjumptech/grule-rule-engine#benchmark