Activity · linkedin/venice

Public repository owned by the linkedin organization (created 2021-03-18; default branch: main). Events are listed newest first, 30 per page; more activity follows on the next page. Each merge to main is followed by an automated javadoc deployment; where the javadoc branch was force-pushed with the same commit as a merge to main, the two events are listed together.

- 2024-05-13 17:28 UTC, github-actions[bot] pushed to javadoc: "Deploying to javadoc from @ linkedin/venice@db36ee2 🚀"

- 2024-05-13 17:26 UTC, Xun Yin (xunyin8) merged to main (same commit force-pushed to javadoc):
  [fast-client][server] Throw 403 for metadata requests that do not have storage read quota enabled (#975)
  The idea is to block new fast-client traffic that does not have storage node read quota enabled. New client versions will fail fast, with a clear error message directing the user to what is missing. Old client versions will keep failing the metadata request, and the user will also see the error message once they dig deeper into the error trace.

- 2024-05-09 22:57 UTC, github-actions[bot] pushed to javadoc: "Deploying to javadoc from @ linkedin/venice@ba0d7fa 🚀"

- 2024-05-09 22:55 UTC, Zac Policzer (ZacAttack) merged to main (same commit force-pushed to javadoc):
  [changelog] Multiple fixes to changelog consumer logic (#977)
  Follow-up commits: fix test and comments; protected; fix static.

- 2024-05-09 00:16 UTC, github-actions[bot] pushed to javadoc: "Deploying to javadoc from @ linkedin/venice@74f0724 🚀"

- 2024-05-09 00:15 UTC, Sourav Maji (majisourav99) merged to main (same commit force-pushed to javadoc):
  [vpj] Do not enable target region push for deferred version swap stores (#978)
  Do not allow target region push for deferred version swap and hybrid stores. Since hybrid stores use repush, which allows concurrent pushes, this PR disables target region push for such cases.

- 2024-05-09 00:06 UTC, github-actions[bot] pushed to javadoc: "Deploying to javadoc from @ linkedin/venice@02591a1 🚀"

- 2024-05-09 00:04 UTC, Gaojie Liu (gaojieliu) merged to main (same commit force-pushed to javadoc):
  [server][da-vinci] Bumped RocksDB dep and adopt multiget async io by default (#950)
  This PR bumps the RocksDB dependency and exposes a config to enable async I/O for multi-get, with a default value of true (rocksdb.read.async.io.enabled: default true). In theory, with this config and a POSIX filesystem, the RocksDB multiget API should be sped up quite a bit, based on the benchmarking at https://rocksdb.org/blog/2022/10/07/asynchronous-io-in-rocksdb.html. So far, this optimization only applies to the chunk lookup for large values/RMD; if it proves more performant (by checking the lookup latency for large values in the read path), it can be applied in more areas: (1) DaVinci with DISK mode; (2) the ingestion code path, by looking up entries in batch for AA/WC use cases; (3) the regular read path. So far, the multi-get API can only be used against the same RocksDB database, so some reorganization of RocksDB databases may be needed if this API is truly helpful. Follow-up commits: removed unused code; addressed the SpotBugs issue; addressed a review comment.

- 2024-05-08 05:42 UTC, github-actions[bot] pushed to javadoc: "Deploying to javadoc from @ linkedin/venice@4b5e4ff 🚀"

- 2024-05-08 05:40 UTC, Hao Xu (haoxu07) merged to main (same commit force-pushed to javadoc):
  [controller] Fix multiple AdminExecutionTasks working on the same store at the same time (#918)
  Current state: when creating an AdminExecutionTask, we did not check whether an AdminExecutionTask for that store was already in flight. So if one AdminExecutionTask is waiting for a lock (e.g. an update-store operation waiting for a store-level write lock that is currently unavailable), AdminConsumptionTask can create multiple AdminExecutionTasks for a single store until they occupy all threads in the ExecutorService's thread pool, all waiting for the same store-level lock. Besides, executorService.invokeAll(tasks, processingCycleTimeoutInMs, TimeUnit.MILLISECONDS) will cancel every invoked task after the timeout, even tasks that never got a thread from the pool to execute; cancelling an AdminExecutionTask that is waiting for a lock does not terminate its thread (the lock acquisition via AutoCloseableLock is not interruptible), so that AdminExecutionTask keeps occupying a thread from the ExecutorService's pool until the lock is released. This PR addresses the issue with the following changes: a check for an already-running AdminExecutionTask for the store, and an integration test that simulates the lock blocking one thread from the pool and recovering after the lock is acquired. Co-authored-by: Hao Xu

- 2024-05-06 22:37 UTC, github-actions[bot] pushed to javadoc: "Deploying to javadoc from @ linkedin/venice@d7997d7 🚀"

- 2024-05-06 22:35 UTC, Lei Lu (lluwm) merged to main (same commit force-pushed to javadoc):
  [Controller][all] Do not truncate Kafka topic immediately for fatal data validation error before EOP (#937)
  Today, when the ingestion of a new store version fails, we truncate the Kafka topic of the store version by updating its retention time to a small value (15 seconds), specified by DEPRECATED_TOPIC_RETENTION_MS. This is fine for regular failures, but for fatal data validation errors, which indicate critical issues during the batch push period (before EOP), truncating the topic too early can prevent us from finding the root cause, as the Kafka data is gone. This change adjusts the Kafka topic retention time to 2 days when a fatal data validation error is identified, so that there is enough time to investigate. Meanwhile, additional logic in TopicCleanupService allows a topic whose retention time is > DEPRECATED_TOPIC_MAX_RETENTION_MS to still be considered for deletion: if the topic's retention is the 2-day value (DEPRECATED_TOPIC_RETENTION_MS), the topic is a version topic, and its creation time (from venice_system_store_push_job_details_store) is already more than 2 days in the past, then delete it.

- 2024-05-01 21:31 UTC, github-actions[bot] pushed to javadoc: "Deploying to javadoc from @ linkedin/venice@40ed887 🚀"

- 2024-05-01 21:30 UTC, Min Huang (huangminchn) merged to main (same commit force-pushed to javadoc):
  [server][da-vinci] Log VersionTopic offset when logging lossy rewind error (#952)
  Without the VT offset, it is difficult to find the right offset at which to dump the VT for troubleshooting.

- 2024-05-01 20:35 UTC, github-actions[bot] pushed to javadoc: "Deploying to javadoc from @ linkedin/venice@7df577e 🚀"

- 2024-05-01 20:34 UTC, Sourav Maji (majisourav99) merged to main (same commit force-pushed to javadoc):
  [server] Make separate drainer for batch and hybrid store ingestions (#973)
  Make separate drainers for batch and hybrid store ingestion so that batch store ingestion does not affect hybrid store ingestion. Co-authored-by: Sourav Maji

- 2024-05-01 17:44 UTC, github-actions[bot] pushed to javadoc: "Deploying to javadoc from @ linkedin/venice@054ea3f 🚀"

- 2024-05-01 17:43 UTC, Sourav Maji (majisourav99) merged to main (same commit force-pushed to javadoc):
  [controller] Do not delete parent VT for target colo batch push (#964)
  Truncate the topic after the push reaches a terminal state if: (1) it is a hybrid store or a regular push (hybrid store target push uses repush, which does not have target regions); or (2) target region push is enabled and the job pushing data only to the target region has completed (status == PUSHED). Co-authored-by: Sourav Maji

- 2024-05-01 17:11 UTC, github-actions[bot] pushed to javadoc: "Deploying to javadoc from @ linkedin/venice@05836eb 🚀"

- 2024-05-01 17:09 UTC, Min Huang (huangminchn) merged to main (same commit force-pushed to javadoc):
  [router] Added config to enable DNS resolution before SSL (#970)
  Config to enable this feature: router.resolve.before.ssl. When this config is enabled, "SslInitializer#enableSslTaskExecutor" will not be called, and the SSL handshake thread pool count will be used to construct the DNS resolution thread pool. Besides, two new SSL-related metrics were added using the API from Netty: pending_ssl_handshake_count and total_failed_ssl_handshake_count.