{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":6934395,"defaultBranch":"main","name":"rocksdb","ownerLogin":"facebook","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2012-11-30T06:16:18.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/69631?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1714595653.0","currentOid":""},"activityList":{"items":[{"before":"e2ef349f56f99ae83d2ded1de23ff0684c66e1bb","after":"ed01babd07ab23788f563e78c234c01d247c09b9","ref":"refs/heads/main","pushedAt":"2024-05-03T01:42:57.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Expose compaction pri through C API (#12604)\n\nSummary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/12604\n\nReviewed By: cbi42\n\nDifferential Revision: D56914066\n\nPulled By: ajkr\n\nfbshipit-source-id: 64b51ab2b7b5ec0b5fde5a5f61d076bac1c3a8ad","shortMessageHtmlLink":"Expose compaction pri through C API (#12604)"}},{"before":"2cd4346df6703b190d1497719bb1e3fa4336cd42","after":"e2ef349f56f99ae83d2ded1de23ff0684c66e1bb","ref":"refs/heads/main","pushedAt":"2024-05-03T00:14:43.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Deflake unit test `DBCompactionTest.CompactionLimiter` (#12596)\n\nSummary:\nThe test has been flaky for a long time. A recent [failure](https://github.com/facebook/rocksdb/actions/runs/8820808355/job/24215219590?pr=12578) shows that there is still flush running when the assertion fails. 
### 2024-05-03: Deflake unit test `DBCompactionTest.CompactionLimiter` (#12596)
Pushed to `main` by facebook-github-bot.

Summary: The test has been flaky for a long time. A recent [failure](https://github.com/facebook/rocksdb/actions/runs/8820808355/job/24215219590?pr=12578) shows that there is still a flush running when the assertion fails. I think this is because `WaitForFlushMemTable()` may return before a flush schedules the next compaction.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12596

Test Plan: I could not repro the failure locally: `gtest-parallel --repeat=8000 --workers=100 ./db_compaction_test --gtest_filter="*CompactionLimiter*"`

Reviewed By: ajkr · Differential Revision: D56715874 · Pulled By: cbi42

### 2024-05-02: Fix compile error in Clang (#12588)
Pushed to `main` by facebook-github-bot.

Summary: This PR fixes the following compile errors with Clang:

```
.../rocksdb/env/fs_on_demand.cc:184:5: error: no member named 'for_each' in namespace 'std'; did you mean 'std::ranges::for_each'?
.../rocksdb/env/fs_on_demand.cc:188:10: error: no member named 'sort' in namespace 'std'
.../rocksdb/env/fs_on_demand.cc:189:10: error: no member named 'sort' in namespace 'std'
.../rocksdb/env/fs_on_demand.cc:193:10: error: no member named 'set_union' in namespace 'std'
.../rocksdb/env/fs_on_demand.cc:221:5: error: no member named 'for_each' in namespace 'std'; did you mean 'std::ranges::for_each'?
.../rocksdb/env/fs_on_demand.cc:226:10: error: no member named 'sort' in namespace 'std'
.../rocksdb/env/fs_on_demand.cc:227:10: error: no member named 'sort' in namespace 'std'
.../rocksdb/env/fs_on_demand.cc:231:10: error: no member named 'set_union' in namespace 'std'
8 errors generated.
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12588

Reviewed By: jaykorean · Differential Revision: D56656222 · Pulled By: ajkr
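The commit message lists only the errors, not the change itself. The usual cause is relying on a transitive `<algorithm>` include that newer libc++ no longer provides, so the sketch below assumes the fix is an explicit `#include <algorithm>`; the data and names mirror the `sort`/`set_union` pattern from the error messages rather than the actual `fs_on_demand.cc` code.

```cpp
#include <algorithm>  // std::for_each, std::sort, std::set_union (the assumed missing include)
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

int main() {
  std::vector<std::string> local{"b", "a"};
  std::vector<std::string> remote{"c", "a"};

  // Same shape as the code in the errors: touch each remote name, sort both
  // listings, then merge them into a single sorted result.
  std::for_each(remote.begin(), remote.end(),
                [](std::string& name) { name = "remote/" + name; });
  std::sort(local.begin(), local.end());
  std::sort(remote.begin(), remote.end());

  std::vector<std::string> merged;
  std::set_union(local.begin(), local.end(), remote.begin(), remote.end(),
                 std::back_inserter(merged));
  for (const auto& name : merged) {
    std::cout << name << "\n";
  }
  return 0;
}
```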
Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Implement secondary cache admission policy to allow all evicted blocks (#12599)\n\nSummary:\nAdd a secondary cache admission policy to admit all blocks evicted from the block cache.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12599\n\nReviewed By: pdillinger\n\nDifferential Revision: D56891760\n\nPulled By: anand1976\n\nfbshipit-source-id: 193c98c055aa3477f4e3a78e5d3daef27a5eacf4","shortMessageHtmlLink":"Implement secondary cache admission policy to allow all evicted blocks ("}},{"before":"241253053a4418b7e8d055ab38ce3796cb3f655d","after":"6349da612bd26bbf338cbce4601de29ffe1a1f1c","ref":"refs/heads/main","pushedAt":"2024-05-01T23:37:05.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Update HISTORY.md and version to 9.3.0 (#12601)\n\nSummary:\nUpdate HISTORY.md for 9.2 and version to 9.3.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12601\n\nReviewed By: jaykorean, jowlyzhang\n\nDifferential Revision: D56845901\n\nPulled By: anand1976\n\nfbshipit-source-id: 0d1137a6568e4712be2f8b705f4f7b438217dbed","shortMessageHtmlLink":"Update HISTORY.md and version to 9.3.0 (#12601)"}},{"before":null,"after":"4226a32413086cd085cdfd91e74961b1406e4de4","ref":"refs/heads/9.2.fb","pushedAt":"2024-05-01T20:34:13.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"anand1976","name":null,"path":"/anand1976","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/33647610?s=80&v=4"},"commit":{"message":"Update HISTORY.md for 9.2.0","shortMessageHtmlLink":"Update HISTORY.md for 9.2.0"}},{"before":"8b3d9e6bfe7dc4a0be5c94150e9c888d0c11bff5","after":"241253053a4418b7e8d055ab38ce3796cb3f655d","ref":"refs/heads/main","pushedAt":"2024-05-01T19:32:04.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix delete obsolete files on recovery not rate limited (#12590)\n\nSummary:\nThis PR fix the issue that deletion of obsolete files during DB::Open are not rate limited.\n\nThe root cause is slow deletion is disabled if trash/db size ratio exceeds the configured `max_trash_db_ratio` https://github.com/facebook/rocksdb/blob/d610e14f9386bab7f1fa85cf34dcb5b465152699/include/rocksdb/sst_file_manager.h#L126 however, the current handling in DB::Open starts with tracking nothing but the obsolete files. 
### 2024-05-01: Fix delete obsolete files on recovery not rate limited (#12590)
Pushed to `main` by facebook-github-bot.

Summary: This PR fixes the issue that deletion of obsolete files during DB::Open is not rate limited. The root cause is that slow deletion is disabled when the trash/DB size ratio exceeds the configured `max_trash_db_ratio` (https://github.com/facebook/rocksdb/blob/d610e14f9386bab7f1fa85cf34dcb5b465152699/include/rocksdb/sst_file_manager.h#L126); however, the current handling in DB::Open starts out tracking nothing but the obsolete files, which makes the ratio always look like 1.

In order for the deletion rate limiting logic to work properly, we should only start deleting files after `SstFileManager` has finished tracking the whole DB, so the main fix is to move the two places that attempt to delete files to after the tracking is done: 1) the `DeleteScheduler::CleanupDirectory` call in `SanitizeOptions`, 2) the `DB::DeleteObsoleteFiles` call.

There are some other aesthetic changes, like refactoring the collection of all the DB paths into a function and renaming `DBImpl::DeleteUnreferencedSstFiles` to `DBImpl::MaybeUpdateNextFileNumber`, as it doesn't actually delete the files.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12590

Test Plan: Added unit test and verified with manual testing

Reviewed By: anand1976 · Differential Revision: D56830519 · Pulled By: jowlyzhang
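For context, a minimal sketch of the `SstFileManager` configuration this fix applies to; the 8 MB/s rate, the 25% ratio, and the DB path are arbitrary example values, while `NewSstFileManager`, `SetDeleteRateBytesPerSecond`, and `SetMaxTrashDBRatio` are the existing public calls.

```cpp
#include <cassert>
#include <memory>

#include "rocksdb/db.h"
#include "rocksdb/env.h"
#include "rocksdb/options.h"
#include "rocksdb/sst_file_manager.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Rate-limit file deletions (with this fix, including obsolete files found
  // on recovery) to 8 MB/s, and fall back to immediate deletion only if trash
  // grows beyond 25% of the DB size.
  std::shared_ptr<rocksdb::SstFileManager> sfm(
      rocksdb::NewSstFileManager(rocksdb::Env::Default()));
  sfm->SetDeleteRateBytesPerSecond(8 << 20);
  sfm->SetMaxTrashDBRatio(0.25);
  options.sst_file_manager = sfm;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/sfm_demo", &db);
  assert(s.ok());
  delete db;
  return 0;
}
```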
### 2024-04-30: Add TimedPut to stress test (#12559)
Pushed to `main` by facebook-github-bot.

Summary: This also updates WriteBatch's protection info to include write time, since there are several places in the memtable that by default protect the whole value slice. This PR is stacked on https://github.com/facebook/rocksdb/issues/12543.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12559

Reviewed By: pdillinger · Differential Revision: D56308285 · Pulled By: jowlyzhang

### 2024-04-30: Fix wrong padded bytes being used to generate file checksum (#12598)
Pushed to `main` by facebook-github-bot.

Summary: https://github.com/facebook/rocksdb/pull/12542 introduced a bug where the wrong padded bytes were used to generate the file checksum if a flush happens during padding. This PR fixes it, along with an existing instance of the same bug for `perform_data_verification_=true`.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12598

Test Plan:
- New UT that failed before this fix (`db->VerifyFileChecksums: ...Corruption: ...file checksum mismatch`) and passes after
- Benchmark
```
TEST_TMPDIR=/dev/shm ./db_bench --benchmarks=fillseq[-X300] --num=100000 --block_align=1 --compression_type=none
```
Pre-PR: fillseq [AVG 300 runs] : 421334 (± 4126) ops/sec; 46.6 (± 0.5) MB/sec
Post-PR (no regression observed, a slight improvement): fillseq [AVG 300 runs] : 425768 (± 4309) ops/sec; 47.1 (± 0.5) MB/sec

Reviewed By: ajkr, anand1976 · Differential Revision: D56725688 · Pulled By: hx235
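A short sketch of the file-checksum feature being protected here, using the built-in CRC32C generator; the key and DB path are illustrative. `DB::VerifyFileChecksums()` is the kind of call the new unit test relies on to detect a mismatch.

```cpp
#include <cassert>

#include "rocksdb/db.h"
#include "rocksdb/file_checksum.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Record a per-file checksum in the manifest as SST files are written.
  options.file_checksum_gen_factory =
      rocksdb::GetFileChecksumGenCrc32cFactory();

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/checksum_demo", &db);
  assert(s.ok());
  s = db->Put(rocksdb::WriteOptions(), "key", "value");
  assert(s.ok());
  s = db->Flush(rocksdb::FlushOptions());
  assert(s.ok());

  // Re-reads every table file and compares against the recorded checksums;
  // a wrong checksum recorded at write time shows up as Corruption here.
  s = db->VerifyFileChecksums(rocksdb::ReadOptions());
  assert(s.ok());
  delete db;
  return 0;
}
```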
### 2024-04-30: Preserve TimedPut on penultimate level until it actually expires (#12543)
Pushed to `main` by facebook-github-bot.

Summary: To make sure `TimedPut` entries are placed on the proper tier before and when they become eligible for the cold tier, flush and compaction need to keep the relevant seqno-to-time mapping not just for the sequence numbers contained in internal keys, but also for the preferred sequence numbers of `TimedPut` entries.

This PR also fixes some bugs in handling `TimedPut` during compaction:

1) An edge case where a `TimedPut` entry's internal key is the right bound of the penultimate level: after swapping in its preferred sequence number, the internal key falls outside the penultimate range, because the preferred sequence number is smaller than the original sequence number. The entry is still safe to place on the penultimate level, so we keep track of the `TimedPut` entry's original sequence number for this check. The idea is that as long as it is safe for the original key to be placed on the penultimate level, it is safe for the entry with the swapped-in preferred sequence number too, because we only swap in the preferred sequence number when the entry is visible to the earliest snapshot and there are no other data points with the same user key in lower levels. Conversely, if it is not safe for the original key to be placed on the penultimate level, we will not place the entry there after swapping in the preferred seqno either.

2) The assertion that the preferred seqno is always bigger than the original sequence number may fail if this logic is only exercised after the sequence number has been zeroed out. We adjust the assertion to handle that case too: we don't swap in the preferred seqno but adjust the entry's type to `kTypeValue`.

3) There was special-case handling for a range deletion that could incorrectly cover an entry once the preferred seqno is swapped in, but it missed the case where the original entry is already covered by the range deletion. The original handling would mistakenly output the entry instead of omitting it.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12543

Test Plan:
`./tiered_compaction_test --gtest_filter="PrecludeLastLevelTest.PreserveTimedPutOnPenultimateLevel"`
`./compaction_iterator_test --gtest_filter="*TimedPut*"`

Reviewed By: pdillinger · Differential Revision: D56195096 · Pulled By: jowlyzhang

### 2024-04-30: Branch `9.1.fb.myrocks` created
Created by jowlyzhang (Yu Zhang) at commit "Fix deprecated use of 0/NULL in internal_repo_rocksdb/repo/util/xxhash.h + 5".

Summary: `nullptr` is typesafe. `0` and `NULL` are not. In the future, only `nullptr` will be allowed. This diff helps us embrace the future _now_ in service of enabling `-Wzero-as-null-pointer-constant`.

Reviewed By: dmm-fb · Differential Revision: D55559752

### 2024-04-30: Set optimize_filters_for_memory by default (#12377)
Pushed to `main` by facebook-github-bot.

Summary: This feature has been around for a couple of years and users haven't reported any problems with it. Not quite related: fixed a technical ODR violation in a public header for info_log_level in case the DEBUG build status changes.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12377

Test Plan: unit tests updated, already in crash test. Some unit tests expect specific behaviors of optimize_filters_for_memory=false, and we now need to bake that in.

Reviewed By: jowlyzhang · Differential Revision: D54129517 · Pulled By: pdillinger
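The flag lives in `BlockBasedTableOptions`. A minimal sketch of opting back out for tests or workloads that depend on the pre-change behavior; the 10-bits-per-key filter is just an example.

```cpp
#include "rocksdb/filter_policy.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

int main() {
  rocksdb::BlockBasedTableOptions table_options;
  table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10.0));
  // Now defaults to true; set it back to false to keep exact filter sizes
  // at the cost of more memory fragmentation in the block cache.
  table_options.optimize_filters_for_memory = false;

  rocksdb::Options options;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));
  return 0;
}
```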
### 2024-04-29: DeleteRange() return NotSupported if row_cache is configured (#12512)
Pushed to `main` by facebook-github-bot.

Summary: ...since this feature combination is not supported yet (https://github.com/facebook/rocksdb/issues/4122).

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12512

Test Plan: new unit test.

Reviewed By: jaykorean, jowlyzhang · Differential Revision: D55820323 · Pulled By: cbi42
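A sketch of the combination that now fails fast; the cache size, keys, and DB path are illustrative.

```cpp
#include <cassert>
#include <iostream>

#include "rocksdb/cache.h"
#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.row_cache = rocksdb::NewLRUCache(8 << 20);  // row cache enabled

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/row_cache_demo", &db);
  assert(s.ok());

  // With a row cache configured, range deletions are not supported yet
  // (see issue #4122), so this call now returns NotSupported.
  s = db->DeleteRange(rocksdb::WriteOptions(), db->DefaultColumnFamily(),
                      "a", "z");
  std::cout << s.ToString() << std::endl;  // expected: Not implemented/supported
  delete db;
  return 0;
}
```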
### 2024-04-29: Fixed `MultiGet()` error handling to not skip blob dereference (#12597)
Pushed to `main` by facebook-github-bot.

Summary: See the comment at the top of the test case and the release note.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12597

Reviewed By: jaykorean · Differential Revision: D56718786 · Pulled By: ajkr

### 2024-04-29: Fix corruption bug when recycle_log_file_num changed from 0 (#12591)
Pushed to `main` by facebook-github-bot.

Summary: When `recycle_log_file_num` is changed from 0 to non-zero and the DB is reopened, any log files from the previous session that are still alive get reused. However, the WAL records in those files are not in the recyclable format. If one of those files is reused and is empty, a subsequent re-open, in `RecoverLogFiles`, can replay those records and insert stale data into the memtable. Another manifestation of this is an assertion failure `first_seqno_ == 0 || s >= first_seqno_` in `rocksdb::MemTable::Add`.

We could fix this by either 1) writing a special record when reusing a log file, 2) implementing more rigorous checking in `RecoverLogFiles` to ensure we don't replay stale records, or 3) not reusing files created by a previous DB session. We choose option 3 as it's the simplest, and flipping `recycle_log_file_num` is expected to be a rare event.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12591

Test Plan: 1. Add a unit test to verify the bug and fix

Reviewed By: jowlyzhang · Differential Revision: D56655812 · Pulled By: anand1976
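For reference, a minimal sketch of the option involved; the value 4 and the DB path are arbitrary. With this fix, WAL files left over from a previous session are simply not recycled.

```cpp
#include <cassert>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Reuse (recycle) up to 4 old WAL files instead of deleting and recreating
  // them; recycled WALs are written in a format that lets stale tails be
  // detected on recovery.
  options.recycle_log_file_num = 4;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/recycle_wal_demo", &db);
  assert(s.ok());
  delete db;
  return 0;
}
```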
### 2024-04-29: Add `ldb multi_get_entity` subcommand (#12593)
Pushed to `main` by facebook-github-bot.

Summary: Mixed code from `MultiGetCommand` and `GetEntityCommand` to introduce `MultiGetEntityCommand`. Some minor fixes for the related subcommands are included.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12593

Reviewed By: jaykorean · Differential Revision: D56687147 · Pulled By: ajkr

### 2024-04-27: Prevent data block compression with `BlockBasedTableOptions::block_align` (#12592)
Pushed to `main` by facebook-github-bot.

Summary: Made `BlockBasedTableOptions::block_align` incompatible (i.e., APIs will return `Status::InvalidArgument`) with more ways of enabling compression: `CompactionOptions::compression`, `ColumnFamilyOptions::compression_per_level`, and `ColumnFamilyOptions::bottommost_compression`. Previously it was only incompatible with `ColumnFamilyOptions::compression`.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12592

Reviewed By: hx235 · Differential Revision: D56650862 · Pulled By: ajkr
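A sketch of one of the newly rejected combinations; the LZ4 choice and DB path are illustrative, and the expectation comment reflects the behavior described above rather than output taken from the PR.

```cpp
#include <iostream>

#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

int main() {
  rocksdb::BlockBasedTableOptions table_options;
  table_options.block_align = true;  // align data blocks to page boundaries

  rocksdb::Options options;
  options.create_if_missing = true;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));
  options.compression = rocksdb::kNoCompression;  // fine with block_align
  options.bottommost_compression = rocksdb::kLZ4Compression;  // now rejected too

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/block_align_demo", &db);
  std::cout << s.ToString() << std::endl;  // expected: Invalid argument
  delete db;  // db stays nullptr if Open failed; deleting nullptr is a no-op
  return 0;
}
```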
### 2024-04-26: Disable inplace_update_support in OptimisticTxnDB (#12589)
Pushed to `main` by facebook-github-bot.

Summary: Adding OptimisticTransactionDB, like https://github.com/facebook/rocksdb/issues/12586.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12589

Test Plan:
```
python3 tools/db_crashtest.py whitebox --optimistic_txn
```
```
Running db_stress with pid=773197: ./db_stress ...
--inplace_update_support=0 ...
--use_optimistic_txn=1 ...
...
```

Reviewed By: ajkr · Differential Revision: D56635338 · Pulled By: jaykorean

### 2024-04-26: Fix deprecated use of 0/NULL in internal_repo_rocksdb/repo/util/xxhash.h + 1
Pushed to `main` by facebook-github-bot.

Summary: `nullptr` is typesafe. `0` and `NULL` are not. In the future, only `nullptr` will be allowed. This diff helps us embrace the future _now_ in service of enabling `-Wzero-as-null-pointer-constant`.

Reviewed By: palmje · Differential Revision: D56650257

### 2024-04-26: Fix deprecated use of 0/NULL in internal_repo_rocksdb/repo/include/rocksdb/utilities/env_mirror.h + 1
Pushed to `main` by facebook-github-bot.

Summary: `nullptr` is typesafe. `0` and `NULL` are not. In the future, only `nullptr` will be allowed. This diff helps us embrace the future _now_ in service of enabling `-Wzero-as-null-pointer-constant`.

Reviewed By: palmje · Differential Revision: D56650296

### 2024-04-26: change default `CompactionOptions::compression` while deprecating it (#12587)
Pushed to `main` by facebook-github-bot.

Summary: I had a TODO to complete `CompactionOptions`'s compression API but never did it: https://github.com/facebook/rocksdb/blob/d610e14f9386bab7f1fa85cf34dcb5b465152699/db/compaction/compaction_picker.cc#L371-L373

Without solving that TODO, the API remains incomplete and unsafe. Now, however, I don't think it's worthwhile to complete it. I think we should instead delete the API entirely. This PR deprecates it in preparation for deletion in a future major release. The `ColumnFamilyOptions` settings for compression should be good enough for `CompactFiles()`, since they are apparently good enough for every other compaction, including `CompactRange()`.

In the meantime, I also changed the default `CompressionType`. Having callers of `CompactFiles()` use Snappy compression by default does not make sense when the default could be to simply use the same compression type that is used for every other compaction. As a bonus, this change makes the default `CompressionType` consistent with the `CompressionOptions` that will be used.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12587

Reviewed By: hx235 · Differential Revision: D56619273 · Pulled By: ajkr
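For context, a minimal sketch of the affected `CompactFiles()` path; with this change, leaving `CompactionOptions` at its defaults means the column family's compression settings are used instead of an implicit Snappy. The key, path, and output level are illustrative.

```cpp
#include <cassert>
#include <string>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/metadata.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/compact_files_demo", &db);
  assert(s.ok());

  // Create one SST file so there is something to compact.
  db->Put(rocksdb::WriteOptions(), "key", "value");
  db->Flush(rocksdb::FlushOptions());

  // Collect the live SST files of the default column family.
  rocksdb::ColumnFamilyMetaData meta;
  db->GetColumnFamilyMetaData(&meta);
  std::vector<std::string> input_files;
  for (const auto& level : meta.levels) {
    for (const auto& file : level.files) input_files.push_back(file.name);
  }

  // With CompactionOptions left at its defaults, the compaction output now
  // follows the column family's compression settings rather than Snappy.
  rocksdb::CompactionOptions compact_opts;
  s = db->CompactFiles(compact_opts, input_files, /*output_level=*/1);
  assert(s.ok());
  delete db;
  return 0;
}
```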
### 2024-04-26: Disable `inplace_update_support` in transaction stress tests (#12586)
Pushed to `main` by facebook-github-bot.

Summary: `MultiOpsTxnsStressTest` relies on snapshots, which are incompatible with `inplace_update_support`. TransactionDB uses snapshots too, so we don't expect it to be used with `inplace_update_support` either.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12586

Test Plan:
```
python3 tools/db_crashtest.py whitebox --[test_multiops_txn|txn] --txn_write_policy=1
```

Reviewed By: hx235 · Differential Revision: D56602769 · Pulled By: cbi42

### 2024-04-25: Print more debug info in test when `SyncWAL()` fails (#12580)
Pushed to `main` by facebook-github-bot.

Summary: Example failure (cannot reproduce):

```
[ RUN      ] DBWriteTestInstance/DBWriteTest.ConcurrentlyDisabledWAL/0
db/db_write_test.cc:809: Failure
dbfull()->SyncWAL()
Not implemented: SyncWAL() is not supported for this implementation of WAL file
(the same assertion failure repeats ten times)
[  FAILED  ] DBWriteTestInstance/DBWriteTest.ConcurrentlyDisabledWAL/0, where GetParam() = 0 (49 ms)
```

I have no idea why `SyncWAL()` would not be supported from what is presumably a `SpecialEnv`, so I added more debug info in case it fails again in CI. The last failure was https://github.com/facebook/rocksdb/actions/runs/8731304938/job/23956487511?fbclid=IwAR2jyXgVQtCezri3axV5MwMdI7D6VIudMk1xkiN_FL9-x2dkBv4IqIjjgB4 and it has only happened once ever, AFAIK.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12580

Reviewed By: hx235 · Differential Revision: D56541996 · Pulled By: ajkr
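For reference, a sketch of the call under test; `manual_wal_flush` and the explicit `FlushWAL()` are just one way to exercise `SyncWAL()`, and the path and key are illustrative.

```cpp
#include <cassert>
#include <iostream>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.manual_wal_flush = true;  // WAL writes stay buffered until flushed

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/syncwal_demo", &db);
  assert(s.ok());

  s = db->Put(rocksdb::WriteOptions(), "key", "value");
  assert(s.ok());

  // Flush the buffered WAL writes to the file, then fsync the WAL itself.
  s = db->FlushWAL(/*sync=*/false);
  assert(s.ok());
  s = db->SyncWAL();
  std::cout << "SyncWAL: " << s.ToString() << std::endl;
  delete db;
  return 0;
}
```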
### 2024-04-25: Clarify `inplace_update_support` with DeleteRange and reenable `inplace_update_support` in crash test (#12577)
Pushed to `main` by facebook-github-bot (2 commits).

Summary: Our crash test recently surfaced an incompatibility between DeleteRange and `inplace_update_support`: incorrect read results are returned after inserting into a memtable that already contains delete range data. This PR clarifies this in the API and re-enables `inplace_update_support` in the crash test with sanitization. Ideally there should be a way to check the memtable for a delete range entry upon Put when inplace_update_support = true.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12577

Test Plan: CI

Reviewed By: ajkr · Differential Revision: D56492556 · Pulled By: hx235

### 2024-04-25: initialize member variables in `PerfContext`'s default constructor (#12581)
Pushed to `main` by facebook-github-bot.

Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/12581

Reviewed By: jaykorean · Differential Revision: D56555535 · Pulled By: ajkr

### 2024-04-24: MultiCFSnapshot for NewIterators() API (#12573)
Pushed to `main` by facebook-github-bot.

Summary: As mentioned in https://github.com/facebook/rocksdb/issues/12561 and https://github.com/facebook/rocksdb/issues/12566, the `NewIterators()` API has not been providing a consistent view of the DB across multiple column families. This PR addresses that by utilizing the `MultiCFSnapshot()` function which has been used for the `MultiGet()` APIs. To be able to obtain the thread-local super version with a ref, an `sv_exclusive_access` parameter has been added to `MultiCFSnapshot()` so that we can call `GetReferencedSuperVersion()` or `GetAndRefSuperVersion()` depending on the param, and support the `Refresh()` API for MultiCfIterators.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12573

Test Plan:

Unit tests added:
```
./db_iterator_test --gtest_filter="*IteratorsConsistentView*"
./multi_cf_iterator_test -- --gtest_filter="*ConsistentView*"
```

Performance check. Setup:
```
make -j64 release
TEST_TMPDIR=/dev/shm/db_bench ./db_bench -benchmarks="filluniquerandom" -key_size=32 -value_size=512 -num=10000000 -compression_type=none
```
Run:
```
TEST_TMPDIR=/dev/shm/db_bench ./db_bench -use_existing_db=1 -benchmarks="multireadrandom" -cache_size=10485760000
```
Before the change:
```
multireadrandom : 6.374 micros/op 156892 ops/sec 6.374 seconds 1000000 operations; (0 of 1000000 found)
```
After the change:
```
multireadrandom : 6.265 micros/op 159627 ops/sec 6.265 seconds 1000000 operations; (0 of 1000000 found)
```

Reviewed By: jowlyzhang · Differential Revision: D56444066 · Pulled By: jaykorean
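For context, a minimal sketch of the `NewIterators()` API that now sits on top of `MultiCFSnapshot()`; the column family name, keys, and DB path are illustrative.

```cpp
#include <cassert>
#include <iostream>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/newiterators_demo", &db);
  assert(s.ok());

  rocksdb::ColumnFamilyHandle* cf = nullptr;
  s = db->CreateColumnFamily(rocksdb::ColumnFamilyOptions(), "cf1", &cf);
  assert(s.ok());
  db->Put(rocksdb::WriteOptions(), db->DefaultColumnFamily(), "a", "1");
  db->Put(rocksdb::WriteOptions(), cf, "b", "2");

  // One call, one consistent view across both column families.
  std::vector<rocksdb::ColumnFamilyHandle*> cfs{db->DefaultColumnFamily(), cf};
  std::vector<rocksdb::Iterator*> iters;
  s = db->NewIterators(rocksdb::ReadOptions(), cfs, &iters);
  assert(s.ok());
  for (rocksdb::Iterator* it : iters) {
    for (it->SeekToFirst(); it->Valid(); it->Next()) {
      std::cout << it->key().ToString() << " -> " << it->value().ToString()
                << "\n";
    }
    delete it;
  }
  s = db->DestroyColumnFamilyHandle(cf);
  assert(s.ok());
  delete db;
  return 0;
}
```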
### 2024-04-24: Push to `9.1.fb` (3 commits)
Pushed by ajkr (Andrew Kryczka). Head commit: "Fix deprecated use of 0/NULL in internal_repo_rocksdb/repo/util/xxhash.h + 5".

Summary: `nullptr` is typesafe. `0` and `NULL` are not. In the future, only `nullptr` will be allowed. This diff helps us embrace the future _now_ in service of enabling `-Wzero-as-null-pointer-constant`.

Reviewed By: dmm-fb · Differential Revision: D55559752

### 2024-04-24: Push to `9.1.fb` (2 commits)
Pushed by ajkr (Andrew Kryczka). Head commit: "update version.h and HISTORY.md for 9.1.2".

### 2024-04-24: Fix `DisableManualCompaction()` hang (#12578)
Pushed to `main` by facebook-github-bot.

Summary: Prior to this PR the following sequence could happen:

1. `RunManualCompaction()` A schedules compaction to the thread pool and waits
2. `RunManualCompaction()` B waits without scheduling anything due to conflict
3. `DisableManualCompaction()` bumps `manual_compaction_paused_` and wakes up both
4. `RunManualCompaction()` A (`scheduled && !unscheduled`) unschedules its compaction and marks itself done
5. `RunManualCompaction()` B (`!scheduled && !unscheduled`) schedules compaction to the thread pool
6. `RunManualCompaction()` B (`scheduled && !unscheduled`) waits on its compaction
7. `RunManualCompaction()` B at some point wakes up and finishes, either by unscheduling or by compaction execution
8. `DisableManualCompaction()` returns as there are no more manual compactions running

Between 6 and 7 the wait can be long while the compaction sits in the thread pool queue. That wait is unnecessary. This PR changes the behavior from step 5 onward:

5'. `RunManualCompaction()` B (`!scheduled && !unscheduled`) marks itself done
6'. `DisableManualCompaction()` returns as there are no more manual compactions running

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12578

Reviewed By: cbi42 · Differential Revision: D56528144 · Pulled By: ajkr
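For reference, a sketch of the call pair involved; the background thread and tiny dataset are illustrative, and whether `CompactRange()` returns OK or Incomplete here depends on timing.

```cpp
#include <iostream>
#include <thread>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/manual_compaction_demo", &db);
  if (!s.ok()) return 1;
  db->Put(rocksdb::WriteOptions(), "a", "1");

  // A manual compaction running (or queued) in the background...
  std::thread worker([&] {
    rocksdb::Status cs =
        db->CompactRange(rocksdb::CompactRangeOptions(), nullptr, nullptr);
    std::cout << "CompactRange: " << cs.ToString() << std::endl;
  });

  // ...is asked to bail out; this call waits until no manual compaction is
  // running, which is the wait the PR shortens for conflicted requests.
  db->DisableManualCompaction();
  worker.join();
  db->EnableManualCompaction();  // allow manual compactions again
  delete db;
  return 0;
}
```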
### 2024-04-23: Enable block_align in crash test (#12560)
Pushed to `main` by facebook-github-bot.

Summary: After https://github.com/facebook/rocksdb/pull/12542 there should be no blocker to re-enabling block_align in the crash test.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12560

Test Plan: CI

Reviewed By: jowlyzhang · Differential Revision: D56479173 · Pulled By: hx235

(Older activity continues on the next page.)