{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":644742603,"defaultBranch":"main","name":"cloudberrydb","ownerLogin":"cloudberrydb","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2023-05-24T06:58:16.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/119398105?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1715236581.0","currentOid":""},"activityList":{"items":[{"before":"9b9fd5bbc90faa8b7ea059598ba0d98da431e103","after":"c226e0c819cd34a32c1f21afa432fdd6cdcb6edd","ref":"refs/heads/main","pushedAt":"2024-05-24T09:15:38.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"CPP keywords should not be used as function/parameter names (#449)\n\nAlso extern \"C\" will not work in this case. Current change remove the delete which defined as parameter.","shortMessageHtmlLink":"CPP keywords should not be used as function/parameter names (#449)"}},{"before":"d678c248612c3a431ac88c015c1193b53fd2ac02","after":"9b9fd5bbc90faa8b7ea059598ba0d98da431e103","ref":"refs/heads/main","pushedAt":"2024-05-24T02:45:31.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"PendingDelete: expand the pending deletes interface (#442)\n\nThe pending deletes in CBDB can only mount the relfilenode.\r\n\r\nCurrent change expand the pending deletes interface, make self-defined structure can be mount in the\r\npending delete list, also can use the self-defined callback decide how to delete the resource. It's very\r\nhelper for the UFile or other extension which will use the different local/remote resource.","shortMessageHtmlLink":"PendingDelete: expand the pending deletes interface (#442)"}},{"before":"1b0e01f7447b1441243264856ea4cc932b733ed1","after":"d678c248612c3a431ac88c015c1193b53fd2ac02","ref":"refs/heads/main","pushedAt":"2024-05-23T06:34:39.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Expand a new external var tag (#443)\n\nExternal toast in CBDB have the fixed structure, the vartag_external used to tell which way to detoast it.\r\n\r\nIf want to add an external toast implementation in the extension without changing the kernel, then need to\r\nadd a new tag in vartag_external.\r\n\r\nThe current change defines an extension generic tag which named VARTAG_CUSTOM, this kind of tag is not\r\nused in the kernel, which means that the datum returned from the extension should not be a toast with this kind\r\nof tag. This tag is only used within the extension.","shortMessageHtmlLink":"Expand a new external var tag (#443)"}},{"before":"fd453bfd47f63e44f32e775aeb65d454dc20e681","after":"1b0e01f7447b1441243264856ea4cc932b733ed1","ref":"refs/heads/main","pushedAt":"2024-05-22T03:32:36.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"avamingli","name":"Zhang Mingli","path":"/avamingli","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/17311022?s=80&v=4"},"commit":{"message":"[AQUMV] Support DISTINCT ON clause on origin query.\n\nSince we have supported ORDER BY caluse and DISTINCT ON\nclause references are processed in target list, open it\non origin query.\n\ncreate incremental materialized view mv as\n select c1 as mc1, c2 as mc2, c3 as mc3, c4 as mc4\n from t1 where c1 > 90;\n\nOrigin querys:\n\n select DISTINCT ON(c1 - 1) c1, c2 from t1 where c1 > 90\n order by c1 - 1, c2 nulls first;\n\nCould be rewritten to:\n\n select DISTINCT ON(mc1 - 1) mc1, mc2 from mv\n order by mc1 - 1, mc2 nulls first;\n\nAuthored-by: Zhang Mingli avamingli@gmail.com","shortMessageHtmlLink":"[AQUMV] Support DISTINCT ON clause on origin query."}},{"before":"6bf35c89d5c45e61006283e4f75d9aebcbe73939","after":"fd453bfd47f63e44f32e775aeb65d454dc20e681","ref":"refs/heads/main","pushedAt":"2024-05-17T06:44:10.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"[AQUMV] Support DISTINCT clause on origin query.\n\nSELECT DISTINCT clause references are processed in target\nlist, open it on origin query.\nDISTINCT in aggregation and Group By DISTINCT are already\nsupported, add cases to verify that.\n\ncreate incremental materialized view mv as\n select c1 as mc1, c2 as mc2, c3 as mc3, c4 as mc4\n from t1 where c1 > 90;\n\nOrigin querys:\n\n select DISTINCT c2 from t1 where c1 > 90;\n\n select count(DISTINCT c2) from t1 where c1 > 90;\n\n select c1, c2, c3, sum(c4) from t1 where c1 > 90\n group by DISTINCT rollup(c1, c2), rollup(c1, c3);\n\nCould be rewritten to:\n\n select DISTINCT mc2 from mv;\n\n select count(DISTINCT mc2) from mv;\n\n select mc1, mc2, mc3, sum(mc4) from mv\n group by DISTINCT rollup(mc1, mc2), rollup(mc1, mc3);\n\nAuthored-by: Zhang Mingli avamingli@gmail.com","shortMessageHtmlLink":"[AQUMV] Support DISTINCT clause on origin query."}},{"before":"1347cd6901d02fa5f828f28c7468881c478db57e","after":"6bf35c89d5c45e61006283e4f75d9aebcbe73939","ref":"refs/heads/main","pushedAt":"2024-05-16T11:41:45.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"avamingli","name":"Zhang Mingli","path":"/avamingli","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/17311022?s=80&v=4"},"commit":{"message":"Fix explain(locus) issues.\n\nFix typo and locus null as below.\n\nexplain (costs off, locus)\nselect * from dedup_reptab r where r.a in (select t.a/10 from dedup_tab\nt);\n QUERY PLAN\n------------------------------------------------------------------------\n Gather Motion 3:1 (slice1; segments: 3)\n Locus: Entry\n -> Result\n Locus: Strewn\n -> Unique\n Locus: NULL\n Group Key: (RowIdExpr)\n -> Sort\n Locus: NULL\n Sort Key (Distinct): (RowIdExpr)\n -> Redistribute Motion 3:3 (slice2; segments: 3)\n Locus: Hashed\n Hash Key: (RowIdExpr)\n -> Hash Join\n Locus: Hashed\n Hash Cond: ((t.a / 10) = r.a)\n -> Seq Scan on dedup_tab t\n Locus: Hashed\n -> Hash\n Locus: Replicated\n -> Broadcast Motion 1:3\n(slice3; segments: 1)\n Locus: Replicated\n -> Seq Scan on\ndedup_reptab r\n Locus: SingleQE\n\nAuthored-by: Zhang Mingli avamingli@gmail.com","shortMessageHtmlLink":"Fix explain(locus) issues."}},{"before":"9a630392a94ec29790a25174b851f6e64be63e21","after":"1347cd6901d02fa5f828f28c7468881c478db57e","ref":"refs/heads/main","pushedAt":"2024-05-16T10:01:24.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Refactor cbload to gpdirtableload with python.\n\nWe refactor cbload with python language which is more friendly to\nkernel. What's more, we make a minor fix with respect to directory\ntable check in copy from.","shortMessageHtmlLink":"Refactor cbload to gpdirtableload with python."}},{"before":"9274a53490bc1a85f3f103054edd420cc48bd026","after":"9a630392a94ec29790a25174b851f6e64be63e21","ref":"refs/heads/main","pushedAt":"2024-05-16T09:58:25.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"[ORCA] Fix flaky \"Invalid key is inaccessible\" fallback (#15147)\n\nIn CI pipeline there were occassional test failures due to ORCA fallback\r\nwith following stacktrace.\r\n\r\n ```\r\n +INFO: GPORCA failed to produce a plan, falling back to planner\r\n +DETAIL: CSyncHashtable.h:109: Failed assertion: IsValid(key) && \"Invalid key is inaccessible\"\r\n +Stack trace:\r\n +1 gpos::CException::Raise + 227\r\n +2 + 15235666\r\n +3 gpos::CMemoryPoolManager::CreateMemoryPool + 653\r\n +4 gpos::CAutoMemoryPool::CAutoMemoryPool + 34\r\n +5 gpopt::CColumnFactory::CColumnFactory + 80\r\n +6 gpopt::COptCtxt::PoctxtCreate + 77\r\n +7 gpopt::CAutoOptCtxt::CAutoOptCtxt + 54\r\n +8 gpopt::COptimizer::PdxlnOptimize + 411\r\n +9 COptTasks::OptimizeTask + 850\r\n +10 gpos::CTask::Execute + 52\r\n +11 gpos::CWorker::Execute + 36\r\n +12 gpos::CAutoTaskProxy::Execute + 97\r\n +13 gpos_exec + 557\r\n ```\r\n\r\nCore dump of failure showed CMemoryPool::m_hash_key had invalid key\r\nvalue 0xffffffff. Hence, the query raised an assertion error and fell\r\nback to PLANNER.\r\n\r\nIssue is that CMemoryPool::m_hash_key was never directly initialized.\r\nThis suggests that it was using uninitialized memory to produce\r\nrandomness in the key. When that memory contains 0xffffffff in just the\r\nright place, then the value of the CMemoryPool::m_hash_key is an invalid\r\nkey and ORCA falls back.\r\n\r\nFollowing is patch that demonstrates the issue:\r\n ```\r\n diff src/backend/utils/mmgr/aset.c\r\n @@ -989,6 +989,8 @@ AllocSetAlloc(MemoryContext context, Size size)\r\n\r\n MEMORY_ACCOUNT_INC_ALLOCATED(set, chunk->size);\r\n\r\n + memset((char *) AllocChunkGetPointer(chunk), 0xFFFFFFFF, size);\r\n +\r\n return AllocChunkGetPointer(chunk);\r\n }\r\n ```\r\n\r\nA few lines above that patch, you can see that when compiled with\r\nRANDOMIZE_ALLOCATED_MEMORY the memory is randomly initialied. So we can\r\nmake no assumptions about the uninitialied memory; meaning that 0xffffff\r\nis valid.\r\n\r\nNote: Seemed this failure manifested more commonly with JIT ICW runs.\n(cherry picked from commit 2c7152f46aced9328d86dc1025d0395fcf467455)","shortMessageHtmlLink":"[ORCA] Fix flaky \"Invalid key is inaccessible\" fallback (#15147)"}},{"before":"0592389e180c589695ee6492f04882ffcf5a8b14","after":"9274a53490bc1a85f3f103054edd420cc48bd026","ref":"refs/heads/main","pushedAt":"2024-05-16T02:18:18.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Fix checking password file permissions in dbconn.py (#438)\n\nThe mode is expressed in octal which causes warning.","shortMessageHtmlLink":"Fix checking password file permissions in dbconn.py (#438)"}},{"before":"30da5b4907c9eb91f50fb3d5e9cf5714c02cb2dc","after":"0592389e180c589695ee6492f04882ffcf5a8b14","ref":"refs/heads/main","pushedAt":"2024-05-15T13:41:56.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"gfphoenix78","name":"Hao Wu","path":"/gfphoenix78","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/37101401?s=80&v=4"},"commit":{"message":"Fix motion toast error. (#436)\n\nslot->tts_isnull cannot be read before invoking slot_getallattrs.\r\n\r\nFix Github Issue 16906.\r\n\r\nAuthored-by: wenxing.yaun \r\n\r\nCo-authored-by: HelloYJohn ","shortMessageHtmlLink":"Fix motion toast error. (#436)"}},{"before":"9053b816ffc6be3f30d3fe2471d96c8d0b04971f","after":"30da5b4907c9eb91f50fb3d5e9cf5714c02cb2dc","ref":"refs/heads/main","pushedAt":"2024-05-14T02:08:25.000Z","pushType":"pr_merge","commitsCount":24,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Fix Merge GPDB.\n\nSome codes do not work in CBDB after Merge from GPDB.\nFix errors and etc.\n\nAuthored-by: Zhang Mingli avamingli@gmail.com","shortMessageHtmlLink":"Fix Merge GPDB."}},{"before":"27dc1241ba8fd9fcededb1d3832a823abe7c3ba6","after":"9053b816ffc6be3f30d3fe2471d96c8d0b04971f","ref":"refs/heads/main","pushedAt":"2024-05-13T02:21:35.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"avamingli","name":"Zhang Mingli","path":"/avamingli","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/17311022?s=80&v=4"},"commit":{"message":"Remove cbload relevant codes.\n\nAs cbload is implemented by go language which is not friendly for\ncompilation, we remove it's relevant codes. We will refactor cbload\nby python or other languages.","shortMessageHtmlLink":"Remove cbload relevant codes."}},{"before":"2b8815bce07e6a76d04e18bb1acf5afe9ab9ba8f","after":"27dc1241ba8fd9fcededb1d3832a823abe7c3ba6","ref":"refs/heads/main","pushedAt":"2024-05-13T02:15:20.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Add function cbdb_relation_size (#428)\n\nAdd function cbdb_relation_size\r\n\r\nIt can be used to fetch the size of a batch of relations as below\r\n\r\nSELECT * FROM\r\ncbdb_relation_size((SELECT array_agg(oid) FROM pg_class));\r\n\r\nIt has better performance than pg_relation_size in such case, more details\r\nsee the comment on the function\r\n\r\nCo-authored-by: Xiaoran Wang ","shortMessageHtmlLink":"Add function cbdb_relation_size (#428)"}},{"before":"d32c144dbc7c9a3bc870b58fd2fd4c49e583b873","after":"2b8815bce07e6a76d04e18bb1acf5afe9ab9ba8f","ref":"refs/heads/main","pushedAt":"2024-05-10T02:29:55.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Fix drop directory privilege check.\n\nNow, when we drop directory table we will check whether current user\nhas tablespace's privilege which is not reasonable. In this\ncommit, we will check directory table's privilege in drop directory\ntable.","shortMessageHtmlLink":"Fix drop directory privilege check."}},{"before":"80a0a073d2e63acdbe6320470318ae7377e181b2","after":"d32c144dbc7c9a3bc870b58fd2fd4c49e583b873","ref":"refs/heads/main","pushedAt":"2024-05-10T01:51:58.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"tuhaihe","name":"Dianjin Wang","path":"/tuhaihe","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1284465?s=80&v=4"},"commit":{"message":"Update the googletest module URL\n\nThe googletest module code URL cannot be accessed by the community\nmembers because the original URL is not public. Change it to the public\nURL to make it available to community developers.","shortMessageHtmlLink":"Update the googletest module URL"}},{"before":"866f6651ec0dedfe5b2beaa941502eccbd68de3a","after":"80a0a073d2e63acdbe6320470318ae7377e181b2","ref":"refs/heads/main","pushedAt":"2024-05-09T10:35:00.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"tuhaihe","name":"Dianjin Wang","path":"/tuhaihe","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1284465?s=80&v=4"},"commit":{"message":"Fix visimap consults for unique checks during UPDATEs\n\nThis fixes #17183.\n\nWhen consulting the visimap during an UPDATE for the purposes of\nuniqueness checks, we used to refer to the visimap from the delete half\nof the update.\n\nThis is the wrong structure to look at as this structure is not meant to\nbe consulted while deletes are in flight (we haven't reached\nend-of-delete where visibility info from the visimapDelete structure\nflows into the catalog).\n\nInstead, we should be consulting the visimapDelete structure attached to\nthe deleteDesc. This structure can handle visimap queries for tuples\nthat have visimap changes that haven't yet been persisted to the catalog\ntable.\n\nThe effect of not using this structure meant running into issues such\nas: \"attempted to update invisible tuple\" when we would attempt to\npersist a dirty visimap entry in AppendOnlyVisimap_IsVisible() with a\ncall to AppendOnlyVisimap_Store(). The dirty entry is one which was\nintroduced by the delete half of the update. Our existing test did not\ntrip this issue because the update did not need a swap-out of the\ncurrent entry. With enough data, however, the issue reproduces, as\nevidenced in #17183.\n\nCo-authored-by: Ashwin Agrawal \nReviewed-by: Haolin Wang ","shortMessageHtmlLink":"Fix visimap consults for unique checks during UPDATEs"}},{"before":"9951bfb3a1e987882ee317175e011fd5694676c1","after":"866f6651ec0dedfe5b2beaa941502eccbd68de3a","ref":"refs/heads/main","pushedAt":"2024-05-09T10:20:43.000Z","pushType":"pr_merge","commitsCount":8,"pusher":{"login":"tuhaihe","name":"Dianjin Wang","path":"/tuhaihe","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1284465?s=80&v=4"},"commit":{"message":"Revert \"Replace scp with rsync (#14145)\"\n\nThis reverts commit a32ef6be3993944d4e8ac45430aaf7a47d3b7cf7.","shortMessageHtmlLink":"Revert \"Replace scp with rsync (#14145)\""}},{"before":"9b68d1b9e110e67eea575eabf4b845d5bf37634c","after":null,"ref":"refs/heads/revert-424-gpcheckperf","pushedAt":"2024-05-09T10:20:43.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"tuhaihe","name":"Dianjin Wang","path":"/tuhaihe","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1284465?s=80&v=4"}},{"before":null,"after":"9b68d1b9e110e67eea575eabf4b845d5bf37634c","ref":"refs/heads/revert-424-gpcheckperf","pushedAt":"2024-05-09T06:36:21.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Revert \"Replace scp with rsync (#14145)\"\n\nThis reverts commit a32ef6be3993944d4e8ac45430aaf7a47d3b7cf7.","shortMessageHtmlLink":"Revert \"Replace scp with rsync (#14145)\""}},{"before":"80a0a073d2e63acdbe6320470318ae7377e181b2","after":"9951bfb3a1e987882ee317175e011fd5694676c1","ref":"refs/heads/main","pushedAt":"2024-05-09T06:08:51.000Z","pushType":"pr_merge","commitsCount":8,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Fixing if the time command has comma in the output (#17207)\n\nChanges to fix gpcheckperf failure with an exception when sometime time command output\r\nhas a comma separator instead of a dot.\r\nSteps to reproduce issue:\r\n\r\n$export LC_ALL=de_DE_utf8\r\n$time sleep 1\r\nreal\t0m1,021s\r\nuser\t0m0,001s\r\nsys\t0m0,005s\r\n\r\nFix: added check if comma present in the time output, replace it with a dot and continue parsing.\r\n\r\nTesting: Added unit tests to check the output of parseMultiDDResult() in case of comma and dot.","shortMessageHtmlLink":"Fixing if the time command has comma in the output (#17207)"}},{"before":"78ce115a6dc3c717c9cfb02e987096f4c97fa709","after":"80a0a073d2e63acdbe6320470318ae7377e181b2","ref":"refs/heads/main","pushedAt":"2024-05-08T07:55:55.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Fix visimap consults for unique checks during UPDATEs\n\nThis fixes #17183.\n\nWhen consulting the visimap during an UPDATE for the purposes of\nuniqueness checks, we used to refer to the visimap from the delete half\nof the update.\n\nThis is the wrong structure to look at as this structure is not meant to\nbe consulted while deletes are in flight (we haven't reached\nend-of-delete where visibility info from the visimapDelete structure\nflows into the catalog).\n\nInstead, we should be consulting the visimapDelete structure attached to\nthe deleteDesc. This structure can handle visimap queries for tuples\nthat have visimap changes that haven't yet been persisted to the catalog\ntable.\n\nThe effect of not using this structure meant running into issues such\nas: \"attempted to update invisible tuple\" when we would attempt to\npersist a dirty visimap entry in AppendOnlyVisimap_IsVisible() with a\ncall to AppendOnlyVisimap_Store(). The dirty entry is one which was\nintroduced by the delete half of the update. Our existing test did not\ntrip this issue because the update did not need a swap-out of the\ncurrent entry. With enough data, however, the issue reproduces, as\nevidenced in #17183.\n\nCo-authored-by: Ashwin Agrawal \nReviewed-by: Haolin Wang ","shortMessageHtmlLink":"Fix visimap consults for unique checks during UPDATEs"}},{"before":"f73e8c659787fef529f5927175839d4a65f324e4","after":"78ce115a6dc3c717c9cfb02e987096f4c97fa709","ref":"refs/heads/main","pushedAt":"2024-04-28T09:03:33.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Fix copy from directory table. (#416)\n\nWe use temporary tupledesction in copy from directory table, which\r\nwill have no side effect on relcache.","shortMessageHtmlLink":"Fix copy from directory table. (#416)"}},{"before":"23fece736cc91d6056bda7c036897711293d9915","after":"f73e8c659787fef529f5927175839d4a65f324e4","ref":"refs/heads/main","pushedAt":"2024-04-28T03:51:24.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Disable dump pax tables in pg_dump (#412)\n\nPax is not working properly in pg_dump/pg_restore.\r\n\r\nThere are several problems:\r\nThe pax relative table in namespace pg_ext_aux won't be dump.\r\nThe table access method pax have not been setting in pg_dump.\r\nCurrent change ignore pax table in pg_dump.","shortMessageHtmlLink":"Disable dump pax tables in pg_dump (#412)"}},{"before":"20add92be713d0884ea8f8d6fb6a104ecbbeeb21","after":"23fece736cc91d6056bda7c036897711293d9915","ref":"refs/heads/main","pushedAt":"2024-04-26T02:05:25.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Fix directory table ci pipeline problems. (#414)\n\nKeep test env clean to make sure test pass.","shortMessageHtmlLink":"Fix directory table ci pipeline problems. (#414)"}},{"before":"09ed012ca0acd72023d8515f00953f4985c336ea","after":"20add92be713d0884ea8f8d6fb6a104ecbbeeb21","ref":"refs/heads/main","pushedAt":"2024-04-24T15:02:50.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Implement Directory Table.\n\nImplement directory table feature in this commit. Directory table is a new\nrelation which used to organize the unstructured data files in the specified\ntablespace. The date files are stored in the specified tablespace while\nthe tuples recorded the metadata of the data files such as relative_path, md5\nsize etc. are stored in normal table.\n\nWe support local directory table and remote directory table meanwhile. The\nlocal directory table uses the local tablespace while the remote directory\ntable uses the DFS tablespace which implemented in our enterprise extension.\n\nWe support copy binary from to upload file to directory table, directory_table\nUDF to get file content, remove_file UDF to remove file from directory table.\nWhat's more, we implement a tool called cbload used to upload file to direcotry\ntable. Meanwhile, to support DFS directory table, we also import some catalog\ntables such as gp_storage_server, gp_storage_user_mapping which are shared in\nall databases.\n\nWe will illustrage some examples for your convinence of usage as follow.\n\n-- Create an oss_server that points to endpoint:\nCREATE STORAGE SERVER oss_server OPTIONS\n(protocol 'qingstor', endpoint 'pek3b.qingstor.com', https 'true', virtual_host 'false');\n\n-- Create a user mapping to access oss_server\nCREATE STORAGE USER MAPPING FOR CURRENT_USER STORAGE SERVER oss_server OPTIONS\n(accesskey 'KGCPPHVCHRDSYFEAWLLC', secretkey '0SJIWiIATh6jOlmAas23q6hOAGBI1BnsnvgJmTs');\n\n-- Create a local tablespace\nCREATE TABLESPACE dirtable_spc location '/data/dirtable_spc';\n\n-- Create a local directory table\nCREATE DIRECTORY TABLE dirtable TABLESPACE dirtable_spc;\n\n-- Copy binary from directory table\nCOPY BINARY dirtable FROM '/data/file1.csv' 'file1';\n\n-- Select directory table\nSELECT * FROM dirtable;\nSELECT * FROM directory_table('dirtable');\n\n-- Remove file from directory table\nSELECT remove_file('dirtable', 'file1');\n\nCo-authored-by: Mu Guoqing muguoqing@hashdata.cn\nReviewd-by: Yang Yu yangyu@hashdata.cn\n Yang Jianghua yjhjstz@gmail.com","shortMessageHtmlLink":"Implement Directory Table."}},{"before":"eaf462951fea6ba85364e4643163f38374b83ed9","after":"09ed012ca0acd72023d8515f00953f4985c336ea","ref":"refs/heads/main","pushedAt":"2024-04-23T07:27:41.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Fix: pgrx cannot find function after numeric change interface (#410)\n\nAfter CBDB public part of numeric defines, then numeric_is_nan/numeric_is_inf have been replace with\r\nmacro NUMERIC_IS_NAN/NUMERIC_IS_INF.\r\n\r\nBut some of extension may not write by c/c++, then it can't direct call the macro NUMERIC_IS_NAN/NUMERIC_IS_INF. \r\nSo current change add these function back.","shortMessageHtmlLink":"Fix: pgrx cannot find function after numeric change interface (#410)"}},{"before":"188cc1db0fa23e056c3522f7e218e6b075f46c26","after":"eaf462951fea6ba85364e4643163f38374b83ed9","ref":"refs/heads/main","pushedAt":"2024-04-23T02:16:22.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"tuhaihe","name":"Dianjin Wang","path":"/tuhaihe","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1284465?s=80&v=4"},"commit":{"message":"Doc: update the deployment README.md\n\nWe have updated the old documents and improved the formatting to make\nthem more user-friendly. Users can deploy the Cloudberry Database by\nfollowing these steps.","shortMessageHtmlLink":"Doc: update the deployment README.md"}},{"before":"143b3df943f793df70ff20b6274c489daaf1a022","after":"188cc1db0fa23e056c3522f7e218e6b075f46c26","ref":"refs/heads/main","pushedAt":"2024-04-19T03:55:04.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"tuhaihe","name":"Dianjin Wang","path":"/tuhaihe","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1284465?s=80&v=4"},"commit":{"message":"Doc: update the README.md\n\nUpdate the description and information in README.md to give everyone a\nbetter understanding of the Cloudberry Database.","shortMessageHtmlLink":"Doc: update the README.md"}},{"before":"d189fd02d7d5d6aed5ed2887c8f2933954a85169","after":"143b3df943f793df70ff20b6274c489daaf1a022","ref":"refs/heads/main","pushedAt":"2024-04-18T10:42:37.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Add GUC 'gp_random_insert_segments' to control the segments used for random distributed table insertion (#406)\n\nIntroduces the 'gp_random_insert_segments' GUC to reduce the generation of\r\nexcessive fragmented files during the insertion of small amounts of data into\r\nclusters with a large number of segments (e.g., 1000 records into 100 segments).\r\n\r\nFragmented data insertion can significantly degrade performance, especially\r\nwhen using append-optimized or cloud-based storage. By introducing\r\nthe 'gp_random_insert_segments' GUC, users can limit the number of segments\r\nused for data insertion in randomly distributed tables, which can significantly\r\nreduce fragmented files.","shortMessageHtmlLink":"Add GUC 'gp_random_insert_segments' to control the segments used for …"}},{"before":"7c0423ce506c86bbbbb638ba2ddce614118ae9b9","after":"d189fd02d7d5d6aed5ed2887c8f2933954a85169","ref":"refs/heads/main","pushedAt":"2024-04-18T02:00:29.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"my-ship-it","name":"Max Yang","path":"/my-ship-it","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/79948451?s=80&v=4"},"commit":{"message":"Fix: make enough out data buffer when call EVP_DecryptUpdate (#479) (#408)\n\nIf padding is enabled the decrypted data buffer out\r\npassed to EVP_DecryptUpdate() should have sufficient room for\r\n (inl + cipher_block_size) bytes.\r\nMore detail information in https://www.openssl.org/docs/man3.1/man3/EVP_DecryptUpdate.html\r\n\r\nCo-authored-by: kongfanshen ","shortMessageHtmlLink":"Fix: make enough out data buffer when call EVP_DecryptUpdate (#479) (#…"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEUtqgrgA","startCursor":null,"endCursor":null}},"title":"Activity · cloudberrydb/cloudberrydb"}