{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":44781140,"defaultBranch":"main","name":"gpdb","ownerLogin":"greenplum-db","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2015-10-23T00:25:17.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/14097842?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1715674955.0","currentOid":""},"activityList":{"items":[{"before":"a844e147e36fb236e8f7d11a37b3bbeaee0c7f86","after":"1825c136dba56f9c4945924e9520f6fa3416729e","ref":"refs/heads/main","pushedAt":"2024-05-24T23:33:28.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"khuddlefish","name":"Karen Huddleston","path":"/khuddlefish","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/4099765?s=80&v=4"},"commit":{"message":"Reject substituting extension schemas or owners matching [\"$'\\].\n\nSubstituting such values in extension scripts facilitated SQL injection\nwhen @extowner@, @extschema@, or @extschema:...@ appeared inside a\nquoting construct (dollar quoting, '', or \"\"). No bundled extension was\nvulnerable. Vulnerable uses do appear in a documentation example and in\nnon-bundled extensions. Hence, the attack prerequisite was an\nadministrator having installed files of a vulnerable, trusted,\nnon-bundled extension. Subject to that prerequisite, this enabled an\nattacker having database-level CREATE privilege to execute arbitrary\ncode as the bootstrap superuser. By blocking this attack in the core\nserver, there's no need to modify individual extensions. Back-patch to\nv11 (all supported versions).\n\nReported by Micah Gate, Valerie Woolard, Tim Carey-Smith, and Christoph\nBerg.\n\nSecurity: CVE-2023-39417\n(cherry picked from commit eb044d8f0aee1ba4950b0867f6ca9328374318db)","shortMessageHtmlLink":"Reject substituting extension schemas or owners matching [\"$'\\]."}},{"before":"7143be29dbef9a3f9903821afeaf965998093f29","after":"a844e147e36fb236e8f7d11a37b3bbeaee0c7f86","ref":"refs/heads/main","pushedAt":"2024-05-24T23:33:13.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"khuddlefish","name":"Karen Huddleston","path":"/khuddlefish","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/4099765?s=80&v=4"},"commit":{"message":"Detect integer overflow while computing new array dimensions.\n\narray_set_element() and related functions allow an array to be\nenlarged by assigning to subscripts outside the current array bounds.\nWhile these places were careful to check that the new bounds are\nallowable, they neglected to consider the risk of integer overflow\nin computing the new bounds. In edge cases, we could compute new\nbounds that are invalid but get past the subsequent checks,\nallowing bad things to happen. Memory stomps that are potentially\nexploitable for arbitrary code execution are possible, and so is\ndisclosure of server memory.\n\nTo fix, perform the hazardous computations using overflow-detecting\narithmetic routines, which fortunately exist in all still-supported\nbranches.\n\nThe test cases added for this generate (after patching) errors that\nmention the value of MaxArraySize, which is platform-dependent.\nRather than introduce multiple expected-files, use psql's VERBOSITY\nparameter to suppress the printing of the message text. v11 psql\nlacks that parameter, so omit the tests in that branch.\n\nOur thanks to Pedro Gallegos for reporting this problem.\n\nSecurity: CVE-2023-5869\n(cherry picked from commit 18b585155a891784ca8985f595ebc0dde94e0d43)","shortMessageHtmlLink":"Detect integer overflow while computing new array dimensions."}},{"before":"0d8226d68e1235b615d0e9636ea63b1cae4ed9bb","after":"7143be29dbef9a3f9903821afeaf965998093f29","ref":"refs/heads/main","pushedAt":"2024-05-24T18:35:08.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"dgkimura","name":"David Kimura","path":"/dgkimura","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1569601?s=80&v=4"},"commit":{"message":"[ORCA] Fix stale hint parser\n\nCommit 6d768093e1 updated hint parser to execute once. Previously, ORCA\nparsed twice, in planner_hook and again in plan_hint_hook. That commit\ncreated a compilation unit scoped variable for hstate. However, it was\nmissing logic to prevent the varible from becoming stale across prepare\nstatements. Instead, let's use existing variable current_hint_state that\nhandles that for us.","shortMessageHtmlLink":"[ORCA] Fix stale hint parser"}},{"before":"482967c1b49028cf072c15935462f75bc3e4b045","after":"0d8226d68e1235b615d0e9636ea63b1cae4ed9bb","ref":"refs/heads/main","pushedAt":"2024-05-24T09:45:54.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"shrakesh","name":"Rakesh Sharma","path":"/shrakesh","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/11582689?s=80&v=4"},"commit":{"message":"Fix Addmirror valid range port error (#17524)\n\nIssue: gpaddmirrors with -p option produces ports outside of the valid rangeMirror error\r\n\r\nRCA: gpaddmirrors with -p option was considering the coordinator port\r\nnumber while calculating the mirror port, because of that it lands up\r\nerror case if the coordinator port is 5432 and calulcated mirror port\r\nwill come 5432 + 500 = 5932 which is less than allowed minPort for\r\nmirror (6432).\r\n\r\nFix: with this fix now the mirror port will be calulated from\r\nmin segment port available.\r\n\r\nfor example :\r\nCoordinator port : 5432\r\nPrimary base port : 8000\r\nMirror base port will be 8000 + offset.","shortMessageHtmlLink":"Fix Addmirror valid range port error (#17524)"}},{"before":"633407dbee491e8f782e384de063cf11810aa7a3","after":"5c881abebe73dcf48e534f54f498f11fd91799a4","ref":"refs/heads/6X_STABLE","pushedAt":"2024-05-24T06:09:15.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"soumyadeep2007","name":"Soumyadeep Chakraborty","path":"/soumyadeep2007","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/11961043?s=80&v=4"},"commit":{"message":"Allocate DatumHashTable in ANALYZE memory context\n\nInstead of allocating under TopMemoryContext, we allocate it under\nVacAttrStats->anl_context now.\n\nReviewed-by: Ashwin Agrawal \n(cherry picked from commit b88beebd8e6d09199ca26ad66f4a24dc9e092677)","shortMessageHtmlLink":"Allocate DatumHashTable in ANALYZE memory context"}},{"before":"cfa141f42ea3cef312e16013c0f43e44f0d647ba","after":"482967c1b49028cf072c15935462f75bc3e4b045","ref":"refs/heads/main","pushedAt":"2024-05-23T15:54:35.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"Annu149","name":"Annpurna Shahani","path":"/Annu149","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/30636132?s=80&v=4"},"commit":{"message":"Fix compiler warnings\n\nRemoved strerror(errno) from error messages as it\nis not required for argument checks.","shortMessageHtmlLink":"Fix compiler warnings"}},{"before":"b88beebd8e6d09199ca26ad66f4a24dc9e092677","after":"cfa141f42ea3cef312e16013c0f43e44f0d647ba","ref":"refs/heads/main","pushedAt":"2024-05-23T06:38:02.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"higuoxing","name":"Xing Guo","path":"/higuoxing","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/21099318?s=80&v=4"},"commit":{"message":"Add missing volatile qualifier. (#17521)\n\nAccording to C99 7.13.2.1[^1],\r\n\r\n> All accessible objects have values, and all other components of the\r\nabstract machine have state, as of the time the longjmp function was\r\ncalled, except that the values of objects of automatic storage duration\r\nthat are local to the function containing the invocation of the\r\ncorresponding setjmp macro that do not have volatile-qualified type and\r\nhave been changed between the setjmp invocation and longjmp call are\r\nindeterminate.\r\n\r\nThe object oldcontext is changed in line 1194 (inside PG_TRY() block)\r\nand read in line 1434 (inside PG_CATCH() block). We should qualify it\r\nwith volatile.\r\n\r\n[^1]: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf","shortMessageHtmlLink":"Add missing volatile qualifier. (#17521)"}},{"before":"a28cdeaa51a61738975d5313dc0170a0ac22dddf","after":"b88beebd8e6d09199ca26ad66f4a24dc9e092677","ref":"refs/heads/main","pushedAt":"2024-05-23T05:42:21.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"soumyadeep2007","name":"Soumyadeep Chakraborty","path":"/soumyadeep2007","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/11961043?s=80&v=4"},"commit":{"message":"Allocate DatumHashTable in ANALYZE memory context\n\nInstead of allocating under TopMemoryContext, we allocate it under\nVacAttrStats->anl_context now.\n\nReviewed-by: Ashwin Agrawal ","shortMessageHtmlLink":"Allocate DatumHashTable in ANALYZE memory context"}},{"before":"2c3443d6ae038f7b99a6aa1224a385c3751eb40f","after":"633407dbee491e8f782e384de063cf11810aa7a3","ref":"refs/heads/6X_STABLE","pushedAt":"2024-05-22T22:21:23.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"soumyadeep2007","name":"Soumyadeep Chakraborty","path":"/soumyadeep2007","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/11961043?s=80&v=4"},"commit":{"message":"mirrorless: Enable WAL optimization for COPY FROM\n\nBackported from 3abcf7c7412, with minor conflict:\nTABLE_INSERT_SKIP_WAL -> HEAP_INSERT_SKIP_WAL, since there is no table\nAM API in 6X.\n\nAlso, the xlogdump output looks slightly different:\n\nWithout patch:\nrmgr: Storage len (rec/tot): 16/ 48, tx: 0, lsn: 0/05E2C7A0, prev 0/05E2C750, bkp: 0000, desc: file create: base/12812/16385\npg_xlogdump: FATAL: error in WAL record at 0/5E2DF60: invalid magic number 0000 in log segment 000000000000000000000001, offset 31653888\nrmgr: Heap2 len (rec/tot): 56/ 88, tx: 715, lsn: 0/05E2DA28, prev 0/05E2D9E0, bkp: 0000, desc: multi-insert (init): rel 1663/12812/16385; blk 0; 3 tuples\n\nWith patch (multi-insert record avoided):\nrmgr: Storage len (rec/tot): 16/ 48, tx: 0, lsn: 0/05E300F0, prev 0/05E300C8, bkp: 0000, desc: file create: base/12812/24576\n\nOriginal commit message follows:\n\nCopyFrom has an optimization where WAL can be avoided if the COPY is in\nthe same transaction as the CREATE and the data is being written to the\nsame relfilenode created in this transaction.\n\nUnfortunately, this optimization was ifdefed out, due to legacy\nassumptions about our inability to support wal_level = minimal.\n\nHere is an example in a mirrorless demo cluster with wal_level =\nminimal:\n\nBEGIN;\nCREATE TABLE foo(i int) DISTRIBUTED REPLICATED;\nCOPY foo FROM PROGRAM 'seq 1 3';\nCOMMIT;\n\nWithout patch, WAL for this table:\nrmgr: Storage len (rec/tot): 46/ 46, tx: 0, lsn: 0/0C010D30, prev 0/0C010D10, desc: CREATE base/13720/32768; smgr: heap\nrmgr: Heap2 len (rec/tot): 86/ 86, tx: 540, lsn: 0/0C023C20, prev 0/0C023B88, desc: MULTI_INSERT+INIT 3 tuples flags 0x02, blkref #0: rel 1663/13720/32768 blk 0\n\nWith patch, WAL for this table (MULTI_INSERT record not emitted):\nrmgr: Storage len (rec/tot): 46/ 46, tx: 0, lsn: 0/0C004908, prev 0/0C0048E8, desc: CREATE base/13720/24576; smgr: heap\n\nPS: AO/CO tables avoid writing WAL for all inserts in a more general\nway, and this change doesn't affect them.","shortMessageHtmlLink":"mirrorless: Enable WAL optimization for COPY FROM"}},{"before":"330d6aeac3f5c20d001956bf1d6b96b1699503ec","after":"a28cdeaa51a61738975d5313dc0170a0ac22dddf","ref":"refs/heads/main","pushedAt":"2024-05-22T00:06:21.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"higuoxing","name":"Xing Guo","path":"/higuoxing","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/21099318?s=80&v=4"},"commit":{"message":"Revert \"Fix mismatched types.\" (#17506)\n\nThis reverts commit 6fa4800539879ba21e2833835f035f2c2489ceee.\r\n\r\nThe commit is causing bugs in resque. I'm going to re-work on it.","shortMessageHtmlLink":"Revert \"Fix mismatched types.\" (#17506)"}},{"before":"78d9fe66402ee7b711660ecc4e3b2da3a49ea25d","after":"330d6aeac3f5c20d001956bf1d6b96b1699503ec","ref":"refs/heads/main","pushedAt":"2024-05-21T13:42:59.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"hpbee","name":"bhari","path":"/hpbee","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/26516462?s=80&v=4"},"commit":{"message":"[Orca] Update doc about Fallback to planner for queries with 'WITH ORDINALITY' (#17484)\n\nOrca currently doesn't support \"WITH ORDINALITY\".\r\nIt falls back to planner when a function in `FROM` clause of a `SELECT` statement uses \"WITH ORDINALITY\" clause. Fallback functionality is added in commit https://github.com/greenplum-db/gpdb/commit/99f0c829398291e0026f1628c6732021f5b7e29b.\r\n\r\nUpdated the doc `query-piv-opt-limitations.html.md` about the fallback.","shortMessageHtmlLink":"[Orca] Update doc about Fallback to planner for queries with 'WITH OR…"}},{"before":"e4a304eaf77685b0664e0bf31dae10aa0d0261cc","after":"2c3443d6ae038f7b99a6aa1224a385c3751eb40f","ref":"refs/heads/6X_STABLE","pushedAt":"2024-05-21T08:44:31.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"hpbee","name":"bhari","path":"/hpbee","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/26516462?s=80&v=4"},"commit":{"message":"[Orca] Fallback to planner if a function in 'from' clause uses 'WITH ORDINALITY' (#17480)\n\n(Backport of commit : https://github.com/greenplum-db/gpdb/commit/99f0c829398291e0026f1628c6732021f5b7e29b)\r\n\r\nIn https://github.com/greenplum-db/gpdb/issues/17461, the following sql crashes orca:\r\n\r\n`SELECT * FROM jsonb_array_elements('[\"c\", \"a\", \"b\"]'::jsonb) WITH ORDINALITY;`\r\nThat is because, Orca currently doesn't support \"WITH ORDINALITY\".\r\n\r\nFollowing error can be seen in 6X:\r\n`ERROR,\"CTranslatorUtils.cpp:526: Failed assertion: out_arg_types->Size() == (ULONG) gpdb::ListLength(col_names)\"`\r\n\r\nOrca needs work to support this, so we want to fall back for now.\r\n\r\nChange:\r\nIf \"WITH ORDINALITY\" is used, we will now fallback to postgres-based planner with explicit error message like below.\r\n`DETAIL: Feature not supported: WITH ORDINALITY`","shortMessageHtmlLink":"[Orca] Fallback to planner if a function in 'from' clause uses 'WITH …"}},{"before":"b758dff2cf1928c8af6989cee9f6a9837c027e77","after":"e4a304eaf77685b0664e0bf31dae10aa0d0261cc","ref":"refs/heads/6X_STABLE","pushedAt":"2024-05-21T08:40:52.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"hpbee","name":"bhari","path":"/hpbee","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/26516462?s=80&v=4"},"commit":{"message":"[6X] Fix orca preprocess step for query with Select-Project-NaryJoin pattern (#17476)\n\n(Backported from https://github.com/greenplum-db/gpdb/commit/d126b4e4fa6af8e1c22864b3114317c97ec805f8)\r\n\r\nIssue:\r\nWhen a query having pattern of Select-Project-NaryJoin\r\nwith Select predicate's condition also containing pattern\r\nScalarSubquery-Project-Select-Project-NaryJoin\r\nis executed, a crash was happening.\r\n\r\nRCA:\r\nDuring the preprocessing step `PexprTransposeSelectAndProject`,\r\nfunction `CollapseSelectAndReplaceColref` is called on the expr with the\r\npattern and Project expr inside the Select's predicate is being dropped,\r\nbut it's column references are not being replaced with equivalent dropped project expr.\r\nThat is because `CollapseSelectAndReplaceColref` is called recursively on\r\nthe Select's predicate. That function only takes care of removing project but\r\ndoesn't re-add the project expr on top of collapsed select.\r\n\r\nFix:\r\nNow function `CollapseSelectAndReplaceColref` is removed.\r\n\r\nFor creation of collapsed select,\r\n* First colrefs of the columns projected in the project list are replaced\r\n with their equivalent Project exprs in the Select's predicate expr.\r\n This happens in the refactored function `CUtils::ReplaceColrefWithProjectExpr`\r\n* Then the new Select predicate with replaced colrefs and the NaryJoin expr\r\n are transposed recursively with `PexprTransposeSelectAndProject`.\r\n* Collapsed Select is created using the transposed NaryJoin and transposed Select predicate.\r\n\r\nCollapsed Select is then added as a child to the CLogicalProject creating the new transposed expr.\r\n\r\nInput:\r\n+--CLogicalSelect\r\n |--CLogicalProject\r\n | +--CLogicalNAryJoin\r\n +--...\r\n +--CLogicalSelect\r\n +--CLogicalProject\r\n +--CLogicalNAryJoin\r\n\r\nOld Output:\r\n+--CLogicalProject\r\n |--CLogicalSelect\r\n | +--CLogicalNAryJoin\r\n +--...\r\n +--CLogicalSelect\r\n +--CLogicalNAryJoin\r\n\r\nFixed output:\r\n+--CLogicalProject\r\n |--CLogicalSelect\r\n | +--CLogicalNAryJoin\r\n +--...\r\n +--CLogicalProject\r\n +--CLogicalSelect\r\n +--CLogicalNAryJoin\r\n\r\n* New regression testcases are added that tests queries containing the above pattern.\r\n* New mdp testcase is added that tests a query with above pattern.","shortMessageHtmlLink":"[6X] Fix orca preprocess step for query with Select-Project-NaryJoin …"}},{"before":"88d20a66300bfed7a48eb6d6f8dcc094ba8fecee","after":"78d9fe66402ee7b711660ecc4e3b2da3a49ea25d","ref":"refs/heads/main","pushedAt":"2024-05-21T06:47:25.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"ravoorsh","name":"ravoorsh","path":"/ravoorsh","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/14228771?s=80&v=4"},"commit":{"message":"Introduce \"gp init cluster --clean\" command\n\nThis commit introduces a new option \"gp init cluster --clean\". This facilitates the user to rollback in case \"gp init cluster\" command fails.\nIf cluster creation fails then a file is created with the hostname and data directory details and when failure occurs,\ncleanCluster RPC is called which will in turn call Remove directory agent RPC which will cleanup the postgres processes and remove data directory.\n\nAdditionally, the user is also prompted for Input, asking if he wished to rollback. If the user chooses yes, then rollback is done.\nOtherwise the function exists asking the user to run \"gp init cluster --clean\" at a later point in time.\n\nHigh Level Design for rollback\n When gp init cluster fails\n\t• Check if there is an existing cleanup file and if it exits, return asking the user to run gp init cluster --clean\n\t• Create a cleanup file and add co-ordinator, primary and mirror entries to the file.\n\t• If the cluster creation is successful, then delete this file.\n\t• If cluster creation fails, then invoke the clean cluster Hub RPC\n\t• CLean CLuster RPC will check for the existence of the cleanup file.\n\t• If it exists then it proceeds to call the remove_directory agent RPC\n\t• Remove directory will check if the data directory exists or not\n\t• If it exists then the data directory is removed.\n\t• and if there are postgres processes existing, clean up the processes as well.\n\t• Return success.\n\t• CleanCluster RPC returns success if remove directory is successful.\n\t• If not return the appropriate error to user.\n\t• Remove the cleanup file.\n\n When calling \"gp init cluster --clean\" command\n\t• User executes gp init cluster --clean command\n\t• Check if cleanup file exists, error out if it doesnt\n\t• If it exits then call CleanCLuster RPC\n\t• CLean CLuster RPC will check for the existence of the cleanup file.\n\t• If it exists then it proceeds to call the remove_directory agent RPC\n\t• Remove directory will check if the data directory exists or not\n\t• If it exists then the data directory is removed.\n\t• and if there are postgres processes existing, clean up the processes as well.\n\t• Return success.\n\t• CleanCluster RPC returns success if remove directory is successful.\n\t• If not return the appropriate error to user.\n\t• Remove the cleanup file.\n\n Design of User Input.\n\t• If the \"gp init cluster is executed in the interactive mode, then the user is asked if he wants to perform rollback/Cleanup\n\t• If the gp init cluster command fails,\n\t• Ask the user if he wishes to rollback\n\t• If the user types yes, the clean cluster RPC and remove_directory RPCs are called.\n\t• If the user types no, then the command exists asking the user to run \"gp init cluster --clean\" command","shortMessageHtmlLink":"Introduce \"gp init cluster --clean\" command"}},{"before":"bea611c59d9307d47355636961c87024449fa1ae","after":"b758dff2cf1928c8af6989cee9f6a9837c027e77","ref":"refs/heads/6X_STABLE","pushedAt":"2024-05-21T06:00:55.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"soumyadeep2007","name":"Soumyadeep Chakraborty","path":"/soumyadeep2007","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/11961043?s=80&v=4"},"commit":{"message":"gpexpand: TRUNCATE coordinator-only tables for cleanup\n\nBackported from f55885f9b4e98c3978301ba56748453f91ece51d with some\nsignificant conflicts:\n\n1. Coordinator -> Master\n2. The set of master-only tables is different on 6X, and so is the list\nof mapped vs non-mapped tables. We should see bigger benefits for 6X,\ngiven pg_partition* and pg_statistic tables fall in the master-only\ntruncate-able category.\n3. For some odd reason, gp_segment_configuration was originally listed\ntwice under MASTER_ONLY_TABLES. Probably missed during\n06c0558f035c1f758e438f95211623cfd8ce9ce9.\n\nOriginal commit message follows:\n\nFor a subset of coordinator only tables (non-mapped relations), we can\ngo ahead and use TRUNCATE instead of DELETE, to gain a performance\nboost.\n\nCo-authored-by: Andrey Borodin \nCo-authored-by: Ashwin Agrawal ","shortMessageHtmlLink":"gpexpand: TRUNCATE coordinator-only tables for cleanup"}},{"before":"5b24d4d2a8f140ee97f6d62486f1821475c82e02","after":"88d20a66300bfed7a48eb6d6f8dcc094ba8fecee","ref":"refs/heads/main","pushedAt":"2024-05-21T04:11:15.000Z","pushType":"pr_merge","commitsCount":4,"pusher":{"login":"adam8157","name":"Adam Lee","path":"/adam8157","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/153160?s=80&v=4"},"commit":{"message":"Update the missed doc changes of `gp_resgroup_config`","shortMessageHtmlLink":"Update the missed doc changes of gp_resgroup_config"}},{"before":"6fa4800539879ba21e2833835f035f2c2489ceee","after":"5b24d4d2a8f140ee97f6d62486f1821475c82e02","ref":"refs/heads/main","pushedAt":"2024-05-21T02:23:39.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"z-wenlin","name":"Wenlin Zhang","path":"/z-wenlin","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7484001?s=80&v=4"},"commit":{"message":"Fix issue https://github.com/greenplum-db/gpdb/issues/17333.\n\nIn the adjust_setop_arguments(), if the setop is partitioned and subpath locus is general,\nit will call make_motion_hash_all_targets(), and this function will change the locus to singleQE\nwhen none of the columns are hashable. But mark_append_locus() will change it back to the original\npartitioned locus, which will result a wrong Motion node added in the top.\n\nThis PR moves check for non-hashable column(no targetlist) to choose_setop_type().","shortMessageHtmlLink":"Fix issue #17333."}},{"before":"8c56dcaa783c822ee0215cd4e636c67b857f3009","after":"6fa4800539879ba21e2833835f035f2c2489ceee","ref":"refs/heads/main","pushedAt":"2024-05-20T23:31:43.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"higuoxing","name":"Xing Guo","path":"/higuoxing","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/21099318?s=80&v=4"},"commit":{"message":"Fix mismatched types. (#17348)\n\nThis patch fixes several type mismatching issues.\r\n\r\n1. The DistributedTransactionId is of type uint64, we shouldn't cast it\r\n to TransactionId which is of type uint32.\r\n\r\n2. The Cost is of type Float8, we should use Float8GetDatum to cast it\r\n to Datum.","shortMessageHtmlLink":"Fix mismatched types. (#17348)"}},{"before":"b7b9501bb3a31de4fb474dc42c6e6c42d29050be","after":"8c56dcaa783c822ee0215cd4e636c67b857f3009","ref":"refs/heads/main","pushedAt":"2024-05-20T02:47:23.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"Annu149","name":"Annpurna Shahani","path":"/Annu149","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/30636132?s=80&v=4"},"commit":{"message":"pg_waldump: Added option --last-valid-walname\n\nAdded option --last-valid-walname to pg_waldump to emit the\nlast valid wal segment for given timeline.\n\nUsage:\npg_waldump -p -t --last-valid-walname\n\nTesting: Added TAP tests for pg_waldump\n\nCode logic borrowed from: https://www.postgresql.org/message-id/attachment/129055/v5-0001-Introduce-feature-to-start-WAL-receiver-eagerly.patch\n\nSuggested-by: Soumyadeep Chakraborty \nReviewed-by: Soumyadeep Chakraborty \nReviewed-by: Rakesh Sharma >","shortMessageHtmlLink":"pg_waldump: Added option --last-valid-walname"}},{"before":"cc9df38373bd33c691a380f469cb2151b02aef9e","after":"b7b9501bb3a31de4fb474dc42c6e6c42d29050be","ref":"refs/heads/main","pushedAt":"2024-05-19T06:35:12.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"interma","name":"Hongxu Ma","path":"/interma","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/61843?s=80&v=4"},"commit":{"message":"Bypass ICProxy addresses check of gpexpand when disable ENABLE_IC_PROXY (#17451)\n\nWhen user uses a distribution without ENABLE_IC_PROXY, gpexpand complains an error unrecognized configuration parameter gp_interconnect_proxy_addresses when checking the ICProxy addresses.\r\n\r\nWe cannot assume all GP distribution have this Guc (e.g. compiled by source code), so add a simple {try ... catch} logic to prevent the error.\r\n\r\n(As the follow up of #17316)","shortMessageHtmlLink":"Bypass ICProxy addresses check of gpexpand when disable ENABLE_IC_PRO…"}},{"before":"3abcf7c7412ccf32e6dfe8252ecd1b8afa029bfc","after":"cc9df38373bd33c691a380f469cb2151b02aef9e","ref":"refs/heads/main","pushedAt":"2024-05-17T20:18:31.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"dgkimura","name":"David Kimura","path":"/dgkimura","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1569601?s=80&v=4"},"commit":{"message":"[ORCA] Support left/right outer join order hints\n\nCommit ef7eec4 added a framework to support inner join order hints. Our next\nstep extends that framework to support left (LOJ) and right (ROJ) outer joins.\n\nAt a high level, join order hints is performed as a preprocessor step that\ntakes as input an nary join expression and a join order hint in order to\nproduce a new nary join expression composed of inner joins and LOJ/ROJs.\n\nLOJ and ROJ are similar to inner joins, but have the following additional\nrestrictions:\n\nJoin predicates are non-transitive. For example:\n\n SELECT * FROM t1 LEFT JOIN t2 ON t1.a=42 LEFT JOIN t3 ON t1.a>4;\n\nPredicates a=42 and a>4 cannot be pushed below their respective\njoin conditions. Otherwise NULL values may not be output correctly.\n\nJoin kind (e.g. left, right) is affected by the order. For example:\n\n SELECT * FROM t1 LEFT JOIN t2 ON t1.a=42;\n\n Leading((t1 t2)) requires a left join\n Leading((t2 t1)) requires a right join\n\nIn order to handle these additional constraints, the preprocessor step makes\ndecisions based on the LOJ specific attributes in CLogicalNAryJoin. In\nparticular, CScalarNAryJoinPredList.\n\nLet's consider the following example:\n\n /*+\n Leading((t2 t1))\n */\n SELECT * FROM t t1 LEFT JOIN t t2 ON t1.a=t2.a LEFT JOIN t t3 ON t2.b=t3.b;\n\n Input:\n\n +--CLogicalNAryJoin [0, 1, 2]\n |--CLogicalGet \"t1\" (\"t1\"), Columns: [\"a\" (0), \"b\" (1), ...\n |--CLogicalGet \"t2\" (\"t2\"), Columns: [\"a\" (9), \"b\" (10), ...\n |--CLogicalGet \"t3\" (\"t3\"), Columns: [\"a\" (18), \"b\" (19), ...\n +--CScalarNAryJoinPredList\n |--CScalarConst (1)\n |--CScalarCmp (=)\n | |--CScalarIdent \"a\" (0)\n | +--CScalarIdent \"a\" (9)\n +--CScalarCmp (=)\n |--CScalarIdent \"b\" (10)\n +--CScalarIdent \"b\" (19)\n\nThe nary join's scalar child is either a scalar operator (if all the join\nchildren are composed of inner joins) or a predicate list (if at least one join\nchild is a left outer join).\n\nThe nary join LOJ child predicate indexes [0, 1, 2] correspond to children t1,\nt2, and t3. That means t1 left joins t2 using index 1 of LOJ child pred list\n(i.e. \"a\"(0) = \"a\"(9) and that t2 left joins t3 using index 2 of LOJ child pred\nlist (i.e. \"b\"(10) = \"b\"(19).\n\nIn order to satisfy the hint, we need to convert a LOJ with t1 on the left side\nto an ROJ with t1 on the right side. That requires maintaining a child \"index\"\nso that we can compare the relative position of the child under the join. This\n\"index\" is the value stored in the LOJ pred list index.\n\nRelations [t1, t2] correspond to pred list indexes [0, 1]. The hint swaps the\norder, so relations [t2, t1] correspond to pred list indexes [1, 0]. Note that\nif the pred list values are ascending then it is a LOJ and if they are\ndescending then it is a ROJ. In this example, the hint produces a ROJ.\n\n Output:\n\n +--CLogicalNAryJoin [0, 1]\n |--CLogicalRightOuterJoin\n | |--CLogicalGet \"t2\" (\"t2\"), Columns: [\"a\" (9), \"b\" (10), ...\n | |--CLogicalGet \"t1\" (\"t1\"), Columns: [\"a\" (0), \"b\" (1), ...\n | +--CScalarCmp (=)\n | |--CScalarIdent \"a\" (0)\n | +--CScalarIdent \"a\" (9)\n |--CLogicalGet \"t3\" (\"t3\"), Columns: [\"a\" (18), \"b\" (19), ...\n +--CScalarNAryJoinPredList\n |--CScalarConst (1)\n +--CScalarCmp (=)\n |--CScalarIdent \"b\" (10)\n +--CScalarIdent \"b\" (19)\n\nAfter deciding to decompose a LOJ/ROJ, the pred list values need to be updated.\nIn the example, index 1 (i.e. \"a\"(0) = \"a\"(9)) is absorbed by the ROJ. That\nchanges the child pred list indexes from [0, 1, 2] to [0, 1]. In order to\naccount for this we need to track a set of used pred indexes. Here index 1 is\nused so we need to subtract 1 from all indexes higher than 1.\n\n [0, 1, 2] => [0, X, 2-1] => [0, 1]\n ^\n deleted","shortMessageHtmlLink":"[ORCA] Support left/right outer join order hints"}},{"before":"603a381e2065fceb85d724ac6e7c20258b978304","after":"3abcf7c7412ccf32e6dfe8252ecd1b8afa029bfc","ref":"refs/heads/main","pushedAt":"2024-05-17T17:50:43.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"soumyadeep2007","name":"Soumyadeep Chakraborty","path":"/soumyadeep2007","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/11961043?s=80&v=4"},"commit":{"message":"mirrorless: Enable WAL optimization for COPY FROM\n\nCopyFrom has an optimization where WAL can be avoided if the COPY is in\nthe same transaction as the CREATE and the data is being written to the\nsame relfilenode created in this transaction.\n\nUnfortunately, this optimization was ifdefed out, due to legacy\nassumptions about our inability to support wal_level = minimal.\n\nHere is an example in a mirrorless demo cluster with wal_level =\nminimal:\n\nBEGIN;\nCREATE TABLE foo(i int) DISTRIBUTED REPLICATED;\nCOPY foo FROM PROGRAM 'seq 1 3';\nCOMMIT;\n\nWithout patch, WAL for this table:\nrmgr: Storage len (rec/tot): 46/ 46, tx: 0, lsn: 0/0C010D30, prev 0/0C010D10, desc: CREATE base/13720/32768; smgr: heap\nrmgr: Heap2 len (rec/tot): 86/ 86, tx: 540, lsn: 0/0C023C20, prev 0/0C023B88, desc: MULTI_INSERT+INIT 3 tuples flags 0x02, blkref #0: rel 1663/13720/32768 blk 0\n\nWith patch, WAL for this table (MULTI_INSERT record not emitted):\nrmgr: Storage len (rec/tot): 46/ 46, tx: 0, lsn: 0/0C004908, prev 0/0C0048E8, desc: CREATE base/13720/24576; smgr: heap\n\nPS: AO/CO tables avoid writing WAL for all inserts in a more general\nway, and this change doesn't affect them.","shortMessageHtmlLink":"mirrorless: Enable WAL optimization for COPY FROM"}},{"before":"00e9e0d3a5e94096d8c396d70542e1690f325a49","after":"603a381e2065fceb85d724ac6e7c20258b978304","ref":"refs/heads/main","pushedAt":"2024-05-17T15:25:21.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"Annu149","name":"Annpurna Shahani","path":"/Annu149","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/30636132?s=80&v=4"},"commit":{"message":"[7X] Removed redundant tests\n\nFollowing gprecoverseg scenarios are not valid-\n\n* SIGINT on gprecoverseg should delete the progress file\n* SIGINT on gprecoverseg differential recovery should delete the progress file\n\nReason: if user selects yes when prompted for input at the time of interrupt,\ngprecoverseg will be terminated and segments will still be down. If user selects\nno, then recovery will continue marking the segments up but progress file will\nexist. Hence above scenarios are invalid.\n\nAssertion of progress file deletion has been covered already in following scenario:\n* gprecoverseg should terminate on SIGINT when user selects Yes in the prompt","shortMessageHtmlLink":"[7X] Removed redundant tests"}},{"before":"f55885f9b4e98c3978301ba56748453f91ece51d","after":"00e9e0d3a5e94096d8c396d70542e1690f325a49","ref":"refs/heads/main","pushedAt":"2024-05-17T14:19:55.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"huansong","name":"Huansong Fu","path":"/huansong","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/8323643?s=80&v=4"},"commit":{"message":"Prepare GUC option string only once during gang creation\n\nThe function makeOptions() is called for every segment which is not really\nnecessary since the option string stays the same. Now prepare it just once.","shortMessageHtmlLink":"Prepare GUC option string only once during gang creation"}},{"before":"99f0c829398291e0026f1628c6732021f5b7e29b","after":"f55885f9b4e98c3978301ba56748453f91ece51d","ref":"refs/heads/main","pushedAt":"2024-05-16T23:13:05.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"soumyadeep2007","name":"Soumyadeep Chakraborty","path":"/soumyadeep2007","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/11961043?s=80&v=4"},"commit":{"message":"gpexpand: TRUNCATE coordinator-only tables for cleanup\n\nFor a subset of coordinator only tables (non-mapped relations), we can\ngo ahead and use TRUNCATE instead of DELETE, to gain a performance\nboost.\n\nCo-authored-by: Andrey Borodin \nCo-authored-by: Ashwin Agrawal ","shortMessageHtmlLink":"gpexpand: TRUNCATE coordinator-only tables for cleanup"}},{"before":"e5c2ff9e832daf24021423d658fc5087070f353a","after":"bea611c59d9307d47355636961c87024449fa1ae","ref":"refs/heads/6X_STABLE","pushedAt":"2024-05-16T14:41:09.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"huansong","name":"Huansong Fu","path":"/huansong","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/8323643?s=80&v=4"},"commit":{"message":"Change deadlock_timeout GUC to sync\n\nBackport from 43baa2fac1636171c97631180b8515200d80ea1d.\n\nThis GUC controls each QD/QE's behavior independently. So it should be sync'ed.","shortMessageHtmlLink":"Change deadlock_timeout GUC to sync"}},{"before":"88d8d68c5aa07a0e699ee0d07249327be56785f0","after":"99f0c829398291e0026f1628c6732021f5b7e29b","ref":"refs/heads/main","pushedAt":"2024-05-16T04:17:07.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"hpbee","name":"bhari","path":"/hpbee","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/26516462?s=80&v=4"},"commit":{"message":"[Orca] Fallback to planner if a function in 'from' clause uses 'WITH ORDINALITY' (#17477)\n\nIn https://github.com/greenplum-db/gpdb/issues/17461, the following sql crashes orca during optimization:\r\n\r\n`SELECT * FROM jsonb_array_elements('[\"c\", \"a\", \"b\"]'::jsonb) WITH ORDINALITY;`\r\nThat is because, Orca currently doesn't support \"WITH ORDINALITY\".\r\n\r\nSometimes orca will fallback with the message: Query-to-DXL Translation: No variable entry found due to incorrect normalization of query.\r\n\r\nOrca needs work to support this, so we want to fall back for now.\r\n\r\nChange:\r\nIf \"WITH ORDINALITY\" is used, we will now fallback to postgres-based planner with explicit error message like below.\r\n`DETAIL: Falling back to Postgres-based planner because GPORCA does not support the following feature: WITH ORDINALITY`","shortMessageHtmlLink":"[Orca] Fallback to planner if a function in 'from' clause uses 'WITH …"}},{"before":"0ab72c7de5902181a55a02fdfc8af7118c22c65e","after":"e5c2ff9e832daf24021423d658fc5087070f353a","ref":"refs/heads/6X_STABLE","pushedAt":"2024-05-15T22:31:38.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"chrishajas","name":"Chris Hajas","path":"/chrishajas","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1610524?s=80&v=4"},"commit":{"message":"Fix fallback in debug build due to scalar with invalid return type\n\nThis is exposed by Orca's support for query parameters. In retail build\nit would still run fine, but it's better to make this explicit.","shortMessageHtmlLink":"Fix fallback in debug build due to scalar with invalid return type"}},{"before":"1da9ac376cb0011f107b3ea74af7863de97ad55a","after":"88d8d68c5aa07a0e699ee0d07249327be56785f0","ref":"refs/heads/main","pushedAt":"2024-05-15T11:24:32.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"srividyaky","name":null,"path":"/srividyaky","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/28972770?s=80&v=4"},"commit":{"message":"gp init cluster:Added functional tests for expansion support\nThis commit adds functional tests for expansion support changes pushed in the PR\n\nBelow changes are included:\n\ninit_cluster_config_validation_test.go - This contains tests related to the input expansion config validations, new functions GetDefaultExpansionConfig is added to create default config with expansion support along with config validaiton related tests\n\ninit_cluster_db_validation_test.go - This contains tests related to the database configuration checks after the cluster is successfully initialized with expansion support, it also included validation for spread and group mirroring\n\ninit_cluster_test.go: This includes expansion validation by passing config file with different file formats\n\ninit_cluster_suite_test.go: In this file check has been added to fetch the hostname","shortMessageHtmlLink":"gp init cluster:Added functional tests for expansion support"}},{"before":"d126b4e4fa6af8e1c22864b3114317c97ec805f8","after":"1da9ac376cb0011f107b3ea74af7863de97ad55a","ref":"refs/heads/main","pushedAt":"2024-05-15T09:08:30.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"srividyaky","name":null,"path":"/srividyaky","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/28972770?s=80&v=4"},"commit":{"message":"gp init cluster: Add mirror functional tests\n\nThis commit adds functional tests for the changes committed in the gpinitsystem support for mirror PR #17294.\n\ninit_cluster_config_validation_test.go - This contains tests related to the input init config provided with support for mirrors and functions to form default config with primary and mirror support\ninit_cluster_db_validation_test.go - This contains tests related to the database configuration checks after the cluster is successfully initialized with mirror support, gprecoverseg validation, gpstat replication validation\ninit_cluster_env_validation_test - cluster initialisation Validation related to mirror directory is not empty and mirror port is in use","shortMessageHtmlLink":"gp init cluster: Add mirror functional tests"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEU3epBAA","startCursor":null,"endCursor":null}},"title":"Activity · greenplum-db/gpdb"}