Tom Lane [Wed, 21 Dec 2022 22:51:50 +0000 (17:51 -0500)]
Fix contrib/seg to be more wary of long input numbers.
seg stores the number of significant digits in an input number
in a "char" field. If char is signed, and the input is more than
127 digits long, the count can read out as negative, causing
seg_out() to print garbage (or, if you're really unlucky,
even crash).
To fix, clamp the digit count to be not more than FLT_DIG.
(In theory this loses some information about what the original
input was, but it doesn't seem like useful information; it would
not survive dump/restore in any case.)
Also, in case there are stored values of the seg type containing
bad data, add a clamp in seg_out's restore() subroutine.
Per bug #17725 from Robins Tharakan. It's been like this
forever, so back-patch to all supported branches.
Discussion: https://postgr.es/m/17725-0a09313b67fbe86e@postgresql.org
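A minimal sketch of the clamp approach (the helper and field names here
are assumptions for illustration, not the exact seg.c code):

    #include <float.h>

    /* sketch: clamp the significant-digit count before storing it */
    int digits = significant_digits(value);   /* hypothetical helper */
    if (digits > FLT_DIG)
        digits = FLT_DIG;                     /* FLT_DIG is 6, well below 127 */
    result->l_sigd = (char) digits;           /* now safe even if char is signed */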
Andrew Dunstan [Wed, 21 Dec 2022 13:37:17 +0000 (08:37 -0500)]
Introduce float4in_internal
This is the guts of float4in, callable as a routine to input floats,
which will be useful in an upcoming patch for allowing soft errors in
the seg module's input function.
A similar operation was performed some years ago for float8in in
commit 50861cd683e.
Reviewed by Tom Lane
Discussion: https://postgr.es/m/cee4e426-d014-c0b7-aa22-a659f2cd9130@dunslane.net
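A sketch of the intended call pattern; the exact signature is an
assumption modeled on float8in_internal, per the commit description:

    /* sketch: parse a float4 embedded in a larger string, e.g. a seg bound */
    char   *endptr;
    float4  val;

    val = float4in_internal(num, &endptr,
                            "real", orig_string,   /* used in error reports */
                            escontext);            /* soft-error destination */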
David Rowley [Wed, 21 Dec 2022 20:57:49 +0000 (09:57 +1300)]
Fix newly introduced bug in slab.c
d21ded75f changed the way slab.c works but introduced a bug that meant we
could end up with the slab's curBlocklistIndex pointing to the wrong list.
The condition which was checking for this was failing to account for two
things:
1. The curBlocklistIndex could be 0 because we currently have no non-full
blocks to put chunks on. In this case, the dlist_is_empty() check cannot
be performed, as there can be any number of completely full blocks at that
index.
2. The curBlocklistIndex may be greater than the index we just moved the
block onto. Since we need to ensure we fill up fuller blocks first, we
must also reset curBlocklistIndex when changing any blocklist element
that's less than curBlocklistIndex.
Reported-by: Takamichi Osumi
Discussion: https://postgr.es/m/TYCPR01MB8373329C6329768D7E093D68EDEB9@TYCPR01MB8373.jpnprd01.prod.outlook.com
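A sketch of the corrected adjustment inferred from the two points above
(variable names are assumptions): after moving a block onto blocklist
element newidx, curBlocklistIndex must be pulled back whenever it points
at the full-blocks list (0) or at an emptier list than newidx:

    /* sketch: keep curBlocklistIndex pointing at the fullest non-full list */
    if (slab->curBlocklistIndex == 0 || newidx < slab->curBlocklistIndex)
        slab->curBlocklistIndex = newidx;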
Michael Paquier [Wed, 21 Dec 2022 01:39:06 +0000 (10:39 +0900)]
Make more consistent some translated strings related to compression
This commit changes some of the bbstreamer files and pg_dump to use the
same style as a few other places (like common/compression.c), where the
name of the compression method is not part of the string, but an
argument to it. This reduces the translation work a bit, with fewer
string patterns.
Discussion: https://postgr.es/m/Y5/5tdK+4n3clvtU@paquier.xyz
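The pattern looks like this (illustrative message text, not a verbatim
string from the tree):

    /* translators see one string pattern; the method name is an argument */
    pg_fatal("this build does not support compression with %s", "LZ4");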
Michael Paquier [Wed, 21 Dec 2022 01:11:22 +0000 (10:11 +0900)]
Switch some system functions to use get_call_result_type()
This shaves some code by replacing the combinations of
CreateTemplateTupleDesc()/TupleDescInitEntry(), which hardcoded a mapping
of the attributes listed in pg_proc.dat, with get_call_result_type() to
build the TupleDesc needed for the rows generated.
get_call_result_type() is more expensive than the former style, but this
removes some duplication between the lists of OUT parameters (pg_proc.dat
and the attributes hardcoded in these code paths). This is applied to
functions that are not considered performance-critical (i.e., those that
could be called repeatedly for monitoring purposes).
Author: Bharath Rupireddy
Reviewed-by: Robert Haas, Álvaro Herrera, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACV23HW5HP5hFjd89FNS-z5X8r2jNXdMXcpN2BgTtKd87w@mail.gmail.com
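The replacement pattern is the standard get_call_result_type() idiom,
roughly (a sketch, not the exact patched code):

    #include "funcapi.h"

    TupleDesc   tupdesc;

    /* derive the row shape from the OUT parameters in pg_proc.dat */
    if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
        elog(ERROR, "return type must be a row type");
    /* ...build result tuples against tupdesc instead of a hand-rolled one... */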
Andrew Dunstan [Mon, 19 Dec 2022 10:58:08 +0000 (05:58 -0500)]
Use existing SSL certs in LDAP tests instead of generating them
The SSL test suite has a bunch of pre-existing certificates, so it's
better simply to use what we already have than to generate new
certificates each time the LDAP tests are run.
Discussion: https://postgr.es/m/bc305c7a-f390-44f2-2e82-9bcaec6108da@dunslane.net
Andrew Dunstan [Tue, 20 Dec 2022 12:54:39 +0000 (07:54 -0500)]
Add copyright notices to meson files
Discussion: https://postgr.es/m/222b43a5-2fb3-2c1b-9cd0-375d376c8246@dunslane.net
Etsuro Fujita [Tue, 20 Dec 2022 10:05:00 +0000 (19:05 +0900)]
Allow batching of inserts during cross-partition updates.
Commit 927f453a9 disallowed the batching added by commit b663a4136 from
being used for the inserts performed as part of cross-partition updates
of partitioned tables, mainly because the previous code in
nodeModifyTable.c couldn't handle pending inserts into foreign-table
partitions that are also UPDATE target partitions. But we don't have
such a limitation anymore (cf. commit ffbb7e65a), so let's allow for
this by removing from execPartition.c the restriction added by commit
927f453a9 that batching is only allowed if the query command type is
CMD_INSERT.
In postgres_fdw, since commit 86dc90056 changed it to effectively
disable cross-partition updates in the case where a foreign-table
partition chosen to insert rows into is also an UPDATE target partition,
allow batching in the case where a foreign-table partition chosen to
do so is *not* also an UPDATE target partition. This is enabled by the
"batch_size" option added by commit b663a4136, which is disabled by
default.
This patch also adjusts the test case added by commit 927f453a9 to
confirm that the inserts performed as part of a cross-partition update
of a partitioned table indeed use batching.
Amit Langote, reviewed and/or tested by Georgios Kokolatos, Zhihong Yu,
Bharath Rupireddy, Hou Zhijie, Vignesh C, and me.
Discussion: http://postgr.es/m/CA%2BHiwqH1Lz1yJmPs%3DaD-pzd_HLLynLHvq5iYeT9mB0bBV7oJ6w%40mail.gmail.com
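A sketch of the behavior in SQL terms (server and table names here are
hypothetical):

    -- batching is opt-in, via the postgres_fdw option from commit b663a4136
    ALTER SERVER loopback OPTIONS (ADD batch_size '10');
    -- a cross-partition UPDATE that moves rows into a foreign-table
    -- partition may now insert them in batches, as long as that partition
    -- is not itself an UPDATE target partition
    UPDATE parted SET a = 2 WHERE a = 1;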
David Rowley [Tue, 20 Dec 2022 09:28:58 +0000 (22:28 +1300)]
Add enable_presorted_aggregate GUC
1349d279 added query planner support to allow more efficient execution of
aggregate functions which have an ORDER BY or a DISTINCT clause. Prior to
that commit, the planner would only request that the lower planner produce
a plan with the order required for the GROUP BY clause and it would be
left up to nodeAgg.c to perform the final sort of records within each
group so that the aggregate transition functions were called in the
correct order. Now that the planner requests the lower planner produce a
plan with the GROUP BY and the ORDER BY / DISTINCT aggregates in mind,
there is the possibility that the planner chooses a plan which could be
less efficient than what would have been produced before 1349d279.
While developing 1349d279, I had in mind that Incremental Sort would help
us in cases where an index exists only on the GROUP BY column(s).
Incremental Sort would just replace the implicit tuplesorts which are
being performed in nodeAgg.c. However, because the planner has the
flexibility to instead choose a plan which just performs a full sort on
both the GROUP BY and ORDER BY / DISTINCT aggregate columns, there is
potential for the planner to make a bad choice. The costing for
Incremental Sort is not perfect as it assumes an even distribution of rows
to sort within each sort group.
Here we add an escape hatch in the form of the enable_presorted_aggregate
GUC. This will allow users to get the pre-PG16 behavior in cases where
they have no other means to convince the query planner to produce a plan
which only sorts on the GROUP BY column(s).
Discussion: https://postgr.es/m/CAApHDvr1Sm+g9hbv4REOVuvQKeDWXcKUAhmbK5K+dfun0s9CvA@mail.gmail.com
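Usage is a single SET (the query below is an illustrative shape, with a
hypothetical table):

    -- restore the pre-PG16 behavior: let nodeAgg.c sort within each group
    SET enable_presorted_aggregate = off;
    -- an affected query shape: an aggregate with its own ORDER BY
    SELECT a, string_agg(b, ',' ORDER BY b) FROM t GROUP BY a;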
David Rowley [Tue, 20 Dec 2022 08:48:51 +0000 (21:48 +1300)]
Improve the performance of the slab memory allocator
Slab has traditionally been fairly slow when compared with the AllocSet or
Generation memory allocators. Part of this slowness came from having to
write out an entire block when we allocate a new block in order to
populate the free list indexes within the block's memory. Additional
slowness came from having to move a block onto another dlist each time we
palloc or pfree a chunk from it.
Here we optimize both of those cases and do a little bit extra to improve
the performance of the slab allocator.
Here, instead of writing out the free list indexes when allocating a new
block, we introduce the concept of "unused" chunks. When a block is first
allocated, all of its chunks are unused. These chunks only make it onto
the free list when they are pfree'd. When allocating new chunks from an
existing block, we have the choice of consuming a chunk from the free
list or an unused chunk. When both exist, we opt to use one from the
free list, as those have been used already and their memory is more
likely to be cached by the CPU.
Here we also reduce the number of block lists from one for every possible
value of free chunks on a block down to a small fixed number of block
lists. We keep the 0th block list for completely full blocks, and every
other list stores blocks for some range of free chunks, with fuller
blocks appearing on lower block list array elements. This reduces how
often we must move a block to another list when we allocate or free
chunks, but still allows us to prefer to put new chunks on fuller blocks
and perhaps allows blocks with few remaining chunks to be free'd later
once all their remaining chunks have been pfree'd.
Additionally, we now store a list of "emptyblocks", which are blocks that
no longer contain any allocated chunks. We keep up to 10 of these around
to avoid thrashing malloc/free when allocation patterns continually cause
blocks to become free of any allocated chunks, only to allocate more
chunks again. Only once we already have 10 such blocks do we actually
free one. This does raise the high-water mark for the total memory that
a slab context can consume. It does not seem entirely unreasonable that
we might one day want to make this a property of SlabContext rather than
a compile-time constant. Let's wait and see if there is any evidence
that this is required before doing it.
Author: Andres Freund, David Rowley
Tested-by: Tomas Vondra, John Naylor
Discussion: https://postgr.es/m/20210717194333.mr5io3zup3kxahfm@alap3.anarazel.de
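A sketch of the block-list mapping described above (the constants and the
exact rounding are assumptions; only the shape matters):

    /* sketch: map a block's free-chunk count onto a small fixed set of
     * lists; index 0 holds completely full blocks, and fuller blocks land
     * on lower indexes */
    static inline int
    blocklist_index(int nfree, int chunks_per_block, int nlists)
    {
        if (nfree == 0)
            return 0;           /* completely full */
        return 1 + (nfree - 1) * (nlists - 1) / chunks_per_block;
    }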
John Naylor [Tue, 20 Dec 2022 07:13:14 +0000 (14:13 +0700)]
Move variable increment to the end of the loop
This is less error prone and matches the placement of other code
in the file.
Justin Pryzby
Reviewed by Tom Lane
Discussion: https://www.postgresql.org/message-id/
20221123172436.GJ11463@telsasoft.com
Michael Paquier [Tue, 20 Dec 2022 04:36:27 +0000 (13:36 +0900)]
Add pg_dissect_walfile_name()
This function takes a WAL segment name as input and returns a tuple made
of the segment sequence number (dependent on the WAL segment size of the
cluster) and its timeline, as a thin SQL wrapper around the existing
XLogFromFileName().
This function has multiple uses, like being able to compute an LSN from
a file name and an offset, or finding the timeline of a segment without
having to do some math based on the first eight characters of the
segment.
Bump catalog version.
Author: Bharath Rupireddy
Reviewed-by: Nathan Bossart, Kyotaro Horiguchi, Maxim Orlov, Michael
Paquier
Discussion: https://postgr.es/m/CALj2ACWV=FCddsxcGbVOA=cvPyMr75YCFbSQT6g4KDj=gcJK4g@mail.gmail.com
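For example (treat the output column names as an approximation of what
the function returns):

    -- split a segment name into its sequence number and timeline
    SELECT * FROM pg_dissect_walfile_name('000000010000000100000020');
    -- compute the LSN of an offset (here 100 bytes) within that segment
    SELECT '0/0'::pg_lsn + segment_number * setting::int + 100 AS lsn
    FROM pg_dissect_walfile_name('000000010000000100000020'),
         pg_settings
    WHERE name = 'wal_segment_size';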
Michael Paquier [Mon, 19 Dec 2022 23:53:22 +0000 (08:53 +0900)]
Remove hardcoded dependency to cryptohash type in the internals of SCRAM
SCRAM_KEY_LEN was a variable used in the internal routines of SCRAM to
size a set of fixed-sized arrays used in the SHA and HMAC computations
during the SASL exchange or when building a SCRAM password. This had a
hard dependency on SHA-256, reducing the flexibility of SCRAM when it
comes to the addition of more hash methods. A second issue was that
SHA-256 was assumed as the cryptohash method to use at all times.
This commit renames SCRAM_KEY_LEN to a more generic SCRAM_KEY_MAX_LEN,
which is used as the size of the buffers used by the internal routines
of SCRAM. This is aimed at tracking centrally the maximum size
necessary for all the hash methods supported by SCRAM. A global
variable has the advantage of keeping the code in its simplest form,
reducing the need of more alloc/free logic for all the buffers used in
the hash calculations.
A second change is that the key length (SHA digest length) and hash
types are now tracked by the state data in the backend and the frontend,
the common portions being extended to handle these as arguments by the
internal routines of SCRAM. There are a few RFC proposals floating
around to extend the SCRAM protocol, including some to use stronger
cryptohash algorithms, so this lifts some of the existing restrictions
in the code.
The code in charge of parsing and building SCRAM secrets is extended to
rely on the key length and on the cryptohash type used for the exchange,
assuming for now that only SHA-256 is supported. Note that the mock
authentication simply enforces SHA-256.
Author: Michael Paquier
Reviewed-by: Peter Eisentraut, Jonathan Katz
Discussion: https://postgr.es/m/Y5k3Qiweo/1g9CG6@paquier.xyz
Robert Haas [Mon, 19 Dec 2022 20:56:17 +0000 (15:56 -0500)]
Fix comment that was missing a word.
Ted Yu
Discussion: http://postgr.es/m/CALte62wkFB05=RTWf7BL_6MfWs2=DY=ai-K7LWn_+0TJUuPJ2w@mail.gmail.com
Peter Eisentraut [Mon, 19 Dec 2022 20:08:28 +0000 (21:08 +0100)]
Fix typo in comment
Author: Ted Yu <yuzhihong@gmail.com>
Robert Haas [Mon, 19 Dec 2022 19:43:09 +0000 (14:43 -0500)]
Expose some information about backend subxact status.
A new function pg_stat_get_backend_subxact() can be used to get
information about the number of subtransactions in the cache of
a particular backend and whether that cache has overflowed. This
can be useful for tracking down performance problems that can
result from overflowed snapshots.
Dilip Kumar, reviewed by Zhihong Yu, Nikolay Samokhvalov,
Justin Pryzby, Nathan Bossart, Ashutosh Sharma, Julien
Rouhaud. Additional design comments from Andres Freund,
Tom Lane, Bruce Momjian, and David G. Johnston.
Discussion: http://postgr.es/m/CAFiTN-ut0uwkRJDQJeDPXpVyTWD46m3gt3JDToE02hTfONEN=Q@mail.gmail.com
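A sketch of how it might be queried (the result column names are an
approximation of what the commit describes):

    -- inspect the subtransaction cache of every live backend
    SELECT id, s.*
    FROM pg_stat_get_backend_idset() AS id,
         LATERAL pg_stat_get_backend_subxact(id) AS s;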
Tom Lane [Sat, 17 Dec 2022 23:51:24 +0000 (18:51 -0500)]
Fix bit-rotted planner test case.
While fooling with my pet outer-join-variables patch, I discovered
that the test case I added in commit 11086f2f2 no longer demonstrates
what it's supposed to. The idea is to tempt the planner to reverse
the order of the two outer joins, which would leave noplace to
correctly evaluate the WHERE clause that's inserted between them.
Before the addition of the delay_upper_joins mechanism, it would
have taken the bait.
However, subsequent improvements broke the test in two different ways.
First, we now recognize the IS NULL coding pattern as an antijoin, and
we won't re-order antijoins; even if we did, the IS NULL test clauses
get removed so there would be no opportunity for them to misbehave.
Second, the planner now discovers that nested parameterized indexscans
are a lot cheaper than the double hash join it used back in the day,
and that approach doesn't want to re-order the joins anyway. Thus,
in HEAD the test passes even if one dikes out delay_upper_joins.
To fix, change the IS NULL tests to COALESCE clauses, which produce
the same results but the planner isn't smart enough to convert them
to antijoins. It'll still go for parameterized indexscans though,
so drop the index enabling that (don't know why I added that in the
first place), and disable nestloop joining just to be sure.
This time around, add an EXPLAIN to make the choice of plan visible.
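In sketch form (hypothetical tables; assuming the join column is never
zero, so both forms return the same rows):

    -- recognized as an anti-join, which the planner won't reorder:
    SELECT * FROM a LEFT JOIN b ON a.id = b.id WHERE b.id IS NULL;
    -- equivalent under that assumption, but opaque to the planner:
    SELECT * FROM a LEFT JOIN b ON a.id = b.id WHERE COALESCE(b.id, 0) = 0;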
Tom Lane [Sat, 17 Dec 2022 15:31:25 +0000 (10:31 -0500)]
Doc: update pg_list.h header comments to include XidLists.
I realize that the XidList infrastructure is rather incomplete,
but failing to mention it in adjacent comments takes that a bit
too far.
Tom Lane [Fri, 16 Dec 2022 18:07:42 +0000 (13:07 -0500)]
Fix inability to reference CYCLE column from inside its CTE.
Such references failed with "cache lookup failed for type 0"
because we didn't resolve the type of the CYCLE column until after
analyzing the CTE's query. We can just move that processing
to before the recursive parse_sub_analyze call, though.
While here, invent a couple of local variables to make this
code less egregiously wider-than-80-columns.
Per bug #17723 from Vik Fearing. Back-patch to v14 where
the CYCLE feature was added.
Discussion: https://postgr.es/m/17723-2c4985ff111e7bba@postgresql.org
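A sketch of the now-working pattern, a CTE that consults its own CYCLE
column (names are illustrative):

    WITH RECURSIVE walk(n) AS (
        SELECT 1
        UNION ALL
        SELECT (n % 3) + 1 FROM walk
        WHERE NOT is_cycle   -- previously: cache lookup failed for type 0
    ) CYCLE n SET is_cycle USING path
    SELECT * FROM walk;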
Peter Eisentraut [Fri, 16 Dec 2022 16:49:59 +0000 (17:49 +0100)]
pg_upgrade: Make testing different transfer modes easier
The environment variable PG_TEST_PG_UPGRADE_MODE can be set to
override the default transfer mode for the pg_upgrade tests.
(Automatically running the pg_upgrade tests for all supported modes
would be too slow.)
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/50a97009-8ff9-ca4d-a0f6-6086a6775a5b%40enterprisedb.com
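For example (a sketch; the exact make invocation depends on the build
tree):

    # run the pg_upgrade tests in link mode instead of the default copy mode
    PG_TEST_PG_UPGRADE_MODE=--link make -C src/bin/pg_upgrade check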
Peter Eisentraut [Fri, 16 Dec 2022 16:49:59 +0000 (17:49 +0100)]
pg_upgrade: Add --copy option
This option selects the default transfer mode. Having an explicit
option is handy to make scripts and tests more explicit. It also
makes it easier to talk about a "copy" mode rather than "the default
mode" or something like that, since until now the default mode didn't
have an externally visible name.
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/50a97009-8ff9-ca4d-a0f6-6086a6775a5b%40enterprisedb.com
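For example (paths hypothetical):

    # spell out the default transfer mode explicitly
    pg_upgrade --copy -b "$OLDBIN" -B "$NEWBIN" -d "$OLDDATA" -D "$NEWDATA"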
Bruce Momjian [Fri, 16 Dec 2022 17:15:54 +0000 (12:15 -0500)]
C comment: fix wording
Backpatch-through: master
Tom Lane [Fri, 16 Dec 2022 16:10:36 +0000 (11:10 -0500)]
Clean up dubious error handling in wellformed_xml().
This ancient bit of code was summarily trapping any ereport longjmp
whatsoever and assuming that it must represent an invalid-XML report.
It's not really appropriate to handle OOM-like situations that way:
maybe the input is valid or maybe not, but we couldn't find out.
And it'd be a seriously bad idea to ignore, say, a query cancel
error that way. (Perhaps that can't happen because there is no
CHECK_FOR_INTERRUPTS anywhere within xml_parse, but even if that's
true today it's obviously a very fragile assumption.)
But in the wake of the previous commit, we can drop the PG_TRY
here altogether, and use the soft error mechanism to catch only
the kinds of errors that are legitimate to treat as invalid-XML.
(This is our first use of the soft error mechanism for something
not directly related to a datatype input function. It won't be
the last.)
xml_is_document can be converted in the same way. That one is
not actively broken, because it was checking specifically for
ERRCODE_INVALID_XML_DOCUMENT rather than trapping everything;
but the code is still shorter and probably faster this way.
Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
Tom Lane [Fri, 16 Dec 2022 15:58:49 +0000 (10:58 -0500)]
Convert xml_in to report errors softly.
The key idea here is that xml_parse must distinguish hard errors
from soft errors. We want to throw a hard error for libxml
initialization failures: those might be out-of-memory, or something
else, but in any case they are not the fault of the input string.
If we get to the point of parsing the input, and something goes
wrong, we can fairly consider that to mean bad input.
One thing that arguably does mean bad input, but I didn't trouble
to handle softly, is encoding conversion failure while converting
the server encoding to UTF8. This might be something to improve
later, but it seems like a pretty low-probability scenario.
Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
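In sketch form, the soft-error idiom used by both of these commits looks
roughly like this (the error details shown are an approximation, not the
exact xml.c code):

    /* sketch: report invalid XML softly instead of throwing */
    if (!parse_succeeded)
        ereturn(escontext, (Datum) 0,
                (errcode(ERRCODE_INVALID_XML_DOCUMENT),
                 errmsg("invalid XML content")));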
Thomas Munro [Fri, 16 Dec 2022 04:36:22 +0000 (17:36 +1300)]
Fix typo in reference to __FreeBSD__.
Commit a2a8acd152 introduced a platform-dependent mechanism to prevent
developers from referencing errno in the argument list of
elog()/ereport(), but didn't use the right macro to detect FreeBSD, so
it didn't actually work there.
Reported-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/MEYP282MB16693AAEEF84F47D8F7CA007B6E69%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
David Rowley [Fri, 16 Dec 2022 02:22:23 +0000 (15:22 +1300)]
Remove pessimistic cost penalization from Incremental Sort
When incremental sorts were added in v13, a 1.5x pessimism factor was
added to the cost model. Seemingly this was done because the cost model
only has an estimate of the total number of input rows and the number of
presorted groups. It assumes that the input rows will be evenly
distributed throughout the presorted groups. The 1.5x pessimism factor
was added to slightly reduce the likelihood of incremental sorts being
used, in the hope of avoiding performance regressions where an
incremental sort plan was picked and turned out slower due to a large
skew in the number of rows in the presorted groups.
An additional quirk with the path generation code meant that we could
consider both a sort and an incremental sort on paths with presorted keys.
This meant that with the pessimism factor, it was possible that we opted
to perform a sort rather than an incremental sort when the given path had
presorted keys.
Here we remove the 1.5x pessimism factor to allow incremental sorts to
have a fairer chance at being chosen against a full sort.
Previously we would generally create a sort path on the cheapest input
path (if that wasn't sorted already) and incremental sort paths on any
path which had presorted keys. This meant that if the cheapest input path
wasn't completely sorted but happened to have presorted keys, we would
create a full sort path *and* an incremental sort path on that input path.
Here we change this logic so that if there are presorted keys, we only
create an incremental sort path, and create sort paths only when a full
sort is required.
Both the removal of the cost pessimism factor and the changes made to the
path generation make it more likely that incremental sorts will now be
chosen. That, of course, as with teaching the planner any new tricks,
means an increased likelihood that the planner will perform an incremental
sort when it's not the best method. Our standard escape hatch for these
cases is an enable_* GUC. enable_incremental_sort already exists for
this.
This came out of a report by Pavel Luzanov where he mentioned that the
master branch was choosing to perform a Seq Scan -> Sort -> Group
Aggregate for his query with an ORDER BY aggregate function. The v15
plan for his query performed an Index Scan -> Group Aggregate; the
aggregate performed the final sort internally in nodeAgg.c for the
aggregate's ORDER BY. The ideal plan would have been to use the index,
which provided partially sorted input, then use an incremental sort to
provide the aggregate with fully sorted input. This was not being chosen
due to the pessimism in the incremental sort cost model, so here we
remove that and rationalize the path generation so that sort and
incremental sort plans don't have to needlessly compete. We assume that
it's senseless to ever use a full sort on a given input path where an
incremental sort can be performed.
Reported-by: Pavel Luzanov
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/9f61ddbf-2989-1536-b31e-6459370a6baa%40postgrespro.ru
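A sketch of the motivating query shape (table and index hypothetical: an
index on (a) supplies partially sorted input, and an incremental sort can
finish ordering by (a, b) for the aggregate):

    SELECT a, array_agg(b ORDER BY b)
    FROM t
    GROUP BY a;
    -- if the new behavior regresses, the existing escape hatch applies:
    SET enable_incremental_sort = off;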
David Rowley [Thu, 15 Dec 2022 22:39:40 +0000 (11:39 +1300)]
Re-adjust drop-index-concurrently-1 isolation test
It seems that drop-index-concurrently-1 has started to forget what it was
originally meant to be testing. d2d8a229b, which added incremental sorts,
changed the expected plan to be an Index Scan plan instead of a Seq Scan
plan. This occurred as the primary key index of the table in question
provided presorted input and, because that index happened to be the
cheapest input path due to enable_seqscan being disabled, the incremental
sort changes just added a Sort on top of that. It seems based on the name
of the PREPAREd statement that the intention here is that the query
produces a seqscan plan.
The reason this test has become broken seems to be due to how the test was
originally coded. The test was trying to force a seqscan plan by
performing some casting to make it so the test_dc index couldn't be used
to perform the required filtering. Trying to coax the planner into using
a plan which has costed in a disable_cost seems like it's always going to
be flaky, as small changes in costs are drowned out by the large
disable_cost combined with add_path's STD_FUZZ_FACTOR. Here we get rid of
the casts that were being used to try to trick the planner into a seqscan
and instead toggle enable_seqscan as and when required to get the desired
plan.
Additionally, rename a few things in the test and add some additional
wording to the comments to try and make it more clear in the future what
we expect this test to be doing.
Discussion: https://postgr.es/m/CAApHDvrbDhObhLV+=U_K_-t+2Av2av1aL9d+2j_3AO-XndaviA@mail.gmail.com
Backpatch-through: 13, where d2d8a229b changed the expected test output
David Rowley [Thu, 15 Dec 2022 21:31:25 +0000 (10:31 +1300)]
Speed up creation of command completion tags
The building of command completion tags could often be seen showing up in
profiles when running high tps workloads.
The query completion tags were being built with snprintf, which is slow at
the best of times when compared with more manual ways of formatting
strings. Here we introduce BuildQueryCompletionString() to do this job
for us. We also now store the completion tag's strlen in the
CommandTagBehavior struct so that we can quickly memcpy this number of
bytes into the completion tag string. Appending the rows affected is done
via pg_ulltoa_n. BuildQueryCompletionString returns the length of the
built string. This saves us having to call strlen to figure out how many
bytes to pass to pq_putmessage().
Author: David Rowley, Andres Freund
Reviewed-by: Andres Freund
Discussion: https://postgr.es/m/CAHoyFK-Xwqc-iY52shj0G+8K9FJpse+FuZ36XBKy78wDVnd=Qg@mail.gmail.com
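A sketch of the faster formatting path (variable names are assumptions;
pg_ulltoa_n is the existing PostgreSQL helper that writes digits and
returns their count):

    /* sketch: memcpy the precomputed tag, then append the row count */
    char   *pos = buf;

    memcpy(pos, tag_name, tag_strlen);     /* strlen from CommandTagBehavior */
    pos += tag_strlen;
    *pos++ = ' ';
    pos += pg_ulltoa_n(nprocessed, pos);   /* writes digits, returns length */
    *pos = '\0';
    /* (pos - buf) is the length for pq_putmessage(); no strlen needed */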
Tom Lane [Thu, 15 Dec 2022 17:18:36 +0000 (12:18 -0500)]
Convert range_in and multirange_in to report errors softly.
This is mostly straightforward, except that if the range type
has a canonical function, that might throw an error during range
input. (Such errors probably only occur for edge cases: in the
in-core canonical functions, it happens only if a bound has the
maximum valid value for the underlying type.) Hence, this patch
extends the soft-error regime to allow canonical functions to
return errors softly as well. Extensions implementing range
canonical functions will need modification anyway because of the
API change for range_serialize(); while at it, they might want
to do something similar to what's been done here in the in-core
canonical functions.
Discussion: https://postgr.es/m/3284599.1671075185@sss.pgh.pa.us
Peter Eisentraut [Thu, 8 Dec 2022 13:30:01 +0000 (14:30 +0100)]
Static assertions cleanup
Because we added StaticAssertStmt() first before StaticAssertDecl(),
some uses as well as the instructions in c.h are now a bit backwards
from the "native" way static assertions are meant to be used in C.
This updates the guidance and moves some static assertions to better
places.
Specifically, since the addition of StaticAssertDecl(), we can put
static assertions at the file level. This moves a number of static
assertions out of function bodies, where they might have been stuck
out of necessity, to perhaps better places at the file level or in
header files.
Also, when the static assertion appears in a position where a
declaration is allowed, then using StaticAssertDecl() is more native
than StaticAssertStmt().
Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/941a04e7-dd6f-c0e4-8cdf-a33b3338cbda%40enterprisedb.com
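A sketch of the distinction (assertion conditions are illustrative):

    /* file level: a declaration-position assertion, the "native" C form */
    StaticAssertDecl(sizeof(int64) == 8, "int64 must be 8 bytes");

    static void
    check_something(void)
    {
        /* statement position, for places where only statements fit */
        StaticAssertStmt(sizeof(int32) == 4, "int32 must be 4 bytes");
    }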
Peter Eisentraut [Thu, 15 Dec 2022 06:49:30 +0000 (07:49 +0100)]
Move provariadic sanity check to a more appropriate place
35f059e9bdfb3b14ac9d22a9e159d36ec0ccf804 put the provariadic sanity
check into type_sanity.sql, even though it's not about types, and
moreover in the middle of some connected test group, which makes it
all very confusing. Move it to opr_sanity.sql, where it is in better
company.
Tom Lane [Thu, 15 Dec 2022 00:42:05 +0000 (19:42 -0500)]
Convert a few more datatype input functions to report errors softly.
Convert the remaining string-category input functions
(bpcharin, varcharin, byteain) to the new style.
Discussion: https://postgr.es/m/3038346.1671060258@sss.pgh.pa.us
Tom Lane [Wed, 14 Dec 2022 23:03:11 +0000 (18:03 -0500)]
Convert a few more datatype input functions to report errors softly.
Convert cash_in and uuid_in to the new style.
Amul Sul, minor mods by me
Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com