How to Interpret and Explore the Postgres Logs #26224
TheOtherBrian1 announced in Troubleshooting
Debugging and Monitoring Postgres with Logs
Logs provide insights into Postgres operations. They help meet compliance requirements, detect suspicious activity, and troubleshoot problems.
Table of Contents

- postgres_logs Table Structure
- Querying Logs
Querying Logs

The most practical way to explore and filter logs is through the Logs Explorer. It uses a subset of BigQuery SQL syntax and preparses queries for optimization. This imposes three primary limitations:

- no `WITH` statements
- no `*` wildcards for column names
- no `ILIKE` statements

Although there are many strategies to filter logs, such as `like` and `in` statements, a helper function called `regexp_contains` provides the most flexibility and control.

The `postgres_logs` table contains Postgres events.

postgres_logs Table Structure

The table contains 3 fundamental columns:
The parsed metadata column is an array that contains relevant information about events. To access the information, it must be unnested, which is done with a `cross join`.

Unnesting example:
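A minimal unnesting sketch, assuming the Supabase log schema where `metadata` and its nested `parsed` field are repeated records (verify the field names against your project):

```sql
select
  cast(postgres_logs.timestamp as datetime) as timestamp,
  event_message,
  parsed.error_severity,
  parsed.user_name
from postgres_logs
-- each cross join unnest() flattens one level of the repeated metadata record
cross join unnest(metadata) as metadata
cross join unnest(metadata.parsed) as parsed
order by timestamp desc
limit 100;
```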
Parsed Metadata Fields

Query Information

- the executed query, e.g. `SELECT * FROM table;`
- the command tag: `SELECT`, `INSERT`, `UPDATE`...
- internally executed queries, e.g. `select to_jsonb()`

Suggested use cases:
Error/Warning Information

- the error severity: `LOG`, `WARNING`, `ERROR`...
- the SQL state code, e.g. `42501`

Suggested use cases:
Connection/Identification Information

- the backend type (e.g. `client backend`)
- connection and session details, recorded for `client backend` events

Suggested use cases:
Filtering Logs
Excluding Routine Events
Most Postgres logs during normal periods are routine events, such as connection authorizations and checkpoints. To see the default types of events that are logged, you can check this guide.
When exploring the logs for atypical behavior, it's often strategic to filter out expected values. This can be done by adding the following filter to your queries:
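As a sketch, assuming routine messages begin with phrases such as `connection received` or `checkpoint complete` (adjust the prefixes to match your own log output):

```sql
where not regexp_contains(
  event_message,
  -- common routine prefixes; extend this list as needed
  '^(connection received|connection authorized|disconnection|checkpoint starting|checkpoint complete)'
)
```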
By Timeframe
To investigate issues around a specific period:
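For example, to restrict results to a single day (the timestamps are illustrative):

```sql
where postgres_logs.timestamp between '2024-05-01T00:00:00' and '2024-05-02T00:00:00'
```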
By Error Severity
This filter finds all errors, fatals, and panics:
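A sketch of that filter, assuming `metadata` has been unnested into a `parsed` alias via a cross join:

```sql
cross join unnest(metadata) as metadata
cross join unnest(metadata.parsed) as parsed
where regexp_contains(parsed.error_severity, 'ERROR|FATAL|PANIC')
```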
Failure events include an `sql_state_code` that can be referenced in the Postgres docs.
By Query
Queries can use complex syntax, so it is often helpful to isolate by referenced database objects, such as
functions
,tables
, andcolumns
. Because query structures can be complex, it is advised to use regex to find matches. Some common regex patterns are:(?i)
: ignore case sensitivty.
: wildcard^
: look for values at start of string|
: or operatorBy APIs/roles
All failed queries, including those from PostgREST, Auth, and external libraries (e.g., Prisma) are logged with helpful error messages for debugging.
Server/Role Mapping
API servers have assigned database roles for connecting to the database:
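As an illustration, Supabase API servers commonly connect under roles such as `authenticator` (PostgREST), `supabase_auth_admin` (Auth), and `supabase_storage_admin` (Storage); verify the mapping for your project. A filter on `parsed.user_name` could then look like:

```sql
-- keep only events from the listed API roles
where regexp_contains(parsed.user_name, '^(authenticator|supabase_auth_admin|supabase_storage_admin)$')
```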
You can filter by the `parsed.user_name` column to only retrieve logs made by specific roles.

By Dashboard Queries
Queries from the Supabase Dashboard are executed under the `postgres` role and include the comment `-- source: dashboard`. To isolate or exclude Dashboard requests during debugging, you can filter by this comment.

Full Example For Finding Errors
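A sketch combining the filters above (field names follow the unnesting pattern described earlier; verify them against your project's schema):

```sql
select
  cast(postgres_logs.timestamp as datetime) as timestamp,
  event_message,
  parsed.error_severity,
  parsed.user_name,
  parsed.query
from postgres_logs
cross join unnest(metadata) as metadata
cross join unnest(metadata.parsed) as parsed
where regexp_contains(parsed.error_severity, 'ERROR|FATAL|PANIC')
  -- optionally exclude Dashboard-issued queries; coalesce guards against null queries
  and not regexp_contains(coalesce(parsed.query, ''), 'source: dashboard')
order by timestamp desc
limit 100;
```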
Logging for Compliance and Security
Customized Object and Role Activity Logging
When recording what is accessed and by whom, logging based on database roles and objects is the most reliable way to ensure a proper trail of activity.
You can use the pg_audit extension to selectively log relevant queries (not just errors) by certain roles, against specific database objects.
You should take care when using the extension to not log all database events, but only what is absolutely necessary. Overlogging can strain the database and create log noise that makes it difficult to filter for relevant events.
Filtering by pg_audit:
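pgAudit writes its entries as regular log messages prefixed with `AUDIT:`, so a sketch of a filter could be:

```sql
where regexp_contains(event_message, '^AUDIT:')
```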
Filtering By IP
Monitoring IPs becomes tricky when dealing with dynamic addressing, such as those from serverless or edge environments. This challenge amplifies when relying on certain poolers, such as Prisma Accelerate, Supavisor, or Cloudflare's Hyperdrive, as they record the pooler's IP, not the true origin.
IP tracking is most effective when consistently relying on direct database connections from servers with static IP addresses:
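Assuming the client address is exposed through a parsed field such as `parsed.connection_from` (check the exact field name in your log schema), a filter might look like:

```sql
-- match a specific server's static IP prefix (documentation-range address shown)
where regexp_contains(parsed.connection_from, r'^203\.0\.113\.')
```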
Reviewing Log Settings
The `pg_settings` table describes system and logging configurations.

The settings that affect logs are categorized under:
Reporting and Logging / What to Log
Reporting and Logging / When to Log
Customized Options
To view all log settings for your database, you can execute the following SQL:
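A sketch that lists the log-related settings:

```sql
select name, setting, category
from pg_settings
where category like 'Reporting and Logging%'
   or name like 'log%'
order by category, name;
```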
Changing Log Settings
Severity levels
The `log_min_messages` variable determines what is severe enough to log. The severity thresholds are listed in the Postgres docs.

In most cases, the default is adequate. However, if you must adjust the setting, you can do so with the following query:
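A sketch of adjusting the threshold (on managed platforms such as Supabase you may need to set it per role rather than system-wide; the role name is illustrative):

```sql
alter role postgres set log_min_messages = 'warning';

-- where superuser access is available, the server-wide equivalent is:
-- alter system set log_min_messages = 'warning';
-- select pg_reload_conf();
```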
Configuring queries logged
By default, only failed queries are logged. The PGAudit extension extends Postgres's built-in logging abilities. It can be used to selectively track all queries in your database, scoped by role, by database, or by object.
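A sketch of pgAudit's scoping options (the role, table, and database names are illustrative):

```sql
-- database-wide session logging of write statements:
alter database postgres set pgaudit.log = 'write';

-- role-level session logging of all statement classes:
alter role app_admin set pgaudit.log = 'all';

-- object-level logging: statements touching objects granted to the audit role are logged
create role auditor nologin;
alter database postgres set pgaudit.role = 'auditor';
grant select on public.sensitive_table to auditor;
```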
Logging within database functions
To track or debug functions, logging can be configured by following the function debugging guide.
Frequently Asked Questions
Can I join together different log tables, such as `edge_logs` and `postgres_logs`?

No, log tables are independent from each other and do not share any primary/foreign key relations for joining.
Can I download the logs?
At the moment, the way to download logs is through the Log Dashboard as a CSV.
What is logged?
To see the default types of events that are logged, you can check this guide.