[Bug]: Dependency tracking of hierarchical caggs #6829
Comments
Hello @leppaott, thanks for reaching out. I tried to reproduce the behavior in my local environment using TimescaleDB 2.13.0. However, I seem to have missed something, since I have not been able to reproduce the locking issue so far. Could you check whether anything is missing from the following steps?

```sql
CREATE TABLE temperature (
    time timestamptz NOT NULL,
    sensor int,
    value float
);

SELECT create_hypertable('temperature', 'time');

INSERT INTO temperature
    SELECT time, ceil(random() * 100)::int, ceil(random() * 100)::int
    FROM generate_series('2000-01-01 0:00:00+0'::timestamptz,
                         '2000-01-01 23:59:59+0', '1m') time;

CREATE MATERIALIZED VIEW cagg_1
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('5 minutes', time), sensor, avg(value) AS avg
    FROM temperature
    GROUP BY 1,2 ORDER BY 1,2;

CREATE INDEX "cagg_1_value" ON cagg_1(sensor);

SELECT add_continuous_aggregate_policy('cagg_1',
    start_offset => INTERVAL '1 day',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 minute');

SELECT add_retention_policy('cagg_1', INTERVAL '2 weeks');

ALTER MATERIALIZED VIEW cagg_1
    SET (timescaledb.compress = true, timescaledb.compress_segmentby = 'sensor');

SELECT add_compression_policy('cagg_1', INTERVAL '1 week');

DROP MATERIALIZED VIEW IF EXISTS cagg_1 CASCADE;
```
Quick modification on your post; I forgot that it was hierarchical caggs.
But now I see this if I just run:
I get:
Which is quite weird, as I tried to run the drops one by one with actual caggs, and it seemed the first one would basically CASCADE-delete all the views, so the later ones would be unneeded and basically just rely on `IF EXISTS`. This doesn't seem to be the same random issue we faced originally, but we did drop these hierarchical caggs from bottom to top in the same order, and we didn't have this "depends on it" error...
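The hierarchical cagg definitions themselves are not included in the thread. A plausible sketch of what cagg_2 and cagg_3 stacked on cagg_1 could look like (the bucket widths and the default `time_bucket` column name of cagg_1 are assumptions):

```sql
-- Sketch only: hierarchical caggs built on top of cagg_1.
-- cagg_1's bucket column is assumed to be named "time_bucket"
-- (the default when no alias is given).
CREATE MATERIALIZED VIEW cagg_2
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('15 minutes', time_bucket) AS time_bucket,
           sensor, avg(avg) AS avg
    FROM cagg_1
    GROUP BY 1,2;

CREATE MATERIALIZED VIEW cagg_3
    WITH (timescaledb.continuous) AS
    SELECT time_bucket('30 minutes', time_bucket) AS time_bucket,
           sensor, avg(avg) AS avg
    FROM cagg_2
    GROUP BY 1,2;
```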
Hi @leppaott, thanks for getting back. The created HCaggs have the following dependencies: cagg_3 -> cagg_2 -> cagg_1. If you drop them within a transaction, PostgreSQL keeps track of the dependencies, knows that all dependent objects should be removed, and drops them in the right order. If you drop them individually, you have to keep track of these dependencies yourself. So, the following order works:

```
tsdb2130=# DROP MATERIALIZED VIEW IF EXISTS cagg_3 CASCADE;
DROP MATERIALIZED VIEW
tsdb2130=# DROP MATERIALIZED VIEW IF EXISTS cagg_2 CASCADE;
DROP MATERIALIZED VIEW
tsdb2130=# DROP MATERIALIZED VIEW IF EXISTS cagg_1 CASCADE;
DROP MATERIALIZED VIEW
```

If you drop cagg_1 before cagg_2, PostgreSQL seems not to follow the dependency chain, and the drop fails because cagg_2 still depends on cagg_1.
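For illustration, the failing bottom-up order would look like this (a sketch; the exact error text is not reproduced in the thread, so no output is shown):

```sql
-- Per the discussion above, dropping the bottom cagg first does not
-- cascade up the cagg_3 -> cagg_2 -> cagg_1 chain; the first statement
-- is reported to fail with a "depends on it"-style error instead.
DROP MATERIALIZED VIEW IF EXISTS cagg_1 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS cagg_2 CASCADE;
DROP MATERIALIZED VIEW IF EXISTS cagg_3 CASCADE;
```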
However, I compared this behavior with the PostgreSQL behavior when dependent materialized views are dropped. In this case, the drop of view_1 cascades to the dependent views:

```sql
CREATE MATERIALIZED VIEW view_1 AS
    SELECT time_bucket('5 minutes', time) AS time, sensor, avg(value) AS avg
    FROM temperature
    GROUP BY 1,2
    WITH NO DATA;

CREATE MATERIALIZED VIEW view_2 AS
    SELECT time_bucket('15 minutes', time) AS time, sensor, avg(avg) AS avg
    FROM view_1
    GROUP BY 1,2
    WITH NO DATA;

CREATE MATERIALIZED VIEW view_3 AS
    SELECT time_bucket('30 minutes', time) AS time, sensor, avg(avg) AS avg
    FROM view_2
    GROUP BY 1,2
    WITH NO DATA;
```

```
tsdb2130=# DROP MATERIALIZED VIEW view_1 CASCADE;
NOTICE:  drop cascades to 2 other objects
DETAIL:  drop cascades to materialized view view_2
drop cascades to materialized view view_3
DROP MATERIALIZED VIEW
```

It seems the dependency tracking of HCaggs is different from the dependency tracking of materialized views. So, I will move this issue to the backlog.
Yes, that seemed to be a bug within a transaction as well. But now I've managed to get back the original issue I was mainly having trouble with:
I'll try to get this reproduced with that example too, but can this happen when the cascade delete is trying to remove compression jobs while a compression job is running at the same time? I was thinking that in this whole cagg removal I shouldn't need `CASCADE`.
Hi @leppaott, thanks for getting back and for the reproduction case. To ensure that both issues are addressed properly, could you please create a new issue for the locking problem? That way, we can use this issue to track the dependency problem exclusively.
What type of bug is this?
Locking issue
What subsystems and features are affected?
Continuous aggregate
What happened?
Apparently creating, deleting, and re-creating materialized views with retention + compression policies deadlocks:
i.e. we're trying to migrate a cagg to a new version, but while testing we still have the old cagg migration in place.
TimescaleDB version affected
2.13
PostgreSQL version used
16
What operating system did you use?
Docker
What installation method did you use?
Docker
What platform did you run on?
Other
Relevant log output and stack trace
No response
How can we reproduce the bug?