A bunch of our data is paranoid (soft-deleted) now, but we don't have any processes/hooks/crons to actually delete the S3 files.
Write some async jobs to actually go delete S3 resources: MediaResources/Images should delete their files; maybe episodes should rm -rf their S3 directories, and maybe podcasts as well?
Wire those jobs into the after_real_destroy callback, if that's not too dangerous.
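A minimal sketch of what such a job could look like. The names (DeleteS3FilesJob, list_keys) and the in-memory stand-in client are assumptions for illustration, not the real Feeder API; an actual implementation would use aws-sdk-s3 and an ActiveJob subclass enqueued from after_real_destroy.

```ruby
# Hypothetical async job: delete a destroyed record's S3 files.
# For a single file (MediaResource / Image) we delete one key; for an
# episode or podcast we delete everything under its directory prefix
# (the "rm -rf" case from the ticket).
class DeleteS3FilesJob
  def initialize(client)
    # anything responding to list_keys / delete_objects; in production
    # this would wrap an Aws::S3::Client
    @client = client
  end

  def perform(bucket:, key: nil, prefix: nil)
    keys = key ? [key] : @client.list_keys(bucket: bucket, prefix: prefix)
    @client.delete_objects(bucket: bucket, keys: keys) unless keys.empty?
    keys
  end
end

# In-memory stand-in for S3, just so the sketch is self-contained.
class FakeS3
  attr_reader :objects

  def initialize(objects)
    @objects = objects
  end

  def list_keys(bucket:, prefix:)
    @objects.select { |k| k.start_with?(prefix) }
  end

  def delete_objects(bucket:, keys:)
    @objects -= keys
  end
end
```

Keeping the deletes in an async job (rather than inline in the callback) means a slow or failing S3 call can't block or roll back the destroy itself.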
Also - we can reap Tasks more often. Write a cron that looks for tasks that aren't the latest for their owner, belong to soft-deleted owners, etc.
Make sure owner destroys cascade to their Tasks.
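The reaper's selection rule can be sketched in plain Ruby (the real cron would express this as SQL or ActiveRecord scopes; the Task struct here is a stand-in, not the real model): a task is reapable when it isn't the newest task for its owner, or when its owner has been soft-deleted.

```ruby
# Stand-in for the Task model: just the fields the selection rule needs.
Task = Struct.new(:id, :owner_id, :owner_deleted, keyword_init: true)

# Return the tasks a reaper cron could safely destroy:
# - superseded tasks (not the latest for their owner), or
# - tasks whose owner is soft-deleted.
def reapable_tasks(tasks)
  latest_ids = tasks.group_by(&:owner_id)
                    .transform_values { |ts| ts.max_by(&:id).id }
  tasks.select { |t| t.owner_deleted || t.id != latest_ids[t.owner_id] }
end
```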
There are some open questions around how long we retain data for old shows, but we should figure that out as part of this ticket. And then:
Go through our oldest podcast_ids and delete the ones we don't care about.
Maybe we need this in staging, for integration tests, as well
Also wondering - should BigQuery still have a record of really-deleted podcasts?
Right now dt_downloads and dt_impressions will keep those records forever. But podcasts / episodes / etc. get overwritten pretty frequently, so the gone show would disappear there.