Log the table sizes and statistics with db activity #22665

Open
jrafanie wants to merge 2 commits into master from log_table_sizes_and_statistics_with_activity

Conversation

jrafanie (Member) commented Aug 16, 2023

It's helpful to log the tables sorted by row counts, along with other information such as the autovacuum/analyze times.

Even if we truncate the line to 8k (the default for the logger), the tables with the largest row counts should still be represented.

Note: this is run every 30 minutes, queued up for each server by the schedule worker. It will run on all servers, even if they all use the same database, so any one server log or the orchestrator log should provide the high-level row counts, sizes, vacuum/analyze times, etc.
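
For context, here is a minimal sketch of how such a log line could be produced. The table_statistics call and the log_csv(keys, stats, label, output) signature are taken from the diff in this PR; the body of log_csv below is illustrative, not the exact PR code:

require 'csv'

# Sketch: emit the table statistics as a single CSV block, largest tables first,
# so the most important rows survive the logger's 8k line truncation.
def self.log_table_statistics(output = $log)
  stats = ApplicationRecord.connection.table_statistics.sort_by { |h| h['rows_live'] }.reverse!
  log_csv(stats.first.keys, stats, "TABLE_STATS_CSV", output) unless stats.empty?
end

def self.log_csv(keys, stats, label, output)
  csv = CSV.generate do |rows|
    rows << keys                                  # single header row
    stats.each { |h| rows << h.values_at(*keys) } # one CSV row per table
  end
  output.info("MIQ(#{name}.log_csv) <<-#{label}\n#{csv}#{label}")
end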

Here is an example of what it looks like locally, using a database with many short-lived containers:

[----] I, [2023-08-16T17:23:24.325429 #23485:9754]  INFO -- evm: MIQ(VmdbDatabaseConnection.log_csv) <<-ACTIVITY_STATS_CSV
session_id,xact_start,last_request_start_time,command,login,application,request_id,net_address,host_name,client_port,wait_event_type,wait_event,wait_time_ms,blocked_by
23496,2023-08-16 21:23:24 UTC,2023-08-16 21:23:24 UTC,"SELECT ""pg_stat_activity"".* FROM ""pg_stat_activity"" WHERE ""pg_stat_activity"".""datname"" = $1",root,bin/rails,16384,,,-1,,,0,
ACTIVITY_STATS_CSV
[----] I, [2023-08-16T17:23:24.331576 #23485:9754]  INFO -- evm: MIQ(VmdbDatabaseConnection.log_csv) <<-TABLE_SIZE_CSV
table_name,rows,pages,size,average_row_size
vim_performance_states,1941436,69469,569090048,293.12825911940485
index_vim_performance_states_on_resource_and_timestamp,1941436,11788,96567296,49.74011312239336
vim_performance_states_pkey,1941436,5329,43655168,22.486008044556687
event_streams,1466737,105441,863772672,588.9072704191206
event_streams_pkey,1466737,4021,32940032,22.45802045082353
index_event_streams_on_timestamp,1466737,2779,22765568,15.521223285958365
index_event_streams_on_event_type,1466737,1281,10493952,7.154619298061412
index_event_streams_on_availability_zone_id_and_type,1466737,1275,10444800,7.121108200646605
index_event_streams_on_chain_id_and_ems_id,1466737,1252,10256384,6.992648993889842
index_event_streams_on_ems_id,1466737,1243,10182656,6.942382347767631
index_event_streams_on_host_id,1466737,1243,10182656,6.942382347767631
index_event_streams_on_dest_host_id,1466737,1243,10182656,6.942382347767631
index_event_streams_on_dest_vm_or_template_id,1466737,1243,10182656,6.942382347767631
index_event_streams_on_generating_ems_id,1466737,1243,10182656,6.942382347767631
index_event_streams_on_vm_or_template_id,1466737,1243,10182656,6.942382347767631
index_event_streams_on_ems_cluster_id,1466737,1243,10182656,6.942382347767631
miq_queue,499084,89313,731652096,1465.9869481150506
miq_queue_pkey,499084,1371,11231232,22.50364567157899
miq_queue_put_idx,499084,440,3604480,7.222176583147159
miq_queue_get_idx,499084,440,3604480,7.222176583147159
index_miq_queue_on_state_and_handler_type_and_handler_id,499084,432,3538944,7.090864281635393
index_miq_queue_on_miq_task_id,499084,426,3489792,6.992380055501568
index_miq_queue_on_task_id,499084,426,3489792,6.992380055501568
metric_rollups_06,265276,17636,144474112,544.6160503926085
index_metric_rollups_06_on_ts_and_capture_interval_name,254520,1998,16367616,64.30752668738532
index_metric_rollups_06_on_resource_and_ts,254520,1960,16056320,63.08446061425187
metric_rollups_06_pkey,254520,898,7356416,28.90298246510111
container_conditions,239605,3404,27885568,116.38092535245362
container_conditions_pkey,239605,660,5406720,22.565044281028023
container_volumes,69528,1000,8192000,117.82134073552044
container_volumes_pkey,69528,192,1572864,22.621697421219924
index_container_volumes_on_type,69528,62,507904,7.304923125602267
containers,64315,3799,31121408,483.88282853411283
containers_pkey,64315,355,2908160,45.21674233472231
index_containers_on_container_group_id,64315,353,2891776,44.962000124385845
index_containers_on_deleted_on,64315,326,2670592,41.522980284843584
index_containers_on_type,64315,148,1212416,18.850923564898313
index_containers_on_container_image_id,64315,114,933888,14.520305989178432
index_containers_on_ems_id,64315,108,884736,13.75607935816904
security_contexts,64059,472,3866624,60.359413050265374
security_contexts_pkey,64059,178,1458176,22.762660006244147
container_groups,63858,2239,18341888,287.2247921201397
container_groups_pkey,63858,351,2875392,45.02720055121439
index_container_groups_on_deleted_on,63858,324,2654208,41.56356973958252
index_container_groups_on_type,63858,132,1081344,16.933306190200284
index_container_groups_on_ems_id,63858,107,876544,13.726240623874473
metrics_15,46679,1345,11018240,236.0377035132819
index_metrics_15_on_ts_and_capture_interval_name,46679,389,3186688,68.26666666666667
index_metrics_15_on_resource_and_ts,46679,339,2777088,59.49203084832905
metrics_15_pkey,46679,130,1064960,22.814053127677806
metrics_16,46678,1346,11026432,236.2182566036119
index_metrics_16_on_ts_and_capture_interval_name,46678,388,3178496,68.09263266136807
index_metrics_16_on_resource_and_ts,46678,339,2777088,59.493305340731375
metrics_16_pkey,46678,130,1064960,22.81454187107693
metrics_17,46656,1346,11026432,236.32963971108302
index_metrics_17_on_ts_and_capture_interval_name,46656,389,3186688,68.30031935186574
index_metrics_17_on_resource_and_ts,46656,339,2777088,59.5213579955848
metrics_17_pkey,46656,130,1064960,22.825299526330454
metrics_18,46612,1345,11018240,236.37697637997982
index_metrics_18_on_ts_and_capture_interval_name,46612,389,3186688,68.3647909381503
index_metrics_18_on_resource_and_ts,46612,338,2768896,59.40179778173471
metrics_18_pkey,46612,130,1064960,22.846845300667194
vim_performance_operating_ranges,41456,3016,24707072,595.9686422075886
index_vpor_on_resource,41456,206,1687552,40.706080999589936
vim_performance_operating_ranges_pkey,41456,116,950272,22.92187085413802
index_vpor_on_time_profile_id,41456,37,303104,7.311286393130231
metrics_19,27654,847,6938624,250.8994395226903
index_metrics_19_on_ts_and_capture_interval_name,27654,231,1892352,68.42711986982462
index_metrics_19_on_resource_and_ts,27654,202,1654784,59.83670222382933
metrics_19_pkey,27654,78,638976,23.105261254745976
custom_attributes,19370,327,2678784,138.28836921170821
custom_attributes_pkey,19370,55,450560,23.259511641113004
index_custom_attributes_on_resource_id_and_resource_type,19370,35,286720,14.801507407981003
container_env_vars,12607,140,1146880,90.96446700507614
container_env_vars_pkey,12607,37,303104,24.04060913705584
container_port_configs,1731,27,221184,127.70438799076213
container_port_configs_pkey,1731,7,57344,33.108545034642034
miq_product_features,1602,28,229376,143.09170305676855
miq_product_features_pkey,1602,7,57344,35.77292576419214
index_miq_product_features_on_parent_id,1602,5,40960,25.552089831565816
index_miq_product_features_on_tenant_id,1602,4,32768,20.44167186525265
miq_roles_features,1481,10,81920,55.276653171390016
miq_roles_features_pkey,1481,7,57344,38.69365721997301
index_miq_roles_features_on_miq_user_role_id,1481,4,32768,22.110661268556004
miq_ae_fields,1255,24,196608,156.53503184713375
miq_ae_fields_pkey,1255,6,49152,39.13375796178344
index_miq_ae_fields_on_ae_class_id,1255,4,32768,26.089171974522294
index_miq_ae_fields_on_method_id,1255,4,32768,26.089171974522294
index_miq_ae_fields_on_updated_by_user_id,1255,2,16384,13.044585987261147
miq_ae_values,1204,23,188416,156.3618257261411
miq_ae_values_pkey,1204,6,49152,40.79004149377593
index_miq_ae_values_on_instance_id,1204,5,40960,33.99170124481328
index_miq_ae_values_on_field_id,1204,4,32768,27.19336099585062
index_miq_ae_values_on_updated_by_user_id,1204,2,16384,13.59668049792531
container_template_parameters,964,24,196608,203.73886010362693
container_template_parameters_pkey,964,5,40960,42.44559585492228
index_container_template_parameters_on_container_template_id,964,2,16384,16.97823834196891
miq_ae_instances,917,20,163840,178.47494553376907
index_miq_ae_instances_on_relative_path,917,12,98304,107.08496732026144
miq_ae_instances_pkey,917,5,40960,44.61873638344227
index_miq_ae_instances_on_updated_by_user_id,917,2,16384,17.847494553376908
index_miq_ae_instances_on_ae_class_id,917,2,16384,17.847494553376908
sql_features,712,8,65536,91.91584852734923
metrics_20,513,15,122880,239.06614785992218
index_metrics_20_on_ts_and_capture_interval_name,513,7,57344,111.56420233463035
index_metrics_20_on_resource_and_ts,513,6,49152,95.62645914396887
metrics_20_pkey,513,4,32768,63.750972762645915
container_images,407,45,368640,903.5294117647059
container_images_pkey,407,5,40960,100.3921568627451
index_container_images_on_ems_id,407,2,16384,40.15686274509804
index_container_images_on_deleted_on,407,2,16384,40.15686274509804
schema_migrations_pkey,333,4,32768,98.10778443113773
schema_migrations_ran,333,3,24576,73.58083832335329
schema_migrations,333,2,16384,49.053892215568865
schema_migrations_ran_pkey,333,2,16384,49.053892215568865
chargeback_tiers,286,3,24576,85.63066202090593
chargeback_tiers_pkey,286,2,16384,57.08710801393728
miq_ae_methods,239,30,245760,1024.0
index_miq_ae_methods_on_relative_path,239,5,40960,170.66666666666666
miq_ae_methods_pkey,239,2,16384,68.26666666666667
index_miq_ae_methods_on_class_id,239,2,16384,68.26666666666667
index_miq_ae_methods_on_updated_by_user_id,239,2,16384,68.26666666666667
container_groups_container_services_pkey,224,2,16384,72.81777777777778
container_groups_container_services,224,2,16384,72.81777777777778
index_miq_set_memberships_on_member_type_and_member_id,214,4,32768,152.4093023255814
miq_set_memberships,214,3,24576,114.3069...
[----] I, [2023-08-16T17:23:24.407559 #23485:9754]  INFO -- evm: MIQ(VmdbDatabaseConnection.log_csv) <<-TABLE_STATS_CSV
table_name,table_scans,sequential_rows_read,index_scans,index_rows_fetched,rows_inserted,rows_updated,rows_deleted,rows_hot_updated,rows_live,rows_dead,last_vacuum_date,last_autovacuum_date,last_analyze_date,last_autoanalyze_date
metric_rollups_06,0,0,2419,122199,12,122185,0,0,265276,0,,2023-08-15 21:56:24 +0000,,
containers,67,3344380,1993105,1944966,0,71764,234502,0,64315,7449,,2023-08-15 21:22:25 +0000,,
container_groups,146,5600393,192018,270387,0,71145,191574,0,63858,7287,,2023-08-15 21:22:14 +0000,,
container_images,61,23780,2364,2363,0,2443,2335,9,407,0,,2023-08-15 21:56:06 +0000,,
pg_toast_138914,0,0,29,2646,2835,0,0,0,378,2457,,,,
pg_toast_137466,0,0,26,2646,2268,0,0,0,189,2079,,,,
miq_reports,29,1950,158,156,0,156,0,0,156,0,,2023-08-10 15:47:59 +0000,,2023-08-10 15:47:59 +0000
chargeback_tiers,0,0,0,0,26,0,0,0,26,0,,,,
miq_set_memberships,2954,609835,0,0,13,0,5,0,8,5,,,,
miq_roles_features,0,0,58,1517,8,0,0,0,8,0,,,,
miq_product_features,41,64276,4959,4910,7,3,0,0,7,3,,,,
audit_events,0,0,0,0,6,0,0,0,6,0,,,,
schema_migrations_ran,0,0,0,0,3,0,0,0,3,0,,,,
miq_queue,1564,148225891,19528,25948,220,298,2995324,0,3,515,,,,
schema_migrations,6,2073,0,0,3,0,0,0,3,0,,,,
sessions,4,20,0,0,2,2,0,0,2,2,,,,
pg_toast_137832,0,0,67,86,45,0,43,0,2,43,,,,
vim_performance_states,60,38850705,98375,59191,15,59178,0,0,2,59191,,,,
miq_sets,403331,10889563,0,0,1,0,0,0,1,0,,,,
authentications,30,93,0,0,1,1,16,1,1,1,,,,
ext_management_systems,496,629,0,0,1,7,5,3,1,7,,,,
storage_resources,0,0,0,0,0,0,0,0,0,0,,,,
storage_profiles,9,0,0,0,0,0,0,0,0,0,,,,
storage_profile_storages,0,0,0,0,0,0,0,0,0,0,,,,
storage_files,0,0,0,0,0,0,0,0,0,0,,,,
snapshots,0,0,0,0,0,0,0,0,0,0,,,,
showback_tiers,0,0,0,0,0,0,0,0,0,0,,,,
showback_rates,0,0,0,0,0,0,0,0,0,0,,,,
showback_price_plans,0,0,0,0,0,0,0,0,0,0,,,,
showback_input_measures,0,0,0,0,0,0,0,0,0,0,,,,
showback_envelopes,0,0,0,0,0,0,0,0,0,0,,,,
showback_data_views,0,0,0,0,0,0,0,0,0,0,,,,
showback_data_rollups,0,0,0,0,0,0,0,0,0,0,,,,
shares,0,0,0,0,0,0,0,0,0,0,,,,
settings_changes,164,4520,136,0,0,0,80,0,0,0,,2023-08-10 15:47:59 +0000,,2023-08-10 15:47:59 +0000
services,0,0,0,0,0,0,0,0,0,0,,,,
service_templates,0,0,0,0,0,0,0,0,0,0,,,,
service_template_tenants,0,0,0,0,0,0,0,0,0,0,,,,
service_template_catalogs,0,0,0,0,0,0,0,0,0,0,,,,
service_resources,0,0,0,0,0,0,0,0,0,0,,,,
service_parameters_sets,0,0,9,0,0,0,0,0,0,0,,,,
service_orders,0,0,0,0,0,0,0,0,0,0,,,,
service_offerings,0,0,9,0,0,0,0,0,0,0,,,,
service_instances,0,0,9,0,0,0,0,0,0,0,,,,
service_connections,0,0,0,0,0,0,0,0,0,0,,,,
server_roles,196,234,0,0,0,0,0,0,0,0,,,,
security_policy_rules,0,0,0,0,0,0,0,0,0,0,,,,
security_policy_rule_source_security_groups,0,0,0,0,0,0,0,0,0,0,,,,
security_policy_rule_network_services,0,0,0,0,0,0,0,0,0,0,,,,
security_policy_rule_destination_security_groups,0,0,0,0,0,0,0,0,0,0,,,,
security_policies,0,0,0,0,0,0,0,0,0,0,,,,
security_groups_vms,0,0,0,0,0,0,0,0,0,0,,,,
security_groups,0,0,0,0,0,0,0,0,0,0,,,,
security_contexts,234503,2152650220,233612,233612,0,0,233612,0,0,0,,,,
scan_results,0,0,2335,0,0,0,0,0,0,0,,,,
scan_items,8,48,0,0,0,6,0,4,0,6,,,,
scan_histories,0,0,0,0,0,0,0,0,0,0,,,,
san_addresses,0,0,0,0,0,0,0,0,0,0,,,,
resource_pools,0,0,9,0,0,0,0,0,0,0,,,,
resource_groups,0,0,0,0,0,0,0,0,0,0,,,,
resource_actions,0,0,0,0,0,0,0,0,0,0,,,,
reserves,0,0,0,0,0,0,0,0,0,0,,,,
request_logs,0,0,0,0,0,0,0,0,0,0,,,,
repositories,0,0,0,0,0,0,0,0,0,0,,,,
relationships,0,0,9,0,0,0,0,0,0,0,,,,
registry_items,0,0,0,0,0,0,0,0,0,0,,,,
pxe_servers,0,0,0,0,0,0,0,0,0,0,,,,
pxe_menus,0,0,0,0,0,0,0,0,0,0,,,,
pxe_images,0,0,0,0,0,0,0,0,0,0,,,,
pxe_image_types,1,1,0,0,0,0,0,0,0,0,,,,
providers,1,1,0,0,0,0,0,0,0,0,,,,
provider_tag_mappings,0,0,0,0,0,0,0,0,0,0,,,,
policy_events,0,0,0,0,0,0,0,0,0,0,,,,
policy_event_contents,0,0,0,0,0,0,0,0,0,0,,,,
placement_groups,0,0,9,0,0,0,0,0,0,0,,,,
pictures,0,0,0,0,0,0,0,0,0,0,,,,
physical_storages,0,0,0,0,0,0,0,0,0,0,,,,
physical_storage_families,0,0,0,0,0,0,0,0,0,0,,,,
physical_servers,9,0,0,0,0,0,0,0,0,0,,,,
physical_server_profiles,0,0,9,0,0,0,0,0,0,0,,,,
physical_server_profile_templates,0,0,9,0,0,0,0,0,0,0,,,,
physical_racks,0,0,0,0,0,0,0,0,0,0,,,,
physical_network_ports,0,0,0,0,0,0,0,0,0,0,,,,
physical_disks,0,0,0,0,0,0,0,0,0,0,,,,
physical_chassis,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_826,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_6100,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_6000,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3600,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3596,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3592,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3466,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3456,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3429,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3394,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3381,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3350,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3256,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3118,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_3079,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_2964,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_2620,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_2619,0,0,1447,3216,15,0,15,0,0,15,,,,
pg_toast_2618,0,0,125,366,0,0,0,0,0,0,,,,
pg_toast_2615,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_2612,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_2609,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_2606,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_2604,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_2600,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_2396,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_2328,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_1418,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_1417,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138963,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138955,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138947,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138939,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138931,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138922,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138906,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138898,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138890,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138882,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138874,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138866,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138858,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138850,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138842,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138834,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138826,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138811,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138802,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138794,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138785,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138777,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138764,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138756,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138748,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138740,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138732,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138724,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138716,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138708,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138700,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138692,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138684,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138676,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138668,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138660,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138647,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138638,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138630,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138622,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138614,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138606,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138598,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138590,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138582,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138559,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138546,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138538,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138530,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138524,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138516,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138508,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138500,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138492,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138484,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138476,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138468,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138460,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138452,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138444,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138436,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138428,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138420,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138412,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138404,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138396,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138387,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138379,0,0,0,0,0,0,0,0,0,0,,,,
pg_toast_138371,0,...

jrafanie force-pushed the log_table_sizes_and_statistics_with_activity branch from dc70e6b to 41babce on August 16, 2023 22:11
output.warn("MIQ(#{name}.#{__method__}) Unable to log stats, '#{err.message}'")
end
def self.log_table_statistics(output = $log)
stats = ApplicationRecord.connection.table_statistics.sort_by { |h| h['rows_live'] }.reverse!
jrafanie (Member Author):

Maybe we should just sort by table name and log in slices of 50 or so to avoid the log line limit.
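
A rough sketch of that idea (hypothetical; the method name and slice size are made up, not part of this PR):

require 'csv'

# Sketch: sort by table name and log the CSV in slices of ~50 rows so each
# log line stays well under the logger's 8k truncation limit.
def self.log_csv_in_slices(keys, stats, label, output, slice_size = 50)
  stats.sort_by { |h| h['table_name'] }.each_slice(slice_size).with_index(1) do |slice, index|
    csv = CSV.generate do |rows|
      rows << keys
      slice.each { |h| rows << h.values_at(*keys) }
    end
    output.info("MIQ(#{name}.#{__method__}) #{label} part #{index}:\n#{csv}")
  end
end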

Member:

So you mean log separate CSVs?

Fryguy (Member) commented Aug 29, 2023:

Also how often is this logging? I'm concerned we're going to flood the logs and just cause other problems.

I'm wondering if it's better to just extract a simple hash of table_name => rows_live and log that with .inspect as a one-liner
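
A minimal sketch of that alternative (illustrative only; the method name is made up):

# Sketch: log only table_name => rows_live as a single inspected hash.
def self.log_row_counts(output = $log)
  counts = ApplicationRecord.connection.table_statistics
                            .sort_by { |h| -h['rows_live'].to_i }
                            .to_h { |h| [h['table_name'], h['rows_live']] }
  output.info("MIQ(#{name}.#{__method__}) #{counts.inspect}")
end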

jrafanie (Member Author):

yeah, I was considering dropping CSV too

jrafanie (Member Author):

This is run every 30 minutes... it's using the existing logging of pg stat activity

:db_diagnostics_interval: 30.minutes

jrafanie (Member Author) commented Aug 30, 2023:

I'm wondering if it's better to just extract a simple hash of table_name => rows_live and log that with .inspect as a one-liner

I can try pulling out just the fields we want but I guess we don't always know what we want.

table size

table_name,rows,pages,size,average_row_size

table stats

table_name,table_scans,sequential_rows_read,index_scans,index_rows_fetched,rows_inserted,rows_updated,rows_deleted,rows_hot_updated,rows_live,rows_dead,last_vacuum_date,last_autovacuum_date,last_analyze_date,last_autoanalyze_date

As it is now, we need the row counts and the dates for the last auto vacuum/analyze. I guess the rest could be useful too. If it's every 30 minutes, it seems worthwhile to capture.
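
If we did trim it down to just those fields, a sketch might look like this (the column names come from the headers above; the constant and method names are made up):

# Sketch: keep only the columns we know we need today.
WANTED_STAT_KEYS = %w[table_name rows_live rows_dead last_autovacuum_date last_autoanalyze_date].freeze

def self.trimmed_table_statistics
  ApplicationRecord.connection.table_statistics.map { |h| h.slice(*WANTED_STAT_KEYS) }
end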

jrafanie (Member Author):

Note: today we only get approximate row counts by looking at the live/dead rows in the autovacuum logging, which we might miss due to log rotation of the PostgreSQL logs, or for tables that haven't been vacuumed within the window covered by those logs.

Member:

This is run every 30 minutes... it's using the existing logging of pg stat activity

I really don't like this going into the main log due to the size and frequency, and it's very hard to grep -v it out. I'd prefer a separate log, a dedicated file, or something similar.

jrafanie (Member Author):

We only have stdout on podified. Yeah, for grep vs. readability, I don't know. All I know is we're blind right now.

jrafanie (Member Author):

Honestly, we could probably trim out the TOAST tables and indexes, but the mechanism to exclude just those wasn't there, so I haven't tried that yet.

log_table_statistics(output)
end

def self.log_csv(keys, stats, label, output)
Member:

Probably should be:

Suggested change:
- def self.log_csv(keys, stats, label, output)
+ private_class_method def self.log_csv(keys, stats, label, output)

Member:

Do we even need CSV? The only reason we had this was that it was easier for the old log scraper from THenn back in the ManageIQ Inc days. I always thought it was weird that the logs had CSV inside them.

jrafanie (Member Author):

I think it was done as a CSV to avoid having to print the same keys over and over again. If you do a hash, at least half of each row of data will be printing the keys, unless we abbreviate them.
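
To illustrate with one row from the example above (numbers taken from the event_streams line; character counts are approximate):

# Same record, two formats: the inspected hash repeats every key for every table,
# while the CSV row does not (the keys live once in the header row).
row = { "table_name" => "event_streams", "rows" => 1_466_737, "pages" => 105_441,
        "size" => 863_772_672, "average_row_size" => 588.9 }

puts row.inspect          # ~110 characters, keys repeated for every table
puts row.values.join(",") # ~45 characters, header written only once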

jrafanie (Member Author):

@Fryguy I'm fine with changing formats. The reason CSV was used was the single header row, instead of repeating the same header values in other structures such as hash keys or JSON. As it is, though, the log line limit gets reached, so we can't even log the CSV on a single line and we'll need to split it up anyway.

Comment on lines +161 to +162


Member:

Suggested change: remove these two blank lines.
Fryguy (Member) commented Aug 30, 2023:

What about a threshold for printing? Like maybe only the top 10, or perhaps any table over X rows/tuples, or something?

Fryguy (Member) commented Aug 30, 2023:

Just to be clear, I'm 100% for this PR, but I would prefer to get it into a format that isn't onerous when diagnosing issues that don't need this information, and that doesn't increase the log sizes too dramatically.

jrafanie (Member Author):

What about a threshold for printing? Like maybe only the top 10, or perhaps any table over X rows/tuples, or something?

Yeah, I'm not sure what that threshold is. I need to see the vacuum/analyze information, and it's not always driven by row count. For example, with high churn on ext_management_systems, we'd never print its vacuum/analyze or live/dead row counts because it's not a high-row-count table.

This was an idea to start the discussion.
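
One possible shape for such a threshold, combining the top-N idea with a row-count floor (purely illustrative; the cutoffs are made up, and as noted above a pure row-count filter would still miss high-churn, low-row-count tables like ext_management_systems):

# Sketch: log the top N tables by live rows, plus anything over a minimum row count.
def self.tables_worth_logging(stats, top_n: 10, min_rows: 10_000)
  sorted = stats.sort_by { |h| -h['rows_live'].to_i }
  (sorted.first(top_n) + sorted.select { |h| h['rows_live'].to_i >= min_rows }).uniq
end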

jrafanie force-pushed the log_table_sizes_and_statistics_with_activity branch from 41babce to 9bd5b13 on September 7, 2023 21:53
miq-bot commented Sep 7, 2023:

Checked commits jrafanie/manageiq@ec6569b~...9bd5b13 with ruby 2.6.10, rubocop 1.28.2, haml-lint 0.35.0, and yamllint
3 files checked, 3 offenses detected

spec/models/vmdb_database_connection_spec.rb

@@ -462,6 +462,7 @@ def table_size
FROM pg_class
WHERE reltuples > 1
AND relname NOT LIKE 'pg_%'
AND relkind = 'r'
jrafanie (Member Author):

I made the above two changes to eliminate system tables and to include only ordinary tables (not indexes, etc.).
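
For context, a sketch of the spec's table_size helper with both new filters applied; only the WHERE clause changes come from the diff, and the SELECT list here is illustrative:

# Sketch of the spec helper query after the two changes above.
def table_size
  ApplicationRecord.connection.select_rows(<<~SQL)
    SELECT relname, reltuples, relpages
    FROM pg_class
    WHERE reltuples > 1
      AND relname NOT LIKE 'pg_%'  -- exclude system catalogs and TOAST tables
      AND relkind = 'r'            -- ordinary tables only (no indexes)
  SQL
end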

Member:

You may want reltuples > 0 so that we only display tables that actually have rows.

jrafanie (Member Author):

Instead of the WHERE reltuples > 1 that's already there? I guess I can. It's basically the same thing; the idea was to keep the original query and include only normal tables, not indexes.

kbrock (Member) commented Mar 20, 2024:

Sorry, I didn't see that. Keep what is there.

I just saw that the example had a bunch of tables that it seems should not be displayed (since they have 0 rows).

miq-bot added the stale label on Dec 11, 2023
miq-bot commented Dec 11, 2023:

This pull request has been automatically marked as stale because it has not been updated for at least 3 months.

If these changes are still valid, please remove the stale label, make any changes requested by reviewers (if any), and ensure that this issue is being looked at by the assigned/reviewer(s).

1 similar comment from miq-bot on Mar 18, 2024.
