
[Bug]: memory keeps growing #2130

Open
spalagu opened this issue Mar 30, 2023 · 10 comments
spalagu commented Mar 30, 2023

Environment

  • VerneMQ Version: 1.12.6.2 (commit:e33f7abe5568e17be0fbce73a8bf17d619b1e107)
  • OS: CentOS 7
  • Erlang/OTP version (if building from source): 25.0
  • Cluster size/standalone: 3 nodes

Current Behavior

VerneMQ's memory has kept growing for the two weeks since startup, without a large increase in the number of connections.

Below are the memory-related metrics of vernemq:
./bin/vmq-admin metrics show|grep memory
gauge.vm_memory_code = 15375612
gauge.vm_memory_processes_used = 125731120
gauge.vm_memory_ets = 1057110688
gauge.vm_memory_system = 5422875664
gauge.vm_memory_binary = 4314718624
gauge.vm_memory_atom_used = 714673
gauge.retain_memory = 669219520
gauge.swc_dotkeymap_memory = 38943376
gauge.vm_memory_processes = 126363800
gauge.vm_memory_atom = 729321
gauge.vm_memory_total = 5549239464
gauge.router_memory = 25253688

And my leveldb memory parameter is (my EC2 instance has 30 GB of memory in total):
leveldb.maximum_memory.percent = 20

The following figure shows the growth curve of VerneMQ's memory after startup. I found that the VerneMQ process's memory usage is not equal to gauge.vm_memory_total + leveldb.maximum_memory.
[figure: VerneMQ process memory growth]

The following figure shows the growth curve of gauge.vm_memory_total:
[figure: gauge.vm_memory_total growth]
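As a rough sanity check, the expected ceiling from these two components can be computed directly. A sketch using the figures reported above (assuming the 30 GB instance size means GiB):

```python
# Rough sanity check: sum the Erlang VM's reported memory and the leveldb
# budget. Figures are taken from the metrics and config reported above.

GIB = 1024 ** 3

instance_memory = 30 * GIB               # EC2 instance: 30 GB total (assumed GiB)
leveldb_budget = 0.20 * instance_memory  # leveldb.maximum_memory.percent = 20
vm_memory_total = 5_549_239_464          # gauge.vm_memory_total (bytes)

expected = vm_memory_total + leveldb_budget
print(f"leveldb budget:   {leveldb_budget / GIB:.1f} GiB")
print(f"expected ceiling: {expected / GIB:.1f} GiB")
```

The sum comes out near 11.2 GiB, so resident memory well above that has to come from something neither metric covers (allocator overhead, kernel buffers, etc.).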

Expected behaviour

I hope to find the reason for the continuous memory growth, or at least which metric accounts for most of it.

Configuration, logs, error output, etc.

vernemq.conf
---------
allow_anonymous = on
allow_register_during_netsplit = on
allow_publish_during_netsplit = on
allow_subscribe_during_netsplit = on
allow_unsubscribe_during_netsplit = on
allow_multiple_sessions = off
coordinate_registrations = on
upgrade_outgoing_qos = off
systree_enabled = on
systree_interval = 20000
graphite_enabled = off
graphite_host = localhost
graphite_port = 2003
graphite_interval = 20000
shared_subscription_policy = prefer_local
plugins.vmq_passwd = on
plugins.vmq_acl = on
plugins.vmq_diversity = off
plugins.vmq_webhooks = off
plugins.vmq_bridge = off
metadata_plugin = vmq_swc
vmq_acl.acl_file = /data/apps/vernemq/etc/vmq.acl
vmq_acl.acl_reload_interval = 10
vmq_passwd.password_file = /data/apps/vernemq/etc/vmq.passwd
vmq_passwd.password_reload_interval = 10
vmq_diversity.script_dir = /data/apps/vernemq/share/lua
vmq_diversity.auth_postgres.enabled = off
vmq_diversity.postgres.ssl = off
vmq_diversity.postgres.password_hash_method = crypt
vmq_diversity.auth_cockroachdb.enabled = off
vmq_diversity.cockroachdb.ssl = on
vmq_diversity.cockroachdb.password_hash_method = bcrypt
vmq_diversity.auth_mysql.enabled = off
vmq_diversity.mysql.password_hash_method = password
vmq_diversity.auth_mongodb.enabled = off
vmq_diversity.mongodb.ssl = off
vmq_diversity.auth_redis.enabled = off
vmq_bcrypt.pool_size = 1
vmq_bcrypt.nif_pool_size = 4
vmq_bcrypt.nif_pool_max_overflow = 10
vmq_bcrypt.default_log_rounds = 12
vmq_bcrypt.mechanism = port
log.console = file
log.console.level = info
log.console.file = /data/apps/vernemq/log/console.log
log.error.file = /data/apps/vernemq/log/error.log
log.syslog = off
log.crash = on
log.crash.file = /data/apps/vernemq/log/crash.log
log.crash.maximum_message_size = 64KB
log.crash.size = 10MB
log.crash.rotation = $D0
log.crash.rotation.keep = 5

erlang.async_threads = 64
erlang.max_ports = 1048576
erlang.process_limit = 2097152
leveldb.maximum_memory.percent = 20

max_inflight_messages = 0
max_online_messages = 1000
max_offline_messages = 1000
max_message_size = 0

listener.max_connections = 100000
listener.nr_of_acceptors = 100
listener.vmq.clustering = xxx.xxx.xxx.xxx:44053
listener.mountpoint = off

listener.tcp.default=0.0.0.0:1883
listener.ssl.default=0.0.0.0:8883
listener.http.default=0.0.0.0:8888
listener.ssl.cafile=/data/apps/vernemq/ca/cacert.pem
listener.ssl.certfile=/data/apps/vernemq/ca/cert.pem
listener.ssl.keyfile=/data/apps/vernemq/ca/key.pem
listener.ssl.require_certificate=on
nodename = VerneMQ@xxx.xxx.xxx.xxx

max_ws_frame_size = 268435456
topic_max_depth = 10

distributed_cookie = vmq

include conf.d/*.conf

Code of Conduct

  • I agree to follow VerneMQ's Code of Conduct
spalagu added the bug label Mar 30, 2023
ioolkos (Contributor) commented Mar 30, 2023

What does your first figure show, exactly?
The second figure shows a daily memory pattern, which is probably MQTT clients connecting/disconnecting. This would be in line with the fact that binaries take up most of your memory there (i.e., taken up by TCP buffers).
So you might want to determine whether it's your TCP buffers and then adjust your configuration.
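To gauge whether TCP buffers could plausibly account for gigabytes, here is a back-of-the-envelope sketch. The per-socket buffer sizes below are illustrative Linux defaults, not values read from this system:

```python
# Back-of-the-envelope estimate of memory held in TCP buffers.
# Buffer sizes are illustrative Linux defaults; check /proc/sys/net/ipv4/tcp_rmem
# and tcp_wmem (and VerneMQ's listener buffer settings) for the real values.

connections = 100_000  # listener.max_connections from the config above
rcvbuf = 87_380        # typical default tcp_rmem middle value (bytes)
sndbuf = 16_384        # typical default tcp_wmem middle value (bytes)

total = connections * (rcvbuf + sndbuf)
print(f"worst-case TCP buffer usage: {total / 1024**3:.1f} GiB")
```

With these assumed sizes, 100,000 connections could hold on the order of 10 GiB in the worst case. Buffers are only fully used under backpressure, so real usage is usually lower, but slow consumers can push it toward this ceiling.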


👉 Thank you for supporting VerneMQ: https://github.com/sponsors/vernemq
👉 Using the binary VerneMQ packages commercially (.deb/.rpm/Docker) requires a paid subscription.

spalagu (Author) commented Mar 30, 2023

The first figure shows the memory usage of the VerneMQ process.

When I migrate the connections to other VerneMQ clusters, the memory used by the current VerneMQ cluster does not decrease. Does this mean that the memory is not taken up by TCP buffers?

Shouldn't the memory used by the VerneMQ process mainly consist of gauge.vm_memory_total plus leveldb.maximum_memory? In my actual situation, the VerneMQ process uses about 6 GB more than gauge.vm_memory_total + leveldb.maximum_memory.

ioolkos (Contributor) commented Mar 31, 2023

I still don't see what exact metric your first figure shows. Should you have any additional findings based on your investigation, please share them.
If you suspect memory fragmentation, the chapter on that in "Erlang in Anger" might help. (https://erlang-in-anger.com/)



spalagu (Author) commented Mar 31, 2023

The exact metric shown in the first figure is provided by node-exporter:
node_memory_MemTotal_bytes{job="vernemq"} - node_memory_MemAvailable_bytes{job="vernemq"}

spalagu (Author) commented Apr 10, 2023

[image attachment]

ioolkos (Contributor) commented Apr 10, 2023

Can you show the output of vmq-admin metrics show? (as text, not a picture)
And vmq-admin metrics show | grep memory as well, for convenience?



spalagu (Author) commented Apr 10, 2023

bin/vmq-admin metrics show

counter.mqtt_unsubscribe_received = 667799
gauge.vm_memory_processes = 126848720
counter.mqtt_connack_unacceptable_protocol_sent = 0
gauge.retain_memory = 1191773016
gauge.system_utilization_scheduler_26 = 0
counter.mqtt_connack_identifier_rejected_sent = 0
counter.system_context_switches = 87294451873
gauge.system_utilization_scheduler_30 = 0
gauge.system_utilization_scheduler_28 = 0
counter.system_reductions = 9958331587375
gauge.vm_memory_system = 7914544816
counter.mqtt_subscribe_received = 163009575
counter.mqtt_connack_not_authorized_sent = 0
counter.mqtt_unsubscribe_error = 0
counter.mqtt_puback_received = 2607364568
counter.client_expired = 0
gauge.system_utilization_scheduler_20 = 0
gauge.vm_memory_code = 15422762
counter.system_exact_reductions = 9939013494692
counter.socket_open = 164834647
gauge.retain_messages = 2785246
counter.mqtt_connack_server_unavailable_sent = 0
counter.mqtt_pubrec_invalid_error = 0
counter.mqtt_suback_sent = 163009575
gauge.system_utilization_scheduler_7 = 10
counter.mqtt_subscribe_error = 0
counter.system_io_out = 3805852416548
histogram.storage_read_microseconds_bucket_infinity = 2205868982
histogram.storage_read_microseconds_bucket_1000000 = 2205868982
histogram.storage_read_microseconds_bucket_100000 = 2205868971
histogram.storage_read_microseconds_bucket_10000 = 2205839098
histogram.storage_read_microseconds_bucket_1000 = 2205612536
histogram.storage_read_microseconds_bucket_100 = 2144875690
histogram.storage_read_microseconds_bucket_10 = 0
histogram.storage_read_microseconds_count = 2205868982
histogram.storage_read_microseconds_sum = 102960479785

gauge.system_utilization_scheduler_13 = 11
gauge.system_utilization_scheduler_18 = 0
counter.mqtt_disconnect_sent = 3
gauge.system_utilization_scheduler_32 = 0
gauge.vm_memory_ets = 1855081016
gauge.system_process_count = 8794
histogram.metadata_fold_microseconds_bucket_infinity = 3
histogram.metadata_fold_microseconds_bucket_1000000 = 3
histogram.metadata_fold_microseconds_bucket_100000 = 3
histogram.metadata_fold_microseconds_bucket_10000 = 3
histogram.metadata_fold_microseconds_bucket_1000 = 3
histogram.metadata_fold_microseconds_bucket_100 = 0
histogram.metadata_fold_microseconds_bucket_10 = 0
histogram.metadata_fold_microseconds_count = 3
histogram.metadata_fold_microseconds_sum = 2263

gauge.system_utilization_scheduler_8 = 14
gauge.system_utilization_scheduler_19 = 0
gauge.vm_memory_processes_used = 126829840
counter.mqtt_pubrec_received = 0
histogram.metadata_get_microseconds_bucket_infinity = 163677752
histogram.metadata_get_microseconds_bucket_1000000 = 163677752
histogram.metadata_get_microseconds_bucket_100000 = 163677752
histogram.metadata_get_microseconds_bucket_10000 = 163669668
histogram.metadata_get_microseconds_bucket_1000 = 163621384
histogram.metadata_get_microseconds_bucket_100 = 148430172
histogram.metadata_get_microseconds_bucket_10 = 10
histogram.metadata_get_microseconds_count = 163677752
histogram.metadata_get_microseconds_sum = 12290309056

gauge.system_utilization_scheduler_3 = 11
counter.socket_close = 164830642
gauge.system_utilization_scheduler_22 = 0
gauge.system_utilization_scheduler_9 = 14
counter.mqtt_pingresp_sent = 170772377
gauge.system_utilization_scheduler_2 = 100
gauge.swc_dotkeymap_memory = 70205284
histogram.metadata_put_microseconds_bucket_infinity = 347973873
histogram.metadata_put_microseconds_bucket_1000000 = 347973873
histogram.metadata_put_microseconds_bucket_100000 = 347973380
histogram.metadata_put_microseconds_bucket_10000 = 347689089
histogram.metadata_put_microseconds_bucket_1000 = 347306447
histogram.metadata_put_microseconds_bucket_100 = 0
histogram.metadata_put_microseconds_bucket_10 = 0
histogram.metadata_put_microseconds_count = 347973873
histogram.metadata_put_microseconds_sum = 88829623282

counter.mqtt_subscribe_auth_error = 0
counter.queue_teardown = 162756330
counter.mqtt_invalid_msg_size_error = 0
counter.queue_setup = 162760326
counter.mqtt_publish_error = 0
gauge.system_utilization_scheduler_15 = 5
gauge.vm_memory_atom = 729321
gauge.system_utilization_scheduler_29 = 0
counter.system_io_in = 3571002028830
counter.router_matches_remote = 1414664213
counter.mqtt_connack_bad_credentials_sent = 0
counter.mqtt_publish_sent = 3152158935
counter.mqtt_unsuback_sent = 667799
gauge.system_utilization_scheduler_21 = 0
counter.mqtt_connack_accepted_sent = 0
counter.mqtt_pingreq_received = 170772377
counter.cluster_bytes_sent = 945239216653
gauge.system_utilization_scheduler_24 = 0
gauge.swc_object_count = 2800271
histogram.metadata_delete_microseconds_bucket_infinity = 162367022
histogram.metadata_delete_microseconds_bucket_1000000 = 162367022
histogram.metadata_delete_microseconds_bucket_100000 = 162366762
histogram.metadata_delete_microseconds_bucket_10000 = 162221750
histogram.metadata_delete_microseconds_bucket_1000 = 162013171
histogram.metadata_delete_microseconds_bucket_100 = 0
histogram.metadata_delete_microseconds_bucket_10 = 0
histogram.metadata_delete_microseconds_count = 162367022
histogram.metadata_delete_microseconds_sum = 45061300220

counter.bytes_sent = 2125880002850
gauge.router_subscriptions = 72000
counter.queue_message_in = 6323022405
gauge.vm_memory_total = 8041393536
counter.system_runtime = 2830460538
counter.cluster_bytes_dropped = 0
counter.queue_message_expired = 0
histogram.storage_write_microseconds_bucket_infinity = 1532142946
histogram.storage_write_microseconds_bucket_1000000 = 1532142946
histogram.storage_write_microseconds_bucket_100000 = 1532142934
histogram.storage_write_microseconds_bucket_10000 = 1532069234
histogram.storage_write_microseconds_bucket_1000 = 1531566298
histogram.storage_write_microseconds_bucket_100 = 1412115976
histogram.storage_write_microseconds_bucket_10 = 0
histogram.storage_write_microseconds_count = 1532142946
histogram.storage_write_microseconds_sum = 101516676351

counter.mqtt_connack_sent = 162760343
gauge.system_utilization_scheduler_4 = 12
gauge.system_utilization_scheduler_31 = 0
counter.netsplit_detected = 0
gauge.system_utilization_scheduler_27 = 0
counter.socket_close_timeout = 186132
counter.mqtt_pubcomp_received = 0
histogram.storage_scan_microseconds_bucket_infinity = 50
histogram.storage_scan_microseconds_bucket_1000000 = 50
histogram.storage_scan_microseconds_bucket_100000 = 50
histogram.storage_scan_microseconds_bucket_10000 = 50
histogram.storage_scan_microseconds_bucket_1000 = 50
histogram.storage_scan_microseconds_bucket_100 = 47
histogram.storage_scan_microseconds_bucket_10 = 0
histogram.storage_scan_microseconds_count = 50
histogram.storage_scan_microseconds_sum = 3259

gauge.system_utilization_scheduler_11 = 8
counter.system_gc_count = 26647087282
counter.system_wallclock = 951952259
counter.queue_message_unhandled = 3852
counter.system_words_reclaimed_by_gc = 22302913591416
gauge.system_utilization_scheduler_5 = 17
gauge.system_utilization_scheduler_6 = 8
gauge.vm_memory_atom_used = 714888
counter.mqtt_auth_received = 0
counter.mqtt_publish_received = 2534506221
counter.mqtt_pubrel_received = 0
counter.mqtt_connect_received = 162760343
gauge.system_utilization_scheduler_10 = 11
counter.router_matches_local = 3564934404
counter.queue_message_out = 3148053033
counter.socket_error = 0
gauge.system_utilization_scheduler_14 = 11
gauge.swc_tombstone_count = 2967
gauge.system_utilization_scheduler_23 = 0
counter.bytes_received = 1493752793578
counter.mqtt_publish_auth_error = 0
gauge.system_utilization = 8
counter.mqtt_pubcomp_sent = 0
gauge.system_run_queue = 0
counter.mqtt_puback_sent = 1996903630
gauge.queue_processes = 3996
counter.queue_initialized_from_storage = 0
counter.mqtt_disconnect_received = 146334749
counter.mqtt_auth_sent = 0
gauge.vm_memory_binary = 6008482104
gauge.system_utilization_scheduler_12 = 7
counter.mqtt_pubrec_sent = 0
counter.mqtt_pubrel_sent = 0
gauge.system_utilization_scheduler_25 = 0
counter.queue_message_drop = 129649612
gauge.system_utilization_scheduler_16 = 10
counter.client_keepalive_expired = 812864
gauge.router_memory = 24654624
counter.cluster_bytes_received = 1376146691697
gauge.system_utilization_scheduler_1 = 21
gauge.system_utilization_scheduler_17 = 0
counter.netsplit_resolved = 0
counter.mqtt_pubcomp_invalid_error = 0
counter.mqtt_puback_invalid_error = 278518

spalagu (Author) commented Apr 10, 2023

bin/vmq-admin metrics show | grep memory

gauge.vm_memory_processes = 128668664
gauge.retain_memory = 1191772816
gauge.vm_memory_system = 7923636920
gauge.vm_memory_code = 15422762
gauge.vm_memory_ets = 1855079704
gauge.vm_memory_processes_used = 128567000
gauge.swc_dotkeymap_memory = 70177239
gauge.vm_memory_atom = 729321
gauge.vm_memory_total = 8052305584
gauge.vm_memory_atom_used = 714917
gauge.vm_memory_binary = 6017442464
gauge.router_memory = 24865536
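To track these gauges over time rather than eyeballing them, the grep output can be parsed into a dict. A minimal sketch (the sample lines are copied from the output above):

```python
# Parse `vmq-admin metrics show | grep memory` output into a dict of gauges.

def parse_metrics(text: str) -> dict[str, int]:
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blanks and anything that is not "name = value"
        name, _, value = line.partition("=")
        metrics[name.strip()] = int(value.strip())
    return metrics

sample = """
gauge.vm_memory_total = 8052305584
gauge.vm_memory_binary = 6017442464
gauge.vm_memory_ets = 1855079704
"""

m = parse_metrics(sample)
binary_share = m["gauge.vm_memory_binary"] / m["gauge.vm_memory_total"]
print(f"binary share of VM memory: {binary_share:.0%}")  # roughly 75%
```

Sampling this periodically and plotting the individual gauges would show which one actually tracks the growth.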

ioolkos (Contributor) commented Apr 11, 2023

@spalagu I don't know why resident (RES) memory shows 20.5 GB. As mentioned previously, I'd investigate memory fragmentation.
Looking at the metrics, I'd investigate 2 more things (unrelated to memory):

  • why the broker drops that many messages from consumer queues (it means you must have at least one slow consumer)
  • why you have that many retained messages in the retain cache (clarify how you use retained messages as an MQTT feature)
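On the fragmentation point: one quick indicator is how much resident memory remains unaccounted for after subtracting what the VM and leveldb should use. A sketch using the figures from this thread (GB/GiB rounding is glossed over):

```python
# How much resident memory is left unaccounted for after subtracting the
# Erlang VM's own accounting and the leveldb budget. Figures from this thread.

GIB = 1024 ** 3

res_memory = 20.5 * GIB          # RES reported by the OS (~20.5 GB)
vm_memory_total = 8_052_305_584  # gauge.vm_memory_total (bytes)
leveldb_budget = 6 * GIB         # 20% of a 30 GB instance

unaccounted = res_memory - vm_memory_total - leveldb_budget
print(f"unaccounted: {unaccounted / GIB:.1f} GiB")
# A gap this large points at allocator fragmentation or other off-VM memory.
# recon_alloc:memory(usage). in an attached Erlang shell (from the recon
# library) shows actual allocator utilization; see the "Memory Leaks"
# chapter of Erlang in Anger.
```

With these numbers roughly 7 GiB is unaccounted for, which is consistent with the fragmentation hypothesis.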


spalagu (Author) commented Apr 11, 2023

The "queue_message_drop" metric shows sudden increases that do not occur under normal circumstances, and they seem to coincide with garbage collection (the "system_words_reclaimed_by_gc" metric).

[figures: queue_message_drop and system_words_reclaimed_by_gc graphs]
