Sentry stopped accepting transaction data #2876

Open

ingria opened this issue Mar 10, 2024 · 41 comments

@ingria

ingria commented Mar 10, 2024

Self-Hosted Version

24.3.0.dev0

CPU Architecture

x86_64

Docker Version

24.0.4

Docker Compose Version

24.0.4

Steps to Reproduce

Update to the latest master

Expected Result

Everything works fine

Actual Result

Performance page shows zeros for the period from the update until now:

[screenshot]

Project page shows the correct info about transactions and errors:

[screenshot]

Stats page shows 49k transactions, of which 49k are dropped:

[screenshot]

Same for errors:

[screenshot]

Event ID

No response

UPD

There are a lot of errors in the clickhouse container:

2024.03.10 23:40:34.789282 [ 46 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):

0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
2. DB::ReadBufferFromPocoSocket::ReadBufferFromPocoSocket(Poco::Net::Socket&, unsigned long) @ 0x101540cd in /usr/bin/clickhouse
3. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6fd5 in /usr/bin/clickhouse
4. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse
5. Poco::Net::TCPServerConnection::start() @ 0x13c5614f in /usr/bin/clickhouse
6. Poco::Net::TCPServerDispatcher::run() @ 0x13c57bda in /usr/bin/clickhouse
7. Poco::PooledThread::run() @ 0x13d89e59 in /usr/bin/clickhouse
8. Poco::ThreadImpl::runnableEntry(void*) @ 0x13d860ea in /usr/bin/clickhouse
9. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
10. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
 (version 21.8.13.1.altinitystable (altinity build))
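
A quick way to gauge how often this error is hitting ClickHouse is to count it in the container logs -- a minimal sketch, assuming the default compose service name "clickhouse":

# Count occurrences of the socket error in the ClickHouse container logs
docker compose logs clickhouse 2>&1 | grep -c "Socket is not connected"

# Follow the logs to see whether new occurrences keep appearing
docker compose logs -f clickhouse 2>&1 | grep "Socket is not connected"
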
@ingria
Author

ingria commented Mar 10, 2024

Also, for some reason Sentry started dropping incoming errors some time ago (as if I were using SaaS Sentry):

[screenshot]

@barisyild

Did you change the port?
I had the same situation when I changed the port.

@ingria
Author

ingria commented Mar 10, 2024

Yes, I have the relay port exposed to the host network. How did you manage to fix the problem?

@barisyild

Yes, I have the relay port exposed to the host network. How did you manage to fix the problem?

When I reverted the port change the problem was resolved.

@ingria
Author

ingria commented Mar 10, 2024

Nope, that didn't help; it doesn't work even with the default config. Thanks for the tip, though.

@hubertdeng123
Member

Are there any logs in your web container that can help? Are you sure you are receiving the event envelopes? You should be able to see that activity in your nginx container.
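
For anyone checking, something along these lines shows whether envelopes are reaching nginx and the web container -- a rough sketch, assuming the default service names from docker-compose.yml:

# Incoming envelope requests and their HTTP status codes, as seen by nginx
docker compose logs nginx | grep "/api/.*/envelope/" | tail -n 20

# Cross-check that the web container sees the same traffic
docker compose logs web | grep -i "envelope" | tail -n 20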

@linxiaowang

Same here: on the browser side a request is sent with event type "transaction", but no data is displayed under Performance, and the number of transactions in the project is also 0.

@linxiaowang

Problem solved: the server time did not match the SDK time.
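
If you suspect the same clock skew, comparing the host clock with the clocks inside the containers (and against NTP) is a quick check -- a minimal sketch, nothing Sentry-specific:

# A large offset here can cause events to be dropped as too old or from the future
date -u
docker compose exec relay date -u

# Optional: check the host against an NTP server (requires ntpdate to be installed)
ntpdate -q pool.ntp.org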

@ingria
Author

ingria commented Mar 14, 2024

I can see that there are successful requests to /api/2/envelope:

[screenshot]

Also, I can see transaction statistics on the projects page:

The 394k figure for the last 24 hours is about right.

@hubertdeng123
Member

Are you on a nightly version of self-hosted? What does your sentry.conf.py look like? We've added some feature flags there to support the new performance features.
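
One way to see whether a local sentry.conf.py has drifted from the shipped defaults is to diff it against the example file in this repo -- a small sketch, assuming the usual self-hosted layout:

# From the self-hosted checkout: show how the active config differs from the example
diff -u sentry/sentry.conf.example.py sentry/sentry.conf.py

# Quickly check the settings discussed in this thread
grep -nE "SENTRY_SINGLE_ORGANIZATION|CSRF_TRUSTED_ORIGINS|SENTRY_FEATURES" sentry/sentry.conf.py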

@ingria
Author

ingria commented Mar 15, 2024

I'm using Docker with the latest commit from this repository. The bottom of the page says Sentry 24.3.0.dev0 unknown, so I guess that's nightly.

I've updated sentry.conf.py to match the most recent version from this repo - now the only differences are the SENTRY_SINGLE_ORGANIZATION and CSRF_TRUSTED_ORIGINS variables.

After that, errors have also disappeared:

[screenshot]

@williamdes
Contributor

williamdes commented Mar 16, 2024

I can confirm that the ClickHouse errors are due to the Rust workers; reverting the workers part of #2831 and #2861 made the errors disappear.
But I still see far too many dropped transactions since the upgrade.

Worker code: https://github.com/getsentry/snuba/blob/359878fbe030a63945914ef05e705224680b453c/rust_snuba/src/strategies/clickhouse.rs#L61

The worker logs show that the insert is done (is it?): "timestamp":"2024-03-16T11:40:52.491448Z","level":"INFO","fields":{"message":"Inserted 29 rows"},
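
To check whether those "Inserted N rows" messages actually land in ClickHouse, you can query the table directly -- a sketch only; the table and column names (transactions_local, finish_ts) are assumptions based on the default Snuba schema:

# Count transaction rows written in the last hour inside the clickhouse container
docker compose exec clickhouse clickhouse-client --query \
  "SELECT count() FROM transactions_local WHERE finish_ts > now() - INTERVAL 1 HOUR"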

@aldy505
Collaborator

aldy505 commented Mar 18, 2024

The error is caused by the connection being closed prematurely. See #2900.

@LvckyAPI
Contributor

Same issue on latest 24.3.0

[screenshot]

[screenshot]

@LvckyAPI
Contributor

Errors are not logged either.

@aldy505
Collaborator

aldy505 commented Mar 19, 2024

Okay, so I'm able to replicate this issue on my instance (24.3.0). What happens is that Sentry does accept transaction/error/profile/replay/attachment data, but it doesn't record it in the statistics. So your stats of ingested events might be displayed as if no events were being recorded, but the events are actually there -- they are processed by Snuba and you can view them in the web UI.

Can anyone reading this confirm that that's what happened on your instances as well? (I don't want to ping everybody)

If the answer to that 👆🏻 is "yes", that means something (a module, a container, or something else) that ingests the events isn't inserting the data correctly for it to be queried as statistics. I don't know for sure whether it's the responsibility of the Snuba consumers (as we moved to rust-consumer just in 24.3.0) or the Sentry workers, but I'd assume it's the Snuba consumers.

A few possible solutions (well, not really solutions, but I hope they would get rid of this issue), either:

  1. Fix the issue somewhere and cut a patch release.
  2. If it's caused by rust-consumer, roll back the usage of rust-consumer and just go back to the old Python consumers.
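
To confirm the "events are there, stats are not" symptom from the database side, you can compare what Snuba stored against the recorded outcomes -- a sketch only; the table names transactions_local and outcomes_raw_local are assumptions based on the default Snuba schema:

# Events stored by Snuba (what the web UI reads)
docker compose exec clickhouse clickhouse-client --query \
  "SELECT count() FROM transactions_local"

# Outcomes backing the stats page, grouped by outcome type
docker compose exec clickhouse clickhouse-client --query \
  "SELECT outcome, count() FROM outcomes_raw_local GROUP BY outcome"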

@LvckyAPI
Contributor

I didn't see any errors in the Issues tab. I had to restore a server snapshot to “fix” this problem, so it wasn't just the statistics that were affected.

@aldy505
Collaborator

aldy505 commented Mar 25, 2024

I have the same error; with 24.3.0, Sentry is nonfunctional. I will revert to a backup now.

2024.03.25 13:15:44.444996 [ 47 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
2. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6f0b in /usr/bin/clickhouse
3. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse
4. Poco::Net::TCPServerConnection::start() @ 0x13c5614f in /usr/bin/clickhouse
5. Poco::Net::TCPServerDispatcher::run() @ 0x13c57bda in /usr/bin/clickhouse
6. Poco::PooledThread::run() @ 0x13d89e59 in /usr/bin/clickhouse
7. Poco::ThreadImpl::runnableEntry(void*) @ 0x13d860ea in /usr/bin/clickhouse
8. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
9. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
 (version 21.8.13.1.altinitystable (altinity build))
2024.03.25 13:15:45.469271 [ 47 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
2. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6f0b in /usr/bin/clickhouse
3. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse
4. Poco::Net::TCPServerConnection::start() @ 0x13c5614f in /usr/bin/clickhouse
5. Poco::Net::TCPServerDispatcher::run() @ 0x13c57bda in /usr/bin/clickhouse
6. Poco::PooledThread::run() @ 0x13d89e59 in /usr/bin/clickhouse
7. Poco::ThreadImpl::runnableEntry(void*) @ 0x13d860ea in /usr/bin/clickhouse
8. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
9. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
 (version 21.8.13.1.altinitystable (altinity build))

EDIT: switching to the non-Rust consumers fixed the problem, but the stats view is broken now.

@DarkByteZero for the broken stats view, try adding one more Snuba consumer from this PR: #2909

@DarkByteZero

After using the new Snuba consumer from #2909 and reverting to the Python consumer, everything is working now. My statistics view is complete again, even retroactively.

@barisyild

I have the same problem and sentry no longer accepts new issues.

@williamdes
Contributor

I have the same problem and sentry no longer accepts new issues.

I made some changes to my relay to add more caching.
Are you sure the relay is accepting envelopes?

@barisyild

All of a sudden my server stopped processing incoming events and started giving the error "envelope buffer capacity exceeded"; after restarting a few times it got better, but I'm not sure whether it's related to this issue.

@williamdes
Contributor

Same error; that's why I found #1929 (comment) and applied that setting along with some other ones.
Maybe also try that setting.

@combrs

combrs commented Apr 26, 2024

Same error on a freshly installed 24.4.1:

# git log --oneline -1
2fe5499 (HEAD, tag: 24.4.1) release: 24.4.1

Only git checkout, run install.sh, docker compose up -d.
No projects, no connected clients, no ingestion at all,
and after ~1 day (no-op run) I got ClickHouse logs of this size:

-rw-r----- 1 systemd-network systemd-journal 687M Apr 26 14:07 clickhouse-server.err.log
-rw-r----- 1 systemd-network systemd-journal 687M Apr 26 14:07 clickhouse-server.log

They are flooded with errors like the ones in the UPD of the issue description.
If I start docker compose without -d, the first appearance of this ClickHouse error is here:

snuba-generic-metrics-sets-consumer-1           | {"timestamp":"2024-04-26T11:25:10.171809Z","level":"INFO","fields":{"message":"Starting Rust consumer","consumer_config.storages":"[StorageConfig { name: \"generic_metrics_sets_raw\", clickhouse_table_name: \"generic_metric_sets_raw_local\", clickhouse_cluster: ClickhouseConfig { host: \"clickhouse\", port: 9000, http_port: 8123, user: \"default\", password: \"\", database: \"default\" }, message_processor: MessageProcessorConfig { python_class_name: \"GenericSetsMetricsProcessor\", python_module: \"snuba.datasets.processors.generic_metrics_processor\" } }]"},"target":"rust_snuba::consumer"}
snuba-generic-metrics-sets-consumer-1           | {"timestamp":"2024-04-26T11:25:10.171916Z","level":"INFO","fields":{"message":"Starting consumer for \"generic_metrics_sets_raw\"","storage":"generic_metrics_sets_raw"},"target":"rust_snuba::consumer"}
snuba-transactions-consumer-1                   | {"timestamp":"2024-04-26T11:25:10.208491Z","level":"INFO","fields":{"message":"New partitions assigned: {Partition { topic: Topic(\"transactions\"), index: 0 }: 0}"},"target":"rust_arroyo::processing"}
snuba-generic-metrics-distributions-consumer-1  | {"timestamp":"2024-04-26T11:25:10.554345Z","level":"INFO","fields":{"message":"skipping write of 0 rows"},"target":"rust_snuba::strategies::clickhouse"}
clickhouse-1                                    | 2024.04.26 11:25:10.562495 [ 237 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
clickhouse-1                                    |
clickhouse-1                                    | 0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
clickhouse-1                                    | 1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
clickhouse-1                                    | 2. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6f0b in /usr/bin/clickhouse
clickhouse-1                                    | 3. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse

Replacing 'rust-consumer' with 'consumer' (as suggested in comments to other issues) "solves" the problem without needing a full version downgrade.

@tmpkn

tmpkn commented Apr 26, 2024

I confirm that the fix mentioned by @combrs works:

%s/rust-consumer/consumer/g on your docker-compose.yaml and the problem goes away
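
For a non-vim variant, the same substitution can be done with sed before recreating the services -- a sketch, assuming you run it from the self-hosted checkout and that no other strings in the compose file contain "rust-consumer":

# Replace every rust-consumer invocation with the Python consumer, keep a backup, then recreate
sed -i.bak 's/rust-consumer/consumer/g' docker-compose.yml
docker compose up -d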

@le0pard
Contributor

le0pard commented Apr 27, 2024

[Screenshot 2024-04-27 at 14:09:36]
[Screenshot 2024-04-27 at 13:48:59]

This is what is happening in my case: a jump in load to 75 (no big amount of errors or transactions at that time), a small drop in memory (some process died?), and continuous memory growth after that (it looks like some consumer for Redis died and Redis started eating memory).

@williamdes
Contributor

For reference, I started using KeyDB (by Snapchat) to replace Redis; it is pretty much an out-of-the-box replacement and seems to work great.
Redis writes pretty often.
Did anyone tweak Redis?

@le0pard
Contributor

le0pard commented Apr 27, 2024

No tweaks for Redis. It is not a Redis issue: transaction events are pushed to Redis, but it looks like the consumers are dead, which is why memory keeps growing. It is an effect of the problem, not the cause.
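
A quick way to see whether Redis really is accumulating unconsumed data is to watch its memory usage and its biggest keys -- a small sketch using stock redis-cli commands, assuming the default compose service name "redis":

# Current memory usage of the Redis container
docker compose exec redis redis-cli info memory | grep -E "used_memory_human|maxmemory_human"

# Show which keys are largest (e.g. queues that no consumer is draining)
docker compose exec redis redis-cli --bigkeys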

@williamdes
Contributor

williamdes commented Apr 29, 2024

2024.04.29 10:29:52.619697 [ 3391719 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):

And in the Rust workers: {"timestamp":"2024-04-29T09:44:59.918768Z","level":"INFO","fields":{"message":"skipping write of XYZ rows"},"target":"rust_snuba::strategies::clickhouse"}

I am really not sure why this occurs.

Edit: this changed with getsentry/snuba#5838

@williamdes
Contributor

williamdes commented Apr 30, 2024

Here is my relay/config.yml:

# See: https://github.com/getsentry/sentry-docs/blob/master/docs/product/relay/options.mdx

relay:
  upstream: "http://web:9000/"
  host: 0.0.0.0
  port: 3000
  override_project_ids: false
logging:
  level: WARN
http:
  timeout: 60
  connection_timeout: 60
  max_retry_interval: 60
  host_header: 'sentry.xxxxxx.xx'
processing:
  enabled: true
  kafka_config:
    - {name: "bootstrap.servers", value: "kafka:9092"}
    - {name: "message.max.bytes", value: 50000000} # 50MB
  redis: redis://redis:6379
  geoip_path: "/geoip/GeoLite2-City.mmdb"
cache:
  envelope_buffer_size: 80000  # queue up to 80k envelopes
  eviction_interval: 120
  project_grace_period: 3600 # One hour
  envelope_expiry: 1200 # 20 minutes
  batch_interval: 300
  file_interval: 30
spool:
  envelopes:
#    path: /var/lib/sentry-envelopes/files
    max_memory_size: 1GB
    max_disk_size: 4GB
    max_connections: 20
    min_connections: 10

You will notice the jump from 2-3K lines/s to 13K lines/s. Was Sentry struggling to keep up?
Now it all feels way more stable than before. Maybe it was the relay crashing.

[screenshot]

@williamdes
Contributor

williamdes commented May 11, 2024

I noticed that restarting all containers does some kind of state reset. But after some time the relay seems unable to handle its own load: envelope buffer capacity exceeded, since the config is missing.
I suspect the root cause is this error in the relay logs:

can't fetch project states failure_duration="42716 seconds" backoff_attempts=7
relay-1  | 2024-05-11T10:53:22.079788Z ERROR relay_server::services::project_upstream: error fetching project states error=upstream request returned error 500 Internal Server Error

I would be very happy if anyone had a clue of where to search.

The web container reports 200 OK in its logs.
Full relay config above: #2876 (comment)
I changed the http block a bit to try to mitigate this issue:

http:
  timeout: 60
  connection_timeout: 60
  max_retry_interval: 60

In the cron logs I did find:

celery.beat.SchedulingError: Couldn't apply scheduled task deliver-from-outbox-control: Command # 1 (LLEN outbox.control) of pipeline caused error: OOM command not allowed when used memory > 'maxmemory

It seems that the Docker host needs the vm.overcommit_memory sysctl for Redis: docker-library/redis#298 (comment)

Plus, the maxmemory of Redis/KeyDB was too low.
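
A sketch of the corresponding host and Redis settings; the 2gb value is only an example, not a recommendation:

# Allow Redis background saves under memory pressure (Docker host setting)
sysctl -w vm.overcommit_memory=1
echo "vm.overcommit_memory = 1" | tee /etc/sysctl.d/99-redis.conf

# Inspect and raise the Redis memory limit (adjust to your host)
docker compose exec redis redis-cli config get maxmemory
docker compose exec redis redis-cli config set maxmemory 2gb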

But

ERROR relay_server::services::project_upstream: error fetching project state cbb152173d0b4451b3453b05b58dddee: deadline exceeded errors=0 pending=88 tags.did_error=false tags.was_pending=true tags.project_key="xxxxxx"

persists

Seems like this was previously reported as #1929


With the help of this incredible tcpdump command (https://stackoverflow.com/a/16610385/5155484) I managed to see the reply that web sent:

{"configs":{},"pending":["cbb152173d0b4451b3453b05b58dddee","084e50cc07ad4b9f862a3595260d7aa1"]}

Request: POST /api/0/relays/projectconfigs/?version=3 HTTP/1.1

{"publicKeys":["cbb152173d0b4451b3453b05b58dddee","084e50cc07ad4b9f862a3595260d7aa1"],"fullConfig":true,"noCache":false}
