infohashes_count goes into negative values #471
Comments
Oh wow, nice :D Edit: just to make sure: did you clear redis completely before running Chihaya?
Yes, I did that, also after the switch from the 5/5 announce config. I stopped redis on master/slave and ran rm -rf /var/lib/redis/6379/* (6379 is a relic of my attempt at multi-master, but I kept the folder structure) for both config attempts. If you want live examples: the tracker is udp://chihaya.de:6969, leading to the two IPs above.
Sorry, another question: you wrote that both instances of Chihaya connect to one instance of Redis, but also that tracker 2 has a Redis slave. Can you clarify this? Do they both actually use the same Redis instance, or does tracker 2 use the Redis slave?
tracker 1 (srv A) <-> redis 1 (srv A). The redis on server B / tracker 2 is a read-only replica of the redis on server A and therefore not usable by tracker 2.
Thanks!
Hey! If you're a risky person, you can try #472 and see if it fixes your problem :)
I kinda want to wait until it is a confirmed/merged fix, as I want to keep resetting redis and restarting the trackers to a minimum. I did hit negative a quarter of a million infohashes 😁
Well, let me put it like this: we were somewhat sure that the old implementation was correct, and it held up to our tests, but we are more certain that the new implementation is correct :) The problem is, no matter what I throw at the old or new implementation, I never get negative infohashes. So, in order to know whether it fixes your problem, we kind of need you to test it 😅
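For readers wondering how a gauge like this can go negative at all: the classic failure mode is an unpaired decrement. The sketch below is purely illustrative (it is not chihaya's actual code; the `store` type and method names are made up) and shows how decrementing the infohash count on every swarm removal, without verifying the swarm still exists, drives the counter below zero when a removal races or repeats:

```go
package main

import "fmt"

// Illustrative sketch only, NOT chihaya's code: a peer store whose
// exported infohashes gauge can go negative because decrements are not
// paired with a successful prior increment.
type store struct {
	swarms          map[string]int // infohash -> peer count
	infohashesCount int            // the exported gauge
}

func (s *store) addPeer(ih string) {
	if s.swarms[ih] == 0 {
		s.infohashesCount++ // first peer creates the swarm
	}
	s.swarms[ih]++
}

// buggyRemovePeer decrements the gauge whenever a swarm drops to zero,
// even if the swarm no longer exists -- so a duplicate removal (say, an
// announce "stopped" racing with garbage collection) goes negative.
func (s *store) buggyRemovePeer(ih string) {
	s.swarms[ih]--
	if s.swarms[ih] <= 0 {
		s.infohashesCount--
		delete(s.swarms, ih)
	}
}

func main() {
	s := &store{swarms: map[string]int{}}
	s.addPeer("abc")
	s.buggyRemovePeer("abc") // legitimate removal: gauge back to 0
	s.buggyRemovePeer("abc") // duplicate removal: gauge goes to -1
	fmt.Println(s.infohashesCount) // prints -1
}
```

A fix in this class of bug is to decrement only when a swarm is actually deleted from the map, which is the kind of pairing a rewrite like #472 would enforce.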
Okay, I downloaded https://patch-diff.githubusercontent.com/raw/chihaya/chihaya/pull/472.patch, did a "git apply 472.patch" in the master folder, then "go build ./cmd/chihaya". I can already see that the count increases much faster. I did one "unfixed" test (i.e. I forgot the go build step) and it went 3k... 3.5k... 3.6k, so roughly 500 per minute. After a go build with the patch it goes from 1k to 10k to 20k within a minute or two, and while writing this it went past 40k infohashes. That seems more realistic than -250k 😄. Great work, thanks for the effort! I'd say it is fixed. It dips by 1k every now and then, sometimes 10, but that might be normal torrent behaviour. I'll keep an eye on it and report here if it goes negative again.
Nice! The stats look pretty good and make sense. The old implementation counted infohashes only if they had at least one seeder, which probably explains the 20k you had before. The only thing that has me worried is that you originally said you switched from a 20k memory storage to redis, but redis is now reporting 40k 😅 Do you still have your old stats, maybe? It does look much better now, though. Thanks for trying this out :)
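The difference between the two counting rules mentioned above can be sketched as follows. This is an illustrative model only (the `swarm` struct and function names are invented, not chihaya's API): counting only swarms with at least one seeder misses leecher-only swarms, so moving to a count-everything rule naturally reports a larger number.

```go
package main

import "fmt"

// swarm holds seeder and leecher counts for one infohash.
// Illustrative model only, not chihaya's data structures.
type swarm struct{ seeders, leechers int }

// countSeeded mimics the old behaviour described above: an infohash is
// counted only while it has at least one seeder.
func countSeeded(swarms map[string]swarm) int {
	n := 0
	for _, s := range swarms {
		if s.seeders > 0 {
			n++
		}
	}
	return n
}

// countAll counts every infohash that has any peer at all, which yields
// a larger figure on trackers with many leecher-only swarms.
func countAll(swarms map[string]swarm) int {
	n := 0
	for _, s := range swarms {
		if s.seeders+s.leechers > 0 {
			n++
		}
	}
	return n
}

func main() {
	swarms := map[string]swarm{
		"a": {seeders: 2, leechers: 5},
		"b": {seeders: 0, leechers: 3}, // leecher-only: old rule misses it
		"c": {seeders: 1, leechers: 0},
	}
	fmt.Println(countSeeded(swarms), countAll(swarms)) // prints: 2 3
}
```

Under this model, a jump from ~20k to ~40k after the change could plausibly come from leecher-only swarms that were previously invisible to the metric.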
I've added the tracker to ngosang/trackerslist and newtrackon, so there is growth to it. Still in the positive; seems to be fixed. I'll go ahead and close it. 👌
Hello,
I switched from a single tracker with memory storage at about 20k infohashes_count
to two trackers connecting to one redis (residing on tracker 1; tracker 2 has a redis slave for backup reasons, since multi-master with dynomite didn't work). Both trackers have the same private_key.
Everything is default, except for:
announce_interval: 5m
min_announce_interval: 2m
storage:
  name: redis
  config:
    peer_lifetime: 6m
    redis_broker: "redis://authpw@redisaddress:6379/0"
Current stats show:
chihaya_storage_infohashes_count -9149
chihaya_storage_leechers_count 19898
chihaya_storage_seeders_count 10761
Is the interval way too low, causing weird glitches and this behaviour?
I had kept min announce at 5m beforehand, but that sometimes caused announces not to work.
Best regards,
SolSoCoG