
infohashes_count goes into negative values #471

Closed
SolSoCoG opened this issue Mar 11, 2020 · 11 comments · May be fixed by #472

@SolSoCoG

Hello,

I switched from a single tracker with memory storage (around 20k infohashes_count) to two trackers connecting to one redis instance (residing on tracker 1; tracker 2 has a redis slave purely for backup, since multi-master with dynomite didn't work). Both trackers have the same private_key.

Everything is default, except for:

```yaml
announce_interval: 5m
min_announce_interval: 2m
storage:
  name: redis
  config:
    peer_lifetime: 6m
    redis_broker: "redis://authpw@redisaddress:6379/0"
```

Current stats show:

```
chihaya_storage_infohashes_count -9149
chihaya_storage_leechers_count 19898
chihaya_storage_seeders_count 10761
```

Is the interval too low, causing glitches that produce this behaviour?
I previously kept the min announce interval at 5m, but that sometimes caused announces to fail.

Best regards,
SolSoCoG

@mrd0ll4r
Member

mrd0ll4r commented Mar 11, 2020

Oh wow, nice :D
Thanks for reporting this! I'll look into it :)

Edit: just to make sure: Did you clear redis completely before running Chihaya?

@SolSoCoG
Author

SolSoCoG commented Mar 11, 2020

> Oh wow, nice :D
> Thanks for reporting this! I'll look into it :)
>
> Edit: just to make sure: Did you clear redis completely before running Chihaya?

Yes, I did that, and again after switching away from the 5m/5m announce config. I stopped redis on both master and slave and ran `rm -rf /var/lib/redis/6379/*` (the 6379 directory is a relic of my multi-master attempt, but I kept the folder structure) before both config attempts.

If you want live examples:
http://194.26.183.158:6880/
http://94.237.82.46:6880/

Tracker is udp://chihaya.de:6969 leading to the two IPs above

@mrd0ll4r
Member

Sorry, another question: you wrote that both instances of Chihaya connect to one instance of Redis, but also that tracker 2 has a Redis slave. Can you clarify this? Do both trackers actually use the same Redis instance, or does tracker 2 use the Redis slave?

@SolSoCoG
Author

```
tracker 1 (srv A) <-> redis 1 (srv A)
tracker 2 (srv B) <-> redis 1 (srv A)
redis 1 (srv A)  ->  redis 2 (srv B)
```

The redis on server B / tracker 2 is a read-only replica of the redis on server A, so it isn't used by either tracker.

@mrd0ll4r
Member

thanks!

@mrd0ll4r
Member

Hey! If you're a risky person, you can try #472 and see if it fixes your problem :)

@SolSoCoG
Author

I kinda want to wait until it's a confirmed and merged fix, as I want to keep redis resets and tracker restarts to a minimum.

I did hit a negative quarter of a million infohashes 😁

@mrd0ll4r
Member

Well, let me put it like this: we were somewhat sure that the old implementation was correct, and it held up to our tests, but we are more certain that the new implementation is correct :)

The problem is that no matter what I throw at the old or new implementation, I never get negative infohashes. So, in order to know whether it fixes your problem, we kind of need you to test it 😅

@SolSoCoG
Author

Okay, I downloaded https://patch-diff.githubusercontent.com/raw/chihaya/chihaya/pull/472.patch and ran the following in the master checkout:

```sh
git apply 472.patch
go build ./cmd/chihaya
```

I can already see that it is increasing much faster. I ran one "unfixed test" (I forgot the go build step) and it went to 3k... 3.5k... 3.6k, so roughly 500 per minute. After rebuilding with the patch it goes from 1k to 10k to 20k within a minute or two, and while writing this it climbed past 40k infohashes. That seems more realistic than -250k 😄. Great work, thanks for the effort! I'd say it is fixed. It dips by 1k every now and then, sometimes by 10, but that might just be normal torrent behaviour. I'll keep an eye on it and report here if it goes negative again.

@mrd0ll4r
Member

Nice! The stats look pretty good and make sense. The old implementation counted infohashes only if they had at least one seeder, which probably explains the 20k you had before. The only thing that has me worried is that you originally said you switched from 20k in memory storage, but redis is now reporting 40k 😅 Do you still have your old stats, maybe?

But, it does look much better now! Thanks for trying this out :)

@SolSoCoG
Author

I've added the tracker to ngosang/trackerslist and newtrackon, so there is growth to it. Still in the positive, seems to be fixed. I'll go ahead and close it. 👌
