
influxdb.conf ... changing the retention of data #240

Open · lslamp opened this issue May 18, 2021 · 1 comment

lslamp commented May 18, 2021

I am very new to all this. I am very sorry if this is placed in the incorrect location. Please advise.

I have set up something called unifi-poller that feeds data into an InfluxDB database, which in turn delivers that data to a Grafana dashboard.

My issue is that the InfluxDB database seems to keep growing and growing; it is currently at 67.3 GB. See below.

[screenshot: "top" output showing several influxd rows]

I don't really understand what this means or what the impact is on my system/platform, so I would very much appreciate feedback from anyone who has any ideas. Should I try to reduce that size? If so, how?

Thanks
Lawrence

@Paraphraser (Contributor)

Please note that this project is dormant. You might have more luck re-asking this question on SensorsIot/IOTstack or its Discord channel (see issue for links to both).

I don't understand retention periods either. My databases are about 4 years old, the largest growing at the rate of one measurement per 10 seconds, but Influx's disk storage still hasn't cracked 500 MB. I have a 400 GB SSD so I'm a long way shy of having to care too much, but I agree I'd be starting to freak if I were seeing 60 GB.
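That said, if it really is the database that's growing, InfluxDB 1.x retention policies are the usual way to cap it. A rough sketch in InfluxQL (run from the influx shell; the database name "unifi" here is just a placeholder for whatever unifi-poller actually writes to, and 90 days is an arbitrary choice):

```sql
-- See what you have now; the default "autogen" policy usually has
-- duration 0s, which means "keep everything forever".
SHOW RETENTION POLICIES ON "unifi"

-- Option 1: create a 90-day policy and make it the default for new writes.
CREATE RETENTION POLICY "ninety_days" ON "unifi" DURATION 90d REPLICATION 1 DEFAULT

-- Option 2: shorten the existing default policy in place.
ALTER RETENTION POLICY "autogen" ON "unifi" DURATION 90d
```

Once a finite duration is set, Influx drops expired shards on its own; nothing else needs to run.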

The thing is, when I look at your screenshot, I think I'm seeing "top" output. Yes?

Here's what I see (and I had to wait a bit for influx to appear near the top of the list):

[screenshot: my "top" output showing a single influxd row]

Exactly one row for Influx. I've never seen more than one row, so I don't get why you have multiple rows. I do have multiple NodeRed flows writing to multiple Influx databases but, as far as I can tell, everything is serialised, so I wouldn't expect multiple instances of Influx to be fired up.

I also read that "virt" column (which is, I assume, what you're talking about) as the amount of virtual memory, not the database size. Even so, it's seriously on the high side compared with mine.

To figure out the on-disk database storage I'd probably do something like sudo du -m ~/IOTstack/volumes/influxdb/data and take the number from the last row in megabytes. There's probably a simpler way of doing that but...
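Spelled out, that du idea looks like this (the IOTstack path is from my setup; adjust it to wherever your Influx volume actually lives):

```shell
# Total on-disk size of the Influx data directory, in megabytes.
# -s prints just the summary line instead of every subdirectory.
sudo du -ms ~/IOTstack/volumes/influxdb/data

# Or human-readable, with a per-database breakdown one level down.
sudo du -h --max-depth=1 ~/IOTstack/volumes/influxdb/data
```

If that number is nowhere near 67 GB, the growth you're seeing in "top" is memory, not the database.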

It's not quite the same thing but I did have trouble with Influx consuming more than its fair share of CPU. Maybe see if this helps. Maybe that internal database was also chewing up virtual memory and I simply didn't notice.
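In case it's relevant: what helped me was turning off Influx's self-monitoring, which writes runtime statistics into an "_internal" database. For InfluxDB 1.x the setting lives in influxdb.conf (check your own copy; this is the stock section name, not something specific to IOTstack):

```toml
# influxdb.conf — stop Influx recording its own runtime
# statistics into the "_internal" database.
[monitor]
  store-enabled = false
```

A restart of the container is needed for the change to take effect.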

If you can't make any progress, maybe start by sticking your docker-compose.yml on pastebin, then ask a question on the IOTstack Discord channel, with a link to the pastebin paste.
