review some examples and typo #17345

Open
wants to merge 4 commits into base: master
4 changes: 2 additions & 2 deletions docs/netdata-agent/sizing-netdata-agents/README.md
@@ -20,11 +20,11 @@ This is a map of how Netdata **features impact resource utilization**:

Lowering the data collection frequency from every second to every 2 seconds will make Netdata use half the CPU. So, CPU utilization is proportional to the data collection frequency.
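As a sketch of how this is configured, the collection frequency is the `update every` option in `netdata.conf`; the `[global]` placement below follows the classic layout (some releases expose it under `[db]` instead):

```
[global]
    # collect every 2 seconds instead of every second,
    # roughly halving Netdata's CPU utilization
    update every = 2
```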

3. **Database Mode and Tiers**: By default Netdata stores metrics in 3 database tiers: high-resolution, mid-resolution, low-resolution. All database tiers are updated in parallel during data collection, and depending on the query duration Netdata may consult one or more tiers to optimize the resources required to satisfy it.
3. **Database Mode and Tiers**: By default Netdata stores metrics in 3 database tiers: high-resolution, mid-resolution, and low-resolution. All database tiers are updated in parallel during data collection, and depending on the query duration Netdata may consult one or more tiers to optimize the resources required to satisfy it.

The number of database tiers affects the memory requirements of Netdata. Going from 3 tiers to 1 tier will make Netdata use half the memory. Of course, metrics retention will then be limited to 1 tier.
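A minimal sketch of dropping to a single tier, assuming the `storage tiers` setting in the `[db]` section of `netdata.conf` as exposed by recent Netdata releases:

```
[db]
    # keep only the high-resolution tier; memory use drops,
    # but retention is limited to tier 0
    storage tiers = 1
```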

4. **Machine Learning**: Byt default Netdata trains multiple machine learning models for every metric collected, to learn its behavior and detect anomalies. Machine Learning is a CPU intensive process and affects the overall CPU utilization of Netdata.
4. **Machine Learning**: By default Netdata trains multiple machine learning models for every metric collected, to learn its behavior and detect anomalies. Machine Learning is a CPU intensive process and affects the overall CPU utilization of Netdata.
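Machine learning can be switched off when its CPU cost is not worth it; a sketch assuming the `enabled` option of the `[ml]` section in `netdata.conf`:

```
[ml]
    # skip training anomaly detection models to reduce CPU utilization
    enabled = no
```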

5. **Streaming Compression**: When using Netdata in Parent-Child configurations to create Metrics Centralization Points, the compression algorithm used greatly affects CPU utilization and bandwidth consumption.
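On the sending side this is toggled in `stream.conf`; a sketch assuming the long-standing `enable compression` switch (newer releases may let you choose the algorithm instead):

```
[stream]
    # trade some CPU on parent and child for less bandwidth
    enable compression = yes
```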

6 changes: 3 additions & 3 deletions docs/store/change-metrics-storage.md
@@ -49,7 +49,7 @@ the `update every iterations` of the tiers, to stay under the limit.

The exact retention that can be achieved by each tier depends on the number of metrics collected. The more
metrics there are, the less retention will fit in a given size. The general rule is that Netdata needs
about **1 byte per data point on disk for tier 0**, and **4 bytes per data point on disk for tier 1 and above**.
about **1 byte per data point on disk for tier 0**, and **6 bytes per data point on disk for tier 1** and **16 bytes per data point on disk for tier 2 and above**.

So, for 1000 metrics collected per second and 256 MB for tier 0, Netdata will store about:

@@ -60,13 +60,13 @@ So, for 1000 metrics collected per second and 256 MB for tier 0, Netdata will store about:
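Applying the same 1 byte per point rule at tier 0 (per second), with the same loose arithmetic as the blocks below, gives roughly:

```
256MB on disk / 1 byte per point / 1000 metrics => 256k points per metric / 86400 secs per day ~= 3 days
```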
At tier 1 (per minute):

```
128MB on disk / 4 bytes per point / 1000 metrics => 32k points per metric / (24 hr * 60 min) ~= 22 days
128MB on disk / 6 bytes per point / 1000 metrics => 21k points per metric / (24 hr * 60 min) ~= 15 days
```

At tier 2 (per hour):

```
64MB on disk / 4 bytes per point / 1000 metrics => 16k points per metric / 24 hr per day ~= 2 years
64MB on disk / 16 bytes per point / 1000 metrics => 4k points per metric / 24 hr per day ~= 0.5 years
```

Of course, double the metrics, half the retention. There are more factors that affect retention. The number