Create Overview #2591
_partials/_architecture-overview.md:

FIXME
_partials/_caggs-next.md:

Creating a continuous aggregate is a two-step process. You need to create the
view first, then enable a policy to keep the view refreshed. You can create the
view on a hypertable, or on top of another continuous aggregate. You can have
more than one continuous aggregate on each source table or view.

Continuous aggregates require a `time_bucket` on the time partitioning column of
the hypertable.

By default, views are automatically refreshed. You can adjust this by setting
the [WITH NO DATA](#using-the-with-no-data-option) option. Additionally, the
view cannot be a [security barrier view][postgres-security-barrier].

Continuous aggregates use hypertables in the background, which means that they
also use chunk time intervals. By default, a continuous aggregate's chunk time
interval is 10 times the chunk time interval of the underlying hypertable. For
example, if the original hypertable's chunk time interval is 7 days, the
continuous aggregates on top of it have a 70-day chunk time interval.
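As a minimal sketch of the two-step process, assuming a hypothetical `conditions` hypertable and a view named `daily_avg`: first define the view with the `timescaledb.continuous` option, then add a refresh policy.

```sql
-- Step 1: create the continuous aggregate view on a hypertable.
CREATE MATERIALIZED VIEW daily_avg
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;

-- Step 2: enable a policy to keep the view refreshed.
SELECT add_continuous_aggregate_policy('daily_avg',
  start_offset      => INTERVAL '3 days',
  end_offset        => INTERVAL '1 hour',
  schedule_interval => INTERVAL '1 hour');
```

The offsets and schedule here are illustrative; choose them to match how far back your data changes and how fresh the aggregate needs to be.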
Research has shown that when data is newly ingested, queries are more likely
to be shallow in time and wide in columns. Generally, these are operational or
debugging queries, or queries that cover the whole system, rather than specific
analytic queries. An example of the kind of query more likely for new data is
"show the current CPU usage, disk usage, energy consumption, and I/O for a
particular server." When this is the case, uncompressed data has better query
performance, so the native PostgreSQL row-based format is the best option.

However, as data ages, queries are likely to change. They become more
analytical, and involve fewer columns. An example of the kind of query run on
older data is "calculate the average disk usage over the last month." This type
of query runs much faster on compressed, columnar data.

To take advantage of this and increase your query efficiency, you want to run
queries on new data that is uncompressed, and on older data that is compressed.
Setting the right compression policy interval means that recent data is ingested
in an uncompressed, row-based format for efficient shallow and wide queries, and
then automatically converted to a compressed, columnar format after it ages and
is more likely to be queried using deep and narrow queries. Therefore, one
consideration for choosing the age at which to compress the data is when your
query patterns change from shallow and wide to deep and narrow.
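As a sketch, assuming a hypothetical `metrics` hypertable whose query patterns shift from shallow-and-wide to deep-and-narrow after about a week, you could enable compression and schedule it like this:

```sql
-- Enable columnar compression on the hypertable.
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id',
  timescaledb.compress_orderby   = 'time DESC'
);

-- Compress chunks once their data is older than 7 days.
SELECT add_compression_policy('metrics', INTERVAL '7 days');
```

The 7-day interval is the assumption here: set it to the age at which your own workload's query patterns change.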
Timescale includes traditional disk storage, and a low-cost object-storage
layer built on Amazon S3. You can move your hypertable data across the different
storage tiers to get the best price-performance. You can use primary storage for
data that requires quick access, and low-cost object storage for historical
data. Regardless of where your data is stored, you can query it with standard
SQL.
Data tiering works by periodically and asynchronously moving older chunks to S3
storage. There, the data is stored in the Apache Parquet format, a compressed
columnar format well suited to S3. Data remains accessible both during and
after migration.

When you run regular SQL queries, a behind-the-scenes process transparently
pulls data from wherever it's located: disk storage, object storage, or both.
Various SQL optimizations limit what needs to be read from S3:

*   Chunk exclusion avoids processing chunks that fall outside the query's time
    window.
*   The database uses metadata about row groups and columnar offsets, so only
    part of an object needs to be read from S3.

The result is transparent queries across standard PostgreSQL storage and S3
storage, so your queries fetch the same data as before, with minimal added
latency.
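For example, chunk exclusion means that a time-bounded query like the following, using a hypothetical `metrics` hypertable, only touches the chunks that overlap the 90-day window, whether they live on disk or in S3:

```sql
SELECT time_bucket('1 day', time) AS day,
       avg(cpu_usage) AS avg_cpu
FROM metrics
WHERE time > now() - INTERVAL '90 days'  -- chunks outside this range are skipped
GROUP BY day
ORDER BY day;
```

Note that the query itself is ordinary SQL; nothing in it refers to the storage tier.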
_partials/_dynamic-compute-intro.md:

FIXME
When you create and use a hypertable, it automatically partitions data by time,
and optionally by space.

Each hypertable is made up of child tables called chunks. Each chunk is assigned
a range of time, and only contains data from that range. If the hypertable is
also partitioned by space, each chunk is also assigned a subset of the space
values.

When you insert data from a time range that doesn't yet have a chunk, Timescale
automatically creates a chunk to store it.

By default, each chunk covers 7 days. You can change this to better suit your
needs. For example, if you set `chunk_time_interval` to 1 day, each chunk stores
data from a single day, and data from different days is stored in different
chunks.

<img class="main-content__illustration"
src="https://assets.timescale.com/docs/images/getting-started/hypertables-chunks.webp"
alt="A normal table compared to a hypertable. The normal table holds data for 3 different days in one container. The hypertable contains 3 containers, called chunks, each of which holds data for a separate day."
/>
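A minimal sketch of creating a hypertable with a 1-day chunk interval, using a hypothetical `conditions` table:

```sql
CREATE TABLE conditions (
  time        TIMESTAMPTZ      NOT NULL,
  device_id   TEXT,
  temperature DOUBLE PRECISION
);

-- Partition by time, with each chunk covering 1 day
-- instead of the 7-day default.
SELECT create_hypertable('conditions', 'time',
  chunk_time_interval => INTERVAL '1 day');
```

From here on, the table is read and written exactly like a standard PostgreSQL table; chunk creation happens automatically as data arrives.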
The [`time_bucket`][time_bucket] function allows you to aggregate data into
buckets of time, for example: 5 minutes, 1 hour, or 3 days. It's similar to
PostgreSQL's [`date_bin`][date_bin] function, but it gives you more
flexibility in bucket size and start time.

Time bucketing is essential to working with time-series data. You can use it to
roll up data for analysis or downsampling. For example, you can calculate
5-minute averages for a sensor reading over the last day. You can perform these
rollups as needed, or pre-calculate them in [continuous aggregates][caggs].

Time bucketing groups data into time intervals. With `time_bucket`, the interval
length can be any number of microseconds, milliseconds, seconds, minutes, hours,
days, weeks, months, years, or centuries.

The `time_bucket` function is usually used in combination with `GROUP BY` to
aggregate data. For example, you can calculate the average, maximum, minimum, or
sum of values within a bucket.

<img class="main-content__illustration"
src="https://assets.timescale.com/docs/images/getting-started/time-bucket.webp"
alt="Diagram showing time-bucket aggregating data into daily buckets, and calculating the daily sum of a value"
/>
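The `GROUP BY` pattern described above might look like the following, assuming a hypothetical `conditions` table with `time` and `temperature` columns:

```sql
-- Average temperature in 5-minute buckets over the last day.
SELECT time_bucket('5 minutes', time) AS bucket,
       avg(temperature) AS avg_temp
FROM conditions
WHERE time > now() - INTERVAL '1 day'
GROUP BY bucket
ORDER BY bucket;
```

Swapping `avg` for `max`, `min`, or `sum` gives the other per-bucket rollups mentioned above.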
Timescale is the database platform built for developers. It's engineered to
deliver speed and scale for your resource-intensive workloads, like those using
time-series, event, and analytics data.

*   _PostgreSQL++_ - Timescale is the PostgreSQL you know and love, giving you
    access to the entire PostgreSQL ecosystem, with additional features like
    hypertables, compression, and continuous aggregates.
*   _Faster query performance_ - Timescale dramatically improves your
    performance with automatic partitioning using Timescale hypertables, and
    automatically refreshed materialized views with Timescale continuous
    aggregates.
*   _Lower costs as you scale_ - Timescale's straightforward and predictable
    pricing is based on two metrics: compute and storage. You only pay for what
    you use, and Timescale can save you even more money with features like
    native compression and data tiering to S3.
*   _Worry-free_ - Timescale simplifies deployment and management with a
    user-friendly interface, along with one-click replicas, forks, monitoring,
    high availability, connection pooling, and more. And our expert support team
    is available to assist you at no extra charge.
### Vastly improve query performance

As the amount of data in your database grows, query speed tends to get slower.
Timescale hypertables can dramatically improve the performance of large tables.
Hypertables work just like standard PostgreSQL tables, but they are broken down
into chunks, and automatically partitioned by time. This gives you improved
insert and query performance, plus access to an entire suite of useful tools.
Because you work with hypertables in the same way as standard PostgreSQL tables,
you don't need to re-architect your application, or learn any new languages.

### Automatically refresh materialized views with continuous aggregates

Time-series data grows very quickly, and as the data grows, analyzing it gets
slower and uses more resources. Timescale solves the slow-down with continuous
aggregates. Based on PostgreSQL materialized views, continuous aggregates are
incrementally and continuously updated, making them lightning fast.
Standard materialized views must be recalculated from scratch every time they
are refreshed, but continuous aggregates are incrementally updated in the
background, so they do not require a lot of resources to keep up to date.
Additionally, you can query your continuous aggregates even if the underlying
data is compressed and archived.

### Achieve 90%+ compression

It is common for databases to provide compression that saves space, but doesn't
improve query performance for data that spans long time intervals. Timescale
includes native compression for your hypertables that improves query performance
and dramatically reduces data volume. Timescale compression uses native columnar
compression and best-in-class algorithms that are automatically chosen based on
your data, which saves space and improves performance. Timescale subscribers
usually see compression ratios greater than 90% and, because you only pay for
the storage space that you actually use, that means you save money from the
moment you start.

### Lower costs with data tiering to S3

When you are working with time-series and event data, storage costs can easily
spiral out of control. With Timescale, you never have to worry about hidden
costs for your storage, because you only pay for what you actually use. You
don't need to allocate a fixed storage volume or worry about managing your disk
size and, if you compress or delete data, you immediately reduce the size of
your bill. Timescale also allows you to change how your data is stored with data
tiering to S3, with no limits to how much data you tier. This lets you choose a
cheaper storage option for historical data, with no hidden costs like extra
charges for querying or reading tiered data. When you have enabled data tiering,
your data is automatically archived as it ages, so there is no need to manually
perform archive operations. Best of all, your historical data is not siloed, so
your active and tiered data can be queried directly from within the same
database.

### Timescale works for you end-to-end

Converting your PostgreSQL tables to hypertables instantly improves query and
insert performance, and gives you immediate access to continuous aggregates and
compression. Continuous aggregates continuously and incrementally materialize
your aggregate queries, giving you updated insights as soon as new data arrives.
Compression immediately improves database performance and, with usage-based
storage, also saves you money. Pair all this with data tiering to automatically
archive older data, saving money while retaining access when you need it. Want
to know more? Keep reading, and remember our world-class support team is here to
help you if you need it, every step of the way.
---
title: Continuous aggregates overview
excerpt: What are continuous aggregates?
products: [cloud, mst, self_hosted]
keywords: [learn, overview, continuous aggregates]
---

import CaggsIntro from "versionContent/_partials/_caggs-intro.mdx";
import CaggsTypes from "versionContent/_partials/_caggs-types.mdx";
import CaggsNext from "versionContent/_partials/_caggs-next.mdx";

# Continuous aggregation

<CaggsIntro />

<CaggsTypes />

<CaggsNext />

For more information about continuous aggregation, see the
[continuous aggregates section][caggs].

[caggs]: /use-timescale/:currentVersion:/continuous-aggregates/
---
title: Compression overview
excerpt: What is compression?
products: [cloud, mst, self_hosted]
keywords: [learn, overview, compression]
---

import CompressionIntro from "versionContent/_partials/_compression-intro.mdx";
import CompressionNext from "versionContent/_partials/_compression-next.mdx";

# Compression

<CompressionIntro />

<CompressionNext />

For more information about compression, see the
[compression section][compression].

[compression]: /use-timescale/:currentVersion:/compression/
---
title: Dynamic compute and usage-based storage
excerpt: What is dynamic compute and usage-based storage?
products: [cloud, mst, self_hosted]
keywords: [learn, overview, storage, compute, billing]
---

import UbsIntro from "versionContent/_partials/_usage-based-storage-intro.mdx";
import DynamicComputeIntro from "versionContent/_partials/_dynamic-compute-intro.mdx";

# Dynamic compute and usage-based storage

<UbsIntro />

<DynamicComputeIntro />

For more information about dynamic compute and usage-based storage, see the
[billing section][billing].

[billing]: /use-timescale/:currentVersion:/account-management/
---
title: Data tiering overview
excerpt: What is data tiering?
products: [cloud, mst, self_hosted]
keywords: [learn, overview, data tiering]
---

import DataTieringIntro from "versionContent/_partials/_data-tiering-intro.mdx";
import DataTieringNext from "versionContent/_partials/_data-tiering-next.mdx";

# Data tiering

<DataTieringIntro />

<DataTieringNext />

For more information about data tiering, see the
[data tiering section][data-tiering].

[data-tiering]: /use-timescale/:currentVersion:/data-tiering/
---
title: Hypertables overview
excerpt: What are hypertables?
products: [cloud, mst, self_hosted]
keywords: [learn, overview, hypertables]
---

import HypertablesIntro from "versionContent/_partials/_hypertables-intro.mdx";
import HypertablesNext from "versionContent/_partials/_hypertables-next.mdx";

# Hypertables

<HypertablesIntro />

<HypertablesNext />

For more information about hypertables, see the
[hypertables section][hypertables].

[hypertables]: /use-timescale/:currentVersion:/hypertables/
---
title: Timescale overview
excerpt: Learn about core Timescale concepts, architecture, and features
products: [cloud, mst, self_hosted]
keywords: [learn, overview, hypertables, time buckets, compression, continuous aggregates]
---

import TimescaleIntro from "versionContent/_partials/_timescale-intro.mdx";
import TimescaleValueProp from "versionContent/_partials/_timescale-value-prop.mdx";

# What is Timescale?

<TimescaleIntro />

## What can Timescale do for your database?

<TimescaleValueProp />

This section provides an overview of Timescale architecture, introducing you
to special Timescale concepts and features.
module.exports = [
  {
    title: "What is Timescale?",
    href: "overview",
    filePath: "index.md",
    excerpt: "What is Timescale?",
    children: [
      {
        title: "Time-series data",
        href: "time-series-data-overview",
        excerpt: "What is time-series data?",
      },
      {
        title: "Timescale architecture",
        href: "timescale-architecture-overview",
        excerpt: "What is Timescale architecture?",
      },
      {
        title: "Timescale",
        href: "timescale-overview",
        excerpt: "What is Timescale?",
      },
      {
        title: "Hypertables",
        href: "hypertables-overview",
        excerpt: "What are hypertables?",
      },
      {
        title: "Time buckets",
        href: "time-bucket-overview",
        excerpt: "What are time buckets?",
      },
      {
        title: "Data tiering",
        href: "data-tiering-overview",
        excerpt: "What is data tiering?",
      },
      {
        title: "Continuous aggregation",
        href: "caggs-overview",
        excerpt: "What are continuous aggregates?",
      },
      {
        title: "Compression",
        href: "compression-overview",
        excerpt: "What is compression?",
      },
      {
        title: "Dynamic compute and usage-based storage",
        href: "compute-storage-overview",
        excerpt: "What is dynamic compute and usage-based storage?",
      },
    ],
  },
];