Create Overview #2591

Open. Wants to merge 26 commits into base branch `latest`.

Commits (26):
a88ff7d Create pages (Loquacity, Aug 2, 2023)
82847f5 page index and move our of dir (Loquacity, Aug 2, 2023)
6066046 data tiering partials (Loquacity, Aug 2, 2023)
0f0e395 more data tiering (Loquacity, Aug 2, 2023)
d75d641 Hypertables partials (Loquacity, Aug 2, 2023)
618ead1 Time bucket partials (Loquacity, Aug 2, 2023)
cd4983d Caggs partials (Loquacity, Aug 2, 2023)
c537bfc Compression partials (Loquacity, Aug 2, 2023)
c2ab135 Merge branch 'latest' into dev-overview-lana (Loquacity, Aug 7, 2023)
c03e067 Move overview out of use ts (Loquacity, Aug 10, 2023)
291615b Merge branch 'latest' into dev-overview-lana (Loquacity, Aug 10, 2023)
9980a40 Merge branch 'latest' into dev-overview-lana (Loquacity, Aug 14, 2023)
b5bb548 fix: add page index for overview section (charislam, Aug 15, 2023)
f1cfd7d Merge branch 'latest' into dev-overview-lana (Loquacity, Aug 16, 2023)
8cd53aa Add intro and value prop (Loquacity, Aug 16, 2023)
101669e Merge branch 'latest' into dev-overview-lana (Loquacity, Aug 28, 2023)
63209e9 Merge branch 'latest' into dev-overview-lana (Loquacity, Sep 4, 2023)
3793e33 Change title of overview (Loquacity, Sep 4, 2023)
397458b Merge branch 'latest' into dev-overview-lana (Loquacity, Sep 6, 2023)
a617950 Merge branch 'latest' into dev-overview-lana (Loquacity, Sep 8, 2023)
5c901aa Merge branch 'latest' into dev-overview-lana (Loquacity, Sep 12, 2023)
8b15f09 Merge branch 'latest' into dev-overview-lana (Loquacity, Sep 13, 2023)
ed4cd2f split page (Loquacity, Sep 13, 2023)
eb4c5e0 metadata (Loquacity, Sep 13, 2023)
75c1162 update top level content (Loquacity, Sep 13, 2023)
3692e67 Update title (Loquacity, Sep 13, 2023)
1 change: 1 addition & 0 deletions _partials/_architecture-overview.md
@@ -0,0 +1 @@
FIXME

Check warning on line 1 in _partials/_architecture-overview.md (vale via GitHub Actions / prose): [Google.FixMe] Replace placeholder text.
18 changes: 18 additions & 0 deletions _partials/_caggs-next.md
@@ -0,0 +1,18 @@
Creating a continuous aggregate is a two-step process. You need to create the
view first, then enable a policy to keep the view refreshed. You can create the
view on a hypertable, or on top of another continuous aggregate. You can have
more than one continuous aggregate on each source table or view.

Continuous aggregates require a `time_bucket` on the time partitioning column of
the hypertable.

By default, views are automatically refreshed. You can adjust this by setting
Contributor suggested change:
- By default, views are automatically refreshed. You can adjust this by setting
+ By default, views are automatically refreshed when they are created. You can adjust this by using

the [WITH NO DATA](#using-the-with-no-data-option) option. Additionally, the
view cannot be a [security barrier view][postgres-security-barrier].

Check failure on line 11 in _partials/_caggs-next.md (vale via GitHub Actions / prose): [Google.MarkdownLinks] Remember to include the link.

Continuous aggregates use hypertables in the background, which means that they
Contributor suggested change:
- Continuous aggregates use hypertables in the background, which means that they
+ Continuous aggregates use hypertables internally, which means that they

also use chunk time intervals. By default, a continuous aggregate's chunk time
interval is 10 times the chunk time interval of the original hypertable. For
example, if the original hypertable's chunk time interval is 7 days, the
continuous aggregates on top of it have a 70-day chunk time interval.
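
As a sketch of this two-step process (the `conditions` hypertable, its columns, and the intervals here are assumptions for illustration):

```sql
-- Step 1: create the continuous aggregate view on a hypertable,
-- bucketing by the hypertable's time partitioning column.
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous) AS
SELECT
    time_bucket('1 day', time) AS bucket,
    device_id,
    avg(temperature) AS avg_temp
FROM conditions
GROUP BY bucket, device_id;

-- Step 2: add a policy that keeps the view refreshed as new data
-- arrives and as late rows change the underlying hypertable.
SELECT add_continuous_aggregate_policy('conditions_daily',
    start_offset      => INTERVAL '3 days',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');
```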
21 changes: 21 additions & 0 deletions _partials/_compression-next.md
@@ -0,0 +1,21 @@
Research has shown that when data is newly ingested, the queries are more likely
to be shallow in time, and wide in columns. Generally, they are debugging
queries, or queries that cover the whole system, rather than specific, analytic
queries. An example of the kind of query more likely for new data is "show the
current CPU usage, disk usage, energy consumption, and I/O for a particular
server". When this is the case, the uncompressed data has better query
performance, so the native PostgreSQL row-based format is the best option.
Comment on lines +1 to +7 (Contributor): Hmm... not sure if "debugging queries" is the best categorization of this.
Suggested change (replacing the paragraph above):
+ For newly ingested data, the queries are usually shallow in time, and wide in columns. At this stage, the queries delve into details of the system. An example of the kind of query more likely for new data is "show the current CPU usage, disk usage, energy consumption, and I/O for a particular server". When this is the case, the uncompressed data has better query performance, so the native PostgreSQL row-based format is the best option.


However, as data ages, queries are likely to change. They become more
analytical, and involve fewer columns. An example of the kind of query run on
older data is "calculate the average disk usage over the last month." This type
of query runs much faster on compressed, columnar data.

To take advantage of this and increase your query efficiency, you want to run
queries on new data that is uncompressed, and on older data that is compressed.
Setting the right compression policy interval means that recent data is ingested
in an uncompressed, row format for efficient shallow and wide queries, and then
automatically converted to a compressed, columnar format after it ages and is
more likely to be queried using deep and narrow queries. Therefore, one
consideration for choosing the age at which to compress the data is when your
query patterns change from shallow and wide to deep and narrow.
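
As a sketch of how that policy is set up (the table name, segmenting column, and 30-day interval are assumptions for illustration):

```sql
-- Enable compression, segmenting compressed rows by device so that
-- deep-and-narrow queries on a single series stay fast.
ALTER TABLE conditions SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);

-- Compress chunks once their data is older than 30 days, the assumed
-- point where query patterns shift from shallow-and-wide to
-- deep-and-narrow.
SELECT add_compression_policy('conditions', INTERVAL '30 days');
```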
6 changes: 6 additions & 0 deletions _partials/_data-tiering-intro.md
@@ -0,0 +1,6 @@
Timescale includes traditional disk storage, and a low-cost object-storage
layer built on Amazon S3. You can move your hypertable data across the different
storage tiers to get the best price-performance. You can use primary storage for
data that requires quick access, and low-cost object storage for historical
data. Regardless of where your data is stored, you can query it with standard
SQL.
17 changes: 17 additions & 0 deletions _partials/_data-tiering-next.md
@@ -0,0 +1,17 @@
Data tiering works by periodically and asynchronously moving older chunks to S3
storage. There, it's stored in the Apache Parquet format, which is a compressed
columnar format well-suited for S3. Data remains accessible both during and
after migration.

When you run regular SQL queries, a behind-the-scenes process transparently
pulls data from wherever it's located: disk storage, object storage, or both.
Various SQL optimizations limit what needs to be read from S3:

* Chunk exclusion avoids processing chunks that fall outside the query's time
window
* The database uses metadata about row groups and columnar offsets, so only
part of an object needs to be read from S3

The result is transparent queries across standard PostgreSQL storage and S3
storage, so your queries fetch the same data as before, with minimal added
latency.
Comment on lines +15 to +17 (Contributor): Focus on the utility, not on performance.
Suggested change (replacing the paragraph above):
+ As a result, you can write queries that seamlessly read and combine both tiered and untiered data.
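
A sketch of that transparency (the `metrics` table and its columns are assumptions for illustration); one standard SQL query spans both tiers, and chunk exclusion limits what is read from S3:

```sql
-- Recent chunks are read from disk, older chunks from S3, with no
-- change to the SQL. The WHERE clause lets chunk exclusion prune
-- chunks that fall entirely outside the time window.
SELECT time_bucket('1 day', time) AS day,
       avg(cpu_usage) AS avg_cpu
FROM metrics
WHERE time > now() - INTERVAL '6 months'
GROUP BY day
ORDER BY day;
```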

1 change: 1 addition & 0 deletions _partials/_dynamic-compute-intro.md
@@ -0,0 +1 @@
FIXME

Check warning on line 1 in _partials/_dynamic-compute-intro.md (vale via GitHub Actions / prose): [Google.FixMe] Replace placeholder text.
20 changes: 20 additions & 0 deletions _partials/_hypertables-next.md
@@ -0,0 +1,20 @@
When you create and use a hypertable, it automatically partitions data by time,
and optionally by space.
Comment on lines +1 to +2 (Contributor) suggested change (replacing the sentence above):
+ Hypertables are used to automatically partition data: traditionally by time, but they can also be used to partition data along other dimensions.


Each hypertable is made up of child tables called chunks. Each chunk is assigned
a range of time, and only contains data from that range. If the hypertable is
also partitioned by space, each chunk is also assigned a subset of the space
values.
Comment on lines +4 to +7 (Contributor): You can partition using multiple time dimensions and multiple space dimensions, so suggest elaborating a little on this.
Suggested change (replacing the paragraph above):
+ Each hypertable is made up of child tables called chunks. Each chunk is assigned a range of time, and only contains data from that range. If the hypertable is also partitioned by other dimensions, each chunk is also assigned a subset of the values in that dimension.


Each chunk of a hypertable only holds data from a specific time range. When you
insert data from a time range that doesn't yet have a chunk, Timescale
automatically creates a chunk to store it.

By default, each chunk covers 7 days. You can change this to better suit your
needs. For example, if you set `chunk_time_interval` to 1 day, each chunk stores
data from the same day. Data from different days is stored in different chunks.

<img class="main-content__illustration"
src="https://assets.timescale.com/docs/images/getting-started/hypertables-chunks.webp"
alt="A normal table compared to a hypertable. The normal table holds data for 3 different days in one container. The hypertable contains 3 containers, called chunks, each of which holds data for a separate day."
/>
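
A minimal sketch of creating a hypertable and setting its chunk time interval (the `conditions` table and its columns are assumptions for illustration):

```sql
-- A regular PostgreSQL table...
CREATE TABLE conditions (
    time        TIMESTAMPTZ NOT NULL,
    device_id   TEXT,
    temperature DOUBLE PRECISION
);

-- ...becomes a hypertable partitioned by time. Each chunk covers
-- 1 day instead of the default 7 days.
SELECT create_hypertable('conditions', 'time',
    chunk_time_interval => INTERVAL '1 day');
```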
22 changes: 22 additions & 0 deletions _partials/_time-bucket-intro.md
@@ -0,0 +1,22 @@
The [`time_bucket`][time_bucket] function allows you to aggregate data into

Check failure on line 1 in _partials/_time-bucket-intro.md (vale via GitHub Actions / prose): [Google.MarkdownLinks] Remember to include the link.
buckets of time, for example: 5 minutes, 1 hour, or 3 days. It's similar to
PostgreSQL's [`date_bin`][date_bin] function, but it gives you more

Check failure on line 3 in _partials/_time-bucket-intro.md (vale via GitHub Actions / prose): [Google.MarkdownLinks] Remember to include the link.
flexibility in bucket size and start time.

Time bucketing is essential to working with time-series data. You can use it to
roll up data for analysis or downsampling. For example, you can calculate
5-minute averages for a sensor reading over the last day. You can perform these
rollups as needed, or pre-calculate them in [continuous aggregates][caggs].

Check failure on line 9 in _partials/_time-bucket-intro.md (vale via GitHub Actions / prose): [Google.MarkdownLinks] Remember to include the link.

Time bucketing groups data into time intervals. With `time_bucket`, the interval
length can be any number of microseconds, milliseconds, seconds, minutes, hours,
days, weeks, months, years, or centuries.

The `time_bucket` function is usually used in combination with `GROUP BY` to
aggregate data. For example, you can calculate the average, maximum, minimum, or
sum of values within a bucket.

<img class="main-content__illustration"
src="https://assets.timescale.com/docs/images/getting-started/time-bucket.webp"
alt="Diagram showing time-bucket aggregating data into daily buckets, and calculating the daily sum of a value"
/>
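
A short sketch of `time_bucket` with `GROUP BY`, rolling up 5-minute averages as described above (the `conditions` table is an assumption for illustration):

```sql
-- 5-minute averages for each device over the last day.
SELECT time_bucket('5 minutes', time) AS bucket,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
WHERE time > now() - INTERVAL '1 day'
GROUP BY bucket, device_id
ORDER BY bucket;
```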
21 changes: 19 additions & 2 deletions _partials/_timescale-intro.md
@@ -1,2 +1,19 @@
Timescale extends PostgreSQL for time-series and analytics, so you can build
faster, scale further, and stay under budget.
Timescale is the database platform built for developers. It's engineered to
deliver speed and scale for your resource-intensive workloads&mdash;like those using
time series, event, and analytics data.
Comment on lines +1 to +3 (Contributor): I think that we also want to push more on the ease of use. Hypertables perform well, but they are also hassle-free because they handle automatic partitioning and manage data in different stages of the life cycle.
Suggested change (replacing the paragraph above):
+ Timescale is the database platform built for developers. It's engineered to deliver hassle-free speed and scale for your resource-intensive workloads&mdash;like those using time series, event, and analytics data.


* _PostgreSQL++_ - Timescale is the PostgreSQL you know and love, giving you
access to the entire PostgreSQL ecosystem, but Timescale has additional
features like hypertables, compression, and continuous aggregates.
Comment on lines +5 to +7 (Contributor): I wonder if we want to add a bullet around something like "Designed for Data Intensive Applications".

* _Faster query performance_ - Timescale dramatically improves your
performance with automatic partitioning using Timescale hypertables and
automatically refreshed materialized views with Timescale continuous
aggregates.
* _Lower costs as you scale_ - Timescale's straightforward and predictable
pricing is based on two metrics, compute and storage. You only pay for what
you use, and Timescale can save you even more money with features like
native compression and data tiering to S3.
* _Worry-free_ - Timescale simplifies deployment and management with a
user-friendly interface, along with one-click replicas, forks, monitoring,
high availability, connection pooling, and more. And our expert support team

Check warning on line 18 in _partials/_timescale-intro.md (vale via GitHub Actions / prose): [Google.We] Try to avoid using first-person plural like 'our'.
is available to assist you at no extra charge.
61 changes: 61 additions & 0 deletions _partials/_timescale-value-prop.md
@@ -0,0 +1,61 @@
### Vastly improve query performance

As the amount of data in your database grows, query speed tends to get slower.
Timescale hypertables can dramatically improve the performance of large tables.
Hypertables work just like standard PostgreSQL tables, but they are broken down
into chunks, and automatically partitioned by time. This gives you improved
insert and query performance, plus access to an entire suite of useful tools.
Because you work with hypertables in the same way as standard PostgreSQL tables,
you don't need to re-architect your application, or learn any new languages.

### Automatically refresh materialized views with continuous aggregates

Time-series data grows very quickly, and as the data grows, analyzing it gets
slower and uses more resources. Timescale solves the slow-down with continuous
aggregates. Based on PostgreSQL materialized views, continuous aggregates are
incrementally and continuously updated, to make them lightning fast.
Comment on lines +13 to +16 (Contributor) suggested change (replacing the paragraph above):
+ For data-intensive applications, the amount of data that needs to be managed grows very quickly, and as the data grows, analyzing it gets slower and uses more resources. Timescale solves the slow-down with continuous aggregates. Based on PostgreSQL materialized views, continuous aggregates are incrementally and continuously updated, to make them lightning fast.

Materialized views need to be refreshed and recalculated from scratch every time
they are run, but continuous aggregates are incrementally updated in the
background, so they do not require a lot of resources to keep up to date.
Additionally, you can query your continuous aggregates even if the underlying
data is compressed and archived.

### Achieve 90%+ compression

It is common for databases to provide compression that saves space, but doesn't
improve query performance for data that spans long time intervals. Timescale
includes native compression for your hypertables that improves query performance
and dramatically reduces data volume. Timescale compression uses native columnar
compression and best-in-class algorithms that are automatically chosen based on
your data, which saves space and improves performance. Timescale subscribers
usually see compression ratios greater than 90% and, because you only pay for
the storage space that you actually use, that means that you save money from the
moment you start.

### Lower costs with data tiering to S3

When you are working with time-series and event data, storage costs can easily
spiral out of control. With Timescale, you never have to worry about hidden
Comment on lines +37 to +38 (Contributor): Maybe talk more about "data-intensive applications that collect large amounts of time-series and events data".

costs for your storage because you only pay for what you actually use. You don't
need to allocate a fixed storage volume or worry about managing your disk size
and, if you compress or delete data, you immediately reduce the size of your
bill. Timescale also allows you to change how your data is stored with data
tiering to S3, with no limits to how much data you tier. This lets you choose a
cheaper storage option for historical data, with no hidden costs like extra
charges for querying or reading tiered data. When you have enabled data tiering,
your data is automatically archived as it ages, so there is no need to manually
perform archive operations. Best of all, your historical data is not siloed, so
your active and tiered data can be queried directly from within the same
database.

### Timescale works for you end-to-end

Converting your PostgreSQL tables to hypertables instantly improves query and
insert performance, and gives you immediate access to continuous aggregates and
compression. Continuous aggregates continuously and incrementally materialize
your aggregate queries, giving you updated insights as soon as new data arrives.
Compression immediately improves database performance and, with usage-based
storage, also saves you money. Pair all this with data tiering to automatically
archive older data, saving money, but retaining access when you need it. Want to
know more? Keep reading, and remember our world-class support team is here to

Check warning on line 60 in _partials/_timescale-value-prop.md (vale via GitHub Actions / prose): [Google.We] Try to avoid using first-person plural like 'our'.
help you if you need it, every step of the way.
23 changes: 23 additions & 0 deletions overview/caggs-overview.md
@@ -0,0 +1,23 @@
---
title: Continuous aggregates overview
excerpt: What are continuous aggregates?
products: [cloud, mst, self_hosted]
keywords: [learn, overview, continuous aggregates]
---

import CaggsIntro from "versionContent/_partials/_caggs-intro.mdx";
import CaggsTypes from "versionContent/_partials/_caggs-types.mdx";
import CaggsNext from "versionContent/_partials/_caggs-next.mdx";

# Continuous aggregation

<CaggsIntro />

<CaggsTypes />

<CaggsNext />

For more information about continuous aggregation, see the
[continuous aggregates section][caggs].

[caggs]: /use-timescale/:currentVersion:/continuous-aggregates/
21 changes: 21 additions & 0 deletions overview/compression-overview.md
@@ -0,0 +1,21 @@
---
title: Compression overview
excerpt: What is compression?
products: [cloud, mst, self_hosted]
keywords: [learn, overview, compression]
---


import CompressionIntro from "versionContent/_partials/_compression-intro.mdx";
import CompressionNext from "versionContent/_partials/_compression-next.mdx";

# Compression

<CompressionIntro />

<CompressionNext />

For more information about compression, see the
[compression section][compression].

[compression]: /use-timescale/:currentVersion:/compression/
21 changes: 21 additions & 0 deletions overview/compute-storage-overview.md
@@ -0,0 +1,21 @@
---
title: Dynamic compute and usage-based storage
excerpt: What is dynamic compute and usage-based storage?
products: [cloud, mst, self_hosted]
keywords: [learn, overview, storage, compute, billing]
---


import UbsIntro from "versionContent/_partials/_usage-based-storage-intro.mdx";
import DynamicComputeIntro from "versionContent/_partials/_dynamic-compute-intro.mdx";

# Dynamic compute and usage-based storage

<UbsIntro />

<DynamicComputeIntro />

For more information about dynamic compute and usage-based storage, see the
[billing section][billing].

[billing]: /use-timescale/:currentVersion:/account-management/
21 changes: 21 additions & 0 deletions overview/data-tiering-overview.md
@@ -0,0 +1,21 @@
---
title: Data tiering overview
excerpt: What is data tiering?
products: [cloud, mst, self_hosted]
keywords: [learn, overview, data tiering]
---


import DataTieringIntro from "versionContent/_partials/_data-tiering-intro.mdx";
import DataTieringNext from "versionContent/_partials/_data-tiering-next.mdx";

# Data tiering

<DataTieringIntro />

<DataTieringNext />

For more information about data tiering, see the
[data tiering section][data-tiering].

[data-tiering]: /use-timescale/:currentVersion:/data-tiering/
21 changes: 21 additions & 0 deletions overview/hypertables-overview.md
@@ -0,0 +1,21 @@
---
title: Hypertables overview
excerpt: What are hypertables?
products: [cloud, mst, self_hosted]
keywords: [learn, overview, hypertables]
---


import HypertablesIntro from "versionContent/_partials/_hypertables-intro.mdx";
import HypertablesNext from "versionContent/_partials/_hypertables-next.mdx";

# Hypertables

<HypertablesIntro />

<HypertablesNext />

For more information about hypertables, see the
[hypertables section][hypertables].

[hypertables]: /use-timescale/:currentVersion:/hypertables/
20 changes: 20 additions & 0 deletions overview/index.md
@@ -0,0 +1,20 @@
---
title: Timescale overview
excerpt: Learn about core Timescale concepts, architecture, and features
products: [cloud, mst, self_hosted]
keywords: [learn, overview, hypertables, time buckets, compression, continuous aggregates]
---

import TimescaleIntro from "versionContent/_partials/_timescale-intro.mdx";
import TimescaleValueProp from "versionContent/_partials/_timescale-value-prop.mdx";

# What is Timescale?

<TimescaleIntro />

## What can Timescale do for your database?

<TimescaleValueProp />

This section provides an overview of Timescale architecture, introducing you
to special Timescale concepts and features.
56 changes: 56 additions & 0 deletions overview/page-index/page-index.js
@@ -0,0 +1,56 @@
module.exports = [
{
title: "What is Timescale?",
href: "overview",
filePath: "index.md",
excerpt:
"What is Timescale?",
children: [
{
title: "Time-series data",
href: "time-series-data-overview",
excerpt: "What is time-series data?",
},
{
title: "Timescale architecture",
href: "timescale-architecture-overview",
excerpt: "What is Timescale architecture?",
},
{
title: "Timescale",
href: "timescale-overview",
excerpt: "What is Timescale?",
},
{
title: "Hypertables",
href: "hypertables-overview",
excerpt: "What are hypertables?",
},
{
title: "Time buckets",
href: "time-bucket-overview",
excerpt: "What are time buckets?",
},
{
title: "Data tiering",
href: "data-tiering-overview",
excerpt: "What is data tiering?",
},
{
title: "Continuous aggregation",
href: "caggs-overview",
excerpt: "What are continuous aggregates?",
},
{
title: "Compression",
href: "compression-overview",
excerpt: "What is compression?",
},
{
title: "Dynamic compute and usage-based storage",
href: "compute-storage-overview",
excerpt: "What is dynamic compute and usage-based storage?",
},
],
},
];