
Stubs update move to branch #3013

Merged
merged 65 commits into from
May 10, 2024
Changes from 61 commits
Commits
65 commits
59e3525
Merging up stubs into main documents, editing, formatting
ujmediaservices Nov 30, 2023
b69a0df
Oops typo
ujmediaservices Nov 30, 2023
174f29f
Merge branch 'latest' into stub-merge-1
jamessewell Dec 14, 2023
58e579c
Update for linter
billy-the-fish Feb 18, 2024
935bf1e
Update administration.md
billy-the-fish Feb 18, 2024
1c67c89
Merge branch 'latest' into stub-merge-1
billy-the-fish Feb 18, 2024
6c8c0b7
fix: update navigation and links for page merge.
billy-the-fish Feb 18, 2024
9813e4a
Merge branch 'latest' into stub-merge-1
billy-the-fish Feb 19, 2024
403f11f
fix: clean up de-chunking.
billy-the-fish Feb 20, 2024
4efd9e4
fix: clean up de-chunking.
billy-the-fish Feb 21, 2024
2ef3eb0
Merge branch 'latest' into stubs-update-move-to-branch
billy-the-fish Feb 25, 2024
ee87a01
Merge branch 'latest' into stubs-update-move-to-branch
billy-the-fish Mar 4, 2024
43f6eea
Merge branch 'latest' into stubs-update-move-to-branch
billy-the-fish Mar 6, 2024
04b0713
feat: unite stubs.
billy-the-fish Mar 6, 2024
48d362f
Merge branch 'latest' into stubs-update-move-to-branch
billy-the-fish Mar 9, 2024
662a89f
Merge branch 'latest' into stubs-update-move-to-branch
billy-the-fish Mar 9, 2024
b201dae
feat: unite stubs.
billy-the-fish Mar 9, 2024
5c8a8b8
Merge branch 'latest' into stubs-update-move-to-branch
billy-the-fish Mar 14, 2024
2f48d3c
Fix typo (#3052)
Mar 14, 2024
0885261
Fix typo in troubleshooting section (#3073)
antekresic Mar 18, 2024
2f35b63
Update live-migration to v0.0.9 (#3071)
JamesGuthrie Mar 18, 2024
5ebe956
Copy Nit (#3078)
alai97 Mar 19, 2024
248174c
Update import-csv.md (#3080)
solugebefola Mar 21, 2024
f0ff048
Update new about compression (#3012)
antekresic Mar 25, 2024
8e02f37
live-migration: Add docs changes for v0.0.10 (#3091)
arajkumar Mar 25, 2024
83ff074
Add Google CA (#3090)
minkimipt Mar 25, 2024
f56118c
Update ruby.md (#3065)
solugebefola Mar 25, 2024
807c2a2
live-migration: Add docs changes for v0.0.11 (#3096)
arajkumar Mar 26, 2024
fd165c2
Move multinode migration under playbooks (#3110)
giokostis Apr 2, 2024
ec4e131
3113 docs rfc add new editor (#3115)
billy-the-fish Apr 4, 2024
a5567e1
fix: numbering and full stop. (#3120)
billy-the-fish Apr 4, 2024
e382c46
missing word in add_reorder_policy (#3106)
sdebruyn Apr 5, 2024
15d3b82
Restrict usage of time_bucket_ng in CAggs (#3112)
jnidzwetzki Apr 5, 2024
4cf8c75
fix: broken links. (#3123)
billy-the-fish Apr 8, 2024
c6539cd
Update CAgg add_continuous_aggregate_policy documentation (#3128)
jnidzwetzki Apr 11, 2024
62a4f89
fix: deprecate distributed hypertables from the API reference. (#3134)
billy-the-fish Apr 15, 2024
af98fd1
fix: update API reference for set_integer_now_func. (#3135)
billy-the-fish Apr 16, 2024
94af0a0
Add compression settings information views (#3142)
antekresic Apr 18, 2024
eff2cf2
fix: using the public API to check the chunk time interval for a hyp…
billy-the-fish Apr 18, 2024
beac116
3144 docs rfc be more clear about tiered storage being on timescale n…
billy-the-fish Apr 22, 2024
c92b6ff
Update alter_job.md (#3103)
fabriziomello Apr 22, 2024
522fc26
fix: some broken links. A couple of text updates. (#3151)
billy-the-fish Apr 22, 2024
a6dfda8
feat: FAQ for live migration (#3139)
Harkishen-Singh Apr 23, 2024
c204187
chore: update live-migration to v0.0.13 (#3150)
Harkishen-Singh Apr 23, 2024
b62ca3c
Add pgstattuple to list of available extensions (#3154)
JamesGuthrie Apr 25, 2024
d21d172
fix: clarify that each service hosts a single service only. (#3161)
billy-the-fish Apr 25, 2024
0067534
fix: change link in Migrate the entire database at once to the live m…
billy-the-fish Apr 25, 2024
fed5f12
Add warning about SERIAL types and dual-write (#3162)
JamesGuthrie Apr 25, 2024
4e5a692
Address possible installation issues regarding psycopg2 (#3165)
jgpruitt Apr 29, 2024
e2d4580
Add troubleshooting note about partial continuous aggregates (#3164)
JamesGuthrie Apr 30, 2024
531ce93
2829 obsolete restriction on replication factor 1 still in the docs 1…
billy-the-fish May 1, 2024
37135e5
Recommend live-migration instead of dual-writes (#3173)
alejandrodnm May 6, 2024
909f7c8
Refer to live-migration 0.0.14 (#3179)
arajkumar May 7, 2024
8687e8c
feat: add version downgrade for live-migration FAQ (#3163)
Harkishen-Singh May 7, 2024
470672b
Make some changes to refresh cagg note (#3146)
RobAtticus May 7, 2024
62e1443
Add role based auth to CloudWatch exporter doc (#3156)
minkimipt May 7, 2024
0b1bfba
fix: document actual behavior of add_continuous_aggregate_policy when…
dtext May 7, 2024
1a886f0
fix: add verb to sentence. (#3131)
billy-the-fish May 7, 2024
73a6f5f
fix: update navigation and links for page merge.
billy-the-fish Feb 18, 2024
3def1be
fix: clean up de-chunking.
billy-the-fish Feb 21, 2024
08ec6dd
chore: migrate changes in future versions back to this PR.
billy-the-fish May 7, 2024
dded382
Merge remote-tracking branch 'origin/latest' into stubs-update-move-t…
billy-the-fish May 10, 2024
50188fa
fix: my fatal weakness.
billy-the-fish May 10, 2024
b34a899
fix: one broken link fixed.
billy-the-fish May 10, 2024
839aa1d
fix: more link fixing after manual check.
billy-the-fish May 10, 2024
41 changes: 15 additions & 26 deletions _partials/_cloud-connect.md
@@ -2,28 +2,22 @@

### Check your service and connect to it

To check a service associated with your account is running, then connect to it:
To ensure a Timescale service is running correctly:

1. In the [Services section in Timescale Console][services-portal], check that your service is marked as `Running`.

1. In your development environment open [psql][install-psql] and connect to your service with the value of
`Service URL` in the config file you just saved.
1. Use PopSQL or psql to connect to your service:
- [Set up PopSQL][popsql]: follow the instructions to easily connect to your service in the UI.
- [psql][install-psql]: Connect to your service with the value of `Service URL` from the config file you
just saved.

The command line looks like:
<CodeBlock canCopy={true} showLineNumbers={false} children={`
psql "postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"
`} />
<CodeBlock canCopy={true} showLineNumbers={false} children={`
psql "postgres://tsdbadmin:<PASSWORD>@<HOST>:<PORT>/tsdb?sslmode=require"
`} />

You are connected to your service and are now able to issue commands. You see something like:
You are connected to your service and are now able to issue commands.

<CodeBlock canCopy={false} showLineNumbers={true} children={`
psql (14.5, server 15.3 (Ubuntu 15.3-1.pgdg22.04+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
tsdb=>
`} />

1. To create a PostgreSQL table, copy the following into the psql prompt and press Enter:
1. Create a PostgreSQL table: copy the following into [PopSQL][popsql] or psql, then run your query:

```sql
CREATE TABLE stocks_real_time (
@@ -34,15 +28,9 @@ To check a service associated with your account is running, then connect to it:
);
```

1. Check that the table exists in your service with the `\dt` command. In psql:

<CodeBlock canCopy={true} showLineNumbers={false} children={`
\\dt
`} />

You see the table listing in your service.

1. To disconnect, type `exit`.
1. Check that the table exists.
- In PopSQL, you see the table in the UI.
- In psql, run the `\dt` command. You see the table listing in your service. To disconnect, type `exit`.


Quick recap: you find configuration information about your
@@ -54,4 +42,5 @@ config file.
[tsc-portal]: https://console.cloud.timescale.com/
[account-portal]: https://console.cloud.timescale.com/dashboard/account
[services-portal]: https://console.cloud.timescale.com/dashboard/services
[install-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql/
[install-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql/
[popsql]: /use-timescale/:currentVersion:/popsql/
6 changes: 4 additions & 2 deletions _partials/_cloud-installation.md
@@ -2,14 +2,16 @@

### Create a Timescale account

If you have not already signed up for Timescale, join up now:
To set up Timescale:

1. Sign up for a [30 day free trial][sign-up].

You receive a confirmation email in your inbox.
1. In the confirmation email, click the link supplied and sign in to [Timescale Console][tsc-portal].

Answer the requirements questions, it helps us optimize for your use case. You can now create a Timescale service.
Answer the requirements questions; they help us optimize the Timescale service for your use case. You can now create a Timescale service.



</Procedure>

4 changes: 2 additions & 2 deletions _partials/_cloud-integrations-exporter-region.md
@@ -1,4 +1,4 @@
<Highlight type="important">
Your exporter must be in the same AWS region as your database service. If you
have databases in multiple regions, you can create multiple exporters.
Your exporter must be in the same AWS region as the Timescale Service it is attached to.
If you have Timescale Services running in multiple regions, create an exporter for each region.
</Highlight>
6 changes: 3 additions & 3 deletions _partials/_cloud-intro.md
@@ -1,9 +1,9 @@
Timescale is a cloud based PostgreSQL platform for resource-intensive workloads. We help you you build faster, scale further, and stay under budget. Timescale offers the following PostgreSQL database optimisations:
Timescale is a cloud-based PostgreSQL platform for resource-intensive workloads. We help you build faster, scale further, and stay under budget. Timescale offers the following PostgreSQL database optimizations:

- [**Time-series data**](https://www.timescale.com/blog/what-is-a-time-series-database/#what-is-a-time-series-database): a
TimescaleDB instance optimized for your time-series and analytics workloads. Get automated dynamic data partitioning, hybrid row-columnar storage, advanced compression techniques, incremental up-to-date materializations, and specialized analysis functions as well as cloud-only features like transparent tiering to low-cost object storage.
TimescaleDB instance optimized for your time-series and analytics workloads. Get automated dynamic data partitioning, hybrid row-columnar storage, advanced compression techniques, incremental up-to-date materializations, and specialized analysis functions as well as cloud-only features like transparent tiering and low-cost object storage.

- **All other workloads**: a [Dynamic PostgreSQL](https://www.timescale.com/dynamic-postgresql) instance where you select a compute range, only paying for the base and the amount of extra CPU as you scale.

All databases are extended with lightning fast vector search, and include all the cloud tooling you'd expect for production use,
with automatic backups, high availability, read replicas, data forking, connection pooling, usage-based storage and much more.
with automatic backups, high availability, read replicas, data forking, connection pooling, usage-based storage, and much more.
17 changes: 17 additions & 0 deletions _partials/_compression-intro.md
@@ -9,3 +9,20 @@ data. This means that instead of using lots of rows to store
the same data in a single row. Because a single row takes up less disk space
than many rows, it decreases the amount of disk space required, and can also
speed up your queries.

For example, if you had a table with data that looked a bit like this:

|Timestamp|Device ID|Device Type|CPU|Disk IO|
|-|-|-|-|-|
|12:00:01|A|SSD|70.11|13.4|
|12:00:01|B|HDD|69.70|20.5|
|12:00:02|A|SSD|70.12|13.2|
|12:00:02|B|HDD|69.69|23.4|
|12:00:03|A|SSD|70.14|13.0|
|12:00:03|B|HDD|69.70|25.2|

You can convert this to a single row in array form, like this:

|Timestamp|Device ID|Device Type|CPU|Disk IO|
|-|-|-|-|-|
|[12:00:01, 12:00:01, 12:00:02, 12:00:02, 12:00:03, 12:00:03]|[A, B, A, B, A, B]|[SSD, HDD, SSD, HDD, SSD, HDD]|[70.11, 69.70, 70.12, 69.69, 70.14, 69.70]|[13.4, 20.5, 13.2, 23.4, 13.0, 25.2]|
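
In TimescaleDB, you control how rows are grouped into these compressed batches per hypertable. As a minimal sketch (assuming a hypertable named `metrics` with `timestamp` and `device_id` columns; the names are illustrative, not from this document):

```sql
-- Enable compression, grouping compressed batches by device so each
-- array-form row holds values for a single device, ordered by time.
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby = 'timestamp DESC'
);

-- Automatically compress chunks once their data is older than seven days.
SELECT add_compression_policy('metrics', INTERVAL '7 days');
```

With `compress_segmentby = 'device_id'`, the rows for devices A and B above would land in separate compressed batches rather than the single interleaved row shown.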
2 changes: 1 addition & 1 deletion _partials/_migrate_dual_write_step1.md
@@ -9,4 +9,4 @@ ensure that enough disk is pre-provisioned on your Timescale instance.

<OpenSupportRequest />

[create-service]: /use-timescale/:currentVersion:/services/create-a-service/
[create-service]: /getting-started/:currentVersion:/services/
12 changes: 12 additions & 0 deletions _partials/_migrate_dual_write_step2.md
@@ -7,8 +7,20 @@ you must think about how to handle the failure to write to either the source or
target database, and what mechanism you want to or can build to recover from
such a failure.

Should your time-series data have foreign-key references into a plain table,
you must ensure that your application correctly maintains the foreign key
relations. If the referenced column is a `*SERIAL` type, the same row inserted
into the source and target _may not_ obtain the same autogenerated id. If this
happens, the data backfilled from the source to the target is internally
inconsistent. In the best case, this causes a foreign key violation; in the
worst case, the foreign key constraint is maintained but the data references
the wrong row. To avoid these issues, best practice is to follow
[live migration].
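
A hypothetical sketch of the hazard, using an illustrative `devices` table that is not part of this document:

```sql
-- Illustration only: the same logical row can receive different
-- autogenerated ids on the source and the target.
CREATE TABLE devices (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);

-- On the source, earlier traffic already consumed ids 1-3,
-- so this row is assigned id 4.
INSERT INTO devices (name) VALUES ('sensor-a');

-- On a freshly created target, the same statement assigns id 1.
-- Time-series rows backfilled from the source that reference
-- devices(id) = 4 then point at the wrong row, or at no row at all.
```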

You may also want to execute the same read queries on the source and target
database to evaluate the correctness and performance of the results which the
queries deliver. Bear in mind that, for a period after migration starts, the
target database does not yet contain all of the data, so you should expect the
results to differ for some time (potentially a number of days).

[live migration]: /migrate/:currentVersion:/live-migration/
4 changes: 2 additions & 2 deletions _partials/_migrate_live_migration_cleanup.md
@@ -6,9 +6,9 @@ docker run --rm -it --name live-migration-clean \
-e PGCOPYDB_TARGET_PGURI=$TARGET \
--pid=host \
-v ~/live-migration:/opt/timescale/ts_cdc \
timescale/live-migration:v0.0.7 clean --prune
timescale/live-migration:v0.0.14 clean --prune
```

The `--prune` flag is used to delete temporary files in the `~/live-migration` directory
that were needed for the migration process. It's important to note that executing the
`clean` command means you cannot resume the interrupted live migration.
`clean` command means you cannot resume the interrupted live migration.
22 changes: 16 additions & 6 deletions _partials/_migrate_live_migration_docker_subcommand.md
@@ -6,7 +6,7 @@ docker run --rm -it --name live-migration \
-e PGCOPYDB_TARGET_PGURI=$TARGET \
--pid=host \
-v ~/live-migration:/opt/timescale/ts_cdc \
timescale/live-migration:v0.0.7 --help
timescale/live-migration:v0.0.14 --help

Live migration moves your PostgreSQL/TimescaleDB to Timescale Cloud with minimal downtime.

@@ -53,7 +53,18 @@
-e PGCOPYDB_TARGET_PGURI=$TARGET \
--pid=host \
-v ~/live-migration:/opt/timescale/ts_cdc \
timescale/live-migration:v0.0.7 snapshot
timescale/live-migration:v0.0.14 snapshot
```

In addition to creating a snapshot, this process also validates prerequisites on the source and target to ensure the database instances are ready for replication.

For example, it checks if all tables on the source have either a PRIMARY KEY or REPLICA IDENTITY set. If not, it displays a warning message listing the tables without REPLICA IDENTITY and waits for user confirmation before proceeding with the snapshot creation.

```sh
2024-03-25T12:40:40.884 WARNING: The following tables in the Source DB have neither a primary key nor a REPLICA IDENTITY (FULL/INDEX)
2024-03-25T12:40:40.884 WARNING: UPDATE and DELETE statements on these tables will not be replicated to the Target DB
2024-03-25T12:40:40.884 WARNING: - public.metrics
Press 'c' and ENTER to continue
```
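
To find such tables ahead of time, you can run a catalog query like the following sketch on the source (this uses standard PostgreSQL catalogs and is not a command from this document), then set a replica identity on each table it reports:

```sql
-- List ordinary tables in the public schema that have neither a
-- primary key nor a non-default replica identity.
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'
  AND c.relreplident = 'd'          -- default: use the primary key
  AND NOT EXISTS (
      SELECT 1 FROM pg_index i
      WHERE i.indrelid = c.oid AND i.indisprimary
  );

-- For a table that cannot be given a primary key, log the full old
-- row so UPDATE and DELETE statements can be replicated.
ALTER TABLE public.metrics REPLICA IDENTITY FULL;
```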

### 3.b Perform live-migration
@@ -67,12 +78,11 @@
-e PGCOPYDB_TARGET_PGURI=$TARGET \
--pid=host \
-v ~/live-migration:/opt/timescale/ts_cdc \
timescale/live-migration:v0.0.7 migrate
timescale/live-migration:v0.0.14 migrate
```
<Highlight type="note">
If the migrate command stops for any reason during execution, you can resume
the migration from where it left off by adding a `--resume` flag. This is only
possible if the `snapshot` command is intact and if a volume mount, such
as ~/live-migration, is utilized. Note that the `snapshot` command is required
only if you want to resume live migration in event of a crash.
</Highlight>
as `~/live-migration`, is utilized.
</Highlight>
8 changes: 8 additions & 0 deletions _partials/_migrate_live_migration_step2.md
@@ -2,6 +2,14 @@ import SetupSourceTarget from "versionContent/_partials/_migrate_set_up_source_a

<SetupSourceTarget />

<Highlight type="important">
Do not use a Timescale connection pooler connection for live migration. Using a
connection pooler can cause a number of issues and provides no advantage. Very
small instances may not have enough connections configured by default; in that
case, modify the value of `max_connections` in your instance, as shown in
[Configure database parameters](/use-timescale/:currentVersion:/configuration/customize-configuration/#configure-database-parameters).
</Highlight>
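
For example, to check the available connection headroom before you start (illustrative queries, assuming a direct connection to the instance):

```sql
-- Maximum number of concurrent connections the instance allows.
SHOW max_connections;

-- Connections currently in use, to see how much headroom remains.
SELECT count(*) FROM pg_stat_activity;
```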

It's important to ensure that the `old_snapshot_threshold` value is set to the
default value of `-1` in your source database. This prevents PostgreSQL from
treating the data in a snapshot as outdated. If this value is set other than
@@ -2,12 +2,11 @@ import OpenSupportRequest from "versionContent/_partials/_migrate_open_support_r

We do not recommend using this migration method to migrate more than
100&nbsp;GB of data, primarily because of the amount of downtime that it
implies for your application, instead use the [dual-write and backfill]
implies for your application. Instead, use the [live migration]
low-downtime migration solution. Should you nonetheless wish to migrate more
than 400&nbsp;GB of data with this method, open a support request to ensure
that enough disk is pre-provisioned on your Timescale instance.

<OpenSupportRequest />

[dual-write and backfill]: /migrate/:currentVersion:/dual-write-and-backfill

[live migration]: /migrate/:currentVersion:/live-migration
6 changes: 4 additions & 2 deletions _partials/_multi-node-deprecation.md
@@ -1,7 +1,9 @@
<Highlight type="warning">
Multi-node support has been deprecated with TimescaleDB 2.13. This is the last version that includes multi-node support and it supports multi-node installations with PostgreSQL versions 13, 14, and 15.

Learn more about it [here][multi-node-deprecation].
[Multi-node support is deprecated][multi-node-deprecation].

TimescaleDB v2.13 is the last release that includes multi-node support for PostgreSQL
versions 13, 14, and 15.
</Highlight>

[multi-node-deprecation]: https://github.com/timescale/timescaledb/blob/main/docs/MultiNodeDeprecation.md
9 changes: 5 additions & 4 deletions _troubleshooting/cloud-singledb.md
@@ -1,5 +1,5 @@
---
title: Cannot add column to a compressed hypertable
title: Cannot create another database
section: troubleshooting
products: [cloud, mst, self_hosted]
topics: [services]
@@ -26,7 +26,8 @@ tags: [services]
* Copy this comment at the top of every troubleshooting page
-->

Each Timescale service can have a single database. The database must be
named `tsdb`. If you try to create an additional database you receive this error.
Each Timescale service hosts a single database named `tsdb`. You see this error when you try
to create an additional database in a service. If you need another database,
[create a new service][create-service].

If you need another database, you need to create a new service.
[create-service]: /getting-started/:currentVersion:/services/#create-a-timescale-service
4 changes: 2 additions & 2 deletions _troubleshooting/compression-dml-tuple-limit.md
@@ -31,7 +31,7 @@ The limit can be increased or turned off (set to 0) like so:

```sql
-- set limit to a million tuples
SET timescaledb.timescaledb.max_tuples_decompressed_per_dml_transaction TO 1000000;
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 1000000;
-- disable limit by setting to 0
SET timescaledb.timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
SET timescaledb.max_tuples_decompressed_per_dml_transaction TO 0;
```
16 changes: 8 additions & 8 deletions about/release-notes/changes-in-timescaledb-2.md
@@ -422,7 +422,7 @@ statistics related to all jobs.

#### Removed

* [`timescaledb_information.continuous_aggregate_stats`](https://legacy-docs.timescale.com/v1.7/api#timescaledb_information-continuous_aggregate_stats): Removed in favor of the `job_stats` view mentioned above.
* `timescaledb_information.continuous_aggregate_stats`: Removed in favor of the `job_stats` view mentioned above.

### Updating existing continuous aggregates

@@ -465,9 +465,9 @@ policies are now available in the main jobs view.

#### Removed

* [`add_drop_chunks_policy`](https://legacy-docs.timescale.com/v1.7/api#add_drop_chunks_policy): removed in favor of the
* `add_drop_chunks_policy`: removed in favor of the
explicit functions above.
* [`timescaledb_information.drop_chunks_policies`](https://legacy-docs.timescale.com/v1.7/api#timescaledb_information-drop_chunks_policies):
* `timescaledb_information.drop_chunks_policies`:
view has been removed in favor of the more general jobs view.

### Compression
@@ -487,11 +487,11 @@ compression policies are now available in the main jobs view.

#### Removed

* [`add_compress_chunk_policy`](https://legacy-docs.timescale.com/v1.7/api#add_compress_chunks_policy): Removed in favor of the
* `add_compress_chunk_policy`: Removed in favor of the
explicit functions above.
* [`timescaledb_information.compressed_hypertable_stats`](https://legacy-docs.timescale.com/v1.7/api#timescaledb_information-compressed_hypertable_stats):
* `timescaledb_information.compressed_hypertable_stats`:
Removed in favor of the new `hypertable_compression_stats(hypertable)` function linked above
* [`timescaledb_information.compressed_chunk_stats`](https://legacy-docs.timescale.com/v1.7/api#timescaledb_information-compressed_chunk_stats):
* `timescaledb_information.compressed_chunk_stats`:
Removed in favor of the new `chunk_compression_stats(hypertable)` function linked above.

## Managing policies and other jobs
@@ -517,10 +517,10 @@ features has been removed. All features are available either under the community
open-source Apache-2 License. [This blog post](https://blog.timescale.com/blog/building-open-source-business-in-cloud-era-v2/)
explains the changes. The following changes were made to license API:

* [`timescaledb_information.license`](https://legacy-docs.timescale.com/v1.7/api#timescaledb_information-license): This view
* `timescaledb_information.license`: This view
has been removed, as it primarily provided information on the enterprise license key's expiration date, which is no
longer applicable. The current license used by the extension can instead be viewed in the GUC below.
* `timescaledb.license`: This GUC value (which replaces the former [`timescaledb.license_key`](https://legacy-docs.timescale.com/v1.7/api#timescaledb_license-key) GUC)
* `timescaledb.license`: This GUC value (which replaces the former `timescaledb.license_key` GUC)
can take the value `timescale` or `apache`. It can be set only at startup (in the postgresql.conf configuration file
or on the server command line), and allows limiting access to certain features
by license. For example, setting the
10 changes: 6 additions & 4 deletions about/release-notes/index.md
@@ -74,10 +74,10 @@ New compression settings take effect on any new chunks that are compressed after
Following the deprecation announcement for Multi-node in TimescaleDB 2.13,
Multi-node is no longer supported starting with TimescaleDB 2.14.

TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here](docs/MultiNodeDeprecation.md).
TimescaleDB 2.13 is the last version that includes multi-node support. Learn more about it [here][multi-node-deprecation].

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB, read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).
[migration documentation][multi-node-to-timescale-service].

#### Deprecation notice: recompress_chunk procedure
TimescaleDB 2.14 is the last version that will include the recompress_chunk procedure. Its
@@ -174,10 +174,10 @@ available opportunity.
#### Deprecation notice: Multi-node support
TimescaleDB 2.13 is the last version that will include multi-node support. Multi-node
support in 2.13 is available for PostgreSQL 13, 14 and 15. Learn more about it
[here](https://github.com/timescale/timescaledb/blob/main/docs/MultiNodeDeprecation.md).
[here][multi-node-deprecation].

If you want to migrate from multi-node TimescaleDB to single-node TimescaleDB read the
[migration documentation](https://docs.timescale.com/migrate/latest/multi-node-to-timescale-service/).
[migration documentation][multi-node-to-timescale-service].

#### PostgreSQL 13 deprecation announcement
We will continue supporting PostgreSQL 13 until April 2024. Closer to that time, we will announce the specific version of TimescaleDB in which PostgreSQL 13 support will no longer be included.
@@ -692,3 +692,5 @@ For release notes for older TimescaleDB versions, see the
[pg-upgrade]: /self-hosted/:currentVersion:/upgrades/upgrade-pg/
[migrate-caggs]: /use-timescale/:currentVersion:/continuous-aggregates/migrate/
[join-caggs]: /use-timescale/:currentVersion:/continuous-aggregates/create-a-continuous-aggregate/#create-a-continuous-aggregate-with-a-join
[multi-node-to-timescale-service]: /migrate/:currentVersion:/playbooks/multi-node-to-timescale-service/
[multi-node-deprecation]: https://github.com/timescale/timescaledb/blob/main/docs/MultiNodeDeprecation.md