Merge pull request #6894 from andystevensname/rc-v1.312.0
[RC] v1.312.0
andystevensname committed Apr 9, 2024
2 parents 8290421 + 7eac0be commit 600c061
Showing 18 changed files with 187 additions and 8 deletions.
@@ -0,0 +1,29 @@
---
title: "Resize A Database Cluster"
description: "Learn how to resize your database cluster."
authors: ["Linode"]
published: 2024-04-09
---

{{< content "dbass-eos" >}}

You can upscale database clusters to adapt them to your needs. Clusters can’t be downscaled.

{{< note type="alert" >}}
This operation causes downtime for the nodes in the resized cluster.

{{< /note >}}

1. Log in to the [Cloud Manager](https://cloud.linode.com/) and select **Databases** from the left navigation menu.

1. Select a database cluster from the list.

1. Navigate to the **Resize** tab.

1. In the *Choose a plan* section, select a new plan for your database cluster.

![Screenshot of Choose a plan section](upscale-plan.png)

1. In the *Summary* section, verify the changes. Click **Resize Database Cluster**.

1. Follow the on-screen instructions and click **Resize Cluster** to confirm. The cluster will be upscaled within two hours.
5 changes: 3 additions & 2 deletions docs/products/networking/vlans/_index.md
@@ -11,7 +11,7 @@ tab_group_main:
cascade:
date: 2020-10-22
product_description: "Fully isolated virtual local area networks that enable private communication between cloud-based resources"
modified: 2024-01-03
modified: 2024-04-03
aliases: ['/guides/platform/vlan/']
---

@@ -37,7 +37,8 @@ Since VLANs operate on layer 2 of the OSI networking stack, you can use it as pa

## Availability

VLANs are available in all data centers except Fremont (California, USA).
VLANs are available in all data centers.


## Pricing

@@ -4,7 +4,7 @@ description: 'This article gives you information to help you determine which dat
keywords: ["data center", "datacenter", "dc", "speed"]
tags: ["linode platform"]
published: 2018-10-31
modified: 2024-02-14
modified: 2024-04-03
modified_by:
name: Linode
aliases: ['/platform/how-to-choose-a-data-center/','/guides/how-to-choose-a-data-center/']
@@ -34,7 +34,7 @@ An important consideration when choosing a data center is the availability of sp
| Chicago ||| ||||||||||| ✔† ||
| Dallas ||| | ||||||| ||| ||
| Frankfurt |||| ||||||| |||||
| Fremont ||| | |||||| | ||| ||
| Fremont ||| | |||||| | ||| ||
| Jakarta ||| ||| |||||||| ✔† ||
| Los Angeles ||| ||| |||||||| ✔† ||
| London ||| | ||||||| ||| ||
11 changes: 9 additions & 2 deletions docs/products/storage/object-storage/_index.md
@@ -7,9 +7,9 @@ tab_group_main:
title: Overview
weight: 10
cascade:
date: 2020-06-02
date: 2024-04-07
product_description: "An S3-compatible object storage solution designed to store, manage, and access unstructured data in the cloud."
modified: 2024-02-14
modified: 2024-04-07
aliases: ['/platform/object-storage/pricing-and-limitations/', '/guides/pricing-and-limitations','/products/storage/object-storage/guides/enable/']
---

@@ -118,6 +118,13 @@ See the [Network Transfer Usage and Costs](/docs/products/platform/get-started/g
If creating a bucket in our **Jakarta** or **São Paulo** data centers, note that no additional transfer is added to their region-specific transfer pools.
{{< /note >}}

### Optimizing Applications to Avoid Rate Limiting

The rate limit for the number of Requests Per Second (RPS) applies per bucket and is evaluated against each bucket once per second. If a request takes longer than one second, it remains open and counts against the rate limit in the next one-second window.

For example, assume there are 750 requests for a single bucket, each with a duration of two seconds. All of the requests that do not complete within the first second count against the rate limit in the next second. With a rate limit of 750 requests per second for the bucket, no additional requests can be processed within the two-second window until the first 750 requests complete. Any requests that are rate limited receive a 503 response.

To help avoid rate limiting, you can structure your data across multiple buckets, each of which has its own rate limit.
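A minimal sketch of that sharding idea, assuming object keys are hashed evenly across a set of buckets (bucket names and key format here are hypothetical):

```python
import hashlib

def bucket_for_key(key: str, buckets: list[str]) -> str:
    """Deterministically map an object key to one of several buckets,
    so each bucket sees roughly 1/len(buckets) of the request rate."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(buckets)
    return buckets[index]

# With 3 buckets, the effective limit rises from 750 RPS to ~2250 RPS,
# provided the keys hash evenly.
buckets = ["logs-shard-0", "logs-shard-1", "logs-shard-2"]
counts = {b: 0 for b in buckets}
for i in range(30000):
    counts[bucket_for_key(f"object-{i}", buckets)] += 1
```

Because the mapping is deterministic, any client can compute the bucket for a key without shared state.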

### Additional Limits and Specifications

- **Upload file size limit:** 5 GB. The maximum upload size of a single object is *5 GB*, though this can easily be overcome by using multipart uploads. Both [s3cmd](/docs/products/storage/object-storage/guides/s3cmd/) and [Cyberduck](/docs/products/storage/object-storage/guides/cyberduck/) do this for you automatically if a file exceeds this limit during upload.
2 changes: 1 addition & 1 deletion docs/products/storage/object-storage/get-started/index.md
@@ -19,7 +19,7 @@ Billing for Object Storage starts when it is enabled on your account, **regardle

## Generate an Access Key

1. Log into the [Linode Cloud Manager](https://cloud.linode.com).
1. Test - Log into the [Linode Cloud Manager](https://cloud.linode.com).

{{< note >}}
Object Storage is not available in the Linode Classic Manager.
141 changes: 141 additions & 0 deletions docs/products/storage/object-storage/guides/grafana-loki/index.md
@@ -0,0 +1,141 @@
---
title: "Using Grafana Loki with Object Storage"
description: "Learn how to use the Loki aggregation and visualization system to store and analyze logs from Linode's Object Storage."
authors: ["Linode"]
---

Grafana Loki is a log aggregation and visualization system for cloud-native environments. It offers a cost-effective, scalable solution for processing the large volumes of log data generated by modern applications and microservices. With Grafana Loki, users can query and visualize logs from cloud-native workloads. Loki uses a label-based indexing system, which makes it an ideal choice for observability, monitoring, alerting, and data analysis.

## Before You Start

[Grafana Loki](https://grafana.com/docs/loki/latest/configure/) can integrate with Linode Object Storage as a backend storage solution using the S3 protocol and S3 storage configuration.

Before you apply the Best Practices to your configuration, review these basic concepts to understand the Loki caching and storage workflow.

### Memcached Cache Store

- The best practices examples use Memcached, a widely used caching system. Memcached serves as Loki's cache store, providing a fast, distributed caching layer.
- The Memcached configuration stores chunks of log data and associated metadata in its key-value store.

### Chunk Indexing and Caching

- Logs ingested into Loki are typically grouped into chunks. These chunks represent a time-bound segment of log data. The structure of the chunks allows efficient querying based on time ranges and labels.
- The Loki ingester indexes and caches chunks in Memcached for rapid retrieval.
- When a Loki query occurs, the query engine first checks Memcached for the required chunks.

### Caching During Writing

- During ingestion, chunks may be cached in Memcached immediately after they are written to the backend storage, such as Linode Object Storage.
- This proactive caching ensures that recently ingested log data is available for querying and avoids the latency of fetching it from the backend storage.
- Loki manages the indexes and chunks separately even though it may use the same backend storage for both.

### Caching During Reading

- When a query occurs, the query engine checks Memcached for the required chunks.
- If the chunks are in the cache, they are retrieved directly from Memcached, which results in low-latency query responses.
- Chunks that are not present in the cache, or that were evicted due to caching policies, are fetched from the backend storage.

### Eviction and Management

- Memcached manages its own eviction policies. These policies use factors such as memory usage, access patterns, and expiration times.
- Chunks that are the least frequently accessed or have exceeded their time-to-live (TTL) may be evicted from the cache. This makes room for new data.
- Loki's configuration may include parameters for tuning the eviction policies and cache size to optimize performance and resource utilization.
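The eviction behavior described above, least-recently-used eviction under capacity pressure plus time-to-live expiry, can be illustrated with a toy cache. This is a simplified model for intuition only, not Memcached's actual implementation:

```python
import time
from collections import OrderedDict

class TTLLRUCache:
    """Toy cache with the two eviction triggers described above: LRU
    entries are dropped when the cache is full, and expired entries are
    dropped once their TTL elapses."""

    def __init__(self, capacity: int, ttl_seconds: float):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self._data: OrderedDict = OrderedDict()

    def put(self, key: str, chunk: bytes) -> None:
        self._data[key] = (time.monotonic(), chunk)
        self._data.move_to_end(key)
        while len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict least recently used

    def get(self, key: str):
        item = self._data.get(key)
        if item is None:
            return None                      # miss: fetch from backend storage
        stored_at, chunk = item
        if time.monotonic() - stored_at > self.ttl:
            del self._data[key]              # expired: evict
            return None
        self._data.move_to_end(key)          # refresh recency on access
        return chunk
```

Accessing an entry refreshes its recency, so frequently queried chunks stay cached while cold chunks age out.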


## Best Practices

Review these best practices for leveraging Linode Object Storage with Grafana Loki. These tips focus on Loki’s caching and storage configuration and provide specific recommendations for the integration.

### Enable caching on Loki

Caching helps to balance Loki's query performance, resource utilization, and data durability. By intelligently managing caching, eviction, and storage operations, Loki optimizes the trade-offs between responsiveness and scalability in log aggregation and visualization. Any caching configuration, or lack of one, directly affects the number of requests, such as `GET`s, that reach Object Storage. Today, Object Storage supports 750 mixed requests per second (RPS) per bucket. Caching helps ensure that requests and throughput are not rate-limited, and it can also promote optimal latency for typical enterprise workloads.

### Choose a multi-tenancy workload type with an independent storage config

You should understand your Loki workload type and evaluate whether it's single or multi-tenant. Multi-tenancy is Loki's default and is common for enterprise solutions. See the [Grafana Loki documentation](https://grafana.com/docs/loki/latest/operations/multi-tenancy/) for information about multi-tenancy configurations. Each Loki tenant has a separate configuration that includes caching, such as Memcached, and storage, such as Linode Object Storage.

In addition to providing data isolation, the multi-tenancy model allows for horizontal scalability and storage load sharding per tenant. Consider this example to understand the impact of the multi-tenancy workload type.

A configuration with one tenant and one bucket only supports 750 mixed RPS for your entire workload. This means that if you have an aggregated workload of 10K queries per second across all your Grafana graphs with a cache configuration that yields a 90% hit ratio:
- ~1K of the requests will land on the backend Object Storage bucket.
- With one tenant and one bucket, your workload will already exceed the rate limit for the bucket.

Alternatively, a configuration with two tenants and two buckets, one each for the org/division/workload type, gets twice the RPS and the workload is unlikely to be rate limited.
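The arithmetic behind this example can be sketched in a few lines. The 750 RPS per-bucket limit comes from this guide; the workload numbers are the hypothetical ones above:

```python
PER_BUCKET_LIMIT_RPS = 750  # mixed requests per second per bucket

def backend_rps(total_qps: float, cache_hit_ratio: float) -> float:
    """Queries per second that miss the cache and reach Object Storage."""
    return total_qps * (1.0 - cache_hit_ratio)

misses = backend_rps(10_000, 0.90)                   # ~1,000 RPS reach the backend
one_bucket_ok = misses <= 1 * PER_BUCKET_LIMIT_RPS   # exceeded: rate limited
two_buckets_ok = misses <= 2 * PER_BUCKET_LIMIT_RPS  # fits, if sharded evenly
```

The same model works in reverse: given a target query rate, it tells you the minimum number of tenant buckets needed for a given cache hit ratio.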

### Configure the Loki cache

Loki supports several tunables and configurable caching parameters. Review these recommended options to learn more.

#### Use an optimized cache store like Memcached

In-memory caching is enabled automatically in Loki. It is, however, recommended that you use an optimized cache store like Memcached. To configure Memcached, refer to the [Grafana documentation](https://grafana.com/docs/loki/latest/operations/caching/).

#### Configure the chunk_store_config block

The `chunk_store_config` block lets you configure how chunks are cached and how long to wait before saving them to the backing store.
![Screenshot of the chunk options](loki-chunk-store.png)

Some of the key config parameters include `max_chunk_age` and `chunk_idle_period`. These parameters determine how long the chunks are cached before being flushed out.
![Screenshot of the chunk options](loki-max-chunk.png)
![Screenshot of the chunk options](loki-chunk-memory.png)

Use the `default_validity` parameter for results caching and the `chunk_target_size` parameter to configure the compressed chunk size.
![Screenshot of the chunk options](loki-default-validity.png)
![Screenshot of the chunk options](loki-chunk-validity2.png)
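In YAML form, these parameters sit roughly as follows. The values shown are illustrative assumptions only, and exact key placement varies by Loki version, so verify against the Loki configuration reference before use:

```yaml
# Illustrative sketch only; tune values for your workload.
ingester:
  chunk_idle_period: 30m      # flush chunks that receive no new logs for 30m
  max_chunk_age: 1h           # flush chunks older than 1h regardless of activity
  chunk_target_size: 1572864  # ~1.5 MB compressed chunk target

chunk_store_config:
  chunk_cache_config:
    default_validity: 1h      # how long cached entries stay valid
    memcached_client:
      host: memcached.example.internal   # placeholder hostname
      service: memcache
```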

To determine the right values for these parameters for your use case, consider the following:
- **Load**: The amount of log data, in bytes, produced per day by your tenant workload (for example, GB/day).
- **Log access patterns**: Determine whether log data access for your use case skews toward recent data only (for example, less than 12 hours old) or older data (for example, more than 4-5 days old).
- **Cache capacity considerations**: Determine whether your Loki deployment has enough resources, such as RAM, CPU, and local disk space, allocated for caching.
- **Cost considerations**: Estimate the costs of managing the cache locally for Loki, given the memory and disk space capacity requirements.

To learn more, read the [Grafana Cloud blog post](https://grafana.com/blog/2023/08/23/how-we-scaled-grafana-cloud-logs-memcached-cluster-to-50tb-and-improved-reliability/) that discusses how appropriate cache sizing impacts performance and reliability.

### Configure the Loki storage

The `s3_storage_config` block configures the connection to the Linode S3 Object Storage backend.
![Screenshot of the storage config options](loki-storage-config.png)

The `bucketnames` storage config parameter allows the Loki tenant workloads to specify more than one bucket. This enables sharding of log data chunks across multiple buckets. It’s highly recommended that you configure more than one bucket, and possibly many depending on the load. This helps with scalability and load balancing since rate limits are enforced at the bucket level.

The following storage backoff settings are also important.
![Screenshot of the backoff settings](loki-backoff settings-1.png)

These parameters determine how Loki manages storage requests when Linode Object Storage enforces rate limits. Rate limits may be enforced, for example, when request rates exceed the allowed limits. If the backoff settings are not configured properly, retries can have a cascading effect that contributes further to the request rate, resulting in perpetual or longer-than-ideal enforcement of the limits.

Given the Linode Object Storage rate-limiting implementation, the following values are highly recommended:
- `min_period` - 2 seconds
- `max_period` - 5 seconds
- `max_retries` - 5
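Put together, a sketch of the corresponding `storage_config` block might look like the following. Credentials, endpoint, and bucket names are placeholders, and key names should be verified against the Loki configuration reference for your version:

```yaml
storage_config:
  aws:                      # Loki's s3_storage_config block
    s3: s3://ACCESS_KEY:SECRET_KEY@us-east-1.linodeobjects.com   # placeholder
    bucketnames: loki-chunks-1,loki-chunks-2   # shard chunks across buckets
    s3forcepathstyle: true
    backoff_config:
      min_period: 2s        # recommended values from this guide
      max_period: 5s
      max_retries: 5
```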

#### Loki Chunk Size Configuration

The following chunk size and related config parameters are supported.
![Screenshot of the chunk size options](loki-chunk-size-config.png)

You should configure chunk sizes between 1.5 MB and 2 MB. The `chunk_target_size` directly translates to the object sizes in Linode Object Storage and determines the:
- Number of overall `PUT` requests.
- Number of objects stored in the buckets.
- Effective bandwidth required for the Object Storage operations.

Review the following example to learn why it’s important to choose the `chunk_target_size` value carefully.

If your workload generates 5 TB of logging data per day and the chunk size is 1.5 MB, then, assuming an even distribution, it generates on average:
- ~38 `PUT` requests per second.
- 57 MBps (456 Mbps) of throughput out to the storage.
- ~ 3.3 million objects per day.

If, as an alternative, the chunk size is 2 MB, then an even distribution generates on average:
- ~29 `PUT` requests per second for the same aggregate throughput to the storage, but with ~2.5 million objects per day.
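The arithmetic behind these figures can be reproduced with a back-of-the-envelope model. It assumes a perfectly even distribution and takes 1 TB as 10^12 bytes and 1 MB as 10^6 bytes, matching the numbers above:

```python
SECONDS_PER_DAY = 86_400

def storage_profile(bytes_per_day: float, chunk_size_bytes: float):
    """Average object count, PUT rate, and write throughput implied by a
    steady log volume flushed in fixed-size chunks."""
    objects_per_day = bytes_per_day / chunk_size_bytes
    puts_per_second = objects_per_day / SECONDS_PER_DAY
    mb_per_second = bytes_per_day / SECONDS_PER_DAY / 1e6
    return objects_per_day, puts_per_second, mb_per_second

# 5 TB/day with 1.5 MB chunks: ~3.3M objects/day, ~38 PUTs/s, ~57 MBps
objects, puts, mbps = storage_profile(5e12, 1.5e6)
# The same volume with 2 MB chunks: ~2.5M objects/day, ~29 PUTs/s
objects_2mb, puts_2mb, _ = storage_profile(5e12, 2e6)
```

Plugging in your own daily volume and candidate chunk sizes gives a quick estimate of the `PUT` load and object counts each choice implies.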

The number of `GET` requests per second, and their resulting throughput, depend heavily on the window size and time period for which logs are pulled, and on whether those logs are in cache. For example, ~500 `GET` requests per second missing the cache with a 2 MB chunk size would result in approximately 1 GBps (8 Gbps) of egress throughput from Object Storage.

### Maintain your Loki deployment

Here are some additional best practices to help ensure the operational health of your Loki deployment:
- Monitor cache utilization and query performance using Grafana dashboards and Prometheus metrics.
- Experiment with different cache configurations to find the optimal balance between memory usage and query responsiveness.
- Regularly review and adjust cache parameters based on changing workload characteristics and system resource availability.
- Periodically review and reconfigure Linode Object Storage bucket lifecycle policies to delete objects, such as log data, that are no longer required for your use case.
@@ -2,6 +2,7 @@
title: "Deploy a Galera Cluster through the Linode Marketplace"
description: "This guide shows how to deploy a MySQL/MariaDB Galera Cluster through the Linode Marketplace."
published: 2023-03-20
modified: 2024-04-04
modified_by:
name: Linode
keywords: ['database','mysql','rdbms','relational database','mariadb']
@@ -16,7 +17,7 @@ authors: ["Linode"]
Galera provides a performant MariaDB database solution with synchronous replication to achieve high availability. Galera is deployed with MariaDB, which is an open-source database management system that uses a relational database and SQL (Structured Query Language) to manage its data. MariaDB was originally based off of MySQL and maintains backwards compatibility.

{{< note type="warning" title="Marketplace App Cluster Notice" >}}
This Marketplace App deploys 3 Compute Instances to create a highly available and redundant MeriaDB Galera cluster, each with the plan type and size that you select. Please be aware that each of these Compute Instances will appear on your invoice as separate items. To instead deploy MariaDB on a single Compute Instance, see [Deploy MySQL/MariaDB through the Linode Marketplace](/docs/products/tools/marketplace/guides/mysql/).
This Marketplace App deploys 3 Compute Instances to create a highly available and redundant MariaDB Galera cluster, each with the plan type and size that you select. Please be aware that each of these Compute Instances will appear on your invoice as separate items. To instead deploy MariaDB on a single Compute Instance, see [Deploy MySQL/MariaDB through the Linode Marketplace](/docs/products/tools/marketplace/guides/mysql/).
{{< /note >}}

## Deploying a Marketplace App
