# Optimization Guide

The first point of investigation or periodic review should typically be the performance dashboard at `/dashboards/redshift_model::redshift_performance`.

*(Screenshot: the Redshift performance dashboard)*

By starting at the dashboard, you can focus your performance optimization efforts in areas where they will have the most impact:

- Individual queries that are taking the most time
- Suboptimal join patterns that, in aggregate, are taking the most time
- Capacity limitations that would not be revealed at the per-query level
**Note:** All data relating to query history is limited to the past 1 day directly in the model. If desired, this can be adjusted in the `redshift_queries` view definition.
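
The adjustment itself is just a matter of widening the recency predicate in that view's derived table SQL. As a minimal sketch, assuming the derived table reads from Redshift's query log (the actual SQL in the block may differ):

```sql
-- Hypothetical shape of the redshift_queries derived table's recency filter.
SELECT q.query, q.starttime, q.endtime
FROM stl_query AS q
WHERE q.starttime >= DATEADD(day, -1, GETDATE())  -- e.g. change -1 to -7 for a week of history
```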

## Identifying opportunities from individual queries

The top section of the dashboard gives an overview of all queries run yesterday, with a histogram by run time and a list of the top 10 longest-running queries. You can drill into a list of queries by clicking on any bar in the histogram, and from either that list or the top 10 list, you can inspect queries you think might be problematic.

*(Screenshot: query runtime histogram and top 10 longest-running queries)*

## Identifying opportunities from network activity patterns

The next section of the dashboard deals with the network activity caused by joins. Network activity has a large impact on execution time and often follows consistent patterns, which makes it ripe for optimization; even so, most Redshift implementations neglect to properly analyze and optimize it.

*(Screenshot: network activity by type and join pattern opportunities)*

The pie chart on the left gives the approximate share of total network activity attributable to each activity type. Although some suboptimal redistribution of data is inevitable for some share of queries in the system, when the red and yellow types account for more than 50% of the activity, that is a good indication that a different configuration would yield better system-wide average performance.
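
Before changing any table's distribution, it can help to confirm which category a specific join pattern falls into by running EXPLAIN on a representative query and reading the join step's redistribution label. The table and column names below are purely illustrative:

```sql
EXPLAIN
SELECT u.account_id, COUNT(*)
FROM events AS e
JOIN users AS u ON u.id = e.user_id
GROUP BY 1;
-- The join step in the plan output carries a network label such as:
--   DS_DIST_NONE   -- no redistribution (both sides already live on the right nodes)
--   DS_DIST_INNER  -- the inner table's rows are redistributed
--   DS_DIST_BOTH   -- both sides are redistributed (typically the costliest case)
--   DS_BCAST_INNER -- the inner table is broadcast to every node
```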

The list on the right then shows individual opportunities: all queries performing a particular join pattern are grouped into a row, and the rows are sorted by the aggregate running time of those queries, so that you can focus on adjusting the join patterns that will have a significant impact on your end users.

Once you have identified a candidate join pattern for optimization based on this table, click the query count to see a drill-down of all the matching queries, and then select ones that appear representative or that are particularly slow to run to investigate further.

**Note:** Nested loops are another problem sometimes caused by joins. Not only do they always result in DS_BCAST_INNER, but they can also cause excessive CPU load and disk-based operations.
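
Outside of this block's dashboards, one hedged way to spot queries that triggered nested loops is Redshift's own alert log (the exact event text can vary by cluster version):

```sql
-- Recent queries that Redshift flagged for nested loop joins.
SELECT query,
       TRIM(event)    AS event,
       TRIM(solution) AS suggested_solution,
       event_time
FROM stl_alert_event_log
WHERE event LIKE 'Nested Loop Join%'
ORDER BY event_time DESC
LIMIT 20;
```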

## Identifying capacity issues

In addition to slow-running queries, you might be experiencing slow response times simply because Redshift is queueing queries as a result of excessive demand or insufficient capacity. The line graph at the bottom of the dashboard will quickly reveal whether, and during what times of day, queries were queued. The blue line represents all queries received each hour, and the red line represents queries queued each hour. You can also click on the “minutes queued” disabled series to get an estimate of how much, in aggregate, the queued queries were delayed by waiting in the queue.

*(Screenshot: queries received vs. queries queued by hour)*
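
If your connection can read Redshift's system tables, a sketch along these lines reproduces the same hourly picture directly from the WLM log (queue and execution times are reported in microseconds):

```sql
SELECT DATE_TRUNC('hour', queue_start_time)                   AS hour,
       COUNT(*)                                               AS queries,
       SUM(CASE WHEN total_queue_time > 0 THEN 1 ELSE 0 END)  AS queued_queries,
       ROUND(SUM(total_queue_time) / 1000000.0 / 60, 1)       AS minutes_queued
FROM stl_wlm_query
WHERE queue_start_time >= DATEADD(day, -1, GETDATE())
GROUP BY 1
ORDER BY 1;
```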

If you do find an issue here, you can of course increase capacity, or you can manage demand by adjusting or cleaning out your PDT build schedules and scheduled Looks/dashboards.

PDTs: /admin/pdts

Scheduled content by hour: /explore/i__looker/history?fields=history.created_hour,history.query_run_count&f[history.source]=Scheduled+Task&sorts=history.query_run_count+desc&limit=50&dynamic_fields=%5B%5D

In addition to compute capacity issues, which directly affect query response time, you can also run into disk capacity issues. If your Redshift connection is a superuser connection, you can use the admin elements of the block to check this.
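
With a superuser connection you can also check overall disk headroom yourself; a minimal sketch against stv_partitions (capacity and used are reported in 1 MB blocks):

```sql
SELECT SUM(used)      AS mb_used,
       SUM(capacity)  AS mb_capacity,
       ROUND(100.0 * SUM(used) / SUM(capacity), 1) AS pct_used
FROM stv_partitions
WHERE part_begin = 0;   -- count each mirrored partition only once
```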

## How to interpret diagnostic query information

When you click “Inspect” from any query ID, you’ll be taken to the Query Inspection Dashboard:

*(Screenshot: the Query Inspection Dashboard)*

The dashboard components can be interpreted as follows:

- **Seconds to run**: The number of seconds the query took to run, and a good starting point for deciding whether to spend time optimizing this query.
- **Mb Scanned**: How many megabytes of data the query scanned from disk. This is commonly a significant contributing factor to query slowness. It is affected by things such as the underlying size of your dataset, whether a sort key could be leveraged to limit which blocks had to be read from disk, the average size of values in the referenced columns, the compression applied to your data on disk, and whether the table is set to dist-style all.
- **Mb distributed, Mb broadcast**: Network activity is another very common cause of slow queries, and distributing and broadcasting are the two main categories of network activity caused by a query. Both occur so that separate pieces of data can be joined together by sending them to the same node. With distribution, for each pair of datapoints to be joined (e.g., a user and his/her order), Redshift chooses a location on the network for that pair, and each node sends its datapoints to that node for joining. Broadcasting, on the other hand, occurs either when Redshift cannot determine ahead of time how datapoints will be matched up (e.g., the join predicate requires a nested loop instead of a hash join) or when Redshift estimates that one side of the relationship is small enough that broadcasting it will be less time-costly than distributing both sides.
- **Rows sorted**: Less frequently, queries will trigger large sorts. Sorts on large numbers of records can be expensive, so a number greater than 100k here should be looked into. Sometimes these sorts are required, but many times an extraneous order by is left in a derived table view, resulting in a large, unnecessary, and frequently executed sort.
- **Disk-based**: Indicates whether any step on any node exceeded the available memory and had to complete the operation by storing some data on disk. If so, check whether very high skew in the underlying data, or a particular step in the query execution section, is causing this; if not, consider whether additional capacity is needed based on trends across all queries in the system.
- **Query text**: Provides the first 4,000 characters of the query.
- **Table details**: Shows the tables that participated in the query, some key information about those tables, and metrics about the scans of those tables during the execution of this query. Note that these table dimensions are current and may differ from what they were when the query ran.
- **Query Plan**: Shows the steps that the query planner produced. This is where the diagnostic heavy lifting gets done. When exploring from here, you can join the parent/child steps together, for example to see how many rows each side of a join contributed.
- **Query Execution**: At times, the query planner plans things that do not work out as expected. While the execution report is less structured and harder to tie back to the query, it is a good way to check the assumptions made by the planner. For example, you can see how many rows or bytes were scanned, broadcast, or distributed, and get a general sense of how the workload is balanced or skewed across nodes in the cluster. A SQL sketch for pulling these per-step numbers directly from Redshift follows this list.
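
As referenced in the last item above, the same per-step numbers can be pulled directly from Redshift for a single query; a sketch, with 123456 standing in for a real query ID:

```sql
SELECT seg,
       step,
       LTRIM(label) AS label,   -- e.g. scan, dist, bcast, sort, hjoin, nloop
       rows,
       bytes,
       is_diskbased             -- 't' marks steps that spilled to disk
FROM svl_query_summary
WHERE query = 123456
ORDER BY seg, step;
```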

## Common problems and corrective actions

Update: I presented on this at our JOIN 2017 conference, and you can find the presentation here

| Situation | Possible Corrective Actions | Considerations |
|---|---|---|
| A join pattern causes a nested loop that is unintentional or on large tables | Refactor the join into an equijoin (or an equijoin and a small fixed-size nested loop) | |
| | Build a relationship table as a PDT so the nested loop only needs to be done once per ETL | |
| Overall join patterns result in frequent broadcasts of inner tables, or distribution of large outer tables, or distribution of both tables | Adjust the dist style and distkey of the broadcast table, or of the receiving table, based on overall join patterns in your system | A SQL sketch for rebuilding a table with new keys follows this table |
| | Add denormalized column(s) to your ETL to enable better dist keys. E.g., in events -> users -> accounts, you could add account_id to the events table | Don't forget to add the account_id as an additional condition in the events -> users join |
| | Build a PDT to pre-join or redistribute the table | Not usually needed, though this may be worth the higher disk usage, and can be more efficient than distribution style "all" |
| Queries result in large amounts of scanned data | Set your first sort key to the most frequently filtered-on or joined-on column | |
| | Check whether any distribution style "all" tables should be distributed instead (and possibly duplicated and re-distributed) | With distribution style all, each node must scan the entire table, vs. just scanning its slice |
| | Adjust table compression | |
| | Check for unsorted data in tables, and schedule vacuums or leverage sorted loading for append-only datasets | |
| | For large tables, set an always_filter declaration on your sort key to guide users | |
| Queries have large steps with high skew, and/or disk-based operations | Check table skew and the skew of scan operations, and potentially adjust the relevant distribution keys to better distribute the query processing | For small queries, higher skew can be OK |
| The query planner incorrectly underestimates the resulting rows from a filter, leading to a broadcast of a large number of rows | Check how far off the statistics are for the relevant table, and schedule ANALYZE runs | |
| | Adjust your filter condition | |
| Users frequently run full historical queries when recent data would do just as well | Use always_filter so users are required to specify a filter value | The filtered field is ideally the sort key of a significant table, e.g. the created date field in an event table |
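
For the dist key and sort key adjustments above, one hedged example of what the change can look like in raw SQL (table and column names are placeholders; for LookML-managed PDTs the same intent is expressed with the derived table's distribution and sortkeys parameters):

```sql
-- Rebuild a table with an explicit dist key and sort key, then validate the
-- join patterns dashboard again after swapping it in.
CREATE TABLE events_keyed
  DISTKEY (account_id)   -- the denormalized join column suggested above
  SORTKEY (created_at)   -- the most frequently filtered column
AS
SELECT *
FROM events;
```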

In case the above changes require changing content in your LookML model, you can use a regex search within your project to find the relevant model code. For example, `\${[a-zA-Z0-9_]+\.field_id}\s*=|=\s*\${[a-zA-Z0-9_]+\.field_id}` would let you search for places where a given field is involved in an equality/join, if you are using the same field name as the underlying database column name.