QueryCache: Shared query cache for all requests going through runRequest #86957

Closed · wants to merge 1 commit
82 changes: 81 additions & 1 deletion public/app/features/query/state/runRequest.ts
@@ -1,5 +1,6 @@
 // Libraries
-import { isString, map as isArray } from 'lodash';
+import { isString, map as isArray, isPlainObject } from 'lodash';
+import { LRUCache } from 'lru-cache';
 import { from, merge, Observable, of, timer } from 'rxjs';
 import { catchError, map, mapTo, share, takeUntil, tap } from 'rxjs/operators';

@@ -126,6 +127,15 @@ export function runRequest(
   request: DataQueryRequest,
   queryFunction?: typeof datasource.query
 ): Observable<PanelData> {
+  const cache = getCache();
+  const cacheKey = getCacheKey(request);
+  const fromCache = cache?.get(cacheKey);
+
+  if (fromCache) {
+    console.log('runRequest: cache hit');
+    return of(fromCache);
+  }
+
   let state: RunningQueryState = {
     panelData: {
       state: LoadingState.Loading,
@@ -159,6 +169,7 @@
       request.endTime = Date.now();
 
       state = processResponsePacket(packet, state);
+      cache.set(cacheKey, state.panelData);
Contributor:

Cache is set after getting the result, so if there are multiple requests initiated before the cache is set, they will all be executed (e.g. on initial render). Another way to approach it could be shortly debouncing runRequest per hash (see the sketch below). Either way, as you said, it may come with some perf cost, so maybe it'd make sense if it was optional and enabled on a case-by-case basis.

To cache interpolated queries, maybe the cache could live closer to the .query() method. There's already queryCachingTTL for backend caching, but it's only used in Enterprise.

My gut feeling is that it could be a nice thing to have in general if it was opt-in rather than the default 🤔
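A minimal sketch of that per-hash debounce idea, assuming the types from @grafana/data and the runRequest function from this file; runRequestShared and IN_FLIGHT_TTL_MS are hypothetical names, and the key is built inline from the same ingredients as the diff's getCacheKey:

import { DataQueryRequest, DataSourceApi, PanelData } from '@grafana/data';
import { Observable, timer } from 'rxjs';
import { shareReplay } from 'rxjs/operators';

const IN_FLIGHT_TTL_MS = 250; // hypothetical short debounce window
const inFlight = new Map<string, Observable<PanelData>>();

export function runRequestShared(
  datasource: DataSourceApi,
  request: DataQueryRequest
): Observable<PanelData> {
  // Same ingredients as the diff's getCacheKey: targets + time range.
  const key = JSON.stringify([request.targets, request.range.from.valueOf(), request.range.to.valueOf()]);
  let shared = inFlight.get(key);

  if (!shared) {
    // shareReplay(1) lets every caller in the window reuse one real query
    // and hands late subscribers the latest emission.
    shared = runRequest(datasource, request).pipe(shareReplay(1));
    inFlight.set(key, shared);
    // Evict after the window so a later burst triggers a fresh query.
    timer(IN_FLIGHT_TTL_MS).subscribe(() => inFlight.delete(key));
  }

  return shared;
}

Bursts at initial render (several callers firing equivalent requests at once) would then collapse into one datasource call, without keeping a long-lived result cache around.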

Member:

I'd advise some caution when thinking of moving it closer to the query method unless there is a way to opt out of caching – some teams (including ours, the Alerting team) have already implemented RTK Query in parts of the code base but still call the query() function in queryFn. A double layer of cache would be ... well you know what they say about cache invalidation :D
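For context, a hedged sketch of the pattern described (endpoint and variable names are made up; createApi, fakeBaseQuery, and queryFn are standard RTK Query APIs): RTK Query already caches each result per request argument, so a second cache inside runRequest would stack on top of it:

import { createApi, fakeBaseQuery } from '@reduxjs/toolkit/query/react';
import { lastValueFrom } from 'rxjs';
import { DataQueryRequest, PanelData } from '@grafana/data';

// `datasource` and runRequest are assumed to be in scope, as in the diff above.
const alertingApi = createApi({
  reducerPath: 'alertingApi', // hypothetical
  baseQuery: fakeBaseQuery(),
  endpoints: (build) => ({
    instantQuery: build.query<PanelData, DataQueryRequest>({
      // RTK Query memoizes this result per request argument; if runRequest
      // also cached internally, the same data would live in two caches with
      // independent invalidation rules.
      queryFn: async (request) => ({
        data: await lastValueFrom(runRequest(datasource, request)),
      }),
    }),
  }),
});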

Member Author:

@ifrost yea, we could implement a query cache concept similar to react-query to handle that (so multiple queries with the same key do not cause multiple real queries).

I originally tried to use the react-query cache, but it does not look possible.

The point with a cache here is to have a shared cache across different apps / parts of Grafana.
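For illustration (hypothetical request and handler names): since the key is derived only from targets and time range, two independent parts of Grafana issuing equivalent requests would share one cache entry:

// A dashboard panel executes the query and fills the shared cache...
runRequest(datasource, panelRequest).subscribe(onPanelData);
// ...and an app plugin with equal targets + range is served from it.
runRequest(datasource, appPluginRequest).subscribe(onPanelData);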

Contributor:

I think if we were to go with that, we should only cache successful data, i.e. no error responses.
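A minimal sketch of that guard, as a replacement for the unconditional cache.set(...) in the diff above (LoadingState is already imported by runRequest.ts; state and cacheKey are the surrounding locals):

// Only cache complete, error-free results; skip Loading/Error states.
if (state.panelData.state === LoadingState.Done && !state.panelData.error) {
  cache.set(cacheKey, state.panelData);
}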


       return state.panelData;
     }),
@@ -225,3 +236,72 @@ export function callQueryMethod(
   const returnVal = queryFunction ? queryFunction(request) : datasource.query(request);
   return from(returnVal);
 }
+
+let cache: LRUCache<string, PanelData> | undefined;
+
+function getCache() {
+  if (cache) {
+    return cache;
+  }
+
+  const options = {
+    max: 500,
+
+    // for use with tracking overall storage size
+    maxSize: 5000,
+    sizeCalculation: (value, key) => {
+      return 1;
+    },
+
+    // how long to live in ms
+    ttl: 1000 * 60 * 5,
+
+    // return stale items before removing from cache?
+    allowStale: false,
+
+    updateAgeOnGet: false,
+    updateAgeOnHas: false,
+  };
+
+  cache = new LRUCache(options);
+  return cache;
+}
+
+function getCacheKey(request: DataQueryRequest) {
+  const queriesKey = hashKey([request.targets]);
+  console.log('queriesKey', queriesKey);
+  console.log('queries hash', cyrb53(queriesKey));
+  console.log('request.id', request.requestId);
+  console.log('time', request.range.from.valueOf(), request.range.to.valueOf());
+
+  return hashKey([request.targets, request.range.from.valueOf(), request.range.to.valueOf()]);
+}
+
+export function hashKey(queryKey: unknown[]): string {
+  return JSON.stringify(queryKey, (_, val) =>
+    isPlainObject(val)
+      ? Object.keys(val)
+          .sort()
+          .reduce((result, key) => {
+            result[key] = val[key];
+            return result;
+          }, {} as any)
+      : val
+  );
+}
+
+const cyrb53 = (str: string, seed = 0) => {
+  let h1 = 0xdeadbeef ^ seed,
+    h2 = 0x41c6ce57 ^ seed;
+  for (let i = 0, ch; i < str.length; i++) {
+    ch = str.charCodeAt(i);
+    h1 = Math.imul(h1 ^ ch, 2654435761);
+    h2 = Math.imul(h2 ^ ch, 1597334677);
+  }
+  h1 = Math.imul(h1 ^ (h1 >>> 16), 2246822507);
+  h1 ^= Math.imul(h2 ^ (h2 >>> 13), 3266489909);
+  h2 = Math.imul(h2 ^ (h2 >>> 16), 2246822507);
+  h2 ^= Math.imul(h1 ^ (h1 >>> 13), 3266489909);
+
+  return 4294967296 * (2097151 & h2) + (h1 >>> 0);
+};