Filter the fetches we add to a batch when we create the batch (#5034)
Without filtering the query hashes during batch creation, we could end up
with additional query hashes in the batch.

This manifests as batch queries failing, with query hashes appearing as
committed without ever having been registered in a batch.

Filtering during batch creation now matches the filtering performed at
subgraph coordination.
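The fix can be stated as an invariant: the set of hashes a batch is created with must be a subset of the hashes that subgraph coordination will later register. A minimal, self-contained Rust sketch of that invariant (the types and predicate here are simplified stand-ins, not the router's actual `Batching`/`BatchQuery` implementation):

```rust
use std::collections::HashSet;

// Batch creation: which query hashes go into the batch (the side this PR fixes).
fn create_batch(hashes: &[(u64, String)], include: &impl Fn(&str) -> bool) -> HashSet<u64> {
    hashes.iter().filter(|(_, svc)| include(svc)).map(|(h, _)| *h).collect()
}

// Subgraph coordination: which hashes actually get registered later.
// Applying the SAME predicate on both sides is the essence of the fix.
fn registered_hashes(hashes: &[(u64, String)], include: &impl Fn(&str) -> bool) -> HashSet<u64> {
    hashes.iter().filter(|(_, svc)| include(svc)).map(|(h, _)| *h).collect()
}

fn main() {
    let hashes = vec![(1u64, "products".to_string()), (2, "reviews".to_string())];
    // Hypothetical config: only the "products" subgraph participates in batching.
    let include = |svc: &str| svc == "products";
    let batch = create_batch(&hashes, &include);
    let registered = registered_hashes(&hashes, &include);
    // The batch never waits on a hash that is never registered.
    assert!(batch.is_subset(&registered));
    assert_eq!(batch, HashSet::from([1]));
}
```

Before the fix, batch creation effectively used an always-true predicate while coordination used the real one, so the batch could contain hashes that no subgraph fetch ever registered.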
garypen authored and bryn committed May 3, 2024
1 parent cd24189 commit 6fada36
Showing 3 changed files with 22 additions and 6 deletions.
9 changes: 9 additions & 0 deletions .changesets/fix_garypen_router_243_batch_fetch_filter.md
@@ -0,0 +1,9 @@
+### Filter the fetches we add to a batch when we create the batch ([PR #5034](https://github.com/apollographql/router/pull/5034))
+
+Without filtering the query hashes during batch creation, we could end up with additional query hashes in the batch.
+
+This manifests as batch queries failing, with query hashes appearing as committed without ever having been registered in a batch.
+
+Filtering during batch creation now matches the filtering at subgraph coordination, which fixes this issue.
+
+By [@garypen](https://github.com/garypen) in https://github.com/apollographql/router/pull/5034
12 changes: 9 additions & 3 deletions apollo-router/src/query_planner/plan.rs
@@ -9,6 +9,7 @@ use serde::Serialize;
 pub(crate) use self::fetch::OperationKind;
 use super::fetch;
 use super::subscription::SubscriptionNode;
+use crate::configuration::Batching;
 use crate::error::CacheResolverError;
 use crate::json_ext::Object;
 use crate::json_ext::Path;
@@ -77,10 +78,12 @@ impl QueryPlan {

     pub(crate) fn query_hashes(
         &self,
+        batching_config: Batching,
         operation: Option<&str>,
         variables: &Object,
     ) -> Result<Vec<Arc<QueryHash>>, CacheResolverError> {
-        self.root.query_hashes(operation, variables, &self.query)
+        self.root
+            .query_hashes(batching_config, operation, variables, &self.query)
     }
 }

@@ -216,6 +219,7 @@ impl PlanNode {
     /// details and don't use _ so that future node types must be handled here.
     pub(crate) fn query_hashes(
         &self,
+        batching_config: Batching,
         operation: Option<&str>,
         variables: &Object,
         query: &Query,
@@ -236,8 +240,10 @@
            new_targets.extend(nodes);
        }
        PlanNode::Fetch(node) => {
-           // If requires.is_empty() we can batch it!
-           if node.requires.is_empty() {
+           // If requires.is_empty() we may be able to batch it!
+           if node.requires.is_empty()
+               && batching_config.batch_include(&node.service_name)
+           {
                query_hashes.push(node.schema_aware_hash.clone());
            }
        }
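The changed condition in the `PlanNode::Fetch` arm can be illustrated with a standalone sketch. `Batching`, `batch_include`, and `FetchNode` below are simplified stand-ins for the router's types; the real `batch_include` consults the router's per-subgraph batching configuration:

```rust
// Simplified stand-ins for the router's batching config and fetch nodes.
struct Batching {
    enabled_subgraphs: Vec<String>,
}

impl Batching {
    // Hypothetical analogue of `batch_include`: a fetch may only join a
    // batch if its subgraph is enabled for batching.
    fn batch_include(&self, service_name: &str) -> bool {
        self.enabled_subgraphs.iter().any(|s| s == service_name)
    }
}

struct FetchNode {
    service_name: String,
    requires_is_empty: bool, // stand-in for `node.requires.is_empty()`
    query_hash: u64,         // stand-in for `node.schema_aware_hash`
}

// Mirror of the fixed condition: a hash is collected only when the fetch
// has no `requires` dependencies AND its subgraph is included in batching.
fn query_hashes(config: &Batching, fetches: &[FetchNode]) -> Vec<u64> {
    fetches
        .iter()
        .filter(|f| f.requires_is_empty && config.batch_include(&f.service_name))
        .map(|f| f.query_hash)
        .collect()
}

fn main() {
    let config = Batching { enabled_subgraphs: vec!["products".to_string()] };
    let fetches = vec![
        FetchNode { service_name: "products".to_string(), requires_is_empty: true, query_hash: 1 },
        // Excluded: subgraph not enabled for batching (the pre-fix code let this in).
        FetchNode { service_name: "reviews".to_string(), requires_is_empty: true, query_hash: 2 },
        // Excluded: has `requires`, so it can never be batched.
        FetchNode { service_name: "products".to_string(), requires_is_empty: false, query_hash: 3 },
    ];
    assert_eq!(query_hashes(&config, &fetches), vec![1]);
}
```

Before this change, only the `requires_is_empty` half of the condition was checked, so hashes for non-batchable subgraphs leaked into the batch.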
7 changes: 4 additions & 3 deletions apollo-router/src/services/supergraph/service.rs
@@ -254,9 +254,10 @@ async fn service_call(
         return Ok(response);
     }
     // Now perform query batch analysis
-    let batching = context.extensions().lock().get::<BatchQuery>().cloned();
-    if let Some(batch_query) = batching {
-        let query_hashes = plan.query_hashes(operation_name.as_deref(), &variables)?;
+    let batch_query_opt = context.extensions().lock().get::<BatchQuery>().cloned();
+    if let Some(batch_query) = batch_query_opt {
+        let query_hashes =
+            plan.query_hashes(batching, operation_name.as_deref(), &variables)?;
         batch_query
             .set_query_hashes(query_hashes)
             .await
