
API google.pubsub.v1.Publisher exceeded 5000 milliseconds when running on Cloud Run #1442

Closed
buffolander opened this issue Dec 15, 2021 · 89 comments
Labels
  • api: pubsub - Issues related to the googleapis/nodejs-pubsub API.
  • external - This issue is blocked on a bug with the actual product.
  • priority: p3 - Desirable enhancement or fix. May not be included in next release.
  • type: bug - Error or flaw in code with unintended results or allowing sub-optimal usage patterns.

Comments

@buffolander

When the image is run locally on Docker the application is able to successfully publish messages to Cloud PubSub.

When the same image is deployed to Cloud Run, it won't publish a single message. All attempts fail with the error:

GoogleError: Total timeout of API google.pubsub.v1.Publisher exceeded 5000 milliseconds before any response was received.
    at repeat (/usr/root/node_modules/google-gax/build/src/normalCalls/retries.js:66:31)
    at Timeout._onTimeout (/usr/root/node_modules/google-gax/build/src/normalCalls/retries.js:101:25)
    at listOnTimeout (internal/timers.js:557:17)
    at processTimers (internal/timers.js:500:7)
code 4

Sample Code:

import { mongoose } from '@shared/connections'
import MongoDbModel from '@shared/models'
import { PubSub } from '@google-cloud/pubsub'

import {
  REMINDER_COPY_SMS = '',
  REMINDER_MAX_DAYS_SINCE_LAST_MSG = 0,
  REMINDER_MAX_DAYS_SINCE_LAST_READING = 0,
  REMINDER_MESSAGE_TEMPLATE = 'reading-reminder',
  TOPICS_MESSAGE_DISPATCH,
  KEYFILE_PATH,
} from '../constants.js'

const pubsubClient = new PubSub({
  projectId: process.env.GOOGLE_CLOUD_PROJECT,
  keyFilename: KEYFILE_PATH,
})

const msMaxSinceLastReading = REMINDER_MAX_DAYS_SINCE_LAST_READING * 24 * 60 * 60 * 1000
const msMaxSinceLastMsg = REMINDER_MAX_DAYS_SINCE_LAST_MSG * 24 * 60 * 60 * 1000

const { model: Patient } = new MongoDbModel(mongoose, 'Patient')

const findInactivePatients = () => Patient.find({
  status: 'ACTIVE',
  $and: [{
    $or: [{
      last_reading_at: { $exists: false },
    }, {
      last_reading_at: { $lt: new Date(Date.now() - msMaxSinceLastReading) },
    }],
  }, {
    $or: [{
      last_reading_reminder_at: { $exists: false },
    }, {
      last_reading_reminder_at: { $lt: new Date(Date.now() - msMaxSinceLastMsg) },
    }],
  }, {
    // Test with known patients only
    $or: [{
      first_name: { $regex: /Bruno/i }, last_name: { $regex: /Soares/i },
    }],
  }],
}, {
  first_name: 1,
  last_name: 1,
  gender: 1,
  last_reading_at: 1,
  phones: 1,
}, {
  lean: true,
})

const updateLastReadingReminderAt = (patient) => Patient.findOneAndUpdate({
  _id: patient._id,
}, {
  last_reading_reminder_at: Date.now(),
})

const createMessagePayload = (patient) => ({
  body: REMINDER_COPY_SMS,
  channels: [{
    name: 'sms',
    contacts: patient.phones.map((phone) => phone.E164),
    specifications: {
      template: REMINDER_MESSAGE_TEMPLATE,
    },
  }],
})

const dispatchMessage = async (patient) => {
  if (!patient.phones || !patient.phones.length) {
    return
  }
  const payload = createMessagePayload(patient)
  console.info('Worker reminder-report-vitals', 'event payload', JSON.stringify(payload))
  try {
    await pubsubClient.topic(TOPICS_MESSAGE_DISPATCH).publishMessage({ json: payload })
    console.info('Worker reminder-report-vitals', 'event published to cloudPubSub')
    pubsubClient.close()
    await updateLastReadingReminderAt(patient)
    console.info('Worker reminder-report-vitals', 'last_reading_reminder_at updated for patient', patient._id)
  } catch (err) {
    console.error(err)
  }
}

const handler = async () => {
  console.info('Worker reminder-report-vitals', 'execution started')
  try {
    const inactiveList = await findInactivePatients()
    inactiveList.forEach(dispatchMessage)
  } catch (err) {
    console.error(err)
  }
}

export default handler

The code hangs after logging this line:
console.info('Worker reminder-report-vitals', 'event payload', JSON.stringify(payload))

Sample Event Payload:

{"body":"This is {{ORG}}. We haven't been receiving your vitals. Reply \"start\" to report your vitals now. Reply \"stop\" at any time to opt-out from automated reminders.","channels":[{"name":"sms","contacts":["+15555555555"],"specifications":{"template":"reading-reminder"}}]}

Environment details

  • OS: Linux Alpine
  • Node.js version: 14.16.1
  • @google-cloud/pubsub version: 2.18.4

Steps to reproduce

  1. Create a Pub/Sub topic to receive messages and set its name in constants.js as TOPICS_MESSAGE_DISPATCH.
  2. Create a Node.js application image from the sample code. For the sake of reproducing the issue only, the database operations may be removed and the sample event payload used instead.
  3. Run the container locally with Docker.
  4. Push the image to GCP Container Registry.
  5. Deploy a Cloud Run service using the image.
  6. See the timeout error in Cloud Logs.
@product-auto-label product-auto-label bot added the api: pubsub Issues related to the googleapis/nodejs-pubsub API. label Dec 15, 2021
@yoshi-automation yoshi-automation added the triage me I really want to be triaged. label Dec 15, 2021
@buffolander
Author

FYI I experienced the same behavior running on Cloud Functions. Rolling back to version 2.12.0 unblocked me.

@yoshi-automation yoshi-automation added the 🚨 This issue needs some love. label Dec 20, 2021
@Sytten

Sytten commented Dec 21, 2021

Yeah we are experiencing issues on cloud run.

@Sytten

Sytten commented Dec 21, 2021

I don't believe this is the issue in production. Most likely cloud run now kills the network access when the container is inactive (aka when the request has ended) which happens more often on low load. It used to work though, so I believe it might be a change in cloud run infrastructure. I am making sure that all the pubsub calls are awaited properly on my side. You should also set .topic(topic, { batching: { maxMessages: 1 } }) so that the message is sent right away instead of waiting in a queue.
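[Editor's note] A minimal sketch of the per-message batching setup described above. The helper and its names are illustrative, not part of the library; only the `batching` option keys come from @google-cloud/pubsub.

```javascript
// Options that make the client send each message immediately instead of
// queueing it in a batch: maxMessages: 1 flushes on every publish call.
function immediatePublishOptions() {
  return {
    batching: {
      maxMessages: 1,      // flush after every message
      maxMilliseconds: 0,  // do not wait before sending
    },
  };
}

// Usage with the real client (not executed here):
//   const { PubSub } = require('@google-cloud/pubsub');
//   const topic = new PubSub().topic('my-topic', immediatePublishOptions());
//   await topic.publishMessage({ json: { hello: 'world' } });
```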

@maurocolella

For context: I was referring to an issue with ports on the emulator. I have deleted the relevant comment because I also believe it's not what's happening here. Apologies for the confusion.

@haroldadmin

Can confirm, we are facing the same issue on Cloud Functions with @google-cloud/pubsub version 2.18.4.

@gorziza

gorziza commented Dec 22, 2021

Any news on this case? I have the same problem.

@Sytten

Sytten commented Dec 22, 2021

We reverted to 2.17. I believe the bug was introduced in 2.18.

@meredithslota
Contributor

@Sytten Just checking — the issue does not reoccur with 2.17, only with 2.18? Which minor version were you using? I do see a change that went out re: publish timeouts, https://github.com/googleapis/nodejs-pubsub/releases/tag/v2.18.0, updated in https://github.com/googleapis/nodejs-pubsub/releases/tag/v2.18.3. It looks like @feywind did some investigation in an earlier issue here: #1425, and it seems like there was an upstream change we're trying to figure out. Our current belief is that 2.18.3 fixes the issue, but please let me know if this isn't the case.

@meredithslota meredithslota added priority: p2 Moderately-important priority. Fix may not be included in next release. type: bug Error or flaw in code with unintended results or allowing sub-optimal usage patterns. and removed triage me I really want to be triaged. 🚨 This issue needs some love. labels Dec 22, 2021
@apettiigrew

Getting the same issue in google cloud function trigger.

@gorziza

gorziza commented Dec 22, 2021

@meredithslota We had the problem with 2.18.3 too

@causticsudo

causticsudo commented Dec 22, 2021

Same here too, with versions 2.18.3 and 2.18.4.

@apettiigrew

apettiigrew commented Dec 22, 2021

FYI I experienced the same behavior running on Cloud Functions. Rolling back to version 2.12.0 unblocked me.

Tried this solution it did not work for my case, while using a background cloud function.

const { PubSub } = require("@google-cloud/pubsub");
const pubsub = new PubSub();

exports.main = async (message, context) => {
  const data = {
    event: "AccountCreated"
  };
  const topicName = 'some valid topic name'
  const pubMessage = Buffer.from(JSON.stringify(data));
  const topic = pubsub.topic(topicName);

  topic.publish(pubMessage);
};
{
  "name": "sample-pubsub",
  "version": "0.0.1",
  "dependencies": {
    "@google-cloud/pubsub": "^2.12.0"
  }
}

@gorziza

gorziza commented Dec 23, 2021

@apettiigrew Remove the ^ and pin the exact version: "@google-cloud/pubsub": "2.17.0"

@cmagk

cmagk commented Jan 1, 2022

It happened to me today as well after upgrading from 2.17.x. I reverted, but why was the 5 s timeout introduced?

@lguendogdu

Same here, too. Running 2.18.4 with dockerized node application.

@Benny739

Benny739 commented Jan 3, 2022

Same problem here with 2.15.1

@BerangerNt

BerangerNt commented Jan 3, 2022

I had the same issue using Google Cloud Functions and "@google-cloud/pubsub": "^2.17.0". I was probably publishing successive messages the wrong way: using the "batching" option and publishing them with Promise.all reduced the processing time, and it's now under 5000 ms.

@andersondanilo

I don't believe this is the issue in production. Most likely cloud run now kills the network access when the container is inactive (aka when the request has ended) which happens more often on low load. It used to work though, so I believe it might be a change in cloud run infrastructure. I am making sure that all the pubsub calls are awaited properly on my side. You should also set .topic(topic, { batching: { maxMessages: 1 } }) so that the message is sent right away instead of waiting in a queue.

Inside Cloud Run, I needed to disable batching (maxMessages = 1) and make sure the promise was awaited before the HTTP response was returned.

Note: by default, Cloud Run indeed doesn't allocate CPU after the request has finished (https://cloud.google.com/blog/products/serverless/cloud-run-gets-always-on-cpu-allocation)
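[Editor's note] A sketch of the ordering described above, assuming an Express-style response object; the function name and parameters are illustrative. Awaiting the publish before sending the response keeps the network call inside the request's CPU allocation.

```javascript
// Publish first, then respond. On Cloud Run without always-on CPU, work
// scheduled after the response may never run, so the publish must be
// awaited before res.send() is called.
async function publishThenRespond(topic, payload, res) {
  const messageId = await topic.publishMessage({ json: payload });
  res.status(200).send({ messageId });
  return messageId;
}
```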

@minuhariharan

We are facing the same issue as well, using pub sub version 2.18.3, is downgrading the only solution?

@minuhariharan

We are seeing this error: Received error while publishing: Total timeout of API google.pubsub.v1.Publisher exceeded 600000 milliseconds before any response was received.

@cmagk

cmagk commented Jan 10, 2022

We are facing the same issue as well, using pub sub version 2.18.3, is downgrading the only solution?

We only use Pub/Sub in Node.js for some tests locally. We use the Java SDK in production, where it works fine.

I tried deploying it to gcloud just to see, and I didn't get the timeout there. I don't know why, but I am thinking there is something wrong with the way we have implemented it in Node.js.

I'll do some more tests with other implementations today or tomorrow.

@timdauer

We are facing the same issue as well, using pub sub version 2.18.3, is downgrading the only solution?

Same here, though we are using version 2.18.1, so I guess the downgrade needs to go at least below that, if it helps at all.

We also opened (yet another) P3 support case at Google. They will refer it to the product team; let's hope this issue is resolved quickly.

@bharathjinka09

I got the error "google.pubsub.v1.Publisher exceeded 60000 milliseconds before any response was received." Increasing the gaxOpts timeout value to 100000 solved our problem.

const publishOptions = {
  gaxOpts: {
    timeout: 100000,
  },
};

const topicChannel = pubsub.topic(TOPIC_ID, publishOptions);

Thanks @koushil-mankali for your solution; it's working. I am now facing a new issue related to Google Cloud Logging. Can you please let me know how to fix the error below? It would be very helpful. Thanks

Error: Total timeout of API google.logging.v2.LoggingServiceV2 exceeded 60000 milliseconds before any response was received.

@davestimpert

Does anyone know why it would take over a minute to publish?

@bharathjinka

Does anyone know why it would take over a minute to publish?

If the queue has a lot of data to process, it will take more time. You can reproduce the issue by load testing with multiple requests. The issue can be mitigated by increasing the timeout value up to the maximum of 540000 ms, i.e. 540 seconds.

@davestimpert

Something to consider: we noticed this issue arising from an HttpFunction where we were sending a response before publishing to Pub/Sub. Having awaits after sending a response is discouraged by Google, and when we switched the order to publish first, the issue went away.

@lemndev

lemndev commented Nov 16, 2023

A point to consider about this issue: if your application is not receiving consistent traffic, this error may affect the initial requests that boot up your Cloud Run instances. With subsequent requests in the same timeframe, you'll notice that the error may not recur.

I cannot explain why it takes that long to publish to pub/sub but increasing the timeout as in koushil-mankali's solution here will save you some headaches.

@ts-geek22

I got the error "google.pubsub.v1.Publisher exceeded 60000 milliseconds before any response was received.", By increasing the gaxOpts timeout value to 100,000, we solved our problem.

const publishOptions = {
  gaxOpts: {
    timeout: 100000,
  },
};

const topicChannel = pubsub.topic(TOPIC_ID, publishOptions);

Thanks, @koushil-mankali, for your solution. It's working. I am facing a new issue related to Google Cloud logging. Can you please let me know how to fix the below error? It will be very helpful. Thanks

Error: The total timeout of API google.logging.v2.LoggingServiceV2 exceeded 60000 milliseconds before any response was received.

@bharathjinka09 Did you find any solutions for your last error? I'm facing the same problem, and I have tried several solutions but haven't succeeded.

Solutions I have tried:

  1. Increased timeout from 60 seconds to 600 seconds in pub/sub client.
  2. Increased CPU and RAM for cloud functions.
  3. Used batch messaging for efficiency.
  4. Created a single topic instance and used it across all messages.
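[Editor's note] For solution 4 above, a minimal sketch of caching one Topic object per name. The cache helper and the fake client in the usage are illustrative; only `pubsub.topic()` is the real client call.

```javascript
// Reuse one Topic object per topic name instead of calling pubsub.topic()
// for every message; each Topic holds its own batching queue and gRPC state.
const topicCache = new Map();

function getTopic(pubsub, name, options) {
  if (!topicCache.has(name)) {
    topicCache.set(name, pubsub.topic(name, options));
  }
  return topicCache.get(name);
}
```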

@kamalaboulhosn
Contributor

Just to reiterate what I said back in June: while the symptom of these issues (publisher timeouts) is similar across the cases, it is highly likely that the underlying causes are different, and so this GitHub issue now covers a lot of different cases where more user-specific information is needed. If you are still experiencing issues, please create a support case. Thanks!

Increasing the gax timeout may be a viable solution in some instances, but it is not a general-purpose answer to how to address timeouts in all cases.

We need specific information for each individual case in order to properly diagnose these issues and so a support case is the proper venue for that exchange of information.

@nsrivastava645-ghl

nsrivastava645-ghl commented Mar 7, 2024

I solved this issue for me by having a single instance of the topic so you can change your code to something like this:

const topics = {}; // create a global var for topics to reuse them rather than creating a new topic for every message call.

const dispatchMessage = async (patient) => {
  if (!patient.phones || !patient.phones.length) {
    return
  }
  const payload = createMessagePayload(patient)
  console.info('Worker reminder-report-vitals', 'event payload', JSON.stringify(payload))
  try {
    if(!topics[TOPICS_MESSAGE_DISPATCH]){
      topics[TOPICS_MESSAGE_DISPATCH] = pubsubClient.topic(TOPICS_MESSAGE_DISPATCH, {
        batching: {
          maxMessages: 100, // set it to something like 200-300 depending upon how your subscriber is configured.
          maxMilliseconds: 100, // max wait before messages are published to the server (in ms)
        }
      })
    }
    await topics[TOPICS_MESSAGE_DISPATCH].publishMessage({ json: payload })
    console.info('Worker reminder-report-vitals', 'event published to cloudPubSub')
    pubsubClient.close()
    await updateLastReadingReminderAt(patient)
    console.info('Worker reminder-report-vitals', 'last_reading_reminder_at updated for patient', patient._id)
  } catch (err) {
    console.error(err)
  }
}

Give it a try.

@ts-geek22

I have solved this issue. The problem was with environment variables. Google Cloud Functions parses each variable added from the GUI console and adds escape sequences, so \n in my service account's private key will be converted to \\n, resulting in wrong secrets.

It's strange that the Pub/Sub SDK does not return proper error messages; it always returns an API timeout, no matter how much time I increase the timeout.

Removing escape sequences from the account's private key solves the problem.
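[Editor's note] A sketch of the normalization described above: if the console stored the key with literal backslash-n sequences, they can be converted back to real newlines before constructing the client. The function name and environment variable names are illustrative.

```javascript
// The GUI console may store "\n" in a pasted private key as the two
// characters "\" and "n". Convert them back into real newlines.
function fixPrivateKey(rawKey) {
  return rawKey.replace(/\\n/g, '\n');
}

// Usage with the real client (not executed here):
//   const pubsub = new PubSub({
//     credentials: {
//       client_email: process.env.CLIENT_EMAIL,
//       private_key: fixPrivateKey(process.env.PRIVATE_KEY),
//     },
//   });
```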

timmy80713 added a commit to timmy80713/android-slack-tools that referenced this issue Apr 1, 2024
According to googleapis/nodejs-pubsub#1442 (comment) suggestion, add publishOptions to adjust timeout to 100000.
@Kripu77

Kripu77 commented Apr 5, 2024

I have solved this issue. The problem was with environment variables. Google Cloud Functions parses each variable added from the GUI console and adds escape sequences, so \n in my service account's private key will be converted to \\n, resulting in wrong secrets.

It's strange that the Pub/Sub SDK does not return proper error messages; it always returns an API timeout, no matter how much time I increase the timeout.

Removing escape sequences from the account's private key solves the problem.

Hi @ts-geek22, were you able to publish messages through the service at all? Or did the error occur after the application had been running for around 1-2 hours?

@ts-geek22

ts-geek22 commented Apr 5, 2024

I have solved this issue. The problem was with environment variables. Google Cloud Functions parses each variable added from the GUI console and adds escape sequences, so \n in my service account's private key will be converted to \\n, resulting in wrong secrets.
It's strange that the Pub/Sub SDK does not return proper error messages; it always returns an API timeout, no matter how much time I increase the timeout.
Removing escape sequences from the account's private key solves the problem.

Hi @ts-geek22, were you able to publish messages through the service at all? Or did the error occur after the application had been running for around 1-2 hours?

It's a string parsing issue, so it occurs from the start, as a wrong string is always the wrong string.

A timeout error is a general error in the case of cloud publishing, as it throws the same error for multiple reasons and does not provide any specific contextual information.

This is a link to my Stack Overflow issue, where I have listed a few different solutions I have tried; feel free to give them a try; you might find one that works for you.

@jdziek

jdziek commented Apr 10, 2024

I'm experiencing the same problem, and the solution that worked for me was to initialize Pub/Sub inside the function and then close the client. The reason, apparently, was that we kept batching messages, so the Cloud Function shut down before it had a chance to send them.

try {
  const pubSubClient = new PubSub();
  const topic = pubSubClient.topic(topicName, {
    batching: { maxMessages: 1 },
    gaxOpts: {
      timeout: 100000,
    },
  });

  const messageId = await topic.publishMessage({ json }).catch((err) => {
    logger.error(
      `Received error while publishing to ${topicName}: ${err.message}`
    );
    logger.error(err);
    throw new InternalServerError();
  });
  await pubSubClient.close();
  logger.info(
    `Message ${messageId} was published to ${topicName} with message ${JSON.stringify(json)}`
  );
} catch (err) {
  logger.error(err);
  throw new InternalServerError();
}

EDIT: Added max batching and timeout just due to this thread. Still need to test it.

However, somebody changed it to match the documentation a while ago and we started having that issue again just recently, so I need to retest. Just throwing this idea out there; maybe it will help somebody. I will try to give an update after more testing.

In the mentions above, I noticed people talking about the importance of awaiting Pub/Sub calls. In our use case we return a response from the server after calling a function that triggers Pub/Sub and is not awaited, pretty much to avoid having Pub/Sub fail before we get a chance to respond. However, we do await within the called function. Example:

async function example () {
....
      initPubsubMessages(
      message1,
      message2,
      ...
      );
return data;
}

async function initPubsubMessages(message1, message2) {
  await pubsub.publish(topic, message1);
  await pubsub.publish(topic, message2);
}

We use Firebase Functions gen 2, which are pretty much Google Cloud Functions at this point, to my knowledge. In my head everything here is correct, and the processes initiated in initPubsubMessages, which awaits internally, should complete without a hitch. But I'm also aware Cloud Run has some quirks, so I thought I'd ask for somebody else's opinion: would Firebase Functions/Cloud Run stop all processes as soon as the server responds?
I am grasping at straws here but I've experienced weirder things using that platform.
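[Editor's note] A sketch of the fully awaited alternative to the fire-and-forget pattern above, assuming a topic object with publishMessage; the helper name is illustrative. Awaiting every publish before the handler returns keeps the work inside the request's CPU allocation.

```javascript
// Await every publish before the handler returns, so nothing is left
// running after the platform deallocates CPU for this request.
async function dispatchAll(topic, payloads) {
  const messageIds = [];
  for (const payload of payloads) {
    messageIds.push(await topic.publishMessage({ json: payload }));
  }
  return messageIds;
}

// In the handler:
//   const ids = await dispatchAll(topic, [message1, message2]);
//   return data; // respond only after the publishes settle
```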

@ts-geek22

@jdziek I don't know much about Firebase Functions, but a Cloud Function will deallocate memory and CPU as soon as you return a response in an HTTP Cloud Function. It assumes your function's execution is complete once you have returned a response.

@pdfrod

pdfrod commented Apr 11, 2024

The same issue about Cloud Run was already mentioned here before: #1442 (comment)

@jdziek

jdziek commented Apr 11, 2024

I was worried that this might be the case. The thing is, this has been happening in places where we await as well. I'll add an await just to be safe, but I doubt it will fully solve the issue :/
Now I just have to hope that a hanging Pub/Sub call won't prevent the response altogether.
Thank you.

@jdziek

jdziek commented May 2, 2024

It was good for about two weeks. We checked that we are awaiting everything and put in the tweaks mentioned above. Just had a Pub/Sub error chain this morning. Really at a loss here; I was really hoping it would work. Any suggestions at this point are welcome. I might put Pub/Sub back to how it was set up in the documentation; maybe it will work this time with proper awaiting.

@bkovari

bkovari commented May 2, 2024

@jdziek

It was good for about two weeks. We checked that we are awaiting everything and put in the tweaks mentioned above. Just had a Pub/Sub error chain this morning. Really at a loss here; I was really hoping it would work. Any suggestions at this point are welcome. I might put Pub/Sub back to how it was set up in the documentation; maybe it will work this time with proper awaiting.

Are you referring to your configuration mentioned earlier?

const topic = pubSubClient.topic(topicName, {
  batching: { maxMessages: 1 },
  gaxOpts: {
    timeout: 100000,
  },
});

@jdziek

jdziek commented May 2, 2024

Yeah. I noticed, though, that it's happening in only one service now, so I'm currently reviewing the code there. I reverted the Pub/Sub implementation to how it's shown in the docs and pushed, so hopefully nothing new will appear in two weeks.

@Kripu77

Kripu77 commented May 2, 2024

Yeah. I noticed, though, that it's happening in only one service now, so I'm currently reviewing the code there. I reverted the Pub/Sub implementation to how it's shown in the docs and pushed, so hopefully nothing new will appear in two weeks.

Hey @jdziek are you using any other library to connect to any other Google API's?

@jdziek

jdziek commented May 2, 2024

yeah, no. Just google-cloud ones for firestore, pubsub, and storage.

EDIT. I guess also firebase modules.

@Kripu77

Kripu77 commented May 2, 2024

yeah, no. Just google-cloud ones for firestore, pubsub, and storage.

Update those packages to the latest versions; hopefully that will fix your issue. We faced a similar problem with a few of our services; in our case it was the Google Secret Manager package. Once we bumped it to the latest version, the problem was fixed. I believe it's due to the gRPC library that most of Google's npm packages use for communication with their APIs.

@jdziek

jdziek commented May 2, 2024

Good point. I only updated Pubsub really. Will review them now. Thanks

@sylvainar

Hey, I'm subscribing as well to this issue.

I have the same error in all my Cloud Run services. Everything was fine until a few weeks ago, when we started having this issue. We set batching to 1, and we opened and closed the client for each message, but we're still seeing it. I'm trying to bump all Google dependencies to see if it changes anything; let's keep each other posted!

@vinay-panwar-04

Hi, folks - I am publishing almost 100k messages to a topic. After a while of publishing I started getting "Error: Total timeout of API google.pubsub.v1.Publisher exceeded 600000 milliseconds before any response was received", and no matter how much I increased the timeout value, the error was still there after waiting for the specified timeout.

@kdawgwilk

@vinay-panwar-04 Same here. I have increased the timeout on the function and can confirm it is larger than 60000 ms, and I can also see that the acknowledgement deadline for the Pub/Sub subscription is greater than 60000 ms, but I still see this error:

Error: Total timeout of API google.pubsub.v1.Publisher exceeded 60000 milliseconds before any response was received.
    at repeat (/workspace/node_modules/google-gax/build/src/normalCalls/retries.js:66:31)
    at Timeout._onTimeout (/workspace/node_modules/google-gax/build/src/normalCalls/retries.js:102:25)
    at listOnTimeout (node:internal/timers:569:17)
    at process.processTimers (node:internal/timers:512:7)

I don't see a way to set the gaxTimeout for firebase functions

@Kripu77

Kripu77 commented May 15, 2024

Good point. I only updated Pubsub really. Will review them now. Thanks

Hey @jdziek, did updating the packages fix the issue for you?

@sylvainar

Hey all,

We had this timeout issue happening randomly on our Cloud Run instances; it could be mitigated by deploying a new revision. I haven't investigated the root cause, but instinctively it looks like when a Cloud Run instance runs for too long, the connection to Pub/Sub stops working, maybe due to something related to auth or long-lived gax connections.

We finally managed to fix the issue. We needed to upgrade all google packages, including @google/<something> packages, google-auth-library, google-gax, googleapis-common and other sub-dependencies.

If (like us) you're using yarn v1, here's the list of commands that can help you:

  • yarn upgrade-interactive: select all google packages and bump them
  • yarn outdated: see all packages that have new major versions
  • https://www.npmjs.com/package/yarn-deduplicate to remove duplicates in your lockfile (that was the issue we had: older versions of Google packages were still in our lockfile)
  • yarn why googleapis-common to check that you have a single version of a lib and that it is the latest one

Hope it helped, good luck!

@jdziek

jdziek commented May 15, 2024

@Kripu77 So far so good; we haven't had issues yet. The problem has been so erratic that I can't say for sure, but it seems at least like it's improved.

@bkovari

bkovari commented May 24, 2024

With this new configuration the timeout errors seem to have disappeared completely. There have been no occurrences in the last 2 weeks:

const publishOptions = {
  gaxOpts: {
    timeout: 540000
  },
};

pubsub.topic(topicName, publishOptions);

Previously, I did increase the timeout of the cloud function itself, which did not help.
