Deploying functions extremely slow #536
Thanks for filing! We know that slow deployment is a major pain point for the functions experience, and it is something we are working to resolve through various strategies.
@laurenzlong As a newbie, I spent a lot of time canceling deploys, thinking I was doing something wrong.
The problem is kind of obvious. The init command created a `node_modules` folder that is huge; on my machine it is 22,903 files in 2,782 folders. The code copies all of that to a temp folder. Here is what I did:
Line 26 in prepareFunctionsUpload.js:
The functions folder was created by the CLI command init. It looks like this:
It seems to me that the CLI command init should NOT have created that `node_modules` folder. It's 165 MB. That seems unreasonable to add to every project.
That's actually not the issue. The `node_modules` folder is not copied over to the temporary folder, as you can see from this line here: https://github.com/firebase/firebase-tools/blob/master/lib/prepareFunctionsUpload.js#L76 It is necessary for
I put a debug print before line 26 and after; that line took minutes. Then I added a filter to print out which files were being copied: it included all the `node_modules` files. Then I changed the filter to exclude the `node_modules` files, and the deployment progressed quickly. However, a little later the script tried to validate the Cloud Functions, and that code failed because it was missing the library dependencies. The source line you point to seems to be a later step: the files (minus `node_modules`) are archived into a zip file before uploading to the cloud. That line doesn't run slowly on my machine.
Yes, you're right, I was mistaken: `node_modules` does get copied over. I think it is a valid idea to not copy `node_modules` to the temp directory. What complicates this a bit is that the CLI writes a ".runtimeconfig.json" to the temp folder prior to trigger parsing, this file gets uploaded with the rest of the functions source code, and we didn't want to write it into the actual source directory. So there is probably a solution that both improves deployment speed and avoids unintended side effects, but I'd have to play around with it a bit. You can also feel free to make a pull request.
I'm having the same issue. It might be a good idea to print more messages during the "preparing directory..." steps so that the user doesn't think firebase-tools is hanging. Edit: This was on Ubuntu WSL. On Linux, the "preparing" phase doesn't hang. The "creating function" step can be slow, but not as slow as what I experienced earlier.
This issue is a real pain in the bum. I think it should be given high priority :/
I'm trying to add Firebase functions to my project, and because of this issue I had to postpone it.
This issue is a major hindrance, mostly because I use Firestore, where things like aggregates, counters, and presence can only be handled decently by Cloud Functions, and deploys just hang for 5 minutes every time.
@PulpoEnPatineta This is not an error. It is simply an issue with deploy time.
@mcstuffins If your car takes five minutes to turn on, is it an error or simply an issue with start time?
Is there a fix for this? It is really extremely slow.
I kept track of this issue from the beginning, but it was never a problem for me since I have CD set up and it does all the work for me. I also never deploy functions just to test whether they work, so basically it didn't matter to me. Until today, when I hit an unexpected limitation: I cannot deploy my functions anymore, because deployment to production exceeds the daily quota (12,000 seconds). I have ~55 functions with various triggers (pubsub, firestore, https). Is that too much to handle? Now I have to deploy my application over two days 🤣 🍭 👍 🥇 ⚰️ 🎉 🌮 🌵 💃 😈
Sometimes, when I am deploying, it is extremely slow, and then there is a warning in the terminal that says "Error in the build environment"
@srinurp Please see the pull request I linked to above; it addresses part of the problem, and the backend team is working to address the other parts (it's a very complicated undertaking, so we appreciate your patience).

@merlinnot Unless you are updating the code for all of your functions with each deploy, I recommend using the --only flag to deploy individual functions or groups of functions. See https://firebase.google.com/docs/cli/#partial_deploys.

@mcstuffins "Error in the build environment" usually indicates a production issue; in that case, please file a support ticket at https://firebase.google.com/support/. You can check for ongoing production issues on the Firebase Status Dashboard.
@laurenzlong I would have to configure my CI to automatically detect changes in each function between deployments (including resolving dependencies). And what if I update packages like
@merlinnot That's a very legitimate use case. Deployment quotas are controlled by Google Cloud Functions; I would recommend filing a request on their public issue tracker: https://cloud.google.com/functions/docs/support
For everyone interested: https://issuetracker.google.com/issues/71385193
@laurenzlong Could
@horacehylee The other thing is caching: if Google were to download packages on your behalf, they would have to implement some internal caching mechanism. We all have local caches on our machines (both npm and yarn have caching mechanisms), so we don't hammer npm's servers ;) Some people might also want to simply test changes in external libraries; it's much easier to change a file and deploy the function than to create a fork, make changes, temporarily change the package reference, and so on. Bottom line: it works just fine, leave it as it is now 👍
@horacehylee @merlinnot Thanks for the two cents. Please see #578; the next release of the CLI will no longer copy the functions source folder, period.
I'm really not sure what Firebase could be doing that takes MINUTES to deploy a 5-line, 1 kB function on a six-core 4 GHz machine sitting on a 1 Gbps fiber connection. I know it sounds like I'm taking the piss, but I'm genuinely curious what is going on during "preparing directory for upload". Anyone actually know?
Copying your functions directory, including node modules, to a tmp dir. Our next release will address this and no longer need to copy before deploying.
Ah right, that explains it. It's kind of comedic seeing a workstation go crazy for a minute or two, only for the next line to print "packaged functions (37.55kb !!!) successfully for uploading", lol. Looking forward to the next release. Thanks for responding. H.
Day 19. Firebase still deploying. *grabs popcorn*
That's really sad to hear. Another approach is to bundle multiple entrypoints/routes into a single function. This is obviously not ideal for separation of concerns, size, or security, but it is what we did with all of our API endpoints (a single entrypoint that uses an internal router), and we haven't hit many quota limits since (though they do happen occasionally). Still, this quota limit seems really low, since the official docs list "80 per 100 seconds" for "Calls to deploy or delete functions via the Cloud Functions API". I'm assuming Google support could not raise this limit any further?
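The single-entrypoint pattern described above can be sketched framework-free as a dispatch table. This is a minimal illustrative sketch (the route names are made up), not the commenter's actual code:

```javascript
// One deployed function, many routes: adding an endpoint adds a table entry,
// not another Cloud Function, so it costs no extra deploy-quota calls.
const routes = {
  "GET /users": () => ({ status: 200, body: ["alice", "bob"] }),
  "POST /orders": () => ({ status: 201, body: "created" }),
};

// The single HTTPS entrypoint would delegate every request to this dispatcher.
function handle(method, urlPath) {
  const route = routes[`${method} ${urlPath}`];
  return route ? route() : { status: 404, body: "not found" };
}

console.log(handle("GET", "/users").status);   // 200
console.log(handle("GET", "/missing").status); // 404
```

In practice many people mount an Express app behind one HTTPS function for the same effect; the trade-off is that one route change redeploys the whole bundle.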
@dinvlad I inquired with both sales and support about getting that API limit increased; it's explicitly disabled/greyed out in the GCP console quotas section. I eventually got a Google engineer on Hangouts who informed me "That quota does not change", so I guess not.
If a functions-only developer chooses to put their functions source in the root folder, this could be a source of longer deploy times. See #536
We have added That should speed up releases for some. Of course there's not much we (the Firebase CLI team) can do about latency on the Cloud Functions backend. |
make sure to do |
Over here we've also been grouping related functionality into a function, so it ends up looking like "one function per feature area" rather than "one function per individual granule of functionality". The performance characteristics of deployment are strongly driving the architecture of what goes into one function!
When and why did this issue get closed? It was opened in Nov 2017, and it looks like it is still just as much a problem as it was then. I can't see a reference to 'Closed' here, and I would be interested to know why it was closed. I'm not complaining, just wondering. I'm sure it's being worked on, but it would be good to know about any progress.
@chriscurnow It was closed because the real cause apparently lies in the (closed-source) server side, not the CLI tools, even though it manifests to end users as a CLI issue (per @samtstern in #536 (comment)). Unfortunately there's no good public tracker for the backend, so we get updates here :(
How can we open this bug for Google to fix it in their backend? What's even the point of using Firebase functions if it's going to take this long?
Hey guys, maybe we could all report this issue to Google and maybe they'll listen? Every time I try to use something from Google for something important, I'm reminded that they are one of the least consumer-friendly companies in existence, but hey, we could try.
@RenFontes I just filed Case 00075974, "Deploying firebase functions is so slow as to be unusable", with Firebase support. I will update this message with whatever they come back with.
It's possible to deploy specific functions. Couldn't the CLI have a "checksum tracker" for each function and only deploy the changed ones (and also track everything each one uses: vars, packages, ...)?
@SrBrahma Sure, but the issue is still present even if you only deploy a single function (e.g., a single-function deploy can still fail due to timeout). It shouldn't fall on the user to manage batching function deployments, and in many cases function updates are needed atomically and therefore can't be split or batched. I appreciate the suggestions for a workaround, but really what we need here is an SLA from Google and their adherence to it.
It is true that Firebase Functions sometimes (maybe often) has poor performance; it is really slow. You should support Rust instead, or Go; are they much faster than Node?
Still waiting for Google to improve Cloud Functions deploy times, right?
Deploying the initial helloWorld sample function takes 1 min 40 sec with the above ".ignore" rule (Jan. 2021) so I think it's safe to say this is not a priority for Google to improve. Just in case this helps any other beginners: run the Firebase emulator, then change "tsc" to "tsc -w" (if using typescript) in package.json, and finally run "npm run build" in another window. With this, the emulator reloads your function changes right away. This makes local development a lot faster than waiting 2 minutes to test changes on Google's servers. |
Some time ago I made this start script in package.json; it is incredibly helpful for quickly starting the emulator and setting up tsc watch with a single command:
Install concurrently with I also have a startClean script that removes previous state data:
Edit: I think I remembered about the
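For readers who want to reproduce this setup, a minimal sketch of such a package.json scripts block might look like the following. The script names and flags here are assumptions, not the commenter's exact file:

```json
{
  "scripts": {
    "build:watch": "tsc -w",
    "serve": "firebase emulators:start --only functions",
    "start": "concurrently --kill-others \"npm:build:watch\" \"npm:serve\""
  }
}
```

`concurrently` is a third-party npm package and must be installed as a dev dependency (typically `npm i -D concurrently`) before `npm start` will work.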
I'm writing this comment in the midst of a major outage that I'm in charge of fixing. "But @LilRed," you might say, "why are you replying to closed GitHub issues instead of attending to your major outage?" Because every time I attempt a fix and re-deploy a single Firebase Function, I have to wait and twiddle my thumbs for five and a half minutes before deployment completes and I can see whether my fix worked. And maybe I need to make two or three attempts. At this point, more than half of our backend incident response time is spent waiting on Cloud Functions deploys.
@LilRed I would recommend using the functions emulator for testing changes instead of deploying them. This was our problem too, before we configured the emulator for development and testing.
Did they get back to you with any solutions? @RenFontes We're facing the same issue: even single-function deployment is so slow that creating new functions inevitably fails, unless we force a longer Also, it would be nice to have more useful error logs for these timeout-related deployment errors, rather than just a
@Scino Deployment errors appear in the function logs in the Firebase console :) Still sometimes cryptic, but it definitely gives more insight into the possible root cause. Another workaround I've found useful is to use a bundle for the final artifacts (webpack, rollup, etc.): source code stays separated, but the artifact you deploy is a bundle of all the functions. Still slow, but you bite the bullet only once :) I would still like to see a better experience with functions.
That sounds promising. Did you have to play around with Google Cloud Build directly, or something like that?
I was using GitHub Actions at the time. I didn't have to dig into Cloud Build, but I didn't have a huge number of functions either (~40).
Not really @Scino. This is what I received from Firebase support, and I didn't follow up further. Nobody on the Firebase/Google team seems to appreciate that every now and then we actually need to push functions to production rather than just play with them in an emulator. Single function uploads often fail, even if they have been tested and validated locally via the emulator. Good luck trying to get multiple updates to push in any sort of atomic way.
Has anyone had any success using modern bundlers like esbuild to bundle and speed up function deployments?
I'm currently doing that. It seems to have slowed down deployments, though, haha.
My console is still deploying, still 2017. I'm still hopeful for a fix; I will keep it open.
Version info
3.15.0
Steps to reproduce
Make a simple `functions` directory with only 1 function. Now deploy using `firebase deploy --only functions`.
Expected behavior
Deploy faster. Right now it takes minutes to deploy a small functions file. If I compare this to the hosting upload/deploy, that one goes pretty fast and involves much more than one file.
Actual behavior
Takes extremely long to upload/deploy. It hangs during the `preparing functions directory for uploading...` phase.
Debug log for `firebase deploy --only functions`:
Please note I used another function than in my reproduce step, but it's the same idea: a small function with only a few lines of code.