Open-next V3 #3567
base: master
Conversation
LGTM, I haven't deployed this latest version yet though.
bundle: path.join(sitePath, fn.bundle),
runtime: "nodejs18.x" as const,
architecture: Architecture.ARM_64,
memorySize: 1024,
We should default the Lambdas to 1536 MB, see: https://docs.aws.amazon.com/lambda/latest/operatorguide/computing-power.html
tl;dr: 1536 MB is almost as cheap as 128 MB, but you also get max speed.
I don't believe that's strictly true. The AWS table provided is just an example: they explain that increasing memory also increases the vCPU allocated, so CPU-bound operations can end up cheaper because the function runs faster.
But if it doesn't complete faster, it will just be more expensive. That's why they provide a dedicated tool to measure and profile your service's ideal memory setting.
Without benchmarking an "average" Next.js app across different memory configurations, I think 1024 sounds like a reasonable default.
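To make the trade-off being debated here concrete, a small sketch of the GB-second billing arithmetic (the rate is illustrative, and the perfectly linear "more memory → proportionally more vCPU → proportionally faster" scaling is an idealized assumption for CPU-bound work):

```typescript
// Sketch of Lambda billing math for the memory discussion above.
// Assumption: a perfectly CPU-bound task whose duration shrinks linearly
// as allocated memory (and therefore vCPU share) grows. Real workloads vary.
const PRICE_PER_GB_SECOND = 0.0000166667; // illustrative on-demand rate

function invocationCost(memoryMb: number, durationSeconds: number): number {
  return (memoryMb / 1024) * durationSeconds * PRICE_PER_GB_SECOND;
}

// CPU-bound case: 12x the memory -> ~12x the vCPU -> ~12x faster.
const small = invocationCost(128, 12); // 1.5 GB-seconds
const large = invocationCost(1536, 1); // 1.5 GB-seconds
console.log(small === large); // same cost, much lower latency

// I/O-bound case: duration does not shrink, so cost scales with memory.
const idleSmall = invocationCost(128, 1);
const idleLarge = invocationCost(1536, 1);
console.log(idleLarge / idleSmall); // 12x more expensive
```

This is why the linked guide recommends profiling rather than picking one number: the higher setting is only "free" when the function is actually CPU-bound.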
That's fair. It's a bit tricky because several people have seen slow latency from certain dependencies like Prisma, and increasing the memory helped minimize it. But everyone's mileage may vary.
mode: this.doNotDeploy
  ? ("placeholder" as const)
  : ("deployed" as const),
path: this.props.path,
runtime: this.props.runtime,
customDomainUrl: this.customDomainUrl,
url: this.url,
edge: this.edge,
// edge: this.edge,
Remove this line
@@ -38,7 +38,7 @@ import { SsrFunction } from "./SsrFunction.js";
import { Asset } from "aws-cdk-lib/aws-s3-assets";
import { useFunctions, FunctionProps } from "./Function.js";
import { useDeferredTasks } from "./deferred_task.js";
import { Logger } from "../logger.js";import { SsrContainer, SsrContainerProps } from "./SsrContainer.js";
import { Logger } from "../logger.js"; import { SsrContainer, SsrContainerProps } from "./SsrContainer.js";
New line?
811c929 to 405b497
This PR adds support for open-next V3, and it should support all of open-next V3's features.
It also adds some new logic to the CloudFront function that significantly improves the cache hit ratio.
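The PR description doesn't show that CloudFront function here, but the usual way such a function raises the cache hit ratio is by normalizing the parts of the request that feed into the cache key. A hypothetical sketch of that technique (not the actual code from this PR; the param and header choices are illustrative assumptions):

```typescript
// Hypothetical sketch: normalize incoming requests so that equivalent
// requests collapse onto a single cache entry. NOT the code from this PR.
// The shapes below mirror the CloudFront Functions event model.
interface CfRequest {
  uri: string;
  querystring: Record<string, { value: string }>;
  headers: Record<string, { value: string }>;
}

function normalizeRequest(request: CfRequest): CfRequest {
  // Drop tracking params that never affect the rendered page,
  // so "?utm_source=x" and "?utm_source=y" hit the same cache entry.
  for (const key of Object.keys(request.querystring)) {
    if (key.startsWith("utm_") || key === "fbclid") {
      delete request.querystring[key];
    }
  }
  // Collapse browser-specific accept strings to one canonical value
  // instead of caching a separate variant per browser.
  if (request.headers["accept"]) {
    request.headers["accept"] = { value: "text/html" };
  }
  return request;
}

// Two superficially different requests now share a cache key:
const req = normalizeRequest({
  uri: "/blog/post",
  querystring: { utm_source: { value: "newsletter" } },
  headers: { accept: { value: "text/html,application/xhtml+xml" } },
});
console.log(Object.keys(req.querystring).length); // 0
```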
It requires OpenNext v3; the latest version at the time of writing is 3.0.0-rc.13.
Here is a simple example to use this construct:
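(The original snippet was not captured in this page. As a rough sketch only, usage would presumably resemble the existing SST v2 SSR constructs; the construct name and options below are assumptions, so check the PR for the real API.)

```typescript
// Illustrative sketch assuming the SST v2 NextjsSite construct API.
// The actual open-next V3 options introduced by this PR may differ.
import { StackContext, NextjsSite } from "sst/constructs";

export function Web({ stack }: StackContext) {
  const site = new NextjsSite(stack, "site", {
    path: "packages/web",
    // See the memory discussion above; profile your own workload.
    memorySize: "1024 MB",
  });

  stack.addOutputs({ SiteUrl: site.url });
}
```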
This is mostly done, but there are still some things that need to be figured out before release:
- Right now it works with ECS as well as Lambda. Do we want to allow other deployment methods (EC2, App Runner, EKS)? This has been removed, but could be added again later.
- SSRSite?
- Make the warmer work with multiple lambdas. This should work now, but it might have broken other frameworks (not tested).