Split Fargate Service into WEB and API #945

Closed
joswayski opened this issue Jan 14, 2024 · 1 comment
Labels
🗃️ backlog Stuff I would like to get to eventually, but not prioritized at the moment.

Comments


joswayski commented Jan 14, 2024

Right now the Next.js app and the Rust API run in the same Fargate task. We should figure out a way to split them using AWS's service-to-service protocol of the month (ECS Service Connect), but I'm not sure how much benefit we'd get aside from being able to scale them independently. It will probably introduce some latency, which I'm hesitant to accept just for that one benefit :/

https://www.reddit.com/r/aws/comments/zpc7rh/how_to_have_inter_container_communication/

https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/networking-connecting-services.html

Local task networking is ideal for communicating between containers that are tightly coupled and require maximum networking performance between them. However, when you deploy one or more containers as part of the same task they are always deployed together so it removes the ability to independently scale different types of workload up and down.

In the example of the application with a web tier and an API tier, it may be the case that powering the application requires only two web tier containers but 10 API tier containers. If local container networking is used between these two container types, then an extra eight unnecessary web tier containers would end up being run instead of allowing the two different services to scale independently.

A better approach would be to deploy the two containers as two different services, each with its own load balancer. This allows clients to communicate with the two web containers via the web service’s load balancer. The web service could distribute requests across the eight backend API containers via the API service’s load balancer.
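For reference, wiring the split services together with ECS Service Connect comes down to a `--service-connect-configuration` input on `aws ecs create-service` for the API service. The sketch below is hypothetical (the namespace, port name, and DNS alias are made up, not taken from this repo) and only illustrates the shape of the config, not a tested deployment:

```json
{
  "enabled": true,
  "namespace": "internal",
  "services": [
    {
      "portName": "api",
      "discoveryName": "api",
      "clientAliases": [
        { "port": 8080, "dnsName": "api.internal" }
      ]
    }
  ]
}
```

With something like this in place, the web service's containers would reach the Rust API at `http://api.internal:8080` instead of `localhost`, which is exactly where the extra network hop (and the latency concern above) comes in.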

Meh.

The thinking was that we wouldn't need Lambda consumers and could process everything in one monolith using threads, but I think I want to decouple this completely anyway: have the API stand alone as an API and producer of messages, and leave consuming to something else / another service.

@joswayski joswayski added the 🗃️ backlog Stuff I would like to get to eventually, but not prioritized at the moment. label Jan 14, 2024
joswayski commented:
Finished in #952, but with k3s.
