This repository has been archived by the owner on Jun 10, 2019. It is now read-only.

Cloud Deployment Feature #62

Draft · wants to merge 14 commits into master
Conversation

Voxelot

@Voxelot Voxelot commented Feb 19, 2019

Staging up WIP updates in a draft pull request for now. Will start using this for updates and discussion about #30.

@Voxelot Voxelot changed the title Cloud Operator Feature Cloud Deployment Feature Feb 19, 2019
@karlfloersch
Contributor

Hey! What's the status of this? I realize a lot of the code is changing as you work on this, so I worry that you'll have a hard time staying up to date with everything, considering this change has a large scope.

@Voxelot
Author

Voxelot commented Feb 21, 2019

Hey Karl,

Sorry for the delay / lack of progress so far. If you have a tight timeline on this, let me know. Here's my current plan; let me know if there are other todos or changes that may conflict.

  1. Since the ecs-cli has support for docker-compose, I'd like to incorporate #36 (introduce basic docker-support) into this. However, it's been left open for quite a while, so I'm not sure whether I should start from scratch on it or not.
  2. The current command line tool is very specific to a local installation, and some of the semantics might be complicated to overload. For instance, plasma-chain start is not geared for cloud operation at all, and plasma-chain deploy is built around ethereum smart contracts, so it could be confusing if overloaded with both aws and ethereum logic (although combining them could be useful, since the IP outputs from the CloudFormation template could be fed directly into the smart contract setup). I'm leaning towards creating a separate command line tool, maybe plasma-chain-aws, to keep things simple, and then we can figure out how to merge the two later.
  3. LevelDB & statelessness - scaling the operator will be limited in the current setup. LevelDB isn't a cloud-friendly database solution: it requires the operator to manage volumes, disks, and backups, with no multi-region support or throughput auto-scaling. I did find a DynamoDB adaptation of LevelDown, but I'm not sure how reliable it is. This is most likely a limitation we'll have to live with for now.
  4. Configuration management - currently there's a mix of options via configuration files and command line args. I'm thinking this should be reworked to use a library like node-convict so that config params can be pulled from env vars, command line args, or files. This will make it easier to parameterize the cloud deployment using env vars and SSM, and avoid baking any config into container images etc.
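To illustrate point 4, here's a minimal sketch of the precedence a library like node-convict gives you (args override env vars, which override file values, which override defaults). The key names and the uppercase env-var mapping are illustrative, not taken from the repo:

```javascript
// Resolve a config value from defaults, a config file, env vars, and CLI args,
// in increasing order of precedence. All names here are hypothetical.
function resolveConfig(defaults, fileConfig, env, args) {
  const out = { ...defaults };
  for (const key of Object.keys(out)) {
    if (fileConfig[key] !== undefined) out[key] = fileConfig[key];
    const envKey = key.toUpperCase(); // e.g. "port" -> "PORT"
    if (env[envKey] !== undefined) out[key] = env[envKey];
    if (args[key] !== undefined) out[key] = args[key];
  }
  return out;
}
```

In the cloud deployment, the env layer would be populated from SSM parameters at container start, so nothing config-specific gets baked into the image.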

For now I can focus on the IPC -> SQS conversion since that should have the least amount of conflicts while we flesh out the broader picture.
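As a rough sketch of what the IPC -> SQS conversion looks like, an IPC-style message can be wrapped into SQS SendMessage params. The message shape and queue URL here are hypothetical; only the SQS parameter names (QueueUrl, MessageBody, MessageAttributes) come from the AWS API:

```javascript
// Convert an IPC-style message into SQS SendMessage params.
// The { method, ... } message shape is illustrative, not from the repo.
function toSqsParams(queueUrl, ipcMessage) {
  return {
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(ipcMessage),
    // Message attributes let consumers filter without parsing the body.
    MessageAttributes: {
      method: { DataType: 'String', StringValue: ipcMessage.method },
    },
  };
}

// With the aws-sdk this would be sent roughly as:
//   new AWS.SQS({ region }).sendMessage(toSqsParams(url, msg)).promise()
```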

@Voxelot
Author

Voxelot commented Feb 21, 2019

Looking into it further: while some things, like deposit events from the eth watcher and s3-backed ingests, should be queued up via SQS, other things with latency-sensitive request/response patterns currently using IPC, like GET_RECENT_TXS_METHOD, should go directly service-to-service over HTTP. Now that AWS has built-in support for Envoy via App Mesh, what are your thoughts on using gRPC instead of JSON-RPC? gRPC seems to be much more efficient with HTTP traffic and connections than JSON-RPC. All the routing currently happening in server.js could be abstracted out to Envoy with this approach as well.
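The split described above could be captured in a small routing helper: latency-sensitive request/response methods go direct over HTTP (or gRPC), everything else goes through SQS. GET_RECENT_TXS_METHOD is from the existing codebase; the other method names here are assumed for illustration:

```javascript
// Methods with latency-sensitive request/response patterns are served
// service-to-service; async event ingestion (deposits, s3 ingests) is queued.
// Only GET_RECENT_TXS_METHOD is a real method name; the rest are hypothetical.
const DIRECT_METHODS = new Set(['GET_RECENT_TXS_METHOD']);

function transportFor(method) {
  return DIRECT_METHODS.has(method) ? 'http' : 'sqs';
}
```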

@karlfloersch
Contributor

@Voxelot we're currently in the middle of a refactor of the codebase which would greatly affect the cloud deployment. Considering how much the codebase has shifted since this issue started, I think it makes sense to put this on hold for now. What you've written so far looks great, but I don't want it to end up undeployable because of larger-scale changes in the codebase.

Anyway, I'll shoot you an email & we can chat about all of this. Thank you so much btw!!! :)

@Voxelot
Author

Voxelot commented Mar 15, 2019

Maybe we can split this into separate PRs to checkpoint progress? The S3 integration shouldn't impact much in the way of your upcoming changes.
