Emily API Local Environment #150
Conversation
Force-pushed from 03d3506 to 52a52e1
```diff
@@ -3,7 +3,7 @@
   "scripts": {
     "build-openapi": "smithy build",
     "clean-openapi": "smithy clean",
-    "build-rs": "openapi-generator-cli generate -i ../../.generated-sources/smithy/openapi/openapi/Emily.openapi.json -g rust -c rust-config.json -o ../../.generated-sources/emily",
+    "build-rs": "npx @openapitools/openapi-generator-cli generate -i ../../.generated-sources/smithy/openapi/openapi/Emily.openapi.json -g rust -c rust-config.json -o ../../.generated-sources/emily",
```
This is necessary to prevent people from needing to install the `openapi-generator-cli` locally.
@hstove FYI, I feel a bit bad about reversing this change, especially since I approved it.
Force-pushed from 52a52e1 to d9e07a8
devenv/aws-setup/initialize.py (Outdated)
```python
template = json.load(fp)

# Create convenience mapping.
template_resource_ids_for_type = defaultdict(list)
```
The resource id used as a key for a resource in the template is actually a unique token associated with the resource within the template; there's a separate AWS resource id, which is why this refers to `template_resource_id` and not `resource_id`. `template_resource_token` could be a better name, but it is an identifier.
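For illustration, here's a minimal sketch of how such a convenience mapping can be built from a parsed template. The template contents and resource tokens below are hypothetical; only the `template_resource_ids_for_type` name comes from the snippet under review:

```python
from collections import defaultdict

# Hypothetical CloudFormation-style template; in initialize.py this
# comes from `json.load(fp)`.
template = {
    "Resources": {
        "DepositTable1A2B3C": {"Type": "AWS::DynamoDB::Table"},
        "WithdrawalTableD4E5F6": {"Type": "AWS::DynamoDB::Table"},
        "ApiHandlerAA11BB22": {"Type": "AWS::Lambda::Function"},
    }
}

# Create convenience mapping: resource type -> template resource tokens.
template_resource_ids_for_type = defaultdict(list)
for template_resource_id, resource in template["Resources"].items():
    template_resource_ids_for_type[resource["Type"]].append(template_resource_id)
```

The keys of `template["Resources"]` here are the template-level tokens being discussed, not AWS resource ids.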
```typescript
    throw new Error('Must define AWS account on either "AWS_ACCOUNT" or "CDK_DEFAULT_ACCOUNT" env variables.');
}
// If the account is undefined and we're letting it slide set the account to 12 "x"s.
```
The local environment needs this to be set, but it doesn't matter what it is. Someone who hasn't set up AWS locally shouldn't need to set up the AWS CLI to run the local environment if it isn't strictly necessary.
I don't fully follow everything that is happening here, but generally it seems to look alright. The only showstopper for me is calling sudo from within the makefile. Would it be possible to skip that?
docker-compose.emily.yml (Outdated)
```yaml
- AWS_ACCESS_KEY_ID=xxxxxxxxxxxx
- AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxx
```
I assume these placeholders don't do anything. Can we omit them completely?
Nope, they need to be set unfortunately (`boto3` vomits otherwise). Maybe I could set them in the script...
Actually ignore me, you can set it in the client.
Nice 👍
Force-pushed from d9e07a8 to 183211a
Looks good, I did have one question though.
```diff
@@ -5,16 +5,15 @@
     "emily": "bin/emily.js"
   },
   "scripts": {
-    "build": "(cd ../api-definition && npm run build-openapi) && tsc",
+    "synth": "tsc && npx aws-cdk synth -q --no-staging",
```
So this command generates the CloudFormation files now? Maybe this is obvious, but why is `build` no longer necessary?
The original `"build"` did something a little misleading. It would:

- make the openapi template from the `api-description` sibling directory
- compile the typescript to javascript

That just builds the sources, but it doesn't build any cdk artifact. Normally you'd run `npm run build` and then run some command like `cdk deploy` or `cdk synth` that utilizes the generated javascript. `cdk deploy` synthesizes the template and then deploys the artifacts to AWS, and the `cdk synth` command just makes the template.

I've changed this so that it takes the typescript sources and turns them into the cdk template right off the bat, assuming that the openapi template has already been generated. This is a better separation of responsibilities given the new structure of the Makefile. In the `build` command before, there was no usable output (no template) and it modified sources even when they didn't need to be rebuilt.

I also changed the name to `synth` because it better matches the cdk idioms.
I see. Thank you for the explanation, I did not know that.

So going forward, when we do `make build` it would not build the cdk template files anymore. That would get done if we run emily integration tests, set up a dev env (but with local configuration), or when/if we add some commands to deploy.
Force-pushed from 183211a to 4958fe1
Looks good!
Force-pushed from 4958fe1 to 7f40341
Force-pushed from 805497a to 3339bc2
I'd love to do a workshop/lab to go over everything here. I can spin things up locally now, but if I try to curl Emily I only get server errors.
```shell
$ curl http://localhost:3000/deposits | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    36  100    36    0     0      2      0  0:00:18  0:00:12  0:00:06     9
{
  "message": "Internal server error"
}
```
with the logs

```
Timed out while attempting to establish a connection to the container. You can increase this timeout by setting the SAM_CLI_CONTAINER_CONNECTION_TIMEOUT environment variable. The current timeout is 10.0 (seconds).
```
I'm probably just missing something here. Looking forward to learning more about this!
```python
def main():
    """
    Main function to read a CDK template, create missing DynamoDB tables,
    replace local Lambda paths, and write the modified template.
    """
    ddb_client = boto3.client(
        'dynamodb',
        endpoint_url=DYNAMODB_ENDPOINT,
        region_name="us-west-2",
        aws_access_key_id="xxxxxxxx",
        aws_secret_access_key="xxxxxxxx",
    )
    template = read_template(INPUT_CDK_TEMPLATE)
    create_missing_tables(template, ddb_client)
    modified_template = replace_local_lambda_path(template, LOCAL_LAMBDA_PATH)
    write_template(modified_template, OUTPUT_CDK_TEMPLATE)
```
Beautiful. Thank you for refactoring this 🙏 It's much easier for me to understand what's happening now.
This is what I talked about on the call earlier; there's an issue on Linux computers that I wasn't able to resolve, and I've had very poor luck debugging it. There aren't any tests in CI and most of us are using MacOS, which makes me consider this temporarily okay, but I need to find a resolution. I plan to track that in #74.

By default the SAM cli will create the lambda instance as a child container of the one running the cli, but since we have the cli running in its own docker container it needs to make a sibling container as opposed to a child container. There are a number of differences between MacOS and Linux when it comes to docker, and I think I've caught and handled most of them.

If the sam cli is running outside of a container on Linux it will work, so I believe it's safe to say it's an issue with the way the network is set up, but I've tried a lot of things and nothing seems to work:
Ah I see. Thank you for the explanation, and nice that you created a ticket to track this. Then it also sounds like I could work around this for the time being by using the sam CLI on my machine instead of in docker.
Is the problem calling host services? If so, running the container with the `--network host` flag on Linux might work?
I was going to suggest trying this:

```yaml
extra_hosts:
  - "host.docker.internal:host-gateway"
```

But I'm not sure about the differences between the two.
Force-pushed from 3339bc2 to 910db38
Note: @netrome @djordon I made two changes after merging with the latest change from the blocklist client:
This PR ended up changing more than I wanted because it introduced a new
```makefile
AUTOGENERATED_BLOCKLIST_CLIENT_CLIENT := .generated-sources/blocklist-api/src/lib.rs
BLOCKLIST_OPENAPI_PATH=./.generated-sources/blocklist-openapi-gen
BLOCKLIST_OPENAPI_SPEC=$(BLOCKLIST_OPENAPI_PATH)/blocklist-client-openapi.json
```
New.
Highlighted the new things.
```makefile
BLOCKLIST_OPENAPI_PATH=./.generated-sources/blocklist-openapi-gen

# Generate the client code using the OpenAPI spec
$(AUTOGENERATED_BLOCKLIST_CLIENT_CLIENT): $(BLOCKLIST_OPENAPI_SPEC)
	pnpm --prefix $(BLOCKLIST_OPENAPI_PATH) run build

# Generate the OpenAPI spec for Blocklist Client
generate-openapi-spec: install
$(BLOCKLIST_OPENAPI_SPEC): $(INSTALL_TARGET) $(filter-out $(BLOCKLIST_OPENAPI_SPEC), $(wildcard $(subst dir, $(BLOCKLIST_OPENAPI_PATH), $(THREE_DIRS_DEEP))))
	cd ./.generated-sources/blocklist-openapi-gen && cargo build

# Generate the client code using the OpenAPI spec
blocklist-api-source: generate-openapi-spec
	pnpm --prefix $(BLOCKLIST_OPENAPI_PATH) run build
```
New.
.github/workflows/unit-tests.yaml (Outdated)
```yaml
- name: Generate OpenAPI spec for blocklist client
  run: make generate-openapi-spec
- name: Run Basic tests
  run: make test

- name: Generate blocklist API client
  run: make blocklist-api-source

- name: Run tests
- name: Run Integration tests
```
New.
I think our CI is doing `cargo test` twice now, but besides that it looks good.
```makefile
ifeq ($(findstring Linux, $(shell uname)), Linux)
	_CONTAINER_HOST := localhost
else
	_CONTAINER_HOST := host.docker.internal
```
Oh interesting, I had thought the `host.docker.internal` path would work on both macOS and Linux. Good to know.
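The same platform branch can be sketched in Python; the `container_host` helper below is hypothetical, and only the two host values come from the Makefile snippet:

```python
import platform

def container_host() -> str:
    """Return the address containers use to reach services on the docker host.

    Mirrors the Makefile branch above: on Linux the services resolve on
    localhost, while macOS (and Windows) provide the special
    host.docker.internal alias instead.
    """
    if platform.system() == "Linux":
        return "localhost"
    return "host.docker.internal"
```

Putting the branch in one place (Makefile variable or helper function) keeps the macOS/Linux difference from leaking into every command that needs the host address.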
.github/workflows/unit-tests.yaml (Outdated)
```yaml
- name: Run Basic tests
  run: make test
```
Right now the `make integration-tests` command runs the basic tests as well. We can either remove this, update the stanza in the yaml, or update the command. I'm okay with either.
I'll remove the test command then push it.
Force-pushed from 2e3ad3d to 4fa8075
I see the suggestions on how to fix the issues on Linux computers and will use them in #74; right now I'm just trying to get this through.
Emily API Local Environment
Description
This PR adds a local Emily development environment.
- `x86` when running locally on an `x86` machine - necessary

Note that this may still not work for some machines with weird docker setups.
Makefile Optimization
I included Makefile caching because running `cargo-lambda` takes a while (around 1 minute) when `cargo` believes that the sources for `emily/lambda` or the dependencies of the sources have been changed. We've been using Makefiles incorrectly so far; each target is loosely meant to be a file, and in cases where the target isn't a file you're meant to make it a dependency of `.PHONY`.

For example, this command doesn't make the file `install`, so it's departing from the "file as target" paradigm.

Makefiles define a target to be out of date when the target corresponds to a file that has a "last modified" timestamp earlier than those of the targets it's dependent on, OR if any of the dependencies are out of date. Note that the definition of out of date is:
An approach to this that takes advantage of the caching for installation needs to associate the installation with a file that can be updated when there's a dependency change. Here's the updated installation task from this PR:

Here we're using the last updated timestamp of the `pnpm-lock.yaml` to indicate if the installation is out of date or needs to be rerun because the `package.json` files have been modified. I've kept the `install` target because when the repository is first pulled in, the "last modified" timestamps for `pnpm-lock.yaml` and the `package.json` files will be the same, so `make` will refuse to reinstall because the target is up to date; the `install` command forgoes that.
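The out-of-date rule this caching relies on can be sketched as a plain mtime comparison; the `out_of_date` helper below is a hypothetical illustration, not code from this PR:

```python
import os
import tempfile
import time

def out_of_date(target: str, dependencies: list) -> bool:
    """Mimic make: a target is out of date if it doesn't exist or if any
    dependency has a strictly newer "last modified" timestamp."""
    if not os.path.exists(target):
        return True
    target_mtime = os.path.getmtime(target)
    return any(os.path.getmtime(dep) > target_mtime for dep in dependencies)

# Stand-ins for pnpm-lock.yaml (target) and package.json (dependency).
with tempfile.TemporaryDirectory() as tmp:
    lockfile = os.path.join(tmp, "pnpm-lock.yaml")
    package_json = os.path.join(tmp, "package.json")
    open(lockfile, "w").close()
    open(package_json, "w").close()
    # Freshly cloned: identical timestamps, so the target counts as up to
    # date and make would skip the install.
    same = time.time()
    os.utime(lockfile, (same, same))
    os.utime(package_json, (same, same))
    fresh_clone = out_of_date(lockfile, [package_json])
    # Touch package.json with a strictly newer timestamp: install reruns.
    os.utime(package_json, (same + 10, same + 10))
    modified = out_of_date(lockfile, [package_json])
```

The fresh-clone case is exactly why the plain `install` target is kept alongside the file-based one: equal timestamps read as "up to date" even though nothing has been installed yet.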
Comments added in the files
Testing
Ran the integration environment from a fresh repository by running the following:
and then hit it with