Core developer cheat sheet

Run the JavaScript linter without a cache

Sometimes the eslint cache breaks because our project is so big, and the linter stops noticing problems when run locally. Skipping the cache by omitting --cache takes forever, but if you specify the path to the specific files you changed, it works well. Run the linter through npm so that you get the right version of eslint with all of its dependencies.

"lint:js": "eslint ./path/to/package --ignore-path .gitignore --ext=js,ts",

Kill all Docker containers with fire

Prefer using yarn stop-prereqs when available. If that doesn't work:

docker kill $(docker ps -q)

Restart them with yarn start-prereqs.

See postgres instances

ps -ef | grep postgres

Run a specific mocha test

mocha ./packages/test-support/bin/run.js --timeout 20000 --grep "your test name or search"

Force breaking the embroider cache

rm -rf $TMPDIR/embroider

psql into a local hub database

Note that the database has lots of whitespace padding within columns, so it is useful to view the data in a client like Postico, or to SELECT specific columns. Otherwise, the tables might appear empty.

yarn start-hub
docker exec -it cardstack-pg psql -U postgres pgsearch_cardboard_development
SELECT id FROM documents;
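
Inside psql, the built-in expanded display mode also makes the padded columns readable:

\x on
SELECT id FROM documents;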

Destroy symlinks and reinstall all dependencies

Since we use a mono-repo, yarn install sometimes misses updating dependencies when you switch branches. Any time you get strange errors, stop your local servers and docker containers, then run the following from the root of the project:

npx lerna clean
yarn install

If lerna doesn't help, you can run the following command to delete every file that git ignores (env files, vscode launch scripts, IDE configs, etc). Note that the capital -X only removes ignored files; a lowercase -x would also remove untracked ones. Look up what the flags do before you use this!

# this is destructive to anything not committed
git clean -dfX
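
To preview what would be deleted before running it for real, add git clean's dry-run flag:

git clean -dfXn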

End a node process running on a particular port

Useful when you need to close a zombie hub server.

kill $(lsof -t -i:3000)
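
If the process ignores the default signal, escalate to SIGKILL (use with care, since the process gets no chance to clean up):

kill -9 $(lsof -t -i:3000)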

Run just a subset of Ember tests for the Builder

cd packages/cardhost
ember test --filter "test-name"

Restart builder prereqs and start the app, in one line

cd packages/cardhost
yarn stop-prereqs && sleep 3 && yarn start-prereqs && sleep 3 && INDEX_INTERVAL=300 yarn start

Manually reindex the hub in deployment

  1. AWS_PROFILE=your-profile aws ecr --region us-east-1 get-login --no-include-email | bash
  2. socat "UNIX-LISTEN:/tmp/cardstack-remote-docker,reuseaddr,fork" EXEC:"ssh -T docker-control@sc.something.something" & where something.something is the swarm controller URL, e.g. builder.example.com
  3. remote=unix:///tmp/cardstack-remote-docker
  4. docker -H $remote service scale hub_hub=0 - This turns off the hub so that it doesn't react badly to what you are doing. This command causes downtime.
  5. Do any work you need to, like clearing pgboss, hotfixing, making changes to the git data source, etc.
    • If you want to connect to the db, make sure you have an entry for it in your /etc/hosts and are connected to your bastion host, if you have one (e.g. via ZeroTier)
    • If you plan to force a reindex, make sure to clear the pgboss queue first, in case it has a backlog of pending or stalled jobs. Start up Postico, connect to the pgboss_production database, and look at the job table under the pgboss schema; a job whose state is failed indicates a blocking job. See the SQL sketch after this list.
    • If you plan to make changes to a git data source, delete the “default” row from the meta table in the pgsearch schema. That row holds the last commit sha, which tracks where you left off in git indexing. When you delete this row, you are telling the indexer to run on everything again (also sketched below).
    • If you manually remove cards from the index, keep in mind that some cards like base-card also exist as a content-type, and you may need to remove that record too.
  6. docker -H $remote service scale hub_hub=1 brings the hub back up
  7. curl -X POST https://<stackname>-hub.stack.cards/reindex forces a reindex from the data sources, if you need that to happen.
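
For the database cleanup in step 5, the SQL looks roughly like this. This is a sketch: the schema and table names follow the descriptions above, but the job state value and the meta table's key column are assumptions, so inspect the tables before running anything.

-- clear blocked pgboss jobs (assumes failed is the blocking state; check the job table first)
DELETE FROM pgboss.job WHERE state = 'failed';
-- force a full git reindex by removing the saved commit sha (assumes the key column is named id)
DELETE FROM pgsearch.meta WHERE id = 'default';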

Debugging docker containers in deployment

Here are some clues to follow if you see a “can’t find docker container” error.

Set the $remote:

AWS_PROFILE=your-profile aws ecr --region us-east-1 get-login --no-include-email | bash
socat "UNIX-LISTEN:/tmp/cardstack-remote-docker,reuseaddr,fork" EXEC:"ssh -T docker-control@sc.something.something" &
remote=unix:///tmp/cardstack-remote-docker

See all docker containers:

docker -H $remote ps --all

Kill a docker service and redeploy (this causes downtime):

docker -H $remote service rm hub_hub

Put the service back by forcing a redeploy (through Travis or GitHub Actions). The first deploy will always appear to fail with a nonzero exit code; however, your app may still have deployed successfully.

Enter a running docker container to inspect its code and environment variables

Check the container IDs. You may need to prefix this command with sudo if you get a permission denied error.

docker ps

Enter the container:

docker exec -it <container id> /bin/bash

Note that this is a very bare-bones environment. For example, use cat <filename> to view files, as less and vim are not available.
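
To dump the environment variables without an interactive shell, docker exec can run env directly (assuming env is available in the image; same container id as above):

docker exec <container id> env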