Core developer cheat sheet
On a project this large, the eslint cache can become stale and stop reporting problems when run locally. If you skip the cache by omitting `--cache`, linting takes much longer, but if you point eslint at the specific files you changed, it stays fast and accurate. Run the linter through npm so you get the project's version of eslint along with all of its dependencies:
"lint:js": "eslint ./path/to/package --ignore-path .gitignore --ext=js,ts",
Prefer `yarn stop-prereqs` when available. If that doesn't work:
docker kill $(docker ps -q)
Restart them with `yarn start-prereqs`.
Check for leftover postgres processes:
ps -ef | grep postgres
mocha ./packages/test-support/bin/run.js --timeout 20000 --grep "your test name or search"
Clear the embroider build cache:
rm -rf $TMPDIR/embroider
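One caveat: on Linux, `$TMPDIR` is often unset, so the command above expands to `rm -rf /embroider` and misses the real cache under `/tmp`. A defensive sketch:

```shell
# Fall back to /tmp when $TMPDIR is unset (common on Linux),
# so the right cache directory is removed.
rm -rf "${TMPDIR:-/tmp}/embroider"
```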
Note that the database has lots of whitespace padding within columns, so it is useful to use a client like Postico to view the data, or to `SELECT` specific columns. Otherwise, the tables might appear empty.
yarn start-hub
docker exec -it cardstack-pg psql -U postgres pgsearch_cardboard_development
SELECT id from documents;
Since we use a mono-repo, yarn install sometimes misses updating dependencies when you switch branches. Anytime you get strange errors, stop your local servers and docker containers, then, from the root of the project:
npx lerna clean
yarn install
If `lerna` doesn't help, you can run the following command to delete everything not committed to git (env files, vscode launch scripts, IDE configs, etc.). Look up what the flags do before you use this!
# this is destructive to anything not committed
git clean -dfX
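Before running the destructive clean, you can preview exactly what would be deleted by swapping `-f` for `-n` (dry run):

```shell
# Dry run: lists the ignored files/directories that `git clean -dfX`
# would delete, without removing anything (-X = only git-ignored files).
git clean -dnX 2>/dev/null || echo "run this inside a git repository"
```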
Useful for killing a zombie hub server listening on port 3000:
kill $(lsof -t -i:3000)
cd packages/cardhost
ember test --filter "test-name"
cd packages/cardhost
yarn stop-prereqs && sleep 3 && yarn start-prereqs && sleep 3 && INDEX_INTERVAL=300 yarn start
AWS_PROFILE=your-profile aws ecr --region us-east-1 get-login --no-include-email | bash
- `socat "UNIX-LISTEN:/tmp/cardstack-remote-docker,reuseaddr,fork" EXEC:"ssh -T docker-control@sc.something.something" &`
Where `something.something` is the swarm controller URL, e.g. `builder.example.com`.
remote=unix:///tmp/cardstack-remote-docker
- `docker -H $remote service scale hub_hub=0`
- This turns off the hub so that it doesn't react badly to what you are doing. This command causes downtime.
- Do any work you need to, like clearing pgboss, hotfixing, or making changes to the git data source.
- If you want to connect to the db, make sure you have the entry for it in your /etc/hosts and are connected to your bastion host, if you have one (zerotier).
- If you plan to force a reindex, clear the pgboss queue first, in case it has a backlog of pending or stalled jobs. Start Postico, connect to pgboss_production, and look at the job table under the pgboss schema. A job in the "failed" state indicates a blocking job.
- If you plan to make changes to a git data source, delete the "default" row from the meta table in the pgsearch schema. This row holds the last commit sha, which keeps track of where indexing left off in git. Deleting it tells the indexer to run on everything again.
- If you manually remove cards from the index, keep in mind that some cards, like `base-card`, also exist as a `content-type`, and you may need to remove that record too.
- `docker -H $remote service scale hub_hub=1` brings the hub back up.
- `curl -X POST https://<stackname>-hub.stack.cards/reindex` forces a reindex from the data sources, if you need that to happen.
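The pgboss and git data source steps above can be sketched in SQL. Table and column names here are assumptions based on pg-boss's standard schema and the description above; inspect the tables before deleting anything, and note that the statements run against different databases (pgboss_production vs. the pgsearch db).

```sql
-- Hypothetical pgboss cleanup (run against pgboss_production):
-- inspect the backlog first, then clear queued/stalled jobs.
SELECT state, count(*) FROM pgboss.job GROUP BY state;
DELETE FROM pgboss.job WHERE state IN ('created', 'retry', 'failed');

-- Hypothetical reset of the git indexing checkpoint (pgsearch schema);
-- the key column name is assumed. Deleting the "default" row makes
-- the indexer start from scratch.
DELETE FROM meta WHERE id = 'default';
```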
Here are some clues to follow if you see a “can’t find docker container” error.
Set the `$remote`:
AWS_PROFILE=your-profile aws ecr --region us-east-1 get-login --no-include-email | bash
socat "UNIX-LISTEN:/tmp/cardstack-remote-docker,reuseaddr,fork" EXEC:"ssh -T docker-control@sc.something.something" &
remote=unix:///tmp/cardstack-remote-docker
See all docker containers:
docker -H $remote ps --all
Kill a docker service and redeploy (this causes downtime):
docker -H $remote service rm hub_hub
Put the service back by forcing a redeploy (through Travis or GitHub Actions). The first deploy will always appear to fail with an exit code; however, your app may still be successfully deployed.
Check the instance IDs. You may need to prefix the command with `sudo` if you get permission denied.
docker ps
Enter a running container, using the container ID from `docker ps`:
docker exec -it <container id> /bin/bash
Note that this is a very bare-bones environment. For example, use `cat <filename>` to view files, as `less` and `vim` are not available.