
Account Lookup Service

Documentation

Database initialisation

You can easily start the database within Docker using docker-compose:

docker-compose up mysql-als

To populate the database with tables and seed values, ensure that the correct database URI is set in the default.json file, or set the ALS_DATABASE_URI environment variable accordingly, and run the following command:

npm run migrate
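
The URI format itself is not shown above, so here is a minimal sketch (not part of the service) of a post-migration connectivity check, assuming ALS_DATABASE_URI follows the usual mysql://user:password@host:port/database form; the credentials and database name are hypothetical placeholders and should be matched to your docker-compose / default.json setup.

// Minimal sketch only: verify that the database `npm run migrate` targets is reachable.
// The connection string is a hypothetical placeholder; match the credentials, host and
// database name to your docker-compose / default.json setup.
const knex = require('knex')

const db = knex({
  client: 'mysql2', // assumes the mysql2 driver is available
  connection: process.env.ALS_DATABASE_URI || 'mysql://account_lookup:password@localhost:3306/account_lookup'
})

db.raw('SELECT 1')
  .then(() => console.log('database reachable'))
  .catch((err) => console.error('database not reachable:', err.message))
  .finally(() => db.destroy())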

Caching

This service uses Mojaloop's central-services-shared library to fetch participants and participant endpoints from the central ledger. The caches are initialized in server.js:

  await ParticipantEndpointCache.initializeCache(Config.ENDPOINT_CACHE_CONFIG)
  await ParticipantCache.initializeCache(Config.PARTICIPANT_CACHE_CONFIG)

with the default config structure being

{
    "expiresIn": 180000,
    "generateTimeout": 30000,
    "getDecoratedValue": true
}

getDecoratedValue is used by the library for cache statistics and needs to be true for Prometheus metrics to display cache hits.

Further configuration details can be found here: https://hapi.dev/module/catbox/api/?v=12.1.1#policy
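
To make the effect of getDecoratedValue concrete, below is an illustrative catbox policy sketch using the same config shape. It is not the service's own code: the in-memory engine and the inline generateFunc stand in for the real central-ledger lookups performed by central-services-shared.

// Illustrative sketch only: a catbox policy built with the config shape shown above.
const Catbox = require('@hapi/catbox')
const CatboxMemory = require('@hapi/catbox-memory') // newer catbox-memory versions export { Engine } instead

const run = async () => {
  const client = new Catbox.Client(CatboxMemory, { partition: 'als-example' })
  await client.start()

  const policy = new Catbox.Policy({
    expiresIn: 180000,       // cached entries expire after 3 minutes
    generateTimeout: 30000,  // give up if generating a fresh value takes longer than 30s
    getDecoratedValue: true, // get() resolves to { value, cached, report } rather than the bare value
    generateFunc: async (id) => ({ fspId: id }) // hypothetical participant lookup
  }, client, 'participants')

  const first = await policy.get('dfsp1')  // miss: first.cached === null, value freshly generated
  const second = await policy.get('dfsp1') // hit: second.cached carries stored/ttl metadata
  console.log(first.value, first.cached === null, second.cached !== null)

  await client.stop()
}

run().catch(console.error)

It is this cached metadata on decorated values that the Prometheus instrumentation relies on to report cache hits.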

Start API

To run the API and/or Admin servers, run the following commands:

Both Admin + API

#NPM:
npm start

#CLI:
node src/index.js server

API

#NPM:
npm run start:api

#CLI:
node src/index.js server --api

Admin

#NPM:
npm run start:admin

#CLI:
node src/index.js server --admin

Tests

Unit Testing

Running unit tests

npm run test:unit

Code Coverage

npm run test:coverage-check

Integration tests

The integration tests use docker-compose to spin up a test environment for running the integration tests. The tests are executed inside a standalone account-lookup-service-int container, defined in docker-compose.integration.yml.

Run the tests in a standalone mode with:

npm run test:integration

By default, the test results will be available in /tmp/junit.xml. See below to configure the output directory and file name of the test results.

Running integration tests repetitively

In order to debug and fix broken integration tests, you may want to run the tests without tearing down the environment every time. To do this, you can set TEST_MODE to wait, which sets up the integration runner to start the docker containers, run the migrations, and then wait for you to log into the account-lookup-service-int container and run the tests yourself.

Note: The docker-compose.integration.yml file mounts the ./src and ./test directories inside the docker-container, so you can re-run your tests repeatedly without removing and rebuilding your containers each time.

For example:

export TEST_MODE=wait
npm run test:integration
# containers will now be ready and waiting for the tests

# log into the `account-lookup-service-int` container
docker exec -it als_account-lookup-service-int sh

# now run the integration tests
npm run test:int

You can then stop and remove the containers with the following commands:

docker-compose -f docker-compose.yml -f docker-compose.integration.yml stop
docker-compose -f docker-compose.yml -f docker-compose.integration.yml rm -f

Running Integration Tests interactively

If you want to run the integration tests repeatedly, you can start up the test containers using docker-compose via one of the following methods:

  • Running locally

    Start containers required for Integration Tests

    docker-compose -f docker-compose.yml up -d

    Run the wait script, which will report once all required containers are up and running

    npm run wait-4-docker

    Run the Integration Tests

    npm run test:int
  • Running inside docker

    Start containers required for Integration Tests, including an account-lookup-service container which will be used as a proxy shell.

    docker-compose -f docker-compose.yml -f docker-compose.integration.yml up -d

    Run the Integration Tests from the account-lookup-service-int container

    docker exec -it als_account-lookup-service-int sh
    npm run test:int

Environment Variables

Environment variable   | Description                                                                                       | Example values              | Default value
TEST_MODE              | The mode that integration-runner.sh uses. See ./test/integration-runner.sh for more information.  | default, wait, rm           | default
JEST_JUNIT_OUTPUT_DIR  | The output directory (inside the docker container) for the jest runner                            | /tmp, /opt/app/test/results | /tmp
JEST_JUNIT_OUTPUT_NAME | The filename (inside the docker container) for the jest runner                                    | junit.xml                   | junit.xml
RESULTS_DIR            | The output directory (on the host machine) that the test results are copied to                    | /tmp                        | /tmp

Auditing Dependencies

We use audit-ci along with npm audit to check dependencies for node vulnerabilities, and keep track of resolved dependencies with an audit-ci.jsonc file.

To start a new resolution process, run:

npm run audit:fix

You can then check to see if the CI will pass based on the current dependencies with:

npm run audit:check

The audit-ci.jsonc file contains any audit exceptions that cannot be fixed, ensuring that CircleCI will still build correctly.

Container Scans

As part of our CI/CD process, we use anchore-cli to scan our built docker container for vulnerabilities upon release.

If you find your release builds are failing, refer to the container scanning configuration in our shared Mojaloop CI config repo. There is a good chance you simply need to update the mojaloop-policy-generator.js file and re-run the CircleCI workflow.

For more information on anchore and anchore-cli, refer to the Anchore documentation.

Automated Releases

As part of our CI/CD process, we use a combination of CircleCI, the standard-version npm package, and the github-release CircleCI orb to automatically trigger our releases and image builds. This process essentially mimics a manual tag and release.

On a merge to main, CircleCI is configured to use the mojaloopci github account to push the latest generated CHANGELOG and package version number.

Once those changes are pushed, CircleCI will pull the updated main branch, then tag and push a release, triggering a subsequent build that also publishes a docker image.

Potential problems

  • There is a case where the merge-to-main workflow resolves successfully and triggers a release, but the tagged release workflow subsequently fails due to the image scan, audit check, vulnerability check or other "live" checks.

    This will leave main without an associated published build. Fixes that require a new merge will essentially cause a skip in the version number, or require cleaning up the main branch back to the commit before the CHANGELOG update and version bump.

    This may be resolved by relying solely on the previous checks of the merge-to-main workflow and assuming that our tagged release is of sound quality. We are still mulling over this solution, since catching bugs/vulnerabilities/etc. earlier is a boon.

  • It is unknown whether a race condition might occur with multiple merges to main in quick succession, but this is a suspected edge case.

Additional Notes

  • For all PUT /parties callbacks, the FSPIOP-Destination header is considered mandatory (see the sketch below).
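
As an illustration of that note, the sketch below shows the header set a hypothetical payee-side PUT /parties callback could carry when answered back to the Account Lookup Service; the URL, FSP IDs, payload and API version are made-up placeholders, not values mandated by this repository.

// Illustrative sketch only (requires Node 18+ for the global fetch API).
// URL, FSP IDs, payload and API version are hypothetical placeholders.
const callbackUrl = 'http://account-lookup-service.local/parties/MSISDN/123456789'

const sendPartiesCallback = async () => {
  const res = await fetch(callbackUrl, {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/vnd.interoperability.parties+json;version=1.1',
      Date: new Date().toUTCString(),
      'FSPIOP-Source': 'payeefsp',      // the FSP answering the party lookup
      'FSPIOP-Destination': 'payerfsp'  // mandatory on PUT /parties callbacks: the FSP that asked
    },
    body: JSON.stringify({
      party: {
        partyIdInfo: { partyIdType: 'MSISDN', partyIdentifier: '123456789', fspId: 'payeefsp' }
      }
    })
  })
  console.log('callback responded with status', res.status)
}

sendPartiesCallback().catch(console.error)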