minghsu0107/saga-example

Orchestration-based saga implementation dealing with distributed transactions in a microservice architecture.

Saga Example

Saga pattern is a failure management pattern that helps establish consistency in distributed applications, and coordinates transactions between multiple microservices to maintain data consistency. This project shows an orchestration-based saga implementation.

Related repositories (10K+ LOC):

An all-in-one docker-compose deployment is provided, which includes the following components:

  • Traefik - edge proxy responsible for external traffic routing and internal gRPC load balancing.
  • Account service - handles login, signup, authentication, and token management.
  • Purchase service - creates purchases and streams the result of each saga step.
  • Transaction services
    • Product service - creates and checks products; updates product inventories.
    • Order service - creates and queries orders.
    • Payment service - creates and queries payments.
    • Orchestrator service - stateless saga orchestrator.
  • Local databases
    • Account database (MySQL 8.0)
    • Product database (MySQL 8.0)
    • Payment database (MySQL 8.0)
    • Order database (MySQL 8.0)
  • Six-node Redis cluster
    • As an in-memory cache for accounts, products, orders, and payments.
    • As a bloom/cuckoo filter for preventing cache penetration (using RedisBloom).
      • Possible issue: since we always check the bloom/cuckoo filter before querying the database, we rely heavily on data consistency between Redis and MySQL. If an update to the filter fails, or the filter is evicted by Redis, the filter and the database disagree, and queries may be answered incorrectly.
      • Solution: decouple filter querying into a separate service, and route all data-modification events through a message broker (such as Kafka) to ensure at-least-once delivery (TODO).
    • As distributed locks for preventing cache avalanche.
    • As a pub/sub channel for local cache invalidation.
    • As a streaming platform for delivering real-time purchase results.
  • Observability
    • Prometheus - pulls metrics from all services.
    • Jaeger - stores and queries tracing spans across service boundaries.
  • NATS Streaming - message broker for saga commands and events.
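The filter-guarded read path described above can be sketched as follows. This is a toy illustration: plain in-memory maps stand in for RedisBloom, the Redis cache, and MySQL, and `getProduct` is a hypothetical name, but the lookup order (filter, then cache, then database) matches the design described.

```go
package main

import "fmt"

// Simplified stand-ins for RedisBloom, the Redis cache, and MySQL.
// In the real deployment these are network calls; plain maps are
// used here only to illustrate the read path.
var (
	filter = map[string]bool{"p1": true}          // keys known to exist
	cache  = map[string]string{}                  // Redis cache
	db     = map[string]string{"p1": "product 1"} // MySQL
)

// getProduct consults the filter first so that queries for
// nonexistent keys never reach MySQL (cache-penetration protection),
// then falls back from cache to database, backfilling the cache.
func getProduct(id string) (string, bool) {
	if !filter[id] {
		return "", false // reported absent: skip cache and DB entirely
	}
	if v, ok := cache[id]; ok {
		return v, true
	}
	v, ok := db[id]
	if ok {
		cache[id] = v
	}
	return v, ok
}

func main() {
	fmt.Println(getProduct("p1"))
	fmt.Println(getProduct("missing"))
}
```

The consistency issue noted above is visible here: if `db` gains a key that was never added to `filter`, the first branch wrongly short-circuits the query.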

The following diagram shows a brief overview of the architecture.

[Architecture diagram]

This diagram omits cache data flow, bloom filters, and local databases.

Usage

To run all services locally via docker-compose v1, execute:

./run.sh run

This will bootstrap all services, as well as their replicas, in Docker containers.

To stop all services, execute:

./run.sh stop

Account Service

First, we need to sign up a new user:

curl -X POST localhost/api/account/auth/signup \
    --data '{"password":"abcd5432","firstname":"ming","lastname":"hsu","email":"ming@ming.com","address":"taipei","phone_number":"1234567"}'

User account login:

curl -X POST localhost/api/account/auth/login \
    --data '{"email":"ming@ming.com","password":"abcd5432"}'

This will return a new token pair (refresh token + access token). We should provide the access token in the Authorization header for APIs that require authentication.

We could obtain a new token pair by refreshing with the refresh token:

curl -X POST localhost/api/account/auth/refresh \
    --data '{"refresh_token":"<refresh_token>"}'

Get user personal information:

curl localhost/api/account/info/person -H "Authorization: bearer <access_token>"

Update user personal information:

curl -X PUT localhost/api/account/info/person -H "Authorization: bearer <access_token>" \
    --data '{"firstname":"newfirst","lastname":"newlast","email":"ming3@ming.com"}'

Get user shipping information:

curl localhost/api/account/info/shipping -H "Authorization: bearer <access_token>"

Update user shipping information:

curl -X PUT localhost/api/account/info/shipping -H "Authorization: bearer <access_token>" \
    --data '{"address":"japan","phone_number":"54321"}'

Product Service

Next, let's create some new products:

curl -X POST localhost/api/product \
     --data '{"name": "product1","description":"first product","brand_name":"mingbrand","price":100,"inventory":1000}'
curl -X POST localhost/api/product \
     --data '{"name": "product2","description":"second product","brand_name":"mingbrand","price":100,"inventory":10}'

The API will return the ID of the created product.

List all products with pagination:

curl "localhost/api/products?offset=0&size=100"

This will return a list of product catalog entries, each including the product's ID, name, price, and current inventory.

Get product details:

curl "localhost/api/product/<product_id>"

This will return the name, description, brand, price, and cached inventory of the queried product.

Purchase Service

Here comes the core part. We are going to create a new purchase, which sends a purchase event to the saga orchestrator and triggers the distributed transaction. The API returns the ID of the new purchase on success.

curl -X POST localhost/api/purchase -H "Authorization: bearer <access_token>" \
    --data '{"purchase_items":[{"product_id":<product_id>,"amount":1}],"payment":{"currency_code":"NT"}}'

After creating a purchase, we can subscribe to /api/purchase/result to receive real-time transaction results. The purchase service pushes results using server-sent events (SSE). The following example shows how to subscribe from JavaScript; we use the event-source-polyfill library so that the SSE request can carry an Authorization header.

var script = document.createElement('script');
script.src = "https://unpkg.com/event-source-polyfill@1.0.9/src/eventsource.js";
document.getElementsByTagName('head')[0].appendChild(script);

var es = new EventSourcePolyfill('http://localhost/api/purchase/result', {
  headers: {
    'Authorization': 'bearer <access_token>'
  },
});
var listener = function (event) {
  console.log(event.data);
};
es.addEventListener("data", listener);

If the subscription is successful, we will receive real-time results like the following:

[Real-time purchase results]

Order Service

Next, we could check whether our order is successfully created:

curl "localhost/api/order/<order_id>" -H "Authorization: bearer <access_token>"

Payment Service

Finally, we could check whether our payment is successfully created:

curl "localhost/api/payment/<payment_id>" -H "Authorization: bearer <access_token>"

Observability

All services can be configured to expose Prometheus metrics and emit tracing spans. By default, every service exposes its Prometheus metrics endpoint on port 8080. Distributed tracing is enabled by setting the environment variable JAEGER_URL to the Jaeger collector endpoint.

To check whether all services are alive, visit Prometheus at http://localhost:9090/targets.

[Prometheus targets page]

Visit the Jaeger web UI at http://localhost:16686 to inspect the tracing spans of our API call chains, starting from Traefik. For example, the following figure shows a request that queries /api/order/<order_id>. Once order service receives the request, it first authenticates it by calling auth.AuthService.Auth, a gRPC authentication API provided by account service. If authentication succeeds, order service continues processing the request. To assemble a complete order, it asks product service for the details of the purchased products through another gRPC call, product.ProductService.GetProducts.

[Jaeger trace for GET /api/order/<order_id>]

Let's look at a more complex example. This figure shows how the transaction services interact after we create a new purchase. Authentication proceeds as in the previous example. Once purchase service authenticates the request, it publishes a CreatePurchaseCmd event to the message broker. Orchestrator service then receives the event and starts the saga transactions. The following diagram shows all traces involved in a single purchase, including traces of streaming results and Redis operations.

[Jaeger traces for a single purchase saga]

Each transaction service injects the current span context into an event before publishing it. When a subscriber receives the event, it extracts the span context from the event payload and uses it as the parent of the span it starts. This lets us stitch together a complete pub/sub call chain across all transaction services.
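The propagation scheme above can be sketched like this. It is a simplified stand-in: the real services inject and extract a full Jaeger span context through the OpenTracing API, whereas here a small struct embedded in the event payload plays that role, and all type and function names are illustrative.

```go
package main

import "fmt"

// spanContext mimics the tracing metadata (trace and span IDs) that
// the real services carry in a Jaeger span context.
type spanContext struct {
	TraceID string
	SpanID  string
}

// event is a saga command/event; SpanCtx travels inside the payload.
type event struct {
	Type    string
	SpanCtx spanContext
}

// publish injects the publisher's current span context into the
// event before it is sent to the broker.
func publish(current spanContext, e event) event {
	e.SpanCtx = current
	return e
}

// handle extracts the context on the subscriber side and starts a
// child span whose parent is the publisher's span, linking the
// pub/sub call chain into one trace.
func handle(e event) spanContext {
	parent := e.SpanCtx
	child := spanContext{TraceID: parent.TraceID, SpanID: parent.SpanID + ".1"}
	fmt.Printf("span %s continues trace %s\n", child.SpanID, child.TraceID)
	return child
}

func main() {
	e := publish(spanContext{TraceID: "t1", SpanID: "s1"}, event{Type: "CreatePurchaseCmd"})
	handle(e)
}
```

Because the child span keeps the publisher's trace ID, Jaeger can render the whole purchase, from HTTP request to saga events, as a single trace.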

In addition, Jaeger builds service dependency topologies from our spans. The following figure shows the topology when a client creates a new purchase.
