
PRISMA CLUSTER

Created with love from Italy 💚🤍❤️

  • Disclaimer
  • What is Prisma Cluster
  • Features
  • Global Requirements
  • Logical Architecture
  • Workflow
  • Project Tree
  • CLI
  • Replication
  • How to clone for dev/prod
  • How to init a service from an existing schema
  • How to init a service from scratch
  • How to re-init a service
  • What if in replication mode the Parent DB changes some columns?



Disclaimer

Prisma Cluster is not officially related to Prisma.io. I named this project "Prisma Cluster" because it uses the power of Prisma's introspection and interfacing.

The goal of this project is to make it simpler to build a microservice infrastructure and serve its data through an RPC server. The project also comes pre-configured to scale into a replication structure, in which it runs a slave DB of an external master.

The repository will be updated whenever needed. The project is released under an open source license; its usage and configuration are your business.



What is Prisma Cluster? (Back to Top)


Prisma Cluster is a full-stack service manager. Using the next-generation ORM Prisma together with the power of Jayson RPC and a Dockerized Postgres, Prisma Cluster allows you to create a cluster of databases with:

  • An optional slave DB connected to an external parent DB
  • A unique database per service, with specific user/credentials
  • Automatic linking of the service DB to the service controller and RPC server
  • Automatic storage creation for the service, auto-linked with the service's controllers
  • Full backup creation
  • Full rollback to a specific backup
  • Service deletion tracker
  • Service data export (DB data and, optionally, files)
  • Service data import (DB data and, optionally, files)
  • Global data export (DB data and, optionally, files)
  • Global data import (DB data and, optionally, files)
  • Import of tables/data from existing projects
  • Model and interface generation using database introspection

All of that just using the CLI, allowing you to focus on the logical service architecture without losing time on configuration and wiring.

Prisma is the ORM; it generates the models and the client interface (by "client interface" I mean the client library used to retrieve database objects).

Jayson is the JSON-RPC implementation; it listens on a TCP port (default: 3000) and determines the service router, method, and controller to invoke.

Node-Filestorage is the storage engine; it handles the storage. Please read the documentation here: petersirka/node-filestorage

Cron is the cron job tool; it handles the cron jobs. Please read the documentation here: kelektiv/node-cron

Node-Cache is a simple caching module: node-cache/node-cache

Look at Logical Architecture to understand the pipeline.

  • Why does Prisma Cluster listen on TCP and not on HTTP(S)?

Simple: many devs/sites/platforms are already structured around an API, and for security reasons many don't want to expose the database server directly. So, to fit the most common real-world setups, the simplest solution is to make the RPC call from your API server; the API server then delivers the result of the RPC call directly, without exposing the database server.
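
As a rough illustration of the relay idea, your API host can push a JSON-RPC 2.0 request at the cluster's TCP port. This is a sketch only: the method name is hypothetical, and it assumes the Jayson TCP server accepts a newline-terminated JSON-RPC 2.0 object on the default port 3000.

```sh
# Hypothetical call from the API host; "users.list" is an example method,
# not a real service of this repo.
printf '{"jsonrpc":"2.0","method":"users.list","params":{},"id":1}\n' \
  | nc -w 2 localhost 3000
```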


Features (Back to Top)

  • Service Creation
  • Service Removal
  • Service Storage
  • Service Jobs
  • Service Local Migration
  • Service Dev/Prod Deploy
  • Backup System
  • Backup Rollback
  • Prisma Studio
  • Introspection
  • Postgres replication pre-configured
  • RPC server using Jayson
  • RPC client example using Jayson (see example.client.js)
  • Node-Cache
  • WebSocket handler using Fastify
  • WebSocket subscription to the Postgres logical WAL (realtime)
  • Prisma-Cluster GUI (to manage the RPC infrastructure directly from a local website)

Global Requirements (Back to Top)

  • Node
  • Docker
  • Docker-Compose v2
  • Prisma: yarn global add prisma
  • Dotenv-cli: yarn global add dotenv-cli

Logical Architecture (Back to Top)

A simple example of this project's architecture with 2 services:

Workflow (Back to Top)

┌─ Local Workflow
├─ do your job
├─ migrate your changes to your local db using Prisma Cluster CLI
└─ test your changes
   ├─ push your changes in your own branch
   └─ on task completion, make a pull request to the Development branch;
      if you removed any services, report the deleted service names in the pull request
      └─> Development Workflow
          ├─ Fetch changes
          ├─ run the global deploy (to apply all migrations changes) using Prisma Cluster CLI
          └─ test
             └─ at the end of the test if stable make a pull request to Master branch
                └─> Production Workflow
                   ├─ Fetch changes
                   └─> run the global deploy (to apply all migrations changes) using Prisma Cluster CLI   

Project Tree (Back to Top)

┌ prisma-cluster
├─ .github                                (Repository GIT Community Files)
|  ├─ ISSUE_TEMPLATE
|  |  ├─ bug_report.md
|  |  └─ feature_request.md
|  ├─ CODE_OF_CONDUCT.md
|  └─ CONTRIBUTING.md
├─ CLI
|  ├─ env
|  |  ├─ .blank.env
|  |  └─ .env                             (Not provided - copy blank.env and replace the values)
|  ├─ helpers
|  |  └─ text.sh
|  ├─ modules
|  |  ├─ backups
|  |  |  ├─ commands
|  |  |  |  ├─ create.sh
|  |  |  |  └─ rollback.sh
|  |  |  └─ backup.sh 
|  |  ├─ db
|  |  |  ├─ commands
|  |  |  |  ├─ export_data.sh
|  |  |  |  ├─ fetch.sh
|  |  |  |  ├─ import.sh
|  |  |  |  └─ restore_data.sh
|  |  |  └─ db.sh
|  |  ├─ master_interface
|  |  |  ├─ commands
|  |  |  |  ├─ generate.sh
|  |  |  |  └─ update.sh
|  |  |  └─ master_interface.sh
|  |  └─ services
|  |     ├─ commands
|  |     |  ├─ create.sh
|  |     |  ├─ delete.sh
|  |     |  ├─ deploy.sh
|  |     |  ├─ jobs.sh
|  |     |  ├─ method.sh
|  |     |  ├─ migrate.sh
|  |     |  └─ studio.sh
|  |     └─ service.sh 
|  ├─ package.json
|  └─ rpc.sh
├─ DB
|  ├─ import
|  |  └─ [file-to-import].sql
|  ├─ replication                         (postgres replication slave mode config folder)
|  |  ├─ config 
|  |  |  ├─ blank.pg_hba.conf       
|  |  |  ├─ blank.postgresql.conf
|  |  |  ├─ pg_hba.conf                   (Not provided - copy blank.pg_hba.conf - be sure to fix the permission)
|  |  |  └─ postgresql.conf               (Not provided - copy blank.postgresql.conf - be sure to fix the permission)
|  |  ├─ env
|  |  |  ├─ .blank.env.replication
|  |  |  └─ .env                          (Not provided - copy blank.env.replication and replace the values)
|  |  ├─ init
|  |  |  ├─ 00-Create_repl_user.sql       (SQL command to generate repl_user for the replication mode)
|  |  |  ├─ 01-Create_table_[table].sql   (File auto-generated when a subscription is created; contains the creation of the table schema)
|  |  |  └─ 02-Create_sub_[table].sql     (File auto-generated when a subscription is created; contains the creation of the subscription)
|  |  ├─ parent_example
|  |  |  ├─ scripts
|  |  |  |  ├─ MAKE_REPL_USER.sh
|  |  |  |  ├─ publications.sh
|  |  |  |  └─ single-publication.sh
|  |  |  ├─ .pg_hba.conf
|  |  |  ├─ .postgresql.conf
|  |  |  ├─ .replication.env
|  |  |  ├─ README
|  |  |  ├─ server.sh
|  |  |  └─ tables
|  |  ├─ scripts
|  |  |  └─ subscriptions
|  |  |     ├─ single-subscription.sh     (script that creates a single Postgres subscription)
|  |  |     └─ supscriptions.sh           (script that creates Postgres subscriptions from the "tables" file)
|  |  ├─ MAKE_REPL_USER_SQL.sh
|  |  └─ Tables                           (list of tables in subscriptions to parent DB)
|  ├─ blank.docker-compose.replication.yml
|  ├─ docker-compose.replication.yml      (Not provided - copy blank.docker-compose.replication.yml and replace the values)
|  ├─ docker-compose.yml
|  └─ server.sh
├─ RPC
|  ├─ data-export                         (not provided - auto-generated by the CLI when you export data)
|  ├─ jobs                                (not provided - auto-generated by the CLI when you add a job to a service)
|  |  ├─ services
|  |  |  └─ [service-name] * n
|  |  └─ index.js
|  ├─ master                              (not provided - will be auto-generated from CLI under your command)
|  |  ├─ model
|  |  |  ├─ interface                     
|  |  |  ├─ node_modules                  
|  |  |  ├─ prisma                        
|  |  |  ├─ .gitignore                    
|  |  |  ├─ .package-lock.json            
|  |  |  └─ .package.json                 
|  |  └─ master_interface.js              (not provided - will be auto-generated from CLI during master's init)
|  ├─ middleware                          (not provided - not handled by CLI - recommended to make it and place your custom middlewares / classes)
|  ├─ node_modules                        (not provided - run yarn from RPC folder)
|  ├─ services
|  |  └─ [service-name] * n
|  |     ├─ controller                    (rpc server functions)
|  |     ├─ method                        (rpc server methods)
|  |     ├─ model
|  |     |  ├─ interface                  (rpc model interface)
|  |     |  ├─ node_modules
|  |     |  ├─ prisma
|  |     |  |  └─ schema.prisma           (prisma rpc model generator and migration handler)
|  |     |  ├─ .gitignore
|  |     |  ├─ package-lock.json
|  |     |  └─ package.json
|  |     └─ routes                        (rpc server routing)
|  ├─ services-backups                    (not provided - auto-generated by the CLI when you create a backup)
|  |  └─ [dd/mm/YYYY_hh_mm_ss].tar.gz
|  ├─ services-deleted                    (not provided - auto-generated by the CLI when you delete a service)
|  |  └─ [service_name]
|  ├─ storage
|  |  ├─ buckets
|  |  |  └─ [service-name] * n            (service storage bucket)
|  |  |     ├─ OOO-000-NNN                (folder auto-generated by the storage system - see the storage documentation to know more)
|  |  |     |  ├─ OOO000NNN.data          (added files are automatically saved with an incremental number - see the storage documentation to know more)
|  |  |     |  └─ config                  (folder auto-generated by the storage system - see the storage documentation to know more)
|  |  |     └─config                      (folder auto-generated by the storage system - see the storage documentation to know more)
|  |  └─ index.js
|  ├─ example.client.js
|  ├─ package.json
|  ├─ router.js
|  ├─ server.js
|  └─ yarn.lock
├─ .gitignore
├─ CONTRIBUTING.md
├─ LICENSE.md
└─ README.md

PRISMA CLUSTER CLI (Back to Top)

  • Locate prisma-cluster/CLI
  • From that folder, run yarn rpc [command]
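
For example (the service name "emailer" here is hypothetical):

```sh
cd prisma-cluster/CLI
yarn rpc service create emailer   # scaffolds the "emailer" service end to end
```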

Available commands: (Back to Top)

  • SERVICE
    • Connect: yarn rpc service connect [service-name] [method-name]
    • Create: yarn rpc service create [service-name] [OPTIONAL | create_credentials: [true/false] | default = true] [OPTIONAL | only_master: [true/false] | default = false]
    • Delete: yarn rpc service delete [service-name]
    • Method: yarn rpc service method [service-name] [action: add/delete] [method-name] [(optional): master_only]
    • Migrate: yarn rpc service migrate [mode/service-name]
      • Specific Service: yarn rpc service migrate [service-name]
      • All Services: yarn rpc service migrate global
    • Deploy: yarn rpc service deploy [mode/service-name]
      • Specific Service: yarn rpc service deploy [service-name]
      • All Services: yarn rpc service deploy global
    • Jobs: yarn rpc service jobs [service-name] add [job-name] [(optional): include_master]
      • Without master interface injection: yarn rpc service jobs [service-name] add [job-name]
      • With master interface injection: yarn rpc service jobs [service-name] add [job-name] include_master
    • Studio: yarn rpc service studio [service-name]
  • BACKUP
    • Create: yarn rpc backup create
    • Rollback: yarn rpc backup rollback [dd/mm/YYYY_hh_mm_ss]
  • DB
    • Fetch: yarn rpc db fetch [service-name]
    • Import: yarn rpc db import [service-name] [file-to-import.sql]
    • export_data: yarn rpc db export_data [mode/service-name]
    • restore_data: yarn rpc db restore_data [file-to-restore]
  • MASTER INTERFACE
    • Generate: yarn rpc master_interface generate
    • Update: yarn rpc master_interface update

SERVICE > CONNECT: (Back to Top)

  • Connects the master interface to the method
    • This command must be run ONLY if you use the Master DB, which contains data external to the service
    • Before running this command you need to generate the master interface with: yarn rpc master_interface update

SERVICE > CREATE: (Back to Top)

  • If you want only the master interface, run: yarn rpc service create [service-name] false true
  • Creates the service, automatically running these actions:
    • Service folder structure under ./services/[service-name]
    • RPC router injection in [root]/router.js
    • Creation of the [service-name] db in Postgres (skipped when create_credentials is set to false)
    • Creation of the DB user (owner of the [service-name] db) (skipped when create_credentials is set to false)
    • Saving of the environment variables that link Prisma to the created db (skipped when create_credentials is set to false)
    • Creation of the specific Storage Bucket

SERVICE > DELETE: (Back to Top)

  • Deletes the service, automatically running these actions:
    • Deletion of ./services/[service-name]
    • Deletion of the [service-name] DB
    • Deletion of the [service-name] DB's user
    • Deletion of the environment variables related to the [service-name] DB's user
    • Deletion of the [service-name] router injection in the RPC router [root]/router.js
    • Creation of a deletion file under ./services-deleted/, named after [service-name] and containing the operator's name and the reason for deletion
    • Deletion of the storage logical link
    • Physical deletion of the service's stored files (optional)

SERVICE > METHOD: (Back to Top)

  • The "master_only" option works only if you have already generated the master_interface. With it, the CLI generates the service's method and controller without creating a dedicated database, credentials, or DB interface; use this kind of service when you only need the data of the Master DB (i.e. you are running this project in replication mode). A dedicated service storage is always created regardless.
  • ADD:
    • Without the "master_only" option, creates the method of the service, automatically running these actions:
      • Creation of the [method-name] controller under ./services/[service-name]/controllers/[method-name]_controller.js
      • Creation of the [method-name] method under ./services/[service-name]/methods/[method-name]_method.js
      • Injection of [method-name] into [service-name]'s router
      • Auto-configuration of the [method-name] controller with [service-name]'s model interface in [method-name]_controller.js
  • DELETE:
    • Deletes the method of the service, automatically running these actions:
      • Deletion of the [method-name] controller under ./services/[service-name]/controllers/[method-name]_controller.js
      • Deletion of the [method-name] method under ./services/[service-name]/methods/[method-name]_method.js
      • Deletion of the injection from [service-name]'s router

SERVICE > MIGRATE: (Back to Top)

  • Applies schema changes to the DB
  • Creates the migration file

SERVICE > DEPLOY (MUST BE RUN ONLY ON DEV / PRODUCTION) (Back to Top)

  • Applies migrations to the DB
  • Loops over ./services-deleted to check whether any services must be deleted

SERVICE > JOBS (Back to Top)

  • Generates a service cron-jobs handler under ./RPC/jobs/services/[service-name].js; its execution is handled automatically by ./RPC/jobs/index.js
    • Use include_master only if you need to connect the job with the Master DB's data
    • The job is automatically connected with the service's interface and the service's storage
  • The logic of each cron job lives under its own key in ./RPC/jobs/services/[service-name].js

SERVICE > STUDIO (Back to Top)

  • Opens Prisma Studio for the service's database: yarn rpc service studio [service-name]

BACKUP > CREATE: (Back to Top)

  • Creates a full dump of all DBs (schema and data)
  • Creates a backup of the current environment variables
  • Creates a backup of all current service controllers and routes
  • Zips everything from the previous points

BACKUP > ROLLBACK (USE WITH CAUTION): (Back to Top)

  • Applies the backup you specified, deleting all services and databases and re-applying them from the backup

DB > IMPORT: (Back to Top)

  • Runs the SQL file against the service's DB; the file must be located under ./DB/import

DB > FETCH (USE WITH CAUTION): (Back to Top)

  • This command fetches the entire database, generating the Prisma model. The intended scenario is when you need to import the structure of an already existing DB (i.e. you previously used DB > Import) and then need to generate the model's interface; if you use it outside this scenario, be careful about creating conflicts with migrations
  • Please read How to init a service from an existing schema to avoid conflicts in the migration process

DB > EXPORT_DATA: (Back to Top)

  • This command exports the data you require. yarn rpc db export_data global exports the data of all services, asking whether you also want to export the files stored in the file storages; yarn rpc db export_data [service-name] exports only that service's data, again asking whether you want to export the files stored in that service's file storage.
  • The data dump and files are stored under ./RPC/data-export; the dump is automatically compressed and named data_[current-datetime].tar.gz

DB > RESTORE_DATA: (Back to Top)

  • Usage: yarn rpc db restore_data data_[current-datetime]; the file must be under ./RPC/data-export
  • This command just applies the data (and the files, if they are contained in the dump)
  • ATTENTION: this logic is made to RESTORE, not to IMPORT, so if your databases already contain data this will cause conflicts on unique and primary keys; applying it safely is up to your own logic. If you want to modify the dump first:
    • Locate your dump under ./RPC/data-export
    • Run: tar -xf [dump-name].tar.gz (this extracts the dump)
    • You will now have a new folder named like the dump, with 2 subfolders or just 1 (depending on whether you exported files as well)
    • Locate the db folder under the dump's folder; there you will find the .sql file
    • If you need to modify the files exported from the file storage, locate the data folder under the dump's folder; this action is not recommended unless you know petersirka/node-filestorage well
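
A sketch of those steps as one shell session (the dump name is a placeholder, and the repack step is an assumption: restore_data is given the same .tar.gz name it exported):

```sh
cd RPC/data-export
tar -xf data_2022-03-29_10_53_40.tar.gz        # extract the dump (placeholder name)
"$EDITOR" data_2022-03-29_10_53_40/db/*.sql    # edit the SQL before restoring
# repack under the same name so `yarn rpc db restore_data` finds it
tar -czf data_2022-03-29_10_53_40.tar.gz data_2022-03-29_10_53_40/
```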

MASTER INTERFACE > GENERATE: (Back to Top)

  • Generates the interface of the Master DB

MASTER INTERFACE > UPDATE: (Back to Top)

  • Updates the interface of the Master DB

POSTGRES REPLICATION SLAVE MODE (Back to Top)

Quick Start

  • In prisma-cluster/DB/replication/env, create .env.replication from .blank.env.replication, replacing the values
  • In prisma-cluster/DB/replication/scripts, run MAKE_REPL_USER_SQL.sh; this generates 00-Create_repl_user.sql under prisma-cluster/DB/replication/init
  • In prisma-cluster/DB/replication/config, create pg_hba.conf and postgresql.conf from blank.pg_hba.conf and blank.postgresql.conf
  • In prisma-cluster/DB, create docker-compose.replication.yml from blank.docker-compose.replication.yml and replace the values
  • Run prisma-cluster/DB/server.sh replication up --build -d (to build and start) or prisma-cluster/DB/server.sh replication up -d (to start only); Docker will run Postgres with the replication configuration fetched from prisma-cluster/DB/replication/config. A consolidated sketch of these steps follows below.
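
The same quick start as one shell session (a sketch; paths follow the steps above, and the copied files still need their values edited by hand):

```sh
cd prisma-cluster/DB/replication
cp env/.blank.env.replication env/.env.replication      # then edit the values
(cd scripts && ./MAKE_REPL_USER_SQL.sh)                 # writes init/00-Create_repl_user.sql
cp config/blank.pg_hba.conf config/pg_hba.conf
cp config/blank.postgresql.conf config/postgresql.conf
cd ..
cp blank.docker-compose.replication.yml docker-compose.replication.yml  # then edit the values
./server.sh replication up --build -d                   # build and start Postgres in slave mode
```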

If Docker returns "permission denied" on the conf files, be sure to fix the permissions, for example:

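A common fix (an assumption based on the official Postgres image, where the in-container postgres user runs as UID/GID 999):

```sh
cd prisma-cluster/DB/replication/config
sudo chown 999:999 pg_hba.conf postgresql.conf   # owner = in-container postgres user
sudo chmod 600 pg_hba.conf postgresql.conf       # owner read/write only
```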

Concept

Consider a scenario with 2 Postgres servers:

  • First Server: MAIN data center (where you store all your data: users, orders, invoices, and so on)
  • Second Server: Prisma Cluster Postgres Server (so this project)

In a scenario like this you need to "move" the data from the First Server to the Second. Replication Mode lets you run the Second Server in "slave" mode, ready to receive publications from the First Server; you just have to configure a few lines :P

Configuration on First Server:

  • Create the user that will run the publications: CREATE ROLE [first-server-replication-user] REPLICATION LOGIN PASSWORD '[first-server-replication-user-password]';
  • Edit pg_hba.conf (usually located in /var/lib/postgresql/data/), adding host all [POSTGRES_REPLICATION_USER from ./database/.env.replication] slave_node md5 under IPv4 connections
  • Edit postgresql.conf, enabling wal_level and setting it to logical
  • Make a host rule for slave_node and point it to this project's host (see the sketch below)
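
Sketched as shell on the First Server (role name, password, paths, and the IP are all placeholders; adapt them to your installation):

```sh
# 1. create the replication role
psql -U postgres -c "CREATE ROLE repl_user REPLICATION LOGIN PASSWORD 'secret';"
# 2. allow the replication user from the slave host
echo "host all repl_user slave_node md5" >> /var/lib/postgresql/data/pg_hba.conf
# 3. enable logical WAL (restart Postgres afterwards)
sed -i "s/^#\?wal_level.*/wal_level = logical/" /var/lib/postgresql/data/postgresql.conf
# 4. point slave_node at this project's host
echo "203.0.113.10 slave_node" >> /etc/hosts
```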

Configuration on the Second Server (this project):

In prisma-cluster/DB/replication/env/.env.replication set:

  • master_node: your First Server's IP
  • PARENT_REPLICATION_DB: the name of the DB containing the data on the First Server
  • PARENT_REPLICATION_USER: the user that will run the publications on the First Server
  • PARENT_REPLICATION_PASSWORD: the password of the user that will run the publications on the First Server
  • Under prisma-cluster/DB/replication/scripts, run MAKE_REPL_USER_SQL.sh - this script will automatically generate the SQL that creates this project's replication user

Creation of a publication (to run on the First Server):

  • In psql, run CREATE PUBLICATION [table-name]_pub FOR TABLE [table-name]; - this creates the publication
  • In psql, run GRANT ALL ON [table-name] TO [first-server-replication-user]; - this grants the access
  • Run pg_dump -U [first-server-root] -d [first-server-db] -t [table-name] -s | psql -U [POSTGRES_SLAVE_USER from ./database/.env.replication] -d master -h slave_node - this applies the schema on the slave server (this project's server)
  • Now you can subscribe from the slave server
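
The same three steps as one shell session, using a hypothetical table users (all user and DB names are placeholders):

```sh
psql -U postgres -d parent_db -c "CREATE PUBLICATION users_pub FOR TABLE users;"
psql -U postgres -d parent_db -c "GRANT ALL ON users TO repl_user;"
# copy the table schema to the slave (this project's server)
pg_dump -U postgres -d parent_db -t users -s | psql -U slave_user -d master -h slave_node
```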

Subscription (to run from the Second Server)

  • From prisma-cluster/DB/replication/scripts run single-subscription.sh [table-name]

If you did everything right, it will work XD

What does this script do? It adds the table to the tables file to keep track of the subscribed tables, then generates the subscription SQL under prisma-cluster/DB/replication/init; this way, every time the Docker container is mounted it will import the subscriptions if they don't exist already. If the container is already running, the script directly invokes psql to create the subscription.
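
For reference, the generated SQL boils down to a standard Postgres CREATE SUBSCRIPTION. Roughly (a sketch with placeholder values, not the script's exact output):

```sh
psql -U slave_user -d master -c "CREATE SUBSCRIPTION users_sub \
  CONNECTION 'host=master_node dbname=parent_db user=repl_user password=secret' \
  PUBLICATION users_pub;"
```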


Using this project in a real development/production case (Back to Top)

  • Create your own empty private repository
  • Clone the project https://github.com/WilliamFalci/Prisma-Cluster.git
  • Run cd Prisma-Cluster
  • Run git remote rename origin public
  • Run git remote set-url --push public DISABLE
  • Run git remote add own [uri-of-your-private-repository]
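
The same sequence as one copy-paste block (the private repository URI is a placeholder):

```sh
git clone https://github.com/WilliamFalci/Prisma-Cluster.git
cd Prisma-Cluster
git remote rename origin public
git remote set-url --push public DISABLE    # make the public remote fetch-only
git remote add own git@github.com:you/your-private-repo.git
```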

In this way:

  • You will be able to fetch/pull from this public repository
  • You will be able to fetch/pull/push from your own private repository
  • You will be able to modify the gitignore based on your needs

How to init a service from scratch (Back to Top)

  • First of all, create the service by running yarn rpc service create [service-name]
  • At this point the CLI has created the service folder structure + the blank DB on Postgres + the DB credentials in the environment, but not the interface. Why? Because the DB is empty
  • We must "init" our DB with at least 1 table to generate the interface; running yarn rpc service schema [service-name] opens the service's schema
  • Refer to Prisma Schema to create your tables
  • After this, generate your first migration by running yarn rpc service migrate [service-name]; this command applies the schema changes to the DB, generates the migration file, and creates/updates the interface
  • Now you can create your service's methods by running yarn rpc service method [service-name] add [method-name]; this command generates the method and its controller, already linked with the interface etc. (the whole flow is sketched below)
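
The whole flow in one place (the service and method names are examples):

```sh
yarn rpc service create billing                    # folder structure + empty DB + credentials
yarn rpc service schema billing                    # open the schema; add at least one model
yarn rpc service migrate billing                   # apply schema, create migration, build the interface
yarn rpc service method billing add get_invoice    # method + controller wired to the interface
```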

How to init a service from an existing schema (Back to Top)

To understand this workflow, consider a scenario where you need to create a service by importing its tables from another DB. To make it easier to follow, I prepared an example.

In this example I need to create a service named emailer:

  • I run: yarn rpc service create emailer
  • I then need to import 7 tables from my old DB named "production", so I generate 2 dumps: the first with the table structure only, the second with the data only.
  • I place the first one (schema only), named 01-Emailer.sql, into /prisma-cluster/DB/import, then I run yarn rpc db import emailer 01-Emailer.sql
  • This generates the 7 tables in the emailer service's database. At this point I need to generate the model's interface and sync Prisma to the current status; to do that I run yarn rpc db fetch emailer. The CLI will warn you about a drift detected, asking you to confirm the sync (this would normally delete all your data, but in our workflow there is no data yet, so it's ok: confirm)
  • After confirming, the service's model, schema, and migration are perfectly in sync, so now you can import your data: copy the second dump (data only), named 02-Emailer_data.sql, into /prisma-cluster/DB/import, then run yarn rpc db import emailer 02-Emailer_data.sql - enjoy!
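
The emailer example as a single session (dump file names from the text above; the source path of the dumps is a placeholder):

```sh
yarn rpc service create emailer
cp ~/dumps/01-Emailer.sql prisma-cluster/DB/import/         # schema-only dump
yarn rpc db import emailer 01-Emailer.sql
yarn rpc db fetch emailer                                   # confirm the drift-detected prompt
cp ~/dumps/02-Emailer_data.sql prisma-cluster/DB/import/    # data-only dump
yarn rpc db import emailer 02-Emailer_data.sql
```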

If you want more information about this kind of troubleshooting, look at Prisma Development Troubleshooting.



How to re-init a service (Back to Top)

Damn! I got something wrong and I need to re-init the service, excluding it from "services-deleted". How can I do it?

So... if you haven't already run the command yarn rpc service delete [service-name], do it. This command deletes all the service's environment variables, its DB, data, and storage, then generates a file named after the service under the services-deleted folder. If you want to exclude this deletion from the deletion tracking, just delete that file.

Then you are free to re-create the service.


What if in replication mode the Parent DB changes some columns? (Back to Top)

Imagine you have the table users on both the Parent Server and the Slave Server (the slave obviously being this project's DB), and you are running in replication mode.

You have already made the publication on the Parent Server and the subscription on the Slave, so you have already received the data. But for some reason you need to change the data type of a column on the Parent DB; you also add a new column and drop another one, again on the Parent Server.

What will happen?

The Slave DB will no longer be able to apply the data for those columns. Why? Logical replication replicates data only, not structure; this means you have to handle those changes yourself.

How to handle those changes?

So... you can do it manually if you want; otherwise, I made this little Node tool: node-pg-compare

Following the "How to use" of Node-Pg-Compare you will get the SQL File to apply ont Prisma-Cluster, to apply it follow those steps:

  • Copy the SQL File generated with node-pg-compare to ./DB/import/
  • Run the following command yarn rpc db import master [sql-filename].sql
  • Enjoy