OPI storage gRPC to SPDK json-rpc bridge


This is a simple SPDK-based storage API PoC consisting of three parts:

  • SPDK - container running the SPDK application on the xPU
  • Server - container with the OPI gRPC storage API to SPDK json-rpc API bridge
  • Client - uses goDPU for testing the server/bridge above

I Want To Contribute

This project welcomes contributions and suggestions. We are happy to have the community involved via submission of Issues and Pull Requests (with substantive content or even just fixes). We hope the documents, test framework, etc. will become a community process with active engagement. PRs can be reviewed by any number of people, and a maintainer may accept them.

See CONTRIBUTING and GitHub Basic Process for more details.

Docs

OPI-SPDK Bridge Block Diagram

The following is the example architecture we envision for the OPI Storage SPDK bridge APIs. It uses SPDK to handle storage services, with configuration handled through SPDK's standard JSON-RPC based APIs; see https://spdk.io/doc/jsonrpc.html
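
For illustration, a single SPDK JSON-RPC exchange over that socket looks roughly like this (method and fields shown are examples only; the full API is documented at the link above):

Request:  {"jsonrpc": "2.0", "id": 1, "method": "bdev_get_bdevs", "params": {"name": "Malloc0"}}
Response: {"jsonrpc": "2.0", "id": 1, "result": [{"name": "Malloc0", "block_size": 512, "num_blocks": 131072}]}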

We recognise that not all companies use SPDK, so for them only the PROTOBUF definitions are the OPI consumable product. For those that wish to use SPDK, this is a reference implementation and is not intended for production use.

OPI Storage SPDK bridge/server

OPI-SPDK Bridge Sequence Diagram

The following is an example sequence diagram for the OPI-SPDK bridge APIs. It is only an example; SPDK is used for illustration and is not mandated by OPI.

OPI Storage SPDK bridge/server

Getting started

❗ docker-compose is deprecated. For details, see Migrate to Compose V2.
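
With Compose V2 the plugin form of the command is used instead of the legacy binary, e.g. (assuming the compose file at the repository root):

docker compose up --detach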

QEMU example

Real DPU/IPU example

on the DPU/IPU (e.g. with IP=10.10.10.10) run

$ docker run --rm -it -v /var/tmp/:/var/tmp/ -p 50051:50051 -p 8082:8082 ghcr.io/opiproject/opi-spdk-bridge:main
2023/09/12 20:29:05 Connection to SPDK will be via: unix detected from /var/tmp/spdk.sock
2023/09/12 20:29:05 gRPC server listening at [::]:50051
2023/09/12 20:29:05 HTTP Server listening at 8082

on the x86 management VM, run either gRPC or HTTP requests

# gRPC requests
docker run --network=host --rm -it namely/grpc-cli ls   --json_input --json_output 10.10.10.10:50051 -l
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeSubsystem "{nvme_subsystem : {spec : {nqn: 'nqn.2022-09.io.spdk:opitest2', serial_number: 'myserial2', model_number: 'mymodel2', max_namespaces: 11} }, nvme_subsystem_id : 'subsystem2' }"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 ListNvmeSubsystems "{}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmeSubsystem "{name : 'nvmeSubsystems/subsystem2'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeController "{parent: 'nvmeSubsystems/subsystem2', nvme_controller : {spec : {nvme_controller_id: 2, pcie_id : {physical_function : 0, virtual_function : 0, port_id: 0}, max_nsq:5, max_ncq:5, 'trtype': 'NVME_TRANSPORT_TYPE_PCIE' } }, nvme_controller_id : 'controller1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeSubsystem "{nvme_subsystem : {spec : {nqn: 'nqn.2022-09.io.spdk:opitest3', serial_number: 'myserial2', model_number: 'mymodel2', max_namespaces: 11} }, nvme_subsystem_id : 'subsystem3' }"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeController "{'parent':'nvmeSubsystems/subsystem3','nvme_controller':{'spec':{'nvme_controller_id':2,'fabrics_id':{'traddr': '127.0.0.1', trsvcid: '4421', adrfam: 'NVME_ADDRESS_FAMILY_IPV4'}, 'max_nsq':5,'max_ncq':5, 'trtype': 'NVME_TRANSPORT_TYPE_TCP'}},'nvme_controller_id':'controller3'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 ListNvmeControllers "{parent : 'nvmeSubsystems/subsystem2'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmeController "{name : 'nvmeSubsystems/subsystem2/nvmeControllers/controller1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmeController "{name : 'nvmeSubsystems/subsystem3/nvmeControllers/controller3'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeNamespace "{parent: 'nvmeSubsystems/subsystem2', nvme_namespace : {spec : {volume_name_ref : 'Malloc0', 'host_nsid' : '10', uuid:{value : '1b4e28ba-2fa1-11d2-883f-b9a761bde3fb'}, nguid: '1b4e28ba-2fa1-11d2-883f-b9a761bde3fb', eui64: 1967554867335598546 } }, nvme_namespace_id: 'namespace1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 ListNvmeNamespaces "{parent : 'nvmeSubsystems/subsystem2'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmeNamespace "{name : 'nvmeSubsystems/subsystem2/nvmeNamespaces/namespace1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 StatsNvmeNamespace "{name : 'nvmeSubsystems/subsystem2/nvmeNamespaces/namespace1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeRemoteController "{nvme_remote_controller : {multipath: 'NVME_MULTIPATH_MULTIPATH', tcp: {hdgst: false, ddgst: false}}, nvme_remote_controller_id: 'nvmetcp12'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 ListNvmeRemoteControllers "{}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmeRemoteController "{name: 'nvmeRemoteControllers/nvmetcp12'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmePath "{parent: 'nvmeRemoteControllers/nvmetcp12', nvme_path : {traddr:'11.11.11.2', trtype:'NVME_TRANSPORT_TYPE_TCP', fabrics:{subnqn:'nqn.2016-06.com.opi.spdk.target0', trsvcid:'4444', adrfam:'NVME_ADDRESS_FAMILY_IPV4', hostnqn:'nqn.2014-08.org.nvmexpress:uuid:feb98abe-d51f-40c8-b348-2753f3571d3c'}}, nvme_path_id: 'nvmetcp12path0'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmeRemoteController "{nvme_remote_controller : {multipath: 'NVME_MULTIPATH_DISABLE'}, nvme_remote_controller_id: 'nvmepcie13'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 CreateNvmePath "{parent: 'nvmeRemoteControllers/nvmepcie13', nvme_path : {traddr:'0000:01:00.0', trtype:'NVME_TRANSPORT_TYPE_PCIE'}, nvme_path_id: 'nvmepcie13path0'}"

docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 ListNvmePaths "{parent : 'nvmeRemoteControllers/nvmepcie13'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmePath "{name: 'nvmeRemoteControllers/nvmepcie13/nvmePaths/nvmepcie13path0'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeRemoteController "{name: 'nvmeRemoteControllers/nvmepcie13'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 GetNvmePath "{name: 'nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmePath "{name: 'nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeRemoteController "{name: 'nvmeRemoteControllers/nvmetcp12'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeNamespace "{name : 'nvmeSubsystems/subsystem2/nvmeNamespaces/namespace1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeController "{name : 'nvmeSubsystems/subsystem2/nvmeControllers/controller1'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeSubsystem "{name : 'nvmeSubsystems/subsystem2'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeController "{name : 'nvmeSubsystems/subsystem3/nvmeControllers/controller3'}"
docker run --network=host --rm -it namely/grpc-cli call --json_input --json_output 10.10.10.10:50051 DeleteNvmeSubsystem "{name : 'nvmeSubsystems/subsystem3'}"
# HTTP requests
# inventory
curl -kL http://10.10.10.10:8082/v1/inventory/1/inventory/2

# Nvme
# create
curl -X POST -f http://10.10.10.10:8082/v1/nvmeRemoteControllers?nvme_remote_controller_id=nvmetcp12 -d '{"multipath": "NVME_MULTIPATH_MULTIPATH"}'
curl -X POST -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths?nvme_path_id=nvmetcp12path0 -d '{"traddr":"11.11.11.2", "trtype":"NVME_TRANSPORT_TYPE_TCP", "fabrics":{"subnqn":"nqn.2016-06.com.opi.spdk.target0", "trsvcid":"4444", "adrfam":"NVME_ADDRESS_FAMILY_IPV4", "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:feb98abe-d51f-40c8-b348-2753f3571d3c"}}'
curl -X POST -f http://10.10.10.10:8082/v1/nvmeSubsystems?nvme_subsystem_id=subsys0 -d '{"spec": {"nqn": "nqn.2022-09.io.spdk:opitest1"}}'
curl -X POST -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces?nvme_namespace_id=namespace0 -d '{"spec": {"volume_name_ref": "Malloc0", "host_nsid": 10}}'
curl -X POST -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers?nvme_controller_id=ctrl0 -d '{"spec": {"trtype": "NVME_TRANSPORT_TYPE_TCP", "fabrics_id":{"traddr": "127.0.0.1", "trsvcid": "4421", "adrfam": "NVME_ADDRESS_FAMILY_IPV4"}}}'
# get
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces/namespace0
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers/ctrl0
# list
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers
# stats
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12:stats
curl -X GET -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0:stats
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0:stats
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces/namespace0:stats
curl -X GET -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers/ctrl0:stats
# update
curl -X PATCH -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12 -d '{"multipath": "NVME_MULTIPATH_MULTIPATH"}'
curl -X PATCH -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0 -d '{"traddr":"11.11.11.2", "trtype":"NVME_TRANSPORT_TYPE_TCP", "fabrics":{"subnqn":"nqn.2016-06.com.opi.spdk.target0", "trsvcid":"4444", "adrfam":"NVME_ADDRESS_FAMILY_IPV4", "hostnqn":"nqn.2014-08.org.nvmexpress:uuid:feb98abe-d51f-40c8-b348-2753f3571d3c"}}'
curl -X PATCH -k http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces/namespace0 -d '{"spec": {"volume_name_ref": "Malloc1", "host_nsid": 10}}'
curl -X PATCH -k http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers/ctrl0 -d '{"spec": {"trtype": "NVME_TRANSPORT_TYPE_TCP", "fabrics_id":{"traddr": "127.0.0.1", "trsvcid": "4421", "adrfam": "NVME_ADDRESS_FAMILY_IPV4"}}}'
# delete
curl -X DELETE -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeControllers/ctrl0
curl -X DELETE -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0/nvmeNamespaces/namespace0
curl -X DELETE -f http://10.10.10.10:8082/v1/nvmeSubsystems/subsys0
curl -X DELETE -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12/nvmePaths/nvmetcp12path0
curl -X DELETE -f http://10.10.10.10:8082/v1/nvmeRemoteControllers/nvmetcp12
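
The grpc-cli and curl calls above can also be scripted. Below is a minimal, hedged Go sketch that issues the same CreateNvmeSubsystem call over gRPC. It assumes the generated OPI storage stubs from the opi-api repository; the import path and field names are assumptions taken from the JSON examples above and may differ between API versions, so treat this as a starting point only.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Assumed import path for the generated OPI storage stubs; verify against the opi-api repo.
	pb "github.com/opiproject/opi-api/storage/v1alpha1/gen/go"
)

func main() {
	// Dial the bridge running on the DPU/IPU (same address as in the examples above).
	conn, err := grpc.Dial("10.10.10.10:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("failed to dial bridge: %v", err)
	}
	defer conn.Close()

	client := pb.NewFrontendNvmeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Mirrors the CreateNvmeSubsystem grpc-cli call above; field names are assumed
	// from the JSON examples and should be checked against the generated code.
	subsys, err := client.CreateNvmeSubsystem(ctx, &pb.CreateNvmeSubsystemRequest{
		NvmeSubsystemId: "subsystem2",
		NvmeSubsystem: &pb.NvmeSubsystem{
			Spec: &pb.NvmeSubsystemSpec{
				Nqn:           "nqn.2022-09.io.spdk:opitest2",
				SerialNumber:  "myserial2",
				ModelNumber:   "mymodel2",
				MaxNamespaces: 11,
			},
		},
	})
	if err != nil {
		log.Fatalf("CreateNvmeSubsystem failed: %v", err)
	}
	log.Printf("created subsystem: %v", subsys)
}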

Test SPDK is up

curl -k --user spdkuser:spdkpass -X POST -H "Content-Type: application/json" -d '{"id": 1, "method": "bdev_get_bdevs", "params": {"name": "Malloc0"}}' http://127.0.0.1:9009/
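
If SPDK is up, the call returns a JSON-RPC result describing the Malloc0 bdev, roughly of this shape (fields abbreviated and values illustrative):

{"jsonrpc": "2.0", "id": 1, "result": [{"name": "Malloc0", "block_size": 512, "num_blocks": 131072, "uuid": "..."}]}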

gRPC CLI examples

From https://github.com/grpc/grpc-go/blob/master/Documentation/server-reflection-tutorial.md

Alias

alias grpc_cli='docker run --network=opi-spdk-bridge_opi --rm -it namely/grpc-cli'

See services

$ grpc_cli ls opi-spdk-server:50051
grpc.reflection.v1alpha.ServerReflection
opi_api.storage.v1.AioVolumeService
opi_api.storage.v1.FrontendNvmeService
opi_api.storage.v1.FrontendVirtioBlkService
opi_api.storage.v1.FrontendVirtioScsiService
opi_api.storage.v1.MiddleendEncryptionService
opi_api.storage.v1.MiddleendQosVolumeService
opi_api.storage.v1.NvmeRemoteControllerService
opi_api.storage.v1.NullVolumeService

See commands

$ grpc_cli ls opi-spdk-server:50051 opi_api.storage.v1.FrontendNvmeService -l
filename: frontend_nvme_pcie.proto
package: opi_api.storage.v1;
service FrontendNvmeService {
  rpc CreateNvmeSubsystem(opi_api.storage.v1.CreateNvmeSubsystemRequest) returns (opi_api.storage.v1.NvmeSubsystem) {}
  rpc DeleteNvmeSubsystem(opi_api.storage.v1.DeleteNvmeSubsystemRequest) returns (google.protobuf.Empty) {}
  rpc UpdateNvmeSubsystem(opi_api.storage.v1.UpdateNvmeSubsystemRequest) returns (opi_api.storage.v1.NvmeSubsystem) {}
  rpc ListNvmeSubsystem(opi_api.storage.v1.ListNvmeSubsystemRequest) returns (opi_api.storage.v1.ListNvmeSubsystemResponse) {}
  rpc GetNvmeSubsystem(opi_api.storage.v1.GetNvmeSubsystemRequest) returns (opi_api.storage.v1.NvmeSubsystem) {}
  rpc StatsNvmeSubsystem(opi_api.storage.v1.StatsNvmeSubsystemRequest) returns (opi_api.storage.v1.StatsNvmeSubsystemResponse) {}
  rpc CreateNvmeController(opi_api.storage.v1.CreateNvmeControllerRequest) returns (opi_api.storage.v1.NvmeController) {}
  rpc DeleteNvmeController(opi_api.storage.v1.DeleteNvmeControllerRequest) returns (google.protobuf.Empty) {}
  rpc UpdateNvmeController(opi_api.storage.v1.UpdateNvmeControllerRequest) returns (opi_api.storage.v1.NvmeController) {}
  rpc ListNvmeController(opi_api.storage.v1.ListNvmeControllerRequest) returns (opi_api.storage.v1.ListNvmeControllerResponse) {}
  rpc GetNvmeController(opi_api.storage.v1.GetNvmeControllerRequest) returns (opi_api.storage.v1.NvmeController) {}
  rpc StatsNvmeController(opi_api.storage.v1.StatsNvmeControllerRequest) returns (opi_api.storage.v1.StatsNvmeControllerResponse) {}
  rpc CreateNvmeNamespace(opi_api.storage.v1.CreateNvmeNamespaceRequest) returns (opi_api.storage.v1.NvmeNamespace) {}
  rpc DeleteNvmeNamespace(opi_api.storage.v1.DeleteNvmeNamespaceRequest) returns (google.protobuf.Empty) {}
  rpc UpdateNvmeNamespace(opi_api.storage.v1.UpdateNvmeNamespaceRequest) returns (opi_api.storage.v1.NvmeNamespace) {}
  rpc ListNvmeNamespace(opi_api.storage.v1.ListNvmeNamespaceRequest) returns (opi_api.storage.v1.ListNvmeNamespaceResponse) {}
  rpc GetNvmeNamespace(opi_api.storage.v1.GetNvmeNamespaceRequest) returns (opi_api.storage.v1.NvmeNamespace) {}
  rpc StatsNvmeNamespace(opi_api.storage.v1.StatsNvmeNamespaceRequest) returns (opi_api.storage.v1.StatsNvmeNamespaceResponse) {}
}

See methods

$ grpc_cli ls opi-spdk-server:50051 opi_api.storage.v1.FrontendNvmeService.CreateNvmeController -l
  rpc CreateNvmeController(opi_api.storage.v1.CreateNvmeControllerRequest) returns (opi_api.storage.v1.NvmeController) {}

See messages

$ grpc_cli type opi-spdk-server:50051 opi_api.storage.v1.NvmeControllerSpec
message NvmeControllerSpec {
  .opi_api.common.v1.ObjectKey id = 1 [json_name = "id"];
  int32 nvme_controller_id = 2 [json_name = "nvmeControllerId"];
  .opi_api.common.v1.ObjectKey subsystem_id = 3 [json_name = "subsystemId"];
  .opi_api.storage.v1.PciEndpoint pcie_id = 4 [json_name = "pcieId"];
  int32 max_nsq = 5 [json_name = "maxNsq"];
  int32 max_ncq = 6 [json_name = "maxNcq"];
  int32 sqes = 7 [json_name = "sqes"];
  int32 cqes = 8 [json_name = "cqes"];
  int32 max_namespaces = 9 [json_name = "maxNamespaces"];
}

$ grpc_cli type opi-spdk-server:50051 opi_api.storage.v1.PciEndpoint
message PciEndpoint {
  int32 port_id = 1 [json_name = "portId"];
  int32 physical_function = 2 [json_name = "physicalFunction"];
  int32 virtual_function = 3 [json_name = "virtualFunction"];
}

Call remote method

$ grpc_cli call --json_input --json_output opi-spdk-server:50051 DeleteNvmeController "{name : 'nvmeSubsystems/subsystem2/nvmeControllers/controller1'}"
connecting to opi-spdk-server:50051
{}
Rpc succeeded with OK status

Server log

opi-spdk-server_1  | 2022/08/05 14:31:14 server listening at [::]:50051
opi-spdk-server_1  | 2022/08/05 14:39:40 DeleteNvmeSubsystem: Received from client: id:8
opi-spdk-server_1  | 2022/08/05 14:39:40 Sending to SPDK: {"jsonrpc":"2.0","id":1,"method":"bdev_malloc_delete","params":{"name":"OpiMalloc8"}}
opi-spdk-server_1  | 2022/08/05 14:39:40 Received from SPDK: {1 {-19 No such device} 0xc000029f4e}
opi-spdk-server_1  | 2022/08/05 14:39:40 error: bdev_malloc_delete: json response error: No such device
opi-spdk-server_1  | 2022/08/05 14:39:40 Received from SPDK: false
opi-spdk-server_1  | 2022/08/05 14:39:40 Could not delete: id:8

Another remote call example

$ grpc_cli call --json_input --json_output opi-spdk-server:50051 ListNvmeSubsystem {}
connecting to opi-spdk-server:50051
{
 "subsystem": [
  {
   "nqn": "nqn.2014-08.org.nvmexpress.discovery"
  },
  {
   "nqn": "nqn.2016-06.io.spdk:cnode1"
  }
 ]
}
Rpc succeeded with OK status

Another Server log

2022/09/21 19:38:26 ListNvmeSubsystem: Received from client:
2022/09/21 19:38:26 Sending to SPDK: {"jsonrpc":"2.0","id":1,"method":"bdev_get_bdevs"}
2022/09/21 19:38:26 Received from SPDK: {1 {0 } 0x40003de660}
2022/09/21 19:38:26 Received from SPDK: [{Malloc0 512 131072 08cd0d67-eb57-41c2-957b-585faed7d81a} {Malloc1 512 131072 78c4b40f-dd16-42c1-b057-f95c11db7aaf}]