
# Standard Tests Development Guide

Standard test cases are found in `betterproto/tests/inputs`, where each subdirectory represents a test case that is verified in isolation.

```
inputs/
   bool/
   double/
   int32/
   ...
```

## Test case directory structure

Each test case has a `<name>.proto` file with a message called `Test`, and optionally a matching `.json` file and a custom test called `test_*.py`.

```
bool/
  bool.proto
  bool.json     # optional
  test_bool.py  # optional
```

### proto

`<name>.proto` — The protobuf message to test

```proto
syntax = "proto3";

message Test {
    bool value = 1;
}
```

You can add multiple `.proto` files to the test case, as long as one file matches the directory name.
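For example, a case could split its messages across two files. The `sibling.proto` file and the `Sibling` message below are hypothetical names, shown only to illustrate the layout:

```proto
// bool/sibling.proto (hypothetical second file in the same test case)
syntax = "proto3";

message Sibling {
    bool flag = 1;
}
```

```proto
// bool/bool.proto (matches the directory name)
syntax = "proto3";

import "sibling.proto";

message Test {
    Sibling value = 1;
}
```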

### json

`<name>.json` — Test data to validate the message with

```json
{
  "value": true
}
```

### pytest

`test_<name>.py` — Custom test to validate specific aspects of the generated class

```python
from tests.output_betterproto.bool.bool import Test


def test_value():
    message = Test()
    assert not message.value, "Boolean is False by default"
```

## Standard tests

The following tests are automatically executed for all cases:

- Can the generated python code be imported?
- Can the generated message class be instantiated?
- Is the generated code compatible with Google's `grpc_tools.protoc` implementation?
  - only when a `.json` file is present (see the round-trip sketch below)
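A rough sketch of what that JSON round-trip check amounts to, not the actual test code; the `output_reference` module path and the `bool` case names are assumptions used for illustration:

```python
from google.protobuf import json_format

from tests.output_betterproto.bool.bool import Test as BetterprotoTest
from tests.output_reference.bool.bool_pb2 import Test as ReferenceTest  # assumed path


def test_json_compatibility():
    payload = '{"value": true}'

    # Parse the same JSON payload with both implementations...
    ours = BetterprotoTest().from_json(payload)
    theirs = json_format.Parse(payload, ReferenceTest())

    # ...and expect identical binary wire data from both.
    assert bytes(ours) == theirs.SerializeToString()
```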

## Running the tests

- `pipenv run generate`
  This generates:
  - `betterproto/tests/output_betterproto` — the plugin-generated python classes
  - `betterproto/tests/output_reference` — reference implementation classes
- `pipenv run test` (both commands are shown together below)
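Put together, a full run from the repository root looks like this:

```sh
# Regenerate both output trees, then run the whole suite.
pipenv run generate
pipenv run test
```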

## Intentionally failing tests

The standard test suite includes tests that are expected to fail. These tests document known bugs and missing features that are intended to be corrected in the future.

When running pytest, they show up as `x` or `X` in the test results.

```
betterproto/tests/test_inputs.py ..x...x..x...x.X........xx........x.....x.......x.xx....x...................... [ 84%]
```
- `.` — PASSED
- `x` — XFAIL: expected failure
- `X` — XPASS: expected failure, but still passed

Test cases marked for expected failure are declared in `inputs/config.py`.
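The exact contents of that file evolve with the known issues; a minimal sketch of the idea, assuming it exposes a set of case names to mark as xfail:

```python
# inputs/config.py (sketch): directory names of test cases that the
# suite marks as expected failures. The entry below is a placeholder,
# not a real case from the repository.
xfail = {
    "some_known_broken_case",
}
```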