Commit

Updated getting started docs to include inference code for Jupyter Notebooks
bmunday3 committed Oct 26, 2023
1 parent 08c43dc commit c8462d6
Showing 2 changed files with 42 additions and 3 deletions.
docs/docs/getting-started/full-workflow.md — 23 additions & 1 deletion
@@ -124,10 +124,32 @@ Serving on: 45000
Next, open a Python file (new or existing) and paste the following inference code. Again, we will use a convenience import from Chassis's quickstart mode to load a sample piece of data.

!!! example "Inference"
=== "Python"
=== "Jupyter Notebook"

The inference code below leverages Chassis's `OMIClient`. This client provides a convenience wrapper around a gRPC client, allowing you to interact with the gRPC server running inside your model container.

```python
from chassis.client import OMIClient
from chassis.guides import DigitsSampleData

# Connect to the model container's gRPC server (serving on localhost:45000, per the output above)
client = OMIClient("localhost", 45000)
# Call and view results of status RPC
status = await client.status()
print(f"Status: {status}")
# Submit inference with quickstart sample data
res = await client.run(DigitsSampleData)
# Parse results from output item
result = res.outputs[0].output["results.json"]
# View results
print(f"Result: {result}")
```

Execute this code to perform an inference against your running container.

=== "Other Python IDE"

The inference code below leverages Chassis's `OMIClient`. Notice this code is slightly different from the Jupyter notebook version: IPython provides built-in async functionality, so notebooks can `await` calls at the top level, while a standard Python script must manage the event loop itself.

```python
import asyncio
from chassis.client import OMIClient
# ... (remainder of this code block is not shown in the diff)
```
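For reference, the hidden remainder presumably follows the same pattern as the quickstart version below: define an async function and drive it with `asyncio.run`. A minimal sketch of the complete script, assuming the container is serving on localhost:45000 and borrowing the `run_test` name from the quickstart:

```python
import asyncio
from chassis.client import OMIClient
from chassis.guides import DigitsSampleData

async def run_test():
    # Connect to the model container's gRPC server
    client = OMIClient("localhost", 45000)
    # Call and view results of status RPC
    status = await client.status()
    print(f"Status: {status}")
    # Submit inference with quickstart sample data
    res = await client.run(DigitsSampleData)
    # Parse results from output item
    result = res.outputs[0].output["results.json"]
    print(f"Result: {result}")

# A plain Python script must start the asyncio event loop itself
asyncio.run(run_test())
```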
docs/docs/getting-started/quickstart.md — 19 additions & 2 deletions
@@ -62,9 +62,26 @@ To quickly test your new model container, you can leverage Chassis's `OMIClient.
Open a Python file (new or existing) and paste the following inference code. Again, we will use Chassis's quickstart mode to load a sample piece of data.

!!! example "Inference"
=== "Python"
=== "Jupyter Notebook"
The inference code below leverages Chassis's `OMIClient`. This client provides a convenience wrapper around a gRPC client, allowing you to interact with the gRPC server running inside your model container.

```python
from chassis.client import OMIClient
from chassis.guides import DigitsSampleData

# Execute the test_container method to spin up the container, run inference, and return the results
res = await OMIClient.test_container(container_name="my-first-chassis-model", inputs=DigitsSampleData, pull=False)
# Parse results from output item
result = res.outputs[0].output["results.json"]
# View results
print(f"Result: {result}")
```

Execute this code to perform an inference against your running container.

=== "Other Python IDE"
The inference code below leverages Chassis's `OMIClient` to run inference. Notice this code is slightly different from the Jupyter notebook version: IPython provides built-in async functionality, so notebooks can `await` calls at the top level, while a standard Python script must manage the event loop itself.

```python
import asyncio
from chassis.client import OMIClient
# ... (the diff hides the unchanged middle of this block, including the run_test() definition)
asyncio.run(run_test())
```
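Pieced together from the visible lines, the full script presumably looks like the following minimal sketch, reusing the `test_container` arguments from the Jupyter tab above:

```python
import asyncio
from chassis.client import OMIClient
from chassis.guides import DigitsSampleData

async def run_test():
    # Spin up the container, run inference on the sample data, and return the results
    res = await OMIClient.test_container(
        container_name="my-first-chassis-model",
        inputs=DigitsSampleData,
        pull=False,
    )
    # Parse results from output item
    result = res.outputs[0].output["results.json"]
    # View results
    print(f"Result: {result}")

# Drive the coroutine explicitly outside of IPython
asyncio.run(run_test())
```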

Execute this code to perform an inference against your running container.

A successful inference run should yield the following result:

*(expected output not shown in this diff)*
