This repository contains a Python application that uses AI models to generate code based on instructions received through an API. The application processes the instructions, runs the relevant AI models, and returns the generated code as a response.

The project architecture is designed as a two-tier system. The first tier (this repo) handles request processing, interacts with the API, and triggers the AI models. The second tier hosts AI inference and uses the preprocessed data it receives to run model inference. The architecture diagram below provides an overview of the system:
To run the application using Docker Compose, follow these steps:

1. Navigate to the root directory of the project:

   ```
   cd /path/to/your/gen/repo
   ```

2. Build and start the services using Docker Compose:

   ```
   docker-compose up --build
   ```

   The `--build` option ensures that Docker Compose builds the images before starting the services if they do not already exist.
3. The services should now be running, and you can access them on the specified ports:

   - Signature Generator: http://localhost:8001
   - Code Generator: http://localhost:8002
   - Inference: http://localhost:8003

4. To stop the services, press `Ctrl+C` in the terminal where Docker Compose is running.
Note: No Docker image is published at the time of writing, so the images have to be built locally as shown above.
Note: Ensure that you have Docker and Docker Compose installed on your machine.
For more detailed instructions or troubleshooting, refer to the Docker documentation: Get Docker and Install Docker Compose.
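Once the stack is up, you can verify that all three services are reachable before using them. The sketch below polls each service's `/docs` page (FastAPI's standard documentation endpoint) using only the standard library; the port mapping is taken from the compose setup above.

```python
import urllib.request
import urllib.error

# Port mapping taken from the docker-compose setup above.
SERVICES = {
    "signature_generator": "http://localhost:8001",
    "code_generator": "http://localhost:8002",
    "inference": "http://localhost:8003",
}

def check_service(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the service answers on its /docs endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/docs", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: the service is not up yet.
        return False

if __name__ == "__main__":
    for name, base_url in SERVICES.items():
        print(f"{name}: {'up' if check_service(base_url) else 'down'}")
```

A service reporting `down` usually just means the container is still building or starting.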
To set up the project and ensure proper functionality, follow these steps:

1. Clone the repository to your local machine:

   ```
   git clone https://github.com/OpenFn/gen.git
   ```

2. Copy the example env file and set your API keys as required:

   ```
   cp ./services/inference/.env.example ./services/inference/.env
   ```

3. Navigate to the desired module's directory:

   ```
   cd services/<module>
   ```

4. Install the required dependencies using Poetry:

   ```
   poetry install
   ```

Repeat steps 3 and 4 for each module under the `services` directory that you want to use.
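If you want all modules set up in one pass, the loop below is a sketch of the per-module steps above; it assumes the module directory names match the three service names, and skips any directory that does not exist.

```shell
# Sketch: install dependencies for every service module in one pass.
# Module names are assumed to match the services listed above.
for module in signature_generator code_generator inference; do
  if [ -d "services/$module" ]; then
    (cd "services/$module" && poetry install)
  else
    echo "skipping $module: directory not found"
  fi
done
```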
You can start each service using the following steps:

1. Navigate to the desired service module:

   ```
   cd services/<module>
   ```

2. Run the service using Poetry:

   ```
   poetry run ./run.sh
   ```
Additionally, each module includes a `demo.py` file that can be executed. To run a single module's flow:

1. Navigate to the module's directory:

   ```
   cd services/<module>
   ```

2. Run the demo:

   ```
   poetry run python demo.py
   ```
To execute the entire generation process for the provided samples, run the following commands:

```
cd services/
python3 demo.py
```

This prepares the data, performs the requests, and saves the outputs (Adaptor.d.ts, Adapter.js, and Adapter.test.js).
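After the demo completes, the generated artifacts should appear on disk. The file names below are taken from the description above; where `demo.py` writes them is an assumption, so the directory is passed in as a parameter.

```python
from pathlib import Path

# Artifact names from the demo description above; the output
# directory is an assumption and should be passed by the caller.
EXPECTED_OUTPUTS = ["Adaptor.d.ts", "Adapter.js", "Adapter.test.js"]

def missing_outputs(out_dir: str) -> list[str]:
    """Return the expected artifacts not yet present in out_dir."""
    out = Path(out_dir)
    return [name for name in EXPECTED_OUTPUTS if not (out / name).is_file()]

if __name__ == "__main__":
    print(missing_outputs("services"))
```

An empty list means all three outputs were written; anything else tells you which file the run failed to produce.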
The project structure is organized as follows:

- `services/`: Contains the three services and the demo file.
- `utils/`: Utility functions and helper modules for processing instructions and generating code.
- `tests/`: Test files for the application.
The service APIs are documented using OpenAPI (FastAPI). You can view the API documentation for each running service by navigating to its `/docs` endpoint. Here are the URLs for each service:
- Signature Generator Service: http://localhost:8001/docs
- Code Generator Service: http://localhost:8002/docs
- Inference Service: http://localhost:8003/docs
Access the respective URL to explore and interact with the API documentation. If you run the services on a different host or ports, adjust the URLs accordingly.
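Besides the human-readable `/docs` pages, FastAPI also serves a machine-readable schema at `/openapi.json`, which is handy for listing a service's endpoints programmatically. The port below assumes the default compose mapping shown above.

```python
import json
import urllib.request

def extract_paths(schema: dict) -> list[str]:
    """Return the endpoint paths listed in an OpenAPI schema."""
    return sorted(schema.get("paths", {}))

def fetch_openapi_paths(base_url: str) -> list[str]:
    """Fetch a running service's OpenAPI schema and list its paths."""
    with urllib.request.urlopen(f"{base_url}/openapi.json", timeout=5) as resp:
        return extract_paths(json.load(resp))

if __name__ == "__main__":
    try:
        print(fetch_openapi_paths("http://localhost:8001"))
    except OSError as exc:
        print(f"service not reachable: {exc}")
```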
Contributions to this project are welcome. Feel free to open issues or submit pull requests for improvements, bug fixes, or new features.
This project is licensed under the MIT License - see the LICENSE file for details.