
Install on local server - Not on google cloud #41

Open
v-pourahmadi opened this issue Apr 15, 2020 · 6 comments
Comments

@v-pourahmadi

Hi all,

Thanks a lot for the great project.
We do not have access to google cloud but want to deploy your project on our server to start learning all the parts. As we are not experts in cloud deployment and tools, we do not know which parts and configuration files should be changed so we can go from the G-cloud deployment to our local server deployment.

Could you please guide what is the easiest way to make such modifications? and possibly do you have the setup instruction and files for such deployment

Thanks a lot

@kaiwaehner
Owner

I think it depends on whether you still want to leverage Kubernetes.

If yes, then you need to adjust the Terraform scripts to use a local infrastructure. This should be relatively trivial.
If no, then it is probably easier to set up your own infrastructure and just use the project as a template for the configurations of the pipeline, connectors, test generator, etc.
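
As a rough sketch of the first option: assuming you already have a local Kubernetes cluster (e.g. minikube or kubeadm) with a working kubeconfig, the GKE-specific Terraform resources could be swapped for the generic kubernetes provider pointed at that cluster. All names and paths below are illustrative, not the project's actual files:

```hcl
# Illustrative only: replace the google/GKE provider blocks with a
# kubernetes provider that talks to an existing local cluster.
provider "kubernetes" {
  config_path    = "~/.kube/config"   # local kubeconfig instead of GKE credentials
  config_context = "my-local-cluster" # hypothetical context name
}

# A namespace for the Confluent Platform components, created the same
# way the GCP scripts would have created it on GKE.
resource "kubernetes_namespace" "operator" {
  metadata {
    name = "operator"
  }
}
```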

@ora0600 or @sbaier1 might be able to add some more thoughts.

@ora0600
Contributor

ora0600 commented Apr 15, 2020

Yes, you are right. The Confluent Operator supports different k8s environments (AWS, Google, Azure, and also on-prem k8s).
You need to replace gcp.yml with your own yml (there is a template in the operator under providers), and then you can drop the deployment of the GCP k8s engine completely.
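
For illustration, a provider file for an on-prem cluster typically differs from gcp.yml mainly in the provider and storage sections. A minimal sketch, with field names assumed from the Operator's provider templates rather than copied from this repo, so verify them against the template you download:

```yaml
## Hypothetical private.yaml, modeled on the Operator's provider templates.
global:
  provider:
    name: private          # instead of gcp
    region: local
    kubernetes:
      deployment:
        zones:
          - local-zone-1   # placeholder zone label for an on-prem cluster
  storageClassName: local-storage  # a StorageClass you provision yourself
```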

@v-pourahmadi
Author

Thanks a lot for your help. We will work on it and get back to you in case we face any difficulties.

@v-pourahmadi
Author

By the way, in terraform-gcp/main.tf I see some references to the Google Cloud APIs. I was wondering whether changing confluent/gcp.yaml is enough, or whether something in the "terraform" folder needs to be changed as well?

@kaiwaehner
Owner

In theory, that should do it, as Terraform does all the rest.

In practice, there are some other dependencies, e.g. the variables file where you configure your account and region:
https://github.com/kaiwaehner/hivemq-mqtt-tensorflow-kafka-realtime-iot-machine-learning-training-inference/blob/master/infrastructure/terraform-gcp/variables.tf

Also, the storage of the TensorFlow model uses a GCS (Google Cloud Storage) bucket. But if you get this far, you have already set up 98% of it, so don't worry about it too much.
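
For reference, the GCP-specific knobs in that variables file look roughly like this (a trimmed, illustrative sketch; the names and defaults are examples, so check variables.tf itself for the exact ones). These are the kinds of values you would need to remove or replace for a local deployment:

```hcl
# Illustrative GCP-specific variables; names are examples, not the
# file's exact identifiers.
variable "project" {
  description = "GCP project ID"
  type        = string
}

variable "region" {
  description = "GCP region for the GKE cluster"
  type        = string
  default     = "europe-west1"
}
```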

I propose the following:

  1. Run a local setup with Confluent Operator (download the 30-day trial and use the getting-started guide) so that you have a running instance on your Kubernetes cluster. This way you understand how the infrastructure works.
  2. Now replace gcp.yml with your own config and try creating the infrastructure.
  3. Fix any other issues or dependencies.

@v-pourahmadi
Author

Thanks a lot, kaiwaehner, for your explanations.
