Cube Studio

English | 简体中文

Infra

(architecture diagram)

cube-studio is a one-stop cloud-native machine learning platform open-sourced by Tencent Music. It currently provides the following functions:

  • 1. Data management: feature store with online and offline features; dataset management for structured and media data; data labeling platform
  • 2. Development: notebooks (VSCode/Jupyter); Docker image management; online image building
  • 3. Training: drag-and-drop online pipelines; open template market; distributed computing/training tasks, e.g. tf/pytorch/mxnet/spark/ray/horovod/kaldi/volcano (see the sketch after this list); batch priority scheduling; resource monitoring/alerting/balancing; cron scheduling
  • 4. AutoML: nni, katib, ray
  • 5. Inference: model management; serverless traffic control; tf/pytorch/onnx/tensorrt model deployment with tfserving/torchserver/onnxruntime/triton; vGPU; load balancing, high availability, elastic scaling
  • 6. Infra: multi-user; multi-project; multi-cluster; edge cluster mode; blockchain-based sharing
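
The distributed training templates (tfjob, pytorchjob, horovod, ...) wrap the frameworks' own multi-worker launchers. As a minimal sketch only, and not cube-studio-specific code, the snippet below shows the kind of worker script a pytorchjob-style task typically runs; it assumes the launcher (e.g. torchrun or a Kubernetes training operator) sets the usual RANK/WORLD_SIZE/MASTER_ADDR environment variables.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Join the process group; the launcher supplies RANK/WORLD_SIZE/MASTER_ADDR.
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU nodes
    rank = dist.get_rank()

    model = torch.nn.Linear(10, 1)
    ddp_model = DDP(model)                   # gradients are synchronized across workers
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()

    for step in range(5):
        x, y = torch.randn(32, 10), torch.randn(32, 1)
        optimizer.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()                      # all-reduce of gradients happens here
        optimizer.step()
        if rank == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The same script runs unchanged on a single machine (e.g. `torchrun --nproc_per_node=2 train.py`) or across many workers, which is what makes it a good fit for a templated, multi-machine task.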

Doc

https://github.com/tencentmusic/cube-studio/wiki

WeChat group

For learning, deployment, consulting, contribution, or cooperation, join the group: WeChat ID luanpeng1234, remark <open source>. See also the construction guide.

Job Template

Tips:

  • 1. You can develop your own templates; they are easy to build and can be tailored to your own scenarios (see the sketch after the table below).

| Template | Type | Description |
| --- | --- | --- |
| linux | base | Custom single-machine operating environment; free to implement any custom single-machine functionality |
| datax | import export | Import and export of heterogeneous data sources |
| hadoop | data processing | hdfs, hbase, sqoop, spark client |
| sparkjob | data processing | spark serverless |
| volcanojob | data processing | volcano multi-machine distributed framework |
| ray | data processing | python ray multi-machine distributed framework |
| ray-sklearn | machine learning | sklearn on the ray framework, with multi-machine distributed parallel computing |
| xgb | machine learning | xgb model training and inference |
| tfjob | deep learning | Multi-machine distributed training of tensorflow |
| pytorchjob | deep learning | Multi-machine distributed training of pytorch |
| horovod | deep learning | Multi-machine distributed training of horovod |
| paddle | deep learning | Multi-machine distributed training of paddle |
| mxnet | deep learning | Multi-machine distributed training of mxnet |
| kaldi | deep learning | Multi-machine distributed training of kaldi |
| tfjob-train | model train | Distributed training of tensorflow: plain and runner methods |
| tfjob-runner | model train | Distributed training of tensorflow: runner method |
| tfjob-plain | model train | Distributed training of tensorflow: plain method |
| tf-model-evaluation | model evaluate | Distributed model evaluation of tensorflow 2.3 |
| tf-offline-predict | model inference | Distributed offline model inference of tensorflow 2.3 |
| model-register | model service | Register a model with the platform |
| model-offline-predict | model service | Distributed offline model inference, framework-agnostic |
| deploy-service | model service | Deploy an inference service |
| media-download | multimedia data processing | Distributed download of media files |
| video-audio | multimedia data processing | Distributed extraction of audio from video |
| video-img | multimedia data processing | Distributed extraction of images from video |
| object-detection-on-darknet | machine vision | Object detection with darknet yolov3 |
| ner | natural language | Named Entity Recognition |
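
A custom template is, broadly, a container image whose entrypoint consumes the parameters a user fills in when dragging the task onto a pipeline. The sketch below is only an illustrative, hypothetical entrypoint (the `--src`/`--dst` arguments are made up for the example); the actual template registration fields and conventions are described in the wiki.

```python
import argparse
import os
import shutil

def main():
    # Hypothetical arguments: the real names are whatever you declare
    # when registering the template on the platform.
    parser = argparse.ArgumentParser(description="example custom job template")
    parser.add_argument("--src", required=True, help="input path mounted into the task pod")
    parser.add_argument("--dst", required=True, help="output path read by downstream tasks")
    args = parser.parse_args()

    os.makedirs(os.path.dirname(args.dst) or ".", exist_ok=True)
    shutil.copy(args.src, args.dst)  # stand-in for the template's real processing logic
    print(f"copied {args.src} -> {args.dst}")

if __name__ == "__main__":
    main()
```

Package a script like this into an image, declare its arguments when registering the template, and the platform can then schedule it as a pipeline task alongside the built-in templates above.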

Deploy

Deployment instructions are in the wiki: https://github.com/tencentmusic/cube-studio/wiki


Contributors

algorithm: @hujunaifuture @jaffe-fly @JLWLL @ma-chengcheng @chendile

platform: @xiaoyangmai @VincentWei2021 @SeibertronSS @cyxnzb @gilearn @wulingling0108

Company

(logos of companies using cube-studio)
