
# pretrained-models

Here are 640 public repositories matching this topic...

BERT (Bidirectional Encoder Representations from Transformers) is a transformer-based method for learning language representations. It is a bidirectional transformer pre-trained on a large corpus with two objectives: masked language modeling and next-sentence prediction.

  • Updated Aug 10, 2020
  • Python
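The masked language modeling objective mentioned above can be sketched in a few lines: a random subset of input tokens is replaced with a `[MASK]` token, and the model is trained to predict the originals. A minimal, simplified sketch (real BERT masks ~15% of tokens and sometimes keeps or randomly replaces the chosen token instead of masking it; `mask_tokens` here is a hypothetical helper, not part of any library):

```python
import random

MASK_TOKEN = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """Replace a random subset of tokens with [MASK], keeping the
    originals as labels -- the masked-language-modeling setup BERT
    is pre-trained with (simplified)."""
    rng = rng or random.Random(0)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK_TOKEN)
            labels.append(tok)   # the model must predict this token
        else:
            masked.append(tok)
            labels.append(None)  # this position is not scored
    return masked, labels

tokens = "the cat sat on the mat".split()
masked, labels = mask_tokens(tokens, mask_prob=0.3, rng=random.Random(42))
```

During pre-training, the loss is computed only at the masked positions, which is what makes the learned representations bidirectional: the model sees context on both sides of each mask.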

This Flask (Python) web app uses a pre-trained, Caffe-based image colorization model. Users upload black-and-white images, and the app applies the model to generate colored versions automatically, providing an intuitive, interactive way to add color to grayscale images.

  • Updated Apr 28, 2024
  • CSS
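Caffe-based colorization models commonly work in Lab color space: the network takes the lightness (L) channel of the grayscale input and predicts the two chrominance (a, b) channels, which are then recombined into a full-color image. A minimal sketch of that recombination step, assuming NumPy and a stand-in `predict_ab` function (a hypothetical placeholder for the actual Caffe forward pass, which the repository itself would supply):

```python
import numpy as np

def predict_ab(L):
    """Stand-in for the Caffe model's forward pass. A real model
    returns predicted a/b chrominance per pixel; here we return
    zeros (neutral gray) just so the sketch is runnable."""
    h, w = L.shape
    return np.zeros((h, w, 2), dtype=L.dtype)

def colorize(gray):
    """Combine the input lightness channel with predicted a/b
    channels into a 3-channel Lab image of shape (H, W, 3)."""
    L = gray.astype(np.float32)   # lightness comes from the grayscale input
    ab = predict_ab(L)            # the network predicts chrominance
    lab = np.concatenate([L[..., None], ab], axis=-1)
    return lab                    # a real app would convert Lab -> RGB here

gray = np.full((4, 4), 50.0, dtype=np.float32)  # toy 4x4 grayscale image
lab = colorize(gray)
```

The appeal of this design is that the original pixel lightness is preserved exactly; the model only has to hallucinate plausible color, which is why the results look natural even when the predicted hues are wrong.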
