Transformer translator website with multithreaded web server in Rust


tate8/translator


translator

Description

An English-to-Spanish translation website running on a multithreaded web server.
Check out this notebook for the ML code.

Contributor(s)

  • Tate Larkin

Server

I created a multithreaded web server in Rust from scratch to serve the website. The server interacts with the machine learning model to dynamically process user requests and serve responses. It uses a thread pool of worker threads that can be allocated to tasks: workers accept the code that needs to be executed, run it on separate threads in parallel, and return to the pool to accept a new task when they finish.

Machine Learning Model

I used a Transformer model as described in the "Attention Is All You Need" paper. This architecture relies only on attention mechanisms to track relationships and find patterns in data, using self-attention to weight the significance of each part of the input. Inputs are embedded, positionally encoded, and sent through an encoder and decoder built from stacked Multi-Head Attention layers, which run the attention mechanism multiple times in parallel. A final point-wise feed-forward network produces the predictions.

Scaled Dot-Product Attention and Multi-Head Attention diagrams

Full model diagram

Dataset

TensorFlow English-Spanish dataset

Website

For the website design, I mimicked a messaging app such as Apple's Messages and other texting software. I used mobile-first design so the layout stays responsive and readable on any screen size.