
nico-martin/markdown-editor


md.edit

md.edit is a web-based markdown editor that uses modern Progressive Web App features to provide a great, cross-platform editing experience.

md.nico.dev

File System Access API

The main goal of this project is to showcase the File System Access API.

This API allows a web app to access local files. That means files can be opened and saved directly in the browser.

https://web.dev/file-system-access/
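Opening and saving a local file with this API can be sketched roughly like this (a hypothetical snippet; the function names are mine, not md.edit's):

```javascript
// Minimal sketch of the File System Access API for a markdown editor.
// `openMarkdownFile` and `saveMarkdownFile` are illustrative names.
async function openMarkdownFile() {
  // Ask the user to pick a markdown file; the browser returns a file handle.
  const [handle] = await showOpenFilePicker({
    types: [{ description: 'Markdown', accept: { 'text/markdown': ['.md'] } }],
  });
  const file = await handle.getFile();
  return { handle, text: await file.text() };
}

async function saveMarkdownFile(handle, text) {
  // The same handle can later be used to write back to the original file.
  const writable = await handle.createWritable();
  await writable.write(text);
  await writable.close();
}
```

Keeping the handle around is what makes "Save" (as opposed to "Save as…") possible: the app can write back to the same file without asking the user again.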

File Handling

To make it even more convenient to open and save files, this app also uses the File Handling API. Once installed, the app can register itself as a handler for markdown files, so markdown files can be opened directly in the app from the file explorer.
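Consuming a file the operating system hands to the installed PWA can be sketched like this (hypothetical code; it assumes a `file_handlers` entry in the web app manifest and an editor hook of your own):

```javascript
// Hypothetical sketch: receive files launched from the OS file explorer.
// Requires a manifest entry such as:
//   "file_handlers": [{ "action": "/", "accept": { "text/markdown": [".md"] } }]
function registerFileHandling(openInEditor) {
  // `openInEditor` is an assumed callback that loads text into the editor.
  if (typeof window === 'undefined' || !('launchQueue' in window)) return false;
  window.launchQueue.setConsumer(async (launchParams) => {
    for (const handle of launchParams.files) {
      const file = await handle.getFile();
      openInEditor(await file.text());
    }
  });
  return true;
}
```

The launched files arrive as the same kind of file handles the File System Access API uses, so the app can both read and later save them.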

Installable PWA

This web app is a Progressive Web App and can be installed on your device.
It uses a web app manifest (generated by the Vite PWA Plugin) to present itself to the browser as an installable PWA.
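A Vite PWA Plugin setup along those lines might look like this (a minimal, hypothetical config; the option values are illustrative, not copied from md.edit):

```javascript
// vite.config.js -- hypothetical VitePWA setup
import { defineConfig } from 'vite';
import { VitePWA } from 'vite-plugin-pwa';

export default defineConfig({
  plugins: [
    VitePWA({
      registerType: 'autoUpdate', // swap in a new service worker on deploy
      manifest: {
        name: 'md.edit',
        short_name: 'md.edit',
        start_url: '/',
        display: 'standalone',
      },
    }),
  ],
});
```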

Offline support

It also comes with a service worker (generated by Workbox) to cache all assets and make the app available offline.
Furthermore, the downloaded AI model files are cached and can be used offline.
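With Workbox, that combination can be sketched as a precache of the built assets plus a runtime-caching rule (a hypothetical fragment of the VitePWA config; the URL pattern and cache name are assumptions):

```javascript
// Hypothetical `workbox` section of the VitePWA config
VitePWA({
  workbox: {
    globPatterns: ['**/*.{js,css,html,svg,png,woff2}'], // precached app shell
    runtimeCaching: [
      {
        // Assumed model host: cache large downloaded model files on first use.
        urlPattern: /^https:\/\/huggingface\.co\/.*/i,
        handler: 'CacheFirst',
        options: {
          cacheName: 'model-cache',
          expiration: { maxEntries: 20 },
        },
      },
    ],
  },
});
```

A cache-first strategy fits model files well: they are large, immutable per version, and expensive to re-download.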

AI

This app uses a couple of AI models to provide additional features. All the models run directly in the browser and don't require any server communication.

Translation

Sections of the markdown document can be translated into other languages using AI models.
For this, md.edit uses Transformers.js to run a translation model in a web worker and pipes the translated output back to the editor.
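A translation worker built on Transformers.js could be sketched like this (hypothetical code; the model name and message shape are assumptions, not md.edit's actual implementation):

```javascript
// Hypothetical web-worker sketch for in-browser translation with
// Transformers.js ('@xenova/transformers').
let translatorPromise = null;

function getTranslator() {
  // Lazily load the pipeline so the (large) model is only fetched once.
  if (!translatorPromise) {
    translatorPromise = import('@xenova/transformers').then(({ pipeline }) =>
      pipeline('translation', 'Xenova/nllb-200-distilled-600M'));
  }
  return translatorPromise;
}

if (typeof self !== 'undefined' && typeof window === 'undefined') {
  // Running inside a worker: translate each incoming section, post it back.
  self.onmessage = async ({ data }) => {
    const translator = await getTranslator();
    const [result] = await translator(data.text, {
      src_lang: data.from, // e.g. 'eng_Latn'
      tgt_lang: data.to,   // e.g. 'deu_Latn'
    });
    self.postMessage(result.translation_text);
  };
}
```

Keeping the model in a worker means the multi-second first load and the inference itself never block the editor's UI thread.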

Speech to Text

Instead of typing paragraphs yourself, md.edit offers a speech-to-text feature: record whatever you want to say and the app will transcribe it. It uses different versions of https://huggingface.co/openai/whisper-base that, again, run in a web worker using Transformers.js.
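The main-thread side of such a feature might be sketched like this (hypothetical code; it assumes a worker that runs Whisper via Transformers.js and replies with `{ text }`):

```javascript
// Hypothetical sketch: record microphone audio and hand the raw samples to a
// worker that transcribes them with a Whisper model.
async function recordAndTranscribe(worker, durationMs = 5000) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  recorder.start();
  await new Promise((resolve) => setTimeout(resolve, durationMs));
  recorder.stop();
  await new Promise((resolve) => { recorder.onstop = resolve; });

  // Whisper expects raw 16 kHz samples, so decode the recorded blob first.
  const blob = new Blob(chunks, { type: recorder.mimeType });
  const audioCtx = new AudioContext({ sampleRate: 16000 });
  const audioBuffer = await audioCtx.decodeAudioData(await blob.arrayBuffer());

  return new Promise((resolve) => {
    worker.onmessage = ({ data }) => resolve(data.text);
    worker.postMessage(audioBuffer.getChannelData(0));
  });
}
```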

Text generation / LLM

In addition to the smaller translation and speech to text models, the app also uses large language models to generate or improve sections of the file.

For this, md.edit uses LLMs such as Mistral 7B Instruct, compiled for the web using MLC LLM. Everything runs in a web worker, while the calculations are done on the GPU using WebGPU.
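Driving such a model from the browser with WebLLM could look roughly like this (a hypothetical sketch; the model id and prompts are assumptions, not md.edit's code):

```javascript
// Hypothetical sketch of in-browser text generation with WebLLM
// ('@mlc-ai/web-llm'), which exposes an OpenAI-style chat API.
async function improveSection(markdown) {
  const { CreateMLCEngine } = await import('@mlc-ai/web-llm');
  // Downloads the compiled model weights and runs them on the GPU via WebGPU.
  const engine = await CreateMLCEngine('Mistral-7B-Instruct-v0.2-q4f16_1-MLC');
  const reply = await engine.chat.completions.create({
    messages: [
      { role: 'system', content: 'Improve the following markdown section.' },
      { role: 'user', content: markdown },
    ],
  });
  return reply.choices[0].message.content;
}
```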

Special thanks to the team behind WebLLM, on which most of my MLC adapter is based.