
expo-stable-diffusion


Have you ever wondered if it's possible to generate images using Stable Diffusion natively on your iPhone or iPad while taking advantage of Core ML in an Expo and React Native app?

Well, now you can!

💡 Read The Full Detailed Guide

Good to Know

❗️ expo-stable-diffusion currently only works on iOS, because only that platform can run Stable Diffusion models on the Apple Neural Engine!

❗️ This package is not included in Expo Go. You will have to use a Development Build or build the app locally with Xcode!

Getting Started

Start by installing the expo-stable-diffusion module into your Expo managed project:

npx expo install expo-stable-diffusion

Configuration

Update iOS Deployment Target

For the project to build successfully, you have to set the iOS deployment target to 16.2. You can do this by installing the expo-build-properties plugin:

npx expo install expo-build-properties

Configure the plugin by adding the following to your app.json:

{
  "expo": {
    "plugins": [
      [
        "expo-build-properties",
        {
          "ios": {
            "deploymentTarget": "16.2"
          }
        }
      ]
    ]
  }
}

Enable Increased Memory Limit

To prevent memory issues, add the Increased Memory Limit capability to your iOS project. Add the following to your app.json:

{
  "expo": {
    "ios": {
      "entitlements": {
        "com.apple.developer.kernel.increased-memory-limit": true
      }
    }
  }
}

Build Your iOS App

npx expo prebuild --clean --platform ios
npx expo run:ios

Usage

After installation and configuration, you can start generating images using expo-stable-diffusion. Here's a basic example:

import * as FileSystem from "expo-file-system";
import * as ExpoStableDiffusion from "expo-stable-diffusion";

const MODEL_PATH = FileSystem.documentDirectory + "Model/stable-diffusion-2-1";
const SAVE_PATH = FileSystem.documentDirectory + "image.jpeg";

await ExpoStableDiffusion.loadModel(MODEL_PATH);

const subscription = ExpoStableDiffusion.addStepListener(({ step }) => {
  console.log(`Current Step: ${step}`);
});

await ExpoStableDiffusion.generateImage({
  prompt: "a cat coding at night",
  stepCount: 25,
  savePath: SAVE_PATH,
});

subscription.remove();

💡 If you are saving the image in a custom directory, make sure the directory exists. You can create a directory by calling the FileSystem.makeDirectoryAsync(fileUri, options) function from expo-file-system.
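For example, to save generated images into a custom subdirectory, you could create it before calling generateImage. This is a sketch: the images/ directory name and file name are illustrative, not part of the expo-stable-diffusion API.

```typescript
import * as FileSystem from "expo-file-system";

// Illustrative directory name; any path under documentDirectory works.
const IMAGES_DIR = FileSystem.documentDirectory + "images/";

// Create the directory only if it does not exist yet.
const dirInfo = await FileSystem.getInfoAsync(IMAGES_DIR);
if (!dirInfo.exists) {
  await FileSystem.makeDirectoryAsync(IMAGES_DIR, { intermediates: true });
}

// Pass this as savePath to ExpoStableDiffusion.generateImage().
const SAVE_PATH = IMAGES_DIR + "image.jpeg";
```

Passing { intermediates: true } makes makeDirectoryAsync create any missing parent directories as well, so nested paths are safe.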

Obtaining Stable Diffusion Models

To use the expo-stable-diffusion module, you need a converted Core ML Stable Diffusion model. You can convert your own model using Apple's official guide or download pre-converted models from Apple's Hugging Face repository or my Hugging Face repository.
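As one possible route, a pre-converted model can be fetched with the Hugging Face CLI. This is a sketch: it assumes the huggingface_hub CLI is installed (pip install huggingface_hub), and the repository name and target directory are shown for illustration — pick the model variant you actually need.

```shell
# Download a pre-converted Core ML Stable Diffusion model
# (repository name is illustrative; choose the variant you need).
huggingface-cli download apple/coreml-stable-diffusion-2-1-base \
  --local-dir ./stable-diffusion-2-1
```

After downloading, transfer the model files to the device so that they live at the path you pass to loadModel (the MODEL_PATH in the usage example above).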

Troubleshooting

❗️ Model loading and image generation can take a long time, especially on devices with less than 6GB of RAM! Find more information in Q6 of the FAQ section in the ml-stable-diffusion repo.

Running Stable Diffusion on Lower-End Devices

failed to load ANE model

Sponsorship 🩷

I am looking for at least $1,000 in sponsorship so that I can work on this project full-time.

Currently, I dedicate my spare time to the development of this library. Please consider supporting this project if you find expo-stable-diffusion helpful or if you are using it in a production-ready app. This will motivate me to work on improving it and adding new features like Android support!

If you need premium guidance on integrating expo-stable-diffusion into your own project, bug fixes, or any other help, feel free to contact me. I will be happy to help!