Not all image formats are supported. #128

Open
lucius346346 opened this issue Apr 29, 2024 · 6 comments

Comments

@lucius346346

Some image formats don't work correctly in Hoarder.

PNG and BMP can't be added at all using the Web UI.
WEBP can't be parsed with AI - Ollama in my case.

@scubanarc

Didn't test BMP or WEBP, but I agree about PNG. For example:

https://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png

@MohamedBassem
Collaborator

PNGs seem to be working fine for me.

[Screenshot 2024-05-01 at 9:20:21 AM]

As for BMP, yeah, I didn't add support for that just yet. Should be easy to add.

> WEBP can't be parsed with AI - Ollama in my case.

hmmm, yeah, this depends on the model. One thing we can consider is to convert the image before passing it to the tag inference.
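
A rough sketch of what that conversion could look like with sharp (illustrative only: the helper name, the quality setting, and the idea of always re-encoding to JPEG are assumptions, not Hoarder's actual code):

```ts
import sharp from "sharp";

// Re-encode an arbitrary base64-encoded image as base64 JPEG so the
// inference backend only ever receives a format it understands.
async function toJpegBase64(base64Image: string): Promise<string> {
  const jpegBuffer = await sharp(Buffer.from(base64Image, "base64"))
    .jpeg({ quality: 80 }) // quality picked arbitrarily for the sketch
    .toBuffer();
  return jpegBuffer.toString("base64");
}
```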

@lucius346346
Author

> PNGs seem to be working fine for me.

Ok, that one is on me. Misconfiguration of Nginx on my part.

@Deathproof76

> PNGs seem to be working fine for me.
>
> [Screenshot 2024-05-01 at 9:20:21 AM]
>
> As for BMP, yeah, I didn't add support for that just yet. Should be easy to add.
>
> > WEBP can't be parsed with AI - Ollama in my case.
>
> hmmm, yeah, this depends on the model. One thing we can consider is to convert the image before passing it to the tag inference.

The problem with WEBP definitely seems to lie with Ollama's implementation (ollama/ollama#2457); currently only PNG and JPEG are working. Multimodal LLMs based on LLaVA, for example, should be able to handle WebP and many other formats too.

@Deathproof76

Deathproof76 commented May 7, 2024

@MohamedBassem maybe sharp could be used for something like this for Ollama in inference.ts? Convert to temporary .jpeg images that get sent to Ollama and deleted afterwards (disclaimer: not a programmer, I don't understand the code, I just used AI):

```ts
import { Ollama } from "ollama";
import OpenAI from "openai";
import sharp from 'sharp';

import serverConfig from "@hoarder/shared/config";
import logger from "@hoarder/shared/logger";

export interface InferenceResponse {
  response: string;
  totalTokens: number | undefined;
}

export interface InferenceClient {
  inferFromText(prompt: string): Promise<InferenceResponse>;
  inferFromImage(
    prompt: string,
    contentType: string,
    image: string,
  ): Promise<InferenceResponse>;
}

export class InferenceClientFactory {
  static build(): InferenceClient | null {
    if (serverConfig.inference.openAIApiKey) {
      return new OpenAIInferenceClient();
    }

    if (serverConfig.inference.ollamaBaseUrl) {
      return new OllamaInferenceClient();
    }
    return null;
  }
}

class OpenAIInferenceClient implements InferenceClient {
  openAI: OpenAI;

  constructor() {
    this.openAI = new OpenAI({
      apiKey: serverConfig.inference.openAIApiKey,
      baseURL: serverConfig.inference.openAIBaseUrl,
    });
  }

  async inferFromText(prompt: string): Promise<InferenceResponse> {
    const chatCompletion = await this.openAI.chat.completions.create({
      messages: [{ role: "system", content: prompt }],
      model: serverConfig.inference.textModel,
      response_format: { type: "json_object" },
    });

    const response = chatCompletion.choices[0].message.content;
    if (!response) {
      throw new Error(`Got no message content from OpenAI`);
    }
    return { response, totalTokens: chatCompletion.usage?.total_tokens };
  }

  async inferFromImage(
    prompt: string,
    contentType: string,
    image: string,
  ): Promise<InferenceResponse> {
    const chatCompletion = await this.openAI.chat.completions.create({
      model: serverConfig.inference.imageModel,
      response_format: { type: "json_object" },
      messages: [
        {
          role: "user",
          content: [
            { type: "text", text: prompt },
            {
              type: "image_url",
              image_url: {
                url: `data:${contentType};base64,${image}`,
                detail: "low",
              },
            },
          ],
        },
      ],
      max_tokens: 2000,
    });

    const response = chatCompletion.choices[0].message.content;
    if (!response) {
      throw new Error(`Got no message content from OpenAI`);
    }
    return { response, totalTokens: chatCompletion.usage?.total_tokens };
  }
}

class OllamaInferenceClient implements InferenceClient {
  ollama: Ollama;

  constructor() {
    this.ollama = new Ollama({
      host: serverConfig.inference.ollamaBaseUrl,
    });
  }

  async runModel(model: string, prompt: string, image?: string) {
    const chatCompletion = await this.ollama.chat({
      model: model,
      format: "json",
      stream: true,
      messages: [
        { role: "user", content: prompt, images: image ? [image] : undefined },
      ],
    });

    let totalTokens = 0;
    let response = "";
    try {
      for await (const part of chatCompletion) {
        response += part.message.content;
        if (!isNaN(part.eval_count)) {
          totalTokens += part.eval_count;
        }
        if (!isNaN(part.prompt_eval_count)) {
          totalTokens += part.prompt_eval_count;
        }
      }
    } catch (e) {
      // There seems to be a bug in ollama where you can get a successful response but still throw an error.
      // Using stream + accumulating the response so far is a workaround.
      // https://github.com/ollama/ollama-js/issues/72
      totalTokens = NaN;
      logger.warn(
        `Got an exception from ollama, will still attempt to deserialize the response we got so far: ${e}`,
      );
    }

    return { response, totalTokens };
  }

  async inferFromText(prompt: string): Promise<InferenceResponse> {
    return await this.runModel(serverConfig.inference.textModel, prompt);
  }

  async inferFromImage(
    prompt: string,
    contentType: string,
    image: string,
  ): Promise<InferenceResponse> {
    // Convert the image to a Buffer
    const buffer = Buffer.from(image, 'base64');

    // Check if the image format is webp or heic
    const isWebp = contentType.includes('image/webp');
    const isHeic = contentType.includes('image/heic');

    // If the image format is webp or heic, convert it to jpeg
    let convertedBuffer;
    if (isWebp || isHeic) {
      convertedBuffer = await sharp(buffer)
        .jpeg({ quality: 80 }) // You can adjust the quality as needed
        .toBuffer();
    } else {
      convertedBuffer = buffer;
    }

    // Encode the converted image as a base64 string
    const convertedImage = convertedBuffer.toString('base64');

    // Run the model with the converted image.
    // Note: ollama-js expects plain base64 strings in `images`, not data URLs.
    const inferenceResult = await this.runModel(
      serverConfig.inference.imageModel,
      prompt,
      convertedImage,
    );

    // No explicit cleanup is needed here: the temporary buffers are
    // garbage-collected once this method returns.

    return inferenceResult;
  }
}
```

HEIC and WebP are just examples. But it seems that sharp doesn't even support HEIC out of the box (https://obviy.us/blog/sharp-heic-on-aws-lambda/: "only JPEG, PNG, WebP, GIF, AVIF, TIFF and SVG images"). Well, maybe it helps 😅👍
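
A possible variation on the sketch above (also an assumption, not Hoarder's actual code): instead of hardcoding a content-type allowlist, let sharp sniff the format via metadata() and only re-encode when the image isn't already PNG or JPEG.

```ts
import sharp from "sharp";

// Formats assumed safe to pass straight through to Ollama.
const PASS_THROUGH_FORMATS = new Set(["jpeg", "png"]);

async function normalizeForOllama(base64Image: string): Promise<string> {
  const buffer = Buffer.from(base64Image, "base64");
  const { format } = await sharp(buffer).metadata();

  // Already JPEG or PNG: hand the original base64 over unchanged.
  if (format && PASS_THROUGH_FORMATS.has(format)) {
    return base64Image;
  }

  // Anything else (e.g. WebP) gets re-encoded to JPEG. This still throws for
  // formats sharp itself can't decode, such as HEIC with the prebuilt
  // binaries, so callers need their own error handling.
  const jpeg = await sharp(buffer).jpeg({ quality: 80 }).toBuffer();
  return jpeg.toString("base64");
}
```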

@MohamedBassem
Collaborator

@Deathproof76 thanks for sharing the code, I'm already working on something similar using sharp as well :)
