
The squared l2 distance values are different from that of official OpenFace demos. #3

Open
osmszk opened this issue Dec 19, 2017 · 2 comments

Comments

@osmszk

osmszk commented Dec 19, 2017

Thank you for publishing such a nice library!
However, I'm having a little trouble with the h5 file.
I tried "nn4.small2.v1.h5" and used it to compare the same face images as the official demo.

As a result, the squared L2 distance between clapton-1 and clapton-2 is very small, 0.0360859, even though the two images are of the same person.
The value between clapton-1 and lennon-2 is 0.0384319.
The value between clapton-2 and lennon-2 is 0.0224784.

On the other hand, the official demo values are here.

clapton-1 and clapton-2

  • 0.318360479234912674

clapton-1 and lennon-2

  • 1.447068150294569921

clapton-2 and lennon-2

  • 1.520698983951225713

I suspect my code for calculating the distance is wrong.

Here is the code. Do you see any problem?

from keras.models import load_model
from keras.utils import CustomObjectScope
import tensorflow as tf
from keras.preprocessing.image import load_img, img_to_array
import numpy as np
from matplotlib.pyplot import imshow

img_path = './images/'

# 96 x 96, already aligned, cropped faces
file1 = 'clapton-1_aligned.png'
file2 = 'clapton-2_aligned.png'
file3 = 'lennon-2_aligned.png'

with CustomObjectScope({'tf': tf}):
    model = load_model('./model/nn4.small2.v1.h5')

def face_vector(file):
    """Load an aligned face image and return its embedding vector."""
    img = load_img(img_path + file, target_size=(96, 96))
    imshow(np.asarray(img))
    x = img_to_array(img)
    x = np.expand_dims(x, axis=0)  # add batch dimension

    p = model.predict(x)
    return p[0]

rep_clapton1 = face_vector(file1)
rep_clapton2 = face_vector(file2)

# clapton-1 and clapton-2
d1 = rep_clapton1 - rep_clapton2
distance1 = np.dot(d1, d1)  # squared L2 distance
print(distance1)

All code is here
https://github.com/osmszk/Keras-OpenFace-test/blob/master/Keras-Openface-test.ipynb

@pkmandke

pkmandke commented Mar 29, 2019

This is a little more than a year after you asked.
Nevertheless, the reason may be that you haven't normalized the input image before computing the forward pass.
Try doing this before feeding the image to model.predict:
img = np.around(np.transpose(img, (0,1,2))/255.0, decimals=12)
You can refer to the "generate embeddings" section here for more details.
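To make the suggestion above concrete, here is a minimal sketch of the normalization step in isolation, using only NumPy. The `preprocess` helper name is mine, not part of Keras or OpenFace; the point is simply that raw 0-255 pixel values are scaled into [0, 1] before they reach `model.predict` (the `np.transpose(img, (0,1,2))` in the snippet above is an identity operation on an HWC array, so only the division matters):

```python
import numpy as np

def preprocess(img_array):
    # Scale raw 0-255 pixel values into [0, 1]; the rounding to 12
    # decimals mirrors the snippet quoted above and is otherwise harmless.
    return np.around(img_array / 255.0, decimals=12)

# Simulate a batch of one 96x96 RGB face crop with raw pixel values.
raw = np.random.default_rng(0).integers(0, 256, size=(1, 96, 96, 3)).astype(np.float32)
x = preprocess(raw)

print(x.shape)                            # (1, 96, 96, 3)
print(x.min() >= 0.0 and x.max() <= 1.0)  # True
```

In the original `face_vector` function this would slot in between `np.expand_dims` and `model.predict`.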

@bdiaz29

bdiaz29 commented Jun 26, 2020

I got similar results without normalizing the input:
0.036086053
0.03843207
0.022478495

after normalizing the input I got

0.3701709
1.8563383
2.1030128

which is closer to the demo.

I used the model on my own dataset, using MTCNN to align the faces, and got an average squared L2 distance of 0.8 for the same person and 1.4 for different people.
Is that to be expected?
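Averages like 0.8 and 1.4 only matter relative to whatever decision threshold you pick, so one way to answer this empirically is to sweep a threshold over known same/different pairs. Here is a small illustrative sketch; `squared_l2`, `same_person`, and the 1.1 threshold are all assumptions of mine for demonstration, not part of the OpenFace or Keras-OpenFace API, and the right cutoff should be tuned on your own aligned data:

```python
import numpy as np

def squared_l2(a, b):
    # Squared Euclidean distance between two embedding vectors,
    # matching the np.dot(d, d) computation from the original post.
    d = np.asarray(a) - np.asarray(b)
    return float(np.dot(d, d))

# Hypothetical decision threshold: distances below it count as
# "same person". Tune this on held-out same/different pairs.
THRESHOLD = 1.1

def same_person(emb1, emb2, threshold=THRESHOLD):
    return squared_l2(emb1, emb2) < threshold

# Toy unit-norm vectors standing in for model embeddings.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.9, 0.1, 0.0])
b = b / np.linalg.norm(b)
c = np.array([0.0, 1.0, 0.0])

print(same_person(a, b))  # True  (small distance)
print(same_person(a, c))  # False (distance 2.0)
```

With your averages of 0.8 (same) and 1.4 (different), a threshold somewhere between them would separate the two populations, though the overlap of the distributions, not just the means, determines the error rate.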
