
select faces to not be blurred #51

Open
reeseherber opened this issue Sep 26, 2023 · 8 comments
Labels
enhancement New feature or request

Comments

@reeseherber

I was looking into this project and was wondering if it would be possible to select specific faces out of the file to leave unblurred.

@StealUrKill

I second this

@mdraw
Member

mdraw commented Oct 6, 2023

The face detection model used internally by deface doesn't support face recognition (which would be required for matching a specific face), only general face detection. This feature would be nice to have but I'm afraid implementing it properly would require quite some work and would make the code much more complicated.
If I do a rewrite some day, I'll keep this use case in mind.

@mdraw mdraw added the enhancement New feature or request label Oct 6, 2023
@mthebaud

Hi, I have much the same need, but I have another idea for the implementation.
Would it be possible to define a "detection box", i.e. a given pixel rectangle, so that everything outside of that rectangle is ignored?
My use case: two people facing the camera, and I want to blur only the one on the right. I would define a global rectangle over the right part of the frame, and the face of the person on the left would not be blurred.
Thanks

@mdraw
Member

mdraw commented Jan 23, 2024

Hi @mthebaud, this feature might be a bit too specific for the main project but feel free to implement this in a fork by filtering the detections dets in this loop by their coordinate range:

deface/deface/deface.py

Lines 83 to 89 in 1e6a87f

for i, det in enumerate(dets):
    boxes, score = det[:4], det[4]
    x1, y1, x2, y2 = boxes.astype(int)
    x1, y1, x2, y2 = scale_bb(x1, y1, x2, y2, mask_scale)
    # Clip bb coordinates to valid frame region
    y1, y2 = max(0, y1), min(frame.shape[0] - 1, y2)
    x1, x2 = max(0, x1), min(frame.shape[1] - 1, x2)

As you see, detection boxes are already being clipped to the valid frame coordinate range, so you could just change the first arguments of the max() and min() calls respectively.
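Building on that suggestion, here is one way such a filter could look in a fork. It assumes that each row of dets follows the [x1, y1, x2, y2, score] layout shown in the loop above; the function name filter_dets_by_region and the region tuple are hypothetical, not part of deface:

```python
import numpy as np

def filter_dets_by_region(dets, region):
    """Keep only detections whose box center lies inside `region`.

    `region` is an (x1, y1, x2, y2) pixel rectangle; faces centered
    outside it are dropped from `dets` and therefore left unblurred.
    Assumes `dets` rows are [x1, y1, x2, y2, score] (see loop above).
    """
    rx1, ry1, rx2, ry2 = region
    kept = []
    for det in dets:
        cx = (det[0] + det[2]) / 2  # box center, x
        cy = (det[1] + det[3]) / 2  # box center, y
        if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:
            kept.append(det)
    return np.array(kept)

# Example: only anonymize faces in the right half of a 1920x1080 frame
dets = np.array([
    [100.0, 200.0, 300.0, 400.0, 0.99],    # left face -> left unblurred
    [1200.0, 200.0, 1400.0, 400.0, 0.98],  # right face -> still blurred
])
right_half = (960, 0, 1920, 1080)
filtered = filter_dets_by_region(dets, right_half)
```

Calling this on dets right before the quoted loop would restrict anonymization to the chosen rectangle without touching the rest of the pipeline.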

@mthebaud

That's great, thanks for the information!

@nemesis567

Honestly, that is hugely specific. Using face recognition seems to be the way to go to achieve this. Just run face recognition and then contour recognition, place the person in a buffer, then paste that on top of the blurred frame.
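The paste-back step of that idea can be sketched with plain NumPy: blur (or otherwise anonymize) the whole frame, then restore the original pixels inside each "keep" box. This is a minimal sketch with rectangular boxes only; the contour-accurate masking the comment describes would additionally need a segmentation or recognition model, which is not shown here:

```python
import numpy as np

def paste_back(original, blurred, keep_boxes):
    """Return `blurred` with pixels from `original` restored inside each
    keep box (x1, y1, x2, y2). Minimal sketch of the buffer/paste idea;
    boxes are axis-aligned rectangles, not person contours.
    """
    out = blurred.copy()
    for x1, y1, x2, y2 in keep_boxes:
        out[y1:y2, x1:x2] = original[y1:y2, x1:x2]
    return out

# Toy 8x8 grayscale frame; "blurring" is simulated by zeroing everything
original = np.arange(64, dtype=np.uint8).reshape(8, 8)
blurred = np.zeros_like(original)
result = paste_back(original, blurred, [(2, 2, 5, 5)])
```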

@mthebaud

mthebaud commented Mar 20, 2024

Yes, I understand that it is specific, so I suggest that this issue be closed, as we have answers to our questions.

@andyg2

andyg2 commented Mar 26, 2024

I think it would be less specific if the detected faces were ordered from left to right and the indexes into that list could be passed as an option.
Obviously someone might need to cut a video into segments within which that order doesn't change - but for interviews etc., an option to select one or more faces to skip would be useful.
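That left-to-right indexing scheme can be sketched on top of the same dets layout as above. Both the function drop_faces_by_index and the skip-index option are hypothetical; deface has no such option built in:

```python
import numpy as np

def drop_faces_by_index(dets, skip_indexes):
    """Order detections left to right by their x1 coordinate and drop
    those whose position index is in `skip_indexes` (0 = leftmost face).
    Assumes `dets` rows are [x1, y1, x2, y2, score]. Hypothetical helper,
    not part of deface.
    """
    order = np.argsort([det[0] for det in dets])
    kept = [dets[i] for pos, i in enumerate(order) if pos not in skip_indexes]
    return np.array(kept)

dets = np.array([
    [1200.0, 200.0, 1400.0, 400.0, 0.98],  # rightmost face
    [100.0, 200.0, 300.0, 400.0, 0.99],    # leftmost face
    [600.0, 200.0, 800.0, 400.0, 0.97],    # middle face
])
# Leave the leftmost face (position index 0) unblurred:
remaining = drop_faces_by_index(dets, {0})
```

As the comment notes, this only works while the ordering stays stable, so in practice it would have to be applied per segment.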
