
ControlNet + Inpainting Projection #616

Draft · carson-katri wants to merge 28 commits into main
Conversation

@carson-katri (Owner) commented Mar 27, 2023

Usage

An improved UI still needs to be created. For now:

  1. Place cameras around a single object (a scripted example is sketched after this list)
  2. Select the object and go into edit mode
  3. In the projection panel, enable "Use ControlNet" and choose a depth ControlNet model
  4. Ensure you have downloaded the models runwayml/stable-diffusion-v1-5 and runwayml/stable-diffusion-inpainting, as the models are currently hard-coded
  5. Press "Project"; Blender will hang while the generation runs
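
Cameras can also be placed from a script instead of by hand. This is a minimal sketch, assuming a hypothetical helper `add_cameras_around` (not part of this PR) that rings the active object with cameras aimed at it:

```python
import math
import bpy
from mathutils import Vector

def add_cameras_around(obj, count=4, radius=3.0):
    """Hypothetical helper: ring `obj` with `count` cameras, each aimed at it."""
    for i in range(count):
        angle = 2 * math.pi * i / count
        cam_data = bpy.data.cameras.new(f"Projection Camera {i}")
        cam_obj = bpy.data.objects.new(f"Projection Camera {i}", cam_data)
        bpy.context.collection.objects.link(cam_obj)
        cam_obj.location = obj.location + Vector((radius * math.cos(angle),
                                                  radius * math.sin(angle),
                                                  0.0))
        # Aim the camera at the object: Blender cameras look down their local -Z axis.
        direction = obj.location - cam_obj.location
        cam_obj.rotation_euler = direction.to_track_quat('-Z', 'Y').to_euler()

add_cameras_around(bpy.context.active_object)
```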

Method Overview

For the first perspective

  1. Generate the first angle with a depth ControlNet and a text-to-image model.

```python
# First angle: depth-guided text-to-image generation (no init image, nothing to inpaint).
res = gen.control_net(
    model='models--runwayml--stable-diffusion-v1-5',
    control=[depth],
    image=None,
    inpaint=False,
    inpaint_mask_src='alpha',
    **generated_args
).result()
generation_result.pixels[:] = res[-1].images[0].ravel()
generation_result.update()
```

(image: generation_result)
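
The `depth` control image above is not shown being produced in this snippet. As a rough illustration only (an assumption, not code from this PR), a raw Z pass from the camera could be normalized into the inverted depth map that depth ControlNets expect:

```python
import numpy as np

def z_pass_to_control_depth(z_pass, background_threshold=1e9):
    """Normalize a raw Z pass so nearer surfaces are brighter (hypothetical helper)."""
    z = np.asarray(z_pass, dtype=np.float32)
    valid = z < background_threshold          # background texels carry a very large Z value
    near, far = z[valid].min(), z[valid].max()
    depth = np.zeros_like(z)
    depth[valid] = 1.0 - (z[valid] - near) / max(far - near, 1e-6)
    return depth
```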

  2. Bake the generated image to the result texture.

```python
color = bake(context, split_mesh, 512, 512, res[-1].images[0].ravel(), projection_uvs, uvs)
```
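
The `bake` helper transfers the generated pixels from the camera projection's UV space into the object's own UV layout. As a crude per-vertex stand-in (an illustration of the idea only, not this PR's implementation, which bakes whole faces):

```python
import numpy as np

def naive_bake(image, projection_uvs, uvs, width=512, height=512):
    """Copy the pixel at each vertex's projection UV into its texture UV (sketch only)."""
    out = np.zeros((height, width, 4), dtype=np.float32)
    src_h, src_w = image.shape[:2]
    # Destination texel for each vertex, from the object's unwrap.
    tx = np.clip((uvs[:, 0] * (width - 1)).astype(int), 0, width - 1)
    ty = np.clip((uvs[:, 1] * (height - 1)).astype(int), 0, height - 1)
    # Source pixel for each vertex, from the camera projection.
    sx = np.clip((projection_uvs[:, 0] * (src_w - 1)).astype(int), 0, src_w - 1)
    sy = np.clip((projection_uvs[:, 1] * (src_h - 1)).astype(int), 0, src_h - 1)
    out[ty, tx] = image[sy, sx]
    return out
```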

For subsequent perspectives

  1. Create a mask for faces oriented toward this camera, in both projected and baked forms.

```glsl
fragColor = dot(NormalMatrix * normal, CameraNormal) > Threshold ? vec4(1, 1, 1, 1) : vec4(0, 0, 0, 1);
```

(image: mask)
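
The same facing test can be written on the CPU. Here is a minimal NumPy sketch; the names `world_normals` and `camera_forward` are assumptions for illustration, not identifiers from this PR:

```python
import numpy as np

def facing_mask(world_normals, camera_forward, threshold=0.0):
    """Return 1.0 for faces whose normal points toward the camera, 0.0 otherwise."""
    # world_normals: (N, 3) unit normals already transformed by the normal matrix.
    # camera_forward: (3,) unit vector pointing from the camera into the scene.
    view_dir = -np.asarray(camera_forward, dtype=np.float32)
    cosine = np.asarray(world_normals, dtype=np.float32) @ view_dir
    return (cosine > threshold).astype(np.float32)
```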

  2. Capture the object with the latest result texture from this new perspective, and use the mask as the alpha channel.

```python
# Render the current texture from the new camera; the inverted facing mask becomes the alpha channel.
color, mask, baked_mask = self.masked_init_image(context, camera, np.array(texture.pixels, dtype=np.float32), mesh, split_mesh, uvs)
color[:, :, 3] = 1 - mask[:, :, 0]
```

  3. Inpaint using the masked color with a depth ControlNet and inpainting model.

```python
# Subsequent angles: depth-guided inpainting of the regions selected by the alpha mask.
res = gen.control_net(
    model='models--runwayml--stable-diffusion-inpainting',
    control=[depth],
    image=np.flipud(color * 255),
    inpaint=True,
    inpaint_mask_src='alpha',
    **generated_args
).result()
inpaint_result.pixels[:] = res[-1].images[0].ravel()
inpaint_result.update()
mask_result.pixels[:] = mask.ravel()
mask_result.update()
```

(image: inpaint_result)

  4. Bake and merge with the latest result texture.

```python
# Blend the newly baked pixels into the existing texture using the baked facing mask.
color = bake(context, split_mesh, 512, 512, res[-1].images[0].ravel(), projection_uvs, uvs).ravel()
baked_mask = baked_mask.ravel()
color = (np.array(texture.pixels, dtype=np.float32) * (1 - baked_mask)) + (color * baked_mask)
```

Results

I have not thoroughly tested this, but here is one result using only two camera angles.

(screenshot: Screenshot 2023-03-27 at 7 21 58 PM)

Coherence between the two sides seems better than if two completely random generations were merged.

Base automatically changed from controlnet to main April 2, 2023 14:42