
This is a neat approach. Basically it is a combination of:

(1) Fitting stock 3D models to objects in existing photographs using a simple but interactive ray-casting approach.

(2) Estimating soft lighting on objects fairly convincingly.

(3) Re-rendering the stock models using the estimated lighting and the textures of the original photograph.

There are real limitations to this, but the automated lighting estimation alone is impressive and has wide applications in the visual effects space.



And they appear to have some way to fill in the part of the photo occluded by the cut-out objects.


I didn't see that mentioned in the paper, at least not prominently (I only skimmed it today). But Photoshop already has a built-in tool for this, so I guess they can just use the standard methods, which seem to work fairly well.


"We compute a mask for the object pixels, and use this mask to inpaint the background using the PatchMatch algorithm [Barnes et al. 2009]"

PatchMatch algorithm: http://gfx.cs.princeton.edu/pubs/Barnes_2009_PAR/index.php


Yes, that is the one that Photoshop adopted and renamed "Content-Aware Fill"! Details: http://www.adobe.com/technology/projects/patchmatch.html


Judging from the YouTube videos, the novel part is that they can fill in the part of the photo occluded by the object (either using textures from the 3D model, or by inpainting), because they refer to earlier work that already lets you cut out and manipulate objects using 3D models.


Yep, they probably used an off-the-shelf method for that. It's often called image interpolation or "inpainting" (https://en.wikipedia.org/wiki/Inpainting).
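For anyone curious what "inpainting" actually does: PatchMatch itself is more involved (a randomized nearest-neighbour search over image patches), but the basic idea of filling a masked hole from its surroundings can be illustrated with a toy diffusion-style fill. This is a minimal sketch in NumPy, not the authors' method and not PatchMatch:

```python
import numpy as np

def diffusion_inpaint(image, mask, iters=200):
    """Toy diffusion-style inpainting: repeatedly replace each masked
    pixel with the mean of its four neighbours, keeping known pixels
    fixed. NOT PatchMatch -- just the simplest hole-filling idea,
    which works acceptably on smooth backgrounds."""
    img = image.astype(float).copy()
    img[mask] = 0.0  # initialise the hole
    for _ in range(iters):
        # Mean of the 4-neighbours via shifted copies of the image.
        neighbours = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                      np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img[mask] = neighbours[mask]  # only masked pixels are updated
    return img

# Usage: a horizontal gradient with a square hole cut out of it.
image = np.tile(np.arange(20, dtype=float), (20, 1))
mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 8:12] = True
filled = diffusion_inpaint(image, mask)
# The fill converges to the surrounding gradient, e.g. filled[10, 10] is close to 10.
```

Diffusion fills smear texture, which is exactly why patch-based methods like PatchMatch (copying whole coherent patches from elsewhere in the image) look so much better on textured backgrounds.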




