The technique inpaints a chrome ball into a photo so that it looks like a true reflection of the surrounding environment. Because a mirrored sphere reflects its surroundings, this virtual ball reveals what the lighting in the scene looks like. That lighting information can then be used to insert new objects into the photo so they appear to have been taken under the original lighting conditions.
Simply put, the method estimates the light sources (lighting information) in a picture, and then renders inserted objects under that estimated lighting, so they blend in without any visual inconsistency.
The working principle is as follows:
1. Input an image: You provide a photo, such as an indoor scene.
2. Add a chrome ball: DiffusionLight uses a diffusion model to paint a chrome ball into a suitable spot in the photo. This ball reflects the light and color of the scene.
3. Generate an environment map: The reflection on the chrome ball is unwrapped into a high dynamic range (HDR) environment map, which captures the lighting of the scene.
4. Illumination estimation: By analyzing the reflection on the chrome ball, DiffusionLight estimates the positions and intensities of the light sources in the image.
5. 3D object insertion: Using the estimated lighting, a 3D model can be composited into the photo and lit so that it looks natural and consistent with the scene.
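The geometric core of step 3 can be sketched as follows. This is the standard mirror-ball unwrapping under an orthographic-camera assumption, not DiffusionLight's exact implementation (which additionally recovers HDR intensities from multiple exposures); `ball_to_envmap` and its parameters are illustrative names.

```python
import numpy as np

def ball_to_envmap(ball: np.ndarray, out_h: int = 128) -> np.ndarray:
    """Unwrap a chrome-ball crop (H, W, 3) into a lat-long environment map.

    Assumes an orthographic view of a perfect mirror sphere. For each
    world direction r in the output map, we find the sphere normal n that
    reflects the view ray into r, then sample the ball image at n.
    """
    out_w = out_h * 2
    # Lat-long grid of world directions r = (x, y, z), y pointing up.
    theta = (np.arange(out_h) + 0.5) / out_h * np.pi       # polar angle
    phi = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi     # azimuth
    phi, theta = np.meshgrid(phi, theta)                   # (out_h, out_w)
    r = np.stack([np.sin(theta) * np.sin(phi),
                  np.cos(theta),
                  -np.sin(theta) * np.cos(phi)], axis=-1)
    # View direction toward the camera (orthographic): v = (0, 0, 1).
    # The mirror-reflection law r = 2(n.v)n - v implies the normal is the
    # half vector between r and v.
    v = np.array([0.0, 0.0, 1.0])
    n = r + v
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Under orthographic projection, normal (nx, ny, nz) is seen at image
    # position (nx, ny) in [-1, 1]^2 on the ball crop.
    h, w = ball.shape[:2]
    px = np.clip(((n[..., 0] + 1) / 2 * (w - 1)).astype(int), 0, w - 1)
    py = np.clip(((1 - (n[..., 1] + 1) / 2) * (h - 1)).astype(int), 0, h - 1)
    return ball[py, px]
```

Nearest-neighbor sampling keeps the sketch short; a production unwrapping would interpolate and mask directions behind the camera, where the half vector becomes unreliable.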
A key innovation of this technique is that it requires no expensive or complex equipment to capture lighting conditions: all it needs is a single picture and the algorithm. This makes it usable in a variety of applications, from professional filmmaking to mobile phone photography, opening new possibilities for artists and developers.
The method handles a wide variety of input images, including indoor and outdoor scenes, close-ups, paintings, and photos of faces.
Project and demos: https://diffusionlight.github.io
Paper: https://arxiv.org/abs/2312.09168
GitHub: https://github.com/DiffusionLight/DiffusionLight