Google Open Sources RawNeRF, Computational Photography Evolves Again

Google's RawNeRF is an AI-based tool that intelligently denoises an image while also allowing its shooting angle, focus, exposure level, and tone mapping to be adjusted after the photo is taken.

Users familiar with mobile phone photography will know that phone cameras now offer ever-higher megapixel counts and larger sensors. Even so, photos shot in extreme low light or near-total darkness are usually still not clear enough, with heavy noise throughout the image. A new tool from Google could change that.

RawNeRF is an AI tool open sourced by Google as part of the larger MultiNeRF project. Using NeRF (Neural Radiance Fields) to process a series of images, RawNeRF reconstructs a 3D rendering of the scene. From that reconstruction, in addition to reducing noise in the picture, users can change the camera position, exposure, focus, and more after the photo is taken.
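For context, NeRF renders each pixel by compositing color samples along a camera ray according to learned volume densities. The sketch below shows that standard compositing step with NumPy; it illustrates the general NeRF rendering equation, not Google's RawNeRF implementation, and the toy numbers are invented for the example.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Standard NeRF volume rendering along one camera ray.

    densities: (N,) non-negative volume densities sigma_i at each sample
    colors:    (N, 3) RGB color c_i predicted at each sample
    deltas:    (N,) distances between adjacent samples along the ray
    Returns the rendered RGB color for the ray.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i = product over j < i of (1 - alpha_j)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas  # contribution of each sample to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: one dense red sample in the middle dominates the ray
sigma = np.array([0.0, 50.0, 0.0])
rgb = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
delta = np.array([0.1, 0.1, 0.1])
print(composite_ray(sigma, rgb, delta))  # ≈ [0.993, 0., 0.]: the ray reads red
```

Training adjusts the densities and colors so that rendered rays match the input photos; once trained, the same model can be rendered from new camera positions.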

As the sample images show, pictures processed by RawNeRF are clearer and their colors are not distorted. The reason it performs so well is that the tool is trained on data from RAW images rather than standard JPEGs. Compared to standard photos, RAW images contain more detail that can be exploited in post-processing, and it is this extra data that helps Google train the AI tool more effectively.
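One way to see why linear RAW data helps denoising: averaging many noisy measurements in linear space converges to the true radiance, but averaging after a nonlinear tone curve (as in a JPEG pipeline) gives a biased result. The sketch below demonstrates this with a simple gamma-style curve standing in for a real camera pipeline; the signal and noise values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 0.01  # a dim scene radiance in linear (RAW-like) units
# Many noisy sensor readings of the same dim pixel
noisy = true_signal + rng.normal(0.0, 0.02, 100_000)

def tonemap(x):
    # Simple gamma-style tone curve, an illustrative stand-in for a
    # camera's JPEG processing (sign handles noise-driven negatives)
    return np.sign(x) * np.abs(x) ** (1 / 2.2)

# Averaging in linear RAW space recovers the true signal...
linear_avg = noisy.mean()
# ...but averaging after tone mapping is biased by the nonlinearity.
tonemapped_avg = tonemap(noisy).mean()

print(linear_avg)            # ≈ 0.01, the true radiance
print(tonemap(true_signal))  # what the tone-mapped average should equal
print(tonemapped_avg)        # noticeably lower: the bias never averages out
```

This is why denoising-by-aggregation works best on linear raw data: across many input frames the noise cancels, whereas a tone-mapped pipeline bakes in a bias that no amount of averaging removes.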

One of the hallmarks of Google's Pixel line of phones is excellent photography; earlier models with a single camera often took better pictures than rival phones with multiple cameras. It is widely anticipated that RawNeRF (or similar technology) will appear on smartphones in the near future, further advancing computational photography.

RawNeRF project address


Reprinted from: www.oschina.net/news/207981/google-research-rawnerf