Deep Combiner for Independent and Correlated Pixel Estimates (Supplementary Material)

Jonghee Back1, Binh-Son Hua2, 3, Toshiya Hachisuka4, and Bochang Moon1

1 GIST, South Korea, 2 VinAi Research, Vietnam, 3 VinUniversity, Vietnam, 4 The University of Tokyo, Japan

Overview

This supplemental material provides full-resolution renderings produced by our method and previous methods. Please click on each scene to launch an interactive viewer that shows both visual and quantitative comparisons.

We use the following methods to generate correlated pixel estimates: the NFOR [2] and KPCN [1] denoisers, L1 and L2 reconstruction in gradient-domain path tracing [5] (GPT-L1 and GPT-L2), and common random numbers per pixel (CRN). Additionally, we test two correlated pixel estimates, BCD [3] and PPM [4], on which our network was not trained.

We provide equal-time comparisons for two rendering budgets: a short one (corresponding to the figures inserted in the paper) and a long one. We also provide tables reporting the samples per pixel (spp), running time (in seconds), and relMSE [6] with epsilon = 1e-2, a metric widely used to evaluate the quality of rendered images. The relMSE of each method is computed as the average over ten images rendered with different random seeds.
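For clarity, the following is a minimal sketch (in Python with NumPy) of how relMSE with epsilon = 1e-2 can be computed and averaged over seeds; the function names, the load_image helper, and the file naming are illustrative assumptions, not the exact evaluation code used for our tables.

    import numpy as np

    def rel_mse(estimate, reference, eps=1e-2):
        # Relative MSE [6]: per-pixel squared error normalized by the squared
        # reference value, with a small epsilon to avoid division by zero.
        # estimate, reference: arrays of linear RGB radiance values (H x W x 3).
        diff = (estimate - reference) ** 2
        denom = reference ** 2 + eps
        return float(np.mean(diff / denom))

    # Assumed usage: average the per-image relMSE over ten renderings with
    # different seeds (load_image and the file names are hypothetical).
    # errors = [rel_mse(load_image(f"result_seed{i}.exr"), reference) for i in range(10)]
    # avg_rel_mse = float(np.mean(errors))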

Equal-Time Comparisons with Image Denoising (NFOR and KPCN)


Bathroom

Bookshelf

Kitchen

Conference

Living-room

Veach-lamp

Equal-Time Comparisons with Gradient-Domain Rendering (GPT-L1 and GPT-L2)


Bathroom

Bookshelf

Kitchen

Conference

Living-room

Veach-lamp

Equal-Time Comparisons with Correlated Sampling (CRN)


Bathroom

Bookshelf

Kitchen

Conference

Living-room

Veach-lamp

Equal-Time Comparisons with Untrained Correlated Pixel Estimates (BCD)


Bathroom

Bookshelf

Kitchen

Conference

Equal-Time Comparisons with Untrained Correlated Pixel Estimates (PPM)


Bathroom

Bookshelf

References

[1] Steve Bako, Thijs Vogels, Brian Mcwilliams, Mark Meyer, Jan Novák, Alex Harvill, Pradeep Sen, Tony Derose, and Fabrice Rousselle. 2017. Kernel-Predicting Convolutional Networks for Denoising Monte Carlo Renderings. ACM Trans. Graph. 36, 4, Article 97 (2017), 14 pages

[2] Benedikt Bitterli, Fabrice Rousselle, Bochang Moon, José A. Iglesias-Guitián, David Adler, Kenny Mitchell, Wojciech Jarosz, and Jan Novák. 2016. Nonlinearly Weighted First-order Regression for Denoising Monte Carlo Renderings. Computer Graphics Forum 35, 4 (2016), 107–117

[3] Malik Boughida and Tamy Boubekeur. 2017. Bayesian Collaborative Denoising for Monte Carlo Rendering. Computer Graphics Forum 36, 4 (2017), 137–153

[4] Toshiya Hachisuka, Shinji Ogaki, and Henrik Wann Jensen. 2008. Progressive Photon Mapping. ACM Trans. Graph. 27, 5, Article 130 (2008), 8 pages

[5] Markus Kettunen, Marco Manzi, Miika Aittala, Jaakko Lehtinen, Frédo Durand, and Matthias Zwicker. 2015. Gradient-domain Path Tracing. ACM Trans. Graph. 34, 4, Article 123 (2015), 13 pages

[6] Fabrice Rousselle, Claude Knaus, and Matthias Zwicker. 2011. Adaptive Sampling and Reconstruction Using Greedy Error Minimization. ACM Trans. Graph. 30, 6, Article 159 (2011), 12 pages

Acknowledgements

We thank the following authors and artists for the tested scenes: nacimus (Bathroom), Tiziano Portenier (Bookshelf), Anton Kaplanyan (Kitchen), Anat Grynberg and Greg Ward (Conference), Jay-Artist (Living-room) and Benedikt Bitterli (Veach-lamp).

We also thank Joey Litalien, Jan Novák and Benedikt Bitterli for the interactive viewer.