Abstract

Point cloud 3D models are gaining popularity due to the proliferation of scanning systems in various fields, including autonomous vehicles and robotics. When employed for rendering purposes, point clouds are typically depicted with the original colors captured at acquisition time, often without taking into account the lighting conditions of the scene in which the model is situated. This can result in a lack of realism in numerous contexts, especially when dealing with animated point clouds used in eXtended reality applications, where it is desirable for the model to respond to incoming light and seamlessly blend with the surrounding environment. This paper proposes the application of physically based rendering (PBR), a rendering technique widely used in real-time computer graphics applications, to animated point cloud models to reproduce specular reflections and achieve a photo-realistic, physically accurate look under any lighting condition. To achieve this, we first explore the extension of commonly used animated point cloud formats to incorporate normal vectors and PBR parameters, such as roughness and metalness. Additionally, the encoding of the animated environment maps required by the PBR technique is investigated. Then, an animated point cloud model is rendered with a shader implementing the proposed PBR method. Finally, we compare the outcomes of this PBR pipeline with traditional renderings of the same point cloud produced using commonly used shaders, taking into account different lighting conditions and environments. Through these comparisons, we demonstrate how the proposed PBR method enhances the visual integration of the point cloud with its surroundings. Furthermore, we show that this rendering technique can reproduce different materials by exploiting the features of PBR and the encoding of the surrounding environment.
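The roughness and metalness attributes mentioned above parameterize a standard metalness-workflow Cook-Torrance BRDF. As a purely illustrative sketch (the function names and parameter values are our assumptions, not the paper's actual shader), the per-point shading could be evaluated as follows, with the GGX distribution, Schlick Fresnel, and Smith geometry terms:

```python
import math

def ggx_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    """Cook-Torrance specular term (one color channel):
    GGX normal distribution, Schlick Fresnel, Smith-Schlick geometry."""
    a2 = (roughness * roughness) ** 2
    # GGX normal distribution function
    d = a2 / (math.pi * ((n_dot_h * n_dot_h * (a2 - 1.0) + 1.0) ** 2))
    # Schlick Fresnel approximation
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    # Smith-Schlick geometry term with the common direct-lighting remapping of k
    k = (roughness + 1.0) ** 2 / 8.0
    g = (n_dot_l / (n_dot_l * (1.0 - k) + k)) * (n_dot_v / (n_dot_v * (1.0 - k) + k))
    return d * f * g / max(4.0 * n_dot_l * n_dot_v, 1e-6)

def shade_point(albedo, metalness, roughness,
                n_dot_l, n_dot_v, n_dot_h, v_dot_h, light):
    """Metalness workflow for a single point: dielectrics use f0 ~= 0.04,
    metals tint f0 with the albedo and suppress the diffuse lobe."""
    out = []
    for c in range(3):
        f0 = 0.04 * (1.0 - metalness) + albedo[c] * metalness
        diffuse = (albedo[c] / math.pi) * (1.0 - metalness)
        spec = ggx_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0)
        out.append((diffuse + spec) * light[c] * n_dot_l)
    return out
```

Varying only `metalness` and `roughness` per point is what allows the same geometry to be rendered as different materials under the same environment lighting.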
