TauBench: Dynamic Benchmark for Graphics Rendering


Bibliographic Details
Published in: Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Main Authors: Alanko, Joel; Mäkitalo, Markku; Jääskeläinen, Pekka
Other Authors: Tampere University, Computing Sciences
Format: Conference Object
Language: English
Published: SCITEPRESS - Science and Technology Publications, 2022
Online Access: https://trepo.tuni.fi/handle/10024/139755
https://doi.org/10.5220/0010819200003124
Description
Summary: Many graphics rendering algorithms used in both real-time games and virtual reality applications can gain performance by reusing computations from previous frames. However, these temporal reuse algorithms are typically evaluated with trivial benchmarks that have very limited dynamic features. To this end, we present two new benchmarks that stress temporal reuse algorithms: EternalValleyVR and EternalValleyFPS. These datasets represent scenarios that are common contexts for temporal methods: EternalValleyFPS represents a typical interactive multiplayer game scenario with dynamically changing lighting conditions and geometry animations. EternalValleyVR adds the rapid camera motion caused by the head-mounted displays popular in virtual reality applications. In order to systematically assess how well the proposed benchmarks stress-test reuse algorithms, we identify common input features used in state-of-the-art reuse algorithms and propose metrics that quantify changes in these temporally interesting features. Cameras in the proposed benchmarks rotate on average 18.5x more per frame than in the popular NVIDIA ORCA datasets, which results in 51x more pixels being introduced each frame. In addition to the camera activity, we compare the number of low-confidence pixels. We show that the proposed datasets offer 1.6x fewer pixel reuse opportunities due to changes in pixels' world positions, and have a 3.5x higher direct radiance discard rate.
Peer reviewed
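To make the kind of per-frame metrics described in the abstract concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: the function names, the NumPy dependency, the 0.01-unit discard threshold, and the assumption that per-pixel world positions are available as G-buffer arrays are all assumptions here. It computes a per-frame camera rotation angle from consecutive camera rotation matrices, and a simple world-position-based reuse discard rate.

import numpy as np

def rotation_angle_deg(r_prev, r_curr):
    # Angle (in degrees) between two consecutive 3x3 camera rotation
    # matrices, via the relative rotation R = r_curr @ r_prev.T and the
    # trace identity cos(theta) = (trace(R) - 1) / 2.
    r_rel = r_curr @ r_prev.T
    cos_theta = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def world_position_discard_rate(pos_prev, pos_curr, threshold=0.01):
    # Fraction of pixels whose world position moved more than
    # `threshold` scene units between frames, i.e. pixels a temporal
    # reuse algorithm would likely discard rather than reuse.
    # pos_prev, pos_curr: (H, W, 3) arrays of per-pixel world positions.
    dist = np.linalg.norm(pos_curr - pos_prev, axis=-1)
    return float(np.mean(dist > threshold))

Averaging rotation_angle_deg over a camera path yields a per-frame rotation figure of the kind reported above, and world_position_discard_rate corresponds loosely to the world-position-based reuse metric, under the stated assumptions.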