Optimized Wavelet-Based Texture Representation and Streaming for GPU Texture Mapping
This publication appears in: Multimedia Tools and Applications
Authors: B. Andries, J. Lemeire and A. Munteanu
Publication Date: Jan. 2018
Because of the ever-increasing resolution of consumer displays, high-quality real-time 3D rendering applications require large amounts of 2D texture data, which in turn demand large amounts of storage, memory and bandwidth. A major advantage of compressed texture data is the significant reduction in bandwidth requirements, which shortens texture loading times. Additionally, instead of preloading all potentially required information, texture data can be streamed progressively at run-time, a very common scenario in web-based applications, games and large virtual environments. This paper proposes a new texture streaming system that combines pre-computed and run-time scene analysis, camera prediction and wavelet-based texture compression to provide maximal visual quality within bandwidth and real-time constraints. Texture decoding is performed on the GPU, saving both streaming bandwidth and GPU memory. Thanks to the pre-computed scene analysis, the run-time analysis incurs virtually no performance cost on multi-threaded systems. The proposed solution can easily be plugged into existing classical texture mapping pipelines, as it features drop-in replacement shaders and re-uses existing render facilities. The wavelet-based streaming system yields PSNR improvements of up to 2 dB compared to a DXT1-based streaming system.
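The 2 dB figure above refers to peak signal-to-noise ratio, the standard objective quality metric for comparing a decoded texture against its uncompressed reference. As a point of reference (not code from the paper), the following is a minimal sketch of how PSNR is conventionally computed for 8-bit image data; the function name `psnr` and its parameters are illustrative:

```python
import numpy as np

def psnr(reference, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and its reconstruction (e.g. a decompressed texture).

    Conventional definition: 10 * log10(max_value^2 / MSE).
    """
    ref = np.asarray(reference, dtype=np.float64)
    rec = np.asarray(reconstructed, dtype=np.float64)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0.0:
        # Identical images: distortion is zero, PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10((max_value ** 2) / mse)
```

Under this metric, a 2 dB gain corresponds to roughly a 37% reduction in mean squared error at the same bit budget, since every 10 dB represents a tenfold MSE decrease.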