Deep-Learning based Lossless Image Coding
This publication appears in: IEEE Transactions on Circuits and Systems for Video Technology
Authors: I. Schiopu and A. Munteanu
Number of Pages: 14
Publication Date: Apr. 2019
The paper proposes a novel approach to lossless image compression. The proposed coding approach employs a deep learning-based method to compute the prediction for each pixel, and a context-tree-based bit-plane codec to encode the prediction errors. First, a novel deep learning-based predictor is proposed to estimate the residuals produced by traditional prediction methods; the deep-learning paradigm is shown to substantially boost prediction accuracy compared to these traditional methods. Second, the prediction error is modeled by a context-modeling method and encoded using a novel context-tree-based bit-plane codec. Codec profiles performing either one or two coding passes are proposed, trading off complexity against compression performance. The experimental evaluation covers three types of data: photographic images, lenslet images, and video sequences. Experimental results show that the proposed lossless coding approach systematically and substantially outperforms state-of-the-art methods on each type of data.
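The abstract's pipeline of pixel prediction followed by residual coding can be illustrated with a minimal sketch. This is not the paper's method: a simple causal left-neighbor predictor stands in for the deep-learning predictor, and the residuals are returned raw rather than compressed by a context-tree bit-plane codec; the sketch only demonstrates why such a predict-then-code scheme is lossless.

```python
import numpy as np

def predict(img):
    # Causal predictor: each pixel is predicted from its left neighbor.
    # (Hypothetical stand-in for the paper's deep-learning predictor.)
    pred = np.zeros_like(img, dtype=np.int32)
    pred[:, 1:] = img[:, :-1]
    return pred

def encode(img):
    # Residuals = actual - predicted. In the paper these residuals would
    # be entropy-coded by the context-tree based bit-plane codec.
    img = img.astype(np.int32)
    return img - predict(img)

def decode(residuals):
    # Reconstruct column by column: each prediction depends only on
    # already-decoded pixels, so reconstruction is exact (lossless).
    out = np.zeros_like(residuals)
    out[:, 0] = residuals[:, 0]
    for c in range(1, residuals.shape[1]):
        out[:, c] = out[:, c - 1] + residuals[:, c]
    return out
```

A better predictor concentrates the residuals near zero, which is exactly what lets the entropy coder spend fewer bits; this is the mechanism behind the paper's reported gains from the deep-learning predictor.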