Deep learning-based methods have been shown to achieve excellent results in a variety of domains; however, some important assets are absent. Quality scalability is one of them. In this work, we introduce a novel and generic neural network layer, named MaskLayer. It can be integrated into any feedforward network and enables quality scalability by design through the creation of embedded feature sets, which are obtained by imposing a specific structure on the feature vector during training. To further improve performance, a masked optimizer and a balancing gradient-rescaling approach are proposed. Our experiments show that the cost of introducing scalability with MaskLayer remains limited. To demonstrate its generality and applicability, we integrated the proposed techniques into existing, non-scalable networks for point cloud compression and semantic hashing, with excellent results. To the best of our knowledge, this is the first work presenting a generic solution able to achieve quality-scalable results within the deep learning framework.
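The paper itself is not reproduced on this page, so the following is only a minimal PyTorch sketch of the idea described in the abstract. It assumes that MaskLayer zeroes out a randomly sized suffix of the feature vector during training so that every prefix forms a usable embedded feature set; the class name comes from the abstract, but the constructor signature, the sampling strategy, and the inference-time behaviour are assumptions, and the masked optimizer and gradient rescaling are not shown.

import torch
import torch.nn as nn

class MaskLayer(nn.Module):
    """Hypothetical sketch of a MaskLayer-like layer (details assumed).

    During training, a cut-off index k is sampled and the feature entries
    beyond k are zeroed, so each prefix of the feature vector is trained
    to be usable on its own (an embedded feature set). At inference time,
    k selects the desired quality level.
    """

    def __init__(self, num_features):
        super().__init__()
        self.num_features = num_features

    def forward(self, x, k=None):
        # x: (batch, num_features)
        if k is None:
            if self.training:
                # Sample a random quality level for this forward pass.
                k = int(torch.randint(1, self.num_features + 1, (1,)))
            else:
                k = self.num_features
        mask = torch.zeros(self.num_features, device=x.device, dtype=x.dtype)
        mask[:k] = 1.0
        return x * mask

# Usage sketch: place the layer behind any feature-producing encoder.
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
masklayer = MaskLayer(num_features=16)
features = masklayer(encoder(torch.randn(8, 64)))            # random masking while training
low_quality = masklayer(encoder(torch.randn(8, 64)), k=4)    # keep only the first 4 features

In this reading, scalability comes for free at inference: truncating the feature vector at any k yields a valid, lower-quality representation without retraining the network.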
Royen, RD, Denis, L, Bolsée, Q, Hu, P & Munteanu, A 2021, 'MaskLayer: Enabling scalable deep learning solutions by training embedded feature sets', Neural Networks, vol. 137, pp. 43-53.
Royen, R. D., Denis, L., Bolsée, Q., Hu, P., & Munteanu, A. (2021). MaskLayer: Enabling scalable deep learning solutions by training embedded feature sets. Neural Networks, 137, 43-53.
@article{26ac43c0ef3244879e7adfc4729baa4e,
title = " MaskLayer: Enabling scalable deep learning solutions by training embedded feature sets " ,
abstract = " Deep learning-based methods have shown to achieve excellent results in a variety of domains, however, some important assets are absent. Quality scalability is one of them. In this work, we introduce a novel and generic neural network layer, named MaskLayer. It can be integrated in any feedforward network, allowing quality scalability by design by creating embedded feature sets. These are obtained by imposing a specific structure of the feature vector during training. To further improve the performance, a masked optimizer and a balancing gradient rescaling approach are proposed. Our experiments show that the cost of introducing scalability using MaskLayer remains limited. In order to prove its generality and applicability, we integrated the proposed techniques in existing, non-scalable networks for point cloud compression and semantic hashing with excellent results. To the best of our knowledge, this is the first work presenting a generic solution able to achieve quality scalable results within the deep learning framework. " ,
author = " Royen, {Remco Donovan} and Leon Denis and Quentin Bols{'e}e and Pengpeng Hu and Adrian Munteanu " ,
note = " Funding Information: The first author is a FWO-SB PhD fellow funded by Research Foundation Flanders (FWO), Belgium , project number 1S89420N . Publisher Copyright: { extcopyright} 2021 Elsevier Ltd Copyright: Copyright 2021 Elsevier B.V., All rights reserved. " ,
year = " 2021 " ,
month = may,
doi = " 10.1016/j.neunet.2021.01.015 " ,
language = " English " ,
volume = " 137 " ,
pages = " 4353 " ,
journal = " Neural Networks " ,
issn = " 0893-6080 " ,
publisher = " Elsevier Limited " ,
}