Project Details
Overview
 
 
 
Project description 

Many IT industries (e.g. pre-press, electronic publishing, medical imaging, remote sensing, satellite imaging, aerial photography, cartography) are shifting towards fully digital production processes. These processes bring about several technological changes, especially concerning the efficient use of large amounts of digital images and the mastering of the resulting data flow, with important consequences for the economics of these labor-intensive production processes. The project will investigate concepts and methods to efficiently handle very long sequences of large digital images in operations such as acquisition, processing, storage and transmission. A specific topic to be considered is deriving 3D scene structure from sequences of 2D images, using the dynamic stereopsis approach.

Large images need to be accessed efficiently at different resolution levels, since the operator typically needs both image details and global or partial overviews. Hence, a multiresolution data representation is needed, together with appropriate methods for visualization and analysis. The resulting data stream (e.g. several GB per day in the cartographic industry, or 266 MB every 15 minutes for Meteosat Second Generation) has to be handled efficiently in the production environment. The order in which the images must be accessed may differ per production step, causing transmission and retrieval problems that can only be solved by a compact data representation with progressive transmission capabilities. Quality and resolution scalability as well as Region-Of-Interest (ROI) processing should be supported by the chosen data structure.

A region of interest delineated in an image is not always rectangular: in most cases it has an irregular shape, and usually the image outside that shape is of little interest. The image can therefore be compressed with respect to this semantic ROI, yielding a high overall compression rate. In addition, during analysis the image may be needed several times only in a limited region: to place control points (aerial photography, cartography), to look for a particular edge in some place (medical imaging, satellite imaging, remote sensing), or to define a region based on image properties obtained by classification and texture analysis (cartography, medical imaging, satellite imaging, remote sensing). Such a posteriori ROI access to wavelet-encoded images is central to this project, together with local image processing operations such as edge detection, classification and texture analysis. For these operations on wavelet-encoded images, we aim to find methods that outperform current ones.

Efficient handling of large data sets in an interactive production environment implies fast image analysis. Hence, hardware/software architectures for time-critical processing modules should be investigated. The digital image industry has not yet produced satisfactory answers to all of these requirements. In summary, the project concentrates on the implementation of a wavelet image data structure that supports the above-mentioned functionalities, and on the analysis of wavelet-encoded images.
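As an illustration of the resolution scalability discussed above, the following Python sketch reconstructs a coarse overview of an image from only the coarsest scales of a wavelet decomposition. It is a minimal sketch assuming the PyWavelets (pywt) and NumPy libraries; the function name overview_at_level and the chosen wavelet are illustrative assumptions, not part of the project's own codec.

# Minimal sketch: resolution-scalable access to a wavelet-decomposed image.
# Assumes PyWavelets (pywt) and NumPy; `overview_at_level` is illustrative
# only and not part of the project's actual data structure.
import numpy as np
import pywt

def overview_at_level(image, levels=4, keep=2, wavelet="bior4.4"):
    """Decompose `image` into `levels` scales and reconstruct an overview
    that uses only the `keep` coarsest detail scales (finer details zeroed)."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # coeffs = [cA_levels, (cH,cV,cD)_levels, ..., (cH,cV,cD)_1]
    truncated = [coeffs[0]]
    for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        if i <= keep:
            truncated.append((cH, cV, cD))          # keep coarse details
        else:
            truncated.append((np.zeros_like(cH),    # drop fine details:
                              np.zeros_like(cV),    # lower resolution,
                              np.zeros_like(cD)))   # less data to transmit
    return pywt.waverec2(truncated, wavelet)

if __name__ == "__main__":
    img = np.random.rand(512, 512)                  # stand-in for a large image tile
    coarse = overview_at_level(img, levels=4, keep=1)
    print(img.shape, coarse.shape)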
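Arbitrarily shaped ROI processing can be sketched in a similar way by propagating the binary ROI mask into each detail subband and quantizing the background coefficients more coarsely than those inside the ROI. This is only one possible mechanism, again assuming pywt and NumPy; roi_quantize, the background quantization step and the mask handling are illustrative assumptions, not the project's actual ROI method.

# Sketch of arbitrary-shape ROI emphasis in the wavelet domain: the binary
# ROI mask is mapped onto every subband grid, background detail coefficients
# are quantized coarsely, and ROI coefficients keep full precision.
# Assumes pywt/NumPy; `roi_quantize` and its parameters are illustrative.
import numpy as np
import pywt

def roi_quantize(image, roi_mask, wavelet="bior4.4", levels=3, bg_step=32.0):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = [coeffs[0]]                                # keep approximation intact
    for cH, cV, cD in coeffs[1:]:
        # Nearest-neighbour downsampling of the mask to the subband grid.
        ys = np.linspace(0, roi_mask.shape[0] - 1, cH.shape[0]).astype(int)
        xs = np.linspace(0, roi_mask.shape[1] - 1, cH.shape[1]).astype(int)
        m = roi_mask[np.ix_(ys, xs)].astype(bool)
        def q(band):
            # Coarse quantization outside the ROI, full precision inside.
            coarse = np.round(band / bg_step) * bg_step
            return np.where(m, band, coarse)
        out.append((q(cH), q(cV), q(cD)))
    return pywt.waverec2(out, wavelet)

if __name__ == "__main__":
    img = np.random.rand(256, 256)
    yy, xx = np.mgrid[0:256, 0:256]
    mask = (yy - 128) ** 2 + (xx - 100) ** 2 < 60 ** 2   # irregular (circular) ROI
    print(roi_quantize(img, mask).shape)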
The project requires support for different classes of image processing and analysis algorithms: (1) wavelet coding of images, (2) multiscale wavelet-based edge detection, (3) multiscale wavelet-based texture analysis and classification, and (4) 3D scene synthesis from sequences of 2D images.

The last task is based on an original hypothesis that leads to a new method of jointly processing sequences of 2D images, taken by a single moving camera, to generate a 3D representation in a simpler way than current stereoscopic methods based on bi-receptor image acquisition. In addition, the method can generate a 3D representation with a resolution higher than that of each individual 2D image in the sequence. Consequently, compression of 3D scenes can be achieved, and stereo video transmission with lower bandwidth requirements can be developed.

To provide efficient hardware/software implementations, the processing requirements of critical sub-modules, i.e. the number of execution cycles, the number of memory accesses, the number of arithmetic operations and the required memory size, should be investigated. Ideally, the behavior of these modules as a function of a limited but relevant set of system parameters should be determined. Eventually, this will enable 'Computational Graceful Degradation' strategies, which reduce the processing load with minimal degradation of image quality whenever the available processing power is too limited. Multiresolution data representations fit these requirements particularly well: the low-resolution information can be used at low processing cost, while at higher available processing power the higher-resolution information can be processed as well. High-level mathematical/statistical models relate the amount of processed information to the processing requirements.

Due to the exponential growth in the complexity of microelectronic integrated circuits, the impact of a silicon implementation (at the level of the building blocks) of the wavelet-based image processing engine has to be analyzed. In the first stage of the project, the basic blocks best adapted to data processing in the wavelet domain will be identified. The hardware mapping of the time-critical modules of the wavelet software algorithms will then be provided using these building blocks. The design architecture will be captured in hardware description languages and simulated, targeting a silicon implementation.
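To illustrate algorithm class (2), the sketch below approximates multiscale edge detection by taking an undecimated wavelet transform and thresholding the modulus of the horizontal and vertical detail bands at each scale. The Haar wavelet and the median-based threshold are assumptions made for the example only, not the detector the project ultimately targets.

# Sketch of multiscale wavelet-based edge detection: an undecimated
# (stationary) wavelet transform gives horizontal/vertical detail bands at
# every scale, and the gradient-like modulus sqrt(cH^2 + cV^2) is
# thresholded per scale. Assumes pywt/NumPy.
import numpy as np
import pywt

def multiscale_edges(image, levels=3, wavelet="haar", k=3.0):
    edges = []
    for _, (cH, cV, _) in pywt.swt2(image, wavelet, level=levels):
        modulus = np.hypot(cH, cV)                   # edge strength at this scale
        thresh = k * np.median(np.abs(modulus))      # simple robust threshold
        edges.append(modulus > thresh)
    return edges                                     # one boolean edge map per scale

if __name__ == "__main__":
    img = np.zeros((256, 256)); img[:, 128:] = 1.0   # vertical step edge
    maps = multiscale_edges(img, levels=2)
    print([int(m.sum()) for m in maps])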
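Algorithm class (3) can be illustrated in the same spirit: the energies of the detail subbands of a wavelet decomposition form a compact texture feature vector that a conventional classifier could use. The feature choice below is an illustrative assumption, not the project's final texture descriptor.

# Sketch of wavelet-domain texture features: the mean energy of each detail
# subband is collected into a small feature vector for texture classification.
# Assumes pywt/NumPy.
import numpy as np
import pywt

def texture_features(patch, wavelet="db2", levels=3):
    coeffs = pywt.wavedec2(patch, wavelet, level=levels)
    feats = []
    for cH, cV, cD in coeffs[1:]:                    # skip the approximation band
        for band in (cH, cV, cD):
            feats.append(np.mean(band ** 2))         # subband energy
    return np.asarray(feats)

if __name__ == "__main__":
    smooth = np.ones((64, 64))
    noisy = np.random.rand(64, 64)                   # stand-ins for two textures
    print(texture_features(smooth).round(3))
    print(texture_features(noisy).round(3))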
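Finally, the 'Computational Graceful Degradation' idea can be sketched with a simple cost model that relates the number of wavelet coefficients per resolution level to a cycle budget and drops the finest levels when the budget is exceeded. All constants in the sketch (cycles per coefficient, budgets) are illustrative assumptions rather than measured processing requirements.

# Sketch of a 'Computational Graceful Degradation' policy: a simple cost
# model relates the number of processed wavelet coefficients per resolution
# level to a cycle budget, and the finest levels are dropped when the budget
# is tight. All constants are illustrative assumptions.
def levels_within_budget(width, height, max_levels, cycle_budget,
                         cycles_per_coeff=12):
    """Return how many resolution levels (coarsest first) fit in the budget."""
    chosen, spent = 0, 0
    for level in range(max_levels, 0, -1):           # coarsest -> finest
        n_coeffs = 3 * (width >> level) * (height >> level)  # 3 detail bands
        cost = n_coeffs * cycles_per_coeff
        if spent + cost > cycle_budget:
            break
        spent += cost
        chosen += 1
    return chosen, spent

if __name__ == "__main__":
    for budget in (2e5, 2e6, 2e7):
        n, used = levels_within_budget(2048, 2048, max_levels=5,
                                       cycle_budget=budget)
        print(f"budget {budget:>10.0f}: process {n} level(s), {used} cycles")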

Runtime: 1999 - 2002