Publication Details
Overview
Lamyaa Aljuaid, Matthew Chapman, Frederik Temmermans, Touradj Ebrahimi, Andrew Tewkesbury, Deepayan Bhowmik
 

Chapter in Book/Report/Conference proceeding

Abstract 

Content generation with generative AI has become common practice in recent years. Manipulated images are now widespread because sophisticated tools make them easy to modify, which poses a significant challenge. AI-driven visual content creation enhances creativity and efficiency, but it is also a considerable source of misinformation, hate crime, counterfeiting, fraud, and manipulated content. There is therefore an urgent need for robust detection and verification mechanisms. Traditional image manipulation detection methods typically focus on either image features or metadata analysis; both have limitations and, on their own, are insufficient against more advanced AI-based manipulation. We propose a novel framework that combines the recent JPEG Trust international standard (ISO/IEC 21617-1) with deep learning-based detection and localisation techniques to address the challenges of AI-manipulated image detection. The proposed framework consists of two components: A) a component that enables users to record provenance metadata about AI-powered image processing and supports ethical use through the existing JPEG Trust standard and its extension, and B) a component that enables verification of an image's authenticity through detection tools. The framework aims to improve the trustworthiness of AI-powered image processing within the media consumption chain by providing a robust two-layer verification system that strengthens confidence in image authenticity. To demonstrate its capability, we describe the adoption of the framework in two case studies: 1) Earth observation applications with satellite imagery and 2) digitised cultural heritage.
