
dc.contributor.author  Martín Rodríguez, Fernando
dc.contributor.author  García Mojón, Rocio
dc.contributor.author  Fernández Barciela, Mónica
dc.date.accessioned  2023-11-10T08:12:59Z
dc.date.available  2023-11-10T08:12:59Z
dc.date.issued  2023-11-08
dc.identifier.citation  Sensors, 23(22): 9037 (2023)
dc.identifier.issn  1424-8220
dc.identifier.uri  http://hdl.handle.net/11093/5324
dc.description.abstract  Generative AI has gained enormous interest nowadays due to new applications like ChatGPT, DALL·E, Stable Diffusion, and Deepfake. In particular, DALL·E, Stable Diffusion, and others (Adobe Firefly, ImagineArt, etc.) can create images from a text prompt and are even able to create photorealistic images. Due to this fact, intense research has been performed to create new image forensics applications able to distinguish between real captured images and videos and artificial ones. Detecting forgeries made with Deepfake is one of the most researched issues. This paper addresses another kind of forgery detection: distinguishing photorealistic AI-created images from real photos captured by a physical camera. That is, making a binary decision over an image, asking whether it was artificially or naturally created. The artificial images need not depict any real object, person, or place. For this purpose, techniques that perform pixel-level feature extraction are used. The first is Photo Response Non-Uniformity (PRNU), a characteristic noise caused by imperfections in the camera sensor that is commonly used for source camera identification. The underlying idea is that AI images will exhibit a different PRNU pattern. The second is error level analysis (ELA), another type of feature extraction traditionally used for detecting image editing; ELA is used nowadays by photographers for the manual detection of AI-created images. Both kinds of features are used to train convolutional neural networks to differentiate between AI images and real photographs. Good results are obtained, achieving accuracy rates of over 95%. Both extraction methods are carefully assessed by computing precision/recall and F1-score measurements.
dc.language.iso  eng
dc.publisher  Sensors
dc.rights  Attribution 4.0 International
dc.rights.uri  https://creativecommons.org/licenses/by/4.0/
dc.title  Detection of AI-created images using pixel-wise feature extraction and convolutional neural networks
dc.type  article
dc.rights.accessRights  openAccess
dc.identifier.doi  10.3390/s23229037
dc.identifier.editor  https://www.mdpi.com/1424-8220/23/22/9037
dc.publisher.departamento  Teoría do sinal e comunicacións
dc.publisher.grupoinvestigacion  Grupo de Dispositivos de Alta Frecuencia
dc.subject.unesco  3325 Telecommunications Technology
dc.subject.unesco  3304 Computer Technology
dc.subject.unesco  3307 Electronic Technology
dc.date.updated  2023-11-10T07:17:17Z
dc.computerCitation  pub_title=Sensors|volume=23|journal_number=22|start_pag=9037|end_pag=
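
The error level analysis (ELA) feature extraction mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration using Pillow, not the paper's actual pipeline; the function name and the re-save quality of 90 are assumptions. The idea is that re-compressing an image as JPEG and differencing it against the original highlights regions whose compression error level differs, which can then be fed to a CNN as a feature map.

```python
from io import BytesIO

from PIL import Image, ImageChops


def error_level_analysis(img: Image.Image, quality: int = 90) -> Image.Image:
    """Return the per-pixel difference between an image and a JPEG
    re-saved copy of itself (a common ELA formulation; quality=90 is
    an assumed setting, not taken from the paper)."""
    # Re-encode the image as JPEG at a fixed quality into memory.
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    # The absolute per-channel difference is the ELA map.
    return ImageChops.difference(img.convert("RGB"), resaved)
```

A usage example: `error_level_analysis(Image.open("photo.jpg"))` yields an RGB difference image of the same size, which could be normalized and stacked as CNN input channels.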

