000954566 001__ 954566
000954566 005__ 20251014112618.0
000954566 0247_ $$2datacite_doi$$a10.18154/RWTH-2023-03250
000954566 037__ $$aRWTH-2023-03250
000954566 041__ $$aEnglish
000954566 082__ $$a510
000954566 1001_ $$0P:(DE-82)954605$$aYermakov, Ruslan$$b0$$urwth
000954566 245__ $$aModel-based analysis of the image generation quality of adversarial latent autoencoders for industrial machine vision$$cRuslan Yermakov$$honline
000954566 260__ $$aAachen$$bRWTH Aachen University$$c2023
000954566 300__ $$a1 Online-Ressource : Illustrationen, Diagramme
000954566 3367_ $$02$$2EndNote$$aThesis
000954566 3367_ $$0PUB:(DE-HGF)19$$2PUB:(DE-HGF)$$aMaster Thesis$$bmaster$$mmaster
000954566 3367_ $$2BibTeX$$aMASTERSTHESIS
000954566 3367_ $$2DRIVER$$amasterThesis
000954566 3367_ $$2DataCite$$aOutput Types/Supervised Student Publication
000954566 3367_ $$2ORCID$$aSUPERVISED_STUDENT_PUBLICATION
000954566 500__ $$aVeröffentlicht auf dem Publikationsserver der RWTH Aachen University
000954566 502__ $$aMasterarbeit, RWTH Aachen University, 2022$$bMasterarbeit$$cRWTH Aachen University$$d2022$$gFak01$$o2022-10-12
000954566 5203_ $$lger
000954566 520__ $$aIn industrial machine vision applications, generative models have an advantage over discriminative models because they enable interpretable feature extraction and overcome the opaque, unexplainable decisions associated with the latter. State-of-the-art style-based generative adversarial networks (StyleGANs) generate high-quality images mapped from a latent feature vector space by learning an approximation of the high-dimensional training data distribution. In this way, learned representations can be identified by interpreting the latent space and used to control the properties of the synthesized images. StyleGANs in combination with an embedding algorithm, the adversarial latent autoencoder (ALAE), enable the assessment of the properties of embedded images through their latent space representations. However, to achieve the best possible approximation of the training data distribution, the network's design parameters must be optimized based on how effectively the network learns the quality characteristics of industrial machine vision data. This requires a quantitative evaluation of the quality of generated and reconstructed images in comparison to the quality of the original images. This work presents an evaluation framework to assess the capability of generative ALAE models to learn the features and characteristics of the original data consistently and reliably. Feature consistency is proposed as an evaluation criterion to estimate the performance of the generative models. The quality of images generated by ALAE is quantitatively evaluated, with a focus on determining how the latent space size affects the outcome. The latent space size was systematically varied during training on industrially relevant datasets with representative features. Based on application-specific and human-interpretable features, the image quality of multiple ALAEs with varying latent space dimensionalities is compared using statistical tests to select the most favorable latent space dimension.$$leng
000954566 591__ $$aGermany
000954566 653_7 $$adeep learning
000954566 653_7 $$afeature extraction
000954566 653_7 $$agenerative modeling
000954566 653_7 $$aimage quality
000954566 653_7 $$amachine learning
000954566 653_7 $$amachine vision
000954566 7001_ $$0P:(DE-82)IDM01595$$aBerkels, Benjamin$$b1$$eThesis advisor$$urwth
000954566 7001_ $$0P:(DE-82)IDM00865$$aSchmitt, Robert H.$$b2$$eThesis advisor$$urwth
000954566 7001_ $$0P:(DE-82)IDM03845$$aWolfschläger, Dominik$$b3$$eConsultant$$urwth
000954566 8564_ $$uhttps://publications.rwth-aachen.de/record/954566/files/954566.pdf$$yOpenAccess
000954566 909CO $$ooai:publications.rwth-aachen.de:954566$$popenaire$$popen_access$$pVDB$$pdriver$$pdnbdelivery
000954566 9101_ $$0I:(DE-588b)36225-6$$6P:(DE-82)954605$$aRWTH Aachen$$b0$$kRWTH
000954566 9101_ $$0I:(DE-588b)36225-6$$6P:(DE-82)IDM01595$$aRWTH Aachen$$b1$$kRWTH
000954566 9101_ $$0I:(DE-588b)36225-6$$6P:(DE-82)IDM00865$$aRWTH Aachen$$b2$$kRWTH
000954566 9101_ $$0I:(DE-588b)36225-6$$6P:(DE-82)IDM03845$$aRWTH Aachen$$b3$$kRWTH
000954566 9141_ $$y2023
000954566 915__ $$0StatID:(DE-HGF)0510$$2StatID$$aOpenAccess
000954566 9201_ $$0I:(DE-82)112430_20140620$$k112430$$lJuniorprofessur für Mathematische Bild- und Signalverarbeitung$$x0
000954566 9201_ $$0I:(DE-82)110000_20140620$$k110000$$lFachgruppe Mathematik$$x1
000954566 9201_ $$0I:(DE-82)080003_20140620$$k080003$$lAachen Institute for Advanced Study in Computational Engineering Science$$x2
000954566 961__ $$c2023-03-30T09:45:45.598605$$x2023-03-29T14:21:24.326737$$z2023-03-30T09:45:45.598605
000954566 9801_ $$aFullTexts
000954566 980__ $$aI:(DE-82)080003_20140620
000954566 980__ $$aI:(DE-82)110000_20140620
000954566 980__ $$aI:(DE-82)112430_20140620
000954566 980__ $$aUNRESTRICTED
000954566 980__ $$aVDB
000954566 980__ $$amaster
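Note: the abstract above describes comparing ALAE models with different latent space sizes via statistical tests on image-quality scores. A minimal sketch of such a pairwise comparison is given below; the quality metric, the Mann-Whitney U test, the latent dimensionalities, and the significance level are illustrative assumptions, not taken from the thesis itself.

    # Minimal sketch (assumptions, not the thesis implementation): pairwise
    # statistical comparison of per-image quality scores of ALAE models
    # trained with different latent space dimensionalities.
    from itertools import combinations

    import numpy as np
    from scipy.stats import mannwhitneyu


    def compare_latent_sizes(scores_by_dim, alpha=0.05):
        """Two-sided Mann-Whitney U tests between all pairs of latent sizes.

        scores_by_dim maps a latent dimensionality (e.g. 64, 128, 256) to an
        array of per-image quality scores (hypothetical metric, higher = better).
        Returns tuples of (dim_a, dim_b, U statistic, p-value, significant?).
        """
        results = []
        for dim_a, dim_b in combinations(sorted(scores_by_dim), 2):
            stat, p_value = mannwhitneyu(
                scores_by_dim[dim_a], scores_by_dim[dim_b], alternative="two-sided"
            )
            results.append((dim_a, dim_b, stat, p_value, p_value < alpha))
        return results


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Placeholder scores standing in for real feature-consistency measurements.
        fake_scores = {
            64: rng.normal(0.70, 0.05, size=200),
            128: rng.normal(0.74, 0.05, size=200),
            256: rng.normal(0.73, 0.05, size=200),
        }
        for dim_a, dim_b, stat, p, sig in compare_latent_sizes(fake_scores):
            print(f"{dim_a} vs {dim_b}: U={stat:.1f}, p={p:.4f}, significant={sig}")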