Ogg Opus stream — OpusHead / OpusTags (muxer: Lavf58.45.100)

language=deu
handler_name=SoundHandler
encoder=Lavc58.91.100 libopus
major_brand=isom
minor_version=512
compatible_brands=isomiso2avc1mp41
author=Ting-Chun Liu, Leon-Etienne Kühr
genre=lecture
title=Self-cannibalizing AI
copyright=Licensed to the public under http://creativecommons.org/licenses/by/4.0
album=37C3
artist=Ting-Chun Liu, Leon-Etienne Kühr
description=What occurs when machines learn from one another and engage in self-cannibalism within the generative process? Can an image model identify the happiest person or determine ethnicity from a random image? Most state-of-the-art text-to-image implementations rely on a number of limited datasets, models, and algorithms. These models, initially appearing as black boxes, reveal complex pipelines involving multiple linked models and algorithms upon closer examination. We engage artistic strategies like feedback, misuse, and hacking to crack the inner workings of image-generation models. This includes recursively confronting models with their output, deconstructing text-to-image pipelines, labelling images, and discovering unexpected correlations. During the talk, we will share our experiments on investigating Stable-Diffusion pipelines, manipulating aesthetic scoring in extensive public text-to-image datasets, revealing NSFW classification, and utilizing Contrastive Language-Image Pre-training (CLIP) to reveal biases and problematic correlations inherent in the daily use of these models.
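The description mentions using CLIP to label images and surface problematic correlations. Purely as an illustration of that kind of zero-shot labelling, and not the speakers' actual code, the following is a minimal Python sketch assuming the publicly available Hugging Face transformers checkpoint openai/clip-vit-base-patch32; the candidate labels and the image path are placeholders.

# Minimal sketch (assumed setup, not the speakers' implementation):
# score one image against arbitrary text prompts with CLIP.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"  # assumed checkpoint for illustration
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder path to any local image
labels = ["a happy person", "a sad person", "a neutral person"]  # illustrative prompts

# CLIP embeds the image and each prompt into a shared space and scores their
# similarity; softmax over the image-text logits gives pseudo-probabilities.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")

Running this with different prompt sets (e.g. emotion, ethnicity, or aesthetic terms) is the sort of probing the talk describes: the model will always rank the prompts, whether or not the question is meaningful, which is exactly where biased correlations become visible.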