As architecture students, we often struggle to find cutout figures (I call them peeps) to tell the narratives of our renders. The peeps are often in the wrong pose or the wrong perspective, and as an afterthought their visual aesthetic rarely matches that of the render; they distract from our narrative, and the same cutout figures from the same websites recur throughout students' work. This project, developed during a short deepfakes workshop run by the Media Lab, explores how GANs coupled with different algorithms could help us generate a different kind of peep: how to find a dataset and preprocess it, and how the biases of neural networks produce interpretations that do not directly reflect the dataset provided, which leaves room for creativity in the training process of GANs.

The project builds on a few open-source models: pix2pixHD, U2-Net, and DensePose. The training images come from the Fashionpedia dataset.