While exploring the boundaries of photography with these computer-generated auto-portraits, I tried to reflect on how big-data companies categorise people to create personally targeted advertising. The auto-portraits show the noise created by the interaction between humans and machines: the struggle to understand and decode human behaviour in the hope of finding its desires, ultimately wanting to turn the human species into a product database and offer it what it desires most. Does the algorithm know the photographer better than he knows himself, or is it merely a bad copy? Our online behaviour creates an immortal double, built from each of our interactions.
When does it become us, or when do we become it?
Under the GDPR, I requested access to the personal data Twitter held about me. Twitter had sorted me into over 830 different personal "interests" based on my activity on the platform. The sole purpose of categorising users is to create personally targeted advertising: ads would be shown to me depending on these criteria.
In a quest to understand this data, I started categorising and colour-coding these "interests" to get a blueprint of myself as a person in the eyes of Twitter. This personal blueprint was accurate and false at the same time. Some anomalies seemed entirely foreign to me, even though the data originated solely from my Twitter usage. Did Twitter know me better than I knew myself, or was it completely wrong?
I used every picture of myself that I had ever published on social media as a data point for the new pictures. On top of each picture I laid a unique colour at 50% transparency, so that each picture represented one "interest".
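The colour-overlay step can be sketched as simple alpha blending: each pixel is mixed half-and-half with the flat "interest" colour. This is a minimal illustrative sketch, not the actual tool used for the project; the function name and the list-of-tuples image representation are my own assumptions.

```python
# Hypothetical sketch of the overlay step: blend every pixel of an RGB
# image (represented here as a flat list of (r, g, b) tuples) with a
# single flat colour at a given opacity (0.5 = the 50% transparency
# described above).
def overlay_colour(pixels, colour, alpha=0.5):
    """Return a new pixel list with `colour` blended over each pixel."""
    return [
        tuple(round((1 - alpha) * p + alpha * c) for p, c in zip(pixel, colour))
        for pixel in pixels
    ]

# Example: a white pixel tinted with pure red at 50% opacity.
print(overlay_colour([(255, 255, 255)], (255, 0, 0)))  # [(255, 128, 128)]
```

In practice an image library such as Pillow would do this per channel over the whole image, but the arithmetic per pixel is the same.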
The pictures were created using StyleGAN2, a program whose purpose is to generate new images from an image bank. For this project, I used the 830 images as the image bank so that the program could create new portraits of me. The result is an image that is me and not me at the same time: a corporate vision of who I am. It is me on Twitter.