Anastasis Germanidis wants to be everyone

Anastasis Germanidis (b. 1991) is a transdisciplinary artist exploring identity crisis in an age of rapid technological change. His artwork has been shown internationally across the US and Europe, and his recent project Antipersona was one of Wired UK's Best Apps of 2016. He's happiest when he works in public and hopes to one day have every part of his behavior and personality be generated by computer programs that he's written.

Contact: agermanidis@gmail.com (PGP key).

Randomly Generated Social Interactions

2015
Participatory Performance

How can we use technology to automate the surface content of our interactions, gaining the freedom to explore deeper layers of connection between each other? What do we lose and what do we gain by giving up our social agency to a computer program? Randomly Generated Social Interactions, a participatory performance first organized for Happenings for Na Kim’s SET in Doosan Gallery, playfully brings the absurd repetitiveness of our digital communications to our physical, real-world interactions.

Each participant is given a set of earphones and is asked to visit a website prepared for the performance, where they are assigned an identity consisting of a random name, age, occupation, and personality quirk. Following that, each participant is randomly matched with another participant and instructed to interact with them, receiving commands for exactly what to say and do during the interaction. A computer program randomly generates those instructions by mixing a number of “conversation routines” together. Some routines are entirely verbal (e.g. a discussion about the 2016 presidential election, or about creating art), while others are more physical and comical (e.g. an exercise routine, or a routine in which one participant faints and the other tries to rescue them).
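
A minimal Python sketch of the generation step, using hypothetical pools of names, occupations, quirks, and routines (the actual performance software is linked under Source Code below):

    import random

    # Hypothetical pools; the real performance used its own lists and routines.
    NAMES = ["Alex", "Sam", "Jordan", "Robin"]
    OCCUPATIONS = ["beekeeper", "tax auditor", "puppeteer"]
    QUIRKS = ["ends every sentence with a question", "refuses to say the word 'I'"]
    ROUTINES = {
        "election_talk": ["Ask your partner who they plan to vote for.",
                          "Disagree politely, whatever they answer."],
        "exercise":      ["Do five jumping jacks together.",
                          "Compliment your partner's form."],
        "fainting":      ["Participant A: faint.",
                          "Participant B: try to revive your partner."],
    }

    def assign_identity():
        """Give a participant a random name, age, occupation, and quirk."""
        return {
            "name": random.choice(NAMES),
            "age": random.randint(18, 80),
            "occupation": random.choice(OCCUPATIONS),
            "quirk": random.choice(QUIRKS),
        }

    def generate_interaction(num_routines=3):
        """Mix a few conversation routines into one instruction sequence."""
        chosen = random.sample(list(ROUTINES), k=num_routines)
        instructions = []
        for routine in chosen:
            instructions.extend(ROUTINES[routine])
        return instructions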

Credits

Director: Taeyoon Choi

Performers: Andy Dayton, Becca Moore, Kevin Stirnweis, Natalia Cabrera, Yifan Hu

Videographer: Stephanie Andreou

Links

Taeyoon Choi's write-up on Happenings for SET
Performance Website
Source Code

Welcome, Programmable Human

2015
Performance

Welcome, Programmable Human is an experimental performance in which all my actions, which include giving art critiques of student works, having conversations about today's news stories with visitors, and visualizing the stock market with my body, are generated by a computer program.

A set of Python modules sends instructions to my phone specifying the exact words to say and actions to take; the instructions are then relayed to my earphones through text-to-speech. The generated instructions are a function of a variety of online data sources scraped in real time; for instance, recent tweets containing the keyword "TFW" dictate what feelings I will express to the visitors.
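
A rough sketch of that generation loop, with a hypothetical fetch_recent_tweets helper standing in for the real-time scraping modules used in the performance:

    import random
    import time

    def fetch_recent_tweets(keyword):
        """Hypothetical stand-in for the real-time Twitter scraping module."""
        return ["TFW the gallery wifi drops mid-performance",
                "TFW your whole personality is a Python script"]

    def next_instruction():
        """Turn a recent 'TFW' tweet into a feeling for the performer to express."""
        tweet = random.choice(fetch_recent_tweets("TFW"))
        feeling = tweet.split("TFW", 1)[1].strip()
        return f"Tell the visitor: 'Right now I feel like {feeling}.'"

    if __name__ == "__main__":
        while True:
            print(next_instruction())   # in the performance, sent to the phone
            time.sleep(30)              # and read aloud via text-to-speech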

For the duration of the performance, the source code of the program is projected on the wall, with the line currently being evaluated highlighted, so as to enable complete algorithmic transparency of my behavior.

Links

Source Code

Antipersona

2016
Mobile Application

Antipersona simulates the experience of using Twitter as if you're signed in from any user account of your choice, providing a window into someone else's social media point-of-view. When you "become" an account on Antipersona, you can see the same timeline they see and receive the same notifications (for follows, mentions, and retweets) they receive, for 24 hours.
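
The actual app is an iOS application built on Twitter's API; purely as an illustration, the timeline half of the idea can be sketched in Python with hypothetical helpers for the network calls:

    import datetime

    def accounts_followed_by(target):
        """Hypothetical helper: handles the target account follows."""
        return ["@alice", "@bob"]

    def recent_tweets(handle, since):
        """Hypothetical helper: (timestamp, text) pairs posted since a given time."""
        return []

    def simulated_timeline(target, hours=24):
        """Approximate the home timeline the target account would see,
        by merging recent tweets from everyone they follow."""
        since = datetime.datetime.utcnow() - datetime.timedelta(hours=hours)
        tweets = []
        for handle in accounts_followed_by(target):
            tweets.extend(recent_tweets(handle, since))
        return sorted(tweets, key=lambda t: t[0], reverse=True)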

The advent of social media has turned our personal identities into discrete entities that we can mold to our wishes. At the same time, it has made us feel closer to other people's identities, enabling us to live a low-resolution version of their lives alongside them day by day. The boundaries separating our own identities from those of other people are quickly becoming irrelevant.

Perhaps being confined to a single identity is not how we want to exist in the world anymore. If that's the case, we need to come up with novel social and technological arrangements for sharing and adopting identities, turning them into a new kind of commons.

Mentions

Wired UK's Best Apps of 2016
Discussion at Hacker News
"Being John Malkovich, and six other Twitter users" by Eva Amsen

Links

App Store Page
Website
Source Code

Emotions Folder

2015
Desktop Application

Emotions Folder is an OS X application that uses your webcam to continuously monitor your facial expression. When it recognizes any display of a basic emotion (happiness, sadness, fear, anger, disgust, surprise), it captures a GIF snapshot of your face along with what’s on your screen and stores it inside a folder named “Emotions” in your home directory.
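
A simplified sketch of that monitoring loop in Python with OpenCV, assuming a hypothetical classify_emotion model in place of the app's facial expression recognition (the real application also captures the screen and saves animated GIFs):

    import os
    import time
    import cv2  # pip install opencv-python

    EMOTIONS = {"happiness", "sadness", "fear", "anger", "disgust", "surprise"}
    FOLDER = os.path.expanduser("~/Emotions")

    def classify_emotion(frame):
        """Hypothetical stand-in for the app's facial expression classifier."""
        return None  # return one of EMOTIONS when an expression is detected

    def monitor():
        os.makedirs(FOLDER, exist_ok=True)
        camera = cv2.VideoCapture(0)
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            emotion = classify_emotion(frame)
            if emotion in EMOTIONS:
                # The real app saves a GIF of the face plus a screen capture;
                # here we just write a single webcam frame.
                name = f"{emotion}-{int(time.time())}.png"
                cv2.imwrite(os.path.join(FOLDER, name), frame)
            time.sleep(1)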

Emotions Folder is meant to be a companion in self-reflection, allowing you to inspect your emotional past anytime you want and answer questions such as “what kind of stuff tends to make me sad?” or “what day of the week do I feel angry the most?”

If you're in a confessional mood, you now get to quickly share your recent emotions with your friends or on social media by dragging + dropping snapshots from your Emotions Folder. And if you ever want a clean break from the past, you can throw the Emotions Folder in the Trash Can. (Don't worry, the Emotions Folder will be regenerated when you start feeling things again.)

Emotions Folder constructs a digital trail of your emotional past, proving that you're not just someone who consumes and produces content, but someone who feels things while doing so. It’s a first step towards a new kind of software that doesn’t treat you as a one-dimensional, utilitarian user but as a complex human being!

Links

Download
Website
Source Code

Thingscoop

2015
Command-line Utility

Thingscoop is a command-line utility for analyzing videos semantically - that means searching, filtering, and describing videos based on objects and places that appear in them.

When you first run thingscoop on a video file, it uses a convolutional neural network to create an "index" of what's contained in every second of the input video by repeatedly performing image classification on its frames. Once an index for a video file has been created, you can search (i.e. get the start and end times of the regions in the video matching the query) and filter (i.e. create a supercut of the matching regions) the input using arbitrary queries.
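
A rough sketch of the indexing step, assuming a hypothetical classify_frame wrapper around the classification model (the utility's own implementation is in the linked source code):

    import cv2  # pip install opencv-python

    def classify_frame(frame):
        """Hypothetical wrapper around the CNN: labels detected in one frame."""
        return set()

    def build_index(path):
        """Map each second of the video to the set of labels detected in it."""
        video = cv2.VideoCapture(path)
        fps = int(video.get(cv2.CAP_PROP_FPS)) or 25
        index, frame_number = {}, 0
        while True:
            ok, frame = video.read()
            if not ok:
                break
            if frame_number % fps == 0:            # classify one frame per second
                index[frame_number // fps] = classify_frame(frame)
            frame_number += 1
        return index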

Thingscoop uses a very basic query language that lets you compose queries that test for the presence or absence of labels with the logical operators ! (not), || (or), and && (and). For example, to search a video for the presence of the sky and the absence of the ocean: thingscoop search 'sky && !ocean'.
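
Such a query can be evaluated in a few lines of Python over the per-second index sketched above; this is an illustrative re-implementation, not the utility's own parser:

    import re

    def matches(query, labels):
        """Evaluate a thingscoop-style query (e.g. "sky && !ocean") against the
        set of labels detected in one second of video. Labels become True/False
        by membership; !, && and || map onto Python's not/and/or."""
        tokens = re.split(r"(\(|\)|&&|\|\||!)", query)
        pieces = []
        for token in tokens:
            token = token.strip()
            if not token:
                continue
            if token == "&&":
                pieces.append("and")
            elif token == "||":
                pieces.append("or")
            elif token == "!":
                pieces.append("not")
            elif token in "()":
                pieces.append(token)
            else:
                pieces.append(str(token in labels))
        return eval(" ".join(pieces))

    def search(index, query):
        """Return the seconds of the video whose labels match the query."""
        return [second for second, labels in sorted(index.items())
                if matches(query, labels)]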

Right now two models are supported by thingscoop: vgg_imagenet uses the architecture described in "Very Deep Convolutional Networks for Large-Scale Image Recognition" to recognize objects from the ImageNet database, and googlenet_places uses the architecture described in "Going Deeper with Convolutions" to recognize settings and places from the MIT Places database.

Thingscoop is based on Caffe, an open-source deep learning framework.

Links

Source Code

Grouping scenes from various films by setting

All instances of "highway" (MIT Places label) in the movie Koyaanisqatsi

All instances of "military uniform" (ImageNet label) in the movie Moonrise Kingdom

All MIT Places labels appearing in the movie A Clockwork Orange in alphabetical order

Videodigest

2015
Command-line Utility

Videodigest is a command-line utility for generating condensed versions of videos. It does so by applying an automatic text summarization algorithm to the subtitles of the input video to find the N most important sentences, then compiling the video regions where those sentences appear using moviepy.

Several text summarization algorithms are supported for selecting those sentences.
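
As an illustration of the general approach, here is a much-simplified Python sketch that scores subtitle sentences by word frequency (a crude stand-in for the supported summarization algorithms) and cuts the matching regions with moviepy; subtitles are assumed to be pre-parsed into (start, end, text) tuples:

    from collections import Counter
    from moviepy.editor import VideoFileClip, concatenate_videoclips

    def top_sentences(subtitles, n):
        """Score each subtitle by the frequency of its words across the whole
        transcript; a crude stand-in for the real summarization algorithms."""
        counts = Counter(word.lower() for _, _, text in subtitles
                         for word in text.split())
        scored = sorted(subtitles,
                        key=lambda s: sum(counts[w.lower()] for w in s[2].split()),
                        reverse=True)
        return sorted(scored[:n])          # keep chronological order

    def digest(video_path, subtitles, n=20, output="digest.mp4"):
        """Concatenate the video regions where the top-N sentences are spoken."""
        video = VideoFileClip(video_path)
        clips = [video.subclip(start, end)
                 for start, end, _ in top_sentences(subtitles, n)]
        concatenate_videoclips(clips).write_videofile(output)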

Links

Source Code

3 minute summary of The Empire Strikes Back

1 minute summary of the congressional hearing on Planned Parenthood

1 minute summary of the first lecture of MIT's Quantum Physics I


Fickle News

2015
News Reader

The web has become overfilled with user tracking scripts. They’re all there to answer the questions: who are you? What kind of things do you like? The assumption is that your personality can be reduced to a vector in some kind of hyper-dimensional space of possible personalities, and that, once your unique “personality vector” has been determined, your experience of technology can become better through personalization. Netflix can show you the best movies for your taste, Pandora can play music that you will like, OkCupid can discover the optimal romantic partners for you, and so on.

Fickle News is a news reader web app that uses facial expression recognition to present you with news that corresponds to your inferred real-time emotional state, thereby implementing a very drastic kind of media personalization. It’s an interface-as-provocation inviting its users to consider the desirability of a future where technology has been ubiquitously designed to expose them exclusively to algorithmically curated information that is compatible with their existing life-outlook and worldview.
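
A toy sketch of that personalization loop, with hypothetical detect_emotion and fetch_headlines helpers standing in for the app's facial expression recognition and news sources:

    import random

    # Hypothetical mapping from a detected emotion to the tone of story served.
    EMOTION_TO_SECTION = {
        "happiness": "good news",
        "sadness": "tragedies",
        "anger": "outrage",
        "surprise": "weird news",
    }

    def detect_emotion(webcam_frame):
        """Hypothetical stand-in for the app's facial expression recognition."""
        return "happiness"

    def fetch_headlines(section):
        """Hypothetical stand-in for a news source filtered by section."""
        return [f"A {section} story"]

    def next_story(webcam_frame):
        """Serve a story that matches the reader's inferred emotional state."""
        section = EMOTION_TO_SECTION.get(detect_emotion(webcam_frame), "front page")
        return random.choice(fetch_headlines(section))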

Links

Writeup on The Daily Dot