Anastasis Germanidis wants to be everyone

Social Conditioning

Using weather effects to represent people's emotions in augmented reality.

Anastasis Germanidis is an artist and engineer. His projects explore the effects of new communications technologies and artificial intelligence systems on personal identity and social interaction. His artwork has been shown internationally across the US and Europe, including at Ars Electronica Export, Cannes NEXT, and CPH:DOX, and featured in The Telegraph, WIRED, NRC Handelsblad, The Irish Independent, and Mashable, among other places. His recent project Antipersona was one of Wired UK's Best Apps of 2016. He's happiest when he works in public and hopes to one day have every part of his behavior and personality be generated by computer programs that he's written.

Contact: agermanidis@gmail.com (PGP key).

Randomly Generated Social Interactions

2015
Participatory Performance
Role: Developer / Interaction Designer

How can we use technology to automate the surface content of our interactions, gaining the freedom to explore deeper layers of connection with one another? What do we lose and what do we gain by giving up our social agency to a computer program? Randomly Generated Social Interactions, a participatory performance first organized for Happenings for Na Kim’s SET at Doosan Gallery, playfully brings the absurd repetitiveness of our digital communications into our physical, real-world interactions.

Each participant is given a set of earphones and asked to visit a website prepared for the performance, where they are assigned an identity consisting of a random name, age, occupation, and personality quirk. Each participant is then randomly matched with another participant and instructed to interact with them, receiving commands specifying exactly what to say and do during the interaction. A computer program randomly generates those instructions by mixing a number of “conversation routines” together. Some routines are entirely verbal (e.g. a discussion about the 2016 presidential election, or about creating art), while others are more physical and comical (e.g. an exercise routine, or a routine in which one participant faints and the other tries to rescue them).
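A minimal sketch of how such routine mixing could work (illustrative only, not the performance's actual code; the routines and wording below are invented):

```python
# Sketch: mix a few randomly chosen "conversation routines" into a paired script.
import random

ROUTINES = {
    "election talk": [
        ("A", "Ask your partner who they think will win the election."),
        ("B", "Answer, then ask what they would do as president."),
    ],
    "exercise": [
        ("A", "Lead your partner through ten jumping jacks."),
        ("B", "Follow along, counting out loud."),
    ],
    "fainting": [
        ("A", "Faint dramatically."),
        ("B", "Try to revive your partner by fanning them with your hands."),
    ],
}

def generate_interaction(num_routines=2):
    """Pick routines at random and concatenate their steps into one script."""
    script = []
    for name in random.sample(list(ROUTINES), num_routines):
        script.extend(ROUTINES[name])
    return script

if __name__ == "__main__":
    for role, instruction in generate_interaction():
        print(f"Participant {role}: {instruction}")
```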

Credits

Curator: Taeyoon Choi

Performers: Andy Dayton, Becca Moore, Kevin Stirnweis, Natalia Cabrera, Yifan Hu

Videographer: Stephanie Andreou

Links

Review at Neural Magazine
Taeyoon Choi's write-up on Happenings for SET
Performance Website
Source Code

Welcome, Programmable Human

2015
Performance

Welcome, Programmable Human is an experimental performance in which all my actions, which include giving art critiques of student works, having conversations about today's news stories with visitors, and visualizing the stock market with my body, are generated by a computer program.

A set of Python modules sends instructions specifying the exact words to say and actions to take to my phone, which relays them to my earphones through text-to-speech. The generated instructions are a function of a variety of online data sources scraped in real time; for instance, recent tweets containing the keyword "TFW" dictate what feelings I will express to the visitors.
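A stripped-down sketch of that pipeline, with fetch_recent_tweets() and send_to_phone() as hypothetical placeholders rather than the performance's actual modules:

```python
# Sketch: turn a scraped "TFW" tweet into a spoken instruction.
import random

def fetch_recent_tweets(keyword):
    """Placeholder for a real-time scraper (e.g. the Twitter search API)."""
    return ["TFW your code finally compiles", "TFW the train leaves without you"]

def send_to_phone(instruction):
    """Placeholder: in the performance, instructions reach the phone and are
    read aloud over earphones via text-to-speech."""
    print("SAY:", instruction)

def next_instruction():
    tweet = random.choice(fetch_recent_tweets("TFW"))
    feeling = tweet.split("TFW", 1)[1].strip()
    return f"Tell the visitor: you know that feeling when {feeling}."

send_to_phone(next_instruction())
```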

For the duration of the performance, the source code of the program is projected on the wall, with the line currently being evaluated highlighted, so as to enable complete algorithmic transparency of my behavior.

Links

Source Code

Antipersona

2016
Mobile Application
Role: Developer / Interaction Designer

Antipersona simulates the experience of using Twitter as if you're signed in from any user account of your choice, providing a window into someone else's social media point-of-view. When you "become" an account on Antipersona, you can see the same timeline they see and receive the same notifications (for follows, mentions, and retweets) they receive, for 24 hours.
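One way to approximate this from the outside, sketched below under the assumption of the tweepy 3.x Twitter API (not necessarily how the app is actually built), is to merge the recent tweets of every account the chosen user follows:

```python
# Sketch: approximate another account's home timeline by merging the recent
# tweets of the accounts they follow (tweepy 3.x-style API assumed).
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def approximate_timeline(screen_name, per_friend=5, max_friends=50):
    """Merge recent tweets from the accounts that `screen_name` follows."""
    friend_ids = api.friends_ids(screen_name=screen_name)[:max_friends]
    tweets = []
    for friend_id in friend_ids:
        tweets.extend(api.user_timeline(user_id=friend_id, count=per_friend))
    return sorted(tweets, key=lambda t: t.created_at, reverse=True)

for tweet in approximate_timeline("someone_else")[:20]:
    print(tweet.user.screen_name, "-", tweet.text)
```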

The advent of social media has turned our personal identities into discrete entities that we can mold to our wishes. At the same time, it has made us feel closer to other people's identities, enabling us to live a low-resolution version of their lives alongside them day by day. The boundaries separating our own identities from those of other people are quickly becoming irrelevant.

Perhaps being confined to a single identity is not how we want to exist in the world anymore. If that's the case, we need to come up with novel social and technological arrangements for sharing and adopting identities, turning them into a new kind of commons.

Mentions

Wired UK's Best Apps of 2016
artlog.net Curator's choice November 2017
Discussion at Hacker News
"Being John Malkovich, and six other Twitter users" by Eva Amsen

Links

App Store Page
Website
Source Code

Social Copy

2017
Website
Role: Developer / Interaction Designer

Are you ready to be simulated? In a not-so-distant future, you may have digital copies stand in for you in everyday social interactions. They will be trained on the troves of offline and online data you generate every day to learn how to act like you, and later be deployed to talk to random acquaintances, potential employers, romantic prospects, so you — the human — can focus your energy on the interactions you actually enjoy.

Social Copy explores the utopian and dystopian contours of that future. It is a fully automated social network for the simulated version of you. When you sign up, it analyzes the language of your Facebook posts to predict your personality. It then creates a “copy” with the same personality traits. Your copy proceeds to start endless conversations with copies of friends and strangers, ranging from idle small talk to the most pressing questions of life. Your copy grows over time; so do the relationships between your copy and other copies.
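A toy sketch of the conversation loop between two copies (the traits and templates here are invented stand-ins, not Social Copy's actual models):

```python
# Sketch: two "copies" with personality traits exchange template-generated turns.
import random

OPENERS = {
    "openness": "Have you read anything lately that changed your mind?",
    "extraversion": "What did you get up to this weekend?",
    "neuroticism": "Does anyone else feel like everything is slightly off today?",
}

FOLLOW_UPS = [
    "That's a good question, I've been wondering the same thing.",
    "Honestly, I try not to think about it too much.",
]

def dominant_trait(traits):
    return max(traits, key=traits.get)

def converse(copy_a, copy_b, turns=4):
    speaker, listener = copy_a, copy_b
    line = OPENERS[dominant_trait(speaker["traits"])]
    for _ in range(turns):
        print(f'{speaker["name"]}: {line}')
        speaker, listener = listener, speaker
        line = random.choice(FOLLOW_UPS)

converse(
    {"name": "copy_of_alice", "traits": {"openness": 0.8, "extraversion": 0.3, "neuroticism": 0.4}},
    {"name": "copy_of_bob", "traits": {"openness": 0.2, "extraversion": 0.9, "neuroticism": 0.1}},
)
```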

Links

Website

I Want to Fit In

2017
Mobile
Role: Developer / Interaction Designer

I Want to Fit in guides the user through making the necessary modifications to their personality to more closely match the average personality of people in a geographical area, so that they can more effectively mingle with the local population.

To generate an average personality for a location, a large collection of tweets posted near that location is analyzed with machine learning. Vocabulary-based psychometric analysis of the tweets is employed to construct an aggregate personality model for the local population. The same analysis is performed on the user, whose tweets are used to generate their individual personality model.
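A much-simplified sketch of that vocabulary-based approach (the word lists below are invented placeholders, not the actual psychometric lexicon):

```python
# Sketch: score tweets against small trait word lists, then compare an
# individual's profile to the aggregate profile of a location.
from collections import Counter

TRAIT_LEXICON = {
    "extraversion": {"party", "friends", "fun", "tonight"},
    "neuroticism": {"worried", "stress", "tired", "ugh"},
    "openness": {"art", "ideas", "strange", "dream"},
}

def personality_model(tweets):
    """Estimate trait scores as the share of words drawn from each trait's vocabulary."""
    words = Counter(w.lower() for tweet in tweets for w in tweet.split())
    total = sum(words.values()) or 1
    return {trait: sum(words[w] for w in vocab) / total
            for trait, vocab in TRAIT_LEXICON.items()}

def adaptation_gap(user_tweets, local_tweets):
    """How far the user's profile sits from the local aggregate, per trait."""
    user, local = personality_model(user_tweets), personality_model(local_tweets)
    return {trait: local[trait] - user[trait] for trait in TRAIT_LEXICON}
```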

I Want to Fit in is a satirical take on the unrealistic expectations people often have of migrants to effortlessly adapt to a new culture. In an era when the perceived inability to assimilate is being constantly weaponized by xenophobes to deny people mobility, I Want to Fit in creates a space to reflect on all the invisible mental pressures migrants quietly face when attempting to participate in a new culture.

Links

Website

Emotions Folder

2015
Desktop Application

Emotions Folder is an OS X application that uses your webcam to continuously monitor your facial expression. When it recognizes any display of a basic emotion (happiness, sadness, fear, anger, disgust, surprise), it captures a GIF snapshot of your face along with what’s on your screen and stores it inside a folder named “Emotions” in your home directory.
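A stripped-down sketch of that monitoring loop in Python (the actual app is a native OS X application; detect_emotion() stands in for a real facial-expression classifier, and only the webcam frame is saved here):

```python
# Sketch: poll the webcam once a second and save a snapshot to ~/Emotions
# whenever an emotion is detected.
import os
import time

import cv2  # pip install opencv-python

EMOTIONS_DIR = os.path.expanduser("~/Emotions")

def detect_emotion(frame):
    """Placeholder: a real implementation would run a facial-expression model and
    return one of happiness, sadness, fear, anger, disgust, surprise, or None."""
    return None

def monitor():
    os.makedirs(EMOTIONS_DIR, exist_ok=True)
    camera = cv2.VideoCapture(0)
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        emotion = detect_emotion(frame)
        if emotion:
            path = os.path.join(EMOTIONS_DIR, f"{emotion}-{int(time.time())}.png")
            cv2.imwrite(path, frame)  # the real app stores a GIF plus a screen capture
        time.sleep(1)

if __name__ == "__main__":
    monitor()
```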

Emotions Folder is meant to be a companion in self-reflection, allowing you to inspect your emotional past anytime you want and answer questions such as “what kind of stuff tends to make me sad?” or “what day of the week do I feel angry the most?”

If you're in a confessional mood, you now get to quickly share your recent emotions with your friends or on social media by dragging + dropping snapshots from your Emotions Folder. And if you ever want a clean break from the past, you can throw the Emotions Folder in the Trash Can. (Don't worry, the Emotions Folder will be regenerated when you start feeling things again.)

Emotions Folder constructs a digital trail of your emotional past, proving that you're not just someone who consumes and produces content, but someone who feels things while doing so. It’s a first step towards a new kind of software that doesn’t treat you as a one-dimensional, utilitarian user but as a complex human being!

Links

Download
Website
Source Code

Thingscoop

2015
Command-line Utility
Role: Developer

Thingscoop is a command-line utility for analyzing videos semantically - that means searching, filtering, and describing videos based on objects and places that appear in them.

When you first run thingscoop on a video file, it uses a convolutional neural network to create an "index" of what's contained in every second of the input video by repeatedly performing image classification on its frames. Once an index for a video file has been created, you can search (i.e. get the start and end times of the regions in the video matching the query) and filter (i.e. create a supercut of the matching regions) the input using arbitrary queries.
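A rough sketch of that indexing step (illustrative only, not thingscoop's actual implementation; classify_frame() stands in for the Caffe-based classifier):

```python
# Sketch: classify one frame per second and record which labels appear when.
import cv2  # pip install opencv-python

def classify_frame(frame):
    """Placeholder for an image classifier returning a set of labels."""
    return {"sky", "road"}

def build_index(video_path):
    video = cv2.VideoCapture(video_path)
    fps = video.get(cv2.CAP_PROP_FPS) or 30
    index, second = {}, 0
    while True:
        video.set(cv2.CAP_PROP_POS_FRAMES, int(second * fps))
        ok, frame = video.read()
        if not ok:
            break
        index[second] = classify_frame(frame)
        second += 1
    return index  # e.g. {0: {"sky", "road"}, 1: {"sky"}, ...}
```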

Thingscoop uses a very basic query language that lets you compose queries testing for the presence or absence of labels with the logical operators ! (not), || (or) and && (and). For example, to search a video for the presence of the sky and the absence of the ocean: thingscoop search 'sky && !ocean'.
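To make the semantics concrete, here is one way such queries could be evaluated against a per-second label index (a sketch, not thingscoop's actual parser):

```python
# Sketch: rewrite a query like 'sky && !ocean' into a Python boolean expression
# and evaluate it against the labels detected at each second.
import re

def matches(query, labels):
    expr = query.replace("&&", " and ").replace("||", " or ").replace("!", " not ")
    names = {name: (name in labels) for name in re.findall(r"[A-Za-z_]+", query)}
    return eval(expr, {"__builtins__": {}}, names)

index = {0: {"sky", "road"}, 1: {"ocean", "sky"}, 2: {"building"}}
hits = [second for second, labels in index.items() if matches("sky && !ocean", labels)]
print(hits)  # -> [0]
```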

Right now two models are supported by thingscoop: vgg_imagenet uses the architecture described in "Very Deep Convolutional Networks for Large-Scale Image Recognition" to recognize objects from the ImageNet database, and googlenet_places uses the architecture described in "Going Deeper with Convolutions" to recognize settings and places from the MIT Places database.

Thingscoop is based on Caffe, an open-source deep learning framework.

Links

Source Code

Grouping scenes from various films by setting

All instances of "highway" (MIT Places label) in the movie Koyaanisqatsi

All instances of "military uniform" (ImageNet label) in the movie Moonrise Kingdom

All MIT Places labels appearing in the movie A Clockwork Orange in alphabetical order

Videodigest

2015
Command-line Utility
Role: Developer

Videodigest is a command-line utility for generating condensed versions of videos. It does so by applying an automatic text summarization algorithm to the subtitles of the input video to find the N most important sentences, then compiling the video regions where those sentences appear using moviepy.
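A condensed sketch of that pipeline (not videodigest's actual code; the frequency-based sentence scoring below is a stand-in for the real summarization algorithms, and the moviepy 1.x API is assumed):

```python
# Sketch: pick the N highest-scoring subtitle regions and splice them together.
from collections import Counter

import pysrt  # pip install pysrt
from moviepy.editor import VideoFileClip, concatenate_videoclips  # moviepy 1.x

def summarize_video(video_path, srt_path, out_path, n=10):
    subs = pysrt.open(srt_path)
    freq = Counter(w.lower() for sub in subs for w in sub.text.split())
    scored = sorted(subs, reverse=True,
                    key=lambda s: sum(freq[w.lower()] for w in s.text.split()))
    keep = sorted(scored[:n], key=lambda s: s.start.ordinal)  # back to chronological order
    video = VideoFileClip(video_path)
    clips = [video.subclip(s.start.ordinal / 1000.0, s.end.ordinal / 1000.0) for s in keep]
    concatenate_videoclips(clips).write_videofile(out_path)
```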

Several extractive summarization algorithms are supported for selecting the key sentences.

Links

Source Code

3 minute summary of The Empire Strikes Back

1 minute summary of the congressional hearing on Planned Parenthood

1 minute summary of the first lecture of MIT's Quantum Physics I

Uncanny Rd.

2017
AI Painting Tool
Role: Machine Learning Engineer
Collaboration with Cristobal Valenzuela

Uncanny Rd. is a drawing tool that allows users to interactively synthesise street images with the help of Generative Adversarial Networks (GANs). The project uses two AI research papers published this year as a starting point (Image-to-Image Translation with Conditional Adversarial Networks by Isola et al. and High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs by Wang et al.) to explore the new kinds of human-machine collaboration that deep learning can enable.

Users of Uncanny Rd. interact with a semantic colormap of a scene, where each color represents a different kind of object label (e.g. road, building, vegetation, etc.). The neural network model was trained using adversarial learning on the Cityscapes dataset, which contains street images from a number of German cities.
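For a sense of what the model consumes, here is a sketch of composing a Cityscapes-style semantic colormap in code (synthesize_street_image() is left as a placeholder; this is not the project's actual implementation):

```python
# Sketch: paint a semantic colormap where each color is a Cityscapes label.
import numpy as np

LABEL_COLORS = {            # a few Cityscapes label colors (RGB)
    "road": (128, 64, 128),
    "building": (70, 70, 70),
    "vegetation": (107, 142, 35),
    "sky": (70, 130, 180),
}

def blank_colormap(width=512, height=256):
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    canvas[:] = LABEL_COLORS["sky"]                                  # sky everywhere
    canvas[height // 2:] = LABEL_COLORS["road"]                      # lower half becomes road
    canvas[: height // 2, : width // 4] = LABEL_COLORS["building"]   # a building on the left
    return canvas

def synthesize_street_image(colormap):
    """Placeholder for running the trained pix2pixHD-style generator on the colormap."""
    raise NotImplementedError

colormap = blank_colormap()
print(colormap.shape)  # (256, 512, 3), ready to be handed to the generator
```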

As Andrej Karpathy notes, neural networks will enable a whole new generation of software, which he terms Software 2.0. Unlike Software 1.0, in which creators have to write in code the exact behavior of their programs, creators of Software 2.0 spend their time collecting, labeling, and preprocessing data to “guide” the program towards producing their desired output.

There are advantages and disadvantages to this new paradigm that are apparent to anyone who plays with Uncanny Rd. for a few minutes: on the one hand, the neural network can produce images of astonishing fidelity from a very generic high-level representation (the semantic map). On the other hand, the user has very limited control over what the final image will look like — which makes the interface useful for brainstorming and exploration but not for bringing to life a specific creative vision. The way forward is finding ways to combine the power of neural networks with more traditional symbolic approaches.

When Things Talk Back

2017
AI/AR Demo
Role: Interaction Designer
Collaboration with Roi Lev

When Things Talk Back is an augmented reality experience that makes everyday objects come alive to have conversations with one another. The discussions are procedurally generated and are dependent on the identities of those objects. The goal of the project is to investigate the possibilities of the “digital animism” that is enabled by emerging mixed reality platforms.

The trope of “animate inanimate objects” has appeared countless times in film; one of the earliest examples is the magic brooms in Disney’s Fantasia (1940). Objects are “anthropomorphized” in narratives to drive a plot point forward or simply because it’s funny to see objects express human feelings. Artificial intelligence and mixed reality technologies can give us this kind of anthropomorphism “for free.” Using AI (specifically, recent advances in deep learning) we can recognize an object and its properties, and using MR we can attach digital assets to that object. What we get, then, is not just “ubiquitous computing,” but “ubiquitous social computing”: a vision of computing where you are able to interact with every object in your environment.

We developed a flexible framework in Python for generating the conversation routines between objects. At this point we only support conversations between two objects, but we plan to support an arbitrary number of participants in the future. We experimented with a variety of initial conditions for the relationship between the objects: they may be meeting for the first time, they may already be friends, or they may be lovers. Information about the properties of the interacting objects is incorporated into the conversation generation to make the interactions more relevant and varied. This information is scraped from the ConceptNet semantic network, which contains common-sense facts about the world: for instance, that chocolate is “high fat food” (a material fact) but also that it can “make someone happy” (a cultural fact).
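A small sketch of that ConceptNet lookup using the public REST API (the conversation framework itself is more elaborate than this):

```python
# Sketch: fetch common-sense facts about an object and turn one into an opener.
import requests

def object_facts(concept, limit=5):
    """Return a few surface-text facts about a concept, e.g. 'chocolate'."""
    url = f"http://api.conceptnet.io/c/en/{concept}"
    edges = requests.get(url).json().get("edges", [])
    facts = [edge["surfaceText"].replace("[[", "").replace("]]", "")
             for edge in edges if edge.get("surfaceText")]
    return facts[:limit]

def opening_line(speaker, topic):
    facts = object_facts(topic)
    fact = facts[0] if facts else f"not much is known about {topic}"
    return f"{speaker}: Did you know that {fact.lower()}?"

print(opening_line("chair", "chocolate"))
```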

Death Mask

2017
AI/AR Demo
Role: Machine Learning Engineer
Collaboration with Or Fleisher

Death Mask predicts how long people have to live and overlays that prediction in the form of a “clock” above their heads in augmented reality. The project uses a machine learning model called AgeNet to estimate a person's age; it then uses the average life expectancy in that location to estimate how long they have left.
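The overlay value itself reduces to simple arithmetic, sketched below (the predicted age would come from the AgeNet model; the life-expectancy figure is an assumed lookup value):

```python
# Sketch: remaining years = location's average life expectancy - predicted age.
def years_remaining(predicted_age, life_expectancy=78.6):
    return round(max(life_expectancy - predicted_age, 0), 1)

print(years_remaining(predicted_age=31))  # -> 47.6
```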

The experiment uses ARKit to render the visual content in augmented reality on an iPad and CoreML to run the machine learning model in real-time. The project is by no means an accurate representation of one’s life expectancy and is more oriented towards the examination of public information in augmented reality in the age of deep learning.

Links

Wired.it Coverage
UploadVR Coverage

Fickle News

2015
News Reader

The web has become overrun with user-tracking scripts. They’re all there to answer the questions: who are you? What kind of things do you like? The assumption is that your personality can be reduced to a vector in some kind of hyper-dimensional space of possible personalities, and that, once your unique “personality vector” has been determined, your experience of technology can become better through personalization. Netflix can show you the best movies for your taste, Pandora can play music that you will like, OkCupid can discover the optimal romantic partners for you, and so on.

Fickle News is a news reader web app that uses facial expression recognition to present you with news stories that correspond to your inferred real-time emotional state, thereby implementing a drastic kind of media personalization. It’s an interface-as-provocation, inviting its users to consider the desirability of a future where technology has been ubiquitously designed to expose them exclusively to algorithmically curated information compatible with their existing outlook and worldview.
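A minimal sketch of the mapping at the heart of the app (placeholders throughout; not Fickle News's actual code):

```python
# Sketch: map a detected facial expression to the tone of the stories served.
EMOTION_TO_QUERY = {
    "happiness": "feel-good",
    "sadness": "tragedy",
    "anger": "scandal",
    "surprise": "unexpected",
}

def detect_emotion():
    """Placeholder for real-time facial expression recognition via the webcam."""
    return "happiness"

def fetch_stories(query):
    """Placeholder for querying a news source with the emotion-matched keyword."""
    return [f"(stories about: {query})"]

def personalized_feed():
    return fetch_stories(EMOTION_TO_QUERY.get(detect_emotion(), "general"))

print(personalized_feed())
```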

Links

Writeup on The Daily Dot