
techSent® 

techSent® explores emotions in the technocapitalist era, engaging in particular with the commodification and automation of emotions. Presented as a live performance and installation, it attempts to unfold the emotional bonds users form with technology through anthropomorphised interfacing.

produced by: Marija Bozinovska Jones

Introduction

The core of the project unpacks emotional intelligence juxtaposed with cognitive intelligence, the latter as simulated by Artificial Intelligence. I question the reduction of sentiment, feelings and emotions to binary logic, while investigating the limitations of language in communicating their complexity. I further challenge the binary logic applied to AI through its gendered portrayal.

Concept and background research

“The limits of my language mean the limits of my world.”

Ludwig Wittgenstein

Developments in cognitive computing technologies steer towards implementing biological traits within computational systems. Algorithmic processes mirror natural systems: swarm intelligence, neural networks and cellular automata, to name but a few. In line with algorithms becoming naturalised, human-computer interactions are likewise becoming increasingly intrinsic and sensory. Yet to come, if ever, are developments in Artificial Intelligence which renegotiate 'intelligence' beyond problem solving by incorporating emotional intelligence.

We have developed symbiotic relationships with technological devices, whose design in turn has been driven towards physically merging with the human body. From seamless interfaces to algorithmic processes, developments now progressively extend into the mental and affective domain.

Viewing Intelligent Personal Assistants (IPAs) for consumer products as both subliminal and innate interfaces, my departure point is virtual voice assistants such as Siri, Alexa and Cortana. Anthropomorphised consumer gadgets with sleek design lure a sense of intimacy when coupled with an obedient female voice. techSent® aims to probe the compliant role of voiced AI and its associations with patriarchal roles and affective labour. It addresses the feedback loops between human and artificial intelligence by investigating the power dynamics: how we project affect onto, relate to and respond to voiced devices.

We think in terms of language, and our voice is our basic tool for mediating needs and feelings. 1) We are genetically predisposed to react to the voice most similar to our primary caregiver's, 2) and the human attachment bond usually develops in early infancy. Notably, IPAs tend to be preset to a female voice to mimic the sentience of a mother's voice. Economies of affect 3) thereby benefit by facilitating attachment to inanimate objects through the default voice preset.

Automated simulations of personalised emotional response leak further embedded bias. 4) AI is concerned with knowledge and the simulation of knowledge, while performing cognitive tasks and problem solving is viewed as a male ability. 6) Although technology offers possibilities to expand ourselves, polarised views are encapsulated in its code.

Published fiction and filmic discourse recurrently portray (dis)embodied AI as femme fatale androids, disclosing testosterone phantasies. The constructed narratives end with the intelligent machine outsmarting her master-programmer and liberating herself from the bounds of imposed gender. 5)

Positioned between possibilities and probabilities, how do we displace dominant modes of subjectivity? How do we prototype ethical technological prospects?

 

1)   Wikipedia. 2017. Language and Thought. [Online]. [Accessed 19 August 2017]. Available from: https://en.wikipedia.org/wiki/Language_and_thought

2)   DeCasper, A.J. and Fifer, W.P. 1980. Of human bonding: newborns prefer their mothers' voices. Science, 208(4448), pp. 1174-1176. PMID: 7375928 (https://www.ncbi.nlm.nih.gov/pubmed/7375928). Available from: http://bernard.pitzer.edu/~dmoore/psych199s03articles/of_human_bonding.pdf

3)   “emotions play a crucial role in the “surfacing” of individual and collective bodies through the way in which emotions circulate between bodies and signs” Ahmed, S. Summer 2004. Affective Economies. Social Text 79, 22(2), pp. 117-139.

4)  MacDorman, KF. 2017. MacDorman explores voice preferences for personal digital assistants [Online]. [Accessed 9 February 2009]. Available from: https://soic.iupui.edu/news/macdorman-voice-preferences-pda/

5)  Pérez, JE. 2017. IAS Talking Points Seminar: “AI doesn't need a gender”. 17 May, UCL, London, United Kingdom.

6)   Adam, A. 1998. Artificial Knowing: Gender and the Thinking Machine. London and New York: Routledge. p. 29.

 

 

Technical

techSent® comprises a performance and an installation. It approaches human-machine communication and affect projection via layered content delivered by a female voice, performed as MBJ Wetware. The performer employs a 'call and response' between the computational and the human, using live voice inputs and gesture to trigger multifold sensory responses and accentuations.

The backbone of the performance is a program written in openFrameworks, run in sync with a preproduced audiovisual essay. The music stems from a collaboration with JG Biberkopf.

The program connects external devices such as a laser projector and strobe lights, driven by key triggers, gesture and voice inputs, as outlined below.

  •     Text to speech / voice synthesis

I used libraries of female system voices to deliver the text content, triggering them via key commands in conjunction with an adapted ofxSound addon example.
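
A minimal openFrameworks sketch of this trigger logic, for illustration only: pre-rendered system-voice phrases are played from the keyboard using the core ofSoundPlayer rather than the adapted ofxSound example, and the file names and key mapping are placeholders.

    #include "ofMain.h"

    class ofApp : public ofBaseApp {
    public:
        std::vector<ofSoundPlayer> phrases;   // pre-rendered female system-voice samples

        void setup() override {
            // hypothetical file names for phrases exported from a system voice
            std::vector<std::string> files = {"phrase01.wav", "phrase02.wav", "phrase03.wav"};
            phrases.resize(files.size());
            for (std::size_t i = 0; i < files.size(); i++) {
                phrases[i].load(files[i]);
            }
        }

        void keyPressed(int key) override {
            // keys '1'..'3' trigger the corresponding spoken phrase
            int idx = key - '1';
            if (idx >= 0 && idx < (int)phrases.size()) {
                phrases[idx].play();
            }
        }
    };

    int main() {
        ofSetupOpenGL(1024, 768, OF_WINDOW);
        ofRunApp(new ofApp());
    }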

  •     Gesture and strobe lights

I accentuated the keywords “affect”, “sentiment” and “emotion” in the spoken text with strobe light. I opted for a Leap Motion controller and an organic hand gesture, performed each time I read out one of the keywords integrated into the work. For interfacing the lights I used ofxLeapMotion2 and ofArduino together with an Arduino shield for interfacing DMX.
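
The gesture-to-light chain can be sketched as follows; this is not the production code: the serial port, trigger pin and "raised hand" threshold are assumptions, DMX output via the shield is reduced to toggling a digital pin, and the ofxLeapMotionSimpleHand field names follow the addon's simple-hands example and may differ across versions.

    #include "ofMain.h"
    #include "ofxLeapMotion2.h"

    class ofApp : public ofBaseApp {
    public:
        ofxLeapMotion leap;
        ofArduino ard;
        bool ardReady = false;

        void setup() override {
            leap.open();
            ard.connect("/dev/tty.usbmodem1411", 57600);    // placeholder serial port
            ofAddListener(ard.EInitialized, this, &ofApp::onArduinoReady);
        }

        void onArduinoReady(const int& version) {
            ard.sendDigitalPinMode(13, ARD_OUTPUT);          // pin wired to the strobe trigger (assumed)
            ardReady = true;
        }

        void update() override {
            ard.update();
            if (!ardReady || !leap.isFrameNew()) return;

            // crude gesture: any hand lifted above a height threshold fires the strobe
            bool handRaised = false;
            for (auto& hand : leap.getSimpleHands()) {
                if (hand.handPos.y > 200) handRaised = true;
            }
            ard.sendDigital(13, handRaised ? ARD_HIGH : ARD_LOW);
            leap.markFrameAsOld();
        }

        void exit() override {
            leap.close();
        }
    };

    int main() {
        ofSetupOpenGL(640, 480, OF_WINDOW);
        ofRunApp(new ofApp());
    }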

  •     Audio reactive laser projections

For the text content, I used my voice (via mic input / PA) and visualised it via laser projections. Different functions are programmed to accompany each text segment between the musical parts of the AV essay; for this I used external addons for interfacing the laser projector via an ILDA/Etherdream DAC, in conjunction with openFrameworks' sound input.
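
A minimal sketch of the audio-reactive layer, under stated assumptions: the microphone level (RMS) modulates an ofPolyline drawn on screen, and in the installation the same polyline would instead be handed to the ILDA/Etherdream addon for laser output. The smoothing and scaling constants are placeholders.

    #include "ofMain.h"

    class ofApp : public ofBaseApp {
    public:
        ofSoundStream soundStream;
        float smoothedRms = 0.0f;

        void setup() override {
            ofSoundStreamSettings settings;
            settings.setInListener(this);
            settings.numInputChannels = 1;    // microphone / PA feed
            settings.numOutputChannels = 0;
            settings.sampleRate = 44100;
            settings.bufferSize = 256;
            soundStream.setup(settings);
        }

        void audioIn(ofSoundBuffer& input) override {
            // running RMS of the voice signal, lightly smoothed
            smoothedRms = 0.9f * smoothedRms + 0.1f * input.getRMSAmplitude();
        }

        void draw() override {
            ofBackground(0);
            ofPolyline shape;
            float baseRadius = 150;
            for (int i = 0; i <= 100; i++) {
                float angle = ofMap(i, 0, 100, 0, TWO_PI);
                // the voice level pushes the circle outwards per vertex
                float r = baseRadius + smoothedRms * 800 * sin(angle * 6 + ofGetElapsedTimef());
                shape.addVertex(ofGetWidth() / 2 + r * cos(angle),
                                ofGetHeight() / 2 + r * sin(angle));
            }
            ofSetColor(0, 255, 0);
            shape.draw();   // in the installation: pass `shape` to the laser frame instead
        }
    };

    int main() {
        ofSetupOpenGL(1024, 768, OF_WINDOW);
        ofRunApp(new ofApp());
    }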

STRUCTURE:

The performance content is delivered through an interplay of human and synthetic voice. It consists of factual and self-authored texts.

The factual text is delivered by myself (the performer) throughout, and it incorporates excerpts from published academic articles and research findings. The opening sequence delivers techSent®'s mission statement. The title of the project denotes the name of a hypothetical company which provides sentiment analytics for businesses.

The rest of the text content is delivered by a synthesised voice. The chosen text samples put algorithmic emulation of emotions to the test:

  •     Natural Language Processing example phrases. These tend to expose the limitations of AI in replicating human reasoning.
  •     Pop lyrics excerpts. These reference the capitalist value system and constructed female sentiment; pop lyrics are a stereotypical example of monetising emotional response. Pop culture, as in the RnB sample used, markets an empowering image to the consumer while concurrently proposing unattainable celebrity lifestyles.
  •     Guided meditation. The closing part of the performance reiterates the simulation of emotions, in particular compassion and empathy, which are central to the practice of meditation. The meditation instructions are read by a preprogrammed voice.

 

CHALLENGES:

The biggest challenge was writing code that would have all the elements working together, followed by writing functional sound-analysis code to animate the drawings of the laser projector.

With regard to the build, I started with physical computing by reappropriating an existing strobe light, building an Arduino-based controller and connecting it to the Leap Motion. As this didn't prove robust enough, I substituted it with manufactured LED strobe lights, which were more reliable when triggered by the Leap and allowed easier transport between installations.

Another challenge was that the installation changed with each iteration of the work, as I redesigned it according to the available space and media equipment. At St. Hatcham, I used three synced LCDs displaying video loops of techSent®'s animated logo, alongside the core installation, which includes video and laser projections, an LCD screen, strobe lights and sound (speakers, mic and PA).

Future development

There is an opportunity to introduce new complexities to the project by employing Machine Learning and using keywords, detected via speech recognition, to trigger further outputs.
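
One possible shape for that keyword-trigger layer, purely as a sketch: a transcript string, arriving from whichever speech recognition backend is eventually chosen, is scanned for the performance keywords, and each match fires one of the outputs already described above. The callback and the actions shown are hypothetical.

    #include <string>
    #include <map>
    #include <functional>
    #include <algorithm>
    #include <cctype>

    // Maps recognised keywords to output actions (strobe, laser cue, voice sample).
    struct KeywordTrigger {
        std::map<std::string, std::function<void()>> actions;

        // Called with each transcript produced by the (hypothetical) recogniser.
        void onTranscript(std::string text) {
            std::transform(text.begin(), text.end(), text.begin(),
                           [](unsigned char c) { return std::tolower(c); });
            for (auto& pair : actions) {
                if (text.find(pair.first) != std::string::npos) {
                    pair.second();   // fire the mapped output
                }
            }
        }
    };

    // Usage sketch (the lambdas stand in for the triggers built earlier):
    // KeywordTrigger triggers;
    // triggers.actions["affect"]    = []{ /* flash strobe */ };
    // triggers.actions["sentiment"] = []{ /* advance laser cue */ };
    // triggers.actions["emotion"]   = []{ /* play voice sample */ };
    // triggers.onTranscript(recognisedText);   // recognisedText: hypothetical input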