Expressive Machine
An exploration of new expressive nonverbal channels for conversational agents (CAs) to adjust user expectations
Project Info
Mar. 2019 - Jun. 2019
Unit 2 - A Tool for Whom
Royal College of Art, London
Tools
Python - Machine Learning prototype
Unity - Working Drawing Interface
Sketch, Principle - App Interface & Interaction
Role & Teammates
Personal Project
Project Info
Nov. 2018 - Jan. 2019
Unit 1 - Smart Habitat
Royal College of Art, London
Tools
Wekinator - Machine Learning
Leap Motion, Kinect - Gesture Tracking
After Effects - Concept Video
Role & Teammates
Personal Project
Project Info
Nov. 2019 - Jan. 2020
Unit 4 - Design For The Unsettling
Royal College of Art, London
Tools
Google Assistant API - Voice Interaction
Arduino - Functional Prototypes
IFTTT - Online IoT Platform
Role & Teammates
Personal Project
Project Info
Oct. 2019 - Nov. 2019
Unit 3 - The Exhibitionist
Royal College of Art, London
Tools
Unity, C4D, Premiere - VR Prototype
Raspberry Pi - Tangible Heartbeat Prototype
AR.js, A-Frame - AR Poster
Role & Teammates
Research, Concept Generation, VR + AR + Tangible Interaction Design & Prototyping
Anne-Marie Heck, Rashmi Bidasaria
Project Info
Nov. 2018 - 5-day workshop
AcrossRCA - SenseAbility
Royal College of Art, London
Tools
Arduino - Sensory Interaction Prototype
Scent Making
A/B Testing
Role & Teammates
Research, Concept Generation, Sensory Interaction Design & Prototyping
Moritz Dittrich, Janina Frye, Sushila Pun, Yiling Zhang
Overview

Framed as “dialogue systems often endowed with ‘humanlike’ behaviour”, conversational agents (CAs) are becoming ever more common human-computer interfaces. But in most instances, CA systems fail to bridge the gap between user expectation and system operation. Playful/humorous interactions such as telling a joke reinforce anthropomorphic qualities, setting unrealistic user expectations of CA capability. At the same time, user interaction with CAs remains passive: users always need to say “Hey Siri” or “Ok Google” to start the conversation, and voice is the only channel for expression.

To make CAs more engaging and help users form more accurate expectations, Expressive Machine explores new expressive nonverbal channels for CAs, embedding the agent's personality in different modalities (skin texture and colour, smell, and gesture) to express emotional and internal states during interaction.

Machine 1 adapts to the surrounding environment by changing colour and only stands out to attract attention when necessary. Machine 2 diffuses different smells, designed from the machine's perspective, to provide information. Machine 3 expresses its internal state through skin texture changes. Machine 4 raises its hands to inform actively without overburdening the user. Combinations of these modalities and spoken sentences generate even richer and more playful meanings. As a result, Expressive Machine serves as an example of significantly broadening the expressive spectrum of machines for social interaction, supporting users' assessment of system intelligence.

Tangible Heartbeat Prototype
Two drumsticks strike the surface of the box with a slight time offset to simulate the “lub-dub” rhythm of a heartbeat, so that visitors who place their hands on the shell feel the dual sensory stimulus of vibration and sound. Between experiences, this stimulus also sparks passers-by's curiosity about the installation.
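A minimal sketch of the timing logic, assuming two solenoid-driven drumsticks on the Raspberry Pi that powers the tangible prototype; the GPIO pins, strike duration, and beat timings are illustrative assumptions, not the installation's exact values.

```python
# Minimal heartbeat sketch: two solenoid strikers produce the "lub-dub"
# rhythm. Pin numbers and timings are illustrative assumptions.
import time
import RPi.GPIO as GPIO

LUB_PIN, DUB_PIN = 17, 27   # hypothetical GPIO pins driving the strikers
STRIKE_MS = 0.04            # how long each solenoid stays energised (s)
LUB_DUB_GAP = 0.15          # offset between the two beats (s)
BEAT_PERIOD = 1.0           # one heartbeat per second (~60 bpm)

GPIO.setmode(GPIO.BCM)
GPIO.setup([LUB_PIN, DUB_PIN], GPIO.OUT, initial=GPIO.LOW)

def strike(pin):
    """Energise one solenoid briefly so its drumstick hits the box."""
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(STRIKE_MS)
    GPIO.output(pin, GPIO.LOW)

try:
    while True:
        strike(LUB_PIN)     # first, stronger beat ("lub")
        time.sleep(LUB_DUB_GAP)
        strike(DUB_PIN)     # second beat ("dub")
        time.sleep(BEAT_PERIOD - LUB_DUB_GAP - 2 * STRIKE_MS)
finally:
    GPIO.cleanup()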
Visual Guide Map
AR Poster
Mock Up - Full Version Exhibition
The ‘Exhibition mock-up’ explores how a larger vision for this project could be showcased.
KEY FEATURES
Welcome to Doodle Pet
Skill Training
Users can train their own unique AI pets in a variety of ways to unlock their favorite skills.

Taking the cat skill as an example, users can train their AI pet by photographing what they think of as cats in daily life. They can also import existing images from their album, or choose from the provided online image libraries.
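A minimal sketch of how these training images might be gathered, assuming each skill keeps its own folder of user-chosen images that later feed the drawing model; the folder layout, unlock threshold, and function names are hypothetical.

```python
# Collect a user's chosen images (camera, album, or online library) into
# a per-skill training set. Folder layout is an illustrative assumption.
import shutil
from pathlib import Path

TRAINING_ROOT = Path("doodle_pet/training")

def add_training_image(skill: str, image_path: str) -> Path:
    """File one user-selected image under the given skill (e.g. 'cat')."""
    skill_dir = TRAINING_ROOT / skill
    skill_dir.mkdir(parents=True, exist_ok=True)
    dest = skill_dir / Path(image_path).name
    shutil.copy(image_path, dest)
    return dest

def skill_unlocked(skill: str, threshold: int = 50) -> bool:
    """A skill unlocks once enough training images have been gathered."""
    skill_dir = TRAINING_ROOT / skill
    return skill_dir.exists() and len(list(skill_dir.iterdir())) >= threshold

# Example usage (paths are hypothetical):
# add_training_image("cat", "album/IMG_0042.jpg")
# print(skill_unlocked("cat"))
```

Because the training set is whatever the child decides counts as a “cat”, the pet's later suggestions inherit that personal definition rather than a scientist's.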
Draw With AI Pet
After the skills are unlocked, users can collaborate with their AI pets to create unique drawings. When users draw an outline, their pets create the corresponding textures, introducing uncertainty and inspiring users.

Since each user chooses different images to train skills, the textures reflect that user's unique perspective.
Review Diary
On the diary page, users can view the images selected to train and unlock skills, as well as their previous creations.

By viewing these images and statistics, parents can better understand the world through their children's eyes and their children's cognitive development.
MOTIVATION
Identify opportunity areas for innovation
AI Mark-Making
Google DeepDream Exhibition
Obvious Art - AI portrait fetched over $400,000

Trained the AI on many famous artworks
But how the AI works remains a black box to the public
Machine For Myself
There is also a trend among people who make things: shifting their focus from objects to processes, and building their own machines that make things.

A new relationship with machines: from subordination to collaboration
What will a new form of human-machine interaction in the creative field look like
when the tool has its own thoughts?

How can we help the general public better understand AI, and even cooperate with it?
Initial Prototype
Trained the algorithm on paired images (cats' edge maps and photos of cats) and turned it into an interactive interface using Unity.
Based on Pix2Pix - Image-to-Image Translation with Conditional Adversarial Nets
An interactive real-time drawing recommendation system.
The AI only makes suggestions instead of taking over control.
Its suggestions don't aim at a single correct answer; instead they create uncertainty and inspire users.
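A minimal sketch of how such paired training data can be prepared, following pix2pix's side-by-side input|target convention; the paths, image size, and the use of OpenCV's Canny detector as the edge extractor are assumptions, not necessarily this project's exact pipeline.

```python
# Build (edge map, photo) training pairs for a pix2pix-style model.
# Paths, image size, and Canny thresholds are illustrative assumptions.
import glob
import os
import cv2

SRC_DIR, OUT_DIR = "data/cats", "data/pairs"
SIZE = 256  # pix2pix commonly works on 256x256 inputs

os.makedirs(OUT_DIR, exist_ok=True)

for path in glob.glob(os.path.join(SRC_DIR, "*.jpg")):
    photo = cv2.resize(cv2.imread(path), (SIZE, SIZE))
    gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                  # edge map = model input
    edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    # pix2pix's original data loader expects input|target side by side.
    pair = cv2.hconcat([edges_bgr, photo])
    cv2.imwrite(os.path.join(OUT_DIR, os.path.basename(path)), pair)
```

Once trained on these pairs, the model can fill any new outline the user draws with a cat-like texture in real time.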
Drawing Outcome
The obstacles to human creativity are our own experience, logic, and methods.
Can AI help us transcend our experience, logic, and methods, so that human creativity can be further released?
RESEARCH
Who can benefit from it?
Case Study
It is very important for young people to cultivate creativity in the era of AI. What children need is technology that matches their infinite imagination.

How to cultivate creativity?
1. Go out in nature and capture inspiration from life
2. Engage in art-related activities such as drawing and photography
Everyone Can Create @ Apple Inc.
Literature Review
Piaget’s theory of cognitive development

The Concrete Operational Stage (ages 7-11)
Children are able to incorporate inductive reasoning. Inductive reasoning involves drawing inferences from observations in order to make a generalization.
User Interview
Most parents believe their children need to learn AI-related content.

Drawing can extend a child's imagination and creativity.

Children aged 1-6 can usually think independently.
Children over 6 are more easily limited by others.
Persona
Limitation or Diversification?
Normally, AI scientists decide what kind of data to select,
so the output depends heavily on what they think a “cat” is.
What if we gave children the right to choose the training images
according to their own views of the world?

How might we design a tablet app to
extend creativity and maintain independent thinking for children (6-10)
by using machine learning technology?
PROTOTYPING & ITERATION
Make the technology invisible
Prototype 1.0
User Journey
Competitor Analysis
Virtual pet games can motivate behaviour change.
Virtual pet games follow a logic similar to the machine learning process.
Prototype 2.0 - Virtual Pet
User Testing
Users paid more attention to upgrading pets by feeding them than to drawing with them.
This deviated from the main objective: cultivating creativity.
Cats drawing @Whalen & Paper prototype testing
User Flow
Wireframe & IA
Stakeholder Analysis
Inspired By Biological System
Gulf - Expectation & Experience

Playful/humorous interactions reinforce anthropomorphic qualities, setting unrealistic user expectations of CA capability.

Humans and animals have a wide variety of modalities for expressing emotional and internal states during interaction.

These have been studied unequally, with a heavy focus on body movement, facial expression, and vocalics.

How might we develop a new expressive nonverbal channel
for CAs to express emotional and internal states during interaction?

AI has given machines situational awareness and autonomous cognition capabilities.

From passive interaction to proactive interaction.

But what about colour, smell, skin texture, and subtle gesture?
Machine Proactive Interaction
The Context
Conversational Agent (CA)

Spoken dialogue interfaces will become the future gateways to many key services, and might be the next natural form of HCI.

But voice is currently the only channel for expression.

EXPERIENCE PROTOTYPING
Machine 1 - Cryptic Coloration

Cryptic coloration is not only for camouflage; it also works as a tool for information exchange, expressing intentions and emotions. Machine 1 adapts to the surrounding environment by changing colour and only stands out to attract attention when necessary: calm technology.
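A minimal sketch of the adaptive colouring, assuming a camera samples the surroundings and an RGB “skin” is set to the average ambient colour; the camera index, the averaging approach, and the set_led_colour stub are hypothetical.

```python
# Sample the dominant ambient colour from a camera frame and use it as the
# "skin" colour, so the machine blends in until it needs attention.
import cv2

def ambient_colour(cam_index=0):
    """Return the mean BGR colour of one camera frame."""
    cap = cv2.VideoCapture(cam_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera read failed")
    return frame.reshape(-1, 3).mean(axis=0)  # average over all pixels

def set_led_colour(b, g, r):
    """Hypothetical driver for the RGB skin (e.g. over serial to an LED array)."""
    print(f"LED -> R:{r:.0f} G:{g:.0f} B:{b:.0f}")

needs_attention = False
if needs_attention:
    set_led_colour(0, 0, 255)          # stand out: bright red (BGR order)
else:
    set_led_colour(*ambient_colour())  # blend into the surroundings
```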

Machine 2 - Scent Making

Smell can strongly influence human perceptions and expectations of living things. Machine 2 diffuses different smells, designed from the machine's perspective, to indicate different emotional and internal states of the smart speaker.

What do the machine's emotions smell like? I selected four basic emotions and designed scents from the machine's perspective. The angry scent is smoky, like burning electronics: if someone destroyed the circuits inside the machine, the machine might well be angry about it. The sad scent has a damp quality: a wet environment is unsuitable for the machine to work in, so the machine would feel sad in that situation.

Machine 3 - Texture Waving

Some creatures display their internal states (fear, excitement, and other emotional states) through skin texture changes. Texture change is a widespread and easily readable behaviour, but it is rarely explored as a channel for human-computer interaction. Machine 3 expresses its internal state using skin texture changes.

Voice Control Prototype

Mapped to Russell's circumplex model of emotions

Texture operates on two channels,
providing both tactile and visual sensation:

Frequency - represents the Arousal dimension
Amplitude - represents the Valence dimension
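A minimal sketch of this two-channel mapping, assuming valence and arousal are normalised to [-1, 1] and the skin is driven by a sine wave; the frequency and amplitude ranges are illustrative assumptions.

```python
# Map an emotion's position on Russell's circumplex (valence, arousal)
# to the skin's waveform. Parameter ranges are illustrative assumptions.
import math

def texture_wave(valence, arousal, t):
    """Surface displacement at time t for an emotion at (valence, arousal).

    valence, arousal are in [-1, 1]:
      arousal -> frequency (calm = slow ripples, excited = fast ripples)
      valence -> amplitude (negative = shallow, positive = pronounced)
    """
    freq_hz = 0.5 + 2.0 * (arousal + 1) / 2      # 0.5 .. 2.5 Hz
    amplitude = 2.0 + 8.0 * (valence + 1) / 2    # 2 .. 10 mm
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

# e.g. "excited" (high arousal, positive valence) vs "depressed" (low, negative)
print(texture_wave(valence=0.8, arousal=0.9, t=0.1))
print(texture_wave(valence=-0.8, arousal=-0.9, t=0.1))
```

Because the two dimensions drive independent waveform parameters, any point on the circumplex produces a distinct, readable surface motion.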

Machine 4 - Hands Up

Gesture is a subtle behaviour that conveys information effectively. Different combinations of gesture and voice expression generate richer and more playful meanings in the response. Machine 4 raises its hands to inform actively without overburdening the user. Sometimes it initiates interactions for its own benefit rather than to provide services to humans.

Expectations of Intelligence Level

Confidence Interval for ML Model Accuracy

Proportions also affect user expectations: we expect more from objects whose proportions are close to the human body.

The nuances of hand-raising adjust the user's conceptual model of the CA and establish accurate user expectations.
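A minimal sketch of how hand height could track the model's confidence, assuming a servo-driven arm with a 0-90 degree range; the linear mapping and the range itself are illustrative assumptions.

```python
# Map the ML model's confidence in its answer to how high Machine 4
# raises its hand. Servo range and the mapping are illustrative assumptions.

SERVO_MIN_DEG = 0     # hand at rest
SERVO_MAX_DEG = 90    # hand fully raised

def hand_angle(confidence):
    """Raise the hand in proportion to model confidence (0..1).

    A hesitant half-raise signals uncertainty, nudging the user's
    expectation of the CA's intelligence toward the model's actual accuracy.
    """
    confidence = max(0.0, min(1.0, confidence))
    return SERVO_MIN_DEG + confidence * (SERVO_MAX_DEG - SERVO_MIN_DEG)

print(hand_angle(0.95))  # confident answer: hand raised high (~86 deg)
print(hand_angle(0.40))  # unsure: tentative half-raise (~36 deg)
```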

Exhibition & Validation