Framed as “dialogue systems often endowed with ‘humanlike’ behaviour”, conversational agents (CAs) are becoming ever more common human-computer interfaces. Yet in most instances, CA systems fail to bridge the gap between user expectations and system capability. Playful or humorous interactions, such as telling a joke, reinforce anthropomorphic qualities and set unrealistic user expectations of what a CA can do. At the same time, interaction with CAs remains passive: users must always say “Hey Siri” or “Ok Google” to start a conversation, and voice is the only channel of expression.
To make CAs more engaging and help users form more accurate expectations, Expressive machine explores new expressive nonverbal channels for CAs, embedding the agent’s personality in different modalities (skin texture and colour, smell, and gesture) to express emotional and internal states during interaction.
Machine 1 adapts to the surrounding environment by changing colour and stands out to attract attention only when necessary. Machine 2 diffuses different smells, designed from the machine’s perspective, to provide information. Machine 3 expresses its internal state through skin texture changes. Machine 4 raises its hands to inform proactively without overburdening the user. Combinations of these modalities with spoken sentences generate even richer and more playful communicative meanings. Expressive machine thus serves as an example of significantly expanding the expressive spectrum of machines in social interaction, supporting users’ assessment of system intelligence.
Playful/humorous interactions reinforce anthropomorphic qualities, setting unrealistic user expectations of CA capability.
Humans and animals have a wide variety of modalities for expressing emotional and internal states in interaction. These modalities have been studied unequally, with a heavy focus on body movement, facial expression, and vocalics.
AI has given machines situational awareness and autonomous cognitive capabilities.
From passive interaction to proactive interaction.
Spoken dialogue interfaces will become the future gateways to many key services, and may be the next natural form of HCI. Yet voice remains the only channel for expression.
Cryptic coloration serves not only for camouflage but also as a tool for information exchange, expressing intentions and emotions. Machine 1 adapts to the surrounding environment by changing colour and stands out only when it needs to attract attention - calm technology.
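This blend-in-by-default, stand-out-on-demand behaviour can be sketched as a simple control loop. The sensor and LED driver below are placeholders, and the attention colour is an illustrative assumption, not a detail from the project.

```python
# Hypothetical sketch of Machine 1's colour-adaptation loop.
# read_ambient_rgb() and set_skin_rgb() stand in for real sensor/LED drivers.

ATTENTION_RGB = (255, 120, 0)  # contrasting highlight colour (assumed)

def read_ambient_rgb():
    """Placeholder for a colour-sensor reading of the surroundings."""
    return (180, 180, 175)

def set_skin_rgb(rgb):
    """Placeholder for driving the machine's colour-changing skin."""
    print("skin ->", rgb)

def update_skin(needs_attention: bool):
    """Blend into the environment by default; stand out only when needed."""
    target = ATTENTION_RGB if needs_attention else read_ambient_rgb()
    set_skin_rgb(target)
    return target
```

In the default branch the skin simply tracks the ambient colour, so the machine recedes into the background until it has something to say.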
Smell can strongly influence human perceptions and expectations of living things. Machine 2 diffuses different smells, designed from the machine’s perspective, to indicate the smart speaker’s emotional and internal states.
What do the machine's emotions smell like? I selected four basic emotions and tried to design scents from the machine's perspective. The angry scent is smoky, like burning electronics: if someone destroyed the circuitry inside the machine, the machine might well be angry about it. The sad scent has a wet quality: a damp environment is unsuitable for the machine to work in, so the machine would feel sad in that situation.
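The emotion-to-scent design above is essentially a lookup table. A minimal sketch, covering only the two scents actually described (the diffuser channel numbers are assumptions for illustration):

```python
# Hypothetical lookup for Machine 2's scent diffusion.
# Only the angry and sad scents are specified in the design;
# channel numbers are illustrative assumptions.

SCENT_MAP = {
    "angry": {"description": "smoky, electronic-burning scent", "diffuser_channel": 0},
    "sad":   {"description": "wet, damp scent",                 "diffuser_channel": 1},
}

def diffuse(emotion):
    """Return the diffuser channel for an emotion, or None if no scent is defined."""
    entry = SCENT_MAP.get(emotion)
    return None if entry is None else entry["diffuser_channel"]
```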
Some creatures display their internal states (fear, excitement, and other emotions) through changes in skin texture. Texture change is a widespread and easily readable behaviour, yet it is rarely explored as a channel for human-computer interaction. Machine 3 expresses its internal state using skin texture changes.
Voice Control Prototype
Mapped onto Russell’s circumplex model of emotions, texture operates on two channels, providing both tactile and visual sensation: frequency represents the arousal dimension, and amplitude represents the valence dimension.
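The circumplex mapping can be sketched as a small function, assuming valence and arousal are normalised to [-1, 1] as in Russell's model. The actuator output ranges (Hz and mm) are illustrative assumptions, not measured values from the prototype.

```python
# Sketch of the texture mapping: frequency encodes arousal,
# amplitude encodes valence. Output ranges are assumed for illustration.

def texture_params(valence: float, arousal: float):
    """Map circumplex coordinates in [-1, 1] to skin-texture actuation."""
    # Rescale each dimension from [-1, 1] to [0, 1] before applying
    # the (assumed) actuator ranges.
    freq_hz = 0.5 + 4.5 * (arousal + 1) / 2   # 0.5-5 Hz ripple rate (assumed)
    amp_mm  = 1.0 + 9.0 * (valence + 1) / 2   # 1-10 mm bump height (assumed)
    return freq_hz, amp_mm
```

Under this mapping a high-arousal emotion such as excitement produces fast ripples, while a positive-valence emotion produces large, pronounced bumps.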
Gesture is a subtle yet effective way to convey information. Combining gesture with voice expressions generates richer, more playful meanings in the response. Machine 4 raises its hands to inform proactively without overburdening the user. Sometimes it initiates interactions for its own benefit rather than to provide services to humans.
Expectations of Intelligence Level
Confidence Interval for ML Model Accuracy
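The heading above presumably refers to quantifying uncertainty in a model's measured accuracy. One standard way to compute such an interval (an assumption here; the source does not specify the method) is the normal-approximation binomial interval over a held-out test set:

```python
import math

def accuracy_confidence_interval(correct: int, total: int, z: float = 1.96):
    """Normal-approximation confidence interval for accuracy measured on
    `total` test examples (z = 1.96 gives roughly a 95% interval)."""
    p = correct / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    # Clamp to [0, 1] since accuracy cannot leave that range.
    return max(0.0, p - half_width), min(1.0, p + half_width)
```

For example, 90 correct out of 100 gives an interval of roughly (0.84, 0.96), a reminder that a single accuracy number on a small test set carries substantial uncertainty.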
Proportions also affect user expectations: we expect more from objects whose proportions are close to those of the human body.
The nuances of hand-raising adjust the user's conceptual model of the CA and establish accurate user expectations.