As artificial intelligence and machine learning algorithms become increasingly embedded in everyday objects, they will change the way we interact with those objects.
While products so far have been designed and manufactured to serve a predefined use, we can now imagine a present and near-future world where smart objects are not yet finished entities, but ones that can evolve their behaviours by observing, reading, and interpreting our habits.
My project is a fictional/experimental service and speculation, envisioning an unfinished smart object that actively upgrades itself to domesticate its user, securing the user's continued consumption and its own survival in the market. The project investigates the space where AI and evolving physical forms meet, challenging our ideas of how future products could domesticate the domesticator. It portrays the hopes and fears surrounding AI-embedded products, illustrating richer relationship possibilities for a new tomorrow.
Playful/humorous interactions had the effect of reinforcing anthropomorphic qualities, setting unrealistic user expectations of conversational agent (CA) capability.
Humans and animals have a wide variety of modalities for expressing emotional and internal states in interaction.
These have been studied unevenly, with a heavy focus on body movement, facial expression, and vocalics.
AI has given machines situational awareness and autonomous cognition capabilities.
From passive interaction to proactive interaction.
Spoken dialogue interfaces will become the future gateways to many key services, and might be the next natural form of HCI.
But voice currently works as the only channel for expression.
Cryptic coloration is not only camouflage; it also works as a tool for information exchange, expressing intentions and emotions. Machine 1 adapts to the surrounding environment by changing colour, and only stands out to attract attention when necessary: calm technology.
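This calm-technology behaviour can be sketched as a simple blend between an ambient-light reading and an alert colour. A minimal sketch; the function name, the RGB representation, and the default alert colour are all hypothetical, not part of the actual prototype:

```python
def skin_color(ambient_rgb, attention=0.0, alert_rgb=(255, 80, 0)):
    """Blend the machine's skin colour between camouflage and alert.

    ambient_rgb: (r, g, b) sampled from the surroundings.
    attention:   0.0 = fully camouflaged (match surroundings),
                 1.0 = fully alerting (show the alert colour).
    """
    attention = max(0.0, min(1.0, attention))
    return tuple(
        round((1 - attention) * a + attention * b)
        for a, b in zip(ambient_rgb, alert_rgb)
    )
```

At attention 0 the skin simply mirrors its surroundings and recedes into the background; intermediate values let the object surface gradually rather than abruptly.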
Smell can strongly influence human perceptions and expectations of living things. Machine 2 diffuses different smells, designed from a machine's perspective, to indicate different emotional and internal states of the smart speaker.
What do the machine's emotions smell like? I selected four basic emotions and tried to make scents from the machine's perspective. The angry scent has a smoky, electronic-burning note: if someone destroys the circuit inside the machine, the machine may well be angry about it. The sad scent has a wet quality: a damp environment is unsuitable for the machine to work in, so the machine would feel sad in that situation.
Some creatures display their internal states (fear, excitement, and other emotions) through changes in skin texture. Texture change is a widespread and easily readable behaviour, but it has rarely been explored as a mode of human-computer interaction. Machine 3 expresses its internal state through skin texture changes.
Voice Control Prototype
Mapped onto Russell’s circumplex model of emotions
Texture operates on two channels, providing both tactile and visual sensation:
Frequency represents the Arousal dimension
Amplitude represents the Valence dimension
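The mapping above can be sketched as a small function from a point on Russell's circumplex (valence and arousal, each in [-1, 1]) to texture-actuation parameters. The function name and the frequency/amplitude ranges are illustrative assumptions, not the prototype's actual values:

```python
def emotion_to_texture(valence, arousal,
                       base_freq_hz=0.5, max_freq_hz=4.0,
                       max_amp_mm=5.0):
    """Map an emotion on Russell's circumplex to texture parameters.

    Frequency encodes Arousal: calm -> slow ripples, excited -> fast.
    Amplitude encodes Valence: magnitude and sign of the bump height
    (positive = pronounced outward bumps, negative = inward dents).
    """
    valence = max(-1.0, min(1.0, valence))
    arousal = max(-1.0, min(1.0, arousal))
    freq = base_freq_hz + (arousal + 1) / 2 * (max_freq_hz - base_freq_hz)
    amp = valence * max_amp_mm
    return freq, amp
```

For example, an excited-happy state (valence 1, arousal 1) would drive the skin at the maximum frequency and full positive amplitude, while a calm-neutral state produces slow, nearly flat ripples.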
Gesture is a subtle yet effective way to convey information. Different combinations of gesture and voice expression generate richer, more playful meanings in the response. Machine 4 raises its hands to actively inform the user without overburdening them. Sometimes it initiates interactions for its own benefit rather than to provide services to humans.
Expectations of Intelligence Level
Confidence Interval for ML Model Accuracy
Different proportions also affect user expectations: we expect more from objects whose proportions are close to those of the human body.
The nuances of raising hands adjust the user's conceptual model of the CA and establish accurate user expectations.