The science behind the comic
Chapter 9
This chapter shows the negative reaction of Xavier, Nadir’s date, towards Nadir’s social robot. Xavier clearly treats Sowana as a “living being”, attributing to Nadir the intention of mastering and controlling it.
Indeed, many studies provide critical evidence that, despite its clearly nonhuman origin, a social robot such as Sowana will be responded to as if it manifested personality. In these studies, individuals identified a robot’s personality and applied personality-based social rules in their evaluation of the social robot. More specifically, individuals regarded a robot with a personality complementary to their own (for example, an extroverted robot perceived by an introverted human) as more intelligent, more attractive, and more socially present than a robot with a personality similar to their own.
Interaction with embodied agents is much more complicated and much more similar to human–human interaction in everyday life.
Source: Lee KM, Peng W, Jin SA & Yan C (2006). Can robots manifest personality?: An empirical test of personality recognition, social responses, and social presence in human–robot interaction. Journal of Communication, 56(4), 754-772.
Chapter 8
This chapter explores various facets of unexpected behavior in collaborative robotics, a field in which research to develop solutions is generally lacking. One of the few articles on the subject mirrors the events narrated in this chapter. Indeed, researchers explored the benefits of an affective interaction (like the one between Anastasius, his friends and Nobody) as opposed to a more efficient, less error-prone but non-communicative one (like the one between T.O and Pavel). The experiment took the form of an omelet-making task, with a wide range of participants interacting directly with two versions of a humanoid robot assistant: a more expressive one and a more efficient one.
Researchers found that the expressive robot was preferred over the more efficient one, despite a trade-off in the time taken to do the task. Participants’ scores also indicated that they felt rushed with the more efficient robot, suggesting that they placed less value on task performance and more on transparency, control and feedback, despite at least nine instances of speech recognition failure. Satisfaction was significantly higher in the communicative condition, and participants were appreciative of behavior they interpreted as responsive.
Humanlike attributes, such as regret, were shown to be powerful tools in negating dissatisfaction but also to have a negative effect if the behavior is deemed to cross a line.
In conclusion, when designing more reliable, acceptable and trustworthy robot companions, judiciously incorporating human-like attributes can significantly mitigate dissatisfaction arising from unexpected or erroneous behavior and improve collaboration, as when Anastasius and his friends spontaneously help Nobody, trying to make its task easier and to prevent mistakes.
Reference: Hamacher A, Bianchi-Berthouze N, Pipe AG & Eder K (2016). Believing in BERT: Using expressive communication to enhance trust and counteract operational error in physical Human-Robot Interaction. 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), August 26-31, 2016, Columbia University, New York, USA.
Chapter 7
What happens in our brain when we really “believe” that robots are alive and have their own intentions (like the doll ratting on Lou)? One result of attributing a mind to interaction partners is an increase in the social relevance we ascribe to others’ actions and in the amount of attention we dedicate to them. But what happens in our brains when we perceive robots as humans? In a neuroimaging study, participants saw images of an anthropomorphic robot that moved its eyes left- or rightwards to signal the appearance of an upcoming stimulus in the same or the opposite location. Independently, participants’ beliefs about the intentionality underlying the observed eye movements were manipulated by describing the eye movements as either under human control or preprogrammed. The study observed a validity effect both behaviorally and neurally (increased response times and activation in the invalid vs. valid condition). This effect was more pronounced in the condition in which the robot’s behavior was believed to be controlled by a human, as opposed to preprogrammed. This interaction effect between cue validity and belief was, however, found only at the neural level, manifested as a significant increase of activation in the bilateral anterior temporoparietal junction.
Reference: Özdem C, Wiese E, Wykowska A, Müller H, Brass M & Van Overwalle F (2016). Believing androids – fMRI activation in the right temporo-parietal junction is modulated by ascribing intentions to non-human agents. Social Neuroscience.
Chapter 6
Social robots are starting to become more common in our society and can benefit us by providing companionship, increasing communication, and reducing costs, especially in healthcare settings. Engineers are attempting to make robots look and behave like humans and animals so that we feel more comfortable with them. However, as in the robot store and in the Lou episode with the stroller, robots can also make us feel uncomfortable, especially when their behavior violates expectations. Interacting with robots can make us question who we are, and we can project our desires onto them. Theories suggest we are wired, through a combination of nature and nurture, to perceive robots through human filters. We mindlessly interact with robots and other technologies as if they were human, and we perceive humanlike characteristics in them, including thoughts and emotions: this can lead to situations like the one with Nadir, who is deeply hurt by being misgendered by a robot and by an algorithm, and at the same time looks for comfort in chatting with an artificial assistant.
Chapter 5
Collaborative robotics, where robots help humans to carry out specific tasks - as in this chapter, assembling electronic equipment or building a hut - is a research field in its own right. Yet research aimed at developing solutions in this field is still lacking.
An experiment has been carried out to explore the benefits of an affective interaction, as opposed to a more efficient, less error prone but non-communicative one, as in the two storylines of this chapter. The experiment took the form of an omelet-making task, with a wide range of participants interacting directly with a humanoid robot assistant.
The results, which have significant implications for design, suggest that efficiency is not the most important aspect of performance for users: a personable, expressive robot was found to be preferable to a more efficient one, despite a considerable trade-off in the time taken to perform the task.
Reference: Hamacher A, Bianchi-Berthouze N, Pipe AG & Eder K (2016). Believing in BERT: Using expressive communication to enhance trust and counteract operational error in physical Human-Robot Interaction. In 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 493-500). IEEE.
Chapter 4
Here’s a closer look at Nadir’s relationship with Sowana. We feel closer to actual robots than to virtual robots (like I-CARE, the subject of Nadir’s article) or computers, suggesting that physical embodiment is important in our relationships with artificial technologies. What does this mean for the future? David Levy (2007) argues in his book Love and Sex with Robots that people (men in particular) often have few close friends yet crave affection. He argues that people may prefer relationships with robots that are programmed to always be social, smart, and loyal over relationships with unpredictable humans who do not always behave as desired and get upset when we behave badly. Ethicists even argue that the creation of such beings may lead to the breakdown of society because people will prefer to interact with robots rather than with each other (Whitby 2008).
The research in this field is still in its exploratory phase. There is enormous potential for psychologists to contribute to this strangely compelling field. This can be a win-win situation, with the study of human behavior informing the construction of robots and tests with robots informing us about human cognition, emotion, and behavior. What will be the consequences of the human quest to make copies of ourselves? Psychologists have a role to play in helping shape our future and that of our robot companions.
To learn more:
Levy D. 2007. Love and Sex with Robots: The Evolution of Human-Robot Relationships. New York: Harper Collins
Chapter 3
In this episode we find an important reference to scientific research: the experimental study of apparent behavior by Fritz Heider & Marianne Simmel (1944). When Lou plays with the fish, we see that two basic shapes trigger in her a cognitive response of behavior attribution.
You can try the experiment on yourself by watching the Heider & Simmel video:
You’ll notice that these basic shapes elicit an automatic response in you: you will interpret their movements and their shapes as clues to personality, and you will make up a story to justify their “apparent behavior”. This automatic attribution is at the core of our relationship with robots. Indeed, if we attribute emotions and intentionality to simple moving shapes, imagine what automatic responses can be triggered by an embodied, interactive robot!
Chapter 2
Do looks matter?
One of the main subjects researched in the framework of Emily Cross’s Social Robots project is how a robot’s looks influence the way we perceive it.
For example, in this chapter the artist Emmanuel Espinasse imagines the I-CARE robot, designed to welcome and check on the visitors of a robotized farm. I-CARE is “physically” overwhelming, as it is huge and looms over people. Moreover, its facial features are very basic and, above all, it is a disembodied agent, which means that we cannot identify it as “someone like us”, as it doesn’t have a body.
At the other end of the spectrum, Anastasius is precisely trying to give his helper Nobody a “touch of humanity” by changing its facial features, adding “hand-made” details and… making it look like himself.
Chapter 1
In the future, we will be surrounded by more and more artificial agents, such as robots and avatars.
The more robots become integrated into our social milieu, the more we will need to understand how we perceive their displays of emotion, and what emotions we will feel toward them.
How might an older person interact with a life-assistant robot? Will the robot be accepted or rejected? Will the person develop an attachment to it? And what kind of feelings will a child develop toward her robot toy? Is it possible that traditionally human activities such as teaching will be carried out by AIs? The ERC project Social Robots by Emily Cross (University of Glasgow) inspired "You, Robot", a webcomic that imagines stories and possibilities based on actual research.