Could seeing a virtual replica of yourself exercising or eating healthy food encourage you to be healthier in real life?
That’s the question researchers are now asking in Stanford’s Virtual Human Interaction Lab (VHIL), where experimentation with avatar technology is hardly novel — the lab has produced studies and publications on the topic since 2005.
However, the release of James Cameron’s visually stunning film, “Avatar,” has created a new wave of public interest in the field and drawn attention to a current VHIL study on the use of avatars to promote healthy living.
The goal of the study was to find out if seeing a virtual replica of oneself engaging in healthy activity would encourage a person to perform that activity in real life. According to the lab’s Web site, “models can be valuable stimuli for encouraging the imitation of particular behaviors. Thus, we are investigating how using self-models . . . can influence imitation, particularly in the context of health and consumer behaviors.”
Jesse Fox, the author of the study, arrived at VHIL in 2006 to begin work on her Ph.D. Interested in using avatar technology as a pro-social tool for improving public health, Fox began running a series of three studies almost immediately.
First, each participant had to be digitally photographed at the lab. These photographs were wrapped around three-dimensional virtual figures to create avatars with high levels of physical similarity to the participants.
The animation technology used by VHIL is similar to the kind used in movies such as “Shrek” and “Up,” in which predetermined actions based on live human models are rendered into very realistic and accurate animations, according to Fox.
Participants then returned to the lab, where they put on a head-mounted display helmet and became totally immersed in an exact virtual replication of the room they were in. Each participant spent approximately 45 minutes watching virtual avatars — sometimes accurately modeled after the participant and sometimes appearing as random others — perform activities of varying healthfulness.
Fox was able to manipulate the virtual selves’ activities and appearances almost instantly, allowing her to show participants physical changes that would otherwise take weeks or months to occur and to gauge their responses in a single session.
“We can make your avatar gain 30 pounds in a minute or lose 15 pounds in a minute, which you can’t do in the real world,” Fox said.
The discrepancy between people’s attitudes toward health and their actual behaviors is one of the major problems faced by those working to improve public health, Fox said. In other words, a person saying that he should or will exercise is very different from him actually exercising.
After a series of follow-up calls asking participants about their exercise and eating in the 24 hours following the first of the three studies, Fox was shocked to find that the manipulation had worked. She and her advisor, Jeremy Bailenson, a communication professor, had both been somewhat skeptical going into the studies, as no previous work had attempted to get subjects to model their virtual selves’ behavior.
Fox also expressed concern about the potential influence of violent or offensive games on players’ behavior and attitudes.
Spurred by studies on aggression in video games, she conducted a study, published in June 2009, on participants’ attitudes toward stereotypical female avatars, from overtly sexual “Lara Croft” types, to “Grand Theft Auto” prostitutes, to more demure characters.
What Fox found disturbed her: after virtual encounters with stereotypical females, participants of both genders exhibited higher levels of sexism than those who encountered non-stereotypical avatars, including greater acceptance of rape myths and victim blaming.
Fox added that although the entertainment industry tends to focus on the negative aspects of avatar technology, it is a field of broad engagement that will continue to gain momentum.
“We need to keep in mind that we’re the ones who program the machines,” Fox said, “and we’re responsible as creators and users and know where to draw the line.”