HabitLab, a browser extension for improving productivity, offers dozens of customizable intervention techniques called “nudges” that help users track and “regain control” of their online browsing habits.
When people browse with HabitLab, their browsing activity triggers the nudges, which then prompt them to adjust their browsing behavior.
The project, developed by Geza Kovacs and his team of researchers and advised by computer science assistant professor Michael Bernstein, started two years ago and has since entered the Google Play Store for Android devices. With over 8,400 active users, the browser extension earned a feature in WIRED Magazine for its “highly individualized” approach and attention to encouraging “mindful” internet activity.
“We wanted to ask if people can engage in [what is] essentially a micro-experiment on themselves to see what kinds of ‘nudges’ work well,” Bernstein said. “Rather than trying to have this one-size-fits-all approach, we would rather go for a solution that helps you learn what works for you.”
Kovacs and Bernstein are both members of Stanford’s Human-Computer Interaction (HCI) Group, a research group focusing on the ways in which computer technology, computational algorithms and design thinking interact in a multidisciplinary field.
Kovacs has a long-standing interest in online behavior change and its impact on education and time management. While developing the various interventions for HabitLab, Kovacs consulted extensive behavioral science literature and often turned to one of his advisors, BJ Fogg, founder and director of the Stanford Behavioral Design Lab.
According to Fogg’s behavioral science research, human behavior is dependent on motivation, attention and prompts; behavioral change can then be achieved by manipulating a combination of these factors. HabitLab grew out of this idea, designed to present users a “decision point” in a sequence of otherwise mindless browsing behavior. These points of intervention, or “nudges,” are timely, personalized and contextualized to provide help for the users while they are browsing online.
“With, for example, an intervention screen asking, ‘Are you sure you want to go to Facebook?’ you make it a little harder for people to mindlessly scroll, which adjusts their motivation and behavior,” Fogg said.
Kovacs also drew inspiration for some of the nudges from a class he took at the d.school with best-selling author and behavioral design expert Nir Eyal M.B.A. ’08.
Kovacs said the idea for the system of nudges stemmed from learning how to use the Hook Model to build habits, which involves four factors: trigger, action, investment and reward. To implement these factors of investment and reward, Kovacs took an experimental approach.
“We previously had nudges with explicit rewards,” Kovacs said. “It was an experiment with a .gif feature — when you close a tab, we show you celebratory cats. Following the tab-closing action, you are rewarded for doing what HabitLab is asking you to do.”
However, some users found the cat .gif feature rather bothersome; HabitLab took the feedback into account, and the feature is no longer enabled by default but remains available as an option.
Another potential avenue Kovacs explored in the design of HabitLab was gamification. Eventually, though, he decided against the gamified design approach out of concern that it may cause people to lose track of their true goals online.
“I originally planned an elaborate ‘gamified’ system for users to earn points based on time saved,” Kovacs said. “But we don’t want people to just collect ‘Habit hearts’ and forget that this is all about becoming a more productive person online.”
To Kovacs, the effectiveness of HabitLab for an individual user is difficult to define, as it depends on the user’s personal goals and objectives. From a research perspective, though, he measures effectiveness by comparing a baseline against the average browsing time with HabitLab in use.
“Sometimes when users visit the site, the program does not push nudges by design, so we have a control condition to measure people’s browsing habits in the absence of nudges,” Kovacs said. “Based on this metric, we then compute the expected amount of time saved and expected attrition rate.”
Kovacs defines the expected attrition rate as the probability of a user uninstalling each time they visit a site and receive a nudge. Compared to users who uninstall the program early on — those who most likely installed it by mistake — Kovacs is more interested in users who uninstall HabitLab later, as that reflects the true user experience. By collecting feedback from users during the uninstallation process, Kovacs learned the delicate balance between a nudge’s effectiveness and its attrition.
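The two metrics Kovacs describes can be sketched in a few lines. This is a minimal illustration of the idea, not HabitLab’s actual code; the function names, the per-visit averaging, and the sample numbers are all assumptions.

```python
# Illustrative sketch of the two metrics described above.
# All names and numbers are assumptions, not HabitLab's implementation.

def expected_time_saved(control_minutes, nudged_minutes):
    """Average per-visit browsing time in the control condition
    (nudges withheld) minus the average with nudges enabled."""
    baseline = sum(control_minutes) / len(control_minutes)
    treated = sum(nudged_minutes) / len(nudged_minutes)
    return baseline - treated

def expected_attrition_rate(uninstalls_after_nudge, nudged_visits):
    """Probability that a user uninstalls on any given nudged visit,
    estimated as uninstalls per nudged site visit."""
    return uninstalls_after_nudge / nudged_visits

# Hypothetical per-visit minutes for one site, with and without nudges.
saved = expected_time_saved([12.0, 15.0, 9.0], [8.0, 10.0, 6.0])
rate = expected_attrition_rate(3, 600)
print(saved, rate)  # → 4.0 0.005
```

The trade-off Kovacs names falls directly out of these two numbers: a more aggressive nudge raises the time saved per visit but also raises the per-visit uninstall probability.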
“Intrusive interventions induce attrition,” Kovacs said. “Blocking a page would be highly effective as an intervention, but this also leads to high attrition.”
To combat high attrition, Kovacs often needs to adjust nudges or turn them off. Meanwhile, less intrusive nudges, like a text box in which users type their plan for a site visit, have received better feedback. Kovacs said HabitLab displays this goal on the screen as users browse the site, which people found to be “very inspirational.”
Other user feedback gathered during uninstallation came as a surprise to Kovacs. Some users were using HabitLab simply as a time-tracker for their browsing, despite the already somewhat saturated market for browsing time-trackers. Users also had different motivations, goals and expectations for HabitLab as a tool, which produced mixed opinions on its intervention style: While some users found HabitLab an intrusive “nagging nanny,” others thought the nudges too lenient and insufficient. Kovacs emphasized that the nudges are customizable, and individual nudges can be turned on or off.
To Fogg, the mixture of user opinion on HabitLab is only natural.
“One of my maxims to design a product is for the product to succeed in helping people do what they already want to do,” said Fogg. “What the product won’t do well is to make people do what they don’t want to do.”
Kovacs’s favorite kind of user feedback is from those who have improved their browsing habits through using HabitLab, and therefore no longer need the program.
With his Ph.D. finishing this year, Kovacs plans to have current undergraduates and coterm students take over HabitLab.
“An important aspect of HabitLab, in particular, is that it requires a lot of maintenance, because the sites are constantly updating,” Kovacs said.
For example, after the recent YouTube update, all of HabitLab’s YouTube nudges had to be re-written.
“We are still trying to actively grow [our] user base, since more users can provide us more data and more significant results,” Kovacs said, “even though we have never done any advertising. The program just grew organically.”
Another of Kovacs’s research interests is the “conservation of procrastination”: whether the time people save from HabitLab’s in-browser interventions is simply redirected to other forms of unproductive time use. From the data HabitLab has collected so far, there is no clear sign of such “conservation.”
Drew Gregory ’21, a member of the HabitLab research team last summer, noted that from the data HabitLab has collected so far, the team has not found “any effect of reduction or conservation across sites or on other apps [on the Android platform].”
Bernstein has personally found his own browsing to be more purposeful since installing HabitLab.
“I am more intentional about whether I want to be relaxing,” he said, adding that “this project serves as a macro-scale lens onto how behavioral change can be achieved.”
“Browsing data can be really indicative of what people spend their time on, what they value, and how behavior works on the grand scale,” Gregory agreed.
Gregory is also involved in a side project that models the amount of time users spend on certain sites given certain nudges, which helps develop an algorithm to automatically deploy the more effective nudges more frequently for each individual user.
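Automatically favoring the nudges that work best for a given user resembles a multi-armed bandit problem. The epsilon-greedy sketch below illustrates that general idea under stated assumptions; the class, the reward signal (minutes saved per visit), and the nudge names are all hypothetical, not the team’s algorithm.

```python
import random

# Illustrative epsilon-greedy nudge selector: nudges with a higher
# observed time-saved per visit get deployed more often, while a
# small fraction of visits still explore other nudges.
# This is an assumed approach, not HabitLab's actual algorithm.

class NudgeSelector:
    def __init__(self, nudges, epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {n: {"deploys": 0, "total_saved": 0.0} for n in nudges}

    def choose(self):
        # Explore a random nudge with probability epsilon;
        # otherwise exploit the best average time saved so far.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=self._mean)

    def record(self, nudge, minutes_saved):
        s = self.stats[nudge]
        s["deploys"] += 1
        s["total_saved"] += minutes_saved

    def _mean(self, nudge):
        s = self.stats[nudge]
        # Never-deployed nudges rank highest so each gets tried once.
        return s["total_saved"] / s["deploys"] if s["deploys"] else float("inf")

# Hypothetical usage with made-up nudge names.
selector = NudgeSelector(["close-tab gif", "goal prompt", "interstitial"])
```

In practice such a selector would have to weigh attrition as well as time saved, since the data above show that the most effective nudges are also the ones most likely to drive users away.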
While HabitLab users include those who already have productive habits and others who are unable to commit to changing their browsing behavior, Fogg is most interested in the users in the middle of the spectrum who have concerns about their own browsing habits and can be influenced by the interventions.
Fogg looks forward to seeing whether the behavioral changes induced by HabitLab are long-term or limited.
“We can periodically repeat [the] interventions and test out people’s reactions after minimum and long-term commitment to the program,” he said.
Contact Lyndsey Kong at lck1 ‘at’ stanford.edu.