Allison Berke is the executive director of the Stanford Cyber Initiative, where she manages the program’s research, education and outreach work. The Stanford Cyber Initiative conducts research on the secure integration of cyber technologies into society. The Daily sat down with Berke to discuss the initiative’s interdisciplinary research, policy relevance and ethical implications.
The Stanford Daily (TSD): How did you get started with the Stanford Cyber Initiative?
Allison Berke (AB): I started working there in January 2015, pretty soon after we were established with $15 million in gift funding from the Hewlett Foundation. I was hired to run the program shortly after that gift was awarded to Stanford.
TSD: What is the goal of the Stanford Cyber Initiative?
AB: Our motto is that we want to produce research and frame debates on the future of cybersecurity and society. We want to expand the field of cybersecurity to include research that is policy-relevant and that brings in multidisciplinary viewpoints from the social sciences, the humanities, law, medicine and other disciplines outside of traditional computer science.
TSD: Why is a multidisciplinary perspective important?
AB: Bringing different viewpoints into the discussion of cybersecurity helps us figure out what different sectors’ needs are in terms of their security behaviors, what types of data they need to secure and what difficulties they run into or what adversaries they come up against. It also helps us produce more creative solutions that address those problems and translate well to outside readers or policymakers who are trying to apply those solutions. That way, we’re not just talking directly to other computer scientists or engineers; instead, we’re communicating with everyone who’s going to be using these tools, everyone who has data to protect.
TSD: How does the Stanford Cyber Initiative differ from existing research projects on cyber technologies?
AB: When we were created, we were intended to be a hub that would bring together some of the other centers of work on cybersecurity – the CS Department itself, the Computer Security Lab or the Center for Internet and Society at the Law School – and to fund research that would bridge those different hubs. One way we differ is that we’re not bound by one department, and the vast majority of the research we fund is multidisciplinary. Another way we differ is that we intend to produce research products that would be read by academics – journal articles and so on – as well as products that would be read by the public, or by policymakers in particular. I encourage our researchers to write white papers or give testimony at committee hearings on topics of relevance like election hacking or fake news.
TSD: Why is it important to think about policy framework when conducting research?
AB: I would say it’s important for a lot of different types of research to think about policy relevance in terms of how these insights are going to be applied and how these technologies are going to be used. But it’s particularly important for cybersecurity, because it takes a pretty technical topic, one that can have some arcane details, and implements it in technologies … We interact with data that is secured in various ways, and to secure our own data, we have to work with these tools and implement them. The government also has to do that, and the adversaries that we’re facing are in some cases the same as those faced by companies and by the government.
TSD: How does the Stanford Cyber Initiative think about ethics in particular?
AB: I have co-taught CS181: Computers, Ethics and Public Policy with Keith Weinstein. But on the whole, the way we think about ethics is that producing research with a broader audience than just academics brings in more diverse voices and viewpoints, which in and of itself makes the solutions more relevant, because they’re drawing on information, experiences and data from multiple different disciplines. Those solutions also have a better chance of being translated into practice by virtue of having multiple audiences and of trying to be heard outside of academia. So, for technologies like computer security, where data security can have real impacts on people’s finances and on their safety, we feel an ethical imperative to try and translate that work: we shouldn’t discover something important about security within academia and then fail to share that information with people in business or people in politics who can bring it to a wider audience.
TSD: Do you think Stanford students are aware of ethical implications when conducting research?
AB: I think for the most part, they are. We definitely see that students in CS181 – from the time when they come into the class, but also over the course of the class – get a better appreciation and are thinking about the ethical implications of … what jobs they’re taking, what companies they’re going to work for, what projects they might end up contributing to at those companies … One of the nice things about doing research is that you’re not beholden to a particular outcome: you’re exploring a question, you don’t necessarily need a financial motivation for exploring it, and you don’t need the result to come out either way. I think that frees up researchers to consider more of the implications of their work.
We have a couple of projects that are looking at how platforms can evaluate communication, such as government-directed communication on platforms in China and Russia, where governments will contract people to post things on the platform that … are from a government viewpoint. We are similarly looking at how platforms like Twitter and Facebook are controlling propaganda on their platforms and how they’re preventing abuse or bullying.
[Our researchers are also investigating the effects of a technology that is] kind of like a Marauder’s Map for the hospital, where everyone wears little tracking chips in their name tags and there are different tracking stations around the hospital. The primary reason they do this is to see where patients are, so they can track patients as they go from the waiting room to a procedure room to the doctor’s office. They can also track larger things like radiography equipment, so they know where it is in the hospital.
In terms of the effects of such technology, we are looking at whether compliance with wearing the trackers is higher among doctors, nurses or staff, and at how workers respond to knowing that they’re being tracked like that: both how they interact with their managers now that their managers have this data, and how they interact with patients knowing that some part of each patient’s journey is going to be algorithmically optimized in terms of where they go in the hospital and what kind of treatment they get. That, I think, is interesting from an ethical standpoint because it raises questions about autonomy and surveillance.
TSD: You have written about blockchain technology. Can you give a little overview about what that is? What are its ethical implications?
AB: I [became] interested in blockchain and cryptocurrency through our faculty co-director Dan Boneh, the cryptographer who teaches a course on blockchain and cryptocurrency. The initiative runs an academic conference called “Blockchain Protocol Analysis and Security Engineering,” which brings together cryptographers and researchers to talk about different aspects of the security of blockchain-driven systems. We also hosted “Scaling Bitcoin” last year, one of the larger cryptocurrency conferences focused just on Bitcoin.
My interest in those technologies lies in how they can provide an alternative financial system, an alternative structure to traditional banking, that allows people to accomplish simple trades more efficiently or to spend items of value – currency or tokens – across borders without having to pay multiple transfer fees or being subject to various governments’ controls on cross-border currency flow. There are some ethical issues that interest me too: To what extent are individuals allowed to create their own financial systems, or systems that run in parallel to traditional financial systems? How do these cryptocurrencies interact with our already established rules about ownership, taxation and bankruptcy? With a new technology, the security and mathematical challenges are very interesting, but the legal and financial challenges are completely separate from those, and they are interesting in terms of how new technologies fit into very old systems.
This interview has been lightly edited and condensed.
Contact Alex Tsai at aotsai ‘at’ stanford.edu.