Is identifying turtles with facial recognition models a risky use of computing power?
That’s the question Nik Marda ’21 M.S. ’21 asked the audience at a Stanford Law School event Tuesday, reflecting on whether the federal government should regulate artificial intelligence (AI) based on thresholds of technology and resource use.
Many think that facial recognition is inherently a risky use of technology, but we should also consider its practical application before making judgments, Marda said.
Marda, former chief of staff for the technology division of the White House Office of Science and Technology Policy (OSTP), advocated for national privacy legislation regulating the use of AI. The event, titled “Working on AI Policy at the White House,” was co-hosted by the Stanford Artificial Intelligence & Law Society (SAILS), Stanford Law and Technology Association (SLATA) and Stanford National Security & the Law Society (SNSLS).
Marda recently concluded two and a half years of work on technology policy in the White House. He joined the White House as an OSTP fellow after completing a bachelor’s degree in political science and a master’s degree in computer science. Later, he worked as a policy advisor before rising to chief of staff of OSTP’s technology division.
Sitting in front of the whiteboard of Room 280A, Marda explained milestones of federal AI policy in conversation with Emma Lurie, a second-year law student and co-president of SAILS. The audience, a mix of law students and undergraduates, listened closely with slices of pizza in hand.
Chief among Marda’s many hopes for future policy was a federal bill on data privacy.
“In practice, AI use focuses on collecting data and using data,” Marda said. “A privacy law would go a long way in laying out stronger protections for individuals.”
Marda acknowledged that many in California, home to business giants in the technology sector, may push back on nationwide privacy legislation. Regardless, he said a federal privacy bill is the “number one most important” congressional action in the AI realm.
Looking forward, Marda hopes that AI privacy regulations can take shape through the American Data Privacy Protection Act, which was introduced in the House of Representatives in June 2022. The bill would establish and enforce consumer data protections.
“It’s a really strong piece of legislation with a lot of support,” Marda told The Daily.
Other congressional actions that Marda envisions include more spending for “moonshot projects around making AI safer” and the Creating Resources for Every American To Experiment with Artificial Intelligence Act of 2023 (CREATE AI Act).
The CREATE AI Act, which was introduced in the U.S. Senate last July, would establish the National Artificial Intelligence Research Resource as a shared nationwide infrastructure. According to Marda, the bill enjoys bipartisan and bicameral support. The bill was spearheaded by Anna Eshoo, co-chair of the Congressional Artificial Intelligence Caucus and representative of California’s District 16.
Marda identified 2023 as a “year of government action” on AI research and rule-setting, including landmark actions like the White House’s Blueprint for an AI Bill of Rights and President Joe Biden’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
“The last year shows what can happen in AI policy when there is momentum, and clearly that momentum built on the last decade of work,” Marda told The Daily. “You can see that the research at Stanford and elsewhere is now playing out in real policy impact.”
Marda identified the Office of Management and Budget’s draft policy on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” as one of the Biden administration’s most powerful initiatives on AI. The 26-page document outlines rules for government agencies’ use of AI.
“It’s quite forward leaning. It really clarifies the types of systems that we think are inherently more risky and the types of risks we care about more broadly,” Marda said of the draft policy. “It spells out a real framework to mitigate those risks.”
Audience members said they found Marda’s proposal for a nationwide privacy regulation to be meaningful.
“It was really cool to hear his perspective on where AI regulations are going, and which is most important,” said Silva Stewart, a second-year law student and vice president of SAILS. “In law, we talk about how certain rights are preservative of other rights, and the way he presented privacy as a baseline right that is foundational for other advancements was very interesting.”
Lurie, who met Marda at a digital fellowship program in 2019, said she was a “long-time admirer of his work.”
Though Marda’s time at the White House has concluded, he plans to stay involved in the AI policy space.
“It felt like a really important moment to be thinking rigorously about how we want to see these technologies deployed in society,” Marda said, regarding his motivation for pursuing a career in technology policy. “This was a really important moment to get governance right.”