LIDS PhD student Sarah Cen remembers the lecture that sent her down the track to an upstream question.

At a talk on ethical AI, the speaker brought up a variation on the famous trolley problem, describing the following scenario: Say a self-driving car is traveling down a narrow alley with an elderly woman on one side, a small child on the other, and no way to thread between both without a fatality. Who should the car hit?

Then the speaker said: Let’s take a step back. Is this the question we should even be asking?

That’s when things clicked for Sarah. Instead of considering the point of impact, a self-driving car could avoid choosing between two bad outcomes by making a decision earlier on. The speaker pointed out that the car should have determined that the space was narrow when entering the alley and slowed to a speed that kept everyone safe.

Many of today’s AI safety approaches resemble the trolley problem, focusing on downstream regulation, such as liability, after someone is already left with no good choices. Recognizing this, Sarah wondered: What if we could design better upstream and downstream safeguards against such problems? This question has informed much of Sarah’s work.

“Engineering systems are not divorced from the social systems on which they intervene,” Sarah says. Ignoring this fact risks creating tools that fail to be useful when deployed or, more worryingly, that are harmful.

Sarah arrived at LIDS in 2018 via a slightly roundabout route. She first got a taste for research during her undergraduate degree at Princeton University, where she majored in mechanical engineering. For her master’s degree, she changed course, working on radar solutions for mobile robotics (primarily self-driving cars) at the University of Oxford. There, she developed an interest in AI algorithms, curious about when and why they misbehave. To gain a stronger theoretical grounding in information systems, she came to MIT and LIDS for her doctoral research, working with Professor Devavrat Shah in the Department of Electrical Engineering and Computer Science.

Together with Devavrat and other collaborators, Sarah has worked on a wide range of projects in her time at LIDS. One such project focuses on a method for translating human-readable social media regulations into concrete auditing procedures.

Suppose, for example, that regulators require that any social media content containing public health information not be vastly different for left- and right-leaning users. How should auditors check that a social media platform complies with this regulation?

Designing an auditing procedure is difficult in large part because there are so many stakeholders in social media. Auditors have to inspect the algorithm without accessing sensitive user data. They also have to work around trade secrets, which are legally protected and can prevent auditors from getting a close look at the very algorithm they are auditing. Other considerations come into play as well, including balancing the removal of misinformation with the protection of free speech.

To meet these challenges, Sarah and Devavrat developed an auditing procedure that needs no more than black-box access to the social media algorithm (which respects trade secrets), does not remove content (which avoids issues of censorship), and does not require access to users (which preserves users’ privacy).
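A minimal sketch in Python suggests what such a black-box audit might look like. Here the `recommender` endpoint, the counterfactual "left"/"right" profiles, and the total variation threshold `tol` are all invented for illustration; the actual procedure in their work is a formal hypothesis test, not this exact computation.

```python
import numpy as np

def audit(recommender, profiles, n_items, tol=0.1):
    """Sketch of a black-box audit: query the platform with
    counterfactual versions of each profile that differ only in
    political leaning, then compare aggregate item exposure."""
    exposure = {"left": np.zeros(n_items), "right": np.zeros(n_items)}
    for profile in profiles:
        for leaning in ("left", "right"):
            # Black-box access only: we see the returned feed of item
            # IDs, never the algorithm's internals or private user data.
            for item_id in recommender(profile, leaning):
                exposure[leaning][item_id] += 1
    # Normalize raw counts into exposure distributions over items.
    p = exposure["left"] / exposure["left"].sum()
    q = exposure["right"] / exposure["right"].sum()
    # Total variation distance stands in for the "vastly different"
    # test; the platform passes if the two feeds stay close.
    tv_distance = 0.5 * np.abs(p - q).sum()
    return tv_distance <= tol, tv_distance
```

Because the auditor only observes returned feeds, the same check works whatever model the platform runs internally.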

In their design process, the team also analyzed the properties of their auditing procedure, finding that it ensures a desirable property they call decision robustness. In good news for platforms, they showed that a platform can pass the audit without sacrificing profits. Interestingly, they also found that the audit naturally incentivizes content diversity, which is known to help reduce the spread of misinformation, counteract echo chambers, and more.

In another line of work, Sarah looks at whether people can achieve good long-term outcomes when they compete for resources without knowing upfront which resources are best for them.

Take, for example, the process of finding employment. Workers want to be matched with employers, and vice versa. In this matching market, both workers and employers have matching preferences: workers prefer some jobs over others, and employers prefer some qualifications over others. However, workers and employers need to learn these preferences. For instance, workers may learn their job preferences from internships.

But learning can be disrupted by competition. If workers with a particular background are repeatedly denied jobs in tech due to high competition, for instance, they may never get the knowledge they need to make an informed decision about whether they want to work in tech. Similarly, tech employers may never see and learn what these workers could do if they were hired.

Sarah and Devavrat’s work examines the interaction between learning and competition, studying whether it is possible for individuals on both sides of a matching market to walk away happy. They focused on four criteria: stability, low regret, fairness, and high social welfare. Interestingly, they found that it is indeed possible to achieve all four simultaneously, and they identified the conditions that make this outcome possible. A small simulation after this paragraph makes the setup concrete.
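In the sketch below, which is an illustration rather than their actual algorithm, workers learn their job utilities through optimistic (UCB-style) estimates while a clearinghouse computes a worker-proposing Gale-Shapley match on those estimates each round. Every name and parameter, from the market size to the reward noise, is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_jobs, n_rounds = 4, 4, 3000

# Workers do not know their true utilities and must learn them;
# employer preferences are taken as fixed and known for simplicity.
true_utility = rng.uniform(0.0, 1.0, (n_workers, n_jobs))
employer_pref = rng.uniform(0.0, 1.0, (n_jobs, n_workers))

means = np.zeros((n_workers, n_jobs))   # empirical mean rewards
counts = np.zeros((n_workers, n_jobs))  # times each pair has matched

def optimistic_scores(t):
    """UCB-style estimates: untried jobs look infinitely promising."""
    bonus = np.sqrt(2.0 * np.log(t + 2) / np.maximum(counts, 1.0))
    return np.where(counts > 0, means + bonus, np.inf)

def stable_match(worker_scores):
    """Worker-proposing Gale-Shapley on the workers' current scores
    and the employers' true preferences."""
    order = np.argsort(-worker_scores, axis=1)   # each worker's proposal order
    next_choice = np.zeros(n_workers, dtype=int)
    holder = -np.ones(n_jobs, dtype=int)         # job -> tentatively held worker
    free = list(range(n_workers))
    while free:
        w = free.pop()
        if next_choice[w] >= n_jobs:
            continue  # worker has been rejected everywhere
        j = order[w, next_choice[w]]
        next_choice[w] += 1
        if holder[j] == -1:
            holder[j] = w
        elif employer_pref[j, w] > employer_pref[j, holder[j]]:
            free.append(holder[j])
            holder[j] = w
        else:
            free.append(w)
    return holder

for t in range(n_rounds):
    holder = stable_match(optimistic_scores(t))
    for j, w in enumerate(holder):
        if w == -1:
            continue
        # Matched workers observe a noisy sample of their true utility.
        reward = true_utility[w, j] + rng.normal(0.0, 0.1)
        counts[w, j] += 1
        means[w, j] += (reward - means[w, j]) / counts[w, j]

print("stable match on true utilities:   ", stable_match(true_utility))
print("stable match on learned estimates:", stable_match(means))
```

The tension in the research shows up here directly: a worker only gets samples of jobs that employers let them hold, so competition shapes what each side ever gets to learn.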

For the next few years, Sarah plans to work on a new project, studying how to quantify the effect of an action X on an outcome Y when it is expensive, or even impossible, to measure this effect directly, focusing on systems with complex social behaviors.

For instance, during the height of the COVID-19 pandemic, many cities had to decide what restrictions to adopt, such as mask mandates, business closures, or stay-home orders. They had to act fast and balance public health with community and business needs, public spending, and a host of other considerations.

To estimate the effect of each restriction on infection rates, one might typically compare infection rates across areas that adopted different restrictions. If one county has a mask mandate while its neighboring county does not, comparing the two counties’ infection rates might seem to reveal the effectiveness of mask mandates.

But of course, no county exists in a vacuum. If, for instance, people from both counties gather to watch a football game in the maskless county every week, these counties mix. These complex interactions matter, and Sarah plans to study questions of cause-and-effect in such settings.
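A toy simulation, with entirely invented numbers, illustrates the problem. Two counties share the same outbreak dynamics except that county 0's mask mandate halves its transmission rate; `mixing` is an assumed fraction of contacts that cross the county line.

```python
import numpy as np

def simulate(mixing, days=60):
    """Toy epidemic in two counties. County 0 has a mask mandate
    that halves its transmission rate; `mixing` is the fraction of
    contacts that cross the county line."""
    beta = np.array([0.05, 0.10])      # transmission: mandate vs. none
    infected = np.array([0.01, 0.01])  # initial infected fractions
    for _ in range(days):
        # Each county's exposure blends local and cross-county infection.
        exposure = (1 - mixing) * infected + mixing * infected[::-1]
        infected = np.clip(infected + beta * exposure * (1 - infected), 0, 1)
    return infected

for mixing in (0.0, 0.4):
    i = simulate(mixing)
    print(f"mixing={mixing:.1f}: naive estimated effect = {i[1] - i[0]:.3f}")
```

The mandate's true benefit is the same in both runs, but the spillover of infections across the county line shrinks the measured gap, which is exactly the kind of bias that makes naive comparisons misleading in socially interconnected systems.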

“We’re interested in how decisions or interventions affect an outcome of interest, such as how criminal justice reform affects incarceration rates or how an ad campaign might change the public’s behaviors,” Sarah says.

Sarah has also stayed involved in the MIT community. As one of three co-presidents of the Graduate Women in MIT EECS student group, she helped organize the inaugural GW6 research summit featuring the research of women graduate students — not only to showcase positive role models to students, but also to highlight the many successful graduate women at MIT.

Whether in computing or in the community, a system taking steps to address bias is one that enjoys legitimacy and trust, Sarah says. “Accountability, legitimacy, trust — these principles play crucial roles in society and, ultimately, will determine which systems endure with time.”