Adaptation is at the heart of graduate student Alireza Fallah’s research. Currently in his fourth year at MIT, he received his master’s degree in 2019 and is continuing for his doctorate, finding a way forward despite the significant changes to study, work, and life brought on by the COVID-19 pandemic.

Supervised by Prof. Asu Ozdaglar in the Electrical Engineering and Computer Science Department, he came to LIDS after getting his Bachelor of Science in electrical engineering and mathematics from Sharif University of Technology in Tehran, Iran. He chose MIT because of the lab’s stellar reputation. “What LIDS is famous for is using math for characterizing and modeling many of the things that happen in the real world. So the idea of borrowing all these mathematical tools and using them to deal with these problems was something I was looking for, and at MIT, lots of people are doing it.”

Alireza’s main interest is optimization. “What I have been working on recently is characterizing the theory of optimization algorithms used in various machine learning problems, in particular in meta learning and federated learning,” he explains. “In machine learning, the idea is you have some data set and you’re trying to train the model so that it works well on potentially unseen data...[but] the idea of meta learning is to train a meta model for lots of tasks so that it can be updated and then used for a specific task.”

To get a sense of meta and federated learning, think of Apple training a model to finish a person’s words when they’re writing a text message. This is quite a complex process, because these automatically suggested endings have to adapt to the language and slang of the hundreds of millions of people worldwide who use iPhones. So Apple needs to get information from a sample of their users, but they need to do it in a secure way, rather than having users send all of their data directly.

One way to approach this would be to use federated learning, which can take advantage of the combined computational power of all users, yielding a richer model trained over a larger set of data points while maintaining privacy. If they were to use federated learning, Apple would start with a general model, or algorithm, that they send from their server to a user’s phone. Then, as each user sends texts, in effect training the model locally, the resulting small model updates, combined with those of other users, would be used to improve the general model on the server. This improved, updated model would then be sent back to all users.
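To make that loop concrete, here is a minimal sketch of the federated averaging idea in Python. It illustrates the general scheme rather than Apple’s actual system; the toy linear model, the function names, and the parameters are all invented for the example.

```python
import numpy as np

def local_update(global_weights, user_data, lr=0.01, steps=5):
    """Runs on a user's device: refine the server's model on local data.
    Only the updated weights leave the phone, never the raw data."""
    x, y = user_data
    w = global_weights.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, sampled_users):
    """Runs on the server: average the weights returned by a sample of users."""
    updates = [local_update(global_weights, data) for data in sampled_users]
    return np.mean(updates, axis=0)

# Toy run: three "users", each holding a small private linear-regression dataset.
rng = np.random.default_rng(0)
users = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(10):
    weights = federated_round(weights, users)
```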

“Federated learning means that you’ll be training that [model] by basically giving the model a slight update and sending it back to the server,” says Alireza. “The issue with that you can immediately see is that there’s one model. There’s one model that’s supposed to work for your text messages, my text messages, someone living in England, in New York, in California, but people have different languages, different slangs...[and] having a shared model might not be a good idea, and having a little bit of personalization might be good.”

So what you actually have at the server is a meta model: an algorithm that gets updated locally, based on each user’s data. Alireza explains, “You update it with your own data, I update it with my own data, so I get a personalized model. We all start from that meta model, but we get our own copy and update it with respect to our data.”
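In code, that personalization step is tiny. Continuing the illustrative sketch above (again with invented names and a toy linear model, not the actual system), each user takes a small gradient step away from the shared meta model:

```python
def personalize(meta_weights, user_data, alpha=0.01):
    """Runs on each device: start from the shared meta model and take one
    gradient step on the user's own data, yielding a personalized copy."""
    x, y = user_data
    grad = 2 * x.T @ (x @ meta_weights - y) / len(y)  # mean-squared-error gradient
    return meta_weights - alpha * grad

# Every user starts from the same meta model but ends up with their own model.
personal_models = [personalize(weights, data) for data in users]
```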

In his recent work, Alireza has been designing an algorithm that takes advantage of the benefits of federated learning while keeping the personalization that meta learning can provide. He and his colleagues have found that taking a meta-learning approach to federated learning can give crucial and timely personalization.
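Roughly speaking, this amounts to asking the server to optimize not the shared model itself, but the models users end up with after personalizing it. With n users, a loss f_i for each user i, and a local step size α, one way to write such an objective (illustrative notation in the spirit of model-agnostic meta learning, not necessarily the paper’s own) is

\[
\min_{w} \; \frac{1}{n} \sum_{i=1}^{n} f_i\big(w - \alpha \nabla f_i(w)\big),
\]

where the inner term \(w - \alpha \nabla f_i(w)\) is exactly the one-step local personalization each user performs.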

Another application that could benefit from meta learning is autonomous cars. Alireza says, “Let’s say you want to train the model that is behind the artificial intelligence used in this car. So the thing is that sometimes your car needs to make a decision within a second. Let’s say you’re going from your home to your workplace, you don’t know whether the light will be green or red, you don’t know whether you’re going to see a kid and need to stop, and so forth. So the idea is that if I can have a meta model trained so that it can adjust to each of these situations with just a small amount of data, if my sensor just gives me one second of data and it can adjust the meta model, that would be perfect.” In other words, the model should behave as much like the human brain as possible, able to adapt quickly and make the safest, objectively best decision in a given situation without a lot of data.

Part of Alireza’s goal is to design and analyze algorithms for training such a model. “There are some algorithms where we have shown promising results in practice in real-world applications, in robotics for example. But the issue is we still have a lack of theory. You want to have guarantees for your algorithm. If you’re sending an autonomous car to the road, you want to be sure that it’s going to work. Our goal is to give theories and new algorithms for that training part of that meta model.”

Because meta learning in particular is still in the early stages of research and development, there’s a lot of theory to explore in the search to discover why and how it works. “This is a challenge for me, to complete the piece of the puzzle that is missing,” Alireza says. “And I like the problem also because it has its own challenges, it’s not straightforward, but it’s not impossible to tackle. And at the same time, you get something that you can use.”

This past summer, despite the pandemic, he completed an internship at Apple, working remotely from his home in Cambridge rather than on site in California. Of the ways his working style has changed due to COVID-19, he says, “There are definitely challenges. In the beginning, it was hard to really just stay at home and work. For people who are trying to do the theory, say you are dealing with a math problem for like two days, and if you’re in the office, you can talk with your office mate, talk with your friend. But at home you’re just yourself and yourself and yourself talking through the problem. But over the past few months, I have learned more how to push myself when I’m stuck and at home.” Since conferences have mainly been moved online, it’s still possible to attend some of them, although as Alireza says, it makes networking harder. But overall, the MIT community has pulled together to make this unusual situation work until they can be together in person again — something Alireza is very much looking forward to.