CS Talk - A. Feder Cooper

Event time: Thursday, March 7, 2024, 10:30am
Location: AKW 200
51 Prospect Street
New Haven, CT 06511
Event description: 


Host: Joan Feigenbaum

Title: Reliable Measurement for ML at Scale

Abstract:

To develop rigorous knowledge about ML models — and the systems in which they are embedded — we need reliable measurements. But reliable measurement is fundamentally challenging, and touches on issues of reproducibility, scalability, uncertainty quantification, epistemology, and more. In this talk, I will discuss the criteria needed to take reliability seriously: criteria for designing meaningful metrics, and criteria for methodologies that ensure we can dependably and efficiently measure these metrics at scale and in practice. I will give two examples from my research that put these criteria into practice: (1) large-scale evaluation of training-data memorization in large language models, and (2) evaluation of latent arbitrariness in binary classification settings in algorithmic fairness. Throughout this discussion, I will emphasize how public governance requires making metrics understandable to diverse stakeholders. For this reason, my work aims to design metrics that are legally cognizable — a goal that frames both my ML and legal scholarship. I will highlight several important connections that I have uncovered between ML and law, including the relationships between (1) the generative-AI supply chain and US copyright law, and (2) arbitrariness in ML and arbitrariness in legal rules.

This talk reflects joint work with collaborators at The GenLaw Center, Cornell CS, Cornell Law School, Google DeepMind, and Microsoft Research.

Bio:

A. Feder Cooper is a researcher in scalable machine learning (ML), working on reliable measurement and evaluation of ML. Cooper’s research develops nuanced quality metrics for ML behaviors and ensures that these metrics can be measured effectively at scale and in practice. Cooper’s contributions span distributed training, hyperparameter optimization, uncertainty estimation, model selection, and generative AI. To ensure that evaluation metrics meaningfully capture our goals for ML, Cooper also leads research in tech policy and law, and works to communicate the capabilities and limits of AI/ML clearly to the broader public.

Cooper is a CS Ph.D. candidate at Cornell University; an Affiliate at the Berkman Klein Center for Internet & Society at Harvard University; a co-founder of The Center for Generative AI, Law, and Policy Research (The GenLaw Center); and a student researcher at Google Research. Cooper has received multiple spotlight and oral presentation awards at top conferences, including NeurIPS, AAAI, and AIES, and was named a “Rising Star in EECS” by MIT.