Oasis Labs partners with Meta to assess fairness of its AI models using cutting-edge privacy-preserving technologies.
As Meta’s technology partner, Oasis Labs built a platform that uses Secure Multi-Party Computation (SMPC) to safeguard information as Meta asks Instagram users to take a survey in which they can voluntarily share their race or ethnicity.
The project will advance fairness measurement in AI models, benefiting individuals across the globe and society as a whole. This first-of-its-kind platform is an important step toward identifying whether an AI model is fair and enabling appropriate mitigation.
How the platform will assess fairness in AI models
Meta’s Responsible AI, Instagram Equity, and Civil Rights teams are introducing an off-platform survey to people who use Instagram. Users will be asked to share their race and/or ethnicity on a voluntary basis.
The data, collected by a third-party survey provider, will be secret-shared with third-party facilitators so that neither the facilitators nor Meta can learn an individual user’s survey responses.
The facilitators then compute the measurement using encrypted prediction data from Meta’s AI models, which Meta cryptographically shares with them. Meta reconstitutes the combined, de-identified results from each facilitator into aggregate fairness measurement results.
These cryptographic techniques enable Meta to measure for bias and fairness while providing the individuals who contribute sensitive demographic data with strong privacy protections.
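The flow described above can be sketched with additive secret sharing, one common SMPC building block: each response is split into random shares, each facilitator aggregates only its own shares, and only the recombined sums reveal an aggregate count. This is a minimal illustrative sketch, not Meta’s or Oasis Labs’ actual protocol; the two-facilitator setup, the prime modulus, and the 0/1 response encoding are all assumptions.

```python
# Minimal sketch of additive secret sharing, assuming two non-colluding
# facilitators and a toy 0/1 encoding of survey responses.
# Illustrative only -- not the production protocol.
import secrets

P = 2**61 - 1  # large prime modulus (illustrative choice)

def share(value: int) -> tuple[int, int]:
    """Split a value into two additive shares; each share alone looks random."""
    r = secrets.randbelow(P)
    return r, (value - r) % P

def reconstruct(s1: int, s2: int) -> int:
    """Recombine two shares into the original value mod P."""
    return (s1 + s2) % P

# Each user's response: 1 if they selected a given category, else 0.
responses = [1, 0, 1, 1, 0]

# Split every response; facilitator A gets one share, facilitator B the other.
shares_a, shares_b = zip(*(share(v) for v in responses))

# Each facilitator sums only its own shares -- it never sees a raw response.
sum_a = sum(shares_a) % P
sum_b = sum(shares_b) % P

# Only the recombined sums reveal the aggregate count, not individual answers.
aggregate = reconstruct(sum_a, sum_b)
print(aggregate)  # 3
```

Because addition commutes with the sharing, the facilitators can aggregate locally while individual answers stay hidden from every single party; only collusion between both facilitators would expose them.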
PROJECT’S COMMON VISION
Meta and Oasis share a common vision around responsible AI and responsible use of data. Employing these cryptographic techniques at the scale at which they will be used on the platform is unprecedented. This is the beginning of a new journey.
“We seek to ensure AI at Meta benefits people and society which requires deep collaboration, both internally and externally, across a diverse set of teams. The Secure Multi Party Compute methodology is a privacy-focused approach developed in partnership with Oasis Labs that enables crucial measurement work on fairness while keeping people’s privacy at the forefront by adopting well-established privacy-preserving methods.”— Esteban Arcaute, Director of Responsible AI at Meta
Together with Meta, Oasis Labs will explore further privacy-preserving approaches for more complex bias studies. Given the desire to reach billions of people everywhere in the world, they also hope to explore novel uses of emerging Web3 technologies underpinned by blockchain networks, with the goal of providing greater global accessibility, auditability, and transparency in conducting surveys, gathering survey data, and using it in measurement.
“We are excited to be the technology partner with Meta on this groundbreaking initiative to assess fairness in AI models, while protecting users’ privacy, using cutting-edge cryptographic techniques. This is an unprecedented use of these techniques for a large-scale measurement of AI model fairness in the real world. We look forward to working with Meta to build towards responsible AI and responsible data use for a fairer and more inclusive society.”—Professor Dawn Song, Founder of Oasis Labs
ABOUT Oasis Labs
Oasis Labs builds trusted products with the latest in data security and governance technology. Its technologies focus on making it easier for developers to incorporate privacy-preserving data storage, governance, and computation.
ABOUT Meta
Meta builds technologies that help people connect, find communities, and grow businesses.