O'Neil Risk Consulting & Algorithmic Auditing

It’s the age of the algorithm and we have arrived unprepared.

Algorithms are increasingly assisting or replacing people in making important decisions. Today, algorithms help decide who gets hired, how much to charge for insurance, and who gets approved for a mortgage or a credit card. They also inform choices about sentencing, parole, and bail. We tend to hear about these algorithms when they mess up: when they offer women less credit than men, make it harder for people with mental health conditions to get jobs, or treat black defendants more harshly than white ones.

Whether made by people or algorithms, these are hard decisions. Sometimes they will be wrong. But there is no excuse for an algorithm to be racist, sexist, ageist, ableist, or otherwise discriminatory.

What We Do

ORCAA is a consultancy that helps companies and organizations manage and audit algorithmic risks. When we consider an algorithm, we ask two questions:

  1. What does it mean for this algorithm to work? 

  2. How could this algorithm fail, and for whom?

Often we ask these questions directly with companies and other organizations, focusing on algorithms they are using. We also ask them with regulators and lawmakers in the course of developing standards for algorithmic auditing, including translating existing fairness laws into rules for algorithm builders. No matter the partner, our approach is inclusive: we aim to incorporate and address concerns from all the stakeholders in an algorithm, not just those who built or deployed it.

Services

Algorithmic audit

We work with an organization to audit the use of a particular algorithm in context, identifying issues of fairness, bias, and discrimination and recommending steps for remediation. Extensions of the basic audit include:

  • Assistance with remediating the fairness issues identified

  • Developing strategic communications about the audit and findings

  • Public-facing certification of fairness (where applicable)

Early warning system

Do you worry that your organization’s algorithms, whether in development or in production, may be problematic? We can tailor an "early warning system" that flags such problems in advance, raising ethical or legal questions to the appropriate committee.
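
As a hypothetical illustration of what one such check might look like, the sketch below periodically compares a model's approval rates across groups and escalates any group whose rate falls disproportionately low. All names, data, and thresholds here are our illustrative assumptions, not ORCAA's actual tooling.

```python
# Hypothetical early-warning check: compare recent approval rates across
# groups and flag any group approved at well below the highest group's rate.
# Names, data layout, and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

ALERT_RATIO = 0.8  # flag groups approved at under 80% of the highest group's rate

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

def groups_to_escalate(decisions):
    """Return the groups whose approval rate falls below the alert threshold."""
    rates = approval_rates(decisions)
    highest = max(rates.values())
    return [group for group, rate in rates.items() if rate < ALERT_RATIO * highest]

# Toy data: group B's approval rate is half of group A's, so B gets flagged
# for review by the appropriate committee.
recent = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(groups_to_escalate(recent))  # ['B']
```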

Vendor vetting + procurement assistance

We help organizations perform stronger due diligence when procuring AI or predictive technologies from third parties. For instance, we can raise potential issues in advance, prepare questions and requests for vendors, and review the materials they provide.

Workshops + education

We offer workshops that give participants hands-on experience with our auditing framework and process. We also give talks and trainings on algorithmic auditing and fairness.

Expert witness work

We assist public agencies and law firms in legal actions related to algorithmic discrimination and harm. 

Bespoke consulting

We help organizations prepare for the age of algorithms by:

  • Creating strategies and processes to operationalize fairness as they develop and/or incorporate algorithmic tools in their operations

  • Working with industry regulators to translate fairness laws and rules into specific standards for algorithm builders

Industries + Clients

Our partners range from Silicon Valley to the DMV, including private and public companies of all sizes and public agencies, in the US and internationally. We have a special interest in algorithms used for credit, insurance, education, and hiring – because we think algorithms in these sectors face relatively little scrutiny but have major impacts on people’s lives.

Principles

Context Matters

An algorithm isn’t good or bad per se – it is just a tool. Who is using it, and to what end? What are the consequences? We go beyond traditional accuracy metrics to ask how the algorithm is used to make decisions, and who is affected.

Putting the science into data science

Graphs and statistical tests can be useful – if they answer a meaningful question. We translate specific ideas and concerns about fairness into statements we can test empirically. Then we design experiments to determine whether those standards are being met.
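
For example, a concern like "two groups should be approved at similar rates" can be phrased as a testable hypothesis and checked with a standard two-proportion z-test. The sketch below is our illustration of that translation, not ORCAA's methodology; the numbers and names are hypothetical.

```python
# Hypothetical sketch: phrase a fairness concern as a testable hypothesis
# (H0: both groups share one approval rate) and run a two-proportion z-test.
# The audit data below is made up for illustration.
from math import erf, sqrt

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for equality of two proportions."""
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (successes_a / n_a - successes_b / n_b) / se
    # Two-sided p-value via the standard normal CDF: Phi(x) = (1 + erf(x/sqrt(2))) / 2
    p_value = 2 * (1 - (1 + erf(abs(z) / sqrt(2))) / 2)
    return z, p_value

# Hypothetical audit data: 180/400 approvals in one group vs 120/400 in another.
z, p = two_proportion_z_test(180, 400, 120, 400)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the gap is unlikely to be chance
```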

AI Ethics cannot be automated

There cannot be a universal algorithmic auditing checklist or a fully automated review process – context is too important. An audit for a specific purpose may be repeatable, but human discussion and judgment will always be essential. We will tailor a tool to your needs, not repurpose what worked in another setting.

We are ORCAA

Cathy O’Neil

CEO

Cathy has been an independent data science consultant since 2012 and has worked for clients including the Illinois Attorney General’s Office and Consumer Reports. She co-wrote Doing Data Science in 2013 and is the author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, released in September 2016.

 

Tom Adams

COO and General Counsel

Tom has over twenty-five years of business and legal experience. He has represented banks, companies, and individuals on corporate, securities, and business law matters, and has provided strategic advice, litigation support, and expert witness testimony on issues relating to the financial crisis. He specializes in solving problems in complex financial and corporate transactions and has advised banks, insurance companies, private equity firms, hedge funds, and a variety of other companies. He graduated from Colgate University in 1986 and Fordham Law School in 1989, and is admitted to practice in New York.

 

Jacob Appel

Chief Strategist

Jake has spent the past 15 years delivering and measuring social impact. His work focuses on finding and quantifying improvements to current policy and practice using randomized experiments, behavioral science, and human-centered design. Before joining ORCAA, he was a consultant with the Behavioral Insights Team, where he advised state and local governments on designing and testing “nudges” in citizen-facing policies and programs. Jake received a BS in mathematics from Columbia University and an MPA from the Princeton School of Public and International Affairs. He co-authored two books: More Than Good Intentions: How a New Economics Is Helping to Solve Global Poverty and Failing in the Field: What We Can Learn When Field Research Goes Wrong.

 

Meredith Broussard

Affiliate

Data journalist Meredith Broussard is an associate professor at the Arthur L. Carter Journalism Institute of New York University, research director at the NYU Alliance for Public Interest Technology, and the author of Artificial Unintelligence: How Computers Misunderstand the World. Her academic research focuses on artificial intelligence in investigative reporting and ethical AI, with a particular interest in using data analysis for social good. She appeared in the 2020 documentary Coded Bias, an official selection of the Sundance Film Festival that was nominated for an NAACP Image Award. She is an affiliate faculty member at the Moore-Sloan Data Science Environment at the NYU Center for Data Science and a 2019 Reynolds Journalism Institute Fellow, and her work has been supported by New America, the Institute of Museum & Library Services, and the Tow Center at Columbia Journalism School. A former features editor at The Philadelphia Inquirer, she has also worked as a software developer at AT&T Bell Labs and the MIT Media Lab. Her features and essays have appeared in The Atlantic, The New York Times, Slate, and other outlets.