Senators Cory Booker and Ron Wyden, along with Rep. Yvette Clarke, recently introduced a bill that would require large companies to audit their machine learning-based systems, such as facial recognition and ad-targeting algorithms, for any sort of bias.

Called the Algorithmic Accountability Act, the bill directs the Federal Trade Commission to lay down new rules for assessing automated systems and evaluating whether they are biased or “discriminatory.” If passed, the bill would require big companies to apply the new FTC guidelines and audit their algorithms for bias, discrimination, and any privacy or security risks they may pose to consumers.

The bill would apply to companies that meet any of the following criteria:

  1. Make over $50 million annually
  2. Hold data on at least 1 million people or 1 million devices
  3. Primarily function as “data brokers” that buy or sell consumer data

If an algorithm shows major indications of privacy violations, discrimination, or other serious issues, the company would be required to address them within a reasonable amount of time. The bill was likely drafted in response to several incidents of “algorithmic bias”: Amazon built an AI recruitment tool that discriminated against female applicants, facial recognition systems have struggled to recognize non-Caucasian faces, and most recently, Facebook was charged by the Dept. of Housing and Urban Development for “unfairly limiting who saw their ads.”
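The bill itself does not spell out how such an audit would work, but to make the idea concrete, here is a minimal sketch of one common fairness check, the “four-fifths rule” for disparate impact, applied to hypothetical hiring-model decisions. The metric, threshold, and data below are illustrative assumptions, not anything mandated by the Act or prescribed by the FTC.

```python
# A minimal sketch of one check an algorithmic bias audit might include:
# the "four-fifths rule" for disparate impact. All data here is hypothetical.

def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g., candidates advanced by a model)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a_outcomes, group_b_outcomes):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are commonly treated as a red flag."""
    rate_a = selection_rate(group_a_outcomes)
    rate_b = selection_rate(group_b_outcomes)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions (1 = advanced, 0 = rejected) for two groups.
female_applicants = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% advanced
male_applicants   = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # 60% advanced

ratio = disparate_impact_ratio(female_applicants, male_applicants)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 -- well below the 0.8 rule of thumb
```

In a real audit of the kind the bill envisions, a result like this would be the trigger for the company to investigate and remediate the model, not the end of the analysis.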

For the full details of this story, read The Verge’s coverage here: https://www.theverge.com/2019/4/10/18304960/congress-algorithmic-accountability-act-wyden-clarke-booker-bill-introduced-house-senate

Do you think this bill is enough to rid AI of bias? Is this bill necessary, or can companies be trusted to police their own AI for biases? Let us know your thoughts in the comments!
