Algorithmic Bias

In the simplest terms, algorithmic bias occurs when an AI system produces results that are systematically skewed in favor of or against a particular demographic. It has many causes.

One cause is, believe it or not, human bias. If, for example, racist policing and sentencing practices led to African Americans being arrested and incarcerated at disproportionate rates, then a model trained on that data is likely to reproduce the same bias against African Americans in its predictions. Something like this happened with COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a tool used by courts to predict a defendant's risk of reoffending: a 2016 ProPublica analysis found that Black defendants who did not reoffend were flagged as high-risk at nearly twice the rate of white defendants who did not reoffend.

Another cause of algorithmic bias is incomplete or under-representative training data. If a demographic is under-represented in the training data, there is a high chance that the model's predictions will be less accurate, and overall worse, for that group. An example would be facial-recognition software that performs poorly on non-white faces because the training data lacked diverse faces.
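The under-representation effect is easy to reproduce on synthetic data. The sketch below is a toy illustration, not any real system: it invents two hypothetical groups whose score distributions differ, trains a one-parameter threshold classifier on data that is 95% group "a", and then measures accuracy per group. All names, distributions, and proportions here are made up for the demonstration.

```python
import random

random.seed(0)

def sample(group, n):
    # Hypothetical synthetic data: one score per person, where the
    # score-to-label relationship is shifted lower for group "b".
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        shift = 0.0 if group == "a" else -0.3
        score = (1.0 if label else 0.0) + shift + random.gauss(0, 0.4)
        data.append((score, label))
    return data

# Under-representative training set: 95% group "a", 5% group "b".
train = sample("a", 950) + sample("b", 50)

def accuracy(data, t):
    # Fraction of examples where "score above threshold" matches the label.
    return sum((s > t) == y for s, y in data) / len(data)

# "Train" the model: pick the threshold that maximizes training accuracy.
threshold = max((t / 100 for t in range(-100, 200)),
                key=lambda t: accuracy(train, t))

# Evaluate separately on fresh data from each group: the learned threshold
# suits the majority group, so accuracy on group "b" comes out lower.
acc_a = accuracy(sample("a", 5000), threshold)
acc_b = accuracy(sample("b", 5000), threshold)
print(f"accuracy on group a: {acc_a:.2f}")
print(f"accuracy on group b: {acc_b:.2f}")
```

The model is not told anyone's group, yet it still performs worse on group "b", because the threshold that minimizes overall training error is effectively the best threshold for the majority group.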