The document discusses how artificial intelligence systems used for risk assessments can produce racially biased outcomes because of biases in their training data. It notes that AI systems are increasingly used for pre-trial risk assessments in the US legal system. These systems apply supervised learning to historical data to make predictions, so if that data reflects human biases, the resulting models may reproduce unfair or unequal treatment based on characteristics such as race or gender. The document also promotes an AI-based people-search tool for the legal industry and provides a contact email.
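The mechanism described above, a supervised model inheriting bias from its labels, can be sketched with a toy example. Everything below is a hypothetical illustration with invented data, not Vijilent's actual system: two groups have identical behavioral features, but one group was labeled "high risk" more often in the historical records, and the trained model reproduces that gap.

```python
# Minimal sketch (hypothetical data) of how supervised learning can
# reproduce bias present in historical labels. All names and numbers
# are illustrative assumptions, not any real risk-assessment model.
from collections import defaultdict

# Each record: (group, prior_arrests, historical "high risk" label).
# Groups "A" and "B" have identical features (prior_arrests), but
# group "B" was labeled high-risk more often -- bias baked into the data.
train = [
    ("A", 1, 0), ("A", 1, 0), ("A", 2, 0), ("A", 2, 1),
    ("B", 1, 1), ("B", 1, 0), ("B", 2, 1), ("B", 2, 1),
]

def fit(rows):
    """'Train' by estimating P(high risk) for each (group, arrests) pair."""
    counts = defaultdict(lambda: [0, 0])  # key -> [positives, total]
    for group, arrests, label in rows:
        counts[(group, arrests)][0] += label
        counts[(group, arrests)][1] += 1
    return {key: pos / tot for key, (pos, tot) in counts.items()}

model = fit(train)

# Identical inputs except for group membership yield different scores:
print(model[("A", 2)])  # 0.5
print(model[("B", 2)])  # 1.0
```

Even this trivial "model" scores otherwise identical individuals differently by group, because it faithfully learned the skewed labels it was given.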
VIJILENT
What IS Artificial Intelligence?
The learned capability of a machine to
perform complex tasks, make decisions,
or predict future outcomes.