Principle of Discriminatory Non-Harm

The designers and users of AI systems that process social or demographic data pertaining to features of human subjects, societal patterns, or cultural formations should prioritize the mitigation of bias and the exclusion of discriminatory influences on the outputs and implementations of their models. Prioritizing discriminatory non-harm implies that the designers and users of AI systems ensure that the decisions and behaviours of their models do not generate discriminatory or inequitable impacts on affected individuals and communities. This entails that designers and users ensure that the AI systems they develop and deploy:

  1. Data Fairness: Are trained and tested on properly representative, relevant, accurate, and generalizable datasets.

  2. Design Fairness: Have model architectures that do not include target variables, features, processes, or analytical structures (correlations, interactions, and inferences) which are unreasonable, morally objectionable, or unjustifiable.

  3. Outcome Fairness: Do not have discriminatory or inequitable impacts on the lives of the people they affect (one way to audit for such impacts is sketched after this list).

  4. Implementation Fairness: Are deployed by users sufficiently trained to implement them responsibly and without bias.
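
As one concrete illustration of auditing for outcome fairness, a deployment team might monitor the rate of favourable decisions across demographic groups. The following is a minimal sketch, not a prescribed method: the `disparate_impact_ratio` helper, the decision data, and the 0.8 review threshold (the so-called "four-fifths rule" used in US employment-law contexts) are all illustrative assumptions.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return each group's favourable-outcome rate divided by the best-off group's rate.

    Ratios well below 1.0 (conventionally below ~0.8) suggest the model's
    outcomes warrant a fairness review. Illustrative helper, not a standard API.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical decision log: 1 = favourable outcome, 0 = unfavourable.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratios = disparate_impact_ratio(decisions, "group", "approved")
print(ratios)               # A: 1.00, B: 0.33
print(ratios.min() < 0.8)   # True -> flag for review under the illustrative threshold
```

A check of this kind addresses only one facet of outcome fairness; data, design, and implementation fairness call for complementary audits of the training data, model features, and deployment practices.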
