56th Session of the Human Rights Council

Interactive Dialogue with the Special Rapporteur on contemporary forms of racism: report on AI and racism

8 July 2024

Australian Statement

Australia thanks the Special Rapporteur for her report.

Australia strongly supports a Safety by Design and human rights-based approach to new and emerging technologies, including AI, throughout the life cycle of systems.

Australia is deeply concerned by your report’s findings of the cross-cutting ways in which AI can contribute to manifestations of racial discrimination. We are deeply concerned that women, minority groups and marginalised people are disproportionately affected by algorithmic bias.

Datasets, training data and algorithm-driven analysis used in AI technology must be diverse, representative and inclusive of all people. There must be diversity and equity in the teams developing AI systems.

In Australia, to ensure AI is safe and responsible, we are exploring mandatory guardrails for developers and organisations using AI, clarifying and strengthening existing laws, and providing guidance to industry.

AI developers have an important role. Safety measures including independent oversight, risk assessments, transparency reporting, and red-teaming will help prevent AI from perpetuating and amplifying systems of racism, bias, and hate.

Special Rapporteur, what advice can you give to address multiple and intersecting forms of discrimination faced by women and girls in AI systems?