
Australia Proposes Mandatory Guardrails for AI

Requiring AI models to be tested, keeping humans involved and giving people the right to challenge automated decisions made by AI are just some of the 10 mandatory measures the Australian government has proposed to minimise AI risk and increase public trust in the technology.

These guardrails, which were released for public consultation by Minister for Industry and Science Ed Husic in September 2024, could soon apply to AI used in high-risk settings. They are complemented by a new Voluntary AI Safety Standard, designed to encourage businesses to adopt best-practice AI now.

What are the proposed mandatory AI guardrails?

Australia’s 10 proposed mandatory guardrails are designed to set clear expectations for how to use AI safely and responsibly when developing and deploying it in high-risk settings. They aim to address the risks and harms of AI, build public trust, and give businesses greater regulatory certainty.

Guardrail 1: Accountability

Similar to AI legislation in Canada and the EU, organisations will need to establish, implement, and publish an accountability process for regulatory compliance. This will cover aspects such as policies for data and risk management, and clear internal roles and responsibilities.

Guardrail 2: Risk management

Organisations will need to establish and implement a risk management process to identify and mitigate the risks of AI. This must go beyond a technical risk assessment to consider the potential impacts on people, population groups, and society before a high-risk AI system can be put to use.

SEE: 9 innovative use cases for AI in Australian businesses in 2024

Guardrail 3: Data protection

Organisations will need to safeguard AI systems with cybersecurity measures to protect privacy, and put in place strong data governance measures to manage the quality of data and where it comes from. The government has observed that data quality directly affects an AI model’s performance and reliability.
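
For teams wondering what data governance tooling might look like in practice, the sketch below runs basic quality checks and records dataset provenance. It is a minimal, hypothetical illustration in Python using pandas; the file name, thresholds, and source label are assumptions, not anything the guardrails prescribe.

```python
# Minimal sketch of a data-quality and provenance check (hypothetical;
# the thresholds and file names are illustrative, not prescribed).
import hashlib
import json
import pandas as pd

def check_data_quality(df: pd.DataFrame, max_missing_ratio: float = 0.05) -> dict:
    """Run basic completeness and duplication checks on a training set."""
    missing_ratio = df.isna().mean().max()   # worst column's share of missing values
    duplicate_rows = int(df.duplicated().sum())
    return {
        "rows": len(df),
        "worst_missing_ratio": float(missing_ratio),
        "duplicate_rows": duplicate_rows,
        "passed": missing_ratio <= max_missing_ratio and duplicate_rows == 0,
    }

def record_provenance(df: pd.DataFrame, source: str) -> dict:
    """Record where the data came from and a hash of its contents."""
    digest = hashlib.sha256(df.to_csv(index=False).encode()).hexdigest()
    return {"source": source, "sha256": digest, "rows": len(df)}

if __name__ == "__main__":
    df = pd.read_csv("training_data.csv")  # hypothetical dataset path
    report = {
        "quality": check_data_quality(df),
        "provenance": record_provenance(df, source="internal CRM export"),
    }
    print(json.dumps(report, indent=2))
```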

Guardrail 4: Testing

High-risk AI systems will need to be tested and evaluated before they are placed on the market, and continuously monitored once deployed to confirm they are operating as expected. This is to ensure they meet specific, objective, and measurable performance metrics and that risk is minimised.
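
To make the continuous monitoring requirement concrete, here is a minimal sketch of a post-deployment check that compares a model’s live accuracy against the baseline established in pre-release testing. The metric, threshold, and function names are hypothetical and for illustration only.

```python
# Hypothetical sketch of continuous post-deployment monitoring: compare a
# live performance metric against the baseline set during pre-release testing.
from dataclasses import dataclass

@dataclass
class MonitoringResult:
    metric: str
    live_value: float
    threshold: float
    within_tolerance: bool

def check_deployed_model(live_accuracy: float,
                         baseline_accuracy: float,
                         max_drop: float = 0.03) -> MonitoringResult:
    """Flag the model if live accuracy falls more than max_drop below baseline."""
    threshold = baseline_accuracy - max_drop
    return MonitoringResult(
        metric="accuracy",
        live_value=live_accuracy,
        threshold=threshold,
        within_tolerance=live_accuracy >= threshold,
    )

# Example: baseline accuracy of 0.92 from pre-release evaluation,
# live accuracy of 0.87 measured on recent production traffic.
result = check_deployed_model(live_accuracy=0.87, baseline_accuracy=0.92)
if not result.within_tolerance:
    print(f"ALERT: {result.metric} {result.live_value:.2f} is below "
          f"threshold {result.threshold:.2f}; human review required.")
```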

Image: Ways the Australian government is supporting safe and responsible AI.

Guardrail 5: Human control

Meaningful human oversight will be required for high-risk AI systems. This means organisations will need to ensure humans can effectively understand the AI system, oversee its operation, and intervene where necessary across the AI supply chain and throughout the AI lifecycle.

Guardrail 6: User information

Organisations will need to inform end users whether they are the subject of any AI-powered decisions, interacting with AI or consuming any AI-generated content so they know how AI is being used and where it impacts them. This will need to be communicated in a clear, accessible and relevant way.

Guardrail 7: Challenging AI

People negatively affected by AI systems will have the right to challenge their use or outcomes. Organisations will need to establish processes through which people affected by high-risk AI systems can contest AI-enabled decisions or make complaints about their experience or treatment.

Guardrail 8: Transparency

Organisations will need to be transparent with one another about data, models, and systems across the AI supply chain so risks can be addressed effectively. This is because some actors may otherwise lack critical information about how a system works, resulting in limited explainability, similar to problems with today’s advanced AI models.

Guardrail 9: AI records

Organisations will be required to keep and maintain a range of records about their AI systems throughout the lifecycle, including technical documentation. They must be prepared to provide these records to the relevant authorities on request, and for the purpose of assessing compliance with the guardrails.
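
As a rough illustration of what machine-readable record keeping might look like, the sketch below stores lifecycle records for an AI system in a structured document that can be exported on request. The schema and field names are assumptions for illustration; the guardrails do not prescribe a particular format.

```python
# Hypothetical sketch of structured lifecycle records for an AI system.
# The schema is illustrative; the guardrails do not prescribe specific fields.
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class AISystemRecord:
    system_name: str
    version: str
    intended_use: str
    training_data_sources: list[str]
    evaluation_results: dict[str, float]
    deployment_date: str
    lifecycle_events: list[str] = field(default_factory=list)

    def log_event(self, event: str) -> None:
        """Append a dated entry to the system's lifecycle history."""
        self.lifecycle_events.append(f"{date.today().isoformat()}: {event}")

    def export(self) -> str:
        """Serialise the record for provision to a regulator on request."""
        return json.dumps(asdict(self), indent=2)

record = AISystemRecord(
    system_name="loan-approval-model",  # hypothetical system
    version="2.1.0",
    intended_use="Credit risk scoring for personal loan applications",
    training_data_sources=["internal loan history 2015-2023"],
    evaluation_results={"accuracy": 0.91, "false_negative_rate": 0.04},
    deployment_date="2024-08-01",
)
record.log_event("Quarterly bias audit completed")
print(record.export())
```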

SEE: Why generative AI projects risk failure without business understanding

Guardrail 10: AI assessments

Organisations will be subject to conformity assessments, described as an accountability and quality assurance mechanism, to demonstrate they have complied with the guardrails for high-risk AI systems. These may be carried out by the developers of the AI system, by third parties, or by government entities or regulators.

When and how will the 10 new mandatory guardrails come into effect?

The mandatory guardrails are open for public consultation until October 4, 2024.

Husic said the government would then look to finalise the guardrails and give them legal force, which could include creating a new Australian AI Act.

Other options include:

  • Adapting existing regulatory frameworks to give effect to the new guardrails.
  • Introducing framework legislation, with associated amendments to existing legislation.

Husic said the government would do this “as soon as possible”. The guardrails are the result of a longer consultation process on AI regulation that has been under way since June 2023.

Why does the government approach regulation this way?

The Australian government is following the EU’s lead in taking a risk-based approach to AI regulation, which seeks to balance the benefits AI promises to deliver against the risks that arise from its deployment in high-risk settings.

Focus on high-risk environments

In its Safe and responsible AI in Australia proposals paper, the government said the preventive measures proposed in the guardrails are aimed at “preventing catastrophic damage before it occurs”.

The government will define high-risk AI as part of the consultation, but it has suggested it will consider scenarios such as adverse impacts on an individual’s human rights, adverse impacts on physical or mental health or safety, and legal effects such as the generation of defamatory material, among other potential risks.

Businesses need guidance on AI

The government claims that businesses need clear boundaries so they can implement AI safely and responsibly.

The newly released Responsible AI Index 2024, commissioned by the National AI Centre, shows Australian businesses consistently overestimate their capacity to adopt responsible AI practices.

The index results are as follows:

  • 78% of Australian businesses believe they are implementing AI safely and responsibly, but this was true in only 29% of cases.
  • On average, Australian organisations are adopting only 12 out of 38 responsible AI practices.

What should businesses and IT teams do now?

The mandatory guardrails will create new obligations for organisations using AI in high-risk settings.

IT and security teams are likely to be tasked with meeting some of these requirements, including data quality and security obligations, and ensuring model transparency throughout the supply chain.

Voluntary AI Safety Standard

The government has announced that the Voluntary AI Safety Standard is now available for businesses to use.

To prepare, IT teams can use the AI Safety Standard to help them get ahead of any future regulatory requirements, which are likely to include the new mandatory guardrails.

The AI Safety Standard includes advice on how businesses can apply and adopt the standard through specific case studies, including the common use case of a general-purpose AI chatbot.