in some cases, the same organisation may be both a Developer and a Deployer.<\/li>\n<\/ul>\nA Deployer using an AI system does not generally have control over design decisions made and data used by the organisation that developed the system. Likewise, a Developer generally does not have control over the subsequent uses of the AI system by an organisation that deploys the system.<\/p>\n
It is critical that Deployers have the necessary data governance practices in place to ensure the proper use of AI models provided by Developers. Data used in AI applications is specific to the use case, and Deployers should understand the impact that poor quality or inaccurate data will have on the AI model's decision making.<\/p>\n
AI regulation should make clear the responsibilities and obligations of AI Developers and Deployers. This ensures the appropriate organisation in the supply chain can identify and mitigate risk. Importantly from a citizen or customer perspective it ensures that lines of accountability are matched to the relationship between the end user and provider.<\/p>\n
For example, transparency obligations as to the use of AI in high impact decision making would be best applied to the Deployer, as the Deployer has a direct relationship with the customer, whereas the Developer has no relationship with the customer.<\/p>\n
This distinction has parallels to the General Data Protection Regulation (<strong>GDPR<\/strong>), widely considered to be best practice privacy regulation, which has the Processor \/ Controller distinction. The benefit of this distinction was recognised in the context of the <em>Privacy Act 1988<\/em> (Cth) ('Privacy Act') reforms, which recommended amendments to the Act to incorporate the Processor \/ Controller distinction.<\/p>\nFurthermore, a distinction can also be made between scenarios that involve direct interaction with consumers or citizens and business-to-business interactions. In the business-to-business context, we recommend maximum freedom to contract, so that responsibilities and obligations can be placed on the entities best placed to comply with them, mitigate risks, and understand the specific context and use case.<\/p>\n
Interventions should leverage existing regulation<\/strong><\/h3>\nA number of issues have been raised in connection with the increased prevalence and power of AI technology. For example:<\/p>\n
\n- Generative AI creates believable images and videos of events that never occurred. This has the potential to create misinformation and unfairly damage the reputation of an individual, undermine democratic institutions, or infringe on the commercial copyright of an individual or business.<\/li>\n
- The use of AI in autonomous vehicles is not always effective, and it could potentially result in harm to the driver or passengers. This raises concerns regarding consumer protection and product safety.<\/li>\n
- AI-assisted decision making, which is built on incomplete or biased data sets, could result in unfair discrimination against an individual. The impact of such discrimination could be life-changing depending on the importance of the decision being made.<\/li>\n<\/ul>\n
麻豆原创 recognises that governments will need to intervene to manage these potential harms. However, we consider that actions taken should build on existing regulation, rather than adopting a technology-specific regulatory approach. For example, considerations of the impact of AI on worker rights and employment conditions will be different to those on vehicle safety. Whether a government response is required will be best determined by the policy makers and regulators responsible for those matters.<\/p>\n
As noted by the Paper, AI is an enabling technology; as such, it is often an element in other systems and technologies and is used across industries. This means that AI will be regulated under multiple laws, increasing the likelihood of duplication and conflict between regulatory systems.<\/p>\nAI development often depends on the ingestion of large sets of data that are used to train algorithms to produce models that assist with decision-making. To the extent that any of the input for an AI model involves personal data, or any output is used to make decisions that affect the rights or interests of individuals, the AI model and its applications are already directly subject to the Privacy Act.<\/p>\n
We support the existing approach taken by the Government, which has leveraged existing frameworks and avoided duplicating or creating conflicting requirements, while promoting trust by enforcing existing legislation. For example, the Privacy Act already protects the use of citizen data in large data sets (an essential element for AI), and measures are being considered to improve transparency on the use of Automated Decision Making.<\/p>\n
To the extent it is not already being done, there is value in reviewing existing regulatory frameworks as they relate to consumer, data protection and privacy, corporate, criminal, online safety, administrative, copyright, and intellectual property laws against the potential harms from AI to determine whether they are fit for purpose. However, this should be done with:<\/p>\n
\n- an understanding of the AI supply chain;<\/li>\n
- the application of a risk-based methodology (see section Regulatory intervention should take a risk based approach<\/em>);<\/li>\n
- reference to a common set of principles (see section Ensure a common set of principles for government AI and the application of regulation<\/em>); and<\/li>\n
- co-ordination across government (see A co-ordinating body and governance structure<\/em>).<\/li>\n<\/ul>\n
Otherwise, there is a risk that over-prescriptive rules will hinder investments in AI as well as the use of innovative AI solutions.<\/p>\n
Ensure a common set of principles for government AI and the application of regulation <\/strong><\/h3>\nThe Australian Government should review the effects of AI against existing regulatory powers to determine whether action is necessary. Since this will require consideration from regulators and policy makers from various portfolios, the Government should establish a set of common principles to guide the assessment and ensure a consistent approach. These principles would guide policy making, reform and enforcement.<\/p>\n
Globally and domestically, there are many existing AI ethical frameworks and principles that are applied by organisations utilising AI in the operation of their businesses. As noted previously, 麻豆原创's Global AI Ethics Policy uses the principles of Human Agency, Addressing Bias and Discrimination, and Transparency and Explainability.<\/p>\n
These global policies can be distilled into some common themes, which are reflected in the UK Government\u2019s White Paper:<\/p>\n
\n- Transparency \u2013 How clear is it what systems and processes are AI enabled and to what extent can the basis for its decision making be understood?<\/li>\n
- Fairness\/ Non-bias \u2013 To what extent does AI discriminate unfairly against individuals or groups?<\/li>\n
- Contestability \u2013 Within a decision-making construct enabled by AI to what extent can an individual challenge the decision or have it reviewed?<\/li>\n
- Safety \u2013 Does the AI system work robustly, securely, and safely?<\/li>\n
- Accountability \u2013 Is there sufficient oversight of decision making, are organisations accountable for the effective operation of the AI system, and is there sufficient governance around decisions made on the AI implementation?<\/li>\n<\/ol>\n
Whichever principles the Australian Government settles on, what is critical is that these principles exist and form part of a mandatory framework for the consideration of policy or regulatory intervention that impacts on AI technologies.<\/p>\n
Regulatory intervention should take a risk based approach <\/strong><\/h3>\nThe application of these principles to AI-related issues should be undertaken on a risk-based approach. Regulatory interventions should be scaled to meet the risk, i.e. an assessment of the likelihood of harm to an individual or organisation combined with the severity of that outcome. For example:<\/p>\n
\n- High Impact High Likelihood \u2013 The governance and assessment of an AI system used in an autonomous vehicle would more likely require some form of intervention or governance oversight.<\/li>\n
- Low Impact Low Likelihood \u2013 An AI platform used by a business to optimise its inventory management system.<\/li>\n<\/ul>\n
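The likelihood-times-severity assessment described above can be expressed as a simple risk matrix. The following sketch is purely illustrative: the scoring levels, thresholds, and tier names are assumptions introduced for this example, not part of the submission.

```python
# Illustrative risk-matrix sketch. The Level scale, score thresholds, and
# tier labels are assumptions chosen for illustration only.
from enum import IntEnum


class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


def risk_tier(likelihood: Level, impact: Level) -> str:
    """Combine likelihood of harm with severity of outcome into a tier."""
    score = likelihood * impact
    if score >= 6:  # e.g. high impact, high likelihood
        return "intervention / governance oversight"
    if score >= 3:
        return "monitor and review"
    return "minimal intervention"


# Mapping the two examples above onto the sketch:
# autonomous vehicle AI: high impact, high likelihood
print(risk_tier(Level.HIGH, Level.HIGH))  # intervention / governance oversight
# inventory optimisation: low impact, low likelihood
print(risk_tier(Level.LOW, Level.LOW))    # minimal intervention
```

The point of the sketch is only that intervention scales with the combined score, so the same framework yields oversight for the autonomous vehicle case and a light touch for inventory optimisation.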
To give effect to the risk-based approach, it is essential to establish clear and precise criteria for high-risk AI systems, based on the probability of occurrence and the consequences for individual rights and freedoms, security, or safety, and on how such risks can be mitigated.<\/p>\n
Any risk assessment should include assessing the benefits of a proposed AI application, and the risks of not proceeding with its development or deployment. This is just as important as focusing on the harm that may result from proceeding, and ensures the use of an AI application is proportionate to the desired outcomes. In many cases, the high risks of an AI system processing sensitive personal data (e.g. in healthcare) will be outweighed by the benefits to individuals and society at large.<\/p>\n
To support risk mitigation in high-risk industrial AI applications, international standardisation bodies, driven by industry, should develop common standards for the use of AI in business. Policy makers should seek to advance innovation and promote a risk-based approach to AI that fosters trust, promotes harmonisation of standards, and supports global alignment on AI through the OECD and other international fora, and should work with our allies to advance AI research and development.<\/p>\n
The establishment of regulatory sandboxes is also essential to a risk-based approach. Here, AI applications can be tested in protected legal environments to develop innovations and regulations with high practical applicability. In addition, joint experimental spaces for AI applications with partner jurisdictions should be pursued.<\/p>\n
A co-ordinating body and governance structure<\/strong><\/h3>\nGiven the cross-sectoral impact of AI technology, co-ordination by government will be critical. This will require new ways of working, and potentially cabinet endorsement of a formal approach that all government agencies will be required to follow.<\/p>\n
At the centre of this would be a government body to ensure no duplication of effort by organisations and individuals, no conflict between policy and legislative proposals, and, where avoidance of overlap is not possible, clear guidance on which rules industry should follow.<\/p>\n
In acknowledgement of the independence of regulators, at a minimum the new body would ensure all regulators remained cognisant of approaches being taken by other regulators, alert them to potential areas of conflict, and inform them of how other regulators were applying the principles.<\/p>\n
This body could also be a source of AI understanding and expertise to support whole-of-government and regulator understanding of AI and emerging issues. This function should, however, be subordinate to the key function, which is to support co-ordination between policy makers and regulators.<\/p>\n
Without co-ordination there is a risk of overlapping regulations that will hamper the ability of companies to develop AI-based innovative business models and thereby remain competitive at a global scale.<\/p>\n
Regarding the creation of this body, we are agnostic as to whether a new statutory organisation is required or whether the function of the body could sit within an existing Agency. What is important is that the organisation is empowered to act as a central co-ordinating body across government.<\/p>\n
The role of Government as an exemplar<\/strong><\/h3>\nWe consider that there is also an opportunity for the Government to play a more active role in the use of AI in the Australian economy. Governments play a critical role in the safe and responsible economy-wide take up of AI as they:<\/p>\n
\n- set and enforce data privacy laws;<\/li>\n
- are responsible for large amounts of data;<\/li>\n
- are large service providers;<\/li>\n
- have IT budgets to deliver major projects; and<\/li>\n
- are directly accountable to their citizenry.<\/li>\n<\/ul>\n
Governments have an important role in influencing public attitudes to AI. If governments use AI ethically and responsibly, this will build public trust and acceptance of the use of AI across the economy. The Government must accelerate its use of AI in the operation of government and in service delivery.<\/p>\n
This is also critical to good policy making. Good policy about AI requires a sound understanding of the technology, and this will only come with greater use of AI within government. Greater use will allow the government to best appreciate how AI works, what the risks are, and how best to mitigate them.<\/p>\n
麻豆原创\u2019s Institute of Digital Government has undertaken research into the use of AI within Government and its challenges. The research has indicated that there are four major challenges facing the government\u2019s use of AI.<\/p>\n
\n- AI is resource intensive<\/strong> \u2013 To effectively utilise AI requires people with the skills to develop and use AI, the systems to undertake the analysis and the data sets on which to undertake the analysis \u2013 government agencies face challenges in all those matters.<\/li>\n
- The right operating model<\/strong> \u2013 Operationalising the use of AI requires new ways of operating: it requires a combination of data science skills and policy domain expertise in the design and operation of the AI. This requires new ways of working across typically separate areas within agencies.<\/li>\n
- Opaqueness of AI models \u2013 <\/strong>Many advanced AI systems cannot readily be understood by humans, the so-called explainability problem. This is particularly a challenge for governments due to the potential ramifications of government decisions on an individual and the importance for governments of being able to demonstrate fairness and transparency in decision making.<\/li>\n
- Cultural issues <\/strong>\u2013 Perceived and actual impact on employment, challenges to established wisdom and concerns around the public\u2019s acceptance of AI within service delivery are all elements that impact on AI take up within the public sector.<\/li>\n<\/ol>\n
AI is ultimately a tool, and its use should result in better business and government operations. If it fails to do so, its adoption within government will face challenges.<\/p>\n
Currently, the adoption of AI technologies by businesses and governments is still in its early stages. Use cases are still being developed, and the application of the technology to existing processes is relatively new. This can understandably lead to reticence within government to undertake AI projects where the benefits are unclear or the capabilities are unproven. As with all new technologies, making a business case for their use when the results are unclear or unproven is challenging. Equally problematic are the high expectations of the insights AI can provide where there is a lack of clear understanding of what it can deliver.<\/p>\n
However, all these challenges are surmountable, and the Australian Government, in partnership with industry, academia, and community organisations, must embrace AI technologies. We are seeing positive developments across the public sector to help with the growth of AI use. For example, the DATA Act will empower access to greater data sets across the economy. There is also a growing willingness by political leadership to ask how AI technologies can help make the public sector more efficient and deliver better customer services.<\/p>\n
ATTACHMENT A<\/strong><\/h3>\nResponses to Specific Questions Raised by the Paper<\/h3>\n
Potential Gaps in approaches<\/strong><\/p>\nWhat potential risks from AI are not covered by Australia\u2019s existing regulatory approaches? Do you have suggestions for possible regulatory action to mitigate these risks? <\/em><\/strong><\/p>\nWe support the existing approach taken by the Government, which has leveraged existing frameworks and avoided duplicating or creating conflicting requirements, while promoting trust by enforcing existing legislation. For example, the Privacy Act already protects the use of citizen data in large data sets (an essential element for AI), and measures are being considered to improve transparency on the use of Automated Decision Making.<\/p>\n
To the extent it is not already being done, there is value in reviewing existing regulatory frameworks as they relate to consumer, data protection and privacy, corporate, criminal, online safety, administrative, copyright, and intellectual property laws against the potential harms from AI to determine whether they are fit for purpose. However, this should be done with:<\/p>\n
\n- an understanding of the AI supply chain;<\/li>\n
- the application of a risk-based methodology (see section Regulatory intervention should take a risk based approach<\/em>);<\/li>\n
- reference to a common set of principles (see section Ensure a common set of principles for government AI and the application of regulation<\/em>); and<\/li>\n
- co-ordination across government (see A co-ordinating body and governance structure<\/em>).<\/li>\n<\/ul>\n
Otherwise, there is a risk that over-prescriptive rules will hinder investments in AI as well as the use of innovative AI solutions.<\/p>\n
Are there any further non-regulatory initiatives the Australian Government could implement to support responsible AI practices in Australia? Please describe these and their benefits or impacts. <\/em><\/strong><\/p>\nWe consider that there is an opportunity for the Government to play a more active role in the use of AI in the Australian economy. Governments play a critical role in the safe and responsible economy-wide take up of AI as they:<\/p>\n
\n- set and enforce data privacy laws;<\/li>\n
- are responsible for large amounts of data;<\/li>\n
- are large service providers;<\/li>\n
- have IT budgets to deliver major projects; and<\/li>\n
- are directly accountable to their citizenry.<\/li>\n<\/ul>\n
Governments have an important role in influencing public attitudes to AI. If governments use AI ethically and responsibly, this will build public trust and acceptance of the use of AI across the economy. The Government must accelerate its use of AI in the operation of government and in service delivery.<\/p>\n
Do you have suggestions on coordination of AI governance across government? Please outline the goals that any coordination mechanisms could achieve and how they could influence the development and uptake of AI in Australia. <\/em><\/strong><\/p>\nGiven the cross-sectoral impact of AI technology, co-ordination by government will be critical. This will require new ways of working, and potentially cabinet endorsement of a formal approach that all government agencies will be required to follow.<\/p>\n
At the centre of this would be a government body to ensure no duplication of effort by organisations and individuals, no conflict between policy and legislative proposals, and, where avoidance of overlap is not possible, clear guidance on which rules industry should follow.<\/p>\n
In acknowledgement of the independence of regulators, at a minimum the new body would ensure all regulators remained cognisant of approaches being taken by other regulators, alert them to potential areas of conflict, and inform them of how other regulators were applying the principles.<\/p>\n
This body could also be a source of AI understanding and expertise to support whole-of-government and regulator understanding of AI and early identification of cross-sectoral emerging issues. This function should, however, be subordinate to the key function, which is to support co-ordination between policy makers and regulators.<\/p>\n
Without co-ordination there is a risk of overlapping regulations that will hamper the ability of companies to develop AI-based innovative business models and thereby remain competitive at a global scale.<\/p>\n
We are agnostic as to whether a new statutory organisation is required or whether the function of the body could sit within an existing Agency. What is important is that the organisation is empowered to act as a central co-ordinating body across government.<\/p>\n
Responses suitable for Australia <\/strong><\/p>\nAre there any governance measures being taken or considered by other countries (including any not discussed in this paper) that are relevant, adaptable and desirable for Australia? <\/em><\/strong><\/p>\n麻豆原创 is supportive of the pro-innovation governance approach of the UK government. This is based on sector-specific regulation, linked to cross-sectoral principles that can be tailored to each sector, backed by a central agency responsible for regulatory co-ordination and the assessment of any cross-cutting AI risks.<\/p>\n
Target Areas<\/strong><\/p>\nShould different approaches apply to public and private sector use of AI technologies? If so, how should the approaches differ? <\/em><\/strong><\/p>\nWhat constitutes responsible use of AI technology should not differ depending on the type of organisation.\u00a0 However, Governments are in a unique position in their use of AI technologies given their role and ability to influence public perception of responsible use of AI. If governments use AI ethically and responsibly, this will build public trust and acceptance of the use of AI across the economy.<\/p>\n
How can the Australian Government further support responsible AI practices in its own agencies?\u00a0 <\/em><\/strong><\/p>\nAI offers huge potential benefits for the public sector \u2012 such as delivering enhanced citizen services, improving process efficiency, enabling future cities, and ensuring public security and safety. But there have been challenges in its adoption within government processes. We have collaborated with the University of Queensland to investigate how government organisations can break down the barriers for artificial intelligence adoption and value creation.<\/p>\n
The first stage of the research identifies the AI challenges for government and develops a high-level framework of capabilities, capacities and processes that are needed to create value from AI while minimising the risks.<\/p>\n
The second stage of the research addresses the specific challenge of \u2018explainability\u2019 of AI results, with an emphasis on aligning AI operations with stakeholder-specific perspectives and knowledge, thus delivering the intended value of the use of the technology.<\/p>\n
The third stage examines the specific capabilities required within the public sector to fully realise the value of AI across the organisation.<\/p>\n
The fourth stage analyses how to segment and manage the impact of integrating AI within existing work processes.<\/p>\n
Further detail and documentation on these stages is available at 麻豆原创\u2019s Institute for Digital Government.<\/p>\n
Given the importance of transparency across the AI lifecycle, please share your thoughts on: <\/em><\/strong><\/p>\n\n- where and when transparency will be most critical and valuable to mitigate potential AI risks and to improve public trust and confidence in AI? <\/em><\/strong><\/li>\n<\/ol>\n
There is a role for transparency across all steps of the AI lifecycle. At 麻豆原创 we define five steps in the AI development lifecycle.<\/p>\n
\n- Ideation<\/strong> \u2013 use case identification based on common domain and AI expertise<\/li>\n
- Validation<\/strong> \u2013 Experiments to assess feasibility<\/li>\n
- Realisation<\/strong> \u2013 Development of AI functions<\/li>\n
- Productisation<\/strong> \u2013 Integrating of AI functions into business processes<\/li>\n
- Operations<\/strong> \u2013 Delivery of embedded AI functions to customers<\/li>\n<\/ol>\n
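The five steps above form an ordered sequence through which an AI function progresses. A minimal sketch in Python follows; the stage names come from the list above, while the `next_stage` helper is a hypothetical convenience added for illustration.

```python
# Illustrative sketch of the five-step AI development lifecycle.
# Stage names mirror the list above; next_stage() is a hypothetical helper.
from enum import Enum
from typing import Optional


class LifecycleStage(Enum):
    IDEATION = 1        # use case identification
    VALIDATION = 2      # experiments to assess feasibility
    REALISATION = 3     # development of AI functions
    PRODUCTISATION = 4  # integration of AI functions into business processes
    OPERATIONS = 5      # delivery of embedded AI functions to customers


def next_stage(stage: LifecycleStage) -> Optional[LifecycleStage]:
    """Return the stage that follows, or None at the end of the lifecycle."""
    members = list(LifecycleStage)
    idx = members.index(stage)
    return members[idx + 1] if idx + 1 < len(members) else None


# Example: a use case that passes Validation moves on to Realisation.
print(next_stage(LifecycleStage.VALIDATION).name)  # REALISATION
```

Representing the lifecycle as an ordered sequence like this makes it straightforward to attach stage-specific obligations, such as the transparency principles discussed for the early phases.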
For the Ideation and Validation phases, the following transparency principles are applied:<\/p>\n