Canadian government introduces AI guidelines for public servants
In a move to ensure the responsible use of artificial intelligence (AI) within its ranks, the Canadian federal government has rolled out new guidelines for public servants. Treasury Board President Anita Anand announced the guidelines, emphasizing the government's commitment to monitoring AI usage to prevent potential issues such as bias or discrimination.
Anand, drawing from her personal experiences as a racialized woman, highlighted the importance of preventing bias in decision-making processes. "The goal of these guidelines is to ensure the responsible use of generative AI, and we will be vigilant in ensuring that bias doesn't inadvertently become a part of the process," she said.
Generative AI, as defined by the guidelines, refers to technology capable of producing content, including text, audio, code, videos, and images. This encompasses applications such as chatbots and the drafting of automated emails, briefing notes, and more. The guidelines stress the importance of transparency, especially when AI is used in public communications or to automate decisions about clients. Departments are advised to clearly identify content produced by AI, notify users when they're interacting with AI tools, and document decisions made with the assistance of AI.
Anand further clarified that these guidelines are not meant to replace existing legislation, such as the Privacy Act. Instead, they serve as an additional layer of guidance for employees. "The legal obligation remains with all employees, irrespective of these guidelines," she added.
The Treasury Board's guidelines also highlight potential risks associated with AI, including cybersecurity threats, bias, privacy violations, and the dissemination of inaccurate information. While the guidelines encourage federal institutions to explore the benefits of AI, they also advise caution.
Jennifer Carr, president of the Professional Institute of the Public Service of Canada (PIPSC), expressed concerns about the guidelines' vagueness. "The term 'be careful' is subjective. We need clear regulations that define the boundaries of AI usage," Carr commented. She also emphasized the importance of human intervention, especially in critical decision-making processes.
Chris Aylward, president of the Public Service Alliance of Canada (PSAC), echoed Carr's sentiments, advocating for worker consultation when introducing AI tools. "AI should enhance workers' jobs and conditions, not replace them," Aylward mentioned in a statement.
Federal institutions are encouraged to explore the potential of generative AI tools to enhance their operations and deliver better outcomes for Canadians. However, they must also be aware of the challenges and risks associated with these tools. The guidelines emphasize that AI should be used to assist employees, not replace them. Public servants are directed to refer to the ethical decision-making guide in section 6 of "Values Alive: A Discussion Guide to the 'Values and Ethics Code for the Public Sector'" when considering the deployment of AI tools.
The “FASTER” Principles:
To maintain public trust, the guidelines introduce the "FASTER" principles for AI usage:
Fair: Ensure AI tools don't amplify biases and adhere to human rights and fairness obligations.
Accountable: Take responsibility for AI-generated content, ensuring its accuracy and compliance.
Secure: Protect privacy and personal information, and ensure the infrastructure is appropriate for the data's security classification.
Transparent: Clearly identify AI-generated content and provide explanations for AI-supported decisions.
Educated: Understand the strengths and limitations of AI tools and learn to identify potential weaknesses in their outputs.
Relevant: Ensure AI tools meet user and organizational needs and are the right fit for the task at hand.
Public servants are advised to consult with relevant stakeholders, including legal services, privacy and security experts, and diversity and inclusion specialists, to ensure the responsible use of AI tools.
The guidelines emphasize the importance of privacy when using AI tools. Personal information should not be entered into an AI tool unless there's a contract with the supplier detailing how the information will be used and protected. All personal information used by or obtained through AI tools is subject to the Privacy Act and related policy instruments. This means that personal information must be accurate, up-to-date, and complete. Direct collection from the individual is often required, allowing them to be informed about the collection and usage of their data.
If AI tools create new personal information, it must also adhere to privacy requirements. Users should validate any AI-generated personal information for accuracy. Federal institutions must ensure that individuals can access and correct their personal information upon request.
Best Practices for AI Usage:
For all users of generative AI in federal institutions, the guidelines recommend:
Avoid entering sensitive or personal information into non-GC managed tools.
Understand how a system uses input data, such as whether it's used for training or accessible to providers.
Consult legal services and the departmental chief security officer (CSO) before using any system to process sensitive information.
Use infrastructure and tools that align with the security classification of the information.
Use the “opt-out” feature to ensure prompts aren't used for AI training.
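The first practice above, keeping sensitive and personal information out of prompts sent to non-GC managed tools, could be supported by a pre-submission redaction filter. The sketch below is purely illustrative and is not part of the guidelines; the patterns shown (emails, Canadian SINs, phone numbers) cover only a fraction of what real redaction would require.

```python
import re

# Hypothetical pre-submission filter: redact a few common personal
# identifiers before a prompt leaves a managed environment.
# These patterns are illustrative only, not an exhaustive PII scanner.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SIN": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{3}\b"),      # e.g. 123-456-789
    "PHONE": re.compile(r"\b\d{3}[- .]\d{3}[- .]\d{4}\b"),  # e.g. 613-555-0100
}

def redact(prompt: str) -> str:
    """Replace each pattern match with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@canada.ca or 613-555-0100 re: SIN 123-456-789"))
# Contact [EMAIL] or [PHONE] re: SIN [SIN]
```

Even with such a filter in place, the guidelines' other practices still apply: redaction does not substitute for consulting legal services or the CSO before processing sensitive information.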
Institutions deploying AI tools should also:
Regularly test systems to ensure they meet performance targets.
Plan independent audits for assessing AI systems against risk and impact frameworks.
The introduction of these guidelines underscores the Canadian government's commitment to ensuring that AI is used responsibly and ethically, safeguarding the interests of both public servants and the Canadian public.