ATP Submits Comments to the Office of the Privacy Commissioner of Canada

In early March, the Association of Test Publishers submitted comments expressing the views of the testing industry, especially those of its Canadian members, in response to a request from the Office of the Privacy Commissioner of Canada (“OPC”) for feedback on its recommendations for the regulation of artificial intelligence (“AI”).

In the comments, ATP expressed its appreciation for the opportunity provided by the OPC to give feedback on its proposals for the enhancement of the Personal Information Protection and Electronic Documents Act (“PIPEDA”).

“We strongly believe that there are specific circumstances commonly found in the testing industry where the use of AI is both appropriate and necessary, where its use is justified when balanced against the rights of individual test takers, and where this technology should be allowed within the existing constraints of PIPEDA. Therefore, we requested that the OPC carefully craft proposed language consistent with these explanations, clarifying how those proposals should be written for incorporation into PIPEDA,” observed Hazel Wheldon, CEO of Canadian-based MHS and an ATP Board member.

[ATP members can read the full comments as submitted by logging into the ATP website and clicking on Legal/Legislative Updates under Quick Links. The full comments address eleven conceptual points raised by the Commissioner and specific questions asked under each concept.  A summary of ATP's comments on the conceptual issues is below:]

The comments gave this introductory overview:

            Many testing events occur in today’s society, greatly benefiting the public in general, along with test users and individual test takers.  Canadian citizens are no exception to the vast – and growing – use of assessments by individuals to help themselves advance personally or professionally.  For that reason, it is vitally important that testing programs are able to ensure their tests are fair to all test takers.  To do so, testing organizations today use AI in the development of software for the delivery of assessments, as well as in the development of scoring rubrics.  Testing organizations also rely on AI to develop items for use in assessments.  Further, AI currently plays a role in assisting testing organizations and end users of assessments for a variety of purposes, including but not limited to: (1) assisting employers in identifying candidates who meet their job-related needs; (2) providing doctors with data for diagnosing and treating physical diseases and mental disorders; (3) enabling certification bodies to ascertain whether an individual has mastered specific competencies; and (4) performing test security analyses to detect cheating by test takers.  Thus, it is clear that diverse uses of AI have already become an indispensable element of the assessment process.

ATP General Counsel Alan J. Thiemann further explained, “We encouraged the OPC to balance the privacy rights of every individual with the rights of the test sponsor and the testing service organization that provides the testing services, as required by PIPEDA.”

In order to best inform the OPC and provide an introduction to the issues specific to testing, the ATP comments addressed the following proposals and questions set forth by the OPC in its request.  Thiemann clarified that these positions had been developed with input from ATP’s Canadian Members:

Proposal 1: Incorporate a definition of AI within the law that would serve to clarify which legal rules would apply only to it, while other rules would apply to all processing, including AI

The ATP firmly agreed with the OPC’s own acknowledgement that “PIPEDA is technologically neutral and is a law of general application.”  As such, ATP submitted that it would be inappropriate to add definitions expressly relating to AI, automated decision-making, machine learning, automated processing, or profiling.  However, ATP noted a need for specific guidance covering certain uses of AI, now and in the future, which would clarify when and how such rules would apply.

Proposal 2: Adopt a rights-based approach in the law, whereby data protection principles are implemented as a means to protect a broader right to privacy—recognized as a fundamental human right and as foundational to the exercise of other human rights

As cited by the OPC, the 2019 Resolution of Canada’s Federal, Provincial and Territorial Information and Privacy Commissioners has already determined that AI technologies must be “designed, developed and used in respect of fundamental human rights, by ensuring protection of privacy principles such as transparency, accountability, and fairness.”  The ATP had no disagreement with this statement.

However, in order to ensure such protection, the ATP contended that PIPEDA should be read from a rights-based perspective that recognizes privacy in its proper breadth and scope, yet balances those rights with the rights of businesses under PIPEDA, and uses that balanced perspective to provide direction on how its provisions should be interpreted.  Such an approach would clarify rights in PIPEDA and ensure the use of AI receives a proper focus.

Proposal 3: Create a right in the law to object to automated decision-making and not to be subject to decisions based solely on automated processing, subject to certain exceptions

Article 22 of the GDPR grants individuals the right not to be subject to decisions based solely on automated processing, including profiling, except when an automated decision is necessary for a contract, is authorized by law, or is based on explicit consent.  However, the GDPR also recognizes that where significant automated decisions are taken on a legitimate basis for processing, or where a public interest (or official authority) exists, those grounds can override the rights of the individual, even though the individual still has the right to obtain human intervention, to contest the decision, and to express his or her point of view (see Articles 21 and 22(3)).

ATP wrote that PIPEDA should be required to balance those same rights.  Accordingly, the controller/business should be allowed to continue its use of AI by showing that there is a compelling reason that overrides the individual’s right, including where the business can demonstrate that the processing is necessary for the establishment, exercise, or defense of legal claims.

For these reasons, ATP supported adoption of limited rights associated with restrictions on the use of AI through a right to object in PIPEDA, subject to a balancing of rights, parallel to those provided under the GDPR, and commented that such a generic right and its balancing against the business’s rights to protect its IPR could be added to PIPEDA.

Proposal 4: Provide individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing

The ATP pointed out that even the GDPR recognizes that a business may have intellectual property rights (“IPR”) that come into play when an individual seeks information about the collection and use of personal information.  See the Article 29 Working Party Guidelines on Article 15 (IPR and other intellectual property (e.g., trade secrets) that are central to the controller’s business model must be respected).  This fact is equally relevant when it comes to AI technologies.  Thus, while an individual is entitled to have notice about the collection and use of their personal information, including the right to know if AI is used and for what purpose, that right does NOT give an individual the right to access the IPR of the business.

ATP further commented that, regarding an individual’s right to obtain information about the algorithmic logic used for AI, the Article 29 Working Party has expressed its opinion that a controller only needs to provide “the rationale behind, or the criteria relied on” in reaching a decision, without disclosing the entirety of the scientific basis, which is usually part of the business’s IPR.  Article 29 Working Party, ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’, WP251rev.01, adopted on October 3, 2017, as last revised and adopted on February 6, 2018.  See http://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=50083.  This position is consistent with the ATP’s view that disclosure of AI must not jeopardize any IPR.

ATP concluded that PIPEDA should include only a right to explanation that would provide an individual interacting with an AI system with the basic reasoning underlying any automated processing of their data, and the consequences of such reasoning for their rights and interests, subject to the controller’s right not to be required to disclose its IPR.  This would also help to satisfy PIPEDA’s existing obligations to provide individuals with rights to access and correct their information.

To assist in achieving this balanced outcome, the ATP suggested that modifications to PIPEDA include a requirement for a Privacy Impact Assessment (PIA), including an assessment of the impacts of AI processing on privacy and human rights.  The published content would be based on a minimum set of requirements developed in consultation with the OPC.  ATP regarded this result as similar in effect to the proposed California legislation (SB 1241) on the use of AI in hiring.

Proposal 5: Require the application of Privacy by Design and Human Rights by Design in all phases of processing, including data collection

The GDPR, specifically Recital 78 and Article 25, requires a business to meet compliance standards in its treatment of personal information by designing technology, services, and products to achieve maximum compliance and security (termed “privacy and data protection by design”), as well as requiring that the strictest privacy settings on products and services be set by default, without any action by the consumer (termed “privacy by default”).

The privacy by design framework was first published in 2009 and then adopted by the International Assembly of Privacy Commissioners and Data Protection Authorities in 2010.  Privacy by design is also encouraged indirectly by other global privacy laws.  In general, privacy by design is a methodology, not an absolute requirement (e.g., under the GDPR, privacy by design is qualified by what available technology is considered “state of the art,” by the cost of implementation, and by the nature, scope, context, and purposes of processing, as well as by the risks for the individuals whose personal information is being collected and processed).  With this caveat, then, ATP stated that privacy by design principles should be incorporated into PIPEDA in a manner that is consistent with the GDPR.

Proposal 6: Make compliance with purpose specification and data minimization principles in the AI context both realistic and effective

ATP commented that it is true that data minimization is generally at odds with the underlying tenet of AI, which is predicated on having a maximum amount of data available from which to train the AI engine and then utilizing the power of an AI system to analyze data.  Similarly, the notion of purpose specification arguably creates a practical problem for an AI system, where the purpose(s) of data usage may not be known (or appreciated) until after huge amounts of data are collected and analyzed.  As noted in the Information Accountability Foundation paper cited by the OPC, because “the insights data hold are not revealed until the data are analyzed, consent to processing cannot be obtained based on an accurately described purpose.”  In essence, the ATP concluded that restricting the use of AI in advance would require the business to know in advance what is actually going to be determined – which is impossible.  Thus, restricting collection in advance would limit the very personal information that is appropriate and needed for AI purpose(s).

Proposal 7: Include in the law alternative grounds for processing and solutions to protect privacy when obtaining meaningful consent is not practicable

Although the ATP concurred that affirmative consent is the primary basis for the collection and use of personal information under the GDPR and PIPEDA, we pointed out that there is a growing sense that the consent model may not be viable in all situations.  In the ATP’s opinion, that concern now includes uses of AI.  This result is due in part to the inability to obtain meaningful consent when businesses are unable to inform individuals, in sufficient detail, of the purposes for which their information is being collected, used, or disclosed.

The OPC’s Report on Consent (see fn. 34) acknowledges that alternate grounds to consent may be acceptable in certain circumstances, specifically when obtaining meaningful consent is not practicable and certain preconditions are met.  The ATP noted we are unsure at this stage of the discussion about AI whether meaningful consent should be required at all – the experience of testing organizations so far seems to indicate that, in many instances, consent adds no value in determining the balance between the privacy rights of the consumer and the legitimate interests of the testing organization.

The testing industry has realized that legitimate interest often provides the most practical approach to the collection and use of personal information, because it establishes the proper framework from which to evaluate the balance of rights and interests that exists between test takers (i.e., consumers) and testing organizations.  Based on this experience, the ATP submitted that the OPC should develop its proposals based on the GDPR approach to legitimate interests.  We reserved further comments until specific language is provided.

Proposal 8: Establish rules that allow for flexibility in using information that has been rendered non-identifiable, while ensuring there are enhanced measures to protect against re-identification

De-identification is achieved through processes that remove information that can identify individuals from a dataset, so that the risks of re-identification and disclosure are reduced to low levels.  The ATP submitted that re-identification of such information is only a concern if re-identification is actually possible by the controller/business; if such re-identification is purely theoretical (e.g., the database of tokenized personal information is owned and under the exclusive control of another entity), then the controller/business has no realistic opportunity to conduct the re-identification.

Proposal 9: Require organizations to ensure data and algorithmic traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle

The ATP contended that principles of accountability, accuracy, transparency, and data minimization (as well as access and correction) support some limited tracing of the source of AI system data.  This result is especially appropriate for AI data that is NOT collected directly from individuals, but is supplied from other sources and combined into the AI analysis.  ATP agreed with the OPC, citing the OECD Principles on Artificial Intelligence: “AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.”  And as referenced, the IEEE has stated that “algorithmic traceability can provide insights on what computations led to questionable or dangerous behaviors.”  However, ATP commented, it is completely premature for the OPC to rely on proposed legislation in the US, the Algorithmic Accountability Act (AAA), which is not likely to be enacted, at least in its current form.

Proposal 10: Mandate demonstrable accountability for the development and implementation of AI processing

ATP commented that it may be able to support the OPC’s recommendation that a more robust accountability principle related to AI be included in Principle 4.1 of PIPEDA, depending upon how this notion is implemented.  A requirement that a business maintain a record of its internal evidence demonstrating AI accountability for the personal information under its control could be appropriate if exercised through an independent audit requirement or as part of an enforcement activity; even then, a business should only be required to conduct an audit of its AI system once after it is fully operational and following any major changes to the system.  However, ATP stated that it would NOT support such a requirement if it means that evidence would have to be provided on demand to every individual consumer; that sort of requirement is likely to encourage many thousands of requests that would be extremely burdensome on businesses and would likely create the potential for individuals to seek access to the business’s IPR.

As for shifting liability, ATP commented that we fail to understand what the OPC is recommending: all violations of PIPEDA, including of any enhanced accountability requirement, are charged against the covered business, not against a machine.  In the ATP’s opinion, it would be totally inappropriate to fine individual employees of the business when they are acting within the scope of their employment.  Similarly, any regulatory incentives that are developed would apply to the business (e.g., for adopting demonstrable accountability measures).

Proposal 11: Empower the OPC to issue binding orders and financial penalties to organizations for non-compliance with the law

The ATP commented that we understand the concerns of the OPC with the privacy risks that could be posed by AI systems, particularly if enforcement against organizations found to be non-compliant is not meaningful.  However, ATP disagreed with the OPC that Canada has “fallen significantly behind” other jurisdictions in terms of enforcement, especially if that statement is intended to incorporate the need for “enforcement mechanisms that ensure individuals have access to a quick and effective remedy for the protection of their rights….”  We (ATP) assume the OPC is referencing the use of private right of action provisions to replace or augment government enforcement efforts.

Further, ATP commented that we do not support the OPC’s recommendation that, with respect to AI, it should be given authority to make binding orders and impose consequential penalties for non-compliance with the law.  Again, this proposal would treat AI very differently from any other method or process for the collection and use of personal information; that result is not warranted.

In conclusion, the ATP encouraged the OPC to align its final proposals related to the use of AI with the GDPR.  “We believe it is critical that AI not be treated any differently than other decisions made without the benefit of those technologies.  Otherwise, we are concerned that innovation will be stifled,” Thiemann noted.

[ATP Members can download the complete comments, including answers to specific questions, as submitted by logging in to the ATP website at www.testpublishers.org and clicking on Legal/Legislative Updates under Quick Links.]