Legal implications of artificial intelligence in higher education explored in law firm webinar

In a webinar titled “Cutting Through the Noise,” David Copping and Ethan Ezra, lawyers at the law firm Farrer & Co, discussed the legal implications of artificial intelligence (AI) in the higher education sector, emphasizing the breadth and variety of AI as a technological phenomenon.

Gaps in the legal framework

There are evident gaps in certain areas, such as the legal treatment of deepfakes or the creation and deployment of AI tools posing existential risks, and policy areas that require further development, such as plagiarism. Nonetheless, an established legal framework already exists that allows many questions about AI usage to be addressed without new laws or regulations.

There is also a substantial disparity in how students and academics perceive generative AI: students largely favor its use, while academics predominantly see it as a risky vehicle for plagiarism. The challenges associated with student use of generative AI include the misrepresentation of academic abilities and the potential infringement of third-party intellectual property. Given the technology's prevalence and popularity among students, the webinar posited that enforcing a complete ban on its use may prove challenging.

To effectively manage and regulate the use of generative AI, Copping and Ezra recommended that universities consider the following strategies:

  1. Internal and external liaison: Establish an oversight committee or board to discuss and evaluate the risks associated with generative AI. Stay informed about the perspectives and guidelines provided by other universities, industry bodies, and regulatory groups. Collaborate with relevant entities to address common challenges.
  2. Stay on top of risk mitigation initiatives: Keep abreast of initiatives aimed at countering the risks associated with generative AI. For instance, consider adopting stress tests, like the one proposed by Imperial College Business School’s IDEA Lab, to assess the vulnerability of academic modules to generative AI tools.
  3. Implement an AI use policy: Develop a comprehensive policy that outlines the scope of AI covered, conditions for student use, alignment with existing plagiarism policies, methods for verifying student learning (e.g., requiring signed declarations and full source citations), restrictions on the disclosure of proprietary information, and guidelines for faculty use of (generative) AI.

Patents

On the question of AI owning patents, the UK Intellectual Property Office (UKIPO) rejected two patent applications filed in 2018, invoking sections 7 and 13 of the Patents Act 1977, which stipulate the involvement of “a person” in the patent process. The applications named an AI machine called DABUS as the inventor; DABUS was deemed not to qualify as “a person,” raising questions about its eligibility as an inventor or patent owner. The case is presently before the Supreme Court, and the outcome is pending. Notably, a UKIPO consultation on the matter concluded that no changes to the existing law were necessary.

Copyright

Under the Copyright, Designs and Patents Act 1988 (CDPA), the “author” of a work is the person who creates it. For computer-generated works, section 9(3) CDPA stipulates that “the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.” The question of authorship for AI-generated art remains open to interpretation, with potential candidates including the creator of the AI system and/or the person instructing the system to generate the artwork.
The current copyright framework in the UK and other jurisdictions is deemed applicable to AI-generated work. Ownership is less clear-cut, but the general principle is that the author of a work is its owner, or, in employment scenarios, that the employer may claim ownership. AI platforms take varied approaches; OpenAI, for example, assigns ownership of the generated output to the user.

Infringement

According to Copping and Ezra, the use of AI poses a risk of intellectual property (IP) infringement, particularly because AI tools like ChatGPT are “trained” on extensive third-party datasets from the internet to generate output material.
While exceptions to infringement exist, such as the text and data mining exception for non-commercial research, these are less likely to apply to larger-scale data extraction in a commercial context.

Data protection

Concerning data protection, the associated risks involve users entering personal information into AI tools, as many tools retain or make onward use of input data. Additionally, there are risks related to AI tools targeting publicly available personal data, such as profiles and registries.
Higher education institutions should address these issues by conducting suitable data protection impact assessments, implementing AI policies that regulate permissible input data (e.g., limiting personal details), and employing technical measures to deter data scrapers.
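On the last point, one common technical measure is to refuse requests from crawlers that harvest content for AI training. A minimal sketch, assuming a Flask application and an illustrative, non-exhaustive list of crawler user agents (check current crawler documentation before relying on specific names):

```python
# Minimal sketch of one technical measure against data scrapers: rejecting
# requests whose User-Agent matches known AI crawlers. Assumes a Flask app;
# the bot list below is illustrative and non-exhaustive.
from flask import Flask, request, abort

app = Flask(__name__)

# Example user-agent substrings associated with AI/data crawlers.
BLOCKED_AGENTS = ("GPTBot", "CCBot")

@app.before_request
def block_ai_crawlers():
    agent = request.headers.get("User-Agent", "")
    if any(bot.lower() in agent.lower() for bot in BLOCKED_AGENTS):
        abort(403)  # refuse the request outright

@app.route("/")
def index():
    return "Staff and student directory"  # e.g., a page holding personal data
```

Declaring the same bots in robots.txt is a complementary first step; user-agent filtering only deters crawlers that identify themselves honestly.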

Bias

In terms of bias, numerous studies have highlighted various social and political biases inherent in AI tools like ChatGPT. Higher education institutions should carefully consider the types of work and tasks for which they would permit AI to be used.
For example, queries on sensitive political issues, student admissions decisions, and student welfare matters are areas where biases may have significant implications and should be scrutinized.

AI’s growth in the next five years

The symbiosis between artificial intelligence and education is on an upward trajectory, with the forthcoming years promising significant expansion of AI applications within the sector, according to a report by market research firm Technavio.

In the report, titled “Artificial Intelligence Market in the Education Sector 2023-2027,” Technavio noted that this growth underscores the rising importance of integrating AI into the educational landscape to boost learning outcomes. Over the next five years, the research found, the education sector is poised for a transformative leap fueled by AI: the AI-in-education market is projected to grow by $1,100.07 million by 2027, setting the stage for revolutionary changes. These estimates correspond to a compound annual growth rate (CAGR) of 41.14 percent over the forecast period, reflecting the rapidly accelerating pace of AI adoption in the sector.
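For context, the two headline figures can be related with the standard CAGR formula. A minimal sketch in Python, assuming a five-year compounding window (the base market size is not stated in the material cited here, so it is backed out purely for illustration):

```python
# Minimal sketch relating the report's two headline figures: a projected
# net increase of $1,100.07M over the forecast window and a 41.14% CAGR.
# The five-year window and the implied base are ASSUMPTIONS for illustration,
# not figures taken from the Technavio report.

cagr = 0.4114        # compound annual growth rate, as reported
years = 5            # assumed length of the forecast window
increase = 1100.07   # projected net increase, in $ millions, as reported

growth_factor = (1 + cagr) ** years              # total multiple over the window
implied_base = increase / (growth_factor - 1)    # base size consistent with both figures

print(f"Total growth multiple: {growth_factor:.2f}x")
print(f"Implied base market size: ${implied_base:,.2f}M")
```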

AI guidance toolkit for global schools

A coalition of education and technology organizations, led by TeachAI, has unveiled the AI Guidance for Schools Toolkit, aimed at equipping educational institutions with the framework and resources needed to responsibly integrate artificial intelligence into the classroom.
The initiative is backed by industry leaders including Code.org, ETS, the International Society for Technology in Education, Khan Academy, and the World Economic Forum.

Nathan Yasis

Nathan studied information technology and secondary education in college. He dabbled in and taught creative writing and research to high school students for three years before settling in as a digital journalist.
