Purpose
The University supports the responsible use of Generative Artificial Intelligence tools and services (collectively, “Generative AI”). However, these tools have notable limitations and present new risks that must be taken into consideration when using these technologies. Among these risks are the following:
- If Generative AI is given access to personal information, the technology may not respect the privacy rights of individuals or handle that information in the manner required for compliance with applicable data protection laws, which are constantly changing.
- If Generative AI is given access to confidential information or trade secrets, the University may lose its intellectual property (“IP”) rights to that information, and the information may be disclosed to unauthorized third parties through their independent use of the Generative AI technology.
- Generative AI outputs may violate the IP rights of others and might not themselves be protected by IP laws.
- Generative AI outputs may be factually inaccurate, and the University may be exposed to liability if those outputs are relied upon without proper review.
- Generative AI may produce decisions that are biased, discriminatory, inconsistent with University policies, or in violation of applicable law.
Any use of Generative AI at the University must account for these inherent limitations and risks. This Directive specifies the requirements for the appropriate use of Generative AI at the University.
Scope
This Directive applies to the use of Generative AI by faculty, staff, students, affiliates, and other University stakeholders (collectively, “University Users”) in the performance of their functions for, at, or on behalf of the University that requires the use of University Information, defined below, or of the University’s systems or assets. This Directive applies to all Generative AI use, including use of free or no-charge Generative AI tools and services. It applies both to standalone Generative AI tools and to Generative AI features of other software products and services (e.g., a chatbot). Unless specified otherwise in this Directive, the term “Generative AI” includes both types of Generative AI.
Definitions
- “Generative AI” includes any machine-based tool designed to consider user questions, prompts, and other inputs (e.g., text, images, videos) to generate a human-like output (e.g., a response to a question, a written document, software code, or a product design). Generative AI tools and services create new data or content based on existing information. Generative AI may be Publicly Available or UD-vetted, as defined below.
- “Publicly Available Generative AI” means those tools and services that generate output using publicly available information. Only University Information classified as Level I, along with other publicly available information, may be used with Publicly Available Generative AI.
- “UD-vetted Generative AI” means those tools and services that the University has vetted and makes available to University Users through a contractual arrangement with the vendor or otherwise. UD-vetted Generative AI is under the control of the University and is unique to the University’s operations. UD-vetted Generative AI can be trained using University Information classified as Level II or Level III, defined below, as specified in this Directive.
- “University Information” means any information within the University’s purview, including information that the University may not own but that is governed by laws and regulations to which the University is held accountable. University Information encompasses all data that pertains to or supports the administration and missions, including research, of the University.
- “Level I, II, and III Data” have the meanings specified at https://www1.udel.edu/security/framework/classification.html.
Direction
The University has numerous policies that safeguard University Information and the University’s systems and assets. These policies include, but are not limited to, the Information Classification Policy [1], the Data Governance Policy [2], and the Information Security Policy [3]. These policies, in conjunction with this Directive, apply whenever University Users use Generative AI at, for, or on behalf of the University with University Information or the University’s systems or assets. University Users are required to read, understand, and comply with these policies and to raise any questions regarding them, as they pertain to the use of Generative AI or otherwise, with Information Technologies. University Users are expected to use only approved Generative AI and to limit the information used to train those tools and services as specified in this Directive. University Users who do not follow this Directive may have the non-conforming Generative AI deactivated.
Procedures
Approved Generative AI
- Only Generative AI that UD Information Technologies (UDIT) has vetted and approved, in consultation with other relevant University departments as appropriate, may be used with University systems or assets or with University Information (“UD-vetted Generative AI”). This vetting and approval process ensures that Generative AI procured on behalf of the University has the appropriate privacy and security protections and provides the best use of University funds [4]. This process also ensures that any new Generative AI features of software or services will not be enabled without being vetted.
- If a University User wants to acquire Generative AI that has not been previously reviewed, the User must submit a Technology Request to UDIT [5] before acquiring it. UDIT will route the request to other appropriate University departments and resources to assist in validating the vendor’s product and to verify that the proposed contract does not introduce undue risk to the University. Absent approval, the Generative AI may not be used with University systems or assets or University Information. A list of approved standalone Generative AI tools will also be available at https://services.udel.edu/TDClient/32/Portal/KB/ArticleDet?ID=1150.
- UDIT will also examine the default settings associated with the Generative AI to determine whether they could expose proprietary or sensitive information to unauthorized persons and, if so, will specify the appropriate settings for the use of the Generative AI on the approved list.
- Generative AI that is still in beta testing may not be used with University systems or assets or with University Information until assessed for risk through the University’s Technology Request process.
Use of University Information with Generative AI
- University Information classified as Level II or Level III may only be used with UD-vetted Generative AI that has been assessed and approved for such use by UDIT as part of a Technology Request review. Level II and Level III data include:
  - Information subject to the Family Educational Rights and Privacy Act (“FERPA”), including directory information, work produced by students to satisfy course requirements, student names and grades, and student disability-related information [6];
  - Health information protected by the Health Insurance Portability and Accountability Act (“HIPAA”) [7];
  - Nonpublic Personal Information subject to the Gramm-Leach-Bliley Act (“GLBA”) [8];
  - Human resources information, such as salary and employee benefits information;
  - Personally identifiable information, including information subject to the General Data Protection Regulation, the Personal Information Protection Law, or any other international laws or regulations governing the processing of personally identifiable information [9];
  - Information contained in personnel records [10];
  - Intellectual property not publicly available [11];
  - Material under confidential review, including research papers and funding proposals;
  - Information subject to export control; and
  - Sensitive Data, including passwords, financial account information, and other such information as specified in the Information Classification Policy, as it may be modified from time to time.
- Only University Information classified as Level I, in conjunction with other publicly available information, may be used with Publicly Available Generative AI. Using Level II or Level III University Information with Publicly Available Generative AI is prohibited (a minimal illustrative sketch of this classification rule follows below).
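For illustration only, the following minimal Python sketch expresses the classification rule above as a pre-submission check. The level names, function name, and approval flag are assumptions of this sketch, not a University-provided tool or API, and the sketch does not replace the UDIT vetting and Technology Request requirements described in this Directive.

```python
# Illustrative sketch only (not a University tool): a hypothetical pre-submission
# check mirroring the data classification rule in this Directive.
from enum import IntEnum


class DataLevel(IntEnum):
    LEVEL_I = 1    # publicly available / low-risk University Information
    LEVEL_II = 2   # e.g., FERPA records, human resources information
    LEVEL_III = 3  # e.g., HIPAA-protected or GLBA-covered information


def may_submit(data_level: DataLevel, ud_vetted: bool, approved_for_level: bool) -> bool:
    """Return True only if the classification rule permits the submission.

    `approved_for_level` stands in for UDIT approval of the tool for Level II/III
    use as part of a Technology Request review (an assumption of this sketch).
    """
    if data_level == DataLevel.LEVEL_I:
        return True  # Level I may be used with Publicly Available or UD-vetted Generative AI
    # Level II/III: only UD-vetted Generative AI approved for such use
    return ud_vetted and approved_for_level


# Example: student grades (Level II) must never go to a Publicly Available tool.
assert may_submit(DataLevel.LEVEL_II, ud_vetted=False, approved_for_level=False) is False
assert may_submit(DataLevel.LEVEL_I, ud_vetted=False, approved_for_level=False) is True
```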
Output Review Requirements
- University Users must confirm the accuracy of information generated by Generative AI using other sources [12].
- University Users must check the output of Generative AI for bias by determining whether the data input into, and the output of, Generative AI tools produce decisions that may result in a disparate impact on individuals based on their protected classifications under applicable law, such as race, ethnicity, national origin, age, sexual orientation, or disability status (see the illustrative screening sketch after this list). Any output that is indicative of a potential bias should not be relied upon.
- University Users must be transparent about and disclose any use of Generative AI to produce written materials or other work products and must not hold out output generated by Generative AI tools as their own. If a work product includes any quote, paraphrase, or borrowed idea from the output of Generative AI tools or services, University Users must confirm that the output is accurate, does not plagiarize another party’s existing work, and does not otherwise violate another party’s intellectual property rights.
- University Users must not use Generative AI tools to generate malicious content, such as malware, viruses, worms, and Trojan horses, that may have the ability to circumvent access control measures put in place by the University, or any other third-party entity, to prevent unauthorized access to their respective networks. Any code generated using Generative AI must be reviewed by qualified personnel to verify that it does not contain any malicious elements.
- University Users must not use Generative AI to generate content that facilitates sexual harassment, stalking, or sexual exploitation [13], or that helps others break federal, state, or local laws; institutional policies, rules, or guidelines; or licensing agreements or contracts.
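Where Generative AI output informs decisions about individuals, a simple quantitative screen can supplement the bias review above. The sketch below is a minimal, purely illustrative example of one common heuristic, the “four-fifths rule,” which compares selection rates across groups. The function names, sample data, and 0.8 threshold are assumptions of this sketch rather than a University-mandated test, and any flagged result should prompt human review rather than an automatic conclusion.

```python
# Illustrative sketch only: a "four-fifths rule" screen for potential disparate
# impact in decisions assisted by Generative AI. Not a legal determination.
from collections import defaultdict


def selection_rates(decisions):
    """decisions: iterable of (group_label, selected_bool). Returns selection rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {group: selected / total for group, (selected, total) in counts.items()}


def flags_disparate_impact(decisions, threshold=0.8):
    """Flag if any group's selection rate falls below `threshold` times the highest rate."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return any(rate < threshold * highest for rate in rates.values())


# Hypothetical example: AI-assisted screening outcomes by applicant group.
sample = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
       + [("group_b", True)] * 20 + [("group_b", False)] * 80
print(flags_disparate_impact(sample))  # True: 0.20 < 0.8 * 0.40, so review is warranted
```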
Academic Integrity
Expectations regarding the use of Generative AI should be made clear to students and should be consistent with the guidelines the Center for Teaching and Assessment of Learning adopts in “Considerations for Using and Addressing Advanced Automated Tools in Coursework and Assignments,” available at https://ctal.udel.edu/advanced-automated-tools/.
Remain Alert for and Report Suspicious Activity
Generative AI has made it easier for malicious actors to create sophisticated phishing emails and to produce video or audio intended to convincingly mimic a person’s voice or physical appearance without their consent (“deepfakes”). University Users who suspect any such malicious activity must report it to Information Technologies in accordance with the University’s Information Security Event Reporting policy [14]. Reported events will be investigated as set forth in the University’s Data Security Incident Response Plan.
[1] Available at https://sites.udel.edu/generalcounsel/policies/information-classification-policy/.
[2] Available at https://sites.udel.edu/generalcounsel/policies/data-governance-policy/.
[3] Available at https://sites.udel.edu/generalcounsel/policies/information-security-policy/.
[4] The list of approved Generative AI will be posted at https://services.udel.edu/TDClient/32/Portal/KB/ArticleDet?ID=1150.
[5] A Technology Request can be submitted at https://services.udel.edu/TDClient/32/Portal/Requests/ServiceDet?ID=232.
[6] See https://sites.udel.edu/generalcounsel/policies/the-family-educational-rights-and-privacy-act-ferpa-policy/.
[7] See https://sites.udel.edu/generalcounsel/policies/hipaa-compliance/.
[8] See https://sites.udel.edu/generalcounsel/policies/gramm-leach-bliley-act-information-security-program/.
[9] See https://sites.udel.edu/generalcounsel/policies/personally-identifiable-information-privacy-policy/; https://sites.udel.edu/generalcounsel/policies/general-data-protection-regulation-compliance-policy/; https://sites.udel.edu/generalcounsel/policies/personal-non-public-information-pnpi-policy/.
[10] See https://sites.udel.edu/generalcounsel/policies/access-to-personnel-records/.
[11] Entering copyrighted material into a Generative AI tool or service may effectively result in the creation of a digital copy, which is a copyright violation. Feeding copyrighted material into a Generative AI tool or service could “train” the AI to output works that violate the intellectual property rights of the original creator. In addition, entering research into a Generative AI tool or service could constitute premature disclosure, compromising invention patentability. Users may not use AI tools or services to infringe copyright or other intellectual property rights. See https://sites.udel.edu/generalcounsel/policies/intellectual-property-protection-ownership-and-commercialization/; https://sites.udel.edu/generalcounsel/policies/policy-for-copyright-and-fair-use-in-instruction/.
[12] It is possible for AI-generated content to be inaccurate, biased, or entirely fabricated (sometimes called “hallucinations”).
[13] See https://sites.udel.edu/generalcounsel/policies/non-discrimination-sexual-misconduct-ant-title-ix-policy/.
[14] See https://sites.udel.edu/generalcounsel/policies/information-security-event-reporting/.