The University of Delaware embraces the responsible use of generative artificial intelligence (AI) tools with the University’s systems, assets, and information. Although these tools create many benefits for users, they also have limitations and create risks for the University. To enhance the benefits and mitigate the risks of these tools, the University has adopted the “Directive for Appropriate Use of Generative AI Tools and Services” (the “Directive”). The Directive explains which generative AI tools can be used with the University’s systems, assets, and information. All users of the University’s systems, assets, and information (faculty, staff, students, researchers, etc.) are expected to read, understand, and comply with the Directive when using generative AI tools. The Directive is available at [provide link].
The FAQs below provide some basic information about these tools and how they work so that users better appreciate the inherent risks associated with them. The FAQs also discuss important considerations for users of generative AI tools with the University’s systems, assets, and information.
I. Basic Information About Generative AI
What is Artificial Intelligence?
Artificial intelligence, or AI, involves using computers and software to perform tasks traditionally requiring human intelligence.
What is machine learning?
Machine learning (ML) is a subset of AI that focuses on enabling computers to extract insights from data. This is done by using algorithms to make informed decisions or predictions. ML differs from traditional programming, where computers perform tasks based on established instructions.
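The contrast between the two approaches can be illustrated with a minimal sketch (a hypothetical spam-filter example; the function names and data are ours, not from any particular library): a traditional program applies a rule the programmer wrote by hand, while a machine-learning approach derives its rule from labeled examples.

```python
# Traditional programming: the rule is written by hand.
def is_spam_rule_based(message):
    return "free money" in message.lower()

# Machine learning (minimal sketch): the "rule" is learned from labeled examples.
def train_word_scores(examples):
    """Count how often each word appears in spam vs. non-spam messages."""
    scores = {}
    for message, label in examples:  # label: 1 = spam, 0 = not spam
        for word in message.lower().split():
            scores.setdefault(word, [0, 0])
            scores[word][label] += 1
    return scores

def is_spam_learned(message, scores):
    """Predict spam if the message's words appeared more often in spam examples."""
    spam_votes = not_spam_votes = 0
    for word in message.lower().split():
        if word in scores:
            not_spam_count, spam_count = scores[word]
            spam_votes += spam_count
            not_spam_votes += not_spam_count
    return spam_votes > not_spam_votes

# Labeled training data stands in for the much larger datasets real systems use.
examples = [
    ("claim your free money now", 1),
    ("free prize waiting for you", 1),
    ("meeting moved to tuesday", 0),
    ("lunch on campus tomorrow", 0),
]
scores = train_word_scores(examples)
print(is_spam_learned("free money prize", scores))        # classified using learned word counts
print(is_spam_learned("campus meeting tomorrow", scores))
```

The learned version makes its decision from patterns in the training data rather than from an instruction anyone explicitly wrote, which is the essential difference the answer above describes.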
How is generative AI different from previous forms of artificial intelligence?
Generative AI brings a level of sophistication to the reasoning and computation behind its responses that is new to artificial intelligence. Its extensive computing capabilities and broad accessibility allow it to produce a wide variety of sophisticated outputs. For example, generative AI can turn data from a given source into a set of visualizations and insert each visualization into a slide of a PowerPoint presentation it creates based on your instructions.
How does generative AI work?
Each generative AI model is unique, although there are basic similarities in the way these tools are developed and trained. Current generative AI models are essentially sophisticated pattern recognition and imitation tools. They are typically trained on large amounts of sample data, often collected through web-scraping tools. Through various deep learning techniques, these tools are trained to recognize patterns in their datasets and use those patterns to create unique outputs. These outputs are often intended to realistically mimic elements of human communication like writing, music, art, or speech. As these models are trained, they typically receive human feedback on the plausibility and quality of their outputs. The training process is iterative, and these tools are often continuously updated and refined based on human feedback, but not necessarily on the accuracy of the output. So, for example, these models only know that 2+2=4 because so many people on the Internet said so, not because the models can calculate.
What is a large language model?
A large language model (LLM) is a subset of generative AI that uses algorithmic principles to understand the relationships between words and sentences. It can predict which words are likely to come next in a sentence based on its prior training data. Essentially, these large language models work by taking your input and generating the most likely next word or phrase. The tool then appends that generated word to your original input and repeats the process to build a full response.
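That “predict the next word, append it, repeat” loop can be sketched in a few lines. This toy is not how production LLMs are built (they use large neural networks trained on web-scale text), but it illustrates the same loop using simple word-pair counts from a tiny made-up corpus:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the web-scale data real LLMs use.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Learn which words follow each word (a simple bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(prompt_word, length=5):
    """Repeatedly predict the most common next word and append it."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:  # no known continuation; stop generating
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # produces a short, plausible-looking word sequence
```

Note that, like the answer above describes, the model "knows" only which words tend to follow which; it has no understanding of cats or mats, which is also why such systems can confidently produce fluent but wrong output.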
What is prompt engineering, and how does it enable Generative AI?
A prompt is the starting point for generating AI responses. Simply put, a prompt is a way of telling the AI what task you want it to perform, like asking a question or giving it an instruction. The more precise and specific your prompt, the better the AI can understand what you need and give you a relevant or appropriate answer or result. “Prompt engineering” is the practice of creating clear and useful instructions – or prompts – so the generative AI knows what you want it to do.
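For example (an illustrative prompt written for these FAQs, not taken from any particular tool’s documentation), compare a vague prompt with an engineered one:

```text
Vague prompt:    "Tell me about photosynthesis."

Specific prompt: "Explain photosynthesis in three short paragraphs for a
                 first-year biology student, and end with a two-sentence
                 summary I can use as a study note."
```

The second prompt tells the tool the topic, the audience, the length, and the desired format, so the response is far more likely to be usable as-is.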
How secure are generative AI tools?
Privacy concerns have been raised in relation to the data processing undertaken to train generative AI tools, as well as the (mis)information that such tools provide about individuals or groups. For those working with certain kinds of data, using third-party generative AI tools to process the data may come with additional privacy and security risks. For example, users working with data from human research participants must not submit any personal or identifying information, or any information that could be used to re-identify an individual or group of participants, to third-party generative AI tools, because the data could become available to others, constituting a major breach of research participant privacy. Users working with other types of confidential information, such as information disclosed as part of an industry partnership, must not submit these data to third-party generative AI tools because doing so could breach the non-disclosure terms of the agreement.
Are there other limitations associated with using generative AI tools?
In addition to the privacy and security concerns discussed above, generative AI content may be:
- Inaccurate, misleading, or entirely fabricated (sometimes called “hallucinations”);
- Biased or discriminatory; or
- In violation of the University’s student and faculty handbooks and policies or applicable laws, including laws governing intellectual property.
Generative AI tools also make it easier for malicious actors to create sophisticated scams at a far greater scale.
Users need to appreciate these limitations and risks when using generative AI tools.
II. Using Generative AI at the University of Delaware
Why is the Directive necessary?
The Directive provides guidelines for responsible use of generative AI tools to minimize the risks and limitations inherent with them. The Directive specifies (i) what generative AI tools may be used with the University’s systems and assets, and (ii) what University information may be used with generative AI tools. Placing parameters around the tools and the information reduces privacy, security, and other risks.
What generative AI tools can be used at the University?
Only generative AI tools that University of Delaware Information Technologies (UDIT) has vetted for, among other things, security and privacy controls may be used with the University’s systems, assets, and information, regardless of whether the generative AI tool is fee-based or free of charge. This restriction also applies regardless of whether the generative AI tools are standalone tools or generative AI features of other software products and services (e.g., a chatbot). Unless specified otherwise in these FAQs, the term “generative AI” includes both types of generative AI. A listing of approved standalone generative AI tools is available at the University Generative AI Services List.
May generative AI tools be used in my course?
Potentially. The University encourages the responsible use of generative AI for teaching and learning. The University also hopes to foster open and exploratory investigation and research into generative AI. In addition to reviewing the Directive for Appropriate Use of Generative AI Tools and Services, the University recommends reviewing the Guiding Considerations for Integrating AI within Teaching and Learning.
May generative AI tools be used for class assignments?
The ultimate decision on whether students can use generative AI tools rests with the instructor. Instructors are cautioned to be clear with students about expectations regarding the use of generative AI tools, consistent with the Center for Teaching and Assessment of Learning’s guidelines on coursework and assignments, available at https://ctal.udel.edu/advanced-automated-tools/.
May generative AI tools be used for University operations?
Potentially. Generative AI can certainly be used to augment or support multiple job responsibilities. The University supports creating efficiencies using AI and automation. The data used in conjunction with each task will often determine which service, if any, can be used. The Directive provides guidance on data privacy and security along with additional considerations, including bias and ethics.
How do generative AI tools get on the approved list?
Users who want to use a generative AI tool that has not yet been vetted must submit a Technology Request form before acquiring or using the tool with the University’s systems, assets, or information.1 UDIT will initiate a review process that includes other appropriate University departments and resources to validate the vendor’s product and to ensure the tool does not introduce undue risk to the University’s systems or assets or University information.
What constitutes the University’s information?
University information means any information within the University’s purview, including information that the University does not own but that is governed by laws and regulations to which the University is held accountable. University information includes all data that pertains to or supports the administration and missions, including research, of the University.
What University information can be used with vetted generative AI tools?
It depends on whether the generative AI tools are publicly available or UD-vetted.
What are publicly available generative AI tools?
Publicly available generative AI tools are those tools and services that generate output using publicly available information. Any data users provide or make accessible to the tools may be used to further train the tools and may become available to the public. For example, ChatGPT saves the information used to create an account, such as the user’s name, phone number, email, and (for accounts requiring a fee) the payment method. In addition, these tools may save a user’s entire conversation, including every prompt entered, to be used in the ongoing training of the tool. Users who enter sensitive, personal, or proprietary information are at risk of having this information made publicly available.
What University information can be used with publicly available generative AI tools?
When using publicly available generative AI tools, you should assume that everything you do will be available to others. The Directive therefore limits the use of the University’s information with publicly available generative AI systems to only University information that is classified as Level I data. Level I data is approved for distribution to all individuals and entities external to the University community, with no legal, regulatory, contractual, or funding agency restrictions on access or usage, so there is minimal risk of inappropriately disclosing sensitive, personal, or proprietary information.
Why can’t Level II or Level III data be used with publicly available generative AI?
Level II and Level III data include personally identifiable information and other confidential information that may be subject to privacy laws (such as FERPA, HIPAA, etc.) or otherwise prohibited from being made available to the public by law or contract. It includes personally identifiable information in employment and personnel records as well as intellectual property not otherwise publicly available.
Using Level II or Level III data with publicly available generative AI would expose the data and create privacy and security risks, including breaches reportable to regulatory authorities and the affected individuals. The Directive prohibits using Level II or Level III data with publicly available generative AI tools regardless of whether those tools are approved for use with University systems or assets.
Where can I find more information about the University’s classification of its information?
The University’s Information Classification Policy is available at the Data Classification Matrix.
When can Level II or Level III data be used with generative AI tools?
University information classified as Level II or Level III data may be used with UD-vetted generative AI.
What are UD-vetted generative AI tools?
UD-vetted generative AI tools are tools that UDIT and other departments have vetted and approved for use with the University’s systems, assets, and information as part of a Technology Request review. As part of the review process, the controls for any additional use of the information are examined to ensure the information is not, among other things, used to train the tool further or otherwise made available to anyone unauthorized to see it. The information remains contained within the University. The University typically enters into agreements with the vendors offering these tools that obligate those vendors to safeguard the privacy of the information.
Can the output of the generative AI tools be trusted?
Generative AI tools are not perfect. As discussed in Section I, generative AI tools sometimes “hallucinate,” producing output that is inaccurate, misleading, or incomplete. Generative AI may also perpetuate biases present in the data the tools are trained on, which may exacerbate existing inequities. The Directive requires users of generative AI tools to confirm the accuracy of the output using other sources and to check the output for bias. Checking for bias means determining whether the data used produce decisions that may result in a disparate impact on individuals based on classifications protected under applicable law, such as race, ethnicity, national origin, sexual orientation, or disability status. Any output indicative of potential bias should not be relied upon.
Should the use of generative AI tools be disclosed?
Users should be transparent about their use of generative AI tools and should not present the output as their own. Users who paraphrase or borrow ideas from the output of generative AI tools must confirm its accuracy and that they are not plagiarizing another party’s existing work product or violating another party’s intellectual property rights. Acknowledging the use of generative AI may not always involve a formal citation. For example, users could write a description of the tool they used and how they used it. Citation styles have also been developed for generative AI output, and some output can be linked and shared with others. Students should confer with faculty to determine whether a particular style of citation is preferred.
1 A Technology Request can be submitted at https://services.udel.edu/TDClient/32/Portal/Requests/ServiceDet?ID=232.