AI technology usage guidelines and procedures
The University of Nevada recognizes the transformative potential of emerging generative Artificial Intelligence (AI) tools (e.g., OpenAI's ChatGPT, Microsoft Copilot), which represent a significant advancement in how daily tasks are performed across personal, educational, and professional domains. As AI tools continue to see rapid adoption, the full extent of their impacts remains uncertain. With this in mind, we advocate fostering curiosity and responsible use of AI tools within the University community. These guidelines and procedures are designed to align with existing University policies, ensuring a steadfast commitment to safety, security, and academic integrity across the institution.
Purpose
These guidelines and procedures apply to all students, employees, contractors, and third parties using AI platforms on behalf of the University. They provide a framework that encourages responsible and ethical use of AI across the institution while urging caution around information security, data privacy, copyright, and academic integrity.
Key considerations for AI usage:
Absence of Formal Campus Vendor: The University currently maintains no contracts or internal agreements with individual AI vendors. Consequently, any use of AI software is undertaken at the user's own discretion and risk. While the University allows access to certain AI tools or resources, it does not endorse or guarantee their performance, reliability, or outcomes. If an AI resource is needed, campus software licensing provides access to Microsoft Copilot.
Data Protection Imperatives: As with any third-party software or tools not directly supplied by the University, it is imperative to adhere to all recommended data protection protocols. This includes ensuring compliance with relevant privacy laws, safeguarding sensitive information, employing encryption or other security measures to mitigate risks associated with data breaches or unauthorized access. For questions about the University’s data classifications and current policies and procedures, please contact IT Compliance at itcompliance@unr.edu.
Potential for Inaccuracy and Bias: AI tools can produce responses that are incomplete, erroneous, or influenced by biases present in the underlying data or algorithms. It is therefore incumbent upon users to exercise diligence and critical judgment when interpreting AI-generated output. Human oversight and review are indispensable for identifying and correcting any inaccuracies or biases that arise.
Scope
These guidelines and procedures are applicable to all users utilizing AI tools on computing resources owned or managed by the University.
Guidelines and procedures
- Confidential Data Protection: Safeguarding both the University's data and individual user information is paramount. It is strongly advised not to share sensitive, confidential, or regulated data on AI tools, as the confidentiality of data may be compromised based on the tool’s data-sharing practices. Refer to Section 3 in Information Security Policies and Procedures (NetID authentication required) for the University of Nevada, Reno's data policy.
- Responsibility in Content Generation: Users are required to carefully review AI-generated content to ensure accuracy and to prevent the dissemination of misleading, inaccurate, biased, or copyrighted material. A thorough review process for generated content is essential to uphold content quality standards.
- Risk Assessment Protocol: Before integrating an AI tool into daily tasks, users should initiate a vendor risk management assessment through the Office of Information Technology’s (OIT) Compliance team. This step ensures an evaluation of potential risks associated with the AI tool in review.
- Academic Integrity/Standards Compliance: Adherence to academic standards policies (UAM 6,502) is crucial. Users are encouraged to familiarize themselves with relevant policies and disclose the use of AI in academic settings as appropriate.
- Security Awareness: Exercise caution when utilizing AI tools, particularly with personal information. Be vigilant against potential phishing schemes that may exploit passwords, emails, and other sensitive information. Maintain a heightened awareness of security risks.
- Authentic Recommendations: When choosing an AI tool, opt for reputable, well-established providers to mitigate potential risks. Third-party access to AI tools, especially through widely used online channels (e.g., Discord community servers), may pose increased data security threats. Exercise discretion when choosing providers to safeguard data integrity and security.
Responsibilities
- Users are responsible for conducting themselves ethically and responsibly when utilizing AI tools on the University's computing resources.
- Supervisors are responsible for ensuring that their respective teams are aware of and compliant with these guidelines. They should also facilitate access to training and educational resources as needed.
- OIT will provide guidance, support, and resources to assist users in understanding and complying with guidelines.
Security
Users with access to data on the University's systems that is classified as confidential or regulated must take all necessary precautions to prevent unauthorized access to this information. Examples of confidential or regulated data include, but are not limited to, data protected under the Family Educational Rights and Privacy Act (FERPA), the Gramm-Leach-Bliley Act (GLBA), the Health Insurance Portability and Accountability Act (HIPAA), and Payment Card Industry (PCI) standards, as well as proprietary and confidential research data.
Privacy
While the University seeks to provide a reasonable level of privacy and does not generally monitor or limit the content of information transmitted on the campus network, it reserves the right to access and review such information under certain conditions. These include investigating performance deviations and system problems (with reasonable cause), determining whether an individual is in violation of University policy, and accessing information as authorized by other University or NSHE policy or as necessary to ensure that the University is not subject to claims of institutional misconduct.
Review and revision
These guidelines and procedures will be periodically reviewed and revised by the appropriate governing bodies within the University. Feedback from stakeholders, advancements in AI research, changes in regulatory requirements, and emerging best practices will inform the review process.