AI tools can increase efficiency and expand access to support, but they also introduce real risks related to privacy, accuracy, bias, and misuse. This page provides practical guidance for faculty, staff, and students on using AI responsibly within MSU Denver’s expectations and professional standards.
Responsible AI use is not a single rule but a set of guardrails that protect people, protect data, and preserve trust while allowing appropriate experimentation.
People remain responsible for decisions, judgments, and outcomes. AI tools may support analysis, drafting, or exploration, but they do not replace human expertise, judgment, or responsibility for final decisions.
Communicate when and how AI is being used when it affects others or when disclosure is required by a course, unit, or workflow. Transparency reduces ambiguity, builds trust, and helps others understand the role AI played in a decision or outcome.
Protect student, employee, and institutional data by not entering sensitive or protected information into unapproved AI tools. When in doubt, treat information as protected and seek guidance before sharing it with an AI system.
Treat AI output as draft or advisory support rather than authoritative information. Verify factual claims, sources, citations, and data using trusted references before relying on or sharing AI-generated content.
Consider whether AI use creates unequal advantage or reinforces existing bias. Review outputs for biased assumptions, stereotyping, or exclusion, and ensure expectations around AI use are applied consistently to all students or employees.
Ensure that AI-supported materials remain accessible and usable for all. This includes clear language, readable formats, and accommodations such as captions, transcripts, or alternative formats when appropriate.
Only input the information required to complete a task. Avoid including names, identifiers, or extra details that are not necessary for the work.
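As a concrete illustration of this data-minimization habit, here is a minimal Python sketch that strips common identifier patterns from text before it goes into a prompt. The patterns and placeholder labels are hypothetical examples; best-effort redaction like this does not make protected data safe to enter into an unapproved tool.

```python
import re

# Hypothetical, best-effort redaction patterns. A pass like this reduces
# incidental identifiers in otherwise non-sensitive text; it does NOT
# make protected data safe to enter into an unapproved AI tool.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN-REDACTED]"),          # SSN-like patterns
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL-REDACTED]"),  # email addresses
    (re.compile(r"\b\d{9}\b"), "[ID-REDACTED]"),                       # 9-digit ID numbers
]

def minimize(text: str) -> str:
    """Remove obvious identifiers before drafting a prompt."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(minimize("Follow up with jdoe@msudenver.edu about ID 900123456."))
# -> Follow up with [EMAIL-REDACTED] about ID [ID-REDACTED].
```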
Use AI only when it clearly supports the purpose of teaching, learning, service, or university operations. Avoid using AI simply because it is available; its use should add value and align with institutional goals.
In courses and in professional work, disclosure builds trust and reduces ambiguity. Follow course expectations, unit expectations, and any disclosure requirements in assignment instructions, workflows, or publications.
AI tools can produce confident output that is incorrect or incomplete. Verify factual claims, quotations, data, and citations using trusted sources. Do not assume that a citation provided by AI is real or accurate.
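One lightweight verification habit is confirming that any DOI an AI tool supplies resolves to a real record. The sketch below queries the public Crossref REST API (api.crossref.org); it is an illustrative check only, and a resolving DOI merely proves the record exists, not that the title, authors, or findings match what the AI claimed.

```python
import urllib.error
import urllib.request
from urllib.parse import quote

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI.

    A True result only means the DOI is real; still compare the actual
    title and authors against what the AI tool asserted.
    """
    url = f"https://api.crossref.org/works/{quote(doi)}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: Crossref has no such DOI

print(doi_exists("10.1038/nature14539"))     # a real DOI -> True
print(doi_exists("10.1234/not-a-real-doi"))  # fabricated -> False
```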
Do not enter student records, employee information, health information, or other confidential data into AI tools unless the tool and use case are explicitly approved for that data type. If you are unsure, treat the information as protected and ask for guidance.
AI should not be the sole basis for decisions that affect grades, employment actions, discipline, health, legal matters, or financial outcomes. AI may support analysis or drafting, but final decisions require human judgment.
AI outputs can reflect and reinforce bias. Review content for fairness, stereotyping, and unintended exclusion. Ensure any AI-supported content meets accessibility expectations (clear structure, readable language, captions or transcripts when relevant).
AI detection tools are not reliable evidence on their own. Do not treat detector scores as proof of misconduct. Use clear expectations, process evidence, and follow-up verification methods instead.
AI tools are trained on large datasets and may reproduce patterns from existing work. Use AI to support learning and drafting, but maintain appropriate citation practices and do not present AI-generated work as original scholarship or creative work without proper attribution when required.
Teaching, advising, supervision, and support work depend on trust and context. AI can assist with preparation and drafting, but it should not replace human engagement where relationships are part of the work.
If a task involves protected data, high-stakes decisions, or unclear expectations, pause and seek guidance rather than guessing.
Before using AI for data projects, review the MSU Denver Data Classification Policy.
Access the Policy
MSU Denver encourages faculty, staff, and students to explore emerging technologies, including generative AI platforms, to support research, streamline administrative work, and enhance teaching and learning. At the same time, AI use in university operations must be ethical, privacy-focused, and aligned with institutional security requirements, particularly when work involves student records, protected health information, employment data, or other personally identifiable information.
University data and systems are governed by legal and regulatory requirements that may apply depending on the data involved, including:
Family Educational Rights and Privacy Act (FERPA)
Health Insurance Portability and Accountability Act (HIPAA)
Payment Card Industry Data Security Standard (PCI DSS)
General Data Protection Regulation (EU GDPR)
Colorado Privacy Act (CPA)
California Consumer Privacy Act (CCPA)
Gramm-Leach-Bliley Act (GLBA)
AI can be appropriate for university work when it does not involve protected data and when outputs are reviewed by a human before use or publication. Common low-risk uses include:
Drafting general communications and improving clarity or tone
Summarizing non-sensitive notes or documents you provide
Creating outlines, agendas, checklists, and first drafts of internal materials
Brainstorming process improvements or planning options
Generating examples, FAQs, or training support that does not include protected information
Faculty and staff remain responsible for accuracy, policy compliance, and final decisions.
Administrative and operational work frequently involves data that is not appropriate for consumer AI tools. Before using AI for university work:
Identify what data is involved and how it is classified (Public, Official Use Only, or Confidential).
Do not enter protected student or employee information into unapproved AI tools.
Use institutionally approved tools and approved environments when working with university information.
Share only what is necessary to complete the task, and remove identifiers when possible.
If you are unsure whether information is protected, treat it as protected and seek guidance before using AI. A short sketch of this classification-first check follows this list.
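To make the checklist concrete, here is a minimal sketch of a classification-first gate that refuses AI use unless a tool is approved for the data involved. The classification labels come from the Data Classification Policy below; the tool names and the approval registry are hypothetical placeholders, not an actual MSU Denver system.

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = "Public"
    OFFICIAL_USE_ONLY = "Official Use Only"
    CONFIDENTIAL = "Confidential"

# Hypothetical approval registry; a real one would be maintained by ITS.
# Confidential data is absent everywhere: it requires explicit approval
# for the specific tool and use case.
APPROVED_TOOLS = {
    "approved-enterprise-tool": {Classification.PUBLIC, Classification.OFFICIAL_USE_ONLY},
    "public-consumer-tool": {Classification.PUBLIC},
}

def may_use_ai(tool: str, classification: Classification) -> bool:
    """Allow AI use only when the tool is approved for the data's classification."""
    return classification in APPROVED_TOOLS.get(tool, set())

assert may_use_ai("approved-enterprise-tool", Classification.OFFICIAL_USE_ONLY)
assert not may_use_ai("public-consumer-tool", Classification.CONFIDENTIAL)
assert not may_use_ai("unknown-tool", Classification.PUBLIC)  # unknown tools are denied
```

Defaulting unknown tools to an empty approval set mirrors the guidance above: when in doubt, treat the data as protected and do not proceed.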
Microsoft Copilot is MSU Denver’s approved AI chat utility for the web and is available in the MSU Denver computing environment. For eligible users who sign in with their MSU Denver account, Copilot provides commercial data protection designed for organizational use. This protection is intended to reduce the risk of exposing confidential information that may be vulnerable when using public AI tools.
Copilot is available through the browser and supports conversational prompting, similar to other AI chat tools such as ChatGPT, Gemini, or Claude.
Because MSU Denver is an enterprise client, its Copilot access includes commercial data protection.
Users should verify that they are signed in with their MSU Denver account (which requires two-factor authentication) and that they are using the protected version of the service.
If an individual or department wants to introduce a new AI platform or AI-enabled product into the MSU Denver technology environment, it must be evaluated for privacy, security, and compliance before adoption.
To initiate review, submit a request through established ITS channels. ITS will follow up via ticket communication and may schedule a discussion depending on complexity.
Do not adopt or deploy new AI tools for institutional use before review, especially when student, employee, health, or financial data may be involved.
Report suspected data exposure, misuse, or AI-related security concerns through established ITS channels. If sensitive or protected data may have been disclosed, timely reporting is essential so the appropriate security and privacy response can occur.
Data protection is a core component of responsible AI use. Before entering any information into an AI tool, classify the data and apply the handling rules that match the classification.
MSU Denver classifies university data into three categories: Public, Official Use Only, and Confidential. All members of the university community are responsible for protecting the confidentiality, integrity, and availability of university data, regardless of medium or format.
| Classification | Public Data | Official Use Only Data | Confidential Data |
| --- | --- | --- | --- |
| Definition | Public data may be open to the general public and has no legal restrictions on access or usage. | Official Use Only data must be guarded due to proprietary, ethical, or privacy considerations and protected from unauthorized access, modification, transmission, storage, or other use. It is restricted to members of the university community with a legitimate purpose. | Confidential data is protected by statutes, regulations, university policies, or contractual language, and may be disclosed only on a need-to-know basis. |
| Examples | Publicly posted press releases, class schedules, public-facing university publications | Employment data, internal directories, vendor contracts, proposals, or pricing information not designated as public | Medical records, student records and other non-public student data, Social Security numbers, and personal financial information |
| AI Guidance | Public information is the lowest risk, but you should still verify accuracy and avoid misrepresenting official positions. | Do not enter Official Use Only data into public, consumer AI tools. Use only approved systems for university work and limit data sharing to the minimum necessary. | Do not enter Confidential data into generative AI tools unless the system and use case are explicitly approved for Confidential data. The policy requires strong protections for storage and sharing, including secure transfer systems and restrictions on standard email transmission. |
When in doubt, treat the data as protected and seek guidance before using AI.