Ethical and Responsible AI Use

AI tools can increase efficiency and expand access to support, but they also introduce real risks related to privacy, accuracy, bias, and misuse. This page provides practical guidance for faculty, staff, and students on using AI responsibly within MSU Denver’s expectations and professional standards.

Guiding Principles for AI Use

Responsible AI use is not a single rule but a set of guardrails that protect people, protect data, and preserve trust while allowing appropriate experimentation.

Accountability

People remain responsible for decisions, judgments, and outcomes. AI tools may support analysis, drafting, or exploration, but they do not replace human expertise, judgment, or responsibility for final decisions.

Transparency

Communicate when and how AI is being used when it affects others or when disclosure is required by a course, unit, or workflow. Transparency reduces ambiguity, builds trust, and helps others understand the role AI played in a decision or outcome.

Privacy and Data Protection

Protect student, employee, and institutional data by not entering sensitive or protected information into unapproved AI tools. When in doubt, treat information as protected and seek guidance before sharing it with an AI system.

Accuracy and Verification

Treat AI output as draft or advisory support rather than authoritative information. Verify factual claims, sources, citations, and data using trusted references before relying on or sharing AI-generated content.

Fairness and Equity

Consider whether AI use creates unequal advantage or reinforces existing bias. Review outputs for biased assumptions, stereotyping, or exclusion, and ensure expectations around AI use are applied consistently to all students or employees.

Accessibility

Ensure that AI-supported materials remain accessible and usable for all. This includes clear language, readable formats, and accommodations such as captions, transcripts, or alternative formats when appropriate.

Use Only What You Need

Only input the information required to complete a task. Avoid including names, identifiers, or extra details that are not necessary for the work.

Purpose Alignment

Use AI only when it clearly supports the purpose of teaching, learning, service, or university operations. Avoid using AI simply because it is available; its use should add value and align with institutional goals.

Practical Guidance

MSU Denver Data Classification Policy

Before using AI for data projects, review the MSU Denver Data Classification Policy.


Information Technology Services (ITS) Guidance

MSU Denver encourages faculty, staff, and students to explore emerging technologies, including generative AI platforms, to support research, streamline administrative work, and enhance teaching and learning. At the same time, AI use in university operations must be ethical, privacy-focused, and aligned with institutional security requirements, particularly when work involves student records, protected health information, employment data, or other personally identifiable information.

University data and systems are governed by legal and regulatory requirements that may apply depending on the data involved, including:

  • Family Educational Rights and Privacy Act (FERPA)

  • Health Insurance Portability and Accountability Act (HIPAA)

  • Payment Card Industry Data Security Standard (PCI DSS)

  • General Data Protection Regulation (EU GDPR)

  • Colorado Privacy Act (CPA)

  • California Consumer Privacy Act (CCPA)

  • Gramm-Leach-Bliley Act (GLBA)

Appropriate Administrative and Operational Use Cases

AI can be appropriate for university work when it does not involve protected data and when outputs are reviewed by a human before use or publication. Common low-risk uses include:

  • Drafting general communications and improving clarity or tone

  • Summarizing non-sensitive notes or documents you provide

  • Creating outlines, agendas, checklists, and first drafts of internal materials

  • Brainstorming process improvements or planning options

  • Generating examples, FAQs, or training support that does not include protected information

Faculty and staff remain responsible for accuracy, policy compliance, and final decisions.

Data handling expectations

Administrative and operational work frequently involves data that is not appropriate for consumer AI tools. Before using AI for university work:

  • Review the MSU Denver Data Classification Policy.

  • Identify what data is involved and how it is classified (Public, Official Use Only, or Confidential).

  • Do not enter protected student or employee information into unapproved AI tools.

  • Use institutionally approved tools and approved environments when working with university information.

  • Share only what is necessary to complete the task, and remove identifiers when possible.

If you are unsure whether information is protected, treat it as protected and seek guidance before using AI.

Microsoft Copilot

Microsoft Copilot is MSU Denver’s approved AI chat utility for the web and is available in the MSU Denver computing environment. For eligible users who sign in with their MSU Denver account, Copilot provides commercial data protection designed for organizational use. This protection is intended to reduce the risk of exposing confidential information that may be vulnerable when using public AI tools.

Things to Know About Copilot

  • Copilot is available through the browser and supports conversational prompting similar to other AI chat tools like ChatGPT, Gemini, or Claude.

  • Because MSU Denver is an enterprise client, its Copilot access comes with commercial data protection.

  • Users should verify they are signed in with their MSU Denver account (which requires two-factor authentication) and are using the protected version of the service.

Requests to Implement New AI Solutions

If an individual or department wants to introduce a new AI platform or AI-enabled product into the MSU Denver technology environment, it must be evaluated for privacy, security, and compliance before adoption.

To initiate review:

  • Email [email protected], briefly explaining the tool’s functionality and why its functionality is unique/not already available via an MSU Denver-approved tool.
  • Following a response, submit a ticket to ITS with the tool name, intended use case, and the type of data involved.
  • ITS will follow up via ticket communication and may schedule a discussion depending on complexity.

  • Do not adopt or deploy new AI tools for institutional use before review, especially when student, employee, health, or financial data may be involved.


Reporting Risks or Incidents

Report suspected data exposure, misuse, or AI-related security concerns through established ITS channels. If sensitive or protected data may have been disclosed, timely reporting is essential so the appropriate security and privacy response can occur.

Data Usage

Data protection is a core component of responsible AI use. Before entering any information into an AI tool, classify the data and apply the handling rules that match the classification.

MSU Denver classifies university data into three categories: Public, Official Use Only, and Confidential. All members of the university community are responsible for protecting the confidentiality, integrity, and availability of university data, regardless of medium or format.

Public Data

Definition: Public data may be open to the general public and has no legal restrictions on access or usage.

Examples: publicly posted press releases, class schedules, public-facing university publications.

AI Guidance: Public information is the lowest risk, but you should still verify accuracy and avoid misrepresenting official positions.

Official Use Only Data

Definition: Official Use Only data must be guarded due to proprietary, ethical, or privacy considerations and protected from unauthorized access, modification, transmission, storage, or other use. It is restricted to members of the university community with a legitimate purpose.

Examples: employment data, internal directories, vendor contracts, proposals, or pricing information not designated as public.

AI Guidance: Do not enter Official Use Only data into public, consumer AI tools. Use only approved systems for university work and limit data sharing to the minimum necessary.

Confidential Data

Definition: Confidential data is protected by statutes, regulations, university policies, or contractual language, and may be disclosed only on a need-to-know basis.

Examples: medical records, student records and other non-public student data, Social Security numbers, and personal financial information.

AI Guidance: Do not enter Confidential data into generative AI tools unless the system and use case are explicitly approved for Confidential data. The policy requires strong protections for storage and sharing, including secure transfer systems and restrictions on standard email transmission.

Hard Boundaries

  • DO NOT ENTER Confidential data into unapproved AI tools.
  • DO NOT ENTER Official Use Only data into consumer AI tools or any tool that has not been approved for university operations.

When in doubt, treat the data as protected and seek guidance before using AI.
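The hard boundaries above amount to a simple decision rule, sketched below. All names and parameters here are illustrative assumptions, not part of any MSU Denver system; a tool's approval status always comes from ITS, not from code.

```python
from enum import Enum

class Classification(Enum):
    PUBLIC = "Public"
    OFFICIAL_USE_ONLY = "Official Use Only"
    CONFIDENTIAL = "Confidential"

def ai_use_allowed(classification,
                   tool_approved_for_university_work=False,
                   tool_approved_for_confidential=False):
    """Mirror the hard boundaries: Public data carries the least risk;
    Official Use Only data requires an approved tool; Confidential data
    requires a system explicitly approved for Confidential data."""
    if classification is Classification.PUBLIC:
        return True
    if classification is Classification.OFFICIAL_USE_ONLY:
        return tool_approved_for_university_work
    return tool_approved_for_confidential

# A consumer AI tool (no approvals) may only receive Public data.
print(ai_use_allowed(Classification.OFFICIAL_USE_ONLY))  # False
```

Note that the default answer for anything other than Public data is "no": permission must be granted explicitly, which matches the guidance to treat uncertain data as protected.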


If You Are Unsure, or If an Incident Occurs

  • If you are unsure how to classify data, consult your supervisor or email [email protected], and use the Data Classification Policy as the reference point.
  • Report security risks or suspected violations via an ITS ticket.
  • If Confidential data may have been lost or disclosed, the policy indicates that the Chief Information Security Officer must be notified in a timely manner.