Since Fall 2023, MSU Denver has expected that every course explicitly address generative AI use in the syllabus or other course information. Clear syllabus language reduces ambiguity, supports academic integrity, and helps students focus on learning rather than guessing what is allowed.
Syllabus language is necessary, but it is not sufficient. Regardless of which approach you choose, it is best practice to restate AI expectations in assignment instructions, especially for major assessments and any activity where permitted and prohibited uses differ.
The Generative AI Taskforce recommends a simple spectrum of approaches to student AI use. Any option can be rigorous and fair when it is communicated clearly and reinforced at the assignment level.
For deeper insight and examples to support your decision, see the Generative AI Syllabus Language and Considerations document.
Allowed: Under this approach, the use of generative AI is generally allowed or encouraged across most coursework. For example, students may use AI tools to support their work, provided they clearly disclose and explain how those tools were used, and assignments continue to assess student reasoning, decision-making, and learning rather than surface-level polish or presentation.
Mixed: Under this approach, different parts of the course may forbid, allow, or encourage the use of generative AI. For example, students may be permitted to use AI for certain assignments but not others, depending on the task, with assignment instructions clearly stating what is allowed and requiring disclosure whenever AI use is permitted.
Restricted: Under this approach, the use of generative AI is generally forbidden unless an assignment explicitly requires it. For example, students may only use AI when directed to do so, with any unapproved use treated as an academic integrity concern, and assessments designed to emphasize visible thinking, process, and individual student work.
Below are copy/paste options adapted from the Taskforce “syllabus spectrum.” Each option includes a short version and a more detailed version. Instructors should modify language to fit the course, discipline, and specific assignment design.
Short Version: Students may use generative AI in any assignment in this course. All use of generative AI must be cited or explained.
Detailed Version: Students are free (and even encouraged) to use generative AI in all aspects of the course. Generative AI can be used when and wherever the student feels it is appropriate to enhance their learning. If generative AI is used as part of work that will be submitted, its use should be briefly but accurately explained in a submission statement.
Short Version: Using generative AI is encouraged in some aspects of this course, allowed in some, and prohibited in others. The instructions included with every assignment will address the use of generative AI. When AI is used, you will be asked to briefly but accurately explain how in a submission statement.
Detailed Version: The use of generative AI is encouraged in some aspects of this course, allowed in some, and prohibited in others. Students are encouraged to explore tools that support their learning, including generative AI. If generative AI is used as part of work that will be submitted, its use should be briefly but accurately explained in a submission statement.
In creating submissions, generative AI may be used in specific parts of the coursework, for example:
The following assignments should be completed without the use of generative AI, for example:
The use of generative AI in these cases will be treated as a violation of academic integrity.
Short Version: Students may only use generative AI in this course when an assignment explicitly calls for it. Use in any other assignment will be treated as academic dishonesty.
Detailed Version: The use of generative AI is restricted to rare and specific circumstances in this course. Students should not use such tools unless they are specifically called for in an activity, assignment, or assessment. Any use of generative AI in the creation of submitted work for this course will be treated as a violation of academic integrity.
Even when a syllabus policy is clear, students benefit from assignment-level guidance about generative AI use. Reiterating expectations within individual assignments is especially important in mixed approaches and helps reduce confusion and improve consistency.
For each assignment, instructors should clearly indicate whether generative AI use is allowed, allowed with disclosure, restricted, or not permitted. This information should appear directly in the assignment instructions.
When AI use is permitted, assignment instructions should clarify:
How AI tools may be used (e.g., brainstorming, outlining, or revision)
Which uses are not allowed, such as generating final analysis, answering quiz questions, or completing personal reflections
In assignments where AI use is allowed, requiring a brief disclosure supports transparency and accountability. Students should be reminded that they remain responsible for:
Accuracy and reasoning
Verifying facts and sources
Making final decisions about their work
A disclosure typically includes:
Providing assignment-level AI expectations supports fair evaluation and helps ensure assessments reflect student learning in an AI-present environment.
From a practical standpoint, it is not always possible to draw a sharp line between generative AI and other digital tools that exist along a continuum of automation, machine learning, and AI. Tools such as voice-to-text, spellcheck, and grammar assistance may incorporate AI features without functioning as full generative systems.
Because of this ambiguity, instructors are encouraged to clearly define what counts as generative AI for their course and to provide students with practical guidance on what is permitted. This is especially important when accessibility or assistive technologies may be in use.
A useful rule of thumb is to focus on impact rather than technology: if a tool meaningfully generates, rewrites, or substitutes for student thinking or content creation, it should be treated as generative AI for the purposes of the course.
When defining AI expectations, instructors should consider the following:
Some assistive technologies used by students may include AI features. Course language should account for accessibility needs and provide a clear process for students to seek clarification or accommodations.
Generative AI systems can reflect and reinforce biases present in their training data. Instructors should consider how this may affect learning, assessment fairness, and student outcomes.
Prompts and outputs may raise data privacy or legal concerns. Students should be instructed not to enter sensitive or protected information into unapproved AI tools.
If AI tools are permitted or required, instructors should consider whether all students have equitable access and whether tool use could create unfair advantage.
No. Instructors may choose Allowed, Mixed, or Restricted approaches. The requirement is clarity: the syllabus and assignment instructions must communicate expectations clearly.
Use consistent labels (AI Allowed, AI Allowed with Disclosure, AI Not Permitted) and repeat them in the syllabus and every major assignment. But perhaps most importantly, have a conversation with your students about the policy.
These tools can sit on a continuum. State what is permitted and create a path for students using assistive technologies to request clarification.
No. Detection tools should not be treated as evidence on their own. Use policy clarity, disclosure, process evidence, and follow-up verification instead.
If you suspect unauthorized AI use, the best next step is a conversation with the student.
Encourage students to ask before submitting. When in doubt, the assignment instructions provide the most specific guidance and should take precedence.