At the recent Community Collab Summit, I facilitated a session about generative AI (gAI) at MSU Denver. In part of that session, I asked participants to imagine a few different versions of a scenario that plays out often at MSU Denver: Someone at the university works hard to create a report that can inform decision-making.
Suppose you were the one who created the report and I'm your supervisor. Consider scenarios A, B, and C:
Look carefully at the differences between these (as indicated) and, if you’re willing, pause to think about your own reaction to each.
Sidenote: You can also imagine the next step after any of the above versions. A university VP asks gAI to read the email, generate action steps based on MSU Denver core values, and send those action steps to the Deans.
For the folks in the session, I posed this follow-up question:
What if each scenario (A, B, & C) included a simple and accurate gAI disclosure statement?
Reacting to (A): Scenario A feels mostly fine to me. Using gAI to get a head start on understanding a complex report seems both reasonable and useful. I would need to be very careful to investigate and dig into whatever the gAI summary said, especially in areas where I don’t have much expertise, and I would ask for specific references that support overall claims. As long as the user takes those steps, this feels okay. And, crucially, disclosing this use of gAI would not give me pause!
Reacting to (B): In this version, I start to worry. Taking the gAI summary as true is risky because gAI hallucination is a permanent feature of these types of tools (1). Those hallucinations might not be a problem, but I think I would be shirking my job duties if I don’t check the gAI summary against the report itself. Partly, I am using my own discomfort at disclosing this process as a guidepost. The feeling that I might not want to be transparent about using it this way is, I think, a (crude) signal that this is against my values.
Reacting to (C): This makes me deeply uncomfortable. I feel like I’m not really involved in this at all. The results would have been the same if you sent the report to the gAI tool directly, asked it to summarize and send an email pretending to be me. If I disclose this kind of use to my colleagues, I absolutely expect them to wonder what work the university is paying me to do. Disclosure would feel bad, and that is an important signal.
Sidenote: Transparency isn’t perfect. “The transparency dilemma: How AI disclosure erodes trust” outlines some important results: Disclosing AI use is better than being found out after the fact, but it isn’t flawless. Disclosed use can still erode trust, even among those with positive attitudes toward technology and confidence in gAI accuracy (2).
After a brief discussion on these topics, I asked the folks in the Community Collab Summit session the following question:
What would be the impact at MSU Denver if disclosure of gAI use were standard practice?
Results: A – 50%, B – 36%, C – 14%, D – 0%. Crudely: 86% Beneficial. Overall, the 22 folks who responded that day gave a very strong signal that making disclosure of gAI standard practice would be beneficial at MSU Denver.
As you can probably tell, I agree wholeheartedly. I think a simple disclosure statement can take a given scenario out of a muddy, icky grey zone and make it feel relational.
Making gAI disclosure standard practice at MSU Denver will probably evoke a wide range of reactions. In my opinion, the discomfort and friction that might arise is important. It is a signal that we are not all operating with the same values around these very new, very powerful tools.
One interesting case to consider involves the instructor and student roles in a course. There have been cases in which faculty used a gAI assistant to write an email accusing students of using gAI to write their essays, and well-known cases in which students demanded a refund after realizing the course content itself was generated by AI (3). I don’t think these uses are equivalent, but I do think disclosure and transparency would have made a big, positive impact in both.
I have no idea how MSU Denver faculty, staff and students will view gAI in 10 years’ time. For now, I hope we are forced to have difficult, nuanced conversations that lead to clear guidelines and practices.
And so… I will end with an actual Call to Action! Start including a generative AI disclosure statement as part of your email signature, your websites, your course content, your internal reports, etc. This isn’t just for those who use gAI; this is for everyone. If this became common practice, it would kick off important conversations.
Your disclosure doesn’t have to be detailed or lengthy. I’ve started including the following in my email signature:
AI Disclosure Commitment: When I use generative AI in my work, I will always include a brief disclosure statement. Aside from spellcheck, this email was composed without the use of generative AI.
When I do use gAI, I modify the last phrase to disclose how I used it. You can also find similar disclosures at the end of all of my CTLD Connections pieces, such as this one about Three Types of Content and What They Mean for Generative AI (scroll to the bottom).
Generative AI disclosure: After writing this piece I used generative AI to write a first draft of the short “teaser blurb” that went out by email. (The featured image, on the other hand, was made by combining icons each under creative commons license.) Want to know more? Send me an email and we can chat!