One instructor’s reflections on the first workshop in the Arts and Humanities-focused GenAI series at SCTCC
I came into this CTL session anticipating a conversation about AI rules—what to allow, what to prohibit, and how to handle gray areas. But the heart of this workshop lay somewhere different. Somewhere between drafting an AI statement and debating what students must be able to do independently, the focus shifted. The question stopped being “How do we prevent misuse?” and became “What kind of learning are we actually trying to protect?”
This session marked the first workshop in our Arts & Humanities–focused GenAI series, led by Plamen Miltenoff, PhD, Researcher in AI Literacy & Immersive Teaching at the University of Economics – Varna, and coordinated locally by David Anderson (Biology Faculty). From the start, it was clear that this wouldn’t be a lecture about tools or detection. Instead, the focus was on something much more useful: helping instructors clearly articulate what students must learn to do on their own, how AI can support students, and where AI use crosses the line into unacceptable territory.
Starting with Learning, Not Policing
One of the most valuable shifts in this session was moving away from a “ban or detect” mindset and toward a clear AI use policy. Rather than asking “How do we catch AI use?”, we were asked more productive questions:
- What core intellectual work must students be able to do on their own?
- Where does AI genuinely support learning without replacing disciplinary thinking?
- How can we make expectations visible, teachable, and fair?
We spent time actively working on our own AI statements, grounding them in course learning outcomes instead of fear or ambiguity. This hands-on work made it clear that an effective AI policy is about defining authorship, reasoning, and responsibility in a way students can understand.
Guiding Principles for an AI Statement
A major takeaway from the session was a practical framework for building a syllabus AI statement around three clear components:
- Acceptable use – how AI may support learning
- Unacceptable use – where AI replaces student thinking or misrepresents authorship
- Required disclosure – how students document AI use transparently
Framing AI use this way shifts the conversation from enforcement to learning. Students aren’t left guessing, and instructors aren’t left interpreting gray areas after the fact.
Philosophical (and Practical) Conversations That Mattered
Beyond policy language, the session made me think about what it actually means to know something in our field. What must students wrestle with themselves? What kind of struggle is essential to learning? And how do we design assignments that make thinking visible?
These conversations felt especially valuable because they were immediately connected to practice. We weren’t just talking about AI in theory—we were drafting policies, testing language, and imagining how students would encounter these expectations in real courses. What stood out most, though, was the session’s insistence that AI policies must be grounded in disciplinary ways of knowing, not generic rules.
Dr. Miltenoff framed the conversation in a way that resonated with me: AI may support expression, organization, and revision—but it cannot replace interpretation, judgment, or disciplinary reasoning.
Looking Ahead
This was just the first of four workshops in the Arts & Humanities GenAI series. The upcoming sessions will dig deeper into:
- Redesigning assignments for traceable learning
- Creating secure assessment checkpoints
- Using AI to meaningfully support teaching and learning in arts and humanities contexts
If you’ve been feeling stuck between banning AI and ignoring it, these workshops offer a practical, discipline-centered way forward. I left this workshop with clearer language, stronger ideas, and a much better sense of how to align AI use with what I actually want students to learn. The hands-on, applicable nature of the session made it both practical and genuinely enjoyable—and I’m already looking forward to what’s next.