Introduction
What happens when students are required to use AI tools in their assessments? In this post, I reflect on reimagined assessment design for two modules at Bangor Business School. The two modules were 1502 Personal Finance and Banking (undergraduate ("UG"), first semester, first year) and 4446 Financial Ethics (postgraduate ("PGT"), first and second semester).
The rising prevalence of AI in education has sparked widespread debate about its impact on academic integrity and student learning. In response, I required students on two modules to actively use AI tools as part of their summative coursework. Rather than seeing my role as the AI police, waving a big stick, I chose to support students in exploring these tools thoughtfully and responsibly. The goal wasn't control - it was conversation, curiosity, and confidence.
What inspired or motivated you to use this tool/resource?
A primary concern was the diminishing validity of assessment by written coursework. Despite adopting Problem-Based Learning and Authentic Assessment strategies, many issues persisted: concerns about contract cheating; low take-up of formative assignments; and poor engagement with feedback, exacerbated by feedback bunching. Clearly, there was demand for more relevant assessment. I took a "first principles" approach and decided to:
- Give students more autonomy
- Engage students in authentic conversations about AI
- Promote active learning
- Ensure alignment with the learning outcomes (LO)
- Ensure grades reflect genuine student learning
Moving from detection to direction, this year's students were given an assignment brief containing the following seven new elements:
- Free topic choice (Aligned to at least one LO)
- Use any AI tool(s) they choose (BU Policy compliant)
- Reflective element (Learning journals on VLE)
- Mid-process check-in (Week 4, VLE)
- Verbal presentation with Q&A (Weeks 9 and 10)
- Immediate feedback in class
- Revised submission (Weeks 10 and 11)
What was your aim in using this tool/resource?
The overarching aim was to improve the validity of coursework; to ensure it genuinely reflected student learning, aligned with contemporary quality standards, and supported greater autonomy and active learning.
A key objective was to foster ethical, transparent AI usage. Rather than treating AI as a threat, the initiative encouraged students to explore its potential within clear university policy boundaries, and I wanted students to feel confident doing so. It wasn't just about getting better marks; it was about building good habits for the workplace too.
What did you use the tool/resource for?
The assignment included two digital elements: students could use any AI tool(s) they chose, and a Week 4 submission via a formative discussion on the VLE captured early ideas and invited feedback. The checkpoint surfaced misunderstandings about acceptable AI use. Some of these we could clear up quickly in class; other cases needed tougher conversations. But surfacing these issues early made a big difference.
In the final presentations, the UG students were bolder and made more original and varied use of AI, whereas the PGT students were more circumspect. Some students were particularly creative in their use of AI, from generating images and synthetic voiceovers to reshaping content structure. The overall level of creativity in this year's submissions was noticeably higher than in previous years.
What was the impact?
Average grades showed a notable improvement this year. UG students' average grades rose from 54% to 74%, a 20-percentage-point increase; PGT students' average grades increased from 47% to 59%, a 12-point rise. The standard deviation decreased for both groups, consistent with fewer failing students this year.
Early feedback in Week 4 and immediate post-presentation feedback played a key role in clarifying assessment criteria and ensuring alignment with learning objectives. Increased academic support through office hours provided additional guidance for weaker students and encouraged them to engage with the Teaching and Learning support team and their academic support librarian. However, this level of individualised support was partly enabled by a smaller cohort size. Scaling this model will require careful planning and support.
How did the tool/resource impact your teaching?
The integration of AI tools had a significant impact on my teaching practice. Student engagement increased: office hours were oversubscribed, indicating heightened interest and a desire for guidance. Class time was allocated to discussing assignment ideas and facilitating peer learning, keeping students on the right track and fostering a collaborative environment. Notably, there was a marked reduction in suspected contract-cheating cases, particularly at undergraduate level, suggesting an improvement in academic integrity. Students also performed better, with higher average grades reflecting a deeper understanding of the material. These changes enabled more authentic conversations with students, building trust and softening the traditional power dynamic.
How well did the tool/resource perform, would you recommend it?
Mandating AI use is highly recommended, given its potential to enhance assessment validity, promote student engagement, and prepare students for the future. However, structured support is crucial, particularly for students with varying levels of digital literacy, and clear guidelines are essential for responsible AI use. It is important to acknowledge that this approach alone may not be enough for all students, especially those with limited digital skills or anxieties about high-stakes assessment.
Of course, even with all this in place, some students still stepped over the line. Without stronger institutional safeguards, we must accept that risk - but we can still do a lot to reduce it.
How well was the tool/resource received by students?
Only a handful of students appeared genuinely reluctant to use AI, mostly because they were concerned about the ethics. Many were clearly already using various AI tools, though with varying degrees of transparency about how. There was a clear divide between the cohorts, with the undergraduate students demonstrating greater creativity and more interest in exploring topics beyond the traditional syllabus.
On the whole, students appreciated the clarity around AI expectations and the support provided in interpreting university policy.
Share a 'Top Tip' for a colleague new to the tool/resource
Key recommendations for colleagues include:
- Fostering open dialogue to clarify AI capabilities, limitations, and ethical considerations.
- Promoting student autonomy by allowing independent exploration of AI tools, balanced with sufficient guidance to prevent confusion.
- Highlighting the relevance of AI skills in preparing students for their future careers.
How would I summarise the experience in 3 words?
Innovative, Engaging, Transformative.
What would I change next year?
This year confirmed how important in-process feedback is for improving both performance and integrity. The Week 4 submissions, class discussions, office hours, and immediate feedback after presentations all helped reduce unethical AI use and boost student attainment. However, there are some key areas I'd refine next year:
- Provide a custom GPT "AI Literacy Coach" for students:
Students need structured support to select, prompt, and apply AI tools effectively. Those with higher levels of digital literacy are currently at a clear advantage - especially when it comes to using generative AI for critical thinking, reflection, and academic writing. A dedicated GPT-powered AI Literacy Coach could help demystify these tools, scaffold responsible use, and provide just-in-time support for students developing their confidence and capability. (I have a working model of one for staff; email me for access.)
- Strengthen the Week 4 checkpoint:
The VLE activity could be enhanced by asking students to respond to specific, structured prompts on their topic choice, AI tools, reflective model, and technology decisions. These could be marked as pass/fail to ensure timely engagement and provide early data on student progress and tool usage. This would also make it easier to identify those needing additional support.
- Support student autonomy without overwhelming students:
While the freedom to choose any topic was motivating for many, others found it overwhelming. This led to a significant increase in office-hours attendance, which became an opportunity to build confidence, deepen subject knowledge, and narrow overly broad ideas. A more guided scaffold at the start of the process may help balance choice with clarity.
- Manage logistics around extensions and resubmissions:
The verbal presentation format was pedagogically rich, but extensions and supplementary submissions required additional class time and one-to-one slots. These created a substantial logistical and administrative load. Next year, I will explore ways to safeguard fairness and workload for staff and students alike without compromising the benefits of verbal feedback and individual engagement.
Finally, there is still a gap in how students engage with academic support services. Many remain reluctant to seek help from the library or Teaching & Learning teams. I'd like to co-design clearer scaffolding for digitally less confident students to ensure more equitable access and greater inclusion across cohorts.
Sustaining this kind of personalised support in the longer term may require rethinking how teaching time is allocated, especially where new assessment approaches are being embedded.
Recommended viewing:
This case was also presented at the University of Kent Digitally Enhanced Education Webinar Series, where a recording is available.
Contact for more information:
Ian Roberts: ian.roberts@bangor.ac.uk