
Swedish academics have created fictional future scenarios exploring how generative AI will reshape university teaching over the next two years, warning that without proper coordination, the transformation risks leading to paralysis rather than renewal.

Researchers from the Department of Communication and Learning in Science at Chalmers University of Technology developed six near-future scenarios based on interviews with engineering students and workshops with university teachers, postdocs and educational developers. The study, titled Navigating generative AI in higher education – six near future scenarios, employed a method called informed educational fiction, using storytelling to grasp complex future issues.

The scenarios explore how AI could influence everything from student learning patterns to teachers’ work situations, as well as the institutional support systems needed to manage the integration of generative AI into academic life. Postdoctoral researcher Tiina Lindell, who co-authored the study with Professor Christian Stöhr, noted that researchers heard stories of both opportunities and risks in how teaching, the teacher’s role and the entire campus environment may change in the next few years.

Conflicting learning goals

The first scenario examines how students use AI and the conflicts that may arise from differing views on educational objectives. Educators expressed concerns that some colleagues resist allowing students to use generative AI for any learning tasks, whilst others want to make it a mandatory requirement. Teachers highlighted that the fundamental understanding needed to grasp complex problems could be lost when students use generative AI for coding.

The scenario reveals uncertainty about how to prioritise changes to learning objectives. Teachers expressed a desire to adjust courses based on students’ use of generative AI, noting they would first need to understand how students actually use it. However, educators acknowledged the difficulty of gaining that insight, suggesting that developing learning objectives around students’ generative AI use will remain challenging.

Excessive self-direction among students

The second scenario questions how much freedom students should have when using AI tools. Educational specialists raised concerns that some students might not identify what is important to learn when using generative AI, and could end up automating tasks they need to master. They emphasised the importance of explaining why tasks are essential so that students feel motivated to work on them.

Postdocs noted that generative AI could make students more independent whilst decreasing student-teacher interactions, but cautioned that this independence should not come at the expense of learning. They suggested that master’s students might handle independence better than undergraduates and recommended a curriculum in which students’ responsibility for generative AI use increases with their education level.

Unpredictable development of GenAI

The third scenario explores how education can be planned when technological progress moves rapidly in uncertain directions. Educational specialists found it challenging to predict how generative AI might evolve, expressing concerns about not knowing what technological developments will bring or what the tools’ capabilities will be.

Postdocs expressed differing perspectives on future development. Concerns emerged that current advancements stem from programmers who learned without ChatGPT, raising questions about whether there will be enough skilled programmers in the future. Others envisioned a future with many tools for both teachers and students, potentially offering personalised or individualised instruction for every student.

Teachers were worried about the constant changes in generative AI, finding it challenging to keep up. Some suggested that with more time, they could manage the challenges better.

Contradictory and counterproductive regulations

The fourth scenario addresses how differing attitudes towards AI among students, teachers and institutions create inconsistent rules. Educational specialists emphasised the importance of teachers clearly explaining why learning objectives are crucial, but raised concerns that educators’ teaching methods might inadvertently encourage cheating. Concerns emerged that teachers’ negative attitudes towards generative AI might hinder transparent use, potentially penalising honest students, whilst those who secretly use generative AI go undetected.

Postdocs had differing views on implementation strategies. Some suggested creating a dedicated course for practising generative AI use, arguing that students should be able to use the technology as much as they want, with support from teachers. Others doubted whether generative AI enhances learning at all, warning that it risks the loss of fundamental knowledge and that students should not use it without clear regulations.

Teachers expressed uncertainty about how unified, practically implementable rules could be formed. Proposals included changing assessment criteria so that linguistic skills are not heavily weighted, whilst others described balancing students’ engagement with generative AI as a substantial challenge.

Changing educator roles and interactions with students

The fifth scenario explores how AI might reduce teacher-student interaction whilst seeking new collaboration forms without increasing workload. Educational specialists expressed a vision of future education characterised by collaborative problem-solving, where students and educators work together to solve problems without predetermined answers. However, concerns emerged that students might take firm stances, causing conflicts due to differing views on learning objectives.

Concerns were expressed that managing students’ generative AI use could increase educators’ workload, requiring university support to handle. Educational specialists also worried that generative AI use could reduce students’ overall knowledge levels, potentially forcing educators to lower teaching standards.

Teachers initially agreed that students’ use of generative AI for self-directed learning would not significantly change student-teacher interactions over the next two years, but expressed concerns that decreased interaction might negatively impact students’ learning. Suggestions included using practical classroom exercises and discussions to foster human engagement. Teachers also noted they need to learn the tools themselves and identify which learning objectives have become redundant and need to be replaced.

Forging an AI-ready campus

The sixth scenario focuses on preventing AI management from collapsing at the individual level through proper institutional support. Educational specialists expressed concern that managing generative AI would increase their workload and called for university support, including larger and structurally sustained efforts with resource allocation and training.

Postdocs emphasised the importance of having a policy in place first, so that education can then be adapted to those rules. Teachers noted that without common policies, demands on students could become unreasonable, and that more straightforward guidelines from the university would be beneficial. Emphasis was placed on supporting professional development, requiring teachers to keep up with developments and incorporate elements of how companies operate. Subject-specific workshops and ongoing support were described as crucial; temporary measures will not suffice.

The researchers emphasised that the scenarios are not forecasts but tools to help universities and decision-makers reflect on the kind of future they want to move towards and what they might want to avoid. Stöhr noted that if universities manage this transition well, AI could become a driver of renewal, but without coordination and support, the development risks leading to confusion, conflict and paralysis.

The research was conducted in two phases. Individual and group interviews with engineering students from 13 different programmes focused on whether, how and why they use generative AI. Their responses were grouped into five themes. University teachers, postdocs and educational developers then used these themes in workshops to explore multiple possible futures. The researchers chose a two-year timeframe to be both realistic and forward-looking.
