To cite this article: Florian Hesse & Gerrit Helm (2025) Writing with AI in and beyond teacher
education: Exploring subjective training needs of student teachers across five subjects, Journal
of Digital Learning in Teacher Education, 41:1, 21-36, DOI: 10.1080/21532974.2024.2431747
To link to this article: https://doi.org/10.1080/21532974.2024.2431747
© 2024 The Author(s). Published with
license by Taylor & Francis Group, LLC.
Published online: 09 Dec 2024.
Journal of Digital Learning in Teacher Education
2025, VOL. 41, NO. 1, 21–36
Writing with AI in and beyond teacher education: Exploring
subjective training needs of student teachers across five
subjects
Florian Hesse and Gerrit Helm
Department for German Literature, Friedrich Schiller University Jena, Jena, Germany
ABSTRACT
AI is changing the way writing is learnt at university and taught in schools.
Various institutions therefore call for integrating programs on writing with AI
into teacher education. Such programs must be based on the needs of the participants,
which are, however, still unexplored. This article fills this gap with findings
from a February 2024 questionnaire study involving 505 student teachers
across five subjects. Content analysis revealed that student teachers require
fundamental training in using AI for writing. Nevertheless, some student
teachers also expressed specific needs in relation to university and school
writing, which indicates a heterogeneity concerning training interests.
Finally, dependencies between the needs and students’ personal character-
istics (e.g., subject, semester) were identified, aiding the development of
tailored learning opportunities.
1. Introduction
Generative AI technologies, such as ChatGPT, are considered to have great disruptive potential
both for society and for teaching and learning in school settings (e.g. Alier et al., 2024; Farrelly
& Baker, 2023; Zekaj, 2023). Especially the rapid generation of coherent text by AI is anticipated
to have a major impact on (teaching) writing in school (e.g. Chiu et al., 2023; Ferdig et al.,
2023; Zhang & Tur, 2024). Consequently, educational research has recently focused on exploring
the use of AI for writing. Based on initial findings in the field, UNESCO and other edu-
cational institutions have pointed out potential scenarios and limits for writing with AI in schools
(e.g. Köller et al., 2024; Miao, 2023). These recommendations also emphasized that professional
development (PD) programs on AI should be designed, ideally beginning with the training of
student teachers at universities.
However, to develop training programs for this target group, it is necessary to take into
account their needs and previous experience of the topic. This is because research on PD indi-
cates that training programs are more successful when they are aligned with the needs of the
participants (Korthagen, 2016; Kunter et al., 2011; Lipowsky & Rzejak, 2021). Current research
on AI-assisted writing primarily consists of studies that either survey students in general about
their use of AI (when writing) (e.g. Hoffmann & Schmidt, 2023; Malmström et al., 2023; von
Garrel et al., 2023), or that examine how students write with AI in specific scenarios (e.g. Fyfe,
2023; Lemke et al., 2023). Studies that focus explicitly on student teachers and their needs
concerning writing with AI are still scarce.
CONTACT Florian Hesse florian.hesse@uni-jena.de Department for German Literature, Friedrich Schiller University Jena,
Fürstengraben 18, D-07743 Jena, Germany.
KEYWORDS
writing with AI;
subjective training needs;
Artificial Intelligence;
ChatGPT in education;
teacher education
ARTICLE HISTORY
Received 11 July 2024
Revised 1 October 2024
Accepted 15 November 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/
by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on
which this article has been published allow the posting of the Accepted Manuscript in a repository by the author(s) or with their consent.
To address this research gap, the present study analyzes data from a February 2024
survey of 505 student teachers from five different subjects. They were asked in an open response
format about their training needs for writing with AI. The responses were then analyzed using
Qualitative Content Analysis and statistically related to personal characteristics. The
results provide fine-grained insights into the training needs of student teachers across different
subjects and represent an important starting point for the development of training programs.
1.1. What are new challenges in (teaching) writing in the context of AI that require teacher
training programs?
Since ChatGPT was published in November 2022, numerous studies in educational research have
emerged that outline the potentials and limitations of the application, particularly in the context
of academic writing in universities and writing in schools (e.g. Ahmad et al., 2023; Bai̇doo-Anu
& Owusu Ansah, 2023; Imran & Almusharraf, 2023; Li, 2022; Vargas-Murillo et al., 2023; Zafari
et al., 2022).
Regarding the potentials, AI applications might support all phases of the writing
process: In the planning phase, AI is said to offer potential for the generation of ideas and
general outline of texts (Su et al., 2023; Utami et al., 2023) and for the development of struc-
tured argumentation (Fok & Weld, 2023). Concerning the phase of formulating text, it is high-
lighted that AI can generate parallel and sample texts (Fang et al., 2023; Schicker & Akbulut,
2023). These might be used to (re)formulate one’s own text or to obtain alternatives. The text
reviewing phase also seems to offer a fruitful ground for the use of AI applications. For example,
it has been demonstrated that AI can be used for feedback during text revision (Abdullayeva
& Musayeva Zilola Muzaffarovna, 2023; Wampfler, 2023; Yan et al., 2024).
On the side of limitations, educators must be aware of potential risks and technical as well
as ethical limits. For example, a much-discussed topic concerns the risk that pupils might use
AI as a ghostwriter. Some scholars even suggest that AI could take writing out of the hands of
students altogether (Anson & Straume, 2022) or change the way written material is perceived
in the future (Pedersen, 2023). Moreover, the use of AI technology could be problematic as AI
generates information that is sometimes incorrect or hallucinated. Finally, limits of AI in edu-
cational contexts concern aspects of copyright, data protection or bias (Noto La Diega &
Koolen, 2024).
Given the potential of AI as a resource for (teaching) writing on the one hand, and its lim-
itations and hazards on the other, research and educational policy makers repeatedly stress the
need for teacher training programs that focus on these issues (Albadarin et al., 2023; Barrett &
Pack, 2023; Chiu et al., 2023; Eslit, 2023).
1.2. Related studies: What is known about the needs of student teachers concerning writing
with AI?
It is known from research on PD for teachers that PD programs are particularly effective when
they are aligned to the needs of teachers (Lipowsky & Rzejak, 2021). In this context, a distinc-
tion is made between objective and subjective needs. Objective needs are derived from tests or
lesson observations by comparing the teachers’ current knowledge or skills with a desirable
target state. By contrast, subjective needs are assessed by asking teachers directly about their
training needs (Weber et al., 2024).
Due to the novelty of publicly accessible chat-based generative AI, the number of studies on
objective and subjective needs regarding the use of AI for writing is still limited. Although there
are now many overview articles that conceptually analyze the opportunities and challenges of AI
in educational contexts (e.g. Kasneci et al., 2023), there is still little empirical research on student
teachers’ writing with AI.
At present, studies that use questionnaires to ascertain how often and for what purposes
students use AI are particularly common (see Helm & Hesse, 2024, for an overview). As one
major result, the studies reveal an increasing familiarity with AI applications. Looking at specific
tools, students are especially aware of ChatGPT, which was known by 95% of the students,
compared to BingAI with 38% and Bard/Gemini with 21% (Malmström et al., 2023). However,
the awareness of AI applications does not seem to translate directly into regular use: Although
the majority of students seem to ‘know’ ChatGPT, only around 25-35% stated that they use this
tool at least ‘often’ or even ‘very often’ (Hoffmann & Schmidt, 2023; von Garrel et al., 2023).
In addition, some studies use small observational or intervention settings to gain insights
into students’ writing with AI. Results of these studies provide a mixed picture. One study
introduced bachelor’s students in business/law to the use of ChatGPT and found that “students
prefer using these tools if it simplifies their daily study routine, helps them to fulfil their writing
tasks, and thus makes them more efficient" (Lemke et al., 2023, p. 160). In contrast, Fyfe (2023)
shows in his study, in which students were asked to write assignments from their own texts together
with excerpts from ChatGPT in the form of "patchwriting" (p. 1400), that this rarely
led to an increase in efficiency: writing with AI did not become easier but rather more dif-
ficult, as students had problems linking the AI passages to their own text. Different strategies
can be observed in various directions, ranging from continuously refining the prompts used to
simply adapting one’s writing style and content to match the chatbot’s statements. Altogether,
the studies indicate that using ChatGPT can increase efficiency, but it requires prior knowledge
and experience that most students lack, at least according to these studies. For students with
minimal experience, AI applications seem to be an (‘inferior’) substitute for search engines (e.g.
Google) in the writing process (Strubberg et al., 2023).
Based on these studies, initial conclusions can be cautiously drawn about objective training
needs. In contrast, there have not yet been any studies investigating the subjective training needs
of student teachers and in-service teachers regarding writing with AI. Nonetheless, it can be
assumed that previous research findings on the subjective training needs of teachers about other
topics are transferable to a certain extent.
1.3. Studies on subjective training needs of (student) teachers
In PD research, the consideration of subjective training needs is regarded as significant for the
willingness to participate in PD (Weber et al., 2024). In some cases, subjective needs are also
used synonymously with the construct 'interest in PD', which is also considered to be of great
importance for the success of PD programs (e.g. Krille, 2020).
To our knowledge, there are currently no studies that have investigated subjective needs of
student teachers concerning (writing with) AI. However, studies that have looked at the partic-
ipation of students in training courses on digital media do provide some clues. These studies
suggest that an examination of subjective needs cannot be carried out in isolation. Rather, it
must be conducted concurrently with an examination of the personal characteristics of the
participants. For example, Johnson et al. (2023) examine the willingness of students to participate
in digitization-related training programs for student teachers and compare those who expressed
an intention to participate with those who did not. The results indicated that participants dif-
fered from non-participants primarily in age and the semester they were currently enrolled in.
Students who were willing to participate in further training were generally older, but were still
attending an early semester (Johnson et al., 2023, p. 81). In addition to these factors, participants
with lower self-assessments regarding competence in the use of technology were more likely to
participate in training programs. Other personality facets and the gender of the students did
not affect the willingness to participate in a training program (Johnson et al., 2023, p. 82).
If one also looks at studies on the subjective training needs of in-service teachers, further
indications of potential influencing factors can be derived. In line with the study by Johnson et al. (2023),
existing research indicates, for example, that the self-concepts – i.e. judgements and attitudes
regarding different aspects of oneself (Kuhl et al., 2023; Möller & Trautwein, 2009) – of the par-
ticipants explain the selection of professional development courses. In this context, some studies
show that – contrary to expectations – teachers with highly developed self-concepts are more
likely to participate in training courses than teachers with low self-concepts (Richter et al., 2013;
Krille, 2020). Other studies find that teachers with a low (topic-specific) self-concept take part
in professional development courses (Weber et al., 2024).
In addition to questions of self-concept, the extent to which demographic factors (e.g. gender,
professional experience) affect training participation was also investigated. According to Weber
et al. (2024), these variables explain only a small part of the variance in most studies and
point in different directions depending on the study. For example, gender has no influence in the study
by Johnson et al. (2023) (see above), while in Krille (2020) men and in Gokmenoglu et al.
(2016) women show a greater interest in further training on digital media.
1.4. Conclusions
To synthesize the findings of previous studies, it can be observed that although students are
largely familiar with AI, particularly with ChatGPT, this does not necessarily result in the (auto-
matic) integration of AI into their own writing. When students do utilize AI applications for
writing purposes, it is primarily for planning or formulating small text elements.
However, even in these instances, they often encounter difficulties in integrating the AI-generated
text elements. Overall, these findings corroborate the demand to train (prospective) teachers in
the use of AI. In order to design effective training courses, a deeper understanding of the initial
conditions and subjective needs of student teachers is required. It is of particular importance
to understand how these subjective needs are related to personal characteristics, including age,
semester attended, gender, self-concept and, most importantly, the subject being studied. To date,
research in this field remains a desideratum.
2. Methods
2.1. Research questions and hypotheses
The present study aims to address the identified research gap for the first time by surveying
student teachers of five different subject domains at a medium-sized German university.
Specifically, the following two research questions were examined:
RQ 1: What are the subjective needs of student teachers regarding writing with AI?
RQ 2: Are there connections between the needs of the student teachers concerning writing with AI and their
personal characteristics (e.g. subject studied, semester), self-assessed competencies (e.g. writing-related
self-concept) or their beliefs about the impact of AI on teaching writing in schools?
Concerning RQ 2, we formulated differentiated hypotheses concerning the subgroups of needs
we have identified (general needs, school-related needs, university-related needs). In particular,
it is assumed that differences in the subject being studied will be reflected in differences in the
formulated subjective needs. Concerning school-related needs, we expected students studying a
language subject, rather than a social-science or STEM subject, to be more likely to express needs
in writing with AI, insofar as writing plays a greater role in language subjects. Moreover, we assumed that
the group expressing school-related needs is also likely to have stronger beliefs about the impact
of AI on writing in schools, as those who do not attribute any potential for change to AI are
less likely to see a need for further training in writing with AI. Finally, as far as the general
and university-related needs are concerned, it can be expected that students with corresponding
needs have a lower writing self-concept and will assess their own competence in dealing with
technology, especially AI (AI literacy), as lower, while students who do not articulate any needs
(main category ‘no needs’) already have higher self-assessments.
2.2. Data collection
To explore our research questions, we designed an online questionnaire using the tool SoSci Survey.
The questionnaire was distributed to 2,472 student teachers at a medium-sized German university
via a mailing list in February 2024. To encourage participation, the survey was advertised in
lectures and was linked to a lottery in which participants had the chance to win vouchers valued
at 150 €. Participation in both the survey and the lottery was voluntary. The survey was dis-
tributed to student teachers of five subjects: German, English, Sports/Physical Education, Physics,
and Geography. These subjects were chosen as they represent different fields, including language
teaching, foreign language teaching, and STEM-Education. Participants were asked to select one
subject from the above list at the beginning of the survey, since students in Germany usually
study at least two subjects. All subsequent questions in the survey were to be answered with this
subject in mind.
2.3. Participants
A total of N = 505 student teachers completed the questionnaire. Only participants studying to
become secondary school teachers were included, as prospective kindergarten or primary school
teachers are not trained at the university where the survey was conducted. Table 1 shows the
distribution of subjects studied by the participants. The distribution of students who participated
in the survey corresponds to the distribution of students enrolled in the subjects. In terms of
gender, 328 students (64.36%) identify as female, 178 students (35.23%) identify as male, and
2 students (0.40%) identify as diverse. However, the latter group was not included in our analyses
due to its small size, which must be considered as a limiting factor when interpreting the results.
The participants have an average age of 22.92 years (SD = 2.90).
2.4. Analysis of the data
For RQ 1, we analyzed free text answers of the participants concerning the following item that
asked for their needs in terms of further training in the field of writing (with AI): “In which
areas or aspects (writing, academic writing, writing with AI) would you like to have (more)
support? What could this support look like?” The answers to these questions were coded using
Qualitative Content Analysis (‘QCA’, Rädiker & Kuckartz, 2019, pp. 219-229). QCA involves
categorizing meaning units to organize data. When analyzing open responses in surveys, categories
"can be formed using both an apriori approach or based directly on the data" (p. 226).
Because an a priori approach would have required more detailed research on students’ needs
for writing with AI, we decided to build categories inductively from the data. Following the
methodological literature's recommendations (Kuckartz, 2018), we iteratively assigned categories
to the data and structured them by revising, sorting, and deleting categories. As a unit of
meaning, we considered coherent sentences or key points that referred to a single need for
writing with AI. As soon as another need was formulated (in the form of a new key point or
marked by connectors such as “and”), a new code was assigned. This process ultimately resulted
in a fine-grained category system with 4 main categories and 16 subcategories, which is presented
completely in Appendix B.
Table 1. Distribution of subjects studied by the participants.

(School) subject studied   Absolute frequency   Relative frequency (%)
German (L1)                183                  36.24
English (L2)               149                  29.50
Geography                   72                  14.26
Sports/PE                   71                  14.06
Physics                     30                   5.94
Total                      505                 100.00
To ensure a high inter-coder-reliability, the data was coded in multiple steps. Initially, the
first author coded 50% of the data, so that it could be assumed that the category system was
saturated and no more new categories would be added. Afterwards, the second author critically
reviewed the category system to ensure the plausibility of category descriptions and examples,
as well as the distinction between categories. Subsequently, both authors used the final
version of the category system to individually (re-)code 20% of the data for the purpose of
reliability analysis. The calculation of Cohen's kappa yielded a satisfactory level of agreement
between raters (κ = .81) (Wirtz & Kaspar, 2002), allowing the remainder of the data to be coded
by the first author.
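The agreement statistic used above can be illustrated with a small, stdlib-only Python sketch of Cohen's kappa; the two rater lists below are hypothetical, not the study's actual codings:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters who assign one category per meaning unit."""
    n = len(coder_a)
    # observed proportion of agreement
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# hypothetical codings of five meaning units by two raters
rater1 = ["university", "university", "school", "no needs", "university"]
rater2 = ["university", "school", "school", "no needs", "university"]
print(round(cohens_kappa(rater1, rater2), 4))  # 0.6875
```

Because kappa corrects raw agreement for chance, it is lower than the simple agreement rate (here 4/5 = 0.8); values above .80, as reported in the study, are conventionally considered very good.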
With regard to RQ 2, we wanted to find out to what extent people who express needs for
writing with AI in a certain area differ from those who do not express these needs. Since four
main categories of needs emerged in our category system – university-related, school-related,
general and no needs – we binary coded for each person whether they expressed a need in the
corresponding main category or not. The four main categories were then assumed to be inde-
pendent variables in order to test whether the participants with a need in the corresponding
category differed from participants without a need in the corresponding category.
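The per-person binary coding described in this paragraph might look like the following sketch; the category labels and the example coding are hypothetical illustrations, not the authors' actual coding scheme:

```python
MAIN_CATEGORIES = ("university-related", "school-related", "general")

def binarize(assigned_codes):
    """Collapse a person's subcategory codings into 0/1 flags per main category."""
    flags = {cat: int(any(code.startswith(cat) for code in assigned_codes))
             for cat in MAIN_CATEGORIES}
    # 'no needs' applies when no need in any main category was expressed
    flags["no needs"] = int(not any(flags[cat] for cat in MAIN_CATEGORIES))
    return flags

# hypothetical participant who expressed one school-related need
print(binarize(["school-related: handling texts"]))
# {'university-related': 0, 'school-related': 1, 'general': 0, 'no needs': 0}
```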
The number of semesters, writing-related self-concept (i.e. judgements and attitudes regarding
one’s own writing abilities), AI literacy and school-related beliefs about AI were examined as
dependent variables using t-tests (cf. Appendix A for a detailed documentation of all scales incl.
reliability measures and example items). For all t-tests, the normal distribution of the data was
checked using the Shapiro-Wilk test. The equality of the variances (homoscedasticity) was eval-
uated using the Levene test. Violations of the normal distribution were accepted as a limiting
factor because t-tests are robust in this respect. If the assumption of homoscedasticity was
violated, the results of the Welch test, which is more robust in this case, were interpreted. In
the case of significant results, the effect size was calculated using Cohen’s d, whereby values
from 0.2 were regarded as weak, from 0.5 as medium and from 0.8 as strong effects (Cohen, 1988).
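The effect size and the Welch statistic mentioned above can be sketched with a minimal, stdlib-only Python example; the two groups below are invented, and the study itself presumably relied on standard statistical software:

```python
from statistics import mean, stdev

def cohens_d(x, y):
    """Cohen's d using the pooled sample standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = (((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
                 / (nx + ny - 2)) ** 0.5
    return (mean(x) - mean(y)) / pooled_sd

def welch_t(x, y):
    """Welch's t statistic, robust to unequal group variances."""
    se = (stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y)) ** 0.5
    return (mean(x) - mean(y)) / se

# invented self-concept scores: need expressed vs. not expressed
with_need = [1, 2, 3, 4]
without_need = [2, 3, 4, 5]
print(round(cohens_d(with_need, without_need), 3))  # -0.775
```

By the thresholds named in the text, a |d| of 0.775 would count as a medium-to-strong effect, whereas the effects actually reported in the study are weak (|d| < 0.4).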
Furthermore, chi-square tests were used to check whether subject affiliation (German and
English as language subjects vs. physics, geography and sport as non-language subjects) or gender
(male vs. female) were related to the subjective needs of the students. In the case of significant
results, the effect size was indicated with Cramer’s V, whereby values from 0.1 were regarded
as weak, values from 0.3 as medium and values from 0.5 as strong effects (Cohen, 1988).
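Similarly, the chi-square statistic and Cramér's V for a 2x2 table (language vs. non-language subject crossed with need expressed vs. not) can be sketched as follows; the counts are invented for illustration:

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / n  # expected count under independence
            chi2 += (observed - expected) ** 2 / expected
    return chi2

def cramers_v(chi2, n):
    """Cramér's V; for a 2x2 table, min(rows, cols) - 1 = 1."""
    return (chi2 / n) ** 0.5

# invented counts: [language, non-language] x [need expressed, not expressed]
table = [[10, 20], [20, 10]]
chi2 = chi_square(table)
print(round(chi2, 3), round(cramers_v(chi2, 60), 3))  # 6.667 0.333
```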
3. Results
3.1. Subjective training needs on writing with AI
To answer RQ 1, we analyzed a free-text item that asked for the needs of the students regarding
learning opportunities that concern the field of writing (with AI) by applying QCA with induc-
tive category building. As a result, 694 meaning units of the students were assigned to a total
of 16 categories (see Appendix B for comprehensive documentation). The fact that the number
of meaning units is higher than the number of participants (N = 505) indicates that many students
mentioned more than one need regarding training programs in the field of writing with AI. On
average, every student expressed 1.37 needs.
Starting to look at the results from a global point of view, the responses can be categorized
into four main categories: The first category, university-related needs, encompasses all demands
that affect writing at the university. This includes, for example, the use of AI in assignments or
for planning, formulating or reviewing academic texts. The second category, school-related needs,
addresses the need for student teachers to receive support in using AI in their future teaching.
The third category, general needs, includes demands that relate more generally to the handling,
legal framework and technical questions of AI (in writing). Finally, a lot of students did not
mention any needs concerning further education in the field of AI at all or said explicitly that
they do not need any support. These cases were coded in the category no needs.
Figure 1 shows the distribution of the meaning units across these four main categories.
It indicates that more than one third of the meaning units (35%) were labeled with the category no
needs, while roughly two thirds of the meaning units concerned university-related (30%), school-related
(20%) or general needs (15%). In what follows, the (sub)categories will be investigated in more detail.
3.1.1. University-related needs
Among the students who articulated a need for further training in writing (with AI), most
codings could be assigned to categories that are directly connected with demands of academic
writing they face at university. The students’ needs in this area focused on various aspects.
Most of the codings, 93 of 679 (13.7%), fall into the sub-category of general university-related
needs. This sub-category was used to code all statements that indicated a need for more support
in academic writing in a general and comprehensive sense. For example, one student expressed
the wish that tutorials for academic writing should be further expanded. Other students com-
plained that the teaching of academic writing (with and without AI) was generally neglected in
their studies and that they would appreciate more support in this area.
The second most common sub-category was assignments (49 codings, 7.2%). It was coded
when students expressed needs relating to the completion of assignments or other written exam-
inations. In this sub-category, many students asked how they should cite sources correctly in
the future – especially in view of the use of AI (e.g. “How can I cite AI?” or “Can I use AI at
all for the term paper?”). Moreover, it was noticeable that many students also complained that
lecturers often did not provide any (standardized) guidelines on what to consider when writing
a term paper (with AI). Finally, other students were also interested in how AI could help them
to write term papers more efficiently.
Three further sub-categories related to the different phases of academic writing. For example,
26 statements (3.8%) referred to the request to learn more about planning term papers (with
AI) (e.g. finding topics with AI, literature search with AI). A further 16 statements (2.4%) were
coded with the sub-category text generation. In this sub-category, students wished to learn more
about how AI can help, for example, to formulate thoughts, find formulation alternatives or find
the right style. The sub-category reviewing accounted for 4 statements (0.6%). It referred to the
explicit wish of some students to learn how AI tools can be used for revising term papers (with
regard to correctness or style).
Finally, the sub-category personal feedback included all statements that contained students’
need to receive more personal feedback from their lecturers (in addition to AI-generated feed-
back). In this context, one student reported that he had not received a single piece of personal
feedback on a term paper during his entire time studying, indicating that even in times of AI,
students still find personal conversations about their assignments important.
3.1.2. School-related needs
Figure 1. Distribution of subjective needs.
While the above-mentioned needs are likely to apply to students from many disciplines, the main
category school-related needs contained meaning units in which student teachers are already thinking
about their future role as teachers. Similar to the needs in the area of academic writing, the
school-related needs were also formulated quite generally. In most cases, students want to learn how
to deal with AI at school in general, without specifying this need more precisely. For example, one
student asked for “more input how to use AI in school” (student 382). If the needs are more specific,
they usually relate to the question of how to deal with (potentially) AI-generated texts at school in
the future (category handling texts; 6.3%; 43 codings). In this sub-category, questions of AI recog-
nition play just as much a role as the question of how to grade AI-generated texts. In contrast, only
19 students (2.8%) are interested in learning more about the possibilities of AI for lesson planning,
e.g. in relation to the creation of (differentiated) learning tasks or teaching materials. Similarly, only
a few students express a need to learn more about the ethical and legal limits of AI in schools.
3.1.3. General needs
The two previous main categories have in common that they can be clearly assigned to an insti-
tutional context (school or university). The category below differs from this in that it contains
meaning units that are not clearly related to school or university writing tasks. Accordingly, the
needs assigned to this category overlap to some extent with the categories described above, but
also show new aspects. For example, 48 students (7.1%) are interested in the (ethical) limits and
risks of AI without relating this directly to school or university. Some students ask in which
situations it would be permissible and ethically justifiable to use AI. Questions about the technical limits
also fall into this category. Another general need of students, which is not linked to specific
institutions, is to familiarize themselves with AI applications (4.6%; 31 codings). In this sub-category,
students state that they want to get to know new AI tools and their potentials and limitations.
Finally, other general but much less frequently expressed needs relate to the desire to learn more
about the functioning of AI (sub-category AI literacy), expressed by 11 students (1.6%), or to
improve prompting techniques (sub-category prompting), stated by only 6 students (0.9%).
3.1.4. No needs
As already mentioned at the beginning, there are many students who have not indicated any
needs or have explicitly written that they do not wish to receive any further training (228 cod-
ings, 33.6%). Also of interest was a small subgroup that was coded with the subcategory statement
(19 codings, 2.8%). Comments from this group included statements that did not articulate a
need for learning opportunities but emphasized either the importance or the pointlessness of
generative AI. For example, one student commented positively that he considered AI to be
beneficial because it helped him to compensate for the deficits caused by his dyslexia. Still other
students noted (mostly without justification) that they find the spread of AI highly problematic
and would consistently speak out against the use of AI, especially in schools.
3.2. Connections between needs, personal characteristics, beliefs, and competencies
With the second research question, we wanted to find out to what extent students’ needs are related
to personal characteristics, beliefs or self-assessed competences. Significant differences were found
between students who reported school-related needs and those who did not. Participants who
expressed needs in this area were more likely to be studying a language subject (χ²(1) = 5.392, p = .02, V = .103), to have a higher writing-related self-concept (t(503) = −2.212, p = .027, d = −0.245), to have a stronger belief in the influence of AI at school (Welch-t(177.485) = −2.910, p = .002, d = −0.294), and to be in a higher semester (t(503) = −3.208, p < .001, d = −0.356). It should be noted, however, that according to Cohen (1988), the effects were only small in each case.
Apart from these results, significant differences were only found in relation to the expression
of general needs (t(503) = −2.146, p = .032, d = −0.263): Again, it was shown that people who
expressed general needs had significantly stronger beliefs about the influence of AI on school
writing. For all other variables tested, there were no significant differences between those who
expressed needs and those who did not. This includes gender, AI literacy or previous use of
formal or informal learning opportunities for writing or AI.
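For readers who want to run comparable group comparisons on their own data, the two effect-size statistics reported above can be computed directly from the textbook formulas. The following is an illustrative Python sketch, not the authors' analysis code, and the belief scores in it are made-up toy values:

```python
# Illustrative sketch of the group comparisons reported above:
# Welch's t statistic (unequal variances) and Cohen's d (pooled SD).
# The data below are hypothetical toy scores, not the study's data.
from statistics import mean, variance
from math import sqrt

def cohens_d(a, b):
    """Cohen's d using the pooled sample standard deviation."""
    n1, n2 = len(a), len(b)
    s_pooled = sqrt(((n1 - 1) * variance(a) + (n2 - 1) * variance(b))
                    / (n1 + n2 - 2))
    return (mean(a) - mean(b)) / s_pooled

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    v1, v2 = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (len(a) - 1) + v2 ** 2 / (len(b) - 1))
    return t, df

# Hypothetical AI-belief scores: students without vs. with school-related needs
no_needs = [2.1, 2.4, 2.0, 2.6, 2.3, 2.2]
with_needs = [2.8, 3.1, 2.7, 3.0, 2.9, 3.2]

t, df = welch_t(no_needs, with_needs)
print(f"Welch t = {t:.3f} (df = {df:.1f}), d = {cohens_d(no_needs, with_needs):.3f}")
```

As in the article, a negative d indicates that the group expressing needs has the higher mean, since the no-needs group is entered first in the difference.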
4. Discussion
The present study has succeeded in identifying a wide range of needs of student teachers with
regard to writing with AI. On a global level, the results reflect the students’ limited previous
experience. This is not only evident in the general needs category, in which statements were
coded where students wanted basic further training opportunities (e.g. getting to know AI tools,
how AI works and its limitations) regardless of a specific institution. In the university- and school-related statements, too, most coded needs were ones from which a fundamental rather than a specific need for further training can be derived. This aspect should not be underestimated
when designing training courses, as basic knowledge cannot be tacitly assumed for most of the
students. Rather, it is important to offer formats that also reach inexperienced students, for example by differentiating between beginner and advanced courses or by preparing course content in advance, such as through flipped classroom concepts.
Furthermore, on a global level, it is interesting to note that a clear teaching-profession specificity emerges in the codings. While the general needs and the university-related needs are very
likely to be shared by students from all disciplines, the school-related needs are specific to student
teachers. In training courses on writing with AI, this must be considered since skills in teaching
school writing must also be taught in addition to skills for coping with academic writing tasks.
A detailed look at the (sub)categories reveals that students want to learn how to use AI in
a variety of scenarios in university and school writing. As usage and observation studies on
writing with AI show, students are not yet exploiting the potential of AI in these scenarios.
Therefore, it is hardly surprising that subjective and objective needs coincide here. However,
regarding the design of training courses, it must also be noted that the subjective needs sometimes diverge considerably and can hardly all be covered within a single course. To meet
this challenge, it would be conceivable, on the one hand, to design modular training concepts
according to institution-specific requirements (e.g. university vs. school writing courses). On the
other hand, it would also be possible to design the modules on a cross-institutional basis so
that intersections between the areas become more apparent. To name just one example, legal
and ethical issues of AI use are equally relevant for schools and universities and could therefore
be addressed in a single course, highlighting commonalities and differences across institutions.
Another interesting finding is that, despite the numerous AI-related challenges, a good third
of students did not express any need for training in writing with AI or explicitly stated that
they did not need any further training. There may be several reasons for this finding. For
example, it can be assumed that a significant proportion of respondents have used both formal
and informal learning opportunities and may therefore no longer have a need for further training. Similarly, the beliefs about the influence of AI on writing at school are not related to this
category. Accordingly, it can be ruled out that primarily students who do not consider AI to
have any influence on writing at school did not express any needs. Instead, it is more likely
that many students did not think of a specific need for further training or that they were not
motivated enough to write down their needs. This is particularly the case because the item in
question was the last item in the questionnaire, which must be noted as a limiting factor.
Finally, based on previous PD research, it was important to determine the extent to which students’
professional development needs are related to demographic data (gender, subject, number of semesters)
and self-assessed skills and beliefs. Interestingly, significant correlations were found particularly
regarding school-related needs. The finding that students with school-related needs study a language
subject is in line with our expectations, insofar as these students are more involved in teaching
writing skills at school. The correlations with AI beliefs also fit our assumptions, as students
feel a greater need for further training if they perceive some of the changes outlined in chapter 1.1
and consider them to be important. The finding that, contrary to previous research findings (e.g. Johnson et al., 2023), it is primarily students in higher semesters who express a need for school-related further training may also fit into this context. This is because student teachers might feel
the need to deal with current challenges before they directly enter school practice, even though their
capacities may be limited. For the development of training concepts, this suggests that students in higher semesters in particular can be expected to take part in training programs.
5. Conclusion
ChatGPT and comparable AI applications are said to provide numerous new possibilities for writing
in universities and schools, which will certainly continue to develop in the future. This presents
student teachers with a twofold challenge. On the one hand, they must acquire skills in using AI
for text production themselves, but on the other hand, they must also be able to teach their future
pupils how to write with AI. As these are complex requirements, various educational institutions
have justifiably called for further education and training programs for prospective and experienced
teachers. For teacher training, however, this demand is challenging insofar as little is known about
how student teachers use AI for writing, what formal and informal learning opportunities they
have already taken advantage of in this area and what specific needs they have for further training.
The present study addressed this issue based on an analysis of data from a questionnaire study
and placed a particular focus on subjective training needs. Unlike existing studies, which have so
far mainly focused on the frequency and functions of AI use in writing, this is the first time that
the views of student teachers from different subjects have been taken into account. The results are
particularly important because they allow conclusions to be drawn regarding the design of learning
opportunities in writing with AI, the most important of which are briefly summarized below:
- The findings reveal that many students, a good year and a half after the release of ChatGPT, are still expressing a need for further training that relates to very fundamental
issues. For the design of training courses, it can be concluded from this that, despite the
rapid technological developments and evolving specialist discourses, many students still
need to be taught the basics first. This involves conveying technological knowledge (e.g. an
overview of how AI works, insights into various AI applications or prompting techniques)
as well as conveying writing-specific knowledge in the context of AI (e.g. on the writing
process and children’s writing development).
- Moreover, the data show that the needs revealed specific disciplinary characteristics in that
the student teachers also expressed school-related needs. This indicates that student teachers
not only think about school-related requirements during their studies, but also want to
further their qualifications in this area. As in-depth analyses showed, this applies in particular to students in higher semesters and with language subjects. The latter result would
also explain why many students in our data are interested in learning how to deal with
student texts, as the evaluation of student texts is an important task for language teachers.
Consequently, a particular focus of training courses should also be on showing students
how to give feedback on student texts (using AI) that is conducive to learning.
- The study also provides evidence that, despite the pervasive presence of AI in the media, not
all students express specific training needs related to AI. This may be attributed to the students’
existing familiarity with AI or, conversely, their inability to discern the relevance of the topic
or articulate specific training needs due to a lack of AI-related knowledge. In the context of
teacher training, this could mean that it is necessary to highlight the relevance of the topic,
both for students with prior knowledge (in order to show them where they can still improve)
and for those who do not yet recognize the potential role of AI in their future careers.
Finally, it should be noted that the present study has several limitations that offer starting
points for subsequent research. For example, one must consider that the present study is based
on a convenience sample from one university location, which should be supplemented in the
future by more controlled samples at other locations. Another limitation of the study is that
all the variables considered are ultimately based on the students’ self-assessments. As the question of the extent to which teachers can even validly assess their own training needs is controversial (Ernst et al., 2023), the findings must be validated by further samples and compared
with more objective methods. A further limitation concerns the fact that the school-related
needs that appear in the present study could also appear in a similar way in comparable degree
programs with a predefined field of work (e.g. medicine, law). This assumption should be
investigated in future studies by comparing different degree programs. Finally, it must be taken
into account that the results can be questioned in view of the continuous further development
of AI applications. In particular, some of the students’ needs may become less pressing in the future if AI applications are integrated into existing operating systems and applications to a greater extent. At the same time, however, such integration may lead to students
becoming even less aware of the use of AI, which could make new training needs more relevant.
Accordingly, it will be necessary to replicate studies like the present one in the future.
Disclosure statement
The authors report there are no competing interests to declare.
Funding
This work was supported by a research grant from the innovation pool (project: “Writing with AI in teacher education”) of the Center for Teacher Education and Education Research, Friedrich Schiller University, Jena.