Knowledge and use, perceptions of benefits and limitations of artificial intelligence chatbots among Italian physiotherapy students: a cross-sectional national study

Abstract

Background

Artificial Intelligence (AI) Chatbots (e.g., ChatGPT, Microsoft Bing, and Google Bard) can emulate human interaction and may support physiotherapy education. Despite growing interest, physiotherapy students’ perspectives remain unexplored. This study investigated Italian physiotherapy students’ knowledge, use, and perception of the benefits and limitations of AI Chatbots.

Methods

A cross-sectional study was conducted through SurveyMonkey from February to June 2024. One thousand five hundred and thirty-one physiotherapy students from 10 universities were invited to participate. The survey consisted of 23 questions investigating: (a) respondent characteristics, (b) AI Chatbot knowledge and use, (c) perceived benefits, and (d) limitations. Multiple-choice and Likert-scale questions were adopted. Factors associated with knowledge, use, and perceptions of AI were explored using logistic regression models.

Results

Of the 589 students (38%) who completed the survey, most were male (n = 317; 53.8%), with a mean age of 22 years (SD = 3.88). Nearly all (n = 561; 95.3%) had heard of AI Chatbots, but 53.7% (n = 316) had never used these tools for academic purposes. Among users, learning support was the most common purpose (n = 187; 31.8%), while only 9.9% (n = 58) reported Chatbot use during internships. Students agreed that Chatbots have limitations in performing complex tasks and may generate inaccurate results (median = 3 out of 4). However, they neither agreed nor disagreed about Chatbots’ impact on academic performance, emotional intelligence, bias, and fairness (median = 2 out of 4). Students agreed that the risk of misinformation is a primary barrier (median = 3 out of 4). In contrast, they neither agreed nor disagreed on content validity, plagiarism, privacy, and impacts on critical thinking and creativity (median = 2 out of 4). Each additional year of age was associated with 11% lower odds of being familiar with Chatbots (OR = 0.89; 95%CI 0.84–0.95; p < 0.01), i.e., younger students were more likely to know these tools, whereas female students had 39% lower odds than males of having used Chatbots for academic purposes (OR = 0.61; 95%CI 0.44–0.85; p < 0.01).

Conclusions

While most students recognize the potential of AI Chatbots, they express caution about their use in academia. Targeted training for students and faculty, supported by institutional and national guidelines, could guarantee a responsible integration of these technologies into physiotherapy education.

Trial registration

Not applicable.

Background

Recent advances in Artificial Intelligence (AI) have shown that computers can simulate human intelligence to perform tasks such as comprehension, reasoning, and problem-solving [1, 2]. AI Chatbots, such as ChatGPT, Google Gemini, and Microsoft Copilot, represent a growing application of AI that uses generative AI algorithms to simulate human interaction [3]. Their user-friendly interfaces, ability to predict words based on user prompts, and rapid response times have attracted attention across various fields, including the education of students in healthcare professions [4].

Several studies have analyzed the role of AI Chatbots in healthcare students’ education [1, 2, 5,6,7,8,9,10], highlighting their potential to enhance personalized learning experiences, provide immediate feedback, develop critical thinking, and augment problem-solving skills [1, 5]. AI Chatbots have shown their ability to create realistic clinical vignettes, improve communication skills, and act as a virtual mentor through interactive simulations [1]. They can also support healthcare students by ensuring writing accuracy, clarity and coherence in style and formatting, and providing appropriate language and terminology [6]. Furthermore, AI Chatbots can simplify complex concepts, assist in research activities, and help individual and group learning by improving efficiency and overcoming language barriers [7]. In addition to supporting individual learning, the literature highlights the relevance of co-constructing knowledge with generative AI tools [11].

However, AI Chatbots have several limitations in healthcare education, including challenges in attributing intellectual property to students’ work [12] and in assessing learning outcomes [13], as well as the risk of undermining critical thinking development and self-reflection [8]. They may also lead to over-dependence on technology, hinder peer interactions, and generate superficial or inaccurate content, including incorrect citations and fabricated sources [9, 10, 14, 15]. Ethical concerns include issues such as plagiarism, copyright violations, the lack of transparency, and the inability to verify content validity or detect misinformation, highlighting the need for human supervision [16, 17]. Moreover, their sustainability is a critical issue, as their operation requires significant energy resources, contributing to the global carbon footprint [18]. These risks underscore the importance of investigating how healthcare students perceive and respond to this emerging technology [4].

Available surveys [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46] have analyzed healthcare students’ use, perception, knowledge, and attitudes toward AI Chatbots, mainly ChatGPT. The results document that most healthcare students are aware of the existence of AI Chatbots [36, 37, 42], but their actual application in both academic and clinical contexts remains limited [26, 37, 42]. AI Chatbots are primarily used for information aggregation, text summarization, academic writing, and peer review of scientific publications, with their application to clinical practice still debated and less widely adopted [29, 32, 40]. Many students see AI Chatbots as effective tools to improve learning and productivity [21, 42], yet express skepticism regarding their reliability [30, 35, 42]. At the ethical and regulatory level, most students feel the need to set limits on AI Chatbot usage, particularly to prevent issues with plagiarism, bias, data privacy, and patient safety [23, 35, 37]. Students have a positive perception of AI in medicine and recognize its significant impact on healthcare; however, they express concerns about reduced human interaction and the potential dehumanization of the profession [20, 34]. Several of the available studies have recommended incorporating AI training into medical curricula, both through dedicated modules on the clinical applications of AI and through focused training on ethics and critical thinking, to promote responsible and judicious use of these tools in the future [20, 30].

Several significant factors have emerged regarding the use of AI Chatbots in academic and healthcare settings. Gender influences perceptions of AI Chatbots, with female students showing more positive attitudes [42] and male students attributing more diagnostic and clinical benefits to AI [34]. Year of study plays a key role: more experienced students hold a more critical view of AI than less experienced colleagues [20, 37]. Familiarity with AI Chatbots is associated with greater confidence in their use and more favourable perceptions of their reliability [36], while ethical and regulatory concerns are correlated with less academic and clinical use of these tools [32, 46]. Students who have used AI Chatbots at least once are more likely to use them again, although not necessarily for academic purposes [33, 35]. Moreover, greater knowledge and positive attitudes toward AI Chatbots are correlated with increased academic use [26], while perceived ease of use and usefulness are significant predictors of their future adoption [31, 33]. All these factors highlight the need for targeted educational strategies for informed integration of AI in healthcare education [20, 21, 30].

Analyzing students’ knowledge, use, and perceptions of AI Chatbots is needed to guide their appropriate application in education [14]. Studies available to date have focused mainly on students’ awareness of the existence and operation of ChatGPT and generative AI [26, 36, 42]. Epistemic knowledge, understood as critical reflection on the effects of AI on learning processes and knowledge construction, has also been examined [21, 23, 30]. However, studies have primarily focused on certain disciplines such as medicine [19,20,21, 23, 25, 26, 29, 30, 33, 34, 36, 37, 41,42,43,44], a mixture of health professions [22, 28, 36, 38, 39, 42], pharmacy [24, 32, 40, 42], dentistry [27, 36], and veterinary medicine [31], leaving other healthcare fields like physiotherapy unexplored. Moreover, while several studies have investigated AI Chatbots in clinical practice [47, 48], research on their application in physiotherapy education remains scarce [4]. Furthermore, most surveys on healthcare students’ perceptions of AI Chatbots have been conducted in America [21, 36,37,38, 40, 44], Asia [22, 24, 26, 28, 29, 32,33,34,35, 39, 42, 43], Europe [19, 20, 23, 27, 30, 41], Africa [25], and Australia [31], leaving countries such as Italy underrepresented.

Therefore, this study aims to investigate the knowledge and use of AI Chatbots, and the perceived benefits, opportunities, limits, and barriers to these tools, among Italian physiotherapy students. We considered knowledge as declarative knowledge, i.e., students’ awareness of the existence of this technological tool. We hypothesized that age, sex, and university career are key predictors of AI Chatbot knowledge and use, as these factors influence technological exposure, adoption patterns, and familiarity shaped by generational, gendered, and educational differences [49,50,51].

Methods

Study design

A quantitative cross-sectional survey was conducted in accordance with (a) the Checklist for Reporting Results of Internet E-Surveys (CHERRIES) [52], (b) the STrengthening the Reporting of OBservational Studies in Epidemiology (STROBE) [53], and (c) the Consensus-Based Checklist for Reporting of Survey Studies (CROSS) [54] guidelines. The study was approved by the Ethics Committee of Human Sciences at the University of Verona (approval number d. 2023_38, accepted on 24/01/2024) and was prospectively registered on the OSF database (https://osf.io/2hyuz/). Clinical trial number: not applicable. This study is part of a broader research project aiming to analyze the knowledge, use, perceptions, and barriers concerning the use of AI Chatbots among health professions students in the Italian context.

Participants and setting

A nationwide sample of Italian physiotherapy students was purposively recruited from Bachelor of Physiotherapy programs at ten Universities that agreed to participate in the study, between February and June 2024. The universities had a diverse geographic distribution across Italy, with five located in the north (University of Bologna, Milan San Raffaele, Trieste, Udine, and Verona), one in the center (University of Roma Sapienza), and four in the south (University of Catania, Molise, Napoli/Benevento, and Palermo).

In Italy, the Bachelor of Physiotherapy program educates healthcare professionals on delivering physiotherapy services in prevention, functional assessment, treatment, and rehabilitation, which aligns with the national professional profile [55]. The program lasts three years and requires students to complete 180 European Credit Transfer and Accumulation System credits to earn a Bachelor’s degree and licensure as a physiotherapist [56].

The inclusion criteria for participation in this study were: (a) enrollment in a Bachelor of Physiotherapy program; (b) possession of a valid university e-mail account; (c) the ability to understand and read Italian; and (d) willingness to participate. Exclusion criteria were: (a) physiotherapists who had already graduated; (b) individuals who refused consent to participate; and (c) those unable to understand the Italian language.

Sample size

The SurveyMonkey sample size calculator (https://www.surveymonkey.com/mp/sample-size-calculator/) was utilized to determine the required number of responses. Given a population size of 1,531 physiotherapy students from the ten universities involved, with a 5% margin of error (indicating how closely the survey results reflect the opinions of the entire population) and a 95% confidence level (indicating the degree of certainty that the population would choose an answer within a specific range), the required sample size was calculated to be 308 completed responses.
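The reported target of 308 responses corresponds to the standard finite-population sample-size formula. The sketch below reproduces that formula under common assumptions (z = 1.96 for 95% confidence, p = 0.5 as the most conservative proportion); it illustrates the calculation, not SurveyMonkey's exact implementation.

```python
import math

def required_sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Sample size with finite-population correction (illustrative sketch)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

print(required_sample_size(1531))  # → 308, matching the reported target
```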

Questionnaire survey development and pre-testing

The data collection tool was developed based on the existing surveys on AI conducted on healthcare students at the time of the project’s conception [14, 20, 21, 28, 33, 35, 57,58,59,60,61,62]. It was then critically evaluated for face and content validity by a multi-professional team of six professors and lecturers involved in education across physiotherapy and other healthcare fields (e.g., nursing, social science, and epidemiology) [63]. The experts reviewed the questions individually and provided feedback on content accuracy, relevance, clarity of wording, and survey structure.

Revisions were applied to enhance student comprehension, resulting in a final version containing 23 questions distributed across 10 web pages. After reaching a consensus on the final questionnaire, a preliminary version was pilot-tested on a convenience sample of 15 physiotherapy students from different regions of Italy (North, n = 5; Centre, n = 5; South, n = 5), guaranteeing national representation [63].

Following the pilot phase, a debriefing session was held, during which the experts interviewed the pilot students to identify any issues encountered while completing the survey, such as questions requiring further explanation or unclear wording. The pilot phase was satisfactory, and no further changes were needed.

Questionnaire implementation

The final version of the questionnaire comprised four sections.

  • Section 1 - Characteristics of respondents (n = 7 questions). This section examined socio-demographic and academic information. Two questions addressed socio-demographic variables (age and gender), one open-ended and the other a closed-ended, multiple-choice question. Four questions assessed the academic profile, covering university affiliation, year of enrollment, off-campus status, and difficulties in academic progression. Digital skills were evaluated with a single-answer multiple-choice question.

  • Section 2– Use and knowledge (n = 11 questions). This section investigated students’ use and knowledge of AI Chatbots through eleven closed-ended questions. Nine multiple-choice, single-answer questions assessed knowledge of Chatbots, the context in which students learned about them, use for academic purposes, evaluation of their experience with Chatbots, frequency of academic use, use during internship, frequency of clinical use, future usage propensity, and difficulty level in using Chatbots. Two additional multiple-choice questions analyzed the application of Chatbots in learning and internship contexts.

  • Section 3– Benefits and opportunities (n = 3 questions). This section explored students’ perceptions of AI Chatbots. Two Likert-scale questions evaluated the pros of AI Chatbots, rated from 0 (“Not at all useful” or “Strongly disagree”) to 4 (“Extremely useful” or “Strongly agree”). A multiple-choice question probed students’ opinions on adopting AI Chatbots in education.

  • Section 4– Limitations and barriers (n = 2 questions). This section assessed students’ beliefs about the cons of AI Chatbot use through two Likert-scale questions ranging from 0 (“Strongly disagree”) to 4 (“Strongly agree”).

Both Italian and English versions of the questionnaire are provided in Additional file 1 and 2, respectively.

Data collection procedure

The survey was administered and managed online using SurveyMonkey (SurveyMonkey, Palo Alto, California, www.surveymonkey.com) over a 16-week period from February 21, 2024, to June 21, 2024. Participants were invited via email from the Bachelor of Physiotherapy programs at the ten participating universities. The invitation email contained a link to the survey (https://it.surveymonkey.com/r/SurveyCDLinFisioterapia). The introductory section of the survey provided: (a) the purpose of the study, (b) a link to the comprehensive information note, (c) an explanation of data handling (emphasizing anonymity), (d) the informed consent statement, and (e) the invitation to complete the survey. The first three questions allowed participants to confirm their acknowledgement and approval of the informed consent, their understanding of the risks and benefits of the study, the voluntary nature of participation, the option to withdraw without any consequence, and their acceptance to participate [52].

Three email reminders were sent at four-week intervals to encourage non-responders to complete the survey. The estimated completion time was 10 min, aligning with the optimal duration to increase response rates for online surveys [64]. A specific function of SurveyMonkey was activated to prevent multiple responses from the same participant ID. Participation was voluntary, with no incentives offered [52]. Respondents could revise or edit their answers using the ‘back button’ before completing the questionnaire. At the end, a summary of their responses was provided to each participant.

Data were securely stored on an encrypted computer, accessible only to the project leader. Participants’ identities were concealed from the researchers, and all data, including names and email addresses, were anonymized to ensure confidentiality and data protection. This approach mitigated any potential psychological discomfort during survey completion [52].

Variables

The questionnaire included multiple-choice and Likert-scale questions. Demographic characteristics (e.g., gender, age, and university career) were grouped into categories. A multivariable logistic regression analysis was performed to assess the relationship between participants’ characteristics and their knowledge of AI Chatbots, use for academic purposes, and intention to use them in the future. In particular, the following questions were analysed: question 8 (‘Have you ever heard of AI Chatbots (e.g., ChatGPT, Google Bard, Bing)?’), question 10 (‘Have you ever used an AI Chatbot for teaching or academic purposes?’), question 13 (‘If yes, how often on average do you use AI Chatbots for teaching or academic purposes?’) and question 17 (‘How likely are you to use AI Chatbots in the future for academic purposes?’). For Likert-based questions, we collapsed choices to binary values (i.e., Never/Rarely were combined, as were Sometimes/Often/Always). The independent variables selected for the model were chosen based on a priori theoretical considerations [49,50,51] and were evaluated for multicollinearity, resulting in the subset of key questions included in the analysis. For each section, the response rate was computed automatically to identify the sample size and any premature drop-outs (i.e., respondents who did not complete all sections). Questionnaires with early interruptions were excluded from the analyses (e.g., participants who did not complete later sections, such as Section 3). Data were exported from SurveyMonkey and analysed using STATA software [65].
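The Never/Rarely vs. Sometimes/Often/Always collapsing described above can be sketched in plain Python (an illustrative sketch: the category labels mirror the survey answers, but the variable names are ours, not the study's codebook):

```python
LOW = {"Never", "Rarely"}                  # collapsed to 0
HIGH = {"Sometimes", "Often", "Always"}    # collapsed to 1

def dichotomize(answer):
    """Map a five-level Likert answer onto the binary coding used in the models."""
    if answer in LOW:
        return 0
    if answer in HIGH:
        return 1
    raise ValueError(f"Unexpected Likert answer: {answer!r}")

answers = ["Never", "Often", "Rarely", "Sometimes"]
print([dichotomize(a) for a in answers])  # → [0, 1, 0, 1]
```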

Statistical analysis

Normality of the data was tested using the Shapiro-Wilk test. Non-parametric statistical methods were used as the data were not normally distributed. The results were summarized using descriptive statistics, presented either as the median with interquartile range (IQR) or as absolute frequencies with corresponding percentages, depending on the type of variable. In addition, we provided a descriptive summary of the answers in ordinal format, from 0 (“Strongly disagree” or “Not at all useful”) to 4 (“Strongly agree” or “Extremely useful”), using the median and IQRs.

To identify potential predictors, the relationship between participants’ demographic characteristics (e.g., sex, age, university career) and their perceptions of AI Chatbots was examined with a logistic regression analysis, estimating the increased or decreased odds ratio (OR) of AI Chatbot use and perceptions and reporting the 95% confidence intervals (CI). Specifically, the following key questions were dichotomized (Sometimes/Often/Always vs. Never/Rarely) and analyzed: question 8 (‘Have you ever heard of artificial intelligence Chatbots (e.g. ChatGPT, Google Bard, Bing)?’), question 10 (‘Have you ever used an AI Chatbot for teaching or academic purposes?’), question 13 (‘If yes, how often on average do you use AI Chatbots for teaching or academic purposes?’) and question 17 (‘How likely are you to use AI Chatbots in the future for academic purposes?’).

Data were exported from SurveyMonkey and analyzed using STATA version 17 (StataCorp 2023). The questionnaire responses were visually represented in tables, figures, or graphs, which were created using R software [66]. Statistical significance was set at p < 0.05 for all analyses.

Results

Response rate

A total of 589 physiotherapy students out of 1,531 participated in the survey, with a response rate of 38%.

Section 1 - Respondent characteristics

Most respondents were male students (n = 317; 53.8%), and the average age was 22 years (SD = 3.9). Over half were enrolled at a university in northern Italy (n = 313; 53.1%), and nearly half were in the first year of their physiotherapy program (n = 255; 43.3%). Over half of the participants reported having good digital skills (n = 325; 55.2%). Almost all students indicated they were enrolled on time with their studies (n = 584; 99.2%) and progressing regularly without academic difficulties (n = 537; 91.2%). Detailed characteristics are provided in Table 1.

Table 1 General characteristics of the participants

Section 2– Use and knowledge of AI chatbots

Almost all students reported being familiar with AI Chatbots (n = 561; 95.3%), primarily through friends, family, and classmates (n = 257; 43.6%) or social media (n = 245; 41.6%). Of these, 53.7% had never used Chatbots for academic purposes (n = 316). Among those who had used them, the experience was reported as “very positive” (n = 47, 8%), “positive” (n = 170, 28.9%), “neutral” (n = 47, 8%), “negative” (n = 7; 1.2%) and “very negative” (n = 2, 0.3%). The primary uses for AI Chatbots included learning support (n = 187; 31.8%), text summarization (n = 81; 13.8%), and brainstorming (n = 77; 13.1%), as outlined in Table 2. The frequency of AI Chatbots use for learning or academic purposes was found to be “never” (n = 316, 53.7%), “rarely” (n = 133, 22.6%), “sometimes” (n = 98, 16.6%), “often” (n = 36, 6.1%) and “always” (n = 6; 1%).

Table 2 Use and knowledge about AI chatbots for academic purposes

During internships, 9.9% of students (n = 58) reported using AI Chatbots. The main purposes included identifying protocols and guidelines for patient management (n = 23; 3.9%), treatment strategies for patient issues (n = 18; 3.1%), and patient assessment procedures (n = 17; 2.9%), as shown in Table 2. The reported frequency of AI Chatbot use during internships was: “never” (n = 531, 90.2%), “rarely” (n = 16, 2.7%), “sometimes” (n = 30, 5.1%), “often” (n = 12, 2%) and “always” (n = 0, 0%). Regarding future intentions to use AI chatbots, students indicated the following frequencies: “sometimes” (n = 252; 42.8%), “rarely” (n = 107; 18.2%), “often” (n = 95; 16.1%), “never” (n = 124, 21.1%) and “always” (n = 11, 1.2%). When asked about the ease of use of AI Chatbots, 44.9% of students described it as “neither easy nor difficult” (n = 264), while others found it “easy” (n = 217; 36.8%), “very easy” (n = 74; 12.6%), “difficult” (n = 29, 4.9%) or “very difficult” (n = 5; 0.9%).

Section 3– Perceptions of AI chatbots

Regarding the perceived usefulness of AI Chatbots, students reported that these tools were “Somewhat useful” for functions such as grammar review, learning support, brainstorming, text summarization, and language learning. The AI Chatbots were perceived as “not very useful” (Median = 1 out of 4, IQR 1) for clinical case resolution and critical appraisal (Table 3).

Table 3 Perception: how useful do you believe AI chatbots are in the following areas?

Students “agreed” (Median = 3 out of 4; IQR 1) that AI Chatbots are accessible at convenient times, help save time and provide beneficial access to information. In contrast, students “neither agreed nor disagreed” (Median = 2 out of 4; IQR 1) on the clarity of information from Chatbots compared to professors, the utility of Chatbots over search engines (e.g., Google) or databases (e.g., PubMed), and the ease of interacting with Chatbots compared to lecturers (Table 4).

Table 4 Perception: what is your level of agreement with the following statements about AI chatbots?

More than half of the students (n = 341; 57.9%) believed it was too early to form a definitive opinion on the use of AI Chatbots in education, while 37.4% (n = 220) supported their active incorporation into academia.

Section 4– Limits and barriers of AI chatbots

Students “agreed” (Median = 3 out of 4, IQR 1) that AI Chatbots have limitations, such as struggling with complex tasks, generating factually inaccurate results, and an over-reliance on statistics that can reduce their usefulness in certain contexts. However, they “neither agreed nor disagreed” (Median = 2 out of 4, IQR 1) about AI Chatbots’ potential to positively impact academic performance, their lack of emotional intelligence and empathy leading to inappropriate results, and the risk of introducing bias and unfairness in outcomes (Table 5).

Table 5 Limits: do you think artificial intelligence chatbots are useful in clinical practice?

Concerning the potential limits and barriers of AI Chatbots, students “agreed” (Median = 3 out of 4; IQR 1) that AI Chatbots may risk spreading misinformation and reducing human interaction. In contrast, they “neither agreed nor disagreed” (Median = 2 out of 4; IQR 1) on barriers such as content validity, the risk of plagiarism, privacy and ethical concerns, and the potential negative impact on writing skills, critical thinking, and creativity. The details are presented in Table 6.

Table 6 Barriers: how much do you agree with the barriers of AI chatbots?

Predictors

The analysis of the association between demographic characteristics and participants’ perceptions and use of AI Chatbots revealed several findings. We observed a significant association of age with current knowledge of AI Chatbots: each additional year of age was associated with lower odds of having heard of AI Chatbots, meaning older students less frequently heard about them than younger students (OR = 0.89; 95%CI 0.84–0.95; p-value < 0.01). Female participants were less likely to use AI Chatbots for learning or academic purposes than male peers (OR = 0.61; 95%CI 0.44–0.85; p-value < 0.01). No other significant associations between age, sex, university career, and the evaluation of experience with AI Chatbots were found. All associations are reported in Table 7.
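The percentage framings used throughout the paper (11% lower odds per year of age, 39% lower odds for female students) follow directly from the reported odds ratios; a minimal arithmetic sketch:

```python
def pct_odds_change(odds_ratio):
    """Percent change in odds implied by an odds ratio (OR < 1 means lower odds)."""
    return round((odds_ratio - 1) * 100)

print(pct_odds_change(0.89))  # -11: each extra year of age, 11% lower odds
print(pct_odds_change(0.61))  # -39: female vs. male students, 39% lower odds
```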

Table 7 Association between characteristics and perceptions and use of AI chatbots

Legend: OR, odds ratio; AI, artificial intelligence; CI, confidence interval.

Discussion

Main findings

This study explored the knowledge and use, perceptions of benefits and opportunities, limitations and barriers of AI Chatbots among Italian physiotherapy students. The main findings suggest that a majority of students: (a) were familiar with AI Chatbots; (b) did not routinely use them for academic purposes or during internships; (c) held neutral perceptions about the usefulness and limitations of AI Chatbots in their educational journey; and (d) while finding them user-friendly, believed it was still too early to form a definitive opinion on their use in education. As reported in the literature [21, 43], these results highlight a prevailing sense of uncertainty and skepticism, suggesting that Italian students are still in an exploratory phase with this emerging technology and have yet to adopt a clear stance on its advantages and disadvantages. This also emphasizes the need to guide and train students in the use of AI Chatbots [19, 34, 36, 37, 40].

We also identified a significant association between students’ age and their awareness of AI Chatbots: each additional year of age was associated with 11% lower odds of having heard of AI Chatbots (OR = 0.89), indicating a negative association between age and awareness. This finding aligns with some studies [37, 67], but contradicts others, where later-year students reported higher awareness [19, 26, 36]. This result may reflect the greater exposure of young people to digital technology and automated learning tools; previous studies have shown a similar association, suggesting that younger generations tend to become familiar with technological innovation more quickly than previous generations [24, 26]. Furthermore, female participants in our sample showed 39% lower odds (OR = 0.61) than males of having used AI Chatbots for learning or academic purposes. This finding is consistent with certain studies [19, 36, 67], but contrasts with others that reported higher usage among female students [22, 36, 42]. However, other authors have documented that when adequate educational resources are provided, gender differences in technology adoption tend to narrow [42]. Overall, inconsistencies across studies may stem from generational digital literacy gaps, with younger students being more immersed in digital tools and thus having more opportunities to engage with Chatbots throughout their academic journey [68]. Moreover, socio-cultural factors [49, 69] and disparities in access to technology across countries [9] may explain gender and age differences.

Comparison with the evidence

Regarding knowledge and use, although almost all Italian physiotherapy students were familiar with AI Chatbots, the majority had never used them for academic purposes or during internships. This trend aligns with findings from other studies on health professions students conducted worldwide [24,25,26, 29, 30, 34, 37, 40, 42]. In contrast, American students reported greater use of AI Chatbots, likely reflecting cultural differences, better access to technological resources, and more robust educational support systems [37].

A possible reason for the low usage among Italian students might stem from their fear of being judged or facing penalties in their academic pursuits. This issue was also noted in Moskovic’s study [45], which revealed that some students refrain from admitting their use of AI Chatbots due to worries about backlash from professors or the risk of academic consequences. Various studies have reported that acceptance and perceptions regarding AI Chatbots can be shaped by socio-cultural factors [30, 32, 34, 36, 37]. It has been suggested that educational strategies and socio-demographic characteristics can influence healthcare students’ attitudes toward AI [26]. In Saudi Arabia, for instance, the government is actively encouraging the incorporation of AI in education, which directly affects how students accept it [42]. Furthermore, economic and technological disparities across countries with varying income levels may impact the familiarity and usage of AI-based tools [46]. Likewise, students from different regions have differing perceptions of AI, influenced by factors such as religion and academic background [45]. The limited use of AI Chatbots may thus be linked to individual factors as well as the social and academic pressures that affect their acceptance.

Students generally maintained a neutral stance regarding the perceived usefulness of AI Chatbots, considering these technologies only moderately helpful for the tasks they performed, partially confirming existing evidence [23, 34, 37, 42, 70]. Previous surveys indicated that students’ positive perceptions of AI Chatbots were linked to their use in educational tasks such as synthesizing study material provided by professors [23, 37, 40, 43, 44], revising essays [23, 25, 37, 44], and preparing research grant proposals [21, 24, 34, 36, 37, 44, 62, 70]. However, Cherrez-Ojeda et al. [36] found that although students recognized the advantages of AI Chatbots in academic writing, they remained doubtful about their effectiveness in handling complex reasoning tasks. This supports our finding that students view AI Chatbots as supportive tools rather than substitutes for critical thinking and problem-solving. Differences in perceptions may also stem from differing expectations and mindsets among students regarding the potential benefits of AI Chatbots. For example, students might view these tools as insufficiently tailored to their specific needs or as lacking features that align with their unique educational pathways [9]. Another relevant element is that many Italian physiotherapy students stated that AI Chatbots are no more useful than traditional search engines (e.g., Google and PubMed). According to students’ perceptions, AI Chatbots do not offer significant added value over the tools already available. However, this is inconsistent with the literature, where students preferred ChatGPT when seeking academic assistance [71].

Regarding the perceived limitations, our results align with available evidence [32, 40]: students expressed concerns about AI Chatbots’ ability to manage complex requests, such as the management of patients’ painful conditions. In physiotherapy training, developing clinical reasoning is a critical skill that cannot be delegated to AI Chatbots [21]. This suggests that AI technologies could serve as complementary tools to support, rather than replace, human expertise [32, 40]. Several surveys have also highlighted barriers such as over-reliance on technology [22, 26, 28, 32], concerns about the accuracy and reliability of answers [21, 22, 24, 26, 32, 34, 43], and issues related to plagiarism and lack of transparency [23, 26, 35, 37, 42]. Interestingly, these concerns elicited a neutral response in our study, possibly reflecting Italian students’ limited familiarity with the academic risks associated with AI Chatbots. Integrating training on digital literacy and AI-related ethical issues into the physiotherapy curriculum could address these gaps and promote more informed evaluations of AI Chatbot applications [9].

A barrier to increased acceptance of AI Chatbots among Italian students could be the absence of clear guidelines regarding their application in academic and clinical contexts. In contrast, the United States and certain Asian nations have already established regulations and incorporated AI-based tools into their university programs [72, 73]. Furthermore, institutional cultural differences may influence how technology is perceived. Some academic institutions embrace a more exploratory and experimental mindset, while others adopt a more cautious stance due to ethical and regulatory issues [20].

Implications for education and research

While the debate on AI Chatbots in academia continues, prohibiting their use is considered counterproductive, despite some institutions having implemented such bans [74]. Rather than imposing a ban, a more farsighted strategy would be to guide the integration of these technological tools into academia through multi-level actions involving all stakeholders.

Students should be informed about the pros and cons of AI Chatbots, promoting awareness of the essential value of individual study and fostering professional competencies. They should demonstrate their knowledge independently, without over-relying on AI Chatbots [4]. Faculty members should stay updated on emerging technologies through continuous professional development, allowing them to guide students in the conscientious and ethical adoption of AI [48]. Furthermore, given the ongoing technological transition, reexamining student assessment systems offers an opportunity to preserve academic integrity and support adequate learning outcomes [74, 75]. At the institutional level, universities should create shared guidelines and regulations to guarantee educational standards nationwide and maintain ethical and professional integrity [9].

Future research should explore the perspectives of students enrolled in undergraduate and master’s programs in physiotherapy and other healthcare disciplines (e.g., nursing, speech therapy), especially across continents with socio-cultural diversity (e.g., Africa and Asia). Moreover, studies should examine correlations between other variables (e.g., barriers and perceptions) to better understand the topic’s complexity. Adopting diverse methodological designs could help to capture the complexity of the phenomenon [48]. For example, semi-structured interviews could explore students’ lived experiences with AI Chatbots [76]. Moreover, prospective observational studies could evaluate whether Chatbot use impacts the development or deterioration of expected skills over time [77]. Finally, randomized studies could assess the effectiveness of AI Chatbots compared to traditional methods in enhancing student learning outcomes [78].

Strengths and limitations

A strength of this study is that it represents the first analysis of the knowledge, perceived benefits and opportunities, barriers, and applications of AI Chatbots among physiotherapy students, shedding light on the role of this technology within the Italian academic context. The research was developed and conducted following international reporting guidelines (STROBE, CHERRIES, and CROSS) [52,53,54], ensuring methodological rigor. Furthermore, the survey design, composed of various question types (e.g., single- and multiple-choice), allowed for a more comprehensive exploration of the multifaceted nature of the phenomenon [79].

However, several limitations should be acknowledged. First, although the number of respondents exceeded the required sample size, the response rate was lower compared with similar studies [19, 21, 22, 24, 25, 29, 31, 34, 37, 40, 62], which may limit the generalizability of the findings. Second, the use of self-reported and retrospective data could have introduced recall bias or social desirability bias, potentially impacting the accuracy of the responses [79]. Lastly, despite explicit instructions to provide truthful responses, some participants may have inaccurately reported their use of AI Chatbots, either intentionally or due to misunderstanding certain survey questions [79].

Conclusions

In our study, Italian physiotherapy students reported a limited but increasing awareness of and interest in AI Chatbots. While most students acknowledged the opportunities offered by these technologies, they expressed caution about their adoption in education. Integrating AI Chatbots into academia requires adequate training for students and faculty to guarantee their responsible use in physiotherapy education. Future research should focus on students’ long-term learning outcomes associated with AI Chatbot adoption and on the development of guidelines to balance their pros and cons.

Data availability

The datasets generated and/or analysed during the current study are available in the Open Science Framework (OSF) repository, https://osf.io/2hyuz/.

Abbreviations

AI:

Artificial Intelligence

GPT:

Generative Pre-trained Transformer

STROBE:

Strengthening of Reporting of Observational Studies in Epidemiology

CHERRIES:

Checklist for Reporting Results of Internet E-Surveys

CROSS:

Consensus-Based Checklist for Reporting Survey Studies

References

  1. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthc Basel Switz. 2023;11:887. https://doiorg.publicaciones.saludcastillayleon.es/10.3390/healthcare11060887

  2. Abd-Alrazaq A, AlSaad R, Alhuwail D, Ahmed A, Healy PM, Latifi S, et al. Large Language models in medical education: opportunities, challenges, and future directions. JMIR Med Educ. 2023;9:e48291. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/48291

  3. Sarker IH. AI-Based modeling: techniques, applications and research issues towards automation, intelligent and smart systems. SN Comput Sci. 2022;3:158. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/s42979-022-01043-x

  4. Rossettini G, Rodeghiero L, Corradi F, Cook C, Pillastrini P, Turolla A, et al. Comparative accuracy of ChatGPT-4, Microsoft copilot and Google gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study. BMC Med Educ. 2024;24:694. https://doiorg.publicaciones.saludcastillayleon.es/10.1186/s12909-024-05630-9

  5. Shorey S, Mattar C, Pereira TL-B, Choolani M. A scoping review of ChatGPT’s role in healthcare education and research. Nurse Educ Today. 2024;135:106121. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.nedt.2024.106121

  6. Lucas HC, Upperman JS, Robinson JR. A systematic review of large Language models and their implications in medical education. Med Educ. 2024. https://doiorg.publicaciones.saludcastillayleon.es/10.1111/medu.15402

  7. Xu X, Chen Y, Miao J. Opportunities, challenges, and future directions of large Language models, including ChatGPT in medical education: a systematic scoping review. J Educ Eval Health Prof. 2024;21:6. https://doiorg.publicaciones.saludcastillayleon.es/10.3352/jeehp.2024.21.6

  8. Sallam M, Salim NA, Barakat M, Al-Tammemi AB. ChatGPT applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations. Narra J. 2023;3:e103. https://doiorg.publicaciones.saludcastillayleon.es/10.52225/narra.v3i1.103

  9. Rossettini G, Cook C, Palese A, Pillastrini P, Turolla A. Pros and cons of using artificial intelligence chatbots for musculoskeletal rehabilitation management. J Orthop Sports Phys Ther. 2023;53:728–34. https://doiorg.publicaciones.saludcastillayleon.es/10.2519/jospt.2023.12000

  10. Harrer S. Attention is not all you need: the complicated case of ethically using large Language models in healthcare and medicine. EBioMedicine. 2023;90:104512. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.ebiom.2023.104512

  11. Cress U, Kimmerle J. Co-constructing knowledge with generative AI tools: reflections from a CSCL perspective. Int J Comput-Support Collab Learn. 2023;18:607–14. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/s11412-023-09409-w

  12. Peres R, Schreier M, Schweidel D, Sorescu A. On ChatGPT and beyond: how generative artificial intelligence May affect research, teaching, and practice. Int J Res Mark. 2023;40:269–75. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.ijresmar.2023.03.001

  13. Zhai X. ChatGPT user experience: implications for education. SSRN Electron J. 2022. https://doiorg.publicaciones.saludcastillayleon.es/10.2139/ssrn.4312418

  14. Chan CKY, Hu W. Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. Int J Educ Technol High Educ. 2023;20:43. https://doiorg.publicaciones.saludcastillayleon.es/10.1186/s41239-023-00411-8

  15. The Lancet Digital Health null. ChatGPT: friend or foe? Lancet Digit Health. 2023;5:e102. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/S2589-7500(23)00023-7

  16. Kitamura FC. ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology. 2023;307:e230171. https://doiorg.publicaciones.saludcastillayleon.es/10.1148/radiol.230171

  17. Lubowitz JH. ChatGPT, an artificial intelligence chatbot, is impacting medical literature. Arthroscopy. 2023;39:1121–2. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.arthro.2023.01.015

  18. Khowaja SA, Khuwaja P, Dev K, Wang W, Nkenyereye L. ChatGPT needs SPADE (Sustainability, privacy, digital divide, and Ethics) evaluation: A review. Cogn Comput. 2024;16:2528–50. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/s12559-024-10285-1

  19. Blease C, Kharko A, Bernstein M, Bradley C, Houston M, Walsh I, et al. Machine learning in medical education: a survey of the experiences and opinions of medical students in Ireland. BMJ Health Care Inf. 2022;29:e100480. https://doiorg.publicaciones.saludcastillayleon.es/10.1136/bmjhci-2021-100480

  20. Buabbas AJ, Miskin B, Alnaqi AA, Ayed AK, Shehab AA, Syed-Abdul S, et al. Investigating students’ perceptions towards artificial intelligence in medical education. Healthc Basel Switz. 2023;11:1298. https://doiorg.publicaciones.saludcastillayleon.es/10.3390/healthcare11091298

  21. Hosseini M, Gao CA, Liebovitz D, Carvalho A, Ahmad FS, Luo Y et al. An exploratory survey about using ChatGPT in education, healthcare, and research. medRxiv 2023:2023.03.31. https://doiorg.publicaciones.saludcastillayleon.es/10.1101/2023.03.31.23287979

  22. Hu J-M, Liu F-C, Chu C-M, Chang Y-T. Health care trainees’ and professionals’ perceptions of ChatGPT in improving medical knowledge training: rapid survey study. J Med Internet Res. 2023;25:e49385. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/49385

  23. Moldt J-A, Festl-Wietek T, Madany Mamlouk A, Nieselt K, Fuhl W, Herrmann-Werner A. Chatbots for future docs: exploring medical students’ attitudes and knowledge towards artificial intelligence and medical chatbots. Med Educ Online. 2023;28:2182659. https://doiorg.publicaciones.saludcastillayleon.es/10.1080/10872981.2023.2182659

  24. Mosleh R, Jarrar Q, Jarrar Y, Tazkarji M, Hawash M. Medicine and pharmacy students’ knowledge, attitudes, and practice regarding artificial intelligence programs: Jordan and West bank of Palestine. Adv Med Educ Pract. 2023;14:1391–400. https://doiorg.publicaciones.saludcastillayleon.es/10.2147/AMEP.S433255

  25. Oluwadiya KS, Adeoti AO, Agodirin SO, Nottidge TE, Usman MI, Gali MB, et al. Exploring artificial intelligence in the Nigerian medical educational space: an online cross-sectional study of perceptions, risks and benefits among students and lecturers from ten universities. Niger Postgrad Med J. 2023;30:285–92. https://doiorg.publicaciones.saludcastillayleon.es/10.4103/npmj.npmj_186_23

  26. George Pallivathukal R, Kyaw Soe HH, Donald PM, Samson RS, Hj Ismail AR. ChatGPT for academic purposes: survey among undergraduate healthcare students in Malaysia. Cureus. 2024;16:e53032. https://doiorg.publicaciones.saludcastillayleon.es/10.7759/cureus.53032

  27. Roganović J. Familiarity with ChatGPT features modifies expectations and learning outcomes of dental students. Int Dent J. 2024;S0020–65392400117–5. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.identj.2024.04.012

  28. Sallam M, Salim NA, Barakat M, Al-Mahzoum K, Al-Tammemi AB, Malaeb D, et al. Assessing health students’ attitudes and usage of ChatGPT in Jordan: validation study. JMIR Med Educ. 2023;9:e48254. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/48254

  29. Tangadulrat P, Sono S, Tangtrakulwanich B. Using ChatGPT for clinical practice and medical education: Cross-Sectional survey of medical students’ and physicians’ perceptions. JMIR Med Educ. 2023;9:e50658. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/50658

  30. Weidener L, Fischer M. Artificial intelligence in medicine: Cross-Sectional study among medical students on application, education, and ethical aspects. JMIR Med Educ. 2024;10:e51247. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/51247

  31. Worthing KA, Roberts M, Šlapeta J. Surveyed veterinary students in Australia find ChatGPT practical and relevant while expressing no concern about artificial intelligence replacing veterinarians. Vet Rec Open. 2024;11:e280. https://doiorg.publicaciones.saludcastillayleon.es/10.1002/vro2.80

  32. Zawiah M, Al-Ashwal FY, Gharaibeh L, Abu Farha R, Alzoubi KH, Abu Hammour K, et al. ChatGPT and clinical training: perception, concerns, and practice of Pharm-D students. J Multidiscip Healthc. 2023;16:4099–110. https://doiorg.publicaciones.saludcastillayleon.es/10.2147/JMDH.S439223

  33. Zou M, Huang L. To use or not to use? Understanding doctoral students’ acceptance of ChatGPT in writing through technology acceptance model. Front Psychol. 2023;14:1259531. https://doiorg.publicaciones.saludcastillayleon.es/10.3389/fpsyg.2023.1259531

  34. Alkhaaldi SMI, Kassab CH, Dimassi Z, Oyoun Alsoud L, Al Fahim M, Al Hageh C, et al. Medical student experiences and perceptions of ChatGPT and artificial intelligence: Cross-Sectional study. JMIR Med Educ. 2023;9:e51302. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/51302

  35. Ibrahim H, Liu F, Asim R, Battu B, Benabderrahmane S, Alhafni B, et al. Perception, performance, and detectability of conversational artificial intelligence across 32 university courses. Sci Rep. 2023;13:12187. https://doiorg.publicaciones.saludcastillayleon.es/10.1038/s41598-023-38964-3

  36. Cherrez-Ojeda I, Gallardo-Bastidas JC, Robles-Velasco K, Osorio MF, Velez Leon EM, Leon Velastegui M, et al. Understanding health care students’ perceptions, beliefs, and attitudes toward AI-Powered Language models: Cross-Sectional study. JMIR Med Educ. 2024;10:e51757. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/51757

  37. Ganjavi C, Eppler M, O’Brien D, Ramacciotti LS, Ghauri MS, Anderson I, et al. ChatGPT and large Language models (LLMs) awareness and use. A prospective cross-sectional survey of U.S. Medical students. PLOS Digit Health. 2024;3:e0000596. https://doiorg.publicaciones.saludcastillayleon.es/10.1371/journal.pdig.0000596

  38. Kazley AS, Andresen C, Mund A, Blankenship C, Segal R. Is use of ChatGPT cheating? Students of health professions perceptions. Med Teach 2024:1–5. https://doiorg.publicaciones.saludcastillayleon.es/10.1080/0142159X.2024.2385667

  39. Liu J, Wu S, Liu S. Perception of ChatGPT by nursing undergraduates. Stud Health Technol Inf. 2024;315:669–70. https://doiorg.publicaciones.saludcastillayleon.es/10.3233/SHTI240271

  40. Anderson HD, Kwon S, Linnebur LA, Valdez CA, Linnebur SA. Pharmacy student use of ChatGPT: A survey of students at a U.S. School of pharmacy. Curr Pharm Teach Learn. 2024;16:102156. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.cptl.2024.102156

  41. Thomae AV, Witt CM, Barth J. Integration of ChatGPT into a course for medical students: explorative study on teaching scenarios, students’ perception, and applications. JMIR Med Educ. 2024;10:e50545. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/50545

  42. Alharbi MK, Syed W, Innab A, Basil A, Al-Rawi M, Alsadoun A, Bashatah A. Healthcare students attitudes opinions perceptions and perceived Obstacles regarding ChatGPT in Saudi Arabia: a survey–based cross–sectional study. Sci Rep. 2024;14:22800. https://doiorg.publicaciones.saludcastillayleon.es/10.1038/s41598-024-73359-y

  43. Mondal H, Karri JKK, Ramasubramanian S, Mondal S, Juhi A, Gupta P. A qualitative survey on perception of medical students on the use of large Language models for educational purposes. Adv Physiol Educ. 2024. https://doiorg.publicaciones.saludcastillayleon.es/10.1152/advan.00088.2024

  44. Zhang JS, Yoon C, Williams DKA, Pinkas A. Exploring the usage of ChatGPT among medical students in the united States. J Med Educ Curric Dev. 2024;11:23821205241264695. https://doiorg.publicaciones.saludcastillayleon.es/10.1177/23821205241264695

  45. Moskovich L, Rozani V. Health profession students’ perceptions of ChatGPT in healthcare and education: insights from a mixed-methods study. BMC Med Educ. 2025;25:98. https://doiorg.publicaciones.saludcastillayleon.es/10.1186/s12909-025-06702-0

  46. Ravšelj D, Keržič D, Tomaževič N, Umek L, Brezovar N, Iahad NA et al. Higher education students’ perceptions of ChatGPT: A global study of early reactions N.d. https://doiorg.publicaciones.saludcastillayleon.es/10.1371/journal.pone.0315011

  47. Bilika P, Stefanouli V, Strimpakos N, Kapreli EV. Clinical reasoning using ChatGPT: is it beyond credibility for physiotherapists use? Physiother Theory Pract 2023:1–20. https://doiorg.publicaciones.saludcastillayleon.es/10.1080/09593985.2023.2291656

  48. Gianola S, Bargeri S, Castellini G, Cook C, Palese A, Pillastrini P, et al. Performance of ChatGPT compared to clinical practice guidelines in making informed decisions for lumbosacral radicular pain: A Cross-sectional study. J Orthop Sports Phys Ther. 2024;54:222–8. https://doiorg.publicaciones.saludcastillayleon.es/10.2519/jospt.2024.12151

  49. Venkatesh V, Morris MG, Ackerman PL. A longitudinal field investigation of gender differences in individual technology adoption Decision-Making processes. Organ Behav Hum Decis Process. 2000;83:33–60. https://doiorg.publicaciones.saludcastillayleon.es/10.1006/obhd.2000.2896

  50. Cooper J. The digital divide: the special case of gender. J Comput Assist Learn. 2006. https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1365-2729.2006.00185.x (accessed December 12, 2024).

  51. Mishra P, Koehler MJ. Technological pedagogical content knowledge: A framework for teacher knowledge. Teach Coll Rec. 2006;108:1017–54. https://doiorg.publicaciones.saludcastillayleon.es/10.1111/j.1467-9620.2006.00684.x

  52. Eysenbach G, Wyatt J. Using the internet for surveys and health research. J Med Internet Res. 2002;4:E13. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/jmir.4.2.e13

  53. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ. 2007;335:806–8. https://doiorg.publicaciones.saludcastillayleon.es/10.1136/bmj.39335.541782.AD

  54. Sharma A, Minh Duc NT, Luu Lam Thang T, Nam NH, Ng SJ, Abbas KS, et al. A Consensus-Based checklist for reporting of survey studies (CROSS). J Gen Intern Med. 2021;36(10):3179–87. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/s11606-021-06737-1

  55. Trova Norme & Concorsi - Normativa Sanitaria. https://www.trovanorme.salute.gov.it/norme/dettaglioAtto?id=6627%26;articolo=1 (accessed August 21, 2024).

  56. Gazzetta Ufficiale. https://www.gazzettaufficiale.it/eli/id/2000/09/06/000G0299/sg (accessed August 21, 2024).

  57. Bisdas S, Topriceanu C-C, Zakrzewska Z, Irimia A-V, Shakallis L, Subhash J, et al. Artificial intelligence in medicine: A multinational Multi-Center survey on the medical and dental students’ perception. Front Public Health. 2021;9:795284. https://doiorg.publicaciones.saludcastillayleon.es/10.3389/fpubh.2021.795284

  58. Malik R, Sharma A, Trivedi S, Mishra R. Adoption of chatbots for learning among university students: role of perceived convenience and enhanced performance. Int J Emerg Technol Learn (iJET) 16(18), pp. 200–12. https://doiorg.publicaciones.saludcastillayleon.es/10.3991/ijet.v16i18.24315

  59. Pinto Dos Santos D, Giese D, Brodehl S, Chon SH, Staab W, Kleinert R, et al. Medical students’ attitude towards artificial intelligence: a multicentre survey. Eur Radiol. 2019;29:1640–6. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/s00330-018-5601-1

  60. Alsobhi M, Khan F, Chevidikunnan MF, Basuodan R, Neamatallah Z. Physical therapists’ knowledge and attitudes regarding artificial intelligence applications in health care and rehabilitation: Cross-sectional study. J Med Internet Res. 2022;24(10):e39565. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/39565

  61. Garrel J. Artificial intelligence in studies—use of ChatGPT and AI-based tools among students. Ger Humanit Soc Sci Commun. 2023;10:799. https://doiorg.publicaciones.saludcastillayleon.es/10.1057/s41599-023-02304-7

  62. Liu DS, Sawyer J, Luna A, Aoun J, Wang J, Boachie, Lord, et al. Perceptions of US medical students on artificial intelligence in medicine: mixed methods survey study. JMIR Med Educ. 2022;8:e38325. https://doiorg.publicaciones.saludcastillayleon.es/10.2196/38325

  63. de Leeuw E, Hox J, Dillman D. International handbook of survey methodology. New York: European Association of Methodology; 2008.

  64. Fan W, Yan Z. Factors affecting response rates of the web survey: A systematic review. Comput Hum Behav. 2010;26:132–9. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.chb.2009.10.015

  65. StataCorp. Stata statistical software: release 16. College Station, TX: StataCorp LLC; 2023.

  66. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. 2021. Available from: https://www.R-project.org/

  67. Pan G, Ni J. A cross sectional investigation of ChatGPT-like large Language models application among medical students in China. BMC Med Educ. 2024;24(1):908. https://doiorg.publicaciones.saludcastillayleon.es/10.1186/s12909-024-

  68. Hadi Mogavi R, Deng C, Juho Kim J, Zhou P, Kwon D, Hosny Saleh Metwally Y. ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions. Comput Hum Behav Artif Hum. 2024;2:100027. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.chbah.2023.100027

  69. Li S, Glass R, Records H. The Influence of Gender on New Technology Adoption and Use–Mobile Commerce 2008. https://doiorg.publicaciones.saludcastillayleon.es/10.1080/15332860802067748

  70. Sy C, Hy K, Sh C. Perceptions of ChatGPT in healthcare: usefulness, trust, and risk. Front Public Health. 2024;12. https://doiorg.publicaciones.saludcastillayleon.es/10.3389/fpubh.2024.1457131

  71. Zhang M, Yang X. Google or ChatGPT: who is the better helper for university students. Educ Inf Technol 2024:1–22. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/s10639-024-13002-5

  72. Hung J, Chen J. The benefits, risks and regulation of using ChatGPT in Chinese academia: A content analysis. Soc Sci. 2023;12:380. https://doiorg.publicaciones.saludcastillayleon.es/10.3390/socsci12070380

  73. Wang H, Dang A, Wu Z, Mac S. Generative AI in higher education: seeing ChatGPT through universities’ policies, resources, and guidelines. Comput Educ Artif Intell. 2024;7:100326. https://doiorg.publicaciones.saludcastillayleon.es/10.1016/j.caeai.2024.100326

  74. Albadarin Y, Saqr M, Pope N, Tukiainen M. A systematic literature review of empirical research on ChatGPT in education. Discov Educ. 2024;3:60. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/s44217-024-00138-2

  75. Montenegro-Rueda M, Fernández-Cerero J, Fernández-Batanero JM, López-Meneses E. Impact of the implementation of ChatGPT in education. Syst Rev Computers. 2023;12:153. https://doiorg.publicaciones.saludcastillayleon.es/10.3390/computers12080153

  76. Nathan S, Newman C, Lancaster K. Qualitative Interviewing. In: Liamputtong P, editor. Handb. Res. Methods Health Soc. Sci., Singapore: Springer; 2019:391–410. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/978-981-10-5251-4_77

  77. Hess A, Abd-Elsayed A. Observational Studies: Uses and Limitations. 2019:123–5. https://doiorg.publicaciones.saludcastillayleon.es/10.1007/978-3-319-99124-5_31

  78. Bauchner H, Golub RM, Fontanarosa PB. Reporting and interpretation of randomized clinical trials. JAMA. 2019;322:732–5. https://doiorg.publicaciones.saludcastillayleon.es/10.1001/jama.2019.12056

  79. Schmidt WC. World-Wide web survey research: benefits, potential problems, and solutions. Behav Res Methods Instrum Comput. 1997;29:274–9. https://doiorg.publicaciones.saludcastillayleon.es/10.3758/BF03204826

Acknowledgements

The authors thank the Department of Innovation, Research, University and Museums of the Autonomous Province of Bozen/Bolzano for covering the Open Access publication costs.

Funding

The authors received funding from the Autonomous Province of Bozen/Bolzano – South Tyrol to cover the Open Access publication costs.

Author information

Authors and Affiliations

Authors

Contributions

GR, FT, MGL, AP conceived and designed the research and wrote the first draft. FT, GR managed the acquisition of data. SG, GC, AP, GR managed the analysis and interpretation of data. FT, AP, AT, GC, PP, SG, MGL, CC, GG, GGi, LR, GR wrote the first draft. All authors read, revised, and approved the final version of the manuscript.

Corresponding author

Correspondence to Lia Rodeghiero.

Ethics declarations

Ethics approval and consent to participate

The Ethics Committee of Human Sciences at the University of Verona (approval number d. 2023_38, accepted on 24/01/2024) approved the present study. The study adhered to the Declaration of Helsinki.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Authors’ information

A multidisciplinary group of healthcare science educators promoted and developed this study in Italy. The group consisted of professors, lecturers, and tutors actively involved in university education in different healthcare science disciplines (e.g., rehabilitation, physiotherapy, nursing).

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article

Cite this article

Tortella, F., Palese, A., Turolla, A. et al. Knowledge and use, perceptions of benefits and limitations of artificial intelligence chatbots among Italian physiotherapy students: a cross-sectional national study. BMC Med Educ 25, 572 (2025). https://doiorg.publicaciones.saludcastillayleon.es/10.1186/s12909-025-07176-w
