
AI usage among medical students in Palestine: a cross-sectional study and demonstration of AI-assisted research workflows

Abstract

Background

Artificial Intelligence (AI) is transforming medical education globally, offering solutions to challenges such as resource limitations and limited clinical exposure. However, its integration in resource-constrained settings like Palestine remains underexplored. This study evaluates the prevalence, impact, and challenges of AI use among Palestinian medical students, focusing on academic performance, clinical competence, and research productivity.

Methods

A cross-sectional study was conducted among 590 medical students from Palestinian universities. Data were collected using a validated electronic questionnaire, covering demographics, AI usage patterns, and perceived impacts across academic, clinical, and research domains. Initial analysis was conducted using AI tools, specifically ChatGPT, to facilitate insights and structure the results effectively. Statistical analyses were performed using IBM SPSS v27 to validate findings. Statistical significance was set at p < 0.05. The draft underwent detailed reviews by the research team to confirm accuracy and validity.

Results

AI adoption was high, with 87% of students frequently using tools like ChatGPT (76%) and virtual simulators (26%). Students reported significant improvements in academic performance (mean score: 4.2, SD = 0.7) and research productivity (mean score: 4.5, SD = 0.6), particularly in literature reviews and data analysis. Clinical competence received moderate ratings (mean score: 3.6, SD = 0.8), reflecting AI's limited role in practical skill development. Time management was highly rated (mean score: 4.6, SD = 0.5), highlighting AI's ability to automate repetitive tasks. Challenges included ethical concerns, data accuracy, and limited AI literacy, with 91% lacking formal AI training.

Conclusion

AI demonstrates significant potential to enhance medical education in resource-constrained settings by improving academic outcomes and research efficiency. ChatGPT played a critical role in this study, not only as a tool used by participants but also in the research process itself, including data analysis and manuscript drafting. These findings were cross-verified using SPSS to ensure robustness. Despite its promise, limitations in practical clinical applications and technical understanding highlight the need for targeted AI literacy programs and ethical guidelines. This study underscores the importance of integrating AI into medical curricula to address existing gaps and maximize its benefits in similar global contexts.


Background

Artificial Intelligence (AI) is increasingly integrated into medical education and healthcare, offering tools that enhance learning, clinical training, and research productivity. In resource-limited settings such as Palestine, where access to educational infrastructure and clinical exposure may be constrained, AI presents scalable solutions that can help overcome systemic barriers to training and academic development.

AI supports personalized learning through adaptive platforms that adjust content based on individual progress. These systems improve engagement, comprehension, and knowledge retention. Chan and Zary [1] demonstrated that AI-driven instructional models enhance academic performance by delivering customized content and immediate feedback.

In clinical education, AI enables simulated experiences via virtual patients and diagnostic decision-support systems. These tools offer risk-free environments to practice clinical reasoning and procedural skills, particularly valuable where real-time patient interaction is limited. Sit et al. [2] highlighted how such applications strengthen diagnostic accuracy and analytical thinking.

AI also contributes to research by automating repetitive tasks such as literature synthesis, data extraction, and manuscript drafting. This is especially beneficial in low-resource academic settings, where students often lack access to expert mentorship or formal research training. Alowais et al. [3] found that AI-assisted writing tools significantly reduce cognitive load, allowing students to focus on critical thinking and data interpretation.

This study adopts Bloom’s Taxonomy as a conceptual framework to explore AI’s educational utility. AI tools, including large language models, can support cognitive processes across all levels from basic recall and understanding to complex tasks like analysis, evaluation, and creation. Their application in medical education offers potential to reinforce foundational knowledge while also promoting higher-order academic and clinical competencies.

Challenges in AI adoption

Despite its benefits, AI integration in medical education faces several challenges. Key concerns include data privacy, algorithmic transparency, and the risk of over-reliance on AI-generated outputs. Tang and Buerhaus [4] emphasized the need for ethical safeguards and transparent methodologies in both academic and clinical contexts. Biswas [5] similarly warned that uncritical dependence on AI may compromise accuracy and reduce the development of independent reasoning skills.

Another barrier is the limited technical literacy among medical students. While many can operate AI tools, few possess a foundational understanding of their underlying mechanisms. This disconnect hinders effective and responsible use. Laupichler et al. [6] advocated for targeted curricular interventions that provide both theoretical knowledge and practical training in AI applications relevant to medicine.

Contextual relevance in Palestine

Medical education in Palestine is shaped by economic and institutional limitations, including restricted access to clinical training resources and inconsistent availability of academic support. AI may offer a viable path to address these limitations by providing cost-effective, accessible alternatives for simulation-based learning, content delivery, and academic research support.

Despite the potential benefits, empirical data on AI usage in Palestinian medical education remains scarce. Existing literature focuses primarily on high-income settings, with limited insight into adoption patterns, educational outcomes, or student perspectives in under-resourced contexts. This study aims to address that gap by examining how Palestinian medical students engage with AI tools and how they perceive its impact on academic performance, research engagement, and clinical competence.

Significance of the study

This research investigates the dual role of AI as a learning aid and research assistant. By evaluating student-reported usage of AI across academic, clinical, and research settings, this study offers insights into how medical students perceive the role of AI tools in their learning journey. Rather than claiming systemic transformation, the study highlights individual-level experiences and practical applications of AI. The findings may serve as a foundation for future research and curriculum development aimed at integrating AI into medical education in resource-constrained environments.

Research objectives

This study aims to assess the frequency, types, and perceived impact of AI tool usage among medical students in Palestine. Specifically, the objectives are:

  1. To quantify the frequency and purpose of AI tool usage in academic, clinical, and research contexts.

  2. To evaluate students’ self-reported impact of AI on academic performance, clinical reasoning, research productivity, and time management.

  3. To identify the most commonly used AI tools (e.g., ChatGPT) and their practical applications.

  4. To explore challenges and concerns related to AI usage in medical education, including accuracy, ethics, and training gaps.

  5. To demonstrate the use of AI (ChatGPT) in supporting the research process itself, including data summarization and manuscript drafting.

Methods

Study design

This study employed a cross-sectional design to evaluate the prevalence, patterns, and impact of Artificial Intelligence (AI) applications among medical students in Palestine. The primary focus was to understand how AI is integrated into the academic, clinical, and research practices of students and to explore its perceived benefits and limitations. The study also sought to assess AI’s capability to aid in the research process itself, including questionnaire generation, analysis of findings, and writing of the research paper.

Questionnaire development

The questionnaire for this study was designed with the assistance of AI tools, demonstrating AI’s potential as a research collaborator. The AI-assisted design ensured the inclusion of comprehensive, targeted, and structured items addressing all key research objectives. The questionnaire was specifically developed for this study and has not been previously published. An English language version of the questionnaire is attached as Supplementary File 1.

Structure of the questionnaire

The questionnaire began with an electronic informed consent form, ensuring participants understood the study’s purpose, procedures, and confidentiality measures, with voluntary participation and the right to withdraw at any time. It then collected demographic information, including age, gender, academic year, university affiliation, and familiarity with AI technologies. Subsequent sections addressed AI usage patterns, such as the frequency and types of tools used (e.g., ChatGPT, virtual simulators), and their applications in academic, research, and clinical contexts. Likert-scale questions assessed AI’s impact on academic performance, clinical competence, research skills, and time management, while open-ended questions explored challenges like accuracy, ethical concerns, and over-reliance on AI tools.

Validation of the questionnaire

Pilot Testing: The draft questionnaire was piloted among 30 medical students to evaluate its clarity, relevance, and length. Feedback from this pilot study was used to refine the wording and structure of the questions. The pilot group provided valuable insights, and the overall feedback led to improved question clarity and alignment with the research objectives. The Likert-scale items were also evaluated for reliability, with the pilot study showing strong internal consistency (Cronbach’s alpha = 0.87).

Expert Review: Faculty members with expertise in medical education and AI reviewed the questionnaire to ensure content validity and its alignment with the study’s objectives.
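For readers who want to reproduce the reliability check, the sketch below shows one standard way to compute Cronbach’s alpha for a block of Likert items. It is a minimal illustration in Python with pandas, not the tool used in this study, and the file and column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are Likert items
    and rows are respondents."""
    items = items.dropna()                           # complete cases only
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage: 'pilot.csv' holds the 30 pilot responses,
# with Likert items in columns named item_1 ... item_n.
# pilot = pd.read_csv("pilot.csv")
# likert_cols = [c for c in pilot.columns if c.startswith("item_")]
# print(round(cronbach_alpha(pilot[likert_cols]), 2))  # e.g. the 0.87 reported here
```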

Ethical compliance and study procedure

The study adhered to the ethical principles outlined in the Declaration of Helsinki and received approval from the Institutional Review Board (IRB) at Al-Quds University in Palestine (Ref No: 449/REC/2024). All participants provided informed consent electronically after being presented with a detailed form outlining the study’s purpose, procedures, voluntary nature, and confidentiality measures. Participation was entirely voluntary, with the right to withdraw at any time without consequences. To ensure anonymity and data security, all responses were securely stored and used solely for research purposes. The electronic questionnaire was distributed through university email lists, social media platforms, and student forums, ensuring accessibility on both desktop and mobile devices to maximize participation.

Participant recruitment

Participants included medical students from Al-Quds University and all other Palestinian medical institutions. Eligibility criteria required participants to be currently enrolled medical students aged 18 years or older with access to AI tools. No exclusion criteria were set for prior experience with AI, ensuring a diverse participant pool.

Survey dissemination

The survey was distributed via a secure link and QR code to ensure ease of access. Social media platforms were utilized to reach a broad audience of medical students. The survey remained open for a duration of two months, allowing participants sufficient time to complete it.

Confidentiality

All responses were anonymized and stored on secure servers accessible only to the research team. Participants were assured that no identifiable information would be linked to their responses.

Data collection

The electronic survey was completed by 590 medical students. Responses were automatically recorded and exported into a secure database for analysis. Data cleaning was conducted to remove incomplete responses or entries with inconsistencies.
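As an illustration of this cleaning step, the short Python/pandas sketch below drops ineligible or incomplete records and screens for out-of-range Likert values. The file and column names are assumptions for demonstration only; they are not the study’s actual field names.

```python
import pandas as pd

# Hypothetical export of the electronic survey
df = pd.read_csv("responses.csv")

# Keep only consenting, eligible respondents (column names are illustrative)
df = df[(df["consent"] == "yes") & (df["age"] >= 18)]

# Drop records missing core demographic or usage fields
required = ["gender", "academic_year", "university", "ai_use_frequency"]
df = df.dropna(subset=required)

# Consistency check: any answered Likert item must fall in the 0-5 range
likert_cols = [c for c in df.columns if c.startswith("likert_")]
in_range = df[likert_cols].apply(lambda col: col.between(0, 5) | col.isna()).all(axis=1)
df = df[in_range]

print(len(df), "responses retained for analysis")
df.to_csv("responses_clean.csv", index=False)
```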

Data analysis

The collected data were initially explored using ChatGPT, an AI-powered language model, to assist in identifying descriptive trends and generating narrative summaries. ChatGPT was not used for statistical calculations or hypothesis testing. All quantitative analyses, including t-tests, ANOVA, and correlation analyses, were conducted using IBM SPSS v27, with statistical significance set at p < 0.05. This two-step approach combined the exploratory efficiency of AI with the rigor of traditional statistical methods, ensuring precision, consistency, and reliability in the results.

Descriptive analysis

Frequencies and percentages were calculated for categorical variables such as demographic information, frequency of AI usage, and types of tools employed. Means and standard deviations were calculated for Likert-scale items to assess the impact of AI on academic performance, clinical competence, research skills, and time management. The Likert scale ranged from 0 (strongly disagree) to 5 (strongly agree).
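A brief, hedged sketch of how these descriptive summaries could be reproduced outside SPSS is shown below, using pandas; the dataset and column names are assumed for illustration.

```python
import pandas as pd

df = pd.read_csv("responses_clean.csv")  # hypothetical cleaned dataset

# Frequencies and percentages for categorical variables
for col in ["gender", "academic_year", "ai_use_frequency"]:
    counts = df[col].value_counts()
    percentages = df[col].value_counts(normalize=True).mul(100).round(1)
    print(pd.DataFrame({"n": counts, "%": percentages}), "\n")

# Means and standard deviations for the Likert-scale impact domains (scored 0-5)
domains = ["academic_performance", "research_productivity",
           "clinical_competence", "time_management"]
print(df[domains].agg(["mean", "std"]).round(2))
```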

Inferential statistics

T-tests were used to compare differences in AI usage and its perceived impact across demographic groups such as gender and academic year. ANOVA was applied to assess variations in AI perceptions among students with differing levels of AI familiarity and frequency of use. Correlation Analysis was conducted to explore relationships between AI usage patterns (e.g., frequency and purpose) and perceived benefits.
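For transparency, the sketch below shows open-source equivalents (SciPy) of the tests described; the study itself used IBM SPSS v27, so this is only an illustrative reproduction with assumed variable names.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("responses_clean.csv")  # hypothetical cleaned dataset

# Independent-samples t-test: perceived research impact by gender
male = df.loc[df["gender"] == "male", "research_productivity"].dropna()
female = df.loc[df["gender"] == "female", "research_productivity"].dropna()
t, p = stats.ttest_ind(male, female, equal_var=False)
print(f"t-test: t = {t:.2f}, p = {p:.3f}")

# One-way ANOVA: perceived clinical impact across academic stages
groups = [g["clinical_competence"].dropna() for _, g in df.groupby("academic_year")]
f, p = stats.f_oneway(*groups)
print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")

# Pearson correlation: AI usage frequency vs. perceived academic improvement
paired = df[["ai_use_frequency_score", "academic_performance"]].dropna()
r, p = stats.pearsonr(paired["ai_use_frequency_score"], paired["academic_performance"])
print(f"correlation: r = {r:.2f}, p = {p:.3f}")
```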

Results

AI in research execution

A unique aspect of this study was the active involvement of AI in the research process. AI tools were utilized to generate the questionnaire, streamline data analysis, and draft sections of this manuscript. This integration not only facilitated efficient research execution but also provided insights into AI’s capability to support academic research with minimal human intervention. These findings will be further explored in the discussion section.

The study analyzed data from a final sample of 550 medical students after excluding 40 incomplete or invalid responses. Of the 550 students who completed the full survey, 478 responded to all Likert-scale items and were included in the analysis of AI impact scores; the remaining responses were excluded from that portion of the analysis due to incomplete data. Of the 550 respondents, 67% were female (370 students) and 33% were male (180 students), with an average age of 22.8 years (range: 18–25 years). The majority of participants were in their clinical years (43%), followed by 32% in preclinical years and 25% in the internship year. Most students were enrolled at Al-Quds University (28%), with the remaining participants distributed across other Palestinian medical schools, as shown in Table 1. This diverse sample provided a robust representation of AI usage across various stages of medical education in Palestine.

Table 1 Demographic characteristics and academic distribution of study participants

AI usage prevalence and tools

AI usage was highly prevalent among medical students, with 87% of respondents indicating frequent use of AI tools for academic and clinical purposes: 50% used AI daily and 37% weekly. Only 13% of respondents rarely or never used AI tools, as shown in Fig. 1.

Fig. 1 Frequency of AI usage among participants

Among the various AI tools reported, ChatGPT emerged as the most widely used, with 76% of students indicating regular use primarily for academic support such as clarifying complex medical concepts and answering clinical queries. Copilot followed at 38%, often used to automate tasks like formatting and coding, while virtual simulators were used by 26% of students to enhance clinical reasoning through diagnostic and procedural simulations. Notably, many participants reported combining multiple tools, highlighting the diverse functionalities AI offers in medical education.

AI applications in medical education were categorized into three primary purposes: academic support (65%), research assistance (58%), and clinical training (31%). Table 2 provides a disaggregated view of these purposes across academic stages. Academic support was most prevalent among preclinical students (92.0%), who rely heavily on AI for concept reinforcement and exam preparation. Clinical-year students demonstrated higher utilization for research-related tasks (56.7%), whereas internship students showed the highest engagement in this domain (72.0%). Clinical training usage remained comparatively low across all groups, particularly among internship students (9.4%), likely reflecting AI’s current limitations in hands-on skill development. These trends underscore the differentiated roles AI plays at each stage of medical training and suggest the need for tailored integration strategies within medical curricula (Table 2).

Table 2 AI usage purposes in medical education

Impact of AI usage

Students evaluated the impact of AI tools on their education using a 6-point Likert scale ranging from 0 (strongly disagree) to 5 (strongly agree). For presentation clarity in Fig. 2, responses are summarized using mean scores for each domain. No participants selected 0 in any of the reported domains, so this response option does not appear in the summarized results.

Among the respondents who completed all relevant Likert-scale items (n = 478), the average score for academic performance was 4.2 (SD = 0.7), indicating generally favorable student perceptions of AI’s role in learning. Research productivity received a mean score of 4.5 (SD = 0.6), reflecting the perceived usefulness of AI in literature reviews and manuscript preparation. Clinical competence received a moderate mean score of 3.6 (SD = 0.8), suggesting a more limited role for AI in hands-on, practical training. Time management was rated highest at 4.6 (SD = 0.5), indicating that students found AI helpful in automating repetitive academic tasks and allocating more time to higher-order cognitive and clinical activities.

Figure 2 presents a visual summary of the mean Likert scores across the four evaluated domains.

Fig. 2 Mean Likert scores for perceived AI impact across four domains: academic performance, research productivity, clinical competence, and time management. Scores range from 0 (strongly disagree) to 5 (strongly agree)

Statistical analyses

Statistical analyses revealed notable findings regarding the use of AI tools among medical students. A t-test showed that male students reported slightly higher proficiency in using AI for research-related tasks compared to females (p = 0.03). However, no significant differences were observed in the academic or clinical impact of AI based on gender (p > 0.05). ANOVA analysis demonstrated that clinical-year students relied more heavily on AI tools for clinical applications than preclinical and internship students (F = 5.13, p = 0.01). Interestingly, internship students rated AI’s impact on academic learning higher than their senior counterparts (F = 4.25, p = 0.02). Correlation analysis further indicated a strong positive relationship (r = 0.75, p < 0.001) between AI usage frequency and academic improvement, and a moderate correlation (r = 0.62, p < 0.001) with research productivity. These findings highlight the diverse applications and perceived benefits of AI tools in medical education, reflecting variability across gender and academic stages.

Challenges and limitations

The data analysis provided the following insights: Half of the participants (50.63%) reported encountering no significant challenges with AI in academic or clinical settings, suggesting they found it effective and user-friendly. Conversely, 49.37% indicated experiencing challenges, highlighting areas where AI usage may require improvements or better integration. Regarding AI's ability to replace tasks, 51.05% believed that AI can replace most tasks, reflecting confidence in its capabilities, while 48.95% disagreed, emphasizing limitations in AI’s ability to fully replicate human roles. On the topic of ethical monitoring, a slight majority (51.05%) agreed on the importance of monitoring AI for ethical compliance, while 48.95% felt it was not necessary.

Chi-square tests were performed to examine relationships between these variables. The relationship between challenges with AI and AI task replacement was not statistically significant (χ² = 0.138, p = 0.710), indicating no strong association between encountering challenges and perceptions of AI's task replacement abilities. Similarly, no significant relationship was found between AI task replacement and ethical monitoring (χ² = 1.227, p = 0.268). These findings suggest that participants’ responses were largely independent, reflecting diverse and varied perspectives on the use of AI.
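The two tests above amount to chi-square tests of independence on 2×2 cross-tabulations of yes/no items. A minimal SciPy sketch is shown below; the column names are assumptions, and the study’s own analysis was run in SPSS.

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("responses_clean.csv")  # hypothetical cleaned dataset

# Challenges with AI vs. belief that AI can replace most tasks
table = pd.crosstab(df["faced_challenges"], df["ai_can_replace_tasks"])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}")

# Belief that AI can replace most tasks vs. need for ethical monitoring
table = pd.crosstab(df["ai_can_replace_tasks"], df["ethical_monitoring_needed"])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, p = {p:.3f}")
```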

Qualitative findings from thematic analysis

Open-ended responses (n = 320) regarding AI-related challenges were thematically analyzed, resulting in three main themes:

  1. Trust and Accuracy Concerns: Students expressed uncertainty about the reliability of AI-generated content, noting that outputs often require manual verification.

     Example: “I often double-check ChatGPT answers because some of them sound correct but are wrong.”

  2. Accessibility and Technical Limitations: Some students reported difficulties accessing advanced AI tools due to internet restrictions or lack of proper devices.

     Example: “Many AI apps are blocked or not supported in my region.”

  3. Ethical and Educational Implications: Concerns were raised about excessive reliance on AI, especially in assignments, potentially affecting critical thinking and learning.

     Example: “Some students copy AI answers without understanding, and this could reduce our learning quality.”

These themes provide contextual insights into students’ perceived barriers to AI usage and offer guidance for designing responsible AI education in medical curricula.

Discussion

This study offers an in-depth look at how medical students in Palestine integrate artificial intelligence (AI) tools into their academic and clinical training. A unique aspect of the project was the active role of AI in the research process itself: from drafting the questionnaire to performing data analyses and assisting in manuscript preparation. Such involvement not only streamlined the workflow but also illustrated AI’s capacity to minimize human labor in certain stages of research. Consistent with emerging global trends, these findings underscore the versatile potential of AI in both scholarly work and medical training, especially as healthcare systems worldwide move toward digital transformation [7]. Yet, the results also raise important questions regarding equity, ethics, and the broader educational framework necessary to ensure effective AI adoption.

The sample of 550 students provided a comprehensive representation of Palestinian medical trainees across the preclinical, clinical, and internship stages. Although the majority reported substantial usage of AI tools, more frequent use was noted among certain subgroups. These patterns suggest that AI’s value depends considerably on the learners’ evolving academic requirements: as students advance through their training, they tend to encounter more tasks where AI-driven support proves beneficial. Indeed, senior students and interns commonly described AI-based applications that expedite literature reviews, manuscript drafting, and exam preparation, aligning with evidence from multiple regions in the Middle East where near-graduation students often seek additional resources for research collaboration and practice-oriented learning [8].

AI in research execution

Perhaps most striking was the manner in which AI systems bolstered this very study. Leveraging AI to generate the questionnaire minimized subjective bias in survey-question wording and expedited item design, thereby offering a glimpse of how future research might be optimized [9]. After data collection, automated algorithms handled tasks such as descriptive statistics, thematic categorization of qualitative responses, and preliminary drafts of sections in this manuscript. These efficiencies echo prior studies that used AI-assisted literature mining or machine learning–based text generation to reduce the timeline for academic deliverables [10]. Despite these advantages, caution remains warranted. Algorithms, while time-saving, can inadvertently introduce errors or biases if not carefully supervised. For instance, AI language models may produce misleading references or incomplete content if their training data is not adequately validated [11]. The present study addressed this by subjecting AI outputs to rigorous manual review, an approach that may serve as a template for responsible AI adoption in future scholarly endeavors.

AI usage patterns and impact

In examining the prevalence and variety of AI tools, respondents predominantly used ChatGPT for academic tasks, while automated coding platforms (e.g., Copilot) and clinical simulators were also frequently mentioned. This aligns with global observations that large language models have quickly become favored among students due to their user-friendly interfaces and high adaptability [12]. Surveyed students generally believed AI contributed positively to both academic performance and research productivity, a finding bolstered by statistical analyses indicating a correlation between frequent AI usage and improvements in relevant skill domains. These quantitative data reinforce the idea that strategic adoption of AI can support deeper learning, more comprehensive research inquiries, and faster mastery of certain theoretical concepts. Furthermore, respondents reported that AI tools freed them from routine or repetitive work, such as formatting bibliographies, leading to greater focus on critical thinking and practical competencies.

Nonetheless, the perceived influence of AI on clinical competence was somewhat mixed. Although students recognized potential benefits, such as clinical decision support tools or virtual simulators, they also acknowledged AI’s limited capacity in teaching the hands-on elements of physical examinations and other procedural skills. This ambivalence aligns with prior research showing that while AI can excel at providing structured factual knowledge or diagnostic suggestions, it has yet to replicate the nuanced, interpersonal aspects of clinical practice [2]. In more resource-limited settings, where direct patient interaction can be constrained, AI might serve as a supplemental resource but must be carefully integrated to avoid fostering overreliance or neglecting the development of students’ clinical intuition [13].

Challenges and limitations of AI adoption

Despite the generally positive sentiment, nearly half of the respondents encountered at least one significant difficulty with AI deployment. Qualitative responses underscored three core themes: trust and accuracy issues, accessibility barriers, and ethical considerations. Concerns over reliability align with a broader global discourse cautioning that AI-generated content may contain inaccuracies or unverified assumptions [14]. Students, while appreciative of AI’s efficiency, felt uneasy about blindly accepting its suggestions. This underscores the importance of critical evaluation and the necessity for rigorous methods to confirm AI-derived insights. In contexts like Palestine and neighboring regions with variable internet infrastructure, students also cited technological constraints that hamper the seamless use of more sophisticated AI utilities. Such barriers illustrate that AI’s benefits can be unevenly distributed within the same cohort, dependent on local factors like network connectivity and hardware availability [15].

The third theme of ethical and educational implications echoes calls for robust guidance on academic honesty and appropriate utilization of AI. Automated tools pose new forms of ethical quandaries, particularly regarding originality, plagiarism, and professional integrity [16]. Students who rely excessively on AI-generated answers for assignments risk impeding their own intellectual growth. This potential decline in critical reasoning is a worrying sign that must be tackled through explicit university policies defining boundaries, clarifying citation requirements, and providing instructions on verifying AI outputs. Multiple institutions across the Middle East are experimenting with codes of conduct tailored to AI usage, informed by both international guidelines and cultural priorities [17]. Such protocols ensure that future physicians harness AI to augment but never supplant the human dimension of healthcare.

AI in Palestine and the Middle East

Within Palestine, expanding AI-based educational opportunities is particularly salient, as national initiatives seek to revamp digital infrastructure and improve overall healthcare delivery. Researchers have noted a growing appetite for AI integration in medical schools across the Middle East, partly spurred by cross-border collaborations and an eagerness to keep pace with more technologically advanced regions [8, 18]. Participants in this study who had collaborated with peers or faculty abroad described experiences that broadened their exposure to AI tools not easily accessible within Palestine. While this highlights avenues for global partnerships, it also underscores local disparities in resource allocation. Institutions aiming to incorporate advanced platforms into their curricula must balance cost considerations, faculty training needs, and alignment with accreditation standards [19]. In Jordan, for example, certain universities have introduced elective courses emphasizing machine learning basics or invited guest lecturers from data science departments. Early evaluations of these efforts suggest that structured AI curricula can cultivate more confident and ethically aware graduates who can harness new technologies responsibly [8].

Similar ambitions resonate across medical schools in Saudi Arabia, Egypt, and the United Arab Emirates, which have begun exploring AI-driven simulations and telemedicine frameworks [12, 20]. These expansions show that the Palestinian context is not isolated; it stands alongside a regional shift toward digital transformation. Yet the journey is far from complete. Many programs remain in pilot phases, lacking the robust infrastructure, trained instructors, and standardization necessary for large-scale impact. Encouragingly, the demand from students, evident in their readiness to adopt AI solutions, provides strong motivation for academic leaders to secure funding, equip labs, and design specialized courses.

AI in research writing and data analysis

That AI can assist in automating literature reviews, generating structured reports, and performing statistical analyses is of particular interest to students in research-heavy contexts. In this study, students who leveraged AI for drafting manuscripts found it helped them identify relevant studies faster while refining their own writing. These observations mirror findings from multiple global surveys, where medical students credited AI-based writing assistance with enhanced clarity, better organization of ideas, and improved literature search capabilities [10, 12]. However, educators caution that delegating extensive tasks like result interpretation or full manuscript composition could undermine a student’s development of essential research skills [16]. Thoughtful policies that require students to maintain full accountability over their final products might serve as an effective middle ground, allowing them to benefit from AI’s efficiency without bypassing the intellectual rigor of independent inquiry.

Moreover, the reliability of AI in data analysis hinges on the model and the data used to train it. If the model lacks exposure to medical contexts relevant to Palestinian populations, it may produce skewed results or incomplete insight. This becomes especially critical in clinical research, where population-specific factors (genetic, environmental, or cultural) influence disease patterns and outcomes [13, 14]. Consequently, local collaborations that refine AI tools for Palestinian demographics could lead to more accurate analytics, strengthen the validity of findings, and increase the acceptability of AI-based methods among students and faculty.

Limitations and future directions

Several considerations limit the generalizability of these results. Although participants represented multiple universities in Palestine, further expansion to other regions or private institutions could yield different outcomes, especially if resource availability varies. The study’s reliance on self-reported measures of AI proficiency and impact introduces the possibility of social desirability bias, where students may overestimate positive experiences. Moreover, while the role of AI in executing the research highlights its practicality, it also raises questions about the potential for automation bias or errors if human oversight is insufficient [11]. Future inquiries might consider longitudinal designs to track how students’ perceptions and competencies evolve as AI tools become increasingly embedded in educational frameworks.

Notwithstanding these caveats, the data imply that AI’s integration is largely welcomed by Palestinian medical students, provided it is accompanied by relevant training and clear ethical guidelines. The next steps will likely involve formalizing AI instruction within the curriculum, encouraging interdisciplinary collaborations with computer science departments, and establishing institutional policies for responsible AI usage [17]. As these changes unfold, schools can conduct follow-up studies to evaluate whether the introduction of AI-specific courses measurably boosts student competence and confidence. The prospect of further refining AI algorithms to reflect localized epidemiological data could improve clinical simulations and research reliability, ultimately strengthening the quality of medical education in Palestine.

Conclusion

This cross-sectional study underscores AI’s transformative role in Palestinian medical education, evidenced by high adoption rates, notable academic gains, and increased research productivity. Significantly, the study itself leveraged ChatGPT to design the questionnaire and conduct data analysis, highlighting AI’s capacity to streamline traditionally labor-intensive research tasks. Nevertheless, gaps remain, particularly in formal AI literacy and clinical skill development. Addressing these deficits through structured AI training and robust ethical guidelines is essential to ensure the responsible integration of AI, ultimately maximizing its benefits in resource-constrained healthcare settings.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AI: Artificial Intelligence

SPSS: Statistical Package for the Social Sciences

IRB: Institutional Review Board

LLM: Large Language Model

SD: Standard Deviation

CT: Computed Tomography

ANOVA: Analysis of Variance

r: Correlation Coefficient

P: Probability Value

IBM: International Business Machines

REC: Research Ethics Committee

ATLAS.ti: Tool for Qualitative Data Analysis

MAXQDA: Qualitative and Mixed-Methods Data Analysis Software

References

  1. Chan KS, Zary N. Applications and challenges of implementing artificial intelligence in medical education: integrative review. JMIR Med Educ. 2019;5(1):e13930. https://doi.org/10.2196/13930

  2. Sit C, Srinivasan R, Amlani A, et al. Attitudes and perceptions of UK medical students towards artificial intelligence and radiology: a multicentre survey. Insights Imaging. 2020;11(1):14. https://doi.org/10.1186/s13244-019-0830-7

  3. Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23:689. https://doi.org/10.1186/s12909-023-04698-z

  4. Tang A, Buerhaus PI. The importance of transparency: declaring the use of generative artificial intelligence in academic writing. J Nurs Scholarsh. 2023. https://doi.org/10.1111/jnu.12938 [online ahead of print].

  5. Biswas S. ChatGPT and the future of medical writing. Radiology. 2023. https://doi.org/10.1148/radiol.223312 [online ahead of print].

  6. Laupichler MC, Aster A, Meyerheim M, et al. Medical students’ AI literacy and attitudes towards AI: a cross-sectional two-center study using pre-validated assessment instruments. BMC Med Educ. 2024;24:401. https://doi.org/10.1186/s12909-024-05400-7

  7. Clusmann J, Kolbinger FR, Muti HS, Schoepf UJ. The future landscape of large language models in medical practice: systematic review on the promising perspectives and valid concerns. Healthc (Basel). 2023;11(6):887.

  8. Abuzaid M, Alkhatib E, Omar M, Bzour J, Qasrawi Y, Mahfouz N, et al. Medicine and pharmacy students’ knowledge, attitudes, and readiness towards artificial intelligence in Jordan and the West Bank of Palestine. Adv Med Educ Pract. 2022;13:887–900.

  9. Lee J, Kim S, Kang M. Utilization of AI-based question generation for large-scale surveys in medical education research: a pilot study. Med Teach. 2023;45(2):145–51.

  10. Wang R, Zhang Z, Xie J, Li B, Wu F, He Z, et al. The impact of using ChatGPT on academic writing among medical undergraduates. Ann Med. 2024;56(1):1181–91.

  11. Kldiashvili E, Dalton L, Caruso PD, Babkin N, Oduyebo O. Academic integrity within the medical curriculum in the age of generative artificial intelligence. Health Sci Rep. 2025;6(2):e1311.

  12. Williams DKA, Pinkas A. Exploring the usage of ChatGPT among medical students in the United States. J Med Educ Curric Dev. 2024;11:238212052452739.

  13. Imran N, Jawaid M. Artificial intelligence in medical education: are we ready for it? Pak J Med Sci. 2020;36(5):857–9.

  14. Salman S, Ebada MA, Babiker AY, Elzain MY, Moharram AA, Abdelaziz AH, et al. Knowledge, attitude, and perception of Arab medical students towards artificial intelligence in medicine and radiology: a multi-national cross-sectional study. Eur Radiol. 2023;33(8):5943–53.

  15. Iyawa GE, Dansharif AR, Adamu H. Barriers to the adoption of artificial intelligence in low and middle-income countries’ healthcare: a scoping review. Digit Health. 2022;8:20552076221080543.

  16. King AB, Chatfield SL. Guidelines for appropriate AI usage in graduate-level scholarly work. J Grad Educ. 2023;14(3):45–52.

  17. Paranjape K, Schinkel M, Nannan Panday RS, Car J, Nanayakkara P. Introducing artificial intelligence training in medical education. JMIR Med Educ. 2019;5(2):e16048.

  18. Abdelhafiz AS, Farghly MI, Sultan EA, Abouelmagd ME, Elsebaie EH. Medical students and ChatGPT: analyzing attitudes, practices, and academic perceptions. BMC Med Educ. 2025;25:187.

  19. Boyle P. AI in medical education: 5 ways schools are employing new tools. AAMC News. 2025 [cited 2025 Apr 5]. Available from: https://www.aamc.org/news/ai-medical-education-5-ways-schools-are-employing-new-tools

  20. Al-Khateeb S, Khan A, Qureshi A. Evaluating the utility of AI-based medical simulations in the Gulf region: a pilot analysis. Educ Health. 2023;36(1):45–53.


Acknowledgements

The authors would like to extend their sincere gratitude to all the participating medical students and faculty members for their invaluable contributions to this research. Special thanks are due to Al-Quds University for supporting the distribution of the survey. We also acknowledge the assistance of Nagham Abufara and Shaimaa Tamimi for their help with data collection. Furthermore, we recognize the role of AI tools in facilitating data analysis and enhancing the overall quality of this study.

Funding

This research received no external funding.

Author information


Contributions

M.Y. and S.D. contributed equally as first authors, leading all aspects of the study. They were responsible for the conceptualization, design, data collection, analysis, and drafting of the manuscript. K.A. made significant contributions to data collection and organization of the study. All authors were actively involved in revising the manuscript for critical content and have approved the final submitted version. Additionally, all authors have agreed to be accountable for their contributions and to address any issues regarding the accuracy or integrity of the work.

Corresponding author

Correspondence to Mahmoud Yousef.

Ethics declarations

Ethics approval and consent to participate

This study was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki. Ethical approval was obtained from the Institutional Review Board (IRB) at Al-Quds University in Palestine (Ref No: 449/REC/2024). All participants provided informed consent electronically prior to participating in the study.

Use of artificial intelligence tools

An artificial intelligence tool, specifically OpenAI’s ChatGPT (GPT-4, accessed via ChatGPT Plus), was used at various stages of this research. The following contributions were made:

• Questionnaire Development: ChatGPT was used to draft survey items aligned with the research objectives. Prompts included, for example: “Design a student survey to evaluate the use of AI tools in medical education across academic, clinical, and research contexts.” (An illustrative sketch of issuing such a prompt programmatically appears after this list.)

• Thematic Analysis Assistance: For qualitative responses, ChatGPT helped group answers into initial themes. These were reviewed, corrected, and finalized by the authors to ensure accuracy and context.

• Narrative Drafting: ChatGPT was used to generate narrative summaries of the findings and to draft portions of the introduction, results, and discussion. Prompts included: “Summarize Likert-scale findings and interpret trends” and “Rewrite this paragraph in an academic tone.”

• Language Polishing: ChatGPT helped improve the clarity, grammar, and coherence of the manuscript.
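As an illustration only: the prompts listed above were issued through the ChatGPT Plus web interface, but an equivalent call can be scripted with OpenAI’s Python SDK, which may make the prompting step easier to reproduce or audit. The prompt text and model version come from this section; everything else (the use of the API, variable names) is an assumption and was not part of the study’s actual workflow.

```python
# Illustrative sketch only: the study used the ChatGPT web interface, not the API.
# Requires the `openai` package (>= 1.0) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Design a student survey to evaluate the use of AI tools in medical "
    "education across academic, clinical, and research contexts."
)

response = client.chat.completions.create(
    model="gpt-4",  # version reported in this section
    messages=[{"role": "user", "content": prompt}],
)

draft_items = response.choices[0].message.content
print(draft_items)  # draft survey items, to be reviewed and revised by the researchers
```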

Human oversight

All content suggested by ChatGPT was independently reviewed and edited by the authors. Final versions were based on human judgment and scientific validation using traditional tools (e.g., SPSS for statistical analysis).

Limitations of AI use

ChatGPT occasionally produced inaccurate references or unverifiable citations, which were manually filtered. It also lacked access to subscription-based academic databases, limiting its ability to cite peer-reviewed literature. These limitations underscore the importance of human oversight in using AI for scholarly work.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Yousef, M., Deeb, S. & Alhashlamon, K. AI usage among medical students in Palestine: a cross-sectional study and demonstration of AI-assisted research workflows. BMC Med Educ 25, 693 (2025). https://doi.org/10.1186/s12909-025-07272-x


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s12909-025-07272-x

Keywords