
Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study.

Publication, Journal Article
Rossettini, G; Rodeghiero, L; Corradi, F; Cook, C; Pillastrini, P; Turolla, A; Castellini, G; Chiappinotto, S; Gianola, S; Palese, A
Published in: BMC Med Educ
June 26, 2024

BACKGROUND: Artificial intelligence (AI) chatbots are emerging educational tools for students in the healthcare sciences. However, assessing their accuracy is essential before adopting them in educational settings. This study aimed to assess the accuracy of three AI chatbots (ChatGPT-4, Microsoft Copilot and Google Gemini) in predicting the correct answers on the Italian standardized entrance examination for healthcare science degrees (CINECA test). Secondarily, we assessed the narrative coherence of the chatbots' responses (i.e., text output) on three qualitative metrics: the logical rationale behind the chosen answer, the presence of information internal to the question, and the presence of information external to the question.

METHODS: An observational cross-sectional study was conducted in September 2023. The accuracy of the three chatbots was evaluated on the CINECA test, whose questions use a multiple-choice format with a single best answer. The outcome was binary (correct or incorrect). A chi-squared test, followed by post hoc analysis with Bonferroni correction, assessed differences in accuracy among the chatbots. A p-value < 0.05 was considered statistically significant. A sensitivity analysis was performed excluding questions that were not applicable (e.g., those containing images). Narrative coherence was analyzed via absolute and relative frequencies of correct answers and errors.

RESULTS: Of the 820 CINECA multiple-choice questions entered into all chatbots, 20 could not be imported into ChatGPT-4 (n = 808) or Google Gemini (n = 808) due to technical limitations. We found statistically significant differences in the ChatGPT-4 vs Google Gemini and Microsoft Copilot vs Google Gemini comparisons (p < 0.001). Analysis of narrative coherence revealed "logical reasoning" as the prevalent pattern among correct answers (n = 622, 81.5%) and "logical error" as the prevalent pattern among incorrect answers (n = 40, 88.9%).
CONCLUSIONS: Our main findings are that: (A) the AI chatbots performed well; (B) ChatGPT-4 and Microsoft Copilot performed better than Google Gemini; and (C) their narrative coherence is primarily logical. Although the AI chatbots showed promising accuracy on the Italian standardized entrance examination, we encourage candidates to incorporate this new technology cautiously, as a supplement to their learning rather than as a primary resource.

TRIAL REGISTRATION: Not required.
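The statistical procedure described in the Methods (an overall chi-squared test of independence on correct/incorrect counts per chatbot, followed by pairwise post hoc tests at a Bonferroni-corrected threshold) can be sketched as follows. The per-chatbot counts below are illustrative placeholders, not the study's actual data, and the pure-stdlib chi-squared helpers are a simplification for the 1- and 2-degree-of-freedom cases this design needs:

```python
# Sketch of the analysis pipeline: overall chi-squared test on a 3x2 table
# (chatbot x correct/incorrect), then pairwise 2x2 post hoc tests with a
# Bonferroni-corrected alpha. Counts are HYPOTHETICAL, not the study's data.
import math
from itertools import combinations

def chi2_stat(table):
    """Pearson chi-squared statistic for an r x c contingency table."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

def chi2_sf(x, dof):
    """Survival function P(X > x) of the chi-squared distribution.
    Closed forms exist for the two cases this design needs:
    dof=1 (2x2 pairwise test) and dof=2 (3x2 overall test)."""
    if dof == 1:
        return math.erfc(math.sqrt(x / 2))
    if dof == 2:
        return math.exp(-x / 2)
    raise ValueError("only dof 1 or 2 handled in this sketch")

counts = {  # (correct, incorrect) per chatbot -- illustrative placeholders
    "ChatGPT-4": (700, 108),
    "Microsoft Copilot": (690, 130),
    "Google Gemini": (600, 208),
}

# Overall 3x2 test: dof = (rows - 1) * (cols - 1) = 2
overall = chi2_stat(list(counts.values()))
print(f"overall chi2={overall:.2f}, p={chi2_sf(overall, 2):.4g}")

# Post hoc pairwise 2x2 tests (dof = 1), Bonferroni-corrected alpha
pairs = list(combinations(counts, 2))
alpha = 0.05 / len(pairs)  # 3 comparisons -> alpha = 0.0167
for a, b in pairs:
    p = chi2_sf(chi2_stat([counts[a], counts[b]]), 1)
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: p={p:.4g} ({verdict} at alpha={alpha:.4f})")
```

Dividing alpha by the number of pairwise comparisons is the Bonferroni correction the study reports; in practice `scipy.stats.chi2_contingency` would handle arbitrary table sizes and degrees of freedom.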


Published In

BMC Med Educ

DOI

10.1186/s12909-024-05630-9
EISSN

1472-6920

Publication Date

June 26, 2024

Volume

24

Issue

1

Start / End Page

694

Location

England

Related Subject Headings

  • Medical Informatics
  • Male
  • Italy
  • Humans
  • Female
  • Educational Measurement
  • Cross-Sectional Studies
  • Artificial Intelligence
  • 3904 Specialist studies in education
  • 3901 Curriculum and pedagogy
 

Citation

APA: Rossettini, G., Rodeghiero, L., Corradi, F., Cook, C., Pillastrini, P., Turolla, A., … Palese, A. (2024). Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study. BMC Med Educ, 24(1), 694. https://doi.org/10.1186/s12909-024-05630-9

Chicago: Rossettini, Giacomo, Lia Rodeghiero, Federica Corradi, Chad Cook, Paolo Pillastrini, Andrea Turolla, Greta Castellini, Stefania Chiappinotto, Silvia Gianola, and Alvisa Palese. “Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study.” BMC Med Educ 24, no. 1 (June 26, 2024): 694. https://doi.org/10.1186/s12909-024-05630-9.

ICMJE: Rossettini G, Rodeghiero L, Corradi F, Cook C, Pillastrini P, Turolla A, et al. Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study. BMC Med Educ. 2024 Jun 26;24(1):694.

MLA: Rossettini, Giacomo, et al. “Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study.” BMC Med Educ, vol. 24, no. 1, June 2024, p. 694. Pubmed, doi:10.1186/s12909-024-05630-9.

NLM: Rossettini G, Rodeghiero L, Corradi F, Cook C, Pillastrini P, Turolla A, Castellini G, Chiappinotto S, Gianola S, Palese A. Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study. BMC Med Educ. 2024 Jun 26;24(1):694.