Faculty perceptions of AI-versus human-summarized narrative exit survey data across three nursing programs.

Publication, Journal Article
Reynolds, SS; Kauschinger, ED; Cadavero, A; Conrad, S; McMillian-Bohler, JM; Webb, M
Published in: Nurse education in practice
January 2026

The purpose of this study was to compare faculty perceptions of the quality of artificial intelligence (AI)-generated versus human-generated summaries of narrative exit survey data, to assess the feasibility of AI integration into program evaluation processes.

Generative AI tools are increasingly used in higher education to streamline data analysis. In nursing education, student evaluations offer rich insights but are time-consuming to summarize. AI tools like Microsoft Copilot offer potential efficiencies but raise concerns about reliability, bias and the preservation of reflective pedagogy and student voice.

A cross-sectional, descriptive pilot study design was used. Five faculty members independently rated summaries generated by Microsoft Copilot and by human analysis using a 7-point Likert scale; ratings were based on accuracy, clarity, bias and relevance.

Quality ratings of the AI-generated summaries were higher (mean=5.9) than those of the human-generated summaries (mean=5.0).

This pilot project suggests integrating AI as a supportive tool rather than a replacement for human review. The overall intent was to assist faculty in improving efficiency in program evaluations by using AI, in conjunction with human review, to maintain fidelity to student voices and context.

Published In

Nurse education in practice

DOI

10.1016/j.nepr.2025.104648

EISSN

1873-5223

ISSN

1471-5953

Publication Date

January 2026

Volume

90

Start / End Page

104648

Related Subject Headings

  • Surveys and Questionnaires
  • Students, Nursing
  • Reproducibility of Results
  • Program Evaluation
  • Pilot Projects
  • Perception
  • Nursing
  • Humans
  • Faculty, Nursing
  • Education, Nursing, Baccalaureate
 

Citation

APA
Reynolds, S. S., Kauschinger, E. D., Cadavero, A., Conrad, S., McMillian-Bohler, J. M., & Webb, M. (2026). Faculty perceptions of AI-versus human-summarized narrative exit survey data across three nursing programs. Nurse Education in Practice, 90, 104648. https://doi.org/10.1016/j.nepr.2025.104648

Chicago
Reynolds, Staci S., Elaine D. Kauschinger, Allen Cadavero, Stefanie Conrad, Jacquelyn M. McMillian-Bohler, and Michelle Webb. “Faculty perceptions of AI-versus human-summarized narrative exit survey data across three nursing programs.” Nurse Education in Practice 90 (January 2026): 104648. https://doi.org/10.1016/j.nepr.2025.104648.

ICMJE
Reynolds SS, Kauschinger ED, Cadavero A, Conrad S, McMillian-Bohler JM, Webb M. Faculty perceptions of AI-versus human-summarized narrative exit survey data across three nursing programs. Nurse education in practice. 2026 Jan;90:104648.

MLA
Reynolds, Staci S., et al. “Faculty perceptions of AI-versus human-summarized narrative exit survey data across three nursing programs.” Nurse Education in Practice, vol. 90, Jan. 2026, p. 104648. Epmc, doi:10.1016/j.nepr.2025.104648.

NLM
Reynolds SS, Kauschinger ED, Cadavero A, Conrad S, McMillian-Bohler JM, Webb M. Faculty perceptions of AI-versus human-summarized narrative exit survey data across three nursing programs. Nurse education in practice. 2026 Jan;90:104648.