Faculty perceptions of AI- versus human-summarized narrative exit survey data across three nursing programs.
The purpose of this study was to compare faculty perceptions of the quality of artificial intelligence (AI)-generated versus human-generated summaries of narrative exit survey data and to assess the feasibility of integrating AI into program evaluation processes.

Generative AI tools are increasingly used in higher education to streamline data analysis. In nursing education, student evaluations offer rich insights but are time-consuming to summarize. AI tools such as Microsoft Copilot offer potential efficiencies but raise concerns about reliability, bias, and the preservation of reflective pedagogy and student voice.

A cross-sectional, descriptive pilot study design was used. Five faculty members independently rated summaries generated by Microsoft Copilot and by human analysis on a 7-point Likert scale. Ratings addressed accuracy, clarity, bias, and relevance.

Quality ratings of the AI-generated summaries (mean = 5.9) were higher than those of the human-generated summaries (mean = 5.0).

This pilot study suggests integrating AI as a supportive tool rather than as a replacement for human review. The overall intent was to help faculty improve the efficiency of program evaluation by using AI, in conjunction with human review, to maintain fidelity to student voices and context.
Related Subject Headings
- Surveys and Questionnaires
- Students, Nursing
- Reproducibility of Results
- Program Evaluation
- Pilot Projects
- Perception
- Nursing
- Humans
- Faculty, Nursing
- Education, Nursing, Baccalaureate