Comparative Analysis of AI models ‒ Using AI-supported Qualitative Data Analysis for Interview Analysis
Sterczl, Gábor
Csiszárik-Kocsir, Ágnes
2025-08-06T07:34:21Z
2025
1785-8860
http://hdl.handle.net/20.500.14044/31956
This study aims to comparatively evaluate the performance of currently popular
Artificial Intelligence (AI) models in supporting qualitative data analysis, specifically
focusing on the coding and hypothesis validation of interview transcripts. We investigate how
models from OpenAI, Google (Gemini), and Anthropic perform in these tasks compared to
traditional manual analysis and established CAQDAS tools. The methodology used transcripts
from three exploratory interviews, applying each AI model and selected CAQDAS tools to
generate codes and quantify references against predefined research objectives and an
established code set.
ability of different AI models to accurately identify and quantify relevant data segments, with
some models demonstrating greater efficiency and the capacity to suggest novel, relevant
categories not initially identified through manual analysis (e.g., external influences, roles,
and responsibilities). Conversely, instances of inaccuracies, such as hallucinated quotes,
were observed in other models. The study highlights that while AI offers substantial potential
for increasing the efficiency and objectivity of qualitative analysis, its effectiveness is highly
dependent on the specific model used and necessitates critical human oversight and
validation. The implications underscore the importance of a hybrid human-AI approach in
qualitative research, emphasizing careful model selection, robust data management
protocols, and continuous attention to ethical considerations, particularly regarding data
privacy and algorithmic bias.
PDF
en