
Journal of Clinical Case Reports

ISSN: 2165-7920

Open Access

Current Incident Reporting and Learning Systems Yield Unreliable Information on Patient Safety Incident Reporting Reliability

Mari Plukka*

Abstract

Background: Incident reporting systems are being implemented throughout the world to record safety incidents in healthcare. The quality of recording and analysis within these reporting systems is important for developing safety promotion measures.

Methods: To assess the reliability of incident reporting ratings collected in a hospital setting, a three-level interrater comparison was undertaken. The routine ratings of the frontline event handlers responsible for evaluating safety incident reports (n=495) were compared with the parallel ratings of two trained patient safety coordinators. Each of the two patient safety coordinators separately reviewed about half of the 495 reports; each coordinator then reclassified a random sample of 60 reports that the other coordinator had reclassified in the first round. Seven patient safety variables were included: nature of the incident, type of incident, patient impact, impact on the treating unit, circumstances and contributory factors, immediate actions taken, and risk category. Interrater agreement was tested with kappa, weighted kappa, or iota.
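
As a minimal illustration of the agreement statistics named above, the sketch below computes an unweighted kappa for a nominal variable and a linearly weighted kappa for an ordinal variable using scikit-learn. The rating data, variable names, and weighting choice are hypothetical and only stand in for the kind of paired ratings described in the Methods; the iota coefficient is not covered here.

```python
# Hedged sketch: Cohen's kappa and weighted kappa for paired ratings.
# All data below are invented for illustration, not the study's data.
from sklearn.metrics import cohen_kappa_score

# Hypothetical parallel ratings of the same incidents by an event
# handler and a patient safety coordinator.
handler_risk     = [1, 2, 2, 3, 1, 2, 4, 3, 2, 1]   # ordinal risk category
coordinator_risk = [2, 2, 3, 3, 1, 3, 4, 3, 2, 2]

handler_type     = ["med", "fall", "med", "device", "med",
                    "fall", "device", "med", "fall", "med"]   # nominal incident type
coordinator_type = ["med", "fall", "med", "device", "fall",
                    "fall", "device", "med", "med", "med"]

# Unweighted kappa for a nominal variable such as incident type.
kappa_nominal = cohen_kappa_score(handler_type, coordinator_type)

# Linearly weighted kappa for an ordinal variable such as risk category,
# so small rating disagreements count less than large ones.
kappa_ordinal = cohen_kappa_score(handler_risk, coordinator_risk,
                                  weights="linear")

print(f"kappa (incident type): {kappa_nominal:.2f}")
print(f"weighted kappa (risk category): {kappa_ordinal:.2f}")
```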

Results: For the seven variables examined, event handlers had an average of 1.36 missing answers and patient safety coordinators 0.32. In the first interrater comparison, the average change across the three ordinal-scale variables was towards more serious in 29% (95% CI: 27%, 32%) and towards less serious in 2% (95% CI: 0%, 5%) of incidents, giving a net change of 27% (95% CI: 25%, 30%) towards a more serious rating. Where selection of several categories was allowed, the average rate of multiple selection increased from 7% (95% CI: 6%, 8%) to 33% (95% CI: 31%, 35%). Across the three paired interrater comparisons, the average interrater agreements ranged from 0.44 to 0.53 and were considered moderate. Although patient safety coordinators should in theory represent a 'gold standard', the agreement between the coordinators themselves was only moderate.
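
For readers who want to reproduce intervals of the kind reported above, the sketch below shows one way a 95% confidence interval for a proportion could be computed. The abstract does not state which interval method was used; a Wilson interval is assumed here, and the count is an illustrative back-calculation from the reported 29% of 495 reports, not source data.

```python
# Hedged sketch: 95% CI for a proportion (Wilson method assumed).
from statsmodels.stats.proportion import proportion_confint

n_reports = 495
n_more_serious = round(0.29 * n_reports)  # illustrative count, ~29% of reports

low, high = proportion_confint(n_more_serious, n_reports,
                               alpha=0.05, method="wilson")
print(f"more serious: {n_more_serious / n_reports:.0%} "
      f"(95% CI: {low:.0%}, {high:.0%})")
```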

Conclusion: To improve the reliability of incident reporting, national-level consensus is needed on how to classify high-risk incidents. Continuous training in common practices, terminology, and rating systems should also be given more attention. Having a patient safety coordinator reclassify incident reports can improve reporting accuracy and thereby corrective actions and learning.
