
Journal of Clinical Research

ISSN: 2795-6172

Open Access

Research Article - (2020) Volume 4, Issue 5

Does a Re-designed Patient Consent Form Result in Fewer Completion Errors? A Study within a Trial (SWAT)

Peter Knapp1*, Peter Bower2, Jenny Roche3, Caroline Fairhurst3, Lola Awoyale4 and Tracy Moitt4
*Correspondence: Peter Knapp, Department of Health Sciences and the Hull York Medical School, University of York, Heslington, York YO10 5DD, United Kingdom, Tel: (+44) 07803 399 690, Email:
1Department of Health Sciences and the Hull York Medical School, University of York, Heslington, United Kingdom
2Centre for Primary Care and Health Services Research, University of Manchester, Manchester, United Kingdom
3Department of Health Sciences, University of York, Heslington, United Kingdom
4Liverpool Clinical Trials Centre, University of Liverpool, Liverpool, United Kingdom

Received: 29-Sep-2020 Published: 09-Nov-2020
Citation: Peter Knapp, Peter Bower, Jenny Roche and Caroline Fairhurst, et al. "Does a Re-designed Consent Form Result in Fewer Completion Errors? A Study within a Trial (SWAT)." J Clin Res 4 (2020): 112. doi: 10.37421/jcre.2020.14.112
Copyright: © 2020 Knapp P, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Abstract

Background: Consent form completion errors may invalidate consent and waste participant and researcher time. The aim of this study was to compare error rates in an existing trial consent form with a revised version, developed through information design and user testing.

Method: A study within a trial (SWAT) was conducted within the ISDR trial of screening intervals for diabetic retinopathy. Using a sequential groups design, participants completed either the original trial consent form or a version that had been revised through iterative user testing and information design principles.

Results: Forms for 1,027 participants were analysed: 307 revised forms and 720 original forms. Major error rates were low and similar in the two groups (1.0% in original forms; 0.6% in revised forms; p=0.61). Minor error rates were lower in revised forms (11.4% in original forms; 5.9% in revised forms; p=0.006), as were rates of any error (11.8% in original forms; 5.9% in revised forms; p=0.004).

Conclusion: Revising the consent form through user testing and graphic design resulted in a halving of error rates, although the effect requires further evaluation in a SWAT with random allocation.

Keywords

Consent trial • SWAT • Error • User testing • Patient information • Graphic design

Introduction

Consent to research requires participants to endorse and sign a consent form, usually a printed sheet. Completion errors may invalidate consent, requiring participants to be re-consented and incurring recruitment delays and opportunity costs for researchers and participants. This can be a particular problem in research that uses postal recruitment, when forms are often completed remotely. Anecdotal reports from two UK university trials units suggest that errors may occur in as many as 10% of forms.

There has been substantial research into the participant information sheets used to inform research consent decisions. This commonly shows that the sheets are too long and contain language that is too technical for some users [1-4], but also that re-writing and re-design through user testing can improve understanding and preferences [5-7].

Research into questionnaires intended for completion by patients and research participants shows that shortening questionnaires can increase completion rates [8,9], as may the use of double-sided questionnaires [10], but overall the evidence for effects of design and layout on completion rates is mixed. Furthermore, this area is characterised by opinion rather than empirical evidence [11], and the available evidence tends to show that document performance is related to individual, often aesthetic, preferences, which are highly subjective and vary between users.

Surprisingly, there appears to be no research into the length or design of consent forms themselves, or into whether these features affect error rates.

Study aim

The aim of this Study within a trial (SWAT) was to compare error rates in an existing trial consent form with a revised version, developed through information design and user testing.

Materials and Methods

Study design

This Study Within A Trial (SWAT) was conducted within the ISDR trial, which investigated the effectiveness of different screening intervals for retinopathy in adults with diabetes [12]. The SWAT used a non-randomized, sequential group design, with error rates compared before and after the introduction of the revised form.

Participants

The SWAT participants were patients who had agreed to take part in the ISDR trial; all were adults with diabetes living in Liverpool, UK, who had previously been screened for retinopathy and were invited for annual screening at one of 6 clinics across the city.

Intervention

All patients agreeing to participate in the ISDR trial were asked to complete a printed consent form when attending their screening appointment. For the first 17 months of trial recruitment (November 2014 to March 2016) participants completed the standard consent form; for the last 6 weeks of recruitment (April-May 2016) participants completed the revised version. The revised version had been developed through iterative user testing [13,14] and information design principles.

User testing and consent form revision

User testing involved 30 people who reflected the ISDR target population. During user testing, participants were asked to imagine they had agreed to take part in the ISDR trial and to complete the consent form accordingly.

In the first round of testing 10 participants completed the standard consent form (Figure S1). The consent form was then revised based on participant responses and completion errors. A second round of testing was completed, in which 10 new participants were asked to complete the revised form. The form was then further revised and tested on another 10 participants.

Revisions to the consent form included: changes to colour and shading; the addition of a column for participants to tick, making the need for initials more obvious; provision of a worked example of ticks and initials; removal of five address lines at the top; and moving the document footer to the side of the form, making the appearance less cluttered. The original and revised versions of the form were both printed on single sides of A4 (Figure S2).

Outcomes

Study outcomes were completion errors, counted as the number of forms that included at least one error. Errors were categorised as either major or minor: major errors were the primary study outcome and were defined as those that could potentially invalidate patient consent and which would require researchers to contact the patient again. A major error was counted if there was a negative answer to any of the 7 questions below:

1. Has the participant initialled all boxes for compulsory questions?

2. Has the participant printed their name and dated the form themselves?

3. Has the participant signed the form?

4. Has the researcher printed their name and dated the form themselves?

5. Has the researcher signed the form?

6. Has the ID number been recorded?

7. If applicable, has the interpreter printed their name and signed the form?

Minor errors were corrections to the form (marked and initialled by the recruiting researcher), for example an incorrect date, ticking rather than initialling, or signing in the wrong place. Minor errors were those that would not invalidate patient consent.

Outcome data were extracted from consent forms by two researchers after the completion of trial recruitment.
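For illustration only, the decision rule above can be expressed as a short classification function. The study itself extracted outcomes manually from paper forms; the record structure and field names below are hypothetical, and the code is a minimal sketch in Python.

from dataclasses import dataclass

@dataclass
class ConsentFormChecks:
    # Hypothetical record of the checks applied to one consent form
    compulsory_boxes_initialled: bool
    participant_named_and_dated: bool
    participant_signed: bool
    researcher_named_and_dated: bool
    researcher_signed: bool
    id_number_recorded: bool
    interpreter_named_and_signed: bool = True  # True when no interpreter was required
    corrections_marked_by_researcher: int = 0  # count of researcher-initialled corrections

def classify_errors(form: ConsentFormChecks):
    # A major error is any 'no' answer to the seven questions;
    # a minor error is any researcher-marked correction that does not invalidate consent.
    major = not all([
        form.compulsory_boxes_initialled,
        form.participant_named_and_dated,
        form.participant_signed,
        form.researcher_named_and_dated,
        form.researcher_signed,
        form.id_number_recorded,
        form.interpreter_named_and_signed,
    ])
    minor = form.corrections_marked_by_researcher > 0
    return major, minor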

Sample size and sampling

The sample size was constrained by the number of patients approached to take part in the host trial, which was largely governed by researcher capacity. We anticipated there would be at least 287 completed revised forms during the 6-week period of use. We estimated error rates of 10% and 5% in the original and revised versions of the form, respectively. Collecting data on all available revised forms (n=287) and 2.5 times that number of original forms (n=718) would yield 80% power to detect a 5% absolute difference with a two-sided alpha of 0.05.
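As a rough check on this calculation, the following minimal sketch applies a standard normal-approximation power formula for comparing two independent proportions (the trial publications do not state which method or software was used, so the formula is an assumption); it gives a value in the region of 75-80%, with the exact figure depending on the approximation chosen.

from math import sqrt
from scipy.stats import norm

def power_two_proportions(p1, p2, n1, n2, alpha=0.05):
    # Approximate power of a two-sided two-sample test of proportions
    # (normal approximation, pooled standard error under the null hypothesis).
    z_a = norm.ppf(1 - alpha / 2)
    p_bar = (n1 * p1 + n2 * p2) / (n1 + n2)
    se0 = sqrt(p_bar * (1 - p_bar) * (1 / n1 + 1 / n2))  # SE under H0
    se1 = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # SE under H1
    return norm.cdf((abs(p1 - p2) - z_a * se0) / se1)

# Assumed error rates of 10% (original) and 5% (revised),
# with 718 original and 287 revised forms.
print(round(power_two_proportions(0.10, 0.05, 718, 287), 2))  # ~0.75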

All revised forms were included in the sample. A total of 720 original forms were randomly sampled across the 6 recruitment centres and the 17-month period during which the original form was used.

Statistical analysis

We calculated the proportions of forms that included at least one error. We counted major and minor errors separately. The Chi-square test was used to compare error rates between the original and revised form groups.

Approvals: ISDR was approved by the Health Research Authority (REC reference: 14/NW/0034).

Results

The revised version of the form was completed by 307 participants, and we sampled original (standard) version forms for 720 participants.

Rates of major errors were low: 7/720 (1.0%) in original forms; 2/307 (0.6%) in revised forms (Chi-square=0.25; df=1; p=0.61) (Table 1).

Table 1: Completion errors in the original and revised versions of the consent form.

                                            Original form (n=720)   Revised form (n=307)
Forms with major completion errors, n (%)   7 (1.0)                 2 (0.6)
Forms with minor completion errors, n (%)   82 (11.4)               18 (5.9)
Forms with any completion errors, n (%)     85 (11.8)               18 (5.9)

Minor error rates were 82/720 (11.4%) in original forms and 18/307 (5.9%) in revised forms (Chi-square=7.48; df=1; p=0.006).

The proportions of forms with at least one error (major or minor) were 85/720 (11.8%) in original forms and 18/307 (5.9%) in revised forms (Chi-square=8.42; df=1; p=0.004).
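As a check on these figures, a Pearson chi-square test on the 2x2 tables of counts reproduces the reported statistics when no continuity correction is applied. The sketch below assumes scipy, since the analysis software is not stated in the paper.

from scipy.stats import chi2_contingency

# Counts from Table 1: [forms with error, forms without error], original vs revised
tables = {
    "major": [[7, 713], [2, 305]],
    "minor": [[82, 638], [18, 289]],
    "any":   [[85, 635], [18, 289]],
}

for label, table in tables.items():
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"{label}: chi-square={chi2:.2f}, df={dof}, p={p:.3f}")
# Approximate output: major 0.25 (p=0.61), minor 7.48 (p=0.006), any 8.42 (p=0.004)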

Discussion

User testing and information design resulted in a revised trial consent form that differed markedly in appearance from the original version. Rates of major and minor completion errors were lower for the revised consent form, although only the differences in minor errors and in any errors reached statistical significance. The rate of minor completion errors in the revised form was almost half that seen in the original form.

The major error rate was very small, meaning that the study was underpowered to produce conclusive findings on the primary outcome.

That error rates were lower in each category suggests that patients found the revised forms easier to complete correctly. The mistakes requiring corrections (minor errors) would not invalidate consent but would take up researcher time, although this was not measured and the time impact would likely be small for each affected patient. Anecdotal reports from the ISDR trial team were that patients and recruiting staff found the revised forms easier to complete, but these reports were not collected systematically.

The study did not use random allocation because practical constraints made it impossible. The sequential groups design means that differences in reported error rates could have resulted from systematic differences between the patient samples; although that seems unlikely, we were unable to check this with the available dataset. The researchers recruiting patients to the ISDR trial were not masked to the version of the consent form being used, and consequently could have changed their behaviour.

Conclusion

As far as we know, this study is the first investigation of the effects of significant revisions to the layout and design of a research consent form. Rates of major completion errors were lower than anticipated but, if replicated, could still represent a research governance problem.

Further investigation is warranted, particularly in a host study or trial in which patients are completing consent forms remotely, when error rates would likely be higher, and ideally in a SWAT with random allocation. Investigation of the effects on digital consent forms would also be valuable.

Acknowledgements

The authors would like to thank: the ISDR trial participants and the ISDR trial group; Luto Research Limited (luto.co.uk) for undertaking user testing; Making Sense Design (makingsense.co.uk) for graphic design input; Jo Rick for her contributions to the early stages of this SWAT; and Christopher Grierson for administrative support.

The ISDR Trial Group is: Simon P Harding (Chair), Deborah M Broadbent (ISDR Trial Principal Investigator), Paula Byrne, Anthony C Fisher, Mark Gabbay, Marta García-Fiñana, Marilyn James, Tracy Moitt, John R Roberts, Daniel Seddon, Irene M Stratton, Paula Williamson, Duncan Appelbe, Lola Howard, Ayesh Alshukri, Abigail Bennett, Christopher P Cheyne, Paula Byrne, Antonio Eleuteri, Christopher Grierson, Bryar Kadir, Mehrdad Mobayen-Rahni, Andrew Ovens, Christopher J Sampson, David Szmyt, Clare Thetford, Amu Wang, Helen Cooper, John Collins, Sue Howlin, John Kelly, Nathalie Massat, Gideon Smith, Vineeth Kumar, Chris Rogers, Julia West, Naveed Younis.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: the ISDR trial was supported by the National Institute for Health Research (NIHR) under the Programme Grants for Applied Research programme (RP-PG-1210-12016). The author(s) received no financial support for the SWAT research, authorship, and/or publication of this article.

Declaration of Conflicting Interests: The Author(s) declare(s) that there is no conflict of interest.

Registration: ISRCTN ID ISRCTN87561257; registered on 08 May 2014.

References
