How RGT Exams Are Quality Controlled

Andrew Hatt, LCM’s Qualifications Officer, explains how LCM ensures that RGT examiners award marks reliably and consistently.

One of our primary considerations as an examinations board has to be the reliability and consistency of marks, so that candidates can be assured that, were they to present an identical performance to different examiners on different days, they would attain the same mark each time.

In performance exams, many factors combine to ensure comparability between examiners – examples include:

  • Robust training of examiners
  • Periodic performance review of examiners
  • Compulsory bi-annual retraining courses for all RGT examiners
  • The use of highly detailed assessment criteria to award marks
  • And more


RGT Exams Standardisation Procedures

An additional important factor in ensuring consistency is post-examination standardisation, the primary aim of which is to compare an examiner’s results for a particular exam centre with expectations, based on historical statistical data.

After each set of results is entered at the RGT office, specialised exam-standardisation software calculates whether the mean mark for that results set deviates from the ‘universal average’ by more than a reasonably expected amount. (The ‘universal average’ is the mean of all RGT exam marks awarded since the standardisation system was introduced some years ago.)

Only if the average mark for the results set falls into the ‘acceptable’ range (currently set at five marks above or below the universal average) are the results approved for release.

If the average mark deviates from the universal average by more than five marks, the results are ‘red-flagged’ and referred to the RGT Examinations Director.
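The approval check described above can be sketched in a few lines of Python. The five-mark tolerance comes from the article; the function and variable names are illustrative, not RGT’s actual software:

```python
# Sketch of the post-examination standardisation check described above.
# The five-mark tolerance is from the article; names are illustrative.

TOLERANCE = 5  # marks above or below the universal average

def check_results_set(marks, universal_average):
    """Return (approved, set_average) for one centre's results set."""
    set_average = sum(marks) / len(marks)
    deviation = abs(set_average - universal_average)
    approved = deviation <= TOLERANCE
    return approved, set_average

# Example: with a universal average of 75, a results set averaging 81.5
# deviates by 6.5 marks, so it would be red-flagged rather than approved.
approved, avg = check_results_set([78, 85, 80, 83], universal_average=75)
```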

Further analyses of the results, using the specialised exam-standardisation software, are then undertaken.

This analysis includes assessing whether the examiner’s average mark for the centre varies from the examiner’s own historical average by more than a reasonably expected amount, as well as whether the examiner’s average mark for the centre varies from the centre’s historical average by more than a reasonably expected amount.
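The two secondary comparisons could be sketched like this. The tolerance value and all names are assumptions for illustration only; the article does not specify the thresholds used at this stage:

```python
# Illustrative sketch of the secondary checks run on a red-flagged results
# set: the set average is compared both with the examiner's own historical
# average and with the centre's historical average. The tolerance and all
# names are assumptions, not RGT's actual parameters.

def secondary_checks(set_average, examiner_history_avg, centre_history_avg,
                     tolerance=5.0):
    """Return the names of the checks whose deviation exceeds tolerance."""
    deviations = {
        "examiner_history": abs(set_average - examiner_history_avg),
        "centre_history": abs(set_average - centre_history_avg),
    }
    return [name for name, d in deviations.items() if d > tolerance]

# Example: a set average of 81.5 against an examiner history of 74.0 and a
# centre history of 79.0 flags only the examiner-history comparison.
flags = secondary_checks(81.5, examiner_history_avg=74.0,
                         centre_history_avg=79.0)
```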

This is not intended to suggest that we would expect the results of all centres to fall into the same pattern, and is certainly not meant to imply that marks would automatically be altered if they were not as statistically expected.

Indeed, there are many factors that might lead to a high or low average mark, including the size of the sample, a particularly experienced teacher at a centre, or a preponderance of low or high grades.

Rather, it alerts us to query the results; the Examinations Director may then lead an investigation with the relevant examiner and, if felt necessary, with teachers.

In many cases the results are approved following the extra information gathered from this procedure, but in very rare cases this process might lead to a moderation of the marks for that centre.


RGT Exams – Examiner Marking

Another significant use for the standardisation information is that we can monitor and evaluate an examiner’s performance over time: a detailed profile of each examiner’s marking emerges, and any tendency to deviation from standard marking patterns can be spotted and advised upon expeditiously.

To avoid any pre-existing associations with numbers (e.g. 6 out of 10 might imply ‘quite good’ to some people, but is actually below the pass mark in our exams) examiners do not initially award numerical marks for each section of the exams.

Instead, examiners are provided with a very detailed description of what is expected at each attainment band (i.e. distinction, merit, pass, below pass upper level, below pass lower level) for every section of each exam, and the examiner notes which of these the candidate’s performance matched.

The examiner then uses a special conversion table to convert that level of attainment band into the appropriate mark for that section of that particular grade.
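The conversion step amounts to a simple table lookup. The five band names come from the article; the mark values below are invented for illustration (chosen only to be consistent with the article’s note that 6 out of 10 falls below the pass mark) and would in practice differ by grade and section:

```python
# Hypothetical sketch of the band-to-mark conversion step. The band names
# are from the article; the marks (out of 10 per section) are invented for
# illustration and are not RGT's actual conversion table.

CONVERSION_TABLE = {
    "distinction": 9,
    "merit": 8,
    "pass": 7,
    "below pass upper level": 6,
    "below pass lower level": 5,
}

def band_to_mark(band):
    """Convert an attainment band into the numerical mark for a section."""
    return CONVERSION_TABLE[band]
```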


RGT Exams – Reliability

There is no such thing as a ‘harsh’ or ‘lenient’ RGT examiner, as we go to great lengths to ensure that all examiners award marks objectively using exactly the same assessment criteria, and the marks awarded are continually analysed and monitored via our exam standardisation procedures described above.

As a result of all these procedures, we can be sure that RGT candidates will always be marked fairly and consistently, regardless of where they take their exam and no matter which examiner assesses them.


Do you have a question about how RGT Exams are marked or how quality control is monitored? Share your thoughts in the comments section below.
