This is really interesting and somewhat scientific too. The process is long and a little complicated, but it works for the Commission, if not for the candidates. This is an edited excerpt from a submission by the UPSC to the Supreme Court, which was also published on Iksa.
In the counter affidavit filed on behalf of the Commission, the entire methodology of conducting the examination and evaluating answer scripts has been explained in the following words:
The UPSC conducts 14 structured examinations a year involving lakhs of candidates. Some of these, such as the NDA and CDS Examinations, consist of objective-type (multiple-choice) question papers with OMR answer sheets, on which the candidate has to blacken the correct answer choice. Other examinations, including the Civil Services (Main) Examination, have ‘conventional’ (essay-type) question papers that require discursive handwritten answers.
While objective-type answer sheets are evaluated through a scanner and computer, conventional answer-books are evaluated manually by Examiners.
How mains papers are evaluated:
GENERAL PROCESS OF EVALUATION FOR ‘CONVENTIONAL’ (DISCURSIVE) TYPE PAPERS
1. The Head Examiner is called early (before the Examiners’ meeting) and evaluates sample/random answer-books for each Additional Examiner being called. All answer-books are coded with fictitious numbers prior to the start of the evaluation exercise.
2. The Examiners’ meeting starts immediately after (i) above. The Head Examiner and Additional Examiners discuss the question paper exhaustively and agree on assessment standards and evaluation yardsticks.
3. Each Examiner evaluates the specimen random answer-books allotted to him/her that have already been seen initially by the Head Examiner and indicates a tentative award. The answer-books are then scrutinized by the scrutiny staff for totalling errors, unevaluated portions etc. and, where necessary, revised by the Examiner.
4. After (iii) above, the Head Examiner meets each Additional Examiner, in turn, to compare evaluation standards based on marks awarded by each for the specimen random answer books. Reconciliation/ recalibration of standards, wherever required, is done, and marks are accordingly finalized for the specimen answer books.
5. Ideally, once standards are thus set as above, assessment should be uniform. In practice, however, assessment standards tend to vary during the course of evaluation, with some examiners being ‘strict’ and others ‘liberal’.
6. To ensure uniformity, therefore, the Head Examiner re-examines a certain number of each Additional Examiner’s answer-books to check whether the agreed standards of assessment have been followed. After this re-examination, the Head Examiner either confirms the Additional Examiner’s award or revises it and indicates the revised award on the answer-book. Based on this revision (wherever done), the quantum of moderation to be applied (upwards or downwards) to the remaining answer-books evaluated by the Additional Examiner is determined. In extreme cases, where the Head Examiner’s check shows the Additional Examiner’s marking to be erratic, all the answer-books evaluated by that Examiner are re-examined either by the Head Examiner or by another Additional Examiner whose standards are seen to match those of the Head Examiner.
7. Based on (vi) above, inter-examiner moderation is carried out and applied to each candidate (identified only by the fictitious code number). Before this is done, however, each and every answer-book is scrutinized by the scrutiny staff, and totalling errors, unevaluated portions, credit awarded to answers exceeding the prescribed number of attempts etc. are rectified, with revised awards indicated on the answer-books under the initials of the Examiner(s).
8. After evaluation of all subject-papers is over, the performance of candidates in each is examined on the basis of the marks awarded at the end of the inter-examiner (intra-subject) moderation above. Candidates for this Examination choose any two optional subjects (each subject having two Papers) from a basket of 55 diverse optional subjects (30 Literature and 25 non-Literature), in effect, 4 Optional Papers from amongst 110. Apart from the differences in the scope and coverage of the syllabi, the difficulty level of the question-papers and the standards of evaluation are inevitably different and can vary from year to year across subjects and papers. Taking a holistic perspective, therefore, and drawing on its decades of experience, the Commission applies upward or downward inter-subject moderation wherever required. This is done to ensure a level playing field for all candidates. It is important to note that at this stage too, only statistics are taken into consideration, with full anonymity as regards candidates’ details.
9. Based on the inter-subject moderation above, marks are finally awarded to each Paper of every candidate (as represented by the relevant fictitious code numbers). This final award subsumes all the earlier stages. It is only these final paper-wise awards that are then considered for preparing the common merit-list after decoding of the relevant fictitious numbers. In all subsequent processing, it is only the final (moderated) awards that are factored in, and the earlier stages are no longer relevant in this context.
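As a rough illustration only, the moderation arithmetic in steps 6–9 above can be sketched in a few lines of Python. The Commission does not publish its actual formulas, so all marks, code numbers, and shift rules below are invented for the example; only the overall shape (raw marks, an inter-examiner shift derived from the Head Examiner’s re-check, then an inter-subject shift toward a common reference) follows the text.

```python
# Toy model of steps 6-9; the real UPSC moderation formulas are not public.
from statistics import mean

def inter_examiner_shift(examiner_awards, head_awards):
    # Quantum of moderation: average gap between the Head Examiner's and
    # the Additional Examiner's awards on the re-checked specimen books.
    return mean(h - e for e, h in zip(examiner_awards, head_awards))

def inter_subject_shift(subject_marks, reference_mean):
    # Shift a subject's marks so its mean matches a common reference mean
    # (a crude stand-in for the "level playing field" adjustment).
    return reference_mean - mean(subject_marks)

# A 'strict' Additional Examiner, caught by the Head Examiner's re-check:
shift = inter_examiner_shift([40, 55, 62], [46, 60, 70])  # upward shift

# Applied to the rest of that examiner's books (fictitious code numbers):
raw = {"F1017": 48, "F2291": 71, "F3350": 39}
moderated = {code: m + shift for code, m in raw.items()}

# Inter-subject moderation across two (invented) optional subjects:
subjects = {"History": [52, 61, 47], "Physics": [70, 78, 66]}
reference = mean(m for marks in subjects.values() for m in marks)
adjustments = {s: inter_subject_shift(marks, reference)
               for s, marks in subjects.items()}
```

In this toy run the ‘strict’ examiner’s awards are raised by the average sample gap, and the lower-scoring subject is shifted up while the higher-scoring one is shifted down, mirroring the upward/downward moderation the affidavit describes.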
PROBLEMS IN SHOWING EVALUATED ANSWER-BOOKS TO CANDIDATES
Final awards subsume earlier stages of evaluation. Disclosing answer-books would reveal intermediate stages too, including the so-called ‘raw marks’, which would have negative implications for the integrity of the examination system, as detailed in Section (C) below.
The evaluation process involves several stages. Awards assigned initially by an examiner can be struck out and revised because of:
(a) totalling mistakes, unevaluated portions, or extra attempts (beyond the prescribed number) being corrected later as a result of clerical scrutiny;
(b) the Examiner changing his/her own awards during the course of evaluation, either because of an inadvertent error in the initial marking or to conform more closely to the accepted standards after discussion with the Head Examiner or colleague Examiners;
(c) the initial awards of the Additional Examiner being revised by the Head Examiner during the latter’s check of the former’s work;
(d) the Additional Examiner’s work, having been found erratic by the Head Examiner, being re-checked entirely by another Examiner, with or without the Head Examiner again re-checking this work.
The corrections made in the answer-book would likely arouse doubt and perhaps even suspicion in the candidate’s mind. Where such corrections lead to a lowering of earlier awards, this would not only breed representations and grievances but would likely lead to litigation. In the only case so far in which an evaluated answer-book has been shown to a candidate (Shri Gaurav Gupta in WP 3683/2012), on the orders of the Delhi High Court and with the marks assigned masked, the candidate has nevertheless filed a fresh WP alleging improper evaluation.
As relative merit, not absolute merit, is the criterion here (unlike in academic examinations), a candidate looking at a particular answer-script in isolation could feel that the initial marks or a subsequent revision were harsh, without appreciating that similar standards have been applied to all others in the field. Non-appreciation of this would erode faith and credibility in the system and invite challenges to its integrity, including through litigation.
With the disclosure of evaluated answer-books, the danger of coaching-institutes collecting copies of these from candidates (after perhaps encouraging/inducing them to apply for copies of their answer-books under the RTI Act) is real, with all its attendant implications.
With disclosure of answer-books to candidates, it is likely that at least some of the relevant Examiners also get access to these. Their possible resentment at their initial awards (that they would probably recognize from the fictitious code numbers and/or their markings, especially for low-candidature subjects) having been superseded (either due to inter-examiner or inter-subject moderation) would lead to bad blood between Additional Examiners and the Head Examiner on the one hand, and between Examiners and the Commission, on the other hand. The free and frank manner in which Head Examiners, for instance, review the work of their colleague Additional Examiners, would likely be impacted. Quality of assessment standards would suffer.
Some of the optional Papers have very low candidature (sometimes only one candidate), especially the literature papers. Even if all Examiners’ initials are masked (which is itself difficult logistically, as each answer-book has several pages, examiners often record their initials and comments on several pages, and revisions or corrections, where made, add to the size of the problem), the way marks are awarded could itself be a give-away revealing the examiner’s identity. If the masking falters at any stage, the examiner’s identity is pitilessly exposed. The ‘catchment area’ of candidates and Examiners in some of these low-candidature Papers is known to be limited. Any such possibility of the Examiner’s identity being revealed in such a high-stakes examination would have serious implications, both for the integrity and fairness of the Examination system and for the security and safety of the Examiner. The matter is compounded by the fact that we have publicly stated in different contexts earlier that the Paper-setter is also generally the Head Examiner.
The UPSC is now able to get some of the best teachers and scholars in the country to be associated with its evaluation work. An important reason for this is no doubt the assurance of their anonymity, for which the Commission goes to great lengths. Once disclosure of answer-books starts and the inevitable challenges (including litigation) from disappointed candidates begin, it is only a matter of time before these Examiners, who would be called upon to explain their assessments and awards, decline to accept further assignments from the Commission. A corollary would be that Examiners who do then accept the assignment would be sorely tempted to play safe in their marking, awarding neither outstanding marks nor very low marks, even where these are deserved. Mediocrity would reign supreme, and not only the prestige but the very integrity of the system would be compromised markedly.”
This methodology applied to the old syllabus, but something similar might be adopted for the new exam pattern. The source document is a Supreme Court case: Click here to check.