Reliability of ophthalmic diagnoses in an epidemiologic survey

In the Nepal Blindness Survey, 39,887 people in 105 sites were examined by 10 ophthalmologists from Nepal and four other countries during 1981. Ophthalmic protocols were pretested on approximately 3000 subjects; nevertheless, some interobserver variability was inevitable. To quantify this variability and assess the reliability of important ophthalmic measures, a study of interobserver agreement was conducted. Five ophthalmologists, randomly assigned to one of two examining stations in a single survey site, carried out 529 pairs of examinations. Eighty demographic and ophthalmic variables were assessed at each station. For 62 of the 80 measures (77.5%), observer agreement exceeded 90%. Because pathologic findings were rare, however, chance agreement alone could yield misleadingly high per cent agreement; the kappa statistic was therefore used to assess the comparative reliability of the ophthalmic measures. Kappa could be computed and ranked by strength of agreement for 74 measures: 20 (27%) showed excellent agreement (kappa = 0.75-1.00), 39 (53%) showed fair to good agreement (kappa = 0.40-0.74), and 15 (20%) showed poor agreement (kappa < 0.40). In general, measures dealing with blindness prevalence or causes of blindness showed excellent agreement, while polychotomous descriptions of rare clinical signs demonstrated less agreement.
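To make the chance-agreement problem concrete, the short sketch below computes both per cent agreement and Cohen's kappa for a rare binary sign rated by two observers. The 2x2 table of counts is hypothetical, chosen only to illustrate the point; it is not taken from the survey data.

def cohens_kappa(table):
    """Cohen's kappa for a square inter-observer agreement table.

    table[i][j] = number of subjects rated category i by observer A
    and category j by observer B.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed agreement: proportion of subjects on the diagonal.
    p_observed = sum(table[i][i] for i in range(k)) / n
    # Chance agreement: expected diagonal proportion from the margins.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_chance = sum(row_totals[i] * col_totals[i] for i in range(k)) / n ** 2
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical rare sign in 500 paired examinations: both observers
# call 490 subjects negative but agree on only 2 of the positives.
table = [[490, 4],   # observer A negative: B negative, B positive
         [4, 2]]     # observer A positive: B negative, B positive
n = sum(sum(row) for row in table)
agreement = (table[0][0] + table[1][1]) / n
print(f"per cent agreement = {agreement:.1%}")  # 98.4%
print(f"kappa = {cohens_kappa(table):.2f}")     # 0.33

Despite per cent agreement above 98%, kappa is about 0.33, which falls in the study's "poor agreement" category (kappa < 0.40); this is exactly the dissociation that motivated reporting kappa alongside raw agreement.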