Burton, R. F. (2001) "Quantifying the Effects of Chance in Multiple Choice and True/False Tests: question selection and guessing of answers", Assessment & Evaluation in Higher Education 26(1)
Grendon, P. and Pieper, P. (2005) "Does Attendance Matter? Evidence from an Ontario ITAL", Discussion Paper, The Business School, Humber Institute of Technology and Advanced Learning, Toronto, Canada. Downloaded from: http://economics.ca/2005/papers/0483.pdf on 28/06/06 at 14:17 GMT
Hilmer, M. J. (2001) "A Comparison of Alternative Specifications of the College Attendance Equation with an Extension to Two-Stage Selectivity-Correction Models", Economics of Education Review 20: 263-278
Marburger, D. R. (2001) “Absenteeism and Undergraduate Exam Performance”, Journal of Economic Education 37 (Spring): 148-155
Petress, K. C. (1996) “The Dilemma of University Undergraduate Student Attendance Policies”, College Student Journal 30(3): 387-389
Romer, D. (1993) "Do Students go to Class? Should They?", Journal of Economic Perspectives 7 (Summer): 167-174
Thomas, W. and Webber, Don J. (2001) “'Because my Friends are': The Impact of Peer Groups on the Intention to Stay-on at Sixteen”, Research in Post-Compulsory Education 6(3): 339-354
Webber, Don J. and Walton, F. (2006) “Gender Specific Peer Groups and Choice at Sixteen”, Research in Post-Compulsory Education 11(1): 65-84
1 In terms of the attendance policy, footnote 2 (page 154) states that a student who misses more than twice the number of lectures normally scheduled per week would receive an 'F' grade; in this case, a student who misses more than 6 microeconomics classes would receive an 'F'.
2 The reported results on the link between exam performance and absenteeism are rather surprising. Table 2 shows that, for all students who missed a given class, the likelihood of responding incorrectly to a question on that class's topic rose from 9% in exam 1 to 14% in exam 3. Yet when absenteeism was at its highest for the no-policy group, in teaching block III and prior to the final exam, this group was only 2% more likely to give a wrong answer than students in the policy group.
3 As it is, we cannot speculate any further because Marburger does not report the distribution or average marks for all nine groups covering 2001 to 2003. Other concerns rest with the exam results: we are given no details about the timing or length of the exams, the number of multiple choice questions set, or the number of choices in each question. Burton (2001) demonstrates that a typical 60-question four-choice test is "inherently too unreliable for the demands commonly placed on it" (p 47). If the exams set during Marburger's study were of this nature, then the degree of guessing could be significant and would compromise the validity of the final marks for all students.
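The scale of the guessing problem is easy to see from a simple binomial model of a blind guesser. The sketch below is our own illustration, not a calculation from Burton (2001); the 60-question, four-choice parameters are taken from the example test discussed in the footnote above.

```python
import math

def guessing_stats(n_questions=60, n_choices=4):
    """Mean and standard deviation of the score earned by guessing
    blindly on every question, modelled as a binomial with p = 1/choices."""
    p = 1.0 / n_choices
    mean = n_questions * p
    sd = math.sqrt(n_questions * p * (1 - p))
    return mean, sd

mean, sd = guessing_stats()
print(mean, round(sd, 2))  # 15.0 correct on average, sd about 3.35
```

On this model a student who knows nothing still scores 15 of 60 on average, and chance alone moves individual scores by several marks in either direction, which is the sense in which such a test may be too unreliable for fine distinctions between students.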
4 It is our view that the impact of either removing or imposing a policy on attendance is unlikely to be uniform across attendance rates or consistent across cohorts. Each approach will arrive at different conclusions which could then mislead policy makers.
5 It is interesting to note from Marburger's study that the local students worked less; this is most evident in the policy class. In the no-policy class there were fewer locals, and these individuals worked more hours on average. This may be associated with higher living costs for rent (not living at home with parents) and travel costs to get back home to see the family.
6 The extent to which the year 2 mark accurately captures the student's ability is questionable; the analysis of the changes in exam marks is presented below.