1. One study of twelfth-grade essay examinations found that the highest mean grades went to neatly handwritten essays, and that poorly handwritten essays actually fared better than typewritten essays. See Jon C. Marshall & Jerry M. Powers, Writing Neatness, Composition Errors, and Essay Grades, 6 J. Educ. Measurement 97, 99-100 (1969). The authors expressed surprise that good typewritten papers fared so poorly, but their data suggest one explanation. They investigated not only how neatness might influence a grader instructed to evaluate solely on content, but also how spelling and grammar errors might influence the grader. Among exams without spelling or grammar errors, typed exams finished a close second to neatly handwritten exams and ahead of exams written in fair or poor handwriting. Only when exams contained spelling or grammar errors did typed exams score below those written in fair or poor handwriting. Although it is difficult to discern from the data presented, grammar and spelling errors may be more noticeable in a typed essay than in one written in fair or poor handwriting. See Elaine Peterson & Wei Wei Lou, The Impact of Length on Handwritten and Wordprocessed Papers, Educ. Resources Info. Center, Dep't of Educ. 12 (1991). Typed examinations may thus create a larger spread: they are more appealing when done well, but when done poorly, the flaws are more evident.
2. See, e.g., Claudia S. Dybdahl et al., The Impact of the Computer on Writing: No Simple Answers, 13(3/4) Computers in the Schools 41 (1997); Ronald D. Owston & Herbert H. Wideman, Word Processors and Children's Writing in a High-Computer-Access Setting, 30 J. Res. on Computing in Educ. 202 (1997); Sarah E. Peterson, A Comparison of Student Revisions When Composing with Pen and Paper Versus Word-Processing, 9(4) Computers in the Schools 55 (1993).
3. Even when directed to focus solely on content, graders tend to give higher scores to essays with neat, clear handwriting than to those with poor handwriting. See, e.g., Iris McGinnis & Charles A. Sloan, The Effect of Handwriting on Teachers' Grading of High School Essays, Educ. Resources Info. Center, Dep't of Educ. 6 (1978); Clinton I. Chase, The Impact of Some Obvious Variables on Essay Test Scores, 5 J. Educ. Measurement 315 (1968). As Chase points out, differential grading based on penmanship may have been more pronounced earlier in the twentieth century: because modern curricula give less emphasis to penmanship, poor penmanship no longer strikes a grader as such a departure from the norm.
4. See Schuyler W. Huck & William G. Bounds, Essay Grades: An Interaction Between Graders' Handwriting Clarity and the Neatness of Examination Papers, 9 Am. Educ. Res. J. 279 (1972).
5. See Marshall & Powers, supra note 1, at 100. But see Chase, supra note 3, at 315 (finding that spelling did not correlate significantly with scores on essay tests).
6. John Follman et al., Graphics Variables and the Reliability and the Level of Essay Grades, 8 Am. Educ. Res. J. 365, 371 (1971).
7. Several measures can combat this leniency effect. First, the teacher can regrade the first few examinations to see whether there was indeed a pattern of underscoring, continuing the regrading until the underscoring pattern ceases to appear. Second, where an exam has multiple essays, the teacher can grade each essay separately rather than grading the entire exam at one time, reversing the order of grading or shuffling the stack between essays. Grading by essay rather than by entire examination, and altering the order in which the exams are read, should spread the impact of the leniency effect more randomly throughout the class. Third, because rescoring the initial exams is not particularly pleasant and is largely against self-interest, it is critical that we as teachers be introspective about the scoring decisions we make during grading. Although we may sometimes be unaware that we are becoming more lenient, our experience is that we often sense when we have begun to do so. In such instances our obligation is to correct the disparity, either by restraining ourselves from scoring more generously or by rescoring the exams graded under different criteria.
8. There were approximately 240 exams for which we were unable to obtain data from our colleagues. To maintain student confidentiality and privacy, students were never identified to us by name but only by the confidential exam numbers assigned to facilitate blind grading. We maintained this confidentiality with respect to all other data collected for our study as well. When we gathered LSAT scores and undergraduate GPAs for other questions in the study, administrative staff who were not involved in our study substituted examination numbers for student names.
9. Our first-year curriculum also includes a legal writing course, which we did not include in our study because the legal writing grades were determined by several writing projects that had to be keyboarded, rather than by an examination for which keyboarding or handwriting was an option.
10. For example, for the 1999-2000 academic year, a GPA increase of 0.10 would have moved a student from a class rank of 73/148 to 53/148, from 97/148 to 77/148, or from 19/148 to 10/148.
11. The reliability of this 0.1 grade differential was diminished somewhat by the fact that several of the first-year exams have a multiple-choice or short-answer component. For those exams with an objective component, we examined the mean and median raw scores on the essay portion of the examination. That comparison yielded similar results: the mean and median raw essay scores were consistently higher for the exams that had been keyboarded.
12. The difference between the mean scores of keyboarded and handwritten exams, after controlling for year and course, was statistically significant at p < 0.0001.
13. The chi-square test determines whether two variables can be considered statistically independent. In performing the test, the observed frequency in each cell of a table is compared to the frequency that would be expected if the row and column classifications were indeed independent. If the calculated statistic is sufficiently large according to a predetermined significance level, typically 0.05, the two variables are considered dependent. See Arnold Naiman et al., Understanding Statistics 159-60 (New York, 1972).
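To make the mechanics concrete, here is a minimal sketch of such a test of independence in Python using SciPy; the contingency table is hypothetical (rows are LSAT quartiles, columns are exam medium), not the study's actual counts.

    # Hypothetical counts: rows = LSAT quartiles, columns = (keyboarded, handwritten)
    from scipy.stats import chi2_contingency

    observed = [
        [310, 82],   # bottom quartile (hypothetical)
        [330, 68],
        [345, 57],
        [348, 61],   # top quartile (hypothetical)
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)

    # If p_value falls below the chosen significance level (typically 0.05),
    # the row and column classifications are treated as dependent.
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")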
14. See Linda Wightman, The Threat to Diversity in Legal Education: An Empirical Analysis of the Consequences of Abandoning Race as a Factor in Law School Admission Decisions, 72 N.Y.U. L. Rev. 1, 31-32 (1997) (noting that the "median correlation coefficient for the LSAT alone [and first year grades] is .41, compared with 0.26 for UGPA alone" and concluding "[t]here has been and continues to be substantial statistical support for the claim of validity of the LSAT" for predicting academic success in the first year of law school). We decided to use LSAT scores rather than GPA because the LSAT is more applicable to the situations of other law schools around the country. Given the rather large numbers of BYU undergraduates who attend BYU Law School, judging academic strength with reference to undergraduate GPA would have made our study less relevant to other institutions.
15. Students between the median and the 75th percentile and students above the 75th percentile keyboarded almost identical percentages of their exams, 85.85 and 85.10 percent, respectively.
16. In addition to the chi-square test, Hendrix performed a t test comparing mean LSAT scores for keyboarded and handwritten exams. It showed that the mean LSAT score for students who keyboarded was 160.28 and the mean LSAT score for students who handwrote was 159.21. This mean difference of 1.07 was statistically significant at p < 0.0002.
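As an illustration of the kind of comparison described here, the following sketch runs a two-sample t test in Python with SciPy; the LSAT scores are hypothetical stand-ins, not the study's data.

    from scipy.stats import ttest_ind

    # Hypothetical LSAT scores for the two groups of exams
    lsat_keyboarded = [162, 158, 165, 161, 159, 163, 160, 164]
    lsat_handwritten = [157, 160, 158, 161, 156, 159, 162, 158]

    # Tests whether the two group means differ more than chance would predict
    t_stat, p_value = ttest_ind(lsat_keyboarded, lsat_handwritten)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")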
17. The complete protocol is on file with the authors.
18. We used the undergraduate GPAs provided by LSDAS, which are standardized to the same scale (A = 4.0, A- = 3.67, B+ = 3.33, etc.) but are not corrected for grade inflation at different institutions and between particular majors.
19. The t test compares the observed mean in a data sample with a theoretical distribution. See Naiman et al., supra note 13, at 133-34. Specifically, in this case the t test compares the observed mean difference between pairs with the distribution one would expect in the absence of any difference between keyboarding and handwriting.
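A minimal sketch of a paired comparison of this kind, in Python with SciPy; the score pairs are hypothetical and merely stand in for matched keyboarded and handwritten scores.

    from scipy.stats import ttest_rel

    # Hypothetical matched pairs: each position holds one pair's two scores
    keyboarded = [3.4, 3.1, 2.9, 3.6, 3.2, 3.0, 3.5, 2.8]
    handwritten = [3.2, 3.0, 2.9, 3.4, 3.1, 2.7, 3.3, 2.8]

    # ttest_rel tests whether the mean pairwise difference is zero, i.e.,
    # whether the observed mean difference is consistent with no effect
    t_stat, p_value = ttest_rel(keyboarded, handwritten)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")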
20. The index is derived from a formula that combines a student's undergraduate GPA and LSAT score and is designed to predict first-year performance more accurately than either indicator alone. Each law school has an individual index derived from a formula provided to it by the Law School Data Assembly Service (LSDAS). Each year, LSDAS receives from participating institutions the LSAT scores and undergraduate GPAs of all entering students, as well as those students' first-year grades. Based on those three pieces of data, LSDAS tells each institution what combination of LSAT score and undergraduate GPA would best have predicted first-year law school performance. That combination is expressed as a formula that each law school can then use to compute the "index" predicting the performance of its entering class based on years past. The higher the index, the better the predicted law school performance.
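One way to picture the derivation is as an ordinary least-squares fit of first-year GPA on LSAT score and undergraduate GPA, with the fitted combination serving as the index. The sketch below, in Python with NumPy, uses hypothetical numbers; the actual formula is supplied to each school by LSDAS.

    import numpy as np

    # Hypothetical entering-class data
    lsat = np.array([158.0, 162, 165, 170, 155, 160, 168, 163])
    ugpa = np.array([3.4, 3.6, 3.7, 3.9, 3.2, 3.5, 3.8, 3.6])
    fygpa = np.array([2.9, 3.1, 3.3, 3.6, 2.7, 3.0, 3.5, 3.2])  # first-year GPA

    # Least-squares fit: fygpa ~ b0 + b1*lsat + b2*ugpa
    X = np.column_stack([np.ones_like(lsat), lsat, ugpa])
    b0, b1, b2 = np.linalg.lstsq(X, fygpa, rcond=None)[0]

    # An applicant's index is the predicted first-year GPA; the higher
    # the index, the better the predicted performance
    index = b0 + b1 * 166 + b2 * 3.7
    print(f"index for LSAT 166, UGPA 3.7: {index:.2f}")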
21. All of these factors except gender were significantly related to overall first-year GPA at BYU Law School. Undergraduate GPA, however, was not significantly related to overall first-year law school GPA after index was added to the model.
22. In addition to those who cut and paste, there are, of course, always students who have a great deal to say about issues that simply are not presented by the fact pattern in the essay question. With perhaps some self-interested motives, we have cautioned our students that sheer volume is not the primary goal.
23. The 95 percent confidence interval for the slope of the line is 0.017 to 0.024.
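For readers unfamiliar with the computation, this sketch shows how a 95 percent confidence interval for a regression slope is obtained in Python with SciPy; the (length, score) pairs are hypothetical, not our data.

    from scipy.stats import linregress, t

    # Hypothetical data: exam length (hundreds of characters) and essay score
    length = [18, 22, 25, 30, 33, 38, 41, 47, 52, 60]
    score = [2.5, 2.6, 2.7, 2.9, 3.0, 3.2, 3.2, 3.5, 3.6, 3.9]

    res = linregress(length, score)
    t_crit = t.ppf(0.975, df=len(length) - 2)  # two-sided 95%, n - 2 degrees of freedom
    low = res.slope - t_crit * res.stderr
    high = res.slope + t_crit * res.stderr
    print(f"slope = {res.slope:.4f}, 95% CI = ({low:.4f}, {high:.4f})")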
24. Correlating exam length and grades for handwritten exams is a project for another time. Counting characters in handwritten exams involves questions of interpretation, whether one attempts to count directly from the handwriting or first transcribes the exam to computer text and then runs a mechanical count.
25. Michael Russell, Testing on Computers: A Follow-up Study Comparing Performance on Computer and on Paper, 7(20) Educ. Pol'y Analysis Archives (1999), available at http://olam.ed.asu.edu/epaa (visited November 6, 2000).
26. See, e.g., Wightman, supra note 14, at 42-43 (noting a linear relationship between LSAT score and socioeconomic standing). Sometimes called the digital divide, the relationship between computer access and socioeconomic status has been much discussed. See, e.g., William E. Kennard, Equality in the Information Age, 51 Fed. Comm. L.J. 553, 554 (1999).