






Journal of the Scholarship of Teaching and Learning, Vol. 12, No. 3, September 2012, pp. 78-87.
Jana Hackathorn [1], Kathryn Cornell [2], Amy M. Garczynski [3], Erin D. Solomon [3], Katheryn E. Blankmeyer [3], and Rachel E. Tennial [3]

[1] Murray State University, 209 Wells Hall, Murray KY 42071, jhackathorn@murraystate.edu
[2] Murray State University
[3] Saint Louis University

Abstract: Instructors commonly use exam reviews to help students prepare for exams and to increase student success. The current study compared the effects of traditional, trivia, and practice test-based exam reviews on actual exam scores, as well as students' attitudes toward each review. Findings suggested that students' exam scores were significantly higher for the exams following both a traditional review and a trivia review than for the exam following the practice test review. Immediately after the review, all three attitude measures (i.e., confidence, feeling prepared, and helpfulness of the review) were lowest after the traditional review. Finally, immediately after taking the exam, students reported higher ratings (i.e., confidence, feeling prepared, and helpfulness) after the trivia review. Implications of these results are discussed.

Keywords: Exam Reviews, Teaching Effectiveness, Learning

I. Introduction.

Using exam reviews to help students prepare for an exam leads to better test results than no review at all (King, 2010). However, the majority of the literature on exam reviews has been either anecdotal or focused mainly on measures of students' attitudes. For example, Middlecamp (2003) measured students' perceptions of a review game for an abnormal psychology course. The findings suggested that most of the students enjoyed the review game and felt it was challenging. However, Middlecamp did not measure the actual effectiveness of playing the game on exam scores. This is problematic, as students' perceptions of a review's effectiveness may not align with its actual effectiveness. Furthermore, literature directly comparing different types of exam reviews is sparse. Arguably, any exam review, if done correctly, can garner positive effects for students' success. However, what remains unclear is whether certain exam reviews are more effective than others. Thus, the current study sought to directly compare the effectiveness of several variations of exam reviews (i.e., traditional, trivia, and practice test).

Traditional exam reviews, a common approach, ask students to compose questions that the instructor will answer during a scheduled, class-wide meeting. However, the success of a traditional exam review is contingent upon students coming to class prepared to ask questions. Moreover, the traditional exam review is often received by students as 'just another class lecture', passive and disengaging, and only including, or benefitting, the few students who prepared (King, 2010; Paul, Hollis, & Messina, 2006). In attempts to remedy this, many educators have created and experimented with new ways to implement reviews with the hopes of
improving exam scores, increasing attendance on review days, and influencing attitudes (e.g., satisfaction, feelings of efficacy) about the given subject.

One alternative is for instructors to implement trivia-style games, similar to the television show Jeopardy, to review material. These are often used because students appear to enjoy them and because they can be easily created for any level of course, regardless of the type or difficulty of the material. For example, Keck (2000) created a trivia game, based on the game show Jeopardy, as a supplement to a traditional review before a final exam in an upper-level chemistry course. Results indicated that students enjoyed the trivia-based review game, and that students who participated earned higher exam scores than those who did not participate. Similarly, Paul, Hollis, and Messina (2006) examined the effectiveness of a trivia game on exam scores. Their findings suggested that grades increased significantly across all four exams for students who played the trivia game, as opposed to students who chose not to play. Self-report measures indicated that the game experience taught students how and what to study. Perhaps most importantly, students reevaluated their study methods based on the inadequacies in their knowledge discovered during the game reviews.

A major limitation of Paul and colleagues' (2006) study, however, was that students chose whether or not to attend the class when the game was played. That is, students self-selected into the experimental or control group. This is problematic for a couple of reasons. First, it is unclear whether students who did not choose to play the game received a different type of review, or no review at all. Moreover, it is unclear whether this type of review is helpful for all students, or just for the students who would choose to play a review game. That is, an individual difference variable, such as intrinsic motivation or interest in the subject material, could be producing the higher exam grades. Perhaps those students who chose to play the trivia game would have scored higher on the exams regardless, because they possess greater motivation.

Finally, practice tests are an additional alternative to traditional reviews; however, past literature is inconclusive regarding their effectiveness. Some studies indicate that students prefer technology-based learning, particularly practice tests, because they appreciate the opportunity to check their knowledge level and to get a 'sneak peek' at the format and wording of exam questions (King, 2010; Pemberton, Borrego, & Cohen, 2006). For example, Balch (1998) assigned students either to take a practice test or to review a completed practice test prior to an exam, and found that actively taking the practice test significantly improved subsequent exam scores. The author argued that the process of actually taking an exam, and subsequently getting answers incorrect, allows students to understand their weaknesses. Additionally, Balch argued that by taking the practice tests, students engage in deeper cognitive elaboration, improving understanding and retention of the material. King (2010) also used a slightly different version of the practice test exam review, in which students used clickers to answer multiple-choice questions.
Attendance and exam grades were higher for clicker reviews relative to other reviews; however, similar to Paul and colleagues (2006), the results are confounded by the fact that students self-selected into the exam review or no-review conditions. Conversely, other studies have shown little to no effect of the practice test review. Pemberton, Borrego, and Cohen (2006) found that using online, computer-based practice tests in reviews did not improve exam grades compared to a traditional review. Additionally, Kalat (1983) showed that a practice test that mimicked the style of the real exams had no effect on exam scores compared to a control group with no review.
B. Procedure.

Undergraduate psychology students in two upper-level social psychology courses were instructed at the beginning of the semester that an examination of various teaching techniques and exam reviews would occur throughout the duration of the course. Students were also informed that any and all activities were voluntary, and that they could choose not to complete any or all of the surveys. They did not receive any extra credit for their participation or for completing the surveys. However, in an attempt to increase attendance at each of the exam reviews, extra credit was offered for participating in each exam review.

To control for variance across courses, the instructors worked together to cover the same material using the same notes and identical exam review procedures. Additionally, both courses used the same textbook and the same homework assignments, and assessed learning through the exact same multiple-choice exams. During the class period prior to each of the three exams, students were presented with a different type of exam review: a trivia-style game review for the first exam, a traditional review for the second exam, and a practice test prior to the third exam, which was also the final exam. To increase attendance and participation at each of the exam reviews, extra credit points were added to students' final grades. To maintain the integrity of the dependent variable, exam scores before the extra credit was added were used in the analyses.

1. Trivia Exam Review.

On the scheduled class day prior to the first exam, students participated in a trivia game in which they were randomly assigned to teams of five to six students. The instructors created a series of questions prior to the class. Questions ranged from giving definitions and having students recall or recognize (via a multiple-choice format) the correct concepts, to giving real-world examples and having students correctly apply concepts. Students took turns representing their group for each question. Groups were asked questions pertinent to material on the upcoming exam, and a point was awarded to the first team with the correct answer.

2. Traditional Exam Review.

On the scheduled class day prior to the second exam, students participated in a traditional review. Students were instructed to come to class with questions regarding material that was unclear or needed further clarification. During the class period, students were called on to read a prepared question aloud, and the instructor would answer the question or provide clarification regarding subject matter that was unclear. Other students were not necessarily given an opportunity to answer the questions, but were allowed to ask follow-up questions or discuss the material until everyone in the class understood. There were no limits on how many questions each student could ask, and instructors used the full class time.

3. Practice Test Exam Review.

On the scheduled class day prior to the third exam, students participated in a computerized practice test on their own computers outside of the classroom. The computer practice test took
place during the same 50-minute period in which the class was scheduled. Students were informed that the number of questions they answered correctly would not affect their course grade or extra credit. The 20 questions on the practice test mimicked the format, wording, and material of the actual multiple-choice questions on the upcoming exam, but were not identical to actual test items. Only 20 questions were given, as this provided students the opportunity to take notes, look up answers if necessary, or get further clarification from their notes or textbooks. After students finished the practice test, they were informed of which question(s) they got incorrect, the correct answer(s), and an explanation of the correct answer(s).
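The paper does not describe the software used for this review, but the procedure amounts to a short, self-scoring quiz that gives per-item feedback and does not affect the course grade. As a rough illustration only, the sketch below shows one way such a quiz could be implemented; the questions, options, and explanations are invented placeholders, not items from the study.

```python
# Hypothetical sketch of a self-scoring practice test with per-item feedback.
# The items below are placeholders for illustration; a real review would use
# 20 instructor-written multiple-choice questions mirroring the upcoming exam.

QUESTIONS = [
    {
        "prompt": "Which term describes changing one's behavior to match a group norm?",
        "options": {"a": "Conformity", "b": "Cognitive dissonance", "c": "Attribution"},
        "answer": "a",
        "explanation": "Conformity is a change in behavior to align with group norms.",
    },
    # ... additional items would follow here.
]

def run_practice_test(questions):
    """Administer each question, collect a response, and give immediate feedback."""
    score = 0
    for i, q in enumerate(questions, start=1):
        print(f"\nQuestion {i}: {q['prompt']}")
        for key, text in q["options"].items():
            print(f"  {key}) {text}")
        response = input("Your answer: ").strip().lower()
        if response == q["answer"]:
            score += 1
            print("Correct.")
        else:
            correct_text = q["options"][q["answer"]]
            print(f"Incorrect. Correct answer: {q['answer']}) {correct_text}")
        print(f"Explanation: {q['explanation']}")
    print(f"\nYou answered {score} of {len(questions)} correctly.")
    print("Note: this score does not affect the course grade or extra credit.")

if __name__ == "__main__":
    run_practice_test(QUESTIONS)
```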
C. Measures.

after the practice-test review (p = .014 and p = .005, respectively). However, the trivia and the traditional review were not significantly different from one another in students' feelings of preparedness before the exam. Finally, after taking the exam, students reported that the trivia game was more helpful than the traditional review (p = .009), but it was not significantly different from the practice-test review (p > .05). Additionally, the traditional review and the practice-test review were not significantly different from one another (p > .05) in students' reports of helpfulness. See Table 2 for a summary of the analyses for each individual attitude item.

Table 1. Analysis of Attitude Items Immediately after the Exam Review.

              Trivia           Traditional       Practice-test
              M (SD)           M (SD)            M (SD)             F        p        η²p
Confidence    4.64 (1.11) b    3.98 (1.03) ac    4.74 (0.87) b      9.82     < .001
Prepared      4.11 (1.09) c    3.72 (1.17) c     4.60 (0.97) ab     8.86     < .001
Helpful       5.39 (1.11) b    3.89 (1.40) ac    5.28 (1.26) b      20.74    < .001

Note: Superscripts indicate a significant difference from another review: a = differs from trivia, b = differs from traditional, c = differs from practice-test.

Table 2. Analysis of Attitude Items after the Exam.

              Trivia           Traditional       Practice-test
              M (SD)           M (SD)            M (SD)             F        p        η²p
Confidence    5.56 (1.19) c    5.24 (1.08)       4.79 (0.98) a      5.28     .007
Prepared      5.38 (1.16) c    5.41 (1.17) c     4.65 (1.36) ab     5.52     .006
Helpful       5.22 (1.38) b    4.54 (1.22) a     4.70 (1.55)        2.97     .057     .08

Note: Superscripts indicate a significant difference from another review: a = differs from trivia, b = differs from traditional, c = differs from practice-test.
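For readers who want to reproduce this kind of analysis with their own class data, the following is a minimal sketch, not the authors' analysis code: it assumes a hypothetical long-format data set with columns student, review, and one column per attitude item, and approximates the pairwise contrasts reported above using paired t-tests with a Bonferroni correction.

```python
# Minimal sketch of pairwise comparisons between review types on one attitude
# item. The column names ("student", "review", and the attitude item passed as
# dv) are assumed for illustration; the paper itself reports one-way
# repeated-measures ANOVAs with follow-up comparisons and partial eta-squared.
from itertools import combinations

import pandas as pd
from scipy import stats

def pairwise_review_comparisons(df: pd.DataFrame, dv: str) -> pd.DataFrame:
    """Paired t-tests between review types on one item, Bonferroni-corrected."""
    # Reshape so each row is a student and each column a review condition.
    wide = df.pivot(index="student", columns="review", values=dv).dropna()
    pairs = list(combinations(wide.columns, 2))
    rows = []
    for a, b in pairs:
        t, p = stats.ttest_rel(wide[a], wide[b])
        rows.append({
            "comparison": f"{a} vs {b}",
            "t": round(float(t), 2),
            "p_bonferroni": min(float(p) * len(pairs), 1.0),  # simple correction
        })
    return pd.DataFrame(rows)

# Example usage with a toy long-format table (invented values):
# df = pd.DataFrame({
#     "student": [1, 1, 1, 2, 2, 2],
#     "review": ["trivia", "traditional", "practice"] * 2,
#     "confidence": [5, 4, 5, 6, 4, 4],
# })
# print(pairwise_review_comparisons(df, "confidence"))
```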
IV. Discussion.

The current study is the first known to directly compare the effectiveness of traditional, trivia, and practice-test reviews on actual exam scores. Findings indicate that exam scores were significantly higher following both a traditional and a trivia review than following the practice-test review. This suggests that, compared to practice-test reviews, traditional and trivia exam reviews more effectively increase students' exam scores. These findings support and extend past literature indicating that not only do students enjoy trivia games as an exam review, but that students who actively participate in trivia games tend to show increased exam scores (e.g., Keck, 2000; Paul, Hollis, & Messina, 2006), especially when compared to no review at all. Our findings also support the notion that practice tests have very little effect on exam scores (e.g., Kalat, 1983), although it is important to note that practice test reviews are better than no review at all (Balch, 1998).

Additionally, the current study examined students' attitudes toward each of the exam reviews. Based on prior literature in which students' attitudes were measured, we expected that students' attitudes would be most favorable toward the trivia and practice test reviews, and least favorable toward the traditional review (King, 2010; Paul et al., 2006). That is, we expected students to rate the traditional exam review as least helpful, and to report feeling less confident and less prepared than after the practice-test or trivia reviews. Our findings partially supported this hypothesis. Results indicated that prior to taking the exam, all three attitude measures (confidence, feeling prepared, and helpfulness of the review) were lowest after the traditional review, in spite of the fact that the traditional review was more helpful (in terms of actual exam scores) than the practice test. However, after taking the exam, students' attitudes shifted such that they were least favorable toward the practice-test-based review. Perhaps students are aware of which types of reviews are actually the most helpful to them, but only after taking the actual exam and not immediately after the review itself. Future studies should investigate whether students are aware of their learning and studying styles, and whether this corresponds with the types of reviews they prefer.

At both points, immediately after the review and after taking the exam, students' ratings of the trivia game were the most favorable of any of the review styles. Importantly, students' scores on the actual exam were also highest after the trivia review. There are several possible reasons for this finding. Perhaps trivia game reviews are more appreciated by students because, like practice test reviews, they give students a 'sneak peek' at test wording while also explicitly pointing out areas where more studying is needed (King, 2010). Or, perhaps because students find the trivia games more fun, they feel more at ease and less anxious (Middlecamp, 2003). This lack of anxiety may help them to study more effectively and increase their chances of success. However, measuring students' fun and anxiety toward the review was beyond the scope of this study.

Our findings also suggest that a small disconnect exists between the perceived effectiveness and the actual effectiveness of the traditional exam review. That is, students did not seem to realize how helpful the traditional exam review was. Immediately following a review, students felt that the traditional review was the least helpful, yet their actual scores were higher after the traditional review than after the practice-test review. Moreover, even after taking the actual exam, students still did not perceive the effectiveness of the traditional review. It has been noted that the success of traditional exam reviews is contingent upon students attending the session prepared with questions for the instructor, which implies some amount of studying before the review session (King, 2010; Paul, Hollis, & Messina, 2006). It is possible that students perceive that the traditional review session is supposed to guide their studying or even replace their studying altogether. Thus, a lack of preparedness and student attitudes may lead to the perceived ineffectiveness of traditional reviews. Another possibility is that students who feel the least prepared following the review session put additional effort into studying for the exam, which would explain the disconnect between feelings of preparedness and actual exam scores. Past studies (e.g., King, 2010) have suggested that students may find traditional exam reviews to be stuffy, awkward, or just plain boring. As part of this study, although not part of our explicit analysis, students were asked open-ended questions regarding what they liked and disliked about each exam review.
After the traditional exam review, a theme emerged in students' comments about what they disliked: the review was boring, not all of the material seemed to be covered, and the session felt disorganized. Because students generate their own questions for a traditional exam review and take turns asking them, the material raised in one question is not necessarily connected to the question asked before it. This may inhibit proper note taking and make the material more confusing. As a result, students feel less prepared and confident, and ultimately underestimate the helpfulness of the review session. Perhaps if instructors structured traditional exam reviews so that students' questions were grouped by chapter or concept, this might ease students' concerns.
References

Balch, W.R. (1998). Practice versus review exams and final exam performance. Teaching of Psychology, 25, 181-184.

Bloom, B.S., Engelhart, M.D., Furst, F.J., Hill, W.H., and Krathwohl, D.R. (1956). Taxonomy of educational objectives: Cognitive domain. New York: McKay.

Drouin, M.A. (2010). Group-based formative summative assessment related to improved student performance and satisfaction. Teaching of Psychology, 37, 114-118.

Kalat, J.W. (1983). Effect of early practice test on later performance in introductory psychology. Teaching of Psychology, 10, 53.

Keck, M.V. (2000). A final exam review activity based on the Jeopardy format. Journal of Chemical Education, 4, 483.

King, D. (2010). Redesigning the preexam review session. Journal of College Science Teaching, 40, 88-96.

Middlecamp, M.K. (2003). Uncover the disorder: A review activity for abnormal psychology courses. Teaching of Psychology, 30, 62-63.

Paul, S.T., Hollis, A.M., and Messina, J.A. (2006). A technology classroom review tool for general psychology. Teaching of Psychology, 33, 276-279.

Pemberton, J.R., Borrego, J., and Cohen, L.M. (2006). Using interactive computer technology to enhance learning. Teaching of Psychology, 33, 145-147.