Peer Assessment: A Complementary Tool to Promote Students' Autonomy

Javiera Alfaro Alpízar

Universidad Estatal a Distancia, Costa Rica

José Fabián Elizondo González

Escuela de Lenguas Modernas

Universidad de Costa Rica

Netzi Valdelomar Miranda

Escuela de Lenguas Modernas

Universidad de Costa Rica

Abstract

This research study explores the implementation of peer assessment in an impromptu group discussion set in an English as a Foreign Language (EFL) class of 20 students. Peer assessment is incorporated into this class as a way to enrich the students’ own learning process and promote autonomy, based on the premise that, in conjunction with traditional testing, peer assessment helps students develop a better understanding of the subject matter, their strengths and weaknesses, and their learning process in general (Crisp, Sambell, McDowell & Sambell as cited in Thomas, Martin & Pleasants, 2011). The results of the study showed that the students took a proactive role in completing the checklists conscientiously and writing comments on their peers’ performances, focusing mainly on delivery, pronunciation, grammar, and vocabulary. Careful preparation of the activity and appropriate guidance for the students were key to obtaining the desired results: students’ autonomy, self-confidence, cooperation, and motivation (Brown, 2004). Overall, the results obtained not only show that peer assessment can promote autonomy and cooperation among students but also have practical implications for instructors, as we learned that we can improve the instruments to generate more insightful learning experiences through peer assessment in future activities.

Keywords: alternative assessment, peer assessment, group discussions, EFL, public speaking

Resumen

Este estudio explora la implementación de la evaluación de pares tras un debate grupal e improvisado que tuvo lugar en una clase de 20 estudiantes de inglés como lengua extranjera. La evaluación de pares se incorpora en esta clase para enriquecer el proceso de aprendizaje del estudiante y promover su autonomía. Aunada a la evaluación tradicional, la evaluación de pares les ayuda a los estudiantes a desarrollar un mejor entendimiento de la materia, de sus fortalezas y debilidades, y de su propio proceso de aprendizaje (Crisp, Sambell, McDowell y Sambell en Thomas, Martin y Pleasants, 2011). Los resultados demostraron que los estudiantes tuvieron un papel proactivo a la hora de completar concienzudamente las listas de cotejo, así como al escribir comentarios acerca del desempeño de sus compañeros en temas como la forma de dirigirse al público, la pronunciación, la gramática y el vocabulario. La preparación meticulosa de la actividad y la guía apropiada de los estudiantes fueron elementos clave para obtener los resultados deseados: la autonomía en los estudiantes, confianza en ellos mismos, cooperación y motivación (Brown, 2004). En general, los resultados obtenidos demuestran que la evaluación de pares puede promover la autonomía y la cooperación entre estudiantes; asimismo, tiene implicaciones prácticas para los instructores, pues aprendimos durante el proceso la posibilidad de crear instrumentos generadores de experiencias de aprendizaje mucho más esclarecedoras.

Palabras clave: evaluación alternativa, evaluación de pares, discusiones grupales, inglés como lengua extranjera, oratoria

Modern educational settings incorporate peer assessment in an attempt to encourage collaborative and cooperative learning in which students take responsibility for their own learning (Boud, Cohen, & Sampson, 1999). Brown (as cited in Salehi & Daryabar, 2014) defines peer assessment as "any items wherein students are asked to rate each other’s knowledge, skills, or performance" (p. 291). One type of peer assessment is the direct assessment of performance, in which students are asked to monitor a partner’s oral or written production and fill out a checklist that rates the performance on a specific scale (Brown, 2004). This paper examines the application of peer assessment in an EFL class to evaluate an impromptu group discussion. We aim to determine the effects of peer assessment on promoting autonomy and cooperation among the students.

Literature Review

The present review of the literature provides insight into the nature of alternative assessment, including its origin, advantages, and disadvantages, with a focus on peer assessment. Peer assessment is presented as a complementary device to be used in the classroom, one that has its own pros and cons and that meets specific evaluation principles which must be considered by the teacher prior to its implementation.

Alternative Assessment

Brief History. Alternative assessment emerged in the early 1990s in opposition to the idea that everything, including what students know and do in a classroom, could be measured using traditional methods of testing. In the field of EFL, this innovative concept began to gain ground as both teachers and students started realizing that the full range of student outcomes in the process of learning a language was not being adequately assessed with the traditional, standardized testing criteria commonly used in this area.

Even though there is no single definition of alternative assessment that fits all views, some authors have tried to describe it. Hancock (as cited in Coombe, Purmensky, & Davidson, 2012) explains that alternative assessment is a continuous process in which students and instructors use novel strategies to judge students’ progress. For Hamayan, alternative assessment refers to the use of techniques that can be part of the instructional context and used in everyday classroom activities (as cited in Coombe, Purmensky, & Davidson, 2012). Brown and Hudson (as cited in Brown, 2004), on the other hand, believe that speaking of alternative assessment might be counterproductive because it can be perceived as something new that does not have to comply with the requisites for constructing appropriate tests. Therefore, they propose the term “alternatives in assessment,” which considers traditional tests one type of assessment, but not the only one. Along the same lines, Coombe, Purmensky, and Davidson (2012) emphasize that alternative assessment is not really an “alternative” to traditional types of assessment but a complement, and that both should be used in conjunction in order to obtain a more comprehensive assessment method that considers different learning styles, language proficiencies, and backgrounds.

Advantages and disadvantages. In general terms, it can be said that the purpose of alternative assessment is to observe students as they complete authentic tasks, gather data during the process, and evaluate their production. This way of assessing emphasizes students’ growth over time and their strengths, as it integrates learning, teaching, and evaluation (Bailey, O’Malley, Stiggins, & Tannenbaum, as cited in Coombe, Purmensky, & Davidson, 2012).

Using alternative assessment in the EFL classroom can benefit both teachers and students in many ways. For instance, it relieves the teacher of the burden of having to do the entire job, since it requires and encourages a more student-centered classroom. The role of the student changes from being a mere receptor of information to being an active builder of their own knowledge, which helps develop students’ self-esteem and autonomy. Alternative methods of assessment can also be motivating for learners because they encourage cooperation over competition: students work together toward common goals that help them build and improve their language skills. Furthermore, assessing students on a daily or weekly basis allows instructors and students to identify strengths and weaknesses and to work on the weak points before it is too late to take action.

Notwithstanding the multiple advantages of alternative assessment, in order to implement it successfully in the classroom, teachers need to be careful and treat it in the same way they would treat any traditional test. This means that instructors have to set very clear goals and objectives for every task, make sure that the students understand them, and guide them through the process. Additionally, teachers should design reliable evaluation forms, rubrics, or checklists that allow them to give students timely and pertinent feedback. Some might be tempted to think that, because students are doing much of the work by themselves, teachers do not need to do much; however, instructors need time to design those checklists, fill them in, assign a grade, and finally analyze the information gathered. Consequently, implementing alternative assessment is not an easy task; on the contrary, it can be very time-consuming. Nevertheless, the advantages of personal-response assessment are worth the effort of putting into practice a method that has proven to be beneficial for students in the EFL classroom (Brown & Hudson, 1998).

There are several types of alternatives in assessment which can be used by teachers and students in the EFL classroom. Portfolios, journals, conferences, interviews, observations, and self and peer assessment are the most common ones. The type chosen depends on the objectives and purposes of the course, and in many cases, more than one type is used at the same time to assess the same task. For example, a portfolio can be assessed by the teacher, by the student who created it, and by other classmates, encouraging in this way self-reflection and cooperative learning.

Peer assessment. Peer assessment has been defined in numerous ways by different authors (Robert, Strijbos, & Sluijsmans, and Topping, as cited in Karami & Rezaei, 2015), but all of them agree that it involves a judgment of peers’ performances or products. For the purposes of this study, peer assessment occurs when “students use criteria and apply standards to the work of their peers in order to judge that work” (Falchikov, 2005, p. 27).

According to Brown (2004), peer assessment, together with self-assessment, is “among the best possible formative types of assessment and possibly the most rewarding” (p. 276). It helps students become directly involved in the process of language acquisition because, as they collaborate with peers, they also monitor their own performances. Thus, it promotes autonomy, self-confidence, cooperation, and responsibility. Peer assessment can also be more motivating for students than teacher assessment because it helps them develop a better understanding of the subject matter, their strengths and weaknesses, and their learning process in general, which can lead to greater cognitive gains (Crisp, Sambell, McDowell, & Sambell, as cited in Thomas, Martin, & Pleasants, 2011). For Topping (2009), the most significant quality of peer assessment is that it is plentiful, meaning that students can obtain more feedback on their performances because they generally have only one instructor but several peers.

In spite of all the advantages that peer assessment may have, some instructors are cautious about using it because it needs careful planning and time to implement. If it is not well designed and guided, its effectiveness can be greatly reduced. For instance, students who are not well trained can be biased and assess their peers’ performances very subjectively (either too harshly or too flatteringly). Others, feeling unprepared for the task, might tend to award everyone similar marks. One way to overcome these obstacles is to train students by providing them with multiple opportunities to engage in peer assessment from the early stages of the language acquisition process or, in a classroom context, from the first day of class. Another way is to guide them by clearly stating the purpose of the assessment and the objective of the task to be assessed. Teachers can also encourage impartial evaluation, which will ensure positive washback (Brown, 2004).

Brown (2004) describes five types of self- and peer-assessment: direct assessment of performance, indirect assessment of performance, metacognitive assessment, assessment of socio-affective factors, and student self-generated tests. The type used in the present project was direct assessment of performance, which involves observing an oral task and rendering an evaluation of the performance soon after it takes place. The evaluation is done using a checklist that guides the students to observe specific aspects of the delivery.

Principles met in peer assessment. Peer assessment can be implemented in a variety of tasks within each of the four skills. In the specific case of speaking tasks, it can be used to rate the effectiveness of others’ communication and to detect errors in grammar or pronunciation. This can be done by means of holistic checklists provided by the teacher, with space for additional annotations. Peer assessment generally scores high in content validity when there is a correspondence between the curricular objectives and those being assessed. Consequential validity also reaches a high level, since the results obtained are used to enrich the teaching practice and the learning process, which in turn benefits the students. Authenticity and washback also score high due to the usefulness of the feedback and the emphasis on the students’ linguistic needs. Practicality reaches a moderate level as a result of procedures such as filling in checklists. Reliability, on the other hand, reaches a low level because of possible inconsistencies in rating, subjectivity, and lack of consensus, which in turn may also affect face validity.

Times have changed, and so have many ideas, concepts, and notions in education; assessment is no exception. If teachers aim to maximize students’ performance and help them take control of their own learning process, they cannot continue to rely exclusively on traditional methods that do not promote autonomy, cooperation, and growth over time. Language learning is a process, and it has to be approached and assessed as such.

Methodology

Setting and Participants. This research study was conducted with a group of 20 English majors enrolled in the Oral Communication and Pronunciation Techniques I (LM-1351) course at the University of Costa Rica. The class met on Monday, Wednesday, and Friday mornings from 7 a.m. to 9 a.m.

In this course, students develop the public speaking skills necessary to carry out prepared as well as impromptu speeches and group discussions on regional and world issues studied in class. Class time is divided into the theory and practice of public speaking, pronunciation techniques, and the discussion of topics such as violence, world and regional conflicts, violation of human rights, resource exploitation, and social responsibility. The class dynamics are intended to provide the students with the input and practice necessary to give informative speeches and engage successfully in group discussions.

The impromptu group discussion that the students had to carry out in this course was complemented with a peer evaluation, and it is the implementation of this peer evaluation that we analyze in the present study. The peer assessment was carried out during the eleventh week of the semester (out of seventeen weeks). This assessment was intended to expose students to the group discussion format. Before the activity was carried out, they had analyzed group discussion videos from previous years and had read about the many “do’s and don’ts” of group discussions, but they had never experienced a group discussion firsthand. Consequently, this was a great opportunity to put into practice everything they had learned up to that point. The activity also aimed to make students aware of their capacity to provide meaningful feedback to their classmates on group discussions and to make them realize that they can play an important role in their classmates’ learning process by offering insightful and relevant advice.

Both the implementation of the impromptu group discussion and the peer evaluation, which are complementary, sought to assess the following course objectives: (a) present informative speeches and group discussions using proper pronunciation of segmentals (vowels and consonants, with a special emphasis on accurate pronunciation of consonants) and suprasegmentals (word and sentence stress), (b) be effective interlocutors by becoming active participants and attentive listeners, and (c) evaluate the students’ own work and that of their classmates.

Instruments. There were two peer evaluation forms: one to evaluate the group leaders and another to evaluate the rest of the group members (see Appendices A and B). Both versions consisted of a checklist with eight questions that evaluated specific aspects based on the student’s role (delivery, participation, and leadership) and a section for additional comments. The criteria in these forms had been previously studied in class and evaluated in a quiz, which means that the students were familiar with them.

The Peer Evaluation form for leaders (see Appendix A) assessed their ability to introduce the participants, state the problem under discussion, encourage all group members to participate, provide transitions and summaries between each step, bring the discussion to a close, thank the participants, introduce the Q&A session, and follow all the steps of a group discussion. All questions except one (Did s/he skip steps?) were meant to be answered with a yes. The question that required a no was deliberately stated in negative terms so as to demand conscious attention from the students.

The Peer Evaluation form for group members (see Appendix B) assessed the students’ body language, the number and quality of their contributions, and their attentiveness to others’ contributions, as well as their capacity to acknowledge and respect their peers’ opinions, refer to all participants by name, let others participate, and address the topics assertively. Along the same lines, two of the questions (Did s/he monopolize the discussion? and Did s/he go off on a tangent?) were written in negative terms with the same purpose: to make the students pay conscious attention to their answers.
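To illustrate how the two eight-item checklists work, the following minimal sketch (not part of the original study) represents the leader form as a simple data structure and tallies one evaluator’s YES/NO answers, treating the negatively worded item as favorable when the answer is no. The function and variable names are hypothetical, introduced here only for illustration; the question wording is taken from Appendix A.

```python
# Illustrative sketch only: the study used paper checklists (Appendices A and B).
# It shows one way the same YES/NO items could be represented and tallied
# if the responses were collected digitally.

LEADER_FORM = [
    # (question, expected/favorable answer)
    ("Did the leader introduce the participants?", "yes"),
    ("Did s/he state the problem to be discussed?", "yes"),
    ("Did s/he skip steps?", "no"),  # negatively worded item: a good leader skips no steps
    ("Did s/he encourage all group members to participate?", "yes"),
    ("Did s/he provide transitions & summaries between each step?", "yes"),
    ("Did s/he bring the discussion to a close?", "yes"),
    ("Did s/he thank the participants?", "yes"),
    ("Did s/he introduce the Q&A session?", "yes"),
]

def tally(responses, form=LEADER_FORM):
    """Count how many answers match the favorable answer for each item."""
    return sum(1 for (_, expected), answer in zip(form, responses)
               if answer == expected)

# An evaluator who marks "yes" on every item gets 7 favorable answers out of 8,
# because item 3 is negatively worded and its favorable answer is "no".
print(tally(["yes"] * 8))  # -> 7
```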

Procedures. In order to collect the data for this study, several steps were taken. First, the students were guided to inform themselves about the topics for the group discussion; then they were led to carry out a group discussion; and finally they were instructed to complete a peer evaluation of their classmates’ performances. The lesson plan is described in more detail below.

On Monday, four days before the activity took place, the students were given the following instruction: they were to read the news articles “The lone seven-year-olds leaving home and country behind” and “Why are Eritreans leaving home?” (included in their course reader) as homework for the next class. On Wednesday, the students were divided at random into three groups of six students each (18 students in total); although the class was composed of 20 students, only 18 attended class that day. Each group was assigned a number, which corresponded to the prompt it was required to answer in the group discussion format studied in previous classes. Once the students were seated in their groups, these were the instructions:

  1. You are to answer your prompt using the group discussion format.

    Prompt for group 1

    Why do frontiers exist? Can we live in a frontierless world? (Causes and solutions)

    Prompt for group 2

    Refugees and their new lives. Can they really adapt to a new culture? Is it convenient for a country to have refugees? (Consequences and solutions)

    Prompt for group 3

    What's happening around the world that people are migrating at accelerating rates in the last years? Are there any consequences when people try to migrate? (Causes and consequences)

  2. You need a leader and participants. If you are the leader, you need to come up with an attention-getter, an introduction, evidence that the problem exists, transitions, summaries, and conclusions. If you are a group member, you need to develop solid, academic answers in your two interventions. Each intervention should last from 1 minute and 30 seconds to 2 minutes. Remember that each participant must ‘play’ a specific role in the discussion (journalist, sociologist, psychologist, etc.). You may all use both readings as the main source of evidence for your arguments.
  3. As a group, you have 20 minutes to get ready. Once the time is up, you will start presenting.

The teacher gave them five extra minutes to get ready. When the groups were ready, the teacher videotaped all of the group discussions.

On Friday, the teacher asked the students to evaluate the three impromptu group discussions held on Wednesday using the peer evaluation forms. On this day, all 20 students attended class and therefore participated in the peer assessment activity. The teacher explained that they would receive two different forms: one for the leaders and one for the other group members. Their task was to watch the recording of each group discussion and check the corresponding boxes (YES/NO) based on each student’s performance. The students were also encouraged to write down any comments they considered necessary. At the end of every video, the students were given one extra minute to write down additional comments. Finally, the teacher collected all the forms.

Parallel to the students’ work, the teacher sorted all of the feedback forms received according to the student being evaluated. He used paper clips to group the forms and kept them on his desk. Once the students had finished with the last video, the teacher gave them back their feedback forms and told them to read all of the comments their classmates had written for them.

Some days later, the teacher requested the students’ permission to use their peer evaluation forms in a research study, but only nine of them accepted. Therefore, in this study, we analyze the feedback forms of these nine students exclusively (a total of 180 forms). The procedure used to select this sample can be categorized as convenience sampling: the nine participants voluntarily agreed to give their forms back to the instructor and were promised that the results would remain anonymous. Because of this sampling, we analyze the feedback forms of two leaders and seven group members from the three groups mentioned above. The distribution of participants in this study is the following:

Group 1: Students A (leader), B and C

Group 2: Students D, E, F and G

Group 3: Students H and I (leader)

Results and Discussion

The analysis was done with the information gathered through a class observation of the teacher’s implementation of the activity and with the data yielded by the nine participants’ peer assessment forms, as previously explained.

Analysis of the Class Observation. While the students were assessing their classmates, the instructor observed their response to the activity. The following scenario was observed: when the students were asked to hand in the feedback forms after each video ended, they requested more time to write down their comments. The instructor noticed that, in most cases, the students did not write their comments while the speakers were presenting but waited until each participation was over to provide feedback. We could say, then, that the students might have provided more feedback had they been given more time to elaborate on their answers. This time constraint may have limited the students who were waiting for that moment to make their observations. On the other hand, if the instructor had asked them to take notes as they were watching the video, they probably would have given more comments on areas such as pronunciation and vocabulary.

Analysis of the Peer-Assessment Forms. That said, the students put real effort into providing comments when a participant’s performance showed major areas for improvement. Interestingly, when the performance was “okay” in general terms, they provided fewer comments and pieces of advice, or none at all.

In some cases, the students used general and vague words to assess the participants’ performance. Some of them included comments such as: “Great,” “Good,” “Not sure,” “More or less,” “50-50,” “25-75,” “So, So,” “Sort of,” “It wasn’t needed,” and “Not the whole time” (see Appendix C). Many students also used dashes to indicate that the element they were supposed to observe and assess was not relevant or necessary for that specific participant, and a few of them simply left some boxes empty, probably for the same reason. It is worth pointing out possible reasons why nearly half of the forms (88 of the 180) did not contain any additional comments. One factor that certainly influenced this outcome was that writing comments was not mandatory. We presume that some students provided fewer comments, or none at all, when they considered that a participant’s performance was “OK.” Another possible reason is that the students did not have the knowledge to correct or advise their peers. One last possible reason is that the students may have grown tired after watching three 20-minute-long videos in a row and assessing eighteen students all at once.

We will now analyze the 153 comments found in the 92 forms (see Appendix C for specific examples). As shown in Table 1, students gave more importance to some elements than to others when assessing their classmates’ performance. Delivery and Pronunciation were the aspects assessed most frequently. One third of the total comments (50) addressed errors in body language use. This trend may reflect explicit classroom training in how to use body language in group discussions or speeches in general, training which the students may have received in this or other classes.

Table 1

Type of Comments Found in Students’ Feedback Forms

Element analyzed                                                          Number of comments

Pronunciation and fluency issues: vowels & consonants, rate, fluency,
pauses, fillers, choppiness, intonation, stress, and articulation                        36

Delivery: body language (eye contact, volume of voice, posture,
confidence, enthusiasm), hesitancy, breakdowns                                            50

Content: organization (appropriate introduction, attention getter,
introduction of group members), meaningful contributions, use of
transitions, use of sources, & evidence                                                   24

Positive appraisal of student’s performance                                               17

Use of grammar and vocabulary                                                             25

Around one fourth of the total comments (36) assessed pronunciation, the second most important aspect for the students. The students are believed to have paid particular attention to pronunciation due to the content they had studied in the course, which illustrates how the use of alternative assessment encourages learners to become active builders of their own knowledge (Brown, 2004). In spite of pronunciation being the second most evaluated aspect, it is surprising that the students did not write more comments, considering that it was an Oral Communication and Pronunciation Techniques course. This may have happened for three main reasons: (a) they decided to ignore some pronunciation mistakes, (b) they did not know how to correct their peers, or (c) they were simply more focused on delivery.

As mentioned above, delivery was by far the most frequently observed and assessed trait. Most of the comments were related to body language, specifically body movement, eye contact, and posture. Feedback on pronunciation and fluency issues takes second place; among the most common corrections were consonant and vowel sounds such as [ə], [θ], [ð], and [v], and the inflectional -ed ending, among others. In third place are comments on content and on the use of grammar and vocabulary. Regarding content, the students seemed to pay special attention to the use of evidence to support arguments and to directness of speech. In regard to grammar and vocabulary, the most recurrent corrections were related to subject-verb agreement, and a few concerned lexical phrases, collocations, and phrasal verbs. Although some students explicitly praised their classmates’ performance in some areas, this kind of comment was the least common. We may speculate that most of the students recorded good performance through the checklist alone.

Another important result in our study is that the students provided far more comments when assessing the leaders’ performances (Students A and I, from groups 1 and 3 respectively). Student A, the leader of the first group analyzed, obtained comments in 14 forms, which accounts for 70%. Student B, on the other hand, obtained comments in 10 forms (50%), and Student C obtained comments in 7 forms (35%). When analyzing Student I’s feedback forms (the leader of the third group), we find comments in 12 forms (60%). Student H received comments in 10 forms (50%), and, similarly, Student G received comments in 8 forms, accounting for only 40%.
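As a quick illustration of the arithmetic behind these percentages, recall that each of the nine participants was evaluated by all 20 classmates, so each figure is simply the number of forms containing comments divided by 20. The short sketch below (not part of the original study) reproduces the values from the counts reported above.

```python
# Forms containing comments per student, as reported in the text above.
forms_with_comments = {"A": 14, "B": 10, "C": 7, "I": 12, "H": 10, "G": 8}
TOTAL_EVALUATORS = 20  # every participant was assessed by all 20 classmates

for student, n in forms_with_comments.items():
    share = n / TOTAL_EVALUATORS
    print(f"Student {student}: {n}/{TOTAL_EVALUATORS} forms = {share:.0%}")
    # e.g. Student A: 14/20 forms = 70%
```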

As can be seen, the two group leaders assessed received more feedback than their peers. Although we could not study the assessment of the leader of the second group, we believe that this student probably obtained more comments than the other group members, thus following the same pattern as the other groups. This pattern suggests that the students paid special attention to the leaders’ performances, perhaps because they were asked to fill out a separate form for the leaders, which probably led them to give the leaders special importance.

Regarding the individual performances, two cases caught our attention. Since students were required to assess every student’s performance, at some point they had to evaluate themselves. Only two students provided comments on their own performance using the personal pronoun “I.” The other students wrote comments without using the personal pronoun “I,” which makes it impossible to identify whether or not they made any remarks on their own presentations.

Based on the individual analyses, we can also see that the students were able to identify different areas of improvement in every single student, and this is reflected in the kind of comments they provided. In the first group, Student A, the leader, obtained 7 comments on Delivery, which accounts for 30%, and most of the observations this student received were positive appraisals (35%). On the contrary, Student B did not receive any positive appraisal, and Student C received only one positive comment out of 11 (9%). Student B received the most observations on Delivery (6 out of 16, which stands for 37%) and 8 comments on Pronunciation and Fluency and on Grammar and Vocabulary, accounting for 25% in each of those areas. Lastly, Student C obtained the most notes on Pronunciation and Fluency (6 out of 11), accounting for 54%.

In the second group, Student D received important feedback in two areas, Delivery (8 comments) and Grammar and Vocabulary (5 comments), for a total of 61%. With a higher percentage, Student E received 13 comments on Delivery and 10 comments on Pronunciation and on Grammar and Vocabulary, which represents 85% of all the feedback given. Student F received one comment in each of the areas, except in Grammar and Vocabulary, where s/he received two comments, representing 33%.

In the third group, Student I, the leader, received 13 comments out of 24 on Content, which means 54% of all the comments s/he received. Student G, for his/her part, received 9 comments out of 11 on Pronunciation and Delivery (82% of all the comments s/he received), and 69% of the comments (9 out of 13) for Student H were related to Delivery and Content.

All these data show that the students were able to pinpoint their classmates’ areas of improvement when dealing with a real impromptu group discussion and to provide the recommendations they considered appropriate based on what they had studied in class, especially in Delivery and Pronunciation. However, we could not identify a specific pattern in the comments provided that explains why some students focused on certain areas while others considered other areas more important. We might infer that the students assessed these areas either because they thought the assessed peer needed to work on them or because they themselves felt they needed to improve in those areas.

Conclusions

The alternative assessment implemented in this study successfully met the proposed objectives. The results previously presented show that all of the students completed the checklists and that half of them wrote additional comments. This result is satisfactory since the students were required to write comments only when they considered it necessary. The students’ proactive participation in the peer assessment shows the benefits that this type of assessment provides, as it promotes autonomy, self-confidence, cooperation, and motivation (Brown, 2004).

The implementation of this alternative assessment method was, in general, well prepared and well guided. In order to complete the checklist successfully, the students had to read the questions carefully because most questions required a Yes answer, but two of them required a No (e.g., Did s/he go off on a tangent?). This means that if a participant stayed on topic and developed the content successfully, the evaluator had to check No for that question. This type of question forced the students to read actively and complete the form conscientiously. In general, the instructions given, the class time devoted to the activity, and the preparation of the videos show conscientious planning and good guidance throughout the task.

The evaluation principles of reliability and washback scored high in this particular activity. In all the participants’ forms, the comments given by their classmates complement the answers marked in the checklist; consequently, the results can be considered reliable, as multiple raters arrived at the same conclusions. The activity also promotes positive washback since the participants receive objective, constructive feedback from their peers, which supports learning (Brown, 2004).

Although the focus of this activity was on peer assessment, some self-assessment was also present in practice. Since every student had to fill out a form for every participant, including him or herself, they had the opportunity to self-evaluate. One of the participants used the pronoun “I” to evaluate his/her performance and wrote many comments, all of them about aspects he/she could improve. If the instructor had asked the students to assess themselves explicitly using the pronoun “I,” we would have known how the students perceived their own learning process. This represented a limitation of this research study and could be taken into consideration in further studies.

The use of this peer assessment form made it evident that the students are capable of giving trustworthy feedback in different areas, not only on Content and Delivery, as was intended and expected in most cases. Therefore, we believe that using a more complete rubric that explicitly includes aspects such as Content, Delivery, Pronunciation, Grammar, and Vocabulary could enhance our teaching performance in the future. Also, other types of scales (e.g., Likert scales) could be used in future checklists so as to broaden the answer choices.

For us, the significance of this study lies in the fact that it showed that the use of alternatives in assessment promotes autonomy and cooperation. Moreover, it has practical implications for us as instructors, since we could observe how students are able to build their own knowledge and how, little by little, we can improve the instruments we use to collect data. In spite of the positive results yielded by the implementation of peer assessment in this course, it cannot be denied that using any type of alternative assessment implies a great deal of extra work for the instructor. Any teacher willing to use this type of assessment has to keep in mind that s/he needs to design his/her own instruments, guide students throughout the process, devote in-class time to the implementation of the task, and finally analyze all the data obtained. Nevertheless, the advantages of using alternative assessment might well compensate for the effort it requires.

Bibliography

Boud, D., Cohen, R., & Sampson, J. (1999). Peer learning and assessment. Assessment and Evaluation in Higher Education, 24, 413-426. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.203.1370&rep=rep1&type=pdf

Brown, H. D. (2004). Language assessment: Principles and classroom practices. White Plains, NY: Pearson Education.

Brown, J. D., & Hudson, T. (1998). The alternatives in language assessment. TESOL Quarterly, 32(4), 664. doi: 10.2307/3587999

Coombe, C., Purmensky, K., & Davidson, P. (2012). The Cambridge guide to second language assessment. Cambridge: Cambridge University Press.

Falchikov, N. (2005). Improving Assessment through Student Involvement. New York: Routledge Falmer.

Karami, A., & Rezaei, A. (2015). An overview of peer-assessment: The benefits and importance. Journal for the Study of English Linguistics, 3(1), 93-100. Retrieved from http://dx.doi.org/10.5296/jsel.v3i1.7889

O’Malley, J. M., & Valdez Pierce, L. (1996). Authentic Assessment for English Learners. New York: Addison Wesley Publishing Company.

Salehi, M., & Daryabar, B. (2014). Self- and peer assessment of oral presentations: Investigating correlations and attitudes. English for Specific Purposes World, 42, 15. Retrieved from www.esp-world.info

Thomas, G., Martin, D., & Pleasants, K. (2011). Using self- and peer-assessment to enhance students’ future-learning in higher education. Journal of University Teaching & Learning Practice, 8(1). Retrieved from http://ro.uow.edu.au/jutlp/vol8/iss1/5

Topping, K. (2009). Peer assessment. Theory Into Practice, 48(1), 20-27. Retrieved from http://www.jstor.org/stable/40071572

Appendix A

Peer Evaluation for Group Leaders

LM-1351 Communication and Pronunciation Techniques

Peer Evaluation (Group discussions: leader)

Student’s name: ______________________________

1. Did the leader introduce the participants?  YES [ ]  NO [ ]
2. Did s/he state the problem to be discussed?  YES [ ]  NO [ ]
3. Did s/he skip steps?  YES [ ]  NO [ ]
4. Did s/he encourage all group members to participate?  YES [ ]  NO [ ]
5. Did s/he provide transitions & summaries between each step?  YES [ ]  NO [ ]
6. Did s/he bring the discussion to a close?  YES [ ]  NO [ ]
7. Did s/he thank the participants?  YES [ ]  NO [ ]
8. Did s/he introduce the Q&A session?  YES [ ]  NO [ ]

If you have any comments for your classmates, please write them on the back of this slip of paper.

Appendix B

Peer Evaluation for Group Members

LM-1351 Communication and Pronunciation Techniques

Peer Evaluation (Group discussions: group member)

Student’s name: ______________________________

1. Was the group member prepared with evidence?  YES [ ]  NO [ ]
2. Did s/he make a sufficient number of contributions?  YES [ ]  NO [ ]
3. Did s/he monopolize the discussion?  YES [ ]  NO [ ]
4. Was s/he open-minded? (acknowledges people's opinions and respects them)  YES [ ]  NO [ ]
5. Did s/he pay close attention to other participants' contributions?  YES [ ]  NO [ ]
6. Did s/he refer to all participants by name?  YES [ ]  NO [ ]
7. Did s/he go off on a tangent?  YES [ ]  NO [ ]
8. Did s/he use body language appropriately?  YES [ ]  NO [ ]

If you have any comments for your classmates, please write them on the back of this slip of paper.

Appendix C

Samples of Students’ Comments in Peer Assessment Forms

Received: 06-07-18. Accepted: 01-10-18.