Recent research in the context of university learning insists on the need to develop students' ability to regulate their own learning processes through active participation in assessment procedures (Boud, 2006; Boud & Associates, 2010; Nicol, 2009). In fact, during the 1980s and, especially, the 1990s, research increasingly broke with the traditional idea of the lecturer as the sole actor in learning assessment. These lines of research demonstrated and advocated the importance of the active participation of students in assessment processes, the reviews carried out by Falchikov (1986; 2005), Dochy, Segers and Sluijmans (1999) and Gielen, Dochy and Onghena (2011) being worthy of mention in this sense.
At the same time, the changes taking place at socio-economic and cultural levels mean that higher education institutions are required to ensure that graduates are capable of planning and monitoring their own learning processes at later stages, thus developing, among other competences, the ability for independent learning and for critical and innovative thinking throughout their lives (Goñi, 2005).
Within this context of innovation and change, university teaching staff face the challenge of involving students in the teaching process through participatory methodologies such as collaborative learning or problem-based learning. It is also necessary for teaching staff to design, plan and develop assessment procedures in which the involvement and active participation of students play a central role, bearing in mind the benefits this participation brings to the development of students' learning processes.
These preliminary considerations make it necessary for teaching staff, students and even the university itself to adopt a new assessment culture, in which assessment becomes part of the learning process itself. As indicated by Ibarra (1999), "authentic" assessment/learning tasks make it possible to build knowledge, pursue disciplined inquiry and transfer that knowledge to other contexts. These authentic tasks are the central axis of the concept of "learning-oriented assessment" proposed by Carless, Joughin and Mok (2006), which takes into consideration the active participation of university students in the assessment process, mainly by means of self-assessment, peer-assessment and co-assessment, together with continuous interaction between teaching staff and students through feedback and feed-forward that allows students to improve their performance. In short, the role of the student is extended to that of assessor: there is a transition from the traditional passive role, in which students are merely the object of assessment, to an active approach, in which the student is an assessing agent.
As stated by Boud (2006), society today demands more than passive graduates who accept a predetermined system of assessment. Rather, the idea is that graduates should be capable of planning and monitoring their learning processes on their own. In this context, the participation of students in the assessment process is considered a learning opportunity which can, in itself, develop competences such as:
To sum up, providing education in assessment makes it possible to establish criteria (and, therefore, priorities), to reflect on the positive and negative aspects of a situation, to evaluate (and compare) the objectives of assessment and, above all, to make reasoned and justified decisions. This can encourage students to pace their own learning and promote independent learning and, from a professional perspective, allow them to adapt more easily to change and to assume responsibilities.
Thus, advances in science and technology, and those made in the field of learning and assessment, oblige university education to consider new strategies in learning assessment that contemplate the active participation of the student. This means that more and more authors are promoting alternatives to traditional assessment that emphasise this participation (Biggs, 2005; Biggs and Tang, 2009; Bordás and Cabrera, 2001; Boud and Associates, 2010; Boud, 2011; Carless, Joughin and Mok, 2006; Falchikov, 2005; Gessa, 2011; Gibbs, 2006; Ibarra Sáiz and Rodríguez Gómez, 2010; Ibarra Sáiz, Rodríguez Gómez and Gómez Ruiz, 2012; Knight, 2005; Ljungman and Silén, 2008; López Pastor, 2009; Padilla Carmona and Gil Flores, 2008; Pérez Pueyo et al., 2008; Rodríguez Gómez and Ibarra Sáiz, 2011; Rodríguez Gómez, Ibarra Sáiz and Gómez Ruiz, 2011).
This research forms part of a broader study aimed at developing assessment procedures and instruments which will facilitate and promote the participation of students in the assessment process. This undertaking would have a beneficial effect on the quality of the education offered by universities, on the development of the academic, professional and human abilities of the professionals who graduate, and on the mutual satisfaction of teaching staff and students with the teaching offered by the university.
As a first step towards achieving this goal the first proposal was to identify and describe the initial conditions of student participation in learning assessment processes at each of the universities taking part in the study, by means of:
As the main objectives of the study were descriptive and evaluative in nature, a "multiple case study" design was used in the research (Rodríguez Gómez, Gil Flores and García Jiménez, 1999: 96). Five cases were used, each corresponding to one of the universities taking part in the project. This paper presents only the data and results corresponding to the specific case of the University of Cadiz, based on a documentary analysis of the subject outlines and a survey of teaching staff and students.
Population and sample of the documentary analysis
A total of 65 official programmes (short cycle, long cycle, second cycle only and master's degrees) were taught at the University of Cadiz during the academic year 2009/2010. For the purposes of this research, the educational outlines of subjects from the five branches of knowledge were analysed. In accordance with the internal regulations of the university (Instruction UCA/I01VPOA/2010 dated 20/12/2009), and in order to coordinate the Education Organisation Plans of Centres and Departments for the following academic year, educational programmes must be approved and published on the institutional website during June of the previous academic year.
A non-probabilistic method was used for the selection of the programmes to be studied. To be precise, quota sampling was used, the criterion being to have a total of 75 programmes available. Table 1 shows the final composition of the sample of documents selected for analysis.
TABLE 1. Composition of the sample of programmes analysed by branches of knowledge
Survey population and samples
According to the academic report for 2009/2010, the teaching staff at the University of Cadiz comprised 1,541 lecturers distributed across 49 university departments. The teaching staff were selected using quota sampling, combined with incidental sampling within each of the five branches of knowledge, according to criteria of ease of access and willingness to participate. A total of 40 teaching staff were surveyed following this selection process (see Table 2).
TABLE 2. Composition of the sample of teaching staff surveyed by branches of knowledge
As regards the student population, according to the report for the academic year 2009/2010, a total of 17,280 students were enrolled. A selection process similar to that used for the teaching staff, combining quota and incidental sampling, was applied. Access to the students was gained through their lecturers, following criteria of ease of access and willingness to participate. The final data-producing sample consisted of 614 students from the different branches of knowledge (see Table 3).
TABLE 3. Composition of the sample of students surveyed
Two types of instruments were designed in accordance with the nature of the information it was intended to obtain:
Expert judges from each of the universities taking part in the project took part in validating the content of the instruments used. In addition, the reliability calculations carried out yielded reliability coefficients (Cronbach's alpha) of between 0.75 and 0.78.
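Reliability coefficients of this kind are computed from an item-response matrix. The following sketch is purely illustrative (the response data shown is hypothetical, not the study's) and applies the standard formula for Cronbach's alpha:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows (one score per item)."""
    k = len(scores[0])                      # number of items

    def variance(values):                   # sample variance (ddof = 1)
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical responses from five respondents to four Likert items (1-6)
responses = [
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 6, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]
alpha = cronbach_alpha(responses)
```

Values near 0.75-0.78, as reported here, are conventionally taken as acceptable internal consistency for attitude scales.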
Scale for the documentary analysis of student participation in the assessment process
The "Scale for the documentary analysis of student participation in the assessment process" was designed for the analysis of subject outlines. This instrument was made up of fifteen items or attributes classified in five dimensions:
A checklist (yes/no) was specified for each of the fifteen attributes to indicate whether or not there was any evidence of them. If affirmative, the degree to which the attribute appeared was evaluated by means of a frequency scale with the following values: 1 (very little), 2 (little), 3 (somewhat), 4 (quite a lot), 5 (a lot) or 6 (totally).
Questionnaire on student participation in their assessment
Two versions of a questionnaire were drawn up to obtain the opinions of students and teaching staff: a) Questionnaire on student participation in their assessment (version for teaching staff); and b) Questionnaire on student participation in their assessment (version for students). Both instruments had a similar format and were structured with preliminary identification questions followed by 21 items referring to the attitudes and beliefs of the teaching staff or students, depending on the version, regarding active student participation in assessment. The structural dimensions of these instruments are:
Depending on the nature of each question, opinions could be expressed on different scales, from a checklist indicating presence or absence (Yes/No) to an evaluation on a Likert-type scale with values between a minimum of 1 and a maximum of 6.
Analysis of the educational programmes
The analysis of the subject outlines focused exclusively on the section corresponding to assessment, in which the teaching staff must specify their assessment criteria and procedures. This analysis served to identify the aspects planned by the teaching staff relating to student participation in the assessment process.
Each programme was analysed using the Scale for the documentary analysis of student participation in the assessment process. The first task was to check whether each of the fifteen attributes considered in the scale was made explicit, thus determining the presence or absence of the information referring to each attribute. Secondly, if any evidence referring to student participation in the assessment process was detected, the degree to which this presence was specified was evaluated.
The information obtained by means of the Scale for the documentary analysis of student participation in the assessment process was mainly quantitative. For this reason a descriptive study, taking into account the frequency and percentage of responses to the different options, was carried out.
In order to make it easier to interpret the data and present the results, the evaluations were grouped into three categories. The first category (not at all) was applied when no evidence of the attribute was found in the programme. Where the planning did show evidence of student participation in assessment, the evaluations were grouped into two categories: "somewhat" for values 1, 2 and 3, and "a lot" for values 4, 5 and 6 (see Table 4).
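This recoding rule can be expressed compactly. The sketch below is illustrative only (the function name is ours, not the study's); it collapses the yes/no checklist and the 1-6 rating into the three reporting categories:

```python
def recode(evidence_present, rating=None):
    """Collapse the yes/no checklist and the 1-6 frequency rating
    into the three categories used for reporting (Table 4)."""
    if not evidence_present:
        return "not at all"          # no evidence of the attribute
    return "somewhat" if rating <= 3 else "a lot"

# Examples: no evidence; evidence rated 2; evidence rated 5
categories = [recode(False), recode(True, 2), recode(True, 5)]
```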
Analysis of the opinions of teaching staff and students
The teaching staff and students could complete the questionnaires either online or in the traditional paper format. All the participants in the teaching staff sample used the online version of the instrument, as it was the quickest and most convenient method. In the case of the students, however, the classroom was considered the most appropriate means of reaching the largest number of participants, so they all completed the survey in person.
With respect to the data analysis, and bearing in mind that the majority of the information obtained was quantitative, a statistical analysis was carried out by means of a descriptive study (frequencies and percentages) and non-parametric contrast tests. The open questions were answered by only a very small number of respondents, so they have been used in the presentation of results only where they add relevant detail to the information provided by the statistical analysis.
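A typical non-parametric contrast for comparing Likert-type ratings between two independent groups is the Mann-Whitney U test. The sketch below is illustrative only: the ratings are hypothetical, and it uses a normal approximation without tie correction, so a statistical package would give slightly different p-values on tied data.

```python
from math import erf, sqrt

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test, normal approximation (no tie correction)."""
    n1, n2 = len(a), len(b)
    # U = number of (a_i, b_j) pairs with a_i > b_j, counting ties as 0.5
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    mu = n1 * n2 / 2
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    return u, p

# Hypothetical Likert ratings (1-6) on one item from each group
staff = [5, 4, 6, 5, 4, 5, 3, 6]
students = [3, 2, 4, 3, 2, 5, 3, 2, 4, 3]
u, p = mann_whitney_u(staff, students)
significant = p <= 0.05  # such differences are flagged with an asterisk in the tables
```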
Student participation in educational programmes
The first results presented are those obtained from the documentary analysis of the educational programmes (see Table 4 and Figure 1). It has been noted that there is very little information in the subject outlines regarding the planning of student participation in assessment. Thus, more than 80% of the programmes contain no information related to items 14, 7 and 5:
Similarly, no evidence whatsoever regarding attributes 2, 3, 6, 8, 10, 11, 12, 13 and 15 appears in more than 50% of the subject outlines.
In almost 20% of the programmes there is evidence that the assessment criteria do favour student participation to some extent (item 4). Likewise, around 15% of the programmes clearly specify the weight of student participation in assessment (item 15), and provide information regarding the evaluation results which enable the students to reflect on the level of their achievements (item 9).
TABLE 4. Degree to which each attribute is present in the subject outlines (%)
FIGURE 1. Degree of presence by attribute (%)
Student participation from the perspective of teaching staff and students
This section presents the results obtained from the surveys of teaching staff and students. They have been grouped according to the dimensions of the study, and the corresponding percentages or average scores are shown in the relevant tables. Where applicable, statistically significant differences (p ≤ 0.05) are indicated by an asterisk (*).
Criteria, design and information and training /education in assessment
This section refers to the results of the first three dimensions of the questionnaire: a) assessment criteria (items 6.1 to 6.5); b) assessment design (items 7.1 to 7.5); and c) information and training/education (items 8, 9, 10, 11 and 12).
Figure 2 shows the percentage of lecturers and students who give an affirmative answer to each of the questions posed concerning student participation in the development of assessment criteria. The greatest difference was found in the answer to item 6.1, where 45% of the teaching staff indicate that it is appropriate for students to participate in the determination of assessment criteria, as opposed to 88.6% of students who think that it is necessary.
The percentages of affirmative answers given by both students and teaching staff to item 6.2 are almost identical. Only a minority believes that strategies and opportunities are offered to students to take an active part in assessment planning.
Only five of the forty teaching staff surveyed (12.8%) stated that they define the assessment processes for their subjects in collaboration with the students (item 6.3), while this perception was even less from the point of view of the students (6.1%).
It is also worth noting that 72% of the teaching staff state that they carry out activities to explain and discuss assessment criteria (item 6.4), although only 41.9% of students perceive this to be the case. After this joint discussion, 47.5% of the teaching staff state that they modify assessment criteria to include proposals made by their students (item 6.5); however, only 11.5% of the students recognise that this modification takes place.
FIGURE 2. Perspective of teaching staff and students regarding participation in the establishment of assessment criteria
Figure 3 shows that, with regard to student participation in assessment design, few possibilities are offered to students to take part in the design of the assessment process, since the percentages of affirmative answers to the items in this dimension are all low.
FIGURE 3. Perspective of teaching staff and students regarding participation in assessment design
On the other hand, over a third of the teaching staff (37.5%) state that they offer students the possibility of choosing assessment tasks (item 7.4). Both teaching staff and students coincide in indicating low student participation in defining what will be assessed (theoretical contents, tasks to be carried out, deliverables; item 7.1) and in establishing the assessment procedure (item 7.5). More teaching staff perceive student participation in the choice of instruments (item 7.2) and in their creation/development (item 7.3), at 28.2% and 27.5% respectively, than do the students (15% and 13.8%).
In the dimension concerning information and training (Figure 4), there is a certain discrepancy between the opinion of the teaching staff and that of the students. The lecturers give a higher proportion of affirmative answers to the items in this dimension, in all cases significantly higher than that of the students.
A total of 61.5% of the teaching staff state that they provide students with information concerning the benefits of participating in assessment (item 8); however, only 36.1% of the students consider this to be the case. It is striking that 95% of the teaching staff state that the strategies they use allow students to be aware of their level of achievement (item 10) and that they provide feedback (item 11). This perception differs from that of the students (62% for item 10 and 58.5% for item 11). While 82% of the teaching staff state that they provide their students with prospective feed-forward (item 12), only 32% of students confirm this perception.
The percentage of answers to item 9, concerning the presence of assessment training activities, is revealing: it is the lowest of all the items in this dimension, both for students (20% affirmative answers) and for teaching staff (38.5%).
FIGURE 4. Perspective of teaching staff and students regarding the information provided on assessment practices
Types of assessment: self-assessment, peer-assessment and co-assessment
The survey included various items aimed at ascertaining the opinion of teaching staff and students on whether, and how, self-assessment, peer-assessment and co-assessment are used.
First of all, self-assessment was addressed in two items (13 and 14). For item 13, Figure 5 shows two profiles with quite similar response tendencies among teaching staff and students. According to 17.3% of students and 30% of teaching staff, self-assessment is put into practice by assessing individual performance (item 13.1); 9% of students and 12.5% of lecturers report that it is practised by assessing group exercises (item 13.2). Almost one third of the teaching staff state that they use both types of self-assessment, individual and group (item 13.3), although only 15.4% of the students agree.
FIGURE 5. Perspective of teaching staff and students on the use of self-assessment
There is also a certain amount of divergence in the response to item 13.4. According to the perspective of 58.3% of students, the teaching staff does not put these self-assessment processes into practice. This opinion is only shared by 25% of lecturers.
From the responses to item 14 (Figure 6) we can see that, according to 30% of lecturers and 33.7% of students, self-assessment most often consists of the teaching staff providing solutions to exercises, which the students then correct themselves (item 14.2). This is followed by the use of self-assessment as a means by which students evaluate the extent to which they meet the assessment criteria and think critically about this evaluation (item 14.5), according to 25% of teaching staff and 12.9% of students. To a lesser extent, self-assessment is used to identify and describe errors (item 14.4), to reflect on and write a report on what has been learned (item 14.3), or for students to give themselves their own marks (item 14.1).
FIGURE 6. Perspective of teaching staff and students regarding the form of self-assessment
Secondly, information regarding peer-assessment was collected in items 15 and 16. Figure 7 shows the opinion of teaching staff and students concerning the use of this type of assessment. Both students (69.3%) and teaching staff (47.5%) state that peer-assessment is not put into practice (item 15.4). When it is, according to 22.5% of teaching staff, what is most often done is that students assess the performance of others in their group (item 15.1). On the other hand, 11.5% of students believe that what is most often practised is the assessment of the performance of the class as a whole (item 15.2).
FIGURE 7. Perspective of teaching staff and students regarding the use of peer-assessment
In relation to how peer-assessment is carried out (Figure 8), a certain discrepancy has been observed between the perceptions of students and teaching staff, which comes to light in items 16.1, 16.2 and 16.4. While 17.3% of students believe their participation centres on correcting the exercises of other classmates based on the solutions provided by the teaching staff, only 7.5% of lecturers indicate that they put this possibility into practice. The discrepancy between the two perspectives is even greater regarding the evaluation of classmates' work against assessment criteria, using critical reasoning: 27.5% of the teaching staff state that they do this, but only 7.2% of students perceive it.
FIGURE 8. Perspective of teaching staff and students regarding the forms of peer-assessment
Finally, item 17 asked about the use of co-assessment. The responses obtained from both lecturers and students clearly reveal that it is not put into practice. Only one of the lecturers surveyed stated that they use it. Similarly, 92.2% of students state that the teaching staff does not put either co-assessment or consensual assessment into practice.
Performance, benefits and consequence of participation
The information concerning these aspects was obtained from items 18, 19 and 20 of the survey. The opinion of lecturers and students concerning their perception of how students perform when taking part in the assessment process was obtained from item 18. Items 19 and 20 were centred on the consequences and benefits of the said participation.
As can be observed in Figure 9, the opinions of lecturers and students on the latter's performance in assessment show very similar response profiles, although there are statistically significant differences in all cases, with the exception of item 18.8.
FIGURE 9. Perspective of teaching staff and students concerning student performance in participatory assessment
It is worth pointing out that the majority of lecturers agree that students tend to overvalue their performance (item 18.1) and give importance to their effort without taking the results into account (item 18.3). They also recognise that students usually take part in assessment when asked to do so (item 18.7), although they do not entirely agree that students are initially willing to assume responsibility for their own assessment or for the assessment of others (item 18.8).
They also express disagreement as to whether students have sufficient mastery of the subject to carry out objective assessment (item 18.2) and enough experience for self-assessment (item 18.4) and peer-assessment (item 18.5). They also recognise that students do not have sufficient training to carry out assessment (item 18.6).
Students believe they do not tend to overvalue the performance of their classmates (item 18.1). They agree with the teaching staff that they do not have sufficient mastery of the subject to carry out objective evaluations (item 18.2) and that they give importance to effort without taking the results into account (item 18.3). They also state that they have little experience in self-assessment (item 18.4) and in peer-assessment (item 18.5). More students than teaching staff believe that they do not have enough training for assessment (item 18.6). Finally, a low to middle proportion state that they usually participate in assessment when the lecturer requests it (item 18.7) and that they like to assume responsibility for their own assessment or for that of others (item 18.8).
This section of the survey concluded by asking both groups about students' honesty in assessment. With reference to the average scores, the students disagree that their assessments are influenced by the possible effects these may have on their marks (2.56) or that they undervalue the performance of their classmates (2.10). The teaching staff, for their part, consider that the students are mainly sincere in their assessments (3.38). They do, however, think that the contributions made by students to the discussion or consensus on grading tend to be subjective and biased (3.79).
On the other hand, with respect to the benefits of student participation in assessment, Figure 10 shows that both groups believe the most important benefits to be an increased ability to identify one's own errors (item 20.6) and greater involvement in the learning process (item 20.5).
Other benefits pointed out by both teaching staff and students, although there is a slight difference regarding the order of importance, are the development of a critical attitude towards one's own achievements (item 20.2); the improvement of the abilities and skills acquired in the subjects (item 20.4); the improvement of knowledge related to the subject itself (item 20.3); the improvement of the results or products expected from learning (item 20.8); the ability to improve attitudes (item 20.7); and, finally, the least valued benefit, although with a general positive assessment among both groups, is the acquisition of a more complete vision of the competences to be gained in the subject (item 20.1).
FIGURE 10. Perspective of teaching staff and students regarding the benefits of student participation in assessment
Similar tendencies are observed in the responses regarding the possible consequences of participation in assessment (see Figure 11). The largest proportion of both groups (45.8% and 41.4%) maintain that the teaching staff review results taking self-assessment into consideration (item 19.1). To a lesser extent this review takes peer-assessment into account (item 19.2), and only in a very few cases is assessment carried out through a process of consensus (item 19.3).
FIGURE 11. Perspective of teaching staff and students regarding the consequences of student participation in assessment
This interpretation is consistent with the degree to which teaching staff and students state that these types of assessment are carried out (item 21). Thus, Figure 12 shows that teaching staff and students agree that the main type of assessment put into practice is assessment by the lecturer, followed, to a lesser degree, by the other participatory assessment strategies.
FIGURE 12. Perspective of teaching staff and students regarding the extent to which the different types of assessment are used
Conclusions and prospective
The aim of this research was to analyse how teaching staff and students perceive the participation of university students in assessment, using both of the groups involved as sources of information. First, the explicitness with which lecturers refer to this matter in the official subject outlines was analysed, paying specific attention to the section dealing with how the assessment of their subjects is planned. Secondly, the opinions expressed by lecturers and students in two questionnaires prepared for this purpose were analysed.
Following the documentary analysis of the official subject outlines, we can conclude that there is little evidence that university teaching staff consider student participation in the assessment process to be important. This lack of indicators of participatory assessment allows us to affirm that the type of assessment predominating in the institution analysed is "traditional assessment", as it continues to be a process essentially designed, carried out and controlled by the teaching staff, with no evidence of student participation by means of alternative assessment strategies such as self-assessment, peer-assessment or co-assessment.
From the descriptive analysis of the opinion of lecturers and students we have been able to ascertain that, generally speaking, both groups are in agreement as to the possible benefits of active student participation in the assessment process, with regard to aspects such as the acquisition of a more complete competence in the subject, the development of critical capacity, active implication in the learning process, the improvement of knowledge in the specific subjects, or the improvement of attitudes and the results or products of learning.
However, this recognition contrasts with the scant use of participatory assessment strategies. Self-assessment is the most used, followed by peer-assessment, but discussion, sharing and consensus through co-assessment is not an assessment practice found in these university classrooms.
It has been shown that lecturers and students relatively agree that it is not general practice for teaching staff and students to collaborate in assessment design, for example by choosing assessment tasks or instruments, or by establishing assessment procedures.
With reference to assessment criteria, a significant proportion of the teaching staff surveyed are in favour of adopting some type of strategy to enable students to participate in determining assessment criteria; they state that they carry out activities to explain and discuss these criteria with students and that they have changed previous criteria to include student proposals. These opinions are qualified to some extent by the students who, in the majority, are in favour of participating in assessment but consider that the explanation and discussion of the criteria, or participation in determining them, is not a widespread reality.
Lecturers and students agree to a certain extent that the training and education given to students to enable them to deal effectively with assessment is minimal. They also agree that the teaching staff provide their students with feedback, enabling the latter to know their level of achievement. However, much less use is made of feed-forward, thus limiting the possible improvement of student performance.
These conclusions must be considered in the light of the methodological limitations of the study, such as the small sample used and the fact that it was carried out in a single academic institution, which does not allow for generalisation.
Despite the limitations of this descriptive study, the results presented call attention to the need to establish training processes for both teaching staff and students, in order to favour active student participation in assessment processes, so that assessment gradually comes to be seen as a means for both to share responsibility for student learning. In short, it is necessary to provide education in assessment, so that the proposal of Boud & Associates (2010), according to which students and lecturers become partners responsible for learning and assessment, can become a reality.
As Sadler (2010) indicates, teaching staff acquire assessment competency through training and practice in assessing their students; this is the same opportunity we should offer our students, so that they can develop the ability to establish criteria and to evaluate, critically and coherently, not only their own learning processes but also those of others. In this way, students can also learn to provide quality feedback and feed-forward to their peers, contributing both to the improvement of their own performance and that of their classmates, and to the development of higher-order abilities such as critical appraisal and critical thinking.
The challenge over the next few years is to offer this specific training to students, making it a basic experience of their time at university. This, however, presupposes and demands that universities institutionalise the participatory dimension of assessment processes.
Falchikov (2005: 254) stated that we still know very little about many aspects of student involvement in assessment processes, and insisted on the need for a coordinated, cross-cultural research programme that would make it possible to investigate this field in greater depth. The present study was intended to shed some light on this aspect, but many questions remain unanswered. Various research and innovation projects are giving impetus to these participatory assessment strategies and opening spaces for communication and exchange of ideas among university teaching staff, as seen in the contributions presented at the EVALTrends 2011 international congress (http://evaltrends.uca.es). We are confident that, over the next few years, research in this area will expand and that the use of participatory assessment strategies by university teaching staff and students will become widespread, thus favouring strategic learning throughout their lives.
Biggs, J. (2005). Calidad del aprendizaje universitario. Madrid: Narcea.
Biggs, J. & Tang, C. (2009). Teaching for Quality Learning at University. Buckingham: Open University Press.
Bordas, M.I. & Cabrera, F.A. (2001). Estrategias de evaluación de los aprendizajes centrados en el proceso. Revista Española de Pedagogía, LIX (218), 25-48.
Boud, D. (1991). Implementing student self-assessment. Campbelltown: Higher Education Research and Development. Society of Australia Incorporated.
Boud, D. (2000). Sustainable assessment: rethinking assessment for the learning society. Studies in Continuing Education, 22 (2), 151–167.
Boud, D. (2006). Foreword. In C. Bryan & K. Clegg (Eds.), Innovative Assessment in Higher Education (pp. xvii-xix). New York: Routledge.
Boud, D. & Associates (2010). Assessment 2020: Seven propositions for assessment reform in higher education. Sydney: ALTC.
Brew, A. (2003). La autoevaluación y la evaluación por los compañeros. In S. Brown & A. Glasner (Eds.), Evaluar en la Universidad. Problemas y nuevos enfoques (pp. 179-189). Madrid: Narcea.
Carless, D., Joughin, G. & Mok, M. (2006). Learning-oriented assessment: principles and practice. Assessment and Evaluation in Higher Education, 31 (4), 395-398.
Dochy, F., Segers, M. & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: a review. Studies in Higher Education, 24 (3), 331-350.
Falchikov, N. (1986). Product comparisons and process benefits of collaborative peer group and self-assessments. Assessment & Evaluation in Higher Education, 11 (2), 144-166.
Falchikov, N. (2005). Improving Assessment Through Student Involvement. Practical solutions for aiding learning in higher and further education. London: Routledge-Falmer.
Gessa Perera, A. (2011). La coevaluación como metodología complementaria de la evaluación del aprendizaje. Análisis y reflexión en las aulas universitarias. Revista de Educación, (354), 749-764. Retrieved 20 July 2011, from http://www.revistaeducacion.educacion.es/re354/re354_30.pdf
Gibbs, G. (1981). Teaching students to learn: a students-centred approach. Philadelphia: Open University Press.
Gibbs, G. (2003). Uso estratégico de la evaluación en el aprendizaje. In S. Brown & A. Glasner (Eds.), Evaluar en la Universidad. Problemas y nuevos enfoques (pp. 61-76). Madrid: Narcea.
Gibbs, G. (2006). Why assessment is changing. In C. Bryan & K. Clegg (Eds.), Innovative Assessment in Higher Education (pp. 11-22). New York: Routledge.
Gielen, S., Dochy, F. & Onghena, P. (2011). An inventory of peer assessment diversity. Assessment & Evaluation in Higher Education, 36 (2), 137-155.
Goñi Zabala, J.M. (2005). El espacio europeo de educación superior, un reto para la universidad. Competencias, tareas y evaluación, los ejes del currículum universitario. Barcelona: Octaedro.
Ibarra Sáiz, M.S. (1999). Guía para un diagnóstico alternativo en el contexto del aula. Diagnóstico en Educación. Proyecto Docente (pp.167-211). Cádiz: Universidad de Cádiz.
Ibarra Sáiz, M.S. (Dir.) (2007). Proyecto SISTEVAL. Recursos para el establecimiento de un sistema de evaluación del aprendizaje universitario basado en criterios, normas y procedimientos públicos y coherentes. Cádiz: Servicio de Publicaciones de la Universidad de Cádiz. Retrieved 20 July 2011, from http://minerva.uca.es/publicaciones/asp/docs/obrasDigitalizadas/sisteval/sisteval.html
Ibarra Sáiz, M.S. (Dir.) (2008). EvalCOMIX: Evaluación de competencias en un contexto de aprendizaje mixto. Cádiz: Servicio de Publicaciones de la Universidad de Cádiz. Retrieved 20 July 2011, from http://minerva.uca.es/publicaciones/asp/docs/obrasDigitalizadas/evalcomix.pdf
Ibarra Sáiz, M.S. & Rodríguez Gómez, G. (2010). Los procedimientos de evaluación como elementos de desarrollo de la función orientadora en la universidad. Revista Española de Orientación y Psicopedagogía, 21 (2), 443-461.
Ibarra Sáiz, M.S., Rodríguez Gómez, G. & Gómez Ruiz, M.A. (2012). La evaluación entre iguales: beneficios y estrategias para su práctica en la universidad. Revista de Educación, (359). DOI: 10.4438/1988-592X-RE-2010-359-092
Knight, P. T. (2005). El profesorado de Educación Superior. Formación para la Excelencia. Madrid: Narcea.
Ljungman, A. & Silén, C. (2008). Examination involving students as peer examiners. Assessment & Evaluation in Higher Education, 33 (3), 289-300.
López Pastor, V.M. (Coord.) (2009). La evaluación formativa y compartida en docencia universitaria: propuestas, técnicas, instrumentos y experiencias. Madrid: Narcea.
Nicol, D. (2009). Transforming Assessment and Feedback: Enhancing integration and empowerment in the first year. Mansfield: Enhancement Themes.
Padilla Carmona, M.T. & Gil Flores, J. (2008). La evaluación orientada al aprendizaje en la Educación Superior: Condiciones y estrategias para su aplicación en la docencia universitaria. Revista Española de Pedagogía, 66 (24), 467-486.
Pérez Pueyo, A., Tabernero, B., López, V.M., Ureña, N., Ruiz, E., Capllonch, M., González, N. & Castejón, F.J. (2008). Evaluación formativa y compartida en la docencia universitaria y el Espacio Europeo de Educación Superior: cuestiones clave para su puesta en práctica. Revista de Educación, (347), 435-451. Retrieved 25 July 2011, from http://www.revistaeducacion.mec.es/re347/re347_20.pdf
Prins, F.J., Sluijsmans, M.A., Kirschner, P.A. & Strijbos, J.W. (2005). Formative peer assessment in a CSCL environment: a case study. Assessment & Evaluation in Higher Education, 30 (4), 417-444.
Rodríguez Gómez, G., Gil Flores, J. & García Jiménez, E. (1999). Metodología de la investigación cualitativa. Archidona, Málaga: Aljibe.
Rodríguez Gómez, G. (Dir.) (2009). EvalHIDA: Evaluación de Competencias con Herramientas de Interacción Dialógica Asíncronas (foros, blogs y wikis). Cádiz: Servicio de Publicaciones de la Universidad de Cádiz. Retrieved 20 July 2011, from http://www.tecn.upf.es/~daviniah/evalhida.pdf
Rodríguez Gómez, G. & Ibarra Sáiz, M.S. (Eds.) (2011). e-Evaluación orientada al e-Aprendizaje estratégico en la educación superior. Madrid: Narcea.
Rodríguez Gómez, G., Ibarra Sáiz, M.S. & Gómez Ruiz, M.A. (2011). e-Autoevaluación en la universidad: un reto para profesores y estudiantes. Revista de Educación. DOI: 10.4438/1988-592X-RE-2010-356-045
Rodríguez Gómez, G. (Dir.), Quesada Serra, V., Gómez Ruiz, M.A., Ibarra Sáiz, M.S., Gallego Noche, B., Cabeza Sánchez, D., León Rodríguez, A. & Cubero Ibáñez, J. (2010). Re-Evalúa: Comprobando el impacto de la e-Evaluación orientada al e-Aprendizaje en la universidad. In M.E. Prieto Méndez, J.M. Dodero Beardo & D.O. Villegas Sáenz (Eds.), Recursos Digitales para la Educación y la Cultura. Actas CcITA-Volumen SPDECE (pp. 253-256). Cádiz: Universidad de Cádiz y Universidad Tecnológica Metropolitana de México.
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment and Evaluation in Higher Education, 35, 535-550.
Sambell, K. & McDowell, L. (1998). The construction of the hidden curriculum: messages and meanings in the assessment of student learning. Assessment and Evaluation in Higher Education, 23, 391-402.
Sivan, A. (2000). The implementation of peer assessment: an action research approach. Assessment in Education, 7(2), 193-213.
Stefani, L.A.J. (1994). Peer, self and tutor assessment: relative reliabilities. Assessment and Evaluation in Higher Education, 19(1), 69-75.
EvalPART project ("Involving students in learning assessment and quality in higher education"), funded by the Spanish Agency for International Development Cooperation (Agencia Española de Cooperación Internacional para el Desarrollo, AECID) (Ref. A/016477/08).
"Re-Evalúa" excellence research project ("Reengineering of e-assessment: technology and skills development in university professors and students"), Ref. P08-SEJ-03502, http://reevalua.uca.es. INEVALCO project ("Innovation in competency assessment: design and development of procedures and tools for skills assessment of undergraduate students in blended/virtual learning environments"), Ref. EA-2010-0052.
ARTICLE RECORD / FICHA DEL ARTÍCULO
Rodríguez-Gómez, Gregorio; Ibarra, Marisol; Gallego-Noche, Beatriz; Gómez-Ruiz, Miguel-Ángel & Quesada-Serra, Victoria (2012). Student voice in learning assessment: a pathway not yet developed at university. RELIEVE, v. 18, n. 2, art. 2. DOI: 10.7203/relieve.18.2.1991
Title / Título
Student voice in learning assessment: a pathway not yet developed at university. [La voz del estudiante en la evaluación del aprendizaje: un camino por recorrer en la universidad].
Authors / Autores
Rodríguez-Gómez, Gregorio; Ibarra-Sáiz, Maria Soledad; Gallego-Noche, Beatriz; Gómez-Ruiz, Miguel-Ángel & Quesada-Serra, Victoria
Review / Revista
RELIEVE (Revista ELectrónica de Investigación y EValuación Educativa), v. 18, n. 2
Publication date / Fecha de publicación
2012 (Reception Date: 2011 June 08; Approval Date: 2012 August 30; Publication Date: 2012 September 03).
Abstract / Resumen
During 2009/2010, this research was conducted with the aim of analysing the opinions and perspectives of university students and teaching staff regarding student participation in the assessment process. A content analysis of 76 subject outlines was carried out, and 40 members of the teaching staff and 614 university students were then surveyed by means of two questionnaires. The results show a shortage of evidence of real student participation in assessment. They also confirm divergent opinions between teaching staff and students about the uses and forms that this active participation in assessment takes.
Durante el curso 2009/2010 se llevó a cabo esta investigación con el objetivo de analizar la opinión y perspectiva que profesores y estudiantes universitarios tienen sobre la participación de estos últimos en el proceso de evaluación. Se realizó un análisis de contenido de 76 programas de asignaturas universitarias y se encuestaron mediante dos cuestionarios a 40 profesores y 614 estudiantes universitarios. Los resultados muestran una escasez de evidencias sobre la participación real de los estudiantes. Además, confirman opiniones divergentes entre docentes y estudiantes sobre los usos y las formas en las que se concreta esta participación activa en la evaluación.
Keywords / Descriptores
Learning assessment, Learning-oriented assessment, Self-assessment, Peer-assessment, Co-assessment, Collaborative assessment, Participative assessment, Higher Education.
Evaluación del aprendizaje, evaluación orientada al aprendizaje, autoevaluación, evaluación entre iguales, coevaluación, evaluación colaborativa, evaluación participativa, educación superior.
Institution / Institución
Facultad de Educación. Universidad de Cádiz (Spain).
Publication site / Dirección
Language / Idioma
Spanish & English version (Title, abstract and keywords in English & Spanish)
Volumen 18, n. 2
© Copyright, RELIEVE. Reproduction and distribution of this article is authorized if the content is not modified and its origin is indicated (RELIEVE Journal, volume, number and electronic address of the document).
© Copyright, RELIEVE. Se autoriza la reproducción y distribución de este artículo siempre que no se modifique el contenido y se indique su origen (RELIEVE, volumen, número y dirección electrónica del documento).
[ ISSN: 1134-4032 ]
Revista ELectrónica de Investigación y EValuación Educativa
E-Journal of Educational Research, Assessment and Evaluation