2.6 New views of assessment in education

14:00 - 15:30 Tuesday, 6th September, 2022

Location Maths Building, Room 104

Theme Curriculum, Assessment and Pedagogy

Presentation Type Individual Papers

Chair Mary Richardson


265 21st Century Educational Assessment - Taking Stock

Ms Isabel Nisbet, Mr Stuart Shaw
University of Cambridge, Cambridge, United Kingdom

Abstract

At the end of the 20th century, there was much reflection about the state of education, and of educational assessment in particular. Drivers for change included, for example, the democratisation of learning, the rise in university participation and in careers requiring a degree, and the fast pace of technological change (OECD, 2018). Lists of so-called “21st century skills” were shared across the world, and education was seen as linked to growing internationalism across countries and multi-culturalism within countries (European Commission, 2019). In the world of educational assessment, technology was seen to be opening doors to many new opportunities (Bennett, 2002), despite a continuing nervousness in some countries about changing the formats of key, high-stakes exams. The distinction between “assessment for learning” and “assessment of learning” was becoming commonplace, with most educationists championing formative uses of assessment outcomes (Schildkamp et al., 2020). In England, the highly influential Assessment Reform Group (1989–2010) worked over the turn of the century with the aim that policy on educational assessment should take more account of relevant research findings.[1]

Twenty years and a pandemic later, reviews of education and assessment are again abundant (in the UK, for example, see The Times, 2021, and IAC, 2021). The future of assessment is contested and debated, as was illustrated by the selection of assessment as the focus for a BERA Presidential Roundtable held in January 2022. However, there is a gap between specialised analyses of assessment issues and high-level policy statements. This presentation will seek to fill that gap with academic, interdisciplinary work reviewing thinking about educational assessment, leading to proposals for further work.

The presentation will consider some of the contextual changes affecting educational assessment.  These include developments in our understanding of knowledge, learning and the brain, changes to patterns of life and work, both in post-industrial societies and in developing countries, and increasingly emotive and divisive developments in culture, politics and ideology affecting education. Assumptions about growing internationalism have been challenged by revivals of nationalist thinking and the promotion of national – and sub-national – traditions and values. The content and style of teaching and learning in the former imperial powers - and internationally - have been shaking off the traces of colonialism. And thinking about equality, fairness and diversity has moved on, though not without controversy (Nisbet & Shaw, 2019).  

Also clearly relevant is the “information revolution”, with technological advances enabling vast amounts of information to be gathered and analysed and sophisticated devices to be affordable and usable by teachers and students. This has greatly increased the scale, types, and immediacy of data available (Herold, 2016), as well as raising ethical issues.

Writers about education at the end of the 20th century might be surprised by two further, more recent developments. The first is the prominence of concern (including among young people) about climate change and the future of the planet. The second is the widespread consensus on the importance for education of promoting and sustaining wellbeing, both of students and of their teachers. Over and above all these factors has been the impact of the pandemic, which has created a fissure in long-term thinking about education and assessment: it is not possible to chart continuous straight-line development through the first quarter of the century.

In this changing context, educational assessment has itself changed. The presentation will touch upon developments in assessment theory and in uses of assessment – including uses for accountability and performance management. The practical constraints of the pandemic have forced a normally cautious sector to experiment with different modes of assessment and in using different kinds of evidence of the knowledge and skill of students. Questions for policy-makers and for educational researchers include how much the post-pandemic “new normal” should revert to the “old normal”, and the ethics of how assessment should be practised and its results used in the 2020s and beyond.  

The presentation will draw on examples from the UK, Europe and USA. It offers a starting-point for much-needed cross-disciplinary academic work involving both synthesis (bringing together conclusions from different trends and viewpoints) and analysis, to revisit some of the conceptual foundations of thinking about educational assessment. It will conclude with some principles relevant to education professionals, the assessment industry and policy-makers. 

[1] https://www.nuffieldfoundation.org/project/the-assessment-reform-group


References

  • Bennett, R. E. (2002). Inexorable and Inevitable: The Continuing Story of Technology and Assessment. The Journal of Technology, Learning and Assessment, 1(1). Retrieved from https://ejournals.bc.edu/index.php/jtla/article/view/1667
  • European Commission. (2019). Key Competences for Lifelong Learning. European Union.
  • Herold, B. (2016, January 11). The Future of Big Data and Analytics in K-12 Education. Education Week, 35(17). Retrieved from https://www.edweek.org/ew/articles/2016/01/13/the-future-of-big-data-and-analytics.html
  • IAC (2021). The Independent Assessment Commission: Interim Report. Retrieved from the New Era Assessment website.
  • Nisbet, I. & Shaw, S. D. (2019). Fair assessment viewed through the lenses of measurement theory, Assessment in Education: Principles, Policy & Practice, DOI: 10.1080/0969594X.2019.1586643
  • OECD. (2018). The Future of Education and Skills: Education 2030. Organisation for Economic Co-operation and Development.
  • Schildkamp, K., van der Kleij, F. M., Heitink, M. C., Kippers, W. B., & Veldkamp, B. P. (2020). Formative assessment: A systematic review of critical teacher prerequisites for classroom practice. International Journal of Educational Research, 103, 101602. https://doi.org/10.1016/j.ijer.2020.101602
  • The Times (2021). The Times Education Commission. Details at The Times Education Commission website.

Paper submission type

Individual

Themes

Curriculum, Assessment and Pedagogy

Second Theme

Educational Research and Educational Policy-Making

364 Experiential Learning Assessment in Post-Secondary Education

Dr Jay Wilson, Dr Marc Gobeil, Dr Tom Yates, Dr Alec Aitken, Dr Kevin Lewis
University of Saskatchewan, Saskatoon, Canada

Abstract

Introduction

This presentation examines the emerging results of the two-year Social Sciences and Humanities Research Council of Canada (SSHRC) funded study “Experiential Learning Assessment in Post-Secondary Education.” As experiential learning (EL) continues to be a key area of growth for many universities and post-secondary institutions, this research appeals to faculty, designers, and others tasked with implementing effective EL. The purpose of this study was to develop a base understanding of experiential learning assessment through an exploratory approach with university faculty. The research engaged University of Saskatchewan instructors experienced in using experiential learning. Nine participants completed a survey, and six of these went on to discuss their experiences and understanding.

Participants reported a student body that was far more engaged with its learning than observed in traditional courses. They expanded on current definitions and characteristics of EL assessment and on differences between instructing and assessing in traditional classes. Common themes, such as questioning and processes for integrating more EL evaluation strategies, were identified. Participants shared advice for those considering EL approaches in their own teaching.

Methodology

A qualitative survey was designed by the research team with consultation from the university’s internal research support unit. The survey consisted of nine exploratory open-ended questions, and three demographic questions on gender, age, and years of teaching experience. Following the survey, a focus group and one individual interview were conducted to explore themes that emerged in the survey results. 

Participants & Procedures

The survey was distributed to a list of faculty known to teach with experiential learning methods throughout the University of Saskatchewan (n=14). Data was collected from January 15 to February 12, 2021, with a total of nine participants (response rate: 64%). Of the nine survey respondents, five participated in a focus group and one respondent was interviewed individually. The focus group and interview were recorded using the Cisco Webex platform.

Analysis

Participant responses were uploaded into NVivo and coded for key themes. Focus group questions and responses were recorded and transcribed; these results were also uploaded into NVivo for coding and analysis.

Findings 

The research emphasized the role of prior experience with experiential learning in each participant’s educational journey. All participants noted positive experiences instructing in EL settings. Experiential learning as a methodology was discussed, and nearly all participants (n=8) described learning that took place in a real-world setting, whether that be in a lab, the field, communities, or the “real world”. Another key finding was the inclusion of hands-on or practical components in all EL courses. Key positive differences of EL over traditional courses were better engagement, increased student confidence, value-added learning, and opportunities for students to “take control” of assessment. Benefits were not limited to the students but were also reported by the instructors themselves; these included increased satisfaction with teaching and assessment practices and enhanced relationships with students. Successful EL assessment characteristics were identified, and considerable focus was placed on students deciding or constructing assessments specific to the activity. The co-construction of criteria was noted as another way to synthesize multiple activities and convey meaning from personal experiences. Finally, the participants provided advice to instructors considering an EL teaching approach. Their message was overwhelmingly supportive of integrating or shifting to EL as a teaching and learning practice that “means the world to students”.

Conclusions

Based on the research, three key themes emerged from the experiences of the participants.

Experiential learning improves student experiences. All participants noted benefits for students, including engagement, increased understanding, and control over their learning. Many participants had taken part in an experiential learning opportunity in their own schooling, and those outcomes still resonated with them at the time of the survey, suggesting potential longevity in the benefits of such courses.

Experiential assessment needs to be flexible and balanced. Participants believed that many things need to be balanced in experiential learning assessment. Subjectivity and objectivity need to be balanced, and differing base levels of student knowledge and skill must be accounted for. Practical exams should assess skill acquisition, while broader approaches such as reports and/or presentations can be used to assess knowledge, engagement, and extended learning.

Instructors must be comfortable with a different role. When offering advice, participants suggested that instructors need to shift their role from lecturer to conversationalist. Instructors are there to guide experiences but not dictate them. It was suggested that time must be taken to allow for deeper experiences and to engage with students individually to be able to adequately assess each on an individual basis.

References

  • Wilson, J., Yates, T., & Purton, K. (2018). Performance, Preference, and Perception in Experiential Learning Assessment. Canadian Journal for the Scholarship of Teaching and Learning, 9(2), 1-24.
  • Yates, T., Wilson, J., & Purton, K. (2015). Surveying Assessment in Experiential Learning: A Single Campus Study. Canadian Journal for the Scholarship of Teaching and Learning, 6(3), 1-27.

Paper submission type

Individual

Themes

Curriculum, Assessment and Pedagogy

Second Theme

Higher Education

571 How Can Teachers Assist Digitally? Students’ Experiences, Perceptions and Expectations of Useful Feedback Approaches in UK Higher Education

Ms Wan Faizatul Ismayatim, Dr Serdar Abaci, Dr Jill Northcott
University of Edinburgh, Edinburgh, United Kingdom

Abstract

Introduction

The UK National Student Survey (NSS) has consistently rated student satisfaction with feedback lower than other teaching and learning elements within the UK higher education sector (Nicol, 2010), and why students’ expectations of academic feedback are not being met remains unclear. Past studies on different approaches to giving feedback, and on how students feel about it, indicate a mismatch between what students are looking for and what educators think they are giving (Chalmers et al., 2017; Maggs, 2012). Although many studies have proposed frameworks to explain what makes feedback effective, there appears to be a significant gap in knowledge as to what constitutes valuable feedback from the students’ perspective, specifically the gap between students’ dissatisfaction with feedback and teachers’ feedback practice. While both educators and students agree that feedback is for improvement purposes, students perceive high-quality feedback comments that are usable, detailed, considerate of affect, and personalised to the student’s work as what makes feedback effective for their learning (Dawson et al., 2019). Zhao et al. (2021) found that frequently occurring emotions in the feedback process are negative, with students reporting uncertainty about their progress and teachers reporting that they often avoid dialogue with students for fear of conflict.

Statement of Problem and Aims of Study

As students’ dissatisfaction with feedback practices remains a significant problem in the higher education context, there is an urgency to explore what makes feedback valuable from the students’ point of view. In the post-COVID-19 era, in which the provision of assessment feedback increasingly occurs in digital modes and the delivery of feedback is perceived as much more challenging than before, it is critically important to understand what students perceive as the most useful feedback and what they expect of high-quality feedback in technology-mediated settings. This includes not only the message of the feedback provided but also the timing and platforms used when delivering feedback digitally. Viewing feedback as a communicative process, proactive uptake of feedback is influenced not only by the receiver, sender, and message but also by the context in which the message is delivered (Winstone et al., 2017). Thus, to remain relevant in the context of the current global pandemic, in which blended and online learning are expected to be increasingly prevalent, this study seeks to explore university students’ experiences, expectations, and preferences regarding the types of feedback they find most useful when drafting an academic (written) assignment via digital platforms. The proposed participants of this study are full-time undergraduate students registered on multiple programmes at a Scottish research university in the UK. The students’ online feedback experiences, perceptions, and expectations will be explored regardless of their year of study.

This study aims to answer three research questions:

  1. What type of online feedback did students receive from their teachers when completing their written assignments?
  2. Which type of feedback do the students believe to be most useful to help them learn, specifically the type of written comments, platforms, and the timing of the comments?
  3. What is lacking in teachers’ feedback, and what are students’ expectations regarding how the feedback should have been given?

Methodology

This study will use a survey method in which an online questionnaire consisting of closed-ended (Likert-type) and open-ended questions will be distributed. To answer research questions 1 and 2, the learners’ feedback experiences and preferences will be measured using a set of questions adapted from Dawson et al. (2019), in which students will be asked to rate the level of detail, personalisation, and usability of the feedback comments they received digitally, along with their timing and platforms, and whether they found them useful. The frequency and mean score of the student responses will be calculated, analysed, and discussed in detail. For research question 3, students will answer an open-ended question about what teachers should do to help students learn when giving feedback in digital settings. Thematic analysis will be used to analyse students’ responses to this question.

Conclusion

This study aims to conceptualise what makes valuable feedback in higher education contexts from the paradigm of students’ affect, formative assessment, and digital settings. Therefore, the findings will inform teachers in their design of digital feedback, which will have an impact on curriculum assessment and pedagogy in the UK.

References

  • Chalmers, C., Mowat, E., Chapman, M. (2017). Marking and providing feedback face-to-face: Staff and student perspectives. Active Learning in Higher Education, 19(1), 35-45. https://doi.org/10.1177/1469787417721363 
  • Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E. (2019). What makes for effective feedback: staff and student perspectives. Assessment and Evaluation in Higher Education, 44(1), 25–36.  https://doi.org/10.1080/02602938.2018.1467877 
  • Maggs, L.A. (2012). A case study of staff and student satisfaction with assessment feedback at a small specialised higher education institution. Journal of Further and Higher Education, 38(1), 1-18. https://doi.org/10.1080/0309877X.2012.699512
  • Nicol, D. (2010). From monologue to dialogue: Improving written feedback processes in mass higher education. Assessment and Evaluation in Higher Education, 35(5), 501–517. https://doi.org/10.1080/02602931003786559 
  • Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting Learners’ Agentic Engagement with Feedback: A Systematic Review and a Taxonomy of Recipience Processes. Educational Psychologist, 52(1), 17–37. https://doi.org/10.1080/00461520.2016.1207538 
  • Zhao, X., Cox, A., Lu, A., & Alsuhaibani, A. (2021). A comparison of student and staff perceptions and feelings about assessment and feedback using cartoon annotation. Journal of Further and Higher Education. https://doi.org/10.1080/0309877X.2021.1986620

Paper submission type

Individual

Themes

Curriculum, Assessment and Pedagogy

Second Theme

Educational Technology

608 Two Steps Forward One Step Back: Implementing a New Formative Feedback Policy

Dr Alphonse de Kluyver, Mr Chris Jones
Pearson College, London, United Kingdom

Abstract

In response to our experience with formative feedback, we decided to begin an action research project to review feedback across Pearson Business School. Informing the project was a frustration that, despite all the work that has been done in the sector, and by us as an institution, student perceptions of feedback remain relatively poor (King, 2019). As Brown and Glover (2019) point out, ‘there has been a proliferation of studies over the past two decades’ on formative assessment and feedback. Whilst an obvious point, ‘no matter how perfect the feedback message’, if it is not engaged with there is no improvement (Van der Kleij and Lipnevich, 2021, pp. 347-348). Engagement with feedback is a necessary element in successful feedback, but difficult to achieve.   

The school-wide approach to coursework formative feedback before the review was to provide individual, written formative feedback against the marking criteria for draft work on all the School’s main coursework assessments, but no mark. There was scope for clarification following feedback, and time was left for feedback to be implemented in work submitted for summative assessment. That approach sought to reflect the seven principles of good academic feedback practice distilled by Juwah et al. (2004, p. 6), but with a focus on providing feedback that closed ‘the gap between current and desired performance’.

The project began in the 2021 summer term with review of our coursework formative feedback policy. A focus group was run with our students in which they were invited to comment on the existing feedback process and ideas for change. A working group was also set up with lecturers. Drafts of the policy were shared with them for comment and discussion. Students’ comments in the review indicated attachment to feedback being received on draft coursework, but some unhappiness with the nature of the feedback given. Lecturers' comments on the process reflected disappointment that written feedback, whilst meeting high standards of clarity, precision and support, was not necessarily acted on or, if it was, it was not always with enough depth and impact on the attainment gap.

The two processes of consultation came together in a new formative feedback policy, which was approved by the School Board for implementation in the 2021 autumn term. The new policy has begun to breathe life into a singular feedback process in which there was clearly investment by students and lecturers, although also some dissatisfaction. The new policy retained written, individual feedback on draft work as the default feedback mechanism. However, under the policy, lecturers were invited to deploy alternative feedback processes.

One observed outcome so far is that lecturers are cautious about moving away from the default model. Some students too in their feedback to us at the end of the autumn term, where alternatives were offered, expressed concern that they had missed out on not getting individual, written feedback on draft work. But real innovation was implemented with engaging, online technology called Bubl, which had been developed in our School, being used to deliver a range of formative feedback to students.

Next steps involve gathering and reflecting on student and lecturer comments on feedback from the 2021 autumn term. After observing some innovative practices, but also some dedication to the old system, we want to see if perceptions are changing. We will continue to encourage lecturers to adopt alternative methods of formative feedback. Suggestions for new processes of self- and peer-based review by students have been made to lecturers for the 2022 spring term. Less far-reaching modifications, but ones which support a dialogic approach to feedback, such as 1-2-1 feedback meetings on draft coursework and recorded oral feedback through Turnitin, are also suggested. Both these approaches to feedback receive support in the literature for being engaging and effective (Gibbs, 2019, pp. 26-27; Juwah et al., 2004; Nicol, 2010), but they are not ones that have been used extensively at our School. In-class meetings with students over the term, where our formative feedback processes and the reasons for their implementation will be explained and discussed, are also planned.

This paper provides an action research case study: a critical examination of an existing formative feedback process and, in the light of that, a review of the range and balance of feedback opportunities offered to our students and of satisfaction with them.

References

  • Brown, E. & Glover, C. (2019). ‘Evaluating written feedback’, in Bryan, C. & Clegg, K. (eds), Innovative Assessment in Higher Education: A Handbook for Academic Practitioners, London and New York: Routledge, pp. 77-87.
  • Gibbs, G. (2019). ‘How assessment frames student learning’, in Bryan, C. & Clegg, K. (eds), Innovative Assessment in Higher Education: A Handbook for Academic Practitioners, London and New York: Routledge, pp. 22-35.
  • Juwah, C., Macfarlane-Dick, D., Matthew, B., Nicol, D., Ross, D. & Smith, B. (2004). Enhancing student learning through effective formative feedback, The Higher Education Academy. Available at: https://www.heacademy.ac.uk/sites/default/files/resources/id353_senlef_guide.pdf [Accessed: 31 January 2021]
  • King, H. (2019). ‘Stepping back to move forward: the wider context of assessment in higher education’, in Bryan, C. & Clegg, K. (eds), Innovative Assessment in Higher Education: A Handbook for Academic Practitioners, London and New York: Routledge, pp. 9-21.
  • Nicol, D. (2010). 'From monologue to dialogue: improving written feedback processes in mass higher education', Assessment & Evaluation in Higher Education, 35:5, 501-517, DOI: 10.1080/02602931003786559
  • Van der Kleij, F. & Lipnevich, A. (2021). ‘Student perceptions of assessment feedback: a critical scoping review and call for research’, Educational Assessment, Evaluation and Accountability, 33: 345–373. Available at: https://doi.org/10.1007/s11092-020-09331-x [Accessed: 31 January 2021]

Paper submission type

Individual

Themes

Curriculum, Assessment and Pedagogy