Thoughts from Wollongong (measuring the unmeasurable?) Part 1

I’m with Jim (Donohue) at the University of Wollongong, Australia, where we’re working with Emily (Purser) on understanding and sharing practices and ideas around embedding communication in the disciplines, prior to presenting our work at a symposium at the end of the week. Our discussions have ranged over themes and ideas from each of our contexts.

We’ve been thinking about the ways universities are organised so that groups of people who you might imagine had common concerns and interests are fragmented and dispersed across an institution. Emily’s colleague, Alisa Percy, articulated this situation in her 2014 article ‘Re-integrating academic development and academic language and learning: a call to reason’. Specifically, language teaching staff, learning developers and staff developers all have a valuable part to play in educating students in the processes of learning and in developing their ability to communicate what they’ve learned to a range of audiences. In an ideal world, there would also be some mechanism for including academic staff in these collaborations. The reality in most places, though, is a model of separation and fragmentation between language teachers, learning developers and academics, so the possibilities of joined-up collaboration are few and far between. Aside from the inherent complexity of large institutions, there’s no good reason why this should be the case.

Here in Australia, or at least in Wollongong, it seems that the need to show evidence of the impact of your work has been around longer than in the UK and is more established, especially in the field of learning development. The UK is similar, but its benchmarking-driven audit culture in education is potentially about to intensify with the introduction of the TEF. Bracketing the strange focus on employability and the NSS as indicators of teaching quality, how do we show evidence of our work and its value and impact in meaningful ways?

So, if you want to demonstrate that students on your degree programmes have had a good education in their discipline, which includes not just building disciplinary expertise but also an understanding of its possible professional dimensions and the ability to communicate (in written and spoken forms, for example), is it enough just to look at their grades? Can you also see how students have progressed over each year of the degree? We’re thinking particularly about their language and communication development, and about ways of measuring this accurately that could be used across institutions, so that some kind of comparison is possible. Importantly, though, the measurement shouldn’t pull out language and communication as if they were divorced from disciplinary knowledge construction, as this tends to devolve into reductive and simplistic analyses of communication.

Fortunately the debate has moved beyond just concentrating on the ‘language proficiency’ of international students, even though this phrase is still in use. One thing that has been tried in Australia (I’m not sure about the UK) is the PELA, or post-enrolment language assessment, in which all students do a language test as soon as they are enrolled on a degree. Those who fail to make the grade are then, in some cases, referred to Learning Development or similar support services. At first glance this sounds like a catastrophic remedialisation of all students before they’ve even had their first lecture. Despite my deep suspicions of this sweeping diagnostic approach, there are examples, such as the MASUS framework (which draws on Systemic Functional Linguistics), that try to formulate a more elaborate rubric and have been used in some cases as part of these PELAs. Developed and used in Sydney, the MASUS framework is an attempt to provide a detailed picture of the disciplinary features of language. It uses discipline-based assessments that are designed by language and discipline staff together and are given as an additional task to assess students’ level of language and communicative ability.

In the Australian context, this was a response to research and a significant evidence trail showing that students were failing to compete for graduate-level jobs because of poor language levels. There was widespread concern that, while after three years you’d expect all students to have good levels of language and communication, the reality was different. I remain sceptical of the approach. Using add-on assessments isn’t particularly helpful, as it again separates out language and communication, particularly if it’s primarily language teaching staff who do the assessments. There are also questions about whether you would be able to resource this approach appropriately and provide sufficient guidance once you’d identified the students who need extra help.

We heard about a good example from Susan Hoadley, who talked about how they’d developed their PELA at Macquarie. The approach was used with 2,000 students in the business school and involved giving students an early assessment in their first semester and then good feedback on the task. The difference here was that the work was initiated by someone in the discipline, as Susan was a business lecturer. More importantly, they used it as a way of building and sharing expertise among staff. Part of their approach involved training all staff in the assessment of the task students had written: the discipline staff did the marking and the learning developers helped run the moderation meetings. I’m not sure about the outcomes, but this sounds like an example of good practice, where you use an early assessment of writing and then give students feedback so that they can see their levels at the beginning of their studies.


(to be continued …)

(written by Julian Ingle)