#LTHEChat – 2nd Feb 2022 – Supporting and humanising behavioural change without the behaviourism: nudges and digital footprints

Join at 20.00h on 2/2/22 (so many 2s!) via https://twitter.com/LTHEchat

Hosted by Ameena Payne, an early career researcher, educator and incoming doctoral student; Sophie Kennedy, a disabled undergraduate psychology student; and me, Martin Compton. Our collaborative tweet chat aims to explore how behavioural change in online higher education can be supported without behaviouristic approaches. Specifically, we will discuss how nudges and digital footprints may be deployed effectively to empower marginalised students – and the potential pitfalls of such data-driven pedagogy.

When students engage in online learning, they leave behind digital footprints: artefacts that trace their activities, such as contributions, page views and communications. Digital learning management systems (LMSs) generate data from these footprints that can provide insight into student progress and engagement as they relate to student success. These data are called learning analytics (LAs). LAs encompass the broad data mining, collection, analysis and sharing/reporting of students’ digital footprints. LAs are shaping the role of online instruction and student self-regulated learning by promoting ‘actionable intelligence’ (Bayne et al., 2020, p. 71), allowing instructors to orient students and empowering students to orient themselves.

The growing adoption of, and interest in, LAs has supported a strategic commitment to transparency regarding key drivers of improved student engagement, retention and success. At the same time, concerns are increasingly voiced about the extent to which students are informed, supported (or hindered), and tracked and surveilled as they engage online. It is important to acknowledge that drawing pedagogical conclusions from delimited dimensions creates a context for stereotyping and discrimination, and that profiling can hinder students’ potential and hurt their self-efficacy.

Nudge theory, coined by behavioural economist Richard Thaler, connects persuasion with design principles (Thaler, 2015). A nudge is an approach that focuses not on punishment and reward (behaviourism) but on encouraging positive choices and decisions – understanding the context is fundamental.

We’d like to share a few assumptions as we engage in this discussion:

  • Academic staff have a responsibility to support our increasingly diverse body of students and need to be open to new tools and techniques, such as data generated by our students’ digital footprints and the opportunities offered by behavioural psychology.
  • Achievement differentials and attainment gaps exist for marginalised students. Disabled students, or students with executive dysfunction, may struggle with skills vital to independent study and content learning, such as initiation, planning and organisation. For disabled students, one product of being under-served by higher education institutions (HEIs) is that they often demonstrate lower levels of engagement, which leads to disproportionately lower completion rates and, subsequently, poorer employment rates and other outcomes.
  • Behaviouristic approaches (rewards and sanctions) are at the heart of much of what we still do in education but there have been movements and trends challenging manifestations of this – from banning of corporal punishment in schools to rapid growth in interest in ungrading. 
  • LMS data are not indicators of students’ potential and merit. LAs are not impartial; they are creations of human design. By giving a voice to the data, we’re defining their meaning through our interpretations.

It is valuable to build in periodic or persistent nudges of and toward ‘both the goal and its value’ to empower all students to sustain their efforts (CAST, 2018). We advocate implementing nudges as something useful for everyone using an LMS, rather than as a tool aimed directly at disabled students, who may feel singled out. We hold that nudging is less an evolution of behaviourism and more a challenge to its ubiquity and to the common assumptions about its effectiveness. We propose the employment of empathy, human connection and understanding (in contrast with the carrot-and-stick approaches of education) to help effect small changes through supportive nudges. Nudging, prompted by LAs, is one way to approach improving achievement, narrowing gaps and offering connection and support for all students.

Q1 – If nudging students is less about coercive practices (punishments and rewards) and more about ‘soft’, small-step connections towards positive change, what examples can you offer from your practice?

Q2 – What role do/could learning analytics (LAs) play in shaping our in-course interactions with students, particularly those from marginalised groups?

Q3 – LAs risk profiling students and driving inequality. How might we address the weaknesses of LAs (such as the cognitive biases we may bring to their interpretation and/or some students being advantaged by extra guidance)?

Q4 – What role might nudging and/or LAs play in personalised/adaptive learning?

Q5 – Regarding the complex issues in the nexus of student agency & subjectivity, privacy, consent, & vulnerability, how might we differentiate between LAs & surveillance in online HE?

Q6 – Can nudges assist students in overcoming ‘learned helplessness’ especially when breaking through cycles of negative thoughts and self-blame? If so, how might nudges support students in taking control of their educational experiences?

Further reading:

Bayne, S., Evans, P., Ewins, R., Knox, J., Lamb, J., Mcleod, H., et al. (2020). The Manifesto for Teaching Online. Cambridge, MA: MIT Press.

CAST (2018). Universal Design for Learning Guidelines version 2.2. http://udlguidelines.cast.org

Commissioner for Fair Access. (2019). Disabled students at university: discussion paper. Scottish Government. Available at: https://www.gov.scot/publications/commissioner-fair-access-discussion-paper-disabled-students-university/

Gašević, D., Dawson, S., & Siemens, G. (2015). Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), 64-71. https://link.springer.com/content/pdf/10.1007/s11528-014-0822-x.pdf 

Lim, L. A., Gentili, S., Pardo, A., Kovanović, V., Whitelock-Wainwright, A., Gašević, D., & Dawson, S. (2021). What changes, and for whom? A study of the impact of learning analytics-based process feedback in a large course. Learning and Instruction, 72, 101202.

Payne, A. L., Compton, M. & Kennedy, S. (In Progress). ‘Supporting and humanising behavioural change without the behaviorism: nudges and digital footprints.’ Human Data Interaction, Disadvantage and Skills in the Community: Enabling Cross-Sector Environments For Postdigital Inclusion. Springer.

Prinsloo, P. (2016). “Decolonising the Collection, Analyses and Use of Student Data: A Tentative Exploration/Proposal.” Open Distance Teaching and Learning (blog). https://opendistanceteachingandlearning.wordpress.com/2016/11/14/decolonising-the-collection-analyses-and-use-of-student-data-a-tentative-explorationproposal/.

Prinsloo, P., & Slade, S.(2015). Student privacy self-management: implications for learning analytics. In Proceedings of the Fifth International Conference on Learning Analytics And Knowledge (LAK ’15). Association for Computing Machinery, New York, NY, USA, 83–92. https://doi.org/10.1145/2723576.2723585

Prinsloo, P., & Slade, S. (2016). Student Vulnerability, Agency and Learning Analytics: An Exploration. Journal of Learning Analytics, 3(1), 159–182. https://doi.org/10.18608/jla.2016.31.10

Roberts, L. D., Howell, J. A., Seaman, K., & Gibson, D. C. (2016). Student Attitudes toward Learning Analytics in Higher Education: “The Fitbit Version of the Learning World”. Frontiers in psychology, 7, 1959. https://doi.org/10.3389/fpsyg.2016.01959

Thaler, R. (2015). The Power of Nudges, for Good and Bad. The New York Times. Available at: https://faculty.chicagobooth.edu/-/media/faculty/richard-thaler/assets/files/goodandbad.pdf

Weijers, R. J., de Koning, B. B., & Paas, F. (2021). Nudging in education: from theory towards guidelines for successful implementation. European Journal of Psychology of Education, 36, 883–902. https://doi.org/10.1007/s10212-020-00495-0

‘Why might students not always act with academic integrity?’ We tried asking them

Guest post from Dr Alex Standen (UCL Arena). I am grateful to my colleague Alex for the post below. I can, I know, be a little bit ‘all guns blazing’ when it comes to issues of plagiarism and academic integrity, because I feel that too often we start from a position of distrust and with the expectation of underhandedness. I therefore tend to neglect, or struggle to deal even-handedly with, situations where such things as widespread collusion have clearly occurred. It is, I accept, perhaps a little too easy to just shout ‘poor assessment design!’ without considering the depth of the issues and the cultures that buttress them. This post builds on an intervention developed and overseen by Alex, in which students were asked to select the most likely cause of students doing something that might be seen as academically dishonest. The conclusions, linked research and implications for practice are relevant to anyone teaching and assessing in HE.

———————-

In 2021, UCL built on lessons learnt from the emergency pivot online in 2020 and decided to deliver all exams and assessments online. It’s led to staff from across the university engaging with digital assessment and being prompted to reflect on their assessment design in ways we haven’t seen them do before.

However, one less welcome consequence was an increase in reported cases of academic misconduct. We weren’t alone here – a paper in the International Journal for Educational Integrity looks at how file-sharing sites which claim to offer ‘homework help’ were used for assessment and exam help during online and digital assessments. And it’s easy to see why: as teaching staff and educational developers, we try to encourage group and collaborative learning, we expect students to be digitally savvy, and we design open-book exams with an extended window to complete them – all of which serve to make the lines around plagiarism and other forms of misconduct a little more blurred.

We worked on a number of responses: colleagues in Registry focused on making the regulations clearer and more in line with the changes brought about by the shift to digital assessment, and in the Arena Centre (UCL’s central academic development unit) we supported the development of a new online course to help students better understand academic integrity and academic misconduct.

The brief we were given was for it to be concise and unequivocal, yet supportive and solutions-focused. Students needed to be able to understand what the various forms of academic misconduct are and what the consequences of cases can be, but also be given support and guidance in how to avoid it in their own work and assessments.

Since September 2021, the course has been accessed by over 1,000 registered users, with 893 students awarded a certificate of completion. It’s too early, of course, to understand what impact – if any – it will have on instances of academic misconduct. What it can already help us think about, however, is our students’ perspectives on academic misconduct and how, in turn, we can better support them to avoid it in future.

The course opens with a Menti quiz asking participants for their opinions on academic misconduct, posing the question: Why might students not always act with academic integrity? Here are the results (absolute numbers):

[Bar chart of poll results: reasons students gave for not acting with academic integrity included confusion, time pressure, the desire to get the best grade, not knowing how to do things, and anxiety.]

What they are telling us is that it is less about a lack of preparation or feelings of stress and anxiety on their part, and more about a lack of understanding of how to integrate (academic) sources, how to manage their workload and what academic misconduct can even entail.

Our students’ responses are in line with research findings: studies have found a significant degree of confusion and uncertainty regarding the specific nature of plagiarism (Gullifer and Tyson, 2010), as well as situational and contextual factors such as weaknesses in writing, research and academic skills (Guraya and Guraya, 2017) and time management skills (Ayton et al., 2021).

All of which gives us several things to work on. The first is looking at how we plan our assessments over the course of a year so that students aren’t impeded by competing deadlines and unnecessary time pressures. The second is devoting more time to working with students on the development of academic skills – and it is key that this isn’t exclusively an extra-curricular opportunity. Bringing this into the curriculum will ensure that it is accessible to all students, not just those with the time and personal resources to seek it out. Finally, as we move to more digital assessments, it is about really reflecting on their design to ensure they are fit for this new purpose – and perhaps the first question we should all be asking ourselves is: do we really need an exam?

How effective are your questions?

[listen (10 mins) or read below]

Questions you ask students are at the heart of teaching and assessment, but where and how you ask them, the types of questions you ask and the ways you ask them can sometimes be neglected. This is especially true of the informal (often unplanned) questions you might ask in a live session (whether in-person or online), where a little additional forethought about your rationale, your approach and the actual questions themselves could be very rewarding. I was prompted to update this post when reviewing some ‘hot questions’ from new colleagues about to embark on lecturing roles for the first time. They expressed the very common fear of ‘tumbleweed’ moments when asking a question, concerns about nerves showing, and worries about a sea of blank screens and how to check that students are understanding, especially when teaching online. What I offer below is written with these colleagues in mind and is designed to be practically oriented:

What is your purpose? It sounds obvious, but knowing why you are asking a question, and considering some of the possible reasons, can be one way to overcome some of the anxieties many of us have when thinking about teaching. Thinking about why you are asking questions, and what happens when you do, can also be a useful self-analysis tool. Questions aren’t always about working out what students know already or have learned in-session. They can be a way of teaching (see the Socratic method overview here; this article also has some useful and interesting comments on handling responses to questions), a way of provoking, a way of changing the dynamic and even of managing behaviour. In terms of student understanding: are you diagnosing (i.e. seeing what they know already), encouraging speculation, seeking exemplification or checking comprehension? Very often it is what we are teaching – the pure concepts – that is neglected in questioning. How do we know students are understanding? For a nice worked example, see this example of concept checking.

The worst but most common question (in my view). Before I go any further, I’d like to suggest that there is one question (or question type) that should, for the most part, be avoided. What do you think that question might be? It is a question that will almost always lead to a room full of people nodding away or replying in other positive ways. It makes us feel good about ourselves because of the positive response we usually get, but it can actually be harmful: when we ask it, there are all sorts of reasons why any given student might not give a genuine response. Instead of replying honestly, they see others nodding and join in, not wanting to lose face, appear daft or go against the flow. But how many of those nodding are doing the same? How does it feel when everyone else appears to understand something and you don’t? Do you know what the question is yet? See the foot of this post to check** (then argue with me in the comments if you disagree).

Start with low stakes questions. Ask questions that invite an opinion or perspective, ask students to make a choice, or even ask something not related to the topic. Get students to respond in different ways (a quick show of hands, an emoji in the chat if teaching online, a thumbs up/thumbs down to an e-poll or a controversial quotation – Mentimeter does this well). All these interactions build confidence and ease students into ‘ways of being’ in any live taught session. Anything that challenges assumptions they may have that teaching ‘should’ be uni-directional, and that helps avoid disengagement, is likely to foster a safe environment in which exchange, dialogue, discussion and the questions at the heart of those things are comfortably accepted. Caveat: a student sitting at the back and not contributing will almost certainly have reasons that are NOT to do with indolence or distraction. A student looking at their phone may be anxious about their comprehension and be using a translator, for example. They are there! This is key. Be compassionate and don’t force it. Build slowly.

Plan your questions. Another obvious point, but actually wording questions in advance of a session makes a huge difference. You can plan for questions beyond the opinion and fact-checking types (the easiest to come up with on the fly). Perhaps use something like the Conversational Framework or Bloom’s Taxonomy to write questions for different purposes or of different types. Think about the verbal questions you asked in your last teaching session. How many presented a real challenge? How many required analysis, synthesis or evaluation? Contrast that with the number that required (or could only have) a single correct response. The latter are much easier to come up with, so, naturally, we ask more of them. If framing higher-order questions is tough on the spot, jot a few down ahead of the lecture or seminar. If you use a tool like Mentimeter to design and structure slide content, it has many built-in features that encourage you to think about questions and enable anonymous contributions from students.

The big question. A session, or even a topic, could be driven by a single question. Enquiry- and Problem-Based Learning (EBL/PBL) exploit well-designed problems or questions that students are required to resolve. These cannot, of course, be ‘Google-able’ set-response questions; they require research, evidence gathering, rationalisation and so on. This reflects core components of constructivist learning theory.

The question is your answer. Challenging students to come up with questions based on current areas of study can be a very effective way of gauging the depth to which they have engaged with a topic. What they select, and what they avoid, often gives insight into where they are most and least comfortable.

Wait time. Did you know that the average time lapse between a question being asked and a student response is typically one second? In effect, the sharpest students (the ‘usual suspects’, as you might see them) get in quick. The lack of even momentary additional processing time means that a significant proportion (perhaps the majority) have not had time to mentally articulate a response. Mental articulation goes some way to reducing cognitive overload, so even where people don’t get a chance to respond, the thinking time still helps (formatively). There are other benefits to building in wait time too. This finding by Rowe (1974)* is long ago enough for us to have done something about it. It’s easy to see why we may not have done, though… I ask a question; I get a satisfyingly quick and correct response; I can move on. But instilling a culture of ‘wait time’ can have a profound effect on the progress of the whole group. Such a strategy will often need to be accompanied by…

Targeting. One of the things we often notice when observing colleagues ‘in action’ is that questions are very often thrown out to the whole group. The result is either a response from the lightning-fast usual suspect or, with easier questions, a sort of choral chant. These sorts of questions have their place. They signify the important. They can demarcate one section from another. But are they a genuine measure of comprehension? And what are the consequences of allowing some (or many) students never to have to answer if they don’t want to? Many lecturers will baulk at the thought of targeting individuals by name – and this is something I’d counsel against until you have a good working relationship with a group of students – but why not target by section, by row or by table? “Someone from the back row, tell me…” By doing this you can move away from ‘the usual suspects’ and change your focus – one thing we can inadvertently do is concentrate eye contact, attention and pace on students who are willing and eager to respond, thereby further disconnecting those who are less confident, comfortable or inclined to ‘be’ the same.

Tumbleweed. The worry of asking a question and getting nothing in response can be one of those things that leads to uni-directional teaching. A bad experience early on can dissuade us from asking further questions, and then the whole thing develops its own momentum and only gets worse. Low stakes questions, embedding wait time and building a community comfortable with (at least minimal) targeting are ways to pre-empt this. My own advice is that the numbers are with you if you can hold your nerve and a relaxed smile. Ask a question, look at the students and wait. Thirty seconds is nothing but feels like an eternity in such a situation. However, there are many more of them than you, and one of them will break eventually! Resist re-framing or repeating the question too soon, but be prepared to ask a lower stakes version and build from there. More advice is available in this easy-access article.

Technology as a question, not the answer. Though they may seem gimmicky (and you have to be careful not to subvert your pedagogy for colour and excitement), there are a number of in-session or pre-session tools that can be used. Tools like Mentimeter, Poll Everywhere, Socrative, Slido and Kahoot all enable different sorts of questions to be asked and answered, as does the ‘Hot Questions’ function in Moodle that prompted me to re-post this.

Putting thought into your questions – the reason you are asking them and how you will manage contributions (or the lack thereof) – is something we might all do a little more of, especially when tasked with teaching new topics, new groups or new modalities.

*Rowe, M. B. (1974). Wait‐time and rewards as instructional variables, their influence on language, logic, and fate control: Part one – wait‐time. Journal of Research in Science Teaching, 11(2), 81-94. (Though this original study was of elementary teaching situations, the principles are applicable to HE settings.)

**Worst question? ‘Does everyone understand?’ or some such variant, such as nodding and smiling at your students whilst asking ‘All ok?’ or ‘Got that?’. Instead, ask a question that is focussed on a specific point. Additionally, you might want to routinely invite students to jot their most troubling point on a post-it, or have an open forum in Moodle (or an equivalent space) for areas that need clarifying.

[This is an update- actually more of a significant re-working- of my own post,  previously shared here: https://blogs.gre.ac.uk/learning-teaching/2016/11/07/thinking-about-questions/]