Week 8 reflection

This week, we learnt about mixed-methods research, which employs both quantitative and qualitative methods in one study.
My research topic, which investigates the use of educational games in mathematics and examines their effectiveness, lends itself well to a mixed-methods design.

For instance, students could take a pre-test before and a post-test after playing the game in their mathematics class. This approach is common in educational research when measuring the impact of an intervention on student performance. After collecting the data, we can compare the results to see if there is any significant change in student performance. As we discussed in the last few weeks, this could be done through statistical techniques to generalise the findings and determine whether the change was statistically significant. We should probably conduct a paired-samples t-test, since the research follows a within-subjects design in which all participants go through the same treatment. I assume that the data will be normally distributed, but non-parametric tests (such as the Wilcoxon signed-rank test) could be used if they are not. Through these tests, we can determine whether our findings could plausibly have arisen by chance or whether they generalise beyond the sample. Then, either Cohen’s d or Rosenthal’s r could be calculated to determine the effect size of the game playing. While the significance test deals with the generalisability of our findings, the effect size focuses on the magnitude of the intervention’s impact on students’ learning performance.
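To make this concrete, here is a minimal sketch of that analysis pipeline in Python. All scores are hypothetical, invented purely for illustration; the normality check, the choice between the paired t-test and its non-parametric fallback, and the paired-samples Cohen’s d follow the plan described above.

```python
# Minimal sketch of the planned pre/post analysis (hypothetical scores).
import numpy as np
from scipy import stats

pre = np.array([52, 61, 48, 70, 55, 63, 58, 67, 49, 60])   # hypothetical pre-test scores
post = np.array([58, 66, 55, 74, 59, 70, 61, 72, 54, 65])  # hypothetical post-test scores
diff = post - pre

# Check the normality assumption on the paired differences.
_, p_normal = stats.shapiro(diff)

if p_normal > 0.05:
    # Differences look normal: paired-samples t-test.
    stat, p_value = stats.ttest_rel(post, pre)
else:
    # Otherwise fall back to the non-parametric Wilcoxon signed-rank test.
    stat, p_value = stats.wilcoxon(post, pre)

# Cohen's d for paired samples: mean difference / SD of the differences.
cohens_d = diff.mean() / diff.std(ddof=1)
print(f"p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```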

Yet, the priority would be on the qualitative method, because I believe the research question focuses more on individual experiences of learning mathematics through educational games. Admittedly, how students perceive playing games (e.g., their enjoyment) could be elicited through quantitative methods such as surveys. However, with only a few text-based questions, the responses are unlikely to give as in-depth an understanding of the student experience as approaches such as interviews or focus groups.

Logically, the quantitative method would be conducted first and the qualitative later. The students would take the pre- and post-tests to measure their performance in mathematics, and a few students would then be selected to participate in the qualitative study.

Week 7 reflection

Continuing from last week, the class discussed different statistical inference techniques that researchers can use in quantitative research.

A statistic makes a statement about a population based on data observed from a sample of that population. Since researchers cannot experiment on the whole population, they collect data from a smaller proportion of it and attempt to generalise the findings back to the whole. This is statistical inference, the process of drawing conclusions from datasets. To do this well, it is crucial to understand inferential statistics, which can indicate the generalisability of observed datasets (Bryman et al., 2021).
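As a toy illustration of this sample-to-population reasoning, the sketch below simulates a population, draws a small sample from it, and infers a confidence interval for the population mean; all numbers are simulated, purely for illustration.

```python
# Toy illustration of statistical inference: estimating a population mean
# from a sample (simulated data, purely for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
population = rng.normal(loc=65, scale=10, size=100_000)  # the "whole population"
sample = rng.choice(population, size=50, replace=False)  # what we actually observe

# Infer a 95% confidence interval for the population mean from the sample.
low, high = stats.t.interval(0.95, df=len(sample) - 1,
                             loc=sample.mean(), scale=stats.sem(sample))
print(f"Sample mean = {sample.mean():.1f}, 95% CI = ({low:.1f}, {high:.1f})")
print(f"True population mean = {population.mean():.1f}")
```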

In our class, we then evaluated a paper by Herodotou (2018). We looked into the details of her research methods and critically analysed her statistical inference.

For one of her hypotheses, the author sought to confirm that there is a significant difference between children aged 4 and 5 in their knowledge of projectile motion, as measured by gaming performance after playing the game. The researcher measured children’s game scores on Day 2 and Day 7 to compare the results. To confirm her hypothesis, the author selected a paired-samples t-test, which tells us how significant the observed difference is. We typically compare the p-value against a significance level of 0.05. If the p-value is greater than 0.05, we cannot conclude statistical significance, because there is too high a probability of obtaining the result by chance (Diamond and Jefferies, 2001). In the study, the author reported p = 0.004 for children aged 5 but no significant result for the 4-year-olds, and concluded that her hypothesis was confirmed.

Yet, there are several problems with this conclusion. Before, and even at the beginning of, the class, I was not sure about the difference between within-subjects and between-subjects designs. Even though our lecturer defined the terms, it was helpful to go over an activity together to confirm my understanding. The difference between the two designs is that a within-subjects design provides the same conditions to all participants, while a between-subjects design allocates different conditions to different groups to check their effect. For the study we analysed, I initially thought it was a within-subjects design, since both groups went through the same treatment (i.e., playing the game). However, the lecturer told us that it was a between-subjects experiment, since the researcher separated the groups by age. To be sure, I checked the definition again: Field (2009) explains that in a within-subjects design the independent variable does not differ between participants, but since age was an independent variable here, the experiment was between-subjects.

Consequently, the decision to conduct a paired t-test was inadequate, because such a test can only compare the same person’s scores across pre- and post-conditions (Field, 2009). Moreover, her statistical inference was insufficient anyway, because the p-value cannot by itself confirm her hypothesis. A p-value below 0.05 only indicates that there is sufficient evidence to believe the finding was not observed by chance. She could therefore have concluded that 5-year-old children’s game performance, and by extension their understanding of projectile motion in action, likely increased. However, she did not directly compare the two age groups, so the hypothesis cannot be confirmed.

Finally, the effect size was missing. Coe (2002) says that effect size quantifies the size of the difference between two groups. Thus, to confirm the hypothesis, the author should first have considered the distribution of the data and the design of the experiment in order to choose the right statistical methods. Furthermore, she should have reported the effect size to quantify the difference between the groups.
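As a sketch of the between-subjects comparison this critique calls for, the example below runs an independent-samples t-test and computes Cohen’s d on hypothetical score gains for the two age groups; the numbers are invented purely to illustrate the procedure.

```python
# Sketch of the between-subjects analysis the study needed
# (hypothetical Day 7 - Day 2 score gains for each age group).
import numpy as np
from scipy import stats

gains_age4 = np.array([2, 5, 1, 4, 3, 2, 6, 1])
gains_age5 = np.array([7, 9, 5, 8, 6, 10, 7, 9])

# Independent-samples t-test: do the two age groups differ?
t_stat, p_value = stats.ttest_ind(gains_age5, gains_age4)

# Cohen's d for two groups: mean difference / pooled standard deviation.
n1, n2 = len(gains_age5), len(gains_age4)
pooled_sd = np.sqrt(((n1 - 1) * gains_age5.var(ddof=1) +
                     (n2 - 1) * gains_age4.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (gains_age5.mean() - gains_age4.mean()) / pooled_sd
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```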

References

Bryman, A., Clark, T., Foster, L. and Sloan, L. (2021). Bryman’s Social Research Methods. 6th ed. Oxford: Oxford University Press.

Coe, R. (2002). It’s the effect size, stupid: What effect size is and why it is important. Paper presented at the Annual Conference of the British Educational Research Association, University of Exeter.

Field, A. (2009). Discovering Statistics Using SPSS. London: SAGE Publications Ltd.

Herodotou, C. (2018). Mobile games and science learning: A comparative study of 4 and 5 years old playing the game Angry Birds. British Journal of Educational Technology, 49(1), pp. 6-16.


Week 6 reflection

This week was about credibility criteria for quantitative research. In previous weeks, we studied criteria for qualitative research, so it was interesting to compare the two approaches. Transferability in qualitative research corresponds to generalisability, or external validity, in the quantitative approach. Indeed, these names are often used interchangeably, and different scholars may prefer different terms. However, it is crucial to note that there are subtle differences in how the criterion is assessed.

I believe that external validity is quite similar in both approaches, in that the criterion is concerned with the degree to which the findings of a study can be applied to a wider context. Therefore, it focuses on the sampling and the setting of the study. For instance, in this week’s activity, we critically analysed the credibility of research by Herodotou (2018), which investigated how young children gained knowledge of projectile motion through playing mobile games. To assess external validity, Guba and Lincoln (1982) suggest that the research should provide an in-depth description of its context or use purposive and theoretical sampling. In our case, the researcher explained the characteristics of young children and the reason behind their selection. The author used various references to build her hypothesis that there is a “significant difference between 4 and 5 years old in their understanding of projectile motion after playing the game” (Herodotou, 2018). Thus, the findings of this research are more likely to apply to 4- to 5-year-old children than to other age groups.

Thus, in my research, it would be crucial to decide on the specific sample group first. The specific group would depend on the level of mathematics in the game. Ideally, the sample would be drawn from primary school students, since there is little conclusive evidence that game-based learning has a significant impact on young children’s mathematical knowledge (Bragg, 2003; Peters, 1998; Mitchell and Smith, 2001). Besides, younger students have more flexibility regarding the curriculum because they are not yet bound by national exams such as GCSEs. Therefore, the sample group would be students who have just begun primary school.

The literature suggests that focus groups with children can elicit their original ideas and insights, which are often neglected (Adler, Salanterä and Zumstein-Shaha, 2019). Of course, there are issues with such participants, especially with qualitative methods like focus groups. Since we are dealing with young children, the data can be limited by participants’ low literacy levels (Kennedy, Kools and Krueger, 2001). We cannot expect them to express their emotions and thoughts as fluently as older people, and the researcher might need to take further measures such as triangulation to ensure validity. Besides, as one of my classmates discussed, there is a potential impact of classroom dynamics, since the participants are classmates. Daley (2013) confirms this idea and suggests that focus groups involving students might be limited by unequal contribution, social pressure and group consensus.

References

Adler, K., Salanterä, S. and Zumstein-Shaha, M. (2019) ‘Focus Group Interviews in Child, Youth, and Parent Research: An Integrative Literature Review’, International Journal of Qualitative Methods. doi: 10.1177/1609406919887274.

Cutler, K. M., Gilkerson, D., Parrott, S. and Bowne, M. T. (2003). Developing math games based on children’s literature. Young Children, 58(1), pp. 22-27.

Daley, A. M. (2013). Adolescent-friendly remedies for the challenges of focus group research. Western Journal of Nursing Research, 35(8), pp. 1043-1059. https://doi.org/10.1177/0193945913483881

Guba, E. G., & Lincoln, Y. S. (1982). Epistemological and methodological bases of naturalistic inquiry. Educational Communication & Technology, 30(4), 233–252.

Kennedy, C., Kools, S. and Krueger, R. (2001). Methodological considerations in children’s focus groups. Nursing Research, 50(3), pp. 184-187. https://doi.org/10.1097/00006199-200105000-00010

Mitchell, A. and Savill-Smith, C. (2004). The Use of Computer and Video Games for Learning: A Review of the Literature. London: Learning and Skills Development Agency.

Yusoff, Z., Kamsin, A., Shamshirband, S. et al. (2018). A survey of educational games as interaction design tools for affective learning: Thematic analysis taxonomy. Education and Information Technologies, 23, pp. 393-418. https://doi.org/10.1007/s10639-017-9610-5

Week 5 reflection

This week, we focused on qualitative research, especially using video recordings and thematic analysis. This topic was interesting because I did some thematic analysis in my undergraduate dissertation, but I had not done any video recording analysis before.

Thus, the first activity, taking a deductive approach to a video recording of children working as a group to complete a given task, was intriguing. A deductive approach is one in which you develop ideas and hypotheses based on existing theories and then refer to the evidence to test those hypotheses (Clark et al., 2021). Hence, we used the framework developed by Cukurova et al. (2016), which combines competencies for collaboration and problem-solving. It is important to note that in the original paper, the framework was built up from observing participants’ ‘fine-grained actions’. Thus, it was interesting that, although the original framework took an inductive approach in which theories emerge from the observed phenomena, it could be used deductively by others. This activity therefore reminded me of transferability, one of the rigour criteria for qualitative research. Last week, I mainly focused on the findings of qualitative research when considering its transferability, yet the activity illustrated that methods can also be transferable. Although the context of the original study and our video were subtly different, the framework was certainly applicable, since the focus of our activity was to characterise collaborative problem-solving.

During the analysis, I encountered several problems. For instance, the audio quality was poor due to background noise, which made it hard for me to understand the exact conversations. These issues had not arisen in my previous research, which used audio recordings and video interviews. Besides, since the recording attempted to capture the participants’ natural behaviours, they were free to move around while the camera stayed in the same position. Consequently, some of the participants were often outside of, or far from, the camera angle. With the video interviews I had used before, the audio was clearer and the participants’ faces were visible throughout. Thus, it was interesting to notice the unique challenges of video recording analysis.

Of course, there were also many benefits to video recording analysis. After the analysis, I looked at the forum posts to compare others’ coding schemes, and there were both similarities and differences between them. This cross-checking with other people was a great way to enhance the credibility of the analysis, since video is easy to share with others and can be revisited repeatedly (Derry et al., 2010).

In the classroom, we attempted thematic analysis with an inductive approach. We were given a transcript of an interview conducted by other researchers. First, I familiarised myself with the data to gain in-depth insight into the content (Braun and Clarke, 2006). The interview was about evaluating a professional development programme using software called FractionsLab. After understanding the content, we generated initial codes from the data; this process involved identifying the features most useful for answering the research question. Then we tried to group similar codes and develop overarching themes. After the activity, a thematic map, a process Braun and Clarke (2006) suggest, was presented, and it was really helpful to have a visual example.
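To illustrate the mechanics of moving from initial codes to overarching themes, here is a minimal sketch using entirely hypothetical codes and extracts; the grouping itself is of course an analyst’s interpretive judgement, not something code can automate.

```python
# Minimal sketch of grouping initial codes into candidate themes
# (hypothetical codes and extracts from an interview transcript).
from collections import defaultdict

# Initial codes assigned to transcript extracts.
coded_extracts = {
    "frustration with interface": ["I kept clicking the wrong button"],
    "confusion about fractions":  ["I didn't get why the bars split"],
    "enjoyment of feedback":      ["the hints made me feel supported"],
    "enjoyment of challenge":     ["it felt good when I finally solved it"],
}

# Candidate grouping of similar codes under overarching themes.
code_to_theme = {
    "frustration with interface": "usability barriers",
    "confusion about fractions":  "usability barriers",
    "enjoyment of feedback":      "positive learning experience",
    "enjoyment of challenge":     "positive learning experience",
}

themes = defaultdict(list)
for code, extracts in coded_extracts.items():
    themes[code_to_theme[code]].extend(extracts)

for theme, extracts in themes.items():
    print(theme, "->", extracts)
```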

Overall, the deductive and inductive methods have different pros and cons. Although I preferred the inductive approach, since it opens up more interpretations of the data, some degree of deduction will be inevitable. For instance, if I attempt to observe students using educational software, such software is often built on established learning theories. Consequently, the selection and analysis of data could be biased towards those theories. These potential challenges should be tackled with cross-checks like the one I did this week.

 

References

Clark, T., Foster, L., Sloan, L. and Bryman, A. (2021). Bryman’s Social Research Methods. 6th ed. Oxford: Oxford University Press.

Cukurova, M., Avramides, K., Spikol, D., Luckin, R. and Mavrikis, M. (2016). An analysis framework for collaborative problem solving in practice-based learning activities: A mixed-method approach. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge (LAK ’16).

Braun, V. and Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), pp. 77-101. doi: 10.1191/1478088706qp063oa


Week 4 reflection

This week’s class was about the criteria that might help us judge the quality of qualitative research. Although I knew that qualitative researchers would have criteria distinct from quantitative ones (i.e., acknowledging potential biases rather than believing in complete objectivity), I had not thoroughly tried to conceptualise them. Hence, the paper by Guba and Lincoln (1982) was intriguing to read. The authors argue for a fundamental difference between the rationalistic and naturalistic paradigms. For instance, random sampling, often used in scientific research, is impractical in a natural setting that consists of the complex social values of stakeholders. Hence, the authors conclude that the judgements and methods used in the rationalistic world are not adequate tools for the naturalistic world. This argument reminded me of the epistemological positions that we discussed in the previous week, and it was clear that the authors were fonder of interpretivism than positivism.

The paper later suggests various standards that we could use to assess the quality of qualitative research. Although there were many other worthwhile criteria, I found neutrality and transferability the most interesting and relevant to my research.

Neutrality is concerned with the influence that the researchers’ values might have on the research findings. In the rationalistic paradigm, one can attain this through intersubjective agreement on quantitative measures. In the naturalistic paradigm, however, the impossibility of complete objectivity is recognised (Guba and Lincoln, 1982). That is, one cannot be value-free but is bound by a social environment that inevitably influences one’s decisions about research approaches. For instance, a researcher’s personal beliefs might influence their choice of relevant theories and their interpretation of the data. In this case, we are more interested in the confirmability of the data than in the researcher’s skill in collecting it.

I feel this is a crucial consideration for my research. Firstly, I am interested in the use of games in learning because I like playing games; it was my own interest that motivated the research question. Admittedly, this could lead to a potential bias in favour of the use of games. Simundić (2013) suggests that researchers might extrapolate from the collected data to support their original hypothesis. Hence, someone in our class suggested that using opposing perspectives to justify the methods and interpretations could be valuable. I might base my argument on a paper that disputes the usefulness of game-based learning and try to explain why I believe its methods or interpretation are inappropriate. From there, I could start developing my line of reasoning for my data collection and analysis.

The second criterion that interested me was transferability. This criterion is concerned with whether the findings of a particular study can be applied to other contexts. In the rationalistic paradigm, the researcher can attain this by generalising through random sampling and arguing that the sample represents the whole population. However, just as individual minds differ, so do contexts, and flawless generalisation would be impractical in qualitative research (Guba and Lincoln, 1982). Therefore, we are more interested in the degree of transferability of the findings and its justification.

The assessment of transferability is important due to the nature of design-based research, as the method often involves creating an educational ecology that may be discontinuous with, and fundamentally different from, the standard curriculum. Most DBR overcomes this issue through an iterative process in which the researchers can discover critical variables and limitations (Amiel and Reeves, 2008).

Data collection methods in DBR vary and may combine more than one method. For my research, I think it would be better to use at least two methods, observation and focus groups, to collect sufficient data. Observation will allow the researcher to capture participants’ actions in a naturalistic setting; the purpose would be to observe how students interact with the games and the instructions provided.

Yet, the issue with observation is that we cannot be sure about participants’ cognitive processes. Although we can record their physical actions, such as the number of clicks they made, we cannot confirm their emotional state during the activity.

To confirm this congruence, triangulation is often considered. Triangulation involves using different methods and theories to cross-check the collected data. For instance, in a focus group, participants can elaborate on their experiences while playing games. Of course, their opinions might be skewed by various factors such as classroom dynamics or the presence of the researcher. Thus, comparing the results from different methods would be crucial for the credibility of the research.

References

Guba, E. G., & Lincoln, Y. S. (1982). Epistemological and methodological bases of naturalistic inquiry. Educational Communication & Technology, 30(4), 233–252.

Simundić, A. M. (2013). Bias in research. Biochemia Medica, 23(1), pp. 12-15. https://doi.org/10.11613/bm.2013.003

Amiel, T. and Reeves, T. C. (2008). Design-Based Research and Educational Technology: Rethinking Technology and the Research Agenda. Journal of Educational Technology & Society, 11(4), pp. 29-40. http://www.jstor.org/stable/jeductechsoci.11.4.29

Week 3 reflection

In this week’s class, we explored different research approaches that are commonly used in research involving educational technology. Although there were many interesting approaches, design-based research seemed the most suitable for my research.

Design-based research (DBR) often focuses on developing theories that explain the successful patterns of learning that emerge in students following changes to an educational setting. Consequently, researchers define the purpose of the study, its theoretical intent and the variables they would like to investigate before the experiment (Cobb et al., 2003). The purpose of my research is to investigate ways to provide an optimal learning experience. When comparing its effectiveness to a conventional learning environment, research on game-based learning often considers factors such as the impact of instruction, entertainment and feedback on learners’ motivation, engagement and academic performance (Erhel and Jamet, 2013). Sandoval and Bell (2004) suggest that the purpose of design-based research is to develop an effective learning environment through the implementation of new tools, and to use this designed environment as a natural laboratory for observing learning and teaching. Therefore, constant reflection on these variables would be necessary after implementing a change in the learning environment.

In DBR, the researcher observes changes in learning patterns through a designed environment. If there are unexpected observations, the researcher can revisit the theoretical knowledge and implement other interventions better suited to the research aim. Therefore, after the evaluation of one cycle, new conjectures might develop and lead to changes in the intervention. Yet this can create an inevitable discontinuity between the standard curriculum and the designed environment: as soon as the researcher changes the learning environment, the students are in a setting quite different from their normal classroom. Consequently, even if the implementation of game-based learning in one class is successful, the researcher should rigorously seek ways to apply it to a wider context. This could involve careful analysis of the elements that contributed to the change in learning, such as individual characteristics. The consequences of a change in the learning environment will vary between countries, schools and individual students. Therefore, it is crucial to reflect on this limitation and make a careful connection to the broader context, such as the national curriculum.

Since the aim of DBR is to develop theoretically driven innovations that can be practically implemented in classrooms, it requires participants who can represent the wider target population. In DBR, qualitative analysis is often used because the researcher wants an in-depth understanding of the learning ecosystem created. Since my research question would focus on the mathematical learning of young children, this would involve criterion sampling that restricts the participants’ age group. Furthermore, convenience sampling is likely to be used, because it would be impractical to collect data without access to schools or learners. During our class, we discussed that simply stating ‘it was the easiest way’ would not justify the sampling decision. Yet, throughout the development of the research, other justifications might emerge, such as a school’s reluctance towards the use of games. Such reasoning could be elaborated in the research and provide more insight into our topic.

References

Cobb, P. et al. (2003) ‘Design Experiments in Educational Research’, Educational Researcher, 32(1), pp. 9–13. doi: 10.3102/0013189X032001009.

Erhel, S. and Jamet, E., 2013. Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, pp.156-167.

Sandoval, W. and Bell, P., 2004. Design-Based Research Methods for Studying Learning in Context: Introduction. Educational Psychologist, 39(4), pp.199-201.

Week 2 reflection

This week, we explored different approaches to research methods. The first distinction was between inductive and deductive approaches. An inductive approach is one where you induce theory from observations and data, whereas a deductive approach tests an existing theory against the evidence (Clark et al., 2021). Both approaches have their value and justification, but for my research I would like to use an inductive rather than a deductive approach. This is because I am interested in suggesting effective ways that games can help student learning. Of course, what counts as ‘effective’ will vary with the context, such as the location, the age of the learners and their cultural and social environment. Thus, I believe it could be absurd to formulate a theory before observing and assessing the data. I wish to elaborate on this issue as I narrow my research question further.
Another topic we discussed was how we view knowledge (epistemology) and how this affects our research. There were three main epistemological paradigms: positivism, interpretivism and critical theory. In my research, an interpretive approach would be taken because, as previously mentioned, I believe that social conditions can influence the understanding of individuals (Scotland, 2012). For instance, Wouters, van Nimwegen, van Oostendorp and van der Spek (2013) argue that students benefit from instruction in game-based learning because it often involves complicated problem-solving that could overwhelm players. Such an approach to instructional teaching is also supported by scholars who view human cognitive architecture through an information-processing model (Atkinson and Shiffrin, 1968). In this model, working memory, which learners use to solve problems, becomes overloaded without instruction when faced with a complex challenge. Essentially, this hinders the process of conveying information into long-term memory, otherwise known as ‘learning’ under the model.
However, Taber (2011) suggests that without instruction, students can develop more idiosyncratic ways of understanding. Consequently, too much instruction can hinder students from acquiring skills such as creativity.
Overall, providing instruction in-game would have different impacts on different individuals. Some students might benefit more from instruction because they have less support from human instructors, whereas others might find it boring to simply follow formulated guidance. Therefore, the research should carefully examine the social factors that could have influenced participants’ actions and try to understand the social meaning from the individual’s perspective.

References

Clark, T., Foster, L., Sloan, L. and Bryman, A. (2021). Bryman’s Social Research Methods. 6th ed. Oxford: Oxford University Press.

Kiili, K., 2005. Digital game-based learning: Towards an experiential gaming model. The Internet and Higher Education, 8(1), pp.13-24.

Scotland, J., 2012. Exploring the Philosophical Underpinnings of Research: Relating Ontology and Epistemology to the Methodology and Methods of the Scientific, Interpretive, and Critical Research Paradigms. English Language Teaching, 5(9).

Taber, K. (2011). ‘Constructivism in Education: Contingency in Learning, and Optimally Guided Instruction’. In Educational Theory, edited by J. Hassaskhah, pp. 39-61. Hauppauge, NY: Nova Science Publishers.


Week 1 reflection


Before this week, I had a blurry image of what educational technology is. Based on my last 18 months, what came straight to mind was the image of a Zoom screen, a replica of the traditional classroom environment rendered through video calls. And perhaps there are more sophisticated technologies using AI, such as intelligent tutoring systems. Yet I realised that educational technology is more than these, and that I need to find something specific as a research topic.

So I started to think about what is most important in learning that technology could improve. When I was in primary school, I used to find mathematics boring because it was just too hard. Even when solving a problem, I would answer without serious thinking and was merely disappointed when I got it wrong. However, as I practised more, I could see my progress and solve some more challenging questions. Although tricky questions remained hard, I started to enjoy the process of learning and the sensation of getting the correct answer. So, from my experience, the most crucial elements in studying were the enjoyment of learning new things and the sense of accomplishment. Yet, judged against the criteria for a good research question, the question remained too broad and unclear.

So I decided to research what could make learning enjoyable and found that many kinds of learning activities could support this. The use of games especially intrigued me, because I felt a similar sense of achievement when I played difficult games. This ‘hard fun’, as Papert called it, comes when people successfully resolve a challenging task (Harel, 2016). Of course, everyone has different personalities and abilities, so it would be crucial to carefully design activities that are right for the person and their cultural context.

Additionally, making learning ‘fun’ should not be the priority in itself; students must learn something valuable from fun activities. A learning activity, regardless of how entertaining it is, would be meaningless if students do not learn. And even if both teachers and students find fun activities beneficial, there are practical difficulties when schools are bound by time limits and the curriculum (Gros, 2007).

Overall, by using the criteria for a good research question, my question changed from ‘what is most important in learning that technology could improve?’ to ‘how do students learn from playing games, and what makes a game useful for both students and teachers?’. Of course, this is still too broad and perhaps unoriginal. Therefore, more literature review and critical thinking are required to develop the question further.

References

Gros, B., 2007. Digital Games in Education. Journal of Research on Technology in Education, 40(1), pp.23-38.

Harel, I., 2016. A Glimpse Into the Playful World of Seymour Papert – EdSurge News. [online] EdSurge. Available at: <https://www.edsurge.com/news/2016-08-03-a-glimpse-into-the-playful-world-of-seymour-papert> [Accessed 12 October 2021].

