E891 Part 3 Action 3.4 Applying an experimental frame to explore practice

As I concluded earlier in the module, most of the research questions I identify tend to be suited to an interpretivist/constructivist frame. I am often interested in why and how something is happening, not necessarily in what is happening and whether there is a systematic effect, although the latter is potentially a secondary concern for me.

Having reconsidered my tentative research questions, there is, however, one question with which an experimental frame might align.

To what extent would personalised study skills support from a personal tutor help students to achieve at university?

I’m particularly interested in students who perhaps have not developed strong self-efficacy or self-regulation and therefore are less prepared for HE study than some others in their cohort.

An experimental design could be applied with, for example, one randomly assigned group of students receiving personal study support and another randomly assigned group receiving no intervention. There might be ethical concerns about one group receiving something extra/better than the norm, so perhaps the control group could have other additional resources on study skills but no personal tutor intervention in this area. The experiment would probably need to run for a whole semester, with marks tracked throughout. However, it would be rather difficult to compare the with/without groups, since the main assessments come at the end of the semester. Simply comparing how the two groups do might produce spurious results, because so many factors can influence end-of-semester performance, particularly in year 1 as the students are settling in. Perhaps a test could be set mid-semester and the performance of the two groups compared at that point (means etc.).
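As a rough illustration of the mid-semester comparison of means, the sketch below computes Welch's t statistic for two groups of marks. All the numbers here are simulated purely for illustration; they are not real student data, and a real study would of course need power calculations and ethical approval first.

```python
import math
import random
import statistics

def welch_t(group_a, group_b):
    """Welch's t statistic for two independent samples,
    without assuming equal variances in the two groups."""
    m_a, m_b = statistics.mean(group_a), statistics.mean(group_b)
    v_a, v_b = statistics.variance(group_a), statistics.variance(group_b)
    se = math.sqrt(v_a / len(group_a) + v_b / len(group_b))
    return (m_a - m_b) / se

# Invented mid-semester marks for illustration only:
random.seed(1)
tutored = [random.gauss(62, 10) for _ in range(30)]   # personal tutor support
control = [random.gauss(58, 10) for _ in range(30)]   # study-skills resources only

print(round(statistics.mean(tutored), 1),
      round(statistics.mean(control), 1),
      round(welch_t(tutored, control), 2))
```

A large absolute t value would suggest a difference in group means beyond chance variation, which is exactly the 'what' question this design could answer, while leaving the 'why' untouched.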

I think this design might help me to understand better the impact a personal tutor might have, but I would see this as a starting point in the research as it might answer ‘what’ but not ‘why’ questions about the intervention and support needed. I would be interested in tutor and student perceptions of the intervention, what the students feel they are finding difficult early on in university, what kind of help they and the tutors would consider most effective etc. One might even track students over their HE careers as case studies, understanding how they develop the self-efficacy and self-regulation skills over time, and then taking lessons from that into the 1st year curriculum and skills development content.

E891 Action 2.2 Thinking about (educational) theory

  • When you think about engaging with theory what comes to mind? Does it make you groan, turn eagerly to the next page of the Study Guide, or something in between? If so why is that?

I think of it a bit like eating my greens – not particularly fun in the short-term, but will probably benefit me if I take a longer term perspective.

  • What does this say about your understanding of theories, their nature and function? For example, you may consider that they have no function.

I feel that much of the theory I read in educational research seems quite alien to my everyday experience.  Perhaps I am influenced by my original academic background as a scientist, and by reading scientific research, and then by my experience working as a professional accountant working to tight deadlines where reflection was a luxury seldom afforded.  Occasionally, however, I do read something that makes sense to me such as the articles debunking the ‘digital natives’ construct and considering individual differences in students’ access to and interest in technologies and using technologies to support their studies.  These take a perspective that individual differences exist here, in the same way as they do for other characteristics.

I sometimes feel that educational research published in academic journals is not aimed at or designed to be read by the ‘mere practitioner’ such as myself.  However, ‘popular theory’ is often viewed with disdain in academic circles.  Yet, more academically rigorous work needs to be put across clearly and concisely to be useful for and to be read by time-poor practitioners and policymakers.

  • Think of a learner with whom you work – this could be a colleague or a pupil you support. What things about them, and that you have to help them to do, influence how you support them? Can you identify where you got these ideas from?

I am thinking of students whom I have seen for academic consultations this week.  They are studying accounting in the first year of their degree, and this is their first semester at university.  I try to take this into account – it’s not just my subject that is new, it is the whole concept of university study that is new to most of them.  However, I think the degree to which they show signs of and potential to manage their own learning (self-regulation, self-efficacy) does influence how much help I will give.  I want to see learners meeting me part of the way, and they should be putting in effort as well as me – even if they are struggling, if I can see they have tried, I feel I can do more to help them successfully.  Some of this comes from Entwistle and Ramsden regarding self-efficacy and self-regulation, the learner’s conceptions of study and the teacher’s conceptions of teaching.

  • Are there practices that you typically use because you know they work? Can you say why they work? If you could would it help? How might it help?

There are tips and advice I use such as using mnemonics, breaking things down into steps etc.  I think they work because they are changing something from potentially overwhelming to something more do-able.  These are strategies I have used in my previous studies, and some were taught to me when I was training to be an accountant.  I also try to use variety, different ways of explaining things, examples – these are things that have come to me from talking to, observing and being taught by my mother, who is a teacher.  In terms of other influences, they come from working with current and former colleagues, research and conferences on accounting education which I read and attend, and from my experience coaching junior colleagues in professional practice on a one-to-one basis.  I think if I could explain better (with evidence) why they work, it would make it easier for me to ‘sell’ the benefits to students during a course.

E891: Activity 1.11 The assessment of research

How do you relate the different types of research to the categories indicated by Bassey and Ball?

The TLRP briefing lists a number of types of research, but these are driven by ultimate purpose/audience rather than philosophical categories.  There is some overlap between the TLRP briefing categories and Bassey’s pluralist conception of educational research.  The TLRP briefing does not include a holistic definition of what educational research is, but the number of types identified indicates that the authors are thinking broadly about what the discipline encompasses.  However, the idea of educational research being on a journey is not strongly reflected in the TLRP briefing categories.  The categories in the TLRP briefing do, however, identify practice-based research as a separate type, in common with Bassey’s ideas of personal theories etc.

I think linking the TLRP briefing and Ball is more difficult.  Some of the TLRP identified categories appear to fall within policy science – for example project evaluation etc.  However, most of the detailed lists of criteria include theoretical basis/framework as something reviewers/funders are looking for, so the policy scholarship ideas of Ball appear to be reflected and privileged here as well.

Where do the other articles you have read fit into the frameworks presented in the TLRP briefing?

Gao and Shu (2010) – published journal article but elements of the developmental/practice-based research framework.  The paper doesn’t have a particularly strong theoretical framework and it is focused on an issue identified in practice and policy recommendations relating to this issue.

Edwards, Sebba and Rickinson (2007) – published journal article but there is an element of project evaluation of the strand of the TLRP that these authors are writing about, and its wider implications for collaborative working between academic researchers and practitioners.  It therefore has potential relevance for future practice-based and developmental research studies.

TLRP ‘Improving working as learning’ (2008) – I found this difficult to fit neatly into the frameworks.  It appears to be partly project evaluation, but also practice-based research.

Do the evaluation criteria also reflect the different categories?

There is overlap between the criteria, but they are tailored to the overall purpose and audience within each category.  I think that the research process led to the development of ‘shopping lists’ of criteria.  What is not evident in the briefing is the relative emphasis on individual criteria within each category.  For example, a theoretical framework is mentioned in a number of the lists, but there is no indication of its importance in journal-published articles vs practice-based research.  I think that the ‘bar’ for theoretical robustness would be higher in journal-published articles than in practice-based research, even though I would expect both types of research to be informed by theory.

#E891 Part 1: Exploring and examining educational research (activities 1.1, 1.2)

I was aware that there were inherent tensions in educational research, but not that there were so many, before reading the Bassey (2007) and Ball (2007) articles from the course reader.  It appears that debates on the nature of educational research, its purpose, its ‘purity’ as academic enquiry vs subjugation as an instrument of government policy agendas are still raging.  The threat of ‘policy science’ to educational research raised by Ball appears to have emerged again in the discussions regarding the government’s establishment of the ‘What Works’ network of centres to investigate policy issues, and the March 2013 speech by Dr Ben Goldacre (a medical doctor) on ‘Building Evidence into Education’, which focused on the potential use of randomised controlled trials in educational research.  It does seem odd that the government should engage a medical practitioner and not an educationalist to review the use of evidence in educational research – one would think that this role would require someone/several people from within the education community, perhaps supported by those outside the field to provide critical challenge.   More detail can be found at: http://www.bera.ac.uk/resources/dfe-review-evidence-education-0 

More of the ‘ed’, less of the ‘tech.’? #E891

I’ve been rather quiet on this blog over the summer, as I’ve been beavering away updating modules, marking dissertations and getting ready for a new academic year.  Our iPad pilot continues, and I have just received a good number of survey responses from our students to analyse, with more expected next week.  The title of this blog post refers to the fact that I’m going to use this blog as a learning journal/jotting space for my (hopefully) last Open University MA module – which is on ‘Educational Enquiry’, and will last for the next 11 months.  This is a broader-based module than the previous ones I have studied which related specifically to online and distance education, so there will be less of an educational technology focus in the module, and hence probably in my blog entries related to it.  However, I’m hoping to focus some of my assignments (where I have a choice) on educational technology, so it won’t disappear completely from this blog.  

#ocTEL week 10 Reflecting on the ocTEL experience

My big question from the start of the course was ‘How do we use technologies to enrich teaching and encourage our students to use them to enrich their learning, while still respecting individual differences and preferences?’

While I don’t think I have answered this question during the 11 weeks of participation, I have widened my appreciation of some of the issues, and become aware of some recent literature and projects (mostly published since I paused my MA studies in 2011) that provide a tapestry of ideas for future projects.  My Delicious social bookmarking has got a lot of reading material in it from ocTEL!

I have got far more from this course than I imagined – not so much in terms of themes, which as an OU MA in Online and Distance Education student were largely familiar, but in terms of the overall experience.  It’s been great to ‘meet’ some peers doing interesting projects and share our knowledge and experiences.  I’ve even ‘met’ some people in my own institution, who I would not have otherwise come across! 

I’ve enjoyed being a MOOC participant, and I think the course has run pretty smoothly, which is due to what seems like a lot of work behind the scenes from the course team.  The course has also given me some focus for my edtech blog, which I had started before the course, but which has been very useful to reflect my thoughts for some of the activities.

I’ve even learnt to dip in and out (although some of this was forced by lack of time in what has been a busy teaching semester), a ‘buffet’ approach which isn’t possible with my assessed studies, but which has reminded me of the joys of studying without some of the pain!  All in all, it’s developed my knowledge a bit, but more importantly got me back into studying again online, in preparation for my final MA module which starts in October.

If people have an interest in educational technology, education as a whole, and their professional development, then future iterations of ocTEL would be a good place to start!

#ocTEL week 9 TEL project management

This week is interesting as I wouldn’t usually describe my forays into edtech as full-blown projects!  However, I am involved in a bigger project due to the planned rollout of iPads for various student cohorts.  This project is at quite an early stage, so I am trying to take everything in so that we implement good practice and avoid banana skins as the project progresses.

We had developed a proposal for a staff iPad pilot with a group of 6 teaching staff, which crossed with the wider student implementation project and hence became subsumed within it.  However, we are running our pilot as an autonomous group within the larger project.

Some good practices that I think we are following:

1) Project is about altering pedagogies as much as using the technological tools – we’ve identified strands of activity in terms of pedagogical affordances, and we’re setting out to explore how the iPads can help (or not help, as the case may be!)

2) Community of practice – the whole aim was to try to create a beacon group of staff who could then pass on expertise to wider groups of colleagues.  We all work closely together anyway and I am planning regular get togethers to foster this.

3) Detailed plan and timeframe – these were developed (by me!) with milestones at proposal stage

4) Collecting information about students’ tech uses and habits – we are planning a survey of our incoming cohorts as there is little data on this which is specific to our faculty or discipline.  Trying not to make dangerous assumptions!!

Some banana skins to avoid:

1) Inadequate consideration of risks and risk mitigation – for our pilot, I will use the JISC risk logs to develop the risks identified at proposal stage, and track and mitigate what we can.  There are some risks I have already identified.

2) Staff time and resource issues – we are short of resource, both in terms of my time, as I have a full teaching load and other responsibilities, and because one of our learning technologists, who has prior experience in this area, is on leave – again, I’m going to have to monitor this one closely and flag it in my progress reports…

#ocTEL week 7 – activity 7.1 Designing learner support (peer support)

I am interested in encouraging more of a peer support thread through one of my modules.  I therefore decided to have a look at some of the ocTEL resources in this area.  I found the REAP project principles practical, pragmatic and well-considered, so I have used these as the framework for my learning design (http://www.reap.ac.uk/PEERToolkit/Design.aspx).

Context:  This is a level 3/4 undergraduate course which is discursive and theory-based in nature.  In the current session, I continued with an activity included in the course design when I took it over this year – this was having students, in small groups, review past answers to an essay question and critique them.  However, they were quite reticent to provide comments, although fairly happy to put these in a grade boundary. So, I think there is more to do to really get the benefit from this activity in terms of feed-forward to coursework and exam essay improvement.  I did not provide detailed criteria for this activity, but I did for the assessed coursework.

Initial thoughts: I think the timing needs to be changed so that we do this activity earlier in the course, so there is more time for reflection in the run up to coursework submission.  I also think I need to give some overall criteria (these will probably be the overall grading criteria used in the department) to accompany this activity.  If there isn’t time in class, this could become an online activity.  I don’t currently ask the students to comment on each others’ work – I think I want to do more with this first to see if that would be feasible later.

Mapping of proposed design to the REAP criteria:

Atmosphere of trust – need to set some ground rules, such as no marks will be given, the ‘test scripts’ are anonymised and old, make clear that this is non-assessed, but is important because it will give suggestions of how students can improve performance in assessed work, give some examples of useful and not so useful feedback comments, what my role is (facilitate and give feedback on the feedback).

Practice with criteria – we could do one together in class and I’d go through the criteria and what they mean.

Explanations for review comments – linking back to the good feedback example, this would have advice for what needs to be done differently next time.  Students’ comments would be expected to do the same.

Practice in holistic appraisals – good example would have overall comments, to act as a model for student reviews.

Dialogue around reviews – explain that students need to be prepared to justify their comments.  If groups were looking at the same examples, then each group would need to justify comments and then I could draw out the justification for differences from each group.

Integrate self-reviews – this is tricky with the design I have specified.  I think there could be a link to using these criteria for self-review for assessed coursework.  Students can already submit drafts to me, but I think I could ask them to do self-review first and do a summary for me when they submit their draft to demonstrate that they have taken this step first.

Signposts for quality – mainly through me feeding back on the reviews.  Harder to do in class if oral exercise – would be easier for me to do if followed up online later (e.g. via a wiki or discussion board thread).

Encourage reflection on received reviews – not in the current design.  One to try later!

Make review a regular activity – this is interesting.  I think there would be further scope on the course in future years.


PS I really liked the idea of ‘one minute papers’ which was linked through from the REAP website i.e. questions you ask to encourage reflection at the end of a classroom session (e.g. a lecture).  I am experimenting with some classroom voting via iPads, iPhones and Android that can deal with free-text answers, so I think I might use this and then start the next class with the feedback from the end of the previous class.

#ocTEL week 6 activity 6.1 Reading and reflection on assessment practices

This activity was based on a JISC (2010) publication.  I’m basing my reflection on a new undergraduate level 2 module which a colleague and I are planning at the moment. It is based on an online business game simulation played in groups.  It will have two group assessments, a business plan and a performance presentation, plus a peer assessment and an individual reflection.

  • How does your assessment approach(es) align with the four teaching and learning perspectives (page 11)?

Associative – some of the assessment criteria for the business plan and group presentation are based on this perspective.  There are some skills which students need to develop further on this course, such as budgeting and analysis of reasons for differences between budgeted and actual performance.

Constructivist – the reflection is designed to get students to self-evaluate their individual performance and that of their group.

Social constructivist – we expect students to mediate peer learning through their group decision-making and analysis of information in the game.  This will be formally assessed in the peer assessment.

Situative – the game itself encourages development of discipline-specific practices, as it’s quite realistic (within the confines of a simulation not a real business, of course).  However, our assessments don’t really get at this perspective.

  • How does your assessment approach(es) align with the twelve REAP (Re-Engineering Assessment Practices) principles of effective formative assessment and feedback (page 15)?

Help to clarify what good performance is – I’m currently working on the assessment briefs, which (I hope) will make criteria clear.  I usually give examples of good and poor performance against each criterion in these briefs.  We will also spend time in class briefing the students on this for each assessment.

Encourage time and effort on challenging learning tasks – there is a gradual approach to assessment on the module, with staged deadlines.  We’ve done a lot of thinking about the ‘shape’ of the course, but we shall see…

Deliver high-quality feedback information that helps learners to self-correct – we will try!  

Provide opportunities to act on feedback – for me, this is partly about timing.  For example, on this course, students will receive feedback on their business plans, some of which will be relevant to their performance presentations.  So, we aim to provide the feedback while they still have time to incorporate adaptation in preparing their presentations.

Ensure that summative assessment has a positive impact on learning – this is a big ask!  However, in this module, we’ve had some freedom to aim at this.  With other modules, where exemption restrictions are a limiting factor, it isn’t easy.  For example, the professional bodies tend to assess in traditional lengthy exams, and expect similar assessment methods in order to grant exemptions.  They are (rightly) concerned with ensuring students cover the breadth of the syllabus rather than cherry-picking the areas they will study.  

Encourage interaction and dialogue around learning – we try to do this via seminar sessions and office hours.

Facilitate the development of self-assessment and reflection in learning – we’re aiming to do this explicitly via the reflection task, and via peer assessment.  However, having these elements as formally assessed is something new for me.

Give choice – no, not on this course!  I do think this is important at more advanced levels – for example, final level undergraduate dissertation choice and MSc level.  In my experience as an MA student, giving me some choice helps me to consider links between theory and practice – but this is quite a specific issue for me, given that my MA is in education and is relevant to my job!

Involve learners in decision-making about assessment policy and practice – I don’t know if students are involved formally at faculty level in this.

Support the development of learning groups/learning communities – this is a module with group collaboration.  However, when assessments are individual with personalised feedback, I don’t know whether that has any impact on development of learning communities within our classes.   Would be interesting to find out more about this area!

Encourage positive motivational beliefs and self-esteem – it’s hard to be positive sometimes with very weak pieces of work.  However, I do try to unpack my feedback messages into ‘what you can do differently next time’.  I think that the feed-forward we are planning to give on the business plan assessments will promote some groups to adapt and strive to perform in the game itself.

Provide information to teachers that can help shape their teaching – I find it helpful to review the feedback I have given, and take notes on it so I can consider what I might need to do differently in the following year.  I think team teaching can be a good opportunity to do this, as you have an inbuilt critical friend in your co-teacher.

  • How would you describe your assessment design from the manager’s, practitioner’s and learner’s perspectives (pages 17-22)?

Manager – we may prove to be a bit of a headache, as there will be a heavy admin load on having 4 assessments on a 10 credit module. We are not using technology in any special way the first time we run the module, but may look to do so in the future.

Practitioner – we feel this assessment will be richer than a more traditional model, with a variety of techniques.  However, I expect the workload to be high – significantly higher than with traditional methods.

Learner – I hope that the students will appreciate the level of feedback they receive, and the variety of different assessment components.  This module has explicit links to employability skills, which we will emphasise throughout, so we hope to scaffold further development of these areas in our students through the module and its associated assessments.    Time will tell!

#ocTEL week 6 – Assessment and Feedback, ‘If you do only one thing’

Hoping to do more than one thing on #ocTEL this week, but here’s my starter for ten.

This blog post will be discussing and critiquing assessment methods available on technology-enhanced learning courses.

Method 1: Online test

  • How does the assessment align with the course learning outcomes?

Tends to be more appropriate for lower level learning i.e. factual recall, unless the questions are exceptionally well-designed!  Some technologies will allow a variety of question types such as multiple choice, short answer, fill in the blanks etc

  • What kind of feedback would the learner receive and how would this contribute to her progress

Generic feedback is received on correct/incorrect answers.  I haven’t seen a test which could provide feedback on overall performance, but I guess this could be done (e.g. within a particular range of marks overall, you would get a particular set of feedback).  Even though the feedback is generic, if carefully designed it can be a useful feed-forward to encourage students to correct their mistakes and do better next time.  I feel it lends itself to formative assessment, but could also be used summatively (and indeed was by a colleague at my former institution).  This type of assessment can often be linked with drill and all the negative connotations that has in education.  However, in my subject discipline of accounting, this type of technique can be important at introductory level, as students need to master basic techniques such as costing in management accounting and accounts preparation in financial accounting.
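The idea of feedback keyed to an overall mark range could be sketched as a simple mapping from score bands to messages. The bands and wording below are entirely invented for illustration; a real VLE quiz tool would configure this through its own interface rather than code like this.

```python
def overall_feedback(mark, bands=None):
    """Return a generic feedback message for an overall test mark (0-100).

    `bands` maps a lower threshold to a message; the highest
    threshold not exceeding the mark determines the feedback.
    """
    if bands is None:
        bands = {  # illustrative thresholds and wording, not from any real system
            0: "Revisit the core techniques before reattempting the test.",
            40: "Sound basics; rework the questions you got wrong.",
            70: "Strong performance; try the extension exercises.",
        }
    threshold = max(t for t in bands if t <= mark)
    return bands[threshold]

print(overall_feedback(55))
```

The design point is that, even with generic per-band wording, the message can be written as feed-forward ("rework the questions you got wrong") rather than a bare score.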

  • Which technologies would support this?

Many VLEs have this functionality, either directly or via a plug-in.  I have also seen demos of Google Drive (formerly Google Docs) being used in this way. 

Method 2: Assessed participation in e-moderated discussions

  • How does the assessment align with the course learning outcomes?

Again, depends on how it is designed.  I think this can be quite powerful in arts and social sciences disciplines, where there are not necessarily correct answers but the aim is to encourage students to provide a coherent argument.  I think this can work effectively when a multiply-sided argument is put forward and needs to be unpacked and critiqued.

  • What kind of feedback would the learner receive and how would this contribute to her progress

Ideally, this type of discussion would prompt peer feedback on a student’s contributions and/or tutor feedback as the tutor is weaving the discussion thread.  I think there might be difficulties taking the feedback and applying it to different parts of the syllabus – in my experience as a learner in this situation, you tend not to be able to dissociate the content from the feedback.

I think these work best when they are summative but only a small proportion of the course.  If formative, not all will see the benefits of participating and the discussion could risk being dominated by the vocal few.

  • Which technologies would support this?

Discussion boards, Twitter, captured chat e.g. in Blackboard Collaborate, email list etc.