The major part of my project work involved qualitative data analysis of five hours of interview transcripts (approximately 32,000 words), transcribed verbatim from interviews with five academics in the Dublin School of Architecture.

The methods and findings from that research are outlined in the journal article submitted. The findings also informed, in part, the design of an elearning artefact whose aim was to assist staff who might be interested in exploring G Suite for Education as a teaching tool.

Initially I thought it was a very manageable project, and I envisaged a research group of ten participants. Before starting the research I was aware of the novel use of G Suite in the Dublin School of Architecture in particular, and also of the use of personal websites instead of the officially supported VLE, Blackboard (aka webcourses). However, I had not yet identified or approached any of the research subjects; when I did, I found it quite difficult to recruit ten participants and ended up with only five. Even so, I felt the group was representative of the different perspectives I had hoped to capture: three were advocates of G Suite and now made only very limited use of Blackboard, one primarily used a personal website alongside very limited use of Blackboard, and the final participant used only Blackboard, and without much enthusiasm. Nevertheless, in hindsight, it would have been better to have a clearer picture of my research group before committing to the research itself.

I enjoyed the interview process and could see immediately that I was getting really good insights from all five participants. However, transcribing the interviews became something of an ordeal. I recorded the interviews, uploaded the recordings privately to YouTube, obtained the automatic transcript which YouTube provided, and then began what turned into a very long process of correcting that transcript (auto transcripts are useful but contain a high volume of mistranscriptions which need editing). As I worked through that process, however, I could see a pattern beginning to emerge, and I began noting possible codes at that early stage. Once I had clean transcripts I began a more formal process of coding, for which I needed a tool. Initially I tried coding in Excel, and then a tagging process in Word, but neither was satisfactory: both were time-consuming and made it very difficult to maintain any consistency. I had not really considered specialized software for the task, but by happy coincidence MAXQDA started popping up on my Twitter timeline. I downloaded a free trial copy and immediately found the process of coding much easier; it made it easy to create codes and to tag extracts against multiple codes. Its functionality went far beyond what I used it for, but for the coding alone it was tremendously useful.
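For anyone repeating this workflow, part of the transcript clean-up can be scripted before the manual correction pass. The sketch below is illustrative only: it assumes the YouTube captions have been saved as a standard .srt file (the filename interview_01.srt is hypothetical) and simply strips the cue numbers and timestamps, leaving plain text to edit.

```python
import re
from pathlib import Path

def srt_to_text(srt_path: str) -> str:
    """Strip cue numbers and timestamps from an .srt caption file,
    leaving plain text ready for manual correction."""
    text_lines = []
    for line in Path(srt_path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line or line.isdigit():  # skip blank lines and cue numbers
            continue
        if re.match(r"\d{2}:\d{2}:\d{2}[,.]\d{3}\s*-->", line):  # skip timestamps
            continue
        text_lines.append(line)
    return " ".join(text_lines)

if __name__ == "__main__":
    print(srt_to_text("interview_01.srt"))  # hypothetical file
```

Of course, a script like this does nothing about the mistranscriptions themselves; those still require manual correction against the audio.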

At the end of the free MAXQDA trial period I exported the coded extracts to Excel, and it was there that the major work of coding, re-coding and searching for themes really began. In hindsight I think I made this process more difficult for myself than it needed to be. I was not rigorous enough initially with my coding system, meaning that when I finished the first pass of coding I had a lot of similar-but-not-quite-the-same codes which I needed to look at again and rationalise. The decision-making around which code was most appropriate for a given extract was also quite difficult, and I will admit that initially my rationale for coding particular extracts was not fully consistent: sometimes I coded based on the empirical (stated) content of the extract and sometimes on the interpretative (implied) content. Were I to carry out a qualitative data analysis coding exercise again, I would define my codes and their rationale much more explicitly at an early stage in the process. After everything was coded I reviewed the extracts and what they had been coded to: removing duplications, rationalising some codes, checking that all of the extracts linked to a particular code worked together (that you could easily see a common reason why each extract was associated with that code), and checking that the codes made sense under each theme. This was a hugely time-consuming but necessary exercise.
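Were the exercise repeated, some of that rationalisation could be supported by a short script. The sketch below is a minimal illustration, assuming the exported spreadsheet holds one coded extract per row with columns named "Code" and "Extract" (the filename and column names are hypothetical): it counts extracts per code and flags pairs of similar-but-not-quite-the-same code labels for review.

```python
from difflib import SequenceMatcher
from itertools import combinations

import pandas as pd

# Hypothetical export: one coded extract per row, columns "Code" and "Extract".
df = pd.read_excel("coded_extracts.xlsx")

# Codes with very few extracts are candidates for merging into a neighbour.
print(df["Code"].value_counts().tail(10))

# Flag near-duplicate code labels (e.g. "peer feedback" vs "peer-feedback").
for a, b in combinations(sorted(df["Code"].dropna().unique()), 2):
    if SequenceMatcher(None, a.lower(), b.lower()).ratio() > 0.8:
        print(f"Review possible duplicates: {a!r} / {b!r}")
```

A pass like this would not replace the manual review of extracts, but it would surface the obvious duplications much earlier in the process.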

On the subject of codes more generally, a possible additional output from the above research might be a list of fully articulated codes and themes, which may be of interest to other researchers in the field of elearning.

Turning to the artefact: when I started this research, DIT (as was) was firmly in the G Suite camp of cloud computing. However, shortly after DIT gained university status and became Technological University Dublin, the University switched to Microsoft Office 365 for all email and cloud computing requirements. G Suite apps remain available but are being phased out and are currently only accessible via a staff member’s “old” DIT login. This caused some difficulty in terms of artefact development, and I have retained DIT nomenclature throughout the artefact. Again, the time lag between my starting and concluding the MSc has not been helpful here. The artefact itself contains some developed learning objects (videos, a cribsheet) but is essentially in prototype format. Unfortunately, the learning objects themselves will no longer be of any use in my work or to TU Dublin, as the institution is moving away from G Suite.

During the artefact development process, I carried out a loose needs analysis and audience definition. The audience was principally academics interested in exploring a non-institutional VLE, and I conjectured that if they were interested in using technology that was not officially promoted as a teaching tool, then they were likely to have an interest in, and ability with, technology. The participants in my research had very much become self-starters in solving their teaching and learning issues, and I sought to reflect that in the artefact by advocating that users turn to the internet generally if their issue was not addressed in the artefact.

While I was cognizant of the five stages of ADDIE (analysis, design, development, implementation and evaluation), the evaluation stage is lacking. The resource was never intended to be used by the research participants, and by the time it was complete G Suite was no longer a promotable option at TU Dublin.

Nevertheless, the process of artefact development has been quite rewarding. I storyboarded the artefact, developed additional G Suite skills to build the artefact shell in Google Sites, and acquired scripting and basic video/audio editing skills when developing the learning objects to sit within it. Again, the exercise has been beneficial in itself.