This task was designed to introduce new researchers to research methods, primarily qualitative and social methods. I will confess I did not invest very heavily in this, and I will also say that my overall project direction changed after this task and thus its relevance to where I have ended up is somewhat limited. However, in the interests of sustained narrative, evolution and transparency, the task as written still belongs here.

Task 1

Researcher positioning

The story of my positioning as a researcher in this project is perhaps best told through a concept unrelated to the conduct of research. My approach to disciplines, methods and paradigms is analogous to Edmondson’s (2013) concept of ‘teaming’ – rather than a static frame of reference, an agile universe of components moves in and out to achieve research goals according to their affordances. My work is at its core transdisciplinary, synthesising from diverse bodies of work to inform action rather than existing within traditional discipline boundaries.

I am inherently a qualitative researcher working in an interpretivist (constructivist) paradigm – meaning is individually and collectively constructed with no empirical truth. This is especially true in an organisational context, where the idea of organisational learning is constructed entirely of the subjective experiences, beliefs and values of the individuals working within it. People’s behaviour is driven by their constructed meanings, not empirical data.

In a similar vein, this project is fundamentally participatory research – I am an employee of the organisation and cannot disentangle my experience as such. My meanings are constructed alongside, within and because of the meanings of those I work with, who are also the participants of this research project.

I am also positioned somewhat as a catalyst, as the implementer of the action research. I am exploring framing this role as an ‘organisational learning designer’, and draw equally on the epistemologies of individual learning and organisational learning here.

As a convergence of this line of thought with the education-focused frame of action research and my own background as an educator, I have also begun to think of the action research project as an act of pedagogy. Framing learning outcomes as a goal-focused orientation for research questions allows research action to be firmly situated in meaningful practice for learning.

Research context

The macro-level frame for this project is transdisciplinary action research, following Maxwell’s (2003) modified action research model. The action – still being iterated as a result of working through this program – is an intervention model towards facilitating organisational learning in the university context. This was initially proposed as a framework but a more targeted intervention model is now being developed.

This project focuses on a target implementation zone of the Learning and Teaching Transformations directorate, positioned as a bounded-context micro-organisation within the university. The choice of LaTT as a bounded implementation context is based on level of researcher agency (given I am employed in the directorate), strategic relevance (LaTT is tasked with leading strategic change initiatives) and ability to scale learnings in the future due to analogous structure.

Within the action research spiral, the project takes a longitudinal sequential approach of undertaking reconnaissance to understand the context, developing an intervention, taking action, evaluating impact and iterating the intervention for further action as a continuous improvement cycle.

Task 2

Method units

Within the action research frame there are two key points of data gathering – reconnaissance and evaluation. For both of these, a set of non-traditional method units (MUs) are employed to gather and analyse qualitative data. The methods chosen are oriented explicitly towards individual and collective construction of meaning, and evaluative data for action and improvement.

The overarching MU (MU1) is Maxwell’s (2003) modified action research spiral, which acts as a conceptual framework guiding which MUs are deployed, when, and for what purpose. The initial data collection point is reconnaissance, gathering contextual data to inform the situational analysis. Later, after action is taken, evaluative data is gathered to inform reflection and revision or iteration.

There are two disparate MUs sequenced within this framework, and a third embedded as underpinning data collection and analysis.

The first (MU2) is Participatory Narrative Inquiry (Kurtz, 2014) – an extension of Narrative Inquiry (Clandinin, 2007) that positions the researcher as participant and focuses on co-constructed meaning, rather than extraction and siloed analysis of narratives. Storytelling is acknowledged as a valuable organisational process for meaning making (Boje, 2008; Brown et al., 2005), and PNI facilitates the complex view of organisational experience required by this project.

The second (MU3) is assessment. Assessment as a practice is not well understood as a data-gathering mechanism in research contexts, but when designing a pedagogical intervention towards organisational learning, it seems appropriate to borrow standard evaluative methods for practice improvement (Timperley, 2009; Nygaard & Belluigi, 2011) from the education domain and use them in this particular research context.

The third (MU4) is sensemaking (Weick, 1995; Kurtz, 2014) – another mechanism not usually seen in a research context, but as a valuable organisational learning function, using sensemaking occasions as a means of data collection is appropriate for this project. Sensemaking is inherently embedded in the PNI methodology, but can also function as a standalone process that underpins the construction of meaning in each MU event.

This configuration of MUs is illustrated in the diagram below:

These are sequenced longitudinally over time, but the nature of the action research spiral means that there is also a doubling back and repeating of the sequence to iterate the action taken.

Data gathering

Within the MUs, there are three main data collection strategies:

Group storytelling

Small group sessions where participants work with narrative prompts to tell stories that articulate their organisational experiences, then work collectively to construct meaning out of the stories told. Triangulation of data happens in real time as participants ‘bounce’ off each other’s experiences to construct shared meaning – a deliberate function of the group as the data unit rather than the individual.

Narrative Incident Reports (NIRs)

Another method outlined by Kurtz, NIRs allow the collection of stories as data outside of a fixed temporal context (useful when staff time is at a premium), and these can then be brought into the group storytelling sessions to catalyse the shared meaning-making process.


Assessment

If I take the approach of framing the action in this project as an act of pedagogy, the intervention can then be designed as a teaching unit with learning outcomes and assessment. As participants undertake assessment, assessment data is gathered and referenced against achievement of the learning outcomes, as well as used to inform further action (performing both a summative and a formative function). Data triangulation here is achieved through a joint process of self-assessment and researcher-as-facilitator assessment.


Sensemaking

Kurtz notes that sensemaking is embedded in the PNI methodology, and ‘It usually looks like some people in a room together for a few hours, doing things with collected stories’ (2014, ch. 11, para. 3). This will happen naturally during group storytelling sessions.

Sensemaking can also stand alone: data will also be gathered at what Weick (1995) calls ‘occasions for sensemaking’, which are likely to occur in times of ambiguity and uncertainty. The notion of evaluative and formative assessment also necessitates sensemaking, creating meaning from the assessment act and translating it into practice.


Sampling

Given the use of LaTT as a bounded context for implementation, sampling is a reasonably straightforward task. Clear boundaries for inclusion and exclusion exist based on employment, and LaTT’s usefully vertical structure is divided into a number of pillars and further into nested smaller teams that lend themselves to sampling groups.


Data analysis

The two main forms of data analysis are sensemaking (analysis for meaning) and assessment (again, not traditionally recognised as data analysis functions, but appropriate in this context). In the case of PNI, analysis happens in real time in the storytelling groups, in what Kurtz (2014) refers to as ‘working with stories’, or a shared process of making meaning out of the stories. In standalone sensemaking contexts the analytic process is similar – what Weick (1995) refers to as the ongoing creation of reality. In both cases, the data provided by the situation and stories is analysed by asking ‘what does this mean?’.

Analysis of assessment data is a determination made against learning outcomes (summative sensemaking), alongside meaning constructed for future practice (formative sensemaking).


References

Boje, D. M. (2008). Storytelling organizations. Sage.

Brown, J. S., Denning, S., Prusak, L., & Groh, K. (2005). Storytelling in organizations: Why storytelling is transforming 21st century organizations and management. Routledge.

Cooksey, R. (2016). Frames and configurations: Using a systems perspective for social and behavioural research [PowerPoint slides]. UNE Business School, University of New England, Armidale.

Edmondson, A. C. (2013). Teaming to innovate. John Wiley & Sons.

Gibbons, M., et al. (1994). The new production of knowledge: The dynamics of science and research in contemporary societies. Sage.

Kurtz, C. (2014). Working with stories in your community or organization: Participatory narrative inquiry (3rd ed.). Kurtz-Fernhout Publishing.

Maxwell, T. W. (2003). Action research for Bhutan? Rabsel, III, 1–20.

Nygaard, C., & Belluigi, D. Z. (2011). A proposed methodology for contextualised evaluation in higher education. Assessment & Evaluation in Higher Education, 36(6), 657–671. https://doi.org/10.1080/02602931003650037

Timperley, H. (2009). Using assessment data for improving teaching practice. In ACER Research Conference series 2009 (p. 7).

Weick, K. E. (1995). Sensemaking in organizations. Sage.

