In the last two years, I have lectured on research in the private sector to students at the Australian National University. Many of these students hold a sad, but common, view of private sector consultants: that they cut corners (not edges), fail to think critically and produce claims based on flimsy research. This stereotype of the ‘consultant’ is unfortunate, but it exists and endures for a reason.
Some may lay the blame on the profit motive of the private sector consultant. I want to focus, however, on a more critical factor that can undermine the quality of research work undertaken by consultants – research design and methodology.
Methodology – why is it important?
Research is not exclusive to academia – it is part of the day-to-day work within public and private institutions. Research design and the development of an appropriate methodology are critical parts of good research. Methodology is, however, often an unappreciated, and perhaps poorly understood, part of the research process. It is not the part policy makers and the intended audience want to hear about – the consultant is commissioned primarily to deliver findings and recommendations. However, the fundamental importance of the research methodology is inescapable.
Methodology is the logic that underpins the selection of the approach, research design, methods and analytical framework that will answer the questions posed. To take this further, the way you design the project methodology fundamentally shapes the findings, conclusions and recommendations. Being honest about how your methodology sets boundaries around your work is also important – there are limits to the breadth and depth of issues you can cover, so setting parameters around what you aim to do is critical.
Consulting and the academic standard
The academic peer review process is the gold standard for ensuring the integrity of the work produced. Fundamental to this is the assurance that the approach and methods used in a research project meet appropriate standards of rigour and validity. While this standard is not necessarily a reasonable expectation for an industry where time and resource constraints are paramount, it does not justify the consulting industry simply taking shortcuts, substituting a stock suite of ‘desk review, interviews and a survey’ to answer whatever question it is charged with investigating. Each research question needs careful consideration of the best data collection methods, who the target audience will be, and how the data collected will actually meet the overall objectives. Any shortcoming in these areas risks the final product claiming too much and misleading its audience.
There are elements that can be replicated from academic research and applied to commercial consulting. These include close attention to developing a sound hypothesis, assessing the availability of data, selecting the most appropriate methods and, ultimately, designing a methodology that is fit for purpose and aligned to the context. A fundamental understanding of the strengths and weaknesses of the particular methods chosen is critical here.
For example, why employ a survey instead of a focus group or an interview? It might be cheaper and yield a more impressive number of responses, but will it get the information or insight you require? Will you necessarily get an honest account of what a stakeholder thinks, particularly about the role of other stakeholders, through a focus group rather than an interview? What will be the impact of multiple participants bouncing ideas off each other in a focus group, compared to a one-on-one interview?
The list of similar questions is long, but they are important to consider in the design phase. Other issues include whether to target certain participants (your sample), how you will recruit participants (and how this might bias your sample), and how all of this, including the conduct of data collection (and the phrasing of particular questions), will influence the data collected, the analysis, the findings you identify and the recommendations you make. If you are working in a cross-cultural context, there is an additional layer of consideration: whether the intended methods are culturally appropriate.
During the last few years I have come across a number of consulting reports (and, to be fair, academic articles too) where the combination of these factors was not adequately considered. One example involved using convenience sampling (going to a market and talking to those willing to talk) for a qualitative study of 100 people and then claiming it to be representative of the national population. Among a number of issues with this, an immediate shortcoming is that it was likely to bias the sample away from the marginalised, elderly and infirm. Some of these reports were transparent in acknowledging the limitations – but these limitations have such a significant impact on the validity and utility of the findings that you can’t help but think that more skilled design of the methodology, and consideration of potential risks and shortcomings at the planning phase, would have been worthwhile.
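To make the point concrete, the toy simulation below sketches how a convenience sample can mislead. Every figure in it – the size of the harder-to-reach group, its views, and how strongly a market-based sample under-represents it – is invented purely for illustration and does not come from any of the reports mentioned above.

```python
import random

# Hypothetical illustration only: all figures are invented. A population in which
# a harder-to-reach group (e.g. elderly, infirm or otherwise marginalised people)
# makes up 30% and holds different views to everyone else.
random.seed(1)

POP_SIZE = 339_000
HARD_TO_REACH_SHARE = 0.30
P_AGREE = {"hard_to_reach": 0.20, "easy_to_reach": 0.60}  # invented agreement rates

population = []
for _ in range(POP_SIZE):
    group = "hard_to_reach" if random.random() < HARD_TO_REACH_SHARE else "easy_to_reach"
    agrees = random.random() < P_AGREE[group]
    population.append((group, agrees))

true_rate = sum(agrees for _, agrees in population) / POP_SIZE

# A convenience sample "at the market": the harder-to-reach group is far less
# likely to be encountered there (the 0.1 weighting is, again, invented).
def convenience_sample(pop, n):
    weights = [0.1 if group == "hard_to_reach" else 1.0 for group, _ in pop]
    return random.choices(pop, weights=weights, k=n)

sample = convenience_sample(population, 100)
sample_rate = sum(agrees for _, agrees in sample) / len(sample)

print(f"Population agreement rate:          {true_rate:.2f}")
print(f"Convenience-sample estimate (n=100): {sample_rate:.2f}")
```

Run a few times with different seeds, the convenience-sample estimate sits well above the population rate, simply because of who turns up at the market – which is the kind of gap a representativeness claim quietly papers over.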
Our experience
To opine on the importance of methodology is all well and good but, like good research, it must be supported by evidence. Of course, there is a limit to how much of our work we can share, as it is subject to client confidentiality provisions (often as impenetrable as academic journal paywalls). However, the two projects below are good examples of methodological credibility and rigour in consulting work.
For both of these projects, the data are sufficient for us to pursue publication in academic journals.
United Nations Development Programme – Solomon Islands Electoral Commission (SIEC) Awareness Survey.
This survey aimed to provide an evidence-based assessment of the performance of previously conducted voter awareness programmes, establish a baseline data set for future monitoring and evaluation, and inform future communication messaging. The survey drew on a representative sample of the entire voter population (est. 339,000 people). Sampling considered the key variables of gender, rural/urban divide, age and education level. A total of 1,332 respondents were surveyed across 14 Enumeration Areas (EAs) in five provinces of the Solomon Islands. The EAs were randomly identified by the National Statistics Office based on 2009 Census data.
While the final report was written for a non-technical (and non-statistical) audience, it was based on appropriate statistical tests and analysis, with graphs throughout based on either total counts or percentages of responses, accompanied by standard error bars (95% confidence interval).
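For readers curious about what sits behind those error bars, the sketch below shows the standard calculation of a 95% confidence interval around a single survey proportion, assuming simple random sampling. The sample size matches the survey (1,332), but the response share is a made-up figure, and the survey's clustered, stratified design would in practice require design-based adjustments to the standard error.

```python
import math

# Illustrative only: 95% confidence interval for a single survey proportion,
# assuming simple random sampling. The response share (p) is a made-up figure;
# a clustered, stratified design would need design-based standard errors.
n = 1332   # respondents, matching the SIEC survey sample size
p = 0.55   # hypothetical share of respondents giving a particular answer
z = 1.96   # z-score for a 95% confidence interval

standard_error = math.sqrt(p * (1 - p) / n)
margin_of_error = z * standard_error

print(f"Estimate: {p:.1%}")
print(f"95% CI:   {p - margin_of_error:.1%} to {p + margin_of_error:.1%}")
print(f"Margin of error: +/-{margin_of_error:.1%}")
```

At this sample size the margin of error is under three percentage points, which is what makes the difference between a defensible national estimate and the convenience-sample example discussed earlier.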
Australian Centre for International Agricultural Research (ACIAR) – Knowledge Systems and RAPID frameworks for impact assessment.
Sustineo developed a new qualitative impact assessment approach to enable ACIAR to better capture the capacity building, relationship development and policy impacts of its investments. The project drew on an extensive literature review focused on the intersection of quantitative and qualitative impact assessment methodologies, as well as literature relevant to the case study's focus: the health effects of aflatoxin contamination and its implications for development.
This was complemented by more than 22 key informant interviews in Australia and Indonesia. Participants were identified according to the level of the project (i.e. institutional as opposed to livelihoods level), their previous engagement with the project (researchers and industry) and their organisational relevance for management today (government agencies and regulatory bodies). Our analysis revealed that the quantitative measures were not sufficient for understanding the research and policy impact of ACIAR’s investments; however, the qualitative measures developed through the study were effective in identifying the ongoing policy and research impact of ACIAR’s investment.