Asking SMART questions can link sustainability research to practitioner needs

Despite large investments in research and development, India struggles to translate sustainability research into tangible impacts that help solve environmental challenges across its large geography. Anjali Neelakantan and Veena Srinivasan suggest this is in part due to the challenge of finding sharply framed bottom-up research questions and discuss a new project to match such questions to research students who can answer them.

India grapples with complex socio-economic and environmental challenges, and there is a major gap in the knowledge required to understand and solve these problems. One cause is an approach to knowledge generation that focuses on what researchers can answer rather than on what practitioners are asking. This approach regularly fails for several reasons:

Relevant questions: Many problems in these sectors have been studied in detail but require a contextual lens. Research students pore over the literature to identify gaps in knowledge and undertake additional research to fill a few of them. This leads to research that questions the assumptions of established work and aims for novelty and scale, but often lacks a practical application. In contrast, communities on the ground have fundamental questions that are not being studied, as they are deemed ‘too simple’.

‘Good’ research is too often theory building: To publish research, editors demand theoretical novelty and generalisability; consequently, this is what research students chasing the ‘h-index’ must produce. These requirements mean that the grounded research ‘demanded’ by civil society organisations and local governments is neglected.

A ‘push’ mindset: Research dissemination often aims to push findings towards decision makers in policy or practice. These recipients may not know how to make use of the evidence, as it doesn’t fit their questions and timescales. Yet policymakers and practitioners regularly encounter knowledge gaps in their work. A ‘pull’ mindset would enable research students to work with them and fill these critical gaps.

Question framing: There is often confusion between what research students can objectively answer and what they can’t. Practitioner and policymaker questions are often framed in terms that fall outside what academic research can address. Decisions that involve value judgements and trade-offs between stakeholders can only be resolved politically or through dialogue, and no amount of empirical evidence can answer them.

A skills deficit: Most students in India complete their Master’s, or even PhD, programmes without having taken research methodology courses beyond their immediate disciplines. Rigorous training in research design, especially in transdisciplinary contexts, is urgently needed.

Finding Good Questions

In short, researchers, especially students and early career researchers, are regularly left with answers to questions no one asks, and no answers to the questions people do ask. Research should, of course, be able to take a long-term view and answer questions with no immediate utility. But for research aimed at having a positive impact on the world, there remains a gap, and to fill it we could start by asking better questions.

One possible route is the (infamous) SMART framework, developed by George T. Doran in 1981. Familiar to many from corporate strategy and objective setting, it provides a plausible system for organising academic research.

SMART stands for Specific, Measurable, Achievable, Relevant and Timely. By tweaking this framework, we can ask and, more importantly, answer good questions. Every question should pass through the following filters:

  • Has this question already been answered? If so, can we make it accessible to a wider audience in a synthesised format?

Specific (S):

  • Can anyone reading this understand the question in the same way?
  • Is every part of the question well defined?

Measurable (M):

  • Does the question have a well-bounded answer?
  • If you heard the answer, would you feel like you had learned something or that the question has been answered?

Achievable (A):

  • What can be done with the resources we have?
  • Is secondary data available to answer this question?
  • If not, can primary qualitative/quantitative data be collected to answer the question?
  • What skills are required to answer this question?
  • Is there a feasible research design we can use to answer it?
  • Can the question be answered by a remote research student, or will it involve field work?

Relevant (R):

  • Does the question allow you to make a specific go/no-go decision?
  • Is there a decision you are struggling to make because you don’t have an answer to this question?

Timely (T):

  • How much time will it take to answer this question?

From our own experience evaluating questions, we have found this checklist can filter out redundant and tangential lines of inquiry.
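As a purely illustrative sketch (not part of any actual tooling described here), the filtering step could be expressed as a simple checklist routine. Every name, flag and judgement below is hypothetical; in practice each criterion is assessed by a person, not computed:

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    # Each flag records a human judgement on one SMART filter.
    specific: bool = False          # understood the same way by any reader
    measurable: bool = False        # has a well-bounded answer
    achievable: bool = False        # data, skills and design exist to answer it
    relevant: bool = False          # informs a concrete go/no-go decision
    timely: bool = False            # answerable on a realistic timeline
    already_answered: bool = False  # if so, synthesise rather than re-study

def smart_filter(q: Question) -> list[str]:
    """Return the list of SMART criteria the question fails."""
    if q.already_answered:
        return ["already answered: synthesise existing evidence instead"]
    checks = {
        "Specific": q.specific,
        "Measurable": q.measurable,
        "Achievable": q.achievable,
        "Relevant": q.relevant,
        "Timely": q.timely,
    }
    return [name for name, ok in checks.items() if not ok]

# The subsidy question discussed below fails on Measurable:
# 'viable' is not a well-bounded outcome.
subsidy_q = Question(
    text="Is farming viable without subsidies?",
    specific=True, achievable=True, relevant=True, timely=True,
)
print(smart_filter(subsidy_q))  # prints ['Measurable']
```

A question only proceeds when the returned list is empty; anything else tells the curator which filter to revisit.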

Here is an example of a question that failed the SMART test:

‘Is farming viable without subsidies?’

By asking this question, we run the risk of problematisation (questioning the basic tenets of a topic). This can be useful when new evidence that challenges preconceptions comes to light. In this case, however, it is well established that farming in both developing and developed countries is not possible without government subsidies; a literature review will confirm this. Whether something is viable also depends on other constraints in the system, so this is not a well-bounded question.

Here is an example of a question that passed the SMART test:

‘Have the interventions implemented as part of the agriculture production clusters (APCs) resulted in increased incomes for participating farmers in Odisha? If so, by how much?’

This is a well-defined question: APCs involve a specific set of interventions to improve market linkages and are delivered as part of large-scale government programmes. Programmatic data has been captured over the years, so to some extent secondary data can be used to answer this question. Depending on the full extent of the programme data available, it could be answered through quasi-experimental methods using readily available secondary datasets. There is therefore scope for this question to be answered remotely, which means a Master’s student in any part of the world could work on it over a three-to-four-month timeline.
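To illustrate the kind of comparison a quasi-experimental design might rest on, here is a minimal difference-in-differences calculation. All figures are invented for illustration; real APC programme data would replace them:

```python
# Mean annual farmer income, in hypothetical currency units.
treated_before, treated_after = 100.0, 130.0   # APC-participating farmers
control_before, control_after = 100.0, 110.0   # comparable non-participants

# Difference-in-differences: the change in the treated group
# minus the change in the control group.
did_estimate = (treated_after - treated_before) - (control_after - control_before)
print(did_estimate)  # prints 20.0
```

Under the standard parallel-trends assumption, that 20-unit difference would be the income gain attributable to the programme rather than to background trends affecting all farmers.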

Bridging the gap between SMART questions and actionable answers

A lot of investment and effort goes into research and publications, yet they often lack a pathway to impact. This partly reflects poor alignment between those asking good questions and those able to answer them. Sourcing questions from practitioners, curating them using the SMART framework, and having them answered collaboratively with university students provides a mechanism for bringing the two together. For research students, it is challenging to find questions that are both academically rigorous and have prospects of delivering impact on the ground. Improving the supply of such research questions can help them contribute to these grassroots needs and demonstrate the real-world value of student research.

This is what we have been trying to do with the Research Public Square, a WELL Labs platform and matchmaker, which brings together supply and demand for grounded research in sustainability science. The platform will curate critical research questions posed by practitioners (including grassroots communities, government agencies and philanthropic organisations) and match them with research students with the skills, capabilities, mentorship, and funding (if possible) required to collect and analyse data to answer these questions.

We are currently working with some of the largest practitioners in India’s sustainability sector, including PRADAN, WASSAN, Water for People, Gram Vikas and Arghyam, to identify and curate questions that are currently relevant to their programmatic work. We receive a variety of questions, ranging from impact evaluation to correlational analysis, which shows an untapped demand for research in this space. However, curating and answering questions at scale requires a different approach than answering one or two. For instance, questions are highly variable and will often require different research designs and methods. This demands a high degree of transdisciplinary expertise and knowledge of research design during the curation process, to make sure that questions are ‘SMART’ enough to be answered and actionable for organisations.

By finding and asking the right questions, it is possible to avoid the mistake of transferring findings from one place to another in the hope that they will work in the same way. Sometimes what is necessary is an understanding of whether something has worked in a specific context and that is enough to inform a project or policy. By taking care with questions we can prioritise this kind of grounded research in a way that is rigorous, scientifically defensible and relevant.



Image Credit: Deepak kumar via Unsplash.
