

Brian M. Belcher, Katherine E. Rasmussen, Matthew R. Kemshaw, Deborah A. Zornes, Defining and assessing research quality in a transdisciplinary context, Research Evaluation , Volume 25, Issue 1, January 2016, Pages 1–17, https://doi.org/10.1093/reseval/rvv025


Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient—there is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context. We conducted a systematic review to help answer the question: What are appropriate principles and criteria for defining and assessing TDR quality? Articles were selected and reviewed seeking: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, proposed principles of research quality, proposed criteria for research quality assessment, proposed indicators and measures of research quality, and proposed processes for evaluating TDR. We used the information from the review and our own experience in two research organizations that employ TDR approaches to develop a prototype TDR quality assessment framework, organized as an evaluation rubric. We provide an overview of the relevant literature and summarize the main aspects of TDR quality identified there. Four main principles emerge: relevance, including social significance and applicability; credibility, including criteria of integration and reflexivity, added to traditional criteria of scientific rigor; legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and effectiveness, with criteria that assess actual or potential contributions to problem solving and social change.

Contemporary research in the social and environmental realms places strong emphasis on achieving ‘impact’. Research programs and projects aim to generate new knowledge but also to promote and facilitate the use of that knowledge to enable change, solve problems, and support innovation ( Clark and Dickson 2003 ). Reductionist and purely disciplinary approaches are being augmented or replaced with holistic approaches that recognize the complex nature of problems and that actively engage within complex systems to contribute to change ‘on the ground’ ( Gibbons et al. 1994 ; Nowotny, Scott and Gibbons 2001 , Nowotny, Scott and Gibbons 2003 ; Klein 2006 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Emerging fields such as sustainability science have developed out of a need to address complex and urgent real-world problems ( Komiyama and Takeuchi 2006 ). These approaches are inherently applied and transdisciplinary, with explicit goals to contribute to real-world solutions and strong emphasis on context and social engagement ( Kates 2000 ).

While there is an ongoing conceptual and theoretical debate about the nature of the relationship between science and society (e.g. Hessels 2008 ), we take a more practical starting point based on the authors’ experience in two research organizations. The first author has been involved with the Center for International Forestry Research (CIFOR) for almost 20 years. CIFOR, as part of the Consultative Group on International Agricultural Research (CGIAR), began a major transformation in 2010 that shifted the emphasis from a primary focus on delivering high-quality science to a focus on ‘…producing, assembling and delivering, in collaboration with research and development partners, research outputs that are international public goods which will contribute to the solution of significant development problems that have been identified and prioritized with the collaboration of developing countries.’ ( CGIAR 2011 ). It was always intended that CGIAR research would be relevant to priority development and conservation issues, with emphasis on high-quality scientific outputs. The new approach puts much stronger emphasis on welfare and environmental results; research centers, programs, and individual scientists now assume shared responsibility for achieving development outcomes. This requires new ways of working, with more and different kinds of partnerships and more deliberate and strategic engagement in social systems.

Royal Roads University (RRU), the home institute of all four authors, is a relatively new (created in 1995) public university in Canada. It is deliberately interdisciplinary by design, with just two faculties (Faculty of Social and Applied Science; Faculty of Management) and strong emphasis on problem-oriented research. Faculty and student research is typically ‘applied’ in the Organization for Economic Co-operation and Development (2012) sense of ‘original investigation undertaken in order to acquire new knowledge … directed primarily towards a specific practical aim or objective’.

An increasing amount of the research done within both of these organizations can be classified as transdisciplinary research (TDR). TDR crosses disciplinary and institutional boundaries, is context specific, and problem oriented ( Klein 2006 ; Carew and Wickson 2010 ). It combines and blends methodologies from different theoretical paradigms, includes a diversity of both academic and lay actors, and is conducted with a range of research goals, organizational forms, and outputs ( Klein 2006 ; Boix-Mansilla 2006a ; Erno-Kjolhede and Hansson 2011 ). The problem-oriented nature of TDR and the importance placed on societal relevance and engagement are broadly accepted as defining characteristics of TDR ( Carew and Wickson 2010 ).

The experience developing and using TDR approaches at CIFOR and RRU highlights the need for a parallel evolution of principles and criteria for evaluating research quality in a TDR context. Scientists appreciate and often welcome the need and the opportunity to expand the reach of their research, to contribute more effectively to change processes. At the same time, they feel the pressure of added expectations and are looking for guidance.

In any activity, we need principles, guidelines, criteria, or benchmarks that can be used to design the activity, assess its potential, and evaluate its progress and accomplishments. Effective research quality criteria are necessary to guide the funding, management, ongoing development, and advancement of research methods, projects, and programs. The lack of quality criteria to guide and assess research design and performance is seen as hindering the development of transdisciplinary approaches ( Bergmann et al. 2005 ; Feller 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2008 ; Carew and Wickson 2010 ; Jahn and Keil 2015 ). Appropriate quality evaluation is essential to ensure that research receives support and funding, and to guide and train researchers and managers to realize high-quality research ( Boix-Mansilla 2006a ; Klein 2008 ; Aagaard-Hansen and Svedin 2009 ; Carew and Wickson 2010 ).

Traditional disciplinary research is built on well-established methodological and epistemological principles and practices. Within disciplinary research, quality has been defined narrowly, with the primary criteria being scientific excellence and scientific relevance ( Feller 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Disciplines have well-established (often implicit) criteria and processes for the evaluation of quality in research design ( Erno-Kjolhede and Hansson 2011 ). TDR that is highly context specific, problem oriented, and includes nonacademic societal actors in the research process is challenging to evaluate ( Wickson, Carew and Russell 2006 ; Aagaard-Hansen and Svedin 2009 ; Andrén 2010 ; Carew and Wickson 2010 ; Huutoniemi 2010 ). There is no one definition or understanding of what constitutes quality, nor a set guide for how to do TDR ( Lincoln 1995 ; Morrow 2005 ; Oberg 2008 ; Andrén 2010 ; Huutoniemi 2010 ). When epistemologies and methods from more than one discipline are used, disciplinary criteria may be insufficient and criteria from more than one discipline may be contradictory; cultural conflicts can arise as a range of actors use different terminology for the same concepts or the same terminology for different concepts ( Chataway, Smith and Wield 2007 ; Oberg 2008 ).

Current research evaluation approaches as applied to individual researchers, programs, and research units are still based primarily on measures of academic outputs (publications and the prestige of the publishing journal), citations, and peer assessment ( Boix-Mansilla 2006a ; Feller 2006 ; Erno-Kjolhede and Hansson 2011 ). While these indicators of research quality remain relevant, additional criteria are needed to address the innovative approaches and the diversity of actors, outputs, outcomes, and long-term social impacts of TDR. It can be difficult to find appropriate outlets for TDR publications simply because the research does not meet the expectations of traditional discipline-oriented journals. Moreover, a wider range of inputs and of outputs means that TDR may result in fewer academic outputs. This has negative implications for transdisciplinary researchers, whose performance appraisals and long-term career progression are largely governed by traditional publication and citation-based metrics of evaluation. Research managers, peer reviewers, academic committees, and granting agencies all struggle with how to evaluate and how to compare TDR projects ( ex ante or ex post ) in the absence of appropriate criteria to address epistemological and methodological variability. The extent of engagement of stakeholders 1 in the research process will vary by project, from information sharing through to active collaboration ( Brandt et al. 2013) , but at any level, the involvement of stakeholders adds complexity to the conceptualization of quality. We need to know what ‘good research’ is in a transdisciplinary context.

As Tijssen ( 2003 : 93) put it: ‘Clearly, in view of its strategic and policy relevance, developing and producing generally acceptable measures of “research excellence” is one of the chief evaluation challenges of the years to come’. Clear criteria are needed for research quality evaluation to foster excellence while supporting innovation: ‘A principal barrier to a broader uptake of TD research is a lack of clarity on what good quality TD research looks like’ ( Carew and Wickson 2010 : 1154). In the absence of alternatives, many evaluators, including funding bodies, rely on conventional, discipline-specific measures of quality which do not address important aspects of TDR.

There is an emerging literature that reviews, synthesizes, or empirically evaluates knowledge and best practice in research evaluation in a TDR context and that proposes criteria and evaluation approaches ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Klein 2008 ; Carew and Wickson 2010 ; ERIC 2010 ; de Jong et al. 2011 ; Spaapen and Van Drooge 2011 ). Much of it comes from a few fields, including health care, education, and evaluation; little comes from the natural resource management and sustainability science realms, despite the need for guidance in these areas. National-scale reviews have begun to recognize the need for broader research evaluation criteria but have made little progress in addressing it ( Donovan 2008 ; KNAW 2009 ; REF 2011 ; ARC 2012 ; TEC 2012 ). A summary of the national reviews examined in the development of this research is provided in Supplementary Appendix 1 . While there are some published evaluation schemes for TDR and interdisciplinary research (IDR), there is ‘substantial variation in the balance different authors achieve between comprehensiveness and over-prescription’ ( Wickson and Carew 2014 : 256) and still a need to develop standardized quality criteria that are ‘uniquely flexible to provide valid, reliable means to evaluate and compare projects, while not stifling the evolution and responsiveness of the approach’ ( Wickson and Carew 2014 : 256).

There is a need and an opportunity to synthesize current ideas about how to define and assess quality in TDR. To address this, we conducted a systematic review of the literature that discusses the definitions of research quality as well as the suggested principles and criteria for assessing TDR quality. The aim is to identify appropriate principles and criteria for defining and measuring research quality in a transdisciplinary context and to organize those principles and criteria as an evaluation framework.

The review question was: What are appropriate principles, criteria, and indicators for defining and assessing research quality in TDR?

This article presents the method used for the systematic review and our synthesis, followed by key findings. Theoretical arguments for why new principles and criteria are needed for TDR are presented, along with associated discussion of the evaluation process. A framework of principles and criteria for TDR quality evaluation, derived from our synthesis of the literature, is then presented along with guidance on its application. Finally, recommendations for next steps in this research and needs for future research are discussed.

2.1 Systematic review

Systematic review is a rigorous, transparent, and replicable methodology that has become widely used to inform evidence-based policy, management, and decision making ( Pullin and Stewart 2006 ; CEE 2010). Systematic reviews follow a detailed protocol with explicit inclusion and exclusion criteria to ensure a repeatable and comprehensive review of the target literature. Review protocols are shared and often published as peer reviewed articles before undertaking the review to invite critique and suggestions. Systematic reviews are most commonly used to synthesize knowledge on an empirical question by collating data and analyses from a series of comparable studies, though methods used in systematic reviews are continually evolving and are increasingly being developed to explore a wider diversity of questions ( Chandler 2014 ). The current study question is theoretical and methodological, not empirical. Nevertheless, with a diverse and diffuse literature on the quality of TDR, a systematic review approach provides a method for a thorough and rigorous review. The protocol is published and available at http://www.cifor.org/online-library/browse/view-publication/publication/4382.html . A schematic diagram of the systematic review process is presented in Fig. 1 .

Search process.

2.2 Search terms

Search terms were designed to identify publications that discuss the evaluation or assessment of quality or excellence 2 of research 3 that is done in a TDR context. Search terms are listed online in Supplementary Appendices 2 and 3 . The search strategy favored sensitivity over specificity to ensure that we captured the relevant information.

2.3 Databases searched

ISI Web of Knowledge (WoK) and Scopus were searched between 26 June 2013 and 6 August 2013. The combined searches yielded 15,613 unique citations. Additional searches to update the first searches were carried out in June 2014 and March 2015, for a total of 19,402 titles scanned. Google Scholar (GS) was searched separately by two reviewers during each search period. The first reviewer’s search was done on 2 September 2013 (Search 1) and 3 September 2013 (Search 2), yielding 739 and 745 titles, respectively. The second reviewer’s search was done on 19 November 2013 (Search 1) and 25 November 2013 (Search 2), yielding 769 and 774 titles, respectively. A third search done on 17 March 2015 by one reviewer yielded 98 new titles. Reviewers found high redundancy between the WoK/Scopus searches and the GS searches.

2.4 Targeted journal searches

Highly relevant journals, including Research Evaluation, Evaluation and Program Planning, Scientometrics, Research Policy, Futures, American Journal of Evaluation, Evaluation Review, and Evaluation, were comprehensively searched using broader, more inclusive search strings that would have been unmanageable for the main database search.

2.5 Supplementary searches

References in included articles were reviewed to identify additional relevant literature. td-net’s ‘Tour d’Horizon of Literature’ lists important inter- and transdisciplinary publications, collected through an invitation to experts in the field to submit publications ( td-net 2014 ). Six additional articles were identified through these supplementary searches.

2.6 Limitations of coverage

The review was limited to English-language published articles and material available through internet searches. There was no systematic way to search the gray (unpublished) literature, but relevant material identified through supplementary searches was included.

2.7 Inclusion of articles

This study sought articles that review, critique, discuss, and/or propose principles, criteria, indicators, and/or measures for the evaluation of quality relevant to TDR. As noted, this yielded a large number of titles. We then selected only those articles with an explicit focus on the meaning of IDR and/or TDR quality and how to achieve, measure or evaluate it. Inclusion and exclusion criteria were developed through an iterative process of trial article screening and discussion within the research team. Through this process, inter-reviewer agreement was tested and strengthened. Inclusion criteria are listed in Tables 1 and 2 .

Inclusion criteria for title and abstract screening

Inclusion criteria for abstract and full article screening

Article screening was done in parallel by two reviewers in three rounds: (1) title, (2) abstract, and (3) full article. In cases of uncertainty, papers were included to the next round. Final decisions on inclusion of contested papers were made by consensus among the four team members.

2.8 Critical appraisal

In typical systematic reviews, individual articles are appraised to ensure that they are adequate for answering the research question and to assess the methods of each study for susceptibility to bias that could influence the outcome of the review (Petticrew and Roberts 2006). Most papers included in this review are theoretical and methodological papers, not empirical studies. Most do not have explicit methods that can be appraised with existing quality assessment frameworks. Our critical appraisal considered four criteria adapted from Spencer et al. (2003): (1) relevance to the review question, (2) clarity and logic of how information in the paper was generated, (3) significance of the contribution (are new ideas offered?), and (4) generalizability (is the context specified; do the ideas apply in other contexts?). Disagreements were discussed to reach consensus.

2.9 Data extraction and management

The review sought information on: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, principles of research quality, criteria for research quality assessment, indicators and measures of research quality, and processes for evaluating TDR. Four reviewers independently extracted data from selected articles using the parameters listed in Supplementary Appendix 4 .

2.10 Data synthesis and TDR framework design

Our aim was to synthesize ideas, definitions, and recommendations for TDR quality criteria into a comprehensive and generalizable framework for the evaluation of quality in TDR. Key ideas were extracted from each article and summarized in an Excel database. We classified these ideas into themes and ultimately into overarching principles and associated criteria of TDR quality organized as a rubric ( Wickson and Carew 2014 ). Definitions of each principle and criterion were developed and rubric statements formulated based on the literature and our experience. These criteria (adjusted appropriately to be applied ex ante or ex post ) are intended to be used to assess a TDR project. The reviewer should consider whether the project fully satisfies, partially satisfies, or fails to satisfy each criterion. More information on application is provided in Section 4.3 below.

We tested the framework on a set of completed RRU graduate theses that used transdisciplinary approaches, with an explicit problem orientation and intent to contribute to social or environmental change. Three rounds of testing were done, with revisions after each round to refine and improve the framework.

3.1 Overview of the selected articles

Thirty-eight papers satisfied the inclusion criteria. A wide range of terms are used in the selected papers, including: cross-disciplinary; interdisciplinary; transdisciplinary; methodological pluralism; mode 2; triple helix; and supradisciplinary. Eight included papers specifically focused on sustainability science or TDR in natural resource management, or identified sustainability research as a growing TDR field that needs new forms of evaluation ( Cash et al. 2002 ; Bergmann et al. 2005 ; Chataway, Smith and Wield 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Andrén 2010 ; Carew and Wickson 2010 ; Lang et al. 2012 ; Gaziulusoy and Boyle 2013 ). Carew and Wickson (2010) build on the experience in the TDR realm to propose criteria and indicators of quality for ‘responsible research and innovation’.

The selected articles are written from three main perspectives. One set is primarily interested in advancing TDR approaches. These papers recognize the need for new quality measures to encourage and promote high-quality research and to overcome perceived biases against TDR approaches in research funding and publishing. A second set of papers is written from an evaluation perspective, with a focus on improving evaluation of TDR. The third set is written from the perspective of qualitative research characterized by methodological pluralism, with many characteristics and issues relevant to TDR approaches.

The majority of the articles focus at the project scale, some at the organization level, and some do not specify. Some articles explicitly focus on ex ante evaluation (e.g. proposal evaluation), others on ex post evaluation, and many are not explicit about the project stage they are concerned with. The methods used in the reviewed articles include authors’ reflection and opinion, literature review, expert consultation, document analysis, and case study. Summaries of report characteristics are available online ( Supplementary Appendices 5–8 ). Eight articles provide comprehensive evaluation frameworks and quality criteria specifically for TDR and research-in-context. The rest of the articles discuss aspects of quality related to TDR and recommend quality definitions, criteria, and/or evaluation processes.

3.2 The need for quality criteria and evaluation methods for TDR

Many of the selected articles highlight the lack of widely agreed principles and criteria of TDR quality. They note that, in the absence of TDR quality frameworks, disciplinary criteria are used ( Morrow 2005 ; Boix-Mansilla 2006a , b ; Feller 2006 ; Klein 2006 , 2008 ; Wickson, Carew and Russell 2006 ; Scott 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Oberg 2008 ; Erno-Kjolhede and Hansson 2011 ), and evaluations are often carried out by reviewers who lack cross-disciplinary experience and do not have a shared understanding of quality ( Aagaard-Hansen and Svedin 2009 ). Quality is discussed by many as a relative concept, developed within disciplines, and therefore defined and understood differently in each field ( Morrow 2005 ; Klein 2006 ; Oberg 2008 ; Mitchell and Willets 2009 ; Huutoniemi 2010 ; Hellstrom 2011 ). Jahn and Keil (2015) point out the difficulty of creating a common set of quality criteria for TDR in the absence of a standard agreed-upon definition of TDR. Many of the selected papers argue the need to move beyond narrowly defined ideas of ‘scientific excellence’ to incorporate a broader assessment of quality which includes societal relevance ( Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ). This shift includes greater focus on research organization, research process, and continuous learning, rather than primarily on research outputs ( Hemlin and Rasmussen 2006 ; de Jong et al. 2011 ; Wickson and Carew 2014 ; Jahn and Keil 2015 ). This responds to and reflects societal expectations that research should be accountable and have demonstrated utility ( Cloete 1997 ; Defila and Di Giulio 1999 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Stige 2009 ).

A central aim of TDR is to achieve socially relevant outcomes, and TDR quality criteria should demonstrate accountability to society ( Cloete 1997 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; de Jong et al. 2011 ). Integration and mutual learning are core elements of TDR; it is not enough to transcend boundaries and incorporate societal knowledge but, as Carew and Wickson ( 2010 : 1147) summarize: ‘…the TD researcher needs to put effort into integrating these potentially disparate knowledges with a view to creating useable knowledge. That is, knowledge that can be applied in a given problem context and has some prospect of producing desired change in that context’. The inclusion of societal actors in the research process, the unique and often dispersed organization of research teams, and the deliberate integration of different traditions of knowledge production all fall outside of conventional assessment criteria ( Feller 2006 ).

Not only does the range of criteria need to be updated, expanded, and agreed upon, with assumptions made explicit ( Boix-Mansilla 2006a ; Klein 2006 ; Scott 2007 ), but, given the specific problem orientation of TDR, reviewers beyond disciplinary academic peers also need to be included in the assessment of quality ( Cloete 1997 ; Scott 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ). Several authors discuss the lack of reviewers with strong cross-disciplinary experience ( Aagaard-Hansen and Svedin 2009 ) and the lack of common criteria, philosophical foundations, and language for use by peer reviewers ( Klein 2008 ; Aagaard-Hansen and Svedin 2009 ). Peer review of TDR could be improved with explicit TDR quality criteria and with processes in place to ensure clear dialog among reviewers.

Finally, there is the need for increased emphasis on evaluation as part of the research process ( Bergmann et al. 2005 ; Hemlin and Rasmussen 2006 ; Meyrick 2006 ; Chataway, Smith and Wield 2007 ; Stige, Malterud and Midtgarden 2009 ; Hellstrom 2011 ; Lang et al. 2012 ; Wickson and Carew 2014 ). This is particularly true in large, complex, problem-oriented research projects. Ongoing monitoring of the research organization and process contributes to learning and adaptive management while research is underway and so helps improve quality. As stated by Wickson and Carew ( 2014 : 262): ‘We believe that in any process of interpreting, rearranging and/or applying these criteria, open negotiation on their meaning and application would only positively foster transformative learning, which is a valued outcome of good TD processes’.

3.3 TDR quality criteria and assessment approaches

Many of the papers provide quality criteria and/or describe constituent parts of quality. Aagaard-Hansen and Svedin (2009) define three key aspects of quality: societal relevance, impact, and integration. Meyrick (2006) states that quality research is transparent and systematic. Boaz and Ashby (2003) describe quality in four dimensions: methodological quality, quality of reporting, appropriateness of methods, and relevance to policy and practice. Although each article deconstructs quality in different ways and with different foci and perspectives, there is significant overlap, and recurring themes emerge across the papers reviewed. There is a broadly shared perspective that TDR quality is a multidimensional concept shaped by the specific context within which research is done ( Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ), making a universal definition of TDR quality difficult or impossible ( Huutoniemi 2010 ).

Huutoniemi (2010) identifies three main approaches to conceptualizing quality in IDR and TDR: (1) using existing disciplinary standards adapted as necessary for IDR; (2) building on the quality standards of disciplines while fundamentally incorporating ways to deal with epistemological integration, problem focus, context, stakeholders, and process; and (3) radical departure from any disciplinary orientation in favor of external, emergent, context-dependent quality criteria that are defined and enacted collaboratively by a community of users.

The first approach is prominent in current research funding and evaluation protocols. Conservative approaches of this kind are criticized for privileging disciplinary research and for failing to provide guidance and quality control for transdisciplinary projects. The third approach would ‘undermine the prevailing status of disciplinary standards in the pursuit of a non-disciplinary, integrated knowledge system’ ( Huutoniemi 2010 : 313). No predetermined quality criteria are offered, only contextually embedded criteria that need to be developed within a specific research project. To some extent, this is the approach taken by Spaapen, Dijstelbloem and Wamelink (2007) and de Jong et al. (2011) . Such a sui generis approach cannot be used to compare across projects. Most of the reviewed papers take the second approach, and recommend TDR quality criteria that build on a disciplinary base.

Eight articles present comprehensive frameworks for quality evaluation, each with a unique approach, perspective, and goal. Two of these build comprehensive lists of criteria with associated questions to be chosen based on the needs of the particular research project ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ). Wickson and Carew (2014) develop a reflective heuristic tool with questions to guide researchers through ongoing self-evaluation; they also list criteria for external evaluation and for comparison between projects, and their comprehensive rubric for the evaluation of Research and Innovation builds on their extensive previous work in TDR. Spaapen, Dijstelbloem and Wamelink (2007) design an approach that evaluates a research project against its own goals and is not meant to compare between projects. Finally, Lang et al. (2012) , Mitchell and Willets (2009) , and Jahn and Keil (2015) develop criteria checklists that can be applied across transdisciplinary projects.

Bergmann et al. (2005) and Carew and Wickson (2010) organize their frameworks into managerial elements of the research project, concerning problem context, participation, management, and outcomes. Lang et al. (2012) and Defila and Di Giulio (1999) focus on the chronological stages in the research process and identify criteria at each stage. Mitchell and Willets (2009) , with a focus on doctoral studies, adapt standard dissertation evaluation criteria to accommodate broader, pluralistic, and more complex studies. Spaapen, Dijstelbloem and Wamelink (2007) focus on evaluating ‘research-in-context’. Wickson and Carew (2014) create a rubric based on criteria that span the research process, its stages, and all actors included. Jahn and Keil (2015) organize their quality criteria into three categories: quality of the research problems, quality of the research process, and quality of the research results.

The remaining papers highlight key themes that must be considered in TDR evaluation. Dominant themes include: engagement with the problem context, collaboration and inclusion of stakeholders, a heightened need for explicit communication and reflection, integration of epistemologies, recognition of diverse outputs, a focus on having an impact, and reflexivity and adaptation throughout the process. The focus on societal problems in context and the increased engagement of stakeholders in the research process introduce higher levels of complexity that cannot be accommodated by disciplinary standards (Defila and Di Giulio 1999; Bergmann et al. 2005; Wickson, Carew and Russell 2006; Spaapen, Dijstelbloem and Wamelink 2007; Klein 2008).

Finally, authors discuss process (Defila and Di Giulio 1999; Bergmann et al. 2005; Boix-Mansilla 2006b; Spaapen, Dijstelbloem and Wamelink 2007) and utilitarian values (Hemlin 2006; Ernø-Kjølhede and Hansson 2011; Bornmann 2013) as essential aspects of quality in TDR. Common themes include: (1) the importance of formative and process-oriented evaluation (Bergmann et al. 2005; Hemlin 2006; Stige 2009); (2) emphasis on the evaluation process itself (not just criteria or outcomes) and reflexive dialog for learning (Bergmann et al. 2005; Boix-Mansilla 2006b; Klein 2008; Oberg 2008; Stige, Malterud and Midtgarden 2009; Aagaard-Hansen and Svedin 2009; Carew and Wickson 2010; Huutoniemi 2010); (3) the need for peers who are experienced and knowledgeable about TDR for fair peer review (Boix-Mansilla 2006a, b; Klein 2006; Hemlin 2006; Scott 2007; Aagaard-Hansen and Svedin 2009); (4) the inclusion of stakeholders in the evaluation process (Bergmann et al. 2005; Scott 2007; Andrén 2010); and (5) the importance of evaluations that are built in-context (Defila and Di Giulio 1999; Feller 2006; Spaapen, Dijstelbloem and Wamelink 2007; de Jong et al. 2011).

While each reviewed approach offers helpful insights, none adequately fulfills the need for a broad and adaptable framework for assessing TDR quality. Wickson and Carew ( 2014 : 257) highlight the need for quality criteria that achieve balance between ‘comprehensiveness and over-prescription’: ‘any emerging quality criteria need to be concrete enough to provide real guidance but flexible enough to adapt to the specificities of varying contexts’. Based on our experience, such a framework should be:

Comprehensive: It should accommodate the main aspects of TDR, as identified in the review.

Time/phase adaptable: It should be applicable across the project cycle.

Scalable: It should be useful for projects of different scales.

Versatile: It should be useful to researchers and collaborators as a guide to research design and management, and to internal and external reviews and assessors.

Comparable: It should allow comparison of quality between and across projects/programs.

Reflexive: It should encourage and facilitate self-reflection and adaptation based on ongoing learning.

In this section, we synthesize the key principles and criteria of quality in TDR that were identified in the reviewed literature. Principles are the essential elements of high-quality TDR. Criteria are the conditions that need to be met in order to achieve a principle. We conclude by providing a framework for the evaluation of quality in TDR ( Table 3 ) and guidance for its application.

Transdisciplinary research quality assessment framework

a Research problems are the particular topic, area of concern, question to be addressed, challenge, opportunity, or focus of the research activity. Research problems are related to the societal problem but take on a specific focus, or framing, within a societal problem.

b Problem context refers to the social and environmental setting(s) that gives rise to the research problem, including aspects of: location; culture; scale in time and space; social, political, economic, and ecological/environmental conditions; resources and societal capacity available; uncertainty, complexity, and novelty associated with the societal problem; and the extent of agency that is held by stakeholders ( Carew and Wickson 2010 ).

c Words such as ‘appropriate’, ‘suitable’, and ‘adequate’ are used deliberately to allow for quality criteria to be flexible and specific enough to the needs of individual research projects ( Oberg 2008 ).

d Research process refers to the series of decisions made and actions taken throughout the entire duration of the research project and encompassing all aspects of the research project.

e Reflexivity refers to an iterative process of formative, critical reflection on the important interactions and relationships between a research project’s process, context, and product(s).

f In an ex ante evaluation, ‘evidence of’ would be replaced with ‘potential for’.

There is a strong trend in the reviewed articles to recognize the need for appropriate measures of scientific quality (usually adapted from disciplinary antecedents), but also to consider broader sets of criteria regarding the societal significance and applicability of research, and the need for engagement and representation of stakeholder values and knowledge. Cash et al. (2002) conceptualize three key aspects of effective sustainability research: salience (or relevance), credibility, and legitimacy. These are presented as necessary attributes for research to successfully produce transferable, useful information that can cross boundaries between disciplines, across scales, and between science and society. Many of the papers also refer to the principle that high-quality TDR should be effective in terms of contributing to the solution of problems. These four principles are discussed in the following sections.

4.1.1 Relevance

Relevance is the importance, significance, and usefulness of the research project's objectives, process, and findings to the problem context and to society. This includes the appropriateness of the timing of the research, the questions being asked, the outputs, and the scale of the research in relation to the societal problem being addressed. Good-quality TDR addresses important social/environmental problems and produces knowledge that is useful for decision making and problem solving (Cash et al. 2002; Klein 2006). As Ernø-Kjølhede and Hansson (2011: 140) explain, quality ‘is first and foremost about creating results that are applicable and relevant for the users of the research’. Researchers must demonstrate an in-depth knowledge of and ongoing engagement with the problem context in which their research takes place (Wickson, Carew and Russell 2006; Stige, Malterud and Midtgarden 2009; Mitchell and Willets 2009). From the early steps of problem formulation and research design through to the appropriate and effective communication of research findings, the applicability and relevance of the research to the societal problem must be explicitly stated and incorporated.

4.1.2 Credibility

Credibility refers to whether or not the research findings are robust and the knowledge produced is scientifically trustworthy. This includes clear demonstration that the data are adequate, with well-presented methods and logical interpretations of findings. High-quality research is authoritative, transparent, defensible, believable, and rigorous. This is the traditional purview of science, and traditional disciplinary criteria can be applied in TDR evaluation to an extent. Additional and modified criteria are needed to address the integration of epistemologies and methodologies and the development of novel methods through collaboration, the broad preparation and competencies required to carry out the research, and the need for reflection and adaptation when operating in complex systems. Having researchers actively engaged in the problem context and including extra-scientific actors as part of the research process helps to achieve relevance and legitimacy of the research; it also adds complexity and heightened requirements of transparency, reflection, and reflexivity to ensure objective, credible research is carried out.

Active reflexivity is a criterion of credibility of TDR that may seem to contradict more rigid disciplinary methodological traditions (Carew and Wickson 2010). Practitioners of TDR recognize that credible work in these problem-oriented fields requires active reflexivity, epitomized by ongoing learning, flexibility, and adaptation to ensure the research approach and objectives remain relevant and fit-to-purpose (Lincoln 1995; Bergmann et al. 2005; Wickson, Carew and Russell 2006; Mitchell and Willets 2009; Andrén 2010; Carew and Wickson 2010; Wickson and Carew 2014). Changes made during the research process must be justified and reported transparently and explicitly to maintain credibility.

The need for critical reflection on potential bias and limitations becomes more important to maintain credibility of research-in-context ( Lincoln 1995 ; Bergmann et al. 2005 ; Mitchell and Willets 2009 ; Stige, Malterud and Midtgarden 2009 ). Transdisciplinary researchers must ensure they maintain a high level of objectivity and transparency while actively engaging in the problem context. This point demonstrates the fine balance between different aspects of quality, in this case relevance and credibility, and the need to be aware of tensions and to seek complementarities ( Cash et al. 2002 ).

4.1.3 Legitimacy

Legitimacy refers to whether the research process is perceived as fair and ethical by end-users. In other words, is it acceptable and trustworthy in the eyes of those who will use it? This requires the appropriate inclusion and consideration of diverse values and interests, and the ethical and fair representation of all involved. Legitimacy may be achieved in part through the genuine inclusion of stakeholders in the research process. Whereas credibility refers to technical aspects of sound research, legitimacy deals with sociopolitical aspects of the knowledge production process and the products of research. Do stakeholders trust the researchers and the research process, including funding sources and other sources of potential bias? Do they feel represented? Legitimate TDR ‘considers appropriate values, concerns, and perspectives of different actors’ (Cash et al. 2002: 2) and incorporates these perspectives into the research process through collaboration and mutual learning (Bergmann et al. 2005; Chataway, Smith and Wield 2007; Andrén 2010; Huutoniemi 2010). A fair and ethical process is important to uphold standards of quality in all research. However, there are additional considerations that are unique to TDR.

Because TDR happens in-context and often in collaboration with societal actors, the disclosure of researcher perspective and a transparent statement of all partnerships, financing, and collaboration is vital to ensure an unbiased research process ( Lincoln 1995 ; Defila and Di Giulio 1999 ; Boaz and Ashby 2003 ; Barker and Pistrang 2005 ; Bergmann et al. 2005 ). The disclosure of perspective has both internal and external aspects, on one hand ensuring the researchers themselves explicitly reflect on and account for their own position, potential sources of bias, and limitations throughout the process, and on the other hand making the process transparent to those external to the research group who can then judge the legitimacy based on their perspective of fairness ( Cash et al. 2002 ).

TDR includes the engagement of societal actors along a continuum of participation from consultation to co-creation of knowledge ( Brandt et al. 2013 ). Regardless of the depth of participation, all processes that engage societal actors must ensure that inclusion/engagement is genuine, roles are explicit, and processes for effective and fair collaboration are present ( Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Hellstrom 2012 ). Important considerations include: the accurate representation of those involved; explicit and agreed-upon roles and contributions of actors; and adequate planning and procedures to ensure all values, perspectives, and contexts are adequately and appropriately incorporated. Mitchell and Willets (2009) consider cultural competence as a key criterion that can support researchers in navigating diverse epistemological perspectives. This is similar to what Morrow terms ‘social validity’, a criterion that asks researchers to be responsive to and critically aware of the diversity of perspectives and cultures influenced by their research. Several authors highlight that in order to develop this critical awareness of the diversity of cultural paradigms that operate within a problem situation, researchers should practice responsive, critical, and/or communal reflection ( Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Mitchell and Willets 2009 ; Carew and Wickson 2010 ). Reflection and adaptation are important quality criteria that cut across multiple principles and facilitate learning throughout the process, which is a key foundation to TD inquiry.

4.1.4 Effectiveness

We define effective research as research that contributes to positive change in the social, economic, and/or environmental problem context. Transdisciplinary inquiry is rooted in the objective of solving real-world problems (Klein 2008; Carew and Wickson 2010) and must have the potential to (ex ante) or actually (ex post) make a difference if it is to be considered of high quality (Ernø-Kjølhede and Hansson 2011). Potential research effectiveness can be indicated and assessed at the proposal stage and during the research process through: a clear and stated intention to address and contribute to a societal problem, the establishment of the research process and objectives in relation to the problem context, and continuous reflection on the usefulness of the research findings and products to the problem (Bergmann et al. 2005; Lahtinen et al. 2005; de Jong et al. 2011).

Assessing research effectiveness ex post remains a major challenge, especially in complex transdisciplinary approaches. Conventional and widely used measures of ‘scientific impact’ count outputs such as journal articles and other publications and citations of those outputs (e.g. H index; i10 index). While these are useful indicators of scholarly influence, they are insufficient and inappropriate measures of research effectiveness where research aims to contribute to social learning and change. We need to also (or alternatively) focus on other kinds of research and scholarship outputs and outcomes and the social, economic, and environmental impacts that may result.

For many authors, contributing to learning and building of societal capacity are central goals of TDR ( Defila and Di Giulio 1999 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Carew and Wickson 2010 ; Erno-Kjolhede and Hansson 2011 ; Hellstrom 2011 ), and so are considered part of TDR effectiveness. Learning can be characterized as changes in knowledge, attitudes, or skills and can be assessed directly, or through observed behavioral changes and network and relationship development. Some evaluation methodologies (e.g. Outcome Mapping ( Earl, Carden and Smutylo 2001 )) specifically measure these kinds of changes. Other evaluation methodologies consider the role of research within complex systems and assess effectiveness in terms of contributions to changes in policy and practice and resulting social, economic, and environmental benefits ( ODI 2004 , 2012 ; White and Phillips 2012 ; Mayne et al. 2013 ).

4.2 TDR quality criteria

TDR quality criteria and their definitions (explicit or implicit) were extracted from each article and summarized in an Excel database. These criteria were classified into themes corresponding to the four principles identified above, sorted and refined to develop sets of criteria that are comprehensive, mutually exclusive, and representative of the ideas presented in the reviewed articles. Within each principle, the criteria are organized roughly in the sequence of a typical project cycle (e.g. with research design following problem identification and preceding implementation). Definitions of each criterion were developed to reflect the concepts found in the literature, tested and refined iteratively to improve clarity. Rubric statements were formulated based on the literature and our own experience.

The complete set of principles, criteria, and definitions is presented as the TDR Quality Assessment Framework ( Table 3 ).

4.3 Guidance on the application of the framework

4.3.1 Timing

Most criteria can be applied at each stage of the research process, ex ante, mid-term, and ex post, using appropriate interpretations at each stage. Ex ante (i.e. proposal) assessment should focus on a project’s explicitly stated intentions and approaches to address the criteria. Mid-term indicators will focus on the research process and whether or not it is being implemented in a way that will satisfy the criteria. Ex post assessment should consider whether the research has been done appropriately for the purpose and whether the desired results have been achieved.

4.3.2 New meanings for familiar terms

Many of the terms used in the framework are extensions of disciplinary criteria and share the same or similar names and perhaps similar but nuanced meaning. The principles and criteria used here extend beyond disciplinary antecedents and include new concepts and understandings that encapsulate the unique characteristics and needs of TDR and allow for evaluation and definition of quality in TDR. This is especially true in the criteria related to credibility. These criteria are analogous to traditional disciplinary criteria, but with much stronger emphasis on grounding in both the scientific and the social/environmental contexts. We urge readers to pay close attention to the definitions provided in Table 3 as well as the detailed descriptions of the principles in Section 4.1.

4.3.3 Using the framework

The TDR quality framework ( Table 3 ) is designed to be used to assess TDR according to a project’s purpose; i.e. the criteria must be interpreted with respect to the context and goals of an individual research activity. The framework lists the main criteria synthesized from the literature and our experience, organized within the principles of relevance, credibility, legitimacy, and effectiveness. The table presents the criteria within each principle, ordered to approximate a typical process of identifying a research problem and designing and implementing research. We recognize that the actual process in any given project will be iterative and will not necessarily follow this sequence, but it provides a logical flow. A concise definition is provided in the second column to explain each criterion. We then provide a rubric statement in the third column, phrased to be applied when the research has been completed. In most cases, the same statement can be used at the proposal stage with a simple tense change or other minor grammatical revision, except for the criteria relating to effectiveness. As discussed above, assessing effectiveness in terms of outcomes and/or impact requires evaluation research; at the proposal stage, it is only possible to assess potential effectiveness.

Many rubrics offer a set of statements for each criterion that represent progressively higher levels of achievement; the evaluator is asked to select the best match. In practice, this often results in vague and relative statements of merit that are difficult to apply. We have opted to present a single rubric statement in absolute terms for each criterion. The assessor can then rank how well a project satisfies each criterion using a simple three-point Likert scale. If a project fully satisfies a criterion—that is, if there is evidence that the criterion has been addressed in a way that is coherent, explicit, sufficient, and convincing—it should be ranked as a 2 for that criterion. A score of 2 means that the evaluator is persuaded that the project addressed that criterion in an intentional, appropriate, explicit, and thorough way. A score of 1 indicates some evidence that the criterion was considered, but the treatment is incomplete, lacks clear intention, or is otherwise unsatisfactory. For example, a score of 1 would be given when a criterion is explicitly discussed but poorly addressed, or when there is some indication that the criterion has been considered and partially addressed but it has not been treated explicitly, thoroughly, or adequately. A score of 0 indicates that there is no evidence that the criterion was addressed, or that it was addressed in a way that was misguided or inappropriate.
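As a minimal illustration of the scoring scheme described above, the following sketch tallies 0–2 rubric scores per principle and overall. The criterion names and the unweighted summing are our own illustrative assumptions, not part of the published framework.

```python
# Illustrative sketch (not from the article): tallying a TDR quality
# assessment in which each criterion is scored 0, 1, or 2.
# Principle and criterion names below are hypothetical examples.

def summarize_scores(scores):
    """Return (total, maximum, per-principle totals) for a rubric assessment.

    `scores` maps principle -> {criterion: score}, each score in {0, 1, 2}.
    """
    per_principle = {}
    for principle, criteria in scores.items():
        for criterion, score in criteria.items():
            if score not in (0, 1, 2):
                raise ValueError(f"{criterion}: score must be 0, 1, or 2")
        per_principle[principle] = sum(criteria.values())
    total = sum(per_principle.values())
    maximum = 2 * sum(len(criteria) for criteria in scores.values())
    return total, maximum, per_principle

# Example assessment of a hypothetical project:
assessment = {
    "relevance": {"socially relevant problem": 2, "engagement with context": 1},
    "credibility": {"appropriate methods": 2, "reflexivity": 1},
    "legitimacy": {"genuine inclusion": 1, "disclosure of perspective": 2},
    "effectiveness": {"contribution to change": 0},
}
total, maximum, by_principle = summarize_scores(assessment)
print(total, maximum, by_principle["effectiveness"])  # → 9 14 0
```

A per-principle breakdown like this keeps the context-dependence visible: a low effectiveness tally on a proposal, for instance, reads differently than the same tally ex post.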

It is critical that the evaluation be done in context, keeping in mind the purpose, objectives, and resources of the project, as well as other contextual information, such as the intended purpose of grant funding or relevant partnerships. Each project will be unique in its complexities; what is sufficient or adequate in one criterion for one research project may be insufficient or inappropriate for another. Words such as ‘appropriate’, ‘suitable’, and ‘adequate’ are used deliberately to encourage application of criteria to suit the needs of individual research projects ( Oberg 2008 ). Evaluators must consider the objectives of the research project and the problem context within which it is carried out as the benchmark for evaluation. For example, we tested the framework with RRU masters theses. These are typically small projects with limited scope, carried out by a single researcher. Expectations for ‘effective communication’ or ‘competencies’ or ‘effective collaboration’ are much different in these kinds of projects than in a multi-year, multi-partner CIFOR project. All criteria should be evaluated through the lens of the stated research objectives, research goals, and context.

The systematic review identified relevant articles from a diverse literature that nonetheless share a strong central focus. Collectively, they highlight the complexity of contemporary social and environmental problems and emphasize that addressing such issues requires combinations of new knowledge and innovation, action, and engagement. Traditional disciplinary research has often failed to provide solutions because it cannot adequately cope with complexity. New forms of research are proliferating, crossing disciplinary and academic boundaries, integrating methodologies, and engaging a broader range of research participants, as a way to make research more relevant and effective. In theory, such approaches offer great potential to contribute to transformative change. However, because these approaches are new, multidimensional, complex, and often unique, it has been difficult to know what works, how, and why. In the absence of the kinds of methodological and quality standards that guide disciplinary research, there are no generally agreed criteria for evaluating such research.

Criteria are needed to guide and to help ensure that TDR is of high quality, to inform the teaching and learning of new researchers, and to encourage and support the further development of transdisciplinary approaches. The lack of a standard and broadly applicable framework for the evaluation of quality in TDR is perceived to cause an implicit or explicit devaluation of high-quality TDR or may prevent quality TDR from being done. There is a demonstrated need for an operationalized understanding of quality that addresses the characteristics, contributions, and challenges of TDR. The reviewed articles approach the topic from different perspectives and fields of study, using different terminology for similar concepts, or the same terminology for different concepts, and with unique ways of organizing and categorizing the dimensions and quality criteria. We have synthesized and organized these concepts as key TDR principles and criteria in a TDR Quality Framework, presented as an evaluation rubric. We have tested the framework on a set of masters’ theses and found it to be broadly applicable, usable, and useful for analyzing individual projects and for comparing projects within the set. We anticipate that further testing with a wider range of projects will help further refine and improve the definitions and rubric statements. We found that the three-point Likert scale (0–2) offered sufficient variability for our purposes, and rating is less subjective than with relative rubric statements. It may be possible to increase the rating precision with more points on the scale to increase the sensitivity for comparison purposes, for example in a review of proposals for a particular grant application.

Many of the articles we reviewed emphasize the importance of the evaluation process itself. The formative, developmental role of evaluation in TDR is seen as essential to the goals of mutual learning as well as to ensure that research remains responsive and adaptive to the problem context. In order to adequately evaluate quality in TDR, the process, including who carries out the evaluations, when, and in what manner, must be revised to be suitable to the unique characteristics and objectives of TDR. We offer this review and synthesis, along with a proposed TDR quality evaluation framework, as a contribution to an important conversation. We hope that it will be useful to researchers and research managers to help guide research design, implementation and reporting, and to the community of research organizations, funders, and society at large. As underscored in the literature review, there is a need for an adapted research evaluation process that will help advance problem-oriented research in complex systems, ultimately to improve research effectiveness.

This work was supported by funding from the Canada Research Chairs program. Funding support from the Canadian Social Sciences and Humanities Research Council (SSHRC) and technical support from the Evidence Based Forestry Initiative of the Centre for International Forestry Research (CIFOR), funded by UK DfID are also gratefully acknowledged.

Supplementary data are available online.

The authors thank Barbara Livoreil and Stephen Dovers for valuable comments and suggestions on the protocol and Gillian Petrokofsky for her review of the protocol and a draft version of the manuscript. Two anonymous reviewers and the editor provided insightful critique and suggestions in two rounds that have helped to substantially improve the article.

Conflict of interest statement . None declared.

1. ‘Stakeholders’ refers to individuals and groups of societal actors who have an interest in the issue or problem that the research seeks to address.

2. The terms ‘quality’ and ‘excellence’ are often used in the literature with similar meaning. Technically, ‘excellence’ is a relative concept, referring to the superiority of a thing compared to other things of its kind. Quality is an attribute or a set of attributes of a thing. We are interested in what these attributes are or should be in high-quality research. Therefore, the term ‘quality’ is used in this discussion.

3. The terms ‘science’ and ‘research’ are not always clearly distinguished in the literature. We take the position that ‘science’ is a more restrictive term that is properly applied to systematic investigations using the scientific method. ‘Research’ is a broader term for systematic investigations using a range of methods, including but not restricted to the scientific method. We use the term ‘research’ in this broad sense.

Aagaard-Hansen, J. and Svedin, U. (2009) 'Quality Issues in Cross-disciplinary Research: Towards a Two-pronged Approach to Evaluation', Social Epistemology, 23/2: 165–76. DOI: 10.1080/02691720902992323

Andrén, S. (2010) 'A Transdisciplinary, Participatory and Action-Oriented Research Approach: Sounds Nice but What do you Mean?' [unpublished working paper]. Human Ecology Division: Lund University, 1–21. <https://lup.lub.lu.se/search/publication/1744256>

Australian Research Council (ARC) (2012) ERA 2012 Evaluation Handbook: Excellence in Research for Australia. Australia: ARC. <http://www.arc.gov.au/pdf/era12/ERA%202012%20Evaluation%20Handbook_final%20for%20web_protected.pdf>

Balsiger, P. W. (2004) 'Supradisciplinary Research Practices: History, Objectives and Rationale', Futures, 36/4: 407–21.

Bantilan, M. C. et al. (2004) 'Dealing with Diversity in Scientific Outputs: Implications for International Research Evaluation', Research Evaluation, 13/2: 87–93.

Barker, C. and Pistrang, N. (2005) 'Quality Criteria under Methodological Pluralism: Implications for Conducting and Evaluating Research', American Journal of Community Psychology, 35/3–4: 201–12.

Bergmann, M. et al. (2005) Quality Criteria of Transdisciplinary Research: A Guide for the Formative Evaluation of Research Projects. Central report of Evalunet – Evaluation Network for Transdisciplinary Research. Frankfurt am Main, Germany: Institute for Social-Ecological Research. <http://www.isoe.de/ftp/evalunet_guide.pdf>

Boaz, A. and Ashby, D. (2003) Fit for Purpose? Assessing Research Quality for Evidence Based Policy and Practice.

Boix-Mansilla, V. (2006a) 'Symptoms of Quality: Assessing Expert Interdisciplinary Work at the Frontier: An Empirical Exploration', Research Evaluation, 15/1: 17–29.

Boix-Mansilla, V. (2006b) 'Conference Report: Quality Assessment in Interdisciplinary Research and Education', Research Evaluation, 15/1: 69–74.

Bornmann, L. (2013) 'What is Societal Impact of Research and How can it be Assessed? A Literature Survey', Journal of the American Society for Information Science and Technology, 64/2: 217–33.

Brandt, P. et al. (2013) 'A Review of Transdisciplinary Research in Sustainability Science', Ecological Economics, 92: 1–15.

Cash, D., Clark, W. C., Alcock, F., Dickson, N. M., Eckley, N. and Jäger, J. (2002) Salience, Credibility, Legitimacy and Boundaries: Linking Research, Assessment and Decision Making (November 2002). KSG Working Papers Series RWP02-046. Available at SSRN: <http://ssrn.com/abstract=372280>

Carew, A. L. and Wickson, F. (2010) 'The TD Wheel: A Heuristic to Shape, Support and Evaluate Transdisciplinary Research', Futures, 42/10: 1146–55.

Collaboration for Environmental Evidence (CEE) (2013) Guidelines for Systematic Review and Evidence Synthesis in Environmental Management, Version 4.2. Environmental Evidence. <www.environmentalevidence.org/Documents/Guidelines/Guidelines4.2.pdf>

Chandler, J. (2014) Methods Research and Review Development Framework: Policy, Structure, and Process. <http://methods.cochrane.org/projects-developments/research>

Chataway, J., Smith, J. and Wield, D. (2007) 'Shaping Scientific Excellence in Agricultural Research', International Journal of Biotechnology, 9/2: 172–87.

Clark, W. C. and Dickson, N. (2003) 'Sustainability Science: The Emerging Research Program', PNAS, 100/14: 8059–61.

Consultative Group on International Agricultural Research (CGIAR) (2011) A Strategy and Results Framework for the CGIAR. <http://library.cgiar.org/bitstream/handle/10947/2608/Strategy_and_Results_Framework.pdf?sequence=4>

Cloete, N. (1997) 'Quality: Conceptions, Contestations and Comments', African Regional Consultation Preparatory to the World Conference on Higher Education, Dakar, Senegal, 1–4 April 1997.

Defila, R. and Di Giulio, A. (1999) 'Evaluating Transdisciplinary Research', Panorama: Swiss National Science Foundation Newsletter, 1: 4–27. <www.ikaoe.unibe.ch/forschung/ip/Specialissue.Pano.1.99.pdf>

Donovan, C. (2008) 'The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental, and Cultural Returns of Publicly Funded Research. Reforming the Evaluation of Research', New Directions for Evaluation, 118: 47–60.

Earl, S., Carden, F. and Smutylo, T. (2001) Outcome Mapping: Building Learning and Reflection into Development Programs. Ottawa, ON: International Development Research Centre.

Ernø-Kjølhede E. Hansson F. ( 2011 ) ‘Measuring Research Performance during a Changing Relationship between Science and Society’ , Research Evaluation , 20 / 2 : 130 – 42 .

Feller I. ( 2006 ) ‘Assessing Quality: Multiple Actors, Multiple Settings, Multiple Criteria: Issues in Assessing Interdisciplinary Research’ , Research Evaluation 15 / 1 : 5 – 15 .

Gaziulusoy A. İ. Boyle C. ( 2013 ) ‘Proposing a Heuristic Reflective Tool for Reviewing Literature in Transdisciplinary Research for Sustainability’ , Journal of Cleaner Production , 48 : 139 – 47 .

Gibbons M. et al.  . ( 1994 ) The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies . London : Sage Publications .

Hellstrom T. ( 2011 ) ‘Homing in on Excellence: Dimensions of Appraisal in Center of Excellence Program Evaluations’ , Evaluation , 17 / 2 : 117 – 31 .

Hellstrom T. . ( 2012 ) ‘Epistemic Capacity in Research Environments: A Framework for Process Evaluation’ , Prometheus , 30 / 4 : 395 – 409 .

Hemlin S. Rasmussen S. B . ( 2006 ) ‘The Shift in Academic Quality Control’ , Science, Technology & Human Values , 31 / 2 : 173 – 98 .

Hessels L. K. Van Lente H. ( 2008 ) ‘Re-thinking New Knowledge Production: A Literature Review and a Research Agenda’ , Research Policy , 37 / 4 , 740 – 60 .

Huutoniemi K. ( 2010 ) ‘Evaluating Interdisciplinary Research’ , in Frodeman R. Klein J. T. Mitcham C. (eds) The Oxford Handbook of Interdisciplinarity , pp. 309 – 20 . Oxford : Oxford University Press .

de Jong S. P. L. et al.  . ( 2011 ) ‘Evaluation of Research in Context: An Approach and Two Cases’ , Research Evaluation , 20 / 1 : 61 – 72 .

Jahn T. Keil F. ( 2015 ) ‘An Actor-Specific Guideline for Quality Assurance in Transdisciplinary Research’ , Futures , 65 : 195 – 208 .

Kates R. ( 2000 ) ‘Sustainability Science’ , World Academies Conference Transition to Sustainability in the 21st Century 5/18/00 , Tokyo, Japan .

Klein J. T . ( 2006 ) ‘Afterword: The Emergent Literature on Interdisciplinary and Transdisciplinary Research Evaluation’ , Research Evaluation , 15 / 1 : 75 – 80 .

Klein J. T . ( 2008 ) ‘Evaluation of Interdisciplinary and Transdisciplinary Research: A Literature Review’ , American Journal of Preventive Medicine , 35 / 2 Supplment S116–23. DOI: 10.1016/j.amepre.2008.05.010

Royal Netherlands Academy of Arts and Sciences, Association of Universities in the Netherlands, Netherlands Organization for Scientific Research (KNAW) . ( 2009 ) Standard Evaluation Protocol 2009-2015: Protocol for Research Assessment in the Netherlands . Netherlands : KNAW . < www.knaw.nl/sep >

Komiyama H. Takeuchi K. ( 2006 ) ‘Sustainability Science: Building a New Discipline’ , Sustainability Science , 1 : 1 – 6 .

Lahtinen E. et al.  . ( 2005 ) ‘The Development of Quality Criteria For Research: A Finnish approach’ , Health Promotion International , 20 / 3 : 306 – 15 .

Lang D. J. et al.  . ( 2012 ) ‘Transdisciplinary Research in Sustainability Science: Practice , Principles , and Challenges’, Sustainability Science , 7 / S1 : 25 – 43 .

Lincoln Y. S . ( 1995 ) ‘Emerging Criteria for Quality in Qualitative and Interpretive Research’ , Qualitative Inquiry , 1 / 3 : 275 – 89 .

Mayne J. Stern E. ( 2013 ) Impact Evaluation of Natural Resource Management Research Programs: A Broader View . Australian Centre for International Agricultural Research, Canberra .

Meyrick J . ( 2006 ) ‘What is Good Qualitative Research? A First Step Towards a Comprehensive Approach to Judging Rigour/Quality’ , Journal of Health Psychology , 11 / 5 : 799 – 808 .

Mitchell C. A. Willetts J. R. ( 2009 ) ‘Quality Criteria for Inter and Trans - Disciplinary Doctoral Research Outcomes’ , in Prepared for ALTC Fellowship: Zen and the Art of Transdisciplinary Postgraduate Studies ., Sydney : Institute for Sustainable Futures, University of Technology .

Morrow S. L . ( 2005 ) ‘Quality and Trustworthiness in Qualitative Research in Counseling Psychology’ , Journal of Counseling Psychology , 52 / 2 : 250 – 60 .

Nowotny H. Scott P. Gibbons M. ( 2001 ) Re-Thinking Science . Cambridge : Polity .

Nowotny H. Scott P. Gibbons M. . ( 2003 ) ‘‘Mode 2’ Revisited: The New Production of Knowledge’ , Minerva , 41 : 179 – 94 .

Öberg G . ( 2008 ) ‘Facilitating Interdisciplinary Work: Using Quality Assessment to Create Common Ground’ , Higher Education , 57 / 4 : 405 – 15 .

Ozga J . ( 2007 ) ‘Co - production of Quality in the Applied Education Research Scheme’ , Research Papers in Education , 22 / 2 : 169 – 81 .

Ozga J . ( 2008 ) ‘Governing Knowledge: research steering and research quality’ , European Educational Research Journal , 7 / 3 : 261 – 272 .

OECD ( 2012 ) Frascati Manual 6th ed. < http://www.oecd.org/innovation/inno/frascatimanualproposedstandardpracticeforsurveysonresearchandexperimentaldevelopment6thedition >

Overseas Development Institute (ODI) ( 2004 ) ‘Bridging Research and Policy in International Development: An Analytical and Practical Framework’, ODI Briefing Paper. < http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/198.pdf >

Overseas Development Institute (ODI) . ( 2012 ) RAPID Outcome Assessment Guide . < http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/7815.pdf >

Pullin A. S. Stewart G. B. ( 2006 ) ‘Guidelines for Systematic Review in Conservation and Environmental Management’ , Conservation Biology , 20 / 6 : 1647 – 56 .

Research Excellence Framework (REF) . ( 2011 ) Research Excellence Framework 2014: Assessment Framework and Guidance on Submissions. Reference REF 02.2011. UK: REF. < http://www.ref.ac.uk/pubs/2011-02/ >

Scott A . ( 2007 ) ‘Peer Review and the Relevance of Science’ , Futures , 39 / 7 : 827 – 45 .

Spaapen J. Dijstelbloem H. Wamelink F. ( 2007 ) Evaluating Research in Context: A Method for Comprehensive Assessment . Netherlands: Consultative Committee of Sector Councils for Research and Development. < http://www.qs.univie.ac.at/fileadmin/user_upload/qualitaetssicherung/PDF/Weitere_Aktivit%C3%A4ten/Eric.pdf >

Spaapen J. Van Drooge L. ( 2011 ) ‘Introducing “Productive Interactions” in Social Impact Assessment’ , Research Evaluation , 20 : 211 – 18 .

Stige B. Malterud K. Midtgarden T. ( 2009 ) ‘Toward an Agenda for Evaluation of Qualitative Research’ , Qualitative Health Research , 19 / 10 : 1504 – 16 .

td-net ( 2014 ) td-net. < www.transdisciplinarity.ch/e/Bibliography/new.php >

Tertiary Education Commission (TEC) . ( 2012 ) Performance-based Research Fund: Quality Evaluation Guidelines 2012. New Zealand: TEC. < http://www.tec.govt.nz/Documents/Publications/PBRF-Quality-Evaluation-Guidelines-2012.pdf >

Tijssen R. J. W. ( 2003 ) ‘Quality Assurance: Scoreboards of Research Excellence’ , Research Evaluation , 12 : 91 – 103 .

White H. Phillips D. ( 2012 ) ‘Addressing Attribution of Cause and Effect in Small n Impact Evaluations: Towards an Integrated Framework’. Working Paper 15. New Delhi: International Initiative for Impact Evaluation .

Wickson F. Carew A. ( 2014 ) ‘Quality Criteria and Indicators for Responsible Research and Innovation: Learning from Transdisciplinarity’ , Journal of Responsible Innovation , 1 / 3 : 254 – 73 .

Wickson F. Carew A. Russell A. W. ( 2006 ) ‘Transdisciplinary Research: Characteristics, Quandaries and Quality,’ Futures , 38 / 9 : 1046 – 59


  • Online ISSN 1471-5449
  • Print ISSN 0958-2029
  • Copyright © 2024 Oxford University Press

A Step-To-Step Guide to Write a Quality Research Article

  • Conference paper
  • First Online: 01 June 2023

  • Amit Kumar Tyagi   ORCID: orcid.org/0000-0003-2657-8700 14 ,
  • Rohit Bansal 15 ,
  • Anshu 16 &
  • Sathian Dananjayan   ORCID: orcid.org/0000-0002-6103-7267 17  

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 717)

Included in the following conference series:

  • International Conference on Intelligent Systems Design and Applications


Publishing research articles is now commonplace in universities around the world: millions of articles are published annually in thousands of journals across sectors such as medicine, engineering, and science. Yet few researchers follow proper, fundamental criteria for writing a quality research article. Many published articles amount to irrelevant or duplicated information, wasting available resources, because their authors do not know or do not follow a correct approach for writing a valid, influential paper. Motivated by these issues facing both new and existing researchers across many sectors, we present a systematic approach that can help researchers produce a quality research article, whether for international conferences such as CVPR, ICML, and NeurIPS, for high-impact international journals, or as a white paper. Publishing good articles improves a researcher's profile, and future researchers can cite such work as a reference to carry the respective research further. This article therefore provides sufficient information for researchers to write a simple, effective, and impressive research article in their area of interest.



Acknowledgement

We want to thank the anonymous reviewers and our colleagues who helped us to complete this work.

Author information

Authors and Affiliations

Department of Fashion Technology, National Institute of Fashion Technology, New Delhi, India

Amit Kumar Tyagi

Department of Management Studies, Vaish College of Engineering, Rohtak, India

Rohit Bansal

Faculty of Management and Commerce (FOMC), Baba Mastnath University, Asthal Bohar, Rohtak, India

School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamilnadu, 600127, India

Sathian Dananjayan


Contributions

Amit Kumar Tyagi and Sathian Dananjayan drafted and approved this manuscript for final publication.

Corresponding author

Correspondence to Amit Kumar Tyagi .

Editor information

Editors and Affiliations

Faculty of Computing and Data Science, FLAME University, Pune, Maharashtra, India

Ajith Abraham

Center for Smart Computing Continuum, Burgenland, Austria

Sabri Pllana

University of Bari, Bari, Italy

Gabriella Casalino

University of Jinan, Jinan, Shandong, China

Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala, Punjab, India

Ethics declarations

Conflict of Interest

The authors declare that no conflict of interest exists regarding the publication of this paper.

Scope of the Work

Although the authors belong to the computer science stream, this article is intended to cover all streams; however, most of the examples (situations, languages, datasets, etc.) relate to computer science disciplines. This work can be used as a reference for writing good-quality papers for international conferences and journals.

Disclaimer. The links and papers cited in this work are provided only as examples; any omission of a citation or link is unintentional.

Rights and permissions

Reprints and permissions

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper.

Tyagi, A.K., Bansal, R., Anshu, Dananjayan, S. (2023). A Step-To-Step Guide to Write a Quality Research Article. In: Abraham, A., Pllana, S., Casalino, G., Ma, K., Bajaj, A. (eds) Intelligent Systems Design and Applications. ISDA 2022. Lecture Notes in Networks and Systems, vol 717. Springer, Cham. https://doi.org/10.1007/978-3-031-35510-3_36


DOI : https://doi.org/10.1007/978-3-031-35510-3_36

Published : 01 June 2023

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-35509-7

Online ISBN : 978-3-031-35510-3

eBook Packages : Intelligent Technologies and Robotics (R0)




Meille G, Owens PL, Decker SL, et al. COVID-19 Admission Rates and Changes in Care Quality in US Hospitals. JAMA Netw Open. 2024;7(5):e2413127. doi:10.1001/jamanetworkopen.2024.13127


COVID-19 Admission Rates and Changes in Care Quality in US Hospitals

  • 1 Agency for Healthcare Research and Quality, US Department of Health and Human Services, Rockville, Maryland

Question   What was the association of COVID-19 admission rates with changes in hospital care quality for patients without COVID-19 in 2020?

Findings   In this cross-sectional study of 3283 acute care hospitals in 36 states and more than 19 million patient discharges, pressure ulcers and in-hospital mortality for nonsurgical care increased in 2020 during weeks with high COVID-19 admissions compared with weeks with low COVID-19 admissions. Increases were statistically significant and clinically meaningful; for example, pressure ulcer, heart failure mortality, and hip fracture mortality rates all increased by at least 20% during weeks with high compared with low COVID-19 admissions.

Meaning   These findings suggest that COVID-19 surges were associated with decreases in hospital quality, highlighting the importance of future strategies to maintain care quality during periods of high use.

Importance   Unprecedented increases in hospital occupancy rates during COVID-19 surges in 2020 caused concern over hospital care quality for patients without COVID-19.

Objective   To examine changes in hospital nonsurgical care quality for patients without COVID-19 during periods of high and low COVID-19 admissions.

Design, Setting, and Participants   This cross-sectional study used data from the 2019 and 2020 Agency for Healthcare Research and Quality’s Healthcare Cost and Utilization Project State Inpatient Databases. Data were obtained for all nonfederal, acute care hospitals in 36 states with admissions in 2019 and 2020, and patients without a diagnosis of COVID-19 or pneumonia who were at risk for selected quality indicators were included. The data analysis was performed between January 1, 2023, and March 15, 2024.

Exposure   Each hospital and week in 2020 was categorized based on the number of COVID-19 admissions per 100 beds: less than 1.0, 1.0 to 4.9, 5.0 to 9.9, 10.0 to 14.9, and 15.0 or greater.
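The weekly exposure assignment above is a rate-then-bin computation. A minimal sketch (a hypothetical helper for illustration, not the study's actual code):

```python
def covid_admission_category(weekly_covid_admissions: float, beds: int) -> str:
    """Categorize a hospital-week by COVID-19 admissions per 100 beds."""
    rate = 100.0 * weekly_covid_admissions / beds
    if rate < 1.0:
        return "<1.0"
    elif rate < 5.0:
        return "1.0-4.9"
    elif rate < 10.0:
        return "5.0-9.9"
    elif rate < 15.0:
        return "10.0-14.9"
    else:
        return ">=15.0"

# A 200-bed hospital with 32 COVID-19 admissions in a week:
# 32 / 200 * 100 = 16 per 100 beds, the high-exposure category.
print(covid_admission_category(32, 200))  # -> >=15.0
```

Note that the bins are defined on the per-100-bed rate, so the same admission count maps to different categories depending on hospital size.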

Main Outcomes and Measures   The main outcomes were rates of adverse outcomes for selected quality indicators, including pressure ulcers and in-hospital mortality for acute myocardial infarction, heart failure, acute stroke, gastrointestinal hemorrhage, hip fracture, and percutaneous coronary intervention. Changes in 2020 compared with 2019 were calculated for each level of the weekly COVID-19 admission rate, adjusting for case-mix and hospital-month fixed effects. Changes during weeks with high COVID-19 admissions (≥15 per 100 beds) were compared with changes during weeks with low COVID-19 admissions (<1 per 100 beds).

Results   The analysis included 19 111 629 discharges (50.3% female; mean [SD] age, 63.0 [18.0] years) from 3283 hospitals in 36 states. In weeks 18 to 48 of 2020, 35 851 hospital-weeks (36.7%) had low COVID-19 admission rates, and 8094 (8.3%) had high rates. Quality indicators for patients without COVID-19 significantly worsened in 2020 during weeks with high vs low COVID-19 admissions. Pressure ulcer rates increased by 0.09 per 1000 admissions (95% CI, 0.01-0.17 per 1000 admissions; relative change, 24.3%), heart failure mortality increased by 0.40 per 100 admissions (95% CI, 0.18-0.63 per 100 admissions; relative change, 21.1%), hip fracture mortality increased by 0.40 per 100 admissions (95% CI, 0.04-0.77 per 100 admissions; relative change, 29.4%), and a weighted mean of mortality for the selected indicators increased by 0.30 per 100 admissions (95% CI, 0.14-0.45 per 100 admissions; relative change, 10.6%).
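As an arithmetic consistency check on the figures above (my calculation, not the paper's): each reported relative change is the absolute change divided by the baseline rate, so the two reported numbers together pin down the implied baseline:

```python
def implied_baseline(absolute_change: float, relative_change_pct: float) -> float:
    """Baseline rate implied by an absolute change and its relative change (%)."""
    return absolute_change / (relative_change_pct / 100.0)

# Heart failure mortality: +0.40 per 100 admissions, a 21.1% relative change,
# implies a baseline of roughly 1.9 deaths per 100 admissions.
print(round(implied_baseline(0.40, 21.1), 2))  # -> 1.9
```

The same check on the weighted mean (+0.30 per 100, relative change 10.6%) implies a baseline of roughly 2.8 deaths per 100 admissions.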

Conclusions and Relevance   In this cross-sectional study, COVID-19 surges were associated with declines in hospital quality, highlighting the importance of identifying and implementing strategies to maintain care quality during periods of high hospital use.

The COVID-19 pandemic posed numerous challenges to maintaining care quality in US hospitals. 1 During surges, hospitals experienced unprecedented strain, with increased volumes of patients with severe illness and shortages of staff, beds, and supplies. 1 - 3 However, hospitals often experienced few or no COVID-19 admissions, and during these periods occupancy levels were lower than in 2019. 4 Tracking how patient outcomes changed in response to COVID-19 admissions and occupancy fluctuations may offer insight into the association between hospital strain and care quality.

Previous studies have shown that the strain from COVID-19 surges was associated with increased in-hospital mortality. 5 - 7 However, interpreting this finding is complicated by changes in patient case mix during surges, leading studies to differentiate between patients with COVID-19 and patients without COVID-19. An extensive literature has found that hospital strain was associated with increases in mortality for patients with COVID-19, 8 - 16 and a smaller set of studies has found increases in mortality among patients without COVID-19. 5 , 7 , 17 Studies that examined patients without COVID-19 relied on Medicare claims, 17 a convenience sample of US insurance claims, 5 or United Kingdom data. 7 None of these studies examined a representative sample of hospitalizations of US patients or changes in patient case mix during surges. Related studies that examined patient morbidity during COVID-19 surges focused exclusively on health care–associated infections. 18 - 22

This study investigated the association between COVID-19 admission rates and hospital care quality for patients without COVID-19 using the Agency for Healthcare Research and Quality’s (AHRQ’s) Quality Indicators (QIs). The QIs are designed to differentiate care quality among hospitals or within hospitals over time with minimal case-mix bias 23 and are widely used to inform quality improvement and pay-for-performance initiatives. 23 - 25 We first examined changes in the case mix compared with 2019 as a function of COVID-19 admissions. We then examined the case-mix–adjusted QI rates for pressure ulcers, a nursing-sensitive complication, and for in-hospital mortality related to nonsurgical care for selected conditions.

This cross-sectional study was reviewed by AHRQ’s human protection administrator, who determined that the project did not constitute human participants research and did not require additional review by an institutional review board or informed consent. The study followed the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) reporting guideline.

We used the AHRQ Healthcare Cost and Utilization Project State Inpatient Databases for 36 states (eTable 1 in Supplement 1 ). These data cover all discharges from all nonfederal acute care hospitals in the included states. Hospitals were included if they admitted patients in 2019 and 2020, resulting in 3283 hospitals (76.1% of US nonfederal acute care hospitals). 26 The data covered 2019, the 2020 prepandemic period (weeks 1-10), and the 2020 pandemic period (weeks 18-48). We excluded weeks 11 to 17, weeks 49 to 52, and patients with COVID-19 or pneumonia. Admissions from weeks 11 to 17 were excluded because of COVID-19 testing shortages early in the pandemic, 27 , 28 which may have led to difficulty identifying patients without COVID-19. Admissions during weeks 49 to 52 were excluded because patients discharged in 2021 were not included in the 2020 Healthcare Cost and Utilization Project State Inpatient Databases. Patients with pneumonia were excluded because they may have had undiagnosed COVID-19, especially early in the pandemic due to variable reliability, availability, and use of testing. The eFigure in Supplement 1 shows that a high proportion of pneumonia admissions throughout 2020 included a COVID-19 diagnosis.
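The week and diagnosis exclusions above amount to a discharge-level filter. A minimal sketch, under the simplifying assumption that diagnoses are plain labels (the actual data use ICD-10-CM condition codes):

```python
# 2020 weeks retained by the study: prepandemic (1-10) and pandemic (18-48).
INCLUDED_2020_WEEKS = set(range(1, 11)) | set(range(18, 49))

def include_discharge(year: int, week: int, diagnoses: set) -> bool:
    """Apply the study's 2020 week and diagnosis exclusions (sketch)."""
    # Pneumonia is excluded alongside COVID-19 because it may represent
    # undiagnosed COVID-19, especially early in the pandemic.
    if diagnoses & {"COVID-19", "pneumonia"}:
        return False
    # Weeks 11-17 (testing shortages) and 49-52 (2021 discharges missing)
    # are dropped for 2020 only.
    if year == 2020 and week not in INCLUDED_2020_WEEKS:
        return False
    return True

print(include_discharge(2020, 15, set()))             # -> False
print(include_discharge(2020, 30, {"hip fracture"}))  # -> True
```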

The State Inpatient Databases consist of detailed discharge-level information, including patient disposition, clinical condition codes, demographics, admission date, and hospital identifiers. In sample statistics, we report the distribution of patient race and ethnicity for the 3 most common categories (Hispanic, non-Hispanic Black, non-Hispanic White) and other race and ethnicity. Discharge records on race and ethnicity are ideally based on patient self-reporting but may be collected by hospital staff based on observation. 29 , 30

Our primary outcome of interest was the quality of hospital care, as measured by AHRQ QIs for patient safety (Patient Safety Indicators [PSIs]) and inpatient quality (Inpatient Quality Indicators [IQIs]). 31 We excluded pediatric, obstetric, and surgical QIs, as these relate to distinct patient populations and hospital units that were subject to different policies (eg, moratoriums on elective procedures) and changes in case mix. We further limited QIs based on a power analysis, selecting indicators with sufficient observations to detect a 15% change at the standard 5% significance level (details provided in eAppendix 1 in Supplement 1 ). This power analysis was conducted before the main analyses and aimed to reduce the probability of type I and type II errors (ie, false-positive and false-negative results).
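The selection rule described above can be illustrated with a standard two-proportion power calculation. The sketch below (Python) uses a textbook normal approximation rather than the article's exact procedure, and the baseline event rates are hypothetical; it shows why rarer outcomes fail the power threshold:

```python
import math

def required_n(p0: float, rel_change: float = 0.15,
               alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size to detect a relative change in a proportion
    (two-sided, two-sample normal approximation)."""
    p1 = p0 * (1 + rel_change)
    z_a = 1.959964  # z for alpha/2 = 0.025
    z_b = 0.841621  # z for power = 0.80
    pbar = (p0 + p1) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return math.ceil(num / (p1 - p0) ** 2)

# A rarer adverse outcome needs far more observations to detect the
# same 15% relative change, which is why low-count QIs were dropped.
print(required_n(0.05))   # hypothetical 5% baseline rate
print(required_n(0.005))  # hypothetical 0.5% baseline rate
```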

The power analysis yielded 7 indicators from the broader set of nonsurgical, nonobstetric, nonpediatric QIs (eTable 2 in Supplement 1 ). The selected QIs were the PSI for pressure ulcer rate (PSI 03); IQIs measuring in-hospital mortality for 5 medical conditions (IQI 15, acute myocardial infarction mortality; IQI 16, heart failure mortality; IQI 17, acute stroke mortality; IQI 18, gastrointestinal hemorrhage mortality; and IQI 19, hip fracture mortality); and in-hospital mortality associated with a nonsurgical invasive procedure for coronary artery disease (IQI 30, percutaneous coronary intervention [PCI] mortality). Detailed definitions for these QIs are provided in eTable 3 in Supplement 1 . Prior literature has shown that many of these indicators are associated with nurse staffing, 32 - 34 and all these indicators vary across hospitals and are associated with poorer care quality. 23 , 31 , 35

Each QI specifies a denominator that corresponds to the patients at risk and a numerator for the adverse outcome (pressure ulcer or in-hospital death). Our first set of outcomes examined patient case mix, which may confound analyses of care quality. Changes in case mix may have increased or decreased patient severity, depending on factors such as risk aversion for COVID-19 infection, time sensitivity of conditions, and hospital and public health policies regarding elective admissions. For each QI, we examined the number of patients at risk (patient volume) and their mean age and mortality-weighted mean Elixhauser Comorbidity Index Refined for International Classification of Diseases, Tenth Revision, Clinical Modification 36 (hereafter, comorbidity index). The comorbidity index is a composite score that weights 38 comorbidities based on their correlation with in-hospital mortality. A robustness check was conducted using the readmission-weighted comorbidity index, which weights the comorbidities based on their correlation with readmissions. 36 (The number of comorbidities assigned a nonzero weight is 33 and 30 for the mortality and readmission comorbidity indices, respectively; additional details on methodology are provided in eAppendix 2, and weights are provided in eTable 4 in Supplement 1 .)
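The composite comorbidity score is a weighted sum over comorbidity indicators. A minimal sketch follows; the three weights shown are hypothetical placeholders, not the published mortality weights (which appear in eTable 4 of the supplement):

```python
# Hypothetical weights for three of the 38 Elixhauser comorbidities;
# the actual mortality-weighted values are in eTable 4 in Supplement 1.
MORTALITY_WEIGHTS = {"chf": 9, "metastatic_cancer": 23, "obesity": -7}

def comorbidity_index(comorbidities: set[str],
                      weights: dict[str, int] = MORTALITY_WEIGHTS) -> int:
    """Composite score: sum the weights of the comorbidities present."""
    return sum(w for name, w in weights.items() if name in comorbidities)

print(comorbidity_index({"chf", "obesity"}))  # 9 + (-7) = 2
```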

Our primary outcomes of interest were the rates of adverse outcomes for each QI. We used discharge-level indicator variables for pressure ulcer or in-hospital death within the sample of patients at risk for each outcome.

The exposure variable was the rate of COVID-19 admissions per 100 hospital beds measured at the hospital-week level. In previous work, our group showed that COVID-19 admissions were associated with fluctuations in occupancy. 4 Specifically, the previous study estimated that hospital-weeks in 2020 with low COVID-19 admission rates were associated with a 12.7% reduction in inpatient occupancy and a 5.4% reduction in intensive care unit (ICU) occupancy compared with the same hospital-weeks in 2019. In contrast, hospital-weeks with high COVID-19 admission rates were associated with a 7.9% increase in inpatient occupancy and a 67.8% increase in ICU occupancy. Our group’s prior study also provides detail on the distribution of COVID-19 admission rates over time, states, and types of hospitals. 4

COVID-19 admissions were identified using International Classification of Diseases, Tenth Revision, Clinical Modification codes U07.1 and J12.82 for patients aged 1 year or older. Following the prior study, 4 we used 5 indicators for the weekly COVID-19 admission rate per 100 hospital beds to allow a nonlinear relationship with outcomes: less than 1 (low), 1 to 4.9, 5 to 9.9, 10 to 14.9, and 15 or greater (high).
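Assigning a hospital-week to one of these five exposure categories is a simple binning step; a sketch under the stated cutoffs (the bed count and admission figures in the example are invented):

```python
def covid_exposure_bin(covid_admissions: int, beds: int) -> str:
    """Categorize a hospital-week's COVID-19 admission rate per 100 beds."""
    rate = 100 * covid_admissions / beds
    if rate < 1:
        return "<1 (low)"
    if rate < 5:
        return "1-4.9"
    if rate < 10:
        return "5-9.9"
    if rate < 15:
        return "10-14.9"
    return ">=15 (high)"

print(covid_exposure_bin(40, 250))  # 16 per 100 beds -> high
print(covid_exposure_bin(2, 300))   # 0.67 per 100 beds -> low
```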

Data were analyzed between January 1, 2023, and March 15, 2024. For our case-mix analyses, we compared patient volume, mean age, and mean comorbidities for each hospital-week in 2020 with the corresponding hospital-week in 2019. Each outcome was regressed on the categorical COVID-19 admission rate interacted with indicators for the 2020 prepandemic and 2020 pandemic periods (2019 was the omitted baseline period). In addition, we controlled for hospital-week indicators. Regressions for age and comorbidities were weighted by patient volume in each hospital-week. For difference-in-differences analyses, the econometrics literature recommends clustering standard errors at the level of treatment to account for correlations in outcomes over time. 37 , 38 As the COVID-19 admission rate is a hospital-week–level variable, we followed our prior study and clustered standard errors at the hospital level in all regressions. 4
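Stripped of the regression machinery (fixed effects, volume weights, and clustered standard errors, which were handled in Stata), the comparison amounts to a difference-in-differences over matched hospital-weeks: the 2019-to-2020 change during high-COVID-19 weeks minus the change during low-COVID-19 weeks. A toy sketch with invented outcome values:

```python
from statistics import mean

# Invented hospital-week outcomes:
# (value in 2019, value in same hospital-week of 2020, exposure category)
data = [
    (17.0, 16.4, "low"), (16.5, 16.1, "low"),
    (16.8, 13.2, "high"), (17.2, 13.9, "high"),
]

def mean_change(rows, exposure):
    """Mean 2019-to-2020 change for hospital-weeks in one exposure group."""
    return mean(y20 - y19 for y19, y20, e in rows if e == exposure)

low = mean_change(data, "low")    # change during low-COVID-19 weeks
high = mean_change(data, "high")  # change during high-COVID-19 weeks
print(round(high - low, 2))       # incremental change during surges
```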

For pressure ulcer and mortality QIs, we estimated discharge-level regressions that compared each hospital-month in 2020 with the corresponding hospital-month in 2019. Thus, discharges during hospital-weeks with high and low COVID-19 admissions were compared with discharges during the corresponding hospital-month in 2019. Each QI was regressed on the categorical hospital-week–level COVID-19 admission rate interacted with the period (2020 prepandemic or 2020 pandemic), controlling for confounders by including hospital-month indicators and case-mix controls. Case-mix controls included an age spline interacted with patient sex, indicators for Medicare Severity Diagnosis Related Groups (or major diagnostic categories when Medicare Severity Diagnosis Related Groups had <1000 observations), and comorbidities. We included separate indicators for each of the 38 comorbidities underlying the comorbidity index, 36 thereby allowing associations between comorbidities and QIs to be determined empirically rather than using the predetermined weights in the index.

The tables in the body of the article report the estimated associations during the pandemic period. They separately report changes in case mix and QIs during hospital-weeks with low COVID-19 admissions, changes during high COVID-19 admissions, and the difference. The difference corresponds to the incremental change during high COVID-19 admissions compared with the change during low COVID-19 admissions. In addition to reporting changes for each QI, we report a summary measure for the selected mortality QIs. This measure is the sum of coefficients for patient volume and the patient volume–weighted mean of coefficients for age, the comorbidity index, and the mortality QIs. The summary measures reduced the probability of false positives from multiple hypothesis testing and increased statistical power by pooling across multiple QIs, which increased the sample size.
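The pooled mortality summary is, at its core, a patient volume–weighted mean of per-QI coefficients. A minimal sketch with hypothetical coefficients and volumes (the standard-error pooling is omitted):

```python
# Hypothetical coefficients (change in mortality rate) and patient
# volumes for three of the mortality QIs; not the article's estimates.
coefs   = {"IQI15": 0.10, "IQI16": 0.42, "IQI17": 0.15}
volumes = {"IQI15": 2000, "IQI16": 5000, "IQI17": 3000}

def weighted_mean(coefs: dict, weights: dict) -> float:
    """Volume-weighted mean of per-QI coefficients."""
    total = sum(weights.values())
    return sum(coefs[k] * weights[k] for k in coefs) / total

print(round(weighted_mean(coefs, volumes), 3))  # (200 + 2100 + 450) / 10000
```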

Unless otherwise noted, results discussed are statistically significant at the 5% level (2-sided). Analyses were performed using Stata/MP, version 18 software (StataCorp LLC).

Our sample consisted of 19 111 629 discharges from 3283 hospitals during 2019 and 2020. Table 1 shows that during the 2020 pandemic period, 35 851 hospital-weeks (36.7%) had low COVID-19 admission rates, and 8094 (8.3%) had high rates. The mean (SD) age of patients was 63.0 (18.0) years, and 50.3% were female (compared with 49.7% male). The distribution of patient race and ethnicity was 9.9% Hispanic, 16.6% non-Hispanic Black, 66.4% non-Hispanic White, and 5.3% other race or ethnicity (including non-Hispanic Asian and non-Hispanic multiracial or other), with 1.8% missing race or ethnicity information. Additional sample characteristics, including characteristics for patients at risk of each QI, are provided in eTable 6 in Supplement 1 .

Table 2 presents changes in the number of patients without COVID-19 at risk for each QI. For each measure, compared with 2019, patient volume decreased in 2020, with larger decreases during weeks with high COVID-19 admissions. During weeks with low COVID-19 admissions, the number of patients at risk for pressure ulcers decreased by 2.69 per hospital-week (95% CI, 2.45-2.92 per hospital-week), a 4.0% decrease compared with the mean (SE) in 2019 of 66.98 (1.64) per hospital-week. The number of patients at risk of any selected in-hospital mortality QI decreased by 0.55 per hospital-week (95% CI, 0.49-0.62 per hospital-week), a 3.3% decrease relative to the 2019 mean (SE) of 16.69 (0.41) per hospital-week. During weeks with high COVID-19 admissions, the number of patients at risk for pressure ulcers decreased by 14.75 per hospital-week (95% CI, 13.60-15.89 per hospital-week; relative change, 22.0%), and the number of patients at risk for any selected in-hospital mortality QI decreased by 3.48 per hospital-week (95% CI, 3.17-3.79 per hospital-week; relative change, 20.9%). For each QI, the decrease in patient volume was larger during weeks with high COVID-19 admissions relative to weeks with low COVID-19 admissions.
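The relative changes quoted throughout the Results are simply the regression estimates expressed as a percentage of the corresponding 2019 mean; for example, using the pressure ulcer volume figures above:

```python
def relative_change(estimate: float, baseline_mean: float) -> float:
    """Express a regression estimate as a percentage of the 2019 mean."""
    return 100 * estimate / baseline_mean

# Reproduces the percentages reported in the text (within rounding):
print(round(relative_change(-2.69, 66.98), 1))   # low-COVID-19 weeks
print(round(relative_change(-14.75, 66.98), 1))  # high-COVID-19 weeks
```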

Table 3 presents the results for the mean age of patients at risk for each QI. Relative to 2019, the mean patient age decreased for each QI in both weeks with low and weeks with high COVID-19 admissions. During weeks with low COVID-19 admissions, patient age decreased by 0.47 years (95% CI, 0.39-0.55 years) for patients at risk of pressure ulcers, a 0.7% decrease, and by 0.58 years (95% CI, 0.47-0.69 years) for patients at risk for any selected mortality QI, a 0.8% decrease. During weeks with high COVID-19 admissions, patient age decreased by 0.76 years (95% CI, 0.66-0.86 years; relative change, 1.1%) for patients at risk for pressure ulcers and by 0.73 years (95% CI, 0.60-0.86 years; relative change, 1.1%) for patients at risk of any selected mortality QI. Compared with weeks with low COVID-19 admissions, the decrease in age was larger for all QIs during weeks with high COVID-19 admissions. However, the incremental decrease was only statistically significant for patients at risk of pressure ulcers.

Table 4 presents the comorbidity index for patients at risk for each QI. For 5 of 7 QIs, the estimated change during weeks with low COVID-19 admissions was positive, indicating that patients at risk had more comorbidities compared with 2019. For 3 of 7 QIs, in weeks with high COVID-19 admissions, patients at risk also had more comorbidities compared with 2019. For all QIs, the number of comorbidities decreased among patients at risk admitted during weeks with high compared with low COVID-19 admissions. This difference was statistically significant for pressure ulcers (−0.18; 95% CI, −0.29 to −0.07), heart failure mortality (−0.29; 95% CI, −0.53 to −0.04), PCI mortality (−0.56; 95% CI, −0.96 to −0.16), and the weighted mean of the mortality QIs (−0.33; 95% CI, −0.50 to −0.16). Analyses that used the readmission-weighted index showed similar trends (eTable 5 in Supplement 1 ).

Table 5 presents QI changes during weeks with low and high COVID-19 admissions, adjusted for age, sex, comorbidities, and hospital-month fixed effects. Compared with 2019, during weeks with low COVID-19 admissions, pressure ulcers decreased by 0.12 per 1000 admissions (95% CI, 0.06-0.18 per 1000 admissions; relative change, 32.4%), and acute stroke mortality decreased by 0.54 per 100 admissions (95% CI, 0.22-0.86 per 100 admissions; relative change, 10.1%). Changes for other QIs were not statistically significant. In contrast, compared with 2019, during weeks with high COVID-19 admissions, in-hospital mortality increased for heart failure (0.42 per 100 admissions; 95% CI, 0.24-0.59 per 100 admissions; relative change, 22.1%), as well as for gastrointestinal hemorrhage (0.29 per 100 admissions; 95% CI, 0.06-0.52 per 100 admissions; relative change, 16.3%) and hip fracture (0.33 per 100 admissions; 95% CI, 0.05-0.61 per 100 admissions; relative change, 24.2%) (eTable 7 in Supplement 1 ). The mean change in in-hospital mortality across the mortality QIs (calculated by the summary measure) was an increase of 0.22 per 100 admissions (95% CI, 0.10-0.34 per 100 admissions; relative change, 7.8%) during weeks with high COVID-19 admissions. The change in pressure ulcer rate was not statistically significant during those weeks. For all QIs, adverse outcomes increased during weeks with high compared with low COVID-19 admissions, with statistically significant increases for pressure ulcers of 0.09 per 1000 admissions (95% CI, 0.01-0.17 per 1000 admissions; relative change, 24.3%), in-hospital mortality for heart failure of 0.40 per 100 admissions (95% CI, 0.18-0.63 per 100 admissions; relative change, 21.1%), hip fracture of 0.40 per 100 admissions (95% CI, 0.04-0.77 per 100 admissions; relative change, 29.4%), and the weighted mean for the mortality QIs of 0.30 per 100 admissions (95% CI, 0.14-0.45 per 100 admissions; relative change, 10.6%).

In eTable 7 in Supplement 1 , we show the full regression results, including estimates for all levels of COVID-19 admissions. The results suggest that the rate of most QIs increased as the COVID-19 admission rate increased.

This cross-sectional study examined selected QIs for patients without COVID-19, using 2019 to 2020 data from all nonfederal acute care hospitals in 36 states. We compared changes in QIs in 2020 with 2019 during weeks with high and low COVID-19 admission rates. After adjusting for case-mix and hospital-month fixed effects, we found increased rates of pressure ulcers and in-hospital mortality in weeks with high compared with low COVID-19 admissions, suggesting that health care quality decreased during surges.

In contrast to previous studies, 5 , 7 , 17 we used all-payer discharges from all nonfederal acute care hospitals in 36 states and examined narrowly defined QIs that may be more sensitive to differences in care quality and staffing shortages than all-cause mortality. Furthermore, we adopted stricter exclusion criteria to minimize observations with undiagnosed COVID-19 by dropping admissions during the early pandemic (when hospitals experienced shortages of laboratory testing and supplies) and admissions with pneumonia (which may have been undiagnosed COVID-19). We also improved upon previous studies by comparing outcomes in each hospital-month during 2020 with the same hospital-month in 2019 to avoid confounding COVID-19 surge effects within hospitals with effects due to changes in volumes across hospitals. Finally, we contribute to the literature on morbidity during the pandemic. Previous studies were limited to health care–associated infections, 18 - 21 and analyses of patients without COVID-19 did not compare infection rates by level of COVID-19 burden. 22

To our knowledge, our study is the first to estimate changes in patient case mix during periods with low and high COVID-19 admission rates. As noted by others, several factors may have affected case mix during the pandemic, including federal policies restricting elective admissions, hospitals conserving resources to treat patients with COVID-19, and individuals avoiding settings of care in which they may become infected with COVID-19. 39 - 41 Our results show decreases in age and comorbidities in weeks with high vs low COVID-19 admissions. They suggest that, on average, patients admitted during weeks with high COVID-19 admissions were healthier prior to admission compared with weeks with low COVID-19 admissions.

Our findings show that compared with 2019, pressure ulcer rates decreased 32.4% during weeks with low COVID-19 admissions in 2020, whereas we found no evidence of a change during weeks with high COVID-19 admissions in 2020. The significant difference between periods of high and low COVID-19 admissions reflects a positive association between COVID-19 burden and pressure ulcer rates, a key measure of complications from care. Our group’s previous study of occupancy rates during the pandemic offers essential context for these results, finding that occupancy declined slightly during weeks with low COVID-19 admissions and increased substantially during weeks with high COVID-19 admissions, especially for ICUs. 4 The decline in the rate of pressure ulcers during weeks with low COVID-19 admissions may be related to an increased nurse-to-patient ratio during periods with decreased patient volume or to increased emphasis on, and training in, pressure ulcer prevention during the pandemic. 42 , 43 However, when hospitals are overburdened with severely ill patients, they may lack the necessary resources or staff to prevent pressure ulcers, a complication of hospital care particularly sensitive to nurse staffing. 32

The results for in-hospital mortality followed a different pattern. For heart failure, hip fracture, and our summary measure of in-hospital mortality for nonsurgical care, we did not find evidence of a change in mortality during weeks in 2020 with low COVID-19 admissions. However, we found increases in mortality during weeks in 2020 with high COVID-19 admissions. Several factors may have contributed to these decreases in care quality and worse outcomes, including staff shortages, assignment of inadequately trained staff to medical floors for patients without COVID-19, restrictive visitor policies limiting family support at the bedside, inability to monitor and manage changes in patient disposition, inadequate or unavailable protective equipment to prevent the spread of infections due to supply chain limitations, and impaired quality improvement processes. 1 , 2 , 44 - 46 Further research is needed to explore these complex factors and identify practices that improve health care resiliency. Strategies such as building an organizational culture that prioritizes patient and workforce safety and maintaining nurse-to-patient ratios may help to build resiliency during periods of strain. 47 - 49

Our study has some limitations. First, although our models included extensive case-mix controls, our results may reflect spurious correlations between unmeasured dimensions of patient severity and fluctuations in COVID-19 admissions. For example, comorbidities may be imperfectly captured by administrative data collected primarily for reimbursement purposes. 50 , 51 Notwithstanding these limitations, our case-mix analyses found fewer comorbidities and younger age during weeks with high vs low COVID-19 admissions. If other unmeasured dimensions of patient severity also decreased during weeks with high vs low COVID-19 admissions, our results may have underestimated the associations of COVID-19 surges with care quality.

Second, we were unable to control for subnational trends, including supply and staff shortages. These factors may have covaried with COVID-19 admission rates and affected QIs. Our hospital-month fixed effects controlled for fixed hospital characteristics, and by comparing changes during periods with high and low COVID-19 admissions, we differenced out national trends.

In this cross-sectional study, COVID-19 surges were associated with declines in hospital quality for patients without COVID-19. The results highlight the importance of implementing strategies to maintain health care quality during periods of high hospital use, particularly during public health emergencies such as the COVID-19 pandemic. Mitigating staffing shortages, building a safety culture and adopting related organizational best practices, and harnessing new technologies to reduce burden may improve hospital resilience during periods of high use.

Accepted for Publication: March 25, 2024.

Published: May 24, 2024. doi:10.1001/jamanetworkopen.2024.13127

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2024 Meille G et al. JAMA Network Open.

Corresponding Author: Giacomo Meille, PhD, Agency for Healthcare Research and Quality, 5600 Fishers Ln, Rockville, MD 20852 ([email protected]).

Author Contributions: Dr Meille had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Meille, Owens, Decker, Selden, Miller, Grace, Valdez.

Acquisition, analysis, or interpretation of data: Meille, Owens, Selden, Miller, Perdue-Puli, Umscheid, Cohen, Valdez.

Drafting of the manuscript: Meille, Owens, Selden, Valdez.

Critical review of the manuscript for important intellectual content: All authors.

Statistical analysis: Meille, Decker, Selden, Cohen.

Administrative, technical, or material support: Perdue-Puli, Cohen, Valdez.

Supervision: Owens, Miller, Grace, Umscheid, Cohen, Valdez.

Conflict of Interest Disclosures: None reported.

Disclaimer: The views expressed in this article are those of the authors and do not necessarily reflect those of the Department of Health and Human Services or the Agency for Healthcare Research and Quality.

Data Sharing Statement: See Supplement 2 .

Additional Contributions: The authors gratefully acknowledge the 36 Healthcare Cost and Utilization Project partner organizations that contributed the data used in this study: Alaska Department of Health, Alaska Hospital and Healthcare Association, Arizona Department of Health Services, Arkansas Department of Health, California Department of Health Care Access and Information, District of Columbia Hospital Association, Florida Agency for Health Care Administration, Georgia Hospital Association, University of Hawaii Hilo Center for Rural Health Science, Hawaii Laulima Data Alliance, Illinois Department of Public Health, Indiana Hospital Association, Iowa Hospital Association, Kansas Hospital Association, Kentucky Cabinet for Health and Family Services, Louisiana Department of Health, Maine Health Data Organization, Maryland Health Services Cost Review Commission, Massachusetts Center for Health Information and Analysis, Michigan Health & Hospital Association, Minnesota Hospital Association (provides data for Minnesota and North Dakota hospitals), Mississippi State Department of Health, Missouri Hospital Industry Data Institute, Montana Hospital Association, Nevada Department of Health and Human Services, New Jersey Department of Health, New Mexico Department of Health, North Carolina Department of Health and Human Services, North Dakota (data provided by the Minnesota Hospital Association), Ohio Hospital Association, Oregon Health Authority, Oregon Association of Hospitals and Health Systems, Rhode Island Department of Health, South Carolina Revenue and Fiscal Affairs Office, Tennessee Hospital Association, Texas Department of State Health Services, Utah Department of Health, Virginia Health Information, and West Virginia Department of Health and Human Resources. 
The authors thank Marguerite Barrett, MS, and her team at IBM for their work on the creation of data files under the Agency for Healthcare Research and Quality’s Healthcare Cost and Utilization Project contract HHSA-290-2018-00001-C. Finally, the authors thank Thomas Hegland, PhD, and Edward Miller, PhD (Agency for Healthcare Research and Quality), for their helpful comments. Drs Hegland and Miller did not receive compensation for their contributions.


Charles G. Hatfield

Earth Science Public Affairs Officer, NASA Langley Research Center

NASA has made new data available that can provide air pollution observations at unprecedented resolutions – down to the scale of individual neighborhoods. The near real-time data comes from the agency’s TEMPO (Tropospheric Emissions: Monitoring of Pollution) instrument, which launched last year to improve life on Earth by revolutionizing the way scientists observe air quality from space. This new data is available from the Atmospheric Science Data Center at NASA’s Langley Research Center in Hampton, Virginia.

“TEMPO is one of NASA’s Earth observing instruments making giant leaps to improve life on our home planet,” said NASA Administrator Bill Nelson. “NASA and the Biden-Harris Administration are committed to addressing the climate crisis and making climate data more open and available to all. The air we breathe affects everyone, and this new data is revolutionizing the way we track air quality for the benefit of humanity.”


The TEMPO mission gathers hourly daytime scans of the atmosphere over North America from the Atlantic Ocean to the Pacific Coast, and from Mexico City to central Canada. The instrument detects pollution by observing how sunlight is absorbed and scattered by gases and particles in the troposphere, the lowest layer of Earth’s atmosphere.

“All the pollutants that TEMPO is measuring cause health issues,” said Hazem Mahmoud, science lead at NASA Langley’s Atmospheric Science Data Center. “We have more than 500 early adopters using these datasets right away. We expect to see epidemiologists and health experts using this data in the near future. Researchers studying the respiratory system and the impact of these pollutants on people’s health will find TEMPO’s measurements invaluable.”

An early adopter program has allowed policymakers and other air quality stakeholders to understand the capabilities and benefits of TEMPO’s measurements. Since October 2023, the TEMPO calibration and validation team has been working to evaluate and improve TEMPO data products.


“Data gathered by TEMPO will play an important role in the scientific analysis of pollution,” said Xiong Liu, senior physicist at the Smithsonian Astrophysical Observatory and principal investigator for the mission. “For example, we will be able to conduct studies of rush hour pollution, linkages of diseases and health issues to acute exposure of air pollution, how air pollution disproportionately impacts underserved communities, the potential for improved air quality alerts, the effects of lightning on ozone, and the movement of pollution from forest fires and volcanoes.” 

Measurements by TEMPO include air pollutants such as nitrogen dioxide, formaldehyde, and ground-level ozone.

“Poor air quality exacerbates pre-existing health issues, which leads to more hospitalizations,” said Jesse Bell, executive director at the University of Nebraska Medical Center’s Water, Climate, and Health Program. Bell is an early adopter of TEMPO’s data.

Bell noted that there is a lack of air quality data in rural areas since monitoring stations are often hundreds of miles apart. There is also an observable disparity in air quality from neighborhood to neighborhood.

“Low-income communities, on average, have poorer air quality than more affluent communities,” said Bell. “For example, we’ve conducted studies and found that in Douglas County, which surrounds Omaha, the eastern side of the county has higher rates of pediatric asthma hospitalizations. When we identify what populations are going to the hospital at a higher rate than others, it’s communities of color and people with indicators of poverty. Data gathered by TEMPO is going to be incredibly important because you can get better spatial and temporal resolution of air quality across places like Douglas County.”

Determining sources of air pollution can be difficult as smoke from wildfires or pollutants from industry and traffic congestion drift on winds. The TEMPO instrument will make it easier to trace the origin of some pollutants.

“The National Park Service is using TEMPO data to gain new insight into emerging air quality issues at parks in southeast New Mexico,” explained National Park Service chemist, Barkley Sive. “Oil and gas emissions from the Permian Basin have affected air quality at Carlsbad Caverns and other parks and their surrounding communities. While pollution control strategies have successfully decreased ozone levels across most of the United States, the data helps us understand degrading air quality in the region.” 

The TEMPO instrument was built by BAE Systems, Inc., Space & Mission Systems (formerly Ball Aerospace) and flies aboard the Intelsat 40e satellite built by Maxar Technologies. The TEMPO Ground System, including the Instrument Operations Center and the Science Data Processing Center, is operated by the Smithsonian Astrophysical Observatory, part of the Center for Astrophysics | Harvard & Smithsonian.

To learn more about TEMPO visit: https://nasa.gov/tempo


Improving Quality of Care for Older Adults

A Q&A with Snigdha Jain, MD, MHS

When Snigdha Jain, MD, MHS, became an ICU physician, she found that two-thirds of the individuals she cared for in the ICU were older adults. She also found that, for these patients, illness did not end with survival and discharge from the hospital. The realization prompted her to better understand how the lives of older adults change after a critical illness.

Now committed to a career in aging research, Jain, an assistant professor of medicine in the Section of Pulmonary, Critical Care, and Sleep Medicine at Yale, recently won the American Geriatrics Society Health and Aging Foundation New Investigator Award. The honor recognizes individuals conducting new and relevant studies in geriatrics.

In an interview, Jain discusses the inspiration behind her research focus on older adults, the role of social factors in quality of care, and why people of all ages should strive to be active during hospital stays.

What inspired you to pursue research in aging?

I was interested in improving outcomes after critical illness, which matters to many older adults because they value independence and quality of life, not just survival. Older adults may be at higher risk of decline after hospitalization because of pre-existing issues such as cognitive impairment, frailty, or chronic conditions.

I didn't realize how the questions I was interested in were the mainstay of geriatric research until I was introduced to the geriatric epidemiology training program at Yale. Working with Drs. Thomas Gill and Lauren Ferrante showed me how function and cognition are measured and helped me gain the tools to ask research questions that addressed the clinical problems I was seeing.

How can we improve the quality of care for older people?

It’s important to listen to older adults, validate their concerns, and understand that they may have lingering symptoms and problems because of a critical illness. We need to provide them with all kinds of support, such as referral to a specialist or rehab. We also need to make sure that everyone, including low-income older adults, receives this support. For example, I might want a patient to go to an outpatient physical therapy center to strengthen their muscles, but the patient might not have the caregiver support or the transportation to do those things. Understanding how effective care processes, such as rehabilitation, are delivered across the continuum of care can help us design interventions to ensure equitable access and quality of care during and beyond hospitalization.

If patients are admitted to a skilled nursing facility after an ICU stay, as happens with a third of older adults, we need to ensure that the care they receive there supports their recovery. It’s important to provide patients with support beyond the ICU and medical diagnostics to assist them in their journey to recovery.

What research discoveries have you made that you wish every person, regardless of age, knew?

One of my recent studies with Dr. Gill found that when many older adults leave the hospital after a critical illness, they still have symptoms like shortness of breath or fatigue within the first three months after hospitalization that restrict them to bed for more than half a day or that make them cut down their activities. We discovered that such symptoms are associated with downstream disability. How much dependence these adults develop over the next six months is linked to the symptoms that restrict their activity. If you're not moving around much, there is a possibility you’ll become more disabled down the road.

I encourage older adults and everybody who’s in the hospital to advocate for themselves about the need to be active. Being in the hospital should not mean inactivity. Studies support the value of mobilization in preserving downstream function and cognition in critically ill patients.

My research also shows that older adults with low income or limited English proficiency or those who live in rural areas are less likely to be mobilized or offered physical therapy. I hope to build on this work to advocate for systemic and policy changes to make sure everyone can get equitable access to therapy services. We need to take into account social vulnerability to improve outcomes for everyone, not just a select few.

The Section of Pulmonary, Critical Care and Sleep Medicine is one of the eleven sections within Yale School of Medicine’s Department of Internal Medicine. To learn more about Yale-PCCSM, visit PCCSM's website, or follow them on Facebook and Twitter.


AI Will Increase the Quantity — and Quality — of Phishing Scams

  • Fredrik Heiding,
  • Bruce Schneier,
  • Arun Vishwanath


How businesses can prepare now.

Gen AI tools are rapidly making these emails more advanced, harder to spot, and significantly more dangerous. Recent research showed that 60% of participants fell victim to artificial intelligence (AI)-automated phishing, a success rate comparable to that of non-AI phishing messages created by human experts. Companies need to: 1) understand the asymmetrical capabilities of AI-enhanced phishing, 2) determine the company or division’s phishing threat severity level, and 3) confirm their current phishing awareness routines.

Anyone who has worked at a major organization has likely had to do training on how to spot a phishing attack — the deceptive messages that pretend to be from legitimate sources and aim to trick users into giving away personal information or clicking on harmful links. Phishing emails often exploit sensitive timings and play on a sense of urgency, such as urging the user to update a password. But unfortunately for both companies and employees, gen AI tools are rapidly making these emails more advanced, harder to spot, and significantly more dangerous.

quality research articles

  • Fredrik Heiding is a research fellow in computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences and a teaching fellow for the Generative AI for Business Leaders course at the Harvard Business School. He researches how to mitigate AI-enabled cyberattacks via technical innovations, organizational strategies, and national security policies. Fredrik also works with the World Economic Forum’s Cybercrime Center to improve cybersecurity standards of AI-based cyber defense.
  • Bruce Schneier is an American cryptographer, computer security professional, privacy specialist, and writer. He is a lecturer in public policy at the Harvard Kennedy School and a fellow at the Berkman Klein Center for Internet & Society. He is a board member of the Electronic Frontier Foundation, a special advisor to IBM Security, and the Chief Technology Officer of Resilient. In 2015, Schneier received the EPIC Lifetime Achievement Award from the Electronic Privacy Information Center. He is the author of 14 books.
  • Arun Vishwanath, PhD, MBA, is a distinguished scholar and practitioner at the forefront of addressing cybersecurity’s “people problem” and has contributed commentary to Wired, CNN, and The Washington Post. A former fellow at Harvard University’s Berkman Klein Center, he is the founder of the Cyber Hygiene Academy and serves as a distinguished expert for the NSA’s Science of Security & Privacy directorate. He is the author of the book The Weakest Link, published by MIT Press.


Patient Safety And Quality In A Virtual Environment: 3 Questions You Should Ask Your Telehealth Partner

Teladoc Health


Patient safety sets the foundation for a safe, equitable, trustworthy care experience – regardless of whether you’re operating in a virtual health system, or a physical one. And yet, despite increased scrutiny on online “pill mills” and bad actors, few virtual care companies are upfront about their approach to clinical quality and patient safety.

At Teladoc Health, our industry-leading quality and patient safety programs have earned recognition from The National Committee for Quality Assurance (NCQA) and the Agency for Healthcare Research and Quality (AHRQ). Here are the top questions you should consider asking your telehealth partners:

1. Do you have a formal patient safety program?

All healthcare providers will tell you that they work hard to avoid errors of any kind. But when mistakes do happen, it’s important that they become opportunities for learning and improvement. Building upon foundational elements of best-in-class brick-and-mortar patient safety systems, telehealth programs should be optimized to address the types of errors that patients and providers could encounter in a virtual setting. To do this, you need to invest in a dedicated, multidisciplinary patient safety team.

At Teladoc Health, our patient safety team comprises physicians, nurses, and a human factors engineering psychologist with decades of combined experience leading patient safety in brick-and-mortar systems. This team spent years conceptualizing, building, and refining the customized event reporting system we use at Teladoc Health, and regularly shares our learnings at national and international conferences and other forums.

Our classification model captures potential events across categories such as diagnosis, intervention, care management, medication, communication, and cybersecurity. This foundation is important because it allows us to quickly and proactively track, classify, and investigate the potential safety events most likely to impact members, and to incorporate learnings to prevent future errors.

2. How do you address common barriers for reporting?

If a physician or other provider is concerned that reporting an error could result in a cascade of disciplinary actions, they may be reluctant to report – which can lead to missed opportunities for improvement.

Given our understanding that clinicians may be reluctant to share and discuss problems and patient safety concerns, in 2019 we established the first virtual care Patient Safety Organization (PSO), The Institute of Patient Safety and Quality of Virtual Care. This federally recognized entity of Teladoc Health makes it possible for information from healthcare providers to receive certain legal protections in the spirit of safety and improvement, so that the root causes of emerging issues can be identified and addressed quickly.

3. How do you support a continuous learning environment?

Identifying errors is a foundational element of any patient safety program, but it’s not enough. The hard part is turning that data into insights, and that’s where the team makes a powerful difference.

Having a multidisciplinary team of professionals skilled in safety science facilitates rigorous investigations that help us better identify all contributing factors to a problem, and design robust action plans to improve. By investing in a team of clinical leaders with decades of experience in this field, we’ve been able to build a program that can analyze potential safety events more quickly, address root causes, and catch errors before they even reach the patient.

That spirit of continuous learning and improvement extends to other domains of clinical quality. Our team is continuously improving our rates of antibiotic and oral steroid use, preventative cancer screenings, control of chronic conditions like diabetes and hypertension, and patient reported outcomes in common mental health disorders like depression and anxiety. We utilize rigorous improvement science frameworks to reduce unnecessary variation in care and enable data-driven, measurable improvement.

With additional pressures from payers and regulators, we’re seeing more focus on quality and safety programs – and I think it’s a shift that we should welcome. Investing in patient safety and quality is not just the right thing to do, it’s what our patients and members deserve.

Dr. George R. Verghese


When To Say Goodbye? Survey Sheds Light On Difficult Decisions For Dog Owners


Deciding when it’s time to say goodbye to a beloved furry friend is never easy, even when the animal in question may be suffering from low quality of life due to age or illness. The question for many dog owners remains — how do you know when it’s time?

Questions like this led members of the Dog Aging Project to conduct a survey of 2,570 dog owners inquiring about the circumstances of their companion dog’s death, including the cause of death, whether euthanasia was involved, the reason euthanasia may have been chosen, and what medical symptoms, age characteristics and quality of life the animal had prior to death.

The Dog Aging Project is a collaborative, community scientist-driven data-gathering research project that enrolls companion dogs from all backgrounds to study the effects of aging and gain a better understanding of what contributes to a long and healthy canine life. Many of its research projects have led to studies that inform not only dog health but also human health. 

Of the owners who responded to this most recent survey, more than 85% of those whose dogs had died reported choosing euthanasia; nearly half of those said they did so to relieve their pet’s suffering. More than half of all respondents listed illness or disease as the cause of death.

“What this survey shows is that all dog owners struggle with deciding when it’s time to say goodbye, and you’re not alone if you’re facing this decision,” said Dr. Kellyn McNulty, an internal medicine resident in the Texas A&M School of Veterinary Medicine and Biomedical Sciences’ (VMBS) Department of Small Animal Clinical Sciences, who worked on the survey.

“As veterinarians, we encourage people to consider not just the ‘lifespan’ of a companion animal, but also the ‘health span’ — the portion of your dog’s life when they’re in good health,” she said. “Advocating for your pet is about more than helping them live longer — it’s also about making sure that the time they have here on Earth is a good time.”

Why Age Isn’t Always A Factor

Curiously, one factor that did not significantly influence whether euthanasia was chosen was the dog’s age.

“One of the most interesting things that we learned from the survey was that euthanasia isn’t something that primarily affects older dogs,” McNulty said. “Some illnesses and diseases can affect younger dogs, which can lower their quality of life and lead owners to wonder whether it might be time to say goodbye sooner rather than later. 

“It’s not a decision that anyone wants to have to make, but the survey showed us that many pet owners face these questions regardless of how old their dog is,” she said.

However, as veterinary medicine continues to improve at treating and managing chronic illnesses, age may come to play a more significant role in determining quality of life.

“One thing dog owners can do is be on the lookout for the signs of aging so they can work with their veterinarian to assess whether age or chronic illness is causing quality of life to change,” McNulty said. “If your dog is well-trained and potty-trained and begins soiling the house, that’s a possible sign. Others include restlessness at night, new onset anxiety and fears, and arthritic issues.”

No One Is Alone

The important thing for dog owners to keep in mind is that they’re never alone when it comes to deciding whether to say goodbye. Not only are they in good company with other dog owners who are facing similar tough decisions, but they can always rely on their local veterinarian to provide expert advice.

“Taking your dog to the vet on a regular basis is critical for having a record of normal behavior and health statistics,” McNulty said. “It’s much easier to know what’s abnormal for your dog if we have a long record of health information to look back on. 

“We’re also here to help you look out for your pet. Many of us are pet owners, too, and we understand how hard it can be to let go without thinking of it as giving up,” she said.

The DAP is continuing to accept dogs of all breeds into the project. To date, more than 50,000 dogs have been enrolled. To enroll your dog, or learn more, visit dogagingproject.org .

Media contact: Jennifer Gauntt, [email protected],  979-862-4216


Vol. 368; 2020

Clinical Updates

Quality improvement into practice

Adam Backhouse

1 North London Partners in Health and Care, Islington CCG, London N1 1TH, UK

Fatai Ogunlayi

2 Institute of Applied Health Research, Public Health, University of Birmingham, B15 2TT, UK

What you need to know

  • Thinking of quality improvement (QI) as a principle-based approach to change provides greater clarity about (a) the contribution QI offers to staff and patients, (b) how to differentiate it from other approaches, and (c) the benefits of using QI together with other change approaches
  • QI is not a silver bullet for all changes required in healthcare: it has great potential to be used together with other change approaches, either concurrently (using audit to inform iterative tests of change) or consecutively (using QI to adapt published research to local context)
  • As QI becomes established, opportunities for these collaborations will grow, to the benefit of patients.

The benefits to front line clinicians of participating in quality improvement (QI) activity are promoted in many health systems. QI can represent a valuable opportunity for individuals to be involved in leading and delivering change, from improving individual patient care to transforming services across complex health and care systems. 1

However, it is not clear that this promotion of QI has created greater understanding of QI or widespread adoption. QI largely remains an activity undertaken by experts and early adopters, often in isolation from their peers. 2 There is a danger of a widening gap between this group and the majority of healthcare professionals.

This article will make it easier for those new to QI to understand what it is, where it fits with other approaches to improving care (such as audit or research), and when best to use a QI approach, so that they can appreciate the relevance and usefulness of QI in delivering better outcomes for patients.

How this article was made

AB and FO are both specialist quality improvement practitioners and have developed their expertise working in QI roles for a variety of UK healthcare organisations. The analysis presented here arose from AB and FO’s observations of the challenges faced when introducing QI, with healthcare providers often unable to distinguish between QI and other change approaches, making it difficult to understand what QI can do for them.

How is quality improvement defined?

There are many definitions of QI (box 1). The BMJ’s Quality Improvement series uses the Academy of Medical Royal Colleges definition. 6 Rather than viewing QI as a single method or set of tools, it can be more helpful to think of QI as based on a set of principles common to many of these definitions: a systematic continuous approach that aims to solve problems in healthcare, improve service provision, and ultimately provide better outcomes for patients.

Definitions of quality improvement

  • Improvement in patient outcomes, system performance, and professional development that results from a combined, multidisciplinary approach in how change is delivered. 3
  • The delivery of healthcare with improved outcomes and lower cost through continuous redesigning of work processes and systems. 4
  • Using a systematic change method and strategies to improve patient experience and outcome. 5
  • To make a difference to patients by improving safety, effectiveness, and experience of care by using understanding of our complex healthcare environment, applying a systematic approach, and designing, testing, and implementing changes using real time measurement for improvement. 6

In this article we discuss QI as an approach to improving healthcare that follows the principles outlined in box 2; this may be a useful reference to consider how particular methods or tools could be used as part of a QI approach.

Principles of QI

  • Primary intent— To bring about measurable improvement to a specific aspect of healthcare delivery, often with evidence or theory of what might work but requiring local iterative testing to find the best solution. 7
  • Employing an iterative process of testing change ideas— Adopting a theory of change which emphasises a continuous process of planning and testing changes, studying and learning from comparing the results to a predicted outcome, and adapting hypotheses in response to results of previous tests. 8 9
  • Consistent use of an agreed methodology— Many different QI methodologies are available; commonly cited methodologies include the Model for Improvement, Lean, Six Sigma, and Experience-based Co-design. 4 Systematic review shows that the choice of tools or methodologies has little impact on the success of QI provided that the chosen methodology is followed consistently. 10 Though there is no formal agreement on what constitutes a QI tool, the term broadly covers activities, such as process mapping, that can be used within a range of QI methodological approaches. NHS Scotland’s Quality Improvement Hub has a glossary of commonly used tools in QI. 11
  • Empowerment of front line staff and service users— QI work should engage staff and patients by providing them with the opportunity and skills to contribute to improvement work. Recognition of this need often manifests in drives from senior leadership or management to build QI capability in healthcare organisations, but it also requires that frontline staff and service users feel able to make use of these skills and take ownership of improvement work. 12
  • Using data to drive improvement— To drive decision making by measuring the impact of tests of change over time and understanding variation in processes and outcomes. Measurement for improvement typically prioritises this narrative approach over concerns around exactness and completeness of data. 13 14
  • Scale-up and spread, with adaptation to context— As interventions tested using a QI approach are scaled up and the degree of belief in their efficacy increases, it is desirable that they spread outward and be adopted by others. Key to successful diffusion of improvement is the adaptation of interventions to new environments, patient and staff groups, available resources, and even personal preferences of healthcare providers in surrounding areas, again using an iterative testing approach. 15 16
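The "using data to drive improvement" principle above often takes the form of a run chart: plotting a measure over time and applying simple rules to distinguish a real shift from routine variation. As a minimal sketch (the daily session counts and the six-point rule threshold are illustrative assumptions, not data or methods from this article):

```python
# Minimal run-chart signal check for "measurement for improvement".
# A common run-chart rule: six or more consecutive points on the same
# side of the median suggest a non-random shift. Data here are invented.
from statistics import median

def shift_signal(values, run_length=6):
    """Return True if `run_length` consecutive points fall on the same
    side of the median. Points exactly on the median are skipped, per
    the usual run-chart convention."""
    m = median(values)
    run = 0
    last_side = 0
    for v in values:
        side = (v > m) - (v < m)   # +1 above, -1 below, 0 on the median
        if side == 0:
            continue               # median points neither extend nor break a run
        if side == last_side:
            run += 1
        else:
            run = 1
            last_side = side
        if run >= run_length:
            return True
    return False

# Hypothetical daily counts of completed physiotherapy sessions
baseline = [4, 6, 5, 7, 5, 6, 4, 5]
after_change = baseline + [8, 9, 8, 10, 9, 9]
print(shift_signal(baseline))      # → False: no sustained run either side of the median
print(shift_signal(after_change))  # → True: six consecutive points above the median
```

The point of rules like this is that "good enough" data, reviewed frequently, can show a team whether a test of change is working without waiting for research-grade analysis.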

What other approaches to improving healthcare are there?

Taking considered action to change healthcare for the better is not new, but QI as a distinct approach to improving healthcare is a relatively recent development. There are many well established approaches to evaluating and making changes to healthcare services in use, and QI will only be adopted more widely if it offers a new perspective or an advantage over other approaches in certain situations.

A non-systematic literature scan identified the following other approaches for making change in healthcare: research, clinical audit, service evaluation, and clinical transformation. We also identified innovation as an important catalyst for change, but we did not consider it an approach to evaluating and changing healthcare services so much as a catch-all term for describing the development and introduction of new ideas into the system. A summary of the different approaches and their definitions is shown in box 3. Many have elements in common with QI, but there are important differences in both intent and application. To be useful to clinicians and managers, QI must find a role within healthcare that complements research, audit, service evaluation, and clinical transformation while retaining the core principles that differentiate it from these approaches.

Alternatives to QI

  • Research— The attempt to derive generalisable new knowledge by addressing clearly defined questions with systematic and rigorous methods. 17
  • Clinical audit— A way to find out if healthcare is being provided in line with standards and to let care providers and patients know where their service is doing well, and where there could be improvements. 18
  • Service evaluation— A process of investigating the effectiveness or efficiency of a service with the purpose of generating information for local decision making about the service. 19
  • Clinical transformation— An umbrella term for more radical approaches to change; a deliberate, planned process to make dramatic and irreversible changes to how care is delivered. 20
  • Innovation— To develop and deliver new or improved health policies, systems, products and technologies, and services and delivery methods that improve people’s health. Health innovation responds to unmet needs by employing new ways of thinking and working. 21

Why do we need to make this distinction for QI to succeed?

Improvement in healthcare is 20% technical and 80% human. 22 Essential to that 80% is clear communication, clarity of approach, and a common language. Without this shared understanding of QI as a distinct approach to change, QI work risks straying from the core principles outlined above, making it less likely to succeed. If practitioners cannot communicate clearly with their colleagues about the key principles and differences of a QI approach, there will be mismatched expectations about what QI is and how it is used, lowering the chance that QI work will be effective in improving outcomes for patients. 23

There is also a risk that the language of QI is adopted to describe change efforts regardless of their fidelity to a QI approach, either due to a lack of understanding of QI or a lack of intention to carry it out consistently. 9 Poor fidelity to the core principles of QI reduces its effectiveness and makes its desired outcome less likely, leading to wasted effort by participants and decreasing its credibility. 2 8 24 This in turn further widens the gap between advocates of QI and those inclined to scepticism, and may lead to missed opportunities to use QI more widely, consequently leading to variation in the quality of patient care.

Without articulating the differences between QI and other approaches, there is a risk of not being able to identify where a QI approach can best add value. Conversely, we might be tempted to see QI as a “silver bullet” for every healthcare challenge when a different approach may be more effective. In reality it is not clear that QI will be fit for purpose in tackling all of the wicked problems of healthcare delivery and we must be able to identify the right tool for the job in each situation. 25 Finally, while different approaches will be better suited to different types of challenge, not having a clear understanding of how approaches differ and complement each other may mean missed opportunities for multi-pronged approaches to improving care.

What is the relationship between QI and other approaches such as audit?

Academic journals, healthcare providers, and arm’s length bodies have made various attempts to distinguish between the different approaches to improving healthcare. 19 26 27 28 However, most comparisons do not include QI or compare QI to only one or two of the other approaches. 7 29 30 31 To make it easier for people to use QI approaches effectively and appropriately, we summarise the similarities, differences, and crossover between QI and other approaches to tackling healthcare challenges ( fig 1 ).

Fig 1 | How quality improvement interacts with other approaches to improving healthcare

QI and research

Research aims to generate new generalisable knowledge, while QI typically involves a combination of generating new knowledge or implementing existing knowledge within a specific setting. 32 Unlike research, including pragmatic research designed to test effectiveness of interventions in real life, QI does not aim to provide generalisable knowledge. In common with QI, research requires a consistent methodology. This method is typically used, however, to prove or disprove a fixed hypothesis rather than the adaptive hypotheses developed through the iterative testing of ideas typical of QI. Both research and QI are interested in the environment where work is conducted, though with different intentions: research aims to eliminate or at least reduce the impact of many variables to create generalisable knowledge, whereas QI seeks to understand what works best in a given context. The rigour of data collection and analysis required for research is much higher; in QI a criterion of “good enough” is often applied.

Relationship with QI

Though the goal of clinical research is to develop new knowledge that will lead to changes in practice, much has been written on the lag time between publication of research evidence and system-wide adoption, leading to delays in patients benefitting from new treatments or interventions. 33 QI offers a way to iteratively test the conditions required to adapt published research findings to the local context of individual healthcare providers, generating new knowledge in the process. Areas with little existing knowledge requiring further research may be identified during improvement activities, which in turn can form research questions for further study. QI and research also intersect in the field of improvement science, the academic study of QI methods which seeks to ensure QI is carried out as effectively as possible. 34

Scenario: QI for translational research

Newly published research shows that a particular physiotherapy intervention is more clinically effective when delivered in short, twice-daily bursts rather than longer, less frequent sessions. A team of hospital physiotherapists wish to implement the change but are unclear how they will manage the shift in workload and how they should introduce this potentially disruptive change to staff and to patients.

  • Before continuing reading think about your own practice— How would you approach this situation, and how would you use the QI principles described in this article?

Adopting a QI approach, the team realise that, although the change they want to make is already determined, the way in which it is introduced and adapted to their wards is for them to decide. They take time to explain the benefits of the change to colleagues and their current patients, and ask patients how they would best like to receive their extra physiotherapy sessions.

The change is planned and tested for two weeks with one physiotherapist working with a small number of patients. Data are collected each day, including reasons why sessions were missed or refused. The team review the data each day and make iterative changes to the physiotherapist’s schedule, and to the times of day the sessions are offered to patients. Once an improvement is seen, this new way of working is scaled up to all of the patients on the ward.

The findings of the work are fed into a service evaluation of physiotherapy provision across the hospital, which uses the findings of the QI work to make recommendations about how physiotherapy provision should be structured in the future. People feel more positive about the change because they know colleagues who have already made it work in practice.

QI and clinical audit

Clinical audit is closely related to QI: it is often used with the intention of iteratively improving the standard of healthcare, albeit in relation to a pre-determined standard of best practice. 35 When used iteratively, interspersed with improvement action, the clinical audit cycle adheres to many of the principles of QI. However, in practice clinical audit is often used by healthcare organisations as an assurance function, making it less likely to be carried out with a focus on empowering staff and service users to make changes to practice. 36 Furthermore, academic reviews of audit programmes have shown audit to be an ineffective approach to improving quality due to a focus on data collection and analysis without a well developed approach to the action section of the audit cycle. 37 Clinical audits, such as those in the UK’s National Clinical Audit and Patient Outcomes Programme (NCAPOP), often focus on the management of specific clinical conditions. QI can focus on any part of service delivery and can take a more cross-cutting view which may identify issues and solutions that benefit multiple patient groups and pathways. 30

Audit is often the first step in a QI process and is used to identify improvement opportunities, particularly where compliance with known standards for high quality patient care needs to be improved. Audit can be used to establish a baseline and to analyse the impact of tests of change against the baseline. Also, once an improvement project is under way, audit may form part of rapid cycle evaluation, during the iterative testing phase, to understand the impact of the idea being tested. Regular clinical audit may be a useful assurance tool to help track whether improvements have been sustained over time.
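The baseline-and-reaudit idea above can be sketched in a few lines: audit records establish compliance with a standard before a test of change, and a reaudit measures the impact against that baseline. The records and rates below are invented for illustration, not taken from any real audit programme:

```python
# Sketch of audit as baseline and impact measure for QI.
# Each record is True if the audited case met the standard. Data invented.

def compliance_rate(records):
    """Share of audited cases meeting the standard."""
    return sum(records) / len(records)

baseline_audit = [True, False, True, True, False, True, False, True]   # 5 of 8 compliant
post_change_audit = [True, True, True, False, True, True, True, True]  # 7 of 8 compliant

improvement = compliance_rate(post_change_audit) - compliance_rate(baseline_audit)
print(f"baseline compliance: {compliance_rate(baseline_audit):.1%}")
print(f"post-change compliance: {compliance_rate(post_change_audit):.1%}")
print(f"change: {improvement:+.1%}")
```

Run regularly as rapid cycle evaluation, the same calculation can also serve the assurance role mentioned above: tracking whether an improvement is sustained over time.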

Scenario: Audit and QI

A foundation year 2 (FY2) doctor is asked to complete an audit of a pre-surgical pathway by looking retrospectively through patient documentation. She concludes that adherence to best practice is mixed and recommends: “Remind the team of the importance of being thorough in this respect and re-audit in 6 months.” The results are presented at an audit meeting, but a re-audit a year later by a new FY2 doctor shows similar results.

  • Before continuing reading think about your own practice— How would you approach this situation, and how would you use the QI principles described in this paper?

Contrast the above with a team-led, rapid cycle audit in which everyone contributes to collecting and reviewing data from the previous week, discussed at a regular team meeting. Though surgical patients are often transient, their experience of care and ideas for improvement are captured during discharge conversations. The team identify and test several iterative changes to care processes. They document and test these changes between audits, leading to sustainable change. Some of the surgeons involved work across multiple hospitals, and spread some of the improvements, with the audit tool, as they go.

QI and service evaluation

In practice, service evaluation is not subject to the same rigorous definition or governance as research or clinical audit, meaning that there are inconsistencies in the methodology for carrying it out. While the primary intent for QI is to make change that will drive improvement, the primary intent for evaluation is to assess the performance of current patient care. 38 Service evaluation may be carried out proactively to assess a service against its stated aims or to review the quality of patient care, or may be commissioned in response to serious patient harm or red flags about service performance. The purpose of service evaluation is to help local decision makers determine whether a service is fit for purpose and, if necessary, identify areas for improvement.

Service evaluation may be used to initiate QI activity by identifying opportunities for change that would benefit from a QI approach. It may also evaluate the impact of changes made using QI, either during the work or after completion to assess sustainability of improvements made. Though likely planned as separate activities, service evaluation and QI may overlap and inform each other as they both develop. Service evaluation may also make a judgment about a service’s readiness for change and identify any barriers to, or prerequisites for, carrying out QI.

QI and clinical transformation

Clinical transformation involves radical, dramatic, and irreversible change—the sort of change that cannot be achieved through continuous improvement alone. As with service evaluation, there is no consensus on what clinical transformation entails, and it may be best thought of as an umbrella term for the large scale reform or redesign of clinical services and the non-clinical services that support them. 20 39 While it is possible to carry out transformation activity that uses elements of a QI approach, such as effective engagement of the staff and patients involved, QI itself rests on iterative tests of change and so cannot deliver the one-off, irreversible change that defines transformation.

There is opportunity to use QI to identify and test ideas before full scale clinical transformation is implemented. This has the benefit of engaging staff and patients in the clinical transformation process and increasing the degree of belief that clinical transformation will be effective or beneficial. Transformation activity, once completed, could be followed up with QI activity to drive continuous improvement of the new process or allow adaption of new ways of working. As interventions made using QI are scaled up and spread, the line between QI and transformation may seem to blur. The shift from QI to transformation occurs when the intention of the work shifts away from continuous testing and adaptation into the wholesale implementation of an agreed solution.

Scenario: QI and clinical transformation

An NHS trust’s human resources (HR) team is struggling to manage its junior doctor placements, rotas, and on-call duties, which is causing tension and has led to concern about medical cover and patient safety out of hours. A neighbouring trust has launched a smartphone app that supports clinicians and HR colleagues to manage these processes with great success.

This problem feels ripe for a transformation approach—to launch the app across the trust, confident that it will solve the trust’s problems.

  • Before continuing reading think about your own organisation— What do you think will happen, and how would you use the QI principles described in this article for this situation?

Outcome without QI

Unfortunately, the HR team haven’t taken the time to understand the underlying problems with their current system, which revolve around poor communication from the HR team: staff do not know who to contact, and their questions go unanswered. HR assume that because the app has been a success elsewhere, it will work here as well.

People get excited about the new app and the benefits it will bring, but no consideration is given to the processes and relationships that need to be in place to make it work. The app is launched with a high profile campaign and adoption is high, but the same issues continue. The HR team are confused as to why things didn’t work.

Outcome with QI

Although the app has worked elsewhere, rolling it out without adapting it to local context is a risk – one which application of QI principles can mitigate.

HR pilot the app in a volunteer specialty after spending time speaking to clinicians to better understand their needs. They carry out several tests of change, ironing out issues with the process as they go, using issues logged and clinician feedback as a source of data. When they are confident the app works for them, they expand out to a directorate, a division, and finally the transformational step of an organisation-wide rollout can be taken.

Education into practice

Next time you are faced with what looks like a quality improvement (QI) opportunity, consider asking:

  • How do you know that QI is the best approach to this situation? What else might be appropriate?
  • Have you considered how to ensure you implement QI according to the principles described above?
  • Is there opportunity to use other approaches in tandem with QI for a more effective result?

How patients were involved in the creation of this article

This article was conceived and developed in response to conversations with clinicians and patients working together on co-produced quality improvement and research projects in a large UK hospital. The first iteration of the article was reviewed by an expert patient, and, in response to their feedback, we have sought to make clearer the link between understanding the issues raised and better patient care.

This article is part of the Quality Improvement series ( https://www.bmj.com/quality-improvement ) produced by The BMJ in partnership with and funded by The Health Foundation.

Contributors: This work was initially conceived by AB. AB and FO were responsible for the research and drafting of the article. AB is the guarantor of the article.

Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.

Provenance and peer review: This article is part of a series commissioned by The BMJ based on ideas generated by a joint editorial group with members from the Health Foundation and The BMJ , including a patient/carer. The BMJ retained full editorial control over external peer review, editing, and publication. Open access fees and The BMJ ’s quality improvement editor post are funded by the Health Foundation.
