
Planning and implementing practice changes in Ontario maternal-newborn hospital units: a secondary qualitative analysis

Abstract

Background

Moving evidence into practice is complex, and pregnant and birthing people and their infants do not always receive care that aligns with the best available evidence. Implementation science can inform how to effectively move evidence into practice. While there are a growing number of examples of implementation science being studied in maternal-newborn care settings, it remains unknown how real-world teams of healthcare providers and leaders approach the overall implementation process when making practice changes. The purpose of this study was to describe maternal-newborn hospital teams’ approaches to implementing practice changes. We aimed to identify which implementation steps teams take (or do not take) and to identify strengths and potential areas for improvement based on best practices in implementation science.

Methods

We conducted a supplementary qualitative secondary analysis of 22 interviews completed in 2014–2015 with maternal-newborn nursing leaders in Ontario, Canada. We used directed content analysis to code the data to seven steps in an implementation framework (Implementation Roadmap): identify the problem and potential best practice; assemble local evidence; select and customize best practice; discover barriers and drivers; tailor implementation strategies; field-test, plan evaluation, prepare to launch; launch, evaluate, and sustain. Frequency counts are presented for each step.

Results

Participants reported completing a median of 4.5 of 7 Implementation Roadmap steps (range = 3–7), with the most common being identifying a practice problem. Other steps were described less frequently (e.g., selecting and adapting evidence, field-testing, outcome evaluation) or discussed frequently but not optimally (e.g., barriers assessment). Participants provided examples of how they engaged point-of-care staff throughout the implementation process, but provided fewer examples of engaging pregnant and birthing people and their families. Some participants stated they used a formal framework or process to guide their implementation process, with the most common being quality improvement approaches and tools.

Conclusions

We identified variability across the 22 hospitals in the implementation steps taken. While we observed many strengths, we also identified areas where further support may be needed. Future work is needed to create opportunities and resources to support maternal-newborn healthcare providers and leaders to apply principles and tools from implementation science to their practice change initiatives.


Background

Improving the quality of care in maternal-newborn settings is an international, national, and local priority [1,2,3]. Despite ongoing improvement efforts, pregnant and birthing people and their infants continue to receive care that is not always aligned with the best available evidence [4, 5]. Moving evidence into practice is complex and remains an ongoing challenge in healthcare. Studies have revealed that individuals and organizations face many barriers to implementing evidence into practice, including lack of skills with few opportunities for training, unsupportive organizational cultures and the undervaluing of evidence, as well as limited time and physical and human resources [6]. In maternal-newborn care specifically, clinical practice changes can be particularly complex due to the involvement of different healthcare providers (e.g., nurses, physicians, midwives), care that focuses on two distinct patient populations (i.e., the pregnant or birthing person and the infant), and the fact that some practices involve separate hospital units (e.g., birthing unit, mother-baby unit, neonatal intensive care unit).

Implementation science is defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice” [7]. Although the field of implementation science has developed rapidly over the past two decades, its full potential has not yet been realized [8]. There is emerging discussion about how the knowledge gained through implementation science has largely remained in the scientific domain and has not been well translated into practice-based settings such as healthcare [9, 10]. It is important to assess if and how implementation evidence, principles, and tools are being applied, in order to identify opportunities to optimize evidence-informed implementation.

Within maternal-newborn care, there are increasing examples of implementation science applications, focusing on topics such as prioritizing content areas for implementation research [11] and practice [12], identifying and examining the effectiveness of different implementation strategies [13,14,15], and exploring barriers and facilitators to implementation [15,16,17]. While this literature makes an essential contribution to advancing our understanding of evidence-informed strategies to implement evidence and the potential challenges, it typically has not focused on the entire implementation process needed to bring about change (i.e., taking a planned action [18] approach to change). Recently, more examples of how teams work through the overall implementation process have been published as best practice implementation reports [19,20,21]. However, these reports are typically focused on a practice change in a single setting, and by virtue of publishing their work, likely over-represent teams that are more familiar with (and potentially more successful in) implementation processes. To complement this existing literature, there is a need to shift from learning about single implementation strategies or single projects to also looking more holistically at how maternal-newborn teams implement practice changes in their day-to-day work.

The province of Ontario, Canada provides a unique opportunity to learn about the process of moving evidence into practice. Every birthing hospital in the province has access to a perinatal data registry called the Better Outcomes Registry & Network (BORN) Ontario, which captures data on nearly every birth in the province [22]. Contributing hospitals can use their own BORN data to facilitate practice improvements [22], for example, to identify evidence-practice gaps, understand current practice, and monitor and evaluate implementation projects. Although hospitals have access to this large and robust data system, it remains largely unknown what processes teams are using to implement practice changes and how well their processes align with current best practices in implementation science.

In 2012, BORN launched the Maternal Newborn Dashboard (“the dashboard”), an online audit and feedback system that maternal-newborn hospital teams can use to facilitate practice improvements. The dashboard includes six key performance indicators related to practices such as newborn screening, episiotomies, breastfeeding, repeat elective cesarean sections, Group B streptococcus screening, and inductions [23]. In 2014, an evaluation of the dashboard commenced, providing an opportunity to learn how Ontario maternal-newborn hospitals approach practice changes and how they use the dashboard to support their work. One part of the evaluation involved interviews with nursing leaders in Ontario maternal-newborn hospitals about how they implement practice changes.

Using these data, we aimed to understand maternal-newborn leaders’ usual approaches to implementing practice changes in their hospital units, including which steps they take or do not take, and to identify potential areas where the implementation process could be improved.

Methods

While the focus of this current paper is to present the results of a qualitative secondary analysis, as per the reporting guidance of Beck [24], we report the methods of the primary study for context, as well as the methods for this current secondary analysis. We used the Consolidated Criteria for Reporting Qualitative Research checklist (Additional file 1) to guide our reporting [25].

Methods for previously conducted primary study

The methods for the primary study, which was one part of a larger mixed-methods evaluation, were detailed in a published protocol [23] and are summarized below.

Objectives

The objective of the primary study was to qualitatively explore potential factors that may explain the differences among maternal-newborn hospitals in their use of the dashboard. Because the purpose of the interviews was to inform the development of a questionnaire for all Ontario maternal-newborn hospitals to measure the identified factors, the interview data were never prepared for publication.

Design

The primary study used a qualitative descriptive design [26].

Sample

A criterion-based approach [27] was used to identify a purposeful sample of obstetrical managers and directors at maternal-newborn hospitals in Ontario, Canada, reflecting different birth volumes, acuity levels, geographic locations, and engagement with the dashboard. Obstetrical managers and directors were targeted due to their knowledge of clinical practice, quality improvement processes, and dashboard use at their organization. A total of 34 individuals were invited by email: three declined participation, nine did not respond, and 22 consented. The researchers assessed for data saturation throughout recruitment, data collection, and analysis. Recruitment stopped when interviews did not lead to new information [28].

Data collection

The original research team developed a semi-structured interview guide (Additional file 2) informed by the Promoting Action on Research Implementation in Health Services (PARIHS) framework [29], the Organizational Readiness for Knowledge Translation (OR4KT) tool [30, 31], and the Knowledge-to-Action framework [32]. The interviews were conducted between November 2014 and March 2015 by one of two female research staff (master’s-prepared research coordinator with expertise in quality improvement; research assistant with maternal-newborn nursing experience). Both interviewers had qualitative research experience and were trained by the study investigators. The interviewers did not have a prior relationship with study participants. The interviews, which lasted an average of 34 min (range of 17 min to 49 min), were completed by telephone and audio-recorded. Interviews were transcribed verbatim by a transcriptionist and verified by the research team.

Methods for current secondary analysis of primary study

Objectives

The objectives of this current secondary analysis were to: (1) describe maternal-newborn teams’ approaches to implementing practice changes; (2) identify the implementation steps and activities that teams do and do not take; and (3) identify any strengths and potential areas for improvement based on best practices in implementation science.

Design

The study we report here is a supplementary analysis [33]. Specifically, we conducted a more detailed analysis of one component of the dataset (i.e., approaches to implementing practice changes) that was not the focus of the primary study [33].

Researchers’ relationship to the primary dataset

The lead researcher for this secondary analysis (Reszel) was a research coordinator for the overall evaluation study [23], but was not directly involved in the collection or analysis of the primary dataset. The co-principal investigator (Dunn) and a co-investigator (Graham) from the primary study were involved in this secondary analysis and provided contextual and methodological details as needed.

Data used

We obtained permission from the co-principal investigator of the primary study (Dunn) and research ethics board approval to access the de-identified transcripts. Aggregate demographic information was provided for contextual information. We did not collect any new supplementary data.

Assessment of quality and fit of dataset for secondary analysis

Of the 22 interviews, 21 transcripts were available. There was no transcript for one interview due to the audio-recorder malfunctioning; however, detailed notes from this interview were available. There were specific questions in the semi-structured interview guide focused on participants’ usual practice change processes (questions 6–11 in Additional file 2), providing a rich dataset to assess if and how these practice change processes aligned with best practices in implementation science.

Data analysis

We conducted a directed content analysis [34], using NVivo12 Pro for data management [35]. We used an implementation framework, the Implementation Roadmap [36], as the initial coding scheme. As a “planned action” framework [18], the Implementation Roadmap provides comprehensive guidance to facilitate implementation, presenting the necessary steps and activities to effectively plan and execute implementation projects [36]. It was therefore anticipated that coding according to the Implementation Roadmap would provide insight into the comprehensiveness of the teams’ approaches and highlight which steps were being taken or not. The initial coding scheme included the seven steps in the Implementation Roadmap, namely: (1) identify the problem of concern and potential best practice; (2) assemble local evidence on context and current practices; (3) select and customize best practice to local context; (4) discover barriers and drivers for best practice implementation; (5) tailor implementation strategies; (6) field-test, plan evaluation, prepare to launch; and (7) launch, evaluate, and sustain the gains (Fig. 1).

Fig. 1 The Implementation Roadmap

Copyright 2021 Wiley. Used with permission from Harrison MB, Graham ID, Knowledge Translation in Nursing and Healthcare: A Roadmap to Evidence-Informed Practice, John Wiley & Sons Inc [36]

Each transcript was read in its entirety and coded by one team member (Reszel). Text was coded to the Implementation Roadmap steps where possible and frequency counts presented. Any content that could not be coded to the Implementation Roadmap was assigned a new code. For instance, we coded examples of engagement using the applicable levels described by Manafò et al. (e.g., inform, consult, involve, collaborate) [37]. The codes were grouped into broader categories. The coder (Reszel) met weekly with one other team member (Graham), who was the developer of the Implementation Roadmap, to discuss the coding and emerging trends. Once coding was completed, another team member (Daub), who has expertise in knowledge translation and implementation, reviewed all coded transcripts for accuracy and comprehensiveness. Throughout this review, the coders (Reszel, Daub) met to discuss the coding and categories and to reach consensus. The coding of the transcripts was subsequently updated. The coding review did not result in any changes to the coding scheme.
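To illustrate how the step-level frequency counts and per-site totals reported in the Results can be tallied, the following is a minimal sketch in Python. It assumes a simple site-by-step coding matrix; the site identifiers, example step sets, and variable names are hypothetical and are not drawn from the study dataset.

```python
from statistics import median

# Hypothetical coding matrix: each site maps to the set of Implementation
# Roadmap steps (1-7) coded as "discussed" in that site's transcript.
# Example data only; not the study's actual coding.
coded_steps = {
    "site_01": {1, 2, 4, 5, 7},
    "site_02": {1, 4, 5},
    "site_03": {1, 2, 3, 4, 5, 6, 7},
    # ... one entry per interviewed site
}

STEP_LABELS = {
    1: "Identify problem and potential best practice",
    2: "Assemble local evidence",
    3: "Select and customize best practice",
    4: "Discover barriers and drivers",
    5: "Tailor implementation strategies",
    6: "Field-test, plan evaluation, prepare to launch",
    7: "Launch, evaluate, and sustain",
}

n_sites = len(coded_steps)

# Frequency count per step: how many sites mentioned the step at least once.
for step, label in STEP_LABELS.items():
    n = sum(step in steps for steps in coded_steps.values())
    print(f"Step {step} ({label}): n = {n} ({n / n_sites:.0%})")

# Per-site totals: median and range of steps described per interview.
totals = sorted(len(steps) for steps in coded_steps.values())
print(f"Median steps per site: {median(totals)} (range {totals[0]}-{totals[-1]})")
```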

Finally, a summary of the analysis was presented to and discussed by the broader research team, which included a parent representative (Pervez), clinicians (Cassidy, Dunn, Lightfoot, Quosdorf), hospital intermediaries (Hafizi, Wood), and implementation science experts (Cassidy, Dunn). This discussion served to confirm and challenge the analysts’ interpretation and informed the final presentation of results.

Author positionality

Before presenting the results, we briefly describe our positionality. Our team includes healthcare providers and leaders, pregnant people and parents, intermediaries, and researchers of maternal-newborn care, with many of us having experience in multiple domains. We believe that the care provided to pregnant and birthing people should be informed by the best available evidence. We recognize it can be challenging to implement evidence-informed practices, but we believe that these challenges can be overcome and that maternal-newborn teams should be supported to develop the knowledge and skills to apply evidence in practice. This position influenced the question we chose to investigate and our professional and lived experiences informed the interpretation of our findings.

Results

Participants came from diverse hospital settings in Ontario, representing different professional roles, levels of care, and birth volumes (Table 1).

Table 1 Demographic information of interview participants (N = 22)

We present our results under two main categories: (1) implementation steps, including the seven steps in the Implementation Roadmap [36], and (2) implementation approach, including processes/frameworks and engagement level (Fig. 2).

Fig. 2 Organization of study findings

Implementation steps

Across the sites, there were examples of each Implementation Roadmap step (Fig. 3). The most frequently described steps were identifying the problem and potential best practice (n = 22, 100%); assembling local evidence (n = 17, 77%); and identifying barriers and drivers (n = 21, 95%). The least frequently described steps were selecting and customizing the best practice (n = 7, 32%) and field-testing, planning evaluations, and preparing to launch (n = 2, 9%). Participants described using a median of 4.5 out of the 7 implementation steps (range of 3 to 7). One participant mentioned all seven steps in their interview.

Fig. 3 Implementation steps by hospital site

*“Discussed” does not mean the step was done optimally, only that there was at least a general mention of it

Identify problem of concern and potential best practice to address it

All participants (n = 22) described how their teams came to identify a potential practice problem that needed to be addressed. Problems were identified through provincial, regional, or organizational priorities or mandates; emerging evidence from conferences, publications, and guidelines; a red signal on their dashboard; and seeing how their practice rates compared to those of other hospitals:

“The thing that actually drove the change wasn’t the evidence, it wasn’t the talking or creating a need, it was ‘okay, here we are, we’re red [on the dashboard] and this is embarrassing’…now we’re being compared to other people and how we fit in and it wasn’t pretty.” (Participant 5)

About one-third of participants (n = 7, 32%) discussed how they identify potential best practices that could address the problem. Sources included research literature, clinical practice guidelines (e.g., Registered Nurses’ Association of Ontario [RNAO], Society of Obstetricians and Gynecologists of Canada [SOGC]), and evidence from obstetrical safety training programs (e.g., MOREOB).

Only one participant mentioned appraising the evidence underlying the potential best practice.

Assemble local evidence on context and current practices

Most participants (n = 17/22, 77%) gave at least one example of how they learn about their current practice and the context in which the practice is occurring. This assessment involved collecting data and gathering the team’s experiences and impressions. Participants mentioned local data sources such as the BORN data registry and the dashboard, abstracted data from decision support, chart audits, and informal discussions with staff.

“You have subjective impressions of ‘we have a problem here; we know we do too many social inductions.’ You get that subjective perspective from staff and physicians but what the data does is make it clear. There’s no arguing with: here’s how many you did, right?” (Participant 10)

Participants described how they used different data sources to “drill down” to specific cases and explore precipitating factors, leading to a fuller understanding of what may be driving current practice (e.g., patient demographics, specific healthcare providers, time of day, data entry).

Finally, two participants (9%) described conducting what would be considered a formal “gap-analysis” to measure the difference between current practice and the best practice they are targeting (i.e., the evidence-practice gap).

Select and customize best practices to local context

One-third of participants (n = 7/22, 32%) discussed how their teams select and customize the best practice for their setting. While no participants described a structured process for selecting the specific best practice to be implemented, five participants (23%) explained the importance of securing the support or endorsement of others. This support was achieved through sharing the evidence for the best practice in an understandable way, showing how the best practice aligns with provincial and regional priorities, and ongoing discussions to share and resolve concerns, as described by a participant:

“People don’t stay up at night trying to do things wrong, so helping them understand the rationale for why—taking that extra time to appraise the research and look at translating that so it’s in simple terms that they would be able to understand why. Sometimes telling them who else has already done it this way helps with the buy-in and engagement as well.” (Participant 6)

One participant described the challenge of working as an interprofessional team where different professions rely on and value different sources of evidence, highlighting the need for a tailored approach to build support for the selected best practice. Finally, only one participant described how they customized the best practice, providing the example of modifying the recommendation to make it more achievable in their setting. There were no examples of teams customizing the best practice to align with their context by explicitly considering the who, what, how, and when.

Discover barriers and drivers for best practice implementation

Nearly all participants (n = 21/22, 95%) stated they consider potential barriers to implementing the selected best practice. Participants generally described the barriers assessment process as informal, involving brainstorming among the working group and general discussions with the broader clinical team. Several participants stated they were already familiar with the “usual” barriers based on their previous experience, and therefore did not repeat a barriers assessment for the current practice change initiative.

Three participants (14%) described a more systematic approach to assessing barriers, detailing specific steps they take to identify barriers and the application of a framework:

“When you do your root cause analysis you always look at opportunities and barriers. So it’s a SWOT [Strengths, Weaknesses, Opportunities, Threats] analysis, right? What are we doing well? What are we not doing well? What can we improve on? And then what are we anticipating from a change perspective is going to be a barrier? Who do we have to engage to eliminate that?” (Participant 10)

No participants described assessing which barriers are feasible to address or prioritizing which barriers to target for the most impact.

Tailor implementation strategies

All participants (n = 22) provided examples of strategies they use to implement practice changes in their settings (i.e., implementation strategies). In total, participants identified 14 different implementation strategies, with the most frequently mentioned being meetings and discussion, interprofessional teamwork, and staff education, training, or coaching (Table 2).

Table 2 Implementation strategies used by study participants to implement practice changes

Half of the participants (n = 11, 50%) indicated they take some steps to tailor the implementation strategies, either to identified barriers or to the local context. Five participants (23%) gave examples of how they consider the identified barriers when choosing which implementation strategies to use. For example, one participant described how they changed the days they were offering a specific clinical service to address an identified barrier to meeting best practice guidelines:

“A perceived barrier was that physicians couldn’t get a scheduled time for their cesarean section after 39 weeks so they were doing them earlier. We thought okay, that’s a barrier that we were really only doing elective sections from Monday to Friday. So we’re working at removing that barrier by doing a bit more planning with anesthesia to plan the cesarean section on a weekend.” (Participant 21)

Seven participants (32%) indicated that they take some steps to tailor the implementation strategies to the local context. For instance, some participants described how instead of “recreating the wheel” they looked to strategies used by other sites and adapted them to fit with their local culture and ways of doing things.

Field-test, plan evaluation, prepare to launch

Two participants (9%) described piloting or trialing their change initiatives prior to full-scale implementation. These “tests of change” were described as an important way to engage a core group of key supports and gather their feedback on what works and what does not. This information was then used to adjust the selected strategies prior to broader implementation:

“And then being willing to trial something rather than implement something and just say ‘this is how it is.’ So we’ve adapted to try an idea and then if it doesn’t work, adjust it to what we’ve learned from that experience.” (Participant 12)

No participants described developing an evaluation plan prior to implementation or completing a pre-launch assessment.

Launch, evaluate, and sustain the gains

Nearly all participants (n = 21, 95%) stated they engaged in some form of evaluation and/or sustainability activities. The most frequently described activity was monitoring adherence to the best practice. Eighteen participants (82%) described how they monitored their practice change initiative over time by using tools such as the BORN data registry and the dashboard to track changes in rates and colored signals. This monitoring data could be used to assess whether the implementation strategies were effectively bringing about the desired change, or if the strategies needed to be adjusted or boosted:

“The BORN data lets us know exactly where we’re sitting, how big the problem is, and when you can pull it month-to-month you can kind of get a glimpse as to: are we making any improvements in the interventions [best practices]? We rolled out a signed consent for formula supplementation—did that make any difference in our rates? The lunch and learns that we do around supplementation and around breastfeeding issues and techniques—are they making any difference?” (Participant 13)

Participants described how the monitoring results were shared with staff via meetings and posting on unit boards and communicated to leadership (e.g., division chiefs) and committees (e.g., quality and safety committees). However, some participants described the challenge of monitoring changes without access to timely data. Fewer participants described undertaking process evaluations or impact evaluations, with only one participant stating they also evaluate process indicators and patient outcomes.

Many participants (n = 16, 73%) stated that they were taking steps to ensure the sustainability of their practice changes. Examples of sustainability strategies included partnering with healthcare providers from the onset to secure their buy-in; making organizational changes that entrench the change in day-to-day work; ongoing monitoring for non-adherence; and maintaining ongoing communication with the team about the practice change. As described by one participant:

“We’re measuring it consistently and communicating that back to the clinicians: the physicians, the nurses, the midwives. What works well is if you measure it, people pay more attention to it. So that will definitely be one of our initiatives to make sure that it is sustained.” (Participant 15)

Implementation approach

Processes and frameworks

Nine participants (41%) named at least one formal process or framework they used to guide their change process. These processes and frameworks included Lean (n = 5), Plan-Do-Study-Act (PDSA) or Plan-Do-Check-Act (PDCA) (n = 5), Baby Friendly Initiative (BFI) steps (n = 3), project management (n = 1), as well as theories such as adult learning theory (n = 1) and change management theory (n = 1). One participant described the benefit of having a consistent process that is applied across different practice change initiatives to help the team understand the steps:

“It [previous practice change initiative] was a good exercise to go through with the staff…when it came time to launch our next project, they’re understanding it’s the same—we’re going to follow the same model.” (Participant 12)

Another nine participants (41%) stated that their organization did not use a formal process or framework to guide the change process, with one participant stating they “fly by the seat of [their] pants” (Participant 2). The remaining participants (n = 4, 18%) stated their organizations do use a formal framework to guide practice changes, but they were not able to name it, as expressed by one director:

“I’m sure if you talked to somebody else, they could tell you what our actual change management tool is that we use. I just don’t have a good handle on that this morning, but it’s the basic principles that everyone else uses.” (Participant 4)

Engagement level

Five participants (23%) gave examples that indicated their level of engagement with point-of-care staff was meant to inform staff of the changes. This tended to be one-way communication from the working group to point-of-care staff about what decision was made and how it will be implemented. For instance, when asked if clinical staff offer their opinions on practice changes, one manager stated: “No, I don’t think so. We just made an executive decision around what we thought would help” (Participant 18).

Over three-quarters of participants (n = 17, 77%) provided examples of two-way exchanges with point-of-care staff during the implementation process. Participants most frequently provided examples of how they consulted with point-of-care staff (n = 12, 55%), for example, by asking for suggestions on what practice changes to prioritize or soliciting input on how the change process is working. These consultations occurred through formal ticket systems (i.e., staff submit ideas and suggestions), meetings and huddles, emails, and informal conversations.

Other participants (n = 6, 27%) described involving point-of-care staff (and in one case, patient advisors) in the change process through their involvement in unit committees and councils, as described by one participant:

“We have a patient and family-centered care steering committee, which is comprised of both hospital employees and patient advisors, and we solicit their input quite frequently with different actions that we need to take.” (Participant 13)

Two participants (9%) described collaborating with point-of-care staff by including them in the core implementation working group as equal partners in the process:

“They’re [clinical staff] part of the working group, so they participate in the change. They contribute in terms of the root cause identification, in terms of ideas, problem solving, what do they want to try, what do they think will work, what are the barriers. All of that, they’re involved in it.” (Participant 10)

Discussion

In this study we aimed to explore how maternal-newborn hospital teams implement practice changes in their units. We learned about the usual implementation processes, focusing on what steps teams do and do not take. By comparing the described steps to an implementation framework, the Implementation Roadmap [36], we identified which steps were most frequently discussed (e.g., identifying a problem), which were less frequently discussed (e.g., selecting and adapting evidence, evaluation), and which were discussed frequently but not optimally (e.g., barriers assessment). We identified many strengths, including efforts to work through varied implementation steps, the depth of experiential knowledge, and efforts to engage point-of-care staff. By noting gaps in the implementation process, we identified potential areas where further capacity development and support may be needed.

Across the 22 sites, only one participant described all seven steps, with most describing four or fewer steps, potentially signaling that sites’ implementation processes are not comprehensive. Although we do not know what the sites’ implementation outcomes were (and therefore cannot make inferences about the effectiveness of the different approaches), our previous work identified that teams that were successful in their practice change initiatives were more likely to have used a formal framework (like the Implementation Roadmap) to guide their implementation process [39]. Furthermore, we identified variability across the sites regarding which implementation steps are taken. This is consistent with other studies that have reported differences in which implementation steps are taken, and when and how they are taken [40, 41]. There are several potential explanations for this variability. First, we expect that the nature of the change (e.g., size, complexity) may influence the number of steps that teams take, with smaller, simpler changes resulting in fewer steps taken. Second, the urgency of a change may prevent some steps from being taken (e.g., field-testing), such as when units are required to implement changes immediately due to organizational or provincial mandates (as we recently observed during the COVID-19 pandemic). Third, it is likely that the education and training experience of the implementation leader influences the process used. In our study, participants were predominantly nursing leaders who would have been trained in nursing clinical practice. Despite nurses frequently being tasked with improvement work, implementation practice and quality improvement are not typically included in nursing education programs and there are few opportunities for ongoing professional development in these areas [42, 43]. There is a need to equip nurses with implementation science knowledge and skills to better position them to translate evidence-informed practices into care [44].

Some of the most frequently identified implementation steps were identifying a problem and best practice, assembling local evidence, and monitoring and evaluation. While it is promising to see so many sites engaging in these steps, it is important to consider how access to the BORN dashboard and registry may have contributed to these high numbers, which may therefore not reflect what is occurring in the broader maternal-newborn system outside of Ontario. The dashboard facilitates identification of a problem by alerting teams with a colored signal (red or yellow) when best practice is not being met; it assists with learning about current practice by allowing users to drill down into specific cases to explore factors that may be driving the observed rates; and it allows users to monitor changes over time by observing changes in their rates and colored signals [23, 45]. Other settings may not have access to a similar data system designed to facilitate and improve care; instead, many teams rely on data systems designed for clinical and administrative purposes rather than for monitoring and evaluation [46, 47]. While our findings speak to the value of a dashboard for facilitating specific steps in the implementation process, they also highlight the need for teams to actively engage in implementation steps beyond using a dashboard. For instance, although many participants reported monitoring their dashboard (a largely passive activity), no participants described developing an evaluation plan beforehand and only one participant described undertaking a process and impact evaluation. More attention to active evaluation planning, consideration of broader outcomes (e.g., implementation outcomes, service outcomes, and client outcomes [48]), and resources to support evaluation are needed to better assess the effect of the practice change initiatives.

Although assessing barriers was one of the most frequently mentioned steps, study participants rarely described a comprehensive or theory-based approach, with some relying on experiential knowledge of barriers acquired through past projects. While the application of tacit knowledge is particularly useful in familiar situations [49], each practice change initiative is unique and relying on knowledge gained through past projects alone risks missing opportunities to learn about other relevant, current factors. In addition, few participants described selecting their implementation strategies based on the identified barriers or evidence. This challenge has been described elsewhere, with teams selecting strategies based on familiarity or the ISLAGIATT (“it seemed like a good idea at the time”) principle [50, 51]. While participants provided some examples of implementation strategies, the strategies they described represent only a small subset of the wide-ranging implementation strategies identified in the literature [52]. In our study, the most frequently identified implementation strategies were educational in nature, which aligns with previous literature [53, 54]. However, the barriers to practice change are often multi-factorial (e.g., at the individual, interpersonal, organizational, and system level), going beyond individual knowledge deficits. This requires implementation strategies that are tailored to the change being implemented, the identified multi-level barriers, and the implementation context [55]. Given there is evidence to suggest that tailoring implementation strategies to identified barriers can positively influence practice changes [56], there are opportunities to build further capacity in this area.

Fewer than half of participants named a process or framework that they use to guide the implementation process. Several study participants stated they used a framework or process but could not name it. Of those that did identify a process or framework, none identified a comprehensive implementation framework (e.g., a planned action framework [18]) that guided the full implementation process. Unsurprisingly, the most frequently identified processes and frameworks were grounded in quality improvement approaches (e.g., Lean, PDSA). Recently, there has been increased interest in the intersection between quality improvement and implementation science, with calls for the two complementary fields to better align [57, 58]. Adding implementation science to existing quality improvement approaches may have several benefits, including an increased emphasis on using evidence-informed practices and a focus on applying theory-driven and systemic approaches to assessing determinants of practice and selecting implementation strategies [36]. We assert that implementation science can enhance (not replace) these existing quality improvement approaches and tools, providing a systematic and comprehensive approach for teams.

We identified examples of how teams engaged point-of-care staff to varying degrees in the implementation process, with most providing examples of two-way exchanges between the implementation working group and staff. Governance structures such as unit councils were identified as a means to facilitate this engagement. The COVID-19 pandemic may have resulted in changes to shared governance, with clinical priorities quickly shifting and “nonessential” activities such as council meetings sometimes suspended [59]. Because our data were collected pre-pandemic, it is unknown what shared governance structures remain in place and how this may have changed staff engagement. Our future work will explore the existing shared governance infrastructure and how it is used to facilitate engagement of point-of-care staff in the implementation process. While our study provided many examples of how point-of-care staff are engaged in practice changes, only one study participant described engaging patients or families and six described using patient education and engagement as an implementation strategy. Given the limited examples of engaging patients in the working group itself, there remain opportunities for earlier partnership with patients in the implementation process. Indeed, patients and caregivers can contribute meaningfully to the implementation process and can be a powerful motivator for changes [60].

Limitations and strengths

We acknowledge this study has several limitations, many of which are inherent to conducting a secondary analysis. The interview questions were not designed to probe for the different Implementation Roadmap steps. The results need to be interpreted with caution; a participant not describing a step may in fact reflect a lack of precision in the interview questions, rather than an indication that the participant did not complete the step or lacked the knowledge or skills to do so. Conversely, a participant stating they completed a step does not necessarily mean it was actually completed (or completed optimally). In addition, implementation science continues to grow yearly, and it is possible that at the time of the interviews, participants may not have had access to the same implementation language to articulate the Implementation Roadmap steps. However, implementation science has not been well translated to practice-based settings [9, 10, 43], and so this challenge would likely remain if the interviews were conducted today. To mitigate this challenge, we were liberal with our coding and coded according to participants’ descriptions, regardless of the specific terms used.

Next, our results may be influenced by social desirability bias, whereby participants shared information they perceived as socially acceptable, rather than information that reflected their true reality [61]. For instance, some participants may have attempted to describe a more thorough implementation process than is actually used in practice. Brief or vague answers may be an indication of social desirability tendencies [61]; we were therefore attentive to this in our analysis, identifying where participants provided short answers without elaborating on when or how a step was actually performed, and highlighting this in our results (e.g., barriers assessment). However, social desirability was likely not an issue across all participants, as some did explicitly acknowledge their lack of awareness or completion of some steps.

Finally, it is important to acknowledge that at the time of analysis, the data were eight years old and these results may not reflect implementation practice in maternal-newborn hospitals today. To mitigate this limitation, each member of our research team, many of whom are clinicians embedded in maternal-newborn care, was involved in interpreting the data. Based on our collective experience, and the knowledge that practice changes slowly [62], these findings would likely still ring true today. These results are being used to develop a survey to distribute to all Ontario maternal-newborn hospital units to learn which Implementation Roadmap steps teams are currently taking, their confidence in completing them, and their perceptions of their importance. The results we report here are informing the development of survey questions to probe identified gaps and to tailor the question wording to align with local language. The upcoming survey will complement this qualitative secondary analysis by providing updated data from a wider sample of hospitals, allowing us to better understand what gaps and needs remain.

Conducting a secondary analysis also offered several strengths. Given the current demands on our health system and its leaders, we may not have been able to enroll the same number of study participants in today’s conditions. Given the sufficient fit between the original dataset and our question, conducting a secondary analysis eliminated the participant burden that would have been required to collect new qualitative data from busy clinicians and administrators [24]. Another strength of our study was the application of a recent evidence-informed framework (Implementation Roadmap [36]) that synthesizes theoretical and experiential knowledge in implementation science and practice, allowing us to interpret the data in a new light and identify future areas for research and practical support. Finally, our study makes a unique contribution to the literature by describing and comparing the implementation approaches of many maternal-newborn teams. With data on 22 sites (about one-quarter of birthing hospitals in the province), our sample provides insight into the implementation processes of diverse teams, highlighting commonalities and differences. These insights serve as potential areas to focus future implementation capacity-building efforts in maternal-newborn health services.

Conclusion

Overall, we observed variability in the reported implementation processes used by 22 maternal-newborn hospital units. While participants provided many examples of steps and activities they use to implement practice changes, we identified several areas where teams may need additional support. These results provide a foundation for future work to explore current implementation practice in maternal-newborn hospitals and will inform the development of tailored practice change resources, informed by implementation science, for maternal-newborn healthcare providers and leaders.

Availability of data and materials

The dataset used in this secondary analysis (i.e., interview transcripts) is not publicly available because it contains information that could compromise research participant privacy/consent, but it is available from the corresponding author on reasonable request.

Abbreviations

BFI:

Baby Friendly Initiative

BORN:

Better Outcomes Registry & Network Ontario

MOREOB:

Managing Obstetrical Risk Efficiently

PDCA:

Plan-Do-Check-Act

PDSA:

Plan-Do-Study-Act

QI:

Quality improvement

RNAO:

Registered Nurses’ Association of Ontario

SOGC:

Society of Obstetricians and Gynecologists of Canada

SWOT:

Strengths, Weaknesses, Opportunities, Threats

References

  1. World Health Organization. Network for improving quality of care for maternal, newborn and child health. 2021. https://www.qualityofcarenetwork.org/. Accessed 26 Jul 2023.

  2. Accreditation Canada, Healthcare Insurance Reciprocal of Canada, Canadian Medical Protective Association, Salus Global Corporation. Obstetrics services in Canada: advancing quality and strengthening safety. Ottawa; 2016. https://www.cmpa-acpm.ca/static-assets/pdf/research-and-policy/system-and-practice-improvement/Obstetrics_Joint_Report-e.pdf. Accessed 26 Jul 2023.

  3. Provincial Council for Maternal and Child Health. Perinatal & newborn health. 2022. https://www.pcmch.on.ca/reproductive-newborn-health/. Accessed 26 Jul 2023.

  4. Weiss D, Dunn SI, Sprague AE, Fell DB, Grimshaw JM, Darling E, et al. Effect of a population-level performance dashboard intervention on maternal-newborn outcomes: an interrupted time series study. BMJ Qual Saf. 2018;27:425–36. https://doi.org/10.1136/bmjqs-2017-007361.


  5. Squires JE, Cho-Young D, Aloisio LD, Bell R, Bornstein S, Brien SE, et al. Inappropriate use of clinical practices in Canada: a systematic review. CMAJ. 2022;194:E279–96. https://doi.org/10.1503/cmaj.211416.


  6. Abu-Odah H, Said NB, Nair SC, Allsop MJ, Currow DC, Salah MS, et al. Identifying barriers and facilitators of translating research evidence into clinical practice: a systematic review of reviews. Health Soc Care Community. 2022;30:e3265–76. https://doi.org/10.1111/hsc.13898.


  7. Eccles MP, Mittman BS. Welcome to implementation science. Implement Sci. 2006;1:1. https://doi.org/10.1186/1748-5908-1-1.


  8. Beidas RS, Dorsey S, Lewis CC, Lyon AR, Powell BJ, Purtle J, et al. Promises and pitfalls in implementation science from the perspective of US-based researchers: learning from a pre-mortem. Implement Sci. 2022;17:55. https://doi.org/10.1186/s13012-022-01226-3.


  9. Rapport F, Smith J, Hutchinson K, Clay-Williams R, Churruca K, Bierbaum M, et al. Too much theory and not enough practice? The challenge of implementation science application in healthcare practice. J Eval Clin Pract. 2022;28:991–1002. https://doi.org/10.1111/jep.13600.


  10. Lyon AR, Comtois KA, Kerns SEU, Landes SJ, Lewis CC. Closing the science–practice gap in implementation before it widens. In: Albers B, Shlonsky A, Mildon R, editors. Implementation Science 3.0. Cham: Springer Nature; 2020. p. 295–313.


  11. Sharma R, Buccioni M, Gaffey MF, Mansoor O, Scott H, Bhutta ZA. Setting an implementation research agenda for Canadian investments in global maternal, newborn, child and adolescent health: a research prioritization exercise. CMAJ Open. 2017;5:E82–9. https://doi.org/10.9778/cmajo.20160088.


  12. Lassi ZS, Kumar R, Mansoor T, Salam RA, Das JK, Bhutta ZA. Essential interventions: implementation strategies and proposed packages of care. Reprod Health. 2014;11(Suppl 1):S5. https://doi.org/10.1186/1742-4755-11-S1-S5.


  13. Doherty E, Kingsland M, Wiggers J, Wolfenden L, Hall A, McCrabb S, et al. The effectiveness of implementation strategies in improving preconception and antenatal preventive care: a systematic review. Implement Sci Commun. 2022;3:121. https://doi.org/10.1186/s43058-022-00368-1.


  14. Imamura M, Kanguru L, Penfold S, Stokes T, Camosso-Stefinovic J, Shaw B, et al. A systematic review of implementation strategies to deliver guidelines on obstetric care practice in low- and middle-income countries. Int J Gynecol Obstet. 2017;136:19–28. https://doi.org/10.1002/ijgo.12005.


  15. Batinelli L, Thaels E, Leister N, McCourt C, Bonciani M, Rocca-Ihenacho L. What are the strategies for implementing primary care models in maternity? A systematic review on midwifery units. BMC Pregnancy Childbirth. 2022;22:123. https://doi.org/10.1186/s12884-022-04410-x.


  16. Stokes T, Shaw EJ, Camosso-Stefinovic J, Imamura M, Kanguru L, Hussein J. Barriers and enablers to guideline implementation strategies to improve obstetric care practice in low- and middle-income countries: a systematic review of qualitative evidence. Implement Sci. 2016;11:144. https://doi.org/10.1186/s13012-016-0508-1.


  17. Dadich A, Piper A, Coates D. Implementation science in maternity care: a scoping review. Implement Sci. 2021;16:16. https://doi.org/10.1186/s13012-021-01083-6.


  18. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53. https://doi.org/10.1186/s13012-015-0242-0.


  19. Kitila SB, Sudhakar M, Feyissa GT. Compliance to immediate newborn care practice among midwives working in maternity wards: a best practice implementation project. JBI Evid Implement. 2020;18:337–44. https://doi.org/10.1097/XEB.0000000000000237.


  20. Huang X, Zhang J, Zhou F, Yang Y, Lizarondo L, McArthur A. Promotion of early breast milk expression among mothers of preterm infants in the neonatal ICU in an obstetrics and gynaecology hospital: a best practice implementation project. JBI Evid Implement. 2020;18:278–87. https://doi.org/10.1097/XEB.0000000000000223.


  21. JBI Evidence Implementation. Special collection: women’s, children’s and adolescent health. 2023. https://journals.lww.com/ijebh/pages/collectiondetails.aspx?TopicalCollectionId=1. Accessed 28 June 2023.

  22. Murphy MS, Fell DB, Sprague AE, Corsi DJ, Dougan S, Dunn SI, et al. Data resource profile: Better Outcomes Registry & Network (BORN) Ontario. Int J Epidemiol. 2021;5:1416–1417h. https://doi.org/10.1093/ije/dyab033.


  23. Dunn S, Sprague AE, Grimshaw JM, Graham ID, Taljaard M, Fell D, et al. A mixed methods evaluation of the maternal-newborn dashboard in Ontario: dashboard attributes, contextual factors, and facilitators and barriers to use: a study protocol. Implement Sci. 2016;11:59. https://doi.org/10.1186/s13012-016-0427-1.


  24. Beck CT. Secondary Qualitative Data Analysis in the Health and Social Sciences. New York: Routledge; 2019.


  25. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Heal Care. 2007;19:349–57. https://doi.org/10.1093/intqhc/mzm042.


  26. Sandelowski M. What’s in a name? Qualitative description revisited. Res Nurs Health. 2010;33:77–84. https://doi.org/10.1002/nur.20362.


  27. Creswell JW. Qualitative inquiry and research design. Choosing among five traditions. 3rd ed. Thousand Oaks: Sage Publications; 2013.


  28. Mason M. Sample size and saturation in PhD studies using qualitative interviews. Qual Soc Res. 2010;11:Art 8. https://doi.org/10.17169/fqs-11.3.1428.


  29. Rycroft-Malone J. The PARIHS framework — a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19:297–304. https://doi.org/10.1097/00001786-200410000-00002.


  30. Attieh R, Gagnon MP, Estabrooks CA, Légaré F, Ouimet M, Roch G, et al. Organizational readiness for knowledge translation in chronic care: a review of theoretical components. Implement Sci. 2013;8:138. https://doi.org/10.1186/1748-5908-8-138.


  31. Gagnon MP, Labarthe J, Légaré F, Ouimet M, Estabrooks CA, Roch G, et al. Measuring organizational readiness for knowledge translation in chronic care. Implement Sci. 2011;6:72. https://doi.org/10.1186/1748-5908-6-72.


  32. Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26:13–24. https://doi.org/10.1002/chp.47.


  33. Heaton J. Secondary analysis of qualitative data: an overview. Hist Soc Res. 2008;33:33–45 (https://www.jstor.org/stable/20762299).


  34. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15:1277–88. https://doi.org/10.1177/1049732305276687.


  35. QSR International Pty Ltd. NVivo qualitative data analysis software, version 12 Pro. 2017.


  36. Harrison MB, Graham ID. Knowledge translation in nursing and healthcare: a roadmap to evidence-informed practice. Hoboken: John Wiley & Sons, Inc; 2021.


  37. Manafò E, Petermann L, Vandall-Walker V, Mason-Lai P. Patient and public engagement in priority setting: a systematic rapid review of the literature. PLoS ONE. 2018;13:e0193579. https://doi.org/10.1371/journal.pone.0193579.


  38. Provincial Council for Maternal and Child Health. Perinatal, birthing and newborn levels of care. Toronto; 2023. https://www.pcmch.on.ca/wp-content/uploads/Perinatal-Birthing-and-Newborn-LOC-Guidance-Document_March-2023.pdf. Accessed 26 Jul 2023.

  39. Reszel J, Dunn SI, Sprague AE, Graham ID, Grimshaw JM, Peterson WE, et al. Use of a maternal newborn audit and feedback system in Ontario: a collective case study. BMJ Qual Saf. 2019;28:635–44. https://doi.org/10.1136/bmjqs-2018-008354.


  40. Reszel J, van den Hoek J, Nguyen T, Aravind G, Bayley MT, Bird M-L, et al. How community-based teams use the stroke recovery in motion implementation planner: longitudinal qualitative field test study. JMIR Form Res. 2022;6:e37243. https://doi.org/10.2196/37243.


  41. Harrison MB, Graham ID, van den Hoek J, Dogherty EJ, Carley ME, Angus V. Guideline adaptation and implementation planning: a prospective observational study. Implement Sci. 2013;8:49. https://doi.org/10.1186/1748-5908-8-49.

  42. Vroom EB, Massey OT. Moving from implementation science to implementation practice: the need to solve practical problems to improve behavioral health services. J Behav Health Serv Res. 2022;49:106–16. https://doi.org/10.1007/s11414-021-09765-1.

  43. Westerlund A, Sundberg L, Nilsen P. Implementation of implementation science knowledge: the research-practice gap paradox. Worldviews Evid Based Nurs. 2019;16:332–4. https://doi.org/10.1111/wvn.12403.

  44. Boehm LM, Stolldorf DP, Jeffery AD. Implementation science training and resources for nurses and nurse scientists. J Nurs Scholarsh. 2020;52:47–54. https://doi.org/10.1111/jnu.12510.

  45. Sprague AE, Dunn S, Fell D, Harrold J, Walker M, Kelly S, et al. Measuring quality in maternal-newborn care: developing a clinical dashboard. J Obstet Gynaecol Can. 2013;35:29–38. https://doi.org/10.1016/s1701-2163(15)31045-8.

  46. Clarke GM, Conti S, Wolters AT, Steventon A. Evaluating the impact of healthcare interventions using routine data. BMJ. 2019;365:l2239. https://doi.org/10.1136/bmj.l2239.

  47. Dixon-Woods M, McNicol S, Martin G. Ten challenges in improving quality in healthcare: lessons from the Health Foundation’s programme evaluations and relevant literature. BMJ Qual Saf. 2012;21:876–84. https://doi.org/10.1136/bmjqs-2011-000760.

  48. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76. https://doi.org/10.1007/s10488-010-0319-7.

  49. Dawson T. Tacit (intuitive) knowledge has issues. Medium. 2018. https://theo-dawson.medium.com/tacit-knowledge-has-problems-420272c4cf1. Accessed 28 June 2023.

  50. Reszel J, Daub O, Leese J, Augustsson H, Bellows D, Cassidy CE, et al. Essential content for teaching implementation practice in healthcare: a mixed-methods study of teams offering capacity-building initiatives. Implement Sci Commun. 2023. Submitted.

  51. Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7:3. https://doi.org/10.3389/fpubh.2019.00003.

  52. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21. https://doi.org/10.1186/s13012-015-0209-1.

  53. Curran JA, Gallant AJ, Wong H, Shin HD, Urquhart R, Kontak J, et al. Knowledge translation strategies for policy and action focused on sexual, reproductive, maternal, newborn, child and adolescent health and well-being: a rapid scoping review. BMJ Open. 2022;12:e053919. https://doi.org/10.1136/bmjopen-2021-053919.

  54. Campbell A, Louie-Poon S, Slater L, Scott SD. Knowledge translation strategies used by healthcare professionals in child health settings: an updated systematic review. J Pediatr Nurs. 2019;47:114–20. https://doi.org/10.1016/j.pedn.2019.04.026.

  55. Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14:42. https://doi.org/10.1186/s13012-019-0892-4.

  56. Baker R, Camosso-Stefinovic J, Gillies C, Shaw EJ, Cheater F, Flottorp S, et al. Tailored interventions to address determinants of practice. Cochrane Database Syst Rev. 2015;2015:CD005470. https://doi.org/10.1002/14651858.CD005470.pub3.

  57. Koczwara B, Stover AM, Davies L, Davis MM, Fleisher L, Ramanadhan S, et al. Harnessing the synergy between improvement science and implementation science in cancer: a call to action. J Oncol Pract. 2018;14:335–40.

  58. Leeman J, Rohweder C, Lee M, Brenner A, Dwyer A, Ko LK, et al. Aligning implementation science with improvement practice: a call to action. Implement Sci Commun. 2021;2:99. https://doi.org/10.1186/s43058-021-00201-1.

  59. Hess RG, Weaver SH, Speroni KG. Shared governance during a pandemic. Nurse Lead. 2020;18:497–9. https://doi.org/10.1016/j.mnl.2020.05.008.

  60. Boaz A, Robert G, Locock L, Sturmey G, Gager M, Vougioukalou S, et al. What patients do and their impact on implementation. J Health Organ Manag. 2016;30:258–78. https://doi.org/10.1108/JHOM-02-2015-0027.

  61. Bergen N, Labonté R. “Everything is perfect, and we have no problems”: detecting and limiting social desirability bias in qualitative research. Qual Health Res. 2020;30:783–92. https://doi.org/10.1177/1049732319889354.

  62. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104:510–20. https://doi.org/10.1258/jrsm.2011.110180.

Acknowledgements

We thank the research team of the primary study (led by co-author Dr. Sandra Dunn and co-principal investigator Dr. Mark Walker) who collected the data used in this secondary analysis. We also thank the nursing leaders for their time and willingness to share their implementation experiences.

Funding

The primary study was funded by the Canadian Institutes of Health Research (CIHR FRN #133576) and the Ontario Ministry of Health and Long-Term Care (MOHLTC #06684). JR is funded by a CIHR Vanier Canada Graduate Scholarship and has received awards from the Integrated Knowledge Translation Research Network (IKTRN) and the University of Ottawa. IDG is a recipient of a CIHR Foundation grant (FDN# 143237).

Author information

Authors and Affiliations

Authors

Contributions

JR, SID, CEC, and IDG contributed to the conceptualization and design of the study. JR, OD, and IDG analyzed the data. JR, OD, SID, CEC, KH, ML, DP, AQ, AW, and IDG contributed to interpreting the data. JR wrote the first draft of the manuscript. OD, SID, CEC, KH, ML, DP, AQ, AW, and IDG contributed to critically reviewing and editing the manuscript drafts. IDG supervised the overall conduct of the study. All authors approved the final version of the manuscript.

Corresponding author

Correspondence to Jessica Reszel.

Ethics declarations

Ethics approval and consent to participate

Ethics approval for the secondary analysis reported in this paper was obtained from the University of Ottawa Research Ethics Board (H-11-22-8739). The Children’s Hospital of Eastern Ontario (CHEO) Research Ethics Board (#13/218X) and the University of Ottawa Research Ethics Board (A01-14-03) approved the primary study. Before starting each interview, a research staff member reviewed the study information letter with the participant and obtained their verbal informed consent. Because the interviews were conducted by telephone and posed low risk to participants, the abovementioned research ethics boards approved the use of verbal informed consent. All methods were carried out in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

COREQ Reporting Checklist.

Additional file 2.

Semi-structured interview guide.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Reszel, J., Daub, O., Dunn, S.I. et al. Planning and implementing practice changes in Ontario maternal-newborn hospital units: a secondary qualitative analysis. BMC Pregnancy Childbirth 23, 735 (2023). https://doi.org/10.1186/s12884-023-06042-1

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s12884-023-06042-1

Keywords