Discussion
Routinely collected measurements and public reporting of metrics are known to catalyse attention and action on the quality of health services,2 which is an urgent priority for many health systems, including those in the LHDIS countries. The absence of collective and concerted efforts to do so for LHDIS countries is, therefore, indefensible. Measuring the quality of health services does not need to be reserved for high-resource settings; it is possible and imperative that we do so for the countries experiencing the most egregious problems in the quality of care. Therefore, our objective was to demonstrate the potential of using standardised and publicly available data sources to measure quality across LHDIS countries. We used the QoC FCVS framework7 and Donabedian dimensions8 to map metrics drawn from DHS and SPA data sources. Using the established metrics, we selected four countries for comparison that met two criteria: availability of both DHS and SPA data within the last 10 years and a low ranking on the Human Development Index.
We have shown that it is possible to use publicly available data to analyse the quality of care in LHDIS countries. Policymakers and other stakeholders can rapidly analyse an LHDIS country’s state of quality with these minimal resources. However, this approach was not without limitations. The first is that, while DHS and SPA survey completion is sparse across countries, these surveys remain the best publicly available and standardised sources. While two-thirds of LHDIS countries have conducted a DHS survey in the last 10 years, only 20% have ever conducted complete SPA surveys; only those 20% have reported on both DHS and SPA. Additionally, among the countries that conducted one or both surveys, the survey was typically a one-off, with no longitudinal survey data, making it impossible to track internal progress in quality of care performance.
It is also important to note that the majority of the data used in our analysis predates the COVID-19 pandemic. This temporal aspect introduces two central considerations. One is the notable impact of the pandemic on healthcare quality, which has resulted in declines across the healthcare spectrum. Consequently, the ‘state of quality’ in healthcare systems is likely considerably worse than the baseline represented by the pre-pandemic data used in this study. The other is that healthcare quality is dynamic and subject to change from various factors, including unforeseen emergencies such as infectious disease outbreaks, environmental crises, economic challenges and political disruptions. Recognising these emergent factors as important markers of healthcare system quality is crucial. Our study serves as a snapshot in time and highlights the importance of continuously adapting and evolving quality assessment metrics to capture the evolving realities of healthcare systems. It is also critical to note the vital interface between resilience and quality of care and the need for coordinated action on both, particularly in FCV settings where the risk for public health emergencies may be more significant.
Beyond the limitations associated with the age and availability of country data, we found that the available DHS and SPA data elements do not address all quality of care areas across the QoC FCVS framework and Donabedian dimensions, thus failing to provide as holistic a representation of a country’s quality of care as would be desirable. Critical gaps exist, in particular, in understanding the system environment required to support quality of care efforts at the point of care. Further, even where indicators are available to populate the matrix, these are of varying value in truly informing on the quality of care; they are often not the best indicators from a conceptual standpoint, instead representing the best available from the SPA and DHS datasets. In addition, data availability does not provide insights into the quality of processes employed within a country to engage stakeholders in generating and using data for care improvement. Beyond these broad efforts with widely available data sources, there may be value in more targeted efforts that incorporate participatory methods, such as community scorecards used across FCV settings. Such efforts may bridge the information gaps and offer a more comprehensive view of healthcare quality, considering the local context and the involvement of stakeholders in the process.
Despite these limitations, results from the illustrative set of countries demonstrate the strengths of our study and the value of these data sources to evaluate the quality of care and promote global and national dialogue for its improvement. In particular, the data demonstrate widespread, persistent challenges in quality care across all the included countries, reinforcing the need for sustained national and global efforts to advocate for action on quality of care and generate learning on how to achieve this in the most challenging settings. The missingness of data on some of the most basic human resources, tracer items and guidelines across all settings further reflects this need. However, it is most evident in the still shocking mortality figures presented, to which lack of access to quality care is likely a significant contributor.
Nevertheless, our findings also demonstrated areas where practice may be better than anticipated, illustrating that progress in quality of care is possible in LHDIS countries. An example is the data on injections given with a syringe and needle set from a new, unopened package, for which average figures across each country are above 90%. While this critical patient safety action happens at the point of care, it is the culmination of a well-functioning system requiring sustained equipment availability alongside appropriate provider knowledge and practice.
In addition to these global findings, the results presented here demonstrate the potential value of such data in driving individual country efforts on quality of care. The side-by-side comparison this exercise produces further enhances this finding, allowing for contextualised performance benchmarking. For example, our findings demonstrated that while Senegal excels in the improving clinical care domain compared with the other included countries, the country fares worse in ensuring access and basic infrastructure. Policymakers in Senegal could use these findings to redirect resources, for example, to rectify the deficiencies in pharmaceutical and lab readiness at the facility level. In addition, while national stakeholders may use these findings to improve the services they provide and critical infrastructure and inputs, they can also use the quality data for advocacy with external donors for relief or support in specific service areas. The variation seen between settings, such as antenatal care provision, could help spur cross-country learning and collaboration efforts. The quality of care challenges highlighted by Afghanistan’s ranking in the country comparison and, for example, across antenatal care services could support their efforts in requesting targeted external financing and support to improve a service area. Furthermore, the data illuminate Haiti’s urgent need to address subpar scores across multiple SPA indicators within the ensuring access and basic infrastructure domain. These shortcomings point to a complex issue that necessitates focused interventions. These challenges encompass potential deficits in standardised healthcare protocols and procedures, deficiencies in data collection, management and utilisation, inadequacies in laboratory capabilities, and staffing shortfalls.
To effectively tackle these multifaceted challenges, a comprehensive approach, including policy reforms, increased investments in healthcare infrastructure, and bolstering the healthcare workforce, is imperative. Presenting a country’s relative performance in a clear comparative format alongside countries with similar human development scores helps contextualise their quality of care and pinpoint specific areas that need improvement.
The differences in the quality of care among the selected countries, particularly Senegal’s relatively better performance, may be influenced by the fact that Senegal is not classified as an FCVS, unlike the other three countries. This distinction could contribute to Senegal’s comparatively stronger healthcare infrastructure and potentially better resource access. While Senegal’s status may have influenced the results, the study’s primary goal was to demonstrate the feasibility of using publicly available data to assess healthcare quality in low-resource settings, and the findings highlight the variations and opportunities for improvement among these countries, regardless of their FCV status.
Our assessment demonstrated that, even with the limited open-access and standardised national data reporting on relevant quality measures, we could examine quality within and across selected LHDIS countries. For those countries that have done, or are planning to do, DHS and SPA assessments, the approach described here can already be used to provide actionable, comparable data on the quality of care that, while not comprehensive, do span multiple quality domains and have many apparent uses to improve the quality of care. However, under the umbrella of the donor-funded DHS programme, these assessments do not provide a long-term sustainable solution for widespread, timely, cross-country comparative data on quality of care in LHDIS countries, given the reliance on donor funding and lack of a formal regular schedule of surveys. Conducting these surveys is often time-consuming and complex. Specific settings may prioritise other tools to meet data needs for health planning, such as bespoke national surveys or tools from other organisations.
A particular area of focus for many countries is the use of routine health management information systems, like the District Health Information System (DHIS2)20 to enable the timely collection and aggregation of health service performance indicators. As such systems become more widely and effectively used across LHDIS countries, it is necessary to explore how they can facilitate collation and cross-country comparison of data on the quality of care. Other recent global efforts, such as the Harmonised Health Facility Assessment, which contains a module on quality of care,21 and the WHO-UNICEF Primary Healthcare Measurement Framework and Indicators,22 may also begin to provide sources of relevant data as they are rolled out more systematically.
While this paper has looked only at two readily accessible data sources with available data for multiple settings, individual LHDIS countries can likely leverage other sources to build a similar picture of the quality of care and to keep this profile up to date. These may include routine health management information systems; ad hoc or regular surveys and health facility assessments; disease or technical programme data repositories; donor and technical partner monitoring and evaluation; and other regional and global health data initiatives not mentioned here. Governments and other stakeholders could adapt the approach outlined in this paper to combine such data into pragmatic indicator sets to improve quality of care, as outlined above. However, doing this on a large scale with sufficient rigour and cross-country comparability would require significant resources. Finally, data need not only to be collated but also displayed and disseminated in a way that is helpful to policymakers.
Despite the drawbacks, and acknowledging the imperative to make rapid progress in developing methods to collect meaningful and actionable quality-of-care data, a realistic strategy may be to invite willing and interested countries to make the relevant data from DHIS2 and other high-quality sources accessible for quality-of-care measurement initiatives.