Original Research

Assessing the feasibility of using publicly available data sources to identify healthcare data discrepancies and enhance service delivery: a retrospective cross-sectional study of four low Human Development Index scoring countries

Abstract

Introduction Quality of care (QoC) remains a persistent challenge in countries with low Human Development Index scores (LHDIS) despite global efforts to promote universal health coverage. Addressing root causes and systematically implementing improvement interventions in LHDIS countries require a better understanding and use of QoC data. We aim to describe the data gaps and illustrate the state of quality in health services across a small set of countries. We demonstrate how we can leverage currently available, although imperfect, public data sources to compile quality metrics across multiple LHDIS countries.

Methods Using public data sources, the Demographic Health Survey (DHS) and Service Provision Assessment (SPA), we selected relevant quality metrics and categorised them within a QoC matrix. We based the selection of metrics on the quality of care in fragile, conflict-affected and vulnerable settings framework domains and the Donabedian model. Criteria for our retrospective cross-sectional study included a LHDIS and recent availability of both DHS and SPA data for data relevance.

Results The approach was feasible, with relevant indicators distributed across various QoC categories. However, some cells in the indicator matrix lacked suitable indicators from SPA and DHS data. We selected the Democratic Republic of the Congo, Haiti, Afghanistan and Senegal for a snapshot of QoC in LHDIS countries. Comparisons highlighted areas of positive performance and shared challenges across these countries, with notable variability in certain categories. Senegal ranked highest overall, while Afghanistan ranked lowest across all matrix categories. Senegal had the most comprehensive data, with 94.9% of metrics available. Missing data existed for two specific metrics in all four countries, particularly within the improving clinical care domain.

Conclusion The results are a clarion call for advancing efforts to develop standardised, publicly available, routinely collected and validated data sets to measure and publicly report LHDIS countries’ state of quality, to marshal global attention and action in pursuit of greater health equity.

What is already known on this topic

  • Quality of care remains a persistent challenge in countries with low Human Development Index scores (LHDIS). A better understanding and use of data on the quality of care is necessary to address root causes and systematically implement improvement interventions in LHDIS countries.

What this study adds

  • In LHDIS countries, where the most significant quality of care challenges exist, there is no routine monitoring and reporting function of well-accepted quality metrics at a global level, and there are few examples of systematic national-level efforts. This paper aims to help address this critical data gap by demonstrating the feasibility of gathering these indicators in lower-resourced countries.

How this study might affect research, practice or policy

  • The four-country snapshot provided within this paper reiterates both the pervasive challenge of poor-quality care in LHDIS settings and the value of data in supporting improvements. While meaningful and publicly available indicators are available, efforts must be scaled to realise the benefits of systematically presenting comparable data across LHDIS settings.

Introduction

In 2018, the Lancet Commission on High-Quality Health Systems noted that, of the more than 8 million deaths each year from conditions avoidable through adequate treatment, 60% are due to poor-quality care, while the remainder is due to lack of access to care.1 Many, if not most, countries face persistent challenges across the various quality domains of effectiveness, safety, people-centredness, timeliness, equity, efficiency and integration, all resulting in suboptimal or even egregious health outcomes.2

Indeed, quality of care is central to the Sustainable Development Goals target of universal health coverage.3 Realising this vision will require action to address massive deficiencies in access to reliable quality care worldwide, most notably in areas of extreme adversity. While there have been considerable improvements in the health outcomes of the global population, essential gaps widely persist in people’s ability to attain the highest possible level of health.1 About half of the world’s population lack access to the services they need, and poor health disproportionally affects those faced with adverse social and other determinants of health.

There is growing recognition of the deleterious effects of uncertain access and uneven quality of healthcare worldwide. However, a consensus is necessary around a standardised process for publicly reporting quality status and tracking progress. As well as having a clear role globally in driving the quality of care agenda, the presentation of country-level data on quality of care can be a powerful tool to promote accountability, inform improvements in the health system, allow productive comparisons and benchmarking across different settings, and underpin cross-country learning to improve health system performance.2 The Organisation for Economic Co-operation and Development Healthcare Quality Indicators (OECD HCQI) project performs this data function for its 38 member countries, all of which are high-income or upper-middle-income countries.4 The Primary Healthcare Performance Initiative measures primary healthcare performance outcomes across select countries.5 However, where the most significant challenges exist in the rest of the world, there is no such routine monitoring and reporting function on a global level, and few examples of systematic national-level efforts.

In this paper, we aim to support addressing this critical data gap by demonstrating the feasibility of gathering well-accepted quality metrics in lower-resourced countries, defined for this project as those with low Human Development Index scores (LHDIS).6 The Human Development Index is a summary composite that ranks a country as having very high, high, medium or low achievement based on three essential aspects of human development: health, knowledge and standard of living.6 First, we describe our process of selecting quality metrics. Using the established quality metrics, we then use the only readily available standardised public data sources to provide an example of a four-LHDIS-country comparison. Finally, we make policy recommendations.

Methods

In our retrospective cross-sectional study, our analysis included various steps, including selecting public data sources containing relevant metrics; choosing metrics and categorising them within a quality of care matrix; selecting countries; and examining results across sample countries. Below, we present our methodology and findings. All data analyses were conducted using Stata/SE 16.1 and Microsoft Excel V.16.

Data sources selection and overview

Within this paper, we adopt a consensually developed framework from the WHO’s ‘Quality of Care in Fragile, Conflict-Affected and Vulnerable Settings’ technical document (hereafter referred to as ‘QoC FCVS’), which has become internationally recognised as the gold standard for addressing health service quality in FCV settings.7 The QoC FCVS framework leverages existing knowledge on quality features in FCV settings and addresses the shared structural and healthcare challenges observed in LHDIS countries, making it a practical choice in the absence of a framework tailored specifically to LHDIS countries. The resource provides a checklist of health service indicators categorised under five intervention categories: ensuring access and basic infrastructure, shaping the system environment, improving frontline clinical care, reducing harm to patients and populations, and engaging and empowering patients, families and communities. These categories represent priorities for action in FCV settings and link closely with key quality domains.

To support the selection of a set of indicators that reflects the state of quality in LHDIS countries as comprehensively as possible, we adopted a conceptual framing that cross-references the QoC FCVS categories with the Donabedian model’s structure, process and outcome dimensions (see table 1). The Donabedian model for assessing healthcare quality has been widely used for over five decades and is applicable across the globe. Donabedian postulated that quality of care is achieved through a synchronised relationship between structure, process and outcome.8 The structure dimension includes the resources associated with providing healthcare, such as the availability of infrastructure, medicines, equipment and staff training (sometimes referred to as inputs). The process dimension relates to things done to and for the patient, such as health services and health education provided or referrals made. Finally, the outcome dimension captures the desired result of care.8 Selected indicators within this initiative had to fit within the resulting matrix, which also allows assessment of the balance of included indicators in respect of how they inform different aspects of the state of quality.

Table 1
|
Average indicators available in FCV countries by FCV domain and Donabedian component

While within individual countries multiple methods may be used to gather data relevant to quality of care, our search for data sources focused on finding publicly available standardised surveys with recent data across multiple countries. Such sources are likely to be the most feasible for scaling up the approach and enabling meaningful cross-country comparison. Across LHDIS countries, sources reporting on quality of care data and measures are very limited, and data accuracy, timeliness and completeness vary substantially.9 Among the available data sources, the Demographic Health Survey (DHS) and the Service Provision Assessment (SPA) survey were the only ones providing standardised data on the quality of care across multiple LHDIS countries.10 Combining DHS and SPA data sources allowed us to address all the QoC FCVS categories, with varying degrees of spread across the Donabedian dimensions. Both surveys are semi-routinely collected in many individual LHDIS countries, allowing comparisons over time. The breadth of information collected at the population and facility level provides a snapshot of a country’s quality state.

DHS surveys are collected at the household level, while SPA data are captured at the facility level. Both are nationally representative, including large sample sizes. The DHS typically includes between 5000 and 30 000 households and SPA between 400 and 700 formal-sector health facilities from a country’s comprehensive list of health facilities. DHS surveys provide data for various monitoring and evaluation indicators in population, health and nutrition areas. The DHS provides nationally representative information on fertility, family planning, mother and child health, gender, and specific disease areas. DHS data, therefore, provide critical outcome-level indicators of population health. SPA surveys include four main questionnaires: (1) facility inventory; (2) health staff; (3) observational protocol; (4) exit interviews for antenatal care, family planning and sick child consultations. Individual countries have slight variations in the information collected through SPA surveys, but each country generally collects consistent data within these four questionnaire categories. The Service Availability and Readiness Assessment general indicators are widely employed to provide national data on facility-level health service delivery.11 We can derive the nine indicators—infrastructure, infection, services, guidelines, staffing, health data, lab, pharma and supplies—from variables in the SPA questionnaire. SPA data provide strong evidence of a country’s health structure and process-level capability by reporting on these indicators.

Together, the DHS and SPA are critical, publicly available tools that can provide insight into a country’s state of quality, ranging from a facility’s readiness and services to the impact of quality on health outcomes. By combining DHS’s population outcome measures with SPA’s structure and process measures, these sources are currently the most substantial publicly available sources from which we can extract a wide array of information to depict the quality of care across multiple LHDIS countries.

Metric selection and categorisation

We created an indicator matrix linking the Donabedian structure, process, and outcome dimensions to the QoC FCVS framework domains (table 1). The dimensions ensure the feasibility of capturing a comprehensive range of indicators within the QoC FCVS framework domains. We then examined the data elements and indicators collected in the DHS and SPA to populate the framework.
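As an illustration only, the matrix described above can be sketched as a simple lookup keyed by QoC FCVS category and Donabedian dimension. The category and dimension names below follow the frameworks cited in the text, but the individual indicator placements are hypothetical examples, not the study’s actual mapping.

```python
# Sketch of the 5x3 indicator matrix: QoC FCVS intervention categories
# crossed with Donabedian dimensions. Indicator placements are
# illustrative only.
from collections import defaultdict

DOMAINS = [
    "ensuring access and basic infrastructure",
    "shaping the system environment",
    "improving clinical care",
    "reducing harm to patients and populations",
    "engaging and empowering patients, families and communities",
]
DIMENSIONS = ["structure", "process", "outcome"]

matrix = defaultdict(list)  # (domain, dimension) -> list of indicator names

def add_indicator(name, domain, dimension):
    """Place an indicator in one cell of the matrix."""
    assert domain in DOMAINS and dimension in DIMENSIONS
    matrix[(domain, dimension)].append(name)

# Hypothetical placements for illustration only (not the study's data)
add_indicator("basic amenities readiness (SPA)", DOMAINS[0], "structure")
add_indicator("ANC 4+ visits (DHS)", DOMAINS[2], "process")
add_indicator("under-five mortality (DHS)", DOMAINS[2], "outcome")

# Cells with no suitable indicator reveal gaps in the data sources
empty_cells = [(d, dim) for d in DOMAINS for dim in DIMENSIONS
               if not matrix[(d, dim)]]
print(f"{len(empty_cells)}/15 cells lack a suitable indicator")
```

This structure makes it straightforward both to check coverage (which cells remain empty) and to assess the balance of indicators across structure, process and outcome, as described in the Results.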

Country selection

After identifying data sources and metrics, we reviewed the available DHS and SPA records to select countries with recent data, which we defined as within 10 years, arguably longer than desirable but necessitated by data gaps. The aim was to illustrate how the available data can portray and compare quality across countries. Our country inclusion criteria were (1) ranking low on the Human Development Index (one of 32 of 193 countries) and (2) having both DHS and SPA data available in the last decade to ensure the relevancy of the data.

Results

Feasibility of the approach

Mapping suitable SPA and DHS indicators across the matrix (table 1) revealed relevant indicators across all QoC FCVS categories: 41% of indicators focused on improving clinical care, 26% on ensuring access and basic infrastructure, 13% on engaging and empowering patients, families and communities, 10% on reducing harm to patients and populations and 3% (one indicator) on shaping the system environment. Combining these categories, there is a relatively even spread across structure (9 indicators), process (15 indicators) and outcome (12 indicators). However, improving clinical care was the only QoC FCVS category to capture metrics across all Donabedian dimensions. Areas of particular strength in suitable indicator availability were structural measures for ensuring access and basic infrastructure, and process and outcome measures for improving clinical care. Conversely, 6 of the 15 cells in the matrix had no suitable indicators from SPA or DHS.

We found that of all United Nations member states, 46.6% (90/193) provide DHS data, 35.8% (69/193) have conducted more than one DHS, 6.7% (13/193) provide SPA report data and only 3.1% (6/193) have conducted more than one SPA. In addition, within the last 10 years, 29.5% (57/193) of countries have conducted DHS surveys, and 4.7% (9/193) have conducted SPA surveys (online supplemental file 2). Across the LHDIS countries, in the last 10 years, 71.9% (23/32) reported DHS data, 18.8% (6/32) reported SPA data, and 18.8% (6/32) reported both SPA and DHS data.
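The survey-coverage shares above follow directly from the counts given in parentheses (193 UN member states; 32 LHDIS countries). As a quick arithmetic check, using only the counts reported in the text:

```python
# Recompute the reported survey-coverage percentages from the raw
# counts given in the text.
def share(numerator, denominator):
    """Percentage rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

checks = {
    "any DHS": (90, 193, 46.6),
    "more than one DHS": (69, 193, 35.8),
    "any SPA": (13, 193, 6.7),
    "more than one SPA": (6, 193, 3.1),
    "DHS in last 10 years": (57, 193, 29.5),
    "SPA in last 10 years": (9, 193, 4.7),
    "LHDIS with recent DHS": (23, 32, 71.9),
}
for label, (n, d, expected) in checks.items():
    assert share(n, d) == expected, label
```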

Illustrative snapshot of the state of quality in LHDIS countries

We present results for data from the Democratic Republic of the Congo (DRC),12 13 Haiti,14 15 Afghanistan16 17 and Senegal,18 19 all of which meet our requirements for recent DHS and SPA data and ranking as LHDIS countries (table 2). We also present a country comparison ranking overview (figure 1). The four selected countries provide a snapshot of the quality state in LHDIS countries, spanning three global regions. Countries such as Tanzania and Malawi also met the criteria, though our selection aimed for geographic diversity to demonstrate feasibility in an illustrative set of countries.

Table 2
|
Country Human Development Index ranking and DHS and SPA report availability
Figure 1
Figure 1

Four country state of quality of care comparison.

Table 2 presents a country results table with direct comparisons of available indicators across the selected countries. The data therein provide an illustrative overview of the state of quality within each country that can be interpreted in relation to each particular context and their service delivery challenges, priorities and assets while facilitating cross-country comparison. While this paper does not analyse the specific identified challenges, we present some key highlights below and in online supplemental file 1. In addition, figure 2 provides graphics comparing each country’s SPA data at different levels of indicator availability.

Figure 2
Figure 2

SPA facility-level indicators presented by level of tracer items present. SPA, Service Provision Assessment.

First, across all countries, there are common areas of broadly positive performance, such as diphtheria, tetanus and pertussis (whooping cough) vaccine, third dose dropout, safe injection and knowledge of contraception. Data from all countries also revealed common areas of challenge, such as treatment for new tuberculosis cases and care seeking for children with fever. Of particular note is the variability across settings for many indicators, for example, those related to antenatal and postnatal care and quality-sensitive outcome measures such as neonatal and under-five mortality. These areas of significant variation merit further analysis to understand underlying causes.

Across all indicators, there was an average 50% gap between the lowest- and highest-performing countries in our sample set. The widest gaps between sample countries were among indicators within the ensuring access and basic infrastructure category, at 61% on average, and the narrowest within engaging and empowering patients, families and communities, at 20%.
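To make the gap computation concrete: for each indicator, the gap is the difference between the highest- and lowest-performing country, and averaging these gaps within a category yields the category-level figures quoted above. A minimal sketch with hypothetical values (not the study’s data):

```python
# Hypothetical indicator values for four countries (ISO-3 codes);
# illustrates the gap computation only, not the study's actual data.
indicators = {
    "safe injection practice (%)": {"SEN": 97, "COD": 95, "HTI": 92, "AFG": 90},
    "care seeking for child fever (%)": {"SEN": 55, "COD": 40, "HTI": 38, "AFG": 30},
}

def gap(country_values):
    """Spread between the best- and worst-performing country."""
    return max(country_values.values()) - min(country_values.values())

gaps = {name: gap(values) for name, values in indicators.items()}
average_gap = sum(gaps.values()) / len(gaps)  # gaps of 7 and 25 -> 16.0
```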

On average, Senegal ranked the strongest in quality of care, especially across the improving clinical care domain. While the DRC scored poorly within improving clinical care and engaging and empowering patients, families and communities, it scored strongly within the ensuring access and basic infrastructure domain. Senegal and Haiti scored especially well on the antenatal care content within their health systems. However, Haiti scored poorly across many of the SPA indicators within the ensuring access and basic infrastructure domain, including guidelines, health data, lab and staffing. On average, Afghanistan ranked the lowest across all QoC FCVS categories.

Senegal had the most complete data available, presenting data on 94.9% (37/39) of metrics. Missing data across all four countries existed for two specific metrics, ‘Perinatal mortality rate at 5 years’ and ‘Counselling and Testing for HIV among pregnant women’, both within the improving clinical care domain and derived from DHS data sources. Afghanistan had the highest missing data rate, with up to 45% (10/22) missingness in the improving clinical care domain.

Discussion

Routinely collected measurements and public reporting of metrics are known to catalyse attention and action on the quality of health services,2 which is an urgent priority for many health systems, including those in the LHDIS countries. The absence of collective and concerted efforts to do so for LHDIS countries is, therefore, indefensible. Measuring the quality of health services does not need to be reserved for high-resource settings; it is possible and imperative that we do so for the countries experiencing the most egregious problems in the quality of care. Therefore, our objective was to demonstrate the potential of using standardised and publicly available data sources to measure quality across LHDIS countries. We used the QoC FCVS framework7 and Donabedian dimensions8 to map metrics drawn from DHS and SPA data sources. Using the established metrics, we selected four countries for comparison that met two criteria—DHS and SPA data availability in the last 10 years and ranking low on the Human Development Index.

We have shown that it is possible to use publicly available data to analyse the quality of care in LHDIS countries. Policymakers and other stakeholders can rapidly analyse an LHDIS country’s state of quality with these minimal resources. However, this approach was not without limitations. The first is that, while DHS and SPA survey completion is sparse across countries, these surveys remain the best publicly available and standardised sources. While nearly three-quarters of LHDIS countries have conducted a DHS survey in the last 10 years, only roughly 20% have ever conducted complete SPA surveys, and only those have reported on both DHS and SPA. Additionally, of the countries that conducted one or both surveys, the survey was typically a one-off, with no longitudinal survey data, making it impossible to track internal progress in quality of care performance.

It is also important to note that the majority of the data used in our analysis predates the COVID-19 pandemic. This temporal aspect introduces two central considerations. The first is the notable impact of the pandemic on healthcare quality, which has resulted in declines across the healthcare spectrum; consequently, the ‘state of quality’ in healthcare systems is likely considerably worse than the baseline represented by the pre-pandemic data used in this study. The second is that healthcare quality is dynamic and subject to changes influenced by various factors, including unforeseen emergencies such as infectious disease outbreaks, environmental crises, economic challenges and political disruptions. Recognising these emergent factors as important markers of healthcare system quality is crucial. Our study serves as a snapshot in time and highlights the importance of continuously adapting and evolving quality assessment metrics to capture the evolving realities of healthcare systems. It is also critical to note the vital interface between resilience and quality of care and the need for coordinated action on both, particularly in FCV settings where the risk of public health emergencies may be greater.

Beyond the limitations associated with the age and availability of country data, we found that the available DHS and SPA data elements do not address all quality of care areas across the QoC FCVS framework and Donabedian dimensions, thus failing to provide as holistic a representation of a country’s quality of care as would be desirable. Critical gaps exist, in particular, in understanding the system environment required to support quality of care efforts at the point of care. Further, even where indicators are available to populate the matrix, they are of varying value in truly informing on the quality of care; they are often not the best indicators from a conceptual standpoint, instead representing the best available from the SPA and DHS datasets. In addition, data availability does not provide insights into the quality of processes employed within a country to engage stakeholders in generating and using data for care improvement. Beyond these broad efforts with widely available data sources, there may be value in more targeted efforts that incorporate participatory methods, such as community scorecards used across FCV settings. Such efforts may bridge the information gaps and offer a more comprehensive view of healthcare quality, considering the local context and the involvement of stakeholders in the process.

Despite these limitations, results from the illustrative set of countries demonstrate the strengths of our study and the value of these data sources to evaluate the quality of care and promote global and national dialogue for its improvement. In particular, the data demonstrate widespread, persistent challenges in quality care across all the included countries, reinforcing the need for sustained national and global efforts to advocate for action on quality of care and generate learning on how to achieve this in the most challenging settings. The data missingness for some of the most basic human resources, tracer items and guidelines across all settings further reflects this need. However, it is most evident in the still shocking mortality figures presented, on which lack of access to quality care is likely to have a significant impact.

Nevertheless, our findings also demonstrated areas where practice may be better than anticipated, illustrating that progress in quality of care is possible in LHDIS countries. An example is the data on injections given with a syringe and needle set from a new, unopened package, for which average figures across each country are above 90%. This critical action for patient safety happens at the point of care. However, it is the culmination of a well-functioning system requiring sustained equipment availability alongside appropriate provider knowledge and practice.

In addition to these global findings, the results presented here demonstrate the potential value of such data in driving individual country efforts on quality of care. The side-by-side comparison this exercise produces further enhances this finding, allowing for contextualised performance benchmarking. For example, our findings demonstrated that while Senegal excels in the improving clinical care domain compared with the other included countries, the country fares worse in ensuring access and basic infrastructure. Policymakers in Senegal could use these findings to redirect resources, for example, to rectify the deficiencies in pharmaceutical and lab readiness at the facility level. In addition, while national stakeholders may use these findings to improve the services they provide and critical infrastructure and inputs, they can also use the quality data for advocacy with external donors for relief or support in specific service areas. The variation seen between settings, such as in antenatal care provision, could help spur cross-country learning and collaboration efforts. The quality of care challenges highlighted by Afghanistan’s ranking in the country comparison, for example across prenatal care services, could support its efforts in requesting targeted external financing and support to improve a service area. Furthermore, the data illuminate Haiti’s urgent need to address subpar scores across multiple SPA indicators within the ensuring access and basic infrastructure domain. These shortcomings point to a complex issue that necessitates focused interventions, encompassing potential deficits in standardised healthcare protocols and procedures; deficiencies in data collection, management and utilisation; inadequacies in laboratory capabilities; and staffing shortfalls.
To effectively tackle these multifaceted challenges, a comprehensive approach is imperative, including policy reforms, increased investment in healthcare infrastructure and a bolstered healthcare workforce. Presenting a country’s relative performance in a clear comparative format alongside countries with similar human development scores helps contextualise its quality of care and pinpoint specific areas that need improvement.

The differences in the quality of care among the selected countries, particularly Senegal’s relatively better performance, may be influenced by the fact that Senegal is not classified as an FCV setting, unlike the other three countries. This distinction could contribute to Senegal’s comparatively stronger healthcare infrastructure and potentially better resource access. While Senegal’s status may have influenced the results, the study’s primary goal was to demonstrate the feasibility of using publicly available data to assess healthcare quality in low-resource settings, and the findings highlight the variations and opportunities for improvement among these countries, regardless of their FCV status.

Our assessment demonstrated that, with the limited open-access and standardised national data reporting on relevant quality measures, we could examine quality within and across selected LHDIS countries. For those countries that have done, or are planning to do, DHS and SPA assessments, the approach described here can already be used to provide actionable, comparable data on the quality of care that, while not comprehensive, span multiple quality domains and have many apparent uses to improve the quality of care. However, under the umbrella of the donor-funded DHS programme, these assessments do not provide a long-term sustainable solution for widespread, timely, cross-country comparative data on quality of care in LHDIS countries, given the reliance on donor funding and the lack of a formal regular schedule of surveys. Conducting these surveys is often time-consuming, protracted and complex. Specific settings may prioritise other tools to meet data needs for health planning, such as bespoke national surveys or tools from other organisations.

A particular area of focus for many countries is the use of routine health management information systems, like the District Health Information System (DHIS2)20 to enable the timely collection and aggregation of health service performance indicators. As such systems become more widely and effectively used across LHDIS countries, it is necessary to explore how they can facilitate collation and cross-country comparison of data on the quality of care. Other recent global efforts, such as the Harmonised Health Facility Assessment, which contains a module on quality of care,21 and the WHO-UNICEF Primary Healthcare Measurement Framework and Indicators,22 may also begin to provide sources of relevant data as they are rolled out more systematically.

While this paper has looked only at two readily accessible data sources with available data for multiple settings, individual LHDIS countries can likely leverage other sources to build a similar picture of the quality of care and to keep this profile up to date. These may include routine health management information systems; ad hoc or regular surveys and health facility assessments; disease or technical programme data repositories; donor and technical partner monitoring and evaluation; and other regional and global health data initiatives not mentioned here. Governments and other stakeholders could adapt the approach outlined in this paper to combine such data into pragmatic indicator sets to improve quality of care, as outlined above. However, doing this on a large scale with sufficient rigour and cross-country comparability would require significant resources. Finally, data need not only to be collated but also displayed and disseminated in a way that is helpful to policymakers.

Despite the drawbacks—and acknowledging the imperative to make rapid progress in developing methods to collect meaningful and actionable quality-of-care data—a realistic strategy may be to invite willing and interested countries to make the relevant data from DHIS2 and other high-quality sources accessible for quality-of-care measurement initiatives.

Conclusion

Data are well-known and universally accepted as foundational for systemic and systematic progress in improving quality of care.23 24 Publicly reported and transparent data motivate and catalyse improvement efforts and can result in better patient care.24 These transparent comparisons can influence and motivate individual countries to expedite progress in addressing quality deficits. A global platform, such as OECD HCQI,4 is needed to routinely measure longitudinal improvements within LHDIS countries. We have demonstrated the feasibility of using publicly available data to measure a country’s quality state. Future endeavours presenting and analysing cross-country comparisons using this framework would be invaluable in gaining a deeper understanding of settings’ healthcare quality.

The science of quality improvement in healthcare has evolved considerably with a growing evidence base to select and implement evidence-based interventions to improve healthcare quality. Tools and resources are now available to support national, district and facility interventions across policy and practice. Therefore, data and insights regarding quality of care have become increasingly actionable, yielding significant improvements.

The four-country snapshot provided within this paper reiterates both the pervasive challenge of poor-quality care in LHDIS settings and the value of data in supporting improvements. Efforts must now be scaled up to realise the benefits of systematically presenting comparable data across LHDIS settings.