Managing Data for Performance Improvement


Part 2: Using the Data for Analysis and Interpretation 

How Do You Treat Each Measure?

In quality improvement, you are usually looking to track changes in measures over time; therefore, each measure should be calculated at multiple points in time. The first measurement, called the baseline, helps you identify problems and establishes the starting point against which later results are compared. Successive measurements allow you to evaluate the impact of your quality improvement efforts and make it possible to monitor and sustain incremental improvements. How often you calculate each measure depends on what you are measuring and what resources are available to you for performance improvement monitoring. Generally, you will want to re-measure your indicators at least annually; however, some measures may warrant more frequent monitoring, depending on your performance and the nature of the indicator. A helpful tool for tracking repeated measurements is the Improvement Tracker provided by the Institute for Healthcare Improvement, which allows you to set an aim and enter your data in order to track your quality measures.
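As a rough illustration of calculating the same measure at repeated intervals, the Python sketch below computes a monthly rate from a set of visit records. The record layout and field names are hypothetical; your own data source and measure definition will differ.

```python
from collections import defaultdict

# Hypothetical visit records: (month, patient_id, measure_met).
# In practice these would come from your EHR or chart abstraction.
visits = [
    ("2024-01", "A001", True),
    ("2024-01", "A002", False),
    ("2024-02", "A003", True),
    ("2024-02", "A004", True),
]

def measure_by_month(records):
    """Return the percentage of visits meeting the measure, per month."""
    numerator = defaultdict(int)
    denominator = defaultdict(int)
    for month, _patient, measure_met in records:
        denominator[month] += 1
        if measure_met:
            numerator[month] += 1
    return {m: 100.0 * numerator[m] / denominator[m] for m in sorted(denominator)}

rates = measure_by_month(visits)
baseline_month = min(rates)  # the first measurement serves as the baseline
print("Baseline:", baseline_month, f"{rates[baseline_month]:.1f}%")
for month, rate in rates.items():
    print(month, f"{rate:.1f}%")
```

Repeating this calculation at each measurement interval produces the time series you would then plot and compare against the baseline.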

How Can You Present Data in a Meaningful Way? (Charts)

Data analysis and interpretation is the process of assigning meaning to the information you have collected and determining the significance and implications of the findings. Charts provide a visual display of your data and help convey ideas that might not be readily apparent if the data were displayed in a table or as text. Several kinds of charts are useful for presenting performance improvement data:

  • Run Charts: Run charts are line graphs that allow you to track improvements by displaying data in a time sequence. Time is generally displayed on the horizontal (x) axis, and the measure you are tracking is displayed on the vertical (y) axis. A run chart lets you see whether improvement is really taking place by displaying a pattern of data that you can observe as you make changes to your process. For example, a run chart can plot the percentage of HIV-positive women who receive a Pap test during a primary care visit on the vertical axis and the month on the horizontal axis, in order to detect whether the number is increasing, decreasing, or remaining the same (a brief plotting sketch follows this list).
  • Control Charts: Like a run chart, a control chart is used to study how a process changes over time. However, a control chart also includes three reference lines determined by historical data: a central line representing the average, an upper line representing the upper control limit, and a lower line representing the lower control limit. By comparing current data to these reference lines, you can assess whether the process variation is in control (consistent) or out of control (unpredictable). For example, a control chart can plot the average length of time patients spend waiting before being seen at an ambulatory clinic. Plotting these data on a control chart allows you to determine whether patient wait time is significantly affected by sources of variation, such as day of the week.
  • Dashboard Reports: A dashboard report, which can be created using Excel, allows you to present at-a-glance information on your measures.  A dashboard report is meant to present important data in summary form in order to make it easier to identify key trends.  It provides a quick overview of the current state of your data, without detailed information on causes or solutions.  Examples of data that can be displayed in dashboard reports include financial indicators such as days cash on hand, patient satisfaction indicators such as average length of wait time, clinical indicators such as number of patients undergoing blood pressure exams, and provider performance indicators such as compliance with various clinical standards.
  • Other Methods of Data Display: Depending on the nature of your measures, there are several other kinds of charts that can be used to display your data.  These include pie charts, bar charts, and box plots.  Rather than tracking data over time, these charts are generally used to visually represent snapshots of data.  For example, a pie chart might be used to display the racial composition of a clinic’s patient population.
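To make the first two chart types concrete, here is a minimal Python sketch that draws a run chart and a simple control chart from made-up monthly Pap test rates. It assumes matplotlib is available, and it sets the control limits at the average plus or minus three standard deviations of the plotted values, a common convention; a formal control chart for proportion data would use chart-specific limit formulas.

```python
import statistics
import matplotlib.pyplot as plt

# Hypothetical monthly Pap test rates (%) for illustration only.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug"]
rates = [62, 65, 61, 68, 70, 67, 72, 74]

# Run chart: the measure plotted in time order.
plt.figure()
plt.plot(months, rates, marker="o")
plt.xlabel("Month")
plt.ylabel("Pap test rate (%)")
plt.title("Run chart: Pap test rate by month")

# Control chart: add a center line and control limits derived from the data.
mean = statistics.mean(rates)
sd = statistics.stdev(rates)
ucl = mean + 3 * sd  # upper control limit
lcl = mean - 3 * sd  # lower control limit

plt.figure()
plt.plot(months, rates, marker="o")
plt.axhline(mean, linestyle="--", label=f"Center line ({mean:.1f})")
plt.axhline(ucl, color="red", linestyle=":", label=f"UCL ({ucl:.1f})")
plt.axhline(lcl, color="red", linestyle=":", label=f"LCL ({lcl:.1f})")
plt.xlabel("Month")
plt.ylabel("Pap test rate (%)")
plt.title("Control chart: Pap test rate by month")
plt.legend()
plt.show()
```

Points falling outside the control limits, or long runs on one side of the center line, would suggest the variation is not simply random and is worth investigating.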
Acting on the Results

How do we assess and measure our progress?

  • Now that you have collected and analyzed your data, you can compare your measures to your quality improvement goals. If the results meet or exceed your goals, it is still important to continue measuring at regular intervals to continuously monitor quality. If the measures show room for improvement, you may find it necessary to launch a quality improvement project.
  • To implement quality improvement projects, it is necessary to examine the underlying causes behind the data you have collected. What possible factors may be contributing to the results that your data collection produced?  Here it is important to consider both patient and provider factors.
  • The data you have collected may also be useful in assessing physician performance. Collecting data and reporting on standardized clinician performance measures can help your organization improve efficiency and ensure that patients are receiving a consistently high quality of care.
  • Clinician performance measurement and reporting is a strategy used to evaluate physician adherence to evidence-based care guidelines and serves as a basis for physician incentives and rewards programs.  Physician performance data is a good way to initiate internal quality improvement.
  • In many cases, clinicians resist efforts to measure their performance. Very often this stems from a lack of trust that performance measurement will be based on reliable and valid data and that it will ultimately serve quality improvement purposes rather than punitive purposes. It is important to demonstrate the validity of the data used in performance measurement to clinicians and to work collaboratively with them to develop performance monitoring strategies.
  • It is important to recognize that stakeholder opinions on physician measurement and reporting vary. Health plans and purchasers tend to support physician measurement as a cost-control mechanism, but some providers remain skeptical of the practice due to concerns over accuracy of the data.  Generally, clinicians tend to be more comfortable with using physician measurement for internal quality improvement and incentives than for public data reporting.  When implementing physician measurement for quality improvement, it is important to demonstrate that physician concerns are understood and to offer a response, as well as to provide an opportunity for physician engagement.   

How do we address problems?

An array of potential problems can arise during data collection and analysis. Some of these problems are technical issues resulting from underlying problems with the data collection method or the data itself, and they can be corrected by adjusting your data collection plan. Questions to consider include:

  • Is more data needed for complete analysis? Is the data you are collecting sufficient to construct quality measures and answer your evaluation questions?  Do you need to collect data on other aspects of care?
  • Are there underlying problems with the data? Is the data you are collecting consistent and thorough? Is the standardized data collection plan being followed correctly? (A simple data-quality check sketch follows this list.)
Once you have identified the source of the problem, you can make the necessary adjustments to the data management process and resume data collection.
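As one way to spot such underlying problems before analysis, the sketch below runs basic completeness and plausibility checks over hypothetical chart-abstraction records. The field names and plausibility range are illustrative assumptions, not part of any standard.

```python
# Hypothetical chart-abstraction rows; field names are illustrative only.
records = [
    {"patient_id": "A001", "visit_month": "2024-01", "bp_systolic": 128},
    {"patient_id": "A002", "visit_month": "2024-01", "bp_systolic": None},
    {"patient_id": "A003", "visit_month": None,      "bp_systolic": 310},
]

REQUIRED_FIELDS = ["patient_id", "visit_month", "bp_systolic"]
PLAUSIBLE_SYSTOLIC = range(60, 260)  # crude plausibility window; adjust to your protocol

def audit(rows):
    """Flag rows with missing required fields or implausible values."""
    problems = []
    for i, row in enumerate(rows):
        missing = [f for f in REQUIRED_FIELDS if row.get(f) in (None, "")]
        if missing:
            problems.append((i, f"missing fields: {missing}"))
        value = row.get("bp_systolic")
        if value is not None and value not in PLAUSIBLE_SYSTOLIC:
            problems.append((i, f"implausible systolic value: {value}"))
    return problems

for row_index, issue in audit(records):
    print(f"row {row_index}: {issue}")
```

Running a check like this at each collection interval helps confirm that the standardized data collection plan is being followed before you calculate your measures.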
Plan-Do-Study-Act Cycle for Continuous Improvement

Quality improvement is an ongoing process. The Plan-Do-Study-Act (PDSA) cycle is a model for sustaining continuous performance improvement.  In order to continue improvement activities, the PDSA model proposes these four steps:

  1. Plan. Plan a change. Formulate specific aim statements, develop a detailed data collection plan, and establish project timelines.
  2. Do. Test the change. Carry out your data collection plan, document any unexpected problems or challenges, and track progress against timeline benchmarks.
  3. Study. Review the tests. Analyze collected data, compare results to project aims, summarize and present data.
  4. Act. Take action based on what you have learned.  If the change did not work, go through the cycle again with a different change. If the change was successful, use what you learned to begin planning new improvements.    
