Background

5.1 "However beautiful the strategy, you should occasionally look at the results." – Winston Churchill

Introduction

5.2 The Department of Health (DOH) administers many critical health-related programs that are delivered to New Brunswick residents. The 2006/2007 expenditure budget for DOH was approximately $1.9 billion, or roughly 30% of the total provincial budget of $6.5 billion. This means that three of every ten dollars spent by the Province go either directly or indirectly into health programs.

5.3 With the 2005 transfer of direct delivery of provincial public health and mental health programs from DOH to the Regional Health Authorities (RHAs), most of the programs administered by DOH are now delivered by the RHAs. DOH funds, coordinates, and monitors these programs pursuant to the Health Act and other provincial legislation.

5.4 DOH is also responsible for making critical decisions about the programs under its administration. For example:
• Should a new program be created in response to an identified need?
• Is an existing program still relevant to its target clients and the priorities of government and the department, or should it be discontinued or have its focus changed?
• Should a pilot program be extended, expanded or discontinued?
• What level of resources should be committed to a particular program in the coming year?
• Should changes be made to the way a program is being delivered to make it more relevant, cost-effective and/or successful in achieving its objectives?

Value of evaluative information

5.5 In order to make appropriate decisions, DOH needs good information. Some of this information may be available in the form of operating information from management information systems and anecdotal evidence from those delivering the programs. However, relying on this information alone does not provide a sufficient knowledge base for sound decision-making. Access to objective evaluative information about program relevance, cost-effectiveness and success in achieving objectives is critical. The primary function of program evaluation is to provide such information.

5.6 RHAs are responsible for the direct delivery of most health programs. As such, they also need good information upon which to base the decisions needed to ensure relevant, cost-effective, and successful programs and to discharge their accountability obligation to DOH relating to these programs.

5.7 The information provided in program evaluation reports can aid decision-makers in:
• understanding, verifying, and increasing the impact of services on clients;
• improving delivery mechanisms to make them more efficient and therefore less costly;
• verifying whether the program is really running as envisaged;
• thinking about how they will recognize that a particular program is successful;
• measuring program results;
• determining the extent to which observed outcomes are a result of program activities;
• identifying deficiencies in a program that may reduce the program's relevance, cost-effectiveness, and/or success in achieving its objectives;
• comparing different programs when deciding where to increase or reduce funding allocations where there are changes in overall departmental funding; and
• identifying best practices and lessons learned that can be applied to other departmental programs.

5.8 Conversely, there are significant risks associated with a failure to consider evaluative information in making program decisions.
These include:
• the risk that a program that is no longer needed continues to be funded and delivered;
• the risk that a program is poorly designed, and therefore completion of prescribed activities has a low probability of achieving desired objectives;
• the risk that a program is not adequately funded to achieve stated objectives;
• the risk that activities are not carried out efficiently, or that alternative activities exist that would result in more efficient achievement of planned objectives; and
• the risk that observed outcomes would have occurred with or without the program being in place.

5.9 Information provided by program evaluation can also be used by senior management, legislators, and the public in holding program administrators and managers to account for the achievement of positive, equitable results with the resources provided to them.

5.10 A definition of program evaluation and some related terms are provided in Appendix I of this chapter.

Our review

5.11 In 2002, our Office conducted a series of scoping interviews with DOH management and health stakeholders to identify potential areas for examination by our Office. At that time a number of interviewees, and in particular many stakeholders, indicated that there were serious deficiencies in the evaluation of DOH programs and in program-related decision-making by DOH officials. They indicated that DOH should be employing evidence-based decision-making in establishing and operating health programs.

5.12 Some of the more specific comments we received at that time included the following:
• DOH does not use available data in a systematic way to identify problem areas.
• DOH uses a hit-or-miss approach in designing departmental programs.
• There are no information systems in place for many provincial programs and no performance indicators for these same programs. In such an environment, how do DOH and/or RHAs know if programs are being delivered well or badly? Before DOH states that it needs more money for health and wellness programs, perhaps it should know how effective and efficient existing programs are. More emphasis on program evaluation and accountability is needed.
• DOH program evaluations that are completed typically rely on subjective information. Most information is gathered from interviews and questionnaires. The public can be "satisfied" with a program that at the same time may not improve their health status or be cost-effective in comparison with other options. DOH program evaluations should be asking two key questions: 1) Has care been provided cost-effectively? 2) Has the treatment received improved your health?
• The program evaluation process for a program/pilot should be established as part of program/pilot design. This would allow for the ongoing evaluation of the efficiency and effectiveness of permanent programs. It would also allow for more accurate evaluations of the success of a pilot program.
• DOH officials make program decisions at central office without proper consultation, and changes made often have little effect on service delivery in communities. There appears to be an attempt to make complex decisions simple, and it doesn't work.

5.13 In 2004, our Office began an audit of the program evaluation function at DOH, which was known as the Department of Health and Wellness at that time. However, shortly thereafter, as a result of a 2004 budgetary decision, the DOH Evaluation Unit was disbanded and its duties were reassigned to the DOH Internal Audit group.
DOH requested that we postpone our audit in the wake of that decision. We noted at the time that, as a result of the change, DOH's capacity in the area of program evaluation appeared to have been significantly reduced.

5.14 In lieu of our planned work at DOH, our Office completed a survey of program evaluation practices in all government departments. Our findings and observations from that survey were reported in Chapter 6 of the 2004 Report (Volume 2). In paragraph 6.15 of that chapter, we indicated that we intended to do additional work in the area of program evaluation, and we identified DOH as our planned target for the next phase of that work.

5.15 As we noted in our 2004 Report, our Office has the mandate to assess whether appropriate effectiveness reporting systems are in place for departmental programs. Pursuant to that mandate, our work on this review focused on the involvement of DOH in ensuring that adequate evaluative information is available for program decision-makers. We made no attempt to evaluate any of the programs under review.

Scope

5.16 Our objectives for this project were:
To determine whether adequate systems and practices have been established to regularly evaluate programs funded by the Department of Health.

5.17 And, if adequate evaluation systems are found not to exist:
To recommend a practical model that can be applied in the regular evaluation of programs for which the department has been given responsibility.

5.18 In completing this work, we sent program evaluation surveys to DOH and three RHAs for each of seven selected programs administered by the Department of Health. We received a total of 27 completed surveys in response.

5.19 We also sent an evaluation survey to the Department of Health for an eighth program, the Provincial Epidemiology Service, which is both administered and delivered by the Department, but received no response.

5.20 Responses to our survey were summarized, and key findings, along with our observations and analyses, are presented in this chapter. It should also be noted that we did not attempt to audit or otherwise verify the responses received.

5.21 We also reviewed a number of program evaluation reports prepared for departmental programs that have been evaluated in recent years.

5.22 In performing our work, we referenced findings and research completed pursuant to our 2004 survey of departmental program evaluation practices. We also completed some additional research.

Report recommendations

Introduction

5.23 Program evaluation is not a panacea. However, regular evaluations of programs can provide decision-makers with credible evidence on program relevance, cost-effectiveness, and success in achieving established objectives. This is information to which decision-makers may not otherwise have access, and access to it will increase the probability that optimal program-related decisions will be made.

5.24 We make the following recommendations relating to our review of program evaluation systems and practices at DOH. Findings and observations supporting these recommendations are discussed in the detailed observations and findings section of this chapter that follows.

5.25 All the recommendations are directed towards the Department of Health. However, it is our hope that Regional Health Authorities may also be able to make use of the information presented in this chapter in order to improve the health programs they deliver.
Evaluation guidelines, planning, and resourcing

5.26 We recommend DOH set appropriate formal program evaluation guidelines that specify standard departmental approaches to program evaluation for reference by the evaluators of departmentally-administered programs.

5.27 We recommend DOH ensure that appropriate formal documented evaluation plans have been developed for all programs under its administration.

5.28 We recommend DOH ensure that appropriate provincial performance expectations are set for each program it administers and that those performance expectations (i.e. objectives, performance indicators and targets) are communicated to the RHAs.

Evaluation coordination and monitoring

5.29 We recommend DOH act as the provincial coordinator for evaluative work on departmentally-administered programs.

5.30 We recommend DOH monitor evaluative work to ensure that evaluation plans are being carried out as intended.

Accountability and reporting

5.31 We recommend DOH ensure that it receives regular reports from RHAs for each program it administers covering the continued relevance, cost-effectiveness, and success of that program in achieving provincial performance expectations. Further, DOH should ensure that pertinent comparative information is shared among all RHAs.

5.32 We recommend that all program evaluation reports prepared for DOH-administered programs be widely distributed among program managers in the department and in the RHAs.

5.33 We recommend DOH improve program reporting in its annual report by providing information on the continued relevance and success of each program it administers. It should also consider including program cost-effectiveness information in departmental annual reports.

Detailed observations and findings

5.34 Observations and findings included in this chapter are primarily derived from our review and analysis of the information provided by DOH and RHA representatives in response to our program evaluation survey. As such, these observations and findings should not be construed as representing or addressing the situation for any of the individual programs or services surveyed. Rather, they are intended to present an overall picture of the state of program evaluation for programs administered by DOH.

5.35 In cases where specific programs are mentioned, we have done so either:
• to identify best practices that we believe should be considered in order to improve evaluative practices for programs that are not already employing those practices; or
• to make pertinent observations gleaned from the departmental program evaluation reports we reviewed.

Areas selected for review

5.36 We chose seven areas we felt were most important in looking at program evaluation in the Department of Health. Those areas are detailed in column one of the table that follows. Column one also shows the review criterion, or statement of principle, for each of these areas. The review criteria were developed by our Office and reviewed with senior representatives of the Department of Health.

5.37 The criteria shown in the following table established the framework for our review, and it is against these criteria that we analyzed program evaluation at the Department of Health. Our findings for each criterion are summarized in the second column of the table and discussed in more detail in the sections that follow.

5.38 The following programs and services were selected for survey, and for each of them we received completed surveys from the Department of Health and Regional Health Authorities.
• Public Health - Early Childhood Initiatives (ECI) - 3.5 Year Old Health Clinic
• Hospital Services - Diagnostic Imaging Services
• Hospital Services - Laboratory Services
• Physician Recruitment and Retention Program
• Mental Health – Child and Adolescent Services Program
• New Brunswick Extra-Mural Program
• Public Health - Healthy Learners in School Program

Conclusion

5.39 Based upon our review, we have concluded that adequate systems and practices have not been established to regularly evaluate programs funded by the Department of Health.

5.40 Due to the time required to work through the survey process for this review and the significant improvements needed in the program evaluation process at the Department of Health, we have stopped short of recommending a practical program evaluation model for adoption by DOH at this time. However, implementation of the recommendations provided above would establish a workable framework within which DOH could develop an effective evaluation system.

Program evaluation planning

Department of Health involvement in evaluation planning

5.41 When asked to provide their overall rating of the state of program evaluation for their program, respondents' average ratings for the seven surveyed programs were:
• "Excellent" for one program;
• "At an acceptable level" for four programs; and
• "Needs improvement" for two programs.

5.42 Survey responses indicated that DOH has no formal documented evaluation plans in place for any of the seven programs we surveyed. There are, however, informal evaluative and performance monitoring processes in place for several of the programs, particularly around operating issues, with one region accumulating information in a balanced scorecard. One survey respondent made the following comment: "Ongoing routine evaluation of service and wait times occur however we do not have a formal evaluation plan."

5.43 We also noted that there have been at least two formal program evaluations in the past few years relating to surveyed programs: one covering Diagnostic Imaging, and a second covering the entire Early Childhood Initiatives program, of which the 3.5 Year Old Health Clinic is a part, that was just completed in mid-2006. There was also an evaluation of the Healthy Learners in High School pilot program. These evaluations were performed in response to requests from DOH decision-makers for information.

5.44 While recognizing that formal evaluation plans do not exist for their programs, most survey respondents agreed that a number of players should be responsible for developing and maintaining an evaluation plan for their program. Those most commonly mentioned included:
• RHA program management;
• RHA administrative staff;
• DOH program management; and
• DOH staff monitoring the program.

5.45 We believe that, while RHAs should be involved in developing and/or executing an evaluation plan for each health program they deliver, DOH needs to take a leadership role to ensure that appropriate evaluation standards are met. It is DOH that is responsible under the Health Act for administering health programs in the Province, and therefore it is also DOH that is accountable for the performance of those programs. Consequently, DOH has a vested interest, as the program decision-maker, in ensuring that it gets the best evaluative information possible.

5.46 Unfortunately, the elimination of the Evaluation Unit of the DOH Planning Branch in 2004 greatly reduced the capacity of DOH to involve itself in program evaluation in any meaningful way.
A departmental representative indicated to us at the time that it was the expectation of the department that the RHAs would take over the evaluation function and DOH would simply monitor program performance. Further, DOH would not assume any sort of coordinating role related to any regional evaluations completed. We are not aware of additional resources being assigned to RHAs in recognition of this new role.

5.47 There have been few DOH-led evaluations since the Evaluation Unit was disbanded.

5.48 RHA responses to our survey indicate that they believe the role of DOH should be, as a minimum, to set standards for and coordinate program evaluations. One respondent went much further, stating, "The programs are developed by the provincial government and should be evaluated by the provincial government. The design of any program should include evaluation tools from the start."

Survey respondents' suggestions for improvement in program evaluation

5.49 Respondents to our survey provided a number of suggestions to improve the evaluation of their programs. Note that while some of these suggestions may already be in place for certain of the programs we surveyed, the list provides some improvements that could be made to programs not currently employing these practices. Suggestions we received included:
• better provincial program standards;
• provision for inter-jurisdictional comparisons;
• better collaboration/sharing of information between regions (e.g. comparatives) and provincially standardized regional benchmarks and indicators;
• better evaluation methodology that allows improved assessment of cost-effectiveness and outcome success;
• establishment of an evaluation methodology to be applied to the program as part of the program design process;
• better provincial benchmarks and indicators (especially related to expected program outcomes);
• regular evaluations performed by those responsible for establishing the program (i.e. the Province);
• better data collection systems; and
• development of a provincial evaluation guide.

5.50 In particular, one respondent provided the following comment about how evaluative practices could be improved:
That evaluation be ongoing, that all stakeholders (participatory) be included as possible, that new knowledge/research/best practices/evidence be integrated in a timely manner, that formal evaluations be conducted every 3 years, that various formats for evaluation be used, that evaluation findings be communicated in a transparent and timely manner to stakeholders, that ongoing evaluation be resourced as an integral part of program implementation and not seen as an "add on", and that evaluation be valued as an essential accountability mechanism.

5.51 Several of these suggestions would be implemented by preparing evaluation plans for DOH-administered programs. We also agree that DOH should prepare a provincial evaluation guide for reference by DOH, RHA, and private sector evaluators. Suggestions related to the availability of data and departmental standard-setting for individual programs are discussed later in this chapter.

Limitations currently preventing the improvement of the program evaluation function

5.52 Respondents went on to identify four primary limitations on the ability of DOH and the RHAs to improve program evaluation practices.
These include:
• a lack of financial resources/time for evaluations;
• a departmental emphasis on direct service delivery over administrative activities;
• a lack of appropriate data capture systems, leading to insufficient program data being captured; and
• a lack of qualified evaluation staff.

5.53 All four limitations are closely related. In recent years, given the limitations on available resources, budget allocations within DOH have focused heavily on direct service delivery and proportionally reduced funding for administrative activities (i.e. the second through fourth points above).

5.54 One respondent commented, "If there was more resource allocation for [our program] more effort could be placed on program development and evaluation; presently, service delivery and operational issues dominate the resources."

5.55 Unfortunately, this means that assigned resources are not sufficient to properly evaluate DOH-administered programs. As a result, decision-makers do not have access to sufficient evaluative information as to whether programs are relevant, cost-effective, or successful in achieving the objectives they were set up to accomplish.

5.56 We believe that, given current resource levels, DOH and the RHAs do not have the capacity to develop evaluation plans or otherwise improve the current state of evaluation for the programs they administer and deliver. We drew this same conclusion about provincial departments in general in our 2004 Report.

Program objectives and targets

Consistency of program objectives

5.57 In order to evaluate the success of a program, it is critical that clear program objectives be established up front. Survey responses indicated that program objectives had been set for all programs. However, in a few cases, the DOH version of program objectives varied significantly from the RHA versions.

5.58 Uncertainty around the objectives of health programs was also noted in one of the program evaluation reports we reviewed, the one prepared for the New Brunswick Critical Care Nursing Program in 2004. In it, the evaluator indicated that there was uncertainty among interviewees as to whom the program was targeted towards: experienced nurses or new nurses. It further noted that this had caused conflict within and between RHAs. The evaluator also indicated that there was uncertainty as to the roles and responsibilities of staff responsible for delivering the program.

5.59 DOH needs to take a leadership role in ensuring that there is a common understanding between itself and the RHAs of the objectives of each of the programs it administers. This is especially important given the recent devolution of the delivery of all public health and mental health programs to the RHAs, as this means that DOH no longer delivers most of the programs for which it is accountable.

Lack of performance indicators and targets

5.60 While provincial performance indicators and targets have been set for some of the programs we surveyed, there are a number of programs for which this has not been done. We also noted that in some cases the outcomes that are specified are not clear and measurable.

5.61 Again, we feel this is an area where DOH needs to take a leadership role. DOH needs to ensure that appropriate provincial performance expectations are set for each program it administers and that those performance expectations (i.e. program objectives, performance indicators and targets) are communicated to the RHAs.
Further, DOH needs to ensure that RHAs report actual performance against those indicators and targets, to allow DOH to evaluate program success and follow up to ensure that problems are addressed.

Best practices

5.62 We noted two best practices in the areas of program objectives and targets from the survey responses we received.
• The Extra Mural Program is provincially managed, although day-to-day delivery is handled by RHAs. RHAs look to the program goals and objectives prepared provincially for guidance. DOH has also established performance targets and expected outcomes for this program. The establishment of common goals and performance expectations for all regions facilitates comparisons between regions, the development of data systems, and the sharing of information and best practices between regions. Among other things, this means the administrative costs associated with the program are lower than those for programs that are managed separately in each RHA, because it does not require each RHA to establish its own program goals and performance expectations. Reporting against performance expectations also aids decision-makers in evaluating the success of the program.
• The Healthy Learners in School Program is administered and delivered cooperatively by the Department of Health and the Department of Education. The program guidelines for that program clearly state program goals and targets. The guidelines also include a full logic model for the program, which describes the connection between resources allocated to the program, activities undertaken, and the planned outcomes of the program. All survey respondents were familiar with these guidelines and referenced them in their responses. We feel that the guidelines for this program provide a good working reference to all organizations involved in delivery and/or administration of the program and, as such, could serve as a best practice model for other departmental programs.

5.63 Responses also indicated that comprehensive service standards or guidelines have been developed by the department for most programs we surveyed.

Evaluation of ongoing program relevance

Evaluation of relevance for health programs

5.64 Most of the programs we surveyed undergo some informal relevance evaluation periodically, through a combination of discussion at meetings between provincial and regional representatives, the monitoring of results, the review of statistical data, the review of published research, or other means. In addition, program relevance was one of the issues looked at as part of the recent Early Childhood Initiatives evaluation, which included reviewing the 3.5 Year Old Health Clinic. We also noted a best practice in the evaluation of relevance in the Extra Mural Program:
The relevance of the program is evaluated by the Provincial Director of EMP and Regional EMP Directors through the use of statistical data collected via PtCT and HFUMS, feedback from frontline professionals, client survey information, and discussion and ongoing collaboration with RHA (intramural services) and community partners.

5.65 In the evaluation report for the Healthy Learners in High School pilot project prepared in 2002, the evaluator indicated a number of actions that had been taken by regional administrators that facilitated the successful implementation of the pilot.
They included the following actions relating to the relevance of the pilot project:
• needs assessment work was completed prior to implementation; and
• many districts attempted to obtain the support of external partner groups and school/community buy-in to the project early in the process.

5.66 However, in the same evaluation report, the evaluator indicated that there were problems with the program model and its application. There was discontent about a perceived lack of flexibility to adapt the program to the specific needs of individual schools, and about the introduction of more bureaucracy around the pilot. The evaluator went on to indicate that DOH should consider allowing hybrid approaches that would better meet the needs of individual schools while maintaining program objectives.

5.67 The evaluation report for the Critical Care Nursing Program also recommended that the program be made more flexible in order to accommodate program clients, thereby improving its relevance. Furthermore, the overall conclusion in the report spoke directly to program relevance, indicating that the program needed to continue indefinitely because it was addressing a real need for more critical care nurses.

5.68 The evaluation of relevance takes a slightly different form for hospital services such as laboratory services and diagnostic imaging that are required to be maintained under the Hospital Services Act. Survey respondents indicated that these services are perpetually relevant, given that both are required by current medical science. For those services, one respondent commented, "… what is regularly evaluated may be the relevance of one particular exam or procedure, the availability and relevance of newly developed modality, exam or procedure, or the adoption of new technology to improve efficiency and productivity."

Department of Health involvement in the evaluation of program relevance

5.69 Most survey respondents agreed that a number of players should be responsible for evaluating relevance for their program or service. Those most commonly mentioned included:
• RHA administrative staff;
• DOH monitoring staff;
• RHA program management;
• DOH program management;
• DOH senior management; and
• DOH planning branch.

5.70 One respondent stated, "The program should be evaluated by those who fund and are accountable for the Program as well as the users and partners." Another noted, "The RHAs are given a budget to provide the community with the most efficient and effective service within their resources. They have surveyed community needs and have the expertise required for evidence based decision making which will meet those needs."

5.71 We agree that input is needed from those administering the day-to-day operations of the program, those delivering the program, and even the clients of the program in evaluating continued program relevance. However, based on arguments advanced earlier in this chapter around accountability, we feel that ultimate responsibility for ensuring that program relevance is evaluated on a regular basis should rest with DOH.

5.72 At least one survey respondent supported this opinion, stating: "From my point of view, there is no one in the regions in charge of evaluating the relevance and the effectiveness of the programs.
It's done at the provincial level through the Health Minister's Planning and Evaluation."

Survey respondents' suggestions for improvement in the evaluation of program relevance

5.73 Respondents to our survey provided a number of suggestions specifically intended to improve the evaluation of relevance for their program. Again, please note that while some of these suggestions may already be in place for certain of the programs we surveyed, the list provides some improvements that could be made to programs not currently employing these practices. Suggestions we received included:
• more information relating to the program's target population, including follow-up information on current and past clients;
• improved electronic data collection at point of care;
• a report that indicates the impact of the service on patient outcomes (whether positive or negative);
• national and/or provincial standards for the program or service;
• improved communication between regions to improve consistency of delivery;
• increased coordination of information from various health technology assessment services; and
• a workload measurement tool.

5.74 One respondent noted that "a formal evaluation, improved electronic data collection system, and increased opportunities for networking" would be useful in evaluating the relevance of the program.

5.75 These suggestions seem to highlight the need for:
• better data collection systems;
• better dissemination of collected information;
• better provincial standards; and
• improved communication between regions.

Evaluation of program cost-effectiveness

Evaluation of cost-effectiveness for health programs

5.76 Respondents for approximately half of the surveyed programs indicated that evaluations of cost-effectiveness are done. The formal program evaluation of Laboratory Services covered some aspects of cost-effectiveness. We also noted three responses, among those we received, that identify best practices in the evaluation of program cost-effectiveness.
• Related to the Mental Health – Child and Adolescent Services Program, a respondent stated, "The Department supports evidence-based best practices for their clients and examines alternate service delivery methods within that context, with the realization that increased funding may not be possible. … Non-productive activities are identified on an ongoing basis at the Program Manager and Regional Director levels, as well as the level of Provincial Director of Child and Adolescent Services. Cost per CMHC client is analyzed on a year-over-year basis at the Provincial level, as well as intra-regionally."
• Related to Hospital Services – Diagnostic Imaging, a respondent indicated, "… we're constantly evaluating cost through the budgeting process to establish the best test first approach and also look at productivity and cost per procedure."
• Related to Hospital Services – Laboratory Services, a respondent said, "[We] Constantly evaluate less expensive testing that will give the same or similar clinical information. Done with guidance from laboratory physicians and physicians. Also annual budgeting process and comparisons with provincial and federal benchmarking for cost per test, cost per unit (e.g. CIHI report)." Another respondent stated, "[Evaluation of cost-effectiveness] is part of the role and responsibilities of Senior laboratory management staff. Including Program Director and Medical Director.
Prior to implementation of a new test-methodology and/or service, a complete analysis is done with regards to cost-effectiveness and clinical priorities. The analysis is based on clinical relevance, TAT, cost, volumes, customer service, etc. Data used included MIS, workload stats, literature reviews, etc. The VP Health Information is ultimately responsible."

Department of Health involvement in the evaluation of program cost-effectiveness

5.77 Most survey respondents agreed that a number of players should be responsible for evaluating cost-effectiveness for their program or service. Those most commonly mentioned included:
• RHA administrative staff;
• RHA program management;
• DOH staff monitoring the program; and
• DOH program management.

5.78 One respondent further stated, "Given that [the RHA] is granted a global budget it is the responsibility of [the RHA] to ensure the effective utilization of same."

5.79 We agree that most of the evaluative work in this area has to be done by those delivering the program, that is to say the RHAs. Given their need to work within budgets while still delivering effective programs, cost-effectiveness information is valuable to RHAs. But where this information indicates that major changes are needed in program delivery, it can also be a source of valuable information for DOH decision-makers.

5.80 Regardless of who does the evaluative work, however, we believe that DOH should take a leadership role in ensuring that cost-effectiveness evaluations are done periodically for all programs under its administration.

Survey respondents' suggestions for improvement in the evaluation of cost-effectiveness

5.81 Respondents to our survey provided a number of suggestions specifically intended to improve the evaluation of cost-effectiveness for their program. Again, please note that while some of these suggestions may already be in place for certain of the programs we surveyed, the list provides some improvements that could be made to programs not currently employing these practices. Suggestions we received included:
• more data on outcomes;
• information on funds saved and/or other impacts from program interventions;
• an improved workload measurement system;
• better data collection at point of care;
• provincial benchmarking;
• data on the appropriateness of exams requested;
• better sharing of cost information between DOH and RHAs;
• comparative cost data from other RHAs; and
• comparative cost data from other provinces.

5.82 Once again, many of the survey respondents' suggestions for improvement related to the need for better information than is currently available to decision-makers, and for better provincial standards for program performance and evaluative practices in general. One respondent summed it up by stating, "It is difficult to measure cost effectiveness of a program when information required is not available...."

5.83 For example, the program evaluation report for the Critical Care Nursing Program indicated that it was impossible to evaluate program cost-efficiency in terms of dollars per nurse educated because of a lack of financial data upon which to base that analysis.

Evaluation of program success

Evaluation of program success for health programs

5.84 Evaluation of program success for health programs needs improvement. From survey responses we noted some use of balanced scorecards to track program performance. But we also noted that for a number of programs, the outcomes that are specified are not clear and measurable.
There also appears to be a heavy reliance on anecdotal evidence in determining program success. One respondent commented, "No formal process is established to evaluate success of the program."

5.85 Further, we noted that there is very little consideration given to the extent to which program results can be linked to program activities (i.e. attribution of results) in evaluating program success.

5.86 We did note that performance reporting regimes have been established for both the Mental Health – Child and Adolescent Services Program and the Extra Mural Program. Both could serve as best practice models for other health programs.

5.87 For example, in the case of the Mental Health program, clearly stated short and long term service indicators have been defined in program guidelines and are being measured and reported upon. Performance indicators cover the effectiveness, efficiency, accessibility, and acceptability of the program. They include, among others, performance indicators such as the following:
• decrease in symptoms/occurrence of mental illness and increase in the functionality of children and adolescents;
• decrease in the need for hospitalization and increase in family stability; and
• decrease in incarceration rates of youth.

5.88 Further, in relation to the usage of information for evaluative purposes, a respondent commented:
The management team for the Mental Health Program with members of the Mental Health Program QI committee and the MH Program committee reviews the success of the Program, through evaluating objectives, indicators, and feedback from other services. Various activity fact sheets, satisfaction surveys, health record reports, etc. are used to assist in evaluation. Reports on financial and Program status are completed monthly and submitted to the RHA as well as MHSD monthly and quarterly.

Department of Health involvement in the evaluation of program success

5.89 Most survey respondents agreed that a number of players should be responsible for evaluating the success of their program or service in achieving its objectives. Those most commonly mentioned included:
• DOH monitoring staff;
• RHA administrative staff;
• RHA program management; and
• DOH program management.

5.90 While all players involved with a program need to be concerned about the performance of the program, we again believe that DOH should provide leadership in the area of performance. It should do this by:
• setting clear provincial objectives, performance indicators and annual targets for programs;
• monitoring actual performance and ensuring that action is taken where warranted; and
• ensuring that attribution of results is periodically reviewed to confirm that it is the activities associated with the program, and not some other factors, that have led to observed outcomes.

Survey respondents' suggestions for improvement in the evaluation of program success

5.91 Respondents to our survey provided a number of suggestions specifically intended to improve the evaluation of the success of their program in achieving its objectives. Again, please note that while some of these suggestions may already be in place for certain of the programs we surveyed, the list provides some improvements that could be made to programs not currently employing these practices.
Suggestions we received included:
• an electronic data system that allows the capture of data at point of care;
• satisfaction surveys for patients, physicians, and staff;
• more detailed and accurate measurements of wait times and workloads;
• a report that assesses/evaluates the impact of program initiatives on actual outcomes; and
• the ability to compare performance between regions.

5.92 Survey respondents provided the following comments:
• Data collection at the "Point of Care" would greatly improve the ability to track and monitor activities at the regional level. Data collection at the regional level is conducted manually resulting in inefficient use of our resources and inability to effectively monitor outcomes and objectives. Improved compatibility of information system between facilities in the RHA would also improve the ability to track and monitor activities in the organization.
• There is no electronic data system to capture information from the … Program.

5.93 Our review of survey responses indicated that the key factor limiting the ability of DOH and RHAs to evaluate program success in achieving objectives is the lack of data available to evaluators and decision-makers.

Action taken in response to evaluation findings

Information available to DOH/RHAs for decision-making purposes

5.94 It is apparent from comments received in response to our survey that program decision-makers are generally responsive when presented with evidence indicating that their program needs changes or adjustments. For example, an action plan was put in place in response to the findings and recommendations included in the recent Diagnostic Imaging evaluation report. In fact, all respondents were able to give numerous examples of recent changes made to their programs.

5.95 Evidence-based decision-making is considered the best practice in program decision-making. Most survey respondents agreed that a number of types of evidence need to be considered in making changes to a program. Those most commonly mentioned included:
• continued program relevance;
• program success in achieving its objectives;
• program cost;
• strategic priorities of the organization/government;
• clinical experience;
• newly published scientific research;
• program cost-effectiveness;
• formal evaluation reports; and
• information from similar programs in other jurisdictions.

5.96 However, as discussed throughout this chapter, many respondents also indicated that they lack critical information about the continued relevance, cost-effectiveness and/or success of their programs. We noted a similar problem across departments in our 2004 Report, where we stated, "Effectiveness information (i.e. actual versus targeted results and the results of formal program evaluations) is not as readily available to decision-makers as more traditional forms of program-related information (i.e. numerical reports, narrative reports, and financial reports)." So this problem is not limited to health programs.

5.97 Lack of access to such information means that there is a high risk that problems exist of which decision-makers have no knowledge. Neither DOH nor the RHAs can react to problems they don't know about.

Who makes program decisions?

5.98 DOH administers, and is accountable to the Legislative Assembly for, all health programs under the provincial Health Act. Because of this, the Minister of Health is the ultimate decision-maker for health programs.
However, the Minister delegates his responsibilities to senior management, program administrators and managers within the Department. The Minister has also delegated delivery of most health programs to RHAs. The Chief Executive Officers of those RHAs report directly back to the Deputy Minister of Health.

5.99 From responses to our survey, it is apparent that RHAs look to DOH for many program-related decisions. RHAs are provided by DOH with a global budget that is intended to fund all programs and services to be delivered by the RHAs. RHAs may only spend in excess of budgeted amounts for legislated programs like Diagnostic Imaging and Laboratory Services, and only if demand warrants it. They have only a limited ability to make adjustments in the budget for non-legislated programs, as adding funds to the budget of one program means taking funds away from another. This, combined with limited administrative resources, restricts the ability of RHAs to make major changes to the delivery of programs without the involvement of DOH.

5.100 For example, in the case of the Extra Mural Program, a respondent commented:
The Hospital Services Branch, Department of Health, is responsible for the overall direction of the provincial Extra-Mural Program. A central team is responsible to:
1. direct the development of the EMP in collaboration with the RHAs;
2. foster the development of provincial forums to direct and advise on issues relating to the Program;
3. set provincial policy and standards; and
4. fund and monitor the Program.
Although managed by eight individual regional health authorities, the EMP retains its former provincial character through the collaborative efforts of the RHA and Department staff. Through this collaboration, the Program is able to deliver consistent, quality home healthcare services throughout the province. …

5.101 And, in connection with the recent transfer of Mental Health Services delivery to the RHAs, the following comment was made:
Under the Provincial Health Plan and with the transfer of Mental Health Services delivery to the RHA's, the Department of Health has a role to plan, design, fund and monitor the delivery of Mental Health Services in the province. The operational delivery of services, however, is the responsibility of the RHA's and their monitoring should be reflective of the overall program objectives.

5.102 Another respondent expressed a concern about the RHAs' ability to change programs in response to observed problems with cost-effectiveness: "Regionally we have been hearing for some time that current research is showing that [a process] is not cost effective… Would [another process possibly cost less?] … Regions cannot change the program in response to new evidence/research. This is a DH role. Integrating new knowledge / research / evidence into our work seems to rarely happen."

5.103 It appears that, aside from day-to-day operating decisions about a program, RHAs have limited decision-making power. It is DOH that makes most key program decisions and therefore it is DOH that has a vested interest in ensuring that appropriate evaluative information is available upon which to base those decisions.
Sharing of program evaluation reports within DOH

5.104 The information provided in a program evaluation report includes:
• information identifying problem areas associated with the development, implementation, and/or delivery of a program and recommendations for improvement; and
• information identifying aspects of program development, implementation, and/or delivery that have been working well.

5.105 Both types of information can be of significant value not only to those directly responsible for the program, but also to those administering other DOH programs. They provide valuable lessons learned and best practices that can be transferred to other programs in order to improve their efficiency and effectiveness.

5.106 However, it is our experience that program evaluation reports produced for DOH since the departmental Evaluation Unit was disbanded are not easily accessible. They are typically distributed only to managers administering the particular program being evaluated. Consequently, the opportunity for managers of other health programs to benefit from this valuable information is being lost.

Public reporting of evaluation findings

5.107 As previously discussed, DOH is accountable for the performance of the health programs it administers. The provincial Administration Manual states:
The prime function of an annual report is to be the major accountability document by departments and agencies for the Legislative Assembly and the general public. It serves as the key public link between the objectives and plans of a government entity and the results obtained.

5.108 Therefore, in accordance with this policy, the DOH annual report is the means through which the department should discharge its accountability obligation in relation to the programs it administers.

5.109 The policy, which was implemented in 1994, goes on to provide the following guidance.
Content
a. To the degree possible, departments and agencies should give a clear account of goals, objectives and performance indicators. The report should show the extent to which a program continues to be relevant, how well the organization performed in achieving its plans and how well a program was accepted by its client groups. It is recognized that management information systems in many departments and agencies do not produce sufficient relevant data to meet this goal. However, over time, departments and agencies are expected to develop performance indicators and to include this information in their annual reports.
b. Actual and budget financial information in summary form and a narrative explaining major variances as well as other aspects of financial performance are to be included in all annual reports. …

5.110 This policy therefore addresses reporting on relevance and success in achieving objectives. Item "b" above also addresses cost information to be reported, although it does not specifically refer to cost-effectiveness.

5.111 In other words, the policy requires annual reports to provide sufficient information to allow legislators and the public to assess whether departmentally-administered programs are relevant and successful in achieving their objectives.

5.112 We reviewed the 31 March 2005 Department of Health and Wellness annual report for information on the seven programs that were surveyed. The 31 March 2006 annual report was not available at the time of our work.
We noted the following from our review:
• the purpose/objective(s) of the program was reported for three of the programs;
• other program information was reported for four of the programs;
• activity and/or other operating data was reported for four of the programs; and
• initiatives undertaken during the year were reported for two of the programs.

5.113 In only one case, the Extra Mural Program, was information presented that would allow the reader to make any judgment on the continued relevance of the program. None of the programs listed any performance indicators or performance targets, and none reported any actual results that could be considered indicative of success in achieving program outcomes.

5.114 Therefore, we would conclude that in general the 31 March 2005 Department of Health and Wellness annual report did not comply with government policy, in that it did not include required information on the continued relevance and success of programs in achieving their objectives.

5.115 In our 1998 Report, we made one recommendation as a result of our audit of the government's response to the recommendations of the Commission on Excellence in Education. We recommended that the results of the evaluation of the Early Childhood Initiatives (ECI) be tabled in the Legislative Assembly when it was completed. As indicated previously in this chapter, an evaluation of ECI has now been completed; the evaluation report was released subsequent to the completion of our survey. We continue to believe it is important for legislators to review this evaluation to determine if intended results were achieved.

Appendix I
Program Evaluation Definitions

Program evaluation is the systematic process of asking critical questions, collecting appropriate information, analyzing, interpreting and using the information in order to improve programs and be accountable for positive, equitable results and resources invested.

Program relevance addresses whether the program continues to be consistent with department and government-wide priorities and to realistically address a significant need.

Program cost-effectiveness addresses whether the program utilizes the most appropriate and efficient means for achieving its objectives, relative to alternative design and delivery approaches.

Program success addresses whether the program is effective in meeting its objectives, within budget and without resulting in significant unwanted outcomes. This includes consideration of whether observed outcomes can be attributed to program activities.