Overview
During UNDP’s 2014-2017 Strategic Plan, the organization introduced clear standards and processes for quality programming for which managers are accountable. The Development Impact Group (DI Group) in the Bureau for Policy and Programme Support (BPPS) is responsible for developing and maintaining these programming standards.
The quality standards for programming, with accompanying rating tools at both the programme/CPD and project levels, were introduced in March 2016. They enable managers and appraisal committees to assess the quality of CPDs and projects objectively, in an evidence-based manner.
All project quality assurance (QA) data is entered in the corporate planning system and analyzed using Microsoft Power BI. Data is collected at the project level at three stages: design, implementation and closure. Every development project in UNDP (approximately 3,500 projects in total), regardless of level (country, regional or global), is assessed against the quality standards. The rating tool used depends on the stage of the project. Project documents for new projects are assessed using the design rating tool in the corporate planning system, with results discussed at the Local Project Appraisal Committee (LPAC). Ongoing projects are assessed once per year using the implementation rating tool. Projects being operationally closed are appraised one last time using the closure rating tool.
Quality at the programme level is independently assessed by the HQ PAC Secretariat at the design stage, with results maintained in an Excel spreadsheet. Ongoing programmes are assessed during the Results-Oriented Annual Reporting (ROAR) process.
Now that all projects and most programmes in UNDP have been assessed against these new standards, UNDP would like to undertake a comprehensive review of the standards to inform learning and provide recommendations for further revision, ensuring they are fit for purpose and effective. UNDP would like to engage a firm for a portion of this work, namely to complete the following objectives:
- Conduct an independent spot check of the Project QA data to advise on the credibility of the self-reported data entered in the system and on the rigor and evidence base underpinning the completion of the exercise. A random, representative sample of projects (approximately 450) will be spot checked. Issues to be reviewed include:
  - replicability of the ratings given the evidence provided in the system;
  - strength of the management plans;
  - full completion of the exercise;
  - who assessed and approved the QA reports in the system;
  - a summary of the exemptions recorded and the stated reasons.
  The final list of dimensions will be agreed between the firm and UNDP, based on the available time and resources. Results of the spot check should be presented for the organization as a whole and disaggregated by region, project type, project stage and other relevant dimensions.
- Collect user feedback on the user-friendliness and relevance of the quality standards and QA process. Recent assessments (the RBM Performance Audit, the Institutional Effectiveness Assessment, and donor assessments) should also be used as independent sources to inform the analysis. Recommend changes to enhance the utility of the system, which may include revisions to the formulation of the questions in the rating tools. User feedback will be collected through a survey sent to all QA Assessors and QA Approvers, as well as follow-up interviews with a sample of these; feedback will also be obtained through online discussions. Recommendations should also cover areas that may not be well addressed in the current standards, such as partnerships.
An analysis and recommendations paper will be developed and shared with UNDP for consideration and will inform a revision of the standards and rating/screening tools, as needed. It will also inform management decisions on the investments needed to build staff capacity, specific leadership actions, incentive structures and other measures that may address the findings and recommendations of the assessment.
Scope of Work
The recommendations of this Review will help the DI Group to revise the prescriptive content for the Quality Standards for Programming and accompanying rating tools.
This will involve analyzing data from project and programme QA; reviewing the current standards and QA rating tools; reviewing a representative sample of QA assessments and comparing them with the evidence provided; reviewing evaluations, performance audits and partner assessments; and collecting user feedback through surveys, focus groups and key informant interviews. Other related tasks may be requested as needed.
Key background documents that will be provided to facilitate this work include:
- Quality Standards for Programming prescriptive content, rating tools, and background papers (including the 2016 HQ PAC lessons learned paper);
- Data on Project QA from Power BI and customized data extractions upon request, along with QA ratings completed for CPDs by the HQ PAC Secretariat;
- Data and evidence entered on the corporate planning system’s project spaces, including Project QA assessments;
- OAI’s RBM Performance Audit; the joint IEO/OAI Institutional Effectiveness Assessment; and external assessments and reviews (MOPAN, MAR, 2016 QCPR).
The primary deliverable of this assignment will be a final report including the results of the spot check and recommendations for revision and further investment.